YogAI: Smart Personal Trainer

Pose estimation on a Raspberry Pi to guide and correct positions for any yogi.

Intro

Yoga has an ancient tradition of physical and mental training to improve well-being. Modern Hatha Yoga, which emphasizes physical conditioning and mental strength through the practice of physical postures, has been growing in popularity over the last couple of decades.

However, the time and cost of getting to a yoga studio can be prohibitive, and some would simply prefer to practice outside of a group setting.

Here, we explore the development of YogAI. Using pose recognition, we implement a smart assistant that provides corrective advice to guide practitioners.

Our setup is not limited to Yoga flows. We will also explore analysis and feedback for strength training movements.

Gather Data

We’ll start by gathering sample images for a few common yoga positions. With some image and video searching, we build up a corpus of labeled yoga poses.

You can use this gist to download yoga videos to parse into images, or use this plugin while you’re running Firefox to download Google Images results for each pose.
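When parsing downloaded videos into images, consecutive frames are nearly identical, so it helps to keep only a frame or two per second. Here is a minimal sketch of that sampling logic; the function name and the assumed 30 fps source rate are ours, not from the original tooling:

```python
# Sketch: choose which frame indices to extract from a video so we keep
# roughly `sample_fps` frames per second instead of every frame.
# Assumes the source video rate is known (30 fps is a common default).

def frame_indices(total_frames, video_fps=30, sample_fps=1):
    """Return the frame indices to extract at the target sampling rate."""
    step = max(1, round(video_fps / sample_fps))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps yields 10 sampled frames.
print(frame_indices(300))
```

The returned indices can then be fed to whatever video reader you use (e.g. seeking with OpenCV) to write out the sampled frames as JPEGs.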

After gathering a couple hundred example images per pose, we group them into directories named for each position.

yoga_poses/
|
|- cow/
|  |
|  |- sample1.jpg
|  |- sample2.jpg
|  |- ...
|
|- plank/
|  |
|  |- sample1.jpg
|  |- sample2.jpg
|  |- ...
|
.
.
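With the images grouped this way, the class label of each sample is simply the name of its parent directory. A small sketch of loading that layout (the function name is ours, and we assume JPEG files as in the tree above):

```python
from pathlib import Path

def load_dataset(root):
    """Collect (image_path, pose_label) pairs from a directory-per-class layout.

    Each subdirectory of `root` is treated as one pose class, and every
    .jpg inside it becomes a sample labeled with the directory name.
    """
    samples = []
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            for img in sorted(class_dir.glob("*.jpg")):
                samples.append((img, class_dir.name))
    return samples
```

Calling `load_dataset("yoga_poses")` on the tree above would yield pairs like `(yoga_poses/cow/sample1.jpg, "cow")`.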

We could train an image classifier, like we’ve done in another project. However, there are deep learning models especially well-suited to pose estimation from images. OpenPose and PoseNet help to localize body keypoints, offering more information than a simple image classifier.

The TensorFlow Lite implementation in this repo can be pointed at your directory to superimpose these keypoints over your images, visualizing what the model returns.

We modify a fork to return an array of the coordinates of each body part found in an image. This array will be the feature vector for a simple K-nearest neighbors (KNN) classifier.
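The classifier itself can be very simple: given a query feature vector of keypoint coordinates, find the k most similar labeled vectors and take a majority vote. A self-contained sketch, with toy two-keypoint vectors standing in for real pose estimator output:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbors.

    train: list of (feature_vector, label) pairs, where each feature
    vector is the flattened (x, y) keypoint coordinates for one image.
    """
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors: flattened (x, y) coordinates of just 2 keypoints.
train = [
    ([0.1, 0.2, 0.3, 0.4], "plank"),
    ([0.1, 0.25, 0.3, 0.45], "plank"),
    ([0.8, 0.9, 0.7, 0.6], "warrior_2"),
    ([0.82, 0.88, 0.72, 0.61], "warrior_2"),
]
print(knn_classify(train, [0.12, 0.22, 0.31, 0.41]))  # near the plank samples
```

In practice the vectors are much longer (two coordinates per detected body part), but the distance-and-vote logic is the same.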

See how we label images in the repo to generate a dataset from our samples. We’ll regard these annotations as our ground truth for every sample training pose.

Then, our YogAI application will extract key points and classify a yogi’s poses in real time to guide instruction.

Pose Estimation & Classification

We found that remarkably few samples were required to get decent results. However, more images will help disambiguate similar poses.

For example, we found that the chaturanga dandasana and plank poses are very similar, but with enough samples the model can differentiate between the two relatively well.

We also found that performance degrades for positions featuring face occlusion like downward dog or forward bend.

Blocked faces can make it difficult for the pose estimator to find other body parts. Since we use the keypoints as a feature vector for our pose classifier, pose detection degrades as well.
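One practical consequence is that the classifier needs a fixed-length feature vector even when some keypoints go undetected. A common workaround is to fill missing parts with a sentinel such as (0, 0); here is a sketch of that, where the 14-part list is our own assumption rather than the exact output order of any particular model:

```python
# Hypothetical keypoint names; real models define their own part list/order.
PARTS = ["nose", "neck", "r_shoulder", "r_elbow", "r_wrist", "l_shoulder",
         "l_elbow", "l_wrist", "r_hip", "r_knee", "r_ankle", "l_hip",
         "l_knee", "l_ankle"]

def to_feature_vector(keypoints):
    """Flatten detected keypoints into a fixed-length vector.

    keypoints: dict mapping part name -> (x, y). Parts the estimator
    failed to find are simply absent and become (0.0, 0.0), so the
    vector length stays constant for the KNN classifier.
    """
    vec = []
    for part in PARTS:
        x, y = keypoints.get(part, (0.0, 0.0))
        vec.extend([x, y])
    return vec
```

With only 2 keypoints found, as in our forward_bend samples, most of the vector ends up as zeros, which is exactly why those poses are hard to separate.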

To illustrate, consider the downward dog position, where the head is tucked between the shoulders and, in a typical photo, blocked by the arms.

On average, analyzing positions like warrior_2, we find the majority of possible keypoints. By contrast, for positions like forward_bend we find only 2 keypoints on average across hundreds of samples!

For now, we’ll remove these difficult samples.

It is also important to balance the distribution of class samples in our dataset.

Initially, our model overpredicted the warrior_2 position, which was oversampled. Removing the poses that had too few samples or gathering more samples for under-represented poses will help your classifier differentiate between poses better.
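One simple way to balance the dataset is to downsample every class to the size of the smallest one. A sketch of that (the function name is ours; a fixed seed keeps the subsampling reproducible):

```python
import random
from collections import Counter

def downsample(samples, seed=0):
    """Downsample each class to the size of the smallest class.

    samples: list of (feature_vector, label) pairs. Returns a new list
    in which every label appears the same number of times.
    """
    counts = Counter(label for _, label in samples)
    target = min(counts.values())
    rng = random.Random(seed)
    balanced = []
    for label in counts:
        group = [s for s in samples if s[1] == label]
        balanced.extend(rng.sample(group, target))
    return balanced
```

Gathering more images for the under-represented poses is usually the better fix, since downsampling throws data away, but this is a quick way to stop one class from dominating the vote.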

Below, we show the confusion matrices for training our pose classifier under two setups: first with imbalanced data across a more challenging range of positions, then after balancing. The second matrix shows a stronger concentration along the main diagonal, exactly what we are looking for to improve the classifier.
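For reference, a confusion matrix is just a count of predictions per true class: rows are the true pose, columns the predicted pose, so a perfect classifier puts everything on the diagonal. A minimal from-scratch sketch (scikit-learn's `confusion_matrix` does the same):

```python
def confusion_matrix(y_true, y_pred, labels):
    """Build a confusion matrix: rows = true class, columns = predicted."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for true, pred in zip(y_true, y_pred):
        matrix[index[true]][index[pred]] += 1
    return matrix

labels = ["plank", "warrior_2"]
y_true = ["plank", "plank", "warrior_2"]
y_pred = ["plank", "warrior_2", "warrior_2"]
# One plank sample is misclassified as warrior_2, an off-diagonal count.
print(confusion_matrix(y_true, y_pred, labels))
```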

