
How to Realize It

We first used Teachable Machine to train the model, which classifies the input into three classes: "Pass by", "Close to Cam", and "No Actions". After training the model, we used a web camera to capture users' actions and feed them into the model.
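Below is a minimal browser-side sketch of this pipeline, assuming a pose model exported from Teachable Machine into a local folder and the @teachablemachine/pose library; the model path, canvas size, and frame loop are illustrative rather than the exact values we used.

```typescript
// Sketch only: assumes a Teachable Machine *pose* model exported to ./my-model/
// and the @teachablemachine/pose package (which bundles TensorFlow.js).
import * as tmPose from "@teachablemachine/pose";

const MODEL_URL = "./my-model/model.json";      // hypothetical export path
const METADATA_URL = "./my-model/metadata.json";

async function run(): Promise<void> {
  // Load the model trained with the three classes described above.
  const model = await tmPose.load(MODEL_URL, METADATA_URL);

  // Set up the webcam that captures the user's actions.
  const webcam = new tmPose.Webcam(400, 400, true); // width, height, flip
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  const loop = async () => {
    webcam.update();
    // Estimate the pose, then classify it as "Pass by", "Close to Cam",
    // or "No Actions".
    const { posenetOutput } = await model.estimatePose(webcam.canvas);
    const predictions = await model.predict(posenetOutput);
    const best = predictions.reduce((a, b) =>
      a.probability > b.probability ? a : b
    );
    console.log(best.className, best.probability.toFixed(2));
    window.requestAnimationFrame(loop);
  };
  window.requestAnimationFrame(loop);
}

run();
```

Each captured frame is classified into one of the three classes, and the winning class name drives the behaviors described below.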

For the "Pass by" class, we set one corresponding action: the lucky cat waves its hand. To realize this, we used Teachable Machine to recognize the user's shoulders to make the decision and mounted one servo in the device. We then linked the Particle script to the model and set the servo to sweep through a specific range of degrees in a loop.
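One way to link the model output to the Particle device is through the Particle Cloud REST API: when "Pass by" wins, the browser calls a cloud function on the device, and the firmware (not shown here) sweeps the arm servo through its fixed range of degrees. The sketch below assumes a hypothetical cloud function named "wave"; the device ID and access token are placeholders.

```typescript
// Sketch only: triggers a hypothetical "wave" cloud function on a Particle
// device whenever the classifier reports "Pass by" with high confidence.
// DEVICE_ID and ACCESS_TOKEN are placeholders, not real credentials.
const DEVICE_ID = "your-device-id";
const ACCESS_TOKEN = "your-access-token";

async function callParticleFunction(name: string, arg: string): Promise<void> {
  await fetch(`https://api.particle.io/v1/devices/${DEVICE_ID}/${name}`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ access_token: ACCESS_TOKEN, arg }),
  });
}

let requestInFlight = false;

// Called once per classified frame with the winning class name.
async function onPrediction(className: string, probability: number) {
  if (className === "Pass by" && probability > 0.9 && !requestInFlight) {
    requestInFlight = true; // avoid overlapping requests while one is pending
    // The firmware side (not shown) sweeps the arm servo through a fixed
    // range of degrees in a loop when this function is invoked.
    await callParticleFunction("wave", "start");
    requestInFlight = false;
  }
}
```

onPrediction would be called from the classification loop in the previous sketch with the winning class name and its probability.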

For the "Close to Cam" class, we also set one action: rotating the tail of the lucky cat. We used a similar method to the first class, but with different servo rotation degrees. We also set a default value for the distance of the ears; if the measured value is less than that distance, the model executes the code that triggers the action.
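The sketch below shows one way the distance check could be approximated from the ear keypoints that the Teachable Machine pose model already returns: when the user steps toward the camera, the on-screen separation between the ears grows, which corresponds to the estimated distance to the camera dropping below the default value. The keypoint names follow PoseNet; the threshold is illustrative.

```typescript
// Sketch only: approximates "how close is the user" from the pixel distance
// between the two ear keypoints in the pose returned by tmPose. A large
// on-screen separation means the user is close to the camera.
// These interfaces mirror the shape of PoseNet's Pose/Keypoint types.
interface Keypoint {
  part: string;
  score: number;
  position: { x: number; y: number };
}
interface Pose {
  keypoints: Keypoint[];
}

const EAR_SEPARATION_THRESHOLD_PX = 120; // placeholder, tuned by hand

function getKeypoint(pose: Pose, part: string): Keypoint | undefined {
  return pose.keypoints.find((k) => k.part === part && k.score > 0.5);
}

function isCloseToCam(pose: Pose): boolean {
  const left = getKeypoint(pose, "leftEar");
  const right = getKeypoint(pose, "rightEar");
  if (!left || !right) return false;

  const dx = left.position.x - right.position.x;
  const dy = left.position.y - right.position.y;
  const separation = Math.hypot(dx, dy);

  // Large separation between the ears means the user is close, so the
  // tail-rotation code on the Particle device should run.
  return separation > EAR_SEPARATION_THRESHOLD_PX;
}
```

When isCloseToCam returns true, a second Particle cloud function (for example a hypothetical "tail" function) can be called in the same way as the wave trigger above, with its own rotation degrees set in the firmware.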

For the "Comfortable Feeling" class, we did not set a servo action; as a next step, we would like to add a screen to the device to support more interactions.

As the next step, we would like to embed a screen and integrate a Raspberry Pi into the device. The screen can show more detailed reactions and allow gentler communication with users.

