
Outcome


Intent

In this project, we wanted to explore how to build wearable technology that prompts people to reflect on, and challenge, the way they interact with others. We were drawn to horoscopes and the predictions and seemingly fantastical claims they make about the compatibility of two people, and we wanted to see how we could explore this. We were also interested in learning to build with machine learning and device-to-device communication, both of which we were able to do in this project.

Context

Since we were given an OLED screen, an IMU, and horoscopes as a theme, we immediately thought about making something wearable. We initially planned a fitness watch that suggests workout sets based on the user's horoscope, but after looking deeper into the relationships between different signs, we decided to shift our focus to compatibility. The idea was inspired by a TV show in which people receive a signal through an app when someone nearby is interested in them. Instead of having the user send an "interest" signal, we wanted one Arduino to connect to another over Bluetooth and tell the user the horoscope sign of the person near them. We thought it would be interesting to observe how the user would interact with that person given the sign. Based on their actions, the device would then display whether they chose the right interaction.

Prototype/Outcome

For our final prototype, we built a system that links two Arduino-based wearable devices to identify compatibility between their users. The two Arduino boards communicate with each other over Bluetooth. Once connected, each sends its user's horoscope sign to the other, and the IMU on each watch tracks the users' interaction. Using an AI model we built and trained to recognize gesture types, we display different visual patterns on the OLED screen to tell the users whether they are compatible and whether their decision to interact was the right one. For example, if two people are compatible but choose not to interact, a sad face is displayed; if they do interact, a heart appears.
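The feedback rule above can be sketched as a small decision function. This is an illustrative sketch in plain C++, not the exact code on the device: the function names are our own, the compatibility table only fills in a couple of example pairings, and the behavior for incompatible pairs is our assumption (the writeup only specifies the compatible cases).

```cpp
#include <string>

// Hypothetical compatibility check. On the device this would cover all
// twelve signs; here only two example pairings are filled in.
bool compatible(const std::string& a, const std::string& b) {
    return (a == "Leo" && b == "Aries") || (a == "Aries" && b == "Leo") ||
           (a == "Cancer" && b == "Pisces") || (a == "Pisces" && b == "Cancer");
}

// Map the compatibility result and whether the users interacted to the
// symbol drawn on the OLED.
std::string feedbackSymbol(bool areCompatible, bool interacted) {
    if (areCompatible && interacted)  return "heart";     // right call
    if (areCompatible && !interacted) return "sad_face";  // missed a match
    if (!areCompatible && interacted) return "sad_face";  // assumption: wrong call
    return "neutral";                                     // assumption: nothing lost
}
```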


Process

We began by exploring, building, and deploying models with Edge Impulse, working through a few examples with the IMU to understand how to recognize and classify different gestures. This opened up several directions, and we first thought we could create a numerology-based wearable that promoted fitness. After making a band for the Arduino and turning it into a watch, however, we found it difficult to classify different activities from the motion data of a single arm/wrist. As a result, we changed ideas and decided to have two Arduino watches communicate with each other. We were able to quickly figure out the Bluetooth Low Energy (BLE) capabilities of the boards, so we built on that. We then experimented with different ways of drawing on the OLED screen and realized that, given its slow drawing time, it is best to use small images that don't take up too much of the screen.
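The "small images" idea can be illustrated with a tiny monochrome bitmap. This is a hedged sketch, not the artwork from our device: an 8x8 heart stored as one byte per row (MSB = leftmost pixel), rendered here as ASCII so the layout can be checked on a desktop. On the watch, the same byte array would be handed to the display library's bitmap-drawing routine.

```cpp
#include <cstdint>
#include <string>

// An 8x8 heart, one byte per row. At 64 pixels it redraws quickly on a
// slow OLED, unlike a full-screen animation.
const uint8_t HEART[8] = {
    0b01100110,
    0b11111111,
    0b11111111,
    0b11111111,
    0b01111110,
    0b00111100,
    0b00011000,
    0b00000000,
};

// Expand the packed rows into '#' (lit) and '.' (dark) characters.
std::string renderAscii(const uint8_t* rows, int n) {
    std::string out;
    for (int y = 0; y < n; ++y) {
        for (int bit = 7; bit >= 0; --bit)
            out += ((rows[y] >> bit) & 1) ? '#' : '.';
        out += '\n';
    }
    return out;
}
```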

To integrate all of this, we built a system in which two watches, each hardcoded with an astrological sign, connect over BLE and exchange their sign data. We then wanted to track the interaction between the users with Edge Impulse and classify it as a hug, holding hands, or nothing. Although the model performed well in training, its results were inaccurate once deployed, so we narrowed it to detecting either a high five or no interaction and used that to display on the OLED screen whether the choice was good or bad. For both of us, writing the code and setting up the basics was quick, but learning the new technologies took up most of our time: we spent most of it working through and understanding the different pieces before combining everything at the end.
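The sign data exchanged over BLE is small enough to fit in a single byte. The sketch below shows one plausible encoding, with the sign's index in a fixed table as the byte written to the characteristic; the actual payload layout on our devices may differ, and the helper names are our own.

```cpp
#include <cstdint>
#include <string>

// The twelve signs, indexed 0-11; the index is what travels over BLE.
const std::string SIGNS[12] = {
    "Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
    "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"
};

// Encode a sign name to the single byte written to the characteristic.
// Returns 0xFF for an unknown name.
uint8_t encodeSign(const std::string& name) {
    for (uint8_t i = 0; i < 12; ++i)
        if (SIGNS[i] == name) return i;
    return 0xFF;
}

// Decode a received byte back to a sign name.
std::string decodeSign(uint8_t b) {
    return b < 12 ? SIGNS[b] : "unknown";
}
```

Keeping the payload to one byte also keeps it well under BLE's default characteristic size limits.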

Open Questions and Next Steps

For our next steps, we want to think about how to better use the OLED screen and the visual side of the project. Right now, we have a different animation for each user interaction; however, the animations take a while to draw because of the number of pixels, which means users have to wait until an animation finishes. Several guests pointed out that the screen could be used for something more intimate rather than instant feedback: it could sit on the inner side of the wrist (right now it is on the outer side) and provide detailed information on compatibility or user interactions. We also want to add something that gives quick reactions, like an LED or a buzzer, so users know in the moment whether they match with someone. Lastly, we hope to provide more recommendations and summaries on the meanings of horoscopes, the people our users meet throughout the day, and how they could interact with them to improve their match.

Reflection

We think the project turned out the way we imagined, apart from some changes we had to make to the AI component. Even though we had fairly ambitious goals for the visual display and the kinds of interactions the AI could recognize, the project captured the key ideas and functionality well. If we had more time, we would work on quicker user feedback so users don't need to stop and check whether they were compatible, and we would improve our AI model and its deployment so the Arduino can accurately recognize a wider range of movements. Overall, we had a lot of fun exploring wearables and connecting Arduino devices to create new interactions.
