HueHoroscope

Made by Saloni Gandhi and kkuramot

Created: March 29th, 2023

Intent

We often associate colors with different things: moods, ideas, thoughts, even subjects in school. When we researched the belief in horoscopes more deeply, we found that each zodiac sign actually has a specific color associated with it as well. The big idea of our project was to use color associations to influence horoscope readings delivered through an audio output. Horoscopes are traditionally based on the stars, but what if your daily reading could take into account how you were feeling as well? Our goal was to combine the color of your horoscope with a color typically associated with a certain mood, e.g. blue = sad and yellow = happy, and have the speaker play a specific audio reading based on that combination of colors. We wanted to add a spooky quality to horoscopes that took your state of being into account, rather than leaving the future to the fate of balls of light millions of light-years away.

Context

One of the projects we took inspiration from was Botto, a decentralized AI art project that uses input from thousands of individual artists, produces 50 digitally generated pieces of work that are voted on by its community, and then takes the voting into consideration when training its algorithm and creating the next week's pieces (Botto). We both really liked this idea of incorporating human ideas and opinions into our technology to create a spookier experience. As we mentioned earlier, we initially wanted to combine a person's horoscope and their mood for the day to produce a reading that incorporated their own feelings. Giving the user some power or control over the output was important to us, because we thought it gave our technology a "spookier" quality instead of further placing the technology in a black box. However, this idea proved to be much more of a challenge in the actual implementation of our project.

Prototype/Outcome

For our physical prototype we used an Arduino Nano 33 BLE Sense, a breadboard, a DFPlayer Mini, an SD card, and a speaker. An Edge Impulse machine learning model was uploaded to the Arduino board. You can see a picture of our device below.

[Photo of the prototype device]

The prototype we created takes in two inputs: color and proximity. The color input is one of the twelve colors we chose, each of which maps to a specific zodiac sign. The proximity input is binary in that it reports whether an object is close or far.

https://drive.google.com/drive/u/0/my-drive [Video of color system]

Based on the color detected, the system plays the corresponding horoscope reading.


https://drive.google.com/drive/u/0/my-drive [Video of proximity system]

Based on the proximity of the object, the system sets the volume of the MP3 player. If an object is close, the volume is high; if an object is far away or not detected, the volume is low.

How the system works

1. The Arduino takes in color and proximity input using its built-in sensors.
2. The ML model uses the input from the color sensor to identify the correct horoscope.
3. Using the color/horoscope match, our code determines the corresponding audio track that needs to be played, and the input from the proximity sensor is used to set the volume of the audio.
4. The DFPlayer Mini locates the corresponding audio file and sends it over to the speaker to play it.

Process

To start the project, we first focused on creating an ML model that could detect the various colors of our horoscopes. To do so, we chose a specific color for each zodiac sign based on research we did about the different signs. We then created cards for each color/zodiac pairing in Figma, which served as the training data for the model. Finally, we used Edge Impulse to train an ML model that could detect the correct color out of the 12 available options. The model we trained was fairly accurate except for one of the colors, which we kept in mind when moving forward with the project.

We then focused on getting acquainted with the MP3 player and its capabilities. We attached the MP3 player and speaker to our Arduino, got familiar with how to upload files to the SD card, and wrote some basic code that played audio files in a loop.
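A minimal sketch of that kind of loop, assuming the DFRobotDFPlayerMini library, the Arduino's hardware Serial1 port wired to the DFPlayer Mini, and MP3 files numbered 0001.mp3, 0002.mp3, ... on the SD card (the baud rate, volume, and delay here are illustrative, not our exact values):

```cpp
// Hedged sketch: cycle through the MP3 files stored on the DFPlayer Mini's SD card.
#include <DFRobotDFPlayerMini.h>

DFRobotDFPlayerMini player;

void setup() {
  Serial1.begin(9600);              // DFPlayer Mini communicates over 9600-baud serial
  while (!player.begin(Serial1)) {
    delay(100);                     // keep retrying until the SD card is readable
  }
  player.volume(15);                // mid-range volume (scale is 0-30)
  player.play(1);                   // start with 0001.mp3
}

void loop() {
  delay(30000);                     // give the current file time to play
  player.next();                    // then advance to the next track on the card
}
```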

We then focused on experimenting with the proximity sensor and its capabilities. We first set up a simple loop that read and printed out the proximity values reported by the sensor. Next, we combined this with what we had learned about audio to create a relationship between proximity and sound. We created an audio file for each horoscope reading and played them in a time-based loop. We set the default volume to low and programmed the device so that if an object was detected at close proximity, the volume of the audio would increase; if no object was close by, the audio would remain at its current volume.
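A rough sketch of that experiment, assuming the Arduino_APDS9960 library for the board's built-in proximity sensor and the same DFPlayer setup as above (the threshold of 100 and the two volume levels are illustrative):

```cpp
// Hedged sketch: print proximity readings and raise the volume when an object is near.
// APDS.readProximity() returns 0 (closest) up to 255 (farthest / nothing detected).
#include <Arduino_APDS9960.h>
#include <DFRobotDFPlayerMini.h>

DFRobotDFPlayerMini player;

void setup() {
  Serial.begin(9600);
  APDS.begin();                     // built-in color/proximity sensor
  Serial1.begin(9600);
  player.begin(Serial1);
  player.volume(10);                // default volume: low
}

void loop() {
  if (APDS.proximityAvailable()) {
    int proximity = APDS.readProximity();
    Serial.println(proximity);      // first step of the experiment: just watch the values
    if (proximity < 100) {          // something is close to the sensor
      player.volume(25);            // so turn the reading up
    }                               // otherwise leave the volume where it is
  }
  delay(200);
}
```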

We then worked on deploying the ML model to the Arduino and using it to detect the correct color/zodiac sign. We ran into issues in this step: the Arduino with the deployed ML model was giving inaccurate readings even though the model worked in Edge Impulse. After some debugging, we realized that the model only works if we show it colors displayed on the original device it was trained on (Kelli's laptop). Because different screens vary in saturation and brightness, the ML model is unable to provide accurate readings from other devices.
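For reference, feeding the color sensor's readings into a deployed Edge Impulse model looks roughly like the snippet below. The header name huehoroscope_inferencing.h is a placeholder (Edge Impulse generates the real name from the project), and we assume the model's input frame is the three color channels expressed as proportions:

```cpp
// Hedged sketch of running the deployed Edge Impulse color classifier on the Arduino.
#include <Arduino_APDS9960.h>
#include <huehoroscope_inferencing.h>   // placeholder name for the exported library

void setup() {
  Serial.begin(9600);
  APDS.begin();
}

void loop() {
  int r, g, b;
  if (!APDS.colorAvailable()) return;
  APDS.readColor(r, g, b);

  // Assumed input frame: the three channels normalized by their sum.
  float sum = r + g + b;
  if (sum < 1) sum = 1;
  float features[] = { r / sum, g / sum, b / sum };

  signal_t signal;
  ei::numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Report the zodiac label the model is most confident about.
  const char* bestLabel = "";
  float bestScore = 0;
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    if (result.classification[ix].value > bestScore) {
      bestScore = result.classification[ix].value;
      bestLabel = result.classification[ix].label;
    }
  }
  Serial.println(bestLabel);   // e.g. "leo" -> play the Leo reading

  delay(1000);
}
```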

Finally, we worked on using the color input to play a specific track while setting its volume based on the proximity of the object in front of the sensor. We set up a series of if statements that mapped out each possible color and told the MP3 player to play the horoscope reading corresponding to that color. In addition, we set up an if/else statement that checked whether an object was close by: if a nearby object was detected, it set the volume to high, otherwise it set the volume to low.
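Putting the pieces together, a hedged reconstruction of that final logic might look like the following (the label strings, track numbers, threshold, and volume levels are illustrative; in our build the labels come from the Edge Impulse model and the tracks from the files on the SD card):

```cpp
// Hedged sketch of the final selection logic: one branch per zodiac color,
// plus an if/else that sets the volume from the proximity reading.
#include <Arduino_APDS9960.h>
#include <DFRobotDFPlayerMini.h>

DFRobotDFPlayerMini player;

void playReading(const String& sign) {
  // Proximity first: high volume if something is right in front of the sensor.
  int proximity = APDS.proximityAvailable() ? APDS.readProximity() : 255;
  if (proximity < 100) {
    player.volume(25);
  } else {
    player.volume(10);
  }

  // One branch per color/zodiac pairing, mapped to 0001.mp3 ... 0012.mp3 on the SD card.
  if      (sign == "aries")  player.play(1);
  else if (sign == "taurus") player.play(2);
  else if (sign == "gemini") player.play(3);
  // ... and so on for the remaining nine signs (tracks 4 through 12) ...
}

void setup() {
  APDS.begin();
  Serial1.begin(9600);
  player.begin(Serial1);
}

void loop() {
  playReading("aries");   // in the real device the sign comes from the color classifier
  delay(30000);           // let the reading play before sampling again
}
```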

Open Questions and Next Steps

One of the biggest things that remains unsolved is incorporating human emotion into the color that is picked up by the Arduino, so that the horoscope reading is based on both the color of the person's horoscope and the color of their feelings. Perhaps instead of combining the colors, we could show the Arduino two separate colors, the first for the horoscope and the second for the emotion. This would make training the machine learning model significantly easier, because it would not have to parse two colors out of a single blended shade.

Another comment raised during our in-class critique concerned the physical form of the device and how a user would interact with the piece of technology. Thinking about context and form opens up a range of possibilities for taking this project further. After talking with our guest, we decided it would take the form of a crystal ball with a camera sensor that senses the two colors, changes to the combined shade, and reads out the horoscope reading for the day.


Reflection

Unfortunately, we were not successful in fully implementing our initial idea. However, our initial aspiration to take in a color and proximity input and produce an audio output was achieved, which is a big step in the right direction. As many of the guests noted in their final words, they were impressed with how well our technology worked on demo day.

We don't know if our project was received as well as we had hoped, because we were unable to incorporate the second color, the emotion aspect, into our final project. However, our guests were impressed with the horoscope audio outputs and were interested in how we had created the machine learning model. Again, we think that if we had completed a second interaction that took in two colors, the idea of human emotion and opinion would have come across much more clearly to our users. Nevertheless, we are still proud of the project that we made!
