The Technology:
We used a Python script to record each memory as audio. We then used the Google Speech-to-Text API to transcribe the recording, and ran an emotion analyzer on the resulting text.
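A minimal sketch of this step, assuming the sounddevice package for recording and the official google-cloud-speech client (our actual script may have used different libraries):

    # Sketch: record one memory and transcribe it with Google Speech-to-Text.
    # Library choices (sounddevice, google-cloud-speech) are assumptions.
    import io
    import sounddevice as sd
    from scipy.io import wavfile
    from google.cloud import speech

    RATE = 16000       # sample rate the recognizer expects
    DURATION = 30      # seconds to record one memory

    # Record mono audio from the default microphone.
    audio = sd.rec(int(DURATION * RATE), samplerate=RATE, channels=1, dtype="int16")
    sd.wait()
    wavfile.write("memory.wav", RATE, audio)

    # Send the recording to the Speech-to-Text API.
    client = speech.SpeechClient()
    with io.open("memory.wav", "rb") as f:
        content = f.read()
    response = client.recognize(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=RATE,
            language_code="en-US",
        ),
        audio=speech.RecognitionAudio(content=content),
    )
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)
    print(transcript)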
At first, we wanted to run the emotion analyzer directly on the recorded audio, since audio-based analysis is generally more accurate because it can take the tone and inflection of the speech into account, but that approach was not feasible within our time constraints.
Each memory gets sorted into an emotion category (for now: joy, fear, anger, and sadness).
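We don't detail the analyzer here; as an illustration only, a hypothetical keyword-based classifier that sorts a transcript into one of the four categories could look like this:

    # Hypothetical stand-in for the emotion analyzer: score a transcript by
    # counting emotion keywords and pick the best-matching category.
    EMOTION_KEYWORDS = {
        "joy":     {"happy", "love", "laugh", "wonderful", "excited"},
        "fear":    {"afraid", "scared", "terrified", "worried", "dark"},
        "anger":   {"angry", "furious", "hate", "unfair", "yelled"},
        "sadness": {"sad", "cried", "miss", "lost", "alone"},
    }

    def classify_emotion(transcript: str) -> str:
        words = set(transcript.lower().split())
        scores = {emotion: len(words & keywords)
                  for emotion, keywords in EMOTION_KEYWORDS.items()}
        # Ties (including no matches at all) resolve to the first category listed.
        return max(scores, key=scores.get)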
We then run a Processing script to find and play back memories of the same emotion, while the lights in the room change color to match that emotion.
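Our playback script was written in Processing; the Python sketch below shows the same selection logic, assuming recordings are filed in one folder per emotion and that the RGB values are placeholders:

    # Sketch of the playback logic (our version was written in Processing).
    # The folder layout and the color values are assumptions.
    import os
    import random

    EMOTION_COLORS = {            # RGB sent to the room lights
        "joy":     (255, 200, 0),
        "fear":    (90, 0, 130),
        "anger":   (255, 0, 0),
        "sadness": (0, 80, 200),
    }

    def pick_memory(emotion: str, library_dir: str = "memories") -> str:
        """Return a random recording tagged with the given emotion."""
        folder = os.path.join(library_dir, emotion)
        return os.path.join(folder, random.choice(os.listdir(folder)))

    # e.g. play pick_memory("sadness") and fade the lights to EMOTION_COLORS["sadness"]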
For the audio, we wanted to simulate surround sound by making the recordings pan, or shift, continuously from left to right. We did this in Processing, and the resulting effect was that of a person walking around you while talking about his or her memories.
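The pan itself was implemented in Processing; the numpy sketch below shows the underlying math, with an assumed sweep rate of one full cycle every five seconds and an equal-power gain curve:

    # Sketch of the continuous left-right pan (ours was done in Processing;
    # this numpy version shows the math). A pan position p in [0, 1] sweeps
    # back and forth, and each channel's gain follows an equal-power curve.
    import numpy as np
    from scipy.io import wavfile

    rate, mono = wavfile.read("memory.wav")   # assumes a mono recording
    mono = mono.astype(np.float32)

    t = np.arange(len(mono)) / rate
    pan = 0.5 * (1 + np.sin(2 * np.pi * 0.2 * t))   # full sweep every 5 s
    left = mono * np.cos(pan * np.pi / 2)            # pan=0: all left
    right = mono * np.sin(pan * np.pi / 2)           # pan=1: all right

    stereo = np.stack([left, right], axis=1).astype(np.int16)
    wavfile.write("memory_panned.wav", rate, stereo)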
Check out our GitHub repo, which contains the code and the memories we collected: