

Training the TinyML Model

I started off with a dataset of recordings of people sighing, downloaded from freesound.org and hand-labeled by myself with the emotions I interpreted from each sigh. I used this dataset to train the initial classification model, hoping it could classify different sighs according to their implied emotions. However, the dataset was too small and I did not notice any interesting results. The model worked so badly that it sometimes classified silence as a sigh, which caused problems.

I changed my goal to training a simple classification model to identify sighs, which turned out to be far more successful, with a success rate of 87%.
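To give a sense of the kind of model this involves, below is a minimal sketch of a sigh / not-sigh audio classifier built on MFCC features. This is not my exact pipeline: the use of TensorFlow/Keras and librosa, the clip length, layer sizes, and the `load_labeled_clips` helper are all illustrative assumptions. MFCCs are a common compact audio representation for TinyML-scale models, which is why they are used here.

```python
# Sketch of a tiny sigh / not-sigh classifier.
# Assumes TensorFlow/Keras and librosa; dataset loading and layer sizes
# are illustrative placeholders, not the exact setup used in the project.
import numpy as np
import librosa
import tensorflow as tf

SAMPLE_RATE = 16000
CLIP_SECONDS = 2
N_MFCC = 13

def extract_mfcc(path):
    # Load a short clip, pad/trim to a fixed length, and compute MFCC features.
    audio, _ = librosa.load(path, sr=SAMPLE_RATE, duration=CLIP_SECONDS)
    audio = np.pad(audio, (0, max(0, SAMPLE_RATE * CLIP_SECONDS - len(audio))))
    mfcc = librosa.feature.mfcc(y=audio, sr=SAMPLE_RATE, n_mfcc=N_MFCC)
    return mfcc.T  # shape: (time_frames, N_MFCC)

# X: stacked MFCC matrices, y: 1 for sigh, 0 for everything else (silence, speech, ...)
# X, y = load_labeled_clips(...)  # hypothetical helper for the labeled dataset

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(63, N_MFCC)),   # ~63 MFCC frames for a 2 s clip
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability that the clip is a sigh
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X, y, epochs=30, validation_split=0.2)

# For TinyML deployment, the trained model could then be converted to TFLite:
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# tflite_model = converter.convert()
```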


Later on, I thought about highlighting the machine-ness of the emotion/empathy shown by the mirror by retraining the model to classify sigh emotions with a better dataset, and displaying a live inference of how much sadness or anger, for example, the model detects in someone's sigh at any moment. However, I did not have enough time to execute this idea.
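As a rough sketch of what that live readout could have looked like, the snippet below turns a multi-class model's per-emotion probabilities into a ranked text display. The emotion label set and the source of the probabilities are hypothetical placeholders.

```python
# Sketch of displaying live per-emotion scores from a multi-class sigh model.
# The emotion labels and the probability vector are hypothetical placeholders.
EMOTIONS = ["sadness", "anger", "relief", "tiredness"]  # illustrative label set

def describe_sigh(probabilities):
    # probabilities: softmax output aligned with EMOTIONS, e.g. model(mfcc_batch)[0]
    ranked = sorted(zip(EMOTIONS, probabilities), key=lambda p: p[1], reverse=True)
    return ", ".join(f"{name}: {score:.0%}" for name, score in ranked)

print(describe_sigh([0.62, 0.10, 0.08, 0.20]))
# -> "sadness: 62%, tiredness: 20%, anger: 10%, relief: 8%"
```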

Connecting everything and improving the appearance

I initially connected everything and encased the electronic components and breadboards in a cardboard box, using tape to prevent things from moving. (See the image below; also note the interesting generation from GPT-3.)


To make the frame look more refined, I laser cut some chipboard. This was my first time laser cutting, and I encountered quite a few difficulties. Since I don't have Adobe Illustrator, the software I used, Inkscape, did not produce DXF files compatible with the laser cutter software. Luckily, someone at open fabrication hour helped me convert the vector file into an Illustrator file, and the cutting went smoothly. I also added a small microphone icon to indicate where the audience should sigh.

