
Roadmap

Here are several ways I would implement a refined version of my prototype:

  • First, retrain the tinyML classification model with more data. I might collect sighs from the audience or passersby by giving them an emotion prompt (“sadness”, “frustration”, “happiness”, “relief”, “fatigue”) and recording the sigh they produce. I plan to use this dataset to train the classification model and make it more sophisticated. While the dataset is far from ideal training data, it will be hard for both the machine and the human to tell whether the classification is working well, pointing to the complexity of emotions and of the sigh as a physical and psychological experience unique to humans. (Weeks 1-3; see the retraining sketch after this list.)

  • Second, improve the AI-generated response. Instead of the GPT-3 API, which is limited as a general-purpose conversational text generator, I might use existing custom models/solutions built to drive character dialog, used mainly in the games industry. For example, companies such as Inworld.ai and Character.ai provide such APIs, and there may also be open-source models/projects for driving contextual conversation generation. (Weeks 4-6; see the dialog sketch after this list.)

  • Third, add image generation. I hope to generate images of faces showing certain emotions, hinting at AetherGnosis’ numerous previous owners. (1 week; see the image-generation sketch after this list.)

  • Fourth, refine the interaction between the audience and the mirror: purchase and install external proximity sensors, install the camera module and implement a face detection algorithm, implement a micro-expression detection algorithm on top of it, install another camera (with a wider field of view) to detect restlessness in the audience’s motion, and connect these inputs to the prompt for the GPT-3 API call to make the mirror seem more alive. (Weeks 7-9; see the face-detection sketch after this list.)

  • Fifth, refine the frame of the mirror. The mirror should have a sleek, futuristic design but be covered in scratches and bruises, signaling a piece of technology from the future that has undergone heavy use by multiple owners. I will fabricate the frame from materials like metal, plastic, or acrylic. (Weeks 9-12)

  • I would buy external proximity sensors, such as an ultrasonic ranger, in addition to some more camera modules. Both kinds of sensor would add to the interaction experience by making the mirror more responsive to human presence and better able to sense nuanced emotional input from humans. (See the ranger-reading sketch after this list.)

  • One key conceptual challenge is the problem of anthropomorphism and anthropocentrism in the portrayal of “machine emotion”. The human tendency to anthropomorphize AI behavior is misleading and even dangerous. What would be a more appropriate way to speculate about a machine’s emotion? Currently the mirror hints at this by simulating strange responses to the human emotion carried in a sigh, but there is room to go further.

  • Ideally the final interaction experience should be seamless and continuous: the mirror will respond to every sentence the audience says by listening for input periodically. (The classification algorithm can still sort what it hears by emotion; see the listening-loop sketch after this list.)
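
Implementation sketches

A minimal sketch of the retraining step, assuming the collected sighs are saved as WAV files organized as data/<emotion>/<clip>.wav. The file layout, MFCC settings, and network size here are my assumptions, not a fixed pipeline; the model is kept deliberately small so it can later be converted for tinyML deployment.

    import pathlib
    import numpy as np
    import librosa
    import tensorflow as tf

    # The five emotion prompts used during collection.
    LABELS = ["sadness", "frustration", "happiness", "relief", "fatigue"]

    def mfcc_features(path, sr=16000, n_mfcc=13, frames=96):
        # Load one sigh recording and summarize it as a fixed-size MFCC matrix.
        audio, _ = librosa.load(path, sr=sr, mono=True)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
        mfcc = librosa.util.fix_length(mfcc, size=frames, axis=1)
        return mfcc.T  # shape: (frames, n_mfcc)

    X, y = [], []
    for i, label in enumerate(LABELS):
        for wav in pathlib.Path("data", label).glob("*.wav"):
            X.append(mfcc_features(wav))
            y.append(i)
    X, y = np.stack(X), np.array(y)

    # A deliberately small network, easy to quantize for a microcontroller.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=X.shape[1:]),
        tf.keras.layers.Conv1D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(len(LABELS), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=30, validation_split=0.2)

    # Convert to TensorFlow Lite for deployment on the device.
    tflite = tf.lite.TFLiteConverter.from_keras_model(model).convert()
    pathlib.Path("sigh_classifier.tflite").write_bytes(tflite)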
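
For the dialog step, Inworld.ai and Character.ai each ship their own SDKs; as a stand-in, here is the general shape of conditioning a chat model on a persistent persona, using the OpenAI chat API. The persona text, model name, and the detected_emotion input are illustrative, not the project's final prompt.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PERSONA = (
        "You are AetherGnosis, an ancient mirror from the future that has "
        "absorbed the emotions of its many previous owners. You answer in "
        "short, uncanny sentences."
    )

    def mirror_reply(detected_emotion: str, transcript: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model works here
            messages=[
                {"role": "system", "content": PERSONA},
                {"role": "user",
                 "content": f"The visitor just sighed with {detected_emotion} "
                            f"and said: {transcript}"},
            ],
            max_tokens=80,
        )
        return response.choices[0].message.content

    print(mirror_reply("fatigue", "I can't do this anymore."))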
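
One possible route for the image-generation step is a local Stable Diffusion pipeline via the diffusers library; the model id and the prompt template below are assumptions, and a hosted image API would work just as well.

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def previous_owner_face(emotion: str):
        # Conjure a portrait of one of the mirror's past owners.
        prompt = (f"faded portrait of a past owner of an ancient mirror, "
                  f"face showing {emotion}, ghostly, scratched glass")
        return pipe(prompt).images[0]

    previous_owner_face("relief").save("owner_relief.png")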
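
A minimal sketch of the face-detection layer, using OpenCV's bundled Haar cascade on the camera module's feed. The micro-expression detector would sit on top of the face crops this loop produces; the camera index and cascade choice are assumptions.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # the mirror's camera module
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        if len(faces) > 0:
            # A face in front of the mirror: crop it for downstream
            # micro-expression analysis and feed it into the GPT prompt loop.
            x, y, w, h = faces[0]
            face_crop = frame[y:y + h, x:x + w]
        if cv2.waitKey(30) == 27:  # Esc to quit
            break
    cap.release()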
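
A minimal sketch of reading an ultrasonic ranger (e.g. an HC-SR04) with gpiozero on a Raspberry Pi, so the mirror can wake when someone approaches; the GPIO pin numbers and the one-meter wake-up threshold are assumptions.

    from gpiozero import DistanceSensor
    from time import sleep

    sensor = DistanceSensor(echo=24, trigger=23, max_distance=2)

    while True:
        if sensor.distance < 1.0:  # distance is reported in meters
            print("Visitor nearby: wake the mirror")
        sleep(0.2)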
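
Finally, a sketch of the continuous listening loop: record short audio chunks, skip silence, and classify everything else. Here classify_emotion() and mirror_reply() are hypothetical stand-ins for the components sketched above, and the chunk length and energy gate are assumptions.

    import numpy as np
    import sounddevice as sd

    SR, CHUNK_SECONDS = 16000, 4

    while True:
        chunk = sd.rec(int(SR * CHUNK_SECONDS), samplerate=SR, channels=1)
        sd.wait()  # block until the chunk has been recorded
        audio = chunk.flatten()
        if np.abs(audio).mean() < 0.01:
            continue  # silence: keep listening
        emotion = classify_emotion(audio)  # tinyML model sketched above
        print(mirror_reply(emotion, transcript=""))  # respond every time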

