
Process

LCD

I initially experimented with a 4x20 LCD screen to display the welcome text that invites the audience to sigh. The four-pin I2C connector (with the SDA and SCL lines) on the side did not work, so I had to solder wires directly to 11 pins on the back. At first the screen still did not display text as expected; this was resolved by adjusting the trimpot on the backpack module, which sets the display contrast, so the text showed clearly. However, when I re-soldered everything to assemble the mirror, the screen stopped working, and I unfortunately had to give it up and opt for an OLED screen instead.
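For reference, driving the panel over the parallel pins looks roughly like the sketch below, using the standard LiquidCrystal library; the pin numbers and welcome text here are placeholders, not the exact wiring and copy from my prototype.

```cpp
#include <LiquidCrystal.h>

// Hypothetical pin mapping: RS, E, D4, D5, D6, D7
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(20, 4);                 // 4x20 character display
  lcd.setCursor(0, 0);              // column 0, row 0
  lcd.print("Hello.");
  lcd.setCursor(0, 1);              // second row
  lcd.print("Sigh into the mirror");
}

void loop() {
  // Welcome text is static, so nothing to do here
}
```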


OLED Screen

The OLED screen was easier to wire up. However, it did not display text as I expected after I uploaded the code. The reason turned out to be an issue with the Arduino library: I initially used the Adafruit_SSD1306 and Adafruit_GFX libraries, but after some googling I saw Arduino Nano 33 BLE Sense users recommending the ACROBOTIC SSD1306 library instead. I installed that library and the code worked!
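A minimal test sketch with the ACROBOTIC SSD1306 library looks roughly like this (the library exposes a preconfigured `oled` object for a 128x64 I2C module; the text here is just a placeholder):

```cpp
#include <Wire.h>
#include <ACROBOTIC_SSD1306.h>

void setup() {
  Wire.begin();            // join the I2C bus
  oled.init();             // initialize the SSD1306 controller
  oled.clearDisplay();
  oled.setTextXY(0, 0);    // cursor at row 0, column 0
  oled.putString("Please sigh...");
}

void loop() {
  // Static test text, nothing to update
}
```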


The next step was displaying the text returned from the OpenAI API call on the OLED screen. I uploaded the Arduino code and ran the p5.js script, which sent the generated text back to the Arduino, and then tried to get the OLED screen to display it. However, the text does not scroll automatically, so the screen could only show the first several words. I experimented with scrolling, but I was only able to get horizontal scrolling to work; vertical text scrolling seems to require the more complicated diagonal scroll commands, which were difficult to implement. While experimenting, I got some pretty cool glitching effects, documented in the video below, which I was able to recreate to some extent in my final prototype.
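The Arduino side of this handoff looked roughly like the sketch below. This is a simplified version that assumes the p5.js script sends newline-terminated messages over serial and that the display fits about 16 characters per row across 8 rows; the exact protocol and layout in my prototype differed.

```cpp
#include <Wire.h>
#include <ACROBOTIC_SSD1306.h>

void setup() {
  Serial.begin(9600);
  Wire.begin();
  oled.init();
  oled.clearDisplay();
}

void loop() {
  if (Serial.available()) {
    // Read one newline-terminated message from the p5.js sketch
    String reply = Serial.readStringUntil('\n');
    oled.clearDisplay();
    // Wrap the text across the character rows; anything that does not fit
    // is simply cut off, which is the limitation described above
    for (int row = 0; row < 8; row++) {
      int start = row * 16;
      if (start >= (int)reply.length()) break;
      oled.setTextXY(row, 0);
      oled.putString(reply.substring(start, start + 16).c_str());
    }
  }
}
```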


With the code below, by manipulating the parameters of the "display.startscrollleft()" function, I was able to create a glitching effect right after all of the welcome text renders and before the display is cleared and the text renders again, adding to the mirror's character as a strange device with a life of its own.
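The idea, sketched with the Adafruit_SSD1306 API, is roughly the following; the welcome text and the page-range parameters passed to `startscrollleft()` are placeholders for the values I tweaked by hand.

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);  // 0x3C is a common I2C address
}

void loop() {
  // Render the welcome text
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println("Hi. Please sigh into the mirror...");
  display.display();
  delay(3000);

  // Kick off a hardware scroll over a page range, then stop it shortly after:
  // the controller smears the framebuffer sideways, producing the glitch,
  // before the loop clears the display and renders the text again
  display.startscrollleft(0x00, 0x0F);
  delay(800);
  display.stopscroll();
}
```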


However, the experimentation process made me realize that the OLED screen is not a good candidate for displaying the generated text, so I eventually decided to use an iPad for that purpose.

iPad

To display the generated text on the iPad, I found the tool "Duet" through googling. It is very easy to set up, though it requires downloading an app and paying for a subscription. Duet turns the iPad into a second monitor for the laptop, so all I had to do was drag the p5.js browser window onto the second display (the iPad), make everything full screen, and enlarge the view to show the canvas.


Prompt Engineering

Prompt engineering was an important part of this project. Since I wanted the mirror to generate extremely conversational, lazy, and sad-sounding responses, I had to try many different prompts before the model generated responses I liked. Through experimentation, I learned that GPT-3 is very different from GPT-4: while the latter generated more natural responses (sometimes very good ones) to very abstract prompts, GPT-3 needs very specific instructions about tone, punctuation, and who to address (see the comparison below). In the end, I used this prompt in my prototype: "talk in a very depressed tone, with a lot of hesitation, sobs, and filler words, and tell me to do something I enjoy (use the word you)". As discussed before, I had to specify concrete behaviors rather than emotions, such as "hesitation", "sobs", and "filler words", and I also had to tell the model to address me rather than itself.

