
Outcome


[Image: IMG 9237]

Intent

In this investigation, we were tasked with studying how A.I. and technology intersect with everyday interactions and domestic rituals. We approached this by exploring the ritualistic behavior of staring at a screen. In the past, when smart devices and televisions were nowhere to be found, people spent more time socializing, reading books, engaging in outdoor activities, pursuing the arts, and generally diversifying their time among a number of activities. Now we live in a day and age where our devices captivate us and siphon off significant portions of our time and attention. We readily spend hours staring at screens with little to no understanding of the inner workings of the mechanisms and electronic systems that lie beneath. Our goal is to create an unsettling interaction between people and screens that provokes more intentional thinking about behaviors involving screens, such as subconsciously checking your phone for new notifications.

Context

  • The theme of superstition and machine-assisted beliefs didn’t quite resonate with us, so we decided to approach the module without it and instead focus on the everyday rituals in our lives.

  • The Unroll by Meijie Hu was our first inspiration, as we both agreed that social media use is a part of many people’s daily ritual. https://meijie-hu.com/Oueksmorphism-1

  • We were also heavily inspired by the eyes in the museum that follow unsuspecting patrons. Much of the spookiness we felt there came from the patrons being unaware they were being recorded. We wondered how it would feel to have something watch you when you know you are being recorded but don’t understand what the watcher is getting from the recording.

[Image: example generated tweet]

Our artifact is a bot-like entity that watches a person standing in front of it, tracks that person’s movements, and generates a tweet about what it thinks that person is doing. It works by using ultrasonic sensors to gather data on what objects are in its environment. If the left sensor’s reading has a higher priority than the other two, the eyes projected on the TFT screen are instructed to move left. If the left and middle sensors both read a similar priority that is higher than the right sensor’s, the eyes are instructed to move toward the center-left area of the screen. In general, the location of the eyes on the TFT screen is mapped to the input data from each of the three ultrasonic sensors, roughly as sketched below.
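
The actual eye-positioning logic runs on the Arduino; the Python sketch below only illustrates the kind of mapping described above, and the range, anchor positions, and function names are assumptions for demonstration rather than our real firmware.

# Illustrative sketch only: the real mapping runs on the Arduino and drives
# the eye sprites on the TFT screen directly.

def priorities(left_cm, middle_cm, right_cm, max_range_cm=200):
    """Closer objects get higher priority; anything past max_range is ignored."""
    def priority(dist_cm):
        return 0.0 if dist_cm >= max_range_cm else (max_range_cm - dist_cm) / max_range_cm
    return priority(left_cm), priority(middle_cm), priority(right_cm)

def eye_x_position(left_cm, middle_cm, right_cm, screen_width=160):
    """Map the three ultrasonic readings to a horizontal eye position.

    Each sensor pulls the eyes toward its side of the screen in proportion to
    its priority: a dominant left reading moves the eyes left, while a
    left/middle tie lands them in the center-left region.
    """
    p_left, p_mid, p_right = priorities(left_cm, middle_cm, right_cm)
    total = p_left + p_mid + p_right
    if total == 0:
        return screen_width // 2  # nothing detected: look straight ahead
    anchors = (screen_width * 0.15, screen_width * 0.5, screen_width * 0.85)
    x = (p_left * anchors[0] + p_mid * anchors[1] + p_right * anchors[2]) / total
    return int(x)

# Example: a person standing slightly to the bot's left.
print(eye_x_position(left_cm=40, middle_cm=60, right_cm=180))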

[Image: bot drawing]

To generate tweets (example above), we used the PySerial, OpenAI, and OpenCV Python libraries. First, the bot starts a timer when somebody is in front of it. After 5 seconds, the bot prints a command to the serial monitor to generate a tweet. When that happens, Python prompts GPT-3 to generate a tweet from the perspective of a robot describing what the human it’s watching might be doing. That text is then extracted from GPT-3’s response, overlaid onto a template image of a fake tweet (attached below), and displayed on a separate laptop screen.
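
A minimal sketch of that pipeline is below, assuming the legacy GPT-3 Completion API and made-up values for the serial port, trigger string, prompt wording, and template path; it illustrates the flow rather than reproducing our exact script.

# Sketch of the tweet-generation loop. The serial port, trigger string,
# prompt, and template path are placeholder assumptions.
import serial   # PySerial: reads what the Arduino prints over serial
import openai   # GPT-3 text generation (legacy Completion API)
import cv2      # OpenCV: overlays the generated text on the template image

openai.api_key = "YOUR_API_KEY"             # placeholder
ser = serial.Serial("/dev/ttyUSB0", 9600)   # port name is machine-specific
TEMPLATE_PATH = "tweet_template.png"        # the blank fake-tweet image

def generate_tweet():
    """Ask GPT-3 for a tweet from the watching robot's point of view."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=("Write a short tweet from the perspective of a robot guessing "
                "what the human it is watching might be doing."),
        max_tokens=60,
        temperature=0.9,
    )
    return response.choices[0].text.strip()

def show_tweet(text):
    """Draw the generated text onto the tweet template and display it."""
    img = cv2.imread(TEMPLATE_PATH)
    cv2.putText(img, text, (40, 200), cv2.FONT_HERSHEY_SIMPLEX,
                0.7, (0, 0, 0), 2, cv2.LINE_AA)
    cv2.imshow("Sherlock's tweet", img)
    cv2.waitKey(1)

while True:
    line = ser.readline().decode(errors="ignore").strip()
    if line == "TWEET":   # assumed marker the Arduino prints after its 5-second timer
        show_tweet(generate_tweet())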


Process

  • We took a parallel development approach to the project. While David worked more with the Arduino, I focused on the tweet generation. For the tweet generation, I broke the project down into components: I needed to be able to read the Arduino output, use OpenAI to generate a tweet, and take an image template and edit it with the generated text. I started with preliminary research on each component to determine feasibility, then worked on each component one by one until I finished it. At the same time, David worked on the sensor/Arduino side of things, spending most of his time on the TFT screens, which were causing serious issues with screen-clearing speed.

  • When faced with major obstacles, both David and I relied heavily on existing online documentation to try to solve our issues.

  • One major design decision we had to make was whether to generate the tweet randomly or from a dataset trained with Edge Impulse. We went with a random tweet, since we decided it would be too difficult to classify different actions based only on the proximity sensors’ movement detection.

[Image: example post]
[Image: IMG 9123]
[Image: IMG 9158]
[Image: IMG 9161]
[Image: IMG 9172]
[Image: bot dead]

Open Questions and Next Steps

  • Next Steps - Based on the feedback we received during the in-class demo, we feel the next step would be to add a third screen for a mouth. This would allow Sherlock to better express himself, whether it’s in a nice or evil way. Another suggestion we received was pivoting slightly to a “mind reading” perspective, which we thought was very interesting. It would certainly add to the spookiness of our project. Sherlock would try to follow you and look into your eyes, and its output would be a mind reading instead of a tweet. Finally, we also loved the idea of adding a texting feature, where if you walked past Sherlock he would let you know he saw you.

  • One thing we don’t really know how to approach is how to make our project more interactive. Despite our best efforts, it still required a little input from us during the demo to make things work, even setting aside the technical errors. How could we make the design work seamlessly without us there?

Reflection

We are both happy with the success of the project, although we certainly did not achieve all of our ambitions. We started with a very complicated idea, and we managed to simplify it while still keeping the key elements that we wanted. However, the tweet generation was not as detailed as we would’ve liked, and the ultrasonic sensors were not nearly as cooperative as we’d hoped they’d be. Ultimately, with additional time and perhaps a camera instead of the ultrasonic sensors, we could’ve achieved our initial vision.
