
Outcome


Think piece:

How They Perceive Us

When it comes to the internet and our smart home devices, “consent” is a vague umbrella term that is often compromised for the sake of convenience and intelligence. Our everyday interactions with our devices seem harmless until they have learned so much about us that they can infer, or even control, our next move. Algorithms are designed to incentivize this interaction through targeted ads and suggestions that personalize our experience. This data collection is the spooky fact behind our devices that everyone knows, but is knowledge the same as consent? So how are our devices collecting our personal data, what exactly can they find out about us, and ultimately, how do our devices perceive us?

With the pervasiveness of technology and our increasing dependence on our devices, people are losing the distinction between the digital realm and the physical world, especially when that digital realm is personalized and smart. The first way our devices keep us engaged is through psychological stimuli. Algorithms leverage these stimuli to encourage interaction with our devices so that they can collect, store, and manipulate our personal data. The separation anxiety many adults feel toward their devices is the result of dopamine-driven reward circuitry, which companies exploit to keep them online. “Adults in the US spend an average of 2-4 hours per day tapping, typing, and swiping on their devices—that adds up to over 2,600 daily touches” [1]. Within those interactions, personal data is collected through features such as autofill, cookies, and search history. Large platforms such as Facebook and Google may know users better than they know themselves.

Our digital profiles consist not only of basic information but also of our preferences, opinions, habits, and many other characteristics we may never have considered ourselves [2]. “Last year Google amended its privacy policy so that data from its DoubleClick ad network could be merged with the other data it knows about you—like your name and your favorite YouTube channels—to build up a very comprehensive picture of you and your tastes” [4]. Not every company has the reach of Google or Facebook, but data can easily be bought and sold between firms that specialize in this kind of profiling. In 2018, a reporter from Vice ran an experiment to see how smartphones use their monitoring equipment, such as the microphone and camera, to absorb audio and video without explicit consent. The journalist spoke preselected phrases twice a day for five days in a row while monitoring his Facebook feed. Sure enough, when he used the phrase “back to university,” he saw ads for summer courses, and when he changed the phrase to “cheap shirts,” he quickly saw advertisements for low-cost apparel [5]. Data collection occurs even when we are not actively interacting with our devices, furthering the assimilation of the digital realm and the physical world.

When buying, interacting with, or neglecting our devices, we have the right to understand the extent to which our information is being collected and used. A great weight of spooky technology comes from its abstractness and intangibility. Yet as new devices and software enter the market at alarming rates, have we lost the chance to step back and analyze them? New technology is not only an opportunity for the future but also an opportunity to reflect on the past and present. As technology continues to advance, it has become ever more important to understand the ethical decisions behind machine learning and its applications in consumer devices.

References:

  1. https://sitn.hms.harvard.edu/flash/2018/dopamine-smartphones-battle-time/

  2. https://gizmodo.com/heres-all-the-data-collected-from-you-as-you-browse-the-1820779304

  3. https://webkay.robinlinus.com/

  4. https://www.wired.com/story/wired-guide-personal-data-collection/

  5. https://www.usatoday.com/story/tech/columnist/2019/12/19/your-smartphone-mobile-device-may-recording-everything-you-say/4403829002/

Catalog Description

Title: How They Perceive Us

Credits: Kimberlyn Cho (B.Arch ‘22), Rio Pacheco (BSECE ‘24)

Description:

Much of what makes technology spooky lies in its abstractness and intangibility. How do our devices perceive us through our day-to-day interactions? Furthermore, do our smart home devices know us better than we know ourselves?

Data analytics is a powerful tool that collects a user’s personal data to infer their preferences and needs, offering an increasingly personalized experience. From Spotify’s Discover feature to Netflix’s ‘recommended for you’ options, technology may know us better than we know ourselves, and these online inferences increasingly carry over into the offline world.

How They Perceive Us is a commentary on how our devices learn about, make inferences about, and affect us through everyday human-machine interactions. The exhibit draws on the discretion and vulnerability we have internalized around our devices to materialize those interactions as a receipt. By printing a physical representation of one’s digital profile, the experience aims to provoke the discomfort of coming face to face with the AI version of yourself.

By drawing on the vulnerability of a bathroom and the intimacy of looking at one’s reflection, the exhibition aims to tap into one’s most unassuming and candid state. The discomfort is meant to raise suspicion about the accuracy of one’s digital profile, as well as to incite speculation on the ways we passively consent to extraction, perception, and persuasion by our devices. Furthermore, we hope the print serves as a reflection on how our digital profile is unconsciously embodied by us physically, mentally, and emotionally in real time.

VIDEO:

https://vimeo.com/708066499


Process Description

Initial printing

Initially, we worked on getting the Python script to print to the thermal printer. Since thermal printers are not natively supported by Python on the Raspberry Pi, we used a library called escpos (python-escpos). This library allowed us to communicate directly with the thermal printer from Python code and to print both images and text.
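
As a rough illustration, a minimal python-escpos sketch for this step might look like the following; the USB vendor and product IDs and the image filename are placeholders, not the exact values from our build.

  # Minimal sketch of printing text and an image with python-escpos.
  # The USB vendor/product IDs (0x0416, 0x5011) and the filename are
  # placeholders -- check `lsusb` for the IDs of your own printer.
  from escpos.printer import Usb

  printer = Usb(0x0416, 0x5011)           # open the thermal printer over USB
  printer.text("How They Perceive Us\n")  # print a line of text
  printer.image("capture.jpg")            # print a captured image
  printer.cut()                           # feed and cut the paper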


Detecting faces and printing images


Setup

The next step was getting our Raspberry Pi to interact with the webcam we were using to capture images. Since USB webcam capture is not built into Raspbian OS, we used fswebcam, a command-line capture tool, from our Python script. This not only allowed us to capture images with the webcam but also to set the resolution so that they would print nicely on the thermal printer.
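
A minimal sketch of that capture step, assuming fswebcam is invoked from the Python script via subprocess; the 384x288 resolution is our assumption, chosen to match the 384-dot print width of a typical 58 mm thermal printer.

  # Sketch of capturing a frame with the fswebcam command-line tool.
  # The resolution below is an assumption sized for a 384-dot-wide
  # thermal printer; adjust it for your webcam and printer.
  import subprocess

  def capture_image(path="capture.jpg"):
      subprocess.run(
          ["fswebcam",
           "-r", "384x288",     # capture resolution
           "--no-banner",       # omit fswebcam's default timestamp banner
           path],
          check=True,
      )
      return path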

Image detection

To detect faces we used the Python library OpenCV. The script was set up so that images were captured continuously; each image was scanned with OpenCV to check whether a face was present, and if so, the image was printed from the thermal printer.
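
A minimal sketch of that loop, reusing capture_image() and printer from the sketches above; the writeup only says OpenCV, so the specific use of the bundled Haar cascade frontal-face detector here is our assumption.

  # Sketch of the capture -> detect -> print loop. Assumes capture_image()
  # and `printer` from the sketches above; the Haar cascade detector is an
  # assumption about how OpenCV face detection was configured.
  import time
  import cv2

  face_cascade = cv2.CascadeClassifier(
      cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

  while True:
      path = capture_image()
      frame = cv2.imread(path)
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
      if len(faces) > 0:        # only print when a face is visible
          printer.image(path)
          time.sleep(10)        # pause so one visitor isn't printed repeatedly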

Data collection and analysis


Once we were able to successfully detect and print faces, we looked for ways to extract data from the captured image. A public API called Hydra AI was one of the more affordable options we found in our search. This API allowed us to detect age, gender, mask presence, and emotions. After implementing this, we ended up with the result shown above.
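
As a hypothetical sketch of that call, the endpoint URL, header, and response fields below are placeholders rather than the real Hydra AI interface; the provider’s documentation has the actual schema.

  # Hypothetical sketch of posting the captured face to a face-analysis API
  # with `requests`. The URL, header, and response fields are placeholders,
  # NOT the real Hydra AI interface.
  import requests

  API_URL = "https://example-face-api.invalid/v1/analyze"  # placeholder endpoint
  API_KEY = "YOUR_API_KEY"

  def analyze_face(image_path):
      with open(image_path, "rb") as f:
          response = requests.post(
              API_URL,
              headers={"Authorization": f"Bearer {API_KEY}"},
              files={"image": f},
          )
      response.raise_for_status()
      return response.json()  # e.g. {"age": ..., "gender": ..., "emotions": {...}}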

Creating a story


Our final revision of the printout included comments based on the person's emotions as well as a randomly chosen feedback sentence. If any of the person's emotions scored above 20%, the printout included a small comment below the corresponding portion of the receipt. We also wrote five possible responses, one of which was printed at the bottom of the receipt. This made the receipt feel more personal, as if the machine were talking to the person rather than just giving them information.
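
A rough sketch of how that receipt text could be assembled; the 20% threshold comes from our process above, while the emotion names, comments, and closing sentences are illustrative stand-ins for the ones we actually wrote.

  # Sketch of turning an analysis result into receipt lines. The 20% threshold
  # is from the writeup; emotion names, comments, and closing sentences are
  # illustrative stand-ins.
  import random

  EMOTION_COMMENTS = {
      "happy": "You seem pleased with what you see.",
      "sad": "Rough day? The mirror noticed.",
      "surprised": "Not what you expected?",
  }

  CLOSING_LINES = [
      "Thank you for letting me look at you.",
      "I will remember this face.",
      "Come back soon. I learn a little more each time.",
      "Is this how you see yourself?",
      "You can leave now.",
  ]

  def build_receipt(result):
      lines = [f"Estimated age: {result['age']}", f"Gender: {result['gender']}"]
      for emotion, score in result["emotions"].items():
          lines.append(f"{emotion}: {score:.0%}")
          if score > 0.20 and emotion in EMOTION_COMMENTS:  # comment on strong emotions
              lines.append("  " + EMOTION_COMMENTS[emotion])
      lines.append(random.choice(CLOSING_LINES))            # one of five responses
      return "\n".join(lines)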

Hiding the camera


The final step was to find a way to hide the camera so that the person looking into the mirror didn’t know they were being photographed. To do this, we wrapped a small border of black fabric around the front of the mirror and cut a small hole at the top middle, just large enough for the camera to see through. Since the camera had a black plastic front, it blended in very well with the fabric and was almost completely hidden.

Final product



The magic mirror

Our final product included side lighting as well as a “sink” to make the environment feel more like a bathroom. We reduced the black fabric to just the top and bottom of the mirror to leave space for the side lighting, and we changed the frame material to white foamcore to better match the environment and reflect more light onto the person. We chose small, bright lights for the side lighting to imitate a vanity mirror that people get dressed up or do makeup in front of. This encouraged visitors to lean in and get closer to the mirror, allowing us to take a better image of them.

The magic printer

To better conceal the thermal printer, we enclosed it within a small foamcore box on a table outside of the bathroom. This helped us hide the battery pack as well as all the wires connected to it, while also giving the printer a mysterious allure.


Roadmap

If you had to build this project over summer 2022 and make it a real, working, interactive prototype / improved exhibit for public audiences, what would be involved?

If we had more time, we would have liked to create a more robust prototype of the mirror and find a better way to hide the camera. The use of perf board was good for a prototype given the amount of time we had; however, building a new mirror, possibly out of wood, would help make the person feel as if it were a normal mirror. Also, because the camera was placed at the top of the mirror, tall people who walked too close were sometimes not detected. To avoid this issue in the future, we could hide the camera in the middle of the mirror, possibly using two-way mirror glass or a small cutout.

Additionally, we would explore ways to further personalize the receipt to emulate the breach of privacy. We could possibly incorporate the user's own device to extract data they unknowingly shared. There are many opportunities to explore additional features on the receipt to further the spookiness of the experience.

Critical Reflection

One thing we did not plan for was the social aspect of our exhibit. We found that when groups of people came to experience the exhibit together, they would compare the results on their receipts with each other. This led them to talk about the experience longer and to develop a deeper discussion and understanding of our central question: “How does AI really perceive us?”

On the other hand, the solo experience seemed to lack direction. When a person went into the exhibit without a group, they were often left unsure of what to do or how long to stay in front of the mirror. To fix this in the future, we could add a small signal to alert the person when it was time to leave the bathroom.

Also, the inconsistencies in the image analysis actually helped us deliver our central question rather than hindering it. When people got results they didn't expect on the printout, it drove home the idea that our online identity can be wildly different from what we think it is. Although not planned for, this worked quite well in our case.

Overall, the visitor experience worked well but could have been improved with some minor tweaks. In future iterations of this project, these minor tweaks would lend themselves to a smoother experience.
