
Outcome


Intention

The idea behind the project was to create a fortune-telling device that would generate daily fortunes by analyzing emotional expression through facial tracking. We intended for the emotional expression percentages to directly affect the type of fortune given, and for the device to include a few rather strange, unnerving, and sometimes even abusive fortunes. We placed this device in the elevator of an apartment building so that a large group of people would establish unique relationships with the device and be forced to interact with it repeatedly.

Prototype/Outcome

Besides the video, we also developed a web sketch prototype to demo how the interaction with the service works. We used a face recognition library with p5.js to detect and evaluate real-time emotion values from the user, and mapped those values to corresponding GPT-3-generated fortune-telling sentences. We initially intended to process everything in real time, but had trouble implementing a GPT-3 library (the official one is written in Python, and the community one in Node.js), so for efficiency we pre-generated the texts and selected the ones most appropriate to showcase our ideas.
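A minimal sketch of the detection loop is shown below. It assumes face-api.js is loaded alongside p5.js; the /models folder and the fortunes.json file of pre-generated GPT-3 texts are illustrative assumptions, not the exact code of our prototype:

```javascript
// Minimal p5.js sketch: read expression scores from the webcam and show a
// matching pre-generated fortune. Assumes face-api.js is loaded via a <script>
// tag and its models are served from a local /models folder (both assumptions).
let video;
let fortunes = { neutral: ["Tomorrow will be thoroughly unremarkable."] }; // fallback until fortunes.json loads
let currentFortune = "";
let ready = false;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // Load the face detector and expression models, plus the pre-generated
  // GPT-3 fortunes (a hypothetical fortunes.json keyed by emotion name).
  Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri("/models"),
    faceapi.nets.faceExpressionNet.loadFromUri("/models"),
    fetch("fortunes.json").then((r) => r.json()).then((json) => (fortunes = json)),
  ]).then(() => {
    ready = true;
    setInterval(readFace, 2000); // re-read the face every two seconds
  });
}

// Detect one face, find its dominant expression, and pick a matching fortune.
async function readFace() {
  const result = await faceapi
    .detectSingleFace(video.elt, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();
  if (!result) return;

  // result.expressions holds a score per emotion, e.g. { happy: 0.8, sad: 0.1, ... }
  const [emotion] = Object.entries(result.expressions).sort((a, b) => b[1] - a[1])[0];
  currentFortune = random(fortunes[emotion] || fortunes.neutral);
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255);
  textSize(18);
  text(ready ? currentFortune : "loading models…", 20, height - 24);
}
```

Because running the detector on every frame is expensive, the sketch re-reads the face on a timer instead of inside draw().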

[Image: Face marks and reading]

Precedents & Context

Ideas we’d like to explore in this project

  • Our behavior data can be collected almost anywhere.
  • Much of our unconscious behavior might be unveiled through non-verbal interaction
  • People’s interpretations can supplement those of limited systems
  • Seeing our own figure in the past might enhance our self-awareness


Prior projects that influenced these ideas and our design


Process

Throughout the process of developing our final design, we went through 3 main iterations: 

// 1

At the very beginning, we were inspired by how Ouija boards pick up people’s unconscious inputs and form outputs that can seem unexpected. We came up with two ideas centered around the concepts of “AI prediction and fortune-telling” and “generative diaries.”

Idea 1: a Ouija board-like device that provides the user with daily notes or an agenda.

By either collecting information about the user’s surrounding environment through a webcam or recording the user’s trivial interactions with other household products as the data source, the device would gradually form outputs that become personal to an unsettling degree. The interaction mode would be similar to using a Ouija board: placing your hands on the board and moving them around the sensing area triggers the service to generate outputs based on the accumulated data.

Idea 2: a mirror that uses machine learning to create diaries for the user.

The mirror is activated when the user stands in front of the device, with their reflection serving as the input. Using technologies that can recognize and process that input (e.g. text recognition, image recognition), the mirror generates a diary paragraph that summarizes the user’s day. Besides recognizing the reflection, the device would also store historical inputs (e.g. a mirror “selfie” from yesterday or days before) to inform the generated outputs.

Since both ideas involve generated text, we also want to explore ways of forming the language of the outputs. Right now we are thinking about using poem-like structures so that the output leaves space for the user’s own interpretation.

Feedback & Iteration

The main feedback we got from Daragh and Policarpo was that we need to consider the time constraints and the technical complexity of building networked systems, and that we need to strengthen the story and understand what kind of emotion we’d like to trigger. Based on the feedback, we decided that the input should not rely on data from other IoT devices, and that we would use an existing display, such as an iPad, to act as the Ouija board or the mirror.


// 2

Iterated Idea

In the second phase, we narrowed it down to the idea of a “machine learning mirror that tells my daily horoscope.” The scenario is as follows:

In the morning, I take a look in the mirror (an iPad or phone). I can click a button on it and see or hear a horoscope for the day, shown on or spoken from the mirror. The content is based on the letters it recognizes from my facial expression and a photo I took in the past (maybe yesterday, or the same day last year). The content is unsettling and makes me think the device is monitoring my activities.

Feedback & Iteration

The main feedback we got from Daragh and Policarpo this time included minor technical points, such as how we could identify letters from a facial expression, as well as conceptual questions, such as whether we envision this as a poetic or critical experience, and how we should link the concept back to the horoscope and the generated content.

Here, we found that the most important thing might be forming concrete content and letting that content drive our discussion and decisions, because content acts as the bones or scaffold that can solidify the project.

The main explorations we did based on the feedback included:

  • Technology: We changed from detecting letters in facial expressions to detecting emotions in facial expressions, because this is what current technology can do and it is a more direct source of input (a rough sketch of this mapping follows this list).
  • Environment: We revisited the environment where we’d like this mirror to be. Should it be in someone’s home, which could be more intimate? Should it be on the street, where the users are random passersby? Should it be in a museum, where all users expect that the mirror can behave in surprising ways?
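As a rough illustration of how the emotion percentages could drive the tone of the fortune, a small helper might look like the following. The mapping and thresholds are assumptions made for the sketch, not the exact logic of our prototype:

```javascript
// Illustrative only: map face-api.js-style expression scores to a fortune "tone".
// Strong negative emotion pulls toward the stranger, more unnerving fortunes,
// while a clearly positive face gets an encouraging one.
function chooseTone(expressions) {
  const negative =
    (expressions.sad || 0) + (expressions.angry || 0) + (expressions.fearful || 0);
  const positive = (expressions.happy || 0) + (expressions.surprised || 0);
  if (negative > 0.5) return "unnerving";
  if (positive > 0.5) return "encouraging";
  return "plain"; // mostly neutral faces get an ordinary daily fortune
}

// e.g. chooseTone({ happy: 0.72, neutral: 0.2, sad: 0.08 }) -> "encouraging"
```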

// 3

Iterated Idea

The mirror sits in an elevator in a residential apartment building, an elevator people know quite well. Every time you walk alone into the elevator late at night and press the floor button, the mirror is activated: it automatically captures your attention, analyzes your facial expression, and provides a fortune for the next day based on it. Sometimes you get a normal response; sometimes you get human-like reactions, as if the mirror wants you to spend more time with it.

Rationale

We set the scene in an elevator in a residential apartment building because:

  • The experience of riding in an elevator is something people would feel familiar with
  • We could purposefully make users feel trapped and powerless with our design narrative

The feelings we’d like to trigger are:

  • A machine you are familiar with and thought of as a soulless object can actually seem alive
  • The machine being able to analyze your facial expressions is more intrusive, and the experience touches upon the topic of surveillance.

Open Questions and Next Steps

Something really interesting that we neglected to consider is what could happen when there are multiple people in the elevator. At the moment, the idea seems to be that the mirror only runs when someone is entirely alone; it could be interesting to explore how the device changes as more people enter. We also think it could be very interesting for the device to store information about each individual so that it could perform more complicated tasks, such as weaving advertisements into the fortunes or noticing subtle changes in appearance. It could also be interesting to consider the time span of the fortunes: do we want them all to be daily, or would it be interesting for some longer-term fortunes to be given?

Group Reflection

  • Starting from forming the story and understanding what kind of emotion we’d like to trigger is important for navigating our direction. Next time we might try to find a balance between exploring technology and forming content at the same time. During the process, we can keep revisiting the real content, context, or story; this can help a lot in solidifying the overall concept.
  • We were trying to take a critical point of view on the idea of using A.I. for fortune telling, but could probably reflect more on the form and the tradition we were borrowing (face reading).
  • We should seek a stronger connection between the use of emotion recognition and evaluation and the output it produces.