
Iterated Idea

In the second phase, we narrowed our concept down to one idea: a “Machine Learning Mirror that tells my daily horoscope”. The context is as follows:

In the morning, I glance at the mirror (an iPad or phone). I tap a button on it and see, or hear, a horoscope for the day. The content is based on the letter of the alphabet the mirror recognizes in my facial expression, combined with a photo I took in the past (maybe yesterday, or on the same day last year). The content is unsettling and makes me feel the device is monitoring my activities.

Feedback & Iteration

The main feedback we got from Daragh and Policarpo this time covered both minor technical questions, such as how we could identify a letter of the alphabet from a facial expression, and conceptual ones, such as whether we envision this as a poetic or a critical experience, and how we should link the concept back to the generated horoscope content.

Here, we found that the most important thing might be developing concrete content and letting that content drive our discussion and decisions. Content acts as the bones, or scaffold, of the project: it is what solidifies it.

The main explorations we did based on the feedback included:

  • Technology: We changed from detecting a letter of the alphabet in a facial expression to detecting an emotion, since emotion recognition is something current technology can reliably do, and it is a more direct source of input (see the sketch after this list).
  • Environment: We revisited the environment where we would like this mirror to live. Should it be in someone’s home, which could feel more intimate? Should it be on the street, where the users are random passersby? Or should it be in a museum, where every visitor already expects the mirror to behave in surprising ways?
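
To make the emotion-detection direction concrete, here is a minimal sketch of how the mirror could map a detected emotion to a horoscope line. It assumes the open-source Python library fer for facial emotion recognition; the horoscope templates, file name, and function name are hypothetical placeholders for illustration, not our final content or implementation.

```python
# Minimal sketch: emotion-driven horoscope, assuming the `fer` library
# (https://pypi.org/project/fer/). Templates below are placeholders.
import random

import cv2
from fer import FER

# Hypothetical template pool keyed by emotions fer can detect.
TEMPLATES = {
    "happy": ["Your smile today echoes one from a year ago. Enjoy it while it lasts."],
    "sad": ["The mirror remembers a brighter morning. Something you lost may return."],
    "neutral": ["You look the same as yesterday. The mirror wonders if you noticed."],
    "angry": ["Yesterday's photo shows calmer eyes. Step carefully today."],
}

def daily_horoscope(image_path: str) -> str:
    """Detect the strongest emotion in a morning selfie and pick a matching line."""
    image = cv2.imread(image_path)  # assumes the photo exists at this path
    detector = FER(mtcnn=True)  # MTCNN gives more robust face detection
    emotion, score = detector.top_emotion(image)  # e.g. ("happy", 0.92)
    if emotion is None:  # no face found in the frame
        return "The mirror sees no one. Perhaps that is the message."
    lines = TEMPLATES.get(emotion, ["The mirror is silent about today."])
    return random.choice(lines)

if __name__ == "__main__":
    print(daily_horoscope("this_morning.jpg"))
```

A fuller version would also fold in the past photo, for example by comparing today's detected emotion against the one stored from yesterday or from the same day last year, which is what would give the horoscope its unsettling, being-watched quality.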
