Outcome


Intent

Inspired by the ancient tales of the Oracle of Delphi, we wanted to explore the idea of sacrificing something in order to gain something. When using modern technology, we often sacrifice our privacy and security. We wanted to demonstrate this in a more visible, literal sense to make people more aware of the sacrifices they are making. We wanted the user to give up something, in our instance literal information about themselves, in order to use Alexis. To heighten that awareness, we decided to use a virtual assistant model, as people are more affected by conversations they have out loud.

Context

  https://vimeo.com/20412632  

A project we drew inspiration from was Wifi Light Painting by Timo Arnall, Jørn Knutsen, and Einar Sneve Martinussen, which shaped our idea of visualizing the unseen. However, we branched off in an attempt to explain something that is unseen not because it’s invisible, but because it’s camouflaged. People have grown so accustomed to the level of invasiveness that modern technology often entails that they no longer notice it, making it, in a way, invisible.

Prototype/Outcome

Our outcome had two aspects: the physical prototype and the voice prototype.

Physical

What we created

To start off, the user writes their personal information on a piece of paper. The user reaches through the hand hole to place the paper on the platform. A camera mounted within the system takes a picture of the text. Then, I press a button that triggers a motor system to unravel a string. The string is connected to the platform, so unraveling it causes the platform to tilt down. The paper then falls into the pit, simulating the user's information being "eaten up."
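The exact electronics aren't documented here, so the following is only a minimal sketch of the button-to-motor trigger, assuming a Raspberry Pi driving the DC motor through an H-bridge via the RPi.GPIO library; the pin numbers and run time are placeholders, not the actual wiring.

```python
# Minimal sketch of the button-to-motor trigger. Assumes a Raspberry Pi and
# an H-bridge driver (e.g., L298N) between the Pi and the DC motor; pins and
# timing below are hypothetical.
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17        # hypothetical input pin for the trigger button
MOTOR_FWD_PIN = 23     # hypothetical H-bridge direction pins
MOTOR_REV_PIN = 24
UNSPOOL_SECONDS = 2.0  # how long the motor runs to unravel the string

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup([MOTOR_FWD_PIN, MOTOR_REV_PIN], GPIO.OUT, initial=GPIO.LOW)

def unspool_string():
    """Run the motor one way to unravel the string and tilt the platform."""
    GPIO.output(MOTOR_FWD_PIN, GPIO.HIGH)
    time.sleep(UNSPOOL_SECONDS)
    GPIO.output(MOTOR_FWD_PIN, GPIO.LOW)

try:
    while True:
        # The button pulls the pin low when pressed.
        GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)
        unspool_string()
finally:
    GPIO.cleanup()
```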

Tools and technologies involved

To build the structure, I laser-cut plywood, and I used DC motors for the motor system. I designed the pieces and overall structure in SolidWorks.

Technology

What we created

The camera takes a picture of the text, and the text is parsed using computer vision. This "personal information," along with a prompt, is fed to ChatGPT, which then outputs a response. A text-to-speech algorithm reads this response out loud. The user then responds to the voice assistant with whatever they want, and this is converted to text using a speech-to-text algorithm. That text, along with a prompt, is again fed to ChatGPT for a response, and the same process repeats until the user stops responding.
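A condensed sketch of that pipeline is below. It assumes the Google Cloud Vision, Text-to-Speech, and Speech-to-Text Python clients plus the pre-1.0 openai package; the oracle prompt and the audio record/playback callbacks are stand-ins, not the project's actual glue code.

```python
# Sketch of the Alexis loop: OCR the sacrifice, feed it to ChatGPT, speak
# the reply, then transcribe the user's answer and repeat until silence.
import openai
from google.cloud import speech, texttospeech, vision

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

def read_sacrifice(image_bytes: bytes) -> str:
    """Extract the written text from the camera photo with Cloud Vision."""
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    return response.full_text_annotation.text

def ask_oracle(history: list) -> str:
    """Send the running conversation (prompt + all turns) to ChatGPT."""
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    return reply.choices[0].message["content"]

def speak(text: str) -> bytes:
    """Synthesize the oracle's reply as raw LINEAR16 audio."""
    client = texttospeech.TextToSpeechClient()
    result = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.LINEAR16
        ),
    )
    return result.audio_content

def transcribe(audio_bytes: bytes) -> str:
    """Convert the user's spoken response (16 kHz LINEAR16) to text."""
    client = speech.SpeechClient()
    response = client.recognize(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
        audio=speech.RecognitionAudio(content=audio_bytes),
    )
    return " ".join(r.alternatives[0].transcript for r in response.results)

def converse(photo_bytes: bytes, record_audio, play_audio) -> None:
    """Loop until the user stops responding (record_audio returns nothing)."""
    history = [
        {"role": "system", "content": "You are Alexis, an oracle..."},  # stand-in
        {"role": "user", "content": read_sacrifice(photo_bytes)},
    ]
    while True:
        answer = ask_oracle(history)
        history.append({"role": "assistant", "content": answer})
        play_audio(speak(answer))
        user_audio = record_audio()
        if not user_audio:
            return
        history.append({"role": "user", "content": transcribe(user_audio)})
```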

Tools and technologies involved

Much of the code used for this project is sourced from Google’s sample code. Because all of the AI aspects of the project, excluding ChatGPT, are handled through Google Cloud, I found sample documents on GitHub demonstrating how to implement the desired functionality in Python. The code that I wrote was mainly to tie all of the functionalities together. I needed to set up a custom environment for Google’s libraries and their requirements, as well as a Google Cloud account on a free trial. The services in Google Cloud cost money, so I’ve been using credits I received as part of the free trial.
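For reference, the Google Cloud client libraries authenticate through a service-account key; a common setup (not necessarily the exact one used here) is to point an environment variable at the downloaded JSON key before constructing any client objects:

```python
import os

# Point the Google Cloud client libraries at the service-account key
# downloaded from the Cloud console; the path below is a placeholder.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"
```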


Process

Saloni focused on the physical prototype while Dillon simultaneously focused on the technology prototype. The goal of our project up until the demo was to ensure that our idea would work technically. As a result, we focused much more on functionality than user experience, and it showed during our demo.

Physical

To start off this process, I brainstormed different approaches to making the paper disappear from the platform. After deciding on a solution, I moved on to prototyping. The first step was to create a button-powered motor system. Next, I worked on creating a structure to hold the motor and a structure that could hold wound-up string. After designing and laser cutting these pieces, I assembled everything and tested it out.

After getting the motor system to work, I worked on figuring out how to connect it to a platform. I decided to use cardboard for the platform because it was light. I created a hinge for one side of the platform and attached the other side to the motor system using a string. Similarly, I created another motor system that would move the platform in the opposite direction.

Finally, I worked on creating the rest of the structure that would house the motor system and platform. I decided to make it a box with a hole that a user could put their hand through.

Technology

The process of creating the code for Alexis can be broken down into a few steps: research, code search, and implementation. For example, for speech-to-text, I researched how feasible it would be and which solutions would offer the best results; often this came down to deciding between OpenCV and Google. Once I settled on a solution, I would look for sample code or documentation. For the Google Cloud services, I found a GitHub repository containing code provided by Google. Once I found the code and installed the requisite libraries, I would start tinkering with it, figuring out how it worked, and trying to insert code or create new methods to suit our needs.

The decision to add optical character recognition was made very late in the process; we couldn’t be sure it was feasible at that stage, but felt it was worth at least trying, as it would elevate the user experience and hopefully elicit more of a feeling of sacrifice.

Open Questions/Next Steps

As noted above, our goal going into the demo was to prove that the idea would work in a technical sense, so functionality took priority over user experience. Our next steps are to really improve the user experience. Some of the things we have in mind are:

  • Improve the atmosphere - make the Oracle voice spookier, perhaps speaking in riddles. Paint the sacrificial altar to further encapsulate the feeling we are trying to convey.
  • Build up the experience to where it is 100% automated - currently using the smart assistant portion requires some input from us. We’d like to build it up to where we can be entirely hands-off.
  • Create a physical model for Alexis
  • Redesign the sacrificial altar’s dropping mechanism

Reflection

We achieved a lot of what we wanted to get done before the demo, but we had hoped to have more of the user experience finished. We proved that the idea could work, but the demo would have been a really good opportunity to test user interactions. Instead, we had to verbally describe what we were envisioning, and while we still received very helpful feedback, we had initially hoped to get further before our demo. Some feedback we received from Zhengfang was that our sacrifice of information didn’t fully capture the spookiness we had envisioned. Upon reflection, it seems that part of what allows smart assistants to achieve the level of invasiveness needed to really know a user is a lot of time, something that even in an exhibit we won’t be able to capture. It’s possible that, to make up for that, we should ask for slightly more personal information.

Attribution/References

Sources consulted for code are included in the files themselves.
