
Outcome


Intention

  Sound is the weakest of our senses for recalling memories, giving new meaning to the phrase “in one ear and out the other.” Because the brain stores auditory information so poorly, meaningful conversations can be erased entirely. In response, this proposal explores saving that auditory information: when an individual records a conversation, using a phone recorder or a microphone with storage, a page is produced based on the conversation’s content and speakers. The page includes images of the participating speakers and collects photos from the internet on topics relevant to the conversation, overlaying the two image types to establish a hierarchy between the elements of the conversation.

Prototype

  Using a script written in Arduino, I attempted to create a web page that pulled photos from file folders and collaged them on the page, but I had difficulty with photo placement: the photos needed to be arranged randomly while simultaneously conforming to a hierarchy. For demonstration purposes, I instead crafted a physical model rather than a digital one, shaped like a phone. Collages were printed on sheets of paper and could be swapped out to simulate recordings of different conversations.
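The placement constraint described above, random arrangement that still respects a hierarchy, can be sketched as follows. This is a hypothetical illustration in Python, not the original Arduino script; the canvas size, image-size ranges, and the specific hierarchy rule (speakers large, foregrounded, and centered; topics small, scattered, and behind) are all assumptions.

```python
import random

CANVAS_W, CANVAS_H = 800, 600  # assumed page size in pixels

def layout_collage(speakers, topics, seed=0):
    """Return (name, x, y, w, h, layer) placements for each image.

    Assumed hierarchy rule: speaker photos are larger, sit on a higher
    layer, and are confined to the central band of the page; topic
    photos are smaller, placed fully at random, and drawn underneath.
    """
    rng = random.Random(seed)
    placements = []
    for name in topics:  # background layer 0: small, anywhere on the page
        w = rng.randint(80, 160)
        h = rng.randint(80, 160)
        x = rng.randint(0, CANVAS_W - w)
        y = rng.randint(0, CANVAS_H - h)
        placements.append((name, x, y, w, h, 0))
    for name in speakers:  # foreground layer 1: large, central band only
        w = rng.randint(200, 300)
        h = rng.randint(200, 300)
        x = rng.randint(CANVAS_W // 4, 3 * CANVAS_W // 4 - w)
        y = rng.randint(CANVAS_H // 4, 3 * CANVAS_H // 4 - h)
        placements.append((name, x, y, w, h, 1))
    return placements
```

Seeding the random generator keeps a given conversation's collage reproducible while still looking scattered; the layer value decides draw order, so speaker photos always overlay topic photos.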

Precedents

During Investigation I, I found the recollection of memories particularly interesting in my research; I explored theories and methods by which people have tried to store memories digitally, allowing an individual to save them indefinitely. Researchers at Purdue University, for example, used functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes in blood flow and is among the most reliable methods of decoding the brain, to reconstruct images from activity in the visual processing areas. Researchers at Carnegie Mellon took this a step further by using fMRI to analyze more complex thoughts. With my project, I tried to address the same problem of storing memories, but at a lower level of complexity and with the help of outside tools, such as a recorder.

Process

  While the outcome of this project remained focused on sound, it began as a way to use technology to decipher tone of voice. Originally, a wearable microphone would record sound and separate the audio files into groups based on noise-level ranges. When the user played back a recording on the computer to recall a specific conversation or memory, images would appear on the monitor and instrumental music would begin playing. These image and music files would reside in folders corresponding to the mood they most clearly express; the noise level and tone interpreted from the microphone recording would determine which “mood” folder to open.
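The noise-level grouping step above can be sketched in a few lines. This is a hedged illustration, not the original design: the mood names, the RMS thresholds, and the use of amplitude alone as a proxy for tone are all assumptions (real tone analysis would need much more than loudness).

```python
import math

# Assumed mapping from overall loudness to a "mood" folder.
# Each entry is (upper RMS bound for samples in [-1.0, 1.0], folder name).
MOOD_BINS = [
    (0.1, "calm"),
    (0.4, "neutral"),
    (1.0, "energetic"),
]

def rms(samples):
    """Root-mean-square amplitude of a list of samples in [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mood_folder(samples):
    """Pick the mood folder whose loudness range contains this recording."""
    level = rms(samples)
    for bound, folder in MOOD_BINS:
        if level <= bound:
            return folder
    return MOOD_BINS[-1][1]  # clip anything louder into the top bin
```

A playback tool would then open the matching folder and display its images and instrumental tracks alongside the recording.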

The product of this device changed after I created a purpose mind map (pictured), through which I decided to focus more on visual cues in remembering conversation. Thus, the final product changed from a sort of mood board to a collage that directly represented the conversation’s speakers and topics.

Open Questions and Challenges

The privacy issues raised by recording a conversation between two or more people, and the ethics of technology intelligently searching through all outlets to curate a meaningful collage, pose important questions, including but not limited to: When should the contents of a conversation be saved? Most social media outlets use photos to curate a profile of memories; what are the effects of conversation becoming a shareable medium? How does one determine the boundaries of remembering one’s own conversation when it involves more than just oneself? Should a recorder obtain consent, or should it be assumed that the audio file is used solely for personal memory rather than public sharing?

Reflection

If I had the opportunity to complete this project again, I would alter the resulting product to be a more tangible, interactive visualization. An interactive result, whether related to light or smell, might provide a more memorable experience than a curated page of related imagery and people, because the original memory of the conversation already contains those same topics and people; rather than reinforcing a memory with information the memory already holds, it might be worth studying the results of combining two methodologies for recollection. Unfortunately, I did not get as far as I wanted in this project; I attempted coding the resulting pages in Arduino without success, so my coding abilities have a lot of room for improvement. Also, although the code primarily navigated computer files (and, for demonstration purposes, used photos collected before the experiment), artificial intelligence would have been useful for gathering topic-relevant imagery automatically.

Attribution and References

  1. http://fmri.ucsd.edu/Research/whatisfmri.html

  2. https://www.sciencemag.org/news/2018/01/mind-reading-algorithm-can-decode-pictures-your-head

  3. https://www.cmu.edu/dietrich/news/news-stories/2017/june/brain-decoding-complex-thoughts.html

  4. https://www.independent.co.uk/news/health/alzheimers-patients-recover-lost-memories-dementia-disease-study-scientists-memory-loss-reverse-a7862781.html
