Living On Memorial Booth

Made by Laura Rodriguez, Allana Wooley and Shengzhi Wu

Created: March 23rd, 2019


Proposal

When people go to cemeteries today, they walk down rows of silent stone engraved with names, dates, and perhaps an identifier or two (mother, child, loving spouse). Cemeteries are established public spaces for respectfully recalling and remembering individuals, but this one-sided interaction, rememberer to rememberee, hasn't changed for centuries. With Me, Myself and I, we envision the cemetery of the future.


Public space for remembrance of an individual is reserved at birth rather than upon death. In this future cemetery, individual plots are built as miniature mausoleums that store annually collected impressions of an individual, physicalized as digital avatars. As an individual ages, the collection of impressions in their mausoleum grows. People can revisit their own past selves as well as the past selves of family members, friends, and even strangers--anybody you might visit in a cemetery.

Interactions with past selves are not without consequences. If an adult visits their 12-year-old self and tells them what happens in their future, it will affect how that 12-year-old version conceptualizes the world and reacts back to the visiting adult and all subsequent visitors. What happens if you tell a younger self that you broke up with the person they were in love with at the moment of impression-making? What happens if you revisit a favorite memory with this imprint, and the avatar and your current self remember the event differently? What happens if you tell a younger version that a close family member died? What happens if you tell a younger version that you have given up on a long-held dream? What happens if somebody else tells a digital avatar version of you that you yourself have died?

Our future cemetery explores the ways a person is remembered by different people at different times, the ways a person evolves over time, and the ways memory can be subtly manipulated and alternate realities created.


Living On Memorial Booth
Allana Wooley - https://youtu.be/SwJX89OKqIw

Intention

Death doesn’t mean what it once did. Instead of marking finality and the end of one’s being, death is evolving to mean only that the physical body is no longer around.

There are already people and companies dedicated to creating personality-accurate representations of our loved ones after death. Sending a text to the dead and expecting, and receiving, a response is a new reality. As AI capabilities, machine learning, and the amount of data we pour into technology (from our online personas to our digital accounts and communication logs) grow, so will the accuracy, intuition, engagement and intimacy made possible by these post-death human representations. In the next decade, we will see people continue to move far and often. Meaningful relationships will be scattered across the globe, and few people will live close to their relatives’ final resting places. With the complications and expense that come with needing to travel for funerals and memorial events, and the lack of access to traditional collective memorial sites (i.e. cemeteries), there will be a demand for new ways to commemorate, remember and interact with the beloved dead.

Living On, a remembering booth, addresses these needs. We envision Living On as a start-up originating in New York City. Built off the docks that jut into the harbor, these booths take mourners from the busy, unfeeling and continually moving streets of New York City into an intentionally isolated environment where they have a chance to pause, remember and speak with the loved ones they’ve lost. Creating an environment within the booth helps reset users’ mindsets, slow them down, and prepare them for meaningful experiences with those they have lost. Once an engagement between the living and the passed is complete, users emerge into the city facing the water and the New York City skyline, a gentle reintroduction that continues the flow from the opening and closing meditation periods.

Ultimately, my team is trying to do two things:

  1. Create a thoughtful, meaningful mourning experience for the new realities of an increasingly transient society and increasingly tight demands on land.
  2. Force engagement with the question of digital life after death. What are the implications of maintaining our loved and lost in a semblance of a semi-sentient form? Who gets to decide what happens to somebody’s data, digital legacy and memory? Built off data, how will these reincarnations drift from the reality of the living person, and how will continued interactions with people shift the avatar even further?


Precedents

Companies like Eternime and Replika are at the forefront of creating digitized representations of individuals after death that can still be interacted with. Simply feed in enough text logs and whatever other data you have, and their algorithms will spit out a chatbot that can be ‘talked to’ and that matches the personality and characteristics of the originating human. Individuals are getting into the game too, using machine learning on the text logs they have with a person to build their own bots that reconstruct that person’s identity and conversational traits. This digital reincarnation is happening now and is relatively accessible.

Visually, too, we have seen an increase in the number of people being reconstructed and made to ‘perform’ heedless of what the original living person, or their family, would have wanted. Tupac and Amy Winehouse have been brought back from the dead, turned into holograms and sent on tour. Following their deaths, Philip Seymour Hoffman and a few Star Wars characters were digitally reconstructed so their final movies could be completed. Each time this has happened, there have been major discussions about the ethical concerns and the rights of the dead and their estates to their images, but that hasn’t stopped these technologies from being used for reconstruction purposes. As the living today build vast digital profiles of themselves, nearly everybody has enough content online for these 3D renderings as well as for vocal reconstruction.


Prototype

For our prototype, we created a to-scale model of the Living On booth. Displaying the meditation spaces capping each end of the experience as well as the main space with its interactive interface table and full-length screens, this model clearly communicates the form and flow through the booth. The scale model was constructed from laser-cut foamcore and hand-cut mat board. Because we were using the scale model to illustrate the different areas within the user flow, we kept its fidelity clean but simple, so that it would not overpower its purpose. We purposely made the outside white and the inside black to illustrate how the building would have a similar contrast in materials between the exterior and interior. When designing the model, we were thinking about the mausoleums found in cemeteries and referencing those structures, but with a modern form. The form was also inspired by Richard Serra sculptures. It was designed with curved walls to help guide the user through the space, as well as to allow the screens in the main room to surround the user for a more immersive experience. It was designed intentionally for the user to enter and exit through separate areas, so that the space would emphasize the feeling of a journey by directing the user to move through the entire structure.


Model plans and construction

Below is a video of the light-up prototype of the scale model (we used Keynote to prototype it)


The interactive interface was made in Sketch and prototyped in Principle for the video. The interface was designed to be simple, with limited selections for the user, focused only on selecting the person they want to talk to and the time frame of memories. Because the physical token was part of the interaction, we made sure to design the digital element to work smoothly with it. We took into consideration the animations and interactions that would occur on the interface when the token activates the table, as well as how we could use the visual design and animations as cues to inform the user without needing written or spoken instructions during the experience. (For example, the picture of the selected person animates and disappears into the table as a signal that the user is calling the person, and then the digital avatar appears on the screens.)


Wireframe Flow

Below is a video of the interaction sequence of the tabletop interface


Additionally, we created an acrylic mockup of the interaction experience. With a slanted clear acrylic reflecting what’s playing from an iPad resting atop a box, we can create a kind of holographic screen matching the scale of our model. Using voice triggers and natural language processing capabilities, one only needs to speak to the reflected image to get back a specific, targeted response. We intentionally glitched the reflection to put a bit of distance between the viewer and the reimagined avatar. For an even uncannier semblance between representation and life, the avatar could be played as though on the other end of a Skype or Facetime call.


Unity Prototyping Process & Techniques

We used Unity as the major tool to develop the prototype for this concept. In order to make an interactive demo, we first utilized the IBM Watson voice recognition API to understand what users are saying and convert their voice into text; we then used Natural Language Processing (NLP) and Natural Language Understanding (NLU) technology to make sense of what a user said. For the NLU part we used Google Dialogflow, a GUI tool that helps quickly build a voice agent demo. I entered several pre-built intents; for example, when a user says "I miss you", the system recognizes it and sends back the intent "miss_you" as a JSON response. As such, we can easily decode the JSON and use the intent to trigger certain actions.
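As a rough illustration of that last step, the sketch below shows how such a JSON response could be decoded in Unity and routed to an action. It assumes a Dialogflow v2-style payload (queryResult.intent.displayName); the class names and the PlayClip helper are placeholders for illustration, not the project's actual code.

// Minimal sketch: route a recognized intent to an avatar action.
// Assumes a Dialogflow v2-style JSON payload; adjust field names to the real response.
using System;
using UnityEngine;

[Serializable] public class IntentInfo  { public string displayName; }
[Serializable] public class QueryResult { public IntentInfo intent; }
[Serializable] public class NluResponse { public QueryResult queryResult; }

public class IntentRouter : MonoBehaviour
{
    // Called with the raw JSON string returned by the NLU service.
    public void OnNluResponse(string json)
    {
        NluResponse response = JsonUtility.FromJson<NluResponse>(json);
        string intent = (response != null && response.queryResult != null && response.queryResult.intent != null)
            ? response.queryResult.intent.displayName
            : null;

        switch (intent)
        {
            case "miss_you":
                PlayClip("miss_you");   // trigger the avatar's response to "I miss you"
                break;
            default:
                PlayClip("idle");       // fall back to the idle/listening loop
                break;
        }
    }

    void PlayClip(string clipName)
    {
        // Placeholder: the prototype would switch the avatar video here (see the clip sketch below).
        Debug.Log("Triggering avatar clip: " + clipName);
    }
}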

To make the interaction believable, we recorded a few clips of Allana talking against a black background, so that we could cut the clips into pieces and use the voice and recognized intents to trigger different pieces of video. However, because the videos were recorded and cut into pieces, the transitions between clips are not smooth, and the gaps between them are easy to spot. Therefore, we additionally added a video glitch effect, which both gives a sense of sci-fi and smooths the transitions.
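The clip-switching logic could look something like the sketch below: a looping idle clip of the avatar plays by default, a response clip is swapped in when an intent fires, and playback returns to the idle loop when the response ends. The field names and clip assignments are illustrative assumptions, not the actual project files.

// Sketch of intent-driven clip switching with Unity's VideoPlayer.
// Clips are assumed to be assigned in the Inspector; names are illustrative.
using UnityEngine;
using UnityEngine.Video;

public class AvatarClipPlayer : MonoBehaviour
{
    public VideoPlayer player;     // renders onto the holographic display surface
    public VideoClip idleClip;     // looping "listening" footage of the avatar
    public VideoClip missYouClip;  // response clip for the "miss_you" intent

    void Start()
    {
        // Default to the idle loop while waiting for an intent.
        player.isLooping = true;
        player.clip = idleClip;
        player.Play();
    }

    public void PlayResponse(VideoClip responseClip)
    {
        player.isLooping = false;
        player.clip = responseClip;
        player.loopPointReached += BackToIdle;  // return to idle when the response ends
        player.Play();
    }

    void BackToIdle(VideoPlayer source)
    {
        source.loopPointReached -= BackToIdle;
        source.isLooping = true;
        source.clip = idleClip;
        source.Play();
    }
}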

Eventually, the project runs on a laptop and streams to an iPad placed on the box we made, which uses the Pepper's ghost technique to show the holographic image of the AI avatar.

The accompanying video is less about the concept and more a perspective-level demonstration of the experience somebody might have moving through the entire booth. What do they see? What do they hear? How do they engage with the avatar they call into being? We want to make sure the soft, slow, deliberate pacing through the booth is understood.

The Unity glitch effect reference:

https://github.com/keijiro/KinoGlitch
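One way the glitch could be used to hide a cut, sketched below, is to spike the effect's intensity for a moment around each clip change and then fade it back out. The Kino.DigitalGlitch component and its intensity property are assumptions based on the linked repo and may differ by version.

// Hedged sketch: briefly raise the glitch intensity to mask a clip change.
// Component and property names are assumed from the KinoGlitch repo; verify against your version.
using System.Collections;
using UnityEngine;

public class GlitchOnTransition : MonoBehaviour
{
    public Kino.DigitalGlitch glitch;   // image effect on the camera rendering the avatar
    public float peakIntensity = 0.6f;  // strength of the spike (0..1)
    public float holdSeconds = 0.4f;    // how long to hold the spike while the clip switches

    public void MaskTransition()
    {
        StopAllCoroutines();
        StartCoroutine(Pulse());
    }

    IEnumerator Pulse()
    {
        // Spike, hold while the clip swaps underneath, then fade back to clean video.
        glitch.intensity = peakIntensity;
        yield return new WaitForSeconds(holdSeconds);
        for (float t = 0f; t < 1f; t += Time.deltaTime * 2f)
        {
            glitch.intensity = Mathf.Lerp(peakIntensity, 0f, t);
            yield return null;
        }
        glitch.intensity = 0f;
    }
}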


Process

Meeting for the first time, we spent an hour just talking about death and what we found interesting about the way people are memorialized today. From this meeting, we talked about what it means to represent those who have died in a living context, where remembering takes place, and the dystopian, ill-considered consequences that may come from creating a digital avatar that lives on following death. Those who give themselves this power become Frankensteins of sorts, creating monsters from the digital collages they reanimate.

From this discussion, we settled on the idea of a booth that is situated in no particular place, but can instead create its own self-contained environment specially designed for remembering and engaging with the dead. We originally created a circle of screens because we imagined a user calling back an entire collective of their lost loved ones at a time. While we moved away from that idea for the purposes of our prototype and demonstration, the circular main space remained. Imagining it would be startling to move straight from the city into an emotionally intense communication with the dead, we added meditation pods to the front and back of our experience flow, giving the individual time to rest, calm down and remove themselves from the concerns of the outside world. Similarly, when exiting the experience, moving from a real interaction with a much-loved deceased family member or friend immediately into the world would be jarring and diminish the importance of the interaction that just took place. A second meditative experience provides built-in space to decompress, reflect and prepare for reentry into the world.

Imagining our booths already placed throughout NYC and in use a decade from now, backcasting forced us to consider the practicalities of our memorial solution. The technology we are proposing may seem advanced, but all of the components are in fact already in place and in active use, for many of the same purposes. Eye-tracking and motion-aware software is not difficult to access and is being used widely in video games today. Natural language processing technology is developing quickly as AI develops. And, as mentioned previously, image and personality reconstruction are already happening. The societal and cultural hunger for, and openness to, new places and ways to mourn is here as well. Honestly, the booths could pop up today, with the right funding and a company interested in advocating for their inclusion. The biggest hurdle to making a booth like this a reality is the legal issues around the use of a person’s likeness following death. What rights the dead do and don’t have will no doubt be a major legislative question in the next ten years, as the use and reuse of the dead becomes more possible and more common.

Assured our idea is founded in ground truths, we moved into laying out the particulars of the physical space, the points of interaction, and the emotional journey a user will flow through, with hybrid space maps and experience mapping.


Open Questions and Challenges

At the heart of our original conversation about the memorial booth was a question: what happens when death is not the end of a relationship? Does this make it more difficult for the memory and pain of loss to fade? Will the remaining living reject life and those alive around them, choosing instead to exist in a liminal space between death and life, able neither to go back in time nor to move forward?

Further, we are curious about what kind of avatar would be recreated based solely on digital remnants: pictures, videos, posts and text logs are certainly part of a person in life, but they fall far short of encompassing the person in sum. Moreover, any sort of algorithmically compiled avatar is subject to change and evolution as new data is fed into its database. At what point does an avatar stop reflecting the original person and become a new entity? How does that evolution change the legal and ethical conversation?
