
Unity Prototyping Process & Techniques

We used Unity as the main tool to develop the prototype for this concept. To make an interactive demo, we first used the IBM Watson speech recognition API to convert what users say into text, and then applied Natural Language Processing (NLP) and Natural Language Understanding (NLU) to make sense of it. For the NLU part we used Google Dialogflow, a GUI tool for quickly building a voice agent demo. I entered several pre-built intents; for example, when a user says "I miss you", the system recognizes it and returns the intent "miss_you" in a JSON response. We can then easily decode the JSON and use the intent to trigger specific actions.
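As a rough illustration, the decoding step can be as simple as mapping the intent name in the JSON payload to an action in Unity. This is a minimal sketch, assuming a Dialogflow-style detectIntent response with a queryResult.intent.displayName field; the exact payload and routing in our demo may differ.

```csharp
using System;
using UnityEngine;

// Minimal classes mirroring the parts of a Dialogflow-style response we care about.
[Serializable] public class IntentInfo { public string displayName; }
[Serializable] public class QueryResult { public string queryText; public IntentInfo intent; }
[Serializable] public class DialogflowResponse { public QueryResult queryResult; }

public class IntentRouter : MonoBehaviour
{
    // Called with the raw JSON string returned by the NLU request.
    public void HandleResponse(string json)
    {
        var response = JsonUtility.FromJson<DialogflowResponse>(json);
        string intent = response?.queryResult?.intent?.displayName;

        switch (intent)
        {
            case "miss_you":
                // Trigger whatever reaction is mapped to this intent.
                Debug.Log("Playing the 'miss_you' response clip");
                break;
            default:
                Debug.Log($"Unhandled intent: {intent}");
                break;
        }
    }
}
```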

To make the interaction believable, we recorded a few clips of Allana talking against a black background, cut them into pieces, and used the recognized intents to trigger the corresponding clip. However, because the videos are pre-recorded and cut into pieces, the transitions between clips are not smooth, and the gaps between them are easy to notice. We therefore added a video glitch effect, which both gives a sci-fi feel and masks the transitions (see the sketch below).
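A rough sketch of this clip-switching idea is shown below. The field glitchEffect stands in for whatever post-processing component sits on the camera (e.g. one of the KinoGlitch effects); the component names, clip names, and timing values here are illustrative assumptions, not the exact project code.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Video;

// Swaps pre-recorded avatar clips when an intent arrives, and briefly enables
// a glitch effect so the hard cut between clips is hidden.
public class AvatarClipSwitcher : MonoBehaviour
{
    public VideoPlayer videoPlayer;      // plays the avatar footage
    public VideoClip missYouClip;        // response clip for the "miss_you" intent (assumed name)
    public Behaviour glitchEffect;       // glitch component on the camera, enabled only during cuts
    public float glitchDuration = 0.4f;  // seconds the glitch covers the transition

    public void OnIntent(string intent)
    {
        if (intent == "miss_you")
            StartCoroutine(SwitchClip(missYouClip));
    }

    IEnumerator SwitchClip(VideoClip next)
    {
        glitchEffect.enabled = true;     // mask the cut with the glitch
        videoPlayer.clip = next;
        videoPlayer.Play();
        yield return new WaitForSeconds(glitchDuration);
        glitchEffect.enabled = false;
    }
}
```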

In the final setup, the project runs on a laptop and streams to an iPad placed on a box we built using the Pepper's ghost technique, which displays the holographic image of the AI avatar.

The accompanying video is less about the concept and more a first-person demonstration of the experience somebody might have moving through the entire booth. What do they see? What do they hear? How do they engage with the avatar they call into being? We want to make sure the soft, slow, deliberate pacing through the booth is understood.

The Unity glitch effect reference:

https://github.com/keijiro/KinoGlitch

