


Overview


This project uses recorded depth and color video from a Kinect 2.0 to render an object (in this case, me) in 3D, using homemade "pixels" to remap the depth and color data. Each pixel is displaced by its corresponding depth value, and together the pixels form a three-dimensional object in virtual reality space. The result is a virtual reality video experience.


All code and resources in this project are open source. They are free to use, distribute, modify, and adapt.

[Image: me rendered in VR]

The Video


The first step of this project was to record data from the Kinect. I used an app built with Electron, a framework based on Node.js, which let me stream the Kinect data to a viewing window. That was the easiest part. Next I had to record this video, so I used screen recording software called OBS Studio. Unfortunately, I could not record the feed directly from the Electron applet and had to literally record my screen! I cropped the recording area to the correct resolution and got what looked like usable footage.
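Roughly, the streaming step looks like the following. This is a minimal sketch assuming the kinect2 Node.js package is used inside the Electron app; drawToCanvas and the canvas ids are illustrative placeholders, not the applet's actual code.

```js
// Sketch: stream Kinect 2.0 depth and color frames in an Electron renderer,
// assuming the "kinect2" Node.js package.
const Kinect2 = require('kinect2');
const kinect = new Kinect2();

if (kinect.open()) {
  // Depth frames arrive as a buffer of per-pixel depth values.
  kinect.on('depthFrame', (depthBuffer) => {
    drawToCanvas('depth-canvas', depthBuffer);   // hypothetical helper
  });

  // Color frames arrive as a buffer of RGBA pixel data.
  kinect.on('colorFrame', (colorBuffer) => {
    drawToCanvas('color-canvas', colorBuffer);   // hypothetical helper
  });

  kinect.openDepthReader();
  kinect.openColorReader();
}
```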

[Image: a cropped frame of the recorded Kinect feed]

At this point, I needed an object to "project" this data onto. I initially tried to use a subdivided plane, thinking that if the plane had the right number of squares, the resolution would work out. However, the squares of a subdivided plane cannot be manipulated individually. My instructor informed me that a better approach would be to generate a mesh object made of thousands of separate square faces (each actually two triangles). To do this, I generated the faces in code, along the lines of the sketch below.

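The sketch assumes the geometry is built with a three.js BufferGeometry; the grid size, square size, and variable names are illustrative rather than the exact values used.

```js
// Sketch: build a grid of independent squares, two triangles each, so every
// square can later be pushed in depth on its own.
const COLS = 256, ROWS = 212;  // grid resolution (illustrative)
const SIZE = 0.01;             // world-space size of one square (illustrative)

const positions = [];
const uvs = [];

for (let row = 0; row < ROWS; row++) {
  for (let col = 0; col < COLS; col++) {
    const x0 = col * SIZE, y0 = row * SIZE;
    const x1 = x0 + SIZE,  y1 = y0 + SIZE;

    // Two triangles forming one square face.
    positions.push(
      x0, y0, 0,  x1, y0, 0,  x1, y1, 0,
      x0, y0, 0,  x1, y1, 0,  x0, y1, 0
    );

    // All six vertices of a square sample the same spot in the video,
    // so the whole square moves (and is colored) as one "pixel".
    const u = col / COLS, v = row / ROWS;
    for (let i = 0; i < 6; i++) uvs.push(u, v);
  }
}

const geometry = new THREE.BufferGeometry();
// (older three.js versions use addAttribute instead of setAttribute)
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
geometry.setAttribute('uv', new THREE.Float32BufferAttribute(uvs, 2));
```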

The Canvas


This produced an appropriate canvas of squares that could function as pixels. Each square shares vertex positions with its neighbors, so the squares sit as close together as possible (giving the image its density) while each face still has individual depth mobility. The following images show how dense the object looks from a distance.

[Image: the canvas of squares (Close)]
[Image: the canvas of squares (Closer)]
[Image: the canvas of squares (Closest)]

The Shader

At this point, I needed to apply the recorded data to each of these faces. I had never written a shader before, but my instructor was immensely helpful in getting me through the process. A shader has two parts:

A vertex shader, which runs on every vertex in an object and can move it (here, by the corresponding depth value).

A fragment shader, which takes the color data and outputs a color for each fragment, passed on to the next stage of the graphics pipeline.

The entire scene is rendered using Mozilla's A-Frame VR framework. In A-Frame, custom shaders can be registered and then applied to entities in a scene. The shaders themselves are written in GLSL (the OpenGL Shading Language), a C-like language that handles data in the graphics pipeline; they are passed to the GPU through WebGL, the JavaScript graphics API that A-Frame (via three.js) is built on. The shader I wrote for this project follows the shape of the sketch below.

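The sketch assumes a single side-by-side video texture with color on the left half and depth on the right half; the shader name, uniform names, and scale factor are illustrative.

```js
// Sketch: an A-Frame custom shader that displaces each square by the depth
// half of the video and colors it from the color half. Names are illustrative.
AFRAME.registerShader('kinect-points', {
  schema: {
    map:        { type: 'map',    is: 'uniform' },               // recorded video
    depthScale: { type: 'number', is: 'uniform', default: 1.0 }  // displacement strength
  },

  vertexShader: `
    uniform sampler2D map;
    uniform float depthScale;
    varying vec2 vUv;

    void main() {
      vUv = uv;
      // The right half of the frame holds the depth image.
      vec2 depthUv = vec2(0.5 + uv.x * 0.5, uv.y);
      float depth = texture2D(map, depthUv).r;
      // Push the vertex along Z by its depth value.
      vec3 displaced = position + vec3(0.0, 0.0, depth * depthScale);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
    }
  `,

  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;

    void main() {
      // The left half of the frame holds the color image.
      vec2 colorUv = vec2(vUv.x * 0.5, vUv.y);
      gl_FragColor = texture2D(map, colorUv);
    }
  `
});
```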

Virtual Reality Scene

This was made using A-Frame VR, building off an existing openly licensed scene from my Reality Computing class. The scene itself is based on the A-Frame demo script, along the lines of the sketch below.

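A minimal sketch of the markup follows; the asset path, entity names, and the kinect-mesh component (which would attach the generated grid geometry) are hypothetical.

```html
<a-scene>
  <a-assets>
    <!-- The screen-recorded Kinect footage (path is illustrative). -->
    <video id="kinect-video" src="kinect-recording.mp4"
           autoplay loop muted crossorigin="anonymous"></video>
  </a-assets>

  <!-- The grid of squares, drawn with the custom shader and fed the video.
       'kinect-mesh' is a hypothetical component that swaps in that geometry. -->
  <a-entity kinect-mesh
            material="shader: kinect-points; map: #kinect-video; depthScale: 1.0"
            position="0 1.6 -2"></a-entity>

  <a-sky color="#111"></a-sky>
  <a-entity camera look-controls wasd-controls position="0 1.6 0"></a-entity>
</a-scene>
```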

Results

After many iterations and changes, I had a working shader. The end result was still lackluster, however, for two reasons. The first is that I could not find a way to record the Kinect feed directly. No matter how closely I cropped the video area, the color data on the left and the depth data on the right would never be perfectly aligned (as they would be from a direct feed).

The other reason my results do not look so good is the "halo" effect in the depth data. If you look at my outline on the right in the photo below, you will see a white halo. This results from being too close to the camera; however, if I stand too far away, there is no depth data for the rest of my body. The halo effect, combined with the slightly misaligned cropping, throws pixels wildly around my rendering in the video. You can see this in the second photo below, along with how the depth data is mapped onto every square face.

[Image: a test frame showing the white depth "halo" around my outline]
[Image: the rendered result in VR, with pixels scattered around my figure]

Future Plans

I would definitely like to continue working with the Kinect. It is a very affordable and accessible tool for rendering 3D images and video in a virtual reality setting.

As for my original plan, I would like to continue with higher quality video. I know there is a way to capture the Kinect output directly using Electron. I would also like to use libfreenect (or another library) to collect data from two Kinects simultaneously.

Lastly, though I've been assured it's impossible, I won't stop trying to have all of this data collected, processed, and displayed live over the web. Virtual reality video chat could be here today, just as soon as I get all of this together.
