

After many iterations and changes, I had a working shader. The end result was still lackluster, however, for two reasons. The first is that I could not find a way to record the Kinect feed directly. No matter how closely I crop the video area, the color data on the left and the depth data on the right will never be perfectly aligned, as they would be from a direct feed.

The other reason my results fall short is the "halo" effect in the depth data. If you look at my outline on the right in the photo below, you will see a white halo, which results from being too close to the camera. If I stand too far away, however, there is no depth data for the rest of my body. The halo effect, combined with the slightly misaligned crop, scatters the pixels widely around my rendering in the video. You can see this in the second photo below, along with how the depth data is mapped onto each square face.
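To make the two problems concrete, here is a minimal sketch of the sampling logic described above: splitting a side-by-side recording into its color (left) and depth (right) halves, then converting depth into a displacement while clamping the bright halo values. The function names, the `near_clip` threshold, and the `scale` factor are all illustrative assumptions, not values from the actual project or shader.

```python
import numpy as np

def split_side_by_side(frame):
    """Split a side-by-side Kinect recording into color (left half)
    and depth (right half). The halves only line up if the screen
    crop was pixel-exact, which a recorded feed rarely is."""
    h, w = frame.shape[:2]
    half = w // 2
    color = frame[:, :half]
    depth = frame[:, half:]
    return color, depth

def displace(depth, scale=0.1, near_clip=250):
    """Turn depth values into per-pixel displacement, clamping the
    bright 'halo' values that appear when the subject is too close
    to the sensor. scale and near_clip are hypothetical values."""
    d = depth.astype(np.float32)
    d = np.clip(d, 0, near_clip)  # suppress halo outliers
    return d * scale
```

Clamping only hides the halo; without a perfectly aligned crop, the displaced pixels still drift away from the color silhouette, which is exactly the scattering visible in the second photo.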
