This project uses recorded depth and color video from a Kinect 2.0 to render a subject (in this case, me) in 3D, using homemade "pixels" to remap the depth and color data. Each pixel is displaced by its corresponding depth value, forming a three-dimensional object in virtual reality space. The result is a virtual reality video experience.
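The core remapping idea can be sketched as a depth back-projection: each depth pixel becomes a 3D point, paired with the color at the same pixel. This is a minimal sketch, not the project's actual code; the intrinsics values (`FX`, `FY`, `CX`, `CY`) are hypothetical stand-ins for a real Kinect 2.0 calibration.

```python
import numpy as np

# Hypothetical Kinect 2.0 depth-camera intrinsics (focal lengths and
# principal point); real values come from device calibration.
FX, FY = 365.0, 365.0
CX, CY = 256.0, 212.0

def depth_to_points(depth_mm, color):
    """Back-project a depth frame (millimetres) into 3D points,
    pairing each point with the color at the same pixel."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0                      # depth in metres
    x = (u - CX) * z / FX                      # pinhole back-projection
    y = (v - CY) * z / FY
    valid = depth_mm > 0                       # 0 means no depth reading
    points = np.stack([x, y, z], axis=-1)[valid]
    colors = color[valid]
    return points, colors

# Tiny synthetic example: a 2x2 depth frame with one missing reading,
# plus matching per-pixel colors.
depth = np.array([[1000, 0], [2000, 1500]], dtype=np.float32)
color = np.array([[[255, 0, 0], [0, 0, 0]],
                  [[0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
pts, cols = depth_to_points(depth, color)
print(pts.shape)   # three valid pixels, one 3D point each
```

Rendering then amounts to drawing one colored point (or a small quad, the homemade "pixel") at each back-projected position for every video frame.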
All code and resources in this project are open source. They are free to use, distribute, modify, and adapt.