Another Me in Another World

Made by Yu Mao

Based on Microsoft Kinect, KScan3D and Unity, the project reconstructs 3D models of human players to create an immersive interactive experience in which players can interact with an avatar that shares their physical appearance in the virtual world.


Team Members

Yu Mao (Robotics Institute, School of Computer Science)

Min Wang (Language Technologies Institute, School of Computer Science)


Intention

Role-play is an important part of the gameplay experience. Well-designed video games often try to keep the player immersed by building a role-play mechanism into their context. Notably, in this model the virtual world and the real world are separate and have no necessary relationship. In this project, we want to bridge the two by generating player-like character models and enabling interaction with them, based on mesh reconstruction and motion-sensing techniques.

Outcome

As shown below, the player can interact with their avatar, which shares (almost) the same physical appearance.
another me in another world - interaction
Mao Ryan - https://www.youtube.com/watch?v=wAg54wc37gg

Process

The project is implemented in three steps.


Step One - Mesh Reconstruction using Kinect

The first step is to use the Kinect to capture color and depth images of the player from different perspectives. We then use a tool such as KScan3D to reconstruct a 3D mesh model.

Human Mesh Reconstruction
Mao Ryan - https://www.youtube.com/watch?v=eyz3EXtFFgM
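The core of each single-perspective scan is back-projecting the Kinect's depth image into a 3D point cloud, which a tool like KScan3D then aligns and fuses across perspectives into one mesh. A minimal sketch of that back-projection, assuming illustrative pinhole-camera intrinsics (the `FX`/`FY`/`CX`/`CY` values below are placeholders, not calibrated Kinect parameters):

```python
import numpy as np

# Hypothetical depth-camera intrinsics (illustrative values only).
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_point_cloud(depth_mm):
    """Back-project a depth image (in millimetres) into a 3D point cloud.

    Each scan from one perspective yields one such cloud; reconstruction
    software aligns and fuses the clouds from all perspectives into a mesh.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0            # millimetres -> metres
    x = (u - CX) * z / FX            # pinhole model: X = (u - cx) * Z / fx
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```

This is the standard pinhole back-projection; in practice the real intrinsics come from the sensor's calibration data.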

Step Two - Kinect Motion Sensing

The second step is to use the Kinect to detect the player's dynamic gestures and convert the results into game-engine state input.

gesture recognition using kinect
Mao Ryan - https://www.youtube.com/watch?v=z6-kvl8_vSA
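Gesture detection of this kind typically works on the skeleton joints the Kinect tracks. The sketch below shows the idea with a single made-up gesture; the joint names follow the Kinect SDK skeleton, but the data layout (a plain dict of `(x, y, z)` tuples, y pointing up) and the `JUMP`/`IDLE` inputs are assumptions for illustration, not the project's actual gesture set:

```python
def detect_hands_raised(joints):
    """Return True when both hand joints are above the head joint.

    `joints` maps Kinect joint names to (x, y, z) positions in metres,
    with the y axis pointing up.
    """
    head_y = joints["Head"][1]
    return (joints["HandLeft"][1] > head_y and
            joints["HandRight"][1] > head_y)

def gesture_to_input(joints):
    """Map a recognised gesture onto a game-engine state input token."""
    if detect_hands_raised(joints):
        return "JUMP"
    return "IDLE"
```

Real dynamic gestures also look at joint positions over time (e.g. a wave is a hand oscillating across several frames), but the pipeline shape is the same: skeleton frame in, discrete input token out.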

Step Three - Game Development

The final step is to build the immersive interactive experience in a game engine such as Unity3D.
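On the game-engine side, the discrete inputs from the motion-sensing step drive the avatar's behaviour, which is naturally modelled as a small state machine. A sketch of that pattern (the states, inputs, and transitions below are illustrative, not taken from the project; in Unity this role is usually played by the Animator state machine in C#):

```python
class AvatarController:
    """Tiny state machine consuming gesture inputs from the sensing step."""

    # (current state, input) -> next state; unknown pairs keep the state.
    TRANSITIONS = {
        ("Idle", "JUMP"): "Jumping",
        ("Idle", "WAVE"): "Waving",
        ("Jumping", "IDLE"): "Idle",
        ("Waving", "IDLE"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def update(self, gesture_input):
        """Advance the avatar state for one frame of gesture input."""
        self.state = self.TRANSITIONS.get((self.state, gesture_input), self.state)
        return self.state
```

Keeping the transition table explicit makes it easy to add gestures later without touching the update logic.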


Reflection

We will probably continue developing the project; at this stage it is still a simple prototype. We could add features such as skeleton matching, so that the avatar's animation is driven directly by the player's movements.

We could also write our own computer-vision algorithms for gesture recognition, giving us more options and flexibility in designing the gaming experience.
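The skeleton-matching idea amounts to retargeting: deriving each bone's orientation from a pair of tracked Kinect joints and applying it to the corresponding avatar bone. A deliberately simplified 2D sketch of that computation (real retargeting works with full 3D rotations; the joint positions here are placeholders):

```python
import math

def bone_angle_2d(parent, child):
    """Angle in degrees of the bone from parent joint to child joint,
    measured in the camera's x-y plane. An avatar bone could be rotated
    to this angle so its pose mirrors the player's tracked skeleton."""
    dx = child[0] - parent[0]
    dy = child[1] - parent[1]
    return math.degrees(math.atan2(dy, dx))
```

A full implementation would compute a quaternion per bone and account for differing rest poses between the Kinect skeleton and the avatar rig.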