
Team Members

Yu Mao (Robotics Institute, School of Computer Science)

Min Wang (Language Technology Institute, School of Computer Science)

Intention

Role-play is an important part of the gameplay experience. Well-designed video games often keep the player immersed by building a role-play mechanism into their context. Notably, in this model the virtual world and the real world are separate, with no necessary relationship between them. In this project, we want to bridge the virtual and real worlds by generating a player-like character model and enabling interaction with it, based on mesh reconstruction and motion-sensing techniques.

Outcome

As shown below, the player can interact with an avatar that shares (almost) the same physical appearance.

Process

The project is implemented in three steps.


Step One - Mesh Reconstruction using Kinect

The first step is to use the Kinect to capture color and depth images of the player from different perspectives. We then use tools such as KScan3D to reconstruct a 3D mesh model.
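The raw input to this reconstruction step is a set of depth frames. As a minimal sketch of what a tool like KScan3D does internally, the snippet below back-projects a single depth map into a 3D point cloud using a pinhole camera model; the intrinsic parameters are assumed, approximate Kinect v1 values, not calibrated ones.

```python
import numpy as np

# Assumed (uncalibrated) Kinect v1 intrinsics, in pixels.
FX, FY = 594.21, 591.04   # focal lengths
CX, CY = 339.5, 242.7     # principal point

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Convert an HxW depth map (meters) to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels with no depth reading

# Example: a synthetic 640x480 frame of a flat surface 2 m away.
depth = np.full((480, 640), 2.0)
cloud = depth_to_points(depth)
print(cloud.shape)  # (307200, 3)
```

Clouds captured from several perspectives are then registered and fused into a single mesh, which is the part tools like KScan3D handle for us.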


Step Two - Kinect Motion Sensing

The second step is to use the Kinect to detect the player's dynamic gestures and translate the results into game-engine state input.
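This step can be sketched as two small pieces: classifying a window of tracked joint positions as a gesture, and mapping that gesture to an engine input event. The threshold, gesture names, and event names below are all illustrative assumptions, not the actual values used in the project.

```python
# Assumed tuning value: net horizontal hand displacement (meters)
# required before a movement counts as a swipe.
SWIPE_THRESHOLD = 0.4

def classify_swipe(hand_xs):
    """Classify a window of hand x-coordinates as a swipe or no gesture."""
    if not hand_xs:
        return "NONE"
    dx = hand_xs[-1] - hand_xs[0]
    if dx > SWIPE_THRESHOLD:
        return "SWIPE_RIGHT"
    if dx < -SWIPE_THRESHOLD:
        return "SWIPE_LEFT"
    return "NONE"

# Hypothetical mapping from recognized gesture to an engine input event.
GESTURE_TO_INPUT = {"SWIPE_RIGHT": "NextItem", "SWIPE_LEFT": "PrevItem"}

frames = [0.0, 0.1, 0.25, 0.5]  # hand x-positions moving right
gesture = classify_swipe(frames)
print(GESTURE_TO_INPUT.get(gesture, "Idle"))  # NextItem
```

Keeping the recognizer separate from the input mapping makes it easy to rebind gestures without touching the detection code.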


Step Three - Game Development

The final step is to develop an immersive interactive experience in a game engine such as Unity3D.
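The glue logic on the engine side amounts to a small state machine that consumes gesture events and picks the avatar's animation state. Unity scripts are written in C#; the sketch below uses Python purely to illustrate the control flow, and the event and state names are assumptions.

```python
class AvatarController:
    """Tracks which animation state the avatar should be in."""

    def __init__(self):
        self.state = "Idle"

    def handle_event(self, event: str) -> str:
        # Assumed event names; the real ones depend on the gesture recognizer.
        transitions = {"Wave": "Waving", "Push": "Walking", "None": "Idle"}
        # Unknown events leave the current state unchanged.
        self.state = transitions.get(event, self.state)
        return self.state

avatar = AvatarController()
print(avatar.handle_event("Wave"))  # Waving
```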

Reflection

We will probably continue developing the project; at this stage it is still a simple prototype. We could add features such as skeleton matching, so that the avatar's animation is driven directly by the player.

We could also write our own computer-vision algorithm for gesture recognition, which would give us more options and flexibility in designing the gaming experience.
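One direction such a custom recognizer could take, sketched here as an assumption rather than a design we have committed to, is template matching on joint trajectories with dynamic time warping (DTW): record one trajectory per gesture, then classify a new trajectory by its closest template.

```python
def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(trajectory, templates):
    """Return the name of the template gesture closest to the trajectory."""
    return min(templates, key=lambda name: dtw(trajectory, templates[name]))

# Hypothetical recorded templates: normalized hand-height trajectories.
templates = {"wave": [0, 1, 0, 1, 0], "push": [0, 0.5, 1, 1, 1]}
print(recognize([0, 0.9, 0.1, 1.0, 0.0], templates))  # wave
```

DTW tolerates gestures performed at different speeds, which is the main failure mode of naive frame-by-frame comparison.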