
Outcome


DISCLAIMER: Some of the technology my project needs to work as envisioned (such as object recognition) either does not exist yet or is not readily available, so the project is currently "hard-coded", in a sense, to work in only one specific place (one room). The hope is that, as the technology advances and becomes more readily available, I can pop it into the project and make it work as envisioned.

GENERAL PROJECT PROCESS DESCRIPTION:

Creating A Rendering of the Room: Because efficient, accurate, readily-available object recognition does not currently exist, I have to recreate the room as a 3D rendering so that I can manually annotate the objects in a different language (and display these annotations in Augmented Reality). The technology to create an accurate room rendering is also not readily available, so I tried doing it through photogrammetry with ReCap/ReMake: I used a Ricoh camera to take panoramic pictures of the room, used my own code to convert those panoramic pictures into 6 side-view pictures, and funneled those pictures into ReCap/ReMake to try to produce a rendering of the room.
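I do not show the conversion code here, but the idea behind going from a panoramic (equirectangular) picture to 6 side-view (cube face) pictures is simple enough to sketch. The JavaScript below is a minimal, hypothetical version of that mapping, not my actual code (the function names, face labels, and the faceSize parameter are placeholders for this example): for each pixel of a cube face, compute the 3D direction it looks along, convert that direction to a latitude/longitude, and sample the corresponding pixel of the panorama.

// Hypothetical sketch: map a pixel (x, y) on one cube face to (u, v)
// coordinates in an equirectangular panorama. Face names follow the
// usual +X/-X/+Y/-Y/+Z/-Z convention; the exact sign conventions in
// my real code may differ.

// 3D direction for pixel (x, y) on a cube face of size faceSize.
function faceDirection(face, x, y, faceSize) {
  // a, b in [-1, 1], measured from the center of the face
  const a = (2 * (x + 0.5)) / faceSize - 1;
  const b = (2 * (y + 0.5)) / faceSize - 1;
  switch (face) {
    case "+x": return { x: 1, y: -b, z: -a };
    case "-x": return { x: -1, y: -b, z: a };
    case "+y": return { x: a, y: 1, z: b };
    case "-y": return { x: a, y: -1, z: -b };
    case "+z": return { x: a, y: -b, z: 1 };
    case "-z": return { x: -a, y: -b, z: -1 };
  }
}

// Convert that direction to (u, v) pixel coordinates in a panorama
// of panoWidth x panoHeight pixels (equirectangular projection).
function panoramaUV(dir, panoWidth, panoHeight) {
  const lon = Math.atan2(dir.x, dir.z);                    // [-pi, pi]
  const lat = Math.atan2(dir.y, Math.hypot(dir.x, dir.z)); // [-pi/2, pi/2]
  const u = ((lon + Math.PI) / (2 * Math.PI)) * (panoWidth - 1);
  const v = ((Math.PI / 2 - lat) / Math.PI) * (panoHeight - 1);
  return { u, v };
}

// Copying panorama pixel (u, v) into face pixel (x, y) for every (x, y)
// of all 6 faces produces the 6 side-view pictures.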

Displaying renderings in AR: I used AR.js and A-Frame to display the rendering in Augmented Reality (AR). I hosted everything on Glitch because I use WebSockets to communicate with the Vive Tracker, which should be attached to the mobile device displaying everything. I need the Vive Tracker because mobile devices do not have location tracking accurate enough for me to figure out how the device is moving. By using the Vive Tracker, I know where the mobile device is in the room, which tells me which portion of the rendering I should be displaying (that is what the mover code does).
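To make the setup concrete, here is a rough, hypothetical sketch of what a page like the one hosted on Glitch could look like. The script versions, file names, and entity ids are placeholders for illustration, not the actual files from my project. The scene overlays the room rendering on the camera feed, and the camera's look-controls use the phone's orientation sensors, which is what keeps the rendering fixed in place as you turn the device; the mover logic (sketched further down this page) would attach to the camera entity.

<!-- Hypothetical minimal page; versions, file names, and ids are placeholders. -->
<html>
  <head>
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
    <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
  </head>
  <body>
    <a-scene embedded arjs>
      <!-- The room rendering exported from ReCap/ReMake (placeholder file names). -->
      <a-entity id="room"
                obj-model="obj: url(room.obj); mtl: url(room.mtl)"
                position="0 0 0"></a-entity>
      <!-- look-controls reads the phone's orientation sensors, so the rendering
           appears to stay fixed in the world as the device turns. -->
      <a-entity id="camera" camera look-controls></a-entity>
    </a-scene>
  </body>
</html>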


RESULT OF PHOTO TO 3D: The renderings made using ReCap/ReMake were not as accurate as I hoped they would be. The portion of the room that gets rendered is far smaller than I expected, and even that portion is inaccurate in some parts.

Below is an example rendering (one of the better ones):

[Screenshot: example rendering]

The process works better in smaller rooms (where it captures the entire room).

Here is a sample rendering from that:

[Screenshots: sample rendering of a smaller room]

The display code (part of the Glitch project linked at the bottom of this page) displays the rendering in AR and makes it so that it stays fixed when you turn your mobile device.

The screenshots below are examples of how it renders. (The first two pictures are of the first rendering shown above; the next two are of the second rendering shown above.)

[Screenshots: the two renderings above displayed in AR]

I still have to scale these renderings so that they accurately represent the portion of the room they capture, and after that I have to annotate them.

(This is not too difficult a task, just kind of tedious, so I have put it off; I want the rest of the project to work properly before doing this part.)
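For reference, here is roughly what the scaling and annotating could look like in the A-Frame scene. The scale values, positions, and the Spanish label below are made-up placeholders, not measurements from my room: the rendering gets a scale that matches the real room's dimensions, and each annotation is just a text entity positioned over the object it labels.

<!-- Hypothetical example: scale, positions, and the label are placeholders. -->
<a-entity id="room"
          obj-model="obj: url(room.obj); mtl: url(room.mtl)"
          scale="2.1 2.1 2.1">
  <!-- A manual annotation anchored to one object in the rendering,
       e.g. a table labeled in Spanish. -->
  <a-text value="la mesa" position="1.0 0.8 -2.0" align="center" color="#FFF"></a-text>
</a-entity>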


The mover code (you can find it in the GitHub repo linked below) would, in theory, allow the Vive Tracker to control what is displayed on your mobile device, by moving the camera in the rendering to look at the proper part of it. I say in theory because I have not actually tested it yet.
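Since the mover code itself lives in the repo, the snippet below is only a hypothetical sketch of the idea behind it; the component name, WebSocket URL, and message format are assumptions made for this example, not the actual code. Every time the tracker's pose arrives over the WebSocket, it gets copied onto the A-Frame camera, so the camera looks at the part of the rendering that corresponds to where the device actually is in the room.

// Hypothetical sketch of the mover idea; the real mover code is in the GitHub repo.
// The message format ({x, y, z} plus yaw/pitch/roll in degrees) and the
// WebSocket URL are assumptions for this example.
AFRAME.registerComponent('vive-mover', {
  init: function () {
    this.socket = new WebSocket('wss://example-tracker-server.glitch.me');
    this.socket.onmessage = (event) => {
      // One tracker pose update per message.
      this.lastPose = JSON.parse(event.data);
    };
  },
  tick: function () {
    if (!this.lastPose) { return; }
    const p = this.lastPose;
    // Move the in-scene camera to the tracker's position in the room...
    this.el.setAttribute('position', { x: p.x, y: p.y, z: p.z });
    // ...and match its orientation, so the right part of the rendering is shown.
    this.el.setAttribute('rotation', { x: p.pitch, y: p.yaw, z: p.roll });
  }
});

Attached to the camera entity (for example <a-entity camera vive-mover></a-entity>), something like this would replace the phone's own, less reliable position tracking with the Vive Tracker's.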

 https://rapid-fountain.glitch.me/ 

This is the link to the website. Run it on your mobile device with a Vive Tracker attached.

https://github.com/mtegene/reality_computing_final_project

This is the link to the GitHub project, where you can find the mover code and some of the renderings I made.

DISCLAIMER: The project is still not done: I need to do the scaling and annotating and, more importantly, make sure the Vive Tracker's information is being used correctly to modify what is displayed.
