Presence

Made by Pranav Addepalli and Nicole Xiang

Created: May 8th, 2023


Credits

This project was built by Pranav Addepalli and Nicole Xiang in the 2023 edition of 48-528: Responsive Mobile Environments. Pranav and Nicole are both rising seniors studying Information Systems and Human-Computer Interaction. Special thanks go to the professor, Daragh Byrne, and the teaching assistant, Zhenfang Chen, for their support throughout this project.


Description

Presence aims to question the way we interact with everyday objects. Mirrors are rooted in reality: objects we take for granted as reflections of now. In today’s fast-paced world especially, our presence, the state of simply existing, is often overlooked as we focus on everything that is happening or about to happen. But have you ever thought about what shapes this current state?

Our project explores the concept of relativity and shows that the past and the now are interdependent. The project consists of a mirror with a built-in display: when someone walks up to it and is detected by a hidden camera, the display shows them reflected like a mirror, but lagged a few seconds behind and in grayscale. This intentional delay changes the way we interact with regular mirrors: rather than seeing yourself, you see the version of you that led to this new perception of you.


Video demo: https://vimeo.com/824940337?share=copy



Creative Documentation 

Process

The project can be broken down into four phases: ideation, pivot, software implementation, and physical setup.

Phase 1: Ideation

With our first prototype, “Mirror Mirror on the Wall”, we tested ways to incorporate a mirror into a smart environment. Using the feedback we received in critique, we shifted our project direction from style filters to “going back in time”. However, after researching existing tools and Raspberry Pi capabilities, we realized that recreating the past (i.e., what users would look like or be doing in the 1900s) was too difficult in the given timeframe. We struggled a bit here: we had many different ideas for what we could do with a mirror, but none of them felt meaningful enough. After discussing with Daragh, one idea we had was a mirror that detects your emotion (happy, sad, or angry) and filters the display accordingly. However, the OpenAI tool we found for emotion recognition had low accuracy, which led us to narrow the scope to just smiles. The idea then became a mirror that wants to control you: it shows the normal mirror display only when you smile, and puts up a distorted image of you when you don’t. This smile detection worked quite well, and we were finally able to move forward with a set idea. A minimal sketch of this kind of smile detection follows.
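The sketch below shows smile-gated display switching with OpenCV’s bundled Haar cascades. The cascade parameters and the blur used as a stand-in “distortion” are illustrative assumptions, not our exact prototype code.

# SKETCH: smile-gated mirror display with OpenCV Haar cascades.
# Parameters and the blur "distortion" are illustrative assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_smile.xml')

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.25, minNeighbors=3)
    smiling = False
    for (x, y, w, h) in faces:
        # look for a smile only inside the detected face region
        roi = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)) > 0:
            smiling = True
    # normal reflection on a smile, a distorted image otherwise
    out = frame if smiling else cv2.GaussianBlur(frame, (31, 31), 0)
    cv2.imshow('mirror', cv2.flip(out, 1))
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()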

First prototype: Mirror Mirror on the Wall

Phase 2: Pivot

Two days before the exhibition, our project (the mirror and monitor) had either been moved somewhere else or been taken. We were able to find the monitor, but the mirror was missing. Since there wasn’t much time left, we went to Home Depot to see if they had any mirror film for remaking the mirror. We had no luck finding the right film; however, we did find reflective spray paint, which actually turned out to be better than the film.

This rebuild of the project also made us wonder, once again, about the purpose and goal we were trying to achieve. We started asking: “What does the mirror really mean or represent?”, “Do we need a Raspberry Pi?”, “What kind of camera should we use?” These questions made us step back, and that’s when a new idea popped up: relativity. Everything around us and everything that we do is connected in some way. What we did a second ago affects what we are doing right now, but we often don’t realize that; instead, we focus on the future: what will happen, what can go wrong, and how can I improve it? A mirror always reflects the now, so what if our mirror reflected the immediate past, to visualize how everything depends on everything else? With this, we want to remind our audience that the current moment is the most important, and that by seeing the past, they see the version of themselves that led to this new perception of themselves.

Spray painting

After spray painting: good mirror effect!

Phase 3: Software

Once we decided on the new idea of a delayed mirror effect, we started implementing the software. This mainly consists of four parts: (1) detecting the face; (2) a buffer of stored frames, so we can grab video frames captured a few seconds ago; (3) an image filter applied when a face is detected; and (4) an LED light color that changes based on whether a user is in front of the mirror. We spent a lot of time exploring what kinds of filters were possible (see videos below). In the end, because of time limits (this was the day before the exhibition), we went with a straightforward gray filter, symbolizing the past, applied whenever a face is detected. We also had some difficulty connecting the Arduino (which controlled the LED strip) to the mirror display program, so we dropped the color-change feature and stuck with a single color animation. The core of the delay effect is the frame buffer, sketched below.
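Distilled from the full exhibit program shown later on this page, the delay works as a fixed-size ring buffer: on every iteration, read out the oldest frame before overwriting it with the newest one.

# SKETCH: the core ring-buffer idea behind the delayed reflection,
# distilled from the full exhibit program shown later on this page.
import cv2

OFFSET = 25  # delay, in frames
cap = cv2.VideoCapture(0)

# pre-fill the buffer so the first reads return valid frames
buffer = [cap.read()[1] for _ in range(OFFSET)]
t = 0

while True:
    ret, frame = cap.read()
    if not ret:
        break
    delayed = buffer[t]   # the frame captured OFFSET reads ago
    buffer[t] = frame     # overwrite it with the newest frame
    t = (t + 1) % OFFSET  # advance the shared read/write cursor
    cv2.imshow('delayed mirror', cv2.flip(delayed, 1))
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()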

Testing camera

Testing out smile detector and image distortion

Testing LED strip animation

Exploring more image distortions

Phase 4: Physical Set Up

Lastly, we needed to connect the monitor and mirror so that the mirror lay directly on the monitor. In addition, because the table stand we had for the exhibition had a small surface area, whatever held everything up needed to be small but strong. In the end, we used duct tape to join the monitor and mirror, a monitor stand to hold up both parts, and a wooden stick on the back to counterbalance the forward-leaning weight (see image). One other problem was getting enough light for the camera at the exhibit, which was quite dark. Luckily, we were able to find a spot for a light that provided just enough brightness for the camera to see.


Build

We used the following resources to create our final exhibit:

  • 24”x24”x1/4” clear acrylic sheet
  • Rust-Oleum Specialty 6 oz. Mirror Effect Spray Paint
  • 20” Monitor
  • MacBook Air laptop
  • USB webcam
  • LED light strip WS2812B
  • Arduino Uno

The final product was placed on a 4-foot tall speaker stand, borrowed from the IDeATe Media Lab. We planned for interactions in our exhibit to be single-user, with people coming in from the side and looking into the mirror for a few minutes. The interaction was very simple in that there was no explicit action a user needed to take. Instead, we played on the simplicity and agency offered by regular mirrors, where people just walk up to one and look into it. The interaction people had with our project was in exploring why the display was lagged and why it turned grayscale, and in experimenting with the display overall.


The blue flow is the normal experience when there is nobody there, and the red flow is when a user walks up to the mirror.

While developing our project, we experimented with some other technologies. We also used:

  • Raspberry Pi 3B+
  • Raspberry Pi 5MP Camera Module
  • Arduino Nano 33 BLE
  • Reflective window film

We used the mirror effect spray paint to cover the back of the acrylic sheet, leaving a square in the center that was the size of the monitor. This way, after turning the sheet over, we were able to have a mirrored bezel around the monitor screen. We took the front cover of the monitor off so that it could be flush with the back of the acrylic. We used hot glue to attach the monitor to the acrylic.

We attached a small piece of black cardboard above the mirror to hide the USB webcam. The webcam was connected to the MacBook, which was hidden next to the exhibit. The MacBook was used because our alternative, a Raspberry Pi 3, did not have the processing power or software packages to support the program we ran for the entire exhibit.

The LED light strip was attached to the back of the acrylic along the edge. An Arduino Uno was used to power and control it.


On the MacBook, we ran a Python program that used OpenCV with a Haar cascade model to detect faces, and to read, augment, and display frames from the camera. The source code for our exhibit is below.

Full code can be found here: https://github.com/pranavaddepalli/rme-final

# PYTHON PROGRAM
import cv2
import numpy as np
import time
import serial

OFFSET = 25 #NUMBER OF FRAMES TO OFFSET
# ser = serial.Serial(port='/dev/cu.usbmodem141401') #SERIAL PORT
# print(ser.name)
# ser.write(b'WE OUT HERE ON THE SERIAL PORTTTTT\n')

time.sleep(2)

DISTORTSPEED = 2  # fade speed: each unit adds 0.1 of gray per frame (amt = facetime / 10)

cap = cv2.VideoCapture(0) # 0 is the default camera device index

buffer = [0 for i in range(OFFSET)] # ring buffer holding the last OFFSET frames
t = 0

facetime = 0 # track how long the face has been on screen

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

brokenmirror = cv2.imread('brokenmirror.png')
brokenmirror = cv2.cvtColor(brokenmirror, cv2.COLOR_BGR2GRAY)

# initialize the buffer with OFFSET frames so early reads are valid
while t < OFFSET:
    ret, frame = cap.read()
    buffer[t] = frame
    t += 1

t = 0

probs = np.random.random(buffer[0].shape[:2])

def distort(image, amt):
    amt = max(0, amt)
    amt = min(1, amt)
    # AMT SHOULD BE BETWEEN 0 AND 1: 0 IS MORE COLOR, 1 IS MORE GRAY
    # output = image.copy()
    # colorspace = image.shape[2]
    # black = np.array([0, 0, 0], dtype='uint8')
    # white = np.array([255, 255, 255], dtype='uint8')
    
    # output[probs < (amt / 2)] = black
    # output[probs > 1 - (amt / 2)] = white

    # # grayscale
    # r, g, b = cv2.split(image)
    
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    graybgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    output = cv2.addWeighted(image, 1 - amt, graybgr, amt, 0)
    return output


grayedout = False # whether the display is currently fading back out of gray;
                  # initialized outside the loop so the fade-out can persist
                  # across iterations

while True:
    ret, frame = cap.read()
        
    # write to the buffer
    if t >= OFFSET:
        t = t % OFFSET
    
    res = buffer[t]
    buffer[t] = frame

    t += 1

    # detect face
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.25, minNeighbors=3, minSize=(30, 30))
    
    # if there's a face, distort
    if (len(faces) > 0):
        # draw a rectangle first
        # for (x, y, w, h) in faces:
        #     cv2.rectangle(frame, (x, y), ((x + w), (y + h)), (255, 0, 0), 2)
        
        facetime += DISTORTSPEED
        grayedout = True

        # ser.write(b'face')
    
    # otherwise, fade back out of gray
    else:
        if grayedout:
            facetime -= DISTORTSPEED
            if facetime <= 0:
                facetime = 0
                grayedout = False
        else:
            facetime = 0

    
   
    res = distort(res, facetime / 10)

    res = cv2.flip(res, 1)
    
    # res = cv2.addWeighted(res, 0.5, brokenmirror, 0.7, 0)
    
    cv2.imshow("a", res)
    
    if cv2.waitKey(1) == ord('q'): # press 'q' to exit the loop
        break

cap.release()
cv2.destroyAllWindows()
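The commented-out ser.write(...) lines above are the remnant of the dropped color-change feature: the Python program would notify the Arduino over USB serial whenever a face appeared. A minimal sketch of that link on the Python side, using pyserial (the port name matches the one commented out above, but it varies by machine):

# SKETCH: the dropped Python -> Arduino serial link, using pyserial.
# The port name varies by machine; this one is from the commented-out code above.
import serial

ser = serial.Serial(port='/dev/cu.usbmodem141401', baudrate=9600, timeout=1)

def notify_face_detected():
    # newline-terminated so the Arduino can read one whole token at a time
    ser.write(b'face\n')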
//ARDUINO LED CODE
#include <FastLED.h>

#define LED_PIN     13
#define NUM_LEDS    164
#define BRIGHTNESS  64
#define LED_TYPE    WS2811
#define COLOR_ORDER GRB
// CRGB leds[NUM_LEDS];

#define UPDATES_PER_SECOND 100



//#define CLK_PIN       4
#define VOLTS          5
#define MAX_MA       4000


CRGBPalette16 currentPalette;
TBlendType    currentBlending;

extern CRGBPalette16 myRedWhiteBluePalette;
extern const TProgmemPalette16 myRedWhiteBluePalette_p PROGMEM;

CRGBArray<NUM_LEDS> leds;

#define TWINKLE_SPEED 4
#define TWINKLE_DENSITY 5
#define SECONDS_PER_PALETTE  30
CRGB gBackgroundColor = CRGB::Black; 
#define AUTO_SELECT_BACKGROUND_COLOR 0
#define COOL_LIKE_INCANDESCENT 1


CRGBPalette16 gCurrentPalette;
CRGBPalette16 gTargetPalette;

void setup() {
    Serial.begin(9600);
    Serial.setTimeout(10); // long enough for a whole token to arrive at 9600 baud
    delay( 3000 ); // power-up safety delay
    FastLED.addLeds<LED_TYPE, LED_PIN, COLOR_ORDER>(leds, NUM_LEDS).setCorrection( TypicalLEDStrip );
    FastLED.setBrightness(  BRIGHTNESS );
    
    currentPalette = CloudColors_p;
    currentBlending = LINEARBLEND;

    FastLED.setMaxPowerInVoltsAndMilliamps( VOLTS, MAX_MA);
    chooseNextColorPalette(gTargetPalette);
}


void loop()
{
    // Serial.read() returns a single int byte, so comparing it to the
    // string "face" never matches; read a whole newline-terminated
    // token from the Python program instead.
    String cmd = "";
    if (Serial.available()) {
      cmd = Serial.readStringUntil('\n');
    }
    if (cmd == "face") {
      EVERY_N_SECONDS( SECONDS_PER_PALETTE ) { 
        chooseNextColorPalette( gTargetPalette ); 
      }
      
      EVERY_N_MILLISECONDS( 10 ) {
        nblendPaletteTowardPalette( gCurrentPalette, gTargetPalette, 12);
      }
      drawTwinkles(leds);
      FastLED.show();
    }

    else {
      static uint8_t startIndex = 0;
      startIndex = startIndex + 1; /* motion speed */

      FillLEDsFromPaletteColors(startIndex);

      FastLED.show();
      FastLED.delay(1000 / UPDATES_PER_SECOND);
    }
}

void FillLEDsFromPaletteColors( uint8_t colorIndex)
{
    uint8_t brightness = 255;
    
    for( int i = 0; i < NUM_LEDS; ++i) {
        leds[i] = ColorFromPalette( currentPalette, colorIndex, brightness, currentBlending);
        colorIndex += 3;
    }
}

void ChangePalettePeriodically()
{
    uint8_t secondHand = (millis() / 1000) % 60;
    static uint8_t lastSecond = 99;
    
    if( lastSecond != secondHand) {
        lastSecond = secondHand;
        if( secondHand ==  0)  { currentPalette = RainbowColors_p;         currentBlending = LINEARBLEND; }
        if( secondHand == 55)  { currentPalette = myRedWhiteBluePalette_p; currentBlending = LINEARBLEND; }
    }
}

void SetupTotallyRandomPalette()
{
    for( int i = 0; i < 16; ++i) {
        currentPalette[i] = CHSV( random8(), 255, random8());
    }
}

void SetupBlackAndWhiteStripedPalette()
{
    // 'black out' all 16 palette entries...
    fill_solid( currentPalette, 16, CRGB::Black);
    // and set every fourth one to white.
    currentPalette[0] = CRGB::White;
    currentPalette[4] = CRGB::White;
    currentPalette[8] = CRGB::White;
    currentPalette[12] = CRGB::White;
    
}

// This function sets up a palette of purple and green stripes.
void SetupPurpleAndGreenPalette()
{
    CRGB purple = CHSV( HUE_PURPLE, 255, 255);
    CRGB green  = CHSV( HUE_GREEN, 255, 255);
    CRGB black  = CRGB::Black;
    
    currentPalette = CRGBPalette16(
                                   green,  green,  black,  black,
                                   purple, purple, black,  black,
                                   green,  green,  black,  black,
                                   purple, purple, black,  black );
}

const TProgmemPalette16 myRedWhiteBluePalette_p PROGMEM =
{
    CRGB::Red,
    CRGB::Gray, // 'white' is too bright compared to red and blue
    CRGB::Blue,
    CRGB::Black,
    
    CRGB::Red,
    CRGB::Gray,
    CRGB::Blue,
    CRGB::Black,
    
    CRGB::Red,
    CRGB::Red,
    CRGB::Gray,
    CRGB::Gray,
    CRGB::Blue,
    CRGB::Blue,
    CRGB::Black,
    CRGB::Black
};


void drawTwinkles( CRGBSet& L)
{
  uint16_t PRNG16 = 11337;
  
  uint32_t clock32 = millis();

  CRGB bg;
  if( (AUTO_SELECT_BACKGROUND_COLOR == 1) &&
      (gCurrentPalette[0] == gCurrentPalette[1] )) {
    bg = gCurrentPalette[0];
    uint8_t bglight = bg.getAverageLight();
    if( bglight > 64) {
      bg.nscale8_video( 16); // very bright, so scale to 1/16th
    } else if( bglight > 16) {
      bg.nscale8_video( 64); // not that bright, so scale to 1/4th
    } else {
      bg.nscale8_video( 86); // dim, scale to 1/3rd.
    }
  } else {
    bg = gBackgroundColor; // just use the explicitly defined background color
  }

  uint8_t backgroundBrightness = bg.getAverageLight();
  
  for( CRGB& pixel: L) {
    PRNG16 = (uint16_t)(PRNG16 * 2053) + 1384; // next 'random' number
    uint16_t myclockoffset16= PRNG16; // use that number as clock offset
    PRNG16 = (uint16_t)(PRNG16 * 2053) + 1384; // next 'random' number
    // use that number as clock speed adjustment factor (in 8ths, from 8/8ths to 23/8ths)
    uint8_t myspeedmultiplierQ5_3 =  ((((PRNG16 & 0xFF)>>4) + (PRNG16 & 0x0F)) & 0x0F) + 0x08;
    uint32_t myclock30 = (uint32_t)((clock32 * myspeedmultiplierQ5_3) >> 3) + myclockoffset16;
    uint8_t  myunique8 = PRNG16 >> 8; // get 'salt' value for this pixel

    // We now have the adjusted 'clock' for this pixel, now we call
    // the function that computes what color the pixel should be based
    // on the "brightness = f( time )" idea.
    CRGB c = computeOneTwinkle( myclock30, myunique8);

    uint8_t cbright = c.getAverageLight();
    int16_t deltabright = cbright - backgroundBrightness;
    if( deltabright >= 32 || (!bg)) {
      // If the new pixel is significantly brighter than the background color, 
      // use the new color.
      pixel = c;
    } else if( deltabright > 0 ) {
      // If the new pixel is just slightly brighter than the background color,
      // mix a blend of the new color and the background color
      pixel = blend( bg, c, deltabright * 8);
    } else { 
      // if the new pixel is not at all brighter than the background color,
      // just use the background color.
      pixel = bg;
    }
  }
}

CRGB computeOneTwinkle( uint32_t ms, uint8_t salt)
{
  uint16_t ticks = ms >> (8-TWINKLE_SPEED);
  uint8_t fastcycle8 = ticks;
  uint16_t slowcycle16 = (ticks >> 8) + salt;
  slowcycle16 += sin8( slowcycle16);
  slowcycle16 =  (slowcycle16 * 2053) + 1384;
  uint8_t slowcycle8 = (slowcycle16 & 0xFF) + (slowcycle16 >> 8);
  
  uint8_t bright = 0;
  if( ((slowcycle8 & 0x0E)/2) < TWINKLE_DENSITY) {
    bright = attackDecayWave8( fastcycle8);
  }

  uint8_t hue = slowcycle8 - salt;
  CRGB c;
  if( bright > 0) {
    c = ColorFromPalette( gCurrentPalette, hue, bright, NOBLEND);
    if( COOL_LIKE_INCANDESCENT == 1 ) {
      coolLikeIncandescent( c, fastcycle8);
    }
  } else {
    c = CRGB::Black;
  }
  return c;
}

uint8_t attackDecayWave8( uint8_t i)
{
  if( i < 86) {
    return i * 3;
  } else {
    i -= 86;
    return 255 - (i + (i/2));
  }
}

void coolLikeIncandescent( CRGB& c, uint8_t phase)
{
  if( phase < 128) return;

  uint8_t cooling = (phase - 128) >> 4;
  c.g = qsub8( c.g, cooling);
  c.b = qsub8( c.b, cooling * 2);
}

// A mostly red palette with green accents and white trim.
// "CRGB::Gray" is used as white to keep the brightness more uniform.
const TProgmemRGBPalette16 RedGreenWhite_p FL_PROGMEM =
{  CRGB::Red, CRGB::Red, CRGB::Red, CRGB::Red, 
   CRGB::Red, CRGB::Red, CRGB::Red, CRGB::Red, 
   CRGB::Red, CRGB::Red, CRGB::Gray, CRGB::Gray, 
   CRGB::Green, CRGB::Green, CRGB::Green, CRGB::Green };

// A mostly (dark) green palette with red berries.
#define Holly_Green 0x00580c
#define Holly_Red   0xB00402
const TProgmemRGBPalette16 Holly_p FL_PROGMEM =
{  Holly_Green, Holly_Green, Holly_Green, Holly_Green, 
   Holly_Green, Holly_Green, Holly_Green, Holly_Green, 
   Holly_Green, Holly_Green, Holly_Green, Holly_Green, 
   Holly_Green, Holly_Green, Holly_Green, Holly_Red 
};

// A red and white striped palette
// "CRGB::Gray" is used as white to keep the brightness more uniform.
const TProgmemRGBPalette16 RedWhite_p FL_PROGMEM =
{  CRGB::Red,  CRGB::Red,  CRGB::Red,  CRGB::Red, 
   CRGB::Gray, CRGB::Gray, CRGB::Gray, CRGB::Gray,
   CRGB::Red,  CRGB::Red,  CRGB::Red,  CRGB::Red, 
   CRGB::Gray, CRGB::Gray, CRGB::Gray, CRGB::Gray };

// A mostly blue palette with white accents.
// "CRGB::Gray" is used as white to keep the brightness more uniform.
const TProgmemRGBPalette16 BlueWhite_p FL_PROGMEM =
{  CRGB::Blue, CRGB::Blue, CRGB::Blue, CRGB::Blue, 
   CRGB::Blue, CRGB::Blue, CRGB::Blue, CRGB::Blue, 
   CRGB::Blue, CRGB::Blue, CRGB::Blue, CRGB::Blue, 
   CRGB::Blue, CRGB::Gray, CRGB::Gray, CRGB::Gray };

// A pure "fairy light" palette with some brightness variations
#define HALFFAIRY ((CRGB::FairyLight & 0xFEFEFE) / 2)
#define QUARTERFAIRY ((CRGB::FairyLight & 0xFCFCFC) / 4)
const TProgmemRGBPalette16 FairyLight_p FL_PROGMEM =
{  CRGB::FairyLight, CRGB::FairyLight, CRGB::FairyLight, CRGB::FairyLight, 
   HALFFAIRY,        HALFFAIRY,        CRGB::FairyLight, CRGB::FairyLight, 
   QUARTERFAIRY,     QUARTERFAIRY,     CRGB::FairyLight, CRGB::FairyLight, 
   CRGB::FairyLight, CRGB::FairyLight, CRGB::FairyLight, CRGB::FairyLight };

// A palette of soft snowflakes with the occasional bright one
const TProgmemRGBPalette16 Snow_p FL_PROGMEM =
{  0x304048, 0x304048, 0x304048, 0x304048,
   0x304048, 0x304048, 0x304048, 0x304048,
   0x304048, 0x304048, 0x304048, 0x304048,
   0x304048, 0x304048, 0x304048, 0xE0F0FF };

// A palette reminiscent of large 'old-school' C9-size tree lights
// in the five classic colors: red, orange, green, blue, and white.
#define C9_Red    0xB80400
#define C9_Orange 0x902C02
#define C9_Green  0x046002
#define C9_Blue   0x070758
#define C9_White  0x606820
const TProgmemRGBPalette16 RetroC9_p FL_PROGMEM =
{  C9_Red,    C9_Orange, C9_Red,    C9_Orange,
   C9_Orange, C9_Red,    C9_Orange, C9_Red,
   C9_Green,  C9_Green,  C9_Green,  C9_Green,
   C9_Blue,   C9_Blue,   C9_Blue,
   C9_White
};

// A cold, icy pale blue palette
#define Ice_Blue1 0x0C1040
#define Ice_Blue2 0x182080
#define Ice_Blue3 0x5080C0
const TProgmemRGBPalette16 Ice_p FL_PROGMEM =
{
  Ice_Blue1, Ice_Blue1, Ice_Blue1, Ice_Blue1,
  Ice_Blue1, Ice_Blue1, Ice_Blue1, Ice_Blue1,
  Ice_Blue1, Ice_Blue1, Ice_Blue1, Ice_Blue1,
  Ice_Blue2, Ice_Blue2, Ice_Blue2, Ice_Blue3
};


// Add or remove palette names from this list to control which color
// palettes are used, and in what order.
const TProgmemRGBPalette16* ActivePaletteList[] = {
  &FairyLight_p,
  &Snow_p,
  &Holly_p,
  &Ice_p  
};


// Advance to the next color palette in the list (above).
void chooseNextColorPalette( CRGBPalette16& pal)
{
  const uint8_t numberOfPalettes = sizeof(ActivePaletteList) / sizeof(ActivePaletteList[0]);
  static uint8_t whichPalette = -1; 
  whichPalette = addmod8( whichPalette, 1, numberOfPalettes);

  pal = *(ActivePaletteList[whichPalette]);
}

Reflection

Because we rushed the final project and scaled down our ideas throughout the process, we were not confident in our final installation. However, we learned through the exhibit that the agency our project offered made people genuinely like it. Visitors told us about their different experiences and interpretations of the project, showing how powerful simplicity can be. We would like to introduce the installation better, with clearer object and exhibit labels, and improve the lighting to make the interaction smoother. Overall, we found this project surprisingly successful.


Roadmap

In the future, for this to stand as a real exhibit piece, the product will need better presentation. Mounting the mirror on a wall would bring more realism to the project and present it better; this could be done in spaces with walls stronger than the foam core walls used in our exhibit. The screen could also be better integrated with the rest of the mirror so that it does not read as a screen, either by using reflective film or by making the entire mirror a large screen with film on top. An open challenge is hiding the camera so that the mirror appears more real. Currently the camera sits above the mirror, but it might be possible to drill a hole in the mirror and hide the camera behind it. Research will be needed to find the best way to hide it.

In addition, the room of the exhibit should be connected with the interaction. We hope to see an experience where users walk up to the mirror and, instead of just the screen going grayscale, the lighting around them also turns white and starts to flicker. Through this, we want to make users feel they are looking into the past. We also want to incorporate audio: instead of just showing images from a few seconds ago, it would be interesting to play audio from a few seconds ago through a surround-sound speaker setup. Building this immersive experience would involve figuring out how to connect different devices, including a microphone, speakers, lighting, and the mirror itself. A sketch of the delayed-audio idea follows.
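One way to prototype the delayed audio is a ring-buffer delay line, the audio analogue of the frame buffer we used for video. A minimal sketch using the python-sounddevice library (an assumed library choice; any duplex audio API would work):

# SKETCH: a few-second audio delay line with python-sounddevice (an
# assumed library choice). Microphone input is written into a ring
# buffer and played back DELAY seconds later.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100
DELAY = 3  # seconds of lag, matching the video delay
buffer = np.zeros((SAMPLE_RATE * DELAY, 1), dtype='float32')
pos = 0

def callback(indata, outdata, frames, time, status):
    global pos
    idx = (pos + np.arange(frames)) % len(buffer)
    outdata[:] = buffer[idx]  # play the samples from DELAY seconds ago
    buffer[idx] = indata      # overwrite them with the live samples
    pos = (pos + frames) % len(buffer)

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, callback=callback):
    sd.sleep(60_000)  # run for one minute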

To achieve this in the next 12 weeks, finding a new place to do the exhibit would be the first step. Ideally, it would be in a real, isolated room with strong walls, customizable lighting, and the ability to embed speakers in the room. Then, basic tests of connectivity with the room would need to be conducted. After this, the program will need to be modified so that it can trigger these devices to change and play different things during the interaction. Doing this will enable us to scale up the project while keeping the core functionality the same.
