

StudSeer

How Machine Learning is changing our relationship with our tools

At the outset of this project, I interviewed a friend who I knew had an interesting relationship with modern technology. While talking about social media ad placement and personalization, she mentioned that sometimes she would try to trick or train the algorithm with her behaviors. She called it her personal algorithm, and when asked to visualize what it looked like, she said:

“When I think of it, it’s an old-school computer with an old monitor and a separate keyboard with mitten glove hands. It’s the size of the computer I had when I was a kid, although I’m sure it’s not as big as I thought it was.”

From these two characterizations of her algorithm, I became curious about the way that we relate to machine learning differently than we do other physical or digital entities. 

For this project, I wanted to explore a question: what is spooky about our relationship with machine learning as it becomes a common facet of our daily lives?


Machine Learning in the Home

In particular, I am interested in machine learning models that are too human, even if they don't trigger an uncanny valley response.

From my perspective, there are ML models that analyze data in ways that are completely foreign and inscrutable to humans, and there are models that seem to replicate our human cognition in eerie ways. 


StudSeer is a prototype that helps tell the story of what happens when our tools are no longer just a means to an end. Machine learning has led to the creation of a class of tools that behave like small humans. StudSeer takes the common household stud finder, a device that uses electromagnetic fields (undetectable to humans) to see through walls, and uses an ML model to let it fulfill its sole function the same way a human would: with sound.


Functional Architecture

  1. A Teachable Machine model was trained by using a traditional stud finder to locate studs, then collecting audio of knocking on walls over studs and over hollow sections.
  2. The model was trained to recognize five classes: Stud, No Stud, Buzzer Sounds, "Hey StudSee", and Laughing.
  3. The JavaScript for the ML model was exported and hosted in a p5.js sketch.
  4. The p5.js program could then trigger a Particle Photon by publishing ML readings as events to Particle's cloud server.
  5. Code loaded onto the Photon would then trigger a buzzer and LED pattern depending on which events were detected.
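Steps 3–5 hinge on one small routing decision: which classifier outputs become Particle events, and which get dropped. A minimal sketch of that logic, where the event names and the confidence threshold are illustrative assumptions rather than the project's actual values:

```javascript
// Map a Teachable Machine class to the Particle event it should publish,
// or null when nothing should be sent. Event names and the 0.8 threshold
// are assumptions for illustration. "Buzzer Sounds" is deliberately dropped
// so the Photon's own buzzer can't re-trigger the model.
function eventFor(label, confidence, threshold = 0.8) {
  if (confidence < threshold) return null; // too unsure to act on
  switch (label) {
    case "Stud":          return "studseer/stud";
    case "No Stud":       return "studseer/no-stud";
    case "Hey StudSee":   return "studseer/wake";
    case "Laughing":      return "studseer/laugh";
    case "Buzzer Sounds": // self-noise: fall through and ignore
    default:              return null;
  }
}
```

Keeping this mapping in one pure function makes the self-trigger filtering easy to test away from a live microphone.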

Training the Algorithm

The first prototype of StudSeer mostly focused on working out the basics of physical prototyping: light, sound, and the code to trigger them. After working through the basic elements of the code, the focus shifted to the machine learning and networking components.


Teaching the Teachable Machine

To train the ML model, I found a room with relatively low background noise and scanned its walls with a traditional stud finder to mark where the studs were. I then recorded the sound of knocking over both studs and hollow wall to create the first two classes. Once those were working, I recorded the audio for the voice triggers, along with samples of the buzzer noise to prevent self-triggering.

Teachable Machine has a built-in export to p5.js that requires very little modification.
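Teachable Machine's export dialog prints a hosted model URL that ml5.js can load directly. A sketch of that wiring, with a hypothetical model ID and an assumed confidence check (the real export differs only in details):

```javascript
// URL printed by Teachable Machine's "Export" dialog (hypothetical ID).
const modelURL = "https://teachablemachine.withgoogle.com/models/EXAMPLE_ID/";

// Pure helper: pick the top label only if the model is confident enough.
// ml5's sound classifier reports results ordered by confidence.
function topLabel(results, threshold) {
  if (!results || results.length === 0) return null;
  const best = results[0];
  return best.confidence >= threshold ? best.label : null;
}

// Browser-only wiring, guarded so the helper above stays usable elsewhere.
if (typeof ml5 !== "undefined") {
  let classifier;
  function preload() {
    classifier = ml5.soundClassifier(modelURL + "model.json");
  }
  function setup() {
    // classify() keeps listening and fires the callback on each result.
    classifier.classify((err, results) => {
      if (err) return;
      const label = topLabel(results, 0.75); // threshold is an assumption
      if (label) console.log("heard:", label);
    });
  }
}
```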


A small bit of added code allows p5.js, running on a laptop, to publish the readouts of the ML model as events to the Particle Cloud.
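One way to do this from browser JavaScript is Particle's documented REST endpoint for publishing events. The helper below only assembles the request; the event name and access token shown are placeholders, and the overall shape is an assumption about the approach rather than the project's exact code:

```javascript
// Particle's documented REST endpoint for publishing an event.
const PARTICLE_EVENTS_URL = "https://api.particle.io/v1/devices/events";

// Pure helper: build the form-encoded body for one published event.
// The access token is a placeholder, never a real credential.
function buildPublishBody(name, data, accessToken) {
  const params = new URLSearchParams();
  params.set("name", name);
  params.set("data", data);
  params.set("private", "true"); // only this account's devices see it
  params.set("access_token", accessToken);
  return params.toString();
}

// In the p5.js sketch, this would run inside the classifier callback:
// fetch(PARTICLE_EVENTS_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/x-www-form-urlencoded" },
//   body: buildPublishBody("studseer/stud", "1", ACCESS_TOKEN),
// });
```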


Through the Cloud

The code loaded onto the Particle Photon allows it to receive the events published from p5.js and trigger outputs through simple conditional statements in the loop function.
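The firmware's conditional core can be sketched in plain C++. Event names, patterns, and the handler shape here are assumptions; in real Particle firmware, `Particle.subscribe()` would deliver each event name to a handler, which could store it for `loop()` to act on:

```cpp
#include <cstring>

// Output patterns the loop() conditionals would select between.
enum Pattern { PATTERN_NONE, PATTERN_STUD, PATTERN_NO_STUD, PATTERN_WAKE };

// Decide which buzzer/LED pattern a published event should trigger.
// In real firmware this would run after a Particle.subscribe() handler
// records the most recent event name.
Pattern patternFor(const char* eventName) {
  if (eventName == nullptr) return PATTERN_NONE;
  if (std::strcmp(eventName, "studseer/stud") == 0)
    return PATTERN_STUD;     // e.g. long buzz plus solid LED
  if (std::strcmp(eventName, "studseer/no-stud") == 0)
    return PATTERN_NO_STUD;  // e.g. short chirp
  if (std::strcmp(eventName, "studseer/wake") == 0)
    return PATTERN_WAKE;     // e.g. greeting blink
  return PATTERN_NONE;       // unknown events are ignored
}
```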


Future Directions

As a first attempt at using both Particle's cloud-based physical computing platform and Teachable Machine in code, I am wildly pleased with how this project turned out.

In some respects, though, the seamlessness and apparent utility of the final product may have been taken too far. The authoritative yes/no of the buzzer is too easily trusted and doesn't fully allow the user to experience moments of doubt or uncertainty about this new ML model in their home.

Future iterations might use that experience of uncertainty as a jumping-off point to look deeper into how ML models change our truth-seeking behaviors in the face of the unexplained.
