TuneOut
Made by MacKenzie Cherban, Ling (Leah) Jiang and Alessandra Fleck ·
Helping you tune out until you're ready to tune in.
Created: February 24th, 2018
We see a strong association between music, memory, and emotion. Listening to a song tied to a heartbreaking memory can hurt a person all over again. So we started with this design question: how might we provide real-time music therapy for people who are struggling with painful memories and help them let go? The goal of our project is to help people cope through the manipulation of music, letting them set how long they want to avoid a song in the hope that, over time, we can reintroduce them to the song they once loved before the heartache.
An ugly breakup can make one's favorite song unlistenable. The user uploads a song into the device through an auxiliary input, whether from an iPod, a laptop, a cassette deck, or even a record player. They can then listen directly through the device, repeat the song, cry over it, and, when they are ready, set the length of time they wish to distort it. The harder the breakup, the longer the distortion. At home, the song is distorted through the device itself; we also provide earbuds that distort the song when it is encountered in the wild.
We created a desktop device where you can upload a song, listen one last time, and then tune it out. A timeline slider lets you set how long the song stays tuned out, and controls let you play the song, set the distortion, and clear the distortion. TuneOut also comes with a pair of smart earbuds that store your song preferences and can be kept and charged inside the device itself, so users can take them along in case they encounter their songs on the go.
To build this prototype, we used a Particle Photon, a DFPlayer Mini SD-card MP3 player, a servo, and some LEDs, all housed within a Dieter Rams inspired box.
//----NEOPIXEL SETUP-----
// This #include statement was automatically added by the Particle IDE.
#include <neopixel.h>
static const int PIXEL_PIN = D5;
static const int PIXEL_COUNT = 4;
#define PIXEL_TYPE WS2812B
Adafruit_NeoPixel pixels = Adafruit_NeoPixel(PIXEL_COUNT, PIXEL_PIN, PIXEL_TYPE);
//-----DF PLAYER SETUP-----
#define Start_Byte 0x7E
#define Version_Byte 0xFF
#define Command_Length 0x06
#define End_Byte 0xEF
#define Acknowledge 0x00 // Returns info with command 0x41 [0x01: info, 0x00: no info]
//--------TIME--------
unsigned long lastSampleTime = 0;
unsigned long sampleInterval = 250; // in ms
//-----SERVO PINS-----
static const int servoPin = A5; // Set Servo Pin
Servo myservo; // create servo object to control a servo
int pos;
//-----BUTTON PINS-----
static const int bOne = D2; // button one
static const int bTwo = D3; // button two
static const int bThree = D4; // button three
//-----VARIABLES BUTTON ONE-----
int bStateOne = 0;
int lbsOne = 0;
int bCountOne = 0;
int switchOne;
//-----VARIABLES BUTTON TWO-----
int bStateTwo = 0;
int lbsTwo = 0;
int bCountTwo = 0;
int switchTwo;
//-----VARIABLES BUTTON THREE-----
int bStateThree = 0;
int lbsThree = 0;
int bCountThree = 0;
int switchThree;
bool currentPlatformState = false;
void setup (){
myservo.attach(servoPin); // attach the servo object to servoPin (A5)
Serial.begin(9600);
Serial1.begin(9600);
pinMode(bOne, INPUT);
pinMode(bTwo, INPUT);
pinMode(bThree, INPUT);
pixels.begin();
pixels.setBrightness(90);
pixels.show();
execute_CMD(0x3F, 0, 0); // Send request for initialization parameters
while (Serial1.available() < 10) // Wait until initialization parameters are received (10 bytes)
delay(30); // Pretty long delays between successive commands are needed (not always the same)
// Set the volume; adapt according to the speaker used and the wanted loudness
execute_CMD(0x06, 0, 0x30); // Set the volume (0x00~0x30, 0x30 = maximum)
delay(500);
execute_CMD(0x07, 0, 2); // Sets the equalizer
delay(500);
execute_CMD(0x16,0,0);
delay(500);
// buttonOne();
// buttonTwo();
// buttonThree();
//setSong();
}
void loop (){
pOff(0);
pOff(1);
pOff(2);
pOff(3);
delay(50);
unsigned long now = millis();
if (now - lastSampleTime >= sampleInterval) { // overflow-safe check that the sample interval has elapsed
lastSampleTime = now;
bStateOne = digitalRead(bOne);
bStateTwo = digitalRead(bTwo);
bStateThree = digitalRead(bThree);
if (bStateOne != lbsOne) {
if (bStateOne == HIGH) {
bCountOne++;
Serial.print("Button One: ");
Serial.println(bCountOne);
buttonOne();
}
}
if (bStateTwo != lbsTwo) {
if (bStateTwo == HIGH) {
bCountTwo++;
Serial.print("Button Two: ");
Serial.println(bCountTwo);
buttonTwo();
}
}
if (bStateThree != lbsThree) {
if (bStateThree == HIGH) {
bCountThree++;
Serial.print("Button Three: ");
Serial.println(bCountThree);
buttonThree();
}
}
delay(50);
lbsOne = bStateOne;
lbsTwo = bStateTwo;
lbsThree = bStateThree;
}
}
void buttonOne() {
if (bCountOne % 2 == 0) { // SECOND PRESS
switchOne = 2;
} else if (bCountOne % 2 == 1){ // FIRST PRESS
switchOne = 1;
}
switch(switchOne){
case 1:
pBlink();
delay(50);
execute_CMD(0x03,0,1); // set new song
delay(500);
break;
case 2:
if (currentPlatformState == true){
pBlink();
servoDOWN();
delay(50);
} else {
execute_CMD(0x03,0,1); // set new song
delay(500);
}
break;
}
// execute_CMD(0x03,0,1); // set new song
// delay(500);
}
void buttonTwo() {
// servoUP();
// delay(50);
currentPlatformState = true;
execute_CMD(0x03,0,3); // set new song
delay(500);
if (bCountTwo % 2 == 0) { // SECOND PRESS
switchTwo = 2;
} else if (bCountTwo % 2 == 1){ // FIRST PRESS
switchTwo = 1;
}
switch(switchTwo){
case 1:
servoUP();
delay(75);
break;
case 2:
servoDOWN();
delay(75);
break;
}
// switch(switchTwo){
// case 1:
// pOn(0);
// pOn(1);
// pOn(2);
// pOn(3);
// delay(50);
// execute_CMD(0x03,0,3); // set new song
// delay(500);
// break;
// case 2:
// pOn(0);
// pOn(1);
// pOn(2);
// pOn(3);
// if (currentPlatformState == false){
// servoUP();
// delay(50);
// } else {
// execute_CMD(0x03,0,3); // set new song
// delay(500);
// }
// break;
// }
}
void buttonThree() {
myservo.write(125);
delay(50);
execute_CMD(0x03,0,5); // set new song
delay(500);
}
void pOn(int n){
pixels.setPixelColor(n, pixels.Color(0, 255, 0));
pixels.show(); // This sends the updated pixel color to the hardware.
delay(10);
}
void pBlink(){
for (int i = 0; i < pixels.numPixels(); i++) {
pixels.setPixelColor(i, pixels.Color(0, 255, 0));
pixels.show(); // This sends the updated pixel color to the hardware.
delay(750);
pixels.setPixelColor(i, pixels.Color(0, 0, 0));
pixels.show(); // This sends the updated pixel color to the hardware.
delay(750);
}
}
void pOff(int n){
pixels.setPixelColor(n, pixels.Color(0, 0, 0));
pixels.show();
delay(10);
}
void pixelsSOLID(){
pixels.setPixelColor(0, 200, 0, 150);
pixels.setPixelColor(1, 200, 0, 150);
pixels.setPixelColor(2, 200, 0, 150);
pixels.setPixelColor(3, 200, 0, 150);
pixels.show();
delay(10);
}
void fadeIn() {
int i, j;
for (j = 0; j < 255; j++) {
for (i = 0; i < pixels.numPixels(); i++) {
pixels.setPixelColor(i, 0, 0, j);
}
pixels.show();
delay(10);
}
}
// 255 to 0
void fadeOut() {
int i, j;
for (j = 255; j > 0; j--) {
for (i = 0; i < pixels.numPixels(); i++) {
pixels.setPixelColor(i, 0, 0, j);
}
pixels.show();
delay(10);
}
delay(500);
}
void servoUP() {
for (pos = 0; pos <= 35; pos += 1) { // sweep from 0 to 35 degrees
// in steps of 1 degree
myservo.write(pos); // tell servo to go to position in variable 'pos'
delay(15); // waits 15ms for the servo to reach the position
}
}
void servoDOWN() {
for (pos = 35; pos >= 0; pos -= 1) { // sweep from 35 back down to 0 degrees
// in steps of 1 degree
myservo.write(pos); // tell servo to go to position in variable 'pos'
delay(15); // waits 15ms for the servo to reach the position
}
}
// By Ype Brada 2015-04-06
// https://community.particle.io/t/a-great-very-cheap-mp3-sound-module-without-need-for-a-library/20111
void execute_CMD(byte CMD, byte Par1, byte Par2) // Execute the command and parameters
{
// Calculate the checksum (2 bytes)
int16_t checksum = -(Version_Byte + Command_Length + CMD + Acknowledge + Par1 + Par2);
// Build the command line
byte Command_line[10] = { Start_Byte, Version_Byte, Command_Length, CMD, Acknowledge, Par1, Par2, checksum >> 8, checksum & 0xFF, End_Byte};
//Send the command line to the module
for (byte k=0; k<10; k++)
{
Serial1.write( Command_line[k]);
}
}
Our first iteration of the project took shape as a 3D mock-up in Rhinoceros. After modeling the prototype in Rhino, the same pieces were laser cut over several iterations to ensure the sleekness of the design and that every piece fit flush with the others.
Components such as the white acrylic front cover were cut several times and arranged differently across three versions to arrive at a clear, well-composed front piece that presented all of the necessary components and buttons in an intuitive way. For example, the holes for the speaker were changed from their original rectangular shape to a circular one to provide more room between cover components.
For the earbud pop-up piece, the original idea was to use a solenoid. However, since we wanted a smooth, clear way of raising the earbuds with one button press and lowering them with another, we instead opted for a mechanism in which a servo rotates a gear with a shaft that pushes the earbud piece up vertically.
The rise of technological innovation has made access to and storage of digital files faster and more reliable. The music industry in particular has expanded through such advancements in the digital realm. Where there were once CDs, cassette tapes, and other tangible ways of holding a copy of a song, music is now stored in less tangible forms such as applications and sound clouds. Where there was once a musician playing music at a specific restaurant, the musician now records in a studio, and the music is broadcast internationally to any location at any time. Music is often overlooked, yet it continues to work on us subconsciously as it is introduced throughout the day on various occasions. A song associated with a particular environment causes one to remember the feelings and visual qualities of the place as a schema in the brain. A similar thing happens in our interactions and relationships with one another. Music is one of the several maps our brain uses to perceive an event or relationship. But when a relationship goes wrong, or ends, how does one cope if one of the maps used to remember its significance is not something the individual can destroy, alter, or pass by? In an age where digital files increasingly encapsulate our memories, how can one undergo a ritual of forgetting when the artifact holding the memories to forget is not of a domain the individual controls?
The prior work that influenced our design was not a set of specific projects addressing this situation, but rather a mixture of existing devices already used in public to distort, mute, or change what we hear. For instance, earbuds already change what we would otherwise hear in a location by putting another sound in our heads. Musicians' headphones change audio input to help them explore the different ways a song can be composed. Noise-cancelling headphones, such as those produced by Bose, measure the incoming sound wave and emit a wave 180 degrees "out of phase" with the wave reaching our ears. These existing sound-altering devices inspired us to create a device that could hypothetically use such technology to let an individual take a digital music file and alter it for their own needs. Since a digital song cannot itself be altered or destroyed, a device that lets someone alter a song to their own needs seemed like a way to support a ritual of forgetting around a non-tangible artifact: music.
Several questions remain unaddressed. To what extent does altering music help someone forget an event associated with that song? Is wearing an in-ear device to cancel out what is playing in our surroundings simply avoiding the situation? Does one really forget a song when the distortion period concludes, or will there always be a trace of the memory present?
In future explorations, it would be interesting to go into more depth on how we can perform rituals of forgetting in the digital age, and on how we can perform such actions in a way that is respectful to others yet still effective in addressing the need to let go of the past. Will a ritual of forgetting really be relevant in the digital age, when the artifact can never truly be destroyed? Perhaps as we reach a dead end with digital files, where our authority over a memory becomes less alterable within our own domain, the idea of forgetting will become less relevant than the idea of coping with what is at hand. Perhaps how we address situations and seek to let them go will become less an act of deleting the memory and more a series of therapies for understanding and responding to the situation appropriately in the future.
Next time we approach a similar project, we will want to readdress and focus on the process that carries the user from first using the device to the end result. Though we had a clear sense of how we wanted the user to feel at the end of using the product, the passage that would lead the user there was lacking in this iteration. Though we thought distortion would be an obvious way to keep the original song from triggering the memory, how the distortion is made, its customization options, and its effectiveness will need to be re-evaluated. If we were to approach the assignment again, we would most likely organize a pattern and mapping of distortions for different scenarios. To establish such maps, we would need interviews and user-experience studies to further analyze the effectiveness of a particular distortion over a set amount of time.
Given the relatively short amount of time we had to complete the assignment, we reached a place we are comfortable with for a first iteration. We covered a broad spectrum of ideas and methods for addressing a ritual of forgetting the memories associated with a song. A second iteration, however, will most likely call for a more in-depth strategy that narrows our broader ideas and questions into a cohesive device with a clear sense of function and direction.
One of the most important ideas we took away from the project is that forgetting in the digital age is a sensitive subject. How we as designers approach methods and solutions for such concerns is vital to how people will interact with one another. In some cases a ritual of forgetting could cause someone to isolate themselves from the public, and perhaps the solution to forgetting the past needs a more therapeutic course, one in which the fabric of social interactions and engagements is not torn.
Past.FM: http://ciid.dk/education/portfolio/idp12/courses/tangible-user-interface/projects/past-fm/
Remix Source: https://www.youtube.com/watch?v=CTylpVDozCg