
Outcome


Intention

The idea of this project was twofold: to let the audience create their own audio and, at the same time, to let them visualize what they were making. We wanted to render rhythm as vibrant visual patterns. Not only did we want to emphasize the link between audio and visuals, but we also wanted to create a unified experience for the audience as they came together to produce the sounds that generate, in real time, the visuals they see on the screens in front of them. We were motivated by a desire to encourage collaboration and create an engaging experience, one in which the audience could actively participate and also clearly witness the effects of their participation. This, we believed, would make for a unique experience, since we would be combining interactivity with performance art.

Performance

The end result can be seen in the video below. This is what happens when the performance goes as intended: someone in the audience creates their own rhythm and the rest of the audience follows along. The piece is a visualizer that listens for any sound and renders it using an FFT (fast Fourier transform). A simple microphone picked up the sound, and the rest was handled by a computer hooked up to an HDMI splitter that displayed the result on the three screens in the room. The performance outcome depends largely on two things: 1) how involved the audience is willing to be, and 2) the rhythms that members of the audience create.
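For reference, here is a minimal sketch of this kind of setup, assuming Processing's Minim library for the audio analysis (the write-up only commits to Processing and an FFT, so the library choice and the 1024-sample buffer are illustrative):

    import ddf.minim.*;
    import ddf.minim.analysis.*;

    Minim minim;
    AudioInput in;   // live microphone input
    FFT fft;         // frequency analysis of each audio buffer

    void setup() {
      size(800, 600);
      minim = new Minim(this);
      in = minim.getLineIn(Minim.MONO, 1024);           // buffer size is a guess
      fft = new FFT(in.bufferSize(), in.sampleRate());
    }

    void draw() {
      background(0);
      fft.forward(in.mix);                  // analyze the current mic buffer
      for (int i = 0; i < fft.specSize(); i++) {
        // one bar per frequency band, height scaled by that band's amplitude
        line(i, height, i, height - fft.getBand(i) * 8);
      }
    }

Mirroring one computer's output to the three screens is handled entirely by the HDMI splitter; the sketch itself doesn't need to know about it.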


Context

Rhythm and art have been intertwined for almost as long as either has existed. While many visual forms use rhythm as pattern, repetition in either form or content, we wanted to examine rhythm in its most basic form. Taking inspiration from an experience of "play" most people share, clapping games in kindergarten, we wanted to create a generative, fun, interactive work.

One of our inspirations was a TED talk featuring a one-man performance that used everyday objects to examine rhythm. Our program, created with Java and Processing, took inspiration from audio-visual bridges such as the iTunes visualizer and Pixel Music 3000, which have a similar feel of color-changing objects on screen whose size reflects how loud the input is. The idea of visualizing sound to gain another dimension of interactivity is well explored in the "demoscene" phenomenon, where real or virtual processes are automatically translated into visual effects. We wanted both to tap into this vivid digital subculture and to connect it to childhood and everyday experiences.

Process

After we first came up with the idea of playing with rhythms, we thought of audio visualization. The project was then divided into two parts: performance and programming. For the performance, we initially wanted layers of rhythms drummed by the three group members. But then we thought about involving the audience, and we eventually decided to have our group members lead the audience in clapping along with us.

For the programming, we talked about using JavaScript and some other tools, but eventually settled on Processing since it has a mature FFT analyzer for audio. However, producing our desired visual design was hard, since getting the right sizes and colors of shapes required a fair amount of math. We changed from lines to circles, and from one row of circles to three. We also added rhythmic dots to the background. At first the colors also responded to the sound input, but the randomness of the FFT output didn't work well with that. In the end, the color changes and the background dots are driven by elapsed time and run in loops, since rhythm is about time and repetition, while the sizes change with the loudness of the sound input.
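Building on the setup sketch above, and replacing its bar-graph draw(), the mapping described here amounted to something like the following (the three rows, the circle count, and the eight-second color loop are illustrative guesses, not our exact values):

    void draw() {
      background(0);
      float loudness = in.mix.level();        // RMS amplitude of the buffer, 0..1
      float t = (millis() % 8000) / 8000.0;   // elapsed time, looping every 8 s

      colorMode(HSB, 1.0);
      noStroke();
      fill(t, 0.8, 1.0);                      // hue cycles with time, not sound

      for (int row = 0; row < 3; row++) {     // three lines of circles
        for (int i = 0; i < 20; i++) {
          float x = map(i, 0, 19, 50, width - 50);
          float y = height * (row + 1) / 4.0;
          float d = 10 + loudness * 200;      // size follows input loudness
          ellipse(x, y, d, d);
        }
      }
    }

Driving the hue from millis() rather than from the FFT is what keeps the colors smooth while the sizes stay jumpy with the claps.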


[Screenshot of the visualizer]

Evaluation

The intent was there, and the means for a fully successful performance were there as well. The downfall of this performance was a lack of audience coordination. While this improved as the performance progressed, a simple explanation of what was going to happen would have gotten things going more quickly and less awkwardly. On the whole, nothing went critically wrong, but we are our own worst critics: in our eyes (and in many others'), this could have gone much better had we properly coordinated with the audience. The performance matched the intent quite well, but it could have had a much stronger start, and we could have done a better job achieving the desired outcome.

The biggest changes to make for a new iteration would be to:

1) Coordinate with the audience before starting to make sure that they know what is going on.

2) Raise the input threshold (a noise gate) on the visualizer so that it only picks up sounds produced by members of the audience, rather than every little noise made in the room; a sketch of such a gate follows this list.
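A gate of the kind described in point 2 is only a few lines in Processing, assuming the Minim setup sketched earlier (the 0.05 threshold is a placeholder that would have to be tuned to the room's ambient level):

    final float GATE = 0.05;  // placeholder threshold; tune to the room

    float gatedLevel() {
      float loudness = in.mix.level();          // raw RMS level of the mic input
      return (loudness < GATE) ? 0 : loudness;  // ignore sounds below the gate
    }

Circle sizes would then be driven by gatedLevel() instead of the raw level, so only deliberate sounds like claps register.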

Group Reflection

We realize in hindsight that we should have planned a little better and had a clearer way of explaining to the audience what they were expected to do. We also should have specified in the tech rider that we wanted people seated at tables, since that would have made our performance significantly better.

We should've coordinated the audience better before starting the performance. On the technical side, we could apply a threshold to the sound input and make the visual effects more complex. The threshold should've been raised so that only deliberate sounds (people's rhythms) would be represented, rather than every little noise in the room.

Sources 

https://processing.org/

We used a Processing library that let us take audio input and run an FFT to extract frequency data, which we then used to create the visuals. The code for the visuals themselves was our own work, but the tool that turned audio into data came from Processing.

Proposal

We will visualize rhythms in beautifully coded visual patterns. Two performers on stage will play improvised rhythms on drums and perhaps other instruments, and our goal is to get the audience to clap or otherwise produce rhythmic sounds along with us. Our program, coded in Processing or JavaScript, will take audio input and generate patterns based on analysis of that input.

This project encourages collaboration: we will try to involve the audience in producing unified rhythms with the performers, and we might hand control over to the audience once they are comfortable with the idea. At the same time, we are trying to create a unified experience that links what we hear with what we see. We encourage people to see beyond what is immediately visible and imagine what is potentially visible.
