Blinking
Made by Ruihao Ye, Carolyn Cai, Tian Zhao and ssidique
Create a generative art piece as well as a dialogue between two participants using only blinks.
Created: November 14th, 2016
We used webcams to focus on the eyes of our two participants, who keep track of each other's blinks by staring at the other person's eyes on screen. When a participant notices a blink, they click the mouse. Each click drives our visualization by adding multi-colored balls to the screen and playing music. We used a combination of two projectors, projection-mapped with Millumin, and Unity for the actual visualization and sounds. This process produces an emergent outcome, because blinking is a natural, almost uncontrollable aspect of life: depending on how people blink, the generated patterns vary considerably. In addition, we encouraged audience interaction, which adds a layer of ambiguity to the outcome of our performance and provides the opportunity for the messages and ideas of the piece to be directly instilled in the performers.
Our ideas were mainly informed by the readings from this module (“Performing Interactivity” and "The Methodology of Generative Art"). The concept of using the human body to drive an emergent audiovisual experience comes mostly from the reading on generative art. Specifically, we used the "aleatoric" process of creating generative art in our computer program. While there is a set structure or algorithm to the image/sound generation, it relies quite heavily on randomness. So, things like the sizes, initial locations, and pitches associated with the spheres are decided randomly; however, their color and pitch ranges, as well as their motion, are set based on which player is clicking.
We decided to use human blinking in order to have an element of audience interaction and participation, which we feel is quite important for media performances. This idea comes from "Performing Interactivity;" we wanted to incorporate some of the different categories of interaction into our performance. By blinking (clicking the mouse), people have the chance to become co-creators of the artwork. At the same time, by observing each other's blinks, they can have a conversation that is digitally mediated through the use of the webcam.
The intended message and purpose of the piece are varied. The initial intent was to create a piece to turn the unconscious act of blinking into something conscious. The next was to add a level of intimacy to blinking and staring, specifically between the two participants in the performance. Finally, we wished to instill a sense of zen and relaxation through the visualization, as well as to make the eyeblinks visual and permanent through the creation of a sphere.
We chose Unity to visualize the eye blinks. A participant triggers a new ball by clicking the mouse when they see the other participant blink. For every mouse click, a random note from C3 to C5 is played, and the color and size of each ball are generated randomly using the Unity engine. Each participant triggers a different color scheme of balls during the experience. We tried to integrate the sounds and the visual effects in the program so that the process of blinking is presented naturally to the audience.
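The per-click randomization described above can be sketched as follows. This is an illustrative Python model rather than our actual Unity C# script; the MIDI note values, palette names, and size range are assumptions for the sketch.

```python
import random

# MIDI note numbers for the C3..C5 range mentioned above
# (48 and 72 in the common MIDI convention; the Unity script's
# actual pitch mapping is an assumption here).
C3, C5 = 48, 72

# Each participant triggers a different color scheme; these palette
# names are illustrative, not the ones used in the Unity scene.
PALETTES = {
    "player1": ["red", "orange", "yellow"],
    "player2": ["blue", "teal", "purple"],
}

def spawn_ball(player, rng=random):
    """Return randomized parameters for one ball on a mouse click."""
    return {
        "note": rng.randint(C3, C5),            # random pitch in range
        "size": rng.uniform(0.2, 1.0),          # random radius (assumed range)
        "color": rng.choice(PALETTES[player]),  # player-specific color scheme
    }

ball = spawn_ball("player1")
```

In the Unity version, the equivalent of `spawn_ball` instantiates a sphere and plays the chosen note, but the structure of the randomness is the same.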
Also, we designed an algorithm based on the time interval between two blinks, so the program can adjust the quantity of balls generated. More specifically, when the two participants blink very quickly, or when they blink at the same time, several colorful balls burst out on screen. We did this intentionally to emphasize the interaction between the participants.
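A minimal sketch of this interval-based burst logic, again in Python: the thresholds and maximum burst size below are illustrative guesses, not the constants from our actual Unity script.

```python
def burst_count(interval_seconds, fast_threshold=0.5, max_burst=6):
    """Map the time between two detected blinks to a number of balls.

    Short intervals (rapid or near-simultaneous blinks) produce a burst
    of several balls; a normal pace produces one ball per click.
    """
    if interval_seconds <= 0.05:   # (near-)simultaneous blinks
        return max_burst
    if interval_seconds < fast_threshold:
        # the shorter the interval, the more balls burst out
        fraction = (fast_threshold - interval_seconds) / fast_threshold
        return 1 + round(fraction * (max_burst - 1))
    return 1                       # normal pace: one ball per click
```

Mapping the inter-blink interval through a function like this is what makes simultaneous blinking visibly different from ordinary alternating blinks.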
Timeline:
Our initial idea was a variation on an eyeblink game in which two people would try to synchronize their blinks. Making such a visualization both visually appealing and fast turned out to be challenging, and we considered several ideas. At one point we considered a rhythm-matching game, where one eye from each performer would blink as a pair of eyes, and the goal would be to match average eyeblinks while making the blinks as similar as possible. However, we determined that this project's scale was larger than what we could reasonably finish, so, after multiple reevaluations, we considered a variant "game" in which circles would drop, their sizes and masses determined by the accuracy of the blinks. This, too, would have been too complicated to implement, so we finally settled on a version in which the two participants try to match a button click to the eyeblink of the other person. Each click is visualized as one of two kinds of spheres, with color and size selected from one of two pools.
After the simulation (in Unity) was finished, we worked to combine the projections of the eyes and the game screen with a program called Millumin. However, a few obstacles appeared while implementing the visual. The first was that we attempted to attach a second camera in order to capture the two sets of eyes independently. Unfortunately, we were using an IDeATe computer for the project, and the cameras required software not already installed, so we were unable to use two cameras. In the end we settled on using the single FaceTime camera to capture two pairs of eyes, one on either side of the camera; although the zoomed-in image was slightly grainy, it was effective in showing the eye movements (albeit with a slight delay attributable to the software and the camera).
The second and final problem was that the program was unable to show the desktop alongside the camera views. We quickly fixed this by layering two projectors on top of each other, and the result was of reasonable visual quality.
We really enjoyed working on the project overall. However, we see a lot of potential for improvement. Certain aspects of our project became impossible due to technical issues that plagued us throughout the process.
Specifically, we had issues connecting our webcams and adding a recording of our desktop to Millumin. The former was an issue with the software and hardware we used: the IDeATe computer did not allow the installation of third-party software without administrator privileges, yet the cameras from IDeATe lending required third-party software that was not already on the computer. Therefore, we had to settle for just the FaceTime camera on the MacBook.
The second was that the program we intended to use for our presentation had no explicit option for projecting our desktop along with the feed from the camera. There was also very limited documentation on the program itself.
These setbacks forced us to change our image layout to a composition that was not as visually integrated as our original plan. Once we learn how to do those things, we feel our true vision of the project can be fully realized. Even so, the basic version of our vision did come through in the final presentation, as the technical fundamentals of reacting to eyeblinks and generating a sphere digitally worked fairly well. A level of intimacy between performers could also be achieved with our setup, giving the piece a personal meaning. Although both aspects were limited in their impact by the technical constraints, the demo worked well as a demonstration of the concept.
We realized that the basic technical setup can be really significant to a project. Though we did not have much difficulty preparing our performance, we had trouble setting up two webcams and connecting both of them to one laptop. Had we been able to use two webcams, we could have provided a more straightforward experience for the participants by allowing them to look into each other's eyes directly. However, the outcome using only the laptop's built-in webcam was still quite impressive. There are potential drawbacks to forcing direct eye contact, as noted by one of the TAs reviewing our performance, the biggest being the psychological threat perceived in direct eye contact. Therefore, which implementation to use depends greatly on who is participating in the performance and what the central message of the piece is.
Another aspect of the setup that caused difficulty was the program used for showing the eyes and the screen. We decided to use Millumin for projection mapping; however, it had no explicit tool for projecting a portion of the desktop. Therefore, in the final performance we stacked two projectors, which worked out fine both visually and technically (the fans pointed outwards instead of upwards, so overheating was not much of an issue). Ideally, the final performance would have been limited to one screen for simplicity, but with the tools we had at the time, we had to substitute.
Further, given more time to refine the program, the visualization would have used detected eyeblinks, rather than mouse clicks, as the action that creates a sphere. For practicality as a demonstration, however, we used mouse clicks.
Also, the visuals could be better connected to the project. We initially intended to have the two sets of eyes straddling the game vertically, but due to technical issues we were unable to put them on opposite sides, so the final product had them both on top. This may have reduced the impact of the game's directionality, and a future iteration might implement directionality more strongly. That said, issues of placement may then arise in both the implementation and the meanings attributed to or felt in the piece.
The sounds could also evolve to carry more meaning. For our presentation, we used calming sounds to add a sense of zen to the performance and to instill that feeling in the performers. More deliberate sounds, color palettes, and setups could enhance this feeling, create other feelings, or bring out other meanings of the project.
Finally, there is potential for variation of this project. With more inputs, more than two people could participate at one time. With a higher-quality camera (without software restrictions), we would be able to track actual eyeblinks effectively and accurately. And there are plenty of other meanings to explore that tie in with the piece, such as intimacy in physical or virtual space, or turning blinking into a conscious activity.
Attribution:
Applications: Unity (for the sphere/circle visualization and interaction from performers), Millumin (for projection of the camera feed)
References:
In order below:
Camera concept: #Interview (Shia LaBeouf, Aimee Cliff)
The piece reveals a level of intimacy without speech that we wished to create with our piece.
Visualization concept: Messa di Voce (Golan Levin, Zachary Lieberman, Jaap Blonk, and Joan La Barbara)
This piece's visual aesthetic, particularly its opening section, was what we aimed for in our own visuals.
Staring concept: The Artist is Present (Marina Abramović)
This piece is the underlying concept for our piece, with multiple interpretations. One is the conscious quality that looking takes on when staring into someone else's eyes, which was one of the goals of our performance.
(Our proposed project was originally a hand gesture / shadow puppet performance, but we decided to change it to this one, which we feel is more interesting. We were struggling to come up with how to have an element of randomness for the previous performance when we came across a project titled "Blinking" on the course website, which inspired us to think about ...blinking.)
The intentions/goals of this performance are twofold. First is to create a generative art piece using the blinking of the participants (2 performers) to drive random image generation processes. At the same time, the participants will be able to have a sort of conversation/dialogue with each other using only blinks, becoming aware of each other's presence and physical processes.
Instead of using motion detection software, we will have two people sit across from each other and play a game where they detect each other's blinks, pressing a button whenever one person sees the other person blink. This interaction triggers an animation which is projected onto the wall along with the participants' eyes. We will distinguish each person's responses using different colors and sounds for the animation, and characteristics of the blinks (i.e. duration between blinks) will be used to determine the nature of the animation.
We hope that this will be an interesting experience for both the participants and the audience.
Estimated performance time: 1-2 minutes