#Siftt
#Siftt is a multimedia work in collaboration with Casey Farina. At the heart of this piece is a streaming Twitter feed that grabs specific tweets in real time: we simultaneously track the current top trending tweet and any tweets containing “#siftt.” This information is mapped onto musical parameters (pitch, spatialization, volume, duration, etc.) to create a musical texture. The tweets are also projected as a visual animation above a horizontal barrier. Some of the objects make it past the barrier and are transformed into animated symbols that serve as the performance score. As these musical symbols move down the screen, different instruments waiting at the bottom are played when a corresponding symbol reaches them.
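As a rough illustration of the idea (not the exact mapping used in performance), here is a minimal sketch in TypeScript of how a tweet could be reduced to pitch, duration, volume, and spatial position before being handed off to the audio engine. The scale, ranges, and weightings are all hypothetical choices for the sake of the example.

```typescript
// Hypothetical sketch: reduce a tweet to a few musical parameters.
// The scale, ranges, and weightings are illustrative, not the actual mapping.

interface TweetNote {
  pitch: number;    // MIDI note number
  velocity: number; // 0-127
  duration: number; // seconds
  pan: number;      // -1 (left) to 1 (right)
}

const PENTATONIC = [0, 2, 4, 7, 9]; // scale degrees, in semitones

function tweetToNote(text: string, timestampMs: number): TweetNote {
  // Sum the character codes to get a stable numeric "fingerprint" of the text.
  const charSum = [...text].reduce((sum, ch) => sum + ch.charCodeAt(0), 0);

  // Pick a pitch from a pentatonic scale spread over a few octaves.
  const degree = PENTATONIC[charSum % PENTATONIC.length];
  const octave = 48 + 12 * (charSum % 3); // C3, C4, or C5
  const pitch = octave + degree;

  // Longer tweets ring longer and play louder; the stereo position
  // drifts with the time of arrival.
  const duration = Math.min(0.25 + text.length / 70, 4);
  const velocity = Math.min(40 + text.length, 127);
  const pan = ((timestampMs / 1000) % 2) - 1;

  return { pitch, velocity, duration, pan };
}

// Example: map an incoming tweet and inspect the resulting parameters.
console.log(tweetToNote("Listening to #siftt right now!", Date.now()));
```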
The motivation behind this work is to demonstrate a creative and musical way of interpreting massive amounts of data. “Big Data” is all the rage these days, and we live in a society that is constantly collecting, storing, visualizing, and making predictions based on huge amounts of data. We are surrounded by it and a part of it all at once. Never in human history has so much information been transmitted and collected. Furthermore, we wanted to create a way for the audience to participate in the performance by influencing the music and the creation of the score. With the use of graphics, we are able to create an artistic visual experience that also serves as the musical score. Allowing the audience to see this process provides transparency into the ideas unfolding during the performance; it removes a layer of abstraction that can be difficult to understand when seeing an experimental work performed for the first time.
Below is a short trailer from one of the performances of this work. I have also attached the audio from another realization of the piece using different instruments and different electronic sounds. Casey designed the graphics and animations while I focused on the sound design and programming with the Twitter API.
For any technophiles who are interested, here is what we used (a rough sketch of the glue layer follows the list):
- openFrameworks for animation and visual design
- Pure Data for the audio engine and sound design
- Node.js for accessing the Twitter API
- OSC (Open Sound Control) to connect everything
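To give a sense of how these pieces could fit together, here is a minimal sketch of the Node.js bridge, assuming the twit package for the (since-retired) statuses/filter streaming endpoint and node-osc for sending OSC. The port numbers, OSC addresses, and environment variable names are placeholders rather than the actual configuration used in performance.

```typescript
// Hypothetical sketch of the Node.js glue layer: stream tweets matching
// "#siftt" and forward them as OSC messages to Pure Data and openFrameworks.
// Packages, ports, and OSC addresses here are illustrative placeholders.
import Twit from "twit";
import { Client } from "node-osc";

const twitter = new Twit({
  consumer_key: process.env.TWITTER_KEY!,
  consumer_secret: process.env.TWITTER_SECRET!,
  access_token: process.env.TWITTER_TOKEN!,
  access_token_secret: process.env.TWITTER_TOKEN_SECRET!,
});

// One OSC client per destination: Pure Data for sound, openFrameworks for visuals.
const pd = new Client("127.0.0.1", 9001);
const of = new Client("127.0.0.1", 9002);

// statuses/filter pushes every tweet containing the tracked term as it arrives.
const stream = twitter.stream("statuses/filter", { track: "#siftt" });

stream.on("tweet", (tweet: any) => {
  // Send the raw text to the visuals and a couple of numeric features to the synth.
  of.send("/siftt/text", tweet.text);
  pd.send("/siftt/note", tweet.text.length, tweet.user.followers_count);
});
```

Keeping the bridge this thin means Pure Data and openFrameworks never need to know anything about Twitter; they just listen for OSC messages on their own ports.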
Audio excerpt: