Mosh Mural (a teaser)

I've been working on this project for a while, and it's finally taking shape. I hope to post something more soon about how it looks, but in the meantime the code is available here on GitHub.

The central premise is that visuals at concerts usually concentrate on the music or the band, but rarely offer the audience a way to feed back into the music and the aesthetics. The mural functions as a mirror for the audience, offering them a new way to engage with the performance and, through the visuals, with the performers. It creates a feedback loop tuned to both the audience's motion and energy and the music.

The program uses the Kinect to capture depth data about the space and the people in it, and augments that with frequency data from an FFT of the audio.
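
To give a rough idea of how those two inputs can be combined, here is a minimal sketch (not the mural code itself) that blends crowd motion, estimated by frame-differencing Kinect depth images, with the energy of the low FFT bands. It assumes a recent openFrameworks with the ofxKinect and ofxOpenCv addons, and it reads the spectrum of a locally played placeholder track via ofSoundGetSpectrum(); analyzing a live input instead would need an FFT addon such as ofxFft.

```cpp
#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage depthNow, depthPrev, depthDiff;
    ofSoundPlayer track;
    float motionEnergy = 0;   // 0..1, from depth frame differencing
    float audioEnergy  = 0;   // 0..1, from the low FFT bands

    void setup() {
        kinect.init();
        kinect.open();
        depthNow.allocate(640, 480);   // Kinect v1 depth resolution
        depthPrev.allocate(640, 480);
        depthDiff.allocate(640, 480);
        track.load("music.mp3");       // placeholder track in bin/data
        track.play();
    }

    void update() {
        kinect.update();
        if (kinect.isFrameNew()) {
            depthPrev = depthNow;
            depthNow.setFromPixels(kinect.getDepthPixels());
            depthDiff.absDiff(depthPrev, depthNow);  // pixels that changed between frames
            depthDiff.threshold(20);
            // fraction of the frame that moved ~ how much the crowd is moving
            motionEnergy = depthDiff.countNonZeroInRegion(0, 0, 640, 480) / float(640 * 480);
        }
        // average the lowest FFT bands as a rough loudness / bass measure
        int nBands = 64;
        float *spectrum = ofSoundGetSpectrum(nBands);
        float sum = 0;
        for (int i = 0; i < 8; i++) sum += spectrum[i];
        audioEnergy = ofClamp(sum / 8.0f, 0, 1);
    }

    void draw() {
        ofBackground(0);
        depthNow.draw(0, 0);
        // drive a simple visual from the blended crowd + music energy
        float energy = 0.5f * motionEnergy + 0.5f * audioEnergy;
        ofDrawCircle(ofGetWidth() / 2, ofGetHeight() / 2, 50 + 400 * energy);
    }

    void exit() { kinect.close(); }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

The blended energy value is the kind of signal that then drives the mural's visuals; the actual mapping in the project is more involved than the single circle above.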

Check out a teaser below; more documentation will follow soon. The project is a work in progress as we fine-tune the visuals and interaction for the ITP Spring Show.


Appropriating New Technologies

Computer vision isn’t exactly new. In fact, I’ve posted a bit of appropriation of this technology on this blog before, but there’s still plenty of room to explore. Here’s an experiment using OpenFrameworks and OpenCV to detect faces, flip them, and attach the flipped copy at roughly where the chin is, bringing new meaning to the term “double chin.”
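
For the curious, here is a minimal sketch of that idea, assuming a recent openFrameworks with the ofxOpenCv addon and a Haar cascade file in bin/data (the cascade filename and the exact placement of the flipped face are placeholder choices): detect faces with ofxCvHaarFinder, crop each face out of the camera frame, mirror it vertically, and draw it just below the detected face.

```cpp
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxCvColorImage colorImg;
    ofxCvGrayscaleImage grayImg;
    ofxCvHaarFinder finder;
    ofImage flipped;

    void setup() {
        cam.initGrabber(640, 480);
        colorImg.allocate(640, 480);
        grayImg.allocate(640, 480);
        // Haar cascade shipped with the ofxOpenCv examples (path is an assumption)
        finder.setup("haarcascade_frontalface_default.xml");
    }

    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            colorImg.setFromPixels(cam.getPixels());
            grayImg = colorImg;               // convert to grayscale for detection
            finder.findHaarObjects(grayImg);  // detect faces
        }
    }

    void draw() {
        cam.draw(0, 0);
        for (auto &blob : finder.blobs) {
            ofRectangle r = blob.boundingRect;
            // copy the face region and mirror it vertically...
            flipped.setFromPixels(cam.getPixels());
            flipped.crop(r.x, r.y, r.width, r.height);
            flipped.mirror(true, false);      // vertical flip
            // ...then draw it just below the chin (bottom of the detected face)
            flipped.draw(r.x, r.y + r.height);
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

Cropping the full camera frame for every face each frame is wasteful, but it keeps the sketch short; the effect is the same double-chin mirror described above.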