
Final Project: Interactive Audiovisual Interface


Our group began the final project with an idea to create an interactive music experience. We played with different ideas on how to interact with the musical environment, including color and shape cues and DJ-ing with dance. Ultimately, we decided to try to modify audio and visual output with both shadow movement and pressure pads.

The basic idea was to project a video piece onto a two-way projection screen and have people stand in front of the projector to create a shadow. That shadow would then be read by a webcam behind the screen, and the shadow's actions would be translated into a change in the video output. Simultaneously, four pressure pads connected to a MakeyMakey board would change the audio output.
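Because the MakeyMakey presents itself to the computer as an ordinary USB keyboard, the pad-to-audio side of the design boils down to listening for key presses and changing the sound in response. The sketch below is only an illustration of that mapping, not our actual setup: the real mixing lived in the Max patch, and the key bindings and WAV file names here are assumptions.

# Hypothetical mapping from MakeyMakey arrow-key inputs to four audio stems.
import pygame

PAD_KEYS = {
    pygame.K_UP: "stem_drums.wav",      # file names are placeholders
    pygame.K_DOWN: "stem_bass.wav",
    pygame.K_LEFT: "stem_synth.wav",
    pygame.K_RIGHT: "stem_vocals.wav",
}

pygame.init()
pygame.display.set_mode((200, 200))     # a window is needed to receive key events
sounds = {key: pygame.mixer.Sound(path) for key, path in PAD_KEYS.items()}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key in sounds:
            # Stepping on a pad closes the circuit, the MakeyMakey sends a
            # key-down, and the matching stem is triggered.
            sounds[event.key].play()

pygame.quit()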

This system design catered to the individual strengths of the members of the group. Gus' main focus was designing and writing a Max patch to read visual input from the webcam in order to modify the video. He also mixed the audio for the piece. Logan focused on the video art piece, designing and editing the visuals in Final Cut Pro. Annie designed and constructed the pressure pads.

As with any system, we ran into more than a few snags along the way. From a half-baked idea to implement LEDs in the foot pads, to last-minute coding bugs, we all ended up learning a great deal about each other's specialties through team troubleshooting.

In creating the pressure pads, Annie spent a lot of time choosing materials to build with, stumbling upon frosted corrugated plastic as an alternative to plexiglass back when the plan was to light the pads internally. The next challenge was to ensure that the circuit would remain broken until the pad was pressed, while still making a reliable connection at that point.

In coding, Gus first had to find ways to practically realize our vision of sensing shadow input via webcam. Next, he spent time researching and field-testing the options for sensing systems and taught himself the various programming techniques needed to finalize the Max patch. Ultimately, the largest hurdles were an incompatibility on his computer (which left the patch untestable) and an unanticipated bug in the final presentation; we theorize that the Jitter object was at fault in the end, failing to output.

Logan spent his time designing and editing the visual output of the installation. Major troubleshooting on his end included rendering the video to be compatible with the Max patch, as well as a last-minute scramble to edit in white bars (for sensing) when the code failed to overlay them.
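Those white bars were the hook for shadow sensing: the webcam behind the screen watches the regions of the frame where the bars sit, and when a person's shadow covers a bar, the brightness in that region drops. Our real system did this inside the Max/Jitter patch; the sketch below is only a rough illustration of the same idea in Python with OpenCV, and the region coordinates and threshold are made-up values.

# Watch fixed regions of the webcam frame where the white bars appear and
# report when a shadow darkens one of them.
import cv2

# Hypothetical (x, y, width, height) regions covering the white bars.
BAR_REGIONS = [(50, 400, 100, 30), (250, 400, 100, 30), (450, 400, 100, 30)]
DARK_THRESHOLD = 80  # mean brightness below this counts as "shadowed"

cap = cv2.VideoCapture(0)  # webcam behind the projection screen
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    for i, (x, y, w, h) in enumerate(BAR_REGIONS):
        region = gray[y:y + h, x:x + w]
        if region.mean() < DARK_THRESHOLD:
            # In the installation this would trigger a video change;
            # here we just report which bar is covered.
            print(f"bar {i} shadowed")

    cv2.imshow("webcam", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()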

In the end, our installation was extremely successful in sensing pressure cues and subsequently mixing the audio output, and the video added to the overall aesthetic and ambient quality of the experience, despite not reacting appropriately to the shadow cues.
