Authors Tom Stoll, Dennis Heihoff (and Irina Likhtina)
Affiliation Dartmouth College/unaffiliated
Code Github Link

Motion tracking and statistical analysis of video, used to control audio synthesis.

1 Stack

Python + Numpy + OpenCV


2 Process

Films are analyzed for motion vectors using OpenCV (corner detector + Lucas-Kanade algorithm). These vectors are simplified (start and end points only), binned according to their on-screen locations and their angles, and weighted by their relative lengths. In addition, brightness mean and variance are extracted for each of the 64 grid positions per frame.
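The binning step above can be sketched in NumPy. This is an illustrative reconstruction, not the project's actual code: the grid size, angle-bin count, and function name are assumptions, and the OpenCV tracking stage is taken as already done (it would supply the start/end point pairs).

```python
import numpy as np

def bin_motion_vectors(starts, ends, frame_w, frame_h, grid=8, angle_bins=8):
    """Bin motion vectors (start -> end point pairs) into a
    grid x grid x angle_bins histogram, weighted by relative length.
    Illustrative sketch; parameter names and layout are assumptions."""
    d = ends - starts                                   # displacement per vector
    lengths = np.hypot(d[:, 0], d[:, 1])                # vector lengths
    angles = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)

    # Spatial cell of each vector's start point, clipped to the grid.
    gx = np.clip((starts[:, 0] / frame_w * grid).astype(int), 0, grid - 1)
    gy = np.clip((starts[:, 1] / frame_h * grid).astype(int), 0, grid - 1)
    # Angle bin over [0, 2*pi).
    ga = np.clip((angles / (2 * np.pi) * angle_bins).astype(int), 0, angle_bins - 1)

    hist = np.zeros((grid, grid, angle_bins))
    total = lengths.sum()
    if total > 0:
        # Accumulate relative lengths; np.add.at handles repeated indices.
        np.add.at(hist, (gy, gx, ga), lengths / total)
    return hist
```

Because each vector contributes its relative length, the histogram sums to 1 per frame, which keeps the downstream synth amplitudes comparable across frames.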

This data is stored in a binary file that can be read by SuperCollider. SuperCollider sonifies the data using a bank of 64 simple Synths, each of which is fed motion/brightness data at each time point.
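On the Python side, writing such a file could look like the sketch below. The exact layout the project uses is not documented here, so the header scheme (frame count and channel count as leading float32 values) and the function name are assumptions; a flat little-endian float32 stream is just one format SuperCollider can parse on its end.

```python
import numpy as np

def write_frames(path, frames):
    """Write per-frame feature vectors as a flat little-endian float32
    stream, preceded by a two-value header (n_frames, n_channels).
    Layout is an assumption, not the project's actual format."""
    data = np.asarray(frames, dtype="<f4")
    n_frames, n_channels = data.shape
    header = np.array([n_frames, n_channels], dtype="<f4")
    with open(path, "wb") as f:
        header.tofile(f)  # lets the reader size its buffers up front
        data.tofile(f)    # frames in row-major order, one row per time point
```

Keeping everything float32 avoids any per-value conversion when the data is streamed to the Synth bank.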

3 Results/Observations

We successfully mapped motion to sound. We avoided making one-to-one mappings of spatial locations to pitches or other overly simplistic correlations. The nature of the histograms is such that most spatial bins contain only one or two binned angles. This sparsity creates regularly spaced active channels that map into frequency bands in the sonification. Furthermore, the dynamic changes in motion (accelerations) create natural, realistic-sounding sonic morphologies. It should also be noted that a spectral synthesis approach to sonification was not successful.

Password for each video: hamr

Motion Examples (intercut)

Score for A Clockwork Orange (short)

4 To do

1. Create a mapping layer that takes into account each spatial/angle channel's variance over time and reorders this data into configurable patterns or bin configurations.

2. Use dimensionality reduction to reduce the remapping layer such that only a limited number of significant components drive a synthesis algorithm.
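For the second to-do item, a minimal PCA-via-SVD sketch shows the intended shape of the reduction: per-frame channel data in, a small number of significant component signals out to drive synthesis. This is illustrative future-work code, not something implemented in the project; the function name and component count are assumptions.

```python
import numpy as np

def top_components(X, k=4):
    """Project per-frame channel data X (frames x channels) onto its
    k strongest principal components via SVD. Illustrative sketch for
    the to-do item above, not existing project code."""
    Xc = X - X.mean(axis=0)                       # center each channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # frames x k control signals
```

The resulting k columns could each drive one synthesis parameter, replacing the full 64-channel feed with a handful of high-variance control signals.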

synaesthesia.txt · Last modified: 2013/06/30 17:12 by synaesthesiatic