MEAPsoft Music Showcase
Here are some examples of music made with MEAPsoft, using specific "composer" modules.
If you do something exciting with MEAPsoft, we would love to include it on this page -- please contact us at meapsoft@music.columbia.edu
SimpleSortComposer
- Original: chris_mann.wav
Processed: chris_mann_glissando.wav
Start with the Chris Mann speech soundfile. Detect events, with sensitivity and
density set to high. Extract AvgFrequencySimple. Run SimpleSort, low to
high, giving us a speech sound glissando.
Inspired by Carter Scholz's "Managram".
[Douglas Repetto]
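Under the hood, SimpleSort is just a reordering of the detected chunks by one feature value. Here is a rough Python sketch of that idea (the chunk data and feature values are made up for illustration; this is not MEAPsoft's actual code):

    # Chunks as (feature_value, samples) pairs, e.g. AvgFreqSimple per chunk.
    chunks = [
        (220.0, "chunk_a_samples"),
        (110.0, "chunk_b_samples"),
        (440.0, "chunk_c_samples"),
    ]

    def simple_sort(chunks, low_to_high=True):
        """Return the chunk audio ordered by its feature value."""
        ordered = sorted(chunks, key=lambda c: c[0], reverse=not low_to_high)
        return [samples for _, samples in ordered]

    print(simple_sort(chunks))  # ['chunk_b_samples', 'chunk_a_samples', 'chunk_c_samples']

Sorting low to high on an average-frequency feature is what turns the speech into a glissando.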
- Original: cage.wav
Processed: cage_crescendo.wav
Start with the John Cage speech soundfile. Detect events, with
sensitivity and density set to high. Extract ChunkPower feature. Run
SimpleSort, low to high, and apply fade in/out and crossfade for
10ms. The result is a slow crescendo moving from Cage's gentle
speaking voice to the crowd's bursts of laughter.
Audio source: John Cage Lecture Reading: on Rauschenberg, Duchamp,
Johns etc. at L.A. County Museum of Art, 1965, from archive.org.
[Douglas Repetto]
- Original: beatit.wav
Processed: beatit_LB_GF.wav
Processed: beatit_LF_GB.wav
Start with Michael Jackson's "Beat It" soundfile. Detect beats. Extract ChunkStartTimes. Run SimpleSort, low to high, with "reverse" checked in the Universal Chunk Operations section. This will give you an output with the global form of the input track running forwards, but the local audio chunks playing backwards. Now do it again, but this time run SimpleSort high to low and uncheck the "reverse" button. This will give you an output with the local chunks playing forward, but the global form of the song going backwards.
Inspired by Thomas Dimuzio's "Yawriats ot Nevaeh" and "Nevaeh ot Yawriats".
[Douglas Repetto]
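The two outputs come from two independent operations: reversing the samples inside each chunk versus reversing the order of the chunks. A small sketch of the distinction (plain Python lists standing in for audio; not MEAPsoft's code):

    def local_backward_global_forward(chunks):
        # Keep the song's chunk order, but play each chunk backwards.
        return [list(reversed(c)) for c in chunks]

    def local_forward_global_backward(chunks):
        # Keep each chunk intact, but play the chunks in reverse order.
        return list(reversed(chunks))

    song = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]    # three tiny "beats"
    print(local_backward_global_forward(song))  # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
    print(local_forward_global_backward(song))  # [[7, 8, 9], [4, 5, 6], [1, 2, 3]]

The first corresponds to sorting ChunkStartTimes low to high with "reverse" checked; the second to sorting high to low with "reverse" unchecked.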
- Original: Bach - Ach Wie Nichtig, Ach Wie Fluechtig (mp3)
Processed: Ach wie nichtig-MEAPED.mp3
- I set it to detect beats, unchecked the half-tempo box, and unchecked "1st event = track start" (to avoid excess silent chunks).
- I enabled the Avg Pitch Simple feature extractor, and selected the "Likelihood" Meta feature extractor. I wanted to find which pitches were most likely to occur in the piece.
- I selected the Simple Sort composer, and set the individual chunks to be reversed (under Universal Chunk Operations). Then I applied a short fade in/out and crossfade (around 4 on the 0-50 scale) to smooth out the transitions.
- I ran the synthesizer!
[Jeff Snyder]
ThresholdComposer
- Original: ave_maria.wav
Processed: ave_maria-one_pitch.wav
Start with the Ave Maria soundfile. Extract events. Extract
frequencies. Run ThresholdComposer using a narrow frequency range
(115-125Hz) so that we end up with all instances of the few pitches
that fall within that range.
[Douglas Repetto]
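ThresholdComposer's selection step amounts to keeping only the chunks whose feature value falls inside a band. A minimal sketch (the 115-125 Hz band matches the example above; the chunk data is made up):

    def threshold_select(chunks, low, high):
        """Keep only chunks whose feature value lies in [low, high]."""
        return [samples for freq, samples in chunks if low <= freq <= high]

    chunks = [(98.0, "a"), (118.5, "b"), (123.0, "c"), (440.0, "d")]
    print(threshold_select(chunks, 115.0, 125.0))  # ['b', 'c']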
IntraChunkShuffleComposer
- Original: cheerleader.wav
Processed: cheerleader_shuffle_1.wav
Processed: cheerleader_shuffle_0.5.wav
Start with Dow Jones & the Industrials' "Hold that Cheerleader"
sample. Extract beats. Run IntraChunkShuffle with 4 sub
chunks. The resulting soundfile maintains the form and forward energy of
the original, but with lots of local chaos. Do it all again but
extract beats with "cut tempo in half". Still the same form, but less
local chaos since the segments are larger.
[Douglas Repetto]
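The IntraChunkShuffle effect can be pictured as cutting each beat-length chunk into N sub-chunks and shuffling those sub-chunks in place, so the beat grid survives while the audio inside each beat gets scrambled. A rough sketch (illustrative only, not MEAPsoft's code):

    import random

    def intra_chunk_shuffle(chunk, num_sub_chunks):
        """Split one chunk into equal sub-chunks and shuffle them in place."""
        n = len(chunk) // num_sub_chunks
        subs = [chunk[i * n:(i + 1) * n] for i in range(num_sub_chunks)]
        subs.append(chunk[num_sub_chunks * n:])   # keep any leftover samples
        random.shuffle(subs)
        return [s for sub in subs for s in sub]

    beat = list(range(16))                # one "beat" of 16 samples
    print(intra_chunk_shuffle(beat, 4))   # same samples, locally scrambled

Longer beats (e.g. with "cut tempo in half") mean longer sub-chunks, which is why the half-tempo version sounds less chaotic.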
- Original: plum.wav
Processed: plum_scrambled_wobbly.wav
Start with the Suzanne Vega "My Favorite Plum" sample. Extract events
with sensitivity and density on high. Run the IntraChunkShuffle composer
with 10 sub chunks. Apply fade in/out at 10ms. The result is still
clearly the same song, but with some strange new qualities. There's a low
frequency amplitude modulation (introduced by the tiny chunk size and
fade in/out process), and the shuffling of tiny chunks introduces some
very strange phrasing in the vocal.
[Douglas Repetto]
- Original: zappa_solo.wav
Processed: zappa_solo_ICT.wav
I've found that the IntraChunkShuffle composer does amazing things to
guitar solos. Applied to the solo from Frank Zappa's masterpiece, My
Guitar Wants To Kill Your Mama.
[Ron Weiss]
RotComposer
- Original: tubby.wav
Processed: tubby_rotate.wav
Start with King Tubby's "Forever Dub" sample. Extract beats. Extract
arbitrary feature (we'll ignore it). Run RotComposer with 4
beats/measure, 2 beat rotation, rotate right. The result is still
relatively cohesive rhythmically, but has a very strange accent
structure because we've rotated the accents out of their normal
positions.
[Douglas Repetto]
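RotComposer groups the beat chunks into measures and rotates each measure by a fixed number of beats. A sketch of that rotation with strings standing in for beats (4 beats/measure rotated right by 2, as above; the 2-beats/measure, 1-beat case is one way to get the adjacent-chunk swap used in the hip hop examples below):

    def rotate_measures(chunks, beats_per_measure, rotation, right=True):
        """Rotate the chunks inside each measure by `rotation` beats."""
        shift = rotation if right else -rotation
        out = []
        for i in range(0, len(chunks), beats_per_measure):
            measure = chunks[i:i + beats_per_measure]
            s = shift % len(measure)
            out.extend(measure[-s:] + measure[:-s] if s else measure)
        return out

    beats = ["a", "b", "c", "d", "e", "f", "g", "h"]
    print(rotate_measures(beats, 4, 2))  # ['c', 'd', 'a', 'b', 'g', 'h', 'e', 'f']
    print(rotate_measures(beats, 2, 1))  # ['b', 'a', 'd', 'c', 'f', 'e', 'h', 'g']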
- We can also have some fun with hip hop. I used RotComposer to swap
adjacent chunks (i.e. "a b c d" becomes "b a d c").
Geto Boys - Damn It Feels Good To Be A Gangsta:
Original:
damn_it_feels_good_to_be_a_gangsta.wav
Processed:
feels_damn_it_to_be_a_good_gangsta.wav
Outkast - Miss Jackson:
Original:
miss_jackson.wav
Processed:
(by ROT; the beat detector didn't work perfectly on this one): miss_jackson_rot.wav
(and just for fun I reversed each chunk): miss_jackson_rev.wav
[Ron Weiss]
MeapaemComposer
- Original: groove.wav
Processed: grooveevoorg.wav
Start with Madonna's "Get into the Groove" sample. Extract events,
with sensitivity set at a little less than halfway. Extract arbitrary
feature (we'll ignore it). Run MeapaemComposer. Hear the non-groovy
goodness as each small segment is played forwards and then backwards,
maintaining the song's form but disrupting its rhythmic flow entirely.
[Douglas Repetto]
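The palindrome effect of MeapaemComposer is simply each chunk followed by its own reversal, e.g.:

    def meapaem(chunks):
        """Play each chunk forwards and then immediately backwards."""
        return [c + list(reversed(c)) for c in chunks]

    print(meapaem([[1, 2, 3], [4, 5]]))  # [[1, 2, 3, 3, 2, 1], [4, 5, 5, 4]]

(A rough illustration, not MEAPsoft's code.) Doubling every segment this way is what destroys the rhythmic flow while leaving the large-scale form intact.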
- Original: Jeff Snyder - Getting Dub (mp3)
Processed: getting-dub-MEAPED.mp3
- I set it to detect beats, and unchecked the half-tempo box.
- I enabled the Length feature extractor (doesn't matter which one).
- I selected the MEAPAEM composer
- I ran the synthesizer!
[Jeff Snyder]
NearestNeighborComposer
- Original: LIB.wav
Processed: LIB-order-10.wav
"Let It Be" beat-tracked with MFCC features, then composed by nearest neighbor walk, then selecting only every 10th segment for the synthesizer (by modifying the EDL outisde of MEAPsoft). [Dan Ellis]
- Original: oops.wav
Processed: oops_NN.wav
Start with Britney Spears's "Oops, I Did It
Again" sample. Detect events and set sensitivity and density to
high. Extract features: AvgMelSpec, AvgSpecCentroid, AvgSpecFlatness,
ChunkLength, ChunkPower, SpectralStability, with all weights set to
1.0. Run NearestNeighborComposer. The result groups similar sounding
segments of the song together, so that there are several clumps of
drum hits, vocalizations, synth noises, etc.
[Douglas Repetto]
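The "nearest neighbor" ordering can be pictured as a greedy walk through feature space: start at some chunk, then keep jumping to the closest chunk that hasn't been used yet, which is what produces the clumps of similar-sounding material. A rough sketch, assuming one feature vector per chunk (illustrative, not MEAPsoft's implementation):

    import numpy as np

    def nearest_neighbor_walk(features, start=0):
        """Greedy ordering: always move to the closest unvisited chunk."""
        feats = np.asarray(features, dtype=float)
        unvisited = set(range(len(feats)))
        order = [start]
        unvisited.remove(start)
        while unvisited:
            current = feats[order[-1]]
            nxt = min(unvisited, key=lambda i: np.linalg.norm(feats[i] - current))
            order.append(nxt)
            unvisited.remove(nxt)
        return order

    feats = [[0.0, 0.0], [5.0, 5.0], [0.1, 0.2], [5.2, 4.9]]
    print(nearest_neighbor_walk(feats))  # [0, 2, 1, 3] -- similar chunks end up adjacent

The per-feature weights mentioned above would just scale the corresponding dimensions before the distances are computed.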
- Original: tiny_cities.wav
Processed: tiny_cities_nn.wav
Nearest neighbor Modest Mouse gibberish
[Ron Weiss]
- Original: Chords - from "Three Voices", by Morton Feldman (sung by Joan La Barbara) (mp3)
Processed: Chords-MEAPED.mp3
- I set it to detect events at relatively high sensitivity and density.
- I enabled AvgChroma and AvgPitchSimple, and weighted them both equally, to try to catch similarity in both chords and notes, and to separate out the breath noises.
- I selected the nearest neighbor composer, and applied a very short (about 4 ms) crossfade.
- I ran the synthesizer!
[Jeff Snyder]
NearestNeighborSwapComposer
- Original: Genie.mp3
Processed: Genie_NNSwap.mp3
Start with Christina Aguilera's "Genie In a Bottle".
Detect beats. Extract AvgMFCC feature. Run NearestNeighborSwapComposer.
The result maintains the form of the song but swaps similar segments for
one another.
[Douglas Repetto]
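One way to picture NearestNeighborSwapComposer: keep every chunk's time slot, but play the chunk that sounds most like it instead. The sketch below substitutes each chunk with its nearest neighbor in feature space (an illustration under that assumption, not MEAPsoft's exact pairing logic):

    import numpy as np

    def nn_substitute(chunks, features):
        """Keep the timeline, but play each chunk's nearest neighbor instead."""
        feats = np.asarray(features, dtype=float)
        out = []
        for i in range(len(chunks)):
            dists = np.linalg.norm(feats - feats[i], axis=1)
            dists[i] = np.inf                 # never pick the chunk itself
            out.append(chunks[int(np.argmin(dists))])
        return out

    chunks = ["kick1", "snare1", "kick2", "snare2"]
    feats = [[1.0, 0.0], [0.0, 1.0], [1.1, 0.1], [0.1, 1.2]]
    print(nn_substitute(chunks, feats))  # ['kick2', 'snare2', 'kick1', 'snare1']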
HeadBangComposer
- Original: William Byrd - Galliard (23b) (mp3)
Processed: Galliard-23b-MEAPED.mp3
Original: Gregory Isaacs - (from slum in Dub) (mp3)
Processed: gregory-isaacs-MEAPED.mp3
Original: Jeff Snyder - If that's the Way you feel (mp3)
Processed: itsalright-jeffsnyder-MEAPED.mp3
- I set it to detect events at very low sensitivity (to avoid false triggers) and medium density
- I enabled the Length feature extractor
- I selected the Headbang composer
- I ran the synthesizer!
[Jeff Snyder]
- Original: flaming.wav
Processed: flaming_headbang.wav
Head banging to The Flaming Lips - Yoshimi Battles The Pink Robots Part 2.
[Ron Weiss]
VQComposer
- Original: spoon.mp3
Processed: spoon_mfcc_vq.mp3
I ran the beat segmenter on Spoon - All The Pretty Girls Go To The
City, extracted AvgMFCC features, and used VQComposer to reconstruct the
input with 50 codewords and 1 beat per codeword. So the original song
is reconstructed using only 50 representative chunks.
Processed: spoon_chroma_vq.mp3
Same as above but with AvgChroma features.
[Ron Weiss]
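The VQ idea: cluster all the per-beat feature vectors into 50 codewords, pick one representative chunk per codeword, and then rebuild the song by playing, for each original beat, the representative of the codeword it falls into. A rough sketch using scikit-learn's k-means as the quantizer (an assumption made for illustration; MEAPsoft has its own VQ code):

    import numpy as np
    from sklearn.cluster import KMeans

    def vq_reconstruct(chunks, features, n_codewords=50):
        """Rebuild the chunk sequence using one representative chunk per codeword."""
        feats = np.asarray(features, dtype=float)
        km = KMeans(n_clusters=n_codewords, n_init=10).fit(feats)
        # Representative for each codeword: the chunk closest to its centroid.
        reps = {k: chunks[int(np.argmin(np.linalg.norm(feats - c, axis=1)))]
                for k, c in enumerate(km.cluster_centers_)}
        # Replace every beat with its codeword's representative chunk.
        return [reps[int(label)] for label in km.labels_]

Swapping AvgMFCC for AvgChroma changes which chunks count as "similar", which is the only difference between the two versions above.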
HMMComposer
- Original: LIB.wav
Processed: LIB_hmm.mp3
I ran the beat segmenter on The Beatles - Let It Be, extracted
AvgChroma features and used HMMComposer (25 states, 8 beats per state)
to generate a sequence of segments that mimics (to some extent) the
high level dynamics of the input. Note how segments from one part of
the song seamlessly transition to segments from a completely different
part.
[Ron Weiss]
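What HMMComposer is doing can be roughly pictured as: group the beats into a small number of states, learn how states follow one another in the original song, then walk that model to generate a new beat sequence. The sketch below uses a plain first-order Markov chain over precomputed state labels as a simplified stand-in (it is not MEAPsoft's HMM code, and the labels here are made up):

    import random
    from collections import defaultdict

    def markov_remix(chunks, labels, length):
        """Generate chunks by walking a Markov chain over per-chunk state labels."""
        transitions = defaultdict(list)   # state -> states observed right after it
        pools = defaultdict(list)         # state -> chunks carrying that label
        for a, b in zip(labels, labels[1:]):
            transitions[a].append(b)
        for chunk, lab in zip(chunks, labels):
            pools[lab].append(chunk)
        state, out = labels[0], []
        for _ in range(length):
            out.append(random.choice(pools[state]))
            state = random.choice(transitions.get(state, labels))
        return out

    chunks = ["v1", "v2", "c1", "c2", "v3", "c3"]   # verse-ish and chorus-ish beats
    labels = ["v", "v", "c", "c", "v", "c"]
    print(markov_remix(chunks, labels, 8))

Because the transitions are learned from the song itself, the output tends to stay in one section for a while and then move on, which is the "high level dynamics" effect described above.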
More Advanced Examples
- Epitaph [Larry Polansky, Dartmouth College]
- Hanging Betsy [Roger Dean, Australia]
Read the description of Hanging Betsy.
- Original: davedrums-original.mp3
Original: harpsichord-junk-original.mp3
Processed: drums-n-harpsihord-MEAPED.mp3
I took a recording I had of my friend Dave Skogen (of the Youngblood Brass Band and Cougar) improvising on the drums. The recording wasn't to a click track, and it would be a hassle to try to cut together parts that were the same tempo. I wanted to get only the phrases that he played which would fit together rhythmically, so I used the headbang composer. After that, I had a nice groovy and impossible beat (it jumps rapidly between different playing styles, including waving brushes in the air for a whooshing effect). I then took another bit of source material I had lying around - me testing some just intonation chords on harpsichord - and sorted that file by average mel spectrum. Then I pulled the two tracks into a sequencer (Digital Performer) and chopped up the sorted harpsichord to match the tempo of the new drumbeat.
The details:
- I loaded in the recording of Dave on drums. I set it to detect events.
- I enabled the length feature extractor.
- I selected the Headbang composer and applied a short fade in/out, but didn't select crossfade (so that the clicks would be removed but there wouldn't be any tempo change from overlapping of chunks).
- I went to the "options" pane and set it to save .feat/.edl files. I wanted to be able to look at the length data later.
- I ran the synthesizer.
- I loaded in the recording of my random harpsichord chords. I set it to detect events.
- I enabled the Average Mel Spectrum feature extractor.
- I selected the sort composer and sorted from high to low.
- I ran the synthesizer.
- I opened up the .edl file from the drums in a text editor. Looking at the segment lengths, I found that the most common length was .368 seconds, and 60/.368 is about 163.043 BPM. This told me the tempo of my new drum track (this calculation is sketched below).
- I placed the drum track into a sequencer and set the tempo to 163.043. Then I chopped up the harpsichord chords in their sorted order and placed them at will according to the beat grid associated with that tempo. Voila!
[Jeff Snyder]
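The tempo step above (most common segment length converted to BPM) is easy to reproduce from any saved .edl/.feat segment lengths. A minimal sketch of the arithmetic (the length values here are made up):

    from collections import Counter

    def tempo_from_lengths(lengths_seconds):
        """Most common beat length -> tempo in BPM (60 seconds / beat length)."""
        most_common_length, _ = Counter(lengths_seconds).most_common(1)[0]
        return 60.0 / most_common_length

    lengths = [0.368, 0.368, 0.370, 0.368, 0.742, 0.368]   # illustrative values
    print(round(tempo_from_lengths(lengths), 3))           # 163.043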