NIME final performance and documentation

So I kind of dropped the ball on documenting my NIME work during the second half of the semester leading up to the actual performance. I'm going to use the concept presentation we did mid-semester to show the process I went through in creating the music and the interface, and I'll write a little about how I developed the actual software and wearables for the show.

You can hear an example of the Arduino-based synth I made in the soundtrack to this piece: The Life of Cranes.

My goal was to create music using samples the way Terry Riley used them, cutting them up and rearranging them, in a way that didn't necessarily sound like glitch. The final performance obviously sounds pretty glitchy, but I got close to what I thought I could achieve, and kind of decided that it might not be possible to do exactly what I was hearing in my head. I used Processing and Pure Data, talking to each other through OSC, to make the software. It's a relatively simple concept: I have two force-sensing resistors embedded in my shorts, one determining how fast to play the sample on whatever channel I'm on, and the other determining at what point in the sample to start playback. This gives me a certain amount of control over the sound while also bringing in a lot of randomness. The more I played with the instrument, the better I got at producing the exact sound I wanted, but as you can hear in the performance, it was pretty difficult to do. I had other controls: how long the sample would play for, and which channel, or sample, was being manipulated. I could also record my performance of a particular sample in real time and then play it back. That's what creates the loops. I would record all of the inputs I was giving as an array in Processing, and that array would then play over, sending the same signals to Pure Data as though I were repeating the performance. In the end it is very much like an MPC or any general sampling device, but without a central metronome and without a lot of the other features. My ultimate project is to build this out into a normal interface (not a butt interface) that gives me control over more aspects of the samples and the way they are played (using my hands instead of my butt should certainly help) while keeping the core principles intact: randomness and no metronome.
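To make the record-and-replay idea concrete, here's a minimal Processing sketch of how that loop mechanism could work, reduced to a single channel. This is a reconstruction for the post, not my original code: the /play address, the oscP5 setup, and the readFsr() stand-in are all names I'm making up here.

```java
// Minimal sketch of recording control inputs and replaying them over OSC.
// Assumes the oscP5 library; Pure Data listens on port 9000.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress pd;
ArrayList<float[]> recorded = new ArrayList<float[]>();
boolean recording = false;
int playhead = 0;

void setup() {
  osc = new OscP5(this, 12000);
  pd  = new NetAddress("127.0.0.1", 9000);
}

void draw() {
  float rate  = readFsr(0);                      // butt FSR 1: playback speed
  float start = readFsr(1);                      // butt FSR 2: start point in the sample

  if (recording) {
    recorded.add(new float[] { rate, start });   // capture this frame of input
    send(rate, start);                           // and play it live at the same time
  } else if (!recorded.isEmpty()) {
    float[] f = recorded.get(playhead);          // replay the stored inputs
    send(f[0], f[1]);
    playhead = (playhead + 1) % recorded.size(); // loop length = however long I recorded
  }
}

void send(float rate, float start) {
  OscMessage m = new OscMessage("/play");
  m.add(rate);
  m.add(start);
  osc.send(m, pd);                               // Pure Data does the actual playback
}

float readFsr(int i) {
  return 0.5;                                    // stand-in for the real sensor read
}

void keyPressed() {
  recording = !recording;                        // stand-in for the sleeve switch
  if (!recording) playhead = 0;
}
```

Because what gets stored is the control data rather than audio, every replay runs through the same playback path in Pure Data, and since each channel's loop is exactly as long as the recorded gesture, loops of different lengths drift in and out of phase with no metronome to reconcile them.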

I think the performances I could do with a real interface would be a lot more interesting. For NIME, in the interest of time more than anything, I felt I had to speed through some of the concepts I wanted to spend more time on. The thing I find most fascinating about this method of composition, if you can call it that, is the way that loops of samples have their own melody and rhythm, yet influence each other as they loop in and out of phase. To really appreciate that, it works better to introduce each new sample alone and let it sit for a while, so you can get a sense of the repetition and melodies working. You get some snippets of that in this performance, but overall it's too crazy to really understand why I like it, I think. I also screwed up a few times, mostly in the middle section, when I'm playing the Gaelic psalms and the Whitney beat keeps coming in. That wasn't intentional. The way I had organized the channels (and the limitation of having to go back and forth with two sock switches, which were a little unreliable) caused the samples to play out of order.

As for the construction of the butt interface, it involved a few simple soft circuits and a lot of wiring. Besides the FSRs in the butt, which were made of conductive thread and resistive foam, I had a potentiometer made of resistive thread on the arm sleeve, and three switches made of conductive thread and fabric, one on each sock and another on the sleeve. The potentiometer on the sleeve controlled the length of the sample. Because it had a limited range, that attribute was much less dynamic than the playback speed and start point controlled by the butt FSRs, but that was okay for this particular performance. The switches on my socks stepped back and forth between channels 0 to 7, each of which held a different sample that could be played live and then have its performance recorded. The switch on my sleeve controlled the recording: I could press it to start and end a recording, and press it again to delete that recording on a given channel.
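Sketched in code, the switch logic was roughly this (my own variable names, written from memory rather than the real code, and the real version also had to debounce the flaky fabric switches):

```java
// Approximate logic for the sock switches (channel select) and the
// sleeve switch (record/delete).
int channel = 0;
final int NUM_CHANNELS = 8;              // channels 0-7, one sample per channel
boolean recording = false;
boolean[] hasLoop = new boolean[NUM_CHANNELS];

void sockSwitchPressed(boolean rightFoot) {
  // one sock steps forward through the channels, the other steps back
  channel = (channel + (rightFoot ? 1 : NUM_CHANNELS - 1)) % NUM_CHANNELS;
}

void sleeveSwitchPressed() {
  if (recording) {
    recording = false;                   // second press: end the recording
    hasLoop[channel] = true;
  } else if (hasLoop[channel]) {
    hasLoop[channel] = false;            // pressing again deletes that channel's loop
  } else {
    recording = true;                    // first press: start recording
  }
}
```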

I did a lot more sewing this semester than I had in the past, and got pretty okay at it. I had never worked with wearables, but Merche and Antonius helped me a lot with understanding how the circuits worked and how to build them into clothes so that they would be relatively reliable. The process took a lot of trial and error, and I was often tempted to just sew prefab FSRs and switches into my clothes, because they are so much more reliable and less likely to fall apart due to my shoddy sewing. But I learned a lot about how circuits work by actually building them myself, and in the end I think I actually preferred what I had made to the prefab versions. The butt FSRs in particular were much more effective because the foam could contour to my butt, giving me a lot more control over the interaction.

Before the butt suit, I had built a really basic controller (images TK) which I hope to expand on in the next version of this project. For the suit, I simply soldered all of my leads to a serial connection and used that to go into the board, where the wires from my original interface going into the Arduino were replaced with the same wires from the suit. The Arduino code was super simple, just serial reads for six analog inputs, which went over USB into my computer and into Processing. Processing handled all of the inputs and recording functions, and only sent signals to Pure Data when it wanted to play a sample. Pure Data was used simply to play samples and add a touch of crossfading and Fourier analysis to prevent as much clipping and as many glitchy sounds as possible.
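On the Processing side, reading those six values might look something like the sketch below. I'm assuming here that the Arduino printed the readings as one comma-separated line per frame; I don't remember the exact protocol, so treat the format as illustrative.

```java
// Reading six analog values from the Arduino over USB serial.
import processing.serial.*;

Serial port;
float[] inputs = new float[6];           // 2 butt FSRs, sleeve pot, 3 switches

void setup() {
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');                // fire serialEvent once per full line
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  String[] vals = split(trim(line), ',');
  if (vals.length != 6) return;          // skip partial or garbled lines
  for (int i = 0; i < 6; i++) {
    inputs[i] = float(vals[i]) / 1023.0; // 10-bit ADC reading scaled to 0-1
  }
}

void draw() {
  // inputs[] feeds the recording/playback logic described earlier
}
```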

Another aspect of this process was cooking up the samples themselves. I started off with a bunch of different kinds of songs and samples, and it sounded really crazy and glitchy, to the extent that I had doubts about whether I would ever finish the project in a satisfactory way. I really wanted to use the vocal track from "How Will I Know" that came out after Whitney Houston died, but most of what I could pull from that song actually didn't sound that great. Once I cut the samples up, focused on really specific sections, and layered them together instead of mixing in other songs, it started to sound cool. The Gaelic psalms then worked well because the vibe was very different and they sat in a different range. "How Will I Know" is in F#, but the psalms were in G, so I had to pitch shift them down a half step, which adds to the not-so-great digital quality of the sound, but wasn't a deal breaker for me. The Lil Wayne song, "3 Peat," was also in F#, which was good, though less important since it's used just to create chaos at the end.
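For anyone curious about the numbers: a half step down corresponds to a playback-rate ratio of one over the twelfth root of two, so the psalms were slowed to roughly 94.4% speed, assuming the shift was done by plain resampling rather than a phase-vocoder style shift:

```java
// Rate ratio for shifting pitch down one semitone (G down to F#).
float semitones = -1;
float ratio = pow(2, semitones / 12.0);  // ≈ 0.9439
println(ratio);
```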

When I have the new interface built out, I really want to experiment with different songs, samples, and combinations to try to make music that can be interesting on its own. The butt interface worked well for this specific performance, and the music had to match it by being over the top, but I think I could create some really pretty and interesting music with simpler elements once I get working with a new interface.
