For my final project I made a geometric music video, which performs and generates visuals to my friend YMTK's song 'Down Baby.'
The program could be applied to any song; it would generate a unique (within parameters) sketch based on the inputs of that particular audio track.
The sketch is a fork of Dan Shiffman's example from our Nature of Code class, Animated Network Visualization.
I believe the model is based on a perceptron.
From Shiffman's Nature of Code book:
Invented in 1957 by Frank Rosenblatt at the Cornell Aeronautical Laboratory, a perceptron is the simplest neural network possible: a computational model of a single neuron. A perceptron consists of one or more inputs, a processor, and a single output.
A perceptron follows the “feed-forward” model, meaning inputs are sent into the neuron, are processed, and result in an output. In the diagram above, this means the network (one neuron) reads from left to right: inputs come in, output goes out.
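The feed-forward idea described above can be sketched in a few lines of JavaScript. This is an illustrative example, not the code from the forked sketch; the input values and weights are made up.

```javascript
// Minimal perceptron: a weighted sum of inputs passed through an activation.
// Inputs come in on the left, one output goes out on the right.
function feedForward(inputs, weights) {
  let sum = 0;
  for (let i = 0; i < inputs.length; i++) {
    sum += inputs[i] * weights[i]; // each input is scaled by its weight
  }
  // Step activation: the single output is +1 or -1 based on the sign of the sum
  return sum >= 0 ? 1 : -1;
}

// Two inputs, two weights, a single output
const output = feedForward([0.5, -1], [0.7, 0.2]);
console.log(output);
```

Everything flows one way, which is what makes the model "feed-forward": there are no loops back from the output to the inputs.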
The idea of this project is to create a mutating perceptron environment that programmatically generates animated graphics based on the selected song.
The way I described the project to my friend whose song I am using was as 'a neural network visualization based on one of your songs.' You can see the primary source documentation of this, and just how weird it must be to be my friend, in the text I sent, pictured below:
I have mapped the radius of each input and the output to the amplitude of the song playing, which creates a 'dancing' or 'performing' perceptron.
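A sketch of that mapping, under assumptions: the real sketch would read the live level from p5.sound's p5.Amplitude and use p5's built-in map(); here map() is reimplemented in plain JavaScript so the snippet stands alone, and the radius range is invented for illustration.

```javascript
// Plain reimplementation of p5's map() for illustration
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
}

// p5.Amplitude's getLevel() returns roughly 0.0 (silence) to 1.0 (loud).
// Mapping that level onto a node radius makes the perceptron "dance":
// quiet passages shrink the nodes, loud ones swell them.
function radiusFromLevel(level) {
  const minRadius = 10; // assumed baseline radius at silence
  const maxRadius = 60; // assumed peak radius at full amplitude
  return map(level, 0, 1, minRadius, maxRadius);
}
```

In a p5.js draw() loop, each input node and the output node would be drawn as an ellipse whose diameter comes from radiusFromLevel() on every frame.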
I have also adjusted the opacities of elements in the sketch both for artistic purposes and to attempt to make visible the movement of the machine.
You can see the YMTK version by following the link below.
And to show the generative possibilities of different audio inputs, here is a version set to the Calle 13 song 'El Aguante.'