Visuals data flow: audio from Ableton is sent via JACK to Pd, where it is analyzed and forwarded as OSC messages to Processing over a local network.
Processing receives the pitch and volume information and uses it to set positional, quantity, size, and color parameters for the draw routines.
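A minimal sketch of the receiving end, assuming the oscP5 library, /pitch and /volume address patterns, pitch as a MIDI note number, and volume as a 0..1 float (all of these are my assumptions, not confirmed details of the setup):

import oscP5.*;
import netP5.*;

OscP5 osc;
float pitch = 60;    // latest values received from Pd
float volume = 0;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 9000);  // listening port is an assumption
}

// oscP5 calls this for every incoming OSC message
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pitch"))  pitch  = m.get(0).floatValue();
  if (m.checkAddrPattern("/volume")) volume = m.get(0).floatValue();
}

void draw() {
  background(0);
  float d = map(volume, 0, 1, 10, height);    // volume drives size
  float hue = map(pitch, 36, 96, 0, 255);     // pitch drives color
  fill(hue, 200, 255 - hue);
  ellipse(width / 2, height / 2, d, d);
}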

Using the proMidi 2.0 library in Processing and either MIDI Yoke (XP) or an IAC MIDI bus (OS X), MIDI sequencing of Processing sketches is possible. To keep the MIDI events and the Processing framerate in sync, I wrote a little Pd patch that uses [midirealtimein] to turn MIDI clock ticks into a framerate value. The value is recalculated every quarter note but averaged over a few bars, so the framerate is updated only every few bars rather than on every beat. Doing it this way just makes more sense than calling frameRate() within Processing on every beat.
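For reference, MIDI clock runs at 24 ticks per quarter note, so averaging the tick spacing is enough to recover the tempo. A sketch of that arithmetic in Processing terms (the framesPerBeat knob and the exact averaging window are my assumptions about the patch):

// MIDI clock = 24 ticks per quarter note (MIDI spec), so the average tick
// spacing gives the tempo; framesPerBeat is a hypothetical knob, not a
// detail of the actual patch
float ticksToFps(float avgTickMs, float framesPerBeat) {
  float msPerBeat = avgTickMs * 24.0;   // ms per quarter note
  float bpm = 60000.0 / msPerBeat;      // tempo estimate
  return bpm * framesPerBeat / 60.0;    // frames per second
}

void setup() {
  // at 120 bpm ticks arrive every ~20.83 ms; 15 frames per beat -> 30 fps
  frameRate(ticksToFps(20.833, 15));
}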
ProcessingAbletonPd01.png
software system



Currently I am playing audio in Ableton Live and sending audio data to Processing. To do this I send the audio stream to Pd, inside of which the data is analyzed and passed on as float values via OSC, which "plugs" the data into the sketch.
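If oscP5 is the library on the Processing side (an assumption), its plug() facility makes the "plugs" idea literal: an address pattern is routed straight into a sketch method:

import oscP5.*;
import netP5.*;

OscP5 osc;
float volume = 0;

void setup() {
  size(640, 480);
  osc = new OscP5(this, 9000);               // port is an assumption
  osc.plug(this, "setVolume", "/volume");    // /volume floats land in setVolume()
}

public void setVolume(float v) {
  volume = v;   // assumes Pd sends a 0..1 float
}

void draw() {
  background(volume * 255);
}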


Pd_patch.png
analyze and send
AbletonShot.png
using Send A




squaresExample.png
color and alpha changes



Using pdVst I can open Pd inside Ableton on Windows XP (this could be done with JACK on Mac/Linux) and route audio to it using sends in Ableton. The audio then comes into Pd like a normal input and is analyzed with fiddle~, after which
the pitch and volume data is sent to Processing via OSC messages.
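fiddle~ reports pitch in MIDI units and amplitude in dB (Pd's convention puts full scale at 100 dB), so the values need converting before they map cleanly to visuals. A sketch of those conversions on the Processing side (the function names are mine; the conversion could equally live in the Pd patch):

// convert fiddle~'s MIDI pitch to Hz (equal temperament, A4 = 440 Hz)
float midiToHz(float midi) {
  return 440.0 * pow(2.0, (midi - 69.0) / 12.0);
}

// convert fiddle~'s dB amplitude to a 0..1 gain; Pd's dbtorms maps 100 dB -> 1.0
float dbToGain(float db) {
  return constrain(pow(10.0, (db - 100.0) / 20.0), 0.0, 1.0);
}

void setup() {
  println(midiToHz(69.0));   // 440.0
  println(dbToGain(100.0));  // 1.0
}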

Ascii
ascii_0.png ascii_1.png
I have been working on sketches that map pitch to font selection and volume to font size. These parameters determine the look of the generated abstract ASCII images. Eventually I would like to hook up an expression pedal to control the color.
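A minimal sketch of the mapping, assuming pitch arrives as a MIDI note number and volume as a 0..1 float; the font names and ranges are placeholders, not the ones the sketches actually use:

PFont[] fonts;
float pitch = 60;    // updated from OSC in the real sketch
float volume = 0.5;

void setup() {
  size(640, 480);
  String[] names = { "Courier", "Monaco", "Georgia" };   // placeholder fonts
  fonts = new PFont[names.length];
  for (int i = 0; i < names.length; i++) {
    fonts[i] = createFont(names[i], 32);
  }
  background(0);
}

void draw() {
  // pitch picks the font, volume picks the size
  int which = constrain(int(map(pitch, 36, 96, 0, fonts.length - 1)), 0, fonts.length - 1);
  float sz = map(volume, 0, 1, 8, 72);
  textFont(fonts[which], sz);
  fill(255);
  text("#", random(width), random(height));   // one ascii glyph per frame
}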

Pd/Gem

Using the pdVst and Ableton setup described above, I am playing around with sending data to Gem over a network. One laptop will be dedicated to running Pd/Gem, taking video input from a DV camera. I will use video feedback and video effects in Pd/Gem to manipulate the image. The effect and size parameters currently receive pitch and volume information: volume is mapped to pix_gain, and pitch is mapped to the size of the cube the image is textured onto. Gem allows for easy manipulation of video, which prompted me to use it instead of Processing for this purpose. Eventually I would like to look into ways of passing video between Gem and Processing, so that I could texture-map a Processing sketch onto an object in Gem while both patches run at full functionality.
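The Gem patch itself is graphical, but the mapping logic is small. As a rough Processing analogue (explicitly not the Gem patch), assuming volume and pitch arrive as 0..1 floats:

// rough analogue of the Gem mappings: volume scales brightness the way
// pix_gain does, and pitch scales a textured quad standing in for the cube
float volume = 0.5, pitch = 0.5;   // updated from OSC in practice
PImage frame;

void setup() {
  size(640, 480, P3D);
  frame = createImage(320, 240, RGB);   // stand-in for the DV camera frame
}

void draw() {
  background(0);
  translate(width / 2, height / 2);
  tint(255 * volume);                    // volume -> gain
  float s = map(pitch, 0, 1, 50, 300);   // pitch -> cube size
  beginShape(QUADS);
  texture(frame);
  vertex(-s / 2, -s / 2, 0, 0);
  vertex( s / 2, -s / 2, frame.width, 0);
  vertex( s / 2,  s / 2, frame.width, frame.height);
  vertex(-s / 2,  s / 2, 0, frame.height);
  endShape();
}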