Workshop with Henry J. Ng (1)

The first round of the t-stick workshops concluded at the New Adventures in Sound Art (NAISA) space in Toronto on 29 September. I want to thank the NAISA administration for letting us come in right before Toronto’s Nuit Blanche kicked off. An installation for the all-night event was partially set up in the space, giving a true workshop, or work-in-progress, feel for the t-stick workshop participant, composer Henry J. Ng, and me.

Henry had mentioned his interest in seeing “under the hood” of the t-stick, so a good part of the workshop session was spent examining how gesture extraction is done in Max/MSP. That was the extent of the technical discussion; I only briefly went over the actual hardware of the t-stick. Henry was really the only composer keen on using our time together to understand how data is streamed through the various levels of programming (i.e., the raw, cooked, instrument and system layers). For an explanation of these layers, refer to the blog entry entitled T-Stick output namespace, under the Mapping menu item. I concluded this part of the workshop by reinforcing the point that I want the workshop participants to throw ideas at me for the different modes of performance needed to realise their compositional projects; I will then do the necessary programming to extract the required physical gestures.
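To give a rough sense of how data might move through those layers, here is a minimal Python sketch. It is only a conceptual illustration, not the actual Max/MSP patch: every name, range and scaling factor below is an assumption.

```python
# Hypothetical sketch of a layered mapping pipeline; the real T-Stick
# processing lives in Max/MSP, and all names/ranges here are invented.

def raw_layer(sample):
    """Raw layer: sensor values exactly as streamed from the hardware."""
    return sample  # e.g. {"accel": (512, 498, 630), "pressure": 812}

def cooked_layer(raw):
    """Cooked layer: calibrate/normalise raw values into useful units."""
    ax, ay, az = raw["accel"]
    # Assumed 10-bit ADC centred at 512, scaled to roughly +/-1 g.
    to_g = lambda v: (v - 512) / 256.0
    return {
        "accel_g": (to_g(ax), to_g(ay), to_g(az)),
        "pressure": raw["pressure"] / 1023.0,  # normalised 0..1
    }

def instrument_layer(cooked):
    """Instrument layer: extract named gestures from cooked data."""
    gx, gy, gz = cooked["accel_g"]
    magnitude = (gx**2 + gy**2 + gz**2) ** 0.5
    return {
        "shake": magnitude > 1.5,        # crude 'shake' detector
        "squeeze": cooked["pressure"],   # continuous squeeze amount
    }

def system_layer(gestures):
    """System layer: map named gestures onto synthesis parameters."""
    return {
        "trigger_grain": gestures["shake"],
        "filter_cutoff": 200 + 5000 * gestures["squeeze"],
    }

sample = {"accel": (512, 498, 630), "pressure": 812}
print(system_layer(instrument_layer(cooked_layer(raw_layer(sample)))))
```

The point of the layering is that each stage has a single job: calibration never leaks into the gesture logic, and the synthesis mapping sees named gestures rather than raw sensor values.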

Next, I gave Henry some playing time while we both observed the sensor outputs at the ‘raw’ level. This led us to an interesting discovery about what happens to accelerometer data when the t-stick is in free fall: the data streams from all three axes momentarily read zero. I haven’t found an actual playing technique that results in all axes sending out zero. The implication is that we may have discovered a particular data profile that indicates throwing the t-stick into the air. Any comments about this phenomenon from my readers are welcome. Your comments may even help me figure out how to extract a gesture from the data output.
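For readers who want to experiment, the behaviour makes physical sense: an accelerometer measures proper acceleration, so at rest its axes sum (as a vector) to roughly 1 g of gravity, while in free fall the reading on every axis collapses towards zero. A common way to turn that into a detector is to watch the magnitude of the acceleration vector. Here is a sketch of that idea in Python; the threshold and debounce values are guesses, not measured t-stick behaviour.

```python
import math

# Hypothetical free-fall ("throw") detector; FREE_FALL_G and MIN_SAMPLES
# are assumptions, not measured T-Stick values.
FREE_FALL_G = 0.2   # vector magnitude below this ~= weightless
MIN_SAMPLES = 5     # require several consecutive samples to reject noise

def detect_throw(samples):
    """Return True if the accelerometer stream shows a free-fall window.

    At rest, gravity keeps the vector magnitude near 1 g; during
    free fall the proper acceleration on all axes drops towards zero,
    so the magnitude does too.
    """
    run = 0
    for ax, ay, az in samples:  # samples already scaled to g
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        run = run + 1 if magnitude < FREE_FALL_G else 0
        if run >= MIN_SAMPLES:
            return True
    return False

# At rest (~1 g on the z axis), then thrown (near zero on all axes):
stream = [(0.0, 0.0, 1.0)] * 10 + [(0.02, 0.01, 0.05)] * 8
print(detect_throw(stream))  # True
```

Requiring several consecutive low-magnitude samples guards against a single noisy reading firing the ‘throw’ gesture by accident.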


As a result of going through some Max/MSP patch details and explaining how the gesture extraction algorithms follow from my multi-layered mapping strategies, I was struck by a recurring dilemma. While developing mappings for a digital musical instrument (DMI), and I would say this holds for any type of DMI, there arises a conflict between sound selection and sound initiation or excitation. More on this in future posts…
