The third t-stick composition workshop got under way at the matralab, in Montreal, on 24 September. This time, I met with American composer Patrick Hart to work through some of his ideas on the gestural control of recorded sounds and music. I had prepared a fairly narrow range of sound types and some preliminary mapping (see video example, below), but over the course of the workshop, it became clear to me that Patrick had a much richer palette of sounds in mind.
After playing the t-stick a bit, we discussed which synthesis platform would be best for sample playback. We compared Logic’s EXS sampler and Native Instruments’ Battery. We also briefly looked at cataRT (Real-Time Corpus-Based Concatenative Synthesis), developed at Ircam. This last piece of software appears to offer an impressive range of parameters to which one could map t-stick data streams. However, we weren’t convinced of cataRT’s suitability for real-time control, as it tends to be demanding on the computer’s CPU.
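The core of mapping a t-stick data stream to a synthesis parameter is usually a scale-and-clamp step: a raw sensor reading is normalized and remapped into the range the sampler expects. The sketch below illustrates the idea in Python; the sensor range and the grain-rate parameter are hypothetical examples, not taken from the actual t-stick firmware or cataRT.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap a raw sensor value into a target parameter range.

    The input is first normalized to 0..1, clamped so out-of-range
    readings cannot push the parameter past its limits, then scaled
    into the output range.
    """
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))  # clamp to keep the parameter in bounds
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mapping: a 10-bit pressure reading (0-1023) drives a
# grain playback rate between 0.5x and 2.0x.
rate = scale(512, 0, 1023, 0.5, 2.0)
```

In practice the same function would sit between the controller's incoming data stream (often OSC messages) and whichever parameter of the sampler or concatenative engine it drives.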
I am happy to report that we did not spend a lot of time on technical issues. In fact, I detected a naturalness, or familiarity, with gestural controllers in Patrick’s handling of the t-stick. He seemed very comfortable with the technological ‘magic’ of the DMI. So, the remainder of the workshop was spent listening to sounds and discussing large-scale physical movements that might be appropriate for Patrick’s compositional project.
It’s still early in the process, but I sense the development of a piece built on minimal movement with maximal sound variety. By minimal movement, I don’t mean gestures that are low in energy or tiny in expanse. Rather, I am referring to a minimal movement vocabulary of highly energetic physical gestures.