Henry Ng and I met for a second face-to-face workshop session at the EMF Institute last weekend. We were both in New York for the T-Stick Composition Workshops, and so we enjoyed some downtown walking and a concert together in addition to our workshop session.
Henry has laid out some very interesting challenges for us to overcome. He is exploring a concurrence between gesture and sound spatialisation in his t-stick composition. In addition to initiating and modulating sounds, he would like the t-stick to control the localisation of these sounds in the context of a multi-channel work.
I am reminded of a comment I received from a former teacher. He appropriately speculated that the effectiveness of the t-stick may be diminished if I ask the stick (programme it) to do too much. He was responding to ‘one’ of my definitions of the t-stick as an agent of both electroacoustic music synthesis and diffusion. In short, if I use the t-stick to create (initiate and modulate) sounds in an interesting way via effective gesture-to-sound mappings, will I also be able to use the t-stick to simultaneously diffuse my sounds in a controlled and meaningful way? Perhaps too much of one thing (either creating sounds or diffusing sounds) diminishes the other. For instance, it is very useful to be able to develop mapping strategies that allow the initiation of a sound to be separate from its modulation, so that one can initiate new sonic events while modulating a presently sounding event – permitting the possibility of polyphony. What happens to my control over polyphony if I also have to concern myself with controlling where the sound is placed in the performance space?
To begin solving the challenges posed by Henry, we implemented some Spat (http://www.forumnet.ircam.fr) examples that he had prepared and brought with him for the workshop. Because we are continuing to use the extremely useful DOT Mapper software to convey t-stick gesture extraction data to synthesis parameters – or, in this case, Spat parameters – we first had to write an XML document that declares the Spat parameters to the mapper. Second, we included the necessary dot.admin Max object and the message routing in Henry’s example patch. This part of the process was swift and yielded immediate results. Third, we made some preliminary mapping tests with a five-channel system provided by the EMF. We primarily experimented with ‘throwing’ the sound to specific loudspeakers through rotational movements of the stick. This led to a data recording segment, during which I recorded the t-stick datastreams of specific t-stick gestures (using my Datastream Max patch). Henry can now simulate some of the t-stick movements in an effort to see which gestures may be effective spatialisation gestures.
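The ‘throwing’ idea above can be sketched outside of Max to show the underlying logic: a fast rotation snaps the sound to whichever loudspeaker the stick ends up pointing at, while slow rotations are ignored. This is a minimal Python sketch, not our actual Max/Spat patch; the evenly spaced speaker azimuths, the single-angle reduction of the t-stick’s rotation data, and the speed threshold are all hypothetical assumptions for illustration.

```python
# Hypothetical azimuths (degrees) for a five-channel ring;
# the real EMF loudspeaker layout was not necessarily evenly spaced.
SPEAKERS = [0.0, 72.0, 144.0, 216.0, 288.0]

def angular_distance(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def nearest_speaker(azimuth):
    """Index of the loudspeaker closest to the given azimuth."""
    return min(range(len(SPEAKERS)),
               key=lambda i: angular_distance(azimuth, SPEAKERS[i]))

def throw_target(prev_azimuth, azimuth, dt, speed_threshold=180.0):
    """'Throw' gesture: if the angular speed (degrees/second) between
    two successive azimuth readings exceeds the threshold, return the
    index of the speaker to throw the sound to; otherwise return None
    (slow rotations do not relocate the sound)."""
    speed = angular_distance(azimuth, prev_azimuth) / dt
    if speed >= speed_threshold:
        return nearest_speaker(azimuth)
    return None
```

For example, a 90-degree rotation over 0.2 seconds (450 degrees/second) would throw the sound to the speaker nearest 90 degrees, whereas the same rotation over several seconds would leave the localisation untouched. In the real patch this decision would sit between the DOT Mapper output and the Spat source-position messages.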
Throughout the workshop, I believe Henry and I were contemplating the task of simultaneously playing and diffusing sounds – which I mention above. To this dilemma, I added an earlier proposition: I have suggested to Henry that he concern himself solely with the diffusion aspects of his piece. This isn’t to say he would develop his spatialisation regardless of the sounds of the t-stick; rather, I have proposed he use sounds and gesture-to-sound mappings that I have ‘already’ established for the stick. Consequently, the creative challenges of his work may be more restricted, and I would say that having more limitations in play allows the imagination to wander more freely. Plus, I’d be happy to illustrate how a fixed voice for the t-stick can be used to compose ‘different’ works.