On 9 December 2010, I was invited by the Electronic Music Foundation Institute (EMF) in New York to give an overview of, and brief performances on, the t-stick digital musical instrument. The event coincided with the beginning of the second round of meetings for the 2010 T-Stick Composition Workshops. I truly enjoyed the close contact I had with the audience during the talk. Most importantly, I had a lot of fun auditioning some of my new material for the soprano t-stick. For me, the studio presentation-demonstration has always been an unusual context in which to ‘perform’, but I remind myself from time to time of the origins of the musique de chambre tradition, and in this light I appreciated the EMF giving me such an intimate performance opportunity.
It occurred to me that there is something of New York City in the t-stick, although this impression may be the result of the new sounds and sound-to-gesture mappings on which I’m currently working. I’ve been using physical modelling to develop sustained sounds that I hear as quite compatible with the timbre of woodwind instruments. In addition, I’ve been using abrupt rotational gestures and shaking to access a ‘wide’ variety of audio samples. Patrick Hart was kind enough to lend me a large collection of sounds after laying them out in Logic’s EXS software sampler. I found it quite easy to import the sampler into one of my own projects and to begin playing and manipulating the collection. Briefly, in my sustained and full-bodied wind-like timbres, I clearly imagine both the endlessly stretched avenues of Manhattan and the constant intensity of the streets (e.g., the quantity of people, and the inexhaustible, contrasting movement trajectories of city traffic, with its sound trajectories and smell zones). With regard to the large array of audio samples, I sense a metaphor that compares the diversity of the samples with the diverse origins of New York City’s inhabitants.
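For readers curious how a shaking gesture might reach into a large sample collection, here is a minimal sketch. Everything in it is hypothetical illustration, not the t-stick’s actual mapping layer: it simply maps a normalised shake-energy value onto an index into a bank of samples, with a little jitter so that repeated gestures do not always retrigger the identical file.

```python
import random

def select_sample(shake_energy, sample_bank, jitter=0.1):
    """Map a normalised shake-energy value (0..1) to one entry of a
    sample bank. A small random offset ('jitter') varies the choice
    slightly between otherwise identical gestures."""
    shake_energy = min(1.0, max(0.0, shake_energy))
    pos = shake_energy + random.uniform(-jitter, jitter)
    pos = min(1.0, max(0.0, pos))
    idx = min(len(sample_bank) - 1, int(pos * len(sample_bank)))
    return sample_bank[idx]
```

With `jitter=0.0` the mapping is deterministic: a gentle shake lands in the low end of the bank and a violent one in the high end. A real mapping would of course smooth the sensor data and shape the response curve rather than use a raw linear index.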
In these presentation-demonstration situations, I don’t often remember to address the point of computer-assisted performance. I am sometimes under the impression that my audience may believe I’m controlling a set of preprogrammed sequences, and that the timbral subtlety I strive for is only the result of organising a set of studio-designed audio files. I would like to assure everyone that the sounds ‘flow’ from my instrument as they would resonate from a conventional acoustic instrument – though this claim requires some explanation, and a definition of the word ‘instrument’. Nonetheless, I wish to state that I do not control predesigned computer sequences: for the most part, the sound you hear is entirely a consequence of my real-time manipulation of the stick.
In the case of triggering (predesigned) audio files, I am constantly exploring new methods of handling the continuant segment of a triggered sound. In my opinion, the terminology ‘trigger’ implies a rather static duplication of audio. My colleague Joe Malloch recently made an insightful suggestion on terminology: instead of speaking about triggers, we should consider envelopes. The most common use of ‘envelope’ in acoustics is the amplitude envelope (i.e., ADSR). But enveloping, break-point filtering or function multiplying are not limited to modulating amplitude – far from it. From Joe’s suggestion, I infer two things: (1) a sound that the t-stick initiates may have a controllable and malleable continuant and decay structure; (2) the shaping of that structure can be a result of modulating amplitude in addition to a host of other synthesis parameters. There should be nothing static about playing the t-stick.