An important concept in sound design is putting sounds into movement, that is, expressing meaningful and effective change in the sound over time: interesting, emotionally compelling, even experientially correct or physically accurate.
The movement of sound amongst sounds, and sounds within a sound, is so common a concept that there’s even a plugin called Movement that expresses this idea — and there are hundreds of other plugins that incorporate movement as a guiding principle in the design of their feature sets.
Techniques for Sonic Movement
There are many ways to impart sonic movement to sound. The main ones used in a DAW environment typically are:
– Slope automation — shaping a series of breakpoints that define slopes of values that change over time.
– Parameter modulation — linking sound parameters to each other, so that one controls another via a Source, Destination and Amount logic.
– Performative gestures — using MIDI or OSC controllers of various kinds, where sound parameters are manipulated in real time based on time-varying human input.
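The first two techniques can be sketched in a few lines of code. This is a minimal illustration, not any particular DAW's API: `envelope_value` evaluates a breakpoint automation curve by linear interpolation (the "slopes" between breakpoints), and `apply_modulation` applies Source/Destination/Amount routings of the kind a modulation matrix encodes. All names and values here are hypothetical.

```python
def envelope_value(breakpoints, t):
    """Evaluate a breakpoint automation curve at time t.
    breakpoints: list of (time, value) pairs, sorted by time.
    Values between breakpoints are linearly interpolated (the 'slope')."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    return breakpoints[-1][1]  # hold the last value after the final breakpoint

def apply_modulation(params, routings):
    """Apply (source, destination, amount) routings, as in a mod matrix."""
    out = dict(params)
    for source, destination, amount in routings:
        out[destination] = params[destination] + params[source] * amount
    return out

# Open a filter cutoff from 200 Hz to 2 kHz over two seconds...
cutoff_curve = [(0.0, 200.0), (1.0, 800.0), (2.0, 2000.0)]
print(envelope_value(cutoff_curve, 0.5))   # halfway up the first slope: 500.0

# ...and let an LFO nudge pitch by a small amount.
params = {"lfo": 0.5, "pitch": 440.0}
print(apply_modulation(params, [("lfo", "pitch", 10.0)])["pitch"])  # 445.0
```

In a real DAW both of these would be evaluated per audio block rather than on demand, but the logic (interpolate a curve; scale a source into a destination) is the same.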
In addition to the main three techniques above, there are also other ways to put sound into movement:
– MIDI and data files — some software allows importing data such as MIDI files, which can serve as a basis for manipulating sound. Note that most controllers send MIDI or OSC data as real-time signals, but the same data can also be stored in and read from files.
– Control voltages — some digital software emulates the voltage schemes of analog synthesis systems such as Eurorack, in which every aspect of a sound is determined by electrical signals, modeled here digitally. And of course, the analog variety uses real control voltages!
– Algorithmic design — tools that are closer to software integrated development environments (IDEs), such as Max/MSP, SuperCollider or ChucK, allow sound to be shaped through coding, whether with visual programming nodes or script-based editors.
– Performing sonic instruments — sound designers can also perform the sounding objects themselves, or even perform the recording equipment by treating it like an instrument, e.g. by twirling a microphone in fast circles over your head while making a recording.
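The control-voltage idea from the list above rests on a simple, well-established convention: in the 1 volt-per-octave standard used by Eurorack and much other analog gear, each additional volt of pitch CV doubles the frequency. A sketch of that conversion (the 0 V reference frequency varies by module; middle C is an assumption here):

```python
def cv_to_freq(volts, base_freq=261.63):
    """Convert a pitch control voltage to frequency using the 1 V/oct
    standard: each volt up doubles the frequency, each volt down halves it.
    base_freq is the frequency at 0 V (middle C chosen here for illustration)."""
    return base_freq * 2 ** volts

print(cv_to_freq(0.0))   # 261.63
print(cv_to_freq(1.0))   # one octave up: 523.26
print(cv_to_freq(-1.0))  # one octave down: 130.815
```

Digital emulations of modular systems compute exactly this kind of exponential mapping where an analog oscillator's circuitry would respond to a real voltage.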
Breakpoint slope editing (image source)
A modulation matrix showing the linking of sound parameters via Source, Destination and Amount (the orange horizontal bars with their numerical values to their right) (image source)
A MIDI controller for hardware-based expressive control over sound (image source)
MIDI file data (image source)
Digital emulation of analog control voltages (image source)
Performing directly with sound-making objects: Ben Burtt at work (image source)
Texture and Gesture
A pair of contrasting compositional concepts often used to describe the expressive movement of sound over time is texture and gesture. Texture can be thought of as the sound in its ‘steady state’ mode, which may be the sound of a vibrating body where not much action beyond a basic vibration is occurring. Gesture then builds on the basic steady state sound through dynamic articulation. The images above and below evoke this general distinction between the repeating vibration and its temporal variation.
Sonic gestures don’t necessarily have to be tied directly to human gestures as a literal input source to shape the sound. Rather, gesture is a term used perhaps akin to phrasing in melody making. A sonic gesture in the sound design context would be an identifiable sound shape set off against and amidst other sound shapes. Such shapes can be dramatic or subtle, i.e. there may be ‘slow motion’ gestures or dramatic ones. What’s important is that compositionally, we notice these distinct shapes in the evolution of the component sounds.
Visual analogy with a sonic gesture (image source)
Visual analogy with a sonic gesture (image source)
Here is an example of a sonic texture, one hour of a V8 engine idling for some reason:
The disc fight from the original Tron (1982) is a good example of a scene loaded with quite a lot of successive sonic gestures.
Hopefully this contrast between the Tron frisbee (technically it’s an ‘identity disc’ ; ) and the idling car engine, combined with the preceding visual metaphors, has clarified this compositional contrast between Texture and Gesture for you.
Both textures and gestures can be creatively and interpretively treated along a spectrum between naturalistic and stylized renderings. A naturalistic sonic gesture might try to map changes in a sound to changes we readily recognize from real life: speed, volume, direction, proximity, impact force, pitch, spatial reverberation, being either on or off, and so on.
In a highly stylized sonic gesture, you might aim more for what just ‘feels right.’ Here, many kinds of associations might guide your design and you don’t have to follow any sense of realistic reference to events as we normally experience them. The sound design might become more ‘musical’ or an artistic form of ‘organized sound,’ and less tied to faithful representation of visualized events.
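The naturalistic end of that spectrum can borrow physical rules directly. As a sketch, two textbook formulas that sound designers' tools often approximate: amplitude falling off roughly as the inverse of distance, and the Doppler shift of an approaching source (all parameter values below are illustrative):

```python
SPEED_OF_SOUND = 343.0  # meters per second, in air at roughly room temperature

def distance_gain(distance_m, ref_m=1.0):
    """Amplitude falls off roughly as 1/distance beyond a reference distance."""
    return min(1.0, ref_m / distance_m)

def doppler_pitch(freq_hz, approach_speed_ms):
    """Perceived frequency of a source approaching the listener
    at approach_speed_ms (classic Doppler formula, stationary listener)."""
    return freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - approach_speed_ms)

print(distance_gain(4.0))                    # a quarter of reference amplitude: 0.25
print(round(doppler_pitch(440.0, 34.3), 1))  # an approaching 440 Hz horn: 488.9
```

A stylized design might take these mappings only as loose inspiration, exaggerating or inverting them in favor of what feels right.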
Below are some screenshots from the app, TouchOSC. OSC is another data protocol, like MIDI, that can be used to shape sound in real time. Apps such as these provide a wide variety of UI elements that can be used to create a very diverse palette of gestures, in this example via touchscreen elements.
TouchOSC (an app) UI elements for producing a wide variety of human gesture data to shape sound.
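Under the hood, an XY pad like those pictured typically sends a stream of normalized floats (a real OSC message pairs an address such as "/pad/xy" with those values). Turning that stream into sonic movement is just a mapping step. A sketch, with hypothetical parameter names and ranges:

```python
def map_range(value, out_lo, out_hi):
    """Map a normalized 0..1 controller value into a parameter range."""
    return out_lo + value * (out_hi - out_lo)

def handle_xy(x, y):
    """Map pad X to filter cutoff (Hz) and pad Y to resonance (0..1).
    x and y are the normalized floats an XY pad would send."""
    return {
        "cutoff": map_range(x, 100.0, 8000.0),
        "resonance": map_range(y, 0.0, 0.9),
    }

print(handle_xy(0.5, 1.0))  # {'cutoff': 4050.0, 'resonance': 0.9}
```

Each incoming touch event re-runs the mapping, so the shape your finger traces on the screen becomes, quite literally, the shape of the sound's gesture.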
Textures, while relatively steady state, can still be complex: composed of many different kinds of timbral components, internal rhythms and pulses, or layered with other sounds. Sound design in an editing and mixing context is inseparable from layering — one is always stacking sounds on top of sounds to produce richness and variety, even in something as narratively simple as the background machine hum of a factory.
Tying these two concepts together, watch a bit of Butch Rovan’s Collide, which illustrates these concepts very well, working both separately and together.
In electroacoustic music, one of the main conceptual frameworks used in the analysis and production of organized sound is Denis Smalley’s Spectromorphology, which is just as relevant for sound designers. This framework is based on a series of contrasts, oppositions and metaphors for the movement of sound in its timbral (color) dimension.
Below are some screenshots of the various conceptual models that make up the spectromorphological framework, to give you an idea of both its logic and its aesthetic realizations.