Electronic Music and Sound Design

Technological Origins and Aesthetic Trajectories

The 1950s-60s

I want to begin by quickly convincing you that the general history of electronic music is highly relevant to sound design. Perhaps you are already convinced, but in case not, here is some background.

All media richly cross-pollinate each other, a condition called Remediation:

The central idea of the book is the concept of remediation. What the authors mean by this is the tendency by which one medium comes to be represented in another one. This is not particularly new as it comes more or less directly from McLuhan who argued that the content of one form of media is always another form of media — the content of print is writing, the content of radio is speech, and so on. Bolter and Grusin’s notion of remediation is essentially an update of this for the digital media age where they see all forms of new media as repurposing or refashioning older forms in one way or another. (source)

Most of the tools we use in sound design today originated in two primary areas of practical and creative activity: the electronic music studios of the 1950s and 60s, and the realm of sound production and post-production associated with music, film and television. In film, ‘sound effects’ were originally prop-based, e.g. shaking a metal sheet to simulate thunder, or clapping coconut halves to mimic a horse’s gallop.

But there was also a rich development in electronic analog equipment that was designed to record, edit and mix all the sounds together whether as a music track, radio broadcast or film soundtrack, usually based on magnetic tape or optical film.

In electronic music, oscillators, filters, function generators, amplifiers and all kinds of technical devices were developed to experiment with pushing the envelope on manipulating sound to exploit its plasticity — its capability to be shaped — as much as possible.

Here I will go over some of the ‘early days’ — and just lightly touch on the earliest days — of electronic music. To set up a little atmosphere for this discussion, watch the first few minutes of this video about Berna 3, a software environment that emulates the electronic music studios of the 50s and 60s. Electronic music instruments in this period originated as test equipment used in radio stations and scientific contexts.

This kind of digital software emulation of analog equipment is another example of remediation. Watch the first few minutes and then maybe just jump around the video a bit to check out some of the various modules in Berna 3.

In this mid-20th Century context there were two competing schools of electronic music making in Europe — French and German — which represented very different approaches. The French school emphasized composing with recorded sounds, while the German school used electronic signals generated from equipment as its main source material.

Today such an aesthetic rivalry might seem silly, and it is certainly outdated, but these early approaches led to many subsequent developments. That history of creative practice remains relevant because all of today’s practices emerged from it.

The German approach was called Elektronische Musik and the French approach Musique Concrète. Using a sports metaphor, you could describe it as a French and German rivalry between microphones and oscillators! (That image association might help you remember this historical moment.)

A visual mnemonic device

There was of course much experimentation with electronic music prior to these 50s-60s approaches. What really got electronic music going in this period, however, was magnetic tape, which originated as WW2 military communications technology in the 1940s. Thus, one thing both the French and German approaches had in common was the audio medium of magnetic tape. They both produced what is known as ‘tape music’ or tape-based music.

Electronic music studio for making ‘tape music.’ (image source)
Daphne Oram, 1962 (image source)

Since it is usually men’s names that are associated with the early era of tape music, here’s a short entry on one of its women pioneers:

The BBC Radiophonic Workshop produced effects and theme tunes for the British broadcaster, including iconic sounds for the sci-fi television and radio programmes Doctor Who and The Hitchhiker’s Guide to the Galaxy, using electronic oscillators and tape loops decades before synthesizers were common. That many of its engineers were women was, and still is, a rarity. Last week, two of them, Daphne Oram and Delia Derbyshire, were celebrated anew in Synth Remix, a concert series of live performances and DJ sets touring Britain….

In the 1950s, Oram became intrigued by the potential of tape recording to transform music by exploding space and time. She was a fan of musique concrète, regularly staying up all night to mix her own tracks. In 1958, after years of badgering the BBC to modernize its music, Oram and her colleague Desmond Briscoe were given a room with some old equipment. Thus began the workshop. (source)

We will contrast these two aesthetic approaches — the Oscillators vs the Microphones! — with the two examples below.

Representing Team Germany
Representing Team France

There are generally three ways to produce sounds: synthesis (originating sound from math, whether embodied in analog circuitry or in digital code), recording (capturing sounds from the real world and working with the stored media), and processing (audio effects that alter either synthesized or recorded sounds).
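These three modes can be sketched in a few lines of code. The sketch below is purely illustrative (plain Python, no audio libraries, and the function names are mine): a sine oscillator stands in for synthesis, a tremolo effect stands in for processing, and recording, which requires a microphone, is only noted in a comment.

```python
import math

SR = 44100  # sample rate in Hz (CD quality)

def synthesize_sine(freq, dur):
    """Synthesis: originate sound from math -- here, a pure sine oscillator."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def process_tremolo(samples, rate=5.0, depth=0.5):
    """Processing: alter an existing signal -- here, slow amplitude modulation."""
    out = []
    for i, s in enumerate(samples):
        lfo = 1.0 - depth * (0.5 + 0.5 * math.sin(2 * math.pi * rate * i / SR))
        out.append(s * lfo)
    return out

# Recording would capture real-world samples via a microphone; either a
# synthesized or a recorded signal can then be fed through processing.
tone = synthesize_sine(440.0, 1.0)   # one second of A440
effected = process_tremolo(tone)     # the same tone, shaped by an LFO
```

Note that processing is deliberately written to take any list of samples: it does not care whether its input was synthesized or recorded, which is exactly the point of the three-way distinction.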

The early German and French approaches produced a great variety of tape music works, and both schools utilized various forms of processing to modify the electronic or recorded sounds. Also, there are French tape music works featuring electronic tones and German tape music works featuring recorded sounds, so one shouldn’t create too strict a division between these general approaches to making tape music.

Digression: Before the 1950s-60s

It’s worth briefly discussing what we can call the Pre-Tape Era of electronic music making. Here is how electronic instruments were presented to the public circa 1907 in Scientific American.

This is Thaddeus Cahill’s Telharmonium, also called the Dynamophone, originally patented in 1895. It was quite the electrical beast!

The Front End
The Back End!

The Telharmonium can be considered the first significant electronic musical instrument and was a method of electro-magnetically synthesising and distributing music over the new telephone networks of Victorian America….Cahill’s vision was to create a universal ’perfect instrument’; an instrument that could produce absolutely perfect tones, mechanically controlled with scientific certainty. The Telharmonium would allow the player to combine the sustain of a pipe organ with the expression of a piano, the musical intensity of a violin with polyphony of a string section and the timbre and power of wind instruments with the chord ability of an organ. Having corrected the ‘defects’ of these traditional instruments the superior Telharmonium would render them obsolete. (source)

Patent technical drawing.

The instrument was partly inspired by much earlier inventions, such as Sömmerring’s musical telegraph (1809).

Cahill also made much smaller electronic instruments, such as the Alectron.

Maurice Martenot’s Ondes Martenot (1928) had a bit of a heyday:

Oscillating radio tubes produce electric pulses at two supersonic sound-wave frequencies. They in turn produce a lower frequency within audible range that is equal to the difference in their rates of vibration and that is amplified and converted into sound by a loudspeaker. Many timbres, or tone colours, can be created by filtering out upper harmonics, or component tones, of the audible notes.

In the earliest version, the player’s hand approaching or moving away from a wire varied one of the high frequencies, thus changing the lower frequency and altering the pitch. Later, a wire was stretched across a model keyboard; the player touched the wire to vary the frequency. In another version the frequency changes are controlled from a functioning keyboard. Works for the ondes martenot include those by the French-born Swiss composer Arthur Honegger, the French composer Darius Milhaud, and the American composer Samuel Barber. (source)
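The heterodyne principle described above is just subtraction: the audible pitch equals the difference between two inaudible radio-frequency oscillators. A toy calculation makes this concrete (the frequencies here are illustrative round numbers, not the Ondes Martenot’s actual operating values):

```python
# Heterodyning: two supersonic oscillators yield an audible tone equal to
# the difference of their frequencies. Values below are illustrative only.

f_fixed = 80_000.0     # Hz -- fixed oscillator, far above human hearing
f_variable = 80_440.0  # Hz -- varied by the player's hand or keyboard wire

audible = abs(f_variable - f_fixed)
print(audible)  # 440.0 -- concert A, well inside the audible range

# A small shift of the variable oscillator changes the sounded pitch:
f_variable = 80_494.0
print(abs(f_variable - f_fixed))  # 494.0 -- roughly a B natural
```

This is why tiny hand movements produce large musical changes on such instruments: a shift of a fraction of one percent in the supersonic frequency sweeps the difference tone across whole octaves.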

The last early electronic instrument I would be remiss not to mention is Leon Theremin’s Theremin (like many inventors, he named his invention after himself).

It consists of a box with radio tubes producing oscillations at two sound-wave frequencies above the range of hearing; together, they produce a lower audible frequency equal to the difference in their rates of vibration. Pitch is controlled by moving the hand or a baton toward or away from an antenna at the right rear of the box. This movement alters one of the inaudible frequencies. Harmonics, or component tones, of the sound can be filtered out, allowing production of several tone colours over a range of six octaves. The American composer Henry Cowell and the French-American composer Edgard Varèse have written for the theremin. The instrument was used in recordings by the American rock group the Beach Boys and in the soundtracks of several science fiction films. (source)

The Electro Avant-Garde in Film Sound

To make the connection between mid-20th century electronic music and popular culture in the realm of film sound (not yet called sound design), the best example for illustration purposes is the film Forbidden Planet (1956) scored by Bebe and Louis Barron. As Rebecca Leydon writes in her chapter in Off the Planet,

A very legal fair use screenshot of a Xerox of a university library book : )

In this excellent chapter is an excellent citation from Philip Brophy,

The more one Xeroxes book chapters in a university library, the more slanted the text becomes!

The Barrons’ approach was distinctive in that they designed tube-based circuits that would frequently self-destruct during sound production. The ‘death throes’ of overheating, melting electrical components were in many cases a desirable sonic result: rather than the pure, stable sine tones of electronic test equipment, the dying circuits yielded unstable, organically evolving sounds.

Below are some clips from Forbidden Planet and from Stanley Kubrick’s 2001: A Space Odyssey (1968), which provide further illustration of Brophy’s point that atonality in film often signals the presence of alien, cosmic otherness.

Musique Concrète

Musique concrète, the French approach to early tape music, used recording tools originally associated with radio stations, and mixed together natural, electronic and instrumental sounds. There were two Pierres associated with the development of musique concrète: Pierre Schaeffer, a radio engineer, broadcaster and writer, and Pierre Henry, a classically trained composer.

The first musique concrète composition actually dates from 1948: Étude aux chemins de fer by Pierre Schaeffer (the video is shown above) is considered the first electroacoustic tape piece, and it was made using turntables:

The noise collage «Études aux chemins de fer» is seen as the first piece of music to organize noises on the basis of an entirely musical aesthetic. Its first public performance in the «Concert de bruits» radio broadcast in Paris on 5.10.1948, along with three other noise collages, marks the birth of the French «musique concrète» school, which draws its material from sounds that are concretely available, deriving specific creative rules from them in each case.

The «Études aux chemins de fer» is based on recordings that Schaeffer made at the Gare des Batignolles in Paris with the aid of six engine-drivers «improvising» according to his instructions. When working on these as a composer, Schaeffer was aiming among other things to use alienation techniques to expunge the semantic components of the noises and emphasize their musical values like rhythm, tone colour and pitch. (source)

Schaeffer is credited as the inventor of the sound sample as a repeating audio loop, a technique widely used in today’s popular music and sound design. The passage above also contains a key concept in electroacoustic music: the idea of eliminating concept-based associations with sounds in order to emancipate their compositional potential.
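The tape-loop idea is easy to sketch in code. In the sketch below, which is mine and purely illustrative, a ‘recording’ is just a stand-in list of sample values, and a short crossfade at each splice point mimics the angled razor cut used to hide the join on physical tape:

```python
def tape_loop(fragment, repeats, splice=2):
    """Repeat a recorded fragment end-to-end, crossfading `splice` samples
    at each join -- a digital stand-in for a spliced magnetic tape loop."""
    out = list(fragment)
    for _ in range(repeats - 1):
        # Blend the tail of what we have with the head of the next pass,
        # so the join is smoothed rather than an abrupt click.
        for i in range(splice):
            w = (i + 1) / (splice + 1)
            out[-splice + i] = out[-splice + i] * (1 - w) + fragment[i] * w
        out.extend(fragment[splice:])
    return out

fragment = [0.0, 0.4, 0.9, 0.5, -0.3, -0.7]  # a tiny pretend recording
looped = tape_loop(fragment, repeats=4)      # the fragment heard four times
```

Schaeffer achieved the same repetition with locked grooves on discs and later with spliced tape; samplers and DAW loop regions are the direct descendants of this gesture.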

Traditionally, composition moved from the abstract to the concrete — from concept and written notes to actual sounds. Schaeffer’s approach reversed the process, beginning instead with fragments of sound — field recordings of both natural and mechanical origin — which were then manipulated using studio techniques.

Dissatisfied with the state of music at the time, Schaeffer sought to create a new musical language which divorced sounds from their sources, thereby reducing music to the act of hearing alone — what he called “reduced listening.” Take, for example, a field recording of a train moving along its tracks. At face value you’d simply note the sound of a train; beyond that realization, listeners wouldn’t give it much more thought. Schaeffer, however, realized that there was a rich vein of musicality hidden within such seemingly mundane sounds: snippets of complex rhythms, unique timbral characteristics, tonal peculiarities, strange and interesting textures. Our perception of these qualities, he recognized, was hindered by the associations and references that sounds carry. The trick was to find a way to hide the associations in order to bring the musical qualities of those sounds to the forefront. (source)

Schaeffer’s approach was to begin with a sound, not with a compositional idea. The composition would emerge from working with the sonic material, rather than make the sounds conform to any pre-existing structural concepts. This is analogous to another approach associated with French documentary production, Cinema Vérité, which is perhaps best defined by the maxim: “Shoot First, Ask Questions Later.” In other words, a vérité approach starts with accumulating compelling footage, and then discovering the story later during the editing phases.

Sonic Phenomenology

The concrète school of composing brought into general terminology the idea of L’Objet Sonore or the sound object. As Simon Fraser University’s Sonic Studio archive defines the term in its hyperlink-rich entry:

Pierre Schaeffer, with whom the French version of this term (l’objet sonore) is most associated, describes it as an acoustical “object for human perception and not a mathematical or electroacoustical object for synthesis.” The sound object may be defined as the smallest self-contained element of a SOUNDSCAPE, and is analysable by the characteristics of its SPECTRUM, LOUDNESS and ENVELOPE.

See also: GRAIN, INTERNAL DYNAMICS, MASS, TIMBRE, VOLUME.

Though the sound object may be referential (e.g. a bell, a drum, etc.) it is to be considered primarily as a phenomenological sound formation, independent of its referential qualities as a SOUND EVENT. Schaeffer: “The sound object must not be confused with the sounding body by which it is produced,” for one sounding body “may supply a great variety of objects whose disparity cannot be reconciled by their common origin.” Similarly, the sound object may be considered independently of the social or musical contexts in the soundscape from which it is isolated. Sound objects may be classified according to their MORPHOLOGY and TYPOLOGY.

Recording and processing a sound object is often the starting point for ELECTROACOUSTIC music composition.

See also: MUSIQUE CONCRÈTE, TAPE LOOP. Compare: SOUND EFFECT, SOUND SIGNAL, SOUND SYNTHESIS. (source)

The sound object is an important idea that defines much electroacoustic music — the treatment of sounds as objects of perception in their own right, not tied to our thoughts about the origins of their production in real-world events. Our natural listening, by contrast, is typically ‘source-seeking.’

Making music on the premise of severing the associations between sounds and their sources is actually not so unlike how we normally listen to music.

When you listen to instrumental music, such as played by a piano or violin, are you constantly imagining the wood, strings and fingers actuating these sounds? Those image associations may intrude from time to time, but the physical sources of piano and violin notes are not really the main focus of our attention. Rather, we let the music be music, and lose ourselves in the play of tonalities as existing in their own perceptual realm, so to speak, and not merely the acoustic effects caused by fingers on keys or bows on strings.

The continuing relevance of the Sound Object concept in more contemporary art contexts.

Electroacoustic music that takes the sound object as its main aesthetic reference is sometimes called Acousmatic music. Here is a brief conversation about Schaeffer, sound objects and acousmatic music for further elaboration:

As noted in the interview above, the two Pierres established the concept of sampling and samplers, which have become a mainstay of today’s audio practice. To get an idea of how sampling techniques have evolved over the last 70+ years, from turntables in a radio studio to plugins in a DAW, watch this short snippet on iZotope’s Iris 2 sampler for an interesting spectral twist on the concept.

Deep spectral sampling in the digital environment.