Articles about generating melodies algorithmically typically do not invoke phenomenology as a design method. More characteristic approaches use Markov chains, cellular automata, machine learning from a digital corpus, probabilistic or chance methods, and so on. Automating melody generation has been researched, and related systems developed, for at least half a century now. It’s not a new topic, but the approach discussed in this article may be.
What phenomenology brings to the table here as a method is consideration of the lived experience of the computational melody. This lived experience is normally not taken into account, as evidenced by the decades of computer music performances where the only attendees are the composers on the program. Or maybe it is audience reception, not lived experience, which is the cultural culprit for this rarefied art form.
Additionally, phenomenology can be utilized in tandem with any other algorithm design method, in the sense that its insights can be used to modify the various off-the-shelf or common approaches.
Here’s an example of a melody I generated in this manner, one that I think meets certain aesthetic criteria but isn’t based on any pre-existing model of what a melody generating algorithm should have as its key features. Instead, listening-as-I-programmed in a more phenomenological way is what designed the algorithm, which in turn designed the melody.
Hopefully you enjoyed that at least a little : )
This is obviously a very different kind of automated melody than, say, one that emulates a Bach fugue, as is characteristic of the many melody algorithms that derive a rule set from a body of pre-defined works. It is also not really ‘cold and mechanical’ in a computational sense but has a humanly relatable emotive quality. This emotional aspect is something phenomenology can probably support where other approaches lack a technique for bringing out this kind of result.
This program is part of a family of tools I’ve developed to support my own music making, and so there are some initial limitations I am working within:
- I don’t perform live music, and am very much a studio-based producer. This means that my programs don’t have to be satisfying in the entirety of their output in real time, but rather I can mine the output for the sections I like the most.
- Thus, I’m not aesthetically an adherent of schools of thought and making that oblige one to accept the output of the machine in its entirety. I am not making meta-art, or machines that make art, over which I have no critical role or editorial power. Rather, I use these tools in an explicitly assistive way and am fine with discarding most of a program’s output in favor of my own taste, aesthetic judgement, compositional goals, intended final context, and so on.
- I don’t make music that I consider to be related strongly to genre, so there is no attempt to mimic the usual patterns, harmonies, tempo, keys or trajectories of established musical forms.
- I take timbre to be critical in a way that is usually not remarked upon in the literature on melodic algorithms. It is usually assumed that since a melody is a sequence of pitches, it is simply a matter for orchestration to decide which pitch-producing instruments will be assigned the melodic line(s). Of course, a sequence of notes can be played on any note-producing instrument, but since I make electronic music, I have often noticed that a melody can sound amazing when expressed in one timbre and horrible when expressed in another. Phenomenologically, there is a melody-timbre complex (an ‘eidetic essence’ in the language of phenomenology, i.e. a discovered structure of lived experience) in which melody and timbre strongly interact. While practically I may generate melodies as patterns of MIDI notes in a process of its own, the right timbre for those notes has to be found to embody the melody most effectively.
- This means that this particular melody generator is deeply indebted to the piano timbre that was used in its making. The melodies this generator produces do not usually sound as appropriate when used on other timbres, but sometimes other suitable timbres can be found in my soft synth collection. A design implication is that the parameters of this algorithm may very well need much tweaking in order to work on a very different timbre, which is also a very exciting opportunity and way to think about melody generators — that they express something of the spirit of a particularly chosen sound. For example, if I wanted to use this algorithm on ‘super saw’ timbres in a dance track, I would change the note range selection values since usually dance melodies are more stepwise in character — i.e., they don’t leap around the octaves as much, since that’s harder to dance to.
- This close connection to a particular timbre also means that the melody generator is not only producing material for a composition, but to a certain extent is a composition, or at least has very strong compositional leanings. It produces proto-compositional material.
- I tend to work in the Max programming environment for this kind of work, so the algorithm is worked out in that context.
Discovering What a Melody Algorithm Needs
At the outset, I decided that I would proceed by shaping aleatoric (random) processes. As demonstrated in many a YouTube tutorial video, strictly randomly selected pitches are not very pleasing to listen to. Randomness is a raw material that, like any other, needs to be shaped. Ultimately I ended up with 17 modules for shaping randomness into something musically worthwhile, with interactive and generative features. These modules are:
Go: starts / stops the melody, which also controls parameters in other modules.
BPM: sets a beats per minute tempo, which is utilized by other modules.
Pulser: generates a low piano note at the pulse (BPM) rate. This can be turned on and off and used as a kind of metronome reference.
Downbeater: essentially a slower pulse, generating a low piano tone every four pulses (assuming a 4/4 time signature). This is switchable on and off, like the Pulser.
Phraser: this is the trickiest component of the algorithm, producing discrete rests so that the melodic lines can start and end in a, well, melodic way! The program essentially produces a continuous stream of notes, and it’s the Phraser’s job to introduce pauses into that stream. Pauses are not a negative absence of notes but instead are positively introduced, just like rest notations in a score. The Phraser is also responsible for a cool variable called fingerSpeed, which is how fast the notes are playing. It also has special fingerSpeeds (Fast and Faster) that are made available to the Ostinator, discussed below.
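The actual module is a Max patch, but the core idea of positively inserting rests can be sketched in Python. The rest probability and the available rest lengths here are illustrative values I’ve chosen for the example, not the ones in the patch:

```python
import random

def phrase(notes, rest_prob=0.15, rest_beats=(1, 2, 4), seed=None):
    """Sketch of the Phraser idea: after each note there is a chance of
    emitting an explicit rest event, so phrases get audible beginnings
    and endings rather than an unbroken stream of notes."""
    rng = random.Random(seed)
    events = []
    for n in notes:
        events.append(("note", n))
        if rng.random() < rest_prob:
            # a rest is a positive event with its own length in beats
            events.append(("rest", rng.choice(rest_beats)))
    return events

stream = phrase(list(range(60, 72)), rest_prob=0.5, seed=1)
```

The point of modeling rests as events (rather than the mere absence of a note) is that they can then be shaped with the same care as the notes themselves.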
Durator: this module articulates the length of each note, which can be selected from multiples of its own interactive seed element called durSeed (an integer value controller).
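In Python terms the Durator logic looks something like the following; the particular set of available multiples is an assumption for illustration, and in the patch durSeed is set interactively:

```python
import random

def durator(dur_seed, multiples=(1, 2, 3, 4), rng=random):
    """Sketch of the Durator: each note length is a randomly chosen
    multiple of a user-set integer seed (e.g. in milliseconds)."""
    return dur_seed * rng.choice(multiples)
```

Because every duration is a multiple of one seed value, the rhythmic output stays related even while it varies.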
Velocitor: The velocity of each note is randomly selected, with a base level added so that the notes are never sounded too quietly.
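Sketched in Python, the Velocitor amounts to a floored random value; the base level of 40 is an illustrative choice, not the patch’s actual setting:

```python
import random

def velocitor(base=40, rng=random):
    """Sketch of the Velocitor: a random MIDI velocity with a base
    level added so notes are never sounded too quietly."""
    return base + rng.randrange(128 - base)
```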
Pitcher: Every melody generator needs pitches! Pitcher’s job is to select a note within a 2-octave range of the ‘white keys’ comprising MIDI notes 48 to 71. By settling on just two octaves of white keys in the midrange frequencies (middle of the keyboard, more or less), the melodies can be placed in the following harmonic/scale contexts: C Major, D Dorian, E Phrygian, F Lydian, G Mixolydian, A Minor, and B Locrian.
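The Pitcher’s selection pool is easy to make concrete. In Python, the white keys are the pitch classes C, D, E, F, G, A, B, giving fourteen candidate notes across MIDI 48 to 71:

```python
import random

# the 'white keys': pitch classes C D E F G A B across two octaves
WHITE_KEYS = [n for n in range(48, 72) if n % 12 in (0, 2, 4, 5, 7, 9, 11)]

def pitcher(rng=random):
    """Sketch of the Pitcher: pick one of the 14 white keys in the
    two-octave midrange, MIDI 48 to 71."""
    return rng.choice(WHITE_KEYS)
```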
Restrictor: Random melodies can become annoying when pitches leap far from one another. Good melodies usually move within narrower ranges. The Restrictor module confines note choices to a narrower, moving window so the melody isn’t constantly jumping across the full two-octave range.
Trajector: Working in tandem with the Restrictor, this is an interactive table controllable element where the user can define, and redefine, a general pattern of fourteen seed notes (representing the two octaves of white keys) for the moving restricted range to respond to. You can see how this works in the video below, where the pattern drawn in the left table generally (loosely, not strictly, because of the randomness in the process) limits the ranges of notes that are selectable for use by the Pitcher.
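A simplified Python sketch of the Restrictor idea, leaving out the Trajector’s drawn table: note choices come from a narrow window of the fourteen white keys around a moving center. The window width here is an illustrative parameter:

```python
import random

WHITE_KEYS = [n for n in range(48, 72) if n % 12 in (0, 2, 4, 5, 7, 9, 11)]

def restricted_pick(center_index, width=3, rng=random):
    """Sketch of the Restrictor: pick only from a narrow window of the
    14 white keys around a moving center index, rather than anywhere
    across the two octaves."""
    lo = max(0, center_index - width)
    hi = min(len(WHITE_KEYS) - 1, center_index + width)
    return rng.choice(WHITE_KEYS[lo:hi + 1])
```

In the real patch, the Trajector’s table of fourteen seed values is what moves the center around over time, so the restriction follows a user-drawn contour.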
Octaver: this provides a simple interactive way to shift the main melody up or down one or two octaves, by adding or subtracting 12 or 24 to the main pitch out.
Ostinator: This is a module that may only make sense because of the piano timbre that was chosen for developing the algorithm. The module works in either a generative or interactive mode, with the former called ‘Decider,’ which automates the ostinato elements. As mentioned above, the Phraser provides two special fingerSpeeds for the Ostinator to choose from, Fast and Faster. The Ostinator can also move across 6 octaves of range: the two octaves used by the main melody (+0 to the pitch output), and +/- 12/24 for up to two octaves above or below the main melody. These parameters are also provided for interactive control by the user.
Rebeller: this element occasionally, and purely at random, selects any note across a six-octave range. This allows non-scale notes to creep into the composition, adding a once-in-a-while flourish of dissonance. It can be turned on / off by the user.
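A Python sketch of the Rebeller; the probability and the particular six-octave MIDI span (24 to 95) are illustrative assumptions, not the patch’s exact values:

```python
import random

def rebeller(scale_note, prob=0.03, rng=random):
    """Sketch of the Rebeller: very occasionally replace the in-scale
    note with any chromatic pitch across a six-octave range, letting
    dissonance creep in."""
    if rng.random() < prob:
        return rng.randrange(24, 96)
    return scale_note
```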
Feedbacker: three delay lines are possible for looping melodic sections, with each taking the previous one as its input. These can be turned off / on, and have individually settable delay times.
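One way to sketch the Feedbacker’s cascade in Python is to treat the melody as timestamped note events; the actual patch works on the live stream in Max, and these delay times are only examples:

```python
def feedbacker(events, delays=(250, 500, 1000)):
    """Sketch of the Feedbacker: three delay lines in cascade, each
    taking the previous line's output as its input. events are
    (time_ms, pitch) pairs; each line adds a copy of everything it
    receives, shifted later by its own delay time."""
    out = list(events)
    current = list(events)
    for d in delays:
        current = [(t + d, p) for (t, p) in current]
        out.extend(current)
    return sorted(out)
```

Because the lines feed one another, the echoes land at cumulative offsets (250, 750, 1750 ms in this example) rather than three independent ones.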
TimeWarper: This module allows for ramps to be applied to the main BPM, so that the overall tempo can be gradually increased or decreased over time. It is currently purely generative but will eventually have some interactive controls added.
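The ramp itself is simple. A Python sketch of the kind of linear BPM ramp the TimeWarper applies:

```python
def time_warp(start_bpm, target_bpm, steps):
    """Sketch of a TimeWarper ramp: step the tempo linearly from
    start_bpm to target_bpm over a given number of steps."""
    if steps < 2:
        return [float(target_bpm)]
    inc = (target_bpm - start_bpm) / (steps - 1)
    return [start_bpm + inc * i for i in range(steps)]
```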
What Makes This Program Phenomenological?
At the start of the process, I did not know I would end up with these 17 modules, or that these would be at least a minimum set required for making piano-based melodies that I considered to be actually musical, i.e. listenable with aesthetic and emotional qualities. These module concepts emerged through careful listening to the output of the first algorithmic results and reflecting on my experience of what I was liking, not liking, wanting, not wanting, being reminded of, thought I was hearing, fantasized about hearing, etc. In other words — by paying attention to my lived experience of the algorithmic output and translating those subjective features into program modules.
One of the classic treatises of phenomenology, Husserl’s Phenomenology of Internal Time Consciousness, took melody as its example for the study of time-based experiences. I won’t repeat or summarize his explications of primal impressions, retention and protention here, other than to mention this as a useful reference for any readers interested in what phenomenology has had to say about melody in its earliest historical formulations.
Each module presented itself in my constructive imagination as a core essence of melody which needed its own programmable solution. I did not know in advance that I would be setting up over a dozen specialized modules to address key aspects of melody as I worked with my aleatoric raw material. While the end result can be said to be in a sense ‘random,’ the shaping of this randomness is far from being random. It’s in this shaping of the random in response to my experience that the essential structures of what this melody generator needed became clarified.