Chapter 1

Introduction and History

Topics: Pythagoras (music, nature, and number), the Antikythera mechanism, Kepler’s harmony of the world, cymatics, fractals, electronic music, computers and programming, the computer as a musical instrument, running Python programs.

1.1 Overview

This chapter provides a quick tour of some of the major technological landmarks in Western music history and computer science. When we think of computer music, we usually imagine electronic technologies, particularly the synthesizer, computer, and sound recording devices. These devices are products of the information age in which we live. This age focuses on computational thinking, that is, using computers in creative ways to manipulate data and perform various tasks, usually involving some form of programming. The introduction of computers and, in particular, computer programming has also expanded the sonic and structural boundaries of music composition and performance.

In the 20th century, the fundamental education of an individual consisted of the three R’s—reading, writing, and arithmetic. In the 21st century, with the proliferation of computing devices, this list now consists of four R’s, that is, reading, writing, arithmetic, and programming.

Once computer programming is mastered, new vistas of creative expression become available. This new expressive capability is not confined to computer music—it is available in every area of the arts as well as the sciences. Accordingly, the programming skills you will acquire in this book are not specific to making music. They may be applied to creative endeavors in all areas of human knowledge and expression.

1.2 Connecting Music, Nature, and Number

The development of music and mathematics is connected to humanity’s early observations of nature, and attempts to explain and formulate aspects of the human experience. The ancient Babylonians, Egyptians, and Greeks investigated the origin of sound and resonance, and developed the notion of musical scale in terms of integer ratios. To them nature was a harmonious artifact, in which humans found themselves exploring and creating. Mathematics emerged around that time and, in its early phases, was intricately connected to nature and music. Even the more recent concepts of the golden ratio, Fibonacci numbers, Zipf’s law, cymatics, and fractals build on this ancient theme. In this book, we let this ancient theme guide us, as we interweave music, number, and computer programming.

1.2.1 Pythagoras—Harmonic Series

The ancient Babylonians, Egyptians, and Greeks were fascinated with the technological nature of music—perhaps even more than we are today. For instance, Pythagoras (c. 570–c. 495 BCE) discovered that musical pitch intervals could be described by numbers. He and his students are credited with establishing mathematics as a formal discipline, and with coining the term itself. Pythagoras left Greece at a young age to be educated in Egypt. There he associated himself with Egyptian priests, who at the time studied astronomy, geometry, and religion (all at once, without the divisions we have today). Pythagoras spent close to 20 years in Egypt, but then was captured during a war and was transferred as a slave to Babylon (an area now part of Iraq). There, through his knowledge and intellect, he gained access again to the educated elite and continued his studies in astronomy, religion, geometry, and music.

Pythagoras’s contributions helped shape the ideas of subsequent philosophers, mathematicians, and scientists, including Plato, Aristotle, and many more. Aristotle tells us the Pythagoreans discovered that musical harmony can be explained by numbers; they took up mathematics, and “thought its principles were the principles of all things. Since, of these principles, numbers are by nature the first, and in numbers they seemed to see many resemblances to the things that exist and come into being” (Aristotle 1992, pp. 70–71). This observation suggests that everything we experience through our senses can be described (e.g., measured and represented) by numbers, in some way or another, and then it can be turned into music. For instance, consider the music stored on your digital music player (inside the machine, this music is represented by numbers).* Also, consider the concept of sonification, that is, the conversion of arbitrary data to sounds, so that they may be perceived more easily. In other words, music and numbers are interchangeable.

One of the major discoveries contributed by the Pythagoreans, which helped shape the nature of music theory many centuries later, is the observation that strings resonate in simple ratios. In particular, they observed that strings exhibit harmonic proportions—they vibrate at integer ratios of their length, that is, 1/1, 1/2, 1/3, 1/4, 1/5, etc. (see Figure 1.1). The instruments of the era, the lyra and the kithara (the latter etymologically related to the modern guitar), were most probably used in their experimentations.

Figure 1.1

Image of String resonating at integer ratios

String resonating at integer ratios.

This was a major discovery, since it demonstrated that integers emerge from the natural properties (or geometry) of a string. Accordingly, the 19th century mathematician Leopold Kronecker said, “God made the integers; all else is the work of man” (Bell 1986: 477). He argued that arithmetic and mathematical analysis must be founded on integers. The Pythagorean discovery is even more profound when considering the implications of string theory in physics, which states that the universe consists of subatomic particles resembling one-dimensional resonating strings. These ideas are related to the fields of cymatics and fractal geometry (see the following sections).

Finally, Pythagoras and his students worked on a theory of numbers and explored the harmony of the spheres. The harmony of the spheres (or musica universalis—music of the spheres) is the philosophical belief that the planets and stars moved according to mathematical equations. Since numbers are connected to musical notes, the orderly movement of planets was said to create an astronomical symphony. According to different religious/philosophical traditions, this music could be heard only by the most enlightened individuals. However, with the advent of the modern computer (and the knowledge you will accumulate in this book), this music is now accessible to everyone.

One of the major discoveries of this era (first described by Plato, in Timaeus, and then by Euclid, in his Elements) was the golden ratio (or golden mean). This special proportion, which humans find aesthetically very pleasing, is found in natural and human-made artifacts (Beer 2008; Calter 2008, pp. 46–57; Hemenway 2005, pp. 91–132; Livio 2002). It is also found in the human body (e.g., the bones of our hands, the cochlea in our ears, etc.). The golden ratio reflects a place of balance in the structural interplay of opposites.

1.2.2 The Antikythera Mechanism—The First Known Computer

Ancient astronomical models were well established. They were used to construct the first computing machines approximately 2,100 years ago (Vallianatos 2012). Of these early computational machines, only one survives, in the National Archeological Museum of Greece, in Athens. Interestingly, these machines would have been unknown to us had it not been for the early 20th century discovery of fragments of a working model in a 2,000-year-old shipwreck near the island of Antikythera (see Figure 1.2).

Figure 1.2

Image of Fragment from the Antikythera mechanism

Fragment from the Antikythera mechanism.

The Antikythera mechanism uses the same design principles (i.e., employing gear ratios to implement mathematical relations) as the much later (19th century) Difference and Analytical Engines designed by Charles Babbage, for which Ada Lovelace wrote the first published programs (see Figure 1.3). The connection between these machines and modern computers is indisputable.

Figure 1.3

Image of Part of Babbage’s difference engine

Part of Babbage’s difference engine.

1.2.3 Johannes Kepler—Harmony of the World

The Pythagorean ideas and theories inspired many in the centuries that followed, including Johannes Kepler. In 1619 Kepler wrote his seminal work Harmonices Mundi (Harmony of the World). In this book, Kepler describes physical harmonies in planetary motion. His work contributed significantly to the scientific revolution that brought us out of the dark ages.

In this book Kepler presents his third law of planetary motion: the square of a planet’s orbital period is proportional to the cube of its mean distance from the sun (so more distant planets move more slowly along their orbits). Based on this result, he also discusses the harmony found in the motions of the planets. In particular, he discovered that the speeds of consecutive planets approximate musical harmonies. The only exceptions are Mars and Jupiter. We now know that the unusually wide gap between them contains the asteroid belt, which was discovered roughly 150 years after Kepler’s death.
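To make the third law concrete, here is a minimal sketch in Python (the language taught in this book) that checks the law against rounded orbital data for five planets. The specific values are approximate and included only for illustration:

# keplerCheck.py
# Checks Kepler's third law: T squared over a cubed is (nearly) constant.
# T is the orbital period in years; a is the mean distance in astronomical units.
planets = [("Mercury", 0.241, 0.387),
           ("Venus",   0.615, 0.723),
           ("Earth",   1.000, 1.000),
           ("Mars",    1.881, 1.524),
           ("Jupiter", 11.862, 5.203)]

for name, T, a in planets:
    print("%-8s T^2 / a^3 = %.3f" % (name, T**2 / a**3))

Running this prints a ratio of approximately 1.000 for every planet, which is exactly the regularity Kepler noticed.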

Kepler argued that planets can be thought of as “singing” together in near harmony. This harmony fluctuates as planets slow down and speed up (i.e., each has a minimum and maximum angular speed). Only rarely do planets “sing” in perfect concord.

This kind of sonification (i.e., turning data into music) has been applied to many natural and human-made phenomena, generating sounds that are not as foreign to our ears as might initially be imagined (see Figure 1.4). Later in the book, we explore this idea of sonification, so you too can create your own experiments related to the Pythagorean ideas. Recently, geologist John Rodgers and jazz musician Willie Ruff helped materialize Kepler’s Harmonices Mundi by sonifying actual orbital data of planets in our solar system. This recording can be easily found on the Internet and is very inspiring to listen to.

Figure 1.4

Image of Kepler’s study of musical notes representing the motion of the known planets

Kepler’s study of musical notes representing the motion of the known planets (capturing changes in speed as planets traverse their elliptical orbits around the sun).

1.2.4 Cymatics

Cymatics (from the Greek κύμα, “wave”) is the study of visible (visualized) sound and vibration in 1-, 2-, and 3-dimensional artifacts. It was influenced significantly by the work of the 18th century physicist and musician Ernst Chladni, who developed a technique to visualize modes of vibration on mechanical surfaces, known as Chladni plates (see Figure 1.5).

Figure 1.5

Image of Chladni plates, vintage engraving. Old engraved illustration of Chladni plates isolated on a white background

Chladni plates, vintage engraving. Old engraved illustration of Chladni plates isolated on a white background. (From Charton, É. and Cazeaux, E., eds. (1874), Magasin Pittoresque.)

When drums or gongs are struck, they vibrate in similar ways. The fact that such modes of vibration relate to musical pitch, rhythmic subdivision, and sound timbre suggests that many aspects of music and sound can be described computationally and controlled through software. Cymatics is an inspiring young field of exploration—for more information, see Evan Grant’s TED Talk, which demonstrates the science and art of cymatics through beautiful visualizations of soundwaves (Grant 2009).

1.2.5 Fractals

In the spirit of Pythagoras, mathematical descriptions for musical organization continue to be pursued. The hierarchical nature of music has led many to consider fractal geometry as an interesting candidate for such descriptions. Fractals are self-similar objects (or phenomena), that is, objects consisting of multiple parts, with the property that the smaller parts are the same shape as the larger parts, but of a smaller size. Fractal geometry was developed by Benoit Mandelbrot to study the self-similar proportions found throughout nature (Mandelbrot 1982). Figure 1.6 displays a fractal tree (also known as a Golden Tree, since it incorporates golden ratio proportions). This fractal is constructed by dividing a line into two branches, each rotated by 60° (clockwise and counter-clockwise), with a length reduction factor equal to the golden ratio (0.61803399…). These smaller lines, again, are each subdivided into two lines following the same procedure. This repetition/subdivision continues on and on (theoretically) to infinity. Interestingly, similar patterns appear extensively in nature (as they maximize the amount of matter that can fit in a limited space, that is, touching but not overlapping). Such artifacts are very easy to construct using a computer (see the sketch following Figure 1.6).

Figure 1.6

Image of Fractal tree

Fractal tree.
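The construction just described translates naturally into a short recursive program. Below is a minimal sketch in standard Python using its built-in turtle graphics module (this module is not part of the book’s libraries; it is used here only to illustrate the idea):

# fractalTree.py
# Draws a golden tree: each line splits into two branches, rotated
# 60 degrees clockwise and counterclockwise, and shortened by the golden ratio.
import turtle

GOLDEN = 0.61803399   # golden ratio (length reduction factor)

def branch(length, depth):
    if depth == 0:     # stop subdividing (in theory, this goes on forever)
        return
    turtle.forward(length)
    turtle.left(60)
    branch(length * GOLDEN, depth - 1)   # counterclockwise branch
    turtle.right(120)
    branch(length * GOLDEN, depth - 1)   # clockwise branch
    turtle.left(60)                      # restore heading...
    turtle.backward(length)              # ...and retrace this line

turtle.left(90)       # point upward, so the tree grows up
branch(100, 8)        # trunk 100 pixels long, 8 levels of branching
turtle.done()

Notice how directly the program mirrors the verbal description: one function, calling itself twice, with the golden ratio as the only “magic” number.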

The Harvard linguist George Kingsley Zipf (1902–1950) was a great influence on the development of fractals. In his seminal book, Human Behavior and the Principle of Least Effort, Zipf reports the amazing observation that word proportions in books, as well as notes in musical pieces (among other phenomena), follow the same harmonic proportions (i.e., 1/1, 1/2, 1/3, 1/4, 1/5, etc.) first discovered by the Pythagoreans on strings. Zipf proportions have been discovered in a wide range of natural and human-made phenomena, including music, city sizes, salaries, subroutine calls, earthquake magnitudes, thicknesses of sediment depositions, extinctions of species, traffic jams, and visits to websites, among many others.

Zipf proportions are also known as pink-noise, harmonic, and 1/f proportions and can be considered measures of variety or interest. At one extreme lies random probability of occurrence (i.e., chaos or white noise, such as radio static), where events are unpredictable and seemingly unorganized. In the mid range lie pink-noise (fractal) and brown-noise proportions, which have some discernible organization. At the other extreme are very monotonous phenomena (aka black-noise proportions), such as a musical piece consisting mostly of one note. In physics, white-noise, pink-noise, brown-noise, and black-noise proportions are known as power laws. Psychologists have shown that people prefer music, and other experiences, that balance predictability and surprise, and so having a computable measure of this balance can be useful in computer music making.
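To make this concrete, here is a minimal sketch in standard Python that counts the words in a plain-text file and compares their frequencies against the 1/1, 1/2, 1/3, … proportions predicted by Zipf’s law (the filename is a placeholder; use any text file you have):

# zipfCheck.py
# Compares word frequencies in a text against Zipf's harmonic proportions.
from collections import Counter

text = open("someBook.txt").read().lower()   # "someBook.txt" is a placeholder
counts = Counter(text.split()).most_common(10)

top = float(counts[0][1])     # frequency of the most frequent word
rank = 1
for word, frequency in counts:
    predicted = top / rank    # Zipf's law: the word of rank n occurs ~1/n as often
    print("%2d. %-12s actual = %5d, predicted = %7.1f" % (rank, word, frequency, predicted))
    rank = rank + 1

For most sizable English texts, the actual and predicted columns track each other surprisingly well.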

Many interesting attempts have been made to generate music from fractal artifacts. Conceptually, the process is relatively straightforward—it involves converting aspects of a fractal object to aspects of a musical artifact. For instance, the placement and size of a line in Figure 1.6 could be converted to the pitch and duration of a note. As the fractal object is being visually generated through a computer program, that same program could output the corresponding musical notes to a MIDI file, thus generating a fractal music piece. The process of mapping visual elements to audio elements is called sonification. Sonification is an art in itself, as there are many possible ways of converting between visual elements and audio elements. (For instance, consider how you might sonify Figure 1.6.) The trick is to identify which visual elements to select, and how to map them to audio, so as to generate the most aesthetically pleasing (or scientifically interesting) audio artifacts. Sonification and fractals are explored later in the book.
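As a preview, here is one possible mapping, sketched with the music library introduced in Section 1.6 (the Phrase class is part of that library; the particular choices, branch depth to pitch and branch length to duration, are just one of many reasonable designs):

# sonifyTree.py
# Sketches one way to sonify the golden tree of Figure 1.6:
# each branch becomes a note; deeper branches are higher and shorter.
from music import *

GOLDEN = 0.61803399
phrase = Phrase()

def branch(pitch, duration, depth):
    if depth == 0:
        return
    phrase.addNote(Note(pitch, duration))
    branch(pitch + 7, duration * GOLDEN, depth - 1)    # one branch, up a fifth
    branch(pitch + 12, duration * GOLDEN, depth - 1)   # other branch, up an octave

branch(C3, WN, 5)    # trunk: a C3 whole note, five levels of branching
Play.midi(phrase)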

1.3 Computer Music History

Throughout human history, technologies have consistently influenced our societal development, with periods of accelerated influence occurring at times such as the Renaissance, the Industrial Revolution, and the Information Age. Music technology has developed in a similar pattern. The earliest harps, horns, and drums were clearly technologies, and their development and usage relied on the new technologies of their day, much as computers are applied to music production in our age.

Landmarks in the history of music technologies include the use of written notation from around the mid-9th century CE, the development of polyphony in the centuries that followed, improvements in organ building during the Middle Ages, and the later adoption of equal temperament.

The Renaissance and Baroque periods saw an obsession with the music of the spheres, spurred by new developments in astronomy (see above), and a peaking of craftsmanship in the violins of Stradivarius and of compositional technique in the fugues of Bach. The study of alchemy led to 19th century chemistry and physics, which provided new metals and efficient methods to improve instrument fabrication. This surge in instrument development went hand in hand with increases in orchestra size. Industrialization also became an underlying theme in music itself, as in Wagner’s “Der Ring des Nibelungen.” Early 20th century landmarks include the automation of music via the player piano, the technological abstractions of electronic and recorded sound in the music of Schaeffer and Stockhausen, and parallel abstractions in the musical structures and notations of Debussy, Stravinsky, Schoenberg, Xenakis, Cage, and others.

This history is continuous in its highlighting of human curiosity and creativity. However, the developments in knowledge and technology are not deterministic and did not follow a simple evolutionary path (i.e., a path of steadily increasing complexity). For example, the interests of Pythagoras resurfaced and inspired (more than one thousand years later) Kepler’s explorations of musical patterns in astronomical motions and, later still, Fourier’s investigations into sonic spectra in the early 19th century. In between these investigations were centuries of explorations that followed other technological paths. The path of technological development is in no way straight or predictable in advance, even if such developments appear as a logical sequence with hindsight.

1.3.1 Automated Music

One of the characteristics of the computer as a music machine is that it can be automated by programming. Automatic instruments have existed for a long time, probably since antiquity. An early example is the hydraulis, attributed to Ctesibius of Alexandria (3rd century BCE). The hydraulis, about which little is known (due to the loss of ancient knowledge mentioned earlier), used water pressure to drive air through pipes, thus producing sounds (similarly to later ecclesiastical organs). Another early example is the wind organ developed by Heron, also of Alexandria (1st century CE), which was driven by a wind wheel. These and later designs were passed through Byzantine and later Arab scholars to Italy around the Renaissance period.

These designs contributed to later automated instruments, such as the barrel organ of Henry VIII, built in 1502. It was manually driven, but over the following century fully autonomous instruments emerged, driven by clockwork mechanisms (similar to the Antikythera mechanism, whose design principles were also passed down through Byzantine and later Arab scholars).

To increase the repertoire of automated music machines, an alternative was sought to replaceable barrels, which were expensive to produce and limited in playing time. A solution to both of these problems presented itself in the 18th century in the form of the punched card technologies employed by Jacquard weaving looms. Scores were made in the form of holes punched in paper tape or cards. The cards could be strung together to create long sequences. This became a new form of musical notation, inefficient for human reading but quite efficient for machine reading. The machine became the interpreter of these machine-specific scores. Such instruments constituted more than an amusement, even if their quality of performance was quite low. They enabled musical performances to be captured and transported, to be reproduced on demand, and to be replayed time and again for closer inspection.

Perhaps the most sophisticated (and certainly the most popular) automated instrument was the player piano. Although its development historically paralleled the gramophone, its sonic quality was far superior for quite a while and brought music on demand into many homes in the first half of the 20th century. The availability of automated musical performances in the home changed the role of the audience, affecting (not always detrimentally) concert attendance and the social status of musical performance skills. The player piano, more than electronic recording technologies, was the parent of MIDI sequencing in choosing to capture pitch, duration, and force (velocity) for each note. The piano rolls were editable and so “near perfect” performances could be created, and composers were not slow to realize that piano rolls could produce music beyond that humanly performable. In this way the composer first became a nonperforming producer, involved in all the steps from conception to final sounding.

1.3.2 Early Computer Music

The first public performance of computer music was programmed by Geoff Hill and Trevor Pearcey and generated by CSIRAC (Council for Scientific and Industrial Research Automatic Computer) at the Australian Computer Conference in August 1951. At this time, computer music was little more than a computational barrel organ playing popular tunes of the day; however, to do so at that time was no easy task, especially given the fickleness of the valve components, the timing constraints of the mercury delay line memory, and the awkward punched paper tapes used for describing programs. CSIRAC was the first computer in Australia and a machine intended purely for scientific research, so the achievement is a remarkable example of how quickly people turn any technology to musical purposes.

Computer-based music composition had its start in the mid-1950s, when Lejaren Hiller and Leonard Isaacson did their first experiments with computer-generated music on the ILLIAC computer at the University of Illinois. They employed both a rule-based system utilizing strict counterpoint and a probabilistic method based on Markov chains (also employed by Iannis Xenakis around the same time). These procedures were applied variously to pitch and rhythm, resulting in “The ILLIAC Suite,” a set of four pieces for string quartet published in 1957.

The recent history of automated music and computers is densely populated with examples based on various theoretical rules from music theory and mathematics. While “The ILLIAC Suite” used early examples of these, developments in such theories have added to the repertoire of intellectual technologies applicable to the computer. Among these are serial music techniques, the application of music grammars (notably A Generative Theory of Tonal Music by Fred Lerdahl and Ray Jackendoff), sonification of fractals and chaos equations, and connectionist pattern recognition techniques based on work in neuropsychology and artificial intelligence.

Arguably, the most comprehensive of the automated computer music programs is David Cope’s Experiments in Musical Intelligence (EMI), which performs a feature analysis on a database of a particular composer’s works (Cope 2004). Using this analysis, EMI can then compose any number of pieces in that composer’s style (e.g., J.S. Bach, Chopin, etc.). The term style, here, is a function of many musical aspects, including melody and harmony.

In terms of melody, EMI captures repeated ideas that run through the works in its database, that is, common melodic material that a composer tends to use. Finding such repeated ideas is a complicated task, as the same melodic idea can be presented in a variety of different ways within a single piece: notes can be added to it or removed, it can be sped up or slowed down, and it can be played over different harmonies. This requires sophisticated pattern recognition within a complicated context and is one of the major accomplishments of Cope’s research.

In terms of harmony, EMI extracts chords from the works in its database and then constructs its own chord progressions. These progressions do not replicate the ones in the database’s works; they are novel. However, they are stylistically similar to (i.e., follow similar construction rules as) the analyzed works. In this way, the composer’s “harmonic style” is also replicated in EMI’s new compositions.

EMI’s database can, actually, be loaded with the works of more than one composer. When this is done, its resulting compositions blend the styles of those composers. The results are sometimes odd, but quite often surprisingly clever and beautiful.

1.3.3 Electronic Music

After Thaddeus Cahill’s relatively unsuccessful attempt at creating the Telharmonium, a massive organ-like device built on early American telephone technologies, one of the first electronic performance instruments was Leon Thérémin’s device, invented in the 1920s in Russia. The Theremin, as it became known, was played by positioning each hand at a varying distance from two antennae. The location of the hands changed the electromagnetic fields generated by electricity passing through the antennae, one controlling volume, the other the pitch of a constant and relatively pure tone. The Theremin made quite an impact, with pieces being written for it by Aaron Copland and Percy Grainger, although the most popularly known example is in the opening of the Beach Boys’ hit “Good Vibrations.”

The first popular electric keyboard instrument was the Hammond organ, invented in 1935 by Laurens Hammond. It used electromagnetic components to generate sinusoidal waveforms, which could be combined in various proportions using drawbars. The drawbars acted similarly to pipe organ stops, but rather than simply turning oscillators on or off, they controlled their degrees of loudness. The B3 model, first produced in 1954, has become legendary in gospel, jazz, and rock music. It provided a relatively affordable and portable keyboard instrument for music performance, and the timbral variety “synthesized” through drawbar settings gave keyboard players a taste of the customizable timbre that would later be expanded by the synthesizer.

The solid body electric guitar was developed after some initial production of semiacoustic electric models in the 1930s. Following early experiments by Adolf Rickenbacker and Les Paul, the first production models appeared in the early 1950s from the Gibson and Fender companies. The major technical hurdle was the refinement of the pickups to eliminate noise and provide a clear signal, which was solved largely by the development of the twin-coil “humbucking” pickup.

Thomas Edison’s early recording technologies, developed in the late 19th century, were purely mechanical. It was not until electronic amplifiers became available in the form of vacuum tubes that the minute etchings of the recording process could be played back with any fidelity. Even then, the making of recorded cylinders was tedious and specialized. Building on this research, the first commercial magnetic tape recorder was introduced in 1948. The ability to record, not only play back, was the shift necessary to motivate musicians to use this technology creatively.

In Paris in the late 1940s after World War II, Pierre Schaeffer developed a compositional use for the previously reproduction-focused tape recorder. The compositional technique Musique Concrète, as it became known, used recorded sounds of both instrumental and environmental origin, manipulated them through variations in timbre, pitch, duration, and amplitude, then collaged these sounds into a polyphonic musical form.

Tape-based compositional works were produced by Karlheinz Stockhausen in Cologne from the mid-1950s, which he called Elektronische Musik (Electronic Music). As well as treating recorded sounds, Stockhausen and contemporaries such as Edgard Varèse focused on synthesizing new timbres using oscillators, filters, and amplifiers.

The successful commercialization of synthesizers came with the release, in 1964, of the Moog synthesizers. The technical breakthrough that made these instruments possible was the use of transistors instead of vacuum tubes, which dramatically reduced the instruments’ size and increased the stability of voltage control. One of the more popular early recordings using the Moog synthesizers was Wendy Carlos’s “Switched-On Bach,” a notable achievement at the time, but one that created a legacy of imitative thinking which still haunts synthesizer usage, as more recently reinforced in the General MIDI specification. The most popular of Robert Moog’s synthesizers was the Mini Moog, one of the first portable all-in-one synthesizers, still highly regarded 50 years after its release (see Figure 1.7).

Figure 1.7

Image of The Mini Moog synthesizer

The Mini Moog synthesizer.

The use of recording as a compositional and synthesis tool did not change much from the days of musique concrète until the late 1970s, when the development in Australia of the Quasar and M8 digital synthesizers by Tony Furse influenced the commercially successful Fairlight CMI, developed by Peter Vogel and Kim Ryrie; at the same time, the New England Digital Synclavier was developed in New Hampshire by Sydney Alonso, Jon Appleton, and Cameron Jones. The Fairlight and the Synclavier introduced sampling technologies (short-duration digital recording) to commercial music making in 1979. Both instruments were also capable of sound synthesis processes and used keyboard controllers for performance, attached to computer systems for storage, display, and editing of waveforms. A version of the Fairlight is now available for the Apple iPad, which highlights how much computing power and expense have changed in the last half-century or so.

Digital technologies made their way into synthesizers first as memory banks for presets, most famously in the Sequential Circuits Prophet-5, and later in the sound synthesis engine itself, notably with the Yamaha DX7 (see Figure 1.8). The release of the DX7 in 1983 coincided with another significant event in electronic music history: the introduction of the Musical Instrument Digital Interface (MIDI) standard.

Figure 1.8

Image of The Yamaha DX7 synthesizer

The Yamaha DX7 synthesizer.

Developed by Dave Smith of Sequential Circuits, with input from other major manufacturers of the time, notably Roland and Yamaha, the MIDI standard replaced a plethora of interconnection standards so that equipment from different manufacturers could communicate. MIDI began as a note-based live performance protocol, intellectually indebted to music notation and player-piano technologies. The MIDI standard has expanded over the years to include file formats, sample transfer protocols, the General MIDI standard sound set, a music XML format, and a range of other musical and operational parameters.

The synthesizer, in its keyboard form, has remained quite stable since the 1980s, with some controller extensions modeled on other instruments including guitar, woodwind, and percussion. Research continues into new instrument designs, as it always has, with STEIM in the Netherlands and the HyperInstrument group at MIT’s Media Lab contributing significantly during the 1990s, but with developments expanding quite broadly since then. Many of the latest research developments are evident in the proceedings of the annual New Interfaces for Musical Expression (NIME) conference.

Along with advances in MIDI and digital synthesizers, the 1980s also saw an accelerating increase in personal computer ownership and, with it, the expansion of music software. Most significant commercially was the rise of MIDI sequencer software. Building on the ability of earlier electronic sequencers to repeat short series of notes, software sequencers have gone on to provide ever more comprehensive musical transformations.

Alongside sequencing, music notation programs were also appearing at this time, although it took the desktop publishing revolution of the early 1990s for all the appropriate technologies to fall into place, notably the PostScript page-description language and laser printing. Computer music publishing is now the norm rather than the exception. The first program to successfully combine both sequencing and notation was C-Lab’s Notator on the Atari computer, which proved the rule that you only need one “must have” program to sell a computing platform. Over the years, this program has evolved into Apple’s Logic Pro software.

As personal computer power increased in the late 1990s, synthesis software (long the domain of expensive systems such as the Fairlight, or of dedicated computer workstations) became widely accessible. This is evident in the current popularity of hard disk recording systems, such as Pro Tools, and of real-time signal processing for reverb and equalization, which is now practical even on mobile computers, along with real-time synthesis as complex as frequency modulation, granular synthesis, and physical modeling.

The integration of many of these technical threads in computer-based composing, recording, publishing, and multimedia occurred around the late 1990s, and now digital music systems provide rich and expressive tools for the musician. Around the turn of the 21st century the increases in computing power reached a threshold where personal computers were powerful enough to manage most audio and some video processes in real time. This saw the concentration of computer music systems in software or “virtual” versions of what had been previously separate hardware components. The laptop computer had become the one-stop digital music workspace and an instrument for live performance. This process of concentrated computing power continues, with mobile devices such as smartphones and tablet computers increasingly becoming the site for computer music practices.

1.3.3.1 Reflection Questions

  1. What were the dominant technological drivers of the past few centuries?
  2. Where do you think the current borders of technical innovation are that will affect music making?
  3. Given that new materials such as iron and aluminum have shaped the development of acoustic instruments, what changes have driven electronic/computer instrument development?
  4. What have been the major developments in automated music described in this chapter?
  5. The use of electronics has shaped music making over the past 100 years. Who were some of the musicians to first pioneer the use of electronic devices for music?
  6. What has been the impact of audio recording on music making?
  7. What was the basis of the compositional technique known as musique concrète?
  8. What changes occurring during the 1990s are described in this chapter?

1.4 Algorithms and Programming

Computers have been traditionally programmed to calculate solutions to numerical problems (the name “computer” itself reflects this—modern computers were viewed as a replacement for human computers in the military). This view, of course, is very restrictive, as any normal computer user can attest. Computers are wonderful for playing games, searching the Internet, and for making music. In this book, we introduce computer programming in the context of connecting number, music, and nature.

One of the more significant advantages of the computer for music making is its ability to be programmed: its ability to automatically do a series of tasks and to do them quickly. This is, of course, the basis for all software development but can also be the basis for a music making practice. Algorithmic music using a computer takes advantage of this ability to automate a series of instructions (an algorithm) to musical ends.

Definition: An algorithm is a series of steps (or instructions) for performing a task.

Examples of algorithms include instructions for assembling a bookshelf (assembly instructions sheet), steps for making spaghetti sauce (a recipe), and instructions for performing a musical piece (a musical score).

Computers can be programmed to follow such series of instructions using a programming language. When programming computer music, the challenge is to write instructions that lead to interesting and expressive music. Musical algorithms can describe how each of the musical elements is specified and varied as the piece proceeds. This can include control over the pitch, duration and loudness of notes, the timbre of sounds, the use of structural features such as repetition and variation, as well as tempo, volume, balance and so on.
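For a first taste, here is a minimal sketch of a musical algorithm, written with the music library introduced later in this chapter (Section 1.6): a loop that repeatedly adds a note and moves the pitch up a whole tone:

# wholeToneScale.py
# A simple musical algorithm: repeat "add a note, move up a whole tone".
from music import *

phrase = Phrase()        # an empty sequence of notes
pitch = C4               # start at middle C
for step in range(6):    # perform six steps of the algorithm
    phrase.addNote(Note(pitch, QN))   # add a quarter note at the current pitch
    pitch = pitch + 2    # move up a whole tone (two semitones)
Play.midi(phrase)        # perform the result

Changing the step size, the number of repetitions, or the note duration changes the music; that parameterization is the essence of algorithmic composition.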

The ability of computers to run algorithmic processes (programs, or sequences of steps) gives the impression that computers have autonomy and are possibly “smart.” At its most advanced levels this autonomy is referred to as Artificial Intelligence (AI), most well known through systems such as IBM’s Deep Blue for playing chess, and popularized through fictional systems such as HAL in the science fiction film 2001: A Space Odyssey, droids such as R2-D2 in Star Wars, and robots such as WALL-E in the film of the same name.

In algorithmic music systems the intentions and possibilities are generally far more modest, even though some comprehensive systems, such as Experiments in Musical Intelligence by David Cope, can construct complex and complete pieces. Generally, algorithmic composition systems are used for more mundane purposes, such as generating a tonal melody of a few bars, creating valid variations of a 12-tone row, suggesting possible chorale harmonizations, or sonifying mathematical structures such as fractals or artificial life worlds by converting the numbers generated by formulae into pitches, rhythms, and form.

Many algorithmic systems deal with music at the note level, specifying or manipulating attributes such as pitch, duration, and dynamic. This is historically the most prevalent way of thinking about music and is the basis for common practice notation, so it is not surprising that note-based generative systems are common. Algorithmic processes can be applied in many ways to notes. Small pitch changes at the frequency level can be used for microtonal music, or loudness may be controlled by a function introducing a kind of jitter or instability to the note, which, if subtle, may add some life to an otherwise static electronic performance. Similarly, subtle changes can be applied to the dynamic levels of a repeated phrase in order to provide variety that masks the machine-like repetition to some degree; a sketch of this idea follows this paragraph, and we will explore it more fully later. Algorithmic systems can be used to generate note-level scores for either acoustic or electronic realization.
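Here is a taste of the dynamics example, sketched with the book’s music library; it assumes, as that library allows, that a Note may take a third argument giving its dynamic level (0–127):

# jitterDynamics.py
# Masks machine-like repetition by randomly varying note dynamics.
# Assumes Note(pitch, duration, dynamic) as in the book's music library.
from music import *
from random import randint

phrase = Phrase()
for i in range(8):                       # the same note, eight times...
    dynamic = 85 + randint(-10, 10)      # ...each at a slightly different level
    phrase.addNote(Note(C4, EN, dynamic))
Play.midi(phrase)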

Algorithmic processes can be applied to music at a structural level to manipulate measures, phrases, or sections of music.

1.5 The Computer as a Musical Instrument

There are many ways to make music using computers. Some musicians prefer ready-made production software, such as GarageBand, Audacity, and Ableton Live (to name a few). Other musicians prefer more versatility and power—they utilize music programming environments, such as CSound, SuperCollider, Extempore, PureData, and Max/MSP. This book prepares you for the second approach, by introducing a simple, yet powerful programming language (Python) and several programming libraries for generating sounds, processing images, and building graphical user interfaces.

The ability of computers to follow arbitrary musical (or other) processes makes it possible to design and implement new musical instruments, running on regular computing platforms such as a laptop or a smartphone. For instance, through Python and the libraries provided with this textbook, we can develop many different types of computer music instruments. These instruments may have graphical user interfaces, which increase their usability. Such instruments may be used to compose and perform classical, popular, or avant-garde musical pieces.

Part of the rationale for thinking about computer musical instruments is that it takes years to master playing a guitar or violin; computer musical instruments, on the other hand, can be much easier for beginners to learn. The goal is not to replace traditional instruments—there will always be a need for them. Instead, we wish to allow more people to engage in musical performance. This is similar to the ease with which someone can play a game of football on an Xbox, as opposed to a real, physical game of football. The computer, through its constraints and affordances, makes it easier to play (or compose) music (Magnusson 2010). Computer-based musical instruments may be used by a single performer or by many performers in ensembles, such as laptop or iPad orchestras. Additionally, one could mix traditional instruments and computer instruments. Finally, computer musical instruments can be designed to do things beyond the capabilities of traditional instruments or human performers. This creates exciting new musical possibilities.

As an instrument, the computer becomes a vehicle through which you express musical ideas. Any music instrument (or music technology) provides new capabilities, but also comes with constraints. For instance, acoustic instruments are easier to play in certain pitch ranges, and many orchestral instruments can play only single melodies (not harmony or chords).

Similarly, computers as musical instruments have unique characteristics and limitations. For instance, using MIDI they can choose among 128 different instrument sounds, 16 of which can play simultaneously (one per MIDI channel). Each of these 16 instruments can play many simultaneous notes. So, conceivably, a single computer can play as many notes as a medium-size orchestra and perform musical pieces of arbitrary complexity (e.g., pieces that a human orchestra could not perform with traditional instruments). Computers can also play back arbitrary sound recordings and loop or mix them to construct an arbitrary soundscape. In terms of limitations, computers are “dumb” if not programmed, perhaps even more musically dumb than a drum, which can at least make an interesting noise when struck. To make computers come to life as musical instruments, someone needs to “play” them, by writing and using software, in order to generate music.

So, like other musicians, developers of musical software utilize their knowledge of an instrument to achieve musical tasks with certain aesthetic objectives or goals in mind.

When viewed as a musical instrument, the computer becomes a partner in the music making process. Thus, music making moves beyond simple human-directed activity to become a collaboration with the computer; by assisting, the computer has an influence. However, the more responsibility a computer has over the music production, the greater the expectations on the programmer and composer using the system. This might seem counterintuitive but it is true. A greater emphasis on the unintelligent, generic computer requires more of the intelligence and ingenuity of the human programmer and composer.

Finally, as with traditional musical instruments such as the guitar, intimate engagement with the computer as an instrument requires that you increase your skills; for the computer this means your programming skills. As with any musical instrument, the greatest satisfaction will result when you learn to play the computer well. This involves studying the principles involved, regular practice, and immersion in the computer music culture through listening, reading, and discussion.

This book will assist you in composing musical processes for offline playback (e.g., generating a musical piece and saving it as a MIDI file) and in using the computer for real-time performance. The musical possibilities are endless.

Let’s begin.

1.6 Software Used in This Book

This book teaches the programming language Python. Also, it comes with a software package which contains several libraries for music-making in particular and creative computing in general, such as a library for music (MIDI and audio), one for images, and one for graphical user interface (GUI) development.

Python is a general purpose programming language designed to be easy to read and to use. It includes a large and comprehensive library of functions for common computing tasks. Python is widely used by leading computing companies such as Google, and thus many resources are available (including this book) to assist in learning it. The version of Python used in this book is called Jython, which is implemented on top of the Java Virtual Machine (JVM). Since the JVM is a truly portable programming environment (i.e., it runs on all popular computing platforms), any code you develop using this book will run identically on different computing platforms. This portability is very desirable, since it allows you to share your algorithmic music compositions with collaborators around the world, regardless of what computer system they are using (e.g., Windows, Mac, or Linux).

The music library provided supports arbitrary music programming tasks through Python. It provides a music data structure based upon note/sound events, and methods for creating, organizing, manipulating, and analyzing that musical data. Generated music scores can be rendered as MIDI or audio files for storage and later processing or playback. Also, the music library allows the playback of arbitrary notes and sounds in response to user-initiated events (see GUI library below). The music library is used and incrementally presented through the remaining chapters, in conjunction with traditional computer science concepts and Python programming skills. After all, the purpose of this book is to teach you programming in Python through music making. The audio library is used later in the book when we build interactive musical instruments in software.

The image library allows the reading and writing of digital images. These digital images may originate from your digital camera or be downloaded from the Internet. Once an image is read into a program, the image library allows accessing and manipulating image elements (i.e., pixels). For instance, one could read in an image and use its varying colors (or luminosity) to drive a musical process, as sketched below. The image library is covered in Chapter 7, the chapter on sonification.
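As a rough preview of Chapter 7 (where the actual library calls are introduced), the sketch below illustrates the idea; treat the names Image, getPixel, and mapValue as placeholders for what is to come, and the image filename as hypothetical:

# imageToMelody.py
# Sketches the idea of driving music from an image: walk across the
# middle row of pixels, mapping luminosity to pitch. (Library call names
# are previewed here; they are covered properly in Chapter 7.)
from image import *
from music import *

image = Image("landscape.jpg")        # hypothetical image file
phrase = Phrase()
row = image.getHeight() // 2          # the middle row of the image

for col in range(0, image.getWidth(), 10):        # sample every 10th pixel
    red, green, blue = image.getPixel(col, row)
    luminosity = (red + green + blue) // 3        # average brightness, 0-255
    pitch = mapValue(luminosity, 0, 255, C2, C7)  # brighter pixels sound higher
    phrase.addNote(Note(pitch, SN))

Play.midi(phrase)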

The GUI library allows development of graphical user interfaces to drive (or be driven by) arbitrary musical processes with an emphasis on musical performance. The idea is that, through this library, you may develop arbitrary musical instruments to be used in performance (as well as in composition). The GUI library is covered in Chapter 8, the chapter on interactive musical instruments.

This software is available for download on the website associated with this book. Installation instructions are provided alongside the software.

1.6.1 Case Study: Running a Python Program

There are two ways to write Python code: typing it directly into the interpreter, or using an editor. The first way is easier for small programs that you intend to run only once. For example, if you want to perform a quick calculation, or if you want to create a melody consisting of a few notes, you may use the interpreter directly, as shown below.
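For example, you could type the following at the interpreter prompt (the interpreter prints the “>>>” prompt and the results; you type the rest):

>>> 2 + 3
5
>>> 440 * 2.0 ** (3 / 12.0)    # frequency of the pitch three semitones above A440
523.2511306011972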

For a more substantial program (or a program that you intend to run many times) you should use an editor to type your code. See some suggestions on the textbook website. Actually, any text editor will do—some of them are better because they color-code different parts of a Python program (e.g., comments are green, strings are red, reserved words are purple, and so on). This makes programs easier to read.

Using your favorite editor, type the following program:

# playNote.py
# Demonstrates how to play a single note.

from music import *	# import music library

note = Note(C4, HN)	# create a middle C half note
Play.midi(note)	# and play it!

Save this program in your music programming folder under the filename “playNote.py”.

Observe the following points:

  • Python is case sensitive. For example, there is a big difference between “note” and “Note” (see line 6 above).
  • Be very careful when typing. Most of your errors will likely be caused by a typo.
  • Empty lines (vertical space) are NOT important to Python. For example, lines 3 and 5 are there to improve readability.
  • Comments allow humans to see the algorithmic process involved, so they are very important. They are ignored by Python. For example, see the comments above following a “#” up to the end of the line.
  • Line up comments whenever possible—it is considered good style (e.g., lines 4, 6, and 7). If a comment is too long to fit on one line, you may put it above the statement(s) it explains, as in lines 1 and 2 above.

When finished typing, run it.

Running this program will ensure that you understand the basics of the software development process using Python and that everything has been installed properly. The program should generate a single note. The note has pitch C4 (i.e., middle C), and duration HN (i.e., half note). Always make sure your volume is adjusted properly before running programs that generate sound.

If you have reached this point, congratulations! You have written and run your first musical program in Python. The rest is incremental and straightforward. Enjoy!

1.7 Summary

This chapter introduced the fundamental ideas and concepts that we will explore throughout the rest of the book. It talked about the early origins of music, mathematics, and computing. It revealed some inspiring areas of exploration, including the harmonic series, the golden ratio, Kepler’s Harmony of the World, Zipf’s law, cymatics, and fractals. It presented some of the pioneers in computer and electronic music. Also, it introduced useful terms, such as algorithm, that we will see again and again. Finally, it introduced the idea that nature, music, and number are all somehow intertwined, i.e., that one can be transformed into another. These inspiring ideas and concepts will guide our creative explorations throughout this book, and hopefully beyond it.


* This applies to all information (e.g., text, images) stored on a computer, or the Internet—the term digital refers to representing information using numbers (i.e., converting information to data).

See Chapter 7 for a more in-depth discussion of sonification.

How you run your program depends on the editor used. To run your programs, see setup instructions or follow your instructor’s directions.
