In this chapter we will cover a range of basic and intermediate topics. Our goal is to offer a broad perspective on concepts such as nonlinearity (as it relates to game music), requisite skills helpful to game composers, as well as foundational components of game music such as loops, stingers, and layers. We will explore these topics in depth in later chapters.

Many of the following concepts are entry level and can be understood by novice game composers. However, we will quickly move on to more intermediate topics. If at any point you have trouble with material later on, we recommend returning to this chapter to refamiliarize yourself with the basics. You can also use the index and companion site for reference as needed.

What is Nonlinear Music?

In order to define nonlinear music we must first understand what linear music is. We have already encountered both linear and nonlinear audio in the previous sound design chapters as well as in Chapter 1, and the same concepts apply to music. Linear music has a structure that flows predictably from A to B to C each time a track is played (see Figure 5.1). The player has (virtually) no interaction whatsoever with linear music. This is essentially how all film and popular music is heard: it sounds the same every single time, from start to finish.

Figure 5.1 The sequence of linear music flows in order, every time the music is performed.

By contrast, nonlinear music flows non-sequentially, in no fixed order. A could flow to C and then back to A again on one playthrough, while the next might start on B and flow to C and then to A (see Figure 5.2). This means 100 playthroughs could theoretically have 100 different musical outcomes. In video games, these changes occur when the player takes an action or makes a decision that affects the game in some way, changing the state of the game (or game state). This concept of nonlinearity is the core of video game music. Other media (with some rare exceptions) use linear music exclusively, but games are unique: they need music that can change and adapt when the player makes choices, so understanding how to write nonlinear music is absolutely essential for a game composer.
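The logic-tree flow described above can be sketched as a simple transition map. The section names, allowed moves, and the `playthrough` helper below are illustrative assumptions, not taken from any real game or middleware:

```python
import random

# Hypothetical transition map: each section lists the sections it may flow to.
# Note that "C" may loop back on itself, as in Figure 5.2.
TRANSITIONS = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "C"],
}

def playthrough(start, length, rng=None):
    """Walk the transition map, producing one possible 'performance'."""
    rng = rng or random.Random()
    sections = [start]
    for _ in range(length - 1):
        # Each step picks any legal next section, so every run can differ.
        sections.append(rng.choice(TRANSITIONS[sections[-1]]))
    return sections
```

Two calls with different random seeds yield two different "performances" of the same material, which is exactly the property the 100-playthroughs example describes.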

Figure 5.2 The sequence of nonlinear music is more like a logic tree; each section of music can move to any other section, or even loop back on itself, making the “performance” of the music different every single time.

Interactive Music

When we talk about nonlinear music there are actually two different categories that we are referring to: interactive and adaptive music. These terms are not always well defined and are often used either interchangeably or with ambiguity between definitions. Although some gray areas do exist, there are important differences between them, which we will explore in the following chapters. The defining factor that separates these terms is how the player interacts with the music.

Players interact directly with the music itself when dealing with interactive music. Games like Guitar Hero and Rock Band are examples: players influence the music that is playing in real time and have agency over individual notes and rhythms. In a sense, the game itself becomes a playable musical instrument. On a smaller scale, mini-games or small events within a game can also be interactive. Games in the Super Mario franchise commonly give players direct ways to interact with music; in Super Mario Galaxy, for example, players can piece together a melody note by note by collecting items. These examples may not be fully interactive in the way Guitar Hero is, but they can be considered to have interactive musical components.

Adaptive Music

Adaptive music can be defined as a soundtrack where players have indirect influence over the music. In this case the player can affect the game state, which in turn affects the music. Or in other words, the music adapts to the game state. Any time we hear a transition from one musical cue to another, we are really hearing adaptive music responding to changes in the game. The player has taken an action, thereby altering the state of the game and the music is adapting to compensate.
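This indirect relationship, where the player changes the game state and the music follows, can be sketched as a small controller. The state names, cue names, and class shape below are hypothetical, and a real middleware engine would crossfade or wait for a musical boundary rather than switch instantly:

```python
# Illustrative mapping from game states to musical cues (names are made up).
STATE_CUES = {
    "explore": "explore_loop",
    "combat": "combat_loop",
    "victory": "victory_stinger",
}

class AdaptiveMusic:
    """Watches the game state and adapts the music when the state changes."""

    def __init__(self):
        self.current_cue = None
        self.log = []  # record of transitions, for inspection

    def on_state_change(self, new_state):
        cue = STATE_CUES[new_state]
        if cue != self.current_cue:
            # The player never touches the music directly; the music simply
            # responds to the new game state.
            self.log.append(f"transition: {self.current_cue} -> {cue}")
            self.current_cue = cue
```

The key design point is that `on_state_change` is driven by gameplay events, not by the player "playing" the music, which is the defining difference from interactive music.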

Adaptive music will be our primary focus in these next few chapters mainly because in learning the methods behind adaptive scoring you will be capable of writing music for a wide range of game genres and situations. However, the applications for interactive music are far-reaching and continue to be explored by game designers and audio professionals. The overview in “Aesthetic Creativity and Composing Outside the Box,” Chapter 9, page 330, will cover some more experimental methods of interactive scoring and provide some ideas for future development.

Challenges of Nonlinear Music Writing

Nonlinear music is at its core very different from the music that you would listen to on the radio, or during a film or television show. Video game scores need to be responsive to the changing states of the game and must be created with this in mind. Our challenge then is to write music that functions well in a variety of contexts. As a composer, the priority changes from writing music that fits a specific scene (film music), to writing music that fits the overall mood of a particular game state and is flexible enough to transition based on unpredictable player input.

Picture yourself playing a game where the object is to solve a puzzle and then defeat an army of zombies. It would be impossible to write linear music that synchronizes exactly to the player’s actions during the puzzle because every player will take a different path to solve it (for advanced solutions to this problem see the sections on “Vertical Layering” and “Horizontal Resequencing” in Chapter 9, pages 000 and 000). You can, however, write nonlinear music that supports the mood and emotion of the scene while the player is solving the puzzle. A common fit for this situation is subtle yet mysterious music that does not distract from the player’s problem solving. This music should be written to transition smoothly once the puzzle is solved, because the mood will immediately change afterward. It is up to the composer to think ahead and plan the musical pieces to fit well together, allowing for a cohesive overall player experience.

Essential Skills for the Game Composer

Game soundtracks range in genre from electronic to orchestral to ambient and beyond. This makes video game soundtracks an extraordinarily diverse medium, and it affords composers the opportunity to create a unique and personal sound for each game. However, it also means that new composers usually need to be skilled in a few different musical idioms. The two most common styles in games are cinematic and electronic, and composers often need to work in some combination of the two. This is not to say that composers should always try to write within those frameworks, but it does mean that developers will likely ask for or provide references in these styles. From that, we can derive a list of skills that will help streamline creativity and workflow (also see “Essential Soft Skills and Tools for Game Audio,” Chapter 1, page 16).

Consistency

Although this is not something often discussed, arguably the most important skill for new game composers is the ability to consistently and frequently write music within a given timeframe. It is a myth that great music comes from a spark of inspiration, manifesting itself as a fully formed magnum opus. Inspiration can indeed strike at any time, but in reality that is the first step in a long process which requires intentional and consistent practice to bring great music to fruition.

Basic Loops and Stingers

These are the “hard” technical skills for the new composer, which you will need under your belt if you expect to start scoring games right away. We have discussed loops in depth in “Essential Soft Skills and Tools for Game Audio,” Chapter 1, on the companion site, so refer back to that section to refresh your memory. Loops and stingers function in a very similar way musically, but there are some important differences that we will cover in the following sections.

In adaptive music, loops are musical tracks whose ending blends seamlessly back into the beginning without pops, clicks, pauses, or obvious changes in musicality. The simplest way to achieve this is to copy your first bar into your last bar (or a slight variation of it, to keep things from getting boring). When this is not possible, it’s important to identify four things: instrumentation, dynamics (along with expression, modulation, and other relevant MIDI parameters), density, and tempo. These four parameters must match between the first and last notes (or possibly full bars) of the loop for the transition to be seamless. In the Sound Lab (companion site) you’ll find an audio example, score, and screenshots that demonstrate composing music with looping in mind.
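The four-parameter check described above can be expressed as a small comparison. The bar representation (a plain dictionary) and its field names are assumptions made for illustration, not a real notation or engine format:

```python
# The four parameters the text names; a seamless loop needs all of them to
# match between the first and last bar of the track.
LOOP_PARAMETERS = ("instrumentation", "dynamics", "density", "tempo")

def loop_seam_matches(first_bar, last_bar):
    """Return True if the loop seam should sound seamless.

    Each bar is a dict with the keys in LOOP_PARAMETERS; any mismatch
    (e.g. a tempo change in the final bar) will be audible at the restart.
    """
    return all(first_bar[k] == last_bar[k] for k in LOOP_PARAMETERS)
```

A checklist like this is useful at the sketching stage: if your final bar accelerates, thins out, or drops an instrument relative to bar one, the loop point will announce itself to the player.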

Stingers are short musical fragments that play linearly and are triggered by a player action or game event. They can be thought of as straightforward cues, each written as a very short piece of music. The key when writing stingers is to convey the mood of whatever is triggering the stinger.

In the Sound Lab (companion site) is an example of a short death stinger, meant to trigger when a player has died. Notice that unlike a loop, this stinger can change dynamics and articulation at the composer’s discretion. The dynamics increase steadily throughout and end in fortissimo. We also end with another double stop, which increases the dynamics and density of the stinger. It sounds very “final,” which works well when triggered at the death of the player character. If you listen carefully you can also hear the held note in the last bar. This would not be appropriate in a loop because the rhythmic rubato would feel uneven when transitioning back to the start of the cue. Stingers like this are short because they need to match an animation or briefly highlight something in the game. However, stingers can also be longer, at which point they function more like a linear cue.
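Conceptually, a stinger is a one-shot cue fired by a game event while the underlying loop keeps playing. A minimal sketch, with hypothetical event and cue names:

```python
# Hypothetical event-to-stinger mapping; real projects would route these
# through middleware rather than a plain dict.
STINGERS = {
    "player_death": "death_stinger",
    "item_pickup": "pickup_stinger",
}

def trigger_stinger(event, playing):
    """Return the updated list of sounding cues after a game event.

    The current loop is never interrupted; the stinger is simply layered
    on top of it. Events with no mapped stinger leave the mix unchanged.
    """
    stinger = STINGERS.get(event)
    if stinger:
        return playing + [stinger]
    return playing
```

This layering is the practical difference from a loop: a stinger starts and ends on its own terms, while the loop underneath continues to carry the scene.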

Critical/Analytical Listening

One absolutely indispensable skill for game composers is the ability to listen to music critically and analytically. This goes hand in hand with aural skills, which help with identification of chord progressions and melodic shapes. Aural skills, however, are not enough. As a game composer you must also be able to play a game and understand how the implementation may have been executed. To do this it is important to play games often, and listen not just to the content of the music, but to exactly how the music changes and transitions. Are the transitions smooth? Did the composer write a linear transition, or are two cues crossfading together to adjust to the gameplay? Most importantly, ask yourself how you would approach those same game scenes.

On top of that, it’s also important for composers to understand what’s going on technically in a game’s soundtrack. Try to familiarize yourself with a broad range of instruments and plugins. Learn how synthetic sounds are made, and have a foundational knowledge of recording techniques. These will all help you listen more critically to a game’s score, and in turn will help you be more analytical about your own work.

Arranging and Orchestrating

While it is not necessary to be an expert in all types of arrangement and orchestration when composing for games, it is necessary to have strong knowledge of at least one particular style. You can then use this as a foothold into other styles. Many composers are experts in a niche genre of game music and don’t need to venture far outside their comfort zone. However, this can be a difficult path, and for most people learning a wider variety of musical idioms creates more work opportunities. Being able to compose flexibly can also make for a more interesting day-to-day working life. As you build your career and gain more experience, you will most likely need to adjust your style, as projects can differ greatly in mood and artwork.

Exploring Music as an Immersive Tool

As human beings we perceive music as an inherently emotional medium. This makes it an effective tool for communicating mood and setting in video games. This communication, when done successfully, is an effective way to immerse the player in the game world. Referring back to our earlier example, the mysterious and subtle music we created for the puzzle scene has two functions: 1) to encourage player action and 2) to immerse the player in the mystery and intrigue of the game. Our puzzle track adds tension, which compels the player to take action to solve the puzzle. By adding subtle elements of mystery to the track we convey the primary mood of the scene, thereby focusing the player’s attention on the game world and story events rather than on the music itself. This is the essence of how music immerses players in a game.

The Role of Music in Immersion

Immersion as it relates to music comes down to a single question: what does this game need? In essence we are asking what task our music needs to accomplish in a given scenario. The answer to this can be split into two very broad categories: emotional impact and gameplay support. These are certainly not the only tasks music can accomplish, but they are two very important ones that are used in just about every game.

Emotional impact usually comes very naturally to us as composers. If a game calls for an emotionally charged moment, we usually have the freedom to write a compelling thematic cue that will draw the player into the story of the game. This is a more overt process, and we are free to score any and all aspects of the scene. A fantastic example of this is the track “All Gone (No Escape)” by Gustavo Santaolalla, heard in the game The Last of Us. The instrumentation is very “no frills,” consisting of only a small string ensemble. This makes the whole cue feel intimate and vulnerable, which exactly mirrors the fear of loss that Joel is living through.

The track is triggered near the end of the game, where the player is carrying an unconscious Ellie through an army of enemies. This scene is the climax of Joel’s character arc, where he decides that Ellie’s life is more important than the life of every other survivor on the planet. The music here justifiably takes precedence over almost all other aspects of the game through strong emphasis in the audio mix. This was a bold and intentional decision on the part of the audio team at Naughty Dog, and it pays off: the cue enhances immersion by pulling players into the emotions of the scene. On top of this, the entire structure of the track leans heavily toward an emotive climax, which adds a heightened sense of tension and build-up to the scene. Not all thematic cues meant to deliver an emotional impact need a strong melody, but in this case the player is left with one that is beautiful and memorable. In that way “All Gone (No Escape)” expertly accomplishes its goal of delivering emotional impact.

The other category of musical immersion is much more subtle, and it is unique to games (as opposed to film and television): music written to support the gameplay experience. This is the core of video game music, because this type of musical immersion affects what actions the player will take and, in turn, changes the player’s experience completely. One example of this is the speed of gameplay. If we return to our hypothetical puzzle example, we can replace our mysterious music with an action-packed cue set at 200 bpm. This kind of music will make the puzzle scene less cerebral and more of a frantic race to finish in time. The Legend of Zelda franchise offers great examples of this: during time trial encounters a cue will play, and as the timer approaches zero the cue changes to a faster, more active track, highlighting the fact that the player is quickly running out of time. Similarly, most (if not all) infinite runner games apply the same principles to elicit excitement and energy in players. The important takeaway from these examples is that the tempo and rhythmic activity of the gameplay music indirectly elicits excitement in players: the faster the track, the stronger the effect.
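The time-trial behavior described above boils down to choosing a cue by the time remaining. The cue names, tempos, and threshold below are made up for illustration:

```python
def time_trial_cue(seconds_left, hurry_threshold=30):
    """Pick the cue for a time-trial encounter.

    Above the threshold, the main trial track plays; at or below it, the
    music switches to a faster, more active track to signal urgency.
    """
    if seconds_left <= hurry_threshold:
        return "trial_hurry_160bpm"
    return "trial_main_110bpm"
```

Even this tiny rule demonstrates the takeaway: the player never asks for faster music, yet the increased tempo directly shapes how frantic the encounter feels.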

Gameplay-specific immersion also manifests by adapting to and influencing player action directly. When a player enters a room and an ominously low drone triggers, this feels immersive because it is adapting to her actions. The ominous music influences how the player perceives and reacts to the scene. In this case the player might proceed quite cautiously. She might even take out her weapon and ready it for battle. Or she may leave the room entirely! Regardless of the player’s subsequent action, it is the music that is immersing the player in the experience and influencing how she interacts with the game.

Mood in Game Music

There are many other ways music can be used for player immersion, both directly and indirectly, and composers are thinking of more creative methods every day. However, most of these methods are tied to a very important aspect of game music: mood. Mood can be a nuanced characteristic of any kind of music, but in games it can be particularly hard to pin down. How do you translate a mood that is slightly anticipatory, yet somber, set in a post-apocalyptic cyberpunk universe? How do you then turn that into a musical cue with appropriate harmony and instrumentation? And how do you ensure that millions of players with differing backgrounds and life experiences will also associate those particular harmonies and instruments with your version of an anticipatory yet somber mood set in a post-apocalyptic cyberpunk universe? The answer is you can’t – at least not fully.

Attempting to produce a universal gameplay experience for players through music actually defeats the purpose of video game interactivity. Every player’s experience is unique. As a composer, it is important to keep this in mind and formulate your own connection to the game. How do you feel during this scene? How would you feel if you were a character in this universe, experiencing these events first hand? Are there any true experiences you’ve had that resemble these events? Don’t just ask these questions – answer them. Try to articulate exactly what you’re feeling when you play an early build. Perhaps you don’t just feel sad; you feel a bittersweet nostalgia – or maybe it’s more of an empty, tragic feeling. These details will make a difference, and although you can’t force players to feel the same way you do about a scene, this line of thought will ensure that the gameplay music you write is authentic, and therefore more immersive to the player. This is your starting point for using music to create mood.

Determining the mood of a game scene is a very intuitive process for most composers because it is a reaction to early game materials that are presented to us (an early build, concept art, storyboard, etc.). The next step is a bit trickier as we now have to account for the details of the gameplay. We know how we want players to feel, but how do we want them to act in game? More to the point, how do we get out of the way so that players can decide for themselves how to act? The latter is the trickiest to achieve compositionally, yet it is also the most important because it results in a feeling of agency for the player.

Composing music to accompany gameplay really needs to begin with actually playing the game. If developers are interested in your input on implementation (and if they are not, convince them to be) then you need to play the game to accurately judge what music it needs and how it should react to the player. Playing through a level a few times with no sound will make it clear exactly what is so challenging about this type of scoring. Our sensibilities tell us to compose something extraordinary, yet playing the game tells us that something more ordinary, or even no music at all, may be more appropriate for immersion. This issue can be solved in a variety of ways, but the result is a musical system that can accommodate a range of moods.

The Sound Lab

To explore this let’s head over to the Sound Lab where we will get our first taste of using middleware. We will return to this same example later on with a more technical focus, but for now we will simply draw our attention to how mood can change in a game scene.

Diegetic vs. Non-Diegetic Music

Before moving onto more intermediate topics it is important to take note of the difference between diegetic and non-diegetic music. Non-diegetic music is any music whose source is not present in the scene. Any time Darth Vader is shown, the “Imperial March” is played. This is non-diegetic music because there is no orchestra present on the Star Destroyer, so the source of the music is not in the scene. Most game music is non-diegetic because themes, battle music, and underscore do not have instruments playing on screen.

Diegetic music, by contrast, is music whose source is present in the scene. If you are playing a game and you pass a musician on the street playing the violin, this is diegetic music because the source of the sound is in the game itself. This is highly relevant to our last topic of immersion. Diegetic music usually makes a scene feel more immersive. Non-diegetic music can do the same, but careful consideration must be given: the music you write should not distract the player or call attention away from the scene unless you want to break the sense of immersion.

Diegetic Music in Games

Diegetic music is a bit less common than non-diegetic music in games, but it offers a unique opportunity for immersion. A fantastic example of diegetic music enhancing immersion is “Zia’s Theme,” as heard in the game Bastion. In this scene Zia is playing guitar and singing. As you walk your character closer to Zia, the sound becomes louder and clearer until she is singing right in front of you. In a sense you (as the player) are following the sound of her voice to your destination. This is a scenario where a very high level of immersion can occur because the music is inseparable from the gameplay.
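The “louder and clearer as you approach” behavior is, at its simplest, distance-based attenuation. This sketch assumes a linear falloff and an arbitrary maximum audible distance; real engines typically offer several falloff curves and add filtering for the “clearer” part:

```python
import math

def diegetic_gain(player_pos, source_pos, max_distance=30.0):
    """Volume for a diegetic source, such as a street musician.

    Returns 1.0 when the player stands at the source, falling linearly
    to 0.0 at max_distance (an illustrative figure, in world units).
    """
    distance = math.dist(player_pos, source_pos)
    return max(0.0, 1.0 - distance / max_distance)
```

Because the gain is tied to the player’s position, the player can literally navigate by ear, which is what makes scenes like Zia’s feel so immersive.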

In some cases, examples like this can blur the line between adaptive and interactive music. In The Legend of Zelda: Ocarina of Time one of the most iconic scenes is set in The Lost Woods. Here the player follows a mysterious melody through a forest maze. At first this melody seems like non-diegetic underscore. Eventually the player realizes that the music actually has its source in The Forest Temple, which is where the next objective is located. Only by listening carefully to the music is it possible for the player to find her way to the next objective. In this example the music actually affects player action in the game. Without the music, the player would be unable to complete this task, so interaction with the source of the music is crucial. Most game scoring is non-diegetic, but this kind of creative diegesis can be breathtaking.

To summarize, diegetic music is music that functions like a sound effect: it is placed in 3D space, and the player can interact with its source in some way. Non-diegetic music can adapt extremely well to player actions, but because its source is not present in the scene it lacks something visual for players to interact with directly. Where immersion is concerned, the higher the level of interactivity the player has with the game environment, the more immersive the experience. This extra dimension of interactivity is what makes diegetic music feel so much more immersive.

Visiting Artist: Martin Stig Andersen, Composer, Sound Designer

Music, between Diegetic and Non-Diegetic

When dealing with the challenges of game music I often find it helpful to dismiss the traditional dividing line between music and sound design, and instead enter the ambiguous midfield between diegetic and non-diegetic. This allows for anchoring music in the game’s world, providing an alibi for its presence. The approach is particularly helpful in dealing with areas of a game in which the player is likely to be stuck for a while. In such cases non-diegetic music easily becomes annoying to the player, acting like a skipping record reminding the player that she’s indeed stuck. If, however, the “music” or soundscape somehow appears to emanate from an object within the game’s world, say a generator, it’s more forgiving to the player. The idea of having music disguised as objects belonging to the game’s world also has special relevance in horror games. Whereas non-diegetic music, regardless of how scary it might be, always runs the risk of providing comforting information to the player, such as when and when not to be afraid (as if taking the player by the hand), music residing somewhere in between diegetic and non-diegetic will appear more ambiguous and unsettling to the player.

Production Cycle and Planning

The production cycle for game music, as in sound design, can be very broad. Because development cycles are so unique to the developer, there really is no “one size fits all” approach to planning and executing a score. However, there are a few general trends to consider.

Time Management for Indie Projects

When working with some developers, composers are given a surplus of time to work on a soundtrack. This is often due to indie developers needing extra time for fundraising and marketing. At first glance the extra time may seem helpful, but this is actually one of the most difficult scenarios for composers. It can be extremely challenging to keep yourself active and engaged in the game material when deadlines are stop-and-go and there is little direction or feedback for your work.

The best approach to combat this is to become your own audio director. You will have to adopt organizational responsibilities on top of your creative duties. Lay out an internal schedule for yourself and stick to it. Include time for brainstorming (one of the most important parts of the creative process; see Chapter 6), writing, and review. Do your best to stay in touch with your creative point person and ask for feedback. Maintaining communication with the developer will ensure that you are both moving in the same direction.

Time Management for AAA Projects

At the other end of the spectrum, some developers may put you in the position of writing music under a very tight timeframe. If at all possible, it is best to maintain a consistent work schedule and try to anticipate deadlines with room to spare. This is not always under our control, unfortunately, and the occasional 12-plus-hour workday is something most of us have to deal with on occasion. When this happens make sure not to neglect your mind and your body. Eat regularly and take breaks often to keep your mind sharp. After the deadline has passed, take a day or two off, even if it is mid-week. Overworking yourself is the quickest way to mediocre work and to a potential decline in your health (see “The Pyramid of Sustainability,” Chapter 10, page 345).

When planning a soundtrack it is very common to set logical deadlines for composers to deliver a number of tracks. These are called milestones. Milestones are important because they keep teams on track and help you plan reasonable timeframes to write within. When deciding on these milestones with your developer, keep in mind that they are subject to change as issues come up during development. Try to be flexible, but keep your writing as consistent as you can. If you’re juggling a few projects at once, find an appropriate amount of time per week to devote to each project. This will help you avoid having to reacquaint yourself with your previously written material each time you sit down to write.

Working with a Team

Especially at the indie level, it is quite common to see game composers responsible for roles like music editing, orchestration, implementation, recording, and even sound design (refer back to the companion site, Chapter 1, for more roles in game development). However, there are times when a composer may end up working as part of a team, or even as a co-writer on a project. These collaborative roles are a great way to gain a foothold in the industry and make some powerful connections. If you are lucky enough to land a project at the AAA level it is unlikely that you will be solely responsible for all of the above tasks, so it is important to know how to navigate these interactions. We have listed a few guidelines for common game music roles below.

Orchestrator

As an orchestrator it is your job to take the MIDI mockup the composer has created and use it to produce sheet music for the orchestra. This can go one of two ways: it could be a transcription job, where you simply take notation in one format and deliver it in another; or it could be a very creative process whereby you interpret the composer’s work and help the music “speak” in the orchestral medium. These are two very different approaches, however, so it is important to clarify at the start how much creative control you have. Either way, be conscientious. The composer has likely spent months or even years working on the score, and may have a difficult time relinquishing control.

Engineer (Recording, Mixing, or Mastering)

As an engineer you are responsible for recording (and possibly mixing) the game soundtrack. There is less ambiguity here in terms of creative control. The composer will usually have an idea already of how the music should sound and your role will be to create a recording environment that suits the imagined sound. If you are mixing or mastering, then your role will be to take the pre-recorded tracks and make the mix as clear as possible while maintaining the emotional intent of the soundtrack. If you are placed in the role of engineer, get to know the music well. The composer may be open to ideas on how to achieve a certain aesthetic.

Performer

Musicians and performers have one of the most fun jobs in the realm of game audio. If you are performing on a soundtrack then you are part of one of the final stages of the entire process, and you are the closest link the composer has to the players. In regard to creative freedom, different projects and composers will require different types of performances. Some musical styles require precise adherence to the sheet music; others require a high degree of improvisation. The most important thing as a performer is to open a line of communication with the composer up front to clarify what she is looking for. Quite often the composer will have sought out a particular performer to find the right sound for the game; if this is the case, the music should already be well suited to your playing style. Even if it is not, composers often welcome extra takes with some improvisation or interpretation, so don’t be afraid to ask what is needed from you.

Composer/Co-Writer

There are two main methods of co-composing. The most common is to work on separate tracks: games like Civilization, Detroit: Become Human, and Lawbreakers use locations or characters as an opportunity to call on different composers to write in contrasting musical styles. Another method is to find elements within each cue and split up the work. For example, one composer can work on the lyrics and vocal components of a track while the other works out the instrumentals and orchestration. Bonnie Bogovich and Tim Rosko used this method of collaboration for their title theme to I Expect You to Die. This method can be difficult, requiring flexibility and open lines of communication for feedback, but it can be very rewarding, and sometimes it leads to long-lasting partnerships.

Platforms and Delivery

When creating your soundtrack it is important to consider the final listening format of the game. The differences in audio capabilities between consoles and mobile devices are considerable and there is little consistency in home speaker systems. If you are delivering for mobile, it’s important not to overcrowd your arrangements. Mixes can become muddy very fast, and frequencies above 15 kHz can sometimes be piercing. Creating a clear arrangement is the first step to a solid mix, but EQ can also be used on the master to declutter some of those problem frequency ranges. If you are composing for console then a good mix will generally transfer well. The key is to produce an effective arrangement, and a clear mix will likely follow. Take the time to check your mixes often on a few different listening devices and make the appropriate changes. We will dive back into some more specifics on mixing your soundtrack in Chapter 9.

Most developers will ask for a specific delivery format so that they can prepare the files for implementation in the game engine. Music usually takes priority in terms of fidelity, so most often you will be asked for stereo WAV files, but there are exceptions. A developer may ask you for mp3s to save space, but mp3 encoding pads the start and end of a file with silence, so mp3s do not loop seamlessly and are not an adequate choice for music. If space is a concern, a better choice is Ogg Vorbis. This is one example of many, so it is best to familiarize yourself with the major file types and compression formats (for more information see Chapter 8).

The Sound Lab

Head over to the Sound Lab for a wrap-up of the topics discussed in Chapter 5. We will also explore mood and immersion. We will return to this same example later on to dig into some more complex musical techniques.

