In this chapter we provide a more detailed look into sound design for game audio as a process. Here we cover more intermediate topics, which include the purpose of sound in games, sourcing sounds, and designing sounds. We will explore the process of layering, transient staging, frequency slotting, and effects processing within a framework that’s easy to understand. We will also break down sound components into categories, which we have found simplifies the process and ensures detailed designs.

Dynamic Audio

Dynamic range is the ratio between the loudest and softest parts of a sound. Dynamic audio is a term that we use in regard to games to categorize audio that changes based on some kind of input. We have left this definition intentionally broad because it encompasses other kinds of audio systems. Adaptive and interactive audio also fall under the dynamic audio umbrella. Adaptive audio reacts to changes in the game state. In this way players can influence adaptive audio indirectly. Most of the more complex game audio systems, such as those in Red Dead Redemption 2, are adaptive. By contrast, players can influence interactive audio directly through some mechanism in the game. Guitar Hero, Rock Band, and PaRappa the Rapper are all examples of interactive audio.
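To make the distinction concrete, here is a minimal sketch in Python (the class and method names are hypothetical, not drawn from any particular engine or middleware):

    # Hypothetical game-audio hooks illustrating adaptive vs. interactive audio.

    class DynamicAudio:
        def on_player_input(self, action: str) -> None:
            """Interactive audio: the player triggers sound directly."""
            if action == "strum":
                self.play("guitar_note")          # e.g. a rhythm-game input

        def on_game_state_change(self, state: dict) -> None:
            """Adaptive audio: sound reacts to game state, which the
            player influences only indirectly."""
            if state["enemies_nearby"] > 3:
                self.set_music_intensity(0.9)     # tense combat layer
            else:
                self.set_music_intensity(0.2)     # calm exploration layer

        def play(self, event: str) -> None: ...   # stubs for illustration
        def set_music_intensity(self, level: float) -> None: ...

Both paths fall under the dynamic audio umbrella; the difference lies only in where the input comes from.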

Sound effects are not simply there to fill space. They serve the purpose of giving feedback to the player and providing a sense of space or setting, which immerses them in the game world. As humans we hear sounds all around us 24/7. These sounds provide information on the space we occupy and warn us of danger. Since we are so spoiled by sound in everyday life, as players we expect to hear the same level of immersive audio to inform us of important factors in gameplay. In a real-world setting we wouldn’t just rely on sound and visuals for immersiveness. We would also smell, touch, or even taste the environment! But in a virtual setting we can only see and hear (at least at this point in time). To compensate for this, we exaggerate the sound to enhance the immersiveness of the experience.

In Chapter 2: The Basics of Nonlinear Sound Design we provided an overview of sound design as a process. In this chapter we will aim to provide a solid understanding of the theory and practice of game audio to allow you to apply these ideas to your own workflow and process. We will explore how to create immersive sound design for games by breaking down the tools and techniques used by professionals in the industry. While reading, keep in mind that designing the sound is only half the battle. An asset may sound great out of context, but in the game it might not be a good fit. Having an understanding and appreciation for how sound functions in games goes a long way in helping to produce quality and immersive audio for the medium.

Sound Effects

Sound effects are broken down into a few subcategories:

  • Hard/literal sound effects
  • Designed/non-literal sound effects
  • Foley (performed sounds)
  • Ambience

Hard Sound Effects

Hard sound effects, also called literal sound effects, are sounds that represent realistic objects or actions in game. These are common sounds that are easily recognizable. Hard effects include car engines, doors opening and closing, weapon fire, explosions, and wood creaks.

Designed Sound Effects

Designed sound effects, or non-literal sound effects, are sounds that typically stem from the imagination. They are abstract sounds that you wouldn’t naturally associate with the object or action that triggers them, like UI sounds. In reality, the action of moving a mouse over a button doesn’t have a sound tied to it. Designed or non-literal sounds are often found in sci-fi or fantasy games. Objects like space lasers, fantasy game special abilities, and cartoony bonks and boinks fall into this category. Getting familiar with the work of Ben Burtt, a sound designer noted for his work on Star Wars, WALL-E, and E.T., is a great way to get some inspiration for sci-fi and fantasy sound design.

Foley (Performed Sounds)

The idea behind Foley is that sounds are performed by sound-effect artists or Foley walkers. These sounds are recorded in synchronization (with linear media) to complement or emphasize a character, object, or action. In games designers use Foley techniques to create footsteps, armor, cloth movement, and prop handling among other things. Foley sounds are often recorded while the performer is watching a video clip of the gameplay or character animations.

It’s important to point out different uses of the term Foley. Film sound designers might insist that Foley only describes sound recording performed in sync to video or animation. In this pure definition, a library of footsteps shouldn’t be referred to as a Foley library. In game development you may find the term Foley used to refer to a blend of performing in sync with video for cinematics and recording sound effects without a video reference. Film sound designers would argue that this is simply sound-effect recording but nevertheless you may hear the term Foley used in game audio.

Ambience

Ambience or background sounds are the glue for the rest of the soundscape. They provide an immersive setting which gives players a sense of the space their character is in. Sounds in this category include nature sounds, real-world spaces, or fantasy spaces.

As sound designers are tasked with designing aural experiences that may not always reflect reality, they often take some creative liberties while designing the ambient sounds. For example, outer space is a common setting for games. In reality, the vacuum of space precludes sound waves from traveling. Yet sound designers still design appropriate sonic environments to serve the game and aid the players in their tasks. The idea is to create a believable and immersive atmosphere, even though the result may not be true to life. While sound designers have creative freedom during instances like this, the craft still requires quite a bit of knowledge and mastery of tools and techniques. Sound designers have to know what reality sounds like before they can craft fantasy sounds.

The Function of Sound in Games

As we have mentioned earlier, the method you choose to design a particular sound effect will be determined by its role or function in the game. Below are a few common functions of game sounds to keep in mind.

  • Provide sonic feedback to the player
  • Communicate emotion/set the mood
  • Provide a sense of space
  • Define realism through literal and non-literal sounds
  • Establish a sonic identity or brand
  • Establish structure, narrative, and pacing

This is not an exhaustive list, but these functions work together (and overlap in many ways) to increase the overall immersion of a game. In order to fully realize the immersion in a soundscape, these functional roles need to be understood. Although games are nonlinear, the techniques adapted from the linear world work well as a starting point for designing assets. Games come with other technical challenges, which we will continue to explore throughout this book. For the moment, let’s break down and explore some specific functions of sound in games.

Sonic Feedback

Sonic feedback refers to the communication of information to a player when she takes an action in game. The feedback is relayed via a sound effect that is triggered during or after the player’s input and typically provides the player with some information on the action itself. For example if a player swings a sword, she will likely hear a swoosh sound. If she swings the sword again and hits an enemy she will hear a swoosh as well as an impact. This aural feedback tells the player that she hit her target. This is a simple example, but games take this concept to varying levels of complexity as it is an integral part of the player experience.

A Heads-Up Display or HUD, a part of the game’s user interface, visually relays information to the player but may also utilize sound to offer feedback. These sounds may provide negative or positive reinforcement as the player interacts within the game. Similarly in-game dialogue can advance the story and provide the player with instruction for the next mission or objective. Other sonic cues can help the player anticipate a change in intensity. For example, as ambience darkens or music adopts a tense tone, the player will become suspicious of what is to come. When the sound adapts prior to seeing the action change on screen this helps the player anticipate and prepare for what is ahead.

Communicate Emotion/Set the Mood

Communicating emotion is something you should be very familiar with from the film world. Sound in games will work to set the mood and induce physiological responses during situational scenarios. This is especially true in games where the player character might be facing a significant risk. For example, a boss battle typically has heightened music and sound effect elements to inform the player of the challenge ahead. After a battle has been completed the sound drops down to a calmer tone to imply safety for the moment.

Provide a Sense of Space

Spatialized audio provides a sense of directionality and environmental setting. Whether the game is set in a realistic or fantasy setting, sound is responsible for immersing the player into the space. Sound can help the player recall settings or characters as well, and it helps them situate themselves in a scene. Spatial feedback is also necessary for the player to determine the size and geometry of each area of the game.

Define Realism through Literal and Non-Literal Soundscapes

Audio can be further broken down into diegetic and non-diegetic sounds. Diegetic sounds are emitted from objects within the game world. Non-diegetic sounds come from sources that are external to the game world. Sounds that the player character and non-player characters can hear in game can be considered diegetic: a door creaking as it opens, weapon sounds, footsteps, ambient insects, music playing from an object in game, etc. Any sound whose source you cannot find in a game scene and which the characters cannot hear is non-diegetic: underscore, user interface, narration, etc. An interesting third category is meta-diegetic: sounds that only specific characters can hear (telepathic communication, voices in your head, earworm music, ringing after a loud noise, etc.).

The use of diegetic and non-diegetic (literal and non-literal) sound stems from the film world. Similar to film audio production, game audio designers will blend a mix of diegetic and non-diegetic sounds in a scene. Diegetic sound will present what the player character hears while non-diegetic sound guides the player’s emotions. Defining realism through audio doesn’t necessarily mean the game world translates directly to the real world but that the game environment is made believable by the player through immersive visuals, narrative, and sound.

Establish a Sonic Identity or Brand

The strategic positioning of audio assets to reinforce the sonic identity or branding is something companies have been capitalizing on for ages. When you turn on an Xbox console, the logo startup sound is an excellent example of establishing brand identity. Brian Schmidt, sound designer, composer, and creator of GameSoundCon, crafted the original Xbox startup sound. This audio experience was not only sonically pleasing but was a technical challenge to implement due to the hardware limitations of the time. As a guest on the Twenty Thousand Hertz podcast (Episode #54: Xbox Startup Sound), Brian describes his experience getting over the technical hurdles.1

When a company decides to expand its market, the audio experience often needs to be revisited. For the premiere of the Xbox 360, Microsoft brought on Audiobrain, whose Michael Sweet,2 composer and sound designer, helped design a startup sound which set the logo’s sonic identity and also offered a detachable audio experience to be played at the end of marketing promos and commercials. With each new version of the console, the sonic logo adapts to the marketing and brand needs but still stands out as a recognizable audio experience.

Another example is the simple four-beep countdown sequence from the Halo series. This start beep has stuck with the series because it easily identifies the “3-2-1 GO!” countdown and has become a sound players expect to hear. If we break down the sound, it seems to be three very quick 500 Hz sine wave beeps followed by a 720 Hz sine wave played slightly longer to emphasize the “GO!”
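For readers who want to hear something like this, here is a minimal Python/NumPy sketch that renders a countdown in that spirit; the frequencies and timings follow our rough analysis above, not any official specification:

    import numpy as np
    from scipy.io import wavfile

    SR = 44100  # sample rate in Hz

    def beep(freq, dur, sr=SR):
        """A sine beep with short fades to avoid clicks."""
        t = np.linspace(0, dur, int(sr * dur), endpoint=False)
        tone = np.sin(2 * np.pi * freq * t)
        fade = min(256, len(t) // 4)
        tone[:fade] *= np.linspace(0, 1, fade)
        tone[-fade:] *= np.linspace(1, 0, fade)
        return tone

    silence = np.zeros(int(SR * 0.35))
    parts = []
    for _ in range(3):                         # "3... 2... 1..."
        parts += [beep(500, 0.08), silence]
    parts.append(beep(720, 0.4))               # "...GO!"

    countdown = np.concatenate(parts)
    wavfile.write("countdown.wav", SR, (countdown * 32767).astype(np.int16))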

An audio experience, whether intentionally branded or made iconic over time, doesn’t have to be a user interface or logo sound though. Weapons, vehicle sounds, and even ambient soundscapes can become a sonic identity that creates a link between the player and the game.

Establish Structure, Narrative, and Pacing

Sound (or lack of sound) can provide a narrative structure for the player. Games that have a distinct transition between scenes often employ sound effects that emphasize those transitions. Likewise, fading the ambience into a new ambience is very meaningful to the player because it offers a sneak peek of the environment that is about to be explored. It can even alert the player to potential threats. The opposite (a continuous ambience between scene changes) may signal to the player that the environment she is now entering is similar to the current environment. In horror games this schema is often exploited by adding threatening sounds to an otherwise non-threatening ambience, or by adding an unseen enemy to an area with a very “safe”-sounding ambience.
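When a transition does call for a fade, an equal-power crossfade keeps the perceived loudness steady while one ambience hands off to the next. A minimal sketch, assuming two mono ambience buffers at the same sample rate (middleware normally handles this for you, but the math is the same):

    import numpy as np

    def equal_power_crossfade(amb_a, amb_b, fade_samples):
        """Crossfade the tail of amb_a into the head of amb_b.

        Cosine/sine gain curves keep a^2 + b^2 = 1, so the combined
        loudness stays roughly constant through the transition."""
        t = np.linspace(0.0, 1.0, fade_samples)
        gain_out = np.cos(t * np.pi / 2)       # old ambience fades out
        gain_in = np.sin(t * np.pi / 2)        # new ambience fades in
        overlap = (amb_a[-fade_samples:] * gain_out
                   + amb_b[:fade_samples] * gain_in)
        return np.concatenate([amb_a[:-fade_samples],
                               overlap,
                               amb_b[fade_samples:]])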

The function of acousmatic3 sound is to provide foresight to the player. In film, when we hear a sound off screen we can’t adjust the camera to pan to the direction of the sound. In games the player not only can but usually will adjust her viewpoint to investigate an off-screen sound. By using acousmatic sounds we can draw the player to an important gameplay event, or an object which will aid in the completion of a puzzle or task. A strong example of directing attention through sound can be found in Playdead’s 2D side-scroller Limbo. At one point in the game the player will walk past a trap sitting up in a tree. A spider then attacks, and as its large leg stomps the ground a sound is triggered to signal the trap falling from the tree off screen. A sonic cue then directs the player back to the area where the trap fell. Without this sonic cue the player would have to rely only on the vague memory of the trap she walked past in order to defeat the spider. This particular use of acousmatic sound supports both the structure of the gameplay (the spider puzzle in particular) as well as the narrative (the order of events that transpire).

Designing the Sonic Palette

Just as visual artists choose a color palette before they begin their work, sound designers need to consider the sonic “colors” that will best paint the aural picture. Establishing a sonic direction prior to the actual design sets the path toward a unified soundscape. During the pre-production stage of development the sound designer can select the palette and decide on a direction. This is not usually a one-person job as the development team often has input. In Chapter 2 we covered the pre-production process and discussed questions to ask to define your overall aesthetic. The answers to these questions should inform the choices you make about your palette. Gameplay, story, and narrative all play an important role in these choices as well. To illustrate this let’s take a look at a few examples of sonic palettes.

Blizzard Entertainment’s strategy card game Hearthstone is an excellent example of a well-thought-out sonic palette. Before entering a match, the player is fully immersed into a tavern-like soundscape. Tavern patrons’ walla, wooden drawers, leather material, paper and books, and metal bits and chains are all elements present in the palette. After pressing the shop button the sound quickly takes the player through a wooden door with a little bell jingle to signal the entry.

A well-designed and implemented soundscape is defined by players not noticing the sound. Players should be so immersed in the game world that they don’t particularly notice any audio at all. However, with a poorly selected palette, players will pick out the sounds that bother them or don’t feel right in the scene. The wrong sound in the wrong place can cause the player to lose their immersive connection. While technical items like noticeable loop points and sloppy transitions can quickly break immersion, overloading or underloading the sonic palette can easily yield the same result.

Tonal Color

Music is often described by tonal color, but sound designers can use this to define UI (user interface), ambiences, and other sound effects as well. A well-crafted sonic palette will have a unified tonal color with similar qualities that tie the sounds together. You can think of these similarities as the tonal flavor.

Let’s look at a fictional FPS (first-person shooter) game with a pixel art style for the sake of this exercise. The menu music is a synth rock loop and the UI sounds were created from various switches clicking on and off. In game the player is greeted by realistic weapon sounds, no background ambience, and an orchestral music track. The change in music feels a bit off (but there must be a reason behind the switch, right?) so the player forges on only half immersed in the game world. Firing away at NPCs (non-player characters) and running around the battlefield feels a bit like a typical shooter, but the player intuitively notices that some sounds have a lot of reverb and others are dry. On top of that, the player’s footsteps have noise that fades in and out as each step is triggered. Suddenly, the realistic, yet low-quality soundscape is topped with an 8-bit retro level up stinger that signals the player’s progress in the game. At this point the player pulls back from the partial immersion and questions the sound choices. What is the sound palette trying to tell the player? The confusing mix of sounds is not easily enjoyable and does not always fit with the experience. Next, the player visits the in-game shop to upgrade weapons. A pistol upgrade is rewarded with a low-quality “cha-ching” sound and its tone clashes with the music. The entire experience is a mixed bag that doesn’t offer the player much immersion.

This scenario, while fictional, is not that far-fetched. This example can serve as a “things to avoid doing” reference during the sonic palette-building stage. For starters, there is a mix here between realism and low-fidelity sounds. The art is pixelated, so a lo-fi and retro palette might work well if it were ubiquitous throughout the game, but the inclusion of an orchestral gameplay track and the realism of the soundscape send mixed signals. The lo-fi sounds are also seemingly unintentional. Realism and lo-fi don’t usually mix. If the aim is realism, then the sound palette should be as high quality as possible. If the visual aesthetic is more abstract (as is the case with pixel art) there is more room for intentional synthetic or lo-fi sound palettes.

Less is More

KISS4 is an acronym for “keep it simple, stupid.” This principle has been around since the 1960s and it can be applied very appropriately to game audio. Imagine a game where the sounds and music are wall to wall (in other words everything makes a sound, and the mix is always densely packed). In everyday life noisy environments pull our attention in too many directions and cause us to lose focus, and the same is true of game scenarios with too much audio. When this happens, valuable information meant for the player will get lost in the chaos of the mix. Too much aural feedback means that none of it is breaking through the wall of noise. Important sonic details meant to influence the structure of the experience will be inaudible. By planning your palette ahead of time you can make practical use of the KISS principle to determine exactly how much or how little audio your game needs in each scene. More importantly, you can plan out what priority each of those sounds should take in the mix to ensure that nothing of value gets lost in translation. This applies to music as well.

Since we don’t have a soundtrack following us around in real life, it makes sense to strip back the game score and be mindful of the soundscape. The score and sound design can both provide the emotional setting and appropriate cues to the player without being too overt. This can be a difficult task for a content creator. Oftentimes we want to show the world what we are made of, which results in grandiose audio elements mashed together in a wall-to-wall masterpiece. As we now know, this is not ideal to support gameplay. Dialogue fighting with sound effects cuts the information pathway to the player and makes the words unintelligible. Music on top of other audio elements interrupts the immersion. Scenarios like this are to be avoided like the plague. Sometimes we need to put our egos aside and create a mix that is less outlandish and more suitable for the game.

The KISS principle can also be useful when it comes to layering sounds, plugin effects, and implementation (all of which we will cover later in the chapter). It feels good to create a complex system of asset implementation, and middleware certainly offers the tools to do so, but it is only worth doing if it supports the game experience. It can be helpful to ask yourself the following questions before designing any sounds: “Will this support gameplay? Will this provide the proper aural feedback? Is this essential to the immersive experience?”

Naughty Dog’s The Last of Us is a shining example of “less is more.” The intro starts out with a clock ticking and cloth Foley as the character Sarah lies in bed. Joel unlocks the door and speaks into his cell phone. Sarah and Joel talk about his birthday gift as the clock gently ticks in the background. Delicate Foley accents the character movements on the couch. Joel briefly turns on the TV and a low musical pad makes a short appearance. After putting Sarah to bed the game cuts to a scene where Sarah is awakened by a late-night call from her Uncle Tommy. With the music stripped away, the Foley moves to the background while the dialogue and beeping phone take center stage. The player doesn’t know what to expect but understands something isn’t quite right. An explosion outside interrupts the eerie silence after Sarah turns on the TV. The soundscape slowly builds as Joel and Sarah meet up with Uncle Tommy. The low string pad is reintroduced to score the tension.

This scene is effective because it is dead simple. The soundscape is almost entirely Foley, and the music is as sparse as the soundscape. Because the mix leaves room for the sounds that support the mood and narrative, the audience is able to project its feelings of tension as the scene develops. If the score were more complex, or the soundscape lacked the intricacy of the Foley elements, the scene as a whole would have had far less of an impact.

Visiting Artist: Martin Stig Andersen, Composer, Sound Designer

Sound Design: Less is More

Even though it’s sometimes tempting, and your teammates may persistently request it, adding sound to every object in a game often causes nothing but a cacophony of unfocused sound, obscuring valuable gameplay information. Next time you watch a movie, notice just how many things in the picture don’t make sound. This is a conscious decision made by the filmmakers aiming to help direct the focus of the audience. One of my favourite examples is the Transformers movies in which sounds of explosion and mass destruction are sometimes omitted entirely in order to allow the audience to appreciate smaller scale actions and sounds, such as lamp-posts being run over or a window smashing. In games we can use this trick in order to highlight gameplay-relevant sounds, and when doing so, not surprisingly, the player starts listening! When you’ve helped the player solve a couple of challenges by bringing the most relevant sounds to the fore, you’ve caught her attention! You may receive bug reports that certain sounds are “missing” but remember that you are the sound designer, not the graphics guy who populated the level with potential sound sources (without ever thinking about sound).

Choosing the Right Source for Your Sonic Palette

Effects plugins and other software are a great help when designing sounds, but even the best of such tools need a solid sound source to work their magic. Selecting the right source material is an important part of the process. In Chapter 2 we briefly discussed ways to obtain source material; in the following sections we will break down the process even further by exploring how to source from sound libraries, synths, and recordings. We will also examine ways to be creative about selection, editing, layering, and processing when filling out your sound palette.

Pre-Production Planning

In “Production Cycle and Planning,” Chapter 1 (page 18), we covered the basics of pre-production. Here we will discuss mapping out the sonic identity of the game with the development team. Identify the intended genre and create a list of keywords that fit the mood(s) of each level or area. Before creating assets ask some questions to determine how sound will be most effective:

  • What is the overall aesthetic?
  • How will sound provide feedback to the player in the game?
  • How will sound drive the narrative?
  • How will sound be used to reward the player?
  • If there are weapons in the game how will you ensure the player’s weapons feel satisfying and powerful?
  • How will sound set the mood or environment?
  • What will be the sonic brand?

There are plenty more questions that can be asked before getting started, but the basic idea is to understand the intent of the developer. By understanding what the developer is trying to accomplish, and what the game is meant to convey, you will have a better idea of how sound will fit into the game. Keep in mind that this process is very personal – some designers don’t need much information to get started while others like to really immerse themselves in the development process. In truth, each approach is valid and it changes based on the needs of the project and the sound designer. Over time you will develop an intuition about what games need and the intent of the developer.

Examining Aesthetics

When examining the aesthetic of the game think about environments, tech, character types, and gameplay. If you are working on a sci-fi game where the player commands spaceships, try to uncover how the ships are built and the mechanics behind them. There may be varying levels of ships where some are designed with the highest tech available while others might be pieced together from scavenged parts. The sound detail will be telling the story of each ship, where it came from, and where it is headed. In other words, the source you choose will serve as the foundation of the game’s narrative. If only high-tech source is chosen, then the scavenger ships will not sound believable within the context of the game. Conversely, if only metallic clinks and clanks are sourced for all ships, the high-tech warships might sound like space garbage.

In short, pay attention to the aesthetics of the game during pre-production and log as much information as you can about everything that makes a sound. This will inform your choices for source material, and you will be on your way toward an effective sound palette.

Research

Once you have an understanding of the game it’s time to do some research. Gathering references from other games and even movies in a similar genre is a great way to generate some ideas. A lot of great artists and designers draw their inspiration from other works. Mix engineers often use a temp track of their favorite mix in the genre as a guide while they work. Taking one or more existing ideas and combining them or experimenting with new ways to expand on them is a great way to generate new ideas. The point is to arrive at something new and unique without copying directly from your references.

Organization

During the bidding process the developer may have provided a general list of assets the game will require. Play through the latest build, if you have access, and identify additional assets that may be required (there often are plenty). If you are working as a contractor, it’s a good idea to create a spreadsheet to track your assets and share progress with the development team. Solid communication skills are an important part of the process. Always communicate progress and provide detailed implementation notes via spreadsheets and in implementation sessions where applicable. (For more information on the bidding process see “Business and Price Considerations,” Chapter 11, page 369.)

In Chapter 1 we directed you to the Sound Lab to check out the example asset list. Take some time to review it before moving on. While each team may have its own preferred method of organization, the asset spreadsheet should contain headers such as:

  • Sound asset/game event
  • Game location
  • Description
  • Loop
  • 2D/3D events
  • File name
  • Schedule/priority
  • Comments/notes

The spreadsheet could be shared over a cloud-based service like Google Docs or Dropbox Paper to allow for collaboration. The developer (probably) isn’t a mind reader, and will need the sheet to keep track of your progress for milestone deliveries, to provide feedback on delivered assets, and even to set priorities for assets. All of this organization will go a long way toward making your palette appropriate and effective. For more information about asset management refer to “Essential Soft Skills and Tools for Game Audio,” in Chapter 1.

Transitioning from Pre-Production to Production

As you plan for asset creation it can be helpful to build a list of source material that fits the game’s sonic identity. This source can be used as a building block for your sound effects. You might find your SFX library can cover some of your source needs, but you may have to record assets as well. Create a list of specific items that you can realistically record. Projects never have unlimited time or resources to work with, so think logistically. Planning all of this in advance allows you enough time to gather your source without bleeding into your production (asset creation) time.

Once you have a list of source material, you are ready to move on to the production phase.5 In this phase you will record, synthesize, or pull material from libraries to build your palette. In the following sections we will cover various ways to accomplish this.

Revisions and Reiteration

Creative work is subjective, and therefore feedback and criticism can carry a bit of bias. Regardless of how much effort you put into creating or mixing sound for games, feedback is inevitable, so be sure to plan time in your schedule for revisions.

Understanding sound as a subjective medium can help you more willingly set your ego aside when delivering work. This is something learned over time: you will get better at pouring your heart and soul into a creative work only to have someone pick it apart. Oftentimes, after rounds of feedback and revisions, the audio asset will sound that much better. In the end, teams build games, and being open to working with feedback will ultimately improve the game’s audio.

You won’t get very far if you fight tooth and nail over every bit of feedback you receive, so learning to digest and implement feedback is a necessary skill.

Sourcing Audio from Libraries

Library Searches

When searching sound libraries for appropriate source material start by thinking creatively about key words and phrases to search the metadata. Designers can accumulate hundreds or thousands of hours of audio from library purchases. Using generic search terms can leave you with far too many audio clips to listen to, and many of them will miss the mark in terms of direction. By using specific and unique search terms you can make things easier on yourself by narrowing the results. A search for “metal” will yield a wide range of generic audio clips. Be specific and try using phrases isolated by quotes to produce more useful results: “metal snap,” “trap,” “spring action,” “squeak,” “metal close,” or “metal impact.”

Adding Custom Content to Libraries

Another point to consider with sound-effect libraries is that audio designers add their own sounds to their library. This is useful because it allows them to search everything at once. The downside is that personal recordings can become a nightmare to sort through if they are improperly tagged and labeled. When adding recorded source material to the library, be mindful of how you categorize and name each and every file. A buzzing synth sound is best categorized by how it sounds rather than what it might be used for. “Energy_LowPitched_Saw_01” is a more useful title than “Fun_Buzz_01.” Avoid using vague file naming or taking the easy road when naming your sounds. It can be difficult to find the time for this in the midst of a busy day, but in the end you will save yourself a lot of time and a bad headache when you have to search for sounds.

It’s also good practice to leave your library intact and copy assets to the working project directory. Ideally you are working with multiple hard drives so your sound-effects library and your project sessions are on separate drives. When you pull a sound from a library and add it to your session it should be copied to the session so you aren’t overwriting the original library asset. The sound-effect library should be something you build and grow over time. You will certainly find use for sounds in multiple projects so keeping the raw library source intact is good practice.

Backups

When you are first starting out a simple backup solution might work just fine. But as you continue to work on larger and more complex projects, and as you build your library out, your backup solution should also evolve.

Sourcing Sounds through Synthesis

Synthesizers are an invaluable source for generating sound design layers. In Chapter 2 we discussed ear training and the programs and tools available to build this skill. A similar method can be used to train your ears for knowing which oscillators or parameters to use when designing synth-based sound source. Syntorial6 is recommended for tuning your ear to the fundamentals of synthesis and for a solid understanding of subtractive synthesis. Although the topic of synthesis can fill a whole book on its own, in the sections below we have outlined some basic ideas for generating source through synthesis.

Tools for Synthesis

There are many options when selecting a synthesizer for sound design, and each synth has its own strengths, weaknesses, and specialized workflow. Your “go-to” synth will depend on the type of sounds you are looking to populate your palette with, as well as your preferred method of design. Synths like Absynth are great for Match 3 genre game sound effects. We used a combination of Absynth, xylophone, and chimes when designing sound for Breatown Game Shop’s Sally’s Master Chef Story. Absynth also works well for magic or special-ability source layers in fantasy-game sound design.

Absynth’s semi-modular approach allows for some creative routing and effect chaining. Native Instruments’ Massive is a fantastic tool for ambient, UI, engine, humming, and buzzing sources. Omnisphere can also be a great tool for ambient sound design due to its intuitive user interface. However, as with all source material usage, make sure you are familiar with the license agreement. In a quick review of the Omnisphere agreement, it allows use in music but requires an additional license for sound design. Again, each project will call for a unique sound palette, which will inform your choice of tool or tools to use for synthesis.

In the past synths were limited to hardware models, and there were only a few manufacturers. Today there are many more hardware synths, but also a tremendous amount of software-based synths with a variety of synthesis types. This means that your options are plentiful. Keep in mind that it’s not about having all these tools. As a sound designer you will come to understand how to manipulate parameters in just about any synthesizer to create the source you are after. Your understanding of the fundamentals of synthesis, and your ability to quickly learn a synthesis workflow, will have a far greater effect on your sounds than any synth in particular.

Although the theory behind synthesis should be your primary focus, at some point you will have to select one or more synthesizers to work with. This can be overwhelming if you are new to the process. Start by investigating a few synthesizers and their architecture. Become familiar with the available components and eventually this will lead you in the right direction. Selecting a synth comes down to what you feel most comfortable working with and what best fits your needs. Every synth, whether hardware or software, has its own setup, interface, and working parameters. Most synthesizers offer a very broad range of possible sounds, so the one you choose should best reflect how you like to work and what feels intuitive for you. Keep in mind there is a range of affordable or free software options to get you started. It’s best to give some of those a try before diving into an expensive option.

Hardware vs. Software

Before we move on to synthesis, an important distinction needs to be made between hardware and software synths. These categories of synthesizer will change your workflow the most, so here we outline a few key items. First, hardware synths can be expensive, (usually) large, and often have a steeper learning curve. Many of them are analogue and lack the ability to save presets, which makes them harder to control. Despite this, some designers choose hardware options because they prefer the tangibility of the knobs and sliders as well as their lack of impact on CPU. Hardware synths often have a very “personal” sound as well, which it is argued can be difficult to emulate with software. However, other designers find they don’t have the room for hardware synths and are perfectly happy using a mouse or MIDI controller to edit the UI on a virtual synth. Software synthesizers are also great for learning the basics because they are cheap, and usually come with presets that sound great out of the box. Software synthesizers have the added bonus of almost always offering multi-voice polyphony (more than one note can be played at a time), which many reasonably priced hardware synthesizers do not have, as well as flexible or even modular architecture.

What Can We Do with Synthesis?

Synthesis is capable of emulating sounds as well as generating unique sounds. In Gordon Reid’s Sound on Sound article “What’s in a Sound?”7 he poses the question “What are harmonics, and where do they come from?” In the article he explains how the sound of many real-world instruments can be synthesized with a number of harmonic waveforms. He also discusses Fourier analysis, in which a unique waveform can be crafted from a set of harmonics. The key point here is that synthesis is capable of just about anything if you understand the way sound works.

To better illustrate this point for those new to synthesis head over to the Sound Lab (companion site) to review the fundamentals of synthesis. We will cover some theoretical basics, and key terms like oscillators, waveshapes, envelopes, and filters as well as provide additional educational resources.

Using Synthesized Sounds Alone

Human ears are great at matching a sound to visuals. When designing sound effects it helps to use source sounds that are captured in the real world so the sound can offer a bit of credibility to the player. This technique works well across many games, but there are specific instances or scenarios that require all-synthetic source. A good example of this is retro-style games with chiptune soundtracks. Synths can be used to generate retro-styled sound effects to go along with the chiptune background music. There are some retro synth emulators that do a great job of providing presets which recreate classic NES and other early gaming console chip sounds. Plogue chipsounds8 is a VST that emulates 15 different 8-bit sound chips with a nice amount of detail.

Having a synth with pretty close presets can be great when you are in a time crunch, but these retro sounds are fairly easy to create from scratch using basic waveforms like saw, triangle, and square, plus a noise generator. With any of these waveforms assigned to an oscillator, the attack, sustain, and release can be adjusted to find the right accent for the sound, and standard processing like reverb, delay, chorus, and phasers can give the sound a bit of space and depth since the raw waveform or noise will be dry. Most virtual synths have an arpeggiator, which will give the sound a bit of a melodic pattern. Lastly, a resonant low-pass filter will warm up the sound (see “Sourcing Sounds Through Synthesis” on page 58 for more information on generating source with synths).
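As a hedged illustration of that recipe, the sketch below renders a retro “laser” in Python: a square wave with a falling pitch envelope and a fast decay, then “warmed up” with a low-pass (a plain Butterworth stands in for a resonant filter here, and all values are our own illustrative choices):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfilt

    SR = 44100
    DUR = 0.3

    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    freq = 1200 * np.exp(-8 * t)               # pitch falls from 1200 Hz
    phase = 2 * np.pi * np.cumsum(freq) / SR   # integrate frequency to phase
    square = np.sign(np.sin(phase))            # raw square wave

    laser = square * np.exp(-12 * t)           # fast decay envelope

    sos = butter(2, 3000, btype="low", fs=SR, output="sos")
    laser = sosfilt(sos, laser)                # soften the harsh edges

    wavfile.write("laser.wav", SR, (laser * 0.8 * 32767).astype(np.int16))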

Using Synthesized Sounds with Recorded Audio

Other designers are probably using the same synths and sample libraries that you are, so you will want to rely on layering and processing to create unique sounds. The character and properties of fantasy and sci-fi sounds are subject to imagination, which gives you something of a creative license, but don’t rely solely on synth instruments for these otherworldly sounds. As previously mentioned, layering real-world or mechanical sounds with synthesized or synthetic source can add credibility for the player. Even though the sound is processed and blended, the listener can identify and feel comfortable with the sound.

The question is “How can we use synthesized source along with sounds captured from real-world objects?” The answer goes beyond the standard laser shot created with a single oscillator and pitch envelope. Synthesized source can be used in many ways, such as filling out the frequency spectrum of a recorded real-world sound by adding a bit of weight in the low end or packing more punch into the attack. Compressing a weapon sound doesn’t always give it the raw edge that is often sought after. Adding tonal elements from a pitched-down synthesized saw wave can ensure the frequency ranges are covered and add a bit more low-end weight to the weapon. Overall, adding a synth source to a mechanical recording can create a thicker and wider sound if that is what you are after.
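A minimal sketch of that layering idea, assuming a mono 16-bit recording (the file names are hypothetical):

    import numpy as np
    from scipy.io import wavfile

    sr, gun = wavfile.read("gunshot_dry.wav")   # hypothetical mono recording
    gun = gun.astype(np.float32) / 32768.0      # convert to floats in -1..1

    t = np.arange(len(gun)) / sr
    f0 = 55.0                                   # low fundamental for sub weight
    saw = 2.0 * ((f0 * t) % 1.0) - 1.0          # naive sawtooth (aliased, fine for a sketch)
    saw *= np.exp(-6.0 * t)                     # decay along with the shot

    layered = gun + 0.3 * saw                   # blend the sub layer in quietly
    layered /= np.max(np.abs(layered))          # normalize to avoid clipping

    wavfile.write("gunshot_layered.wav", sr, (layered * 32767).astype(np.int16))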

A “noise sweep” is a common synthesis technique which utilizes a raw white-noise signal and a resonant low-pass filter. Opening and closing the cutoff and resonance creates whooshes and impact effects that can be layered with a mechanical recording to add a bit of an otherworldly or sci-fi effect. For example, layering a recording of kicking a metal object with a noise-sweep-generated swoosh can be used for impact sounds in a robot-centric game.
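The noise sweep is easy to approximate in code by sweeping a filter’s cutoff across a block of white noise. In this sketch a swept Butterworth low-pass stands in for a true resonant filter, and the sweep shape and values are illustrative:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import butter, sosfilt, sosfilt_zi

    SR = 44100
    BLOCK = 256
    n_blocks = SR // BLOCK                       # about one second of audio

    noise = np.random.uniform(-1, 1, n_blocks * BLOCK)
    out = np.zeros_like(noise)

    # cutoff rises then falls: "opening and closing" the filter
    sweep = np.concatenate([
        np.linspace(200, 8000, n_blocks // 2),
        np.linspace(8000, 200, n_blocks - n_blocks // 2),
    ])

    zi = None
    for i in range(n_blocks):
        sos = butter(2, sweep[i], btype="low", fs=SR, output="sos")
        if zi is None:
            zi = sosfilt_zi(sos) * 0.0           # start the filter from silence
        seg = slice(i * BLOCK, (i + 1) * BLOCK)
        out[seg], zi = sosfilt(sos, noise[seg], zi=zi)

    out *= np.hanning(len(out))                  # overall swell in and out
    wavfile.write("noise_sweep.wav", SR, (out * 0.8 * 32767).astype(np.int16))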

Mixing Synthesized Material

Critically listening to and analyzing reference sounds is always a good way to ensure you are using the proper source in creating the sound you are after. Mix engineers often keep a reference track in the session to inspire the direction for the final sound. An oscilloscope offers a view of the waveform and its general amplitude envelope, while a spectrum analyzer reveals frequency content. Some DAWs and 2-track editors offer these tools, but they can also be acquired as VST plugins. With these tools a sound designer can try to construct a similar envelope, waveform, and frequency spectrum.
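If you don’t have an analyzer plugin handy, the same measurements can be roughed out in a few lines. This sketch assumes a mono 16-bit reference file (the name is hypothetical):

    import numpy as np
    from scipy.io import wavfile

    sr, ref = wavfile.read("reference_hit.wav")     # hypothetical mono reference
    ref = ref.astype(np.float32) / 32768.0

    # amplitude envelope: peak level per 10 ms window
    win = sr // 100
    env = [np.abs(ref[i:i + win]).max() for i in range(0, len(ref), win)]
    print(f"peak level: {max(env):.2f}")

    # frequency content: magnitude spectrum of the whole file
    spectrum = np.abs(np.fft.rfft(ref))
    freqs = np.fft.rfftfreq(len(ref), d=1.0 / sr)
    print(f"dominant frequency: {freqs[spectrum.argmax()]:.0f} Hz")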

When working with soft synths the source layers created via MIDI could be rendered or bounced out and imported to a new track in your session. Working with a rendered audio file will allow you to edit and process the layer to better combine it with your other source material.

Light compression can “gel” these mixed source layers together and make them sound more cohesive, a process generally referred to as “bus compression” or “glue compression.” It’s a rather simple process but can go a long way. When two or more sounds are run through the same compressor their transients are shaped similarly and the resulting sound feels more like a unified element.

Heavy compression isn’t necessary for this process: a very slow attack, quick release, and subtle gain reduction will provide a transparent process that gels your layers together.
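As a toy illustration of those settings (the constants are illustrative, not a recipe), a feed-forward compressor with a slow attack and quick release might look like this:

    import numpy as np

    def glue_compress(x, sr, threshold_db=-18.0, ratio=2.0,
                      attack_ms=60.0, release_ms=20.0):
        """Toy bus compressor; expects float samples in -1..1."""
        a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env, out = 0.0, np.empty_like(x)
        for i, s in enumerate(x):
            level = abs(s)
            # envelope rises slowly (attack) and falls quickly (release)
            coeff = a_att if level > env else a_rel
            env = coeff * env + (1.0 - coeff) * level
            level_db = 20.0 * np.log10(max(env, 1e-9))
            over = max(0.0, level_db - threshold_db)
            gain_db = -over * (1.0 - 1.0 / ratio)   # subtle gain reduction
            out[i] = s * 10.0 ** (gain_db / 20.0)
        return out

    # mixed = glue_compress(synth_layer + foley_layer, sr=44100)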

A lot of virtual synths have built-in effects such as delay and reverb. Try bypassing those effects and bouncing out the dry sound into a new track. Then process all the layers with the same or very similar reverb and/or delay in your DAW to help them sound cohesive when triggered together.

Layering a very dry synth source with a heavily delayed real-world recording will be difficult to gel using the compression method described above alone. Always think about your sound palette and the spatial properties your sound requires. Take time to listen to the mix of layers to ensure nothing sticks out or feels like it is from a different space.

In summary, synthesis is a great way to create source layers for all kinds of sound palettes. Engine sounds, explosions, UI, sci-fi, fantasy, energy, power sounds, and many more types of audio can be generated very believably using synthesizers. Going out and recording an actual explosion sound is dangerous if you try to do it on your own, and expensive when you hire someone to do it in a controlled environment. If the budget allows for the recording session then by all means go for it. However, most of us don’t have that luxury. By mixing layers of library explosions, some custom synthesized energy sounds, and original debris recordings you can create some truly amazing explosion sounds.

Sourcing Audio through Studio and Field Recording

In Chapter 2 we discussed microphones and field recording equipment. Here we are going to discuss some specific uses, ideas, and techniques for recording original source for your palette.

Take a moment to note the sounds that currently surround you. Experiment with objects in your current space to see how they might tell you a unique story. While making your bed, the flap of sheets could bring a winged creature to life. After baking, slamming an oven door shut could be a layer in a mechanical robot’s footsteps. When recording source, pretty much anything can be useful. Keep your ears open and think creatively!

Foley and Sound Design Props

In Chapter 1 we defined Foley as a process. Here we will explore recording Foley and sound effects to generate source layers for sound design. Effective Foley will sound as if it was recorded as part of the original production, not added after the fact. Finding the right source to record and using the proper techniques are core ingredients of a recording session. A bit of creative editing can also go a long way toward making Foley sync believable in a game.

There are an unlimited number of objects that could be used as sound design source or to recreate a sound for use in game. Here we are going to explore Foley and sound design props that we have used over the years to record source. This should get your gears turning by opening up an abundance of ideas for recording and creating sounds.

One fundamental tenet of Foley sound design is that an audio designer will often exaggerate a sound in order to make the scene feel believable. This means the object or action you are seeing in the visuals is not always used literally to record source. The sound of a sword plunging into an enemy was certainly not recorded by acting out the exact action. More likely it was recorded by stabbing vegetables or fruit. This is true even with subtler sounds. The sound of footsteps on snow may have instead been recorded with kitty litter to emphasize the “crunch” of each footstep. The takeaway here is that the best way to record source for a sound is to use your imagination and find sonic similarities rather than visual similarities. As sound creators we are interested in soundalikes, not lookalikes. It is also important to keep in mind that Foley and sound-effect recording is not magic by any means. It is really about being creative and building a mental database of interesting sonic characters by listening to and analyzing the sounds around you.

Fruits, Vegetables, and Foods

Produce makes for an excellent sound design prop.9 Vegetables and fruit can be used for fight scenes, gore, weapon impacts, and more. As you are preparing food or watching food being prepared, take note of the types of sounds you are hearing and experiment with those sounds. They can be used to recreate or simulate many real-life and fantasy sounds. In the Sound Lab (companion site) we provide additional ideas for Foley and sound-effect props and source. For now let’s explore some additional categories of props and source.

Household Items

There are many other miscellaneous items that can be used for Foley. If the fruit smashing wasn’t gooey enough for you, wet sponges or water-soaked dish towels can add more of a watery splash effect. There are plenty of items sitting around your house or studio that work very well for a Foley session. Don’t overlook anything! Remember that we are after soundalikes and not lookalikes, and it can be difficult to predict what an object will sound like when a microphone is placed in front of it. Keep an open mind and a critical ear.

The key is to experiment, try new things, and always be listening to the sounds around you. There really is no limit to the kinds of household props that will prove useful for recording source. A simple toy like a Slinky can be used for laser source material (see Figure 3.1).

Figure  3.1  (Left) Recording session with a Slinky using various household items as “amplifiers” and recording with a contact and shotgun mic. (Right) Recording session with Deep Cuts Gear (@deepcutsgear) “SOV” trucker hat trimmed with coins for jangly source. Photo: Armina Figueras.

Tools and Machinery

Garage sales, thrift shops, and flea markets are great places to source used tools and unique items. Vintage machinery such as adding machines, sewing machines, or dowel machines can provide interesting mechanical clicks and ticks to use as source. UI sounds and other mechanics are great places to apply this kind of source material. For instance, you can recreate the sound of opening a safe or lock box by processing these same clicks and clanks. Puzzle games that have interesting locks and treasure mechanisms could benefit from adding this level of detail.

Power tools like drills or electric screwdrivers produce servo motor layers for machines, robotics, or other motorized devices. Elevator and dumbwaiter motors can be designed with pitched-down power tool layers. There are numerous powered tools that can produce source for these kinds of layers. Compressors, industrial fans, chainsaws, power saws, and welders, to name a few, are all great starting points. Experiment with a mix of contact microphones and placements to give yourself a few perspectives to work with.

Clothing and Materials

Recording Foley in a home studio can produce very usable source material. In Chapter 1 in the Sound Lab (companion site) we touched on building a Foley pit. Here we will talk about how to use it to record effective source. Professional footstep Foley artists are known as “Foley walkers.” They are well practiced in matching the pace and feeling of the characters’ movement in games and film. Footsteps are recorded with the artist walking in place and positioned inside a Foley pit, which is filled with various terrains. When recording footsteps for games, walking away from the mic and then back to it won’t produce usable source. Walking in place in front of the mic will capture the proper perspective but it can be difficult to pull off because it is awkward walking in place without naturally stomping your feet. Take it slow and work on the heel-to-toe roll.

The type of shoe you wear for the Foley session should match the character’s footwear in the game. The terrain types throughout the game environment will also need to be matched. Hardware stores are a good source for obtaining tiles, turf, wood chips, stone, gravel, and more. A little patch of grass or a piece of broken concrete from around your home could also work. As a side note, professional Foley studios often dig out their ground terrain pits so that the footfalls don’t sound hollow from lack of depth.

The player character in the game Journey by thatgamecompany traverses miles of desert sand. The sound of the footsteps was created by poking fingers into a pile of sand.10 Using fingers instead of a foot allowed for a softer step to fit the soundscape. This is an important concept to understand: the Foley must match the aesthetic of the game. The point of Foley is to produce perfectly synced sound material. It cannot be perfectly synced if you are recording generic footstep sounds for a uniquely portrayed character. At that point you might as well use a sound library. Go the extra mile and pay attention to the details of the visuals so that your Foley can be unique and well synchronized.

Continuing from the above line of thought, to recreate non-human character footsteps try slapping a hand or slamming a fist onto different surfaces. By processing these sounds in interesting ways the result can be surprisingly effective. A large robot footstep can be created by slamming a metal door or stepping on a sturdy baking pan. To avoid the hollow metallic sounds of the pan, turn it over so the bottom of the pan is facing up and place some books underneath. This will ensure the sound has thickness and density to it, which is an important characteristic of the physics of any character. With the pan set on the floor, step with a hard-soled boot or punch down onto it with a fist. Be sure to protect your knuckles if you choose the latter.

Characters are usually outfitted with clothing and equipment. These kinds of accessory sounds should be layered with the footsteps to add more realism and detail. Leather and denim make great cloth-movement source when rubbed against themselves in front of a microphone. Additionally, a few feet of chain can be purchased at any hardware store and used for armor movement. The chains can be recorded by manipulating them in your hands or by wearing them around your waist or arms and moving around in front of the mic. Be mindful of the clothing you wear so you don’t capture unwanted cloth movement along with the chains. We highly recommend reading Ric Viers’ Sound Effects Bible: How to Create and Record Hollywood Style Sound Effects11 as an additional resource.

Unique Instruments

If you travel internationally you can find some really interesting handmade instruments from various locales. We have a collection of instruments from travels of our own as well as gifts from friends and family. Just because the intended purpose of the item is musical doesn’t mean we can’t use it a bit more creatively!

Idiophones (instruments that produce sound by vibration in the body of the instrument) are examples of musical instruments that can be used as fun and interesting sound sources. We once obtained a unique idiophone called a “dance rattle.” A dance rattle is a bamboo stick with dried coconut shells attached by plant fiber strings. When the rattle is shaken it produces a clack sound as the coconut shells bang together. This was used in a project which required skeleton bones breaking apart and dropping into a pile on the ground.

In general, percussion instruments and accessories are great for sound design source. Wood blocks and wooden mallets can generate tonal wood hit sounds. They can be used to accent movement in cartoon-style games or as UI sounds in learning apps or kids’ games.

The waterphone is a highly unique instrument to be used as sound source, and it has been used in horror films to create eerie and haunting background ambiences. It is an inharmonic percussion instrument which has a bowl at the bottom acting as a metallic resonator. A metal cylinder stems from the middle of the bowl and there are metal rods of various lengths and diameters attached to the rim of the bowl. The resonator (the bowl) contains a small amount of water, which makes the sound dynamic and ethereal. Waterphones can be played with a bow or mallet and used to create complex source layers.

Bowing

Once you have recorded most of the objects or instruments around you, try recording them again, this time armed with a cello bow. You will find the resulting sound to be very different when objects are bowed. Instruments that are commonly bowed and recorded for source are electric basses and other string instruments, cymbals, and various types of glass. Be sure to use rosin on the bow to create friction with the object you are bowing.

Field recordist Thomas Rex Beverly has captured the sound of bowing cacti, which is a very distinctive and organic source (see Figure 3.2).

Figure 3.2 Bowed cactus. Photo: Thomas Rex Beverly.

Visiting Artist: Thomas Rex Beverly, Field Recordist

Which Cacti Make Cool Sounds?

Generally, I look for cacti with strong, thick needles that aren’t too dense. This is more difficult than you would imagine. Most easily accessible cacti have short spines that aren’t strong enough to bow. However, after testing many cacti and after many failures, you’ll find the perfect cactus. You’ll take a violin bow to the cactus spine and hear guttural screeches with intense, physical energy and hear thick bowed spines growling like supernatural animals. Cactus sounds are incredibly soft and intimate in real life, but when recorded from two inches they morph into otherworldly creatures brimming with ultrasonic energy.

Recording Techniques

For Bowed Cactus 1 I used a Sennheiser MKH50 and 30 in Mid/Side. In Bowed Cactus 2 I used a Sennheiser MKH8040 and MKH30 in Mid/Side. The MKH8040 is rated to 50 kHz, so it picked up more ultrasonic content than the MKH50 did in the first bowed cactus library.

I positioned the microphones as close as possible to the spines and was careful to not stab the sharp spines into the diaphragm of my microphones while I was frantically bowing. With close proximity, I was able to capture as much of the short-lived ultrasonic energy as possible. Then, with endless experimentation I gradually found the sounds of supernatural cactus creatures!

Procedural

We won’t go into detail here because there is already a great book that covers procedural audio. We recommend Designing Sound by Andy Farnell12 as a reference for working with Pure Data; it’s also worth a look for those interested in Max/MSP. The idea behind the book is interactive sound design in Pure Data, which can be useful for games, live performances, and interactive experiences.

Electromagnetic Fields

All of the powered electronics around us emit electromagnetic fields, which can be captured as otherworldly or ghostly source recordings. Below we describe some tools for experimenting with the sounds of televisions, computer monitors, cell phones, electric toothbrushes, blenders, transformers, shredders, game consoles, and any other device you have access to. Not every device will produce a pleasing or useful sound, but part of the fun is uncovering the unexpected sources. When recording your mobile phone, be sure to capture the audio during standard processes like switching between apps and sending or receiving data. You will be pleasantly surprised by the sounds emitted from the device. Spectral repair plugins like iZotope RX can help clean up any unwanted noise or frequencies to make the recordings more usable.

Slovak company LOM has developed boutique microphones, including a collection that captures electromagnetic waves. The Elektrosluch by LOM will take you into a secret world of inaudible source material. The instrument is sold as a DIY kit or as a fully manufactured device, and all of the equipment is made available through LOM’s website in small batches. LOM also sells Uši electret condenser microphones, which set out to capture delicate sounds, including content above and below the frequency range of human hearing.

If you can’t spare the budget for the Elektrosluch, you can make your own device with inductive coils that convert changes in the electromagnetic field into electrical signals. A single-coil guitar pickup plugged into a DI or amplifier will also work well. Any old FM/AM radio will pick up electromagnetic signals too; in fact, that’s exactly what radios are designed to do! They will pick up actual stations of course, and plenty of white noise, but you can tune between stations and poke the antenna toward smaller devices emitting EM.

This type of source is widely applicable for sci-fi ambiences, alien ships, energy bursts, articulations, or the humming of more realistic objects in a game. A little bit of processing can go a long way toward making this source a more usable and polished asset since the raw recordings will already possess the otherworldly aesthetics.

Circuit Bending

Taking electronics or electronic kids’ toys and altering the circuits to generate new sounds is called “circuit bending.” These sounds are strange, but wonderful. By nature, most sounds generated by circuit bending are glitchy and noisy, but sometimes you strike gold and find a sound that is usable in one of your projects.

Before forging ahead into the world of circuit bending, you should have a basic understanding of electronics, circuits, and soldering. Start your experimenting with devices that don’t mean much to you, as one misstep can easily fry the circuits. Be cautious as you work with live electronics and take precautions to ensure your safety. Most importantly, never EVER circuit bend electronics powered from a wall outlet. Use battery-operated electronics ONLY.

Circuit bending is unpredictable. The point isn’t to control sound; it’s about finding new and unique sonic characters, and the sonic possibilities are endless. Pitch and speed shifting, automated patterns, and modulations are all performable bends. The voltage drop crash is a common bend that simulates the sound a device might make as its batteries die.

At its most basic, bending is just another way to find interesting sounds. You can read deeply about circuit bending in Nicolas Collins’ book Handmade Electronic Music: The Art of Hardware Hacking.13 If you aren’t into taking devices apart and bending, you can always try to distort or warp sound by sending signals through vintage outboard gear or software plugins.

Vocal Mimicry

It’s in our nature to make sounds with our mouths, so don’t underestimate the power of your own voice when sourcing sounds. Vocal mimicry can be a great way to quickly generate source for UI, creatures, wind, vehicles, and weapons. The pops, clicks, and clucks made by positioning the tongue and lips in certain ways are highly useful for these situations.

Let’s take a close look at the body’s process for creating sound. Air flows from the chest through the larynx and into the vocal tract, then exits the body via the mouth or nostrils. Speech and other sounds are created as muscle contractions change the shape of the vocal tract and push the airflow out into the atmosphere. The larynx contains the vocal cords, which vibrate. Muscles in the larynx control the tension on the vocal cords, determining the pitch of the sound produced. The nostrils, mouth, and throat act as resonators, much like the body of a horn, and as the resonators change shape the sound is shaped with them.

We apologize for the seemingly tangential biology lesson, but it’s necessary to understand the physics of how we create sound. Professional voice artists often study the muscle contractions and air-flow process to understand how to control the sounds they make. It’s just like understanding a musical instrument, effects plugins, or synthesizer architectures. If you really know the tool you can control the end result instead of shooting in the dark in the hope of generating a useful sound.

When tasked with creating subtle UI sounds, using your mouth and voice to generate sound source can be a lot quicker and easier than trying to tweak a synth. A good number of casual games use tongue clucks or mouth pops for notification sounds and button taps. To perform a mouth pop, suck your lips into your mouth with your jaw closed, then open your jaw quickly to generate the sound. The harder you suck while opening the jaw, the louder and more defined the pop will be.

Vehicle and weapon sounds can also be designed using mouth and voice source. A resonating buzz produced in the vocal tract can be a solid source layer for a vehicle engine or an RPG pass-by projectile. Although these vocal sounds might seem like they would stick out of a AAA-quality sound effect, they usually don’t. Remember that we are creating source; the sounds will be manipulated and layered later in this chapter in the section on “Sound Design Techniques.” Applying plugin effects like pitch, chorus, distortion, and reverb will also allow you to shape your vocal sounds so they match the visuals and don’t stick out of the mix.

To gain some vocal mimicry practice, position your tongue so it is touching the roof of your mouth with your jaw open. Once your tongue is in place, pull it down quickly to generate the pop or cluck. Next, make an “ah” sound and try shaping your tongue in different ways, alternating between a “u” shape and an “eeh” shape. This will decouple your jaw, lips, and tongue to allow for sounds that range beyond the vowels. Also try varying the position of your lower jaw and you should be able to sculpt the sound to your liking.

This may come as a surprise, but animals can make useful vocalizations as well. Animal sounds are not only usable for creature sounds; they are also great source material for vehicle and weapon sound design. The roar of a tiger or lion can be pitch shifted, time stretched, and blended in with engine sounds to add ferocity to the vehicle. Growls work well in vehicle pass-bys or engine revs to give them a bit more bite. In general, animal sounds add a very intuitive (and usually threatening) mood to well-crafted sound effects. We’ll discuss layering animal sounds in more depth later in this chapter in the section on “Sound Design Techniques.”

Field Recording

In Chapter 2 we discussed microphone types and in this section we will explore field recording for new and unique source.

Recordings captured outside of the studio are referred to as field recordings. These recordings are a crucial element of your sound palette, and often spell the difference between a generic soundscape and an immersive one. Common targets for field recording are ambiences, sounds found in nature, human-produced sounds that are difficult to capture indoors, large sounds like vehicles and technology, and electromagnetic sounds.

Field recording is an art form just like photography. When you are sitting out in nature with an array of microphones and field recorders armed to record, your headphones become your eyes as you monitor the lush soundscape that surrounds you. There are many challenges when recording out in the field that you won’t encounter in a studio setting. We will discuss some of them in the “Location Scouting and Preparation” section below. For now, let’s focus on the variety of source that can be captured and used in your sound palette.

Finding Sounds in the Field

Sound designers are always looking for unique and fresh sounds for their games. In the field there is a whole world of interesting sounds waiting to be captured. Some sound recordists capture audio out in the field to create libraries of SFX collections to be used by other designers. Field recordings can also be used with minimal processing to create realism in a game, or manipulated into an immersive and otherworldly soundscape.

FPS games like EA’s Battlefield require realistic sonic elements from battles off in the distance. Field recordings can truly capture the realistic background ambience required of games like this. There are a good number of re-enactment groups that might be willing to allow a recording session during one of their performances. If you know someone in the military you may alternatively request access to a training base to garner more authentic recordings. If you are lucky enough to have an opportunity like this, you will certainly want to have the right equipment. At a minimum you should bring several microphones to record at different distances. The audio team at DICE used field recordings of military exercises as ambience in Battlefield.14 More focused recordings of specific objects at the base were then used as details to fill out the soundscape.

Animals are a very common target for field recordings, but they can be unpredictable. Setting out to capture seasonal sounds like bird calls can be tricky depending on your location and the time of year. Game production doesn’t always adhere to nature’s schedule! Other difficult factors may also prove an obstacle to producing usable field material. You may scare off the birds if you come marching in with all of your equipment, for example, so it’s a good idea to research your subject for a more successful session. Perhaps visit the location once or twice before the session with minimal gear to see how your subject behaves and to check for noise or other obstacles.

Visiting Artist: Ann Kroeber, Sound Recordist, Owner of Sound Mountain SFX

On Being Present in the Field

To me the most important thing is to be present and all ears when recording. Have good sensitive microphones and recorder then turn gear on and be completely in the moment. Forget thinking about technique and expectations. Simply listen and point microphones where it sounds best. Find a way to get away from background noise and the wind.

When recording animals really be there with them, tell them what you’re doing and how the microphones are picking up sounds … you’d be surprised how they get that and what they can say to you when they know.

Ambience and Walla

Ambiences and walla are two more commonly targeted sounds for field recording sessions. Games generally need quite a bit of both, so the more distinctive and original the ambience and walla in your library, the better prepared you will be. Try starting out with local field trips and practice capturing some environment sounds. To continue improving your skills as a sound recordist, the trick is to always be recording. Just get out there and explore new areas to record. If you like to travel internationally, be sure to pack your gear so you can grab some local sounds for your library. Recording urban ambiences in different cities and countries will provide a wider library of ambiences for your collection. An understanding of your game’s sonic needs will define the recording format in which you capture the sound; mono, stereo, quad, surround, and ambisonic are all options for ambience recording.

Creativity in the Field

Regardless of the source you are looking to record, there are many sounds just waiting to be captured in the field that you could never recreate in a studio. Let’s say you have found a fence near a train yard. Connecting a contact mic to the metal fence or placing it inside one of the posts as a train rushes past can be striking. This shift in recording perspective makes a huge difference in the resulting sound and how you can apply it to your palette. If you have a few mics to work with it’s often a good idea to capture multiple perspectives at once.

Recording water and other fluid sounds may present a challenge. The fluid will sound different depending on the container you use. If you set out to recreate the sounds of a character sloshing through water you may run into some trouble. Filling a metal bucket with water outdoors and setting up a mic to record your hands splashing around sounds like a good idea, but the recorded audio will have a thin and resonant sound as it reflects off the metal material. This only works if the sound was intended for a character confined to a metal space. A better option would be to use a larger plastic container or a kiddie pool. You might also try lining the metal with towels to reduce the metallic resonance in the sound. Experimenting with the amount of fluid can help as well. Too little water will sound like puddles rather than shin-deep water. Too much water will sound like the character is swimming in an ocean. With field recording it’s best to listen critically, be flexible enough to try different approaches, and be prepared to think on your feet.

Getting Started

When selecting source to record in the field be mindful of your equipment and what it can handle. Wind can be a big factor whether or not you plan on capturing it in the recording. We will further discuss ways to deal with wind in the “Consider Conditions” section later in this chapter. If you have a noisy preamp in your field recorder, you may want to avoid trying to capture very quiet settings. Similarly, you will need microphones and preamps that can handle extremely loud sounds if you want to record rifle sound effects in the field.15 Practice makes perfect, so record as much as you can and analyze the results. You will eventually gain an intuition about which recording setups work best for the environments you record.

Microphone Choice and Placement

There is a whole host of things to think about to ensure you capture a usable sound. As stated above, the quality of the recording depends on the environment, technique, and equipment. Choosing the right mic, polar pattern, and sensitivity, setting proper gain along the input path, and monitoring those levels are all key. Mic placement could be viewed as even more important than microphone choice. Let’s explore some options.

Proper placement really depends on the sound you are trying to capture and the end result you are looking for. Placing the mic too far from the source may introduce too much room or environment sound into the recording. Close-miking can cause proximity effect, a bump in the 200 to 600 Hz range in the recording, though the proximity effect can be useful if you are looking to thicken up the sound at the recording stage. A bump in the low mids can be handled with EQ more easily than working to remove the sonic character of the room.
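To make the EQ option concrete, here is a minimal Python sketch (numpy and scipy assumed; mono float samples) of a standard RBJ-cookbook peaking filter, where a negative gain cuts the band. The function name and the 350 Hz center, -4 dB cut, and Q of 1.0 are illustrative starting points, not prescriptions.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(samples, sr, freq=350.0, gain_db=-4.0, q=1.0):
    """RBJ-cookbook peaking EQ; negative gain_db cuts around `freq`."""
    a_lin = 10.0 ** (gain_db / 40.0)            # linear amplitude at the center
    w0 = 2.0 * np.pi * freq / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], samples)  # normalize by a0, then filter

# Example: tame a low-mid proximity bump in a close-miked take
# cleaned = peaking_eq(recording, sr=48000, freq=350.0, gain_db=-4.0)
```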

Since there are so many choices when it comes to microphones, it can be difficult to know which one to arm yourself with. Knowing your source will help you determine the best mic, pickup pattern, placement, preamp, and use of pads or limiting. We suggest testing several of these combinations to determine the right fit; after some extended practice you will have go-to techniques to use in the field. Experienced sound recordists typically have their go-to mics picked out after years of experimenting. Some mics will color the sound in ways you may or may not prefer, so let your experience and practice in the art of recording guide you on your journey.

Consider Conditions

When choosing a mic, be sure you consider its limitations in regard to very cold weather, high humidity, wind, dust, and even very loud sounds. High SPL16 mics are best suited for capturing loud sounds like weapon fire or explosions. In high humidity, higher quality mics might fare best, as some are built with RF technology that helps them acclimate quickly to changes in climate conditions. After a humid session, the mic should be wiped down and placed in its wooden box with a desiccant bag to draw out any moisture. Protection against rain can also be tricky, as putting a plastic covering over the mic can degrade the recording quality.

Wind moving across the microphone capsule can ruin a recording. Manufacturers like Rode and Rycote make a variety of wind protection devices, and there are also plenty of DIY options, which we discuss in Chapter 1 on the companion site.

Research your mic before moving it into different conditions so you can be prepared to capture the best quality sound but also protect your investment.

The project’s budget will determine the equipment and location of the recording sessions. When the budget permits, equipment can be rented or purchased to handle all the needs of the recording. If you are just starting out and working on projects with smaller budgets you can make it work with what you have. As you work on more projects, your equipment list will expand. We suggest looking into renting mics before you buy to see if they are the right fit for your workflow.

Microphone Choice Considerations

In Chapter 1 on the companion site, we discussed different types of microphones and the pickup patterns along with ways to avoid wind and handling noise during a recording session. Here we will explore some practical uses.

There are typical mic choices, patterns, and common practices you will read or hear about in the community, but it’s always good to experiment and uncover new ways of capturing sounds. Microphone selection is a subjective process defined by each individual’s tastes and sensibilities. We have favorite placements and positions that make great go-tos under tight deadlines. When a project has some time in pre-production, it’s good to play with different polar patterns, frequency responses, and mic positions to experiment with levels, phase, and delays between microphones. Many factors affect the object being recorded and its environment, and these can change the end results. Here we will explore some choice considerations.

You are probably familiar with or have used dynamic mics at some point in your audio journey. Dynamic mics appear in a lot of live recording situations and are good for capturing loud source. Condenser mics offer the ability to capture smaller details but can be too sensitive for, and even damaged by, very loud sounds. A combination of the two can be useful for capturing specific source material. For example, a recording of a door closing could benefit from a dynamic mic capturing the low-end impact of the door meeting the frame while a condenser mic captures the higher frequency content like latch movement. The two sources can be blended during editing to produce a full-bodied yet highly detailed sound.

While a narrower pattern offers more direct capture of the source, omni patterns are a better choice when wind is an issue. Omni microphones also tend to have low self-noise, which is useful for recording very quiet sounds. Every recordist should know the basic polar patterns and the pros and cons of each. Once you understand your source and the expected outcome of the recording session, you can explore different microphone types.

A sensitive large diaphragm condenser mic with a narrow cardioid polar pattern is a great choice for picking up subtle nuances from certain sources due to its low noise floor. Microphones with a poor frequency response or a high noise floor will require more restoration during editing, and the resulting source may not be as processing friendly as higher quality recordings.

A shotgun mic is very versatile for sound-effects recording. We use the Sennheiser 416 and Rode NTG2 for animal recordings, gears and gadgets, doors, impacts, and more. A shotgun is great for recording source; its side and rear rejection works best for sounds that need to be captured close and without room ambience in the recording. It’s also a great choice for Foley source and SFX recording where small details need to be captured. Keep in mind the quality of the mic pre is important when recording quiet Foley or sound effects such as cloth movement. The noise floor of the mic pre may mask the quiet movement of softer materials like cotton. Of course, it is equally important to have an acoustically controlled room when recording quieter sounds.

Shotgun mics come in short, medium, and long formats, and in mono or stereo versions, which makes the choice even more confusing. A longer shotgun is often the best option for capturing distant sounds, with the highest off-axis rejection. A shorter shotgun will have a slightly wider pickup range but will still be great at focusing on the source. The longer shotgun isn’t always the easiest to maneuver around a session, however.

Pencil mics, or small diaphragm condensers (SDCs), are great for capturing high-end detail. They also offer a more natural sound, whereas large diaphragm condensers often color the sound. SDCs also have a fast transient response, which is great for producing brilliantly detailed source. We have used the Oktava MK12 with a hypercardioid capsule to capture delicate cloth movement that requires more detail in the sound.

Lavalier mics are commonly used for capturing voice while placed on the actor, but these small mics also work well for stealth recording. They can even be used for boundary-style recording by securing the lav to a reflective, non-porous surface.

Hydrophones are microphones made specifically for recording in water. They can be used to pick up animal sounds like dolphin and whale calls in large bodies of water. A hydrophone may not always capture what we expect to hear underwater as listeners, and the result can sound a bit thin to our ears. Mixing the hydrophone recording with a recording from a condenser mic above the water can work well if you apply a band pass filter to the mids. There are plenty of DIY options on the internet, or you can invest in a manufactured option like the JrF Hydrophone,17 which can be purchased for under $100 USD to get you started.
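As a rough sketch of that blend (Python with numpy/scipy assumed; both recordings mono, time-aligned, and at the same sample rate; the function name, band edges, and mix amount are placeholders to tune by ear):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def blend_hydrophone(hydro, above, sr, band=(300.0, 3000.0), mix=0.4):
    """Thicken a thin hydrophone track with the band-passed mids of a
    condenser recording made above the water."""
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    mids = sosfiltfilt(sos, above)          # keep only the mid band
    return (1.0 - mix) * hydro + mix * mids
```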

Contact microphones are piezo transducers designed to pick up sound from the object they are physically touching. The signal is derived from mechanical vibrations instead of airborne soundwaves. You can DIY (do it yourself) a contact mic, and there are budget-friendly manufactured options such as the Zeppelin Labs Cortado MkII. When using this type of mic you might be surprised to find that not every object you expect to resonate beautifully will create an interesting sound. These microphones are best for creating abstract sounds but can also be layered with airborne recordings to add an extra dimension to the sound.

Binaural and other spatial microphones are becoming more widely used with VR (virtual reality) and AR (augmented reality) audio and other immersive experiences because they capture sound similar to the way humans hear. Our ears and brains can perceive accurate spatial positioning in just about a full 360-degree sphere. These mics house multiple capsules in one body. The audio team at Ninja Theory used binaural microphones to record abstract voices to simulate a “voices in your head” effect, which creates an ASMR experience.18 A binaural mic can also be used to capture reference recordings, which can be useful for building accurate sonic spaces. Soundfield microphones are useful for capturing ambisonic audio, which mimics how sound is perceived from all directions. This can be very useful in building immersive sonic experiences in virtual reality. The price of binaural microphones varies dramatically, starting at $100 USD and reaching into the thousands of dollars.

Another interesting device for capturing sound is a parabolic mount. An omni mic is placed at the center of the polycarbonate dish, which focuses distant sounds into the capsule, making it useful for capturing long-distance sounds, birds, or sporting events.

While it’s a good idea to understand the pros and cons of microphone types and polar patterns, hands-on experience will allow you to use your ears to define your go-to tools for capturing different subjects.

Placement Considerations

Now that you have some considerations for different microphones to work with, you can begin experimenting with placement. In a single-mic scenario, start off by getting the mic very close to the source, about 3 inches (7.5 cm), then back it up as you monitor with headphones, using your ears to find the best location. If the sound is too narrow, switch to a pickup pattern with a wider range or back the mic up a bit. As an experiment, listen to how the sound changes at 6 inches (15 cm) and 12 inches (30 cm) from the source. Then try different mics and pickup patterns at each of those positions on the same source.

If you are looking to capture a stereo sound, start by setting up a pair of small diaphragm condenser microphones, preferably a matched pair, configured in an XY or right-angle (90-degree) position. This is a great technique for recording ambiences but can also work for a more direct sound when you are after more depth. The 90-degree angle doesn’t offer a super-wide stereo image, but it does minimize phase issues. Experiment with widening the mics beyond 90 degrees to increase the stereo width, but be mindful of the loss in the middle as you adjust the position. An ORTF position is a take on XY that allows for a wider stereo image without sacrificing the center. This technique uses a stereo pair of mics placed at a 110-degree angle from each other to mimic the way human ears hear. A stereo bar is used to position the mics and attach them to one stand or boom pole; marking the bar can help you quickly identify the XY or ORTF positions. Since the mics are small condensers, you can fit both into a blimp or use separate wind protection for each when you record outdoors.

Ambience recordings can benefit from the Mid-Side technique. This consists of a cardioid mic as the mid and a figure-eight mic as the side. The mid mic acts as the center channel and the side mic picks up the ambient and reverberant sounds coming from the sides. This technique produces a recording that needs to be decoded before the proper stereo image can be heard; the decoder is often referred to as an MS decoder. The recording produces two channels, but the side channel needs to be duplicated, with one copy polarity-inverted, and the copies panned hard left and right respectively. Changing the blend between mid and side in the mix offers control over the stereo width, which comes in useful when you want to adjust the amount of room in the mix. Another bonus of MS recordings is that they play back in mono without any phase cancellation (unlike some stereo recordings). MS shotguns are a great choice when you want to focus on one specific sound while capturing the ambience around it without committing to a true stereo recording.
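The decode itself is simple arithmetic, which is worth seeing once: left is mid plus side, right is mid minus side (the polarity-inverted copy), and scaling the side signal before the sum controls width. Here is a minimal numpy sketch, assuming mono float arrays of equal length; it illustrates the general MS matrix rather than any particular decoder plugin:

```python
import numpy as np

def ms_decode(mid, side, width=1.0):
    """Decode Mid-Side to left/right stereo. width=0 collapses to mono
    (mid only), 1.0 keeps the captured image, >1.0 exaggerates it."""
    s = side * width
    left, right = mid + s, mid - s          # the minus is the polarity flip
    peak = max(np.abs(left).max(), np.abs(right).max())
    if peak > 1.0:                          # guard against clipping after the sum
        left, right = left / peak, right / peak
    return np.stack([left, right], axis=-1)
```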

Binaural mics are often attached to a dummy head to position the mics at ear level and capture sound as humans hear it. There are also wearable mics that attach around the sound recordist’s ears; just be mindful of breathing and head movement when recording this way. Ambisonic mics capture four-channel, 360-degree sound information that positions sound not only on the plane of our ears but also above and below our head. These techniques offer greater immersion as they give a more realistic sound based on the way we hear naturally.

When using a lavalier as a boundary mic, place it as close as possible to the surface without touching it. This lets you capture a full sound: the reflection off the surface doesn’t have enough time to go out of phase because the gap between the surface and the mic is so tiny. It’s not a technique you’ll use all the time, but it’s a useful one to know. For example, recording ice-skate blades scraping across ice can be done by attaching a lav mic to the boot, but you can also attach it to the plexiglass wall surrounding the rink to try out the reflective “boundary” technique.

A contact mic can be placed directly on the resonant body. Attach the mic with double-sided sticky tape, sticky tack, or gaffer tape. Do some test recordings as you secure the mic to the object, since too much tape can deaden the resonance you are trying to capture.

Regardless of the placement you choose, having a backup mic in position in case you bump the main mic or clip the input signal is useful. A backup mic can also offer a slightly different color to the sound so you can decide in post-production which to use.

Input Gain Considerations

The microphone is connected to a recorder in order to capture the soundwaves. Poor preamps, input levels, and sample rate/bit depth settings can also make or break the recording.

Remember, you want to choose a high-sensitivity mic with low self-noise. Self-noise is the noise the mic’s own circuitry introduces into the audio path, and it results in an audible hiss when capturing quiet sounds.

Ensure you have phantom power set up if your mic requires it. Do a test recording to check levels ahead of time. In digital equipment, 0 dB19 is too loud and could lead to clipping of the signal. It’s best to leave headroom of around -6 to -12 dB. A lot of field recorders make checking levels visually easier by including a marker in the middle of the meter.
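The arithmetic behind those meters is worth internalizing: level in dBFS is 20·log10 of the peak sample magnitude, with full scale at 1.0. Below is a small numpy sketch that checks a take against a -12 dB target; the function names are illustrative and the target is just the conservative end of the headroom range above.

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level in dB relative to full scale (0 dBFS = clipping point)."""
    peak = float(np.max(np.abs(samples)))
    return 20.0 * np.log10(max(peak, 1e-12))   # floor avoids log10(0)

def check_headroom(samples, target_db=-12.0):
    level = peak_dbfs(samples)
    if level >= 0.0:
        return f"{level:+.1f} dBFS: clipped -- retake"
    if level > target_db:
        return f"{level:+.1f} dBFS: hot -- back off the input gain"
    return f"{level:+.1f} dBFS: ok, {-level:.1f} dB of headroom"
```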

When recording quiet sounds the tendency is to boost the gain which, in turn, introduces system noise. This is where gain staging comes into play. Consider the output level of the mic; switching to a higher output mic might be necessary if you can’t make the source any louder. Additionally, you can try moving closer to the source, and remember you can achieve a bit of gain with compression, limiting, and EQ in post-production.

In-line mic pres like the Triton Audio FetHead or sE Electronics Dynamite have ultra-clean, high-gain preamps. When used on a low-output mic they can save the day by providing an extra boost of clean level to quiet source.

Performing Sound with Props

The idea behind performing Foley or sound effects with props is to tell a story with sound by capturing movement so it best fits the visuals or animations. For example, a character walking through a haunted mansion on a wooden floor could benefit from wood creaks randomly triggered along with footsteps on wood. To really achieve a believable result, the wood creaks should be performed and captured. Let’s say you have a wooden floor that creaks with the slightest movement. Standing with your feet firmly planted about 12 inches (30 cm) apart, start to shift your weight from left to right. Adjusting the speed at which you shift your weight will produce drastically different creaking sounds.

Later in this chapter we will discuss recording footstep source in “Sound Design Techniques.” There we will examine how Foley artists, or Foley walkers, produce varied heel-toe footfalls with some scuffs and minor weight shifts. A sound-effect recordist new to performing Foley may tend to walk with a flat footfall, which generates a stomping or plopping sound that isn’t very convincing. Others new to the field may overcompensate, resulting in an overdone heel-to-toe roll.

The same can be said for other in-game actions that require synced sound. If a character tosses papers off a desk, capturing the sound of simply dropping paper on the floor won’t satisfy the player. Capturing the hand sliding the papers across the desk should be the first action you start with. Having animations or video capture to work with will allow you to re-enact the motion and produce believable sonic movement. The idea is to actually perform the sounds: control the sonic output of the object you are working with to get the best fit for the animation.

We have discussed sound being exaggerated to create a believable experience for the player or listener. An example of this is picking up an object. In real life we often don’t hear a sound when picking up an object but in game audio we may enhance the action to add drama to it.

Let’s look at this in another scenario where we have the player character in a game outfitted with a backpack. What does it sound like when you wear a backpack? It may not sound like much at all but in game we want to exaggerate the sound a bit to accent the character’s movements. To achieve this you could fill a backpack with some books and jingly items like a set of keys and shake it in front of a mic. Try moving and shaking the backpack in various ways and at various speeds to generate source that will best tie in to the visual action.

If you are performing with the props be sure to tie back your clothing or wear something that has less swish or swoosh when you move around to avoid introducing unwanted sound into the recording. Also be mindful of your breath as you get close to the mic to perform with your props.

Summary

The idea is to capture the sounds or performance with enough duration that you have plenty to work with in editing. If you are performing events, varying their pacing and intensity will offer more to work with in the editing process. For example, when recording footsteps, performing slow, medium, and fast-paced steps along with soft, medium, and hard impacts will offer enough variety to match various needs in game. The same principle applies when recording a door opening and closing. Open the door gently and then more aggressively so the handle and lock mechanism sound different in each approach. A door creaking open can also benefit from this process, as some creaks may need to be quick and high pitched while others need to be long and low pitched. Always record additional takes so you don’t have to worry about a lack of source in the editing process. Finally, it is good practice to “slate” your takes by recording your voice calling out some descriptors to later identify the recording.

Location Scouting and Preparation

Location, location, location … choosing an adequate location to record is a vital part of the process. There is a huge list of obstacles that can get in the way and spoil a recording session. Preparation to reduce or eliminate unwanted noise and smooth out logistics will ensure that quality source is produced.

Noise Sources

If you are recording in your home studio you will have to think about both outdoor and indoor noise sources. Here we’ve listed a few, starting with some of the more obvious ones and moving on to others that are less obvious.

Outdoor

  • Traffic and vehicles
  • Landscapers (lawn mowers, leaf blowers, snow blowers)
  • Humans (conversation, coughing, kids playing)
  • Air traffic (proximity to an airport)
  • Animals (crickets, dogs, cats, squirrels scratching, birds, etc.)
  • Weather (wind, rain, leaves rustling, branches falling, humidity, thunder)
  • Time of day/season

Keep in mind that a lot of these noises can be used as source as well.

Indoor

  • Other humans or pets
  • TV, radio or phones (these can also be listed as outdoor if you have neighbors who enjoy watching or listening at inconsiderate volumes)
  • Faucet leaks
  • Tiled or empty rooms
  • Pipes, heating and cooling systems or units
  • Creaking floors, sticky floors
  • Sump pumps
  • Cloth movement from your own outfit
  • Breathing
  • Clocks
  • Fans on computer and other equipment
  • Refrigerator humming
  • Reflective surfaces
  • Frequency build-ups

Scouting the Location

The ideal recording location is an extremely quiet room with a very low noise floor and a balanced frequency response. Unless you are in a dedicated, acoustically treated space it will be difficult to find a dead space to record in. To avoid surprises, prepare by scouting the location first. Sit quietly in the space you’ve chosen with your recording equipment and a pair of closed-back headphones and monitor the feed, listening carefully for any unwanted sounds. Be sure to listen long enough to give less frequent sounds a chance to crop up. Also record a clap of your hands in the area and listen for reflections, flutter echo, and any frequencies that tend to build up.

Scouting external locations also includes finding interesting places to record legally. Nobody needs to take on jail time for grabbing some ambience. Train stations, automobile repair shops, junk yards, shipping container lots, construction sites, and many other public locations can yield amazing source material. Seek out small businesses that can provide unique source and make your case with the owners. You’d be surprised how generous people can be with their time and resources when approached with a professional and kind attitude.

Locations to Avoid

Avoid recording in empty rooms, especially if they are symmetrical. The lack of objects to absorb sound reflections will leave you with too much room sound in your recording. This can give recordings a tinny flavor, or cause the build-up or drop-out of certain frequencies. Filling the room with absorption and diffusion materials can help reduce these issues. A full bookshelf is an effective natural diffuser and can provide some degree of absorption. The books won’t completely fill the space however, so you may end up with absorption at one frequency and none at others. Researching and testing your indoor space will take some time, but it will save you time cleaning up recordings later on.

Bathrooms are even worse for recording than kitchens because the entire room is usually one giant reflective surface. The idea is to find a place to record clean source that offers numerous processing options afterward. By recording in a heavily reflective room you are limiting those options. Of course, if you are after the reverberant sound of a bathroom then by all means give it a go. Just be aware that reverb is easy to add after the fact, but when it is “baked” into the recording it takes a lot more effort and patience to remove it.

Location Preparation

It may be necessary at times to record various materials in a location that is not ideal. In situations like this, consider the reflective surfaces and added noise around you before pressing the record button. In a kitchen for example, you’ll need to unplug the fridge to reduce hum, but try to expedite the recording as much as possible to avoid spoiling your groceries. Use towels to line reflective surfaces to increase sound absorption and decrease reflections. A reflection filter or baffle situated around your mic and sound source can also be useful. They can be purchased online at relatively low cost. If you are crafty, you may be able to whip up your own in a DIY fashion. Hanging a blanket over two mic stands or over a door can provide some further sound isolation. Pretty much any thick and malleable material (blankets, comforters, pillows, jackets, etc.) can be set up or hung around the room to act as a makeshift acoustic treatment. Pillow forts were fun to make as kids, and now you can build them for work!

Outdoor Locations

Recording outdoors requires a lot more planning because you have far less control over noisy sound sources. Sometimes it’s possible to find a quieter space to record in if you do a bit of exploration; for outdoor ambiences you may find a quieter spot in the woods rather than out on the grass by a soccer field. When the source of noise is intermittent (cars driving by, or a crowd of pedestrians) you may also be able to simply wait it out, or chop out the extraneous sounds back at the studio.

Recording quieter soundscapes outdoors can be even more troublesome. If the noise floor is louder than the sound source, the recording will be unusable. Environment, microphone technique, and equipment are all factors that especially impact outdoor recordings. When capturing quiet soundscapes, any background noise at all will interfere with the usability of the audio in game. To get around this, the sound recordist must have the patience and willingness to stake out a location and find the quietest of times. This can take hours or days, or even require relocating to a different spot. Signal-to-noise ratio can be a problem when capturing quieter soundscapes, since quieter sounds require you to increase the input gain quite a bit. A large diaphragm condenser mic will offer a lower noise floor, and adding a low-noise, in-line microphone preamplifier could offer about 20 dB of clean gain. You can also try a different microphone if the mic you chose has a low output. A recorder with cleaner preamps will likewise help you capture a higher quality recording of quieter sounds.

Surprise Sounds

Although preparation is extremely important most of the time, there are situations where you will be pleasantly surprised by a completely unpredictable sound. If you are ever caught by a captivating sound and you don’t have time to prep, capture it anyway. Having a noisy recording is always better than having no recording at all. That doesn’t mean it’s okay to be a sloppy recordist and “fix it in post,” however. Do your best to produce the cleanest, driest recording possible, and fix only what you need to afterwards.

Miscellaneous Factors

There are a number of miscellaneous and unpredictable factors that can get in the way of a quality field recording. Equipment can malfunction and add unwanted noise or hum into the recording. Stomach gurgles can strike if the wrong food (or no food at all) is eaten prior to the session. Humidity can dampen the high frequencies or even damage the microphones. Similarly, a very windy day can make it tricky to capture quality source. Luckily, most of these are predictable and preventable. Bring backup equipment, eat a nutritious meal beforehand (not a gas station burrito), and check every weather app you can to find a location where the conditions will be right for recording. Even if wind is unavoidable, using a windscreen can reduce its effect.

Here is a list of things to consider when recording.

  • Pre- and post-roll a few seconds for each new track.
  • Slate your take with enough detail to work with in post.
  • Capture more source than you think you need and include variety.
  • Monitor and spot check levels (it’s good to take monitoring breaks by taking off the headphones and listening to the environment in real life).
  • Experiment with mic placement and pickup pattern.
  • Consider the environment and work to protect your equipment.
  • Think outside the box.

Processing Field Recordings

There are a vast number of processing techniques you can use to clean up audio after the session is through. If you don’t own a windscreen, wind can often be reduced or removed from a recording by using a dynamic EQ (low shelf) and spectral repair instead. A de-plosive plugin can also help.
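A true dynamic low shelf only ducks the lows while a gust is present, which is why it beats a static filter; still, a plain high-pass is a useful first pass and shows the idea. A sketch in Python (scipy assumed, mono samples), with the function name and cutoff as tune-by-ear placeholders:

```python
from scipy.signal import butter, sosfiltfilt

def reduce_wind_rumble(samples, sr, cutoff_hz=80.0):
    """Static high-pass to roll off low-frequency wind rumble."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfiltfilt(sos, samples)   # zero-phase, keeps transients aligned
```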

There are also many ways to creatively process field recordings into otherworldly sounds. Martin Stig Andersen, composer and sound designer of the brilliant soundscapes in Playdead’s Inside and Limbo, uses low rumbles and pulsing sounds to envelop the player. These sounds are manipulated in a way that creates such an ominous soundscape that players feel tethered directly to the game world.20

You can achieve similar results by experimenting with some field recordings from earlier in this chapter. For example, taking a recording of crickets or cicadas and running it through various effects like delay, tremolo, or a granular pitch shifter can produce an ambiguity in the sound that will make the non-literal atmosphere almost “come alive.” Players will consequently feel immersed in the detail that these techniques add.
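For instance, a tremolo is nothing more than amplitude modulation by a low-frequency oscillator. A minimal numpy sketch (mono samples; the rate and depth values are arbitrary starting points):

```python
import numpy as np

def tremolo(samples, sr, rate_hz=5.0, depth=0.6):
    """Pulse the amplitude with a sine LFO. depth=0 is bypass;
    depth=1 fully gates the signal at each trough."""
    t = np.arange(len(samples)) / sr
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
    return samples * lfo
```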

Another way to really mangle your field recordings is with a sampler. With the recording as an oscillator you can transform the sound through filters, modulation and envelope shapes to create surreal pads and granular tones.

Don’t forget about pitch shifting up and down to get a sense for how your recording will respond. While extreme time stretching can introduce artifacts into the sound it might just be what you are looking for if the game’s soundscape calls for a non-literal steampunk atmosphere.
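The simplest version to prototype is the varispeed or tape-style shift, where resampling changes pitch and duration together; independent pitch and time control needs a phase vocoder or granular engine instead. A minimal numpy sketch for a mono recording:

```python
import numpy as np

def tape_pitch_shift(samples, semitones):
    """Varispeed pitch shift: shifting up shortens the sound, shifting
    down lengthens it, exactly like changing tape speed."""
    ratio = 2.0 ** (semitones / 12.0)       # frequency ratio per semitone
    old_idx = np.arange(len(samples))
    new_idx = np.arange(0.0, len(samples) - 1, ratio)
    return np.interp(new_idx, old_idx, samples)
```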

Later in this chapter we discuss various ways to use effects processing as a sound design tool. There you should experiment and practice with your field recordings by running them through various effects and chains of effects. Be sure to play with the order of effects in the chain and take note of how running one plugin into another changes the final sound.

Field Recording Summary

To summarize, selecting and preparing a location is an important factor in how your source recordings will turn out. To produce an effective sonic palette, you will need quality source. To get quality source, all of the above preparations and considerations should be taken seriously. Whether you are recording indoors or outdoors there will be various noise sources to contend with. By researching the location ahead of time, listening with recording gear, and applying some of the sound absorption techniques we have mentioned, you should be able to grab some high-quality, low-noise audio source for your palette.

Discover and record sound sources to use as source layers and to recreate sounds. Practice location scouting, preparation, and mic placement.

Designing Sound Effects

At this point we have covered building a sonic palette as well as sourcing material via libraries, synthesis, and recordings. In this section we will explore ways to take source material and use it to produce specific sound assets for games.

Getting in the Mindset

Have you ever asked yourself when “creative sound design” is appropriate? Do realistic graphics call for outside-the-box thinking when it comes to designing sounds? Or do they require perfect realism all the time? In truth, fantasy and sci-fi aren’t the only genres that rely on experimentation to deliver a quality audio experience. Even the most realistic AAA graphics require unique and innovative approaches to designing sounds in order to satisfy the needs of the game, and create an immersive experience.

When starting a project you will almost always have some visual assets or design documents to work with. Use these assets to begin thinking creatively about how you want to design the sounds. When you have access to a build of the game, take video capture and import it into your DAW. This will allow you to quickly see and hear whether your designs fit the visuals. In other words, it will streamline the creative process so that you can easily evaluate your work. You will find, more often than not, that the more inventive your sound design is, the better it will suit the visuals regardless of graphical style. Games almost always require an original sonic personality for the player to feel immersed, so don’t undervalue the merit of creative design.

The video capture is purely for design inspiration and timing, so keep the interactive nature of games in mind. Some animations in game may require multiple parts exported from your DAW later on during implementation, while others may require all the parts baked into a single sound file. Understand that the creative aspects of design are separate from implementation, so make it a point to work out with your programmer how you will deliver files after your design direction is approved.

Deconstructing Sounds

In order to design sounds from scratch, we must first be able to deconstruct them. This process is vital to a sound designer’s technical ability because it trains the ear to listen for details in the sound that most people will overlook. It also triggers our brains creatively to think of how we can begin to build new sounds from scratch using our source audio.

Every sound can be broken down into three categories: physics, material, and space. Because all sound is fundamentally oscillating air molecules, the physics of a sound is incredibly important. The physics of a sound will tell us how heavy the sound source is, how fast it is moving, how large it is, and anything else related to how it interacts with the environment. A ball flying through the air has a definite mass, speed, and size. In turn, this will impact the sound it makes. The larger and heavier an object is, the more likely that lower frequencies will be present when it is tossed through the air. Without those low frequencies our brains will tell us the object is tiny and light.

The object is also made of a specific material, which affects the sound as well. Let’s use a ball as an example. If the ball is made of metal it will fly through the air with very little wind resistance compared to a ball of yarn. The effect becomes more pronounced if you imagine the ball actually landing somewhere. If it lands in the palm of someone’s hand, the metal ball will have a sharp attack and a metallic resonance. The ball of yarn will be much gentler, with a soft and fluffy attack. If you were to compare waveforms of these examples you would notice a clear difference at the moment of impact. To add another level of detail you might also take into consideration the material of the object the ball hits. In this case the ball is hitting the skin of the “catcher’s” hand. In a game, it might land anywhere, so all possible materials must be accounted for in the resulting sound.

Finally, our hypothetical ball also occupies a particular position in space. There are two ways we can look at the space our ball occupies. First, we can see it as a standalone sound effect having width, height, and depth in the stereo field. In sound we can create the illusion of three dimensions in a two-dimensional space. Humans subconsciously equate higher pitch with higher vertical placement in a space, so we can define height through pitch. Width can be achieved by panning the sound or layers within the sound, which is particularly useful for adding movement. Finally, depth defines how close or far away the object sounds. Louder and brighter sounds will seem closer, while sounds lacking higher frequency content will appear further away. More reverb on a sound will also push it back in the mix.

Second, since we are going to implement our sound effects into a game engine, we can define the process of positioning a sound effect in a game as spatialization. Spatialization is largely dealt with in the implementation phase of development (see Chapter 8), but for now think of it in terms of the audio engine applying panning, volume, and reverb to an object. The farther away our ball is, the lower the volume of the sound it emits. If there are reflective surfaces in the environment, we will also hear more of the reflections and less of the original dry sound source. Depending on the nature of the game, these properties may need to be baked into the asset or left to the audio engine. For example, sound effects attached to a first-person character will require the sense of space to be baked into the sound, while a third-person perspective may call for mono assets that allow the audio engine to control the spatialization.
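As a toy illustration of what the engine is doing under the hood, the sketch below derives a gain and pan value from an emitter’s 2D position relative to the listener. Real engines use configurable attenuation curves, occlusion filters, and reverb sends, so the inverse-distance rolloff, hard cutoff, and function name here are all simplifying assumptions.

```python
import math

def spatialize(x, y, min_dist=1.0, max_dist=50.0):
    """Return (gain, pan) for an emitter at (x, y) relative to a
    listener at the origin. pan runs from -1 (hard left) to +1."""
    dist = max(math.hypot(x, y), min_dist)
    gain = min_dist / dist if dist < max_dist else 0.0  # inverse-distance rolloff
    pan = max(-1.0, min(1.0, x / dist))                 # lateral offset drives panning
    return gain, pan
```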

Reconstructing Sounds

Now that we have deconstructed the ball sound, let’s focus on reconstructing it to be used as an audio asset. To do this we must decide on the relevant physical, material, and spatial information to include in the sound. The ball is moving fast, so we make sure the sound is quick and zippy. The ball’s surface is smooth, so we don’t need to include too much air friction. Finally, we make sure the sound pans from left to right as that is the (hypothetical) direction it is being thrown in, and attenuates as it gets further from the listener. Great! Are we finished with our asset?

If this was an asset to be used in linear media, then the answer is likely yes. However, since we are working on a game, there is one more element we need to take into account – gameplay. This sound needs to serve a function for the player. Perhaps the ball is actually a grenade, meaning the player has to feel a sense of excitement and urgency when throwing it toward an enemy. This can easily be accomplished by adding a satisfying “click” or pin-pulling sound just before the swoosh of the throw. We call this narrative design. Narrative design can take many forms, but in games it is often used to describe a sonic element that is added to spice up the sound, or add a bit of interest to the player’s auditory experience. Here we increased the satisfaction by throwing in a click to show that the grenade is active. The key point is that we add the click only to add interest to the game event, regardless of what occurs in the animation. We are adding sound, not only to provide credibility, but to add depth to the gameplay and narrative. This is the fourth element of sound that really only comes into play when designing sounds for games.

Getting Started

Earlier in this chapter in the section “Designing the Sonic Palette” we discussed creating a playlist of sounds as source material for use in your design. Here we will explore the four elements of sound (physics, material, spatialization, and narrative design) as helpful tools for designing sounds. If you are completely new to sound design it can be overwhelming to begin from scratch, and you might not know where to start. These four elements can serve as a framework to get you going. First ask yourself, “How heavy is this object, and how fast is it moving when it emits a sound?” These physical qualities will point you toward low- or high-frequency elements. Then ask, “What is the object that’s making the sound made of?” This will help you home in on sound sources in your palette. These two answers alone will give you plenty to work with. When you’ve found your groove you can start adding spatial details and planning the relevant functional aspects of the sound.

Sound Design Techniques

Now that we have an effective framework for designing sound, let’s dive into the important techniques and methods for designing audio assets for video games.

Layering

Music composition is an art form in which composers combine and layer groups of instruments to create a fully developed arrangement. The technique has been employed for centuries to tell sonic stories, and it is similar to sound design in many ways. Both disciplines allow the artist to inform the audience’s emotions and influence their mood. Just as a composer combines layers of instruments to create an engaging score, sound effects require various layers to produce a well-crafted effect.

Remember that sounds in the real world are complex and include physical, material, and spatial elements. Beyond these elements, we must also include the temporal (time-based) development of sound. In other words, every sound contains information about its attack, decay, sustain, and release. If you were to break a pint glass your ears would pick up a huge number of clues. The attack of the sound would consist of a quick impact, followed by a sustained shattering sound as shards of broken glass fly about. The release, in this case, would be bits of broken glass settling on the floor and table after the initial sounds have decayed. In order to include all four elements of sound, as well as the relevant temporal information, sound designers use a technique called layering for important or complex sound effects. In short, designers stack audio source tracks on top of each other and process them in ways that result in a complex, cohesive sound. You would be shocked at how many layers go into your favorite game sounds. Complicated sounds like weapons or special abilities can use ten or more layers, all of which need to be present in the mix!
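
To show the arithmetic underneath, here is a minimal layering sketch in Python. The `mix_layers` helper and the noise stand-ins are our own illustrative assumptions; a real session lives in a DAW with far more per-layer processing.

```python
import numpy as np

def mix_layers(layers, gains):
    """Sum variable-length mono layers into one asset.

    Each layer is scaled by its own gain, summed from sample zero,
    and the result is peak-normalized so the stack doesn't clip.
    """
    length = max(len(layer) for layer in layers)
    mix = np.zeros(length)
    for layer, gain in zip(layers, gains):
        mix[: len(layer)] += gain * np.asarray(layer, dtype=float)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix

# Three stand-in layers: a thump, a crack, and a long airy tail.
rng = np.random.default_rng(0)
thump, crack, tail = rng.standard_normal((3, 48_000))
asset = mix_layers([thump, crack, tail], gains=[1.0, 0.7, 0.3])
```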

When choosing source to be used as layers you will want to pick sounds that pair well together. As we mentioned above, there may be dozens of layers that all need to add something to the mix, so selecting the right source to use is crucial. Consider the frequency information of the layer so you don’t stack sounds that are heavily weighted in the same range. Adequately including highs, mids, and lows in a sound can make it feel more complete and satisfying. If you are missing a range of frequencies in an asset it might sound unfinished or thin. With that said, some sounds call for one particular frequency range over others. Use your ears to determine the best direction. As a rule of thumb, always think of the function of the sound in game when choosing appropriate layers.

Sound designers new to the field often don’t experiment with layers, instead plucking a single sound straight from a library and using it as an asset. Chances are that a single sound won’t tell the full story. By employing multiple layers we can mold the sound we expect to hear. For example, the velociraptor roar in the film Jurassic Park was a combination of animal growls. A recording of a dolphin’s high-pitched underwater squeaks blended with a deeply resonant walrus roar serves as the DNA of the roar. Mating tortoises along with some geese calls and chimpanzee noises were used to create the raptor’s barks, while its growls were a combination of tiger sounds and Spielberg’s dog.21

Complementary vs. Competing Layers

Being mindful of “less is more” is again useful when stacking layers. Getting a bigger sound doesn’t always mean more layers. In fact, when you stack too many layers with the transients all triggering at once, the sound might still feel small. This is because the layers aren’t complementing each other. By stacking similar waveforms right on top of one another the sound gets louder, but less detail will be heard (see “Transient Staging” below).

Another way to avoid competing layers is to limit the number of sub layers around 50 Hz. It seems logical to reach for low-frequency layers to add punch to a sound, but this can be achieved in other ways, which we discussed above. Too many sub layers can cause the mix to sound muddy. In other words, the “mids” might sound unclear and overloaded because there is too much going on in the low end. Reducing the mids might hollow out your sound, so it is better to carve out the low end in this kind of scenario.

Important Frequency Ranges for Layering

In general one of the best ways to ensure layers are complementing each other rather than competing is to balance the sound across the entire frequency spectrum (see “Frequency Slotting” later in this chapter). Using layers with a low-end emphasis (55 Hz–250 Hz) can help add density and power to your sound. When working with sounds in this range be sure to reduce the tail and preserve the attack. This will allow the sound to come in fast and add its punch without lingering and taking up valuable headroom. As mentioned above, even if you’re looking to add punch, it’s good practice to avoid layering too many sounds in this range.

Layering sounds in the 255 Hz–800 Hz range can help glue the low and high end together. Lack of representation in this range can result in a hollow, small sound. By contrast, too much high end (7 kHz–10 kHz) can add an undesired brittle element to your sound. This will make the sound effect feel harsh to the listener.

Other Considerations

Always keep the development platform in mind when layering. Too much low or high end won’t translate well on mobile speakers. We recommend using an EQ such as FabFilter Pro-Q, which has a spectrum analyzer. This will help you visualize areas in your mix that may have too much or too little happening in a given range.
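
If you want a quick sanity check outside of a plugin, a rough spectrum summary is easy to sketch. The code below is not Pro-Q’s analyzer, just a plain FFT energy report over the ranges discussed above:

```python
import numpy as np

def band_energy(signal, sr, bands):
    """Print the share of spectral energy falling in each band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    total = spectrum.sum()
    for lo, hi in bands:
        share = spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
        print(f"{lo:>6.0f}-{hi:<6.0f} Hz: {share:6.1%}")

sr = 48_000
test = np.random.default_rng(0).standard_normal(sr)  # stand-in asset
band_energy(test, sr, [(55, 250), (255, 800), (7_000, 10_000)])
```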

Understanding the target platform and the purpose of the sound in game will help you decide whether to create a stereo or mono asset. A stereo sound effect that relies on a lot of panning will lose its effect when folded down to mono.
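
One way to audition that risk is to fold the asset down yourself and compare levels. The sketch below is a simple numerical check under our own assumptions, not a substitute for listening on the target device:

```python
import numpy as np

def fold_down_loss_db(left, right):
    """Fold a stereo asset to mono and report the level change.

    A result near 0 dB means the asset survives single-speaker
    playback; a large negative number means panning or phase
    tricks are cancelling out.
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    mono = (left + right) / 2.0
    return 20.0 * np.log10(rms(mono) / rms(np.stack([left, right])))

# An out-of-phase "widener" nearly vanishes when folded to mono:
t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
sig = np.sin(2.0 * np.pi * 440.0 * t)
print(fold_down_loss_db(sig, -0.95 * sig))  # roughly -32 dB
```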

It is equally important to plan how your sound will work in terms of the game genre and design. A battle arena game, for example, will require careful thought when designing weapon or ability sounds. Each of these sounds must be able to stand out among all the others happening simultaneously in game. In some cases these sounds affect gameplay dramatically, so players need to hear the aural cues assigned to things like reloads, or impact sounds. By using EQ (as well as processing techniques mentioned in the following sections) you can carve out an effective sonic niche for each weapon, thus supporting the player and facilitating exciting gameplay.

Lastly, be sure to clean up and process your sounds to give them a unique flavor. This lowers the likelihood of players recognizing a stock library effect in your game. Recognizable, unprocessed library sounds are exactly how the Wilhelm Scream became an inside joke among film sound editors.

Transient Staging

With multiple layers making up a sound, the stacking of transients needs to be considered. This workflow can be particularly useful with weapon, ability, and impact sounds. If all layers were stacked with their transients in line, the resulting sound could contain numerous sonic artifacts. Sometimes phase issues can be heard, or unwanted spikes in loudness. In general, the asset will lack detail if not clarity altogether. When layers are directly stacked the sound has no unique identifiers; it’s just a wall of sound. Staging the layers strategically into a pre-transient, transient, body, and tail allows the extra details in the sound to stand out, creating a unique sound. This is called transient staging (see Figure 3.3), and it is a more specific way to convey the temporal pacing we mentioned earlier using attack, sustain, and release. Sounds all have a natural flow to them and this should be maintained.

Figure 3.3 A screenshot demonstrating an example of transient staging in Steinberg’s Nuendo.

The order of stacking can be done in a way that best fits your workflow, but be sure to map out the lows, mids, and highs of the sound. The idea is to allow each element its time to shine in the mix. Use your ears to listen to how the sound moves from one element to another. It’s also important to decide how much of the tail in each layer should ring out. You may end up with single or multiple layers making up each spectrum range (see “Frequency Slotting” below), so in the end it’s really whatever fits the sound.

A pre-transient or impulse sound can be added just before the transient. Adjusting the timing of this pre-transient in 5–10 ms increments can really make a huge difference in the sound. Experiment with the space between all layers by adjusting them in millisecond increments until you find the cadence that generates the sound you desire.

After finding the best cadence for your layers, the next step is to manually draw volume automation on each track. The aim is to provide punch at the head of the sound, and then dip the volume afterward to allow room for other layers to peek through the mix. How aggressive you get with the volume automation depends on the sound you are designing. If it is a weapon you may automate heavily; if it is a magical ability you may keep the automation subtle. This kind of transient control can be done with side-chaining as well, but manual automation offers more precision.
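
Here is a compact sketch of both ideas, millisecond offsets between layers plus a head-then-dip gain envelope. The specific offsets, hold time, and dip amount are illustrative choices, not rules:

```python
import numpy as np

SR = 48_000

def ms_to_samples(ms):
    return int(SR * ms / 1000)

def stage_layers(layers, offsets_ms):
    """Place layers on a timeline at millisecond offsets and sum.

    Even 5-10 ms between transients keeps them from stacking into
    one undifferentiated spike.
    """
    end = max(ms_to_samples(o) + len(l) for l, o in zip(layers, offsets_ms))
    mix = np.zeros(end)
    for layer, off in zip(layers, offsets_ms):
        start = ms_to_samples(off)
        mix[start:start + len(layer)] += layer
    return mix

def duck_after_attack(layer, hold_ms=15, dip=0.4, fade_ms=30):
    """Manual 'volume automation': full level at the head, then a
    dip so other layers can peek through the mix."""
    env = np.full(len(layer), dip)
    hold = min(ms_to_samples(hold_ms), len(layer))
    fade = min(ms_to_samples(fade_ms), len(layer) - hold)
    env[:hold] = 1.0
    env[hold:hold + fade] = np.linspace(1.0, dip, fade)
    return layer * env

# Hypothetical pre-transient, transient, body, and tail layers:
rng = np.random.default_rng(3)
pre, hit, body, tail = (rng.standard_normal(n) for n in (480, 2_400, 9_600, 24_000))
staged = stage_layers([pre, duck_after_attack(hit), body, tail], [0, 8, 20, 60])
```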

Frequency Slotting

Frequency slotting is a critical technique that sound designers use to add definition to the numerous layers used in sound effects. As previously discussed in the “Layering” section, all of the source you use needs to fit tightly together to sound like one cohesive sound. Layers add detail and transient staging allows sound effects to develop over time. Frequency slotting will then ensure that your layers appropriately map across the frequency spectrum so that all elements are heard. It will also keep layers from masking one another.

Masking occurs when one sound clashes with or covers up another. Masking isn’t always a problem; some layers need to blend and overlap to create a denser arrangement. When masking is an issue, it can be resolved by rearranging layers (transient staging), subtractive and additive EQ, and adjusting dynamics.

A specific type of masking is frequency masking, which happens when two or more sounds have similar frequency content, causing them to fight for the same spot in the frequency spectrum. As we discussed in the “Transient Staging” section, careful placement and lowering the sustain of certain layers can allow other layers to shine through. This isn’t the only way to manage frequency masking, though. Panning elements so they are not sitting in the exact same space can also give each sound its own room in the mix. However, when rendering mono assets for use as three-dimensional events in game, panning will be of little use. This is where frequency slotting with EQ comes in, ensuring your layers fill the parts of the spectrum you might be missing. If all of your sounds have only high-frequency content, the result will feel unbalanced and incomplete.
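
As a code sketch of frequency slotting, the band-pass moves below carve three stand-in layers into the low, mid, and high ranges discussed earlier (the band edges are illustrative, and SciPy is assumed):

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 48_000

def slot(layer, band):
    """Confine a layer to its own region of the spectrum with a
    fourth-order band-pass filter, the EQ move behind slotting."""
    sos = butter(4, band, btype="bandpass", fs=SR, output="sos")
    return sosfilt(sos, layer)

rng = np.random.default_rng(1)
low    = slot(rng.standard_normal(SR), (55, 250))      # weight and power
mids   = slot(rng.standard_normal(SR), (255, 800))     # glue
sizzle = slot(rng.standard_normal(SR), (2_000, 7_000)) # detail and air
asset = low + mids + sizzle  # each layer now has its own slot
```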

Sound Design Techniques Summary

In the Sound Lab we offer a video tutorial on designing sound effects using layering, transient staging, and frequency slotting. The tutorial will help you connect these techniques to the material in the rest of the chapter. After viewing it, come back here to explore effects processing as a sound design tool.

Effects Processing as a Sound Design Tool

In this section we will explore the use of plugins as it applies to creative sound design. Working with plugins to polish or shape your sound sometimes means using them in ways beyond what they were intended for. Here we will provide some ideas on how to use plugins creatively and effectively, but it is always up to you to experiment on your own to find the best practices for your particular workflow. There are simply too many plugins to explore each one in depth, but this section will give you a head start. As you read, keep in mind that having a handful of plugins that you know inside and out is much more useful than having hundreds of plugins that you are unfamiliar with. It’s difficult to use a plugin correctly and creatively if you don’t understand the basics first. It’s also important to understand the effective use of processing order, or chaining effects. Inserting EQ before compression can produce a different outcome than placing compression before EQ.
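
A toy experiment makes the point about processing order. The “EQ” and compressor below are deliberately crude stand-ins of our own, but they show that the two chains do not commute:

```python
import numpy as np

def low_cut(x, amount=0.5):
    """Stand-in EQ: subtract part of a one-pole low-pass of the
    input, which attenuates the signal's low-frequency content."""
    low = np.copy(x)
    for i in range(1, len(x)):
        low[i] = 0.99 * low[i - 1] + 0.01 * x[i]
    return x - amount * low

def compress(x, threshold=0.5, ratio=4.0):
    """Instant-attack, hard-knee compressor applied per sample."""
    y = np.copy(x)
    over = np.abs(x) > threshold
    y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return y

x = np.random.default_rng(2).standard_normal(4_800)
eq_then_comp = compress(low_cut(x))
comp_then_eq = low_cut(compress(x))
print(np.max(np.abs(eq_then_comp - comp_then_eq)))  # nonzero: order matters
```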

There are numerous sources available that offer a general understanding of various plugin effects and their typical uses. In most cases third-party plugin manufacturers provide detailed manuals for their products. These are usually freely downloadable and offer in-depth information on the plugins themselves as well as the basic function of plugin types (EQ, compression, etc.). We highly recommend downloading these manuals and watching video tutorials from the manufacturers themselves to familiarize yourself with these tools before you buy them. Make it a point to research and experiment with a variety of plugin manufacturers per effect type because each has its own sound and workflow. For example, if we compare limiters, FabFilter Pro-L 2 offers more transparency than iZotope’s Vintage Limiter, which adds thicker tonality and more weight to low-end frequencies.

Here is a brief list of some of the topics we will be covering in the Sound Lab (companion site).

  • Creative use of reverb: Instead of using an IR of a concert hall or stage, you can record the IR of a glass vase or a plastic bucket. This will yield distinctive spatial information to use with your source.
  • Frequency slotting: A critical technique that sound designers use to add definition to layers and ensure they appropriately map across the frequency spectrum.
  • Advanced synthesis: Learning to take advantage of intricate routing and features in synths like Native Instruments Absynth. This will allow for control over multiple parameters to generate far more sonic possibilities.
  • Advanced Plugin Effects: Adding more punch to your sounds with transient designers by controlling the contrast between silences and the loudest moment of the sound.

Effects Processing Summary

Before you move on, head over to the Sound Lab for a discussion and tutorial on outside-the-box effects processing for creative sound design. The tutorial will help you connect these techniques to the rest of the chapter. After viewing it, come back here to continue reading about the sound design process.

In summary, so much can be done with the tools that come with a DAW. Those who are starting out in their careers should practice with the tools on hand before moving on to bigger and better things. Develop a deep understanding of how each plugin works and be in command of your tools. When deciding on a new plugin, do your research to find what works best for your DAW and workflow.

Before purchasing that shiny new thing, think about how this tool will work for you and what problem it will solve. Stick to your budget as more expensive doesn’t always equate to better. Lots of practice and experimenting will help you make the best of the tools you have on hand.

Putting It All Together

More often than not sound designers have to work from animations to ensure that the audio is synchronized to the visuals in a game. Now that we have a framework to break sounds down into their four components (physics, materials, space, and narrative) and the tools (layering, transient staging, frequency slotting, and effects processing) to design sounds from scratch, let’s take a quick look at breaking down and synchronizing animations.

Breaking Down Animations

Breaking down an animation requires patience and attention to detail. Realistic AAA-quality animations are often full of nuance, which means that it takes focus to absorb every subtlety of the graphics. Start by focusing on an important object within the animation and list every part of it. Then think through how sound can bring each of those parts to life. Start with the more literal, realistic elements (physics, materials, and spatialization) of the object and then move onto the narrative design elements.

It can help to do a few iterations of design over an animation. Watch it enough times that you have a good idea of all of the mechanical details and movement the animation possesses. These are all great points to add character and intricacy to the sound design. In general, the more detail in the animation, the more detail needs to be present in the sound design.

Step-by-Step Design

Now that we have a set of tools for creative sound design, let’s look at some common game sound effects and how to design them from the ground up. We will break each sound down into its basic sonic elements from all four of our categories: physics, materials, spatialization, and narrative design. We will use these categories as a starting point to choose source layers. Then we will use layering, transient staging, frequency slotting, and effects processing to transform the source into an asset that is ready for implementation.

Gun Sounds

Before we break down the parts of our animation, let’s start by exploring what gunfire really sounds like. If you’ve ever been to a shooting range you may have a good idea about this. If not, do a YouTube search to familiarize yourself. In reality gunshots sound like a sharp crack, similar to a firework exploding. This will not work well in a game. It lacks detail and character, so using a realistic gunshot sound will add virtually nothing to the player experience. Instead we need to design this pistol so it sounds powerful, adding a sense of perceived value to the firepower. Effective weapon sound design will make the player feel rewarded, thus improving gameplay. We will focus on power, punch, and spectrum control.

Let’s begin by viewing the gameplay capture of our pistol animation on the Sound Lab, our companion site (feel free to generate your own gameplay capture and work with that instead). With the visual fresh in your mind let’s start choosing source layers by breaking down parts of the animation that will potentially make a sound. After creating our list of source material, we will return to the Sound Lab for a practical exercise in layering, transient staging, frequency slotting, and effects processing to design the sound.

Physics

  • Sharp transient fire
  • Main body of the shot is short, almost non-existent
  • Tail (release as the bullet flies away)
  • Mechanical kickback of the weapon
  • Mechanics of the barrel
  • Mechanics of the reload

Materials

  • Metal
  • Plastic
  • Possible sci-fi energy source (the yellow/orange glow)
  • Laser shot

Spatialization

  • Gun – front and center
  • Bullet impacts – spatialized (via implementation, see Chapter 8)
  • Use mono source or adjust the stereo width to keep the detail focused

Narrative Design

The first three categories are pretty comprehensive, but there are a few things missing. This is a small weapon, but it needs to sound sleek and dangerous. The animation seems to suggest some kind of assassin character, using high-tech weaponry. This means the shot needs to sound sharp, silenced, and futuristic. The sound of a .357 Magnum would be totally inappropriate here. In short, this pistol should sound powerful but controlled in the low end, and the high end should be sharp, piercing, and include elements of futurism.

Gun Sound Design Practice

Now that we have our list of source sounds let’s head over to the Sound Lab where we will outline how to design your own pistol sound using the sound design framework we discussed earlier.

 

 

Explosion Sounds

Let’s begin by viewing the gameplay capture of our explosion animation on the Sound Lab, our companion site (you are free to generate your own gameplay capture to work with instead). With the visual fresh in your mind let’s start choosing source layers by breaking down parts of the animation that will potentially make a sound. After creating our list of source material, we will return to the Sound Lab for a practical exercise in layering, transient staging, frequency slotting, and effects processing to design the sound.

In this particular explosion animation a few things are happening. For starters, there are multiple explosions, so we know that these sounds need to be capable of chaining together without phasing or doubling. This means there should be adequate variation (usually pitch and source materials) so that our ears can differentiate between them. Second, there are specific objects that are exploding. Our sounds need to include information about the object as well as details in the explosion itself.

Let’s break it down as we did with the pistol animation to help direct us toward the best source material to use.

Physics

  • Pre-transient “anticipation” layer (added a few milliseconds before the transient)
  • Transient attack of the explosion
  • Body of the explosion
  • Tail (in this case, reverberation and release of the sound)
  • Various impacts

Materials

  • Fire
  • Metallic creaks, groans, and movement

Spatialization

  • Wide range of space, possible stereo sounds for explosions
  • Mono sounds for specific objects, spatialized via implementation

Narrative Design

In an actual explosion it would be hard to hear much detail. To make these sounds more suitable for the gameplay we have to balance the “hugeness” of the explosions with valuable detail that the player should hear. The materials need to be present so the player knows what exactly is exploding, and there should be a “pre-transient” as mentioned in the physics category, so that the player can anticipate the explosion, adding drama to the event. Lastly, we will need to ensure that there are some seriously punchy impact layers in the sound to add power to the explosions.

Explosion Sound Design Practice

Before moving on, head back to the Sound Lab where we will explore designing an explosion sound (on a limited budget) by sourcing household objects for recording and constructing layers.

 

 

Spells and Special Ability Sounds

“What should this sound like?” That is the big question one must answer when tasked with designing sound for something that does not – and cannot – exist in reality. The producer or game designer may not even know what they want a spell or special ability to sound like, but they may be able to offer a reference from a real-world sound or another fictional example. This direction can be quite nebulous, but the plus side is that it leaves a lot more creative freedom in the hands of the sound designer.

The style is an important factor in how you choose source material for a spell like this. A fantasy game with special abilities may not work with entirely realistic source. Synthesized source can be really useful, but be sure to keep the synthetic elements in line with the sonic character of the sound effect overall. Processing synthesized source through an organic convolution reverb can help bring a bit of reality into the sound by softening harsh overtones. Make sure to blend real-world source as well to help the listener “relate” to the sound of the otherworldly visual.

Processing organic source through spectral synthesizers like iZotope’s Iris 2 can be helpful for creating more magical layers. Iris 2’s spectral selection tools in combination with modulation sources (like LFOs or envelopes) will change the sound over time and provide a sense of movement. With magic or fantasy sonic elements this type of movement can be the difference between a sound that is special and compelling, and one that is completely bland.

If the game requires a darker or more action-packed quality, relying on heavier and harsher layers will add some density to the asset. Whooshes, impacts, animal growls, hisses, explosions, and fireworks all provide detailed layers that can add size and depth to a sound effect. Also try using a transient designer or even a distortion unit (like iZotope Trash 2) to add some “umph” to the spell as it meets its target. Trash 2 has an impulse named “Creep,” which can be used as a convolution setting. Try it with a mix around 10–20 percent wet and it will likely increase the power behind your source layer.
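
To try the same idea outside of Trash 2, a generic convolution with a 10–20 percent wet blend can be sketched as follows. The decaying-noise impulse response here is a stand-in; substitute any recorded IR:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_mix(dry, impulse, wet=0.15):
    """Blend a convolved (wet) copy of the signal back under the
    dry original at the given wet fraction."""
    wet_sig = fftconvolve(dry, impulse)[: len(dry)]
    peak = np.max(np.abs(wet_sig))
    if peak > 0:  # match the wet level to the dry before blending
        wet_sig = wet_sig / peak * np.max(np.abs(dry))
    return (1.0 - wet) * dry + wet * wet_sig

rng = np.random.default_rng(4)
dry = rng.standard_normal(48_000)                                    # stand-in source
ir = np.exp(-np.linspace(0, 8, 4_800)) * rng.standard_normal(4_800)  # toy IR
thickened = convolve_mix(dry, ir, wet=0.15)
```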

For lighter, less intense spell casts try sparklers, sizzles, broken glass, chimes, wind gusts, and lighter whooshes. These elements can add a mystical character, which is often useful for buffs and less damage-oriented abilities. A convolution reverb can also help soften the sound, along with a rolloff above 10 kHz.

Let’s get started by viewing the gameplay capture of our spell casting animation on the Sound Lab, our companion site (as always, you are free to capture your own gameplay video to work with instead). With the visual fresh in your mind let’s start choosing source layers by breaking down parts of the animation that will potentially make a sound. After creating our list of source material, we will return to the Sound Lab for a practical exercise in layering, transient staging, frequency slotting, and effects processing to design the sound.

Physics

  • Pre-transient layer (as spell is being cast or charging)
  • Transient (the initial sound as the spell is cast)
  • Body of the spell (sustain)
  • Release (pay close attention to the VFX as the spell dissipates as this can change with the animation)

Materials

  • Blue energy

Spatialization

  • Begins right in front (possible mono)
  • Quickly spreads out and around the listener

Narrative Design

This category is absolutely crucial for spells and magical abilities because it is often the only real point of reference we have as designers. The important thing to consider is what the spell does. In other words, what is the function of the spell? Is it a heal, an attack, or a buff? Attacks will almost always sound harsher and more threatening, while buffs need to sound pleasing in order to provide accurate aural feedback for the player. In this case, the blue energy looks to be a shield, so we will build our sound as a powerful and ethereal energy that protects our player character from harm.

Spells and Special Ability Sound Design Practice

Now that we have our list of source sounds let’s head over to the Sound Lab for some practice designing fantasy spells and special ability sound effects using the framework we outlined earlier.

 

Creature Sounds

As with all other categories of sound design, the designer will need to answer some preliminary questions before jumping in. An important example is whether or not the creature you are designing speaks or has some language elements. If the answer is yes, then the dialogue will need to be intelligible, and processing cannot hinder this. In these cases, less is more. You may pitch dialogue up or down, but take care when adding heavy modulatory effects.

When going through your library to select source for creature sounds be sure to listen for anything that might stand out. Don’t overlook samples of the human voice. Voice artists can be brilliant at manipulating their voices to sound inhuman and beastly. Generally speaking, animal sounds from lions, pigs, bears, and tigers have become standard source material for many creature designs. They make great base layers, but you will need other elements to give your creature a unique sound.

Don’t forget to add in small details to your creature sounds. These are the elements that make your creature sound unique. Try including lip smacks, snarling, and snorting breaths. A wet mouth filled with gooey substances like pudding, jello, or oatmeal can be a valuable source for vocalizations. Tasers and gears also make great high–mid frequency layers that blend well with inhales, and they can also be placed at the tail end of roars. It’s also important to cover the non-vocal elements of creatures. How do they move? How big are they? Details like this will go a long way toward creating a memorable creature sound as well.

Another useful technique is to chop up the “expressive” parts of an animal sound and use it to humanize the creature. If you are working with a voice artist you can also direct them to perform some emotive expressions without using language. This can give the player an affinity toward the creature, helping to cultivate an emotional bond. If you aren’t working with a voice artist, try automating pitch to manufacture this sense of emotionality.

Even if you don’t plan on using your own voice in the final product you may want to use it to lay out a guide track. This can help you hit the mark on the emotive side of the design, or to convey ideas to other performers. Software like Krotos Dehumanizer works well with human voice and is especially helpful when you are on a tight deadline. Dehumanizer is also perfect for breaths and hisses as well as roars and growls. Keep in mind that even when using Dehumanizer to perform layers, there is always other source blended into the final sound.

Start by viewing the gameplay capture of our creature animation on the Sound Lab, our companion site (you are free to capture your own gameplay video to work with instead). With the visual fresh in your mind let’s start choosing source layers by breaking down parts of the animation that will potentially make a sound. After creating our list of source material, we will return to the Sound Lab for a practical exercise in layering, transient staging, frequency slotting, and effects processing to design the sound.

Physics

  • Vocalizations
  • Giant body
  • Multiple long sharp legs scraping
  • Heavy impacts
  • Body breaking apart

Materials

  • Crab-shell-like flesh (bulky)
  • Tree-bark type material for the legs
  • Ice and snow

Spatialization

  • Huge creature, so these sounds pretty much cover the stereo space

Narrative Design

This creature animation is somewhat open to interpretation as it does not have any place in the real world. You can try (as we did) comparing its material to animals in the real world, but the bottom line here is that this creature needs to sound threatening and huge. All of your layers should be chosen to fully realize that direction. The size can be handled nicely with some LFE impacts and sub-frequency hyper-realism. Also try creating some chilling shrieks!

Creature Sound Design Practice

Now that we have our list of source sounds let’s head over to the Sound Lab to get some practice designing creature sounds using the framework we outlined earlier.

 

Vehicle Sounds

As we mentioned earlier in the chapter, when working with realistic vehicles it’s a good idea to do some research first. Car engines in particular can be divisive because of their memorable sonic character. Car enthusiasts can easily distinguish the sound of a Corvette engine from that of a Mustang, so you should be able to as well. Start the process by getting into the specifics of vehicle type with your producer or designer and familiarize yourself with the sound of the engines.

It helps if you have contacts who own the cars you are looking to record. If you don’t know anyone with the specific car you are after, try auto forums on the internet or auto shops to see if you can make any solid connections. Keep in mind that games have deadlines, so you may need to work quickly. The project you committed to won’t allow all the time in the world to locate and record vehicle sounds. If you run short on time you can reach for some library SFX. When the budget allows, you can look into hiring field recordists like Watson Wu or Pole Position Production for custom sessions.

For this particular gameplay capture we will focus on functionality rather than strict realism in the engine samples. Head over to the Sound Lab (companion site) to view the racing gameplay capture video. After you have watched the video come back here and we will break the vehicle down to help us find some useful source material.

Physics

  • Engine idle
  • Engine RPM (revolutions per minute)
  • Engine load (pressure on the engine)
  • Gear shifts
  • Exhaust
  • Skids
  • Collisions
  • Suspension rattle

Materials

  • Vehicles (metal, rubber tires, glass)
  • Obstacles and barriers (metal, wood, water)
  • Surfaces (asphalt, dirt, etc.)

Spatialization

  • Perspective (interior and exterior views)
  • Doppler
  • Mono sounds for specific objects, spatialized via implementation

Narrative Design

Racing games can be broken down into arcade and simulation categories. In simulation games the player may have specific expectations of the vehicle sound while arcade games may offer the sound designer a little more creative liberty. The scene suggests a Formula One open cockpit vehicle with asphalt and grass terrains. The barrier around the racetrack looks to be concrete. A bit of internet research will show that the engine has a low-end growl on startup and revving. The engine of a standard sedan will not provide a credible sonic experience for players.

Visiting Artist: Watson Wu, Composer, Sound Designer, Field Recordist

On Recording Vehicles

There are various areas on a car that produce unique sounds. My role has been to secure microphones to those areas in order to capture onboard sounds (sounds heard from driver and passenger views, versus external pass-bys). These sounds are always thought of from the driver’s point of view: engine left, engine right, inside the air intake box, cab left, cab right, and exhausts. All of the mics are covered by windjammers and other wind-suppressing items and firmly secured with cloth-based gaffer tape. The mic cables are then taped down and routed to where I sit, next to the driver. On my lap is an audio bag housing a multitrack field recorder that allows me to selectively listen to each of the mics while recording. I go back and forth listening to each of the inputs and adjust the recording levels throughout the session for the optimal sounds.

During the initial tests I usually have the driver perform high revs, then we go on to recording ramps. A ramp is done by putting the car into first gear and driving from idle smoothly up to the red line of the RPM. This ramp up to the red line should take between 10 and 15 seconds. After reaching the red line, the driver will engine brake (letting go of the gas pedal while in first gear) back to a complete stop, which also takes between 10 and 15 seconds. Some cars can easily perform this maneuver while others cannot. I stress to the driver that recording ramps is the most important part of the recording, which makes them more eager to do their best.

While recording a Ferrari 488 supercar, the ramp downs weren’t smooth at all. The owner, being an engineer, suggested performing the ramps in third gear. This worked, so we were able to capture smooth ramp ups as well as smooth ramp downs. After ramps we record high-speed or aggressive driving. The driver is to delay shifting so that we can capture high RPMs (usually where the loudness of the vehicle shines). Once I have the correct overall adjustments (staying below the loud recording peaks for each of the inputs), I make the best use of time by exiting the vehicle to grab another set of gear to capture external sounds. The onboard recorder in the audio bag is seat-belted to the passenger seat so that I can record onboard and external sounds at the same time. Other items on the shot list include casual driving, driving in reverse, pass-bys at various speeds, approach-stop-away, startups, shutdowns, and revs. Other options are burnouts, skids, cornering, drifting, and other stunts.

What to be mindful of:

  • Avoid external recordings when wind exceeds 15–20 mph depending on your microphone and windshield kit.
  • Always watch the oil temperature. If a vehicle overheats, use regular gears and do light driving at 45 mph for a few minutes. Then, shut off the vehicle and raise the engine hood for better cooling. Oftentimes this method cools the engine better than shutting off the vehicle when the temperature is way up.
  • Always record Foley sounds first (heated cars make unwanted bings, dings, and other sounds), onboard sounds second, then external sounds.
  • Always record on smooth roads unless you need off-road sounds. Avoid driving over road reflectors, and don’t use turn signals.
  • Have long enough roads for safe braking.
  • I typically start a recording with a startup and end with a shutdown. This way there are plenty of variations of those sounds.

Watch my “Watson Wu Jaguar” YouTube video for some examples of how I record vehicle sounds.

Vehicle Sound Design Practice

Let’s take a look at vehicle sound design. Head back to the Sound Lab for some practice designing vehicle sounds for the racing gameplay capture using the framework we outlined earlier.

 

UI and HUD Sounds

The term “auditory icon”22 was coined by Bill Gaver in the 1980s during his research into the use of sound in Apple’s file management application, the Finder. As our devices evolve, they increasingly need sound to support visual information and to give tangible confirmation, or feedback, to the user. This is what it means to design sounds for UI, or user interfaces.

Designing sound for UI and heads-up displays is often thought of as the easiest part of sound design. How difficult can it be to produce and implement a few simple blips, bloops, and bleeps? In reality, effective UI sound design is extremely difficult because it is a critical element of a game’s sonic branding. Every sound that a UI or HUD makes needs to give the player a sense of mood and satisfaction. In other words, as sound designers we have to squeeze every ounce of emotion out of each and every blip, bloop, and bleep in a game. How simple do they sound now?

The purpose of the user interface is to provide data feedback (health or option settings) and to facilitate interaction between the player and the game. UI/UX designers take the time to carefully create user interface visuals that won’t get in the way of the player’s visual field. Too little or too much feedback to the player can ruin an immersive experience. Since so much thought and effort is spent developing the interface’s visuals, comparable amounts should be spent on the audio portion.

The game style and genre will play a large role in deciding how the UI sounds will be crafted. Generally speaking, UI sounds need to be on-theme. A sci-fi game may require electronic, or glitchy, granularized textures to accompany the hologram screens and futuristic visuals. Using organic layers like wood and grass will sound out of place. The same can be said about a casual Match 3 game that has a bright and cheery forest look. Implementing synthesized UI sounds might feel weird, thus breaking the immersion.

Often, UI sound is thought of as being strictly button presses and slider movement. In fact, UI elements can be found in game as well as in menus. These in-game UI sounds can provide feedback to the player without them needing to actually look at the visuals. This is an extremely important concept, and it is why sound plays such a large role in the UI experience overall. For this reason UI sounds should be something the player can identify with without being distracting or annoying if triggered frequently. In an endless runner the player must focus on what is directly in front of her. As she collects coins, the sound effects that trigger will give her an accurate sense of success or failure without even seeing the actual number on the display. Sounds like this often have a pitched element to them. This produces a satisfying, almost musical effect that easily cuts through the mix.
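
A sketch of that kind of pitched, musical feedback appears below. The pentatonic scale, fast decay envelope, and pitch-per-combo scheme are illustrative choices of ours, not taken from any particular game:

```python
import numpy as np

SR = 48_000
PENTATONIC = [0, 2, 4, 7, 9]  # semitone steps that never clash

def coin_blip(combo, base_midi=84, length=0.12):
    """Synthesize a short blip whose pitch climbs a pentatonic scale
    with each consecutive pickup, keeping frequent feedback musical
    rather than annoying."""
    step = PENTATONIC[combo % 5] + 12 * (combo // 5)
    freq = 440.0 * 2 ** ((base_midi + step - 69) / 12)
    t = np.linspace(0, length, int(SR * length), endpoint=False)
    env = np.exp(-t * 40)  # fast decay keeps the blip out of the way
    return np.sin(2 * np.pi * freq * t) * env

# Eight consecutive pickups rise through the scale, then the octave:
pickup_run = np.concatenate([coin_blip(i) for i in range(8)])
```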

UI sounds in MOBA or team-based multiplayer games like Blizzard’s Overwatch or Hi-Rez’s Paladins are designed with the players’ focus in mind. These team-based games can get very overwhelming visually. Effective UI sounds allow the player to keep their visual focus on the gameplay while relying on aural cues to know when important events are happening. There is so much information relayed sonically to the player in Overwatch that technically it could be played without visuals (to a point anyway!). This is actually the basis of accessibility in sound (see “Inclusivity and Accessibility in Game Audio,” Chapter 12, page 382).

Now that we are familiar with UI sounds and the role they play in games, start by viewing the gameplay capture of our UI animation on the Sound Lab, our companion site (as always, you are free to capture your own gameplay video to work with instead). With the visual fresh in your mind let’s start choosing source layers by breaking down parts of the animation that will potentially make a sound. After creating our list of source material, we will return to the Sound Lab for a practical exercise in layering, transient staging, frequency slotting, and effects processing to design the sound.

It’s important to note that while UI sound design may be thought of as almost entirely narrative design, we can still use physics and materials that relate to real-world objects to gather information for our sounds. When graphical user interfaces mimic the physical world, users can more easily understand how they function.24 In game, animations show how the object moves in two- or three-dimensional space.

Physics

  • User interaction (slider, button press, highlight)
  • Animation

Materials

  • These can stem from the visual of the button or visual themes from the game
  • Metallic creaks, groans, and movement

Spatialization

  • Most UI sounds both in the menu and in game will be in 2D space

Narrative Design

The gameplay in this video seems to be casual and the UI is somewhat abstract and open to interpretation as it does not stem from specific objects in the real world. Since the overall theme of the game involves cooking you can introduce recordings of kitchen items as source material for use as layers in the design. The UI sound in this game should be subtle yet satisfying. An instant upgrade should sound more rewarding than a timed upgrade.

UI and HUD Sound Design Practice

Let’s take a look at UI sound effects design. Head to the Sound Lab for some practice designing UI sounds for the gameplay capture using the framework we outlined earlier.

 

 

 

Footstep Sounds

Have you ever taken the time to focus solely on the footsteps in a game? You might be surprised by the level of detail that goes into the sound design of footsteps. Some people would consider footstep sounds to be non-essential to gameplay, but in reality footstep sounds play an important role in games. Footsteps provide information on terrain type and NPC location, which can make the difference between success or failure in a given task. In multiplayer matches players rely on this information to understand their surroundings and assess danger.

Studying the footsteps of humans will reveal a heel-to-toe movement as a step forward is taken. Although mobile games require fewer assets for implementation, console and PC games with any level of detail will typically have eight to ten randomized footsteps per terrain. Volume and pitch randomization will also be applied adding further variety to the sound. More detailed integration might utilize separate heel and toe asset groups that trigger in synchronization to the footstep animation. This, of course, offers even more variety for the player’s ear.
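
A minimal sketch of that selection logic follows. The asset names, variation count, and randomization ranges are illustrative; audio middleware provides its own random containers for exactly this job:

```python
import random

class FootstepPicker:
    """Pick one of N footstep variations per terrain, avoiding
    immediate repeats, with small volume and pitch randomization."""

    def __init__(self, variations=8):
        self.variations = variations
        self.last = None

    def next_step(self, terrain):
        # Never play the same variation twice in a row.
        choices = [i for i in range(self.variations) if i != self.last]
        index = random.choice(choices)
        self.last = index
        return {
            "asset": f"fs_{terrain}_{index:02d}",     # hypothetical name
            "volume_db": random.uniform(-3.0, 0.0),   # subtle level spread
            "pitch_semitones": random.uniform(-0.5, 0.5),
        }

picker = FootstepPicker()
for _ in range(4):
    print(picker.next_step("wood"))
```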

To add an additional level of detail to the character’s movement, clothing (fabric, armor, etc.) sounds can be added as a separate trigger. The layers in these sounds should reflect the materials that the character is wearing in as much detail as possible. However, these accents shouldn’t get in the way of the clarity of the footsteps. There should also be enough variety to these accents that the player isn’t hearing the movement sound over and over.

For one final example, head over to the Sound Lab (companion site) to view a footstep animation (just as before, you are free to capture your own gameplay video to work with instead). With the visual fresh in your mind let’s start choosing source layers by breaking down parts of the animation that will potentially make a sound. After creating our list of source material, we will return to the Sound Lab for a practical exercise in layering, transient staging, frequency slotting, and effects processing to design the sound.

Physics

  • Heel-to-toe movement
  • Footfall velocity
  • Walk/run
  • Terrain (creaking bridge)

Materials

  • Shoe type
  • Terrain type
  • Outfit type

Spatialization

  • First-person perspective – stereo sounds but avoiding a wide stereo field
  • Third-person perspective – mono sounds spatialized via implementation

Narrative Design

The character in the video appears to be tall and muscular. From this we can add a bit more weight to his footsteps. We see that he is wearing leather boots as well as leather material around his waist that moves each time he steps his foot outward. Hanging from a few strands of hair is a large circular metal accent. It should be determined if this will make any sound as the character moves. The terrain starts out as stone steps and goes into a wooden bridge. The physical shape of the bridge should be taken into account.

Footstep Sound Design Practice

Let’s take a look at footstep sound design. Head to the Sound Lab for some practice designing footstep sounds for the gameplay capture using the framework we outlined earlier.

 

 

The Sound Lab

Before moving onto Chapter 4, head over to the Sound Lab for additional reading, practical exercises, and tutorials on the topics discussed in Chapter 3.

 

 

Summary

Here we presented a few examples for sound design practice. Using the framework we laid out in this chapter you should continue to practice creating sound design for a variety of game assets.

Notes

1    Twenty Thousand Hertz, “The Xbox Startup Sound.”

2    Michael Sweet, composer, sound designer and artistic director of Video Game Scoring at Berklee College of Music; author of Writing Interactive Music for Video Games.

3    B. Kane, Sound Unseen: Acousmatic Sound in Theory and Practice.

4    Wikipedia, “KISS principle.”

5    When time allows, gathering and generating source could be started in the pre-production phase while editing and mastering will be completed during the production phase.

6    www.syntorial.com/

7    G. Reid, “The Physics of Percussion.”

8    www.plogue.com/products/chipsounds.html

9    Note: Wasting food isn’t good for the environment or the people living in it. It is a good idea to take some measures to reuse food (or as much of it as you can). Grocery stores will often have a huge amount of expired produce that you can claim by simply asking a manager. Of course, you also want to be careful not to contaminate the food during the session if you plan on eating it or serving it at a family dinner later on. Use good judgment in either case.

10    www.gamasutra.com/view/feature/179039/the_sound_design_of_journey.php

11    R. Viers, Sound Effects Bible.

12    A. Farnell, Designing Sound.

13    N. Collins, Handmade Electronic Music.

14    www.gameinformer.com/b/features/archive/2011/02/28/war-tapes-the-sounds-of-battlefield-3.aspx

15    Always take proper precautions and go through the proper channels when recording weapons and explosives. There are many professional field recordists who can handle proper setup of these types of sessions.

16    Wikipedia, “Loudness.”

17    http://hydrophones.blogspot.com/2009/04/hydrophones-by-jrf.html

18    Turtle Beach Blog, “The Audio of Hellblade: Senua’s Sacrifice.”

19    How Stuff Works, “What is a Decibel, and How Is it Measured?”

20    D. Solberg, “The Mad Science behind Inside’s Soundtrack.”

21    D. Shay, The Making of Jurassic Park.

22    R. Gould, “Auditory Icons.”

23    Sakamoto, D., GDC, “Hearthstone: How to Create an Immersive User Interface.”

24    D. Mortensen, “What Science Can Teach You about Designing Great Graphical User Interface Animations.”

Bibliography

Collins, N. (2006). Handmade Electronic Music: The Art of Hardware Hacking. New York: Routledge.

Farnell, A. (2010). Designing Sound. Cambridge, MA: MIT Press.

Gould, R. (November 24, 2016). “Auditory Icons.” Retrieved from http://designingsound.org/2016/11/24/auditory-icons/

Johnson, S. (October 10, 2012). “The Sound Design of Journey.” Retrieved from www.gamasutra.com/view/feature/179039/the_sound_design_of_journey.php

Jurassic Park. “Jurassic Park Raptor Effects.” Retrieved from http://jurassicpark.wikia.com/wiki/Jurassic_Park_Raptor_Effects

Hanson, B. (February 28, 2011). “War Tapes: The Sounds of Battlefield 3.” Retrieved from www.gameinformer.com/b/features/archive/2011/02/28/war-tapes-the-sounds-of-battlefield-3.aspx

How Stuff Works. “What is a Decibel, and How Is it Measured?” Retrieved from https://science.howstuffworks.com/question124.htm

Kane, B. (2014). Sound Unseen: Acousmatic Sound in Theory and Practice. New York: Oxford University Press.

Mastering the Mix. (August 16, 2016). “Mixing and Mastering Using LUFS.” Retrieved from www.masteringthemix.com/blogs/learn/mixing-and-mastering-using-lufs

Mortensen, D. (n.d.). “What Science Can Teach You about Designing Great Graphical User Interface Animations.” Retrieved from www.interaction-design.org/literature/article/what-science-can-teach-you-about-designing-great-graphical-user-interface-animations

Phon2 (August 16, 2016). “The Production of Speech Sounds.” Retrieved from www.personal.rdg.ac.uk/~llsroach/phon2/artic-basics.htm

Reid, G. (June 1999). “The Physics of Percussion.” Retrieved from www.soundonsound.com/techniques/physics-percussion

Rodrigues Singer, P. “The Art of Jack Foley.” Retrieved from www.marblehead.net/foley/jack.html

Sakamoto, D., GDC. (June 15, 2015). “Hearthstone: How to Create an Immersive User Interface.” Retrieved from www.youtube.com/watch?v=axkPXCNjOh8

Shay, D. (1993). The Making of Jurassic Park. New York: Ballantine Books.

Solberg, D. (August 23, 2016). “The Mad Science behind Inside’s Soundtrack.” Retrieved from https://killscreen.com/articles/mad-science-behind-insides-soundtrack/

Stripek, J. (October 14, 2012). “Sound Sweetening in Backdraft.” Retrieved from https://cinemashock.org/2012/10/14/sound-sweetening-in-backdraft/

Sweet, M. (2015). Writing Interactive Music for Video Games: A Composer’s Guide. Upper Saddle River, NJ: Pearson Education.

Turtle Beach Blog. (August 21, 2017). “The Audio of Hellblade: Senua’s Sacrifice.” Retrieved from https://blog.turtlebeach.com/the-audio-of-hellblade-senuas-sacrifice/

Twenty Thousand Hertz. “The Xbox Startup Sound.” Retrieved from www.20k.org/episodes/xboxstartupsound

Viers, R. (2011). Sound Effects Bible: How to Create and Record Hollywood Style Sound Effects. Studio City, CA: Michael Wiese Productions.

Wikipedia. “KISS principle.” Retrieved from https://en.wikipedia.org/wiki/KISS_principle

Wikipedia. “Loudness.” Retrieved from https://en.wikipedia.org/wiki/Loudness
