08
Making It Good

Summary: The roles and functions of audio in games, mixing

Project: DemoCh08SpaceShip01 Level: SpaceShip01

Introduction

For a long time sound has developed with the aim of matching the increasingly photo-realistic look of games, hence "making it sound real". It's true that to some extent one role of sound is to support the visuals by creating a sense of reality, adding weight and substance to what are, after all, simply a bunch of pixels. However, the exclusive pursuit of realistic sound displays a lack of understanding about how the human brain deals with sound. Here's the key point, and it's a biggie—with regard to the human sensory system, there's no such thing as 'reality' when it comes to sound. Take a moment, breathe deeply, and we will explain.

If you had 40,000 speakers, then it might be worth trying to accurately represent a real sound field, and then we could use our perceptual mechanisms in a way akin to how we use them in the real physical world. As this isn’t realistically possible (yet), we need to realize that the usual ways of listening do not apply. The phenomenon described as the cocktail party effect explains how the brain uses the difference in the time that sound arrives at each ear, and the difference in the sound itself (due to the head filtering and attenuation of one ear’s version, depending on the direction of sound) to enable us to focus our aural attention on things of interest (i.e., at a cocktail party where there are many sound sources, you can still tune in to the fascinating conversation of the couple by the door who are discussing what an idiot you are). If you put a microphone in the middle of the cocktail party, made a recording, and listened back, it would be chaotic (unless it was a very boring cocktail party). With your subjective listening, you can choose to tune into different voices, but with the microphone’s objective hearing, when you listen back to the recording, you won’t be able to make anything out. In other words, microphones hear, but people listen.

Without a fully functional sound field, we are no longer able to do this. It is therefore our job as sound designers to decide from moment to moment what the most important sounds are for the listener/player. If we try to play everything all the time, then we're going to end up with sonic mud that the user cannot decipher. When deciding on what you want the player to hear, you'll obviously talk to the designers about the goal of the player in a particular scenario or the arc of emotion they want them to experience. We'll talk more about the functions of audio in games below, and hopefully some of these ideas will inform your approach, but first let's consider some basic principles of the mix.

Mixing

Summary: Using Sound Classes and Sound Mixes

General Mixing Principles

There are some parallels to be made between the discipline of mixing sound for films and that of mixing for games, but perhaps a more fruitful comparison is with music production. Film's language of distant, medium, and close-up shots means that the soundtrack very often shifts its focus rather than attempting to represent the soundscape as a whole. If you listen to an action sequence in a film, you will notice that the mix is radically altered from moment to moment to focus on a particular sound or event. This is quite unlike the mix for games where, because the player is in control and the perspective is fixed, we often need to try to represent many events simultaneously. We can use focus for specific effect, but the need to provide information to the player means that many elements will often be present, so the challenges are more akin to those of music production than film production.

In a music mix, you have a number of simultaneous elements that you want the listener to be able to hear and choose what to focus on. You are allowing the listener the possibility of selective listening. In order to achieve this, you might think in terms of the arrangement, the panning, relative volumes of instruments, compression/limiting, EQ, and reverb. In a typical soundscape for a game, our concerns are similar.

Arrangement

The number of simultaneous sound sources—a product of the game’s activity map, voice instance limiting, and occlusion

Panning

The spatialized nature of the sound sources (3D spatialized, stereo, and multichannel)

Volume

A product of the actual volume of the source together with the source’s attenuation curve

Compression/Limiting

The overall headroom limitations of combining 16-bit sound sources

EQ

The design and nature of the sound sources themselves in terms of their frequency content

Reverb

The different game environments and their reverberant nature

If we consider the arrangement, we can see that the concept of mixing actually starts with the design of the game itself. The events in the game will govern the density and nature of the sound and music at any given moment. In the past, game audio has too often been treated as a fixed system—a network of processes put in place and then left to get on with it. Mixing a game needs to be integrated with the game design and should be dynamic and responsive to gameplay. Our game may vary significantly in terms of the intensity of the action taking place, and this should influence our choices.

fig0437

For instance, it might be more "real" to have a long hall reverb type in a particular game location, but if you know that the game design means there's going to be a battle taking place, then this will affect your decision regarding the wet/dry mix of the reverb, or whether to have the reverb there at all. You will also make different decisions about the panning (the type of spatialization) and attenuation curves depending on the number of potentially simultaneous events, the distances involved, and the relative importance of being able to identify point sources.

In periods where there are likely to be fewer sound sources, these could be richer sources in terms of frequency content. If there will be many simultaneous sources, then you’ll want to plan the frequency content so that they’re not all fighting for the same space. You might dedicate certain ranges for the music or for the dialogue, leaving others for the main power of the sound effects. In the soundfield represented below, the different elements have some overlap in terms of their frequency range but enough difference so that when they are heard at the same time (imagine the image below being squashed in from each side) each one still has some room (another interesting approach is to tune your effects to fit with the music).

fig0438

As well as the nature of the sounds themselves, it's worth considering a consistent starting point regarding the relative volumes of the different elements of your soundscape (although this can change dynamically, as we'll see below). Some have approached this by setting some general ground rules for different types of sound (for instance, the Unreal Engine guidelines suggest dialogue ~1.4, music ~0.75, weapons ~1.1, and ambience ~0.5). This is of course based on the premise that the raw sound files are themselves at equal volume in the first place. We know, of course, that there's no such thing as truly equal volume, as our perception of loudness is subjective and depends on the frequency content of the sound, not simply its amplitude. Having said this, most people find that some kind of normalization strategy applied to groups of sounds in their DAW can save a lot of individual tweaking later (normalization is a basic function of any DAW that scans the audio file to find the peak value and then increases the gain of the whole file so that this peak does not exceed a given value). Theoretically you should normalize all assets to 100%, as every good sound engineer knows that mixing should be a process of gain reduction, not boost.
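Peak normalization itself is simple to reason about: scan for the highest absolute sample value, then scale the whole file by a single gain. A minimal sketch in plain C++ (engine-agnostic and purely illustrative; the function name and float-sample representation are our own):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Scale a buffer of float samples (-1.0 to 1.0) so its peak hits Target,
    // e.g., 1.0 for 100% or 0.5 for roughly -6 dBFS.
    void NormalizeToPeak(std::vector<float>& Samples, float Target)
    {
        float Peak = 0.0f;
        for (float Sample : Samples)
            Peak = std::max(Peak, std::fabs(Sample));

        if (Peak > 0.0f)
        {
            const float Gain = Target / Peak; // one gain applied to the whole file
            for (float& Sample : Samples)
                Sample *= Gain;
        }
    }

Note that this matches peaks, not perceived loudness; as we've just discussed, two files with identical peaks can still sound very different in volume.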

Unfortunately, in some genres there can be an expectation that loud is good, but this doesn't recognize that our perception of volume is actually very subjective. Humans do not have an inbuilt reference level for volume, so we compare the volume of a sound to the volume of the other sounds we have heard within a recent time window. If you are sitting quietly in the kitchen in the middle of the night, desperately trying to finish writing a book, and accidentally knock your cup of tea off the table, sending it crashing to the floor—this is loud. If you walk over to an aircraft that is warming up on the runway, you will also consider this to be loud. Intellectually you know that the jet engine is probably a lot louder than your breaking cup, but psychologically the cup felt louder! You will want to use the full dynamic range that you have available in order to highlight the differences between your quiet sounds/music and your loud sounds/music. For an event to feel loud, the other events around it must be, by comparison, relatively quiet. If everything is loud all the time, then your players will simply turn it down, and nothing will actually feel loud because they have not experienced a quiet to compare it to.

Reference Levels

Mixing for cinema has long established a reference level to mix at, which is then replicated exactly in the auditoriums. Within this controlled environment, you know that audiences are going to be experiencing the film at exactly the same level that you are mixing at. Although there have been efforts to establish a common approach in games (such as an average of −23 LUFS—see this chapter's Further Reading section on the website for more background to this), there remains a variety of practices.

People play games in a variety of environments and circumstances and through a variety of media, from headphones to a mono television to a 7.1 THX-certified surround sound setup. If you are not mixing within a professional environment (and as beginners, we'll assume most of you aren't), then a good rule of thumb is to use other media to set a comfortable listening level—and then stick to that. Your users are typically going to be using their consoles for watching films and TV as well as listening to music, and they are not going to want to leap up and adjust their volume as they move between these media.

Sound Classes and Sound Mixes: The Unreal Engine Mixing System

When you are mixing a large number of elements, you obviously don’t want to have to adjust the volume of each individual one every time you make a change. You can relate this to how you might mix music—by adding all of the drum elements (snare, kick, hi hat, and cymbals) to one group or bus called “Drums”, you can quickly adjust the overall volume of all of them.

Sound Classes are the Unreal Engine's version of groups or buses. By assigning all of the sounds or music in your game to different Sound Classes, you can then affect them as a group. You can also set up linked hierarchies in the Sound Class Editor's Graph and opt to affect any children of a particular class as well, for example not only having control over higher-level categories such as music, dialogue, and effects, but also having independent control over subcategories such as effects/ambience, props, or character classes.

fig0439

When we want to apply a {Sound Mix} to change the volume or pitch, we designate which classes we want it to affect (and whether the changes should apply to their children).

fig0440

We’ll go into this in more detail in a moment, but first we should note that the Sound Class also allows us to set up some additional properties for its members.

Sound Class Properties

Use the Content Browser filters to search for Sound Classes and double-click one to open the Sound Class Editor. Select the Sound Class node in the Graph window to view its properties in the Details panel.

fig0441

The Volume and Pitch apply a master control to anything attached to this class.

Stereo Bleed refers to the amount of a stereo sound source that should bleed into the surround speakers.

LFE Bleed is the amount of sound to bleed to the LFE channel (the low frequency effects channel that goes to the subwoofer speaker). You might want to avoid this for most sounds, saving the effect for specific instances.

Voice Center Channel Volume is the volume of the source that will go to the center channel or center speaker in a 5.1 and 7.1 speaker setup. This is traditionally reserved for dialogue in the cinema, and it can be useful in games if it is more important that the dialogue is heard than spatialized. You can also change this dynamically via a {Sound Mix}.

Radio Filter Volume and Radio Filter Volume Threshold apply a radio-like filter to a sound if its volume falls beneath the given threshold (the filtered sound is heard through the center speaker and does not attenuate over distance).

Apply Effects governs whether sounds belonging to this class have effects such as filtering applied.

Always Play will prioritize the sound above others in a limited voice count situation.

IsUISound determines whether or not the sound will continue to play during the pause menu.

IsMusic designates the class as being music or not.

Reverb controls whether Reverb is applied to this class.
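For reference, the per-class settings above live in a properties struct on the Sound Class asset. You would normally set them in the editor rather than in code, but a C++ sketch makes the grouping clear (this assumes UE4's USoundClass/FSoundClassProperties member names; treat them as an assumption to verify against your engine version):

    #include "Sound/SoundClass.h"

    // Illustrative only: configure a class for radio-style dialogue.
    void ConfigureRadioDialogueClass(USoundClass* RadioDialogue)
    {
        RadioDialogue->Properties.Volume = 1.0f;
        RadioDialogue->Properties.VoiceCenterChannelVolume = 1.0f; // favor the center speaker
        RadioDialogue->Properties.bReverb = false;       // radio chatter takes no room reverb
        RadioDialogue->Properties.bApplyEffects = false; // skip filter effects for this class
        RadioDialogue->Properties.bAlwaysPlay = true;    // prioritize under voice limiting
    }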

Exercise 08_01: Sound Class Properties

In this exercise you are going to add some instructional dialogue for your player in the style of radio-ized messages from your captain in the control room back at base.

{Sound Class}

  1. Pick an area of the exercise map where you might want to give the player some instructions, for example about blowing up the wall in *Cave 02*, how to operate the machines in the *Final Cave*, instructions for removing the blockage in the *Mountain Pass*, or what they have to do in the *Outer Wall Vaults* in order to open the inner gate to the city.
  2. Create your dialogue (maybe with radio type EQ and distortion effects) and assign it to a <Play Sound Attached> node.
  3. Place a [Trigger] in the level and an associated <On ActorBeginOverlap> event in the [Level Blueprint] to trigger your <Play Sound Attached>.
  4. Create a new {Sound Class} asset in the Content Browser using Add New/Sounds/Sound Class, and name it {Radio_Dialogue}.
  5. Double-click this to open the Sound Class Editor.
  6. With the node of your Sound Class selected in the Graph, look at the Details panel on the left and expand its properties.
  7. Since this is instructional dialogue theoretically played back over the radio to the player's headset, we don't want to apply reverb, and we do want to send it to the center channel speaker if the player has a 5.1 setup. Uncheck the Reverb option and check the Center Channel Only option in order to achieve this.
  8. You could alternatively control the amount of sound going to the center channel using the Voice Center Channel Volume option.
  9. To make these options apply to your dialogue, you need to assign it to this {Radio_Dialogue} Sound Class. You can do this either through the Generic Asset Editor of the Sound Wave (double-click it), or, if you are using a Sound Cue, in the Details panel of the -Output- node.
  10. The Sound Class settings of a Sound Cue will override those of any Sound Waves placed in it, so always check this if you are not getting the results you expect.
  11. Play the game and test. Obviously you won’t hear the center channel effect if you are listening with just stereo speakers or headphones since this is a 5.1/7.1 specific function, and you’ll only note the absence of reverb if you’re within an audio volume.
  12. On a console that supports this functionality, you can also set its output target to be the speaker on the controller itself.

The Roles and Functions of Game Audio

Summary: Exploring some of the roles and functions of audio in games

Before you start thinking about a game mix, you need to be clear on what you want to be heard and why. In order to make a real contribution to the game, you should familiarize yourself with the roles and functions of audio so that you can be an advocate for its effective use. This way you can try to avoid the spreadsheet mentality of "See a ventilation shaft, hear a ventilation shaft." Look for opportunities to use subjective or interpretive audio, sound that is heard through your character's emotional state or focus, as opposed to the strict realism of simply representing what's there. Don't be afraid to let go of the real.

As well as all of the storytelling and emotional functions that we’re familiar with from film, audio plays a vital role in enabling players to achieve mastery of the game by providing them with information. These ludic (specifically game related) functions typically fall into the types outlined below, and we’ll spend this chapter (and level) exploring some illustrations of these, as well as exploring mixing techniques. Since this chapter is really about principles, we’ll go into some detail on the new mixing aspects on *Floor 01*, but after that we will just talk about the concepts at work.

fig0442

The I.N.F.O.R.M. Model

Instruction

Audio to direct or encourage approaches to gameplay

Notification

Unsolicited audio conveying information about game, character, or object states that may prompt the player to perform certain actions or attend to certain matters

Feedback

Audio in response to action instigated by the player that indicates confirmation or rejection or provides reward or punishment

Orientation

Audio that geographically (or metaphorically) orientates the player

Rhythm-action

Synchronized input as a game mechanic (we’ll be looking at this in detail in Chapter 12—not here)

Mechanic

Audio that is used directly as a game mechanic (excluding rhythm-action mechanics)

Instruction

There are many ways to convey information without instructional dialogue, but it remains a fundamental part of many games. The most obvious (but sometimes overlooked) thing to bear in mind is that your instructional dialogue must be audible! In some circumstances you will have control over all the sound sources in a room, in which case you can balance them so the dialogue is easily heard; however, there may be other situations (in the heat of battle, for example) where it is more problematic.

Sound Mix: Triggered Ducking

  • Project: DemoCh08SpaceShip01
  • Level: SpaceShip01

You can skip between the areas of the ship using the keys 1–0.

There are a variety of circumstances where you might want to duck (attenuate) all the sounds or music in order for a specific sound to be heard, but this is most commonly used for instructional dialogue.

As you awake in the *Floor 01: Sleeping Quarters* (Bookmark 1), you're told "Wake up, the ship's under attack!". The console command "Stat SoundMixes" will display the active Sound Mixes on screen while you play, and it might be useful to do this for the following sections so that you can see what's going on.

fig0443

As this dialogue is played, a {Sound Mix} is pushed on, and when it completes it is popped off.

fig0444

The other sounds in this area are set to belong to the Sound Classes: {Ship_Ambient}, {Exception_Class}, and {Tannoy}.

fig0445

The Sound Mix {Duck_Ship_Ambient} is set to apply itself over 0.2 seconds and then remain on (a Duration setting of −1.0 means the mix stays on indefinitely). Since this mix doesn't have a set duration, we then need to remove it (pop it) when the dialogue line has finished, which we do using a <Bind Event to Audio Finished>. The effect of this is to duck down all the other sounds so that the instructional dialogue line can be clearly heard.

fig0446
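The same push/pop pattern can be expressed in C++. A sketch, assuming DuckShipAmbient and WakeUpLine are asset references you have set up on some actor class of your own (here a hypothetical AShipQuarters), and noting that the dialogue must play on an Audio Component so there is an OnAudioFinished delegate to bind to:

    #include "Components/AudioComponent.h"
    #include "Kismet/GameplayStatics.h"

    void AShipQuarters::PlayDuckedDialogue()
    {
        // Push the mix: everything in the affected classes ducks over 0.2s.
        UGameplayStatics::PushSoundMixModifier(this, DuckShipAmbient);

        UAudioComponent* Dialogue =
            UGameplayStatics::SpawnSoundAttached(WakeUpLine, GetRootComponent());
        if (Dialogue)
        {
            // Equivalent of <Bind Event to Audio Finished> in Blueprint.
            // (OnLineFinished must be a UFUNCTION for AddDynamic to bind.)
            Dialogue->OnAudioFinished.AddDynamic(this, &AShipQuarters::OnLineFinished);
        }
    }

    void AShipQuarters::OnLineFinished()
    {
        // Pop the mix so the ambience returns to full volume.
        UGameplayStatics::PopSoundMixModifier(this, DuckShipAmbient);
    }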

Sound Mix: Triggered EQ

In the *Floor 01: Corridor* you’re told “Get out of there! You need to get to the bridge!” Note how this time the ducking effect is a little subtler—instead of reducing the volume of everything, we’re applying an EQ setting via the Sound Mix to allow the dialogue to punch through the other sounds.

Although we’ve used it here for dialogue again, this technique is equally applicable when you want to highlight a big sound effect event while music is playing. Music, particularly a dense orchestral arrangement, often takes up a large range of frequencies, so the use of the EQ within the Sound Mix function is very useful in notching out a frequency range for a specific sound effect to momentarily shine through without disrupting the musical flow. The system for turning the mix on and off is the same as above.

fig0447

This time we’re calling the Sound Mix that applies the EQ {Notch_for_Dialogue}.

fig0448

The EQ is a three-band EQ with a high and low shelf and a parametric mid. The frequency is in Hz, the gain is from 0.0 to 1.0, and the mid frequency bandwidth ranges from 0.1 to 2.0. One of the best approaches to working out some appropriate settings is to record the audio for a sequence of your gameplay and import this audio track into your DAW. There you can play around with the EQ within a slightly friendlier environment before attempting to imitate these settings within the Sound Mix. If you look in the “Initialize” section of the Level Blueprint you’ll see that we’ve also implemented a {WakeUpFilter} mix as the level starts.

Exercise 08_02: Triggered Duck and EQ

In this exercise we are going to make the cave wall explosion in *Cave 02* a little more impressive by implementing a simultaneous Sound Mix change.

{Sound Mix}, <Push Sound Mix Modifier>

  1. Go back to the *Cave 02 Upper* section of the exercise level and find the layered explosion system you implemented for blowing up the tunnel wall in Exercise 02_04.
  2. We are going to use this event to trigger a {Sound Mix}. This will duck the ambient sounds and apply some EQ.
  3. First you need to find all the sounds that you might hear in this area. Assign all of these Sound Waves or Sound Cues to the {Ambient} Sound Class.
  4. Find the Sound Cue that you created for the explosion here ([Rock_Explosion_01]) and assign it to a new {DoNotDuck} Sound Class.
  5. Add a new {Sound Mix} in the Content Browser called {Duck_Ambience}.
  6. Double-click this to open its Details. Under the Sound Classes/Sound Class Effects options, add a new element using the + sign and assign your {Ambient} Sound Class.
  7. Set the Volume Adjuster to 0.2 so that when this Sound Mix is called all sounds that belong to the {Ambient} Class will be ducked to 0.2 of their original volume.
  8. Set the Fade in Time to 0.0, the Duration to 2.0 seconds and the Fade Out Time to 3.0 seconds.
  9. Go to your [Level Blueprint] and insert a <Push Sound Mix Modifier> node from the Output of the <Play> that plays your explosion sound (if you’ve done Exercise 07_04, you’ll have a <Set> to change the ambient zone, so connect to this instead). Reference your new Sound Mix {Duck_Ambience} in this node.
  10. Try this out in game, and then try applying some EQ changes as well. These are set in your Sound Mix {Duck_Ambience}. Remember that EQ will only apply to sounds belonging to classes that have their Apply Effects option checked (you can see which mixes are active using the Console command "Stat SoundMixes").
  11. You might also want to trigger a {Shellshock} tinnitus-type whining sound when the explosion goes off. Assign this to the {DoNotDuck} Sound Class as well.

Passive/Automatic Mix Changes

Passive Mixes for Dialogue

Obviously, setting up a push/pop Sound Mix system for every line of dialogue in your game is going to be a pain. Fortunately, the Sound Class has functionality that allows us to call a specific mix (when certain criteria are met) whenever a sound registered to that class is played.

After the explosion in *Floor 01: Corridor*, a hole is punched through the wall into the *Floor 01: Server Room* and you’re told “Go through the lab section to your left. Hurry—the airlocks are going to go!”

fig0449

This line, {Vo_Line_003}, is assigned to the {Ship_Dialogue_With_Passive} Sound Class. As you can see in the Sound Class Editor, this has a passive Sound Mix that calls the {Duck_Ship_Ambient} mix. Anytime a sound belonging to this Sound Class is played, it will automatically apply this Sound Mix. It will also automatically pop off the mix when the sound has finished playing—which is handy!

fig0450

Passive Sound Mixes have some additional parameters (Min Volume and Max Volume Thresholds) that govern whether they actually start or not, and you can set these according to the following three usage scenarios:

  • a. Always Apply the Mix

    fig0451

    If you want the mix to always be applied, then set your Min Volume Threshold to be 0.0 and your Max Volume Threshold to be high (e.g., 10.0). When a sound from the designated Sound Class plays, then its volume will always be above the Min Volume Threshold and below the Max Volume Threshold, therefore the Sound Mix you’ve referenced will be applied.

  • b. Auto-ducking Other Sounds If Designated Sound Class Is Too Quiet

    fig0452

    If you only want the mix to be applied when a sound plays too quietly, then set your Max Volume Threshold to the cut-off point below which you want the mix to be applied. When a sound from the designated Sound Class plays at a volume between 0.0 and this maximum, the Sound Mix will be applied. A typical use would be to assign this passive Sound Mix to your dialogue class so that if the dialogue is playing too quietly, it will auto-duck everything else so you can hear it.

  • c. Dynamic Range

    fig0453

    If you want something to have a particular impact, then you can temporarily duck the other sounds to give it more headroom. This clears the voices that you probably won’t hear anyway and allows a greater dynamic range. If the sound plays loudly (i.e., above min threshold but below max) then the Sound Mix will be applied.

    Note that the min/max values are derived from the playback volume of the sound, that is, its source volume as scaled by its attenuation curve. This is not the same as the sound's actual output level. Many systems in other engines do track the real volume of the sound, which is more effective for this kind of side-chain effect.

Auto-attenuation or Culling for Dynamic Range

If you go to the end of the corridor, you can hear an explosion sound with the Dynamic Range effect referred to above. The Sound Cue {Explosion_With_Passive_Cull} belongs to the Sound Class {Culling_Class}. For its passive Sound Mix, the Min Volume Threshold is set relatively high, so the culling effect will only take place when the explosion is loud. If you walk close to it, you can hear that it is a dramatic, loud explosion that momentarily blocks out all other sounds, but from further away it becomes more of a background sound.

fig0454

As we discussed at the opening of this chapter, we notice changes and differences in volume rather than having an awareness of absolute values. If you have a big event happening that you want to have real impact, then the best effect is going to be achieved by preceding it with a quiet section to make the dynamic contrast more dramatic. This is of course not always possible, but in addition to the culling technique outlined here, you could also implement a pre-duck strategy that momentarily brings all the sounds down in volume immediately before loud events such as explosions or car crashes.

All platforms have some sort of built-in limiting system to stop distortion. These are often clumsy and unpredictable, so it's really best to avoid these systems having to be applied in the first place. There are also, however, increasingly intelligent systems of automation that can really help the overall mix of the game. Given the huge dynamic range of sounds in the physical world and the limited dynamic range that can be represented by 16 bits of information, it is not surprising that achieving the feeling of a wide dynamic range in games is a particular challenge. We've discussed how the level design can contribute to this, but we also need systems in place that allow for the intelligent management of which, and how many, sounds are playing so that the combination of these sounds does not cause clipping and bring in the blunt instrument of the limiter. Chirping insects may add an important element of immersion to a game environment when you are creeping around, but in the midst of battle you will not hear them, as they will be masked by the louder sounds. An automated system that monitors the current levels of the sounds at the player's location and culls the quieter elements (which you won't hear anyway) will both clean up the mix in terms of frequency content and create some additional headroom for the sounds you do want to hear.
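Unreal doesn't hand us this exact system out of the box, but the core idea is easy to sketch. A toy illustration in plain C++ (all names ours): compare each active sound's effective volume at the listener against the loudest current sound, and cull anything far enough below it to be masked.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct ActiveSound
    {
        float EffectiveVolume = 0.0f; // source volume x attenuation at the listener
        bool  bCulled = false;
    };

    // Cull sounds more than ThresholdDb below the loudest sound, on the
    // assumption that they are masked and inaudible anyway.
    void CullMaskedSounds(std::vector<ActiveSound>& Sounds, float ThresholdDb = 30.0f)
    {
        float Loudest = 0.0f;
        for (const ActiveSound& Sound : Sounds)
            Loudest = std::max(Loudest, Sound.EffectiveVolume);

        // Convert the dB threshold to a linear ratio: 10^(-dB/20).
        const float Ratio = std::pow(10.0f, -ThresholdDb / 20.0f);
        for (ActiveSound& Sound : Sounds)
            Sound.bCulled = (Sound.EffectiveVolume < Loudest * Ratio);
    }

A real implementation would also fade sounds out rather than cutting them, and bring them back (or "virtualize" them) once the masking sounds stop.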

There has been a considerable amount of clever work done on interactive mixing for game audio in recent years that falls outside of the introductory remit of this book. We would encourage you to undertake some further reading in this area by following the links on the website.

Exercise 08_03: Passive Mixes

In this exercise we are going to set up an automated ducking system for whenever dialogue is played in the game.

  1. Create a new Sound Class {Dialogue_Ducking} and assign all your dialogue Sound Waves or Sound Cues to it. Remember you can adjust multiple Sound Waves simultaneously by selecting them all and right-clicking to open a Generic Asset Editor that will apply to all.
  2. In the Passive Sound Mix Modifiers section of the {Dialogue_Ducking} Details panel, add an element by using the + sign.
  3. Reference the Sound Mix {Duck_Ambience} you created in Exercise 08_02, since this is already set up to duck any sounds belonging to the {Ambient} class.
  4. Set its Min Volume Threshold to 0.0 and its Max Volume Threshold to 10.0. This will ensure that it is always called.
  5. Play the game. You should find that every time your dialogue is played, the ambient sound ducks out. You might want to create a new {Sound Mix}, since the settings of the {Duck_Ambience} you used for the explosion may not work as well for dialogue ducking.
  6. If certain sounds are not ducking, then it's probably because they don't belong to the {Ambient} Sound Class. You could create additional Sound Classes for these and add them as children of the {Ambient} Sound Class by dragging and dropping them onto the Graph of this class and connecting them as children. Make sure the Apply to Children option is checked in your Sound Mix, and your Passive Sound Mix Modifier will now apply to these as well.
  7. You might also want to set up a Passive Sound Mix Modifier for the weapons sounds so that they duck the quieter elements of the ambience (the area loops for example) or Foley. During weapons fire you probably would not hear these anyway, and by ducking them out we allow more headroom for the weapon sounds. Better still would be to only use the looping or retriggered elements of the weapon sound to instigate the duck so that the ambience is already coming back up during the tail portion, not waiting until after it has finished.

Sound Mix: Base Sound Mix and Mix Modifiers

All the Sound Mixes we’ve applied so far have actually been mix modifiers. These are mixes that you can push on or pop off at will and also apply simultaneously, as we will see in a moment. We can also apply a base Sound Mix via the <Set Base Sound Mix> node. This, however, is a permanent mix that you cannot pop off. You might have a use for this if you find that you want to make some global changes to the relative volumes of different types of sound or classes as a fixed setting across the whole game.

In the *Floor 01: Server Room* we have a problem. The airlocks are malfunctioning, and every 20 seconds or so all the air is sucked out and we are in a floating vacuum. Your helmet has a detect mode (press H) that should help you locate the passkey to the elevator at the far end.

fig0455

For this scenario we need multiple Sound Mixes to be operating simultaneously: one for the air vacuum effect (which attenuates and equalizes everything out) and one for the detect mode (which highlights the beeping of the key in the mix). After a lead-in sound, {Airlock_Out_02} and the {Airlock_Malfunction} mix are implemented (the mix has a timed duration of 7 seconds, so it doesn't need to be popped off). This mutes everything, including the {Exception_Class} that the {KeyCard} Sound Cue belongs to.

fig0456

In order to bring this back up during the detect mode (instigated by the H <Key Press>), we apply another modifier mix. The adjuster settings work as multipliers, so if the {Airlock_Malfunction} mix adjusts the volume of our {KeyCard} class by 0.1, then our mix modifier {Detection_Mode} needs a Volume Adjuster value of 10.0 to bring the overall volume back to 1.0.

fig0457

fig0458
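In other words, the compensating adjuster is simply the reciprocal of the value it needs to cancel. As a C++ sketch (AirlockMalfunction and DetectionMode are assumed USoundMix asset references on a hypothetical actor class):

    #include "Kismet/GameplayStatics.h"

    void AServerRoom::EnterDetectMode()
    {
        // AirlockMalfunction is already active and scales {KeyCard} by 0.1.
        // Concurrent adjusters multiply, so DetectionMode's Volume Adjuster
        // of 10.0 restores the class to 0.1 x 10.0 = 1.0.
        UGameplayStatics::PushSoundMixModifier(this, DetectionMode);
    }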

Note that as you can have multiple Sound Mixes at the same time, but not multiple EQs, there is also an EQ priority system to determine which EQ should take precedence. This uses the same approach to priorities that we have seen with the audio volumes and reverb (i.e., the higher the number, the greater the priority).

Sound Mixes have many uses in shaping a player's experience, but they are also very useful when prototyping and developing your level. It is very handy to set up some mixes that solo particular types of sound (ambience, music, objects, dialogue, weapons, cats, etc.) that you can call up with some shortcut keys. See the Testing the Mix section later in the chapter.

Exercise 08_04: Sound Mix Modifiers

In this exercise you are going to experiment with multiple Sound Mixes.

EQ priority

  1. Find an area of your exercise level where there is a lot of different audio going on (the *Inner Sanctum* for example).
  2. Identify all the sounds that are playing in this area. One way to do this would be to play the area and enter the Console command "Stat Sounds", which gives you a list of active sounds on screen (you can also use the Console command "Stat SoundMixes" to display the active Sound Mixes). See Appendix A: Core Concepts/Console Commands.
  3. Find out which Sound Class these are assigned to (in the Details panel of Sound Waves or the -Output- nodes of Sound Cues).
  4. Devise a Sound Mix to focus on a particular group of sounds (music, dialogue, fireworks, etc.), set up a <Key Event> in the [Level Blueprint], and trigger a <Push Sound Mix Modifier>.
  5. Play the game to test that this works and then set up a new Sound Mix that might modify this one, for example the first mix might bring everything down apart from quiet ambience and dialogue, then you might apply a modifier mix that brings the music back up.
  6. Remember that simultaneous mixes work as multipliers, so if you brought the music down to 0.2, you will now need to apply a Volume Adjuster of 5.0 to bring it back to 1.0.
  7. Now try applying some EQ settings with your mixes and test out the EQ Priority settings—remember that the mix with the higher value will override the other.
  8. Don't forget that you can set up sophisticated systems of parent/child relationships within the Sound Class Editor and choose whether or not your Sound Mixes Apply to Children.

Notification

Notifying the player about states within the game is a key role for audio.

Note that the gameplay systems from here on are simply there for you to experience some of the different functions of audio in games, so we will not go into any detail about how they work or include any exercises (with the exception of the decoy system, which introduces the <Make Noise> function). But feel free to have a look yourself.

Character State

As you pass through the *Floor 02: Pipe Corridor* (Bookmark 2) section, you’ll get repeatedly hit by the bursts of fire and steam shooting out from the pipes that line the walls. You’ll need to pause on the way down in order to regain some health.

fig0459

In order to notify the player about their current health state, we've applied both a Sound Mix (EQ) and a system that brings in a heartbeat sound on low health. These kinds of subjective sound states, which represent through audio how a player character might be feeling, can be very effective in immersing the player.

See the [BP_GAB_PipeExlosion] Blueprint and the “PlayerHealthSystem” section in the [MyCharacter] Blueprint.

NPC State

One of the most essential pieces of information that sound often provides us with is that of the current status of enemies or other NPCs in the world. In this instance (*Floor 02: Turret Section*), the NPC is actually a series of gun turrets that the player needs to sneak past. (Bookmark 3)

When the player is visible to the turrets, they power up and enter a scanning phase. If the player is still visible when this phase starts, they will fire, but if not they will power down when scanning is complete. It’s important that the player knows what state the turrets are in so they can time their movements, and it’s the audio that provides this information.

See the “Turrets” section of the [Level Blueprint] and the [BP_Turret] Blueprint.

Object State

Sound can also indicate the state of objects. In the *Floor 02: Mind the Gap* section (Bookmark 4), the player must try to cross a deep shaft by moving a series of girders into place.

fig0460

The control panel is in a side chamber, so the player must rely on listening to gauge whether the objects are in position.

fig0461

As you move each girder, the different sound elements change pitch to indicate the correct position to the player. This is done by reading through curves in <Timelines> to <Set Float Parameters>. The left-hand button resets the system.

See the “Girders” section of the [Level Blueprint].
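In C++, the equivalent of <Set Float Parameter> is UAudioComponent::SetFloatParameter, which a Continuous Modulator node inside the Sound Cue can read to drive pitch. A sketch (GirderAudio is an assumed UAudioComponent member, and "GirderPitch" must match the parameter name set up in the Cue):

    #include "Components/AudioComponent.h"

    // Called as the girder moves; Alpha is 0-1 progress toward the correct
    // position, e.g., read from a Timeline curve.
    void AGirder::UpdatePositionAudio(float Alpha)
    {
        // The Continuous Modulator in the Cue maps this named parameter
        // onto pitch, rising as the girder approaches its slot.
        GirderAudio->SetFloatParameter(TEXT("GirderPitch"), Alpha);
    }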

Game State

In the next area *Floor 02: Puzzle Pipes* (Bookmark 5), you must fix the broken pipes by finding suitable parts around the room. As you do so, you can hear the three steam sounds get muted one by one to indicate your progress.

fig0462

See the [BP_GAB_Puzzle] Blueprint.

Feedback

This is audio that provides feedback on player actions either to just acknowledge input or to provide punishment or rewards.

Pickups

In certain genres of games (particularly platformers), the repetitive nature of the pickup sounds serves as a kind of conditioned response to punish or reward the player. Whether it's picking up stars, coins, or powerups, the sounds are there to reward the player with a pleasurable confirmation of achievement.

In *Floor 03: Power Cell Pickups* (Bookmark 6) you need to collect six healthy power cells and deposit them in the mechanism within a given time to power up the door. There are several healthy cells around the room, but unfortunately there are also some unhealthy ones; these cause damage to the player. In the Blueprints for the two object types, there are different sounds that provide immediate positive or negative feedback to the player when they are picked up (collided with).

fig0463

Although we have droned on a great deal about the need for variation in your sounds, these types of feedback sounds are an exception to the rule. They are the equivalent of earcons in computer software, like the sound that tells you you've got mail: sounds that carry a specific meaning. Although a typical coin or star pickup sound may go up in pitch if you collect more within a given period of time, for symbolic sounds like these we are primarily interested in the information they convey rather than the sound itself, so the repetition of the same audio sample is less of a problem here than it has been in other circumstances. Since the player is concerned with the learned meaning of the sound, variation would only be a distraction: if the sound were different each time, the player would be forced to consider why it is different and what that is trying to tell them, rather than simply acknowledging the information that the presence of the sound provides.

See the [BP_DoorAccessPanelGame], [BP_DoorPowerCell_Healthy] and [BP_DoorPowerCell_UnHealthy] Blueprints.

HUD Interactions

In the real world, sound is often used to provide a user with confirmation of their action. Think of your cell phone or the ATM: sound gives you the message that the button you've just pressed has in fact been acknowledged. This kind of confirmation sound is even more important in games, where the amount of feedback you get through the physical interface is often limited. Some controllers have rumble functionality, where you can get some haptic feedback to confirm your actions, but many have none, so it's important to have the player's interactions confirmed with sound as a representation of the tactile feedback of the physical world.

fig0464

After you complete the power cell pickups task, the *Floor 03: Fuel Cell Storage Door* requires a key code (it randomizes between 592 and 437). When designing sounds to be used for this sort of menu navigation, you need to use sounds that somehow convey the notions of forwards/backwards and accept/decline.

It is difficult to generalize about which sounds might carry these kinds of positive (forward/accept) or negative (backward/decline) connotations. One rule of thumb we can borrow from speech patterns is that positive meanings are often conveyed by a rise in pitch, and negative ones by a fall in pitch. By using slight variations on a few sound sources, it is easier to produce positive and negative versions that convey this meaning than by using completely different sounds. This also gives them an identity as a family of related sounds, rather than simply a collection of unrelated ones.

Your sounds will have more coherence and unity if you link them with the theme of the game world itself or with an aspect of the object that you're interacting with. If your game is set in a dungeons-and-dragons type world, then the UI menu sounds might be made from armor and sword sounds; if set in a sci-fi world, they might be more electronic in nature. If you're going to have menu music playing while the game is in this mode, then consider how the UI sounds will fit with it. You could pitch shift your UI sounds into the same key as the music so that they do not sound too abrasive against it, or you could use musical tones that fit with the style and notes of the background music. This may not be appropriate in all cases but can add a nice sense of polish to your menus.
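Tuning a sound to a musical key is just exponential arithmetic: each semitone multiplies pitch by the twelfth root of two, so a shift of n semitones is a pitch multiplier of 2^(n/12). A sketch (the helper function is ours; UISound is an assumed UAudioComponent member):

    #include "Components/AudioComponent.h"
    #include "Math/UnrealMathUtility.h"

    // +12 semitones (an octave up) gives 2.0; -12 gives 0.5.
    float SemitonesToPitchMultiplier(float Semitones)
    {
        return FMath::Pow(2.0f, Semitones / 12.0f);
    }

    // e.g., shift a UI blip up three semitones to sit in the music's key.
    void AMenuActor::PlayTunedBlip()
    {
        UISound->SetPitchMultiplier(SemitonesToPitchMultiplier(3.0f));
        UISound->Play();
    }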

See the [BP_DoorAccessPanelGame], [Player], and [WBP_HelmetHUDInterface] Blueprints.

Orientation

In addition to describing offscreen space, audio can also serve the function of helping a player to navigate a level.

Navigate

The main corridors have been blocked on *Floor 04: Air Ducts* (Bookmark 7), so you'll have to navigate your way through the air duct system (press F for a flashlight, and C to crouch).

fig0465

You can call up your tracking device (T) to help guide you through this maze of ducts. As you pass each juncture, the tracker will speed up if you’re going the right way.

fig0466

See the [Player] Blueprint for the flashlight system, and the “Ventilation Shaft Tracker” section of the [Level Blueprint] for the tracker system.

Attract

Audio can act to draw attention to things, whether it be objects in the game or on-screen instructions. In the *Floor 04: Corridor Hatches* area (Bookmark 8), you need to find three audiologs that have been hidden in security hatches. If you listen carefully, you can tell when you are walking over a security hatch, as the footstep sound changes slightly. Press E to open the hatch and listen to the audiologs.

See the [BP_Floor], [Player], and [BP_AudioLog] Blueprints.

Repel

As well as attracting players, audio can also repel them. This might be through hearing the presence of NPCs or more literally in the case of *Floor 04: Radiation* (Bookmark 9), where the player must avoid the radioactive pods and mines in the room. On proximity to the pods, you can hear radioactive sounds as they start to affect your health. The mines start beeping on proximity, and if you don’t move away, they explode!

fig0467

See the [GAB_Mine_BP] and [GAB_Radiation_Pod_BP] Blueprints.

Rhythm-action

We’ll be looking at this concept in Chapter 12.

Mechanic

Audio can sometimes be an important game mechanic.

Recall/Learn

You need to hack your way into the final *Generator Rooms* (Bookmark 0). Hacking (by pressing E) the computer terminal in the room on the right gives you a pitched code. You need to recall this pitch sequence for the key code to the door.

fig0468

See the [WBP_MusicalCodeHUD] Blueprint and the “Musical Key Code” section of the [Level Blueprint].

Decoy

There is a guard patrolling this area. Do not attempt to tackle her directly but instead lure her over to a corner so that you can sneak past. Press R to throw an object that will make a noise and cause the guard to investigate (the player’s footsteps and gun shots will also alert the guard, so press Ctrl to sneak).

fig0469

Since there is a new audio technique being used here, we’ll go into it in a little more detail. The [Player] Blueprint handles the throwing of the [ThrowObject_BP] projectile.

fig0470

When this hits something, it plays an impact sound and calls an event within the [Player] that triggers a <MakeNoise> node. Only pawns can have a pawn noise emitter component or a pawn sensing component, which is why, although the noise comes from the location of the hit event, we have to trigger it from the [Player] Blueprint.

fig0471

The bot, [BotChar_NewMesh], contains a pawn sensing component that listens for the Make Noise event, and its behavior tree {BotAITree} references the Blueprint [BTS_CheckForEnemy]. Should the bot hear the make noise event, this changes the bot's usual patrolling behavior so that it now goes to the location of the noise source.

fig0472
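For those curious how the two halves look in C++, here is a rough sketch (the class names and the InvestigateLocation helper are ours; MakeNoise, UPawnNoiseEmitterComponent, and UPawnSensingComponent are the standard engine pieces):

    #include "Components/PawnSensingComponent.h"
    #include "GameFramework/Character.h"

    // Noise maker: called on the player when the thrown object lands.
    // The player needs a UPawnNoiseEmitterComponent for this to register.
    void AMyPlayerCharacter::ReportThrowImpact(const FVector& ImpactLocation)
    {
        MakeNoise(1.0f, this, ImpactLocation); // loudness, instigator, location
    }

    // Listener: the patrolling bot.
    ABotCharacter::ABotCharacter()
    {
        PawnSensing = CreateDefaultSubobject<UPawnSensingComponent>(TEXT("PawnSensing"));
    }

    void ABotCharacter::BeginPlay()
    {
        Super::BeginPlay();
        // Fires when a MakeNoise call lands within hearing range.
        PawnSensing->OnHearNoise.AddDynamic(this, &ABotCharacter::OnHeardNoise);
    }

    void ABotCharacter::OnHeardNoise(APawn* NoiseInstigator, const FVector& Location, float Volume)
    {
        // Hand the location to the behavior tree (e.g., via a blackboard
        // key) so the bot breaks patrol and investigates.
        InvestigateLocation(Location);
    }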

Exercise 08_05: Make Noise

In this exercise you will implement a make noise event to draw the guards away from the gate to the *Inner Sanctum*.

<Make Noise>

  1. You can bribe the guards at the gate to the *Inner Sanctum*, but you can also lure them away by making a noise.
  2. Open the [MyCharacter] Blueprint and create a <MakeNoise> node in the "Make Noise" section of the Blueprint. Create a key press 'N' event to trigger this <MakeNoise>. (The [MyCharacter] Blueprint already contains a PawnNoiseEmitter component, which is required for the <MakeNoise> to work.)
  3. Create a <Get Player Character> node and set this as the creator of the noise by connecting it to the Noise Instigator input of the <MakeNoise> node.
  4. Create a <Get Actor Location> node and connect this to the Noise Location input to define the origin of the noise.
  5. Now when you play the game, you can make a noise (N) and the guards will come and investigate. Run away and then sneak through the gates they have left unguarded.

Mask

In the *Floor 05: Engine Room* you need to take out the engines without the security turrets sensing your presence. Creep (hold Ctrl) and use the sound of the engines as they cycle up to their most intense to mask the sound of your shots (shoot the red control panels).

fig0473

See the “Final Engines” section of the [Level Blueprint], the [Player] Blueprint and the [BP_Turret2] Blueprint.

Testing the Mix

Getting an effective mix for your game is all about understanding what aspects of the audio are important to the player right now. You can’t play everything all the time because no matter how good your approach to managing frequency content, you’ll soon end up with sonic mud. Hopefully we have given you some ideas about the roles and functions of audio that can be useful in understanding how you might want to prioritize certain sounds for gameplay reasons.

Sound Mixes are also very useful when testing your level. Sometimes you want to focus on getting a particular sound, dialogue, or music system right, but you just can’t hear it clearly. Sound Mixes are very useful for hearing what different groups/classes of audio are doing.

First you can see what is going on with your audio in regard to Sound Classes by using Console commands (see also Appendix A: Core Concepts/Console Commands).

When playing the game, press the ¬ key to bring up the command line and type “listsoundclasses”. This gives you a list of how many sounds in each class are playing. This can also be useful in telling you whether you have any sounds ungrouped (i.e., not belonging to any Sound Class).

fig0474

Another useful command is “showsoundclasshierarchy”. This shows a list of all your Sound Classes and any parent/child relationships within them.

To see what mixes are currently active, you can use the command “Stat SoundMixes”. You can of course set up key events in your [Level Blueprint] to trigger Sound Mixes. It is often useful to have a set of Sound Mixes to isolate certain types like ambient, music, dialogue, etc.

You can call Sound Mixes directly from the Console using the command “SetBaseSoundMix”. Follow this with the name of your Sound Mix.

Finally, you can modify the volume of any Sound Class using the command "ModifySoundClass" followed by the name of your Sound Class and then "Vol=" with a value, for example "ModifySoundClass Ambient Vol=0.5".

Another useful tip is to have your DAW constantly recording the game in the background. That way you can go back and compare any changes you made with how things sounded before to help you come to a more objective decision about whether you actually improved things or not!

Conclusion: Audio Concepting

In conclusion to this chapter, we’d like to say a few words about the importance of developing practical examples of your ideas and concepts.

People, including sound designers, are generally not very good at talking about sound. Instead of writing design documents and talking about it, go and do it. If you've got a great idea about a gameplay mechanic based on sound or an audio feedback mechanism you think would work really well, don't just tell your producer about it—make a mock-up and show them.

You need to understand your tools well enough to break them, to use and abuse them in ways that weren't intended, in order to illustrate your point. Whatever engine you will eventually be using, you can mock the idea up in the Unreal Engine or Cycling '74 Max, and then, if the idea gets accepted, the programmers can do it properly for you later.

As well as understanding the game’s mechanics, you should also get your hands on any early concept visuals as soon as possible. In Windows Movie Maker or iMovie, take a still image of this concept art, drag it across the timeline, and add an audio track to it. Try creating a one-minute environment ambience for this location or do the same with an animation. With just the still image on the screen (or a series of images), create an imagined scenario where this character or creature will interact with things. Get the movement sounds and vocalizations in. To try out visual concepts very quickly, sometimes artists will grab a bunch of imagery or video clips from a variety of sources and roughly put them together (sometimes referred to as ripomatics). If you’re pushed for time, you could take a similar approach by taking bits of the soundtrack from your favorite films or games and blending them together to see quickly if this is the direction that people are after.

Putting in the extra effort early on to provide audio concept tracks will achieve a number of things:

  1. It will tell you quickly what people don’t want. People are generally not very good at articulating what they do want (particularly in terms of audio, where most people lack the vocabulary) but are very good at telling you what they don’t like when presented with an example. Don’t get too attached to your sound ideas—they will no doubt change a number of times. Always keep them, however, and then when the producer comes around in 6 months’ time looking for some new sounds for this creature, play them your original ones that they rejected. They’ll probably think they’re perfect! Either that, or they’ll sack you.
  2. The great thing about making mockups and concept tests is that they can feed back into other areas. If you make a concept sound for the creature that you've got some concept art for, then show it to the animator, chances are that, consciously or not, some of your elements will influence the final animation.
  3. Providing early concept sounds will also form an early association with particular characters or objects. People will get used to hearing them with your sounds, so that they will begin to sound not quite right without them!
  4. Using sound to illustrate aspects of gameplay will get other people starting to be interested in the sound of the game, to realize its potential impact, to get into discussions, and to offer input and ideas. All this increases communication, which is what you want to be able to have a positive impact on the game design.

Of course the thing that will drive your concepts more than anything will be an understanding of the nature of the game itself and of the roles and functions you want the sound and music to achieve.

For further reading please see the up-to-date list of books and links on the book website.

Recap:

After working through this chapter you should now be able to:

  • Understand some of the roles and functions of audio in games through the INFORM model;
  • Use triggered and passive Sound Mixes;
  • Implement systems using making and hearing noise.
