11
ACOUSTICAL DESIGN AND CONFIGURATION

To help you be as successful as possible when working with audio, it is important to at least touch on the subject of acoustics and studio design. While this section is not a comprehensive exploration of acoustics, it will help you understand the complexity of the subject and provide some practical advice on what you can do to make your audio environment better.

Configuring the ideal acoustic setup for a specific space is not an easy task to jump into. Many factors play a part in creating a space that works best for the engineer and the equipment being used. This section presents a basic understanding of how sound interacts with certain materials, discusses problems and solutions surrounding commonly misunderstood acoustic treatments, and covers the placement of your sound equipment.

Sound Development

First, to understand how to properly fix troublesome frequency peaks and dips, the development of sound must be addressed. Sound is caused by variations in air pressure, and is most commonly described as the vibration of air particles. The particles in the air move by way of compression and rarefaction.

During compression, the air particles are forced together, creating higher atmospheric pressure, whereas during rarefaction the particles spread apart, creating lower atmospheric pressure. The same concept holds true for sound traveling through media other than air, such as wood or metal. Instead of air particles vibrating to create sound, the other material becomes the vibrating object that produces the perceived sound.

Figure 11-1 Compression and Rarefaction.

The Waveform

Sound is universally represented graphically as a waveform (see Figure 11-2). A waveform visually describes the characteristics of what we perceive as sound, such as frequency, amplitude, wavelength, and, in some cases, phase.

Figure 11-2 The waveform.

The frequency of a waveform is the number of complete cycles produced per second, measured in hertz (Hz). The wavelength, or physical length of one cycle, can be determined through a simple equation:

Wavelength = Speed of sound ÷ Frequency, or λ = v / f, where v is the speed of sound (approximately 1130 ft/sec in air) and f is the frequency in hertz.
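As a worked example (not from the book), here is a minimal Python sketch of the equation above, assuming the 1130 ft/sec figure for air used throughout this chapter:

# A minimal sketch of the wavelength equation, assuming sound travels at
# roughly 1130 ft/sec in air, as stated later in this chapter.
SPEED_OF_SOUND_FT_PER_SEC = 1130.0

def wavelength_feet(frequency_hz):
    """Return the wavelength in feet for a given frequency in hertz."""
    return SPEED_OF_SOUND_FT_PER_SEC / frequency_hz

# A 100 Hz tone is about 11.3 feet long; a 10 kHz tone is just over an inch.
for f in (100, 1000, 10000):
    print(f"{f} Hz -> {wavelength_feet(f):.2f} ft")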

The amplitude describes how much pressure there is in the waveform; the higher the pressure, the greater the displacement from the undisturbed resting position. Amplitude is mostly perceived as the loudness or signal level of a particular sound. For example, when recording an audio file into Soundtrack, the higher the signal level, the higher the amplitude. There is a limit, depending on the recording equipment, to how much amplitude a signal can handle before it begins to peak and distort. Phase describes a displacement in the time relationship between two waveforms of the same frequency. Figure 11-3 demonstrates how a simple waveform of the same frequency is progressively shifted out of phase, beginning with 90 degrees. Once the waveforms are 180 degrees out of phase, the compression of one lines up in time with the rarefaction of the other, and the resulting output of this particular phase shift has an amplitude of zero. Often, the ill effects of out-of-phase audio are not noticed when dealing with more complex waveforms.
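To make the 180-degree case concrete, here is a minimal sketch (not from the book); the 440 Hz test frequency and the sample times are arbitrary choices.

import math

freq = 440.0  # an arbitrary test frequency in Hz
for t in (0.0, 0.0005, 0.001, 0.0015):
    original = math.sin(2 * math.pi * freq * t)
    shifted = math.sin(2 * math.pi * freq * t + math.pi)  # 180 degrees out of phase
    # The two waveforms cancel: their sum is (essentially) zero at every instant.
    print(f"t = {t:.4f} s   sum = {original + shifted:+.6f}")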

Figure 11-3 Phase.

Sound Levels

Not all frequencies can be heard by human ears, nor do they all have the same perceived loudness. The typical range of human hearing is approximately 20 to 20 000 Hz. This range does not apply to all people, and it narrows as we age. Ears are also more sensitive to certain frequency ranges than others. To help put this sensitivity into perspective, see Figure 11-4, which represents information taken from a study originally performed by Fletcher and Munson and later revised by Robinson and Dadson in 1956. This graph displays how human beings perceive loudness at different frequencies.

There is also a limit to the loudness sensitivity of our ears. The threshold of hearing and the threshold of pain are represented as decibel values. Sound pressure is measured in pascals (Pa), a unit of pressure. The range over which human beings can hear is vast; therefore, to represent loudness in a more manageable range, the decibel was created. The decibel is a logarithmic ratio that helps describe the wide span of human sensitivity to loudness; the ratio is typically between a measurement of power or intensity and a specified reference level. Instead of viewing our range of hearing as 0.00002 Pa to 200 Pa, the decibel makes it possible to reference loudness between 0 and 140 dB. Table 11-1 gives a sense of the relationship between decibels and pascals.
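As a minimal sketch (not from the book) of that logarithmic ratio, the snippet below converts sound pressure to dB SPL using the standard reference pressure of 0.00002 Pa:

import math

P_REF = 0.00002  # reference pressure in pascals (approximate threshold of hearing)

def pascals_to_db_spl(pressure_pa):
    """Convert sound pressure in pascals to decibels (dB SPL)."""
    return 20 * math.log10(pressure_pa / P_REF)

for p in (0.00002, 0.2, 2.0, 200.0):
    print(f"{p} Pa -> {pascals_to_db_spl(p):.0f} dB SPL")
# 0.2 Pa, 2 Pa, and 200 Pa come out to 80, 100, and 140 dB, matching the
# dial tone, heavy traffic, and threshold of pain rows in Table 11-1.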

Figure 11-4 Loudness curves.

Table 11-1 Sound pressure levels.

Sound source                 Decibel level    Sound pressure (pascals)

Sonic boom                   194              100 000
Jet engine                   160              —
Threshold of pain            140              200
Gun muzzle blast             140              —
Thunder                      120              —
Chainsaw                     110              —
Heavy traffic                100              2
Telephone dial tone           80              0.2
Conversational speech         60              —
Residence, private office     40              0.0002
Recording studio              30              —
Whisper, ticking watch        20              —
Threshold of hearing          10              0.00002

Table 11-2 Speed of sound in various materials.

Medium            Speed of sound (ft/sec)    Speed of sound (m/sec)

Air                1130                        344
Sea water          4900                       1493
Wood              12 500                      3810
Stainless steel   18 800                      5730
Brick             13 700                      4175
Concrete          10 500–11 800               3200–3596
Glass             13 000                      3962

Sound Reacting to the Room

Sound typically travels at 1130 feet per second in air at 70 degrees Fahrenheit, but this varies according to temperature and the medium through which it travels. The higher the temperature, the faster sound travels. Sound also moves at different speeds in water and through solids such as wood (see Table 11-2).
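The chapter does not state a temperature formula, but as a hedged sketch (not from the book), the common approximation v = 331.3 × sqrt(1 + C/273.15) meters per second, with C in degrees Celsius, reproduces the 1130 ft/sec figure quoted above:

import math

def speed_of_sound_mps(celsius):
    """Approximate speed of sound in air at a given temperature (Celsius)."""
    return 331.3 * math.sqrt(1 + celsius / 273.15)

def speed_of_sound_fps(fahrenheit):
    celsius = (fahrenheit - 32) * 5 / 9
    return speed_of_sound_mps(celsius) * 3.28084  # convert meters to feet

print(f"{speed_of_sound_fps(70):.0f} ft/sec at 70 F")  # roughly 1130 ft/sec
print(f"{speed_of_sound_fps(32):.0f} ft/sec at 32 F")  # noticeably slower in cold air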

Because the speed of sound is known, wavelength and frequency are easily obtained through the equation given earlier in the chapter. Resonant frequencies are common in many different rooms for many different reasons. When a specific frequency resonates in a particular cavity, that sound is perceived as being more intense than all of the other frequencies being produced. A space, such as a small control room or the body of an acoustic guitar, will resonate at certain frequencies dependent on the size of the area.

How does that frequency end up being so much louder than the rest? Many different frequencies can resonate in a particular space. For example, if the listener is in a rectangular room, a wavelength the size of the room can bounce back and forth between two parallel walls. Each time the sound wave bounces off the surfaces it reinforces itself and its loudness increases: when two waveforms that are in phase with each other are added together, the amplitude increases. This leads to that frequency becoming louder and lasting slightly longer than the other frequencies.

By using the equation above, we can determine at least one frequency at which a specific space will resonate. If the resonant frequency of a space falls within the human range of hearing, adjustments such as low frequency absorption and diffusion can be made to help tame the excited tones. Using this same simple algebra, peaks, dips, and potential phasing can be calculated from the known dimensions of a given space.

Beyond resonant frequencies, boosts and cuts at particular frequencies will occur as well. Often, when a waveform reflects off a hard surface, it interacts with other oncoming waveforms. This results in further phasing problems and peaks or dips in the frequency spectrum (see Figure 11-5).
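As a minimal sketch (not from the book), the snippet below estimates the lowest axial resonance between each pair of parallel walls using the standard relation f = v / (2L), a standing wave whose half-wavelength fits exactly between the two surfaces; the room dimensions are arbitrary examples.

SPEED_OF_SOUND_FT_PER_SEC = 1130.0

def lowest_axial_mode_hz(dimension_ft):
    """Lowest resonant frequency supported between two parallel walls."""
    return SPEED_OF_SOUND_FT_PER_SEC / (2 * dimension_ft)

room = {"length": 13.0, "width": 12.0, "height": 10.0}  # feet (example dimensions)
for name, size in room.items():
    print(f"{name} ({size} ft): {lowest_axial_mode_hz(size):.1f} Hz")
# A 13 x 12 x 10 foot room resonates at roughly 43, 47, and 57 Hz along its
# three axes, all well within the range of human hearing.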

Figure 11-5 Wave interaction.

Other problematic issues may arise, such as comb filtering and flutter echo. Comb filtering occurs when a signal is combined with a delayed copy of itself, for example when a direct sound and its reflection off a hard surface arrive at the listener at two different times; the two add together in a way that creates alternating peaks and dips in the resulting signal. A graphic analysis of the output typically looks like a comb, with extreme peaks and dips in the amplitude. Flutter echo is perceived as a fast echo bouncing between two parallel hard walls; it is easiest to hear at higher frequencies and has a ringing characteristic.

The listener may not notice any problems at first, but as the signal continues and grows in volume, these waveforms keep adding to and subtracting from each other, which can produce several different effects. As discussed before, resonance, peaks, dips, and echoes may result from all of the interacting waveforms. Sound localization becomes unclear because phasing is introduced. In some cases, bouncing waveforms can be delayed long enough to create an audible echo or reverb, and if the delayed signal is as loud as the original sound, aurally locating the original source becomes inaccurate. Many of these situations occur only in large concert halls or large rooms made up of hard surfaces; with a near-field monitoring system (small speakers close to the listening position) or in smaller recording spaces, the room is not big enough to produce such phenomena.
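As a minimal sketch (not from the book) of the comb effect, the snippet below sums a signal with a copy of itself delayed by 1 ms (an arbitrary example, roughly a one-foot path difference) and shows which frequencies are reinforced and which are cancelled:

import cmath
import math

delay_s = 0.001  # delay between the direct sound and its reflection, in seconds

def gain_at(frequency_hz):
    """Relative level when a signal is summed with a copy delayed by delay_s."""
    # Adding a delayed copy scales each frequency by |1 + e^(-j*2*pi*f*delay)|.
    return abs(1 + cmath.exp(-1j * 2 * math.pi * frequency_hz * delay_s))

for f in (250, 500, 1000, 1500, 2000):
    print(f"{f} Hz -> gain {gain_at(f):.2f}")
# With a 1 ms delay, 500 Hz and 1500 Hz fall into deep notches (gain near 0),
# while 1000 Hz and 2000 Hz are reinforced (gain near 2) - the comb shape
# described above.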

Sound and Materials

Sound reacts to various surfaces in many different ways. For the purposes of this chapter, three primary reactions will be discussed. When a sound approaches a surface, it will be reflected, absorbed, or diffused. Reflected waveforms are typically produced by hard surfaces, such as a wooden floor or a concrete wall. Reflection of sound is illustrated in Figure 11-6.

Figure 11-6 Reflection.

Other materials, such as insulation and fabric, exhibit absorptive properties. The frequency range absorbed depends on the surface’s thickness and density. For example, four inches of rigid fiberglass will absorb lower frequency content than one-inch thick foam. Each piece of material will absorb down to a certain frequency limit and have a specific absorption coefficient. Determining the specific absorption coefficients of certain materials is beyond the scope of this book. Although certain materials can absorb down to very low frequencies, there will still be a specific frequency range that cannot be absorbed (see Figure 11-7).

Figure 11-7 Absorption.

Another occurrence that is still fairly new to acoustic design and is continually being re-invented is diffusion. Unlike reflection, diffusion disperses the waveform evenly into the room (see Figure 11-8).

Figure 11-8 Diffusion.

Many diffusion structures utilize mathematical sequences to determine the reaction of the waveform to the surface. There are many different ways to effectively incorporate diffusion into a space. Naturally, old churches and concert halls have built-in diffusers to help distribute sound evenly throughout the audience. Most of these structures include curved panels and oddly shaped architectural walls. Diffusion helps to bring life back into the workspace. In the past, audio engineers would completely cover the control room and recording space with acoustic foam and other materials to deaden the sound. There is nothing wrong with this concept; however, working in a completely dead space tends to drain a person’s hearing stamina. Today, studio owners tend to take a more sound-friendly approach to meeting the needs of a particular editing and mixing space.

Figure 11-9 A treated audio edit room. Reproduced with permission of 42 Productions – Boulder, CO.

Optimizing a Room

Before a space is examined and tested to determine its specific acoustical needs, speaker placement must be addressed and optimized. The positioning of the monitoring system can reduce the need for certain acoustic constructions. Rooms that are easy to calculate, such as those that are square or rectangular, can be ideal in some situations: the regular shape means that the engineer can easily determine the resonant frequency and some possible peaks and nulls in the room. More complex rooms are much harder to calculate, but will inherently reduce the effects of flutter echo and comb filtering. Figure 11-10 displays a range of rectangular room proportions that yield a smooth low frequency response for a small space; the area inside the curve represents ideal dimensions for a room used for mixing or editing audio. For example, for a room with a height of 10 feet and a width of 12 feet, it would be ideal to have a length of 13 feet, giving the ratio 1.0 : 1.2 : 1.3.
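As a minimal sketch (not from the book) of the ratio calculation used here and in step 1 of the checklist later in this chapter, the snippet below divides each room dimension by the shortest one; whether the resulting point falls inside the Bolt area still has to be read off Figure 11-10 or 11-11.

def room_ratios(dim_a_ft, dim_b_ft, dim_c_ft):
    """Normalize three room dimensions by the shortest one."""
    dims = sorted([dim_a_ft, dim_b_ft, dim_c_ft])
    shortest = dims[0]
    return tuple(round(d / shortest, 2) for d in dims)

print(room_ratios(10, 12, 13))  # (1.0, 1.2, 1.3) - the example above
print(room_ratios(8, 10, 12))   # (1.0, 1.25, 1.5) - the example from the checklist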

Figure 11-10 Ideal proportions for a small space. Based on research by Bolt.

Lower frequencies have a tendency to build up in corners and small partitions of a given space. Keeping this in mind helps to determine an optimal position for the playback system. Placing the speaker monitors in a corner is not ideal; however, certain acoustical constructs can help reduce the low frequency buildup produced in the chosen corner. The more centered a playback system is in a room, the more symmetrical the reacting sound will be. If the speakers are centered in a rectangular room, the distribution of sound can be easily determined and further decisions about acoustical structures can be optimized.

In the ideal space, the speaker monitors should be as far from the back wall as possible. If the back wall is untreated, sound will reflect off that surface straight back to the listener, which can lead to phasing problems as waveforms add to and cancel each other. The same theory applies to the front wall. If the monitors are mounted right in front of the wall, the sound may bounce off that wall directly after the initial sound and distort the listener's impression of the frequency spectrum.

There are many different ways to set up monitors in a control room and minimize the amount of error from sound bouncing off certain objects. Placing the speaker monitors on top of a mixing console can be harmful to the mixing experience. The engineer will usually sit in front of the console and listen to the speaker monitors; however, if they are placed on top, the signal reaches the engineer's ears directly and also strikes the top of the mixer, which reflects the signal back to the engineer shortly after the direct signal. As discussed before, this can lead to phasing, peaks, and dips in the signal. To help fix these problems, the speakers can be mounted on speaker stands directly behind the mixing console, or the monitors can be mounted into the front wall if space permits. Placing the speakers into the structure of the front wall will not produce the same effects as simply placing the units directly in front of the wall: when flush-mounted, the drivers inside the speaker cabinets sit at the same level as the wall, leaving very little room for the same signal to reflect back at a different time.

Room Size

The size of the room helps determine what speaker setup will best suit the engineer's needs, and matching the size of the speakers to the size of the room will make a large difference to the acoustic treatment required. It is not advisable to place very large speaker monitors in a very small room; this would be like wearing very big headphones. Larger speakers can create a lot of sound, leading to louder waveforms that carry more energy. The more energy these waveforms contain, the longer they linger in the studio, ultimately making way for phasing problems, peaks, and nulls. Lower frequencies take longer to lose energy, especially in a smaller room where they become trapped. With this simple knowledge, it is clear that a large subwoofer is not ideal for a small room. Rooms smaller than 8 × 10 feet function well without a subwoofer or large speaker monitors. Bigger rooms benefit from slightly bigger speaker monitors and an added subwoofer, or just a pair of larger speakers that reach the lower frequency range.

Construction

With this knowledge of how sound behaves in a given space with certain materials, the studio can be optimized for the specific needs of the engineer or owner. Before the construction process begins, the room should be analyzed. During this time, the designer will pick out any parallel hard surfaces, such as the walls or floor and ceiling, and determine speaker placement. Depending on the financial budget of the studio owner, a full building reconstruction may not be practical; this is when non-permanent or portable acoustical solutions are ideal. As suggested earlier, different materials and structures react differently to sound. Egg crates and foam are cheap and can absorb some of the higher frequencies; however, these two materials are not ideal for broadband absorption. Materials such as rigid fiberglass, rockwool, and cotton/jean fiber are very dense and have high absorption coefficients. Other materials that you can find in a typical home are strongly suggested for use as diffusion. Without getting overly technical and mathematical, a multi-patterned structure such as a bookshelf will help tremendously at the back of a control room. For engineers using homemade acoustical formations, a little diffusion is still better than a flat, hard surface.

Checklist for Audio Setup Issues

Figure 11-11 An acceptable room dimension.

  1. Pick the right room, using Figure 11-11. The chart uses the height as the common denominator, but you should use the shortest dimension. Divide each dimension of the room by the shortest dimension; this gives you three numbers between one and three. For example, if your room is 8 feet tall, 10 feet wide, and 12 feet long, you will get 1 (8 ÷ 8), 1.25 (10 ÷ 8), and 1.5 (12 ÷ 8). If you plot this point on the chart, you will see that it falls inside the Bolt area, which means the dimensions of the room are acceptable for audio monitoring.
  2. Use a mirror to find potential problem spots in your audio mix area. This requires two people: one to move the mirror and one to sit at the mix location. As the mirror is moved along all the surfaces of the room, the person sitting at the mix location should watch the mirror and note every spot where the speakers become visible in it. These are the places where sound leaving the speakers will bounce off a surface directly toward the listening position, and they are where you can place acoustic treatment material to help prevent or reduce the negative effects of reflections. Don't forget to check the ceiling and floor.
  3. Use speaker stands instead of placing the speakers on the desk surface. This will also help to prevent harmful reflections.
  4. Arrange the speakers and listening position in an equilateral triangle when using stereo speakers. This ensures that you will hear a balanced image. Also make sure you have a clear line of sight to the speakers.
  5. You should be using full range speakers (full range means able to reproduce at least 20 – 20 000 Hz). If not, then at least use a subwoofer as a means to hear the entire range of sound. You should also be using high quality cables to connect devices. Typically this means that you will be using balanced cables, which refers to cables that have three connection parts: hot, cold, and ground. This could be an XLR or a quarter-inch TRS (tip ring sleeve). These are the best options for ensuring that your signal is free from unwanted noise interference.
  6. Calibrate the level of the speakers. There is more information about this in Chapter 6. You should make sure you know exactly where the level should be for the room when mixing for different types of projects. Most audio interfaces will have a monitoring knob, and I recommend using a marker to mark the standard levels.
  7. Use comprehensive metering. Not only should you be taking care to make sure your room is accurate, but you should also use software meters to make sure you are meeting the expected level standards.
  8. Have reference files available at all times. This is important when mixing because it is so easy to lose track of what your mix sounds like. Use a ‘known’ project as a method of comparing your mix. This will keep your efforts more accurate.
  9. Set up a typical consumer television to hear what your project will sound like in a standard setting. This can be useful when trying to make sure your mix works in as many spaces as possible.
  10. Keep a pair of headphones handy so you can listen critically to various portions of your audio. If your room sounds very bad, you might consider moving to headphones for more of the time, but be aware that you really can’t trust them for final mixes.

Special Thanks

Tira Neal, graduate student and instructor at the University of Colorado, Denver, is currently working on a thesis on the topic of acoustics as part of a Master of Science in Recording Arts degree. Thanks to Tira for preparing the majority of the material in this chapter.
