21
Home Studio Technology

In This Chapter

Wearing the Hat of Audio Engineer

If you are going to work at any level of professionalism as a voice actor, you need to have at least a basic understanding of how digital recordings are created, edited, and distributed.

That’s what this chapter is about.

I must warn you, however, that some of the information in this chapter can get a bit technical. But don’t let that worry you… as long as you understand the basics, you’ll be fine.

Having a home studio where you can record your auditions and voice tracks does not necessarily qualify you as a recording engineer, nor does it mean that you should even consider handling extensive editing or complete audio production with music and sound effects. If you would like to offer your clients full audio production services, but you know nothing about production, you have your work cut out for you. But that’s another book!

At the very least, as a voice actor you will need to master your chosen audio recording software and equipment so you can record and deliver pristine audio. Be prepared to spend some time learning how to edit out breaths and adjust spacing between words and phrases. More complex production skills include understanding normalization, adjusting the quality of a recording with equalization (EQ), using the many features of your software plugins, applying onboard signal processing, and rendering high-quality audio in different formats. There are many ways to learn audio production and postproduction skills, and many excellent books on the subject.

Your home studio is critical to your voiceover business. By mastering the technical skills necessary to assemble and operate your home studio, you will be in a much better position to build a successful career as a voice actor. Chances are you may never be asked to provide anything more than an edited recording of your voice. But understanding the basics of sound reproduction, recording technology and the editing process will ultimately make your life easier.

Digital Recording 101

When recording with a microphone, the original source audio starts as sound waves that the mic converts to an analog electronic signal traveling through the wires of the mic cable. The USB digital converter samples the analog signal and converts it to a series of zeros and ones that the computer software can understand and work with. The resulting quality of this analog-to-digital (A/D) conversion will depend on a variety of factors including sample rate, bit depth, and the conversion file format.
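If you’re curious what that conversion looks like in practice, here is a minimal Python sketch of the idea (the numbers and function are purely illustrative, not taken from any actual converter):

```python
import math

SAMPLE_RATE = 44100                    # samples per second (the CD standard)
BIT_DEPTH = 16                         # bits per sample
MAX_VALUE = 2 ** (BIT_DEPTH - 1) - 1   # 32767, the largest 16-bit sample

def sample_sine(freq_hz, duration_s):
    """Simulate A/D conversion of a sine wave: measure the 'analog'
    signal SAMPLE_RATE times per second, then quantize each measurement
    to a 16-bit signed integer."""
    num_samples = int(SAMPLE_RATE * duration_s)
    return [
        int(MAX_VALUE * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE))
        for n in range(num_samples)
    ]

tone = sample_sine(440, 0.01)   # 10 milliseconds of a 440 Hz tone
print(len(tone))                # 441 samples: 44,100 per second x 0.01 s
```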

Another aspect of digital audio is that files can be recorded as either compressed or uncompressed. Raw, or uncompressed, files will have the highest quality and will be large in size. Compressed files are a fraction of the size of raw files, which makes them ideal for delivery and use on websites. Although a compressed file may sound indistinguishable from the original raw audio, there are trade-offs made to achieve the smaller file size.

Digital Audio Formats

All high-quality audio recording software and professional digital recorders will record audio in one of the two primary, uncompressed native audio formats: either .WAV (Waveform Audio File Format) for a PC or .AIFF (Audio Interchange File Format) for a Mac. At a standard sample rate of 44.1 kHz, these uncompressed, or lossless, formats result in large files of roughly 10 MB per stereo minute.
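That “roughly 10 MB per stereo minute” figure falls straight out of the sample rate and bit depth. Here’s the arithmetic as a quick, purely illustrative Python check:

```python
sample_rate = 44100    # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo
seconds = 60           # one minute

total_bits = sample_rate * bit_depth * channels * seconds
megabytes = total_bits / 8 / 1_000_000   # 8 bits per byte
print(f"{megabytes:.1f} MB per stereo minute")   # -> 10.6 MB
```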

Large files can be challenging to deliver or play, so a variety of compressed, or “lossy,” formats have been developed. The most common of these is the .MP3 file. The term “lossy” is not a reflection on quality, but simply means that the resulting file is created by discarding some portion of the original data. The MP3 format goes back to 1993 and stands for MPEG-1 Audio Layer 3. Originally part of the MPEG-1 video standard, the format was created to reduce the size of the audio data in digital video. The MP3 “lossy” format compresses (or reduces the audio data size) by removing audio data that is beyond the hearing range of most people and then compressing the remaining data as efficiently as possible. The resulting file is approximately 1/10th the size of the original lossless, uncompressed file.

A variety of other lossy formats have also appeared that are used in various video and audio applications. Here’s a short list of audio file formats, both lossless and lossy:

  • .wav—WAVeform audio file format—uncompressed PC
  • .aiff—Audio Interchange File Format—uncompressed Mac
  • .pcm—Pulse Code Modulation—most .wav files are PCM
  • .mp3—MPEG-1 Audio Layer 3—lossy—most common format
  • .aac—Advanced Audio Coding—designed to replace mp3, most commonly used in video
  • .ogg (Vorbis)—lossy—mostly used in special applications
  • .wma—Windows Media Audio—better than mp3, but a proprietary format. A lossless version is available
  • .flac—Free Lossless Audio Codec—can compress up to 60% without any loss of data
  • .alac—Apple Lossless Audio Codec—referred to as Apple Lossless—supported by iTunes and iOS

Sample Rate vs. Bit Rate

There are two basic factors relating to both the quality and size of an uncompressed digital audio file: the sample rate and the bit depth (commonly, if loosely, called the bit rate).

The sample rate is the speed at which analog audio is scanned, or sampled, by the computer as it is converted to a digital representation of the analog signal. A rate of 44.1 kHz (kilohertz) means that any given second of analog audio is sampled 44,100 times. 44.1 kHz is the standard sample rate for audio CDs and most audio recordings. 48 kHz is generally reserved for video production, a carry-over from the days of analog video tape. Most audio software will convert imported audio on the fly to the project’s sample rate. If you record at 44.1 kHz, you can’t go wrong.

The bit depth refers to the number of data bits (ones and zeros) used to describe each individual sample of the incoming audio. 16-bit means that a combination of 16 zeros and ones represents each of those 44,100 samples per second; a bit depth of 24 adds another 8 zeros and ones per sample. The higher the bit depth, the higher the quality of the analog-to-digital conversion. Bit depth can be compared to the dot resolution of an image: an image printed at 600 dpi (dots per inch) will be much sharper and clearer, with much more detail, than an image printed at only 72 dpi.
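One practical consequence of bit depth is dynamic range: each additional bit adds roughly 6 dB of range between the quietest and loudest sounds the file can represent. The rule of thumb is standard; the code below is just an illustrative sketch:

```python
def dynamic_range_db(bit_depth):
    """Approximate dynamic range of linear PCM audio: about 6.02 dB per bit."""
    return 6.02 * bit_depth

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB of dynamic range")
# 16-bit: ~96 dB    24-bit: ~144 dB
```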

Compressed audio files, on the other hand, are usually described by their actual bit rate in Kbps (kilobits per second). An audio file compressed at 256 Kbps will be of higher quality than a file compressed at 128 Kbps.

Digital Recording Devices

The digital revolution has resulted in recording equipment becoming smaller and lighter while retaining extremely high quality. There is a wide variety of hand-held digital recorders on the market, but only a handful are an option for recording professional voiceover. The smaller consumer digital recorders have a built-in mic, but no way to connect an external microphone. These often record at a low sample rate (some as low as 8 kHz), so recording quality is poor. Other digital recorders designed for broadcast and professional remote recording applications include the ability to connect two XLR mics. These models record at the standard sample rates of 44.1 kHz and 48 kHz and often have the ability to record directly to 128 Kbps or higher MP3 files. As good as they may be at recording, quality can vary and editing is often challenging at best.

The challenge with any digital audio recorder is in getting the audio file out of the device and into a computer where it can be edited. With professional devices, a simple USB connection to a computer is all that is needed. Unfortunately, some consumer recorders have no easy way to export files.

The Apple iPad® and other tablets have become a very popular option for home studio recording, partly because they have no moving parts to produce noise. A few years after the iPad’s release, the need for a way to connect a professional microphone and provide phantom power was answered by the Alesis iO Dock and, more recently, the Behringer iStudio. Both of these devices provide a docking bay for the iPad®, allowing for connection of professional microphones, additional audio connections, phantom power, and zero-latency headphone monitoring. Although it is possible to record and edit on other digital tablets, and software is available to do so, the iPad® remains the most popular and functional device for this purpose.

A variety of recording and editing Apps are available for the iPad®, Android devices and even for recording on a smart phone. Yes, there’s an App for that.

Equipment Upgrades

At some point in your voiceover career you may want to upgrade your home studio. By the time you are ready for this, you will have plenty of experience and a better knowledge of what you might want to upgrade, what equipment or software you might want to add and, more importantly, why. Equipment that is installed outside of your mixer or computer is generally referred to as outboard equipment, but much of the signal processing that used to require outboard gear can now be achieved with a VST (Virtual Studio Technology) plug-in within your recording software.

A few of the possible equipment upgrades or additions are listed below:

  • Signal processing—A signal processor can be any device that modifies or adjusts an audio signal. The most common processors include a compressor/limiter, equalizer, de-esser, noise reduction, de-clicker, signal enhancer, or mic preamp. Some outboard devices include multiple functions while others are dedicated to a single purpose. Most signal processing is also available as a software plug-in.
  • Outboard microphone preamp—Your mixer or USB interface already includes a high-quality mic preamp. A microphone produces an extremely low electrical output that needs to be boosted (amplified) to a level that can be used by a mixer or other device. An external mic preamp will generally be of a higher quality than the built-in preamp in your mixer or USB interface, but it will also be much more expensive.
  • Powered speakers—Advances in speaker design have produced speaker systems with the power amplifier built in, resulting in extremely high-quality audio from relatively small speakers. Most powered speakers are sold in matched pairs.
  • Additional microphones—Different microphones can produce different results from your voice recordings. You or your client might want a certain sound for a specific project or type of voiceover work. Careful selection of your mic can help you achieve the desired results and allow you to offer greater versatility with your recording services.
  • VST plugins—A plugin is a small piece of software that works in conjunction with your recording software. Most outboard hardware has a VST plugin equivalent. Some plugin suites will handle a wide variety of applications and can cost several hundred dollars. Other plugins are free.

Upgrading your home studio should only be a consideration when you have either a specific need for the upgrade or you have generated enough income to justify the expense.

Advanced Home Studio Technology

Online Remote Recording Technologies

If you are just getting started in voiceover, you don’t need anything more than the basic equipment discussed earlier in this chapter. However, as you begin to work, you’ll soon discover that some clients might request a phone patch, video patch, ISDN or another technology that will allow them to monitor or record your session remotely. You may never need any of these, but it is still worth knowing what they are and how to use them.

ISDN stands for Integrated Services Digital Network. This is basically a hard-wired digital phone line, provided by your local phone company, that will connect your home studio to any other ISDN studio in the world over wired land lines. It’s been around for a long time, and many high-end studios consider it the next best thing to the talent actually being in the studio. Connecting your studio to another studio requires a codec (coder-decoder) that converts your audio to a digital signal that can be transmitted over phone lines. The receiving studio must have a compatible codec in order to receive your audio.

ISDN dates back to the mid-1980s and, for decades, was the only method for real-time digital audio transmission. Until recently it was the preferred option for remote recording (and for some, it still is) because it allows for real-time, high-quality remote audio recording. However, in recent years ISDN has steadily been losing favor, and many telephone companies are discontinuing or no longer supporting ISDN service.

A number of cost-effective alternatives to ISDN have made their appearance, all of which use advances in Internet technology to provide real-time remote recording similar to ISDN. Some emulate ISDN while others use proprietary technology. Every system requires matching components or software at both ends of the connection. Some of the many contenders to ISDN for live remote recording include: ipdtl.com, source-elements.com (Source Connect and Source Connect Live), connectionopen.com, audiotx.com, and luci.eu (Luci Live).

All contenders have a considerably lower cost than ISDN. Depending on location and availability, installation of ISDN phone lines can cost up to several hundred dollars per line and the codec alone can cost up to $3,000. Add to that the monthly service fees and the price of ISDN can add up quickly.

Do you need ISDN—or one of the alternatives? Probably not. Because any digital remote recording scheme requires matching equipment, software, or access to a service provider at both ends, plus potential monthly service fees, I do not recommend investing in any technology of this sort until you have a client base that will support the initial costs and produce a return on your investment. And when you do get that first ISDN booking, there’s no need to rush out and install expensive equipment. Most cities have at least one ISDN studio where you can book the session.

The Mysterious Decibel

Second only to your acoustical environment, the most important factor in making excellent recordings is your recording level. This simply refers to the volume, or loudness, of your recordings as they are represented by the waveform in your software and displayed on your software’s VU meter.

You’ve most likely heard the term decibel, or dB, mentioned in terms of audio recording levels, but do you have any idea what it means? Most voice talent don’t!

This isn’t the place to go into a technical discussion of the decibel, but it is important that you have at least a basic understanding of what it is and how to use it.

Because the decibel measurement is used in different ways, understanding it can be a bit confusing. I’ll do my best to explain this as simply as possible without the technical mumbo jumbo.

Let’s start with the basics. Anything that moves has power. Sound waves moving through air have power. Electrons moving through a wire have power. So the question is... “how do we measure that power and what do we reference it to as a standard?”

The term decibel literally means 1/10 of a Bel, named in honor of Alexander Graham Bell. The decibel is a logarithmic measurement of power used to express physical energy, gain, or attenuation in electronics, or a comparative ratio between an input and output. It is usually used in terms of measuring sound in audio electronics or acoustic energy. According to Wikipedia, “A change in power by a factor of 10 is a 10 dB change in level. A change in power by a factor of two is approximately a 3 dB change. A change in voltage by a factor of 10 is equivalent to a change in power by a factor of 100 and is thus a 20 dB change. A change in voltage ratio by a factor of two is approximately a 6 dB change.”

Got it? Thought so. Probably more than you ever wanted to know. But this is important stuff, at least in terms of how you are going to record your voice.
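If you’d like to prove those quoted relationships to yourself, the two formulas are 10 × log10 for power ratios and 20 × log10 for voltage ratios. A quick, purely illustrative check in Python:

```python
import math

def power_db(ratio):
    return 10 * math.log10(ratio)     # power ratios use 10 x log10

def voltage_db(ratio):
    return 20 * math.log10(ratio)     # voltage ratios use 20 x log10

print(power_db(10))     # 10.0  -> 10x the power   = +10 dB
print(power_db(2))      # ~3.01 -> 2x the power    = ~+3 dB
print(voltage_db(10))   # 20.0  -> 10x the voltage = +20 dB
print(voltage_db(2))    # ~6.02 -> 2x the voltage  = ~+6 dB
```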

Let’s use threshold of hearing, which is just above absolute silence, as a reference for zero (0dB). Since we know that a 10-times increase in power is an increase of 10dB, we can measure the comparative loudness of sound in an acoustic environment or in an electronic circuit. Here are just a few common sound levels in dB:

  • Threshold of hearing 0 dB
  • A whisper 15 dB
  • Normal conversation 60 dB
  • A lawnmower 90 dB
  • A car horn 110 dB
  • A jet engine 120 dB
  • Threshold of pain 140 dB

The dB levels above are measured in terms of the power of sound waves, or sound pressure level (SPL) when referenced to the threshold of hearing. When we talk about measuring audio signals for recording, everything changes.

Power in electronic audio equipment is measured in terms of voltage, and when the signal is audio, an increase or decrease in voltage corresponds to a specific change in decibels.

In the early days of radio and analog electronics, a standard was needed for measuring audio levels. Through a series of calculations, the electronics wizards of the day developed a standard reference level for audio signal measurement: a 1 kHz sine wave at 0.775 volts RMS. Don’t ask why. The math isn’t important. What is important is that the 0.775-volt RMS signal became the 0dBVU reference on the old analog VU (volume unit) meters and that, technically, the correct notation for this audio measurement is dBu.

The range on the analog VU meters was from -20dBVU to +3dBVU, but the actual incoming audio signal could be below -20 or well above +3. But 0VU became the optimal audio output level for recording and broadcast, so that’s where the demarcation between black and red happened on the VU meter. Not coincidentally, that 0VU reference was the equivalent of a specific measurement in decibels, so 0dBVU also became commonly known as simply 0dB.

At some point, an analog signal hotter than 0dBVU would begin to distort, usually around +20dBVU. The difference between the optimal recording level (0dBVU) and the point of distortion (+20dBVU) is known as headroom.

Digital Recording Levels

As you study audio recording, you will likely hear the phrase “record as hot as you can without clipping.” Although that might sound good in theory, digital recording equipment simply isn’t designed to work that way.

Digital audio uses 0dBFS, or decibels Full Scale, as the ceiling: that’s as loud as you can record without clipping, and every level below that ceiling is expressed as a negative number.

Digital converters are calibrated for line level at -18dBFS = 0dBVU, so the optimal recording level is around -18 on your digital meter. This is the point on your digital meter where it gradually changes from green to yellow. This calibration can vary somewhat, but the general range for optimal audio recording is between -24 and -12dBFS. Audible.com’s Audiobook Creation Exchange (acx.com) specifies that submitted recordings average between -23dB and -18dBFS, with peaks no louder than -3dBFS and a noise floor no louder than -60dB. So if you keep your average levels just at that green-to-yellow transition, with only occasional peaks getting into the yellow, you’ll always have great recordings.

Visually, this is roughly where the waveform on your computer screen fills about 1/4 of the waveform window, and just about where the meter begins to change from green to yellow.
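A dBFS value maps to a fraction of digital full scale by the formula fraction = 10^(dBFS / 20). This illustrative sketch shows why peaks around -12dBFS fill about a quarter of the waveform window:

```python
def dbfs_to_fraction(dbfs):
    """Convert a dBFS level to a fraction of digital full scale."""
    return 10 ** (dbfs / 20)

for level in (-3, -12, -18, -24):
    print(f"{level} dBFS = {dbfs_to_fraction(level):.2f} of full scale")
# -3 dBFS ~ 0.71   -12 dBFS ~ 0.25   -18 dBFS ~ 0.13   -24 dBFS ~ 0.06
```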

Figure 21.1: Good recording levels

If your recording level is too low, it may be necessary to increase the volume during playback in order for the recording to be used in a mix with music, sound effects, or other voices. As far as the software is concerned, increasing the volume of a recording is easy. However, increasing the volume of a low-level recording may increase more than just the loudness of the recording.

All electronic equipment has internal noise that is generated by the mere fact that electricity is running through it. It might be very low and inaudible, but it’s there. Also, all acoustic environments have a base “noise” level, usually referred to as room tone. Your room tone level can be as low as -72dB or lower in a really quiet room. But most homes are closer to -40 or higher. Room tone is usually louder than any internal electronic noise, so the noise you hear in your recordings is likely the natural room noise.

A common method used to bring a low recording up to workable levels is known as normalization. Normalization is a software process in which the computer scans the recording for the loudest spike and increases the volume of the entire recording equally, so that the loudest spike is brought up to a predetermined maximum volume, which can usually be set in the software. When you normalize your recording, you are also proportionally increasing all that room noise along with any electronic noise. That’s why you’ll sometimes hear lots of background noise (or “hiss”) in normalized recordings that started at too low a record level.
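Under the hood, peak normalization is nothing more than “find the loudest sample, then multiply everything by the same factor.” A minimal sketch (the function name and target level are hypothetical, not any editor’s actual settings):

```python
def normalize(samples, target_peak=0.98):
    """Scale every sample by one factor so the loudest peak reaches
    target_peak. Noise is scaled by that same factor, which is why
    quiet, noisy recordings sound hissy after normalizing."""
    loudest = max(abs(s) for s in samples)
    if loudest == 0:
        return samples                 # pure silence: nothing to scale
    gain = target_peak / loudest
    return [s * gain for s in samples]

quiet_take = [0.05, -0.08, 0.10, -0.02]   # peaks at only 0.10 of full scale
print(normalize(quiet_take))              # every value multiplied by 9.8
```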

Figure 21.2: Recording levels

So, if your room tone level is -40dB and your record level averages -18dB, there is only a 22dB difference between the two. If your original record level averages around -18dB and your room noise is -55dB, you have a 37dB difference between the loudness of your voice and any noise. This difference is called the signal-to-noise ratio (S:N). The greater the S:N ratio, the quieter the recording. Since each change of 10dB is a 10-times change in power, a 37dB S:N will be a pretty quiet recording, while a recording with a 22dB S:N will contain a lot of noise when brought up to a workable level.
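Here is the same arithmetic in a few illustrative lines, including what happens to the noise when a low-level recording is brought up:

```python
# Signal-to-noise is simply the difference between the two dB levels.
voice_db, room_tone_db = -18, -55
print(voice_db - room_tone_db)    # 37 dB: a reasonably quiet recording

# Raising the level applies the same gain to voice and noise alike,
# so the S:N ratio never improves -- the noise just gets louder too.
gain = 15                         # e.g., pushing -18 dB peaks up to -3 dB
print(voice_db + gain, room_tone_db + gain)   # -3 and -40: still 37 dB apart
```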

Recording levels that are too high (Figure 21.3) can also produce some serious problems. Excessive levels are commonly referred to as clipping. In the days of analog recording, if you recorded too loud or too “hot,” the audio would simply become distorted. With digital recording, if you record too “hot,” putting your level above the 0dB ceiling, you will actually be losing data. Although there are ways to “restore” missing data, the process is very time consuming and the tools can be very expensive.

Figure 21.3: Too hot (clipping)

Clipping sounds like distortion and can occur at any of several places in the audio signal path. Some USB converters have green, yellow and red lights that indicate your recording level before your signal reaches the computer. If you’re in the red, you might get clipping. If you are using a mixer, the input gain trim or the input control knob or fader might be set too high, or the master output might be too high—all potentially resulting in clipping in the software.

The best way to avoid clipping and determine your optimal recording levels will be to experiment with your equipment and software so you understand how it works. Don’t worry, you can’t break anything. And you will learn a lot about how your equipment and software function.

An Audio Recording and Editing Primer

Sound recording is an art form in its own right. The purpose of this section is not to teach you the finer points of sound recording, but, rather, to give you some insights into how you can record excellent voiceover tracks. Search Google or YouTube for “how to edit audio” to find all sorts of information and resources.

Recording Software

Fortunately, all audio recording software is similar in design, so whether you use a PC or a Mac, the basic functionality will be essentially the same. The differences come in the software’s “look and feel” and the specifics of how a given task is accomplished. As your voice is recorded digitally in the computer, all software will show a timeline, tracking the recording in real time, with a waveform that represents the amplitude (loudness) of your voice.

If you ask an audio equipment retailer about recording software, most will recommend Pro Tools®, telling you that it is the standard of the recording industry. Although this is largely true, Pro Tools® is extremely complex, with a long learning curve, and includes features that will never be used by most voice actors. It’s really designed for high-end music recording, and it truly is one of the standards of that industry. However, it is serious overkill for basic voiceover work. You will be far better off starting with trial versions of various software to find what works best for you. Many people start with the free Audacity software (audacityteam.org) and move up to paid software as their skills develop and bookings begin to come in. Still others, including many professionals, find Audacity more than fulfills their needs for recording and editing.

Audacity is considered by many to be among the best open-source audio recording software available. And you can’t beat the price: it’s absolutely free! Audacity is developed by a team of software engineers around the world who issue updates and improvements on a regular basis, so it’s constantly evolving. The software is relatively easy to use—just click the record button and a track opens and starts recording. For free software, Audacity is incredibly powerful and has built-in plugins that can handle a wide variety of production needs.

For basic voice track recording and editing, the learning curve for Audacity is fairly short. However, utilizing this software to its fullest extent will take some serious study.

The Project

An audio project is simply a computer file that contains references to a variety of other files and components which, when combined, result in a complete, finished recording.

Think of an audio project like a shelf of books. There might be 15 or 20 different books, but they’re all on a single shelf. The books can be changed or moved around, pages can be added or removed, and text can be highlighted... but they stay on the same shelf. Selected information can be taken from any or all of those books to create something unique and completely different.

Instead of books, an audio project can hold a variety of audio files like voice tracks, music, and sound effects. These tracks can be adjusted, edited, moved around, and processed in a variety of ways… but they are all part of a single project. When combined, or mixed, the result is a unique combination of the component parts.

An audio mix is the production process in which all the tracks within a project are balanced to produce the desired sound. Once the mix is set, the project is rendered to create the output file (see “Rendering” later in this chapter).

Basic Editing

Sound editing is an art form in itself and there are many excellent books on this subject. Many high-end voice talent record in studios where a recording engineer handles the recording and editing. However, if you’re just getting started, it is critical that you understand at least the basics of audio editing if you are going to submit professional auditions and paid work.

When you record a script, your phrasing includes an inherent rhythm and timing of beats, inflection, tone of voice, dynamics and more. Many of these elements of your performance can be seen in the recorded waveform, and all of these must be taken into consideration when editing.

Editing is the process of repairing or re-assembling the sequence of the recorded sounds. In other words, if you record a script and make a mistake, you’ll need to do a pickup to replace the section in which the error was made. If you make a mistake during the pickup, you’ll simply record another pickup. When you record a pickup, make sure your tone of voice, levels and inflection are consistent with what you did originally. When it comes time to edit your recording, you’ll be removing the parts in error, replacing them with the best pickups.

A common mistake in editing is to overlook the timing or phrasing between words and sentences. If the edit is too tight or too loose, the phrasing will sound wrong or unnatural. Bad edits are often the result of mismatched timing, not matching inflection with pickups, and removing breaths or mouth noise without retaining the original timing. Always listen to the transition at the original edit point for the timing before making the edit.

Occasionally, adjusting the timing for an edit might result in a gap of digital silence at the edit point. Digital silence is a complete absence of sound, and will usually be heard as a drop, or cut-off, of the audio which can be perceived as a mistake. To solve this problem, simply record a few seconds of room tone. This is the natural sound of your recording environment with your microphone open and level set at the spot where you would normally record your voice. When you have a gap at an edit point, simply copy a small portion of the room tone and paste into the track to replace the gap. When removing breaths or mouth noise, this little trick can also allow you to adjust your timing by shortening or lengthening the beat for a more effective delivery.
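If you think of the track as a long list of samples, the room tone trick is easy to picture. A simplified sketch, assuming the track and room tone are sample lists recorded at the same rate (the function is hypothetical, not any editor’s actual feature):

```python
def fill_gap_with_room_tone(track, gap_start, gap_end, room_tone):
    """Replace a span of digital silence with copied room tone so the
    edit keeps the natural ambience of the recording space."""
    gap_length = gap_end - gap_start
    # Loop the room tone clip if the gap is longer than the clip itself.
    patch = (room_tone * (gap_length // len(room_tone) + 1))[:gap_length]
    return track[:gap_start] + patch + track[gap_end:]

track = [0.20, 0.30, 0.0, 0.0, 0.0, 0.25]   # three samples of dead silence
room_tone = [0.01, -0.01]                    # very quiet natural ambience
print(fill_gap_with_room_tone(track, 2, 5, room_tone))
```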

Figure 21.4: Waveform with timing gap that needs to be tightened.

Most editing requires a certain level of precision, even for simple voice tracks. For the best edits, you should zoom in on the waveform and make your edit at the point where the waveform crosses the center line (Figure 21.5). To avoid clicks or audible edits, the waveform should retain its natural flow as it crosses the center line. When you become extremely proficient with your editing, you can replace a single word, part of a word, or even adjust the inflection of a phrase by replacing parts of a sentence.
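Many editors offer a “snap to zero crossing” feature that automates exactly this. Conceptually, it works something like the hypothetical helper below (not any particular editor’s actual code):

```python
def nearest_zero_crossing(samples, index):
    """Walk outward from the chosen edit point to the nearest spot where
    the waveform crosses the center line (the samples change sign).
    Cutting there avoids an audible click at the edit."""
    for offset in range(len(samples)):
        for i in (index - offset, index + offset):
            if 0 < i < len(samples) and samples[i - 1] * samples[i] <= 0:
                return i
    return index            # no crossing found; keep the original point

wave = [0.5, 0.3, 0.1, -0.2, -0.4, -0.1, 0.2]
print(nearest_zero_crossing(wave, 5))   # 6: the sign flips at -0.1 -> 0.2
```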

Figure 21.5: Waveform edit point

Signal Processing

Signal processing is defined as any electronic process that affects the recorded audio signal. This could include normalization, equalization (EQ), compression, limiting, de-clicking, de-popping, de-noising, and any number of other processes.

It’s not the purpose of this book to explain how to use any of these signal processing tools. The important thing to know is that if you record your auditions or projects in an acoustically quiet area at proper audio levels, you really should not need to do any signal processing prior to rendering your file. More important: if you do not understand how to properly use a given process or what it does, you can very easily do something that will adversely affect your audio to the point where it becomes unusable. Fortunately, most software uses non-destructive editing and processing, so your original files are safe and you can start over if you make a serious processing mistake.

The three most common types of audio processing that are misunderstood or misused are 1) compression, 2) EQ (equalization), and 3) normalization.

  • Compression is a process that balances out the peaks in your recordings, thus smoothing out the overall record levels between low volume and louder sections.
  • EQ (equalization) is a process that can compensate for problems at specific frequencies. It is similar to the bass and treble tone controls on some audio equipment. EQ can help to compensate for some acoustic issues, but care must be taken to use EQ sparingly. If overused, EQ can severely damage a rendered recording. EQ can also be used to create special effects like the sound of a telephone.
  • Normalization is the process of proportionally increasing the overall loudness of the entire recording to a pre-set maximum level as determined by the loudest spike in the audio. Normalization is not intended to compensate for poorly recorded audio. This process will increase everything in your recording, including noise. See the section on "Recording Levels" earlier in this chapter for more about Normalization.

Once the file is rendered, any applied signal processing cannot be undone; however, it can still be changed in the original project. If not used correctly, signal processing can affect a recording to the point where it becomes unusable. Most producers will prefer a completely unprocessed voice track recorded at proper levels. This allows them to make audio processing adjustments at their end during postproduction. However, some clients may request limited processing. If or when you get this request, be sure to ask for any specific settings they want for the processing.

With the possible exception of normalization, when you apply audio processing to your recording, you are making a subjective judgment about what you think your recording should sound like. This may or may not be what the producer is looking for, and if your choice is wrong, it may affect future work. Your job is to deliver the highest quality, cleanest voiceover recording you can, not to demonstrate your subjective opinions through audio processing.

Rendering

Rendering is the process of compiling all the component parts of a project into a final delivery format. For images, rendered files are commonly .jpg and .png. For audio, rendered files can be any of several formats, including the native formats of .wav and .aiff or any of dozens of other audio formats including .ogg and .mp3. The rendered format is generally requested by the client, depending on the specific needs of the project in production.

The .wav and .aiff native audio formats create very large files, averaging about 10MB per stereo minute. This is great for retaining high quality during editing and production, but is not practical for delivering files by email.

Most of today’s auditions and paid projects will be rendered to a compressed MP3 file at 128 Kbps. MP3 conversion (or rendering) handles the bit rate in a slightly different way than native files. The bit rate for an MP3 refers to the transfer bit rate at which the file is converted or, more accurately, the degree to which the file has been compressed. The greater the compression, the lower the MP3 bit rate and the lower the quality, because more data is being lost in the conversion. An MP3 file with a bit rate of 128 Kbps (kilobits per second) is considered the standard for MP3 conversion, as it is a good compromise of overall quality to file size (roughly 1/10th the size of a native file). Rendering to a higher bit rate (larger number) like 192 Kbps or 320 Kbps will result in a higher quality MP3 file.
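Because the bit rate literally means bits per second, the size of a rendered MP3 is easy to predict: bit rate times duration, divided by eight for bytes. An illustrative comparison:

```python
def mp3_megabytes(bitrate_kbps, seconds):
    """Approximate size of a constant-bit-rate MP3 file."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

for kbps in (128, 192, 320):
    print(f"{kbps} Kbps: {mp3_megabytes(kbps, 60):.2f} MB per minute")
# 128 Kbps ~ 0.96 MB/min: about 1/10th of a 10.6 MB uncompressed minute
```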

If you render auditions to MP3 at 128Kbps, you’ll be within the conventional standard. However, for paid projects, you should always ask your client how they would like you to render your files for delivery.

Delivering Rendered Files

There is no one, single, best way to deliver your rendered audio files. Online audition sites will usually ask you to upload an MP3 file through their website. Talent agents will ask for your auditions to be delivered via email or through their website. Clients and production companies may use email, a third-party delivery service, or their internal FTP (file transfer protocol) upload system. There are large file delivery services like transferbigfiles.com and wetransfer.com, and file sharing services like box.com, sugarsync.com, dropbox.com, drive.google.com, onedrive.live.com, pogoplug.com, mediafire.com, and a few dozen others, all of which establish a direct communication from your computer to that of your client or agent. And, of course, there are those occasions when you might burn your rendered files to an audio CD or data CD/DVD for delivery to your client. It will always be a good idea to ask your client how they would like files delivered.

Regardless of the delivery method used, make sure you follow your client’s file naming protocol to the letter. Talent agents have a reputation for not accepting mis-named audition files.
