3

The radio studio

Here is an introduction to the crucial operational process of creating the sounds that the listener will hear – sounds that must be clear and accurate. Anything that is distorted, confused or poorly assembled is tiring to understand and will not retain the listener’s interest.

The quality of the end product depends in large part on engineering and operational standards. It scarcely matters how good the ideas are, how brilliant the production, how polished the presentation: all will founder on poor operational technique. Whether the broadcaster is using other operational staff, or is in the ‘self-op’ mode, a basic familiarity with the studio equipment is essential – it must be second nature.

Taking the analogy of driving a car, the good driver is not preoccupied with how to change gear, or with which foot pedal does what, but is more concerned with road sense, i.e. the position and speed of the car on the road in relation to other vehicles and people. So it is with the broadcaster. The proper use of the tools of the trade – the studio mixer, microphones, computers, recorders – these must all be at the command of the producer who can then concentrate on what broadcasting is really about, the communication of ideas in music and speech.

Good technique comes first, and having been mastered, should not be allowed to impinge on the programme. The technicalities of broadcasting – editing, fading, control of levels, sound quality and so on – should be so good that they do not show. By being invisible they allow the programme content to come through.

In common with other performing arts such as film, theatre and television, the hallmark of the good communicator is that the means are not always apparent. Basic craft skills are seldom discernible except to the professional who recognises in their unobtrusiveness a mastery of the medium.

There are programme producers who declare themselves uninterested in technical things; they will leave the mixer or computer operation to others so that they can concentrate on ‘higher’ editorial matters. Unfortunately, if you do not know what is technically possible, then you cannot fully realise the potential of the medium. Without knowing the limitations there can be no attempt to overcome them. You simply suffer the frustrations of always having to depend on someone else who does understand. So here, we will explore the studio and discuss some of the equipment.

Studio layout

Studios for transmission or rehearsal/recording may consist simply of a single room containing all the equipment, including one or more microphones. This arrangement is designed for use by one person and is called a self-operation or self-op studio.

Where two or more rooms are used together, the room with the mixer and other equipment is often referred to as the control room or cubicle, while the actual studio – containing mostly microphones – is used for interviewees, actors, musicians, etc. If the control cubicle also has a mic it may still be capable of self-operation. In any area, when the mic is faded up and becomes ‘live’, the loudspeaker is cut to avoid ‘feedback’ or ‘howl-round’, and monitoring must be done on headphones.

image

Figure 3.1  A typical analogue cubicle with digital add-ons. 1. Analogue mixing desk. 2. Racks panel with jackfield and power supply. 3. CD players. 4. Screens connected to digital add-ons. 5. Gram deck. 6. Loudspeakers

The studio desk, mixer, control panel, console, or board

Most studios will include some kind of audio mixer – analogue, digital or fully computerised. What it is called – panel, console, board, mixer or desk – depends on which country you are in. It is essentially a device for mixing together the various programme sources, controlling their level or volume, and sending the combined output to the required destination – generally the transmitter, a recorder or a stream to the Internet. Traditionally, it contains three types of circuit function:

1    Programme circuits: a series of differently sourced audio channels, their individual volume levels controlled by separate slider faders. In addition to the main output, a second or auxiliary output – generally controlled by a small rotary fader on each channel – can provide a different mix of programme material typically used for public address, echo, foldback into the studio for contributors to hear, a clean feed, or separate audio mix sent to a distant contributor, etc.

2    Monitoring circuits: a visual indication (either by a programme meter or an LED bargraph) and an aural indication (loudspeaker or headphones), to enable the operator to hear and measure the individual sources as well as the final mixed output.

3    Control circuits: the means of communicating with other studios or outside broadcasts by means of ‘talkback’ or telephone.

In learning to operate a mixer there is little substitute for first understanding the principles of the individual equipment, then practising until its operation becomes second nature. The following are some operational points for the beginner.

The operator must be comfortable. The correct chair height and easy access to all necessary equipment is important for fluid operation. This mostly calls for a swivel chair on castors.

The first function to be considered is the monitoring of the programme. Nothing on a mixer, which might possibly be on-air, should be touched until the question has been answered – what am I listening to? The loudspeaker is normally required to carry the direct output of the desk, as, for example, in the case of a rehearsal or recording. In transmission conditions it will often take its programme feed off-air, although it may not be feasible to listen via a receiver when transmitting on short wave or on the web. As far as possible, the programme should be monitored as it will be heard by the listener, i.e. after the transmitter, not simply as it leaves the studio. With a digital desk and transmission chain, the inherent latency in the signal often means that one cannot monitor much beyond the desk output.

image

Figure 3.2  Layout of a traditional studio, controlled or ‘driven’ by its cubicle. This arrangement is primarily designed for complex demands such as lengthy news and current affairs programmes with many guests and recorded inserts. The studio area has a table and chairs with two or more microphones. Monitoring is by loudspeaker or headphones, with the speakers in the studio area being muted when the mic channels are open. The headphones in the studio also carry talkback from the cubicle. The size of the mixing desk can vary – in this illustration there are 10 channels, which can be assigned by the operator. Depending on the complexity of the programme the cubicle may be run by one or more operators. The first operator will control the desk, with a second operator at the back of the cubicle controlling media playout, recordings and outside sources

The volume of the monitoring loudspeaker should be adjusted to a comfortable level and then left alone. It is impossible to make subjective assessments of relative loudness within a programme if the volume of the loudspeaker is constantly being changed. If the loudspeaker has to be turned down, for example for a phone call, it should be done with a single key operation so that the original volume is easily restored afterwards. If monitoring is done on headphones, care should be taken to avoid too high a level, which can damage the hearing.

image

Figure 3.3  Studio mixer. Typical programme and monitoring circuits illustrating the principle of main and auxiliary outputs, prefade listening and measurement of all sources, desk output and off-air monitoring

Loudspeakers should also be kept to reasonable levels if a risk of hearing loss is to be avoided.

In mixing sources together – mics, computer playout, etc. – the general rule is to bring the new sound in before taking the old one out. This avoids the loss of atmosphere which occurs when all the faders are closed. A slow mix from one sound source to another is the ‘crossfade’.
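The crossfade principle – bring the new sound in before the old one is fully out – can be sketched numerically. This is a hypothetical illustration (the function name and linear fade law are ours, not any particular desk's), mixing two mono sample buffers with an overlap so the output never drops to silence:

```python
def crossfade(old, new, overlap):
    """Linear crossfade: 'new' fades in over the last `overlap`
    samples of 'old', so the atmosphere never falls to silence."""
    out = list(old[:-overlap])
    for i in range(overlap):
        gain_in = i / overlap          # new source rising
        gain_out = 1 - gain_in         # old source falling
        out.append(old[len(old) - overlap + i] * gain_out + new[i] * gain_in)
    out.extend(new[overlap:])
    return out

# Two constant-level 'sources': the mix dips smoothly rather than cutting
mixed = crossfade([1.0] * 8, [0.5] * 8, overlap=4)
```

In practice a desk or playout system applies the same idea to real audio streams, often with a smoother fade curve than the straight line used here.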

In assessing the relative sound levels of one programme source against another, either in a mix or in juxtaposition, the most important devices are the operator’s own ears. The question of how loud speech should be against music depends on a variety of factors, including the nature of the programme and the probable listening conditions of the audience, as well as the type of music and the voice characteristics of the speech. There will certainly be a maximum level that can be sent to the line feeding the transmitter, and this represents the upper limit against which everything else is judged. Obviously for the orchestral concert, music needs to be louder than speech. However, the reverse is the case where the speech is of predominant importance or where the music is already dynamically compressed, as it is with rock or pop. This ‘speech louder than music’ characteristic is general for most programming or when the music is designed for background listening. It is particularly important when the listening conditions are likely to be noisy, for example at busy domestic times or in the car.

image

Figure 3.4  A versatile mixer that can be used for permanent installation in a small studio or as a portable mixer for outside broadcasts. There are comprehensive mixing and monitoring controls, including left and right LED columns for stereo level indication. This mixer has eight faders for mono inputs, which can be panned left or right, plus two faders for stereo inputs. Each channel has rotary controls for gain, EQ and auxiliary output which could be routed to reverberation devices. The last two faders on the right are master faders for the left and right outputs

In a situation of fiercely competing transmitters, maximum signal penetration is obtained by sacrificing dynamic subtlety. The sound level of all sources is kept as high as possible and the transmitter is given a large dose of compression. It is as well for the producer to know about this, otherwise it is possible to spend a good deal of time obtaining a certain kind of effect or perfecting fades, only to have them overruled by an uncomprehending transmission system!

Probably the most important aspect of mixer operation is self-organisation. It does require multitasking, so it is essential to have a system for handling any physical items – the running order, scripts, CDs, etc. The second requirement is accurate reading of the computer screens. The good operator is always one step ahead, knowing what has to be done next and, having done it, setting up the next step.

Digital mixers

The general change from analogue mixers to digital has not been without its problems.

In the digital cubicle, the audio signal does not come into the desk but is controlled by the faders remotely with the advantage that levels can be preset and ‘remembered’, so that any given setting can be restored by the touch of a ‘recall’ button – the faders being motorised to reset themselves. A digital desk offers a large amount of processing – EQ, compression, echo, etc., often by means of a touch-screen. Also since the audio signal is remote from the desk – and is digital – it is less prone to noise. Digital desks are often more expensive than analogue types and are ideal for the more complex mixing required for outside broadcast (OB), orchestral or theatrical work. However, if their advantages are not applicable to the smaller station, there are many alternative analogue desks that come with a USB connection to provide the advantages of computer playback or recording.

image

Figure 3.5  A typical digital cubicle. The mixing is digitally controlled by the fader banks, with the signals being routed through a main apparatus room in another part of the building. 1. Digital mixing faders. 2. Racks panel with power supplies and additional XLR inputs and outputs. 3. CD players. 4. Screens connected to digital playout and production systems. 5. Gram deck. 6. Loudspeakers

There are also forms of virtual mixer where the ‘faders’ are operated by a touch-screen and all the controls are visual. Given the right software a laptop becomes a mixer – the advantage is the low cost. Mixers like this are mostly used for small Internet stations.

Studio software

Capable of recording, editing, storing and replaying audio material in digital form, the software offers very high-quality sound, and immediate access to any part of the programme held. The computer in this context is used in two different ways:

1    As part of an integrated network system where all the material is stored in a central server and can be accessed or manipulated by any one of a number of terminals. Individual programmes or items can be protected by allowing access only through a password.

2    As a terminal capable of editing, storing or broadcasting material, used essentially as a production computer.

image

Figure 3.6  A self-contained virtual mixer on a computer screen. The graphics simulate a traditional physical mixer, giving the software a familiar look which makes the device very user-friendly. The faders and rotary controls can be operated by a mouse/keyboard combination or by using the touch-screen monitor. The inputs are connected to the computer hardware either directly, or through an add-on called a ‘break-out box’ which allows for additional inputs and outputs

The advantage of an integrated system is that after editing an item – a news report, for example – on a terminal in the newsroom, it is then immediately available to the on-air broadcaster in the studio. The studio terminal can have access to an immense range of programme items – it simply depends on the presenter understanding the system well enough to know what is available.

As a programme presenter, producer or journalist operating in a live studio, the basic operations are:

•    to transfer into the database an item from another source, e.g. a recorder;

•    to retrieve an audio item from the computer database;

•    to edit, rename and save an item;

•    to open a programme running order;

•    to insert material into a running order;

•    to move items within a running order;

image

Figure 3.7  A versatile ‘portable’ digital mixer. Inputs, outputs, auxiliary sends and other channel settings can be configured using the touch-screen and subsequently saved to internal memory to be recalled as required

•    to delete material from a running order;

•    to play out an item through the desk on-air.

The range of playout software for the recording, editing, storing, scheduling and playback of audio material is very extensive, including SCISYS dira! – which includes the StarTrack, Highlander and Orion applications – SADiE, RCS Master Control, Radioman, DALET, Myriad P Squared, Soundforge, Soundscape, Simian, Pro-Tools, VoxPro, Zetta, Prisma, D-Cart, Adobe Audition (formerly Cool Edit Pro), Audacity, etc. Some of these systems are capable of virtually running the station, maintaining a database, recording audio, storing, editing, linking together, scheduling and playing back music tracks, voice links, commercials and promos, etc. in any desired order.

It is absolutely key for any operator to know the software in use really well – what it is designed to do, its details and shortcuts. Suffice it to say that no system should be handed over to the operational users until it is thoroughly tested and proved by the installation engineers. Once operational, the software must be totally reliable. Even so, computers can crash and newsrooms in particular will produce bulletin scripts as back-up. Experience has shown that it is also wise to have a CD player with pre-recorded music easily and quickly available on one of the mixer channels.

image

Figure 3.8  Community station ‘Future Radio’, showing an analogue mixer connected to a digital media playout system. It also has facilities for DJs who prefer to stand while broadcasting their ‘mix sets’, either from vinyl or CD. 1. Analogue mixing desk. 2. Racks panel with power supply. 3. CD players. 4. Screens connected to digital media playout. 5. Gram decks. 6. Loudspeakers. 7. Adapted CD players providing vari-speed and scratch facilities for DJ ‘mix sets’

Digital compression

Digital recorders and computers store an audio signal in the form of digital data. A recognised standard was set in 1983 by the compact disc, storing music by using 16-bit samples taken 44,100 times per second, creating a digital file of about 10 MB per minute of audio. Newer formats provide options to record digital audio at higher sample rates, e.g. 48 kHz and 96 kHz. Higher sample rates will involve much more data, and although this does not pose problems for computers with their fast processors and huge storage, large amounts of data are difficult and time-consuming to send on the Internet or by email.

It is possible to reduce the size of an audio file by halving the sampling rate to 22.05 kHz. This limits the frequency response of the audio to around 11 kHz which, while not good enough for music, is quite acceptable for most speech. Here are some sampling rates, in samples per second (Hz), and the kind of quality they give. The figures are for dual channel audio.

  8,000  telephone quality
 11,025  poor AM radio quality
 22,050  near FM radio quality
 32,000  better than FM
 44,100  CD quality
 48,000  DAT quality
 96,000  DVD-Audio (typical)
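The figures quoted earlier – about 10 MB per minute for CD audio, and the 11 kHz ceiling at half rate – follow from simple arithmetic, sketched here (the function name is ours):

```python
def pcm_megabytes_per_minute(sample_rate_hz, bits_per_sample=16, channels=2):
    """Uncompressed PCM data for one minute of audio, in megabytes."""
    bytes_per_second = sample_rate_hz * (bits_per_sample // 8) * channels
    return bytes_per_second * 60 / 1_000_000

# CD quality: 44,100 samples/s, 16-bit, stereo -> roughly 10.6 MB per minute
cd = pcm_megabytes_per_minute(44_100)

# Halving the rate halves the data; the Nyquist theorem caps the
# recoverable frequency response at half the sampling rate:
# 22,050 / 2 = 11,025 Hz, i.e. the 'around 11 kHz' limit mentioned above
half = pcm_megabytes_per_minute(22_050)
```

The same arithmetic explains why 96 kHz recording, for all its quality, more than doubles the storage and transfer burden of CD-rate audio.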


The ‘Moving Picture Experts Group’ – MPEG for short – was established with the aim of developing an international standard for encoding full-motion film, video and high-quality audio. It came up with the MP3 format, an abbreviation of MPEG-1 Audio Layer 3. This shrinks the original sound data by a factor of 12 or more by cleverly discarding the faint and extreme sounds that the ear is unlikely to hear. The perceived audible effect is therefore minimal. Different rates of MP3 compression can be heard on the website.

Digitally compressed audio is ideal for a reporter sending material from a remote location back to the studio. A report recorded, for example, on a smartphone or digital recorder, which itself introduces compression, is transferred to a laptop computer where it is edited and compressed to MP3 format. This file is then sent via a local Internet connection or by email, and downloaded by the radio station ready for broadcast. One minute of high-quality audio, equivalent now to only about 1 MB of data, takes very little time to upload via the Internet, or even a mobile data connection.
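The factor-of-12 figure and the ‘1 MB per minute’ claim square with each other. Taking 128 kbit/s – a common MP3 bitrate, used here purely as a worked example – the arithmetic runs:

```python
# CD audio: 44,100 samples/s x 16 bits x 2 channels = about 1,411 kbit/s
cd_bitrate_kbps = 44_100 * 16 * 2 / 1000

# A common MP3 bitrate of 128 kbit/s gives roughly the 12:1 reduction
mp3_bitrate_kbps = 128
compression_factor = cd_bitrate_kbps / mp3_bitrate_kbps   # about 11:1

# One minute of 128 kbit/s MP3 in megabytes
mb_per_minute = mp3_bitrate_kbps * 1000 / 8 * 60 / 1_000_000   # about 0.96 MB
```

Lower MP3 bitrates push the compression factor well past 12, at the cost of audible quality.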

Digital development

Following the installation of digital studios, systems are being introduced that are based around central digital hubs supporting more than one radio station in a corporate group.

The BBC’s ViLoR system – Virtualisation of Local Radio – is designed to have 39 stations connected to one of two digital hubs where all the audio files – speech and music – are held. This means that the same uncompressed linear audio signal is used all the way to the transmitter, or online audience, so avoiding transcoding between formats. This results in reduced cost – less digital equipment on-station, better quality and less noise.

Music playout

Music reproduction employs a wide variety of methods. Many stations have all their music either remotely or locally held on a central hard drive computer system with instant access to any track. A few may rely on a library of vinyl records built up over the years. Some use CDs in one form or another, either on individual players or through a jukebox system. It is not surprising, therefore, that studio equipment differs widely from station to station.

In playing a music item, most control desks have a ‘prefade’, ‘pre-hear’ or ‘audition’ facility which enables the operator to listen to the track and adjust its volume before setting it up to play on air. This provides the opportunity of checking that it is the right piece of music, even though listening only to the beginning may give a false idea of its volume throughout.

image

Figure 3.9  A typical digital playout screen, providing options for manual or automatic playout to transmitters or for Internet radio streaming. This screen displays a Beatles special – a previously formatted playlist is already playing out. The title shown top left is ‘On the air’, and the ‘Next’ track is preloaded and shown top right. The upcoming tracks are listed in the main panel, and last minute adjustments to the running order can be made as required. The software usefully displays the time remaining on each track, in addition to current time and weather information. The operator must take manual control for live items such as travel and news bulletins. The small fader (top centre) controls the level of the music output

The fully digital station, using only a central server as a music source, is wise to keep at least one CD player or conventional turntable for vinyl records – even the occasional 78 rpm disc can be brought in by a listener.

A possible argument against the remote operation of music is that presenters often prefer to feel in control of their programme through physical contact with their discs. The disadvantage here is that the disc is then separated from its inlay or sleeve notes. However, this information can always be made available from a data store and brought up on a computer screen as required.

Recording formats

From spools of wire, reels of magnetic tape and cassettes, through minidiscs and DAT, to SD cards and solid state storage, recording formats have changed immensely. The trend has always been to store longer and longer durations in smaller and smaller spaces. Reel-to-reel recordings were mostly made on ¼-inch width tape, but they may also be found on 1- and 2-inch sizes for multi-track recordings. Huge stocks of archive and library material exist on tape, which is why many studios keep a reel-to-reel machine in the corner.

image

Figure 3.10  Illustration comparing the physical size of four common types of solid state storage: (A) Compact Flash card; (B) SDHC card; (C) Micro SDHC; (D) USB stick

Digital audio workstation

This facility is likely to be in a small cubicle area separate from the studio, often adjacent to the newsroom. It could be a ‘stand-alone’ terminal complete in itself, or be fully integrated with a network computer system. The example here is a typical DAW (Figure 3.11). It comprises an audio mixer (one source of which is a microphone), computer and keyboard, and perhaps a unit for picking up incoming phone lines. In addition, the mixer could be supplied with feeds from the local newsroom, a remote studio or outside broadcast. This arrangement can therefore be used for local voice recording, interviewing someone at a remote source, editing material or correcting levels of an already recorded mix. It is an ideal arrangement for mixing items to create a short self-op news package. In this example we are making a programme opening, mixing together three separate sources (Figure 3.12).

Given the appropriate software, different studio items are recorded on the computer to appear as separate tracks. Here is the script we are working on.

image

Figure 3.11  Digital audio workstation or DAW capable of downloading from a portable recorder, or recording from outside sources or mic and mixing or editing. Suitable for making finished packages

MUSIC: Locally recorded guitar rhythm
  (fade after 4 seconds and mix with Fx)
Fx: Surf waves on seashore
  (fade up, peak and hold under)
NARRATOR: The Comoros Islands. On the map they look like tiny specks in the Indian Ocean
  (slowly fade down music)
But walking along the white sandy beaches you’re overwhelmed by the mass of lush green vegetation climbing up the huge mountain peaks. But they’re not just mountains – they’re volcanoes. Ask any passing fisherman.
  (music out)
Fx:   (briefly up, down, hold under)
FISHERMAN: Oh yes – the last real eruption we had was in 1977 – lava poured down into the sea . . .


Here, the music is put on Track 1, the speech on Track 2, and the effects on Track 3. The resulting mix appears on Track 4. At the bottom is a timescale marked off in seconds from the left.

Each of the tracks can be played independently and its level adjusted by clicking the mouse on to the volume line, or envelope. This is the solid line on each track indicating its relative level at any one time – commonly known as the ‘rubber band’. The level is altered by moving it up or down i.e. ‘rubber banding’.
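Under the surface, the ‘rubber band’ is just a gain envelope: the software interpolates the level between the points the operator drags with the mouse. A minimal sketch, with names of our own invention rather than those of any particular playout system:

```python
def apply_envelope(samples, points):
    """Apply a 'rubber band' gain envelope to a track.
    `points` is a list of (sample_index, gain) pairs, as if placed
    with the mouse; gain is linearly interpolated between them."""
    out = []
    for n, s in enumerate(samples):
        # find the envelope segment containing sample n
        for (x0, g0), (x1, g1) in zip(points, points[1:]):
            if x0 <= n <= x1:
                t = (n - x0) / (x1 - x0)
                out.append(s * (g0 + t * (g1 - g0)))
                break
    return out

# Fade a constant tone from full level down to half over four samples
faded = apply_envelope([1.0] * 5, [(0, 1.0), (4, 0.5)])
```

Dragging a point up or down simply changes a gain value in such a list; the original audio samples are never altered, which is why the editing stays non-destructive.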

image

Figure 3.12  Mixing the programme opening. Music starts on Track 1, then effects on Track 3 faded up, both down for Voice 1 on Track 2. Music faded out. Effects up and down for Voice 2 – effects held under. The solid black lines or ‘rubber bands’ show the sound level of each track and are manipulated by the mouse. The final mix appears on Track 4

The music begins on its own and after 3 seconds the seawash is faded up to mix with it. After 5 seconds this mix is faded down and held under the narrator’s voice. The music continues to be faded and is completely out by the time the seawash is briefly peaked before the fisherman – Voice 2 – comes in. Note that the fisherman was recorded at a slightly lower level than the narrator’s voice so that Track 2 is brought up to compensate.

The advantage of this facility is that it allows the mixing to be repeated with 100 per cent accuracy while perhaps changing one of the levels or cues to get the desired result. It provides for split-second timing, it leaves all the original recordings intact, it doesn’t tie up a whole studio, and takes only one person to achieve the final programme. If that person is the narrator who is also the producer, the production method is not only very efficient but can also create a high level of personal job satisfaction.

Editing principles

The purpose of editing can be summarised as:

1    to rearrange recorded material into a more logical sequence;

2    to remove the uninteresting, repetitive or technically unacceptable;

3    to reduce the running time;

4    for creative effect to produce new juxtapositions of speech, music, sound and silence.

Editing must not be used to alter the sense of what has been said – which would be regarded as unethical – or to place the material within an unintended context.

There are always two considerations when editing, namely the editorial and the technical. In the editorial sense it is important to leave intact, for example, the view of an interviewee, and the reasons given for its support. It would be wrong to include a key statement but to omit an essential qualification through lack of time. On the other hand, facts can often be edited out and included more economically in the introductory cue material. It is often possible to remove some or all of the interviewer’s questions, letting the interviewee continue. If the interviewee has a stammer, or pauses for long periods, editing can obviously remove these gaps. However, it would be unwise to remove them completely, as this may alter the nature of the individual voice. It would be positively misleading to edit pauses out of an interview where they indicate thought or hesitation.

The most frequent fault in editing is the removal of the tiny breathing pauses which occur naturally in speech. There is little point in increasing the pace while destroying half the meaning – silence is not necessarily a negative quantity.

Editing practice

Once audio material has been transferred on to a hard disk, e.g. from a Flashmic, it can be manipulated, cut, rearranged or treated in a variety of ways depending on the software used.

This may be Adobe Audition, Audacity, Quick Edit Pro, Sound Forge, WavePad, etc. A general point of technique: it is almost always best to cut at the beginning of a word, which makes a definite edit point, rather than at the end of the preceding word – although occasionally the most definite point is in the middle of a word. However, while it is tempting to edit visually using the waveform on the screen, it is essential to listen carefully to the sounds, distinguishing between the end-of-sentence breath and the mid-sentence breath. These are typically of different lengths, and getting them right gives a naturalness to the end result.

An advantage of this form of editing is that it leaves the original recording intact – non-destructive editing. It is therefore possible to do the same edit several times, or to try alternatives, to get it absolutely right. This is a valuable facility for editing both speech and, especially, music. If a skilled music editor, faced with the master recording and a couple of retakes, can assure the producer that after editing, the show will be perfect, then the musicians can go and a lot of money will be saved.

Microphones

The good microphone converts acoustic energy into electrical energy very precisely. It reacts quickly to the sudden onset of sound – its transient response; it reacts equally to all levels of pitch – its frequency response; and it operates appropriately to sounds of different loudness – its sensitivity and dynamic response. It should be sensitive to the quietest sounds, yet not so delicate as to be easily broken or susceptible to vibration through its mounting. It should not generate noise of its own. Add to these factors desirable qualities in terms of size, weight, appearance, good handling, ease of use, reliability and low cost, and microphone design becomes a highly specialised scientific art.

To the producer, the most useful characteristic of a microphone is probably its directional property. It may be sensitive to sounds arriving from all directions – omni-directional – and such a microphone is useful for location recording and interviewing, audience reaction, and talkback purposes. Alternatively a directional mic is essential in most types of music balance, quiz shows, and where there is any form of public address system.

image

Figure 3.13  Editing. Using the mouse, the arrows are set at the edit points in order to remove the ‘good morning’ for the afternoon repeat

The choice of mic for a particular job requires some thought and although it might be possible to rely on the expertise of a technician, it will pay a producer to become familiar with the advantages and limitations of each type available. For example, some mics include on/off switches, or a switch to start a recorder. Some incorporate an optional bass-cut facility. Some require a mains unit or battery pack, or have a directivity pattern that can be changed while in use. Some operate better out of doors than others, some will make a presenter sound good when working close to it, others just distort. Producers will need to decide whether a radio mic is necessary, when clip-on personal mics are appropriate, or if a highly directional ‘rifle’ mic is required. The more one knows about the right use of the right equipment, the more the technicalities of programme making become properly subservient to the editorial decisions.

image

Figure 3.14  Microphone directivity patterns. A microphone is sensitive to sounds within its area of pick-up. In selecting one for a particular purpose, consideration must be given to how well it will reject unwanted sounds from another direction

Stereo

Simply stated, the stereo microphone gives two electrical outputs instead of one. These relate to sounds arriving from its left and its right. This ‘positional information’ is carried through the entire system via two transmission channels, arriving at the stereo receiver to be heard on left and right loudspeakers. When stereo was introduced in the late 1950s, the BBC introduced a metering formula which has continued to be followed and has also been adopted by some other broadcasters. Here, the left channel is generally referred to as the ‘A’ (red) output and the right channel is the ‘B’ (green) output. The meter monitoring the electrical levels may have two needles – red and green (following navigational rules), or there may be two meters – left and right. The signal sent to a mono transmitter – the ‘M’ signal – is the combination of both left and right, i.e. ‘A + B’, while the stereo information – the ‘S’ signal – consists of the differences between what is happening on the left and on the right, i.e. ‘A – B’. Sometimes a second monitoring meter is available to look at the ‘M’ and ‘S’ signals. Again, it has two needles conventionally coloured, respectively, white and yellow. Vertical columns of LEDs are an alternative way of indicating the signal level. What does the producer need to know about all this?
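The sum-and-difference arithmetic is easy to pin down in a sketch. Note that conventions vary – some broadcasters scale M and S by a half at the encode stage; the halving is done here on decode instead, so the round trip is exact:

```python
def ms_encode(a, b):
    """Mono (M) and stereo-difference (S) signals
    from the left (A) and right (B) channels."""
    return a + b, a - b

def ms_decode(m, s):
    """Recover A and B; the halving is needed because M + S = 2A."""
    return (m + s) / 2, (m - s) / 2

# A source panned mostly left: A = 0.6, B = 0.2
m, s = ms_encode(0.6, 0.2)   # M carries the mono mix, S the positional information
```

A mono transmitter simply takes M and discards S; a centre-panned source (A = B) produces S = 0, which is why centred material survives the mono fold-down unchanged.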

First, that if a programme is to be carried by both monaural and stereo transmitters some thought has to be given to the question of compatibility. Material designed for stereo can sound pointless in mono, or even technically bad. For example, speech and music together can be distinguished in stereo purely because of their positional difference; in mono the same mix may be unacceptable since a difference in level is needed. The producer will optimise the programme for the primary audience, but must also ensure that it remains acceptable in both. It is all too easy to fall in love with the stereo sound in the studio and forget the needs of the mono listener.
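The compatibility question follows directly from the ‘A + B’ and ‘A – B’ arithmetic described earlier. A small numerical sketch makes it concrete (illustrative Python only – the function names are invented for the example, and real systems usually scale the sum to avoid overload):

```python
def ms_encode(a, b):
    """Derive the mono (M) and stereo-difference (S) signals
    from left (A) and right (B) channel samples."""
    m = a + b   # what the mono transmitter carries
    s = a - b   # the positional information
    return m, s

def ms_decode(m, s):
    """Recover left and right from M and S at the stereo receiver."""
    a = (m + s) / 2
    b = (m - s) / 2
    return a, b

# A voice panned fully left: A = 1.0, B = 0.0
m, s = ms_encode(1.0, 0.0)   # m = 1.0, s = 1.0
a, b = ms_decode(m, s)       # the stereo listener gets (1.0, 0.0) back
```

The mono listener hears only `m`, which is why two sources separated purely by position collapse into an undifferentiated sum unless a level difference is also present.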

Second, it is not necessary to use a stereo mic to generate a stereo signal. Two directional mono mics (a ‘co-incident pair’), connected to a stereo mixer in such a way as to simulate left and right signals, for example through ‘pan-pots’, will give excellent results. This technique is useful in an interview or phone-in when the voices can be given some additional left/right separation for the stereo listener.

Third, a pan-pot on a mixer channel can give a mono source both size and position. For example, a mono recording of a sound effect can be placed across part, or all, of the sound picture. Two mono recordings, for example of rain, can give a convincing stereo picture if one is panned to the left and the other to the right, with some overlap in the middle. When recording music, special effects can be obtained by the deliberate mislocation of a particular source – hence the piano 10 metres wide at the front of the orchestra, or the trumpeter whose fanfare flies around the sound stage simply by the twirl of the pan-pot!
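What the pan-pot does electrically can be sketched in a few lines. This is an illustration, not mixer firmware: the `pan` function is hypothetical, and the constant-power law shown is one common design choice (it keeps the apparent loudness steady as the source moves), not necessarily the one fitted to any particular desk:

```python
import math

def pan(sample, position):
    """Place a mono sample in the stereo image.
    position runs from -1.0 (full left) to +1.0 (full right)."""
    # Map the pan position onto a quarter circle: 0 (left) .. pi/2 (right).
    angle = (position + 1.0) * math.pi / 4.0
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right

# Centre: equal level in both channels (about 0.707 each)
centre_l, centre_r = pan(1.0, 0.0)

# Full left: everything in the left channel
hard_l, hard_r = pan(1.0, -1.0)
```

Sweeping `position` over time is exactly the ‘twirl of the pan-pot’ that flies the trumpeter’s fanfare around the sound stage.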


Figure 3.15  Mono mics and faders create a stereo effect when their pan-pots are set to give a left and right placing to each sound source

And fourth, that working in stereo is a challenge to the producer’s creativity. To establish distance and effect movement in something as simple as a station promo gives it impact. To play three tracks simultaneously – one left, one centre and one right – as a ‘guess the title’ competition is intriguing. A spatial round-table discussion that really separates the speakers has a much more live feel than its mono counterpart. The drama – or commercial – in which voices can be made to appear from anywhere, dart around or ‘float’, literally adds another dimension. For the listener, a stereo station should do more than keep the stereo indicator light on.


Figure 3.16  Panning for artificial stereo. In this example, ‘Hello’ is left and right, ‘good morning’ is taken to the left, and ‘welcome’ is moved right. The centre is then restored

Equipment faults

Studios are complex places. There is a lot to go wrong – a knob or key on the mixer works loose, the software seems unreliable, the clock is not absolutely accurate, a small indicator light may blow or a headphone lead break. Perhaps it is something physical like a squeaky door or an unsafe mic stand. Whatever the problem, it is likely to affect programmes and must be put right. It is up to every studio user to report any problem, and the easiest way of doing this is to have a special routine in the computer system for tracking faults and errors. Alternatively, hang a small notebook permanently in a conspicuous place. It is the responsibility of the person who discovers the fault to make a note of its symptoms and the time it happened. Every morning the system is routinely checked by a maintenance technician. Intermittent faults are particularly troublesome and may require a measure of detective work. It is in everyone’s interest to record any technical incident, however slight, in order to maintain high operational standards.
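A fault-tracking routine need be no more elaborate than a time-stamped list of symptoms. A minimal sketch of such a record follows (hypothetical Python – the `log_fault` function and the fields chosen are an assumption, not any particular station’s system):

```python
from datetime import datetime

def log_fault(log, symptom, reporter):
    """Record a fault with the time it was noticed, so that
    intermittent problems can be traced later."""
    log.append({
        "time": datetime.now().isoformat(timespec="minutes"),
        "symptom": symptom,
        "reported_by": reporter,
        "cleared": False,
    })

studio_log = []
log_fault(studio_log, "Channel 3 fader crackles near top of travel", "JS")
```

The ‘cleared’ flag gives the maintenance technician a simple morning checklist, and the accumulated timestamps are exactly the evidence needed for the detective work on intermittent faults.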
