11

Optical disks in digital audio

Optical disks are particularly important to digital audio, not least because of the success of the Compact Disc, followed by the MiniDisc, DVD and magneto-optical production recorders.

11.1 Types of optical disk

There are numerous types of optical disk, which have different characteristics.1 There are, however, three broad groups, shown in Figure 11.1, which can be usefully compared.

1  The Compact Disc and the prerecorded MiniDisc and DVD are read-only laser disks, which are designed for mass duplication by stamping. They cannot be recorded.

2  Some laser disks can be recorded, but once a recording has been made, it cannot be changed or erased. These are usually referred to as write-once-read-many (WORM) disks. Recordable CDs (CD-R) and DVDs (DVD-R) work on this principle.

3  Erasable optical disks have essentially the same characteristic as magnetic disks, in that new and different recordings can be made in the same track indefinitely. Recordable MiniDisc and CD-RW are in this category. Sometimes a separate erase process is necessary before rewriting.

The Compact Disc, generally abbreviated to CD, is a consumer digital audio recording which is intended for mass replication. Philips’ approach was to invent an optical medium having the same characteristics as the vinyl disk in that it could be mass replicated by moulding or stamping. The information on it is carried in the shape of flat-topped physical deformities in a layer of plastic. Such relief structures lack contrast and must be read with a technique called phase-contrast microscopy that allows an apparent contrast to be obtained using optical interference.

Figure 11.2(a) shows that the information layer of CD and the prerecorded MiniDisc is an optically flat mirror upon which microscopic bumps are raised. A thin coating of aluminium renders the layer reflective. When a small spot of light is focused on the information layer, the presence of the bumps affects the way in which the light is reflected back, and variations in the reflected light are detected in order to read the disk. Figure 11.2 also illustrates the very small dimensions common to both disks. For comparison, some sixty CD/MD tracks can be accommodated in the groove pitch of a vinyl LP. These dimensions demand the utmost cleanliness in manufacture.

Figure 11.1 The various types of optical disk. See text for details.

image

Figure 11.2(b) shows two types of WORM disk. In the first, the disk contains a thin layer of metal; on recording, a powerful laser melts spots on the layer. Surface tension causes a hole to form in the metal, with a thickened rim around the hole. Subsequently a low-power laser can read the disk because the metal reflects light, but the hole passes it through. Computer WORM disks work on this principle. In the second, the layer of metal is extremely thin, and the heat from the laser heats the material below it to the point of decomposition. This causes gassing which raises a blister or bubble in the metal layer. In a further type, the disk surface is coated with a special dye whose chemical composition is changed by the heat of the recording laser. This changes the opacity of the dye. Clearly once such recordings have been made, they are permanent.

Re-recordable or erasable optical disks rely on magneto-optics,2 also known more fully as thermomagneto-optics. Writing in such a device makes use of a thermomagnetic property possessed by all magnetic materials, which is that above a certain temperature, known as the Curie temperature, their coercive force becomes zero. This means that they become magnetically very soft, and take on the flux direction of any externally applied field. On cooling, this field orientation will be frozen in the material, and the coercivity will oppose attempts to change it. Although many materials possess this property, there are relatively few which have a suitably low Curie temperature. Compounds of terbium and gadolinium have been used, and one of the major problems to be overcome is that almost all suitable materials from a magnetic viewpoint corrode very quickly in air.

There are two ways in which magneto-optic (MO) disks can be written. Figure 11.2(c) shows the first system, in which the intensity of the laser is modulated with the waveform to be recorded. The disk is considered to be initially magnetized along its axis of rotation with the north pole upwards. It is rotated in a field of the opposite sense, produced by a steady current flowing in a coil, which is weaker than the room-temperature coercivity of the medium and therefore has no effect. A laser beam is focused on the medium as it turns, and a pulse from the laser will momentarily heat a very small area of the medium past its Curie temperature, whereby it will take on a reversed flux due to the presence of the field coils. This reversed-flux direction will be retained indefinitely as the medium cools.

Figure 11.2 (a) The information layer of CD is reflective and uses interference. (b) Write-once disks may burn holes or raise blisters in the information layer. (c) High data rate MO disks modulate the laser and use a constant magnetic field. (d) At low data rates the laser can run continuously and the magnetic field is modulated.

image

Alternatively the waveform to be recorded modulates the magnetic field from the coils as shown in Figure 11.2(d). In this approach, the laser is operating continuously in order to raise the track beneath the beam above the Curie temperature, but the recorded magnetic field is determined by the current in the coil at the instant the track cools. Magnetic field modulation is used in the recordable MiniDisc.

In both of these cases, the storage medium is clearly magnetic, but the writing mechanism is the heat produced by light from a laser; hence the term thermomagneto-optics. The advantage of this writing mechanism is that there is no physical contact between the writing head and the medium. The distance can be several millimetres, some of which is taken up with a protective layer to prevent corrosion. In prototypes, this layer is glass, but commercially available disks use plastics.

The laser beam will supply a relatively high power for writing, since it is supplying heat energy. For reading, the laser power is reduced, such that it cannot heat the medium past the Curie temperature, and it is left on continuously. Readout depends on the so-called Kerr effect, which describes a rotation of the plane of polarization of light due to a magnetic field. The magnetic areas written on the disk will rotate the plane of polarization of incident polarized light to two different planes, and it is possible to detect the change in rotation with a suitable pickup.

11.2 CD and MD contrasted

CD and MD have a great deal in common. Both use a laser of the same wavelength that creates a spot of the same size on the disk. The track pitch and speed are the same and both offer the same playing time. The channel code and error correction strategy are the same.

CD carries 44.1 kHz sixteen-bit PCM audio and is recorded in a continuous spiral like a vinyl disk. The CD process, from cutting, through pressing and reading, produces no musical degradation whatsoever, since it simply conveys a series of numbers which are exactly those recorded on the master tape. The only part of a CD player that can cause subjective differences in sound quality in normal operation is the DAC, although in the presence of gross errors some players will correct and/or conceal better than others.

MD begins with the same PCM data, but uses a form of compression known as ATRAC having a compression factor of 0.2. After the addition of subcode and housekeeping data MD has an average data rate which is 0.225 that of CD. However, MD has the same recording density and track speed as CD, so the data rate from the disk greatly exceeds that needed by the audio decoders. The difference is absorbed in RAM as for a hard-drive based machine. The RAM in a typical player is capable of buffering about 3 seconds of audio. When the RAM is full, the disk drive stops transferring data but keeps turning. As the RAM empties into the decoders, the disk drive will top it up in bursts. As the drive need not transfer data for over three quarters of the time, it can reposition between transfers and so it is capable of editing in the same way as a magnetic hard disk. A further advantage of the RAM buffer is that if the pickup is knocked off track by an external shock, the RAM continues to provide data to the audio decoders and, provided the pickup can get back to the correct track before the RAM is exhausted, there will be no audible effect.
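
To put rough numbers on this buffering arrangement, the short Python sketch below uses only the 0.225 rate ratio and the approximately 3-second buffer quoted above; everything else in it is illustrative.

# Rough sketch of the MiniDisc buffering arithmetic described above.
audio_to_disk_ratio = 0.225     # audio decoder consumes 22.5% of the disk rate
buffer_seconds = 3.0            # approximate audio capacity of the RAM

# On average the drive only needs to transfer for 22.5% of the time:
print(f"drive idle for {1 - audio_to_disk_ratio:.1%} of the time")

# After repositioning (or a shock) lasting t seconds, the burst needed to
# refill the buffer is t * r / (1 - r), where r is the rate ratio:
for t in (0.5, 1.0, buffer_seconds):
    refill = t * audio_to_disk_ratio / (1 - audio_to_disk_ratio)
    print(f"{t:.1f} s off-track needs a {refill:.2f} s burst to recover")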

When recording an MO disk, the MiniDisc drive also uses the RAM buffer to allow repositioning so that a continuous recording can be made on a disk which has become chequerboarded through selective erasing. The full total playing time is then always available irrespective of how the disk is divided into different recordings. The sound quality of MiniDisc is a function of the performance of the convertors and of the data reduction system, with the latter typically being responsible for most of the degradation.

Figure 11.3 Mechanical specification of CD. Between diameters of 46 and 117 mm is a spiral track 5.7 km long.

image

11.3 CD and MD – disk construction

Figure 11.3 shows the mechanical specification of CD. Within an overall diameter of 120 mm the program area occupies a 33 mm-wide band between the diameters of 50 and 116 mm. Lead-in and lead-out areas increase the width of this band to 35.5 mm. As the track pitch is a constant 1.6 μm, there will be

35.5 mm ÷ 1.6 μm ≈ 22 188

tracks crossing a radius of the disk. As the track is a continuous spiral, the track length will be given by the above figure multiplied by the average circumference.

22 188 × (π × 81.5 mm) ≈ 5.7 km
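
The same arithmetic is easily checked numerically; the Python sketch below is purely illustrative and uses the dimensions of Figure 11.3.

import math

# Repeats the CD spiral arithmetic above using the Figure 11.3 dimensions.
track_pitch = 1.6e-6                  # m, constant track pitch
band_width = 35.5e-3                  # m, radial band including lead-in and lead-out
inner_d, outer_d = 46e-3, 117e-3      # m, diameters bounding the spiral

turns = band_width / track_pitch                       # tracks crossing a radius
mean_circumference = math.pi * (inner_d + outer_d) / 2
spiral_length_km = turns * mean_circumference / 1000

print(f"{turns:.0f} turns, about {spiral_length_km:.1f} km of track")
# -> roughly 22 188 turns and 5.7 km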

Figure 11.4 shows the mechanical specification of prerecorded MiniDisc. Within an overall diameter of 64 mm the lead-in area begins at a diameter of 29 mm and the program area begins at 32 mm. The track pitch is exactly the same as in CD, but the MiniDisc can be smaller than CD without any sacrifice of playing time because of the use of compression. For ease of handling, MiniDisc is permanently enclosed in a shuttered plastic cartridge 72 × 68 × 5 mm. The cartridge resembles a smaller version of a 3½-inch floppy disk, but unlike a floppy, it is slotted into the drive with the shutter at the side. An arrow is moulded into the cartridge body to indicate this.

Figure 11.4 The mechanical dimensions of MiniDisc.

image

In the prerecorded MiniDisc, it was a requirement that the whole of one side of the cartridge should be available for graphics. Thus the disk is designed to be secured to the spindle from one side only. The centre of the disk is fitted with a ferrous clamping plate and the spindle is magnetic. When the disk is lowered into the drive it simply sticks to the spindle. The ferrous disk is only there to provide the clamping force. The disk is still located by the moulded hole in the plastic component. In this way the ferrous component needs no special alignment accuracy when it is fitted in manufacture. The back of the cartridge has a centre opening for the hub and a sliding shutter to allow access by the optical pickup.

Figure 11.5 The construction of the MO recordable MiniDisc.

image

The recordable MiniDisc and its cartridge have the same dimensions as the prerecorded MiniDisc, but access to both sides of the disk is needed for recording. Thus the recordable MiniDisc has a shutter which opens on both sides of the cartridge, rather like a double-sided floppy disk. The opening on the front allows access by the magnetic head needed for MO recording, leaving a smaller label area. Figure 11.5 shows the construction of the MO MiniDisc. The 1.1 μm wide tracks are separated by grooves which can be optically tracked. Once again the track pitch is the same as in CD. The MO layer is sandwiched between protective layers.

11.4 Rejecting surface contamination

A fundamental goal of consumer optical disks is that no special working environment or handling skill is required. The bandwidth required by PCM audio is such that high-density recording is mandatory if reasonable playing time is to be obtained in CD. Although MiniDisc uses compression, it does so in order to make the disk smaller and the recording density is actually the same as for CD.

High-density recording implies short wavelengths. Using a laser focused on the disk from a distance allows short-wavelength recordings to be played back without physical contact, whereas conventional magnetic recording requires intimate contact and implies a wear mechanism, the need for periodic cleaning, and susceptibility to contamination. The information layer of CD and MD is read through the thickness of the disk. Figure 11.6 shows that this approach causes the readout beam to enter and leave the disk surface through the largest possible area. The actual dimensions involved are shown in the figure. Despite the minute spot size of about 1.2 μm diameter, light enters and leaves through a 0.7 mm-diameter circle. As a result, surface debris has to be three orders of magnitude larger than the readout spot before the beam is obscured. This approach has the further advantage in MO drives that the magnetic head, on the opposite side to the laser pickup, is then closer to the magnetic layer in the disk.

Figure 11.6 The objective lens of a CD pickup has a numerical aperture (NA) of 0.45; thus the outermost rays will be inclined at approximately 27° to the normal. Refraction at the air/disk interface changes this to approximately 17° within the disk. Thus light focused to a spot on the information layer has entered the disk through a 0.7 mm diameter circle, giving good resistance to surface contamination.

image

The bending of light at the disk surface is due to refraction of the wavefronts arriving from the objective lens. Wave theory of light suggests that a wavefront advances because an infinite number of point sources can be considered to emit spherical waves that will only add when they are all in the same phase. This can only occur in the plane of the wavefront. Figure 11.7 shows that at all other angles, interference between spherical waves is destructive. When such a wavefront arrives at an interface with a denser medium, such as the surface of an optical disk, the velocity of propagation is reduced; therefore the wavelength in the medium becomes shorter, causing the wavefront to leave the interface at a different angle (Figure 11.8). This is known as refraction. The ratio of velocity in vacuo to velocity in the medium is known as the refractive index of that medium; it determines the relationship between the angles of the incident and refracted wavefronts.

Figure 11.7 Plane-wave propagation considered as infinite numbers of spherical waves.

image

Figure 11.8 Reflection and refraction, showing the effect of the velocity of light in a medium.

image

The size of the entry circle in Figure 11.6 is a function of the refractive index of the disk material, the numerical aperture of the objective lens and the thickness of the disk. MiniDiscs are permanently enclosed in a cartridge, and scratching is unlikely. This is not so for CD, but fortunately the method of readout through the disk thickness tolerates surface scratches very well. In extreme cases of damage, a scratch can often be successfully removed with metal polish. By way of contrast, the label side is actually more vulnerable than the readout side, since the lacquer coating is only 30 μm thick. For this reason, writing on the label side of CD is not recommended.
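
The entry-circle geometry of Figure 11.6 can be reproduced with a little trigonometry. The sketch below assumes a nominal substrate refractive index of about 1.55 and the standard 1.2 mm disk thickness; neither figure is stated above.

import math

# Works through the geometry of Figure 11.6 (refractive index and disk
# thickness are assumed nominal values).
NA = 0.45                 # numerical aperture of the objective
n = 1.55                  # approximate refractive index of the substrate
thickness = 1.2e-3        # m, readout is through the full disk thickness

theta_air = math.asin(NA)          # angle of the outermost rays in air
theta_disk = math.asin(NA / n)     # angle after refraction (Snell's law)
entry_diameter = 2 * thickness * math.tan(theta_disk)

print(f"{math.degrees(theta_air):.0f} deg in air, "
      f"{math.degrees(theta_disk):.0f} deg in the disk, "
      f"entry circle {entry_diameter * 1e3:.2f} mm")
# -> about 27 deg, 17 deg and a 0.7 mm entry circle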

The base material is in fact a polycarbonate plastic produced by (among others) Bayer under the trade name of Makrolon. It has excellent mechanical and optical stability over a wide temperature range, and lends itself to precision moulding and metallization. It is often used for automotive indicator clusters for the same reasons. An alternative material is polymethyl methacrylate (PMMA), one of the first optical plastics, known by such trade names as Perspex and Plexiglas. Polycarbonate is preferred by some manufacturers since it is less hygroscopic than PMMA. The differential change in dimensions of the lacquer coat and the base material can cause warping in a hygroscopic material. Audio disks are too small for this to be a problem, but the larger video disks are actually two disks glued together back-to-back to prevent this warpage.

11.5 Playing optical disks

A typical laser disk drive resembles a magnetic drive in that it has a spindle drive mechanism to revolve the disk, and a positioner to give radial access across the disk surface. The positioner has to carry a collection of lasers, lenses, prisms, gratings and so on, and cannot be accelerated as fast as a magnetic-drive positioner. A penalty of the very small track pitch possible in laser disks, which gives the enormous storage capacity, is that very accurate track following is needed, and it takes some time to lock on to a track. For this reason tracks on laser disks are usually made as a continuous spiral, rather than the concentric rings of magnetic disks. In this way, a continuous data transfer involves no more than track following once the beginning of the file is located.

In order to record MO disks or replay any optical disk, a source of monochromatic light is required. The light source must have low noise, otherwise the variations in intensity due to the noise of the source will mask the variations due to reading the disk. The requirement for a low-noise monochromatic light source is economically met using a semiconductor laser, a relative of the light-emitting diode (LED). Both operate by raising the energy of electrons to move them from the valence band to the conduction band. Electrons falling back to the valence band emit a quantum of energy as a photon whose frequency is proportional to the energy difference between the bands. The process is described by Planck’s Law:

E = h × f

where E is the energy difference between the bands, f is the frequency of the emitted photon and h is Planck’s constant, 6.63 × 10⁻³⁴ joule seconds.

For gallium arsenide, the energy difference is about 1.6 eV, where 1 eV is 1.6 × 10⁻¹⁹ joules. Using Planck’s Law, the frequency of emission will be:

f = E ÷ h = (1.6 × 1.6 × 10⁻¹⁹) ÷ (6.63 × 10⁻³⁴) ≈ 3.86 × 10¹⁴ Hz

The wavelength will be c/f where c = the velocity of light = 3 × 10⁸ m/s.

λ = (3 × 10⁸) ÷ (3.86 × 10¹⁴) ≈ 7.8 × 10⁻⁷ m, i.e. about 780 nm
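
Written out numerically with standard values for the constants, the calculation runs as follows; the result is the near-infrared wavelength used by CD pickups.

# The gallium arsenide emission calculation above, in numerical form.
h = 6.63e-34          # J s, Planck's constant
eV = 1.6e-19          # J, one electron-volt
c = 3e8               # m/s, velocity of light

energy = 1.6 * eV                 # band-gap energy quoted above
frequency = energy / h            # Planck's law: E = hf
wavelength = c / frequency

print(f"f = {frequency:.2e} Hz, wavelength = {wavelength * 1e9:.0f} nm")
# -> about 3.9e14 Hz and roughly 780 nm, in the near infrared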

In the LED, electrons fall back to the valence band randomly, and the light produced is incoherent. In the laser, the ends of the semiconductor are optically flat mirrors, which produce an optically resonant cavity. One photon can bounce to and fro, exciting others in synchronism, to produce coherent light. This is known as Light Amplification by Stimulated Emission of Radiation, mercifully abbreviated to LASER, and can result in a runaway condition, where all available energy is used up in one flash. In injection lasers, equilibrium is reached between energy input and light output, allowing continuous operation. The equilibrium is delicate, and such devices are usually fed from a current source. To avoid runaway when temperature change disturbs the equilibrium, a photosensor is often fed back to the current source. Such lasers have a finite life, and become steadily less efficient.

Some of the light reflected back from the disk re-enters the aperture of the objective lens. The pickup must be capable of separating the reflected light from the incident light. Figure 11.9 shows two systems. In (a) an intensity beamsplitter consisting of a semi-silvered mirror is inserted in the optical path and reflects some of the returning light into the photosensor. This is not very efficient, as half of the replay signal is lost by transmission straight on. In the example at (b) separation is by polarization.

Rotation of the plane of polarization is a useful method of separating incident and reflected light in a laser pickup. Using a quarter-wave plate, the plane of polarization of light leaving the pickup will have been turned 45°, and on return it will be rotated a further 45°, so that it is now at right angles to the plane of polarization of light from the source. The two can easily be separated by a polarizing prism, which acts as a transparent block to light in one plane, but as a prism to light in the other plane, such that reflected light is directed towards the sensor.

Figure 11.9 (a) Reflected light from the disk is directed to the sensor by a semisilvered mirror. (b) A combination of polarizing prism and quarter-wave plate separates incident and reflected light.

image

In a CD player, the sensor is concerned only with the intensity of the light falling on it. When playing MO disks, the intensity does not change, but the magnetic recording on the disk rotates the plane of polarization one way or the other depending on the direction of the vertical magnetization. MO disks cannot be read with circular polarized light. Light incident on the medium must be plane polarized and so the quarter-wave plate of the CD pickup cannot be used. Figure 11.10(a) shows that a polarizing prism is still required to linearly polarize the light from the laser on its way to the disk. Light returning from the disk has had its plane of polarization rotated by approximately ±1 degree. This is an extremely small rotation. Figure 11.10(b) shows that the returning rotated light can be considered to be comprised of two orthogonal components. Rx is the component which is in the same plane as the illumination and is called the ordinary component and Ry is the component due to the Kerr effect rotation and is known as the magneto-optic component.

Figure 11.10 A pickup suitable for the replay of magneto-optic disks must respond to very small rotations of the plane of polarization.

image

A polarizing beam splitter mounted squarely would reflect the magneto-optic component Ry very well because it is at right angles to the transmission plane of the prism, but the ordinary component would pass straight on in the direction of the laser. By rotating the prism slightly a small amount of the ordinary component is also reflected. Figure 11.10(c) shows that when combined with the magneto-optic component, the angle of rotation has increased.3 Detecting this rotation requires a further polarizing prism or analyser as shown in Figure 11.10. The prism is twisted such that the transmission plane is at 45° to the planes of Rx and Ry. Thus with an unmagnetized disk, half of the light is transmitted by the prism and half is reflected. If the magnetic field of the disk turns the plane of polarization towards the transmission plane of the prism, more light is transmitted and less is reflected. Conversely if the plane of polarization is rotated away from the transmission plane, less light is transmitted and more is reflected. If two sensors are used, one for transmitted light and one for reflected light, the difference between the two sensor outputs will be a waveform representing the angle of polarization and thus the recording on the disk. This differential analyser eliminates common-mode noise in the reflected beam.4

As Figure 11.10 shows, the output of the two sensors is summed as well as subtracted in a MiniDisc player. When playing MO disks, the difference signal is used. When playing prerecorded disks, the sum signal is used and the effect of the second polarizing prism is disabled.
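
As a minimal sketch of this sum-and-difference processing (the sensor values below are invented, and a real pickup of course works on continuous photocurrents rather than isolated samples):

# Differential analyser outputs, as described above.
def analyser_outputs(transmitted, reflected):
    mo_signal = transmitted - reflected   # follows the Kerr rotation (MO disks)
    sum_signal = transmitted + reflected  # plain intensity (prerecorded disks)
    return mo_signal, sum_signal

print(analyser_outputs(0.52, 0.48))   # rotation towards the transmission plane
print(analyser_outputs(0.48, 0.52))   # opposite magnetization: sign reverses
# Common-mode intensity noise appears equally on both sensors and cancels
# in the difference.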

11.6 Focus and tracking systems

The frequency response of the laser pickup and the amount of crosstalk are both a function of the spot size, and care must be taken to keep the beam focused on the information layer. Disk warp and thickness irregularities will cause focal-plane movement beyond the depth of focus of the optical system, and a focus servo system will be needed. The depth of field is related to the numerical aperture, which is defined in section 11.8, and the accuracy of the servo must be sufficient to keep the focal plane within that depth, which is typically ±1 μm.

The focus servo moves a lens along the optical axis in order to keep the spot in focus. Since dynamic focus-changes are largely due to warps, the focus system must have a frequency response in excess of the rotational speed. A focus-error system is necessary to drive the lens. There are a number of ways in which this can be derived, the most common of which will be described here.

In Figure 11.11 a cylindrical lens is installed between the beam splitter and the photosensor. The effect of this lens is that the beam has no focal point on the sensor. In one plane, the cylindrical lens appears parallel-sided, and has negligible effect on the focal length of the main system, whereas in the other plane, the lens shortens the focal length. The image will be an ellipse whose aspect ratio changes as a function of the state of focus. Between the two foci, the image will be circular. The aspect ratio of the ellipse, and hence the focus error, can be found by dividing the sensor into quadrants. When these are connected as shown, the focus-error signal is generated. The data readout signal is the sum of the quadrant outputs.

Figure 11.11 The cylindrical lens focus method produces an elliptical spot on the sensor whose aspect ratio is detected by a four-quadrant sensor to produce a focus error.

image
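
Combining the quadrant outputs is a matter of simple sums and differences; the sketch below follows the labelling used in Figure 11.17, and the sample values are invented.

# Astigmatic (cylindrical-lens) focus detection from a four-quadrant sensor.
def quadrant_outputs(a, b, c, d):
    focus_error = (a + c) - (b + d)   # sign follows the orientation of the ellipse
    data_signal = a + b + c + d       # readout is simply the total light
    return focus_error, data_signal

print(quadrant_outputs(1.0, 1.0, 1.0, 1.0))   # circular spot: in focus, error 0
print(quadrant_outputs(1.2, 0.8, 1.2, 0.8))   # elongated one way: positive error
print(quadrant_outputs(0.8, 1.2, 0.8, 1.2))   # elongated the other way: negative error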

Figure 11.12 shows the knife-edge method of determining focus. A split sensor is also required. At (a) the focal point is coincident with the knife-edge, so it has little effect on the beam. At (b) the focal point is to the right of the knife-edge, and rising rays are interrupted, reducing the output of the upper sensor. At (c) the focal point is to the left of the knife-edge, and descending rays are interrupted, reducing the output of the lower sensor. The focus error is derived by comparison of the outputs of the two halves of the sensor. A drawback of the knife-edge system is that the lateral position of the knife-edge is critical, and adjustment is necessary. To overcome this problem, the knife-edge can be replaced by a pair of prisms, as shown in Figure 11.12(d)–(f). Mechanical tolerances then only affect the sensitivity, without causing a focus offset.

The cylindrical lens method has a smaller capture range than the knife-edge/prism method and a focus-search mechanism will be required, which moves the focus servo over its entire travel, looking for a zero crossing. At this time the feedback loop will be completed, and the sensor will remain on the linear part of its characteristic. The spiral track of CD and MiniDisc starts at the inside and works outwards. This was deliberately arranged because there is less vertical run-out near the hub, and initial focusing will be easier.

Figure 11.12 (a)–(c) Knife-edge focus method requires only two sensors, but is critically dependent on knife-edge position. (d)–(f) Twin-prism method requires three sensors (A, B, C), where focus error is (A + C)–B. Prism alignment reduces sensitivity without causing focus offset.

image

The track pitch is only 1.6 μm, and this is much smaller than the accuracy to which the player chuck or the disk centre hole can be made; on a typical player, run-out will swing several tracks past a fixed pickup. A track-following servo is necessary to keep the spot centralized on the track. There are several ways in which a tracking error can be derived.

In the three-spot method, two additional light beams are focused on the disk track, one offset to each side of the track centre-line. Figure 11.13 shows that, as one side spot moves away from the track into the mirror area, there is less destructive interference and more reflection. This causes the average amplitude of the side spots to change differentially with tracking error. The laser head contains a diffraction grating that produces the side spots, and two extra photosensors onto which the reflections of the side spots will fall. The side spots feed a differential amplifier, which has a low-pass filter to reject the channel-code information and retain the average brightness difference. Some players use a delay line in one of the side-spot signals whose period is equal to the time taken for the disk to travel between the side spots. This helps the differential amplifier to cancel the channel code.

Figure 11.13 Three-spot method of producing tracking error compares average level of side-spot signals. Side spots are produced by a diffraction grating and require their own sensors.

image
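
A rough sketch of the side-spot processing follows; a crude moving average stands in for the low-pass filter of the differential amplifier, and all sample values are illustrative.

# Three-spot tracking error from the two side-spot photosensors.
def average_level(samples, n=32):
    recent = samples[-n:]
    return sum(recent) / len(recent)

def tracking_error(side_a, side_b):
    # Positive when spot A sees more of the mirror area than spot B,
    # i.e. the main spot has drifted to one side of the track.
    return average_level(side_a) - average_level(side_b)

centred_a = [0.5, 0.6, 0.4, 0.5] * 8
centred_b = [0.5, 0.4, 0.6, 0.5] * 8
drifted_a = [0.7, 0.8, 0.6, 0.7] * 8    # spot A moving onto the mirror area

print(tracking_error(centred_a, centred_b))   # ~0 when on track
print(tracking_error(drifted_a, centred_b))   # positive: steer back towards centre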

The alternative approach to tracking-error detection is to analyse the diffraction pattern of the reflected beam. The effect of an off-centre spot is to rotate the radial diffraction pattern about an axis along the track. Figure 11.14 shows that, if a split sensor is used, one half will see greater modulation than the other when off-track. Such a system may be prone to develop an offset due either to drift or to contamination of the optics, although the capture range is large. A further tracking mechanism is often added to obviate the need for periodic adjustment. Figure 11.15 shows that in a dither-based system, a sinusoidal drive is fed to the tracking servo, causing a radial oscillation of spot position of about ±50 nm. This results in modulation of the envelope of the readout signal, which can be synchronously detected to obtain the sense of the error. The dither can be produced by vibrating a mirror in the light path, which enables a high frequency to be used, or by oscillating the whole pickup at a lower frequency.

Figure 11.14 Split-sensor method of producing tracking error focuses image of spot onto sensor. One side of spot will have more modulation when off-track.

image

Figure 11.15 Dither applied to readout spot modulates the readout envelope. A tracking error can be derived.

image

11.7 Typical pickups

It is interesting to compare different designs of laser pickup. Figure 11.16 shows an early Philips laser head.5 The dual-prism focus method is used, which combines the output of two split sensors to produce a focus error. The focus amplifier drives the objective lens mounted on a parallel motion formed by two flexural arms. The capture range of the focus system is sufficient to accommodate normal tolerances without assistance. A radial differential tracking signal is extracted from the sensors as shown in the figure. Additionally, a dither frequency of 600 Hz produces envelope modulation that is synchronously rectified to produce a drift-free tracking error. Both errors are combined to drive the tracking system. As only a single spot is used, the pickup is relatively insensitive to angular errors, and a rotary positioner can be used, driven by a moving coil. The assembly is statically balanced to give good resistance to lateral shock.

Figure 11.16 Philips laser head showing semisilvered prism for beam splitting. Focus error is derived from dual-prism method using split sensors. Focus error (A + D)–(B + C) is used to drive focus motor which moves objective lens on parallel action flexure. Radial differential tracking error is derived from split sensor (A + B)–(C + D). Tracking error drives entire pickup on radial arm driven by moving coil. Signal output is (A + B + C + D). System includes 600 Hz dither for tracking. (Courtesy Philips Technical Review)

image

Figure 11.17 shows a Sony laser head used in early consumer players. The cylindrical-lens focus method is used, requiring a four-quadrant sensor. Since this method has a small capture range, a focus-search mechanism is necessary. When a disk is loaded, the objective lens is ramped up and down looking for a zero crossing in the focus error. The three-spot method is used for tracking. The necessary diffraction grating can be seen adjacent to the laser diode. Tracking error is derived from side-spot sensors (E, F). Since the side-spot system is sensitive to angular error, a parallel-tracking laser head traversing a disk radius is essential. A cost-effective linear motion is obtained by using a rack-and-pinion drive for slow, coarse movements, and a laterally moving lens in the light path for fine rapid movements. The same lens will be moved up and down for focus by the so-called two-axis device, which is a dual-moving coil mechanism. In some players this device is not statically balanced, making the unit sensitive to shock, but this was overcome on later heads designed for portable players. Some designs incorporate a prism to reduce the height of the pickup above the disk.

Figure 11.17 Sony laser head showing polarizing prism and quarter-wave plate for beam splitting, and diffraction grating for production of side spots for tracking. The cylindrical lens system is used for focus, with a four-quadrant sensor (A, B, C, D) and two extra sensors E, F for the side spots. Tracking error is E–F; focus error is (A + C)–(B + D). Signal output is (A + B + C + D). The focus and tracking errors drive the two-axis device. (Courtesy Sony Broadcast)

image

11.8 CD readout in detail

Many descriptions are simplified to the extent that the light spot is depicted as having a distinct edge of a given diameter. In reality such a neat spot cannot be obtained. It is essential to the commercial success of CD that a useful playing time (75 min max.) should be obtained from a recording of reasonable size (12 cm). The size was determined by the European motor industry as being appropriate for car dashboard-mounted units. It follows that the smaller the spot of light which can be created, the smaller can be the deformities carrying the information, and so more information per unit area. Development of a successful high-density optical recorder requires an intimate knowledge of the behaviour of light focused into small spots. If it is attempted to focus a uniform beam of light to an infinitely small spot on a surface normal to the optical axis, it will be found that it is not possible. This is probably just as well as an infinitely small spot would have infinite intensity and any matter it fell on would not survive. Instead the result of such an attempt is a distribution of light in the area of the focal point having no sharply defined boundary. This is called the Airy distribution5 (sometimes pattern or disk) after Lord Airy (1835), the then Astronomer Royal.

If a line is considered to pass across the focal plane, through the theoretical focal point, and the intensity of the light is plotted on a graph as a function of the distance along that line, the result is the intensity function shown in Figure 11.18. It will be seen that this contains a central sloping peak surrounded by alternating dark rings and light rings of diminishing intensity. These rings will in theory reach to infinity before their intensity becomes zero. The intensity distribution or function described by Airy is due to diffraction effects across the finite aperture of the objective. For a given wavelength, as the aperture of the objective is increased, so the diameter of the features of the Airy pattern reduces. The Airy pattern vanishes to a singularity of infinite intensity with a lens of infinite aperture which of course cannot be made. The approximation of geometric optics is quite unable to predict the Airy pattern. An intensity function does not have a diameter, but for practical purposes an effective diameter typically quoted is that at which the intensity has fallen to some convenient fraction of that at the peak. Thus one could state, for example, the half-power diameter.

Figure 11.18 The structure of a maximum frequency recording is shown here, related to the intensity function of an objective of 0.45NA with 780 nm light. Note that track spacing puts adjacent tracks in the dark rings, reducing crosstalk. Note also that as the spot has an intensity function it is meaningless to specify the spot diameter without some reference such as an intensity level.

image

With a fixed objective aperture, as the tangential diffraction pattern becomes more oblique, less light passes the aperture and the depth of modulation transmitted by the lens falls. At some spatial frequency, all the diffracted light falls outside the aperture and the modulation depth transmitted by the lens falls to zero. This is known as the spatial cut-off frequency. Thus a graph of depth of modulation versus spatial frequency can be drawn, which is known as the modulation transfer function (MTF). This is a straight line commencing at unity at zero spatial frequency (no detail) and falling to zero at the cut-off spatial frequency (finest detail). Thus one could describe a lens of finite aperture as a form of spatial low-pass filter. The Airy function is no more than the spatial impulse response of the lens, and the concentric rings of the Airy function are the spatial analog of the symmetrical ringing in a phase linear electrical filter. The Airy function and the triangular frequency response form a transform pair6 as shown in Chapter 3.

When an objective lens is used in a conventional microscope, the MTF will allow the resolution to be predicted in lines per millimetre. However, in a scanning microscope the spatial frequency of the detail in the object multiplied by the scanning velocity gives a temporal frequency measured in Hertz. Thus lines per millimetre multiplied by millimetres per second gives lines per second. Instead of a straight line MTF falling to the spatial cut-off frequency, Figure 11.19 shows that a scanning microscope has a temporal frequency response falling to zero at the optical cut-off frequency which is given by:

Fc = (2NA ÷ λ) × velocity

where NA is the numerical aperture of the objective, λ is the wavelength of the light and the velocity is that of the track past the spot.

The minimum linear velocity of CD is 1.2 m/s, giving a cut-off frequency of

Fc = 2 × 0.45 ÷ (780 × 10⁻⁹) × 1.2 ≈ 1.4 MHz

Actual measurements reveal that the optical response is only a little worse than the theory predicts. This characteristic has a large bearing on the type of modulation schemes that can be successfully employed. Clearly, to obtain any noise immunity, the maximum operating frequency must be rather less than the cut-off frequency. The maximum frequency used in CD is 720 kHz, which represents an absolute minimum wavelength of 1.666 μm, or a bump length of 0.833 μm, for the lowest permissible track speed of 1.2 m/s used on the full-length 75 min-playing disks. One-hour-playing disks have a minimum bump length of 0.972 μm at a track velocity of 1.4 m/s. The maximum frequency is the same in both cases. This maximum frequency should not be confused with the bit rate of CD since this is different owing to the channel code used. Figure 11.18 showed a maximum-frequency recording, and the physical relationship of the intensity function to the track dimensions.
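
The cut-off calculation is easily written out numerically; the sketch below also shows that the 720 kHz maximum sits at roughly half the cut-off, as Figure 11.19 indicates.

# Optical cut-off frequency of the CD readout, using the formula above.
NA = 0.45                # numerical aperture of the objective
wavelength = 780e-9      # m, pickup laser wavelength
velocity = 1.2           # m/s, minimum track velocity

fc = 2 * NA / wavelength * velocity
print(f"optical cut-off = {fc / 1e6:.2f} MHz")             # about 1.4 MHz
print(f"720 kHz maximum = {720e3 / fc:.0%} of cut-off")    # roughly half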

Figure 11.19 Frequency response of laser pickup. Maximum operating frequency is about half of cut-off frequency Fc.

image

The intensity function can be enlarged if the lens used suffers from optical aberrations. This was studied by Maréchal7 who established criteria for the accuracy to which the optical surfaces of the lens should be made to allow the ideal Airy distribution to be obtained. CD player lenses must meet the Maréchal criterion. With such a lens, the diameter of the distribution function is determined solely by the combination of Numerical Aperture (NA) and the wavelength. When the size of the spot is as small as the NA and wavelength allow, the optical system is said to be diffraction limited. Figure 11.20 shows how Numerical Aperture is defined, and illustrates that the smaller the spot needed, the larger must be the NA. Unfortunately the larger the NA the more obliquely to the normal the light arrives at the focal plane and the smaller the depth of focus will be.

Figure 11.20 Fine detail in an object can only be resolved if the diffracted wavefront due to the highest spatial frequency is collected by the lens. Numerical aperture (NA) = sin θ, and as θ is the diffraction angle it follows that, for a given wavelength, NA determines resolution.

image

Since the introduction of CD, developments in the technology have continued. The use of a shorter-wavelength laser and a larger numerical aperture allows the spot size to be reduced in DVD. As a result the recording density is increased in comparison with CD.

Figure 11.21 The many stages of CD manufacture, most of which require the utmost cleanliness.

image

11.9 How optical disks are made

The steps used in the production of CDs will next be outlined. Prerecorded MiniDiscs are made in an identical fashion except for detail differences to be noted. MO disks need to be grooved so that the track-following system will work. The grooved substrate is produced in a similar way to a CD master, except that the laser is on continuously instead of being modulated with a signal to be recorded. As stated, CD is replicated by moulding, and the first step is to produce a suitable mould. This mould must carry deformities of the correct depth for the standard wavelength to be used for reading, and as a practical matter these deformities must have slightly sloping sides so that it is possible to release the CD from the mould.

The major steps in CD manufacture are shown in Figure 11.21. The mastering process commences with an optically flat glass disk about 220 mm in diameter and 6 mm thick. The blank is washed first with an alkaline solution, then with a fluorocarbon solvent, and spun dry prior to polishing to optical flatness. A critical cleaning process is then undertaken using a mixture of deionized water and isopropyl alcohol in the presence of ultrasonic vibration, with a final fluorocarbon wash. The blank must now be inspected for any surface irregularities that would cause data errors. This is done by using a laser beam and monitoring the reflection as the blank rotates. Rejected blanks return to the polishing process, those which pass move on, and an adhesive layer is applied followed by a coating of positive photoresist. This is a chemical substance that softens when exposed to an appropriate intensity of light of a certain wavelength, typically ultraviolet. Upon being thus exposed, the softened resist will be washed away by a developing solution down to the glass to form flat-bottomed pits whose depth is equal to the thickness of the undeveloped resist. During development the master is illuminated with laser light of a wavelength to which it is insensitive. The diffraction pattern changes as the pits are formed. Development is arrested when the appropriate diffraction pattern is obtained.8 The thickness of the resist layer must be accurately controlled, since it affects the height of the bumps on the finished disk, and an optical scanner is used to check that there are no resist defects that would cause data errors or tracking problems in the end product. Blanks passing this test are oven-cured, and are ready for cutting. Failed blanks can be stripped of the resist coating and used again.

The cutting process is shown in simplified form in Figure 11.22. A continuously operating helium cadmium9 or argon ion10 laser is focused on the resist coating as the blank revolves. The focus system uses a separate helium neon laser sharing the same optics. The resist is insensitive to the wavelength of the He–Ne laser. The laser intensity is controlled by an acousto-optic modulator driven by the encoder. When the device is in a relaxed state, light can pass through it, but when the surface is excited by high-frequency vibrations, light is scattered. Information is carried in the lengths of time for which the modulator remains on or remains off. The deformities in the resist produced as the disk turns when the modulator allows light to pass are separated by areas unaffected by light when the modulator is shut off. Information is carried solely in the variations of the lengths of these two areas.

The laser makes its way from the inside to the outside as the blank revolves. As the radius of the track increases, the rotational speed is proportionately reduced so that the velocity of the beam over the disk remains constant. This constant linear velocity (CLV) results in rather longer playing time than would be obtained with a constant speed of rotation. Owing to the minute dimensions of the track structure, the cutter has to be constructed to extremely high accuracy. Air bearings are used in the spindle and the laser head, and the whole machine is resiliently supported to prevent vibrations from the building from affecting the track pattern.
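
Holding the linear velocity constant means that the rotational speed must fall as the spot moves outwards. The sketch below uses the program-area radii of Figure 11.3 and the minimum track velocity of 1.2 m/s; the rev/min figures are only indicative.

import math

# Rotational speed needed for constant linear velocity at two radii.
velocity = 1.2                       # m/s, minimum CD track velocity
for radius_mm in (25, 58):           # inner and outer program-area radii
    rpm = velocity * 60 / (2 * math.pi * radius_mm * 1e-3)
    print(f"radius {radius_mm} mm: {rpm:.0f} rev/min")
# -> about 460 rev/min at the start of the program, falling to about 200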

Figure 11.22 CD cutter. The focus subsystem controls the spot size of the main cutting laser on the photosensitive blank. Disk and traverse motors are coordinated to give constant track pitch and velocity. Note that the power of the focus laser is insufficient to expose the photoresist.

image

As the player is a phase contrast microscope, it must produce an intensity function that straddles the deformities. As a consequence the intensity function that produces the deformities in the photoresist must be smaller in diameter than that in the reader. This is conveniently achieved by using a shorter wavelength of 400–500 nm from a helium–cadmium or argon–ion laser combined with a larger lens aperture of 0.9. These are expensive, but only needed for the mastering process.

The master recording process has produced a phase structure in relatively delicate resist, and this cannot be used for moulding directly. Instead a thin metallic silver layer is sprayed onto the resist to render it electrically conductive so that electroplating can be used to make robust copies of the relief structure.

The electrically conductive resist master is then used as the cathode of an electroplating process where a first layer of metal is laid down over the resist, conforming in every detail to the relief structure thereon. This metal layer can then be separated from the glass; the resist is dissolved away and the silver recovered, leaving a laterally inverted phase structure on the surface of the metal, in which the pits in the photoresist have become bumps in the metal. From this point on, the production of CD is virtually identical to the replication process used for vinyl disks, save only that a good deal more precision and cleanliness is needed.

This first metal layer could itself be used to mould disks, or it could be used as a robust submaster from which many stampers could be made by pairs of plating steps. The first metal phase structure can itself be used as a cathode in a further electroplating process in which a second metal layer is formed having a mirror image of the first. A third such plating step results in a stamper. The decision to use the master or substampers will be based on the number of disks and the production rate required.

The master is placed in a moulding machine, opposite a flat plate. A suitable quantity of molten plastic is injected between, and the plate and the master are forced together. The flat plate renders one side of the disk smooth, and the bumps in the metal stamper produce pits in the other surface of the disk. The surface containing the pits is next metallized, with any good electrically conductive material, typically aluminium. This metallization is then covered with a lacquer for protection. In the case of CD, the label is printed on the lacquer. In the case of a prerecorded MiniDisc, the ferrous hub needs to be applied prior to fitting the cartridge around the disk.

As CD and prerecorded MDs are simply data disks, they do not need to be mastered in real time. Raising the speed of the mastering process increases the throughput of the expensive equipment. Pressing plants have been using computer tape streamers or hard disk drives in order to supply the cutter with higher data rates.

11.10 How recordable MiniDiscs are made

Recordable MiniDiscs make the recording as flux patterns in a magnetic layer. However, the disks need to be pregrooved so that the tracking systems can operate. The grooves have the same pitch as CD and the prerecorded MD, but the tracks are the same width as the laser spot: about 1.1 μm. The grooves are not a perfect spiral, but have a sinusoidal waviness at a fixed wavelength. Like CD, MD uses constant track linear velocity, not constant speed of rotation. When recording on a blank disk, the recorder needs to know how fast to turn the spindle to get the track speed correct. The wavy grooves will be followed by the tracking servo and the frequency of the tracking error will be proportional to the disk speed. The recorder simply turns the spindle at whatever speed makes the grooves wave at the correct frequency. The groove frequency is 75 Hz; the same as the data sector rate. Thus a zero crossing in the groove signal can also be used to indicate where to start recording. The grooves are particularly important when a chequer-boarded recording is being replayed. On a CLV disk, each seek to a new track radius results in a different track speed. The wavy grooves allow the track velocity to be corrected as soon as a new track is reached.

The pregrooves are moulded into the plastics body of the disk when it is made. The mould is made in a similar manner to a prerecorded disk master, except that the laser is not modulated and the spot is larger. The track velocity is held constant by slowing down the resist master as the radius increases, and the waviness is created by injecting 75 Hz into the lens radial positioner. The master is developed and electroplated as normal in order to make stampers. The stampers make pregrooved disks that are then coated by vacuum deposition with the MO layer, sandwiched between dielectric layers. The MO layer can be made less susceptible to corrosion if it is smooth and homogeneous. Layers that contain voids, asperities or residual gases from the coating process present a larger surface area for attack. The life of an MO disk is affected more by the manufacturing process than by the precise composition of the alloy.

Above the sandwich an optically reflective layer is applied, followed by a protective lacquer layer. The ferrous clamping plate is applied to the centre of the disk, which is then fitted in the cartridge. The recordable cartridge has a double-sided shutter to allow the magnetic head access to the back of the disk.

11.11 Channel code of CD and MiniDisc

CD and MiniDisc use the same channel code. This was optimized for the optical readout of CD and prerecorded MiniDisc, but is also used for the recordable version of MiniDisc for simplicity.

The frequency response falling to the optical cut-off frequency is only one of the constraints within which the modulation scheme has to work. There are a number of others. In all players the tracking and focus servos operate by analysing the average amount of light returning to the pickup. If the average amount of light returning to the pickup is affected by the content of the recorded data, then the recording will interfere with the operation of the servos. Debris on the disk surface affects the light intensity and means must be found to prevent this reducing the signal quality excessively. Chapter 6 discussed modulation schemes known as DC-free codes. If such a code is used, the average brightness of the track is constant and independent of the data bits. Figure 11.23(a) shows the replay signal from the pickup being compared with a threshold voltage in order to recover a binary waveform from the analog pickup waveform, a process known as slicing. If the light beam is partially obstructed by debris, the pickup signal level falls, and the slicing level is no longer correct and errors occur. If, however, the code is DC-free, the waveform from the pickup can be passed through a high pass filter (e.g. a series capacitor) and Figure 11.23(b) shows that this rejects the falling level and converts it to a reduction in amplitude about the slicing level so that the slicer still works properly. This step cannot be performed unless a DC-free code is used. As the frequency response on replay falls linearly to the cut-off frequency determined by the aperture of the lens and the wavelength of light used, the shorter bumps and lands produce less modulation than longer ones. It is a further advantage of a DC-free code that as the length of bumps and lands falls with rising density, the replay waveform simply falls in amplitude but the average voltage remains the same and so the slicer still operates correctly.

Figure 11.23 A DC-free code allows signal amplitude variations due to debris to be rejected.

image

CD uses a coding scheme where combinations of the data bits to be recorded are represented by unique waveforms. These waveforms are created by combining various run lengths from 3T to 11T together to give a channel pattern 14T long where T is half a cycle of the master clock.11 Within the run length limits of 3T to 11T, a waveform 14T long can have 267 different patterns. This is slightly more than the 256 combinations of eight data bits and so eight bits are represented by a waveform lasting 14T. Some of these patterns are shown in Figure 11.24. As stated, these patterns are not polarity conscious and they could be inverted without changing the meaning.
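
The figure of 267 can be verified by brute force. The Python sketch below counts 14-bit patterns obeying the 3T to 11T limits, taking the maximum-run limit to apply to the zero runs at each end of the word as well; it is a count of candidate patterns, not the actual EFM codebook.

# In channel bits a 1 marks a transition, so run lengths of 3T to 11T mean
# at least two and at most ten 0s between 1s.
def zero_runs(bits):
    """Lengths of every maximal run of 0s, including those at the ends."""
    runs, current = [], 0
    for b in bits:
        if b:
            runs.append(current)
            current = 0
        else:
            current += 1
    runs.append(current)
    return runs

def usable(word):
    bits = [(word >> i) & 1 for i in range(14)]
    runs = zero_runs(bits)
    gaps = runs[1:-1]                      # runs bounded by a 1 on both sides
    return all(g >= 2 for g in gaps) and max(runs) <= 10

print(sum(usable(w) for w in range(2 ** 14)))   # -> 267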

Not all of the 14T patterns used are DC-free, some spend more time in one state than the other. The overall DC content of the recorded waveform is rendered DC-free by inserting an extra portion of waveform, known as a packing period, between the 14T channel patterns. This packing period is 3T long and may or may not contain a transition, which if it is present can be in one of three places. The packing period contains no information, but serves to control the DC content of the overall waveform.12 The packing waveform is generated in such a way that in the long term the amount of time the channel signal spends in one state is equal to the time it spends in the other state. A packing period is placed between every pair of channel patterns and so the overall length of time needed to record eight bits is 17T.

Figure 11.24 (a–g) Part of the codebook for EFM code showing examples of various run lengths from 3T to 11T. (h,i) Invalid patterns which violate the run-length limits.

image

Thus a group of eight data bits is represented by a code of fourteen channel bits, hence the name eight-to-fourteen modulation (EFM). It is a common misconception that the channel bits of a group code are recorded; in fact they are simply a convenient way of synthesizing a coded waveform having uniform time steps. It should be clear that channel bits cannot be recorded as they have a rate of 4.3 megabits per second whereas the optical cut-off frequency of CD is only 1.4 MHz.

Another common misconception is that channel bits are data. If channel bits were data, all combinations of 14 bits, or 16 384 different values could be used. In fact only 267 combinations produce waveforms that can be recorded. In a practical CD modulator, the eight bit data symbols to be recorded are used as the address of a lookup table which outputs a fourteen-bit channel bit pattern. As the highest usable frequency in CD is 720 kHz, transitions cannot be closer together than 3T and so successive ones in the channel bitstream must have two or more zeros between them. Similarly transitions cannot be further apart than 11T or there will be insufficient clock content. Thus there cannot be more than 10 zeros between channel 1s. Whilst the lookup table can be programmed to prevent code violations within the 14T pattern, they could occur at the junction of two successive patterns. Thus a further function of the packing period is to prevent violation of the run length limits. If the previous pattern ends with a transition and the next begins with one, there will be no packing transition and so the 3T minimum requirement can be met. If the patterns either side have long run lengths, the sum of the two might exceed 11T unless the packing period contained a transition. In fact the minimum run length limit could be met with 2T of packing, but the requirement for DC control dictated 3T of packing.

Decoding the stream of channel bits into data requires that the boundaries between successive 17T periods are identified. This is the process of deserialization or parsing. On the disk one 17T period runs straight into the next; there are no dividing marks. Symbol separation is done by counting channel bit periods and dividing them by 17 starting from a known reference point. The three packing periods are discarded and the remaining 14T symbol is decoded to eight data bits. The reference point is provided by the synchronizing pattern so called because its detection synchronizes the deserialization counter to the replay waveform.

Synchronization has to be as reliable as possible because if it is incorrect all the data will be corrupted up to the next sync pattern. Synchronization is achieved by the detection of a unique waveform periodically recorded on the track at regular spacing. It must be unique in the strict sense in that nothing else can give rise to it, because the detection of a false sync is just as damaging as failure to detect a correct one. In practice CD synchronizes deserialization with a waveform that is unique in that it is different from any of the 256 waveforms which represent data. For reliability, the sync pattern should have the best signal to noise ratio possible, and this is obtained by making it one complete cycle of the lowest frequency (11T plus 11T), giving it the largest amplitude and also making it DC-free. Upon detection of the 2 × Tmax waveform, the deserialization counter that divides the channel bit count by 17 is reset. This occurs on the next system clock, which is the reason for the 0 in the sync pattern after the third 1 and before the merging bits.

CD therefore uses forward synchronization and correctly deserialized data are available immediately after the first sync pattern is detected. The sync pattern is longer than the data symbols, and so clearly no data code value can create it, although it would be possible for certain adjacent data symbols to create a false sync pattern by concatenation were it not for the presence of the packing period. It is a further job of the packing period to prevent false sync patterns being generated at the junction of two channel symbols.

Each data block or frame in CD and MD, shown in Figure 11.25, consists of 33 symbols of 17T each following the preamble, making a total of 588T or 136 μs. Each symbol represents eight data bits. The first symbol in the block is used for subcode, and the remaining 32 bytes represent 24 audio sample bytes and 8 bytes of redundancy for the error-correction system. The subcode byte forms part of a subcode block which is built up over 98 successive data frames.
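
The figure of 588T can be confirmed by simple arithmetic. In the sketch below, the 24T length assumed for the sync pattern follows from the two 11T runs plus the closing transition described above; the remaining figures are taken directly from the text.

```python
# Frame length check: preamble (sync plus packing) followed by 33 symbols of
# 17T (14T pattern plus 3T packing) should give 588T, or about 136 microseconds
# at the 4.3218 MHz channel bit rate.
T_CLOCK_HZ = 4_321_800
SYNC_T, PACKING_T = 24, 3           # sync length is an assumption (see text)
SYMBOLS, SYMBOL_T = 33, 14 + 3

frame_T = SYNC_T + PACKING_T + SYMBOLS * SYMBOL_T
print(frame_T)                      # 588
print(frame_T / T_CLOCK_HZ * 1e6)   # roughly 136 microseconds
```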

Figure 11.25 One CD data block begins with a unique sync pattern, and one subcode byte, followed by 24 audio bytes and eight redundancy bytes. Note that each byte requires 14T in EFM, with 3T packing between symbols, making 17T.

image

Figure 11.26 shows an overall block diagram of the record modulation scheme used in CD mastering and the corresponding replay system or data separator. The input to the record channel coder consists of sixteen-bit audio samples that are divided in two to make symbols of eight bits. These symbols are used in the error-correction system that interleaves them and adds redundant symbols. For every twelve audio symbols, there are four symbols of redundancy, but the channel coder is not concerned with the sequence or significance of the symbols and simply records their binary code values.

Symbols are provided to the coder in eight-bit parallel format, with a symbol clock. The symbol clock is obtained by dividing down the 4.3218 MHz T rate clock by a factor of 17. Each symbol is used to address the lookup table, which outputs the corresponding fourteen-bit channel pattern in parallel into a shift register. The T rate clock then shifts the channel bits along the register. The lookup table also outputs data corresponding to the digital sum value (DSV) of the fourteen-bit symbol to the packing generator. The packing generator determines whether action is needed between symbols to control DC content, and also checks for run-length violations and potential false sync patterns. As a result of all these criteria, the packing generator loads three channel bits into the space between the symbols, such that the register then contains fourteen-bit symbols with three bits of packing between them. At the beginning of each frame, the sync pattern is loaded into the register just before the first symbol is looked up, in such a way that the packing bits are correctly calculated between the sync pattern and the first symbol.
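
The decision the packing generator has to make can be pictured with a highly simplified sketch. The candidate patterns, the run-length test and the DSV bookkeeping below are illustrative only and are not the logic of any real mastering encoder; in particular the false-sync check is omitted.

```python
# Illustrative packing-bit choice between two EFM symbols: keep the junction
# within the 3T..11T run-length limits and pick the candidate that keeps the
# running digital sum value (DSV) closest to zero.
CANDIDATES = ['000', '001', '010', '100']   # at most one packing transition

def junction_legal(prev_bits, packing, next_bits):
    joined = prev_bits + packing + next_bits
    ones = [i for i, b in enumerate(joined) if b == '1']
    gaps = [b - a for a, b in zip(ones, ones[1:])]
    return all(3 <= g <= 11 for g in gaps)

def dsv_of(bits, level):
    # NRZI convention: a channel one toggles the level; every bit period
    # then contributes +1 or -1 to the DSV.
    dsv = 0
    for b in bits:
        if b == '1':
            level = -level
        dsv += level
    return dsv, level

def choose_packing(prev_bits, next_bits, level, running_dsv):
    best = None
    for p in CANDIDATES:
        if not junction_legal(prev_bits, p, next_bits):
            continue
        delta, _ = dsv_of(p + next_bits, level)
        score = abs(running_dsv + delta)
        if best is None or score < best[1]:
            best = (p, score)
    return best[0] if best else '000'
```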

Figure 11.26 Overall block diagram of the EFM encode/decode process. A MiniDisc will contain both. A CD player only has the decoder; the encoding is in the mastering cutter.

image

A channel bit one indicates that a transition should be generated, and so the serial output of the shift register is fed to the JK bistable along with the T rate clock. The output of the JK bistable is the ideal channel-coded waveform containing transitions separated by 3T to 11T. It is a self-clocking, run-length-limited waveform. The channel bits and the T rate clock have done their job of changing the state of the JK bistable and do not pass further on. At the output of the JK the sync pattern is simply two 11T run lengths in series. At this stage the run-length-limited waveform is used to control the acousto-optic modulator in the cutter.
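
The action of the JK bistable can be modelled in a few lines. This is only a sketch of the principle: a channel one toggles the output level, which is then held for one T period, so the run lengths of the resulting waveform equal the spacings between ones.

```python
# Model of the JK bistable: each channel one generates a transition, and the
# level is held for one T period, giving a run-length-limited waveform.
def channel_bits_to_waveform(channel_bits, level=+1):
    waveform = []
    for bit in channel_bits:
        if bit == 1:
            level = -level      # a one means "generate a transition here"
        waveform.append(level)  # hold the level for one channel bit period
    return waveform

# A made-up pattern with ones 3 and 6 periods apart gives runs of 3T and 6T.
print(channel_bits_to_waveform([1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0]))
```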

The resist master is developed and used to create stampers. The resulting disks can then be replayed. The track velocity of a given CD is constant, but the rotational speed depends upon the radius. In order to get into lock, the disk must be spun at roughly the right track speed. This is done using the run-length limits of the recording. The pickup is focused and the tracking is enabled. The replay waveform from the pickup is passed through a high-pass filter to remove level variations due to contamination, and sliced to return it to a binary waveform. The slicing level is self-adapting, as Figure 11.23 showed, so that a 50 per cent duty cycle is obtained. The slicer output is then sampled by the unlocked VCO running at approximately T rate. If the disk is running too slowly, the longest run length on the disk will appear as more than 11T, whereas if the disk is running too fast, the shortest run length will appear as less than 3T. As a result the disk speed can be brought to approximately the right speed and the VCO will then be able to lock to the clock content of the EFM waveform from the slicer. Once the VCO is locked, it will be possible to sample the replay waveform at the correct T rate. The output of the sampler is then differentiated and the channel bits reappear and are fed into the shift register. The sync pattern detector will then function to reset the deserialization counter, which allows the 14T symbols to be identified. The 14T symbols are then decoded to eight bits in the reverse coding table.
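
One way in which the run-length limits might be used to trim the disk speed before the VCO locks is sketched below. The measurement method and the averaging are purely illustrative rather than taken from any particular player; the principle is simply that if the longest observed run exceeds 11T the disk is slow, and if the shortest exceeds 3T by the same proportion the same conclusion follows.

```python
# Illustrative coarse speed estimate from run lengths measured against a
# free-running clock of nominally correct frequency. The estimate is only
# meaningful if the sample of runs actually contains 3T and 11T examples.
def speed_ratio_estimate(measured_runs_T):
    longest, shortest = max(measured_runs_T), min(measured_runs_T)
    from_long = longest / 11.0      # > 1.0 implies the disk is running slow
    from_short = shortest / 3.0     # > 1.0 likewise implies running slow
    return (from_long + from_short) / 2.0

print(speed_ratio_estimate([12.1, 3.3, 7.7]))   # about 1.1 -> 10 per cent slow
```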

Figure 11.27 reveals the timing relationships of the CD format. The sampling rate of 44.1 kHz with sixteen-bit words in left and right channels results in an audio data rate of 176.4 kbytes/s (k = 1000 here, not 1024). Since there are 24 audio bytes in a data frame, the frame rate will be:

176 400 bytes/s ÷ 24 bytes per frame = 7350 frames per second

Figure 11.27 CD timing structure.

image

If this frame rate is divided by 98, the number of frames in a subcode block, the subcode block or sector rate of 75 Hz results. This frequency can be divided down to provide a running-time display in the player. Note that this is the frequency of the wavy grooves in recordable MDs.

If the frame rate is multiplied by 588, the number of channel bits in a frame, the master clock rate of 4.3218 MHz results. From this the maximum and minimum frequencies in the channel, 720 kHz and 196 kHz, can be obtained using the run-length limits of EFM.
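
These figures are easy to verify from the quantities already given; a minimal arithmetic check, using no numbers other than those in the text:

```python
frame_rate = 176_400 / 24        # bytes per second / audio bytes per frame
print(frame_rate)                # 7350 frames per second
print(frame_rate / 98)           # 75 Hz subcode block (sector) rate
print(frame_rate * 588)          # 4 321 800 Hz master clock

# A repeating run of nT is a square wave of period 2nT, giving the channel
# frequency limits quoted above.
print(4_321_800 / (2 * 3))       # about 720 kHz for 3T runs
print(4_321_800 / (2 * 11))      # about 196 kHz for 11T runs
```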

11.12 Error-correction strategy

This section discusses the track structure of CD in detail. The track structure of MiniDisc is based on that of CD and the differences will be noted in the next section.

Each sync block was seen in Figure 11.25 to contain 24 audio bytes, but these are non-contiguous owing to the extensive interleave.13–15 There are a number of interleaves used in CD, each of which has a specific purpose. The full interleave structure is shown in Figure 11.28. The first stage of interleave is to introduce a delay between odd and even samples. The effect is that uncorrectable errors cause odd samples and even samples to be destroyed at different times, so that interpolation can be used to conceal the errors, with a reduction in audio bandwidth and a risk of aliasing. The odd/even interleave is performed first in the encoder, since concealment is the last function in the decoder. Figure 11.29 shows that an odd/even delay of two blocks permits interpolation in the case where two uncorrectable blocks leave the error-correction system.

Figure 11.28 CD interleave structure.

image

Figure 11.29 Odd/even interleave permits the use of interpolation to conceal uncorrectable errors.

image
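
The interpolation made possible by the odd/even interleave can be illustrated with a toy example. The sketch below is not the concealment algorithm of any particular player; it simply shows linear interpolation across an isolated bad sample, which is all that is needed once the interleave guarantees that neighbouring samples survive.

```python
# Toy concealment: replace a flagged sample by the average of its neighbours,
# which works because the odd/even interleave prevents adjacent samples being
# lost together in an uncorrectable error.
def conceal(samples, bad):
    out = list(samples)
    for i in bad:
        if 0 < i < len(samples) - 1 and (i - 1) not in bad and (i + 1) not in bad:
            out[i] = (samples[i - 1] + samples[i + 1]) // 2
    return out

print(conceal([10, 20, 999, 40, 50], bad=[2]))   # the corrupt 999 becomes 30
```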

Left and right samples from the same instant form a sample set. As the samples are sixteen bits, each sample set consists of four bytes, AL, BL, AR, BR. Six sample sets form a 24-byte parallel word, and the C2 encoder produces four bytes of redundancy, Q. By placing the Q symbols in the centre of the block, the odd/even distance is increased, permitting interpolation over the largest possible error burst. The 28 bytes are now subjected to differing delays, which are integer multiples of four blocks. This produces a convolutional interleave, where one C2 codeword is stored in 28 different blocks, spread over a distance of 109 blocks.

At one instant, the C1 encoder will be presented with 28 bytes that have come from 28 different C2 codewords. The C1 encoder produces a further four bytes of redundancy, P. Thus the C1 and C2 codewords are produced by crossing an array in two directions. This is known as crossinterleaving. The final interleave is an odd/even output symbol delay, which causes P codewords to be spread over two blocks on the disk, as shown in Figure 11.30. This mechanism prevents small random errors destroying more than one symbol in a P codeword. The choice of eight-bit symbols in EFM assists this strategy. The expressions in Figure 11.28 determine how the interleave is calculated. Figure 11.31 shows an example of the use of these expressions to calculate the contents of a block and to demonstrate the crossinterleave.
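
The spread of 109 blocks follows directly from the delay structure. A small sketch, assuming delays that are the successive multiples of four blocks mentioned above, from 0 up to 108:

```python
# One C2 codeword has 28 symbols; give them delays of 0, 4, 8, ... 108 blocks
# and see how many blocks the codeword ends up touching.
delays = [4 * k for k in range(28)]
print(max(delays))                     # 108, the maximum delay
print(max(delays) - min(delays) + 1)   # 109 blocks spanned by one codeword
```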

The calculation of the P and Q redundancy symbols is made using Reed–Solomon polynomial division. The P redundancy symbols are primarily for detecting errors, to act as pointers or error flags for the Q system. The P system can, however, correct single-symbol errors.

Figure 11.30 The final interleave of the CD format spreads P codewords over two blocks. Thus any small random error can only destroy one symbol in one codeword, even if two adjacent symbols in one block are destroyed. Since the P code is optimized for single-symbol error correction, random errors will always be corrected by the C1 process, maximizing the burst-correcting power of the C2 process after de-interleave.

image

Figure 11.31 Owing to crossinterleave, the 28 symbols from the Q encode process (C2) are spread over 109 blocks, shown hatched. The final interleave of P codewords (as in Figure 11.30) is shown shaded. The result of the latter is that the Q codeword has 5, 3, 5, 3 spacing rather than 4, 4.

image

11.13 Track layout of MD

MD uses the same channel code and error-correction interleave as CD for simplicity, and the sectors are exactly the same size. The interleave of CD is convolutional, which is not a drawback in a continuous recording. However, MD uses random access and the recording may be discontinuous. Figure 11.32 shows that rewriting a sector would prevent error correction in the area of the edit. The solution is to use a buffering zone in the area of an edit where the convolution can begin and end. This is the job of the link sectors. Figure 11.33 shows the layout of data on a recordable MD. In each cluster of 36 sectors, 32 are used for encoded audio data. One is used for subcode and the remaining three are link sectors. The cluster is the minimum data quantum which can be recorded and represents just over two seconds of decoded audio. The cluster must be recorded continuously because of the convolutional interleave. Effectively the link sectors form an edit gap which is large enough to absorb both mechanical tolerances and the interleave overrun when a cluster is rewritten. One or more clusters will be assembled in memory before writing to the disk is attempted.

Figure 11.32 The convolutional interleave of CD is retained in MD, but buffer zones are needed to allow the convolution to finish before a new one begins, otherwise editing is impossible.

image

Prerecorded MDs are recorded at one time, and need no link sectors. In order to keep the format consistent between the two types of MiniDisc, three extra subcode sectors are made available. As a result it is not possible to record the entire audio and subcode of a prerecorded MD onto a recordable MD because the link sectors cannot be used to record data.

The ATRAC coder produces what are known as sound groups. Figure 11.33 shows that these contain 212 bytes for each of the two audio channels and are the equivalent of 11.6 milliseconds of real-time audio. Eleven of these sound groups will fit into two standard CD sectors with 20 bytes to spare. The 32 audio data sectors in a cluster thus contain a total of 16 × 11 = 176 sound groups.
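
The cluster arithmetic can be checked with the figures just given; the only numbers used below are those quoted in the text.

```python
sound_group_ms    = 11.6     # playing time of one sound group
groups_per_pair   = 11       # eleven sound groups in two sectors
audio_sectors     = 32       # audio sectors in a 36-sector cluster

groups_per_cluster = (audio_sectors // 2) * groups_per_pair
print(groups_per_cluster)                          # 176 sound groups
print(groups_per_cluster * sound_group_ms / 1000)  # about 2.04 s of audio
```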

Figure 11.33 Format of MD uses clusters of sectors including link sectors for editing. Prerecorded MDs do not need link sectors, so more subcode capacity is available. The ATRAC coder of MD produces the sound groups shown here.

image

11.14 Player structure

The physics of the manufacturing process and the readout mechanism have been described, along with the format on the disk. Here, the details of actual CD and MD players will be explained. One of the design constraints of the CD and MD formats was that the construction of players should be straightforward, since they were to be mass-produced. Figure 11.34 shows the block diagram of a typical CD player, and illustrates the essential components. The most natural division within the block diagram is into the control/servo system and the data path. The control system provides the interface between the user and the servo-mechanisms, and performs the logical interlocking required for safety and the correct sequence of operation.

The servo systems include any power-operated loading drawer and chucking mechanism, the spindle-drive servo, and the focus and tracking servos already described. Power loading is usually implemented on players where the disk is placed in a drawer. Once the drawer has been pulled into the machine, the disk is lowered onto the drive spindle, and clamped at the centre, a process known as chucking. In the simpler top-loading machines, the disk is placed on the spindle by hand, and the clamp is attached to the lid so that it operates as the lid is closed.

Figure 11.34 Block diagram of CD player showing the data path (broad arrow) and control/servo systems.

image

The lid or drawer mechanisms have a safety switch which prevents the laser operating if the machine is open. This is to ensure that there can be no conceivable hazard to the user. In actuality there is very little hazard in a CD pickup. This is because the beam is focused a few millimetres away from the objective lens, and beyond the focal point the beam diverges and the intensity falls rapidly. It is almost impossible to position the eye at the focal point when the pickup is mounted in the player, but it would be foolhardy to attempt to disprove this.

The data path consists of the data separator, timebase correction and the de-interleaving and error-correction process, followed by the error-concealment mechanism. This results in a sample stream that is fed to the convertors. The data separator that converts the readout waveform into data was detailed in the description of the CD channel code. In both CD and MD the separated output consists of subcode bytes, audio samples, redundancy and a clock. The data stream and the clock will contain speed variations due to disk runout and chucking tolerances, and these have to be removed by a timebase corrector.

The timebase corrector is a memory addressed by counters arranged to overflow, giving the memory a ring structure as described in Chapter 3. Writing into the memory is done using clocks from the data separator, whose frequency rises and falls with runout, whereas reading is done using a crystal-controlled clock, which removes speed variations from the samples and makes wow and flutter unmeasurable. The timebase corrector will only function properly if the two addresses are kept apart. This implies that the long-term data rate from the disk must equal the crystal-clock rate. The disk speed must be controlled to ensure that this is always true, and there are several ways in which it can be done.

The data-separator clock counts samples from the disk. By phase-comparing this clock with the crystal reference, the phase error can be used to drive the spindle motor. The alternative approach is to analyse the address relationship of the timebase corrector. If the disk is turning too fast, the write address will move towards the read address; if the disk is turning too slowly, the write address moves away from the read address. Subtraction of the two addresses produces an error signal which can be fed to the motor. In these systems, the speed of the motor is unimportant. The important factor is that the sample rate is correct, and the system will drive the spindle at whatever speed is necessary to achieve the correct rate. As the disk cutter produces constant bit density along the track by reducing the rate of rotation as the track radius increases, the player will automatically duplicate that speed reduction. The actual linear velocity of the track will be the same as the velocity of the cutter, and although this will be constant for a given disk, it can vary between 1.2 and 1.4 m/s on different disks.
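
The second approach can be modelled very simply: the separation of the write and read addresses of the ring memory, compared with a target of half the ring, is itself the speed error. The memory size and target below are arbitrary choices for the sketch, not values from any real player.

```python
# Toy spindle-error signal derived from timebase-corrector addressing. The
# write pointer advances at the off-disk rate, the read pointer at the crystal
# rate; their separation around the ring is compared with the half-full target.
RING_SIZE = 2048              # samples (arbitrary for this sketch)
TARGET    = RING_SIZE // 2    # aim to keep the pointers half a ring apart

def spindle_error(write_addr, read_addr):
    separation = (write_addr - read_addr) % RING_SIZE
    return separation - TARGET    # positive -> disk too fast, slow it down

print(spindle_error(1500, 300))   # separation 1200 -> error +176
```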

An alternative method is used in more recent drives, especially those with antishock mechanisms or the capability to play DVDs as well. Here the disk speed is high and poorly controlled. A large buffer memory is used and this soon fills up owing to the high speed. To prevent overflow, the player pickup is made to jump back one track per revolution so that the data flow effectively ceases. Once the memory has emptied enough, the skipping will stop and replay of the disk track will continue from exactly the right place. An undetectable edit of the replay data takes place in the memory.

Owing to the use of constant linear velocity, the disk speed will be wrong if the pickup is suddenly made to jump to a different radius using manual search controls. This may force the data separator out of lock, and the player will mute briefly until the correct track speed has been restored, allowing the PLO to lock again. This can be demonstrated with most players, since it follows from the format.

Following data separation and timebase correction, the error-correction and de-interleave processes take place. Because of the crossinterleave system, there are two opportunities for correction: first, using the C1 redundancy prior to de-interleaving, and second, using the C2 redundancy after de-interleaving. In Chapter 6 it was shown that interleaving is designed to spread the effects of burst errors among many different codewords, so that the errors in each are reduced. However, the process can be impaired if a small random error, due perhaps to an imperfection in manufacture, occurs close to a burst error caused by surface contamination. The function of the C1 redundancy is to correct single-symbol errors, so that the power of interleaving to handle bursts is undiminished, and to generate error flags for the C2 system when a gross error is encountered.

The EFM coding is a group code, which means that a small defect that changes one channel pattern into another will have corrupted up to eight data bits. In the worst case, if the small defect is on the boundary between two channel patterns, two successive bytes could be corrupted. However, the final odd/even interleave on encoding ensures that the two bytes damaged will be in different C1 codewords; thus a random error can never corrupt two bytes in one C1 codeword, and random errors are therefore always correctable by C1. From this it follows that the maximum size of a defect considered random is 17T, or about 3.9 μs, which corresponds to roughly a 5 μm length of the track. Errors of greater size are, by definition, burst errors.

The de-interleave process is achieved by writing sequentially into a memory and reading out using a sequencer. The RAM can perform the function of the timebase corrector as well. The size of memory necessary follows from the format; the amount of interleave used is a compromise between the resistance to burst errors and the cost of the de-interleave memory. The maximum delay is 108 blocks of 28 bytes, and the minimum delay is negligible, so the average delay is 54 blocks. It follows that a memory capacity of at least 54 × 28 = 1512 bytes is necessary. Allowing a little extra for timebase error, odd/even interleave and error flags transmitted from C1 to C2, the convenient capacity of 2048 bytes is reached. Players with a shockproof mechanism will naturally require much more memory.
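
The memory estimate is a one-line calculation using the delays already quoted:

```python
max_delay_blocks = 108      # largest delay in the convolutional interleave
bytes_per_block  = 28       # symbols per block at this point in the chain

average_storage = (max_delay_blocks // 2) * bytes_per_block
print(average_storage)      # 1512 bytes, rounded up in practice to 2048
```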

The C2 decoder is designed to locate and correct a single-symbol error, or to correct two symbols whose locations are known. The former case occurs very infrequently, as it implies that the C1 decoder has miscorrected. However, the C1 decoder works before de-interleave, and there is no control over the burst-error size that it sees. There is a small but finite probability that random data in a large burst could produce the same syndrome as a single error in good data. This would cause C1 to miscorrect, and no error flag would accompany the miscorrected symbols. Following de-interleave, the C2 decode could detect and correct the miscorrected symbols as they would now be single-symbol errors in many codewords. The overall miscorrection probability of the system is thus quite minute. Where C1 detects burst errors, error flags will be attached to all symbols in the failing C1 codeword. After de-interleave in the memory, these flags will be used by the C2 decoder to correct up to two corrupt symbols in one C2 codeword. Should more than two flags appear in one C2 codeword, the errors are uncorrectable, and C2 flags the entire codeword bad, and the interpolator will have to be used. The final odd/even sample de-interleave makes interpolation possible because it displaces the odd corrupt samples relative to the even corrupt samples.
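
The disposition logic of the C2 stage, as described above, can be summarized in a few lines. The Reed–Solomon arithmetic itself is omitted; this sketch only reflects the flag-counting decisions in the text.

```python
# C2 decision per codeword, driven by the error flags passed on from C1.
def c2_disposition(flag_count):
    if flag_count == 0:
        return "no flags: can still locate and correct a single error"
    if flag_count <= 2:
        return "correct the flagged symbols as erasures"
    return "uncorrectable: flag the whole codeword for concealment"

for flags in (0, 2, 5):
    print(flags, "->", c2_disposition(flags))
```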

If the rate of bad C2 codewords is excessive, the correction system is being overwhelmed, and the output must be muted to prevent unpleasant noise. Unfortunately the audio cannot be muted simply by switching the sample values to zero, as this would produce a click. It is necessary to fade down to the mute condition gradually by multiplying sample values by descending coefficients, usually in the form of a half-cycle of a cosine wave. This gradual fadeout requires some advance warning, in order to be able to fade out before the errors arrive. This is achieved by feeding the audio to the fader through a delay. The mute status bypasses the delay, and allows the fadeout to begin sufficiently in advance of the error. The final output samples of this system will be correct, interpolated or muted, and these can then be sent to the convertors in the player.
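
A half-cycle cosine fade of the kind described is trivial to generate; the fade length below is an arbitrary choice for the sketch, and the coefficients would be applied to the delayed audio so that the fade completes before the flagged samples arrive.

```python
import math

# Descending half-cycle cosine coefficients, from 1.0 down to 0.0.
def fade_out_coefficients(length):
    return [0.5 * (1.0 + math.cos(math.pi * n / (length - 1)))
            for n in range(length)]

coeffs = fade_out_coefficients(8)
print([round(c, 3) for c in coeffs])     # 1.0 ... 0.0
print([int(1000 * c) for c in coeffs])   # applied to a constant input of 1000
```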

The power of the CD error correction is such that damage to the disk generally results in mistracking before the correction limit is reached. There is thus no point in making it more powerful. CD players vary tremendously in their ability to track imperfect disks and expensive models are not automatically better. It is generally a good idea when selecting a new player to take along some marginal disks to assess tracking performance.

The control system of a CD player is inevitably microprocessor-based, and as such does not differ greatly in hardware terms from any other microprocessor-controlled device. Operator controls will simply interface to processor input ports, and the various servo systems will be enabled or overridden by output ports. Software, or more correctly firmware, connects the two. The necessary controls are Play and Eject, with the addition in most players of at least Pause and some buttons which allow rapid skipping through the program material.

Although machines vary in detail, the flowchart of Figure 11.35 shows the logic flow of a simple player, from Start being pressed to sound emerging. At the beginning, the emphasis is on bringing the various servos into operation. Towards the end, the disk subcode is read in order to locate the beginning of the first section of the program material. When track-following, the tracking-error feedback loop is closed, but for track crossing, in order to locate a piece of music, the loop is opened, and a microprocessor signal forces the laser head to move. The tracking error becomes an approximate sinusoid as tracks are crossed. The cycles of tracking error can be counted as feedback to determine when the correct number of tracks have been crossed. The ‘mirror’ signal obtained when the readout spot is half a track away from target is used to brake pickup motion and re-enable the track-following feedback.
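
The cycle-counting idea can be sketched as follows; the threshold and the use of rising edges are illustrative choices, the point being simply that one cycle of tracking error corresponds to one track crossed.

```python
import math

# Count tracks crossed while the tracking loop is open by counting rising
# zero-crossings of the (approximately sinusoidal) tracking-error signal.
def count_track_crossings(tracking_error, threshold=0.0):
    crossings = 0
    previous = tracking_error[0]
    for sample in tracking_error[1:]:
        if previous <= threshold < sample:
            crossings += 1
        previous = sample
    return crossings

signal = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
print(count_track_crossings(signal))   # 5 cycles -> 5 tracks crossed
```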

The control system of a professional player for broadcast use will be more complex because of the requirement for accurate cueing. Professional machines will make extensive use of subcode for rapid access, and in addition are fitted with a hand-operated rotor which simulates turning a vinyl disk by hand. In this mode the disk constantly repeats the same track by performing a single track jump once every revolution. Turning the rotor moves the jump point to allow a cue point to be located. The machine will commence normal play from the cue point when the start button is depressed or from a switch on the audio fader. An interlock is usually fitted to prevent the rather staccato cueing sound from being broadcast.

Another variation of the CD player is the so-called Karaoke system, which is essentially a CD jukebox. The literal translation of Karaoke is ‘empty orchestra’; well-known songs are recorded minus vocals, and one can sing along to the disk oneself. This is a popular pastime in Japan, where Karaoke machines are installed in clubs and bars. Consumer machines are beginning to follow this trend, with machines becoming available which can accept several disks at once and play them all without any action on the part of the user. The sequence of playing can be programmed beforehand.

CD changers running from 12 volts are available for remote installation in cars. These can be fitted out of sight in the luggage trunk and controlled from the dashboard. The RAM buffering principle can be employed to overcome skipping caused by road shocks. Personal portable CD players are available, but these have not displaced the personal analog cassette in the youth market. This may be due to the cost of player and disks relative to Compact Cassette. Personal CD players are more of a niche market, being popular with professionals who are more likely to have a quality audio system and CD collection. The same CDs can then be enjoyed whilst travelling. There has been a significant development of such devices which now incorporate anti-shock memory as well as very low power logic which combines with recent developments in battery technology to give remarkable running times.

Figure 11.35 Simple flowchart for the control system, which focuses the pickup, starts the disk, and reads the subcode to locate the first item of program material.

image

Figure 11.36 MiniDisc block diagram. See text for details.

image

Figure 11.37 A DVD player’s essential parts. See text for details.

image

Figure 11.36 shows the block diagram of an MD player. There is a great deal of similarity with a conventional CD player in the general arrangement. Focus, tracking and spindle servos are basically the same, as is the EFM and Reed–Solomon replay circuitry. A combined CD and MD player is easy to build because of this commonality.

The main difference is the presence of recording circuitry connected to the magnetic head, the large buffer memory and the compression codec. Whilst MD machines are capable of accepting 44.1 kHz PCM or analog audio in real time, there is no reason why a twin-spindle machine should not be made which can dub at four to five times normal speed.

In many respects the audio channel of a DVD player is similar in concept to that of MD. Figure 11.37 shows that the DVD bitstream emerging from the error-correction system is a multiplex of audio and video data. These are routed to appropriate decoders. The audio bitstream on a DVD may be compressed according to AC-3 or MPEG Layer II standards. Audio-only DVDs may also be uncompressed or use lossless compression, either of which will offer an improved sound quality when compared with the use of lossy compression.

References

Bouwhuis, G. et al., Principles of Optical Disk Systems, Bristol: Adam Hilger (1985)

Mee, C.D. and Daniel, E.D. (eds), Magnetic Recording, Vol. III, Chapter 6, New York: McGraw-Hill (1987)

Goldberg, N., A high density magneto-optic memory. IEEE Trans. Magn., MAG-3, 605 (1967)

Various authors, Philips Tech. Rev., 40, 149–180 (1982)

Airy, G.B., Trans. Camb. Phil. Soc., 5, 283 (1835)

Ray, S.F., Applied Photographic Optics, Chapter 17, Oxford: Focal Press (1988)

Maréchal, A., Rev. d’Optique, 26, 257 (1947)

Pasman, J.H.T., Optical diffraction methods for analysis and control of pit geometry on optical disks. J. Audio Eng. Soc., 41, 19–31 (1993)

Verkaik, W., Compact Disc (CD) mastering – an industrial process. In Digital Audio, edited by B.A. Blesser, B. Locanthi and T.G. Stockham Jr, New York: Audio Engineering Society 189–195 (1983)

Miyaoka, S., Manufacturing technology of the Compact Disc. In Digital Audio, op. cit., 196–201

Ogawa, H. and Schouhamer Immink, K.A., EFM–the modulation system for the Compact Disc digital audio system. In Digital Audio, op. cit., 117–124

Schouhamer Immink, K.A. and Gross, U., Optimization of low-frequency properties of eight-to-fourteen modulation. Radio Electron. Eng., 53, 63–66 (1983)

Peek, J.B.H., Communications aspects of the Compact Disc digital audio system. IEEE Commun. Mag., 23, 7–15 (1985)

Vries, L.B. et al., The digital Compact Disc–modulation and error correction. Presented at the 67th Audio Engineering Society Convention (New York, 1980), Preprint 1674

Vries, L.B. and Odaka, K., CIRC – the error correcting code for the Compact Disc digital audio system. In Digital Audio, op. cit., 178–186
