1. Computing and Enabling Technologies


Capabilities discussed in this chapter are the basis of important technologies used daily by residential and business customers. They enable:

  • the Internet to store vast amounts of data

  • millions of people to simultaneously access the Internet

  • cellular networks to handle ever-increasing amounts of traffic

  • storage devices to hold massive amounts of data

High-powered computer chips are the building blocks of innovation in the twenty-first century. Their increased memory, speed, and 8-core processing capabilities along with smaller size have made possible innovative technologies affecting industry, personal productivity, and connectivity. Decreases in chip sizes enable people to carry computers in the form of smartphones in their pockets. Computing power, previously only available in personal computers and large centralized computer servers, is now used in smartphones and tablet computers.

These electronics and networks have been available for most of the twenty-first century. However, innovations in the speed, capacity, and amount of memory on computer chips have resulted in higher-capacity fiber-optic networks and denser storage systems.

High-capacity virtualized servers, which store multiple operating systems and applications in a single device in less space, along with high-capacity broadband networks, are the basis of highly reliable cloud computing. Moreover, fiber networks link cloud data centers together so that if one data center fails, another one automatically takes over.

In an effort to more flexibly store applications, cloud companies and large enterprises use containers. Like virtual servers, containers don’t require that each application be held in its own dedicated physical server. Containers break applications into multiple parts so that each part can be modified without changing an entire application. With virtualized servers, entire applications need to be rewritten for every change. The software running containers is usually free, open source software. The hardware is often lower-cost commodity servers.

The growth in capacity and reduced price of broadband networks, powered by innovations in computer chips, multiplexing, and compression, have made images, music, television series, and movies available to millions of people. The images and videos are stored on and accessed from high-capacity computers. The computers, often located in the cloud, hold movies and TV shows offered by services such as Netflix and Hulu.

Computing and enabling technologies have spurred the increased use of cloud computing and streaming movies. Cloud computing has now evolved to a point where many companies and individuals use it, but there are nevertheless challenges for large organizations that use cloud computing. IT staff need skills suited to monitoring applications and moving them to the cloud, and to making sure that applications previously accessed from onsite data centers are compatible with each cloud provider’s infrastructure.

Another enhancement enabled by increased speed and memory in chips is machine learning. Machine learning is the ability of chips with powerful memories to “recognize” patterns of images and changes in order to “learn” how to perform particular tasks. For example, machines can learn to recognize clues in medical images to diagnose medical conditions that previously required skilled physicians to analyze.

For the most part, the technologies and protocols discussed here are not new. Rather, they are more powerful and many of them have been refined to support additional applications. This is the case with the growing use and availability of cloud computing and fiber-optic networks by all segments of developed nations.

Fiber-Optic and Copper Cabling

The three technologies discussed in this section—fiber-optic cabling, multi-core processors, and memory—are the building blocks of modern networks. They enable networks to carry more information faster. Decreasing memory costs have led to affordable personal computers and the ability to store vast amounts of information, accessible via fiber-optic–based networks at lower costs.

Fiber-Optic Cabling: Underpinning High-Speed Networks

Without fiber-optic cabling, modern high-speed data networks would not be possible. Fiber-optic cabling is the glue connecting all high-speed networks to the cloud, and to other continents. Fiber networks are located between cities; undersea in oceans; within cities; and on university, business, and hospital campuses. Sophisticated equipment and computer chips are enabling fiber networks to keep up with the growing demand for capacity on broadband networks.

Demand for bandwidth is a result of cloud computing, subscribers streaming movies and television shows, online gamers, real-time coverage of live sporting events, and international news. People expect instant access to breaking news, social networks such as Facebook, QQ, and Weibo, and entertainment. (Chinese company Tencent owns QQ, and a different Chinese company, Sina, owns Weibo.) Fiber-optic networks support broadband connections to the Internet, which enable streaming TV services such as Netflix and Amazon; streaming music providers including Apple Music, Pandora, and Spotify; and social networks. The world has gotten smaller, and businesses and people expect to be able to remain connected wherever and whenever they want.

Because of dropping prices of installation and lower maintenance costs, network providers including AT&T, Verizon, and CenturyLink are increasingly connecting fiber-optic networks to apartment buildings and even individual homes. Traffic on fiber-optic cabling is not limited to these networks. It is also growing in cellular networks, which use fiber-optic cabling to carry data- and video-heavy smartphone traffic between antennas and mobile providers’ data centers. Fiber networks also carry traffic from cellular networks to the Internet.

Cloud computing is the main reason for the growing demand for capacity on fiber networks. Most students, residential customers, corporations and governments use cloud computing to access some or all of their applications. Fiber-optic networks connect much of the world to applications located in the cloud. Demand for capacity on networks is expected to continue growing as more organizations, developing countries, individuals, and governments use cloud-based applications and stream movies and television shows.

In an interview, Geoff Bennett, Director of Solutions and Technology at the fiber-optic networking company Infinera, stated:

Nothing else has the transmission capacity of optical fiber, and that’s why fiber powers the Internet. There is no technology on the horizon that could replace it.

Information Content Providers: Heavy Users of Fiber

Google, Amazon, Microsoft, and Facebook are information content providers. Information content providers generate enormous amounts of traffic over fiber-optic networks in locations worldwide. Information content providers build and operate massive data centers connected to high-bandwidth fiber-optic networks that carry streams of data to and from millions of subscribers. The data centers are connected to other information content providers’ data centers as well as to the Internet. Facebook operates its data centers in much of the world. Two of its data centers are located in Luleå, a city on the northern coast of Sweden. The data centers in Luleå are linked to each other by fiber-optic networks as well as to a network of Facebook data centers throughout Europe. Infinera built these fiber-optic networks for Facebook. Other fiber-optic equipment providers in addition to Infinera are Ciena, Huawei, and Nokia.

Nearness to customers, which minimizes delays, and the availability of power are two of the criteria information providers consider when selecting locations for their data centers. To get closer to subscribers and large business clients, Internet content providers locate data centers in large cities as well as in locations near plentiful electricity sources such as Luleå, Sweden. Sweden offers the advantage of requiring less power for cooling because of its cold climate. Google has bought many buildings in New York City and other major cities so that it can be closer to its many customers in these cities. This minimizes delays in responses to users that access Google.

Fiber links to duplicate data centers enable cloud-based data centers to function even if one of their centers is disabled due to an equipment or software malfunction or a natural disaster such as a hurricane. Importantly, fiber links enable cloud-based data centers to continuously back up their entire sites in real time to ensure that customer data is not lost during disasters or malfunctions. The duplicate data center may be hundreds of miles away or nearby within the same city.

Each data center that is linked to another data center and to subscribers has redundant fiber cabling connected to its facility so that if one group of fiber cables is damaged or cut, the other is still intact. For example, one bundle of fibers may be located on the eastern side of the building and another on the western side. Duplicate power sources are also installed at these data centers.

Splitting Capacity of Individual Fiber Strands into Wavelengths

Dense wavelength division multiplexers (DWDMs) add capacity to individual strands of fiber by dividing each strand into numerous channels called wavelengths. See Figure 1-1 below. Dense wavelength division multiplexers divide the fiber into wavelengths using frequency division multiplexing, where each stream of light uses a different frequency within the fiber strand. In some networks, each wavelength is capable of transmitting a 100Gbps stream of light pulses.


Figure 1-1 Representation of dense wavelength division multiplexers used for connections between data centers. (Based on image courtesy of Infinera)

The distance between the highest points of successive waves equates to the size of the wavelength. Wavelengths are so small that they are measured in nanometers. A nanometer equals one billionth of a meter; a nanometer-sized object is not visible to the human eye. An individual fiber strand is thinner than a strand of hair or a sheet of paper.

The Cloud Xpress is a first-generation dense wavelength division multiplexer that supports an aggregate speed of 500Gbps. The newer, second-generation Cloud Xpress 2 is smaller and faster. Its capacity is 1.2 terabits per second, and it requires only one RU (rack unit), about 1.75 inches (44.45 millimeters) high, in a 19-inch-wide computer rack. The older Cloud Xpress takes up 2RU, essentially two vertical shelf units.
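
As a rough illustration of this density gain, the short Python sketch below uses only the figures quoted above (500Gbps in 2RU versus 1.2Tbps in 1RU) to compare capacity per rack unit. The numbers are the text's examples, not vendor data sheet values.

```python
# Compare DWDM capacity density using the figures quoted in the text.
# Values are illustrative, taken from the paragraph above, not vendor data sheets.

RU_HEIGHT_INCHES = 1.75  # one rack unit is about 1.75 inches high

systems = {
    "Cloud Xpress (1st generation)":   {"capacity_gbps": 500,  "rack_units": 2},
    "Cloud Xpress 2 (2nd generation)": {"capacity_gbps": 1200, "rack_units": 1},
}

for name, spec in systems.items():
    per_ru = spec["capacity_gbps"] / spec["rack_units"]
    height = spec["rack_units"] * RU_HEIGHT_INCHES
    print(f"{name}: {per_ru:.0f} Gbps per rack unit, {height:.2f} inches of rack space")
```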

Lasers—Lighting Fiber

Lasers (light amplification by stimulated emission of radiation) are the source of light on long-distance fiber networks and on fiber networks within cities. A laser turns light on and off to create the pulses carried over fiber cables. Most outside fiber-optic cabling is connected to lasers that generate its light pulses. Due to improvements in manufacturing and chips, lasers are now so small that they are available as computer chips integrated into line cards connected to fiber-optic cables.

Super Channels—For Even More Capacity

Super channels are an advanced form of dense wavelength division multiplexing. Super channels essentially combine multiple gigabit-per-second wavelengths to achieve higher gigabit or terabit speeds on a single circuit card. In some systems, one wavelength can transmit at 100 gigabits per second (100Gbps). For example, a fiber used for sending can combine five 100Gbps wavelengths to transmit at 500Gbps. The fiber sending in the other direction can also transmit at 500Gbps, for a total of a terabit of capacity. Thus, super channels enable fiber to reach terabits (1,000 gigabits) of capacity.
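
The arithmetic behind a super channel can be sketched as follows, using the example figures from this paragraph (five 100Gbps wavelengths per direction). This is a back-of-the-envelope illustration, not a description of any particular vendor's implementation.

```python
# Super channel capacity: multiple wavelengths grouped on one circuit card.
# Figures follow the example in the text: five 100Gbps wavelengths per direction.

wavelengths_per_direction = 5
gbps_per_wavelength = 100

one_direction_gbps = wavelengths_per_direction * gbps_per_wavelength   # 500 Gbps
both_directions_gbps = 2 * one_direction_gbps                          # 1,000 Gbps

print(f"Each direction: {one_direction_gbps} Gbps")
print(f"Send + receive combined: {both_directions_gbps} Gbps "
      f"({both_directions_gbps / 1000:.1f} terabits per second)")
```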

Coherent fiber optics is one of the technologies that enable high-bandwidth transmissions above 40Gbps. Coherent fiber transmissions achieve high bandwidth through the use of error correction bits in transmitted light pulses and through lasers located in receivers. At the receiving end, receivers are equipped with lasers tuned to the exact same frequency and shape as the transmitted wavelength. In the fiber-optic networks that optical network providers build, each wavelength has an associated laser in the receiver.

Receivers use sophisticated algorithms to correct for impairments in the signals sent from lasers associated with each wavelength. Coherent transmission is an important enabler of high-bandwidth transmission in long-haul networks because increasing the speed of transmissions causes additional impairments. Long-haul networks are those connecting cities and countries. Undersea fiber-optic cables that connect continents to each other are also examples of long-haul networks.

Half Duplex: Sending and Receiving on Separate Cables

Fiber-optic cables are half duplex: each fiber strand is capable of sending in one direction only. Consequently, one strand is needed for sending and a separate strand for transmitting back in the other direction.

In contrast, copper cabling is full duplex. Within buildings a single copper cable is connected to each computer and printer. This single cable is used both to send and receive signals. Moreover, cable modems and routers within people’s homes require just one copper cable to both send and receive e-mail messages and files to and from the Internet. See Table 1-1 for examples of half vs. full duplex.

Table 1-1 Half Duplex vs. Full Duplex

| Half and Full Duplex | Type of Cabling for Each | Where Deployed |
| --- | --- | --- |
| Half duplex | Separate cables for sending and receiving | Locations with fiber or copper cabling |
| Examples of half duplex | Fiber networks | Fiber-optic cabling in large data centers; fiber-optic cabling between floors in many apartment buildings and businesses; fiber-optic networks between and within cities |
| Full duplex | Send and receive on same cable | Locations with copper cabling |
| Examples of full duplex | Copper cabling-based networks | Cables to homes, desktop computers, and printers |

Amplifiers vs. Regenerators

Although signals on fiber-optic cabling travel longer distances than those on copper cabling, the signals do fade; they weaken over long distances. In addition to fading, imperfections in the form of noise impede the signals over distance. Thus, equipment must be added to fiber-optic networks both to boost the signals when they fade and to eliminate the noise. Amplifiers and regenerators perform these tasks. See Table 1-2 for a comparison between amplifiers and regenerators.

Table 1-2 Amplifiers vs. Regenerators

| | Distance when Needed | Function | Cost Comparison |
| --- | --- | --- | --- |
| Amplifier | 80 to 120 kilometers (49 to 74.4 miles) | Boosts optical amplitude of all wavelengths when they fade | Lower cost |
| Regenerator | 3,000 to 4,000 kilometers (1,860 to 2,480 miles) | Removes noise and other impairments; regenerates each wavelength separately | More costly than amplification |

Amplifiers are needed in fiber-optic networks every 80 to 120 kilometers (49 to 74.4 miles). They boost the signals’ amplitude on all the wavelengths of an entire fiber in one operation. Amplitude refers to the height of a signal at the highest point of the wavelength; it is analogous to loudness or strength.

Regenerators are needed less frequently than amplifiers, about every 3,000 to 4,000 kilometers (1,860 to 2,480 miles). Regeneration is more costly than amplification because each wavelength must be regenerated individually rather than regenerating the entire fiber and all the wavelengths in one operation.
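
A rough way to see why regeneration adds less equipment than amplification on a long route is to count how many of each a span needs. The sketch below assumes the spacing figures in Table 1-2 (amplifiers every 80 to 120 km, regenerators every 3,000 to 4,000 km) and a hypothetical 4,000 km long-haul route.

```python
import math

# Hypothetical long-haul route; spacing figures come from Table 1-2.
route_km = 4000

amp_spacing_km = (80, 120)        # amplifier needed every 80 to 120 km
regen_spacing_km = (3000, 4000)   # regenerator needed every 3,000 to 4,000 km

def units_needed(length_km: float, spacing_km: float) -> int:
    """Number of devices needed along the route at the given spacing."""
    return math.ceil(length_km / spacing_km) - 1  # none needed at the endpoints

amps = [units_needed(route_km, s) for s in amp_spacing_km]
regens = [units_needed(route_km, s) for s in regen_spacing_km]

print(f"Amplifiers on a {route_km} km route: roughly {min(amps)} to {max(amps)}")
print(f"Regenerators on the same route: roughly {min(regens)} to {max(regens)}")
```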

Fiber-Optic Cabling—No Electromagnetic Interference, Smaller and Lighter than Copper

In contrast to copper cabling, fiber-optic networks have more capacity and they can carry the signals longer distances without amplification. In addition, fiber-optic networks are less costly to maintain once they are installed. This is because bits are carried on non-electrical light pulses. These pulses are analogous to the light emitted by quickly turning a flashlight on and off. Because the pulses are non-electrical, there is no resistance from electromagnetic interference (EMI).

Resistance associated with EMI in networks made up of copper cabling causes fading over relatively short distances of about 2 miles. This is analogous to water pressure from a hose weakening over short distances. Consequently, amplifiers are needed every mile and a half to 2 miles to boost electrical signals carried on copper-based networks. The absence of resistance from electrical interference is fiber’s main advantage over copper cabling. Moreover, once installed, amplifiers must periodically be replaced and maintained.

Because data on fiber-optic cabling is carried as non-electric pulses of light, amplification is only needed after 80 to 120 kilometers (49 to 74.4 miles). This means that in many short fiber runs to homes or apartment buildings, no amplification is needed. In long-haul networks, those between cities, states, and countries, fewer amplifiers are required to boost optical signals after they have faded than are required in copper networks.

The non-electric light pulses on fiber-optic cabling can travel 3,000 to 4,000 kilometers (1,860 to 2,480 miles) before having to be regenerated. This is another savings in labor because fewer technicians are needed to install regenerators. This lowers the cost for network providers that lay miles of fiber between cities, within cities, and under the ocean. In short fiber runs, regeneration is not needed if the cable runs are less than these distances.

Lighter and Smaller than Copper Cabling

In addition to the absence of electrical interference, fiber-optic cabling is lighter and smaller than copper cabling. Because of its size, it takes up less space than copper cabling in conduits (hollow pipes in which cables are run) located under streets. This is important in large cities, where conduits are often filled to capacity with older copper cabling. A university in downtown Boston wanted to connect two buildings by running fiber-optic cabling under a public street between them. This was a problem because there was no space left in the existing underground conduit, which was already filled with copper cabling. The university needed permission from the city’s authorities to dig up the street to add a new conduit, and it had to pay for digging up the street in addition to the cost of laying the new conduit for the fiber-optic cabling between its two buildings.

If the conduit had fiber cabling, it’s likely that space would have been available because of fiber’s smaller diameter.

More Capacity than Copper Cabling

A significant advantage of fiber-optic cabling is its enormous capacity compared to copper cabling and mobile services. Light signals on optical cabling pulse on and off at such high data rates that they are able to handle vastly greater amounts of information than any other media and the capacity can be multiplexed into wavelengths. See Table 1-3 for a comparison of copper and fiber optic cabling.

Table 1-3 Summary of Fiber-Optic Cabling Compared to Copper Cabling

| Characteristic | Fiber-Optic Cabling | Copper Cabling |
| --- | --- | --- |
| Diameter | Thin as a strand of hair | Larger diameter—needs more space in conduits |
| Capacity | Higher—non-electrical pulses, so no electromagnetic interference | Lower—electromagnetic interference causes resistance to signals |
| Weight | Light | Heavier |
| Distance before amplification needed | 80 to 120 kilometers (49 to 74.4 miles) | 2.4 to 3.2 kilometers (1½ to 2 miles) |
| Installation costs | Higher labor cost | Lower—less skill required |
| Maintenance costs | Lower—less equipment needed to boost signals | Higher—more equipment to maintain |
| Security from eavesdropping | High—need to physically break cable; easy to detect | Low—can attach a monitor without cutting cable |
| Half duplex or full duplex | Half duplex—a separate fiber for sending and receiving | Full duplex—can send and receive on same cable |

In older networks, once high-quality fiber is installed in trenches, newer lasers and dense wavelength division multiplexing can be added to increase its capacity to handle the growing amounts of traffic, including high-definition video transmitted along its routes.


Figure 1-2 A fiber-optic cable with DWDMs and dark fiber.

For example, a university in the Northeastern United States installed two fiber runs to a hosting center where many of their computer servers were located. They added dense wavelength division multiplexing (DWDM) to one fiber run, and left the second one as an alternate route to their hosting center, to be used in the event of a cable cut in the main fiber pairs.

Single-Mode vs. Multi-Mode Fiber

There are two main types of fiber: single-mode and multi-mode. Single-mode fiber is smaller, is more expensive, and supports higher speeds than multi-mode fiber. Measuring approximately the same diameter as a strand of human hair, it is used mainly in carrier networks and in undersea cabling.

The fact that single-mode fiber carries light pulses faster than multi-mode fiber can be explained by a geometric rule: A straight line is the shortest distance between two points. Light travels faster in a straight line than if it zigzags along a path, which is precisely what happens to light waves if they reflect, or “bounce,” off the inner wall of the fiber strand as they travel. These zigzag paths also cause the signals to attenuate, lose power, and fade over shorter distances. The narrow core results in a narrower angle of acceptance. The small angle of acceptance of single-mode fiber keeps the light signal from bouncing across the diameter of the fiber’s core. Thus, the straighter light signal travels faster and has less attenuation than if it had a bouncier ride through the core.

When the light pulses travel in narrower paths, fewer amplifiers are needed to boost the signal. Single-mode fiber can be run for 49 to 74 miles without amplification (boosting). In contrast, signals on copper cabling need to be amplified after approximately 1.5 miles. For this reason, telephone companies originally used fiber for outside plant cabling with cable runs longer than 1.2 miles. However, the demand for higher capacity and the lower maintenance costs have resulted in carriers running increasing amounts of fiber cabling directly to homes and buildings.

The main factor in the increased expense of single-mode fiber is the higher cost to manufacture more exact connectors for patch panels and other devices. The core is so small that connections and splices require much more precision than does multi-mode fiber. If the connections on single-mode fiber do not match cores exactly, the light will not be transmitted from one fiber to another. It will leak or disperse out the end of the fiber core at the splice.

Multi-mode fiber has a wider core than single-mode fiber. The wider core means that signals can only travel a short distance before they require amplification. In addition, fewer channels can be carried per fiber pair when it is multiplexed because the signals disperse, spreading more across the fiber core. Multi-mode fiber is used mainly for LAN backbones between campus buildings and between floors of buildings.

Another factor in the expense of installing fiber cabling systems is the cost of connector standardization. Three of the main connector types are the Straight Tip (ST), the MT-RJ, and the Little Connector (LC). Each type of connector requires specialized tools for installation, and both factory- and field-testing. Technicians perform field-testing prior to installation to ensure that the connectors meet factory specifications. It’s critical that connectors match up exactly to the fiber so that signals don’t leak. If the connectors don’t exactly match up to the fiber, signal loss impairs the fiber’s performance.

Fiber-Optic Cabling in Commercial Organizations

Because fiber is non-electric, it can be run in areas without regard to interference from electrical equipment. However, due to its higher installation cost, fiber within buildings is used most often in high-traffic areas. These include:

  • Within data centers

  • Between buildings on organizations’ campuses

  • Between switches in backbone networks. A backbone connects switches to each other and to data centers. Backbone networks are not connected to individual computers—only to servers and switches.

Fiber-optic cabling itself requires more care in handling and installation than copper. For example, it is less flexible than copper and cannot be bent around tight corners. However, given its greater capacity and cost savings in ongoing maintenance, developers install fiber between floors in large office buildings and between buildings in office complexes. See Figure 1-3 for an example of fiber-optic cabling in enterprises. The fiber-optic cabling within commercial organizations is for the most part multi-mode rather than single-mode because of multi-mode’s less stringent installation requirements. Moreover, multi-mode capacity is adequate for commercial buildings’ bandwidth requirements.


Figure 1-3 Examples of fiber-optic cabling used in telephone company networks. (Photo by Annabel Z. Dodd)

There are two reasons why fiber is typically more expensive than copper to install:

  • The electronics (multiplexers and lasers) to convert electrical signals to optical signals, and vice versa, are costly.

  • Specialized technicians, paid at higher levels, are required to work with and test fiber cabling.

The multiplexers and interfaces between fiber-optic cabling and copper cabling in the customer’s facility require local power. This adds a point of vulnerability in the event of a power outage.

Fiber-optic cable is made of ultra-pure strands of glass. The narrower the core that carries the signals, the faster and farther a light signal can travel without errors or the need for repeaters. The cladding surrounding the core keeps the light contained to prevent the light signal from dispersing, that is, spreading over time, with wavelengths reaching their destination at different times. Finally, there is a coating that protects the fiber from environmental hazards such as rain, dust, scratches, and snow.

An important benefit of fiber-optic cabling is that eavesdropping is more difficult because the strands have to be physically broken and listening devices spliced into the break. A splice is a connection between cables. Splices in fiber-optic cables are easily detected.

Copper Cabling in Enterprises

Improvements in copper cable have made it possible for unshielded, twisted-pair cabling to transmit data at speeds of up to 100Gbps in Local Area Networks (LANs). Previously, these speeds were only attainable over fiber-optic cabling. This is important because the interfaces in most computer devices are compatible with unshielded, twisted-pair (UTP) cabling, not fiber-optic cabling. Connecting this gear to fiber requires the cost and labor to install devices that change light signals to electrical signals, and vice versa. Figure 1-4 depicts copper cabling in enterprises: four pairs of wires enclosed in a sheath. Each pair consists of two strands of wire twisted together—a total of eight wires, arranged by color: purple with white, orange with white, green with white, and brown with white.


Figure 1-4 Unshielded twisted-pair cabling used within enterprise buildings. (Photo by Annabel Z. Dodd)

UTP copper and fiber-optic cables are the most common media used in enterprise LANs. Because of improvements in the speeds, capacity, and distances wireless signals can be transmitted, wireless media is replacing copper cabling in some office buildings. Wireless services based on the 802.11 (Wi-Fi) protocols are discussed in Chapter 7, “Mobile and Wi-Fi Networks.”

Characteristics of media have a direct bearing on the distance, speed, and accuracy at which traffic can be carried. For example, thin copper wire carries data shorter distances at lower speeds than thicker, higher-quality copper.

In corporate networks, UTP is the most prevalent medium used to link computers, printers, and servers to wiring closets on the same floor. This is referred to as the horizontal plant.

Fiber is capable of carrying vastly more traffic than copper. However, it is more expensive to install and connect to devices in LANs than UTP, and thus it is generally used in high-traffic connections between the following:

  • Wiring closets

  • Floors (the risers)

  • Buildings on campuses

The key characteristic that makes fiber suitable for these high-traffic areas is that it’s a non-electric medium. Fiber exhibits superior performance because, unlike copper, it does not transmit electric signals that act like an antenna and pick up noise and interference from nearby electrical devices.

Connecting Fiber to Copper

In enterprise networks, when fiber is connected to copper cabling at locations such as entrances to buildings, at wiring closets, or in data centers, equipment converts light pulses to electrical signals, and vice versa. This requires converters called transmitters and receivers. Transmitters are also called light-source transducers. If multiplexing equipment is used on the fiber, each channel of light requires its own receiver and transmitter. Transmitters in fiber-optic systems are either Light-Emitting Diodes (LEDs) or lasers. The following points distinguish these components:

  • LEDs cost less than lasers. They are commonly used with multi-mode fiber.

  • Lasers provide more power. Thus, less regeneration (amplification) is needed over long distances.

  • At the receiving end, the light detector transducers (receivers) that change light pulses into electrical signals are either positive intrinsic negatives (PINs) or avalanche photodiodes (APDs).

  • LEDs and PINs are used in applications with lower bandwidth and shorter distance requirements.

Electrical Property—Disadvantage of Copper Cabling

Signals transmitted via copper react to electrical interference or “noise” on the line. Power lines, lights, and electric machinery can all inject noise into the line in the form of electric energy. This is why interference from sources such as copiers, magnetic equipment, manufacturing devices, and even radio stations can introduce noise and static into telephone calls and data transmissions. One way to protect copper cabling from noise and crosstalk introduced by nearby wires is to twist each insulated copper wire of a two-wire pair. Noise induced into one wire of the twisted pair cancels an equal amount of noise induced in the other wire of the pair. Crosstalk occurs when electrons that carry the conversations or data along a copper pair cross over, or “leak,” onto other nearby wire pairs.

Another electrical property of copper wire is resistance. Resistance causes signals to weaken as they are transmitted. This is referred to as attenuation. Attenuation is the reason signals on copper cable in the outside network need to be boosted on cable runs of more than approximately 1.5 miles. Thus, the dual inherent impairments of interference and resistance are the key factors that limit copper’s performance. See Table 1-3 for the differences between copper and fiber cabling.

Copper Cabling Standards: Higher Capacity, Exacting Installation, and Connections

Cabling standards have been created to support ever-higher speeds, carry multimedia traffic, and ensure that organizations can purchase cabling and connectors from diverse manufacturers without risk of incompatibility. Each cabling standard includes defined tests that should be performed when cables are installed to ensure that the cable and all of its connectors perform to their specifications.

Every new standard needs to be compatible with all lower standards. This allows applications that operated over lower-category cabling systems to operate on higher categories, as well. The biggest problems organizations face with cabling systems are that they are not always properly installed and tested to meet standards. This results in either lower-than-expected data rates or inconsistent reliability.

Each of the standards specifies not only the cable itself, but all of the connections including jacks (outlets), plugs, and cross-connects in wiring closets. Cross-connects provide outlets on each floor where cabling from individual devices is connected. The floor cabling, also referred to as the horizontal plant, is connected to the riser cabling, which is the cabling connections between floors. See Figure 1-5 for an example of copper cabling in enterprises. The riser cabling is connected in the building’s main wiring closet and to other buildings within a campus.


Figure 1-5 Riser and horizontal cabling connections in a wiring closet.

Standards also specify the network interface card (NIC) in printers and computers from which a cable connects the device to the jack (outlet). The Telecommunications Industry Association (TIA) rates twisted-pair cabling and connection components used inside buildings.

Unshielded Twisted-Pair Copper Cabling Standards

Category 3 UTP cabling—often referred to as simply “CAT 3”—is rated as suitable for voice transmission, but it is only suited to older types of phone systems and is rarely used in new installations. Categories 5, 5e, 6, and 6a are the commonly deployed cabling system standards for UTP. Organizations often use Category 6a to support 10Gbps speeds in their data centers, and Category 6 for the rest of their facilities. New cabling infrastructure is installed using Category 7 or 7a, and this same cabling plant is generally used for both voice and data. Category 7 and 7a cabling are based on a standard ratified by the International Organization for Standardization (ISO), while Category 8 was approved by the Telecommunications Industry Association (TIA) in May 2017. The following is a synopsis of the major UTP categories; a brief lookup sketch follows the list.

  • Category 5 Supports speeds of up to 100Mbps for 100 meters (328 feet) over UTP. It was the first ratified standard for megabit Ethernet transmissions. It consists of four pairs of unshielded copper wires—eight wires in total. Category 5 has been superseded by Category 5e.

  • Category 5e Supports 1Gbps speeds at distances of 100 meters (328 feet). Higher speeds are attainable because the cabling and connectors are manufactured to higher standards. There are more twists per inch in the cabling than specified for Category 5.

  • Category 6 Supports 1Gbps speeds at 100 meters (328 feet). At shorter distances, it handles 10Gbps speeds. A higher twist rate, heavier cabling, and a metallic screen around the entire cable protect it from ambient noise. Insulation material (usually plastic strips) is placed between each of the four pairs to reduce crosstalk between the wires.

  • Category 6a (augmented) Supports 10Gbps Ethernet transmissions of up to 55 meters (180 feet). This is possible because the stricter standards result in less crosstalk between adjacent cables. Lower speeds are supported for up to 100 meters (330 feet).

  • Category 7 Supports 10Gbps speeds up to 100 meters (330 feet). High performance at longer distances is possible because there is foil shielding around each pair of wires and an overall metal foil shield around the entire cable. It must be installed with different components and connectors from those for Categories 5 through 6a to accommodate this shielding. It can also be installed with the same type of connectors specified in earlier standards; however, it will not transmit 10Gbps traffic as far as the 100 meters (330 feet) achieved with the appropriate connectors.

  • Category 7a Supports 40Gbps speeds up to 50 meters and 100Gbps up to 15 meters. It is similar to Category 7; it requires shielded twisted-pair cabling with shielding around the entire cable. It is widely available and is often used in new cabling installations. The standards for connections are more exacting than those for Category 7. The shielding in Category 7a guards against electromagnetic interference (EMI). Shielding must be extended to all connections within the cabling system to attain gigabit speeds. This requires a special insert into every RJ45 jack in the building. RJ45 jacks are the outlets that cables plug into; the designation RJ stands for registered jack.

  • Category 8 Supports 25Gbps and 40Gbps speeds up to 30 meters. The shielding and connectors are not compatible with Categories 7 and 7a. The TIA approved Category 8 cabling in 2017. Category 8 cabling is capable of supporting high-bandwidth transmissions between servers in data centers because of its stringent shielding and connector requirements; this is less costly than using fiber-optic cabling. The standard specifies two shielding options: F/FTP (foil-shielded twisted-pair cabling), in which the entire bundle of four pairs of cable is surrounded with one shield, and S/FTP (foil-shielded twisted pairs), in which each of the four pairs is shielded individually.
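
To keep the categories straight, a simple lookup table such as the sketch below can help. The figures are copied from the synopsis above; the sketch is illustrative rather than a complete statement of the standards.

```python
# Quick-reference lookup of UTP categories, summarizing the synopsis above.
# Figures mirror the text; consult the TIA/ISO standards for authoritative values.

UTP_CATEGORIES = {
    "5":  {"max_speed": "100Mbps", "distance": "100 m"},
    "5e": {"max_speed": "1Gbps",   "distance": "100 m"},
    "6":  {"max_speed": "10Gbps",  "distance": "shorter runs; 1Gbps at 100 m"},
    "6a": {"max_speed": "10Gbps",  "distance": "55 m"},
    "7":  {"max_speed": "10Gbps",  "distance": "100 m"},
    "7a": {"max_speed": "100Gbps", "distance": "15 m; 40Gbps at 50 m"},
    "8":  {"max_speed": "40Gbps",  "distance": "30 m"},
}

def describe(category: str) -> str:
    spec = UTP_CATEGORIES[category]
    return f"Category {category}: up to {spec['max_speed']} ({spec['distance']})"

print(describe("6a"))
print(describe("8"))
```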

Chips—Building Blocks of the Digital Age

Chips, short for microchips, are integrated processors and memory found in computers, cellular devices, electronic toys, wristwatches, thermostats, airplanes, cars, trucks, and automated devices people and machines interact with daily. Multi-core chips have up to eight cores, each of which is able to process information simultaneously, in real time. Chips are the basis of the digital age.

Faster multi-core processors are an integral part of the high-speed electronics used on fiber-optic, Wi-Fi, and cellular networks. They enable these networks to process multiple streams of signals simultaneously. They are also at the core of network switches, continually transmitting increasing amounts of data at ever-higher speeds. Additionally, these processors facilitate the capability of personal computers, set-top boxes, and smartphones to handle graphics and video transmitted via the Internet.

Chips now incorporate up to 64-bit processing (the ability to process data in chunks of 64 bits), which means that they process data faster. Chips are now able to process petabytes and exabytes of information. One petabyte equals 1,000,000,000,000,000 bytes, or 1,000,000 gigabytes. An exabyte equals 1,000,000,000,000,000,000 bytes. Moreover, they are small and inexpensive, and use only small amounts of power. The size of chips is shrinking at the same time that more processors can be packed into personal computers and other electronic devices.
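
The decimal prefixes mentioned above can be checked with a few lines of Python; this is plain arithmetic, not a statement about any particular chip.

```python
# Decimal storage prefixes mentioned in the text.
GIGABYTE = 10**9
PETABYTE = 10**15
EXABYTE = 10**18

print(f"1 petabyte = {PETABYTE:,} bytes = {PETABYTE // GIGABYTE:,} gigabytes")
print(f"1 exabyte  = {EXABYTE:,} bytes = {EXABYTE // PETABYTE:,} petabytes")
```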

Low power consumption results in longer battery life in mobile devices. ARM chips are designed by semiconductor firm ARM Holdings, Plc., and are available to electronics manufacturers who pay an up-front licensing fee plus royalties on each chip produced. ARM is part of SoftBank Group Corp., which is headquartered in Japan.

Other chip makers include Broadcom, Nvidia, Intel, Qualcomm, NXP, and Micron. Figure 1-6 shows an example, the Nvidia Tegra X1 chip. Facebook and Alphabet subsidiary Google are developing chips as well. Facebook will use its chips in servers located in its data centers.


Figure 1-6 The Nvidia Tegra X1 chip is about the size of a thumbnail. It was the first chip Nvidia used for self-driving car technology development. (Image courtesy of Nvidia)

Machine Learning

Memory and processing power in chips enable machine learning and in the future will enable artificial intelligence. Although the terms are often used interchangeably, there is a difference between machine learning and artificial intelligence. Machine learning uses microchips to compare stored images to new images to detect anomalies and patterns of data. It can be thought of as computerized pattern detection in existing data and identification of similar patterns in future data. Microchips compare billions of images and phrases and words to find anomalies and to perform tasks formerly done by people.

For example, graphics processing unit (GPU) chips are able to process and store complex mathematical algorithms that represent images. To train its machine learning system to recognize pictures containing cars, Google shows the system a large number of pictures and tells it whether or not each one contains a car; this enables the computer to later recognize the visual qualities associated with cars. Machine learning is also used in facial recognition software used by police departments.
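
The idea of learning patterns from labeled examples can be illustrated with a toy nearest-centroid classifier in pure Python. The two-feature "images" and labels below are invented, and the sketch is only meant to show the train-then-recognize loop, not how production systems such as Google's are built.

```python
# Toy illustration of machine learning as pattern recognition:
# learn the average feature pattern of each label, then classify new examples
# by whichever learned pattern they are closest to (nearest centroid).
# The two numeric "features" per example are invented for illustration.

from math import dist

training_data = [
    ((0.9, 0.8), "car"),
    ((0.8, 0.7), "car"),
    ((0.1, 0.2), "not a car"),
    ((0.2, 0.1), "not a car"),
]

# "Training": compute the centroid (average feature vector) for each label.
centroids = {}
for label in {lbl for _, lbl in training_data}:
    points = [feat for feat, lbl in training_data if lbl == label]
    centroids[label] = tuple(sum(vals) / len(vals) for vals in zip(*points))

def classify(features):
    """Return the label whose learned pattern is nearest to the new example."""
    return min(centroids, key=lambda label: dist(features, centroids[label]))

print(classify((0.85, 0.75)))  # expected: car
print(classify((0.15, 0.15)))  # expected: not a car
```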

Artificial intelligence solves problems by using reasoning, logical deduction, searching, and trial and error. Artificial intelligence has the ability to imitate a human brain’s functions. Although media articles claim that certain applications and cloud computing services use artificial intelligence, they are actually powered by machine learning. Artificial intelligence applications that mimic human intelligence are not currently available.

The following are examples of machine learning:

  • The radiology department at Mass General Hospital in Boston is planning to use Nvidia’s graphical processing units on chips to analyze a library of 10 billion x-rays, CT scans, and MRI images that will be compared with new images to identify those indicating the presence of cancer and Alzheimer’s.

  • Medical applications analyze large amounts of information in medical journals and textbooks to assist physicians in making a diagnosis.

  • In the future, self-driving cars and trucks may replace all or some taxi and truck drivers because chips with image recognition will detect vehicles and pedestrians on roads as well as traffic signals and signs.

  • At Amazon warehouses, automation enables robots to take products off shelves and give them to employees to package for shipping. The robots retrieve four times as many products hourly as employees.

  • Speech recognition systems from Nuance recognize words based on previously learned patterns of frequencies. Nuance software for individuals and businesses is programmed to recognize spoken words. Nuance also develops specialized applications, such as one used by radiologists to write reports.

  • Home digital assistants such as Amazon’s Echo and Alphabet’s Google Home respond to users’ spoken commands based on data previously collected.

  • Banks use machine learning to track and investigate anomalies in claims and transactions to find fraudulent activities.

  • The military deploys robots to find and defuse unidentified explosives so that personnel are not injured.

Machine learning enabled by powerful chips already has a major impact on work and job efficiency. Machine learning will continue to affect jobs that involve repetitive tasks and those needing stored information to make decisions.

Packetized Data

All Internet traffic, and the vast majority of high-speed data network traffic, is sent in packets. Increasingly, residential traffic in neighborhoods is also arranged in packets. Putting data into packets is analogous to packaging it in envelopes. Packet switching was developed by Rand Corporation in 1962 for the United States Air Force and utilized in 1969 in the Advanced Research Projects Agency Network (ARPANET) of the Department of Defense. ARPANET was the precursor to today’s Internet. The Department of Defense wanted a more reliable network with route diversity capability. Developers envisioned greater reliability with packet switching in the ARPANET, where all locations could reach one another.

Packet networks are more resilient and can better handle peak traffic periods than older networks, because diverse packets from the same message are routed via different paths, depending on both availability and congestion. In a national emergency such as the September 11, 2001 attacks in the United States on the Pentagon in Washington, DC, and the World Trade Center in New York City, the Internet, which is a packet network, still functioned when many portions of the older public voice and cellular networks were either out of service or so overwhelmed with traffic that people could not make calls.

If one route on a packet network is unavailable, traffic is rerouted onto other routes. In addition, unlike older voice networks, the Internet does not depend on a few large switches to route traffic. Rather, if one router fails, another router can route traffic in its place.

Per Packet Flexible Routing

Packet networks can handle peak congestion periods better than older types of networks because traffic is balanced between routes. This ensures that one path is not overloaded while a different route carries only a small amount of traffic. Sending data from multiple computers on different routes uses resources efficiently because packets from multiple devices continue to be transmitted without waiting until a single “heavy user” has finished its entire transmission. Thus, if one route is congested, packets are transmitted on other routes that have more availability.

Routers in packet networks are connected to each separate route in the network. They have the ability to check congestion on each leg of the journey connected to them and send each packet to the least congested route. As shown in Figure 1-7, packets from the same message are transmitted over the different routes.


Figure 1-7 Three of the packets of an e-mail from John to Sophie take different paths through the network.
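
A minimal sketch of per-packet route selection might look like the following, where each candidate route has a congestion score and every packet is sent over the least congested one. The route names and scores are invented for illustration and do not model any real router's algorithm.

```python
# Per-packet routing sketch: pick the least congested route for each packet.
# Route names and congestion scores are hypothetical.

routes = {"route_a": 0.7, "route_b": 0.2, "route_c": 0.5}  # 0 = idle, 1 = saturated

def pick_route(congestion: dict) -> str:
    """Choose the route with the lowest current congestion."""
    return min(congestion, key=congestion.get)

for packet in ["packet 1", "packet 2", "packet 3"]:
    chosen = pick_route(routes)
    print(f"{packet} -> {chosen}")
    routes[chosen] += 0.3  # sending a packet adds a little load to that route
```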

Packet Contents: User Data vs. Overhead

Each packet is made up of user data (data bits, digital voice, or video) and specialized header information, such as addressing, billing, sender information, and error correction bits. See Figure 1-8 for an example of a data packet. Error correction bits might indicate whether the packet is damaged, whether the receiver is ready to start receiving, or whether the packet has been received. The end of the packet contains information to let the network know when the end of the packet has been reached. Header, end-of-packet data, and other signaling data are considered overhead. User data (also referred to as the payload) is the actual content of the e-mail message or voice conversation.


Figure 1-8 User data vs. overhead addressing and end-of-packet bits in packets.
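
A packet's split between overhead and payload can be sketched with a small Python data structure. The field names and byte counts below are illustrative, not a real protocol layout.

```python
# Illustrative packet layout: header and trailer bits are overhead,
# user data is the payload. Field sizes are hypothetical, not a real protocol.

from dataclasses import dataclass

@dataclass
class Packet:
    header_bytes: int    # addressing, billing, sender info, error correction
    payload_bytes: int   # the user data: e-mail text, voice, or video
    trailer_bytes: int   # end-of-packet marker and other signaling

    def overhead_fraction(self) -> float:
        total = self.header_bytes + self.payload_bytes + self.trailer_bytes
        return (self.header_bytes + self.trailer_bytes) / total

pkt = Packet(header_bytes=40, payload_bytes=1460, trailer_bytes=4)
print(f"Overhead: {pkt.overhead_fraction():.1%} of the packet")  # about 2.9%
```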

Throughput

Throughput is the amount of user information transmitted, not the actual speed of the line. The disadvantage of frequent error messages and other protocol-related bits is that overhead bits often consume large amounts of bandwidth. Throughput only measures actual user data transmitted over a fixed period of time. It does not include header bits. Protocols with many bits for error control messages and other types of overhead have lower throughput. Technologies such as Deep Packet Inspection and Traffic Shaping are used to mitigate the effect of delays associated with these protocols. (See the section “Traffic Shaping,” later in this chapter, for more information.)
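
Throughput can be estimated by subtracting overhead from the raw line rate. The line rate and overhead fraction in the sketch below are assumed example values, used purely to show the calculation.

```python
# Throughput sketch: user data delivered per second, excluding overhead bits.
# Line rate and overhead fraction are assumed example values.

line_rate_mbps = 100       # raw capacity of the link
overhead_fraction = 0.05   # share of bits spent on headers, error control, etc.

throughput_mbps = line_rate_mbps * (1 - overhead_fraction)
print(f"Effective throughput: {throughput_mbps:.1f} Mbps of user data "
      f"on a {line_rate_mbps} Mbps line")
```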

So, What Are Carriers and ISPs?

At one time, the term “carrier” referred to local telephone companies, such as Verizon Communications, that carried voice and data traffic for consumers and commercial organizations. Now, all companies that provide outside cabling or mobile infrastructure and operate networks are generally referred to as carriers. These include cable television operators, cellular telephone companies, long distance providers and traditional local telephone companies. Cable TV operators, mobile carriers, and traditional local telephone companies transmit voice, data, and television signals as well as providing connections to the Internet. To complicate matters further, carriers are also referred to as operators and providers.

ISPs at one time provided the connections to the Internet and information services over a carrier’s cabling, and sometimes provided the switching infrastructure needed to access the Internet. ISPs also provide e-mail hosting and other services over a carrier’s infrastructure. In everyday usage, the terms carriers and ISPs are used for all types of providers. This is because telephone companies such as AT&T, as well as traditional ISPs, provide access to the Internet, e-mail services, and e-mail addresses to customers.

Deep Packet Inspection: Monitoring, Prioritizing, and Censoring Traffic

In order to manage the networks they operate, companies and carriers need information about traffic patterns. They monitor security, congestion, and whether there is too much capacity. Enterprises, healthcare organizations, and universities monitor outgoing traffic to safeguard privacy and intellectual property such as patents, social security numbers, formulas for new drugs, and details on new technologies. An important tool for managing both enterprise networks and telephone and cellular companies’ networks is deep packet inspection.

Deep Packet Inspection (DPI) accomplishes this by analyzing the contents of packets transmitted on network operators’ landline and mobile networks. DPI can examine the content in the headers of packets as well as user content. It inspects and looks for patterns in header information, such as error correction, quality of service, and end-of-message bits, and sometimes the e-mail messages themselves.

Large, modern packet networks typically carry a mix of rich media traffic, including television, movie, game, voice, and music streams, as well as data. Faster processors and more affordable memory have led to DPI switches and software that enable carriers, large universities, and enterprises to manage congestion in real time as well as offer new services on these diverse mobile and landline networks.

Depending on how it’s configured, DPI equipment can monitor the header only or the header plus the user data (payload) of e-mail and instant messages. Capturing the payload of a message is referred to as data capture. Data capture can be used to store messages for later scrutiny or to scan messages in real time. Either function requires high-speed processors and massive storage archives.

Deep packet inspection is one tool that organizations, telephone companies, and governments use to manage traffic in the following scenarios:

  • On a specific carrier’s Internet networks

  • Between residential customers and their carriers

  • On mobile networks

  • Between enterprise locations

  • On enterprise links to the Internet

  • Within the internal networks of an enterprise

In addition to the above functions, deep packet inspection is used to censor messages critical of governments.

DPI in Organizations: Protecting Confidential Information

Organizations use Deep Packet Inspection to block access to specific non-business-associated web locations such as Facebook (the social network site) to cut down on unnecessary traffic on their networks and increase employee productivity. Importantly, they also use Deep Packet Inspection to protect their networks against hackers.

Private organizations use DPI to ensure that intellectual property is not leaked; for example, they may monitor outgoing traffic for keywords. Healthcare and educational institutions that collect private information, such as social security numbers or student records, monitor outgoing traffic to ensure the privacy of patients and students and to comply with privacy regulations.

Universities, for example, have applications with DPI capabilities that enable them to block outgoing e-mail containing students’ social security numbers and student identification numbers from all computers except the few, such as those in human resources or the registrar’s office, that need to collect that information.
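
One way such outbound filtering might be implemented is a pattern match against payload text, as in the hedged sketch below. The nine-digit pattern, the authorization flag, and the sample messages are illustrative; real DPI products use far more sophisticated rules.

```python
# Sketch of DPI-style outbound filtering: block messages whose payload
# appears to contain a U.S. social security number (illustrative pattern only).

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def allow_outbound(payload: str, sender_is_authorized: bool) -> bool:
    """Permit the message unless it contains an SSN and the sender is not authorized."""
    if SSN_PATTERN.search(payload) and not sender_is_authorized:
        return False
    return True

print(allow_outbound("Transcript for student 123-45-6789", sender_is_authorized=False))  # False
print(allow_outbound("Transcript for student 123-45-6789", sender_is_authorized=True))   # True
```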

Moreover, because DPI software can detect packet content as well as header bits, it is one way to recognize malware hidden within, for example, graphics in PDF documents.

Governments Monitor: Terrorism, Web Access, and Unfavorable Comments

Governments often request that their country’s wireless and wireline telephone companies monitor and/or censor e-mail messages that they might consider harmful. DPI can be used, for example, to track terrorists or people critical of the government. They may also request that telephone companies block access to particular web sites. Governments additionally request that telephone and cellular companies monitor networks for terrorism suspects and other criminal acts such as disseminating child pornography.

Governments do this to block access to web sites they consider detrimental to their goals. For example, the Chinese government blocked access to certain web sites in the United States including Google. In August 2016, Turkey blocked access to Facebook, PayPal, YouTube, and Twitter, in response to a failed military coup.

However, some citizens figure out ways to circumvent efforts to block them from reaching particular web sites. One method they use is to create a VPN (virtual private network) to directly access a web site.

Carriers, Networks: Categorization and Billing

DPI is an application that can potentially be used by carriers to discriminate against competitors’ traffic. For example, using DPI, a carrier can slow down or block traffic generated by competitors’ services. See Chapter 6, “The Internet,” for information on network neutrality. Network neutrality refers to carriers treating their own and competitors’ traffic in an equal manner.

DPI further enables telephone companies to categorize traffic in real time to prioritize particular applications or traffic from their own subscribers. T-Mobile and Sprint both announced plans to allow cellular subscribers unlimited video streaming.

DPI systems have the ability to exchange information with a carrier’s billing system to support specialized offerings for data plans covering e-mail, songs, games, video, and web browsing. A carrier might offer plans in which customers are allowed to use 3GB of data for a fixed price, with metered pricing kicking in on anything over 3GB.

Metered pricing is a billing practice in which customers are charged by usage rather than a flat rate for unlimited or predetermined amounts of minutes or data. In some broadband networks, carriers throttle certain customers’ transmissions if they use more bandwidth than the carrier has allotted or if there is congestion. This has happened in broadband and cellular networks even with customers who are on unlimited usage plans.
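
Metered pricing of this kind can be modeled in a few lines. The allowance, flat price, and per-gigabyte overage rate below are assumed example values, not any carrier's actual plan.

```python
# Metered pricing sketch: flat price up to an allowance, per-GB charge beyond it.
# All prices and the allowance are hypothetical example values.

def monthly_bill(usage_gb: float, allowance_gb: float = 3.0,
                 flat_price: float = 30.0, overage_per_gb: float = 10.0) -> float:
    overage = max(0.0, usage_gb - allowance_gb)
    return flat_price + overage * overage_per_gb

print(f"2.5 GB used: ${monthly_bill(2.5):.2f}")   # within the allowance
print(f"4.2 GB used: ${monthly_bill(4.2):.2f}")   # 1.2 GB of metered overage
```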

For example, carriers have slowed down subscribers’ traffic if they are considered “bandwidth hogs.” In particular, these subscribers might watch 10 hours of video a day and be among the top users of bandwidth capacity in a carrier’s network. Carriers want to ensure that a few users don’t cause undue congestion for other subscribers.

Traffic Shaping: Prioritizing Traffic

Deep Packet Inspection can provide information on network conditions because of its capability to see more fields in packets than only the “send to” address. Discerning only a packet’s address is analogous to looking at the address of an envelope. DPI provides the capability to determine which application the packet is using by examining patterns within packets. It can distinguish Voice over Internet Protocol (VoIP) traffic from data, gaming, and video traffic. Carriers use traffic shaping to prioritize particular types of traffic.

Giving certain types of traffic priority over other types is referred to as traffic shaping. Traffic shaping can be used to prioritize or deny network access by type of traffic (for example, music vs. data or video) and by traffic from particular organizations. Traffic from healthcare providers or companies that pay extra fees might be given higher priority than other traffic.

DPI software develops a database of patterns, also referred to as signatures. Each signature or pattern is associated with a particular application, such as peer-to-peer music sharing, or protocols such as VoIP. It can also be associated with traffic from certain hackers or even terrorists attempting to launch malicious attacks with the goal of disrupting Internet or government sites. The DPI software matches its own database of patterns found in packets to those associated with particular applications and network attacks.
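
A highly simplified version of signature matching might look like this sketch, where each application is associated with a text pattern and packets are classified by the first pattern they match. The signatures shown are invented placeholders, not real protocol fingerprints.

```python
# Traffic classification sketch: match packet contents against a signature database.
# The signatures below are invented placeholders, not real protocol fingerprints.

import re

SIGNATURES = {
    "VoIP": re.compile(r"^VOIP/"),
    "peer-to-peer music": re.compile(r"p2p-share"),
    "web browsing": re.compile(r"^GET |^POST "),
}

def classify_packet(payload: str) -> str:
    for application, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return application
    return "unclassified"

print(classify_packet("GET /index.html HTTP/1.1"))   # web browsing
print(classify_packet("VOIP/2.0 call setup"))        # VoIP
```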

DPI software can be installed in stand-alone switches connected to an ISP or carrier’s network, or as part of routers or firewalls. Routers are used to connect traffic to the Internet as well as to select routes to other networks. Firewalls are hardware or software designed to protect networks from hackers by looking for and blocking unusual header information and known viruses, and by detecting outgoing messages that contain confidential or private information.

Compression

Compression is a technology that reduces the size of video, data, image, and voice files so that they take up less space in networks and on hard drives in computers. Compression has the same effect as a trash compactor that reduces the amount of space trash consumes. Compression shrinks the size of files without materially changing images or text. The benefit of compression is that it dramatically decreases the amount of network capacity (bandwidth) needed to transmit high definition TV, music, and movies.

Compression additionally makes it possible for companies to store video files and large databases using less disk space on computer storage systems. The ability to store more information using less computer capacity has led to companies, governments, social networks, and marketing organizations storing petabytes of data.

Additionally, compression has drastically changed the entertainment business, sports viewing, and online games, and has led to the formation of new business models, including streaming media companies, specialty set-top boxes such as Apple TV and Roku, and the production of online video ads.

Compression increases throughput, the amount of user data or video transmitted, without changing the actual bandwidth (capacity) of the line. In other words, it takes fewer bits to transmit a movie, but the bandwidth of the network is the same. A given amount of bandwidth is able to transmit more user information, TV, audio, and video, with fewer bits.

At the receiving end, compressed files are decompressed (re-created), either exactly as they were before transmission or at a slightly lower quality. Text is re-created exactly as it was before it was compressed so that numeric or product information is not altered. Video and voice, however, might be re-created at somewhat lower resolution or voice quality, with acceptable, often barely noticeable, alterations. Certain critical files, such as MRI and x-ray images, are decompressed without losing any quality.
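The lossless case can be demonstrated with Python’s built-in zlib module: the decompressed output is byte-for-byte identical to the original, while the compressed copy is much smaller. This is only a sketch of the general principle, not the specific algorithms used for video.

    # A minimal sketch of lossless compression and exact reconstruction.
    import zlib

    original = b"Compression shrinks files without changing the text. " * 50
    compressed = zlib.compress(original)
    restored = zlib.decompress(compressed)

    print(len(original), len(compressed))    # the compressed copy is far smaller
    assert restored == original              # exact, lossless reconstruction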

There are a number of standardized compression algorithms (mathematical procedures for compressing and decompressing data) that enable compressed text and video to be transmitted in a format that can be easily decompressed at the receiving end. The following is a list of commonly used compression and video-format standards:

  • UHD is the format used for Ultra High Definition TV.

  • 8K UHD is an Ultra High Definition video format with 4320 progressive (non-interlaced) lines of vertical resolution. Progressive video displays odd and even lines at the same time; interlaced displays show all the even lines and then all the odd lines.

  • HDR (High Dynamic Range) supports a higher range of luminosity, which results in greater color contrast and improved color accuracy. Netflix is now streaming some shows in HDR to subscribers with broadband bandwidth between 16 and 20Mbps.

  • Various MPEG standards are used to compress and decompress audio and video. MPEG stands for Moving Picture Experts Group.

  • MPEG-4 is a standard used mainly for streaming and downloading compressed video and television.

  • AAC (Advanced Audio Coding) is the audio compression that Apple uses in iTunes and Apple Music. It is part of the MPEG-4 standard.

  • Most Windows-based personal computers store and transmit files using ZIP compression, for example with the WinZip utility, which is also available for Mac computers. It can compress a group of files into one “package.”

See Table 1-4 in the “Appendix” section at the end of this chapter for a more complete listing of compression standards.

Streaming: Listening and Viewing without Downloading

For the most part, when people watch television and movies from the Internet, the content is streamed to them. They do not own or keep copies of what they’re viewing. The ability to stream high-quality music, TV shows and movies has drastically changed the music and entertainment industries. This ability is a result of advancements in compression. While some people own CDs and DVDs, ownership and rentals of DVDs and ownership of CDs have shrunk considerably due to the ease of streaming and downloading music, movies, and television shows over the Internet.

Streaming is different from downloading. Downloading requires an entire file to be transferred before it can be viewed or played. With streaming, the user can listen to music in real time but cannot store it for later use. Spotify, Pandora, Google Play Music, and Apple Music each offer streaming music services. With streaming music, subscribers are able to listen to more targeted choices, often without ads, than those offered on AM and FM radio. Most music services charge a monthly subscription fee for the ability to listen to music without hearing ads.

In addition, most free music services are streamed at a lower quality than paid streaming music services. For example, Pandora’s free service streams music at 64 Kbps, and Spotify’s streams at 160 Kbps. Pandora’s paid subscription streams at 192 Kbps, and Spotify’s streams at up to 320 Kbps. Apple Music has no free subscriptions, only paid ones, where listeners stream music from Apple’s library of millions of songs at 256 Kbps using AAC (Advanced Audio Coding) compression.
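The following rough sketch shows how these bit rates translate into data transferred, assuming a 4-minute song and ignoring protocol overhead.

    # A rough sketch of data used per song at different stream bit rates.
    def megabytes_for_song(bitrate_kbps, minutes=4):
        bits = bitrate_kbps * 1000 * minutes * 60
        return bits / 8 / 1_000_000          # convert bits to megabytes

    for rate in (64, 160, 256, 320):
        print(rate, "kbps ->", round(megabytes_for_song(rate), 1), "MB")
    # 64 kbps is about 1.9 MB per song; 320 kbps is about 9.6 MB per song.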

Because many young people have too many songs to store on their smartphones, they store their music in the cloud. In addition to gaining storage capacity, they can stream their songs wherever they are as long as they have access to the Internet. Moreover, if they lose or change devices, they can access their music from other devices.

Streaming and downloading music have caused the music industry’s revenues and artists’ royalties to shrink considerably. Customers now buy their music primarily from online sources such as iTunes and listen to songs on their smartphones. Sales of CDs have shrunk because people often want just one or two favorite songs from a particular artist and choose not to buy a whole CD. Illegal free music downloads are still occurring from sites such as LiveWire.com. According to Hannah Karp’s March 30, 2017, article “In a First, Streaming Generated the Bulk of Annual Music Sales” in the Wall Street Journal, in 2016 streaming made up 51 percent of total music revenue in the United States. This was the first time that streaming accounted for the majority of music revenue, ahead of CDs and music downloads.

The combination of free downloads and the drastic decrease in CD sales has resulted in lower royalties for musicians. Popular singers now depend on concert sales for the majority of their income because, for the most part, young people simply buy individual songs online or stream music without owning copies of all the songs to which they listen.

Compression: The Engine behind TV over the Internet

Compression used on video and multimedia streamed over the Internet has transformed how people get their entertainment. Television and movies at sites such as Hulu (a joint venture in which the Walt Disney Company, including its ABC Television Group, holds 60 percent; Comcast holds 30 percent; and AT&T holds 10 percent) and Netflix enable individuals to stream high-quality movies, TV, sports events, and news from the Internet. Viewers now expect to view even high-quality real-time sporting events over their broadband connections. To take advantage of these expectations, broadcasters and pay-TV companies such as AT&T include broadband streaming in their product mix and strategy planning.

In another nod to this capability, new television sets have the capability to connect directly to the Web so that people can stream and watch broadband programming more easily on their high-definition, digital televisions. These changes have been made possible by advancements in compression, home Wi-Fi technology, and improved broadband connections.

Innovative Compression Algorithms—Fewer Bits, Higher-Quality Images

FLIF, GFWX, and PERSEUS are examples of compression software that, compared to older compression standards, compress images at higher speeds, achieve higher-quality images, and require less bandwidth when the compressed images are transmitted over broadband networks. All three were introduced in 2015 or 2016.

FLIF compression, which stands for Free Lossless Image Format, shrinks the size of files dramatically. FLIF, as the name states, is lossless: files are stored at the same quality as the original image. Another feature of FLIF is that it is progressive: as an image downloads, a preview appears so that people can see the image before it is fully downloaded. Because it is lossless, FLIF is suitable for all types of images, including MRIs, photographs, and art. FLIF is available without cost or royalties under the GNU General Public License.

GFWX, Good, Fast Wavelet Codec, was developed by Graham Fyffe at the University of Southern California to store large amounts of video. It encodes and decodes videos faster than earlier compression standards. It can store videos in lossy and lossless formats. With lossy formats, bits deemed non-essential are eliminated when the video is stored. Images on DVDs are stored in lossy formats.

V-NOVA’s PERSEUS compression software is based on the premise that there is a great deal of spare processing capacity in the multiprocessing chips found in modern digital devices. PERSEUS is encoded in parallel rather than in sequential, serial lines of code. Its complex mathematical algorithms, coded in hierarchical, parallel streams, take advantage of the multiprocessing capabilities of set-top boxes, video game consoles, mobile devices, computers, and storage systems, whose processors can work on these parallel streams simultaneously.

According to V-NOVA’s co-founder Guido Meardi, the fact that compression enormously shrinks the number of bits needed to transmit data will enable mobile networks to carry vast amounts of video and rich media applications. This is particularly important in countries with older cellular networks not built to transmit bandwidth-heavy video and medical images. Compression that vastly shrinks the number of bits required for medical imaging could bring telemedicine to people in countries with predominantly older cellular networks. V-NOVA’s compression is being deployed in Nepal where doctors collaborate over their older cellular network with radiologists in New York City on interpretation of MRIs and x-rays.

Compression Applications
  • Streaming TV in India—FastFilmz, an Indian streaming TV provider similar to Netflix, uses PERSEUS compression to stream standard definition (SD) TV compressed down to 128 kilobits per second to customers with 2nd generation cellular service. Currently, about 70 percent of consumers in India have 2G cellular service. The 9 percent of customers in India with 3G cellular are able to receive high definition (HD) that requires only 2.5 to 3.5Mbps rather than 9Mbps. Prior to FastFilmz’s use of V-NOVA’s PERSEUS, two-thirds of people in India could not receive any streaming video because of 2G’s low capacity.

  • Concentration camp survivors’ testimony—At the University of Southern California, GFWX compression is used to manage storage of videos compiled by the New Dimensions in Testimony project. The videos are clips of remembrances by survivors of World War II concentration camps. The U.S. Army Research Laboratory and the USC Shoah Foundation sponsored the project.

Plug-Ins for Software Upgrades and Decompression

The use of plug-ins enables compression providers to transmit upgrades to the devices using their software. Plug-ins are small programs that ease the process of sending upgrades to devices. Plug-ins can additionally decode compressed images and files sent over networks, thus avoiding the need to install hardware at customers’ sites to decompress received files such as images, videos, and text.

Note

Using Codecs to Compress and Digitize Speech

Speech, audio, and television signals are analog in their original form. Analog signals are transmitted in waves; digital signals are transmitted as on and off bits. Before they are transmitted over digital landlines or wireless networks, codecs compress (encode) analog signals and convert them to digital bits. Codecs sample speech at points along the sound wave and convert each sample to ones and zeros. At the receiving end, decoders convert the ones and zeros back to analog sound or video waves.

Codecs are located in cellular handsets, telephones, high-definition TV transmitters, set-top boxes, televisions, IP telephones, and radios. Codecs also compress voice in speech recognition and voicemail systems. With compression, codecs do not have to sample every height on the sound wave to achieve high-quality sound. They might skip silence or predict the next sound, based on the previous sound. Thus, fewer bits per second are transmitted to represent the speech.
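The sketch below illustrates the sampling and quantization step in simplified form: an analog-style sine wave is sampled 8,000 times per second and each sample is rounded to an 8-bit value. Real voice codecs such as G.711 add companding and compression on top of this basic idea; the tone and sample count here are illustrative.

    # A minimal sketch of sampling and quantizing an analog-style wave.
    import math

    SAMPLE_RATE = 8000      # samples per second, typical for telephone speech
    TONE_HZ = 440           # an illustrative test tone

    def sample_and_quantize(n_samples=8):
        digital = []
        for n in range(n_samples):
            analog = math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)  # -1.0..1.0
            digital.append(round((analog + 1) / 2 * 255))               # 0..255
        return digital

    print(sample_and_quantize())   # the bit values that would be transmitted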

Virtual Reality—Immersive Experiences

Virtual reality (VR) is the use of specialized software and hardware to produce video that creates an immersive experience for people who play video games and watch videos, sporting events, and movies. The goal of virtual reality is to create the illusion that people watching movies and sporting events are actually at the event rather than watching from a remote location; they are virtually experiencing it. For example, watching someone climb a mountain via VR can give the viewer a sense of the peril and freezing temperatures experienced by the climber. As a result of this immersive experience, virtual reality videos that depict assaults on people or scenes of destruction caused by war or natural disasters may create empathy in viewers.

Virtual reality (VR) is a technique for capturing immersive images and videos. For example, people can see the cruise ship or resort in VR to help select a vacation venue. Images captured in virtual reality formats have far greater detail than available on traditional photos and videos. The viewing angle is also larger, and not limited to just a TV screen.

Rather, the video image appears to be all around viewers. High-quality cameras and viewing devices are needed to capture, transmit, decode, and view 100- or 120-degree images. Special software can also be used to “stitch” together images from several cameras to create wide virtual reality images. In the future, social networks might allow people to upload posts in virtual reality formats.

According to Mike Quan, Co-Founder of Boston-based Boston 360:

Social media has connected us. Virtual reality provides people the opportunity to experience another person’s life, almost to see it through someone else’s eyes. It’s the ultimate telepresence.

Head Mounted Displays for Viewing Virtual Reality Content

Viewing virtual reality content requires special headsets called head-mounted displays (referred to as HMDs or goggles) with high-powered lenses and controls that enable users to focus in on parts of the video and to tilt the video. Additionally, a virtual reality app (small application) must be downloaded to the user’s smartphone to enable virtual reality formats. The app, in combination with a head-mounted display, enables people to watch high-definition videos in a virtual reality format.

Depending on the headset, users can control how they see the video with a physical remote or their hands. For example, moving their hands or their phone in a certain direction enables them to look at the video sideways. The virtual reality software on their smartphone or PC combined with the headset provides an immersive experience so that users feel as though they are actually within the movie. VR headsets are commonly connected to PCs so that users can play video games on their computers in a virtual reality format.

Compression software, multifunction graphical processing chips, increases in memory capacity, and financial backing from companies that see potential profits from virtual reality all enable virtual reality development. For example, graphical processing chips process large chunks of video, and powerful compression software means fewer bits need to be transmitted to create virtual reality images. Samsung, Facebook’s Oculus, Sony, Dell, Leap Motion, and Google manufacture virtual reality headsets. See Figure 1-9 for an Oculus headset.


Figure 1-9 A VR headset. (Photo by cheskyw/123RF)

Software developer Unity provides a 3-D software tool for creating virtual reality applications that a majority of the roughly 2 million virtual reality developers use. In addition, Google, Facebook, Apple, and Amazon are developing virtual reality capabilities for their smartphones and web sites.

The most commonly used virtual reality applications are video games. Additional applications under development or in use include:

  • Live televised sports events that people with headsets and compatible smartphones can view. VR provides 360-degree views of sporting events from all angles: front, sides, and back.

  • Training professional athletes with simulations of tactics.

  • Real estate applications in which prospective buyers see homes for sale without actually having to travel to each home.

  • A remodeling virtual reality application being tested by Lowe’s that enables customers to envision how remodeling projects will look.

  • Social network posts that enable people to connect with each other in a more personal way, using posts in virtual reality formats to share experiences and feelings.

  • Medical training that involves virtual reality videos for operating-room nurses, and anatomy courses in which students use VR to learn anatomy and gain “first-hand” knowledge of the steps in medical procedures.

  • Drones with high-quality cameras that capture images on the ground in virtual reality format.

Technical and Content Availability Challenges

One of the current issues with virtual reality is that it can create nausea in viewers. This is because people turn their heads faster to look at images than current frame rates can process these view changes. Because of this delay, people don’t see what they expect to see. As the technology matures, this issue should be resolved.

Another challenge is the need for additional content in virtual reality formats. Developers are waiting for additional VR users, and buyers are often reluctant to purchase head-mounted displays and download apps until more virtual reality content is available.

Additionally, current cellular networks don’t have the capacity to support virtual reality. VR requires either direct connection to broadband or Wi-Fi connected to broadband Internet links. Thus, it is limited to indoor use. Next generation 5G networks may have the capacity to support virtual reality. See Chapter 7, “Mobile and Wi-Fi Networks,” for information on 5G technologies.

Augmented Reality

Augmented reality, on the other hand, adds pictures of animated characters or other images to what users see when they look around them. Augmented reality requires specialized apps on smartphones, but not always specialized headsets or headsets resembling eyeglasses. The mobile game Pokémon Go, which was released in 2016, is an example of the use of augmented reality (AR). Pokémon Go is a joint effort of Niantic Labs, Nintendo, and the Pokémon Company. As reported in the July 12, 2016, Wall Street Journal article “What is Really Behind the Pokémon Go Craze,” by July of 2016 it was the most profitable game on Google’s and Apple’s app stores up until that time.

Augmented reality needs smartphones’ cameras, GPS, and position sensors. Position sensors rely on accelerometers within smartphones. All new smartphones are equipped with accelerometers able to detect how users tilt their smartphones. For example, are they tilting them sideways or straight up? Accelerometers enable smartphone viewers to modify the angle at which they view images.

Augmented reality is used in manufacturing and product assembly. In one example, workers wear special glasses such as Google Glass that display instructions and diagrams sent from computers, giving directions for the next steps in a process. In this way, workers don’t have to interrupt their work to read instructions.

Increasing Network Capacity via Multiplexing

Multiplexing combines traffic from multiple devices or sources into one stream so that they can share a circuit or path through a network. Each source does not require a separate, dedicated link.

Like compression, companies and carriers use multiplexing to send more information on wireless airwaves, fiber networks, and internal Local Area Networks (LANs). However, unlike compression, multiplexing does not alter the actual data sent. Rather, the multiplexer at the transmitting end combines messages from multiple devices and sends them on the same wire, wireless, or fiber medium to their destination, whereupon a matching device distributes them locally.

One important goal is to make more efficient use of the most expensive portion of a carrier’s network so that the carrier can handle the vast amounts of traffic generated by devices such as smartphones and computers. Multiplexing is also used by enterprises that link offices together or access the Internet by using only one circuit (a path between sites) rather than paying to lease multiple circuits from their provider. The two most commonly used types of multiplexing are time division and statistical.

Time-Division Multiplexing

Time-Division Multiplexing (TDM) is a digital multiplexing scheme that reserves capacity for each device or voice conversation. Once a connection is established, the capacity remains reserved even when the device is idle. For example, if a call is put on hold, no other device can use the idle capacity. Small slices of silence in thousands of calls in progress add up to large amounts of unused network capacity. This is the reason TDM is being replaced in high-traffic portions of networks by VoIP technologies, in which voice packets are interspersed with data and video traffic so that network capacity is not wasted during pauses in voice or data traffic.
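The following minimal sketch illustrates why idle devices waste capacity under TDM: each device owns a slot in every frame, so an idle device’s slot is transmitted empty. The devices and traffic shown are invented examples.

    # A minimal sketch of TDM: every device gets a reserved slot in each frame,
    # even when it has nothing to send.
    devices = {"A": ["a1", "a2"], "B": [], "C": ["c1"]}   # device B is idle

    def tdm_frames(n_frames=2):
        for frame in range(n_frames):
            slots = []
            for name, pending in devices.items():
                slots.append(pending.pop(0) if pending else "idle")  # slot reserved anyway
            print("frame", frame, slots)

    tdm_frames()
    # frame 0 ['a1', 'idle', 'c1']
    # frame 1 ['a2', 'idle', 'idle']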

Statistical Multiplexing: Efficient Utilization via Prioritization of Network Services

Statistical multiplexers do not guarantee capacity for each device connected to them. Rather, they transmit voice, data, and images on a first-come, first-served basis, as long as there is capacity. Ethernet, the protocol used in local area networks, Wi-Fi networks, and broadband networks, is an example of statistical multiplexing. Unlike time-division multiplexing, statistical multiplexing is asynchronous: a particular amount of capacity is not dedicated to individual devices. Rather, capacity is allocated on demand, as devices request it.

Statistical multiplexing can be used in a Wide Area Network (WAN) to connect customers to the Internet. It is also the most common method of accessing LANs within buildings; the Ethernet protocol uses statistical multiplexing for access to LANs. On Ethernet LANs, if more than one device attempts to access the LAN simultaneously, there is a collision and the devices try again later. Ethernet gear and software are used for LAN access and are located in each network-connected device, such as a printer, computer, or security monitor.

Statistical multiplexers support more devices and traffic than TDMs because they don’t need to reserve capacity when a device is not active. Carriers sell WAN Internet access via carrier Gigabit Ethernet offerings, supporting a range of speeds from 10Mbps to 100Gbps. If there is a surge in traffic, such as during peak times, the carrier can temporarily slow down traffic. However, because Gigabit Ethernet’s statistical multiplexing has the capability to prioritize traffic, customers who contract for more costly, high-priority service can obtain higher capacity than customers with lower-priority service during traffic spikes.
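For comparison, the sketch below fills slots on demand from whichever devices have traffic, which is the statistical multiplexing approach; no capacity is reserved for the idle device. The traffic shown matches the TDM example above.

    # A minimal sketch of statistical multiplexing: slots are filled on demand.
    from collections import deque

    traffic = deque([("A", "a1"), ("C", "c1"), ("A", "a2")])  # device B sends nothing
    SLOTS_PER_FRAME = 3

    frame = [traffic.popleft()[1] for _ in range(min(SLOTS_PER_FRAME, len(traffic)))]
    print(frame)   # ['a1', 'c1', 'a2'] -- one frame carries what TDM needed two for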

Other types of multiplexing in addition to statistical multiplexing include dense wavelength-division multiplexing (DWDM), a form of frequency-division multiplexing that divides traffic among frequencies (wavelengths of light) in fiber-optic networks to increase the fibers’ capacity. Dense wavelength-division multiplexing is discussed in the section on fiber-optic cabling below. Examples of the high capacities that multiplexed carrier networks support include:

  • Gigabit Ethernet, used in carrier and enterprise networks, can carry data at speeds of one gigabit per second (1Gbps) to 100 gigabits per second (100Gbps).

  • Terabit-speed routers deployed on the Internet are capable of transmitting at a rate of 1,000 gigabits per second (1 terabit per second); 10 terabits per second equals 10,000,000,000,000bps.

Using Protocols to Establish a Common Set of Rules

Protocols enable disparate devices to communicate by establishing a common set of rules for sending and receiving. For example, TCP/IP is the suite of standard protocols used on the Internet with which different types of computers, running a variety of operating systems, can access and browse the Internet. The fact that TCP/IP is simple to implement is a prime factor in the Internet’s widespread availability.

The fact that protocols used on the Internet are available for free in their basic form and work with a variety of browsers, operating systems, and computer platforms makes them attractive interfaces for enterprises, hosting, and cloud-computing sites that support remote access to services such as Microsoft Office documents. A web-based interface compatible with many types of computers and operating systems enables enterprises to support software at a central site.

Installing software at a central site minimizes support requirements. When an IT department supports applications such as the Office suite that are installed on each user’s computer, it must download software and updates to every computer and ensure that each has a compatible hardware platform and operating system. By locating the software at a central site rather than on each user’s computer, updates and support are simpler. In addition, fewer IT employees are required to maintain servers with applications on them in remote offices.

However, many of these frequently used protocols are structured in such a way that they add a great deal of overhead traffic to the Internet and enterprise networks. This is because these protocols require numerous short signaling messages between the user and the Internet for functions such as identifying graphics, checking for errors, and ensuring correct delivery and receipt of all the packets.

The following protocols are used on the Internet and on corporate networks that have a web interface to information accessible to local and remote users.

  • Ethernet This is the most common protocol used in corporate LANs. It defines how data is placed on the LAN and how data is retrieved from the network. Wi-Fi wireless networks are based on a different form of Ethernet.

  • Hypertext Markup Language (HTML) This is the markup language used on the Internet and on enterprise networks. Employees who write web pages for their organization often use it. HTML commands include instructions to make text bold or italic, or to link to other sites. Instructions, known as tags (not visible on Internet documents), are bracketed by opening and closing angle brackets (< and >), otherwise simply known as the less-than and greater-than symbols. Tags indicate how browsers should display the document and images within each web page. For example, <b> is a command for bolding text. Tags are delivered along with the web page when users browse the Internet and access information through web interfaces at enterprises. They are good examples of overhead bits.

  • Hypertext Transfer Protocol Secure (HTTPS) This is a protocol used to transfer data over the Internet and other networks in a secure fashion by encrypting data. It provides protection from hackers. It can authenticate (verify) that the network connected to is not spoofed, meaning it is the network it purports to be and that another network has not intercepted a communication intended for a different site.

  • Extensible Markup Language (XML) This is another markup language based on elements surrounded by tags that identify fields requiring user input. The tag <name> is an example of a tagged label; it is not visible to users. Other variable labels might include quantity, address, age, and so on. Firms can analyze responses provided by visitors to a site who fill out online surveys. Tagged responses can be sorted by fields such as geography or age. XML enables computers to automatically process responses collected online or in specialized applications such as purchasing and ordering functions in businesses. The protocol-related tags and labels identifying fields in XML create many extra overhead bits that are transmitted along with documents containing XML commands (a short sketch of this overhead follows this list).

  • Simple Object Access Protocol (SOAP) This enables communications between programs on different operating systems, such as Windows and Linux, by specifying how to encode, for example, an HTTP (Hypertext Transfer Protocol) header and an XML file. It eliminates the requirement to modify infrastructures to process HTTP and other communication transport protocols on networks.
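The following minimal sketch, using Python’s standard xml library, shows the overhead point made above: the tags wrapped around a short survey response take more bytes to transmit than the user’s data itself. The element names are illustrative.

    # A minimal sketch of markup overhead: compare the transmitted bytes
    # with the user data they carry.
    import xml.etree.ElementTree as ET

    xml_doc = "<response><name>Ana</name><age>34</age><city>Lima</city></response>"
    root = ET.fromstring(xml_doc)

    data = "".join(element.text for element in root)        # "Ana34Lima"
    print(len(xml_doc), "bytes transmitted for", len(data), "bytes of user data")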

Protocols and Layers

When describing their products’ capabilities, organizations often refer to the OSI layers. In the 1970s, the International Organization for Standardization developed the Open Systems Interconnection (OSI) architecture, which defines how equipment from multiple vendors should interoperate. Architecture refers to the ways that devices in networks are connected to one another.

Although not widely implemented because of its complexity, OSI has had a profound influence on telecommunications. The basic concept underpinning OSI is that of layering. Groups of functions are divided into seven layers, which can be changed and developed without having to change any other layer. (See Table 1-5 in the “Appendix” section at the end of this chapter for a complete list of layers.) LANs, public networks, and the Internet’s TCP/IP suite of protocols are based on a layered architecture.

Understanding the functionality at each layer of the OSI model clarifies the capabilities of particular protocols and equipment. Examples of the layers include the following.

  • Layer 2: Switches This layer corresponds to capabilities in the Data Link Layer, functioning as the links to the network. However, Layer 2 devices cannot choose between multiple routes in a network. Layer 2 switches are placed in wiring closets in LANs and forward messages to devices within their LAN segment. See Figure 1-6 for an illustration of how this works.

  • Layer 3: Switches and Routers Layer 3 corresponds to the Network Layer. These devices can select an optimal route for each packet. Routers select paths for packets on the Internet and route messages between networks.

  • Layer 5: Encryption Protocols These are Session Layer protocols that scramble the data in packets by using complex mathematical algorithms to keep it private.

  • Layer 7: Deep Packet Inspection (DPI) DPI services include Application Layer capability. DPI can look into packets to determine the application of the data within them.

Knowing the capabilities in each layer helps in understanding the protocol and equipment capabilities being described.

Virtualization: Space, Cost, and Maintenance Efficiencies

The term “virtual” refers to entities such as networks or servers that provide the functions of the physical devices they emulate. A virtual machine is software with the functionality of a computer. A virtual server is a single physical server that performs the functions of multiple servers. To illustrate, multiple virtual machines can exist within a single server, with each virtual machine performing the functions of a single server.

Note

Without server virtualization, each server in a data center would support only a single operating system. Virtualization enables each server to run multiple operating systems, with each operating system running multiple applications. Each operating system running multiple applications is a virtual machine. This reduction in the number of servers required to support vast numbers of applications made virtualization a key building block for cloud computing.

Technical advances have made it possible for large enterprises and cloud computing providers to use virtualization to consolidate servers. Supporting more than one operating system on a single physical server requires large amounts of processing power. However, with the development of powerful multi-core processors, parallel computing streams can perform multiple computer instructions simultaneously.

Virtualization host operating software from companies such as VMware, Inc. (VMware was acquired by EMC, which is now owned by Dell) and Microsoft allocates and manages computer resources such as memory, hard-disk capacity, and processing between the operating systems and applications on a server. Virtualization management software simplifies data center operations by providing the ability to allocate more resources in a data center from a single interface.

Server sprawl can be a problem when many applications are replicated on diverse servers. It is often a challenge to manage the large number of virtual machines located within a data center.

Scalability and Energy Savings

Carriers, ISPs, enterprises, and developers adopt virtualization as a way to save on energy, computer memory, staffing, and hardware costs. Installing applications on multiple virtual machines on each physical server decreases the number of physical servers required. It also wastes less capacity than dedicating a physical server to a single application, which often uses only 10 percent of the server’s capacity. This makes data centers more scalable: applications can be installed more easily, without adding extra hardware. Rather, a new virtual machine can be added to a physical computer that has spare capacity until the physical server is at between 70 percent and 80 percent of capacity.
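A rough sketch of the consolidation arithmetic follows, assuming each application previously used about 10 percent of a dedicated server and that hosts are filled to roughly 75 percent of capacity; the figures are illustrative, not a sizing rule.

    # A rough sketch of server-consolidation arithmetic under virtualization.
    import math

    def physical_servers_needed(n_apps, utilization_per_app=0.10, target_fill=0.75):
        # How many virtual machines fit on one host without exceeding the target fill.
        apps_per_host = math.floor(target_fill / utilization_per_app)
        return math.ceil(n_apps / apps_per_host)

    print(physical_servers_needed(1000))   # about 143 hosts instead of 1,000 dedicated servers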

In addition, having fewer computers to run applications results in less space used and lower facility cooling costs. Although individual servers running virtual operating software are more powerful and require more cooling than less-powerful servers, the reduction in the total number of physical devices results in overall energy efficiency.

Virtualization—Enabling Cloud Computing

Virtualization is a major enabler of cloud computing. Server virtualization refers to a single physical server with multiple operating systems and applications. Prior to server virtualization, each unique operating system required its own physical server. Virtualized servers enable multiple operating systems as well as multiple applications to reside on the same server.

Large cloud providers commonly have multiple data centers that contain replicated copies of all data. If a data center becomes disabled, another can easily take over its functions. Virtualization makes it less costly and complex for providers to support multiple data centers in different physical locations. This results in fewer physical servers, less electrical power, and less cooling, thus lowering providers’ energy costs.

Moreover, virtualization enables data centers to support multiple developers by providing a virtual computing platform for each developer while they are logged on to a virtual machine. Multi-core processors enable multiple developers to simultaneously log on to the same physical server. When a developer logs off from their area of the server, computing power is freed up for other uses.

Because of security and privacy concerns, large companies often do not want their applications and files on the same physical servers as those of other organizations. In these instances, they can elect to reserve a group of servers for their own use. Amazon refers to this feature as a Virtual Private Cloud. Other providers offer similar features. Not surprisingly, there is an extra monthly fee associated with this service.

Managing Virtualization

Organizations can realize many benefits by implementing virtualization capability in servers and storage networks. They also take on challenges managing them in complex environments.

Managing Memory, Virtual Machines, and Disk Storage in Virtualized Data Centers

Server virtualization has many benefits, including saving money on electricity, heating, and cooling. However, there are challenges in large data centers. One such challenge is managing memory in the host physical servers. Newer operating systems installed on host servers have more code and require more memory to operate. IT staff members allocate memory to applications on virtual machines by using hypervisor software such as VMware’s. Staff members need to monitor applications’ memory usage so that servers can be upgraded or applications moved to other hosts if memory in the current host is not adequate. If this isn’t done, applications will run slowly and response times on individual user computers will be degraded.
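The following minimal sketch shows the kind of bookkeeping involved: it flags any host where the memory allocated to its virtual machines approaches the physical memory installed. The host names, sizes, and the 90 percent threshold are illustrative assumptions.

    # A minimal sketch of checking virtual machine memory allocations per host.
    hosts = {
        "host-01": {"physical_gb": 256, "vm_allocations_gb": [32, 64, 64, 48]},
        "host-02": {"physical_gb": 128, "vm_allocations_gb": [64, 48, 32]},
    }

    for name, host in hosts.items():
        allocated = sum(host["vm_allocations_gb"])
        if allocated > 0.9 * host["physical_gb"]:
            # This host is a candidate for an upgrade or for moving a VM elsewhere.
            print(name, "is nearly out of memory:", allocated, "of",
                  host["physical_gb"], "GB allocated")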

New servers are often equipped with eight CPUs, quite literally eight CPU chips on a single host server. However, as the amount of processing needed to run programs grows, even this is not always adequate for the virtual machines installed on them. Programs that include sound and video are “fatter,” with more code, which requires additional CPU overhead. Under these conditions, it’s difficult to estimate the amount of CPU power required for applications running on virtual machines.

In addition, disk storage is being used up at a faster rate. Users now routinely store MP3 files in their e-mail inboxes. In data centers where user files and e-mail messages are archived, this can deplete spare storage in Storage Area Networks (SANs) at a faster rate than planned. Thus, storage, memory, and processing requirements should be monitored. In large, complex data centers, it can sometimes be easier to monitor and manage single physical servers rather than virtual machines and storage.

In small organizations, managing memory, storage, and CPU usage is not as complex. With only three or four physical servers, it’s not as difficult to track and manage resource allocation.

Server Sprawl

Server sprawl is the unchecked proliferation of virtual machines on physical host servers. (Virtual machines are also referred to as images.) Managing server sprawl in large data centers is a major challenge. Because it’s easy to install multiple images of applications on virtualized servers, the number of applications can escalate rapidly. Data centers that previously had 1,000 servers with 1,000 applications can potentially now have 8 images per physical server and 8,000 applications to manage.

Another cause of server sprawl is application upgrades. To test an application before it’s upgraded, IT staff create an image of the application, upgrade the image, and then test the upgrade on a small group of users before making it available to all users. However, often both the original and the upgraded application are left on the physical server, which further contributes to sprawl. To complicate matters further, if either the original or the upgraded version is moved to a different physical server, it can be difficult to determine that there is a duplicate. Containers, which are discussed below, are also vulnerable to sprawl.

Containers: A Newer Form of Server Virtualization

Containers are a form of server virtualization in which multiple small programs share a single operating system on one server. Like server virtualization, containers enable a single physical server to hold multiple applications and components of applications.

Unlike virtualized servers in which each virtual machine (each application) requires a separate operating system, applications in containers share a single operating system. See Figure 1-10 for a comparison between containers and virtualized servers. This enables containers to hold many more applications than virtualized servers. This is because in virtualized servers, each application has an operating system and also a virtual copy of the hardware that runs the virtual machine. Thus, each operating system uses memory and server capacity.


Figure 1-10 A comparison of virtualized servers and containers.

In contrast, container technology’s use of a single operating system is an efficient use of memory and disk space. Because of this, containers hold many more applications than virtual servers. Netflix alone has over 500 micro services in its containers located in Amazon’s and Google’s data centers in the United States and Europe.

Containers are used to hold small, related programs that are referred to as micro services. For example, Netflix and LinkedIn might use containers to differentiate and manage each of the many services they offer customers without having to change programming for other services. This is efficient when organizations need to modify just part of their offerings: with containers, programmers don’t need to alter an entire application, just a particular piece of it, a micro service. Thus, if there’s a change in one service, programmers need only modify that particular micro service.
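As a rough illustration of what one micro service might look like, the sketch below implements a single “catalog” service as a small, self-contained web program that could be packaged and run in its own container. The service name, data, and port are invented for the example.

    # A minimal sketch of one micro service: a tiny catalog lookup over HTTP.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CATALOG = {"1": "Documentary A", "2": "Comedy B"}   # stand-in data

    class CatalogHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Respond to /titles with the catalog as JSON; anything else is a 404.
            if self.path == "/titles":
                body = json.dumps(CATALOG).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Each micro service listens on its own port and can be updated and
        # redeployed without touching other services.
        HTTPServer(("0.0.0.0", 8080), CatalogHandler).serve_forever()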

Factors in Choosing between Containers and Virtualization

Enterprises might choose virtualized servers when they have applications that require a variety of operating systems in a single physical server. This is because containers support only a single operating system per server.

Organizations that switch to containers from virtualized servers need to rewrite their applications so that they’re compatible with the open source operating systems used with containers and with the container platform, e.g. Docker or Rocket. Examples of these open source operating systems are Linux distributions such as Red Hat and Ubuntu. Windows operating systems can also be used in containers. There is no cost for the open source software itself, but a company may need to hire a programmer with the skills to rewrite its programs for compatibility with the container’s operating system.

Container applications are installed on bare metal servers, which have only a single operating system. Many containers are installed on cloud platforms such as those provided by Amazon and Rackspace. Private equity firm Apollo Global Management LLC owns Rackspace.

Each container is located on a private server dedicated to a single organization; multiple customers do not share containers in the cloud. This is an advantage for customers who want to be assured that the total capacity of the server is reserved for their applications. They are isolated from other customers’ traffic, also referred to as noise, which might otherwise use much of a shared physical server’s resources.

Applications in containers located in the cloud or in large data centers are accessed via application program interfaces (APIs), small programs that translate between the programs in the containers and the programs in the data center from which staff access them. APIs are used by both developers and end users. Accessing an application is referred to as pulling an image (an application) down.
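A minimal sketch of calling such a container-hosted service through an HTTP API follows; the URL and the /titles endpoint are hypothetical placeholders that match the catalog sketch shown earlier, not a real provider’s API.

    # A minimal sketch of "pulling" data from a container-hosted service via HTTP.
    import json
    import urllib.request

    def get_titles(base_url="http://catalog.example.internal:8080"):
        # The host name and path are hypothetical; substitute a real endpoint.
        with urllib.request.urlopen(base_url + "/titles") as response:
            return json.loads(response.read().decode("utf-8"))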

Security Challenges with Containers

Because containers share a single operating system, if that operating system is hacked, the entire set of containers, with its large number of micro services, is likely to be damaged. Thus, IT staff need to be extra vigilant about who is allowed to access each container. Additionally, installing certain applications as read-only protects against hackers damaging micro services in containers.

Docker: A Container Software Platform

Docker software was developed as open source, free software. It is a software platform that enables developers to write applications compatible with containers. Docker is a set of software tools for coding, testing, and running applications on Linux- and Windows-based containers. The Docker company offers paid consulting and container set-up services as well as the open source Docker software.

The Cloud: Applications and Development at Providers’ Data Centers

Cloud computing refers to the paradigm by which computing functions, document storage, software applications, and parts of or all of a data center are located and maintained at an external provider’s site. Businesses use cloud computing to more quickly develop applications and to eliminate the need to maintain and update applications. Companies additionally can use the cloud to dramatically shrink the size of their data centers. Start-ups and small businesses often eliminate them altogether and rely solely on cloud services.

Cloud computing is based on a distributed architecture in which providers’ data centers are duplicated in multiple geographic locations. See Figure 1-11 for distributed cloud data centers. Cloud providers generally build duplicate data centers that are connected to each other by high-capacity fiber-optic cabling. For example, they may have data centers on the west coast, on the east coast, and a few in the central or western states. The goal of connecting the data centers with fiber-optic cabling is that if one data center fails, a duplicate data center can take over operations.


Figure 1-11 Cloud computing distributed data centers connected by fiber-optic cabling.

Organizations now have choices about where applications are located. They can place applications in any of the following locations:

  • A hosting center, where companies often place their own computers and manage them remotely. The hosting company is responsible for:

    – Security

    – Broadband links to the hosting site

    – Sustainability in the event of power outages and natural disasters

  • Their own data centers, by purchasing rights to the software and installing it onsite

  • The cloud, where a provider manages the hardware and the application software

When cloud computing was first introduced, it was wildly popular with small start-ups that wanted to avoid investing in computer hardware because of growth uncertainties. However, larger businesses were more cautious about moving applications to the cloud because of concerns about privacy, security, and loss of control over applications. Cloud computing is now widely recognized by organizations of all sizes as a legitimate way of managing and developing applications. However, there are still concerns over security. There are additionally challenges involved in moving applications to the cloud and controlling runaway usage costs associated with the cloud.

Private vs. Public Cloud Service

Depending on their privacy and security requirements, organizations choose private or public cloud services. With private cloud service, the organization’s server is not shared with any other organization. This assures customers that there is no interference from other customers’ applications, which might send large amounts of traffic to the server.

With public cloud service, an organization’s applications are on servers used by other customers; they share the capacity of the physical server. Public cloud service is more commonly used than private cloud service, which is more costly.

Cloud Computing Fees

The pricing structure for most cloud-computing providers is typically based on the number of transactions and the number of users for a given application. Customers that contract with providers for e-mail or other data applications pay by the hour for the processing (CPU) capacity they consume or by the gigabyte for storage and data transferred. In addition, cloud providers often charge an implementation fee, and might charge developers who use their platform to develop applications for bandwidth consumed on the provider’s Internet connections.
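The sketch below adds up a usage-based bill of this kind; the per-hour, per-gigabyte, and implementation-fee figures are illustrative placeholders, not any provider’s actual rates.

    # A rough sketch of a usage-based monthly cloud bill with placeholder rates.
    def monthly_cloud_bill(compute_hours, storage_gb, egress_gb,
                           hourly_rate=0.50, storage_rate=0.023,
                           egress_rate=0.09, implementation_fee=0.0):
        return (compute_hours * hourly_rate
                + storage_gb * storage_rate
                + egress_gb * egress_rate
                + implementation_fee)

    # Example: 720 hours of one instance, 500 GB stored, 100 GB transferred out.
    print(round(monthly_cloud_bill(720, 500, 100), 2))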

Rationale for Cloud Computing

There are many reasons why organizations adopt a cloud-computing strategy. When cloud computing was first available, most of these reasons revolved around the desire to carry out IT functions more cost-effectively with fewer capital outlays. A small start-up company, for example, might not have the resources to hire staff and purchase computing hardware. No onsite software to maintain, fewer servers, and fewer technical staff required for maintaining and upgrading applications were all motivations for moving applications to the cloud. In fact, smaller companies with fewer than 200 or 300 employees may have only a single server to support because most of their applications are located in the cloud.

Using a service such as Amazon Web Services or Google Apps (offered by Amazon and Google, respectively) for computing and archiving data saves start-up costs with fewer financial risks. If the business grows, it won’t outgrow the hardware that it purchased earlier. Conversely, if the business fails or slows down dramatically, it doesn’t have an investment in hardware and software with little resale value or one that might be too sophisticated for its changing needs. It is also useful in supporting IT in acquired companies and gearing up for spikes in usage.

The cloud is additionally a way for organizations, particularly start-ups, to develop and test applications before making them available to staff or customers. The following is an example of how a new company used cloud computing to test the viability of its planned applications. The start-up rented 50 client computers, one server with four application programs, and one database machine from Amazon for 0.50¢ an hour to test its application without having to invest in hardware. Without the ability to test applications on Amazon, its start-up costs would have been prohibitive because it would have had to lease 50 computers plus a server and a database machine.

Large enterprises were initially interested in cloud computing because there was a perception that it would be less costly than hiring additional staff as well as purchasing and supporting hardware and software for new applications. With cloud computing there is no capital investment. Companies pay for usage per person, much as they would pay to rent a car rather than buy one.

Cost savings are not always the main advantage or impetus for using cloud computing. While capital and staff expense are lower with cloud computing, monthly usage costs can balance out these savings so that businesses don’t spend less money on overall IT services. Usage fees from employees’ extra computing and data access can eliminate expected savings as employees use more computing resources than initially envisioned.

Both large and small organizations look to cloud computing as a way to focus their attention and assets on core missions. They may consider some IT applications to be utilities better managed by IT experts at cloud companies. For example, they may use Microsoft Azure’s cloud-based Office applications for word processing, collaboration, presentations, and spreadsheets to avoid managing and upgrading these applications. With Office in the cloud, upgrade responsibilities are taken care of at Microsoft’s data centers.

An important advantage of cloud computing is the speed at which new applications can be implemented in the cloud. This can equate to competitive advantages. For example, customer service or sales applications can increase sales revenue. Faster access to a company’s analytical data about sales and customers is another example of how the speed of developing applications can help businesses grow or retain current customers. In fact, while saving money may be the initial motivation, it is often not the key advantage of using cloud computing. Agility in business operations, the ability to launch strategic applications, and the ability to scale up and shrink as required are stronger motivations for using the cloud. The following is a quote from an IT Director in the Boston area:

We ordered Oracle one day and the next day it was launched in the cloud. Without the cloud it would have taken three months and additional staff time. When we purchase new software, we always look for applications that are cloud ready.

Three Categories of Cloud Services—Layers in the Cloud

There are three generally agreed upon but sometimes overlapping classifications of cloud computing offerings. It’s not unusual for providers to offer more than one of these types of services.

The classifications are as follows:

  • Software as a Service (SaaS) Application developers manage and develop specific applications for enterprises. Enterprise and residential customers pay monthly fees to use these applications, which are generally accessed via a web browser interface.

  • Platform as a Service (PaaS) Providers make their hardware and basic application suite available to developers and enterprises that create specialized add-on applications for specific functions.

  • Infrastructure as a Service (IaaS) Developers create applications on basic computing and storage infrastructure owned and supported by cloud providers. This is the hardware used by developers.

SaaS

Software as a Service is the most frequently used cloud service. Organizations that want to avoid the staff time needed to develop, test, and roll out new applications often use Software as a Service. With SaaS, new applications can be implemented more quickly than if the customer implemented them on their own servers. They can turn to developers to write them or simply use existing cloud-based applications.

Customers are attracted to Software as a Service because the cloud provider manages the applications, the operating systems on which applications are installed, the virtualization in the server, the servers themselves, storage, and the broadband connections between users and the applications. The customer’s responsibilities are porting applications to the cloud and monitoring billing and usage. Many SaaS providers, such as Salesforce.com and NetSuite, offer applications used by business and commercial organizations.

Salesforce.com offers its customer relationship management (CRM) services for managing relationships concerning customers and sales prospects. Its software can be used to automate writing sales proposals. It also profiles and targets customers based on these profiles. NetSuite, owned by Oracle, offers businesses an end-to-end software suite that includes CRM, inventory, and e-commerce applications that integrate web sales with back-office functions such as billing and accounts receivable.

Young people and adults alike use SaaS applications in their personal lives. For example, students in particular back up their music on sites such as iTunes. The following is a quote from a Chinese exchange student in the Boston area:

I use the cloud to back up my music. I like being able to listen to my music from all of my devices. And, I don’t have to worry about running out of space on my smartphone.

Services for residential customers, such as the Office-like Google Docs suite, the document-sharing service Dropbox, the backup service Carbonite, and the social media sites LinkedIn and Facebook, are examples of Software as a Service.

PaaS: Cloud-Based Data Centers with Specialized Software

Platform as a Service providers manage the operating system, servers, virtualization, storage, and networking within their data centers. They provide data centers with specialized software to enterprises and application developers. For example, one PaaS provider, Microsoft, currently maintains massive data centers in the United States, Europe, and Asia. The Microsoft Azure platform is available to developers to customize applications that they in turn offer to their customers. Both Salesforce.com and NetSuite sell directly to developers as well as to business customers. The services Salesforce.com and NetSuite sell to developers are considered PaaS offerings because developers use the platforms to customize and support software for their own customers.

An enterprise customer can also create applications directly on Azure, or use standard office software such as word processing and spreadsheet applications, as well as productivity applications such as web conferencing and calendaring, and later port them to Azure. By using applications housed on Azure, organizations eliminate the complexity of supporting in-house applications. Azure also offers storage facilities to developers and enterprise customers. A bank, for example, can store a year’s worth of records on Azure to which its customers and its own staff have access.

Akamai Technologies maintains platforms focused on e-commerce that are deployed on 1450 public networks worldwide, where it hosts web sites for major enterprise customers. Web applications located on these servers generate massive amounts of multimedia Internet traffic, such as games, map generation, search, and dealer/store locators. Akamai intercepts this web traffic for its customers and sends it to its destination on the fastest available route.

Akamai’s security services are available at each of the public networks in which its equipment is installed. These services include protection against distributed denial-of-service (DDoS) attacks, in which computers in multiple locations around the world simultaneously send thousands of bogus messages, or messages containing viruses, to a network in a coordinated attack. If one of its sites does become disabled, Akamai has the ability to run its applications at other locations.

IaaS: Infrastructure as a Service

With Infrastructure as a Service, customers manage their applications, their data, the operating system, and middleware for applications located at a cloud provider. The cloud provider manages the servers, virtualization, data storage, and networking within their own data centers so that an adequate number of users can reach the applications within the data center.

Note

Amazon: The Gorilla of Cloud Computing

Amazon is the largest cloud provider in the United States. In addition to its computing infrastructure for developing applications, it offers storage. Interestingly, Netflix, its competitor for streaming media services, is a major customer. Netflix stores its movies on Amazon servers. Thus, when Netflix customers in the United States stream movies and TV shows, they access the shows from computers located in Amazon’s data centers. Amazon has expanded into applications such as databases and analytics to analyze usage statistics of applications on its site. Its HSM security encryption service is designed to protect the security of files located on Amazon’s infrastructure.

Other major cloud providers include Microsoft Azure, IBM, Salesforce.com, Rackspace, Google, and Oracle. Rackspace offers hosting in addition to its cloud service. With hosting, customers supply their own servers (computers) and additionally manage the software on those servers. The following is a quote from a staff member at a small organization that uses Amazon EC2:

My company is a start-up. We rent 50 client computers, one server machine with a LAMP (Linux, Apache, MySql, PHP) package and one database machine for just 0.50¢ per hour. There are many features in Amazon to customize the environment according to our requirements.

Spinning Applications to the Cloud

When organizations talk about moving applications to the cloud, they use the term spin. They spin applications up to the cloud over broadband links between their site and the Internet. When transmitting applications over broadband links to the cloud, companies often protect these transmissions from eavesdroppers and interception by creating Virtual Private Networks (VPNs). The VPN creates a secure link between the cloud and the customer by encrypting the data and providing a way to securely decrypt (re-create) the data at the receiving end.

Encryption is the process of using a mathematical formula to reorder bits so that they are unrecognizable to equipment that doesn’t have the “key” needed to decrypt them back into readable data. The VPN also ensures that the data exchanged is formatted so that it is not blocked by firewalls at either end. A firewall is a software application that all incoming transmissions must pass through; firewalls are programmed to accept only certain transmissions and to block known viruses. The software used to spin applications to the cloud makes the addressing formats compatible between the cloud’s infrastructure and the customer’s data center, and vice versa.
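
To make the idea of encrypting data before it crosses a broadband link concrete, the following minimal Python sketch uses the Fernet recipe from the third-party cryptography package. Real VPNs negotiate keys automatically between the two ends, so the manual key handling here is purely illustrative.

    # Minimal sketch of symmetric encryption/decryption, assuming the
    # third-party "cryptography" package is installed (pip install cryptography).
    # A real VPN negotiates keys automatically; here the key is generated and
    # shared manually purely for illustration.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()             # secret key shared by both ends
    cipher = Fernet(key)

    payload = b"customer-records.csv contents ..."
    ciphertext = cipher.encrypt(payload)    # unreadable without the key
    print(ciphertext[:40])                  # looks like random bytes

    plaintext = cipher.decrypt(ciphertext)  # receiving end re-creates the data
    assert plaintext == payload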

Another complication of spinning applications to the cloud is that many users access applications from mobile devices. Companies have to figure out how to get applications to these users when they’re not sitting at their desks. Two issues are that mobile users may be accessing an application from a location with poor mobile coverage and from a device with a small screen that makes it difficult to view the data easily.

Moving applications from data centers to the cloud can involve rewriting them or changing internal processes. For example, operating systems and addressing schemes used by a particular cloud provider might be different from those utilized by its customer. Addressing schemes can include formats such as IP addresses assigned to databases and applications, and Media Access Control (MAC) addresses for individual computers. In addition, compatibility between the two infrastructures is important, because applications that reside on a provider’s infrastructure interface might not be compatible with its customers’ databases.

Regulations and the Cloud

Various state and national regulations mandate that organizations in fields such as banking, healthcare, pharmaceutical, and government retain files for specified lengths of time. In addition to archiving information with cloud providers, organizations often use these providers as a lower-cost emergency backup in the event that their data is destroyed in a natural disaster, fire, or computer hacking incident. Enterprises can store up to 1 petabyte (PB) of data in servers that are often as large as a refrigerator. These servers consume large amounts of energy and take up a great deal of space.

Computers used for storage are now also able to take advantage of efficiencies attained by virtualization. Data stored in different database formats (for example, Microsoft SQL vs. Oracle) can be stored on the same physical computer by using virtualization able to separate files by operating system. Thus, the databases of various customers can be stored on the same storage server, resulting in server consolidation, both for storage and for running applications.

Security in the Cloud

Security is a major concern when organizations move applications to the cloud. While cloud services are often more secure than those provided at small and medium-sized companies that don’t have the resources to hire top security staff, they are not without risks. Cloud applications accessed by large numbers of customers are the most vulnerable to hacking attacks.

There is often a conflict between making sites easy to access and having sufficient authentication requirements to protect against intrusions. According to Alert Logic, a cloud security and compliance firm headquartered in Houston, Texas, sites with high amounts of financial, health, and other personal information are at particular risk. Information gleaned from hacking these sites can be sold for millions of dollars. Moreover, because of their high profiles, large cloud providers are particular targets.

The cloud’s most vulnerable link is often the staff and customers that access cloud-based applications. An important ingredient in protecting against hacker attacks is users who keep their log-on credentials secret. Consumers who use cloud services may share their log-on credentials with friends or log in from public Wi-Fi, where their passwords and user IDs may be stolen. Many public Wi-Fi services in cafes and other public areas don’t provide encryption, making these transmissions vulnerable to interception; people’s log-on credentials can be stolen while they are transmitted over the air.

To avoid being hacked, companies require employees to log in with strong passwords and multi-factor authentication. An example of multi-factor authentication might be a password and a token that generates a unique code that users type into their computer when they log into the cloud. These organizations may also mandate encryption on transmissions sent from mobile devices.
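
As an illustration of the kind of one-time code such a token generates, the short Python sketch below implements the time-based one-time password (TOTP) scheme described in RFC 6238 using only the standard library; the shared secret shown is a made-up example.

    # Minimal TOTP (time-based one-time password) sketch per RFC 6238,
    # standard library only. The shared secret below is a made-up example.
    import base64, hashlib, hmac, struct, time

    def totp(shared_secret_b32: str, digits: int = 6, step: int = 30) -> str:
        key = base64.b32decode(shared_secret_b32)
        counter = int(time.time()) // step                # 30-second time window
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # e.g. "492039" -- changes every 30 seconds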

As a further step, organizations may lease security appliances (computer hardware with specialized software) or security software from companies such as Amazon and Alert Logic. In addition to its HSM encryption service, Amazon, along with other cloud security companies such as Threat Stack, offers software applications with assessment tools to evaluate applications’ vulnerability to hackers, anomalies in workflow, and compliance with industry standards.

Finally, it’s up to customers to monitor and update applications and browsers in the cloud to avoid possible security breaches. Customers can request reports in the form of security logs from their cloud provider so that they or their consultants can audit their applications for security breaches. Logs that show spikes in usage may indicate a denial-of-service attack, in which overwhelming amounts of traffic are sent to a site, making it difficult or impossible for legitimate users to access its applications. Furthermore, hackers may introduce viruses that destroy data files in cloud-based applications or steal a company’s private information.
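
A minimal sketch of the kind of check a customer or consultant might run over such usage logs follows; the simple hour-by-hour request-count format is an assumption made for illustration.

    # Flag hours whose request counts are far above the typical (median) load,
    # a pattern that may indicate a denial-of-service attack.
    # The simple (hour, requests) log format is assumed for illustration.
    from statistics import median

    hourly_requests = [1200, 1180, 1250, 1230, 1190, 9800, 1210]  # sample log data

    def flag_spikes(counts, multiple=3.0):
        baseline = median(counts)
        return [(hour, c) for hour, c in enumerate(counts) if c > multiple * baseline]

    print(flag_spikes(hourly_requests))   # -> [(5, 9800)]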

Shadow IT—Departments Independently Signing up for the Cloud

In large companies it’s not unusual for staff in various departments to sign up for cloud services that fit the particular needs of their department without prior approval from the IT (Information Technology) staff. If a department stores intellectual property or confidential information in the cloud, it may not have the expertise to manage the security needed for access to the cloud, and it may not use strong passwords or authentication. This is an ongoing challenge for large businesses, where individual departments may feel that IT does not respond quickly enough to their needs.

Fewer IT Employees; Different Skills—DevOps

When applications are moved to a cloud provider’s data center, companies need fewer staff to manage them. An organization that formerly required five people to manage its Enterprise Resource Planning (ERP) application may need only a single person to manage it once the application is in the cloud.

However, the skills necessary to support cloud-based applications are different from those used to manage traditional onsite, data center-based applications. The skill most often in demand when large and medium-sized companies move applications to the cloud is DevOps, short for development and operations.

The term DevOps refers to developing, provisioning (installing), and managing software. In companies that have applications in the cloud, it means application development, spinning applications up to the cloud, and monitoring cloud-based applications. DevOps staff are involved in planning which applications to port to the cloud, tracking security issues and the causes of delays when accessing cloud-based applications, monitoring access to the cloud, and integrating cloud applications with business processes. Importantly, DevOps staff collaborate with other IT personnel and with management to determine the organization’s needs and then develop applications compatible with the cloud.

Compatibility with the Cloud

All applications are coded to a particular operating system, and operating systems are dependent on the characteristics of the hardware on which they run. These applications may not be coded to operate on all cloud platforms, and without changes some might not be compatible with the hardware in a provider’s data centers. The addressing formats also need to be compatible between the cloud’s infrastructure and the customer’s data center, and vice versa. Addressing schemes can include formats such as IP addresses assigned to databases and applications, and Media Access Control (MAC) addresses for individual computers.

A common practice is for customers to program their applications using a stack of open source LAMP programs (Linux, Apache, MySQL, PHP), packages located, for example, on Amazon. Stacks are programs that work together: Linux is an operating system, Apache is a web (HTTP) server, MySQL is a database, and PHP is a programming language. Perl and Python are also used instead of PHP. LAMP programs may be grouped together in containers.

APIs (Application Programming Interfaces), which are transparent to customers and cloud providers, create compatibility between operating systems, hardware, and applications. This enables different software programs to communicate with each other. Most companies use APIs to spin applications to the cloud.
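
As a hedged illustration of what calling a provider’s API looks like in practice, the short Python sketch below uploads a file to Amazon’s S3 storage service using Amazon’s boto3 library. The bucket name and file path are made-up placeholders, and the sketch assumes boto3 is installed and AWS credentials are already configured.

    # Hedged sketch: uploading a file to Amazon S3 through the provider's API,
    # assuming the boto3 package is installed and AWS credentials are configured.
    # "example-backup-bucket" and the file path are made-up placeholders.
    import boto3

    s3 = boto3.client("s3")                       # API client hides the hardware details
    s3.upload_file("reports/2017-sales.csv",      # local file (placeholder path)
                   "example-backup-bucket",       # cloud bucket (placeholder name)
                   "archive/2017-sales.csv")      # object key in the cloud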

Some applications are not architected for the cloud. Standard web browser interfaces are used with APIs to access cloud-based applications. Before applications are spun up to the cloud, organizations test access to them with multiple web browsers to make sure they will work for all staff, even when staff are traveling or accessing them from home computers where a variety of browsers might be installed. If organizations want something more sophisticated than a web user interface, they might keep those applications in-house rather than putting them in the cloud.

Monitoring Runaway Costs, Service Logs, and Congestion on Transmissions to the Cloud

Managing cloud-based operations is an ongoing task. It involves the following (a simple cost-monitoring sketch follows the list):

  • Image Monitoring costs

    • – Are costs increasing in line with business growth?

    • – If costs are rising faster than the business, what is causing the increase?

    • – Are particular staff or departments using an unnecessary amount of computer resources?

  • Image Tracking congestion on broadband links connected to the cloud

    • – Is higher capacity needed?

    • – Is there too much capacity?

  • Image Monitoring cloud providers’ reports on outages or other glitches

  • Image Checking out user complaints

    • – Are broadband links congested or are complaints coming from people accessing the cloud remotely using their cable modems or public Wi-Fi services that don’t have adequate broadband bandwidth?

  • Image Making sure that cloud applications work for users connecting from mobile devices

  • Image Making sure that providers are adhering to contractual obligations for uptime, availability, and security
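
To make the first item on this list concrete, here is a minimal Python sketch that compares growth in cloud spending with growth in business volume; the figures and the 10 percent tolerance are made-up assumptions.

    # Compare growth in cloud spending with growth in business volume.
    # All figures and the 10% tolerance are made-up assumptions for illustration.
    monthly_cloud_cost = {"Jan": 42000, "Feb": 44000, "Mar": 61000}     # dollars
    monthly_revenue    = {"Jan": 900000, "Feb": 930000, "Mar": 950000}

    def growth(series):
        months = list(series)
        first, last = series[months[0]], series[months[-1]]
        return (last - first) / first

    cost_growth, revenue_growth = growth(monthly_cloud_cost), growth(monthly_revenue)
    if cost_growth > revenue_growth + 0.10:    # costs rising much faster than the business
        print(f"Investigate: cloud costs up {cost_growth:.0%}, revenue up {revenue_growth:.0%}")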

Integrating Applications Located at Different Providers

There are now web applications that enable companies to integrate or synchronize applications with each other when they are located at multiple cloud providers’ sites. This enables companies to compare data located at different providers’ sites, which is helpful when applications from the same company need to run in different specialized providers’ cloud-based data centers. Oracle, Microsoft, and Salesforce.com are examples of providers that offer specialized cloud platforms, such as Microsoft’s Azure and Oracle’s database services.

Integration is often needed to develop meaningful information. For example, the sales department may need sales figures compared to financial data. If the financial data is at, say, Amazon, and the sales data is at Salesforce.com, it may be difficult to determine the gross margins on products. If, for example, a NetSuite application (an ERP suite aimed at mid-sized companies) runs at Amazon, the sales department can see in Salesforce.com how many widgets were sold, but not the gross or net profit on each, or the inventory data held in the NetSuite application.

There could be one database in the cloud for Oracle, one for NetSuite, and one for manufacturing, each in a separate cloud. Each cloud is a silo that needs to “talk” to the other applications. If Salesforce.com and NetSuite are in different clouds, each will have a different database. The ideal solution would be for applications in separate clouds to be linked together directly, but that is not currently possible; instead, integration tools synchronize data between them.
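
A hedged sketch of what such integration amounts to in practice appears below: sales records pulled from one provider’s API are joined with unit costs pulled from another’s to compute gross margin per product. The two fetch functions and the data they return are made-up placeholders standing in for each provider’s own API calls.

    # Join sales data from one cloud provider with cost data from another to
    # compute gross margin per product. fetch_sales() and fetch_costs() stand in
    # for each provider's own API calls; the data shown is made up.
    def fetch_sales():     # e.g., from a CRM provider's API
        return {"widget-a": {"units": 120, "revenue": 6000.0},
                "widget-b": {"units": 40,  "revenue": 3200.0}}

    def fetch_costs():     # e.g., from an ERP provider's API
        return {"widget-a": 35.0, "widget-b": 55.0}          # unit cost in dollars

    sales, unit_costs = fetch_sales(), fetch_costs()
    for product, figures in sales.items():
        cost = figures["units"] * unit_costs[product]
        margin = (figures["revenue"] - cost) / figures["revenue"]
        print(f"{product}: gross margin {margin:.0%}")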

In the pharmaceutical industry, document workflow management that tracks drug development is needed for regulatory approval of new drugs. Product safety must be documented; safety results, pricing, and the name of the drug are all required reports. Pharmaceutical companies must report on problems such as the adverse effects of drugs and what caused them as well as product packaging issues such as tampering with over-the-counter drugs in retail outlets. While people in the same company may be able to add to reports fairly easily, having staff from other organizations access this information on the cloud complicates security because of log-on and authentication requirements.

Because of security concerns, collaboration on product development between multiple companies is a challenge when using the cloud. Worried about security, businesses build all kinds of barriers around access to this data. Organizations that need to collaborate with other companies may therefore keep some applications in-house, where they can control access more easily.

Keeping Up with Automatic Updates

Without cloud computing, IT departments have the challenge of updating applications on their own as updates become available. Some of these updates are security patches for newly discovered viruses. A virus is a piece of computer code developed by hackers to damage and/or steal computer files. As new viruses are discovered, manufacturers and developers provide application updates to customers. These updates are not always applied automatically; it’s at the discretion of IT staff when to install updates and whether to test them before making them available to corporate staff.

When applications are in the cloud, cloud software automatically applies updates. This can be disruptive because the user interface may change, or customers may not be informed in advance about an upgrade that affects how users interact with an application. For example, users may have been trained to access information in a database a certain way or to use particular commands in a spreadsheet application. Additionally, and more critically, a staff person’s computer operating system and hardware need to be compatible with updates. This can be a problem with older computers and operating systems.

Provider “Lock-in”—Moving Applications between Providers

Often, organizations that consider using the cloud for particular applications are concerned about portability. Customers may wish to change providers because of factors such as cost and service. Others may want to move from a more limited hosting environment to a fuller-service cloud provider.

Another layer of complexity is that applications coded for one provider’s operating systems and hardware might not run in another provider’s data centers without changes. A healthcare company that moved its applications from Rackspace’s hosting environment to Amazon found that the transition required 6 months of recoding software and Application Programming Interfaces to make them compatible with Amazon’s data centers. It additionally involved training staff on how to access applications on Amazon.

Various consultants and developers offer software or consulting services to assist customers in moving applications between providers.

Privacy Internationally—Data Transfers

When organizations move applications to the cloud, they are required to follow privacy and security regulations mandated by governments for specific industries, particularly healthcare and finance. Many countries have rules about guarding private information located in the cloud and about personal information transferred to data centers in other countries.

In the United States, healthcare organizations are required to follow HIPAA (Health Insurance Portability and Accountability Act) rules on protecting the privacy of people’s medical records. Retailers are bound by PCI DSS (Payment Card Industry Data Security Standard) requirements. Government agencies need to follow the 2002 Federal Information Security Management Act (FISMA), legislation that defines security practices to protect government information, as well as defenses against natural or man-made threats. Financial firms have specific privacy and security regulations as well.

Adhering to the myriad rules in different countries often means that companies with a presence in, say, the United States and the European Union need to establish separate data centers that adhere to Europe’s privacy and security rules. The same holds true in Asia and Pacific Rim countries. Moving that information to other countries is also subject to privacy rules.

The EU–U.S. Privacy Shield

Prior to 2015, the European Union and the United States Department of Commerce had agreed upon ways to protect European Union citizens’ information stored in United States companies’ cloud centers in both the United States and Europe. In 2015, in response to a privacy suit filed against Facebook, the European Court of Justice, the highest court of the 28-country European Union, invalidated the earlier EU–U.S. Safe Harbor agreement, which set out terms for companies in the United States to transfer cloud-based data containing personal information about European Union citizens to data centers in the United States. The 2015 ruling requires United States companies moving files out of Europe that contain information about Europeans to follow EU privacy guidelines.

The European Union and the United States Department of Commerce came to a new EU–U.S. agreement in 2016 on updating privacy regulations to protect European data. The agreement, called the Privacy Shield, took effect on August 1, 2016. Its requirements are more stringent than those in the Safe Harbor agreement. However, many experts expect even these regulations to be challenged in court.

In addition, some U.S. companies have been slow to update their privacy clauses to comply with the new regulations, partly because of uncertainty about the Privacy Shield; there are concerns that the EU Court of Justice will overturn it as not being strict enough. Other organizations see adherence to the Privacy Shield stipulations as a competitive edge when competing for business in the European Union.

At any rate, a new set of privacy regulations, the General Data Protection Regulation (GDPR), took effect in May 2018. It includes a “right to be forgotten” provision stating that people’s personal data must be deleted when they request it.

Establishing Cloud-Based Data Centers Abroad

Companies, particularly small and medium-sized ones, often solve the challenge of establishing cloud-based data centers in other countries by signing agreements with providers familiar with the various international rules to host their data centers abroad. Data storage company Box’s partnerships with IBM and Amazon to host data centers outside of the United States are examples of this type of agreement. For now, companies are hosting data centers in Europe even though rules on privacy and security are not 100 percent clear.

Cloud Summary: Rationale and Challenges

The use of cloud computing for application development, application management, and IT services in general has become a widely accepted practice in commercial organizations. There are, however, a number of challenges in managing cloud computing and adapting applications for the cloud. Security and privacy are major concerns. Small and medium-sized companies recognize that cloud companies for the most part have more skilled staff to manage security and privacy. Large companies, which have the resources to manage security better than smaller companies, are careful not to place highly confidential or strategic files in the cloud. There is never a 100 percent guarantee that a cloud company won’t be hacked or have an outage; the advantages for most applications need to be weighed against these security and privacy concerns.

Cloud computing advantages include faster time to market, enabling companies to implement new, strategic applications in less time with fewer staff. However, moving applications to the cloud requires staff and consultants with skills different from those needed to manage in-house applications, which can be an issue where there is a shortage of employees with these skills. Moreover, the total cost of running IT services may not be lowered by using the cloud, because usage fees often increase over time.

A major implication of using cloud computing is that local area networks within buildings and broadband networks carry an exponentially increasing amount of traffic. Moreover, these networks are more critical than ever because many computing functions are not possible without broadband and in-house network links. Thus, it behooves companies to have network backup plans and redundancy in the event of failures in their networks.

Summary

The rapid pace of innovation and technological advancements is powered by computer chips that are faster and smaller and have increased memory and processing power. Chips are the engines in personal and business computers and all electronic devices including wristwatches, headsets, printers, fitness gear, televisions, set-top boxes, thermostats, and newer vending machines. Much of this gear is connected to broadband networks.

Broadband networks transmit data, voice, and video over high-capacity fiber-optic cabling. Dense wavelength division multiplexers with powerful chips, connected to fiber in broadband networks, enable those networks to carry many more streams of traffic simultaneously. The small chips within multiplexers take up less space but enable transmissions from multiple devices to be carried simultaneously on single strands of fiber cable.

Compression and multiplexing gear are key elements in advanced gigabit-per-second packet networks connected to homes, commercial organizations, and businesses. Compression is an important technology that uses advanced mathematical algorithms to shrink the number of bits needed to transmit high-definition video, music, text messages, and voice. The most widely used cloud applications enabled by compression and multiplexing are the social networks whose subscribers upload and stream music, video clips, and photos to and from Facebook, Snapchat, Spotify, LinkedIn, and others. Amazon’s cloud computing, streaming media, and retail sales are another factor in increased traffic on broadband networks.

Compression, multiplexing, high-capacity networks, and computer chips additionally enable cloud computing. Cloud computing has a large impact not only on how IT departments manage their applications; it is also a factor in how consumers manage their own storage requirements for photos and music. Young people in particular consume so much music and so many photos that their personal devices often don’t have the capacity to store them all. Thus, they too use cloud computing for the ease of accessing their music and documents from anywhere there is broadband and from any device.

Compression is the key enabler of virtual and augmented reality. Chips used to compress images in virtual reality have graphical processing capability that can process large chunks of graphical data. Currently, online gaming is the most frequently used application for virtual reality. It enables high-density images that provide immersive experiences for gamers. In the future, virtual reality will also be used for applications such as training, education, and commerce.

Appendix

A Comparison between Analog and Digital Signaling

Speed, or frequency, on analog service is stated in hertz (Hz). A signal that oscillates, or swings back and forth, 10 times per second has a frequency of 10Hz, or 10 cycles per second. A cycle starts at a neutral point in the wavelength, ascends to its highest point, descends to its lowest point, and then returns to neutral. Lower frequencies are made up of longer wavelengths than higher frequencies.
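
The inverse relationship between frequency and wavelength follows from the standard formula wavelength = propagation speed ÷ frequency. The short Python sketch below applies it to signals traveling at roughly the speed of light.

    # Wavelength = propagation speed / frequency. For radio waves in air the
    # propagation speed is roughly the speed of light (about 3.0e8 meters/second).
    SPEED_OF_LIGHT = 3.0e8   # meters per second (approximate)

    for frequency_hz in (10, 1_000, 1_000_000):          # 10 Hz, 1 kHz, 1 MHz
        wavelength_m = SPEED_OF_LIGHT / frequency_hz
        print(f"{frequency_hz:>9} Hz -> wavelength {wavelength_m:,.0f} meters")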

Analog telephone signals are analogous to water flowing through a pipe. Rushing water loses force as it travels through the pipe; the farther it travels, the more force it loses and the weaker it becomes. In the same way, analog signals weaken, or attenuate, the farther they travel.

The advantages of digital signals are that they enable the following:

  • Image Greater capacity The ability to mix voice, video, photographs, and e-mail on the same transmission enables networks to transmit more data.

  • Image Higher speeds It is faster to re-create binary digital ones and zeros than more complex analog wavelengths.

  • Image Clearer video and audio In contrast to analog service, noise is clearly different from on and off bits, and therefore can be eliminated rather than boosted along with the signal.

  • Image Fewer errors Digital bits are less complex to re-create than analog signals.

  • Image More reliability Less equipment is required to boost signals that travel longer distances without weakening. Thus, there is less equipment to maintain.

Table 1-4 Compression Standards and Descriptions

Compression Standard

Description

H.264

An International Telecommunications Union (ITU) standard used widely for video conferencing systems on LANs (local area networks) and WANs (wide area networks).

G.726

A family of standards for voice encoding adopted by the ITU. It is used mainly on carrier networks to reduce the capacity needed for VoIP (Voice over IP).

IBOC

In-band, on-channel broadcasting that uses airwaves within the AM and FM spectrum to broadcast digital programming. IBOC is based on the Perceptual Audio Coder (PAC). There are many sounds that the ear cannot discern because they are masked by louder sounds. PAC detects and discards these sounds, which the ear cannot hear and which are not necessary to retain the perceived quality of the transmission. This results in a transmission with 15 times fewer bits. PAC was developed at Bell Labs in the 1990s.

JPEG

A Joint Photographic Experts Group compression standard used mainly for photographs. The International Standards Organization (ISO) and the ITU developed JPEG.

MPEG-2

A Moving Picture Experts Group video compression standard approved in 1993 for coding and decoding video and television images. MPEG-2 uses past images to predict future images and color, and then transmits only the changed image. For example, the first in a series of frames is sent in compressed form, and the ensuing frames send only the changes (a simplified sketch of this approach follows the table). A frame is a group of bits representing a portion of a picture, text, or audio section.

MP3 (MPEG-1, Layer 3)

MP3, formally MPEG-1 Audio Layer 3, is a standard for compressing streaming audio and music. MP3 is the compression algorithm commonly used to download audio files from the Internet. Some Internet e-commerce sites use MP3 so that potential customers whose applications include the decompression software can download samples of music to decide whether they want to purchase a particular song.

MPEG-4

MPEG-4 is a compression standard that defines the coding of audio and video.
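
As a simplified illustration of the delta approach used by standards such as MPEG-2, the Python sketch below sends the first frame whole and only the changed pixel values afterward. Real MPEG-2 predicts motion on blocks of pixels, so this is only a sketch of the principle; here a “frame” is just a short list of pixel values.

    # Simplified illustration of sending a full first frame and then only the
    # changes (deltas). Real MPEG-2 predicts motion on blocks of pixels.
    def encode(frames):
        yield ("full", frames[0])                         # first frame sent whole
        for prev, curr in zip(frames, frames[1:]):
            changes = {i: v for i, (p, v) in enumerate(zip(prev, curr)) if p != v}
            yield ("delta", changes)                      # only changed pixels

    def decode(stream):
        frame = None
        for kind, data in stream:
            if kind == "full":
                frame = list(data)
            else:
                for i, v in data.items():
                    frame[i] = v
            yield list(frame)

    frames = [[10, 10, 10, 10], [10, 12, 10, 10], [10, 12, 11, 10]]
    assert list(decode(encode(frames))) == frames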

Table 1-5 OSI Layers

OSI Layer Name and Number

Layer Function

Layer 1: Physical Layer

Layer 1 is the most basic layer.

Layer 1 defines the type of media (for instance, copper, wireless, or fiber optic) and how devices access media.

Repeaters used to extend signals over fiber, wireless, and copper networks are Layer 1 devices. Repeaters in cellular networks extend and boost signals inside buildings and in subways so that users can still take advantage of their cellular devices in these otherwise network-inaccessible locations.

Layer 2: Data Link Layer

Ethernet, also known as 802.3, is a Layer 2 protocol. It provides rules for error correction and access to LANs.

Layer 2 devices have addressing information analogous to Social Security numbers; the addresses are essentially arbitrary but unique to individual devices.

Frame Relay is a Layer 2 protocol previously used to access carrier networks from enterprises.

Layer 3: Network Layer

Layer 3 is known as the routing layer. It is responsible for routing traffic between networks that use IP network addresses. Layer 3 has error-control functions.

Layer 3 is analogous to a local post office routing an out-of-town letter by ZIP code while not looking at the street address. Once an e-mail message is received at the distant network, a Layer 3 device looks at the specific address and delivers the message.

Layer 4: Transport Layer

Layer 4 protocols enable networks to differentiate between types of content. Devices that perform this function are also known as content switches.

Layer 4 devices route by content. Video or voice transmissions over data networks might receive a higher priority or quality of service than e-mail, which can tolerate delay.

Filters in routers that check for computer viruses by looking at additional bits in packets perform a Layer 4 function.

Transmission Control Protocol (TCP) is a Layer 4 protocol.

Layer 5: Session Layer

Layer 5 manages the actual dialog of sessions. Encryption that scrambles signals to ensure privacy occurs in Layer 5.

H.323 is a Layer 5 protocol that sends signals in packet networks to set up and tear down, for example, video and audio conferences.

Layer 6: Presentation Layer

Layer 6 controls the format or how the information looks on the user’s screen.

Hypertext Markup Language (HTML), which is used to format web pages and some e-mail messages, is a Layer 6 standard.

Layer 7: Application Layer

Layer 7 includes the application itself plus specialized services such as file transfers or print services. HTTP is a Layer 7 protocol.
