2. Data Centers and LANs, Storage, and IP Private Branch Exchanges

Introduction

IT management is in a race to keep up with growing demand for capacity on local area networks (LANs) and storage systems. The growth in LAN traffic that results in congestion is driven by the growing number of employees:

  • Accessing data that is in the cloud

  • Uploading data to the cloud

  • Downloading data from the Internet

  • Holding videoconferences with remote and local colleagues

  • Using collaboration software to share and edit documents related to joint projects

The above factors add to the increasing volume of data traversing the LANs.

In addition to more voice, data, and video transmitted on LANs, there are requirements for additional capacity on storage systems. Some of the need for additional storage stems from regulations in certain industries that require businesses to save data for 3 or more years. Examples of industries with retention requirements include

  • Medical device and pharmaceutical companies

  • Hospitals

  • Retailing

  • Financial firms

  • Government agencies

Even organizations without as many requirements to retain data are finding it necessary to add more storage space for data related to customer information, product specifications, research projects, and employee data. In addition, the cost for storage is decreasing, which allows organizations to store more data.

Data centers are centralized locations for housing software applications. Because of the growing dependence on the cloud to store and manage applications, enterprise data centers are shrinking in physical size. Placing applications in the cloud takes the burden of monitoring, patching, and upgrading applications away from staff in corporate data centers. It additionally reduces the load on organizations to have mechanisms to protect applications from power outages, brief power interruptions, and natural disasters such as hurricanes and tornadoes.

Cloud services allow small companies to operate without a data center, and many medium-sized companies require only two or three servers. For small and large organizations alike, cloud services take over many of the management tasks previously performed by in-house technical staff. However, even with cloud services, IT staff are needed to monitor security and secure access to applications in the cloud.

Employees often bring expectations for capacity, accessibility, and user-friendly interfaces from their experience as residential customers. Residential customers can easily access Facebook, Snapchat, and Google from mobile devices and laptops. Employees expect the same level of service for work-related computing tasks. They anticipate the applications they access to be always available, easy to use, and accessible from mobile devices and remote locations. This is an ongoing challenge for IT personnel and management.

Organizations are meeting these staff expectations through unified communications, collaboration software, and easy-to-access desktop videoconferencing. Unified communications is the ability to access company directories and voice mail messages from within a single e-mail inbox. Collaboration software, similar to Google Docs and Box services, enables employees to share documents, edit documents written by other staff, and keep up to date on group projects. Telephone system manufacturers such as Mitel and Cisco often include these applications and capabilities in their systems’ platforms.

To meet staff expectations for access to applications, organizations are taking steps to prevent delays and ensure continuous uptime. They are investing in higher-capacity switches to carry growing LAN traffic, and in LAN monitoring software to quickly spot and resolve equipment and software glitches.

What Is a LAN?

A LAN is a local network owned by an enterprise, a commercial organization, or a residential user. The purpose of a LAN is to allow employees and residential users to share resources. For example, without a local area network, each person would need his or her own printer, router, switch, and applications. The LAN enables employees to share high-speed printers, routers, modems, Wi-Fi equipment, and broadband services to reach the Internet, and to receive incoming and make outgoing calls. Without a LAN, each user would additionally require his or her own connection to the Internet. In short, costs would be sky-high and staff would be burdened with arranging all their own access to services.

Table 2-1 is a partial list of devices connected to LANs. The growing number of “connected” devices is adding tremendous traffic to LANs and increasing their criticality. Each device on a LAN is referred to as a node, and the size of a LAN is often described by the number of nodes on it.

Table 2-1 Devices (Nodes) on LANs

  • Routers
  • Security cameras
  • Printers
  • Wi-Fi equipment
  • Switches
  • Telephones
  • Personal computers
  • Cable modems
  • Thermostats
  • Video conferencing equipment
  • Electronic white boards for meetings
  • Security alarms
  • Lights
  • Equipment to manage lighting
  • Fire alarms
  • Factory automation systems
  • Shared applications
  • Projectors
  • Wireless devices connected to Wi-Fi
  • Bar code scanners in retail locations
  • Cash registers
  • LAN monitoring software
  • Storage networks
  • Databases
  • Televisions
  • Set-top boxes, including Apple TV and Roku

Switches, Media, and Protocols in LANs

Switches have been described as “goes into, goes out of.” Requests for access to applications including e-mail go “into” switches and the responses to these requests are transmitted “out of” switches.

LANs are made up of Layer 3 (also referred to as core or backbone) switches and Layer 2 switches.

  • Layer 3 core or backbone switches

    – Transmit messages between buildings on a campus.

    – Are connected to Layer 2 switches and to other Layer 3 switches.

    – Are not connected to nodes.

    – Carry the highest amount of LAN traffic.

  • Layer 2 switches

    – Are connected to nodes (devices) on floors.

    – Carry less traffic than Layer 3 switches.

Copper, Wi-Fi, and fiber media tie nodes and switches together into a network.

Layer 3 Switches—Transmitting Data between Switches and Data Centers

Layer 3 switches are large-capacity switches that transmit data between Layer 2 switches and centralized applications in data centers. Layer 3 switches are also referred to as core or backbone switches. They tie together buildings at, for example, hospital, university, and enterprise campuses. Core switches are not connected directly to end-users’ devices. A backbone switch has connections to multiple switches and to routers, but not to the LAN nodes listed in Table 2-1. On large LANs, if a core switch or a link to it is down, traffic can often be routed around the disabled switch to a functioning one.

IP Addresses in Layer 3 Switches

Layer 3 switches are called Layer 3 because they route messages to devices via their IP addresses. Layer 3 devices are considered Network layer equipment. Layer 3 switches send messages only to other switches and routers, not to nodes such as printers and individuals’ devices. All networks, including broadband, cellular, and the Internet, have Layer 3 switches for transmitting data and voice within backbone networks. See Chapter 1, “Computing and Enabling Technologies,” for backbone networks.
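The routing-by-IP-address decision described above boils down to a longest-prefix lookup, which can be sketched in a few lines of Python using the standard `ipaddress` module. The prefixes and next-hop names below are purely illustrative, not drawn from any real network.

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "core-switch-A",
    ipaddress.ip_network("10.1.20.0/24"): "core-switch-B",
    ipaddress.ip_network("0.0.0.0/0"): "border-router",   # default route
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route that contains dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.20.7"))   # the /24 is more specific than the /16 -> core-switch-B
print(next_hop("192.0.2.1"))   # only the default route matches -> border-router
```

A real switch performs this lookup in hardware for every packet; the sketch only shows the logic of preferring the most specific matching prefix.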

Layer 2 Switches—Links to Nodes

In LANs, Layer 2 switches are located in wiring closets on individual floors. The switches typically have between 8 and 24 ports with a dedicated port for each device connected to them. Each port is connected to users’ computers, Wi-Fi access points, printers, and other LAN nodes. Because each device is cabled to its own dedicated port, messages they transmit are not broadcast to all users. For example, a manager is able to send the same e-mail or video to staff on various floors. These messages are sent only to the designated recipients, not to each person working on these floors. This avoids flooding each device with traffic for other nodes.

Note

MAC Addresses in Layer 2 Switches

Layer 2 switches send packets to devices based on their Media Access Control (MAC) addresses. Each device connected to a LAN has a MAC address. Layer 2 devices are considered Data Link equipment. They typically support speeds of up to 40Gbps. If a port fails, the device connected to it loses its connection to the LAN. Moreover, each switch represents a single point of failure for the devices connected to it: if the switch crashes, all of the 16 to 24 devices connected to it are out of service. However, nodes connected to Layer 2 switches in other wiring closets don’t lose service. See Figure 2-1 for switches in wiring closets.
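How a Layer 2 switch uses MAC addresses can be illustrated with a toy “learning switch” in Python: the switch records which port each source MAC address arrives on, then forwards frames for known destinations out of a single port instead of flooding every port. Real switches do this in hardware; the MAC addresses and port numbers here are made up for illustration.

```python
# Toy model of Layer 2 MAC learning and forwarding.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}    # MAC address -> port

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        """Return the port(s) the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:          # known destination: one port only
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood to every port except the one it came in on.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=8)
sw.receive("aa:aa:aa:aa:aa:01", "ff:ff:ff:ff:ff:ff", in_port=1)  # floods
ports = sw.receive("aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01", in_port=3)
print(ports)  # [1] -- the switch learned where ...:01 is connected
```

This per-port forwarding is what keeps a manager’s e-mail or video from being broadcast to every device on the floor.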


Figure 2-1 A switch located in a wiring closet.

Many Layer 2 switches are an industry-standard 19 inches wide for rack mounting and are typically housed in freestanding equipment racks with horizontally positioned blades. Circuit boards are often referred to as blades when they are dense, such as when they have many ports. Switches can be wall-mounted in wiring closets that don’t contain other equipment.

The Criticality of Layer 3 vs. Layer 2 Switches

LAN backbones where core switches are located carry the highest volume of an organization’s traffic. Backbone switches are therefore more critical in supporting staff communications than switches in wiring closets. If a backbone switch fails, every Layer 2 switch, plus the nodes connected to each wiring closet switch, is out of service. In contrast, if a Layer 2 switch crashes, the only nodes that lose service are the 8 to 24 devices connected to it. Moreover, the cost to purchase redundant Layer 2 switches is high because there are so many of them. In a five-story building, there could be a total of 40 Layer 2 switches in wiring closets and only six switches in the backbone. See Figure 2-2 for switches in LAN backbones.

There are instances, however, when high-level employees such as CEOs are connected to a dedicated Layer 2 switch. In these cases, components within the wiring closet may be duplicated to avoid outages. In addition, IT staff closely monitor these senior employees’ equipment connections so that if they do break down, they can be repaired or quickly replaced.


Figure 2-2 Switches in LAN backbones.

Layer 3 Switch Redundancy in the Backbone

Because core Layer 3 switch failures impact many nodes in networks, there are often redundant Layer 3 switches. Medium- and large-sized organizations often have redundant switches in their backbone network so that if one switch is out of service, traffic can be rerouted to a different Layer 3 switch.

In addition to, or instead of, having redundant switches in backbones, components within each core switch can be duplicated, particularly those components, such as power supplies, that fail more frequently than others. Importantly, there may also be redundant switches within data centers. When either of these steps is taken, LAN outages are rare.

Organizations consider a variety of levels of redundancy for core switches, including the following:

  • Purchase of a separate, duplicate switch that can take over if one fails.

  • Installation of redundant power supplies in the switch. (A power supply converts AC power to the low-voltage DC power required by computers.) Switches are inoperable if the power supply fails.

  • Installation of redundant blades in the switch. Each blade (also referred to as a card or circuit board) supports a megabit- or gigabit-capacity Ethernet port.

40/100Gbps Ethernet Switch Standards

The Institute of Electrical and Electronics Engineers (IEEE) approves Gigabit Ethernet standards such as 40Gbps, 100Gbps, and higher capacities. Most organizations currently use 40Gbps switches. However, for organizations whose networks support web sites with extremely high traffic, even this is too slow. To support ultra-high levels of web site traffic, they deploy 100Gbps Ethernet switches. Facebook’s needs for bandwidth are so large that it has developed its own switches that support 400Gbps and has made the design available to other organizations. It has additionally stated its intention to develop a switch with thirty-two 400Gbps ports.

When IT staff discuss the capacity of their LANs, they refer to them by the capacity of their switches. Thus, organizations with 40Gbps switches describe their LAN as a 40Gbps or 40 Gig network. Here is one IT staff member’s comment:

“We used to have a 10 Gigabit network. But, because of the increase in our LAN traffic, we upgraded to 100 Gigabits.”

Virtual Local Area Networks for Specialized Treatment

A Virtual Local Area Network (VLAN) comprises devices, such as personal computers, physical servers, IP phones, and wireless telephones, whose addresses are programmed as a group in Layer 2 switches. Grouping the devices gives them higher priority or special privileges, or segregates them from the rest of the corporate network for higher security. Although VLAN members are programmed and treated as a separate LAN in terms of privileges, they are not necessarily connected to the same switches. Some computers are put into VLANs so that they can access secure files such as healthcare records.

VLANs are used to provide special features to nodes. For example, video conferencing or LAN-connected telephones may receive priority over e-mail traffic because delays in video may result in distorted images whereas e-mails are not time sensitive and delays in sending or receiving messages are not noticeable.

IP telephones and video conferencing units that send voice in packets over networks and wireless LAN devices are often programmed into their own VLANs. These devices allow only certain types of equipment, such as other telephones or video conferencing equipment, to communicate with them.
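The membership and segregation behavior described above can be sketched as a simple lookup: nodes are grouped by VLAN ID, and only nodes in the same VLAN may communicate directly. The VLAN IDs and node names below are invented for illustration; in practice this grouping is programmed into the Layer 2 switches themselves.

```python
# Hypothetical VLAN assignments: VLAN ID -> member nodes.
VLAN_MEMBERS = {
    10: {"pc-101", "pc-102", "file-server"},   # general data VLAN
    20: {"ip-phone-1", "ip-phone-2"},          # voice VLAN, priority traffic
    30: {"camera-1", "badge-reader"},          # segregated security VLAN
}

def vlan_of(node: str):
    """Return the VLAN ID a node belongs to, or None if unassigned."""
    for vlan_id, members in VLAN_MEMBERS.items():
        if node in members:
            return vlan_id
    return None

def may_communicate(a: str, b: str) -> bool:
    """Nodes may talk directly only if they share a VLAN."""
    return vlan_of(a) is not None and vlan_of(a) == vlan_of(b)

print(may_communicate("ip-phone-1", "ip-phone-2"))  # True  (both in voice VLAN)
print(may_communicate("ip-phone-1", "pc-101"))      # False (voice vs. data VLAN)
```

The same grouping is also where priority comes in: a switch can treat everything in the voice VLAN as delay-sensitive while leaving the data VLAN at best effort.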

Media—Connecting Nodes to LANs

In information technology and telecommunications, media refers to the cabling and wireless services that connect devices to each other, to buildings, and ultimately to individuals.

Note

Increased Bandwidth Needs on LANs

A number of factors are causing the amount of traffic on local area networks to increase exponentially. These include increasing amounts of data-intensive centralized computing and requirements for additional storage. Another factor is the growing dependence on accessing content in the cloud. The staff’s cloud traffic transits LANs to reach databases located in the cloud and in companies’ data centers. And to a greater extent than ever before, video traffic streamed and downloaded from the cloud, the Internet, and on-site videoconferences is adding to LAN congestion and the need for larger pipes.

Other factors adding to LAN congestion are:

  • Large graphics file attachments such as PowerPoint files.

  • Daily backups to databases of entire corporate files.

  • Web downloads of long, continuous streams of images, audio, and video files.

  • Access by remote and local employees to applications and personal files on centralized servers and storage devices.

  • Web access, during which users typically have four or more windows open concurrently. Each open web page is a session, an open communication with packets traveling between the user and the Internet.

Here’s a quote from an IT Director at a university:

“We never have enough capacity on our local network or storage space in our storage area network.”

Protocols for Communications in LANs

All devices connected to LANs use network protocols to access the LAN. They communicate with other devices and applications, which might be located on the same floor, on another floor, in another building, or even on an organization’s LAN located on another continent. By providing a uniform way to communicate, LANs simplify the process of linking devices, applications, and networks together. Ethernet is the most common of these protocols. Other specialized protocols are used to access Storage Area Networks.

The Ethernet Open Standard for Network Access

Ethernet is an 802.3 open-standard protocol, approved by the Institute of Electrical and Electronics Engineers (IEEE), a non-profit standards body. Devices such as personal computers and printers use it to access the LAN and to retrieve packets from wired and wireless LANs. Each device on an Ethernet LAN has an address, referred to as its Media Access Control (MAC) address. Layer 2 switches in wiring closets use the MAC address to send messages to other LAN-attached devices.

Ethernet was the first LAN access scheme to be widely adopted. Because it is based on an open standard, it is broadly available from different manufacturers. In the 1980s, departments within companies began to use Ethernet to link their computers together over coaxial cables to share software applications, such as accounting and financial tracking packages, as well as printers. In the 1990s, lighter-weight, lower-cost unshielded twisted-pair (UTP) cabling became available under new UTP cabling standards for LANs. This greatly simplified installing and moving computers on LANs because of the lighter weight and flexibility of UTP cabling. See Table 2-2 for additional LAN protocols, devices, and terms.

Ethernet is an asynchronous protocol. Asynchronous protocols do not specify a fixed start and stop time for each packet of data; all devices connected to the network attempt to send whenever they are ready.

Ethernet’s simplicity makes it straightforward to connect equipment to networks. All that’s required are computers with Ethernet interfaces (ports) for an Ethernet cable, a Network Interface Card (NIC), and Ethernet software to link devices to switches in wiring closets. Because of the uncertainty of traffic volume at any given point in time, Ethernet networks are often configured to run at no more than half of their capacity.
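For readers curious what the MAC addresses discussed above look like on the wire, here is a minimal sketch that unpacks the 14-byte Ethernet II header (destination MAC, source MAC, EtherType) using Python’s standard `struct` module. The frame bytes are invented for illustration.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split off the 14-byte Ethernet II header: dst MAC, src MAC, EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), hex(ethertype)

# A made-up frame: broadcast destination, arbitrary source,
# EtherType 0x0800 (IPv4), followed by a placeholder payload.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')
```

An all-ones destination address, as in this example, is the broadcast address: every node on the segment receives the frame.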

Note

Using TCP/IP as a Linking Protocol

Ethernet is a way for individual frames to access LANs and for computers to retrieve frames from local networks, whereas TCP/IP is used to tie LANs together and to route packets between networks as well as to and from the Internet. As the need arose to tie LANs together for e-mail and file sharing, compatibility between LANs from different manufacturers became a problem. The TCP/IP suite of protocols became a popular choice for overcoming these incompatibilities and for managing the flow of information between LANs.

Routers were developed to send data between individual LANs, and between LANs and the Internet and other networks. Routers send packets to individual networks based on their IP address. IP addresses are assigned to individual networks and servers that run software applications.

Software Defined Networks in LANs

As more varied applications, storage-network data, cloud data, and Internet traffic are transmitted across LANs, large organizations need more control over how various applications are transmitted. For example, video files and real-time transactions require a higher priority than e-mail.

In addition, IT managers are under pressure to keep the staff lean and productive. At the same time, they need the ability to quickly adapt to changes and to add and delete features and applications. This is often difficult with current local area network infrastructure where prioritizing traffic results in complicated management of that traffic.

It may be difficult to add new services because of the need for backward compatibility with older applications and equipment. This is particularly true in large conglomerates often made up of a variety of subsidiaries and departments, many of which have different applications and requirements.

Features of Software-Defined Networks

As the name implies, software-defined networks (SDNs) are controlled by software rather than hardware. The software is located in a central controller running on off-the-shelf, general-purpose computers. The controller holds policies, defined in software, that specify how packets are forwarded. The control plane distributes these policies from the controller to endpoints. The data plane is made up of tunnels that carry data to endpoints. Endpoints, also referred to as nodes, are devices connected to the LAN.

The key advantage of distributing policies to endpoints is that endpoints do not need to continuously communicate with a central controller: updated policies, with instructions on how to treat transmitted data, including traffic during peak times, are pushed to endpoints whenever the policies change. Thus, LANs can respond dynamically, in real time, to changing traffic patterns, congestion, and outages. For example, if one route is congested or out of service, data is automatically transmitted on other routes.
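The push model described above can be sketched as a toy controller and endpoint in Python: the controller pushes policy updates as they change, and each endpoint then makes forwarding decisions from its local copy without consulting the controller per packet. Class names, traffic classes, and route names are all illustrative, not from any real SDN product.

```python
# Toy sketch of SDN policy distribution.
class Endpoint:
    def __init__(self, name: str):
        self.name = name
        self.policy: dict[str, str] = {}       # traffic class -> route

    def apply_policy(self, policy: dict[str, str]):
        self.policy = dict(policy)             # local copy, used without callbacks

    def route_for(self, traffic_class: str) -> str:
        return self.policy.get(traffic_class, "default-route")

class Controller:
    def __init__(self):
        self.endpoints: list[Endpoint] = []
        self.policy = {"video": "low-latency-path", "email": "best-effort-path"}

    def register(self, ep: Endpoint):
        self.endpoints.append(ep)
        ep.apply_policy(self.policy)           # initial policy push

    def update_policy(self, changes: dict[str, str]):
        """Push changes to every endpoint as soon as the policy changes."""
        self.policy.update(changes)
        for ep in self.endpoints:
            ep.apply_policy(self.policy)

ctl = Controller()
ep = Endpoint("wiring-closet-3")
ctl.register(ep)
print(ep.route_for("video"))                   # low-latency-path
ctl.update_policy({"video": "backup-path"})    # e.g., congestion on the main path
print(ep.route_for("video"))                   # backup-path, no per-packet query
```

The sketch also shows why controller redundancy matters: if `update_policy` can never run, endpoints keep forwarding on stale local policies until the controller returns.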

Dependence on a Central Controller—The Need for Redundancy

Controller redundancy is important in SDNs because no new policies can be transmitted to endpoints if the links to the central controller, or the central controller itself, are out of service. Once service is reestablished, controllers automatically transmit the policies that were programmed during the outage to endpoints.

Flexibility

Changes can be implemented rapidly because many changes in software-defined networks do not require additional hardware. Changes are made by programming new policies in central controllers. Changes should of course be tested in isolated parts of the network or data centers before they are implemented. See Chapter 4, “Managing Broadband Networks,” for software-defined network technologies on broadband links.

Network Operating Systems

LAN Network Operating System (NOS) software is located on specialized computers called file servers in the LAN. File servers provide shared access to programs on LANs. A NOS—also referred to simply as the operating system (OS)—defines consistent ways for software to use LAN hardware and for software on computers to interoperate with the LAN. LAN operating system client software is located on each device connected to the LAN.

The way PCs request access to services such as printers, e-mail, and sales applications located on servers is defined by operating systems. A network operating system also defines how devices access shared applications and the network, and it controls how icons are displayed on computers’ “desktops.” Examples of network operating systems include Novell NetWare, Windows Server 2012, Windows Server 2016, Sun Solaris 10, and Linux.

Client software (software on nodes) is installed on individual devices. Examples of client software on PCs include Windows XP, Windows 8, and Windows 10. Apple computers use operating systems such as OS X 10.5 and macOS 10.14. Often, when the client operating system on Windows and Mac computers is upgraded, older Office and other applications need to be upgraded as well, because features in the upgraded operating system may not be compatible with older word processing and spreadsheet software.

Data Centers—Centralized Locations for Housing Applications

A data center is a physical location where an organization’s or cloud provider’s applications are centralized. These include LAN operating systems, e-mail servers, security software on dedicated servers or appliances, voice telephony, storage area networks, databases, and accounting applications. See Figure 2-3 for a depiction of a data center. An appliance is a server dedicated to a specific software function such as e-mail or security. Applications, including e-mail, are housed in servers, powerful computers accessed by authorized staff. Switches in data centers transmit traffic to the individual applications in the data center.


Figure 2-3 A data center.

Data centers come in a variety of sizes. They can be located in an organization’s branches as well as at its headquarters, and at cloud providers’ sites. For example, a small to medium-sized company may have a data center with only a few servers if the bulk of its applications are located in the cloud. Large organizations with multiple sites may have data centers and applications in branch offices as well as at headquarters. Or branch office staff may remotely access information in a centralized data center at headquarters or at a cloud provider’s data center.

Hyper cloud providers, the very largest providers such as Amazon, Facebook, Google (part of Alphabet), and Microsoft, have large data centers spread all over the world. As cloud computing proliferates, an increasing number of applications previously housed in enterprise data centers are now at hosting and cloud providers’ data centers. According to an April 10, 2017, Wall Street Journal online article, “A New Arms Race for Tech,” combined capital expenditures at Amazon, Microsoft, and Alphabet’s Google increased 22 percent in 2016 compared to 2015.

Protecting Data Centers

Cloud providers and social networks have multiple large data centers dispersed around the world. Many of them are exact duplicates of each other, in case one data center is destroyed by a natural disaster or other catastrophe. As discussed in Chapter 1, duplicate data centers are for the most part connected by fiber-optic cabling. Fiber cabling and the electronics connected to it have the capacity to back up data continuously in real time or at the end of the day.

Telephone companies and cellular providers all have data centers. These data centers manage services and applications, such as voice mail and messaging, that they offer customers. In addition, these companies deploy roomfuls of technicians who remotely monitor and manage their networks from screens in these data centers. Cellular and telephone companies’ data centers monitor and manage:

  • Connections to other networks

  • Connections to subscribers

  • Conditions in their own networks

  • Customers’ voice and data usage

  • Billing software with information on each subscriber’s voice and data usage and voice and data plan

  • Security

  • Switches used for voice calling

  • E-mail servers

The Impact of Cloud Computing on Data Centers

Although the criticality of and reliance on applications and databases are increasing, many data centers are shrinking in physical size. When cloud computing was first introduced, mainly small and medium-sized organizations trusted it enough to use it for the majority of their applications. As time went on, more organizations began trusting the cloud and transferring more of their applications and development to cloud platforms. This resulted in smaller data centers and fewer technical staff required to operate them, because many applications are now located in the cloud and often managed by the cloud provider. A data center in an organization with most of its applications in the cloud might consist of only a few servers.

Often an organization will consolidate applications that are not in the cloud, previously located in numerous remote offices and departments, into central data centers. Centralization and streamlined data center operations are enabled in large part by storage, server, and switch virtualization. Increased broadband network capacity is another key enabler of centralized data centers.

Environmental Controls in Data Centers

Virtualization and cloud computing have decreased the number of physical servers needed in data centers. However, each physical server requires more cooling because powerful octa-core processors generate more heat than dual-core or single-core processors. This increases the cost of the electricity needed to cool these powerful servers.

Designing the cooling, power, and Uninterruptible Power Supply (UPS) systems that distribute power is so complex that organizations often hire consultants to design cost-effective environmental controls. An uninterruptible power supply provides power for the short time before gas-powered backup generators kick in. Increasing or decreasing the number of people in the data center further impacts cooling needs.

In addition, there are different cooling requirements for equipment located at the bottom of racks vs. equipment located at the top. The type and amount of lighting also impacts cooling requirements. Energy consultants offer consulting services aimed at designing cost-effective energy and power systems for data centers.

Taking Steps to Ensure Adequate, Lower-Cost Power

To save money on electricity, some large enterprises have applied for and received permission from the United States Federal Energy Regulatory Commission (FERC) to become wholesale providers of electricity. By doing so, they can purchase bulk supplies of power at low rates. In addition to saving money on power purchases, this ensures that they have an adequate supply of electricity for their own power needs. Examples of companies that have been approved by FERC to purchase bulk supplies of electricity include Google, Exxon Mobil, Kimberly-Clark Corporation, Alcoa, Tropicana, and The Dow Chemical Company. None of these companies has stated an interest in reselling electricity on a retail basis.

While it’s not expected that smaller organizations will follow this route, this does point out the efforts that large, multisite firms will take to ensure an adequate supply of electricity at the lowest possible rates. To further ensure a steady supply of power, companies also purchase power from multiple power generating companies over different power feeds. In addition, electric rates and adequate sources of power often factor into decisions of where to locate data centers. For example, Facebook has two data centers located in Sweden, partly to cut down on cooling costs.

Resiliency in Data Centers: Continuous Operation

A key task in designing data centers is determining which devices or elements represent single points of failure that have the potential to put the entire data center out of service. Power cuts and interruptions, fire, and natural disasters such as hurricanes all have the potential to shut down computer operations. Human error is also a common cause of failures.

Data centers are often located in out-of-the-way spaces within buildings, particularly if there is a shortage of office space. Thus, it’s not uncommon for data centers to be located in basements. In low-lying areas this is a problem because floods can inundate the data center and destroy the equipment. Although most data centers have raised floors, floods in these areas are nevertheless a problem.

Enterprises and carriers with mission-critical data centers must decide where to spend money on redundancy and protection from catastrophic failures. See Figure 2-4 for an example of failover during a data center failure. Because the loss of an uninterruptible power supply can bring down a data center, many organizations consider redundant UPSs and backup electric generator service. This can mean two UPSs and backup generators connected to the same electrical feed, or the more costly option of two electrical feeds, each with its own UPS.

Organizations with two electrical feeds generally arrange for each to enter the building through separate conduits and separate building entrances. If one electrical cable is cut, the data center fails over to the other one. This is expected of hosting and cloud providers and other critical infrastructure suppliers.


Figure 2-4 Automatic failover to a remote site.

An even more costly option is to lease or build a second, backup data center. The backup may be at a provider’s site or may be owned and operated by the enterprise.

Storage Systems—Managing Petabytes of Data

Organizations use storage systems to access information for daily transactions and to archive backup copies of computer files and databases on hard drives controlled by servers with specialized software. They are used by online as well as brick-and-mortar stores that need access to credit card and customer service data to authorize purchases and resolve customer complaints.

Biopharmaceutical companies use storage systems in the refinement and development of new drugs. They can change the amount of particular chemicals added to drug formulations and track how each alteration affects a drug’s effectiveness. These changes are tracked over time. The ability to track massive amounts of data during drug development speeds the time to develop drugs and improve their effectiveness.

According to a chemist at a bio-tech firm:

“The decline in costs of mass storage has greatly impacted our research and development. We can purchase huge amounts of storage to help us analyze how changing inputs to drugs during development results in chemical changes.”

Telephone and cable TV companies store massive amounts of customer billing and usage data. Their customer service representatives access much of this information when customers call with questions about bills or to make changes to their service. Other critical functions of data in storage systems are to detect patterns of financial fraud, authorize credit card transactions, and increase the speed of stock trading by supplying near-real-time access to changes in stock and bond trades.

Without access to customer and organizational data, organizations cannot manage daily operations efficiently. Small and medium-sized organizations store databases and backup files in the cloud or on standard on-site computers. Larger organizations, however, require complex storage systems to manage the large number of requests to access and input information. The level of complexity of these systems is such that in large data centers, a specialized staff manages the storage.

Examples of organizations that use and manage their own large storage systems to collect and manage data are cloud providers and large commercial entities. In particular, government agencies, municipalities, healthcare organizations, financial firms, and large universities, as well as streaming media companies, store enormous amounts of information. Faster processors, powerful IP fiber-optic networks, lower-cost, large-capacity disk storage, and faster memory enable storage systems to provide near-real-time access to immense amounts of information. Typically, these storage systems are located at off-site data centers or in the cloud.

Large enterprises and data centers operated by cloud providers deploy storage systems with servers (computers) that run special-purpose programs that monitor and manage access to storage, memory chips, and disks that store information required to operate their businesses. The information is in the form of text, databases, video, and audio records.

In contrast to local and broadband networks, which measure capacity in bits, storage systems measure the capacity of the massive amounts of data they hold in bytes. Each byte is made up of 8 bits. Large storage systems typically hold petabytes of information. A petabyte equals 1,000,000,000,000,000 bytes, or 1,000,000 gigabytes.
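To make the unit arithmetic concrete, a few lines of Python (illustrative only, using the decimal convention described above) convert between these units:

```python
# Storage capacity units in bytes (decimal/SI convention, as used in this text)
GIGABYTE = 10**9
PETABYTE = 10**15

def petabytes_to_gigabytes(pb: float) -> float:
    """Convert petabytes to gigabytes: 1 PB = 1,000,000 GB."""
    return pb * PETABYTE / GIGABYTE
```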

Measuring Performance—Input–Output Operations per Second

Input–Output Operations per Second (IOPS) measures how many read and write requests a storage system can complete in one second. Each operation spans the time elapsed between requesting the data, transmitting it, and receiving it: in other words, the movement of data between the data storage network and the requesting or sending computer. Current storage systems are capable of close to 1 million IOPS. Each request for data uses a protocol that issues a request and receives either an acknowledgment or a negative acknowledgment if the request is refused. The time spent transmitting the requested data is also referred to as time in flight.
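The measurement itself is simple: count completed operations and divide by elapsed time. The following Python sketch is illustrative only; the in-memory buffer copy stands in for a real disk read:

```python
import time

def measure_iops(do_io, num_ops: int) -> float:
    """Estimate IOPS: time num_ops calls to an I/O operation,
    then divide the operation count by the elapsed seconds."""
    start = time.perf_counter()
    for _ in range(num_ops):
        do_io()
    elapsed = time.perf_counter() - start
    return num_ops / elapsed

# Stand-in for a real disk read: copying a 4 KB buffer in memory
buf = bytearray(4096)
iops = measure_iops(lambda: bytes(buf), 1000)
```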

Storage Components

All storage systems are made up of servers running specialized programs that manage and monitor the memory in disk drives and flash storage. These programs also handle access to storage. Newer storage systems are composed of standard off-the-shelf computers. Storage systems additionally require fiber-optic links between the storage area network and LANs and broadband networks. Traffic on the links may be routed via specialized protocols.

Memory—Flash vs. Spinning

Memory in storage systems is complex and can be architected, or put together, in a number of different ways. The data is held either on spinning disks or in flash memory; some storage systems use a combination of both. Spinning disk memory is held in drives made up of metal platters with a magnetic coating on which data files are stored. See Figure 2-5 for a spinning disk used for data storage. A read-write arm accesses data on the hard drive while it is spinning.

A photograph of spinning disk for data storage is shown. The spinning disc is labeled out, "Spinning disc containing data." The read-write arm to access and add data is labeled out.

Figure 2-5 A spinning disk for data storage. (Photo by olegdudko/123RF)

Another option, in addition to spinning disks, is flash memory. Flash memory consists of interconnected flash memory chips in which data is stored. Flash memory has the advantages of faster access and lower electricity consumption. However, after a finite number of times that data is “written” onto flash memory, it physically wears out. Some storage systems contain both types of memory and others consist entirely of flash memory. Kaminario, EMC’s DSSD, and Quorum are examples of companies that offer storage systems made entirely of flash memory. Infinidat offers a hybrid system with flash memory for the most frequently accessed data and spinning disks for the rest. NetApp sells both hybrid and all-flash storage.

There is disagreement in the industry over which approach, flash memory vs. spinning disks, is best. Often flash memory is used for data that is accessed more frequently and spinning disks for archived and less frequently requested data.

Storage Redundancy—Preventing Data Losses

The criticality of data requires that storage systems write data to multiple disks. If one disk fails, the data is not lost because it is stored on a redundant disk. In addition to redundant disks within the same storage system, data center operators and cloud computing providers may duplicate the entire storage system in another location. Connections between the main storage and the duplicate system enable real-time updates in the backup system. In this way, if a natural disaster such as a flood or hurricane destroys an entire data center, all data is up to date and preserved.

Frequently, only the changes in files are backed up every night. As LANs and WANs became more powerful, SANs began using disk mirroring to back up files in real time. Disk mirroring is the process of simultaneously writing data to backup and primary servers so that all recent changes are saved and up-to-date, in the event of a failure during working hours.
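The essence of disk mirroring can be sketched in a few lines of Python. This is a minimal illustration, not a real storage driver; the class and block names are hypothetical:

```python
class MirroredStore:
    """Sketch of disk mirroring: every write lands on both the
    primary and the mirror, so the backup copy is always current."""

    def __init__(self):
        self.primary = {}  # block name -> data
        self.mirror = {}

    def write(self, block: str, data: bytes) -> None:
        # Simultaneous writes keep both copies in sync
        self.primary[block] = data
        self.mirror[block] = data

    def read(self, block: str) -> bytes:
        # If the primary has failed, serve the block from the mirror
        if block in self.primary:
            return self.primary[block]
        return self.mirror[block]
```

Even if the primary copy is lost mid-day, every change made up to the moment of failure is recoverable from the mirror.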

Storage Area Networks—Centralized File Access

Storage area networks (SANs) enable the entire organization to share files so that people do not need their own personal databases. Having an adequate number of channels into the servers connected to storage systems is a critical step in avoiding delays caused by congestion in data look-up requests. See Figure 2-6 for an example of a storage area network.

A figure shows an example of a storage area network.

Figure 2-6 A storage area network.

The Fibre Channel group of SAN open protocols was designed for the heavy traffic generated in data-intensive networks. It is a point-to-point protocol, wherein data is transferred directly from one point (a computer) to another point (disk storage). Because of its capability to transfer large amounts of data at high speeds, large companies use Fibre Channel. It is the costliest SAN option.

The Fibre Channel over Ethernet (FCoE) protocol was approved by ANSI in 2009. It was developed so that Fibre Channel SANs could communicate with Gbps Ethernet networks without translating between Ethernet and Fibre Channel protocols. The goal is to simplify structures and communications in data centers.

Other, less costly, SANs include:

  • Image iSCSI (Internet Small Computer System Interface), which is a newer, less costly all-IP protocol suited for small and medium-sized companies

  • Image NAS (network-attached storage) systems, which are connected to devices via a specialized server containing file-sharing software. NAS is compatible with Ethernet protocols

Hyper-Converged Infrastructure

Hyper-converged infrastructure (HCI) simplifies data center infrastructure by combining storage, applications, and networking on commodity, off-the-shelf appliances, and provides the ability to manage storage, computing, development and testing, remote offices, and other services from a single point. HCI is available from most equipment manufacturers. If not using off-the-shelf servers, HCI requires that all the data center hardware be from the same manufacturer.

HCI is software-controlled infrastructure. Centralized software is used to manage the entire HCI infrastructure and prioritize streams of data. The software application (the controller) monitors and controls the data center and creates logs of traffic and outages. The error-reporting software is centralized.

The goal of hyper-converged infrastructure is to provide enterprises a way to scale up or down without a major forklift upgrade. HCI came into prominence as a way to save money by emulating cloud infrastructure in private and commercial companies. Compression and deduplication are used in storage equipment so that data can be stored using less disk space. Compression uses complex algorithms to shrink the size of data; see Chapter 1 for more about compression. Deduplication removes redundant data from stored and transmitted files. Additionally, it streamlines protocols that require acknowledgments and negative acknowledgments after each stream of transmitted data.
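The interplay of deduplication and compression can be shown with a short Python sketch using the standard library. This is a simplified illustration of the idea, not how any particular vendor implements it: identical chunks are stored once (keyed by content hash), and each unique chunk is compressed:

```python
import hashlib
import zlib

def store_chunks(chunks):
    """Deduplicate chunks by content hash, then compress each unique one.
    Returns the chunk store and the list of references needed to rebuild."""
    store = {}   # digest -> compressed bytes
    refs = []    # the original stream is rebuilt from this digest list
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # identical chunks are stored only once
            store[digest] = zlib.compress(chunk)
        refs.append(digest)
    return store, refs
```

Rebuilding the original data is just a matter of looking up each reference and decompressing it.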

The Impact of Virtualized Hardware Failure

Because multiple software applications run on each physical server, virtualization increases each physical server’s criticality. If one server fails, multiple applications and business processes are disrupted. Moreover, server virtualization centralizes more applications, such as collaboration, video and audio conferencing, and accounts payable and accounts receivable software, within data centers rather than in remote departments. Thus, in multi-site organizations, failures affect multiple sites and departments.

Redundancy is a key consideration in organizations where computing is critical for operations. Hot standby is one option for redundancy. Hot standby refers to the capability of one piece of equipment to automatically take over the function of a failed unit, without human intervention. One way to achieve this is to provide alternate paths if one switch fails. Another option is to use replication software, which is used in virtualized servers to back up all files to a hot standby location that can handle all computing if the main site crashes.

Managing Users’ Computers via Virtual Desktops

Desktop virtualization, also called Virtual Desktop Integration (VDI), refers to hosting users’ applications and desktop images on central servers in data centers or in the cloud. Staff access their desktop images and applications from screens on their desks or from mobile devices.

This relieves IT staff from troubleshooting software, applying patches to software, and upgrading software on employees’ computers, which occupies a considerable amount of time. Moreover, ensuring that desktop and laptop computers don’t contaminate networks with viruses is a complex and time-consuming task. End users who install unknown programs or inadvertently open e-mail attachments that contain viruses can unwittingly bring computer networks to their knees.

In addition, users today are mobile. They access their applications and documents from all types of devices: laptops, desktop computers, tablets, and smartphones. In addition to providing portability and security, desktop virtualization ensures that users can access applications that require, for instance, Windows, alongside others that run on the Mac operating system.

All of these factors, plus improvements in desktop virtualization and lower costs for VDI, are driving interest in desktop virtualization among large enterprises. With desktop virtualization, users have a screen and keyboard. A connector box linked to their computer is tied to a centralized server that runs desktop virtualization software, as in Figure 2-7. Users’ desktops are hosted in the central server as software.

An illustration shows an example of desktop virtual integration (VDI).

Figure 2-7 An example of desktop virtual integration (VDI).

When users turn on their screens, they see an image of their desktop, complete with icons. Virtual Desktop Integration is referred to as thin-client technology because very little intelligence is needed at the end user’s device; the user’s equipment is referred to as the client. Organizations such as Dell, Citrix Systems, Inc., LISTEQ, Microsoft, Nimboxx, Oracle, and VMware supply desktop virtualization software.

In the past, when Virtual Desktop Integration was tested, most organizations that installed it experienced unacceptable delays in accessing centralized files and applications because of inadequate LAN capacity. Improvements in LAN capacity have eliminated these delays. Another factor inhibiting implementation is the fear that if a remote user wants to work on centralized files from somewhere without an available broadband connection, the productivity gains of centralizing applications will be lost, along with any modifications to the files.

Multiple Operating Systems on Desktop Computers—Testing and Flexibility

Software developers and IT staff use desktop virtualization on computers with multiple operating systems to test new applications as they are developed or before they are provided to an organization’s staff. They do this by trying out the new software on a variety of operating systems, such as Linux, Windows, and the Mac operating system, to determine whether the application is compatible with each. They can additionally test newer and older versions of these operating systems with various applications before making them available to internal staff or customers.

Desktop virtualization can also be installed on individual computers. A person with a Mac may wish to run a program that works only on Windows. To do so, they install the Windows operating system on a computer capable of supporting dual operating systems and activate it to run the Windows-compatible application. A personal computer or laptop with adequate memory and processing power is required to avoid slowing down the computer when using dual operating systems.

In addition to testing operating systems, IT staff also test upgraded versions of web browsers for compatibility with Mac, Windows, and other computer operating systems.

Access to the Internet and Other Broadband Networks via Routers

Routers connect enterprise networks to the Internet, a carrier’s network, and other corporate sites, based on the IP address stored in the router’s memory. If a LAN-connected device such as a printer or PC is moved from one LAN to another, the router table must be updated or messages will not reach that device. The fact that routers transmit packets by IP address rather than individual device MAC addresses is one reason they are considered Layer 3 devices. Routers are critical. If they fail, all access to the Internet and other sites is lost. Because of this criticality, organizations typically purchase backup routers. They can balance the traffic between the two routers or keep a backup router on hand, which can be installed quickly in the event of a primary-device failure.
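The routing-table lookup described above can be sketched with Python’s standard `ipaddress` module. The table below is hypothetical, and real routers do this in specialized hardware, but the principle — the most specific matching prefix wins — is the same:

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "core-router",
    ipaddress.ip_network("10.1.0.0/16"): "branch-router",
    ipaddress.ip_network("0.0.0.0/0"): "internet-gateway",  # default route
}

def next_hop(destination: str) -> str:
    """Longest-prefix match: forward to the most specific route
    whose prefix contains the destination address."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]
```

If a device moves to a different LAN, the table entry for its prefix must point at the new location or traffic for it is misdirected.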

The devices that connect internal networks to a carrier’s public networks are considered edge devices. Routers are defined as edge devices because they connect sites to the Internet via dedicated links, such as Carrier Gigabit Ethernet service. For more on Carrier Gigabit Ethernet, see Chapter 5, “Broadband Network Services.” In cellular networks, cell phones and smartphones are edge devices that connect users to either the Internet or other mobile devices.

Network Functions Virtualization

Enterprises are starting to implement previously hardware-based network devices as software on their network. For example, routing and switching functions may be represented as software in commodity, x86 servers.

Additional Router Functionality

Beyond simply acting as a connection point to the Internet, routers can provide other functionality, including the following:

  • Image Firewall Routers can screen incoming and outgoing traffic.

  • Image Deep Packet Inspection (DPI) DPI can manage internal traffic. See Chapter 1 for more information about DPI.

  • Image Encryption protocols Routers employ IPsec and other protocols for sending packets over the Internet securely. Encryption is the use of complex algorithms to mathematically scramble the bits in frames and packets so that they cannot be read in transit.

  • Image Carrier Gigabit Ethernet See Chapter 5 for more information about Carrier Gigabit Ethernet.

  • Image Video Digital Signal Processor (DSP) This is used for conferencing services.

  • Image Wi-Fi A wireless service for branch and small offices that allows users to connect wirelessly to the router for Internet access.

  • Image Session Initiation Protocol (SIP) SIP support permits trunking for Unified Communications (UC) functions such as presence and instant messaging with other sites and VoIP calls.

  • Image Quality of Service (QoS) This prioritizes voice and video in the WAN. Voice and video are assigned tags that classify them with a higher priority than, for example, e-mail traffic.

These services are located on specialized circuit boards, or blades, within the router. Multiple services can be located on the same blade, which can have more than one processor.
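The QoS behavior in the list above — tagged voice and video leaving the queue ahead of e-mail — can be sketched as a priority queue. The tag values here are hypothetical, not actual DSCP code points:

```python
import heapq

# Hypothetical priority tags: lower number = dequeued first
PRIORITY = {"voice": 0, "video": 1, "email": 2}

class QosQueue:
    """Sketch of QoS tagging: higher-priority traffic classes are
    transmitted before lower-priority ones."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class: str, packet: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]
```

Even if an e-mail packet arrives first, a voice packet queued behind it is sent ahead of it.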

Software to Monitor LAN-Connected Devices

Small data centers with only one or two physical computers holding virtual computers are generally easier to manage than data centers with perhaps 20 applications installed on 20 physical servers. However, there is a learning curve for managing numerous physical servers, each running multiple operating systems and applications. It involves keeping track of which physical server each image resides on as well as the application’s address. For large data centers, it is often easier to manage physical servers, where each one can be physically observed, than to manage an array of applications on virtual computers.

However, in economic downturns, organizations are often reluctant to invest in software used to track LAN traffic and uptime. New applications critical to operating an organization’s core operations have a priority in resource allocation.

Impact on IT When Organizations Merge

On January 19, 2016, the Boston Conservatory of Music and the Berklee College of Music announced an agreement to merge. At the merger, Berklee’s name became simply Berklee and the Conservatory was renamed the Boston Conservatory at Berklee. Berklee is a global university with a campus in Valencia and programs internationally including in Canada, China, India, and Latin America. Although the Boston locations are a few blocks from each other in downtown Boston, the close proximity of the Boston Conservatory to Berklee did not preclude disparities in hardware and software.

Bob Xavier, former IT Director of the Boston Conservatory, was appointed IT Director for the merged entity. Xavier’s primary goal was to simplify and streamline the infrastructure as a key element in ensuring uptime. His secondary goal was to make technology investments that promote the institution’s business values.

Per Xavier, an immediate need was to put in place consistent processes to manage change and implementation, and to develop compatible software applications. According to Xavier,

“We are being strangled by the weight of supporting so many incompatible systems and applications.”

For example, Berklee and the Boston Conservatory had incompatible student systems. Berklee had a student system called Colleague, and the Conservatory had PowerCampus Student. Student systems are software packages used to manage the majority of tasks around students. These include admissions, financial aid records, human resources, tuition bills, and course registrations.

According to IT director Bob Xavier, the number one priority for Berklee is the ability to be nimble so that the University can keep up with technological changes and requirements. Four years before the merger, the Boston Conservatory made a strategic decision to push all applications and processes to the cloud or to hosting centers. They wanted to “get out of the data center business.” The rationale is that it’s easier to be nimble and flexible without the burden of upgrading hardware in data centers when technology inevitably changes. Xavier is in the process of continuing the push to the cloud for the combined entity.

Incompatibility between browsers, computers, and applications is another area where the school is working toward standardization and compatibility. Not all cloud and hosted applications are compatible with every browser used at the school. The university would like to see more standardization in browsers so that cloud and hosted applications work more smoothly and can be tested with just a single browser. In addition to these differences, some staff and faculty have PCs and others have Macs.

Currently, the IT department feels that none of the systems scales adequately to support both universities. They are not designed to function globally, across time zones. The university is now working to resolve all of these incompatibilities and limitations. Xavier predicts that this will be a 3-year process.

Monitoring LANs—What’s Up? What’s Down?

To properly operate large networks, a good suite of management tools is required. Moreover, because problems can occur in switches, servers, and storage, the best management suites are those that are capable of diagnosing and viewing all of these devices. Visibility across these three sectors is complicated by the fact that in many data centers, different manufacturers supply the switches, servers, and storage. One solution is to use third-party suites of management software that are capable of monitoring equipment supplied by more than one manufacturer.

The purpose of monitoring software is to notify IT staff about the status of LAN infrastructure. For example, are there outages? Where are the outages? Is congestion delaying traffic? Monitoring software is important on local area networks because LANs and data centers are critical to the functioning of today’s organizations. Companies deploy monitoring software to alert them to outages and provide reports and charts on the percentage of outages and congestion on, for example, switch ports and routers.

LAN monitoring software presents visual information through web browser interfaces. Charts indicate the status of various devices and show which outages affect each LAN segment, so that IT staff can clear the problems. IT staff receive e-mail, text, and audible alerts notifying them of outages. LAN monitoring software can be installed on servers at customer sites or in the cloud.

Cloud-based and on-site monitoring packages are sold and maintained by software developers, resellers, and managed service providers. Developers of monitoring software include Cisco, Hewlett Packard, Ipswitch, NetBrain, PagerDuty, Paessler (PRTG), and Nagios Core.

Setting Up the System—Time Consuming, Complex

The most challenging, time-consuming part of implementing a network monitoring system is inventorying all the equipment on site. This takes about a day to accomplish. The following is a list of some of the equipment that needs to be monitored:

  • Image Personal computers

  • Image Printers

  • Image Security monitors

  • Image Switches

  • Image Routers

  • Image Storage devices

  • Image Servers

  • Image Wi-Fi gear

While LAN monitoring software can be used for Wi-Fi gear, specialized Wi-Fi monitoring software from Wi-Fi manufacturers can monitor these networks in more detail. If Wi-Fi gear is included in non-specialized monitoring software, then components of Wi-Fi networks need to be inventoried. In addition to inventorying each piece of LAN hardware, monitoring software needs to be aware of which devices are connected to each other. For example:

  • Image To which switch and/or alternate router is each router wired?

  • Image To which printer is each computer connected?

Other information in databases includes:

  • Image What are all the switches to which each port is connected?

  • Image To which backbone switch is each wiring closet connected?

  • Image What is the IP address of each device on the network?
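The inventory and connection questions above amount to a small database of devices and links. A minimal Python sketch (with hypothetical device names and addresses) shows the shape of what monitoring software must track:

```python
# Hypothetical inventory: device name -> (device type, IP address)
INVENTORY = {
    "rtr-edge-1": ("router", "10.0.1.1"),
    "sw-closet-1": ("switch", "10.0.1.2"),
    "prn-floor2": ("printer", "10.0.2.15"),
}

# Which device each piece of gear is wired to (downstream -> upstream)
LINKS = {
    "sw-closet-1": "rtr-edge-1",
    "prn-floor2": "sw-closet-1",
}

def upstream(device: str):
    """To which switch or router is this device connected?"""
    return LINKS.get(device)

def ip_address_of(device: str) -> str:
    """What is the IP address of this device on the network?"""
    return INVENTORY[device][1]
```

Every move or addition on the network means updating both tables, which is why the initial inventory, and keeping it current, is the hard part.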

Keeping Up with Changes—An Ongoing Challenge

The biggest challenge of operating monitoring software is keeping up with changes. As departments grow, shrink, or change locations, monitoring software needs to be aware of the changes and update its database. Much of this is done automatically using discovery, a feature of monitoring software that “discovers” new and changed devices by tracking their IP addresses. However, it is not 100 percent accurate.

Reports and Alerts—Pings: “Are You There?”

Monitoring software determines whether equipment is up by sending continuous pings to all of the equipment in the network. Ping software consists of small programs that expect a response to each of their messages; it’s analogous to sending a message that says “are you there?” If the equipment responds to the ping, the monitoring software assumes that the gear and the network are operational. Because ping responses are only 80 percent accurate, IT staff must investigate further if a response to a ping is not received.
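The polling loop itself is straightforward. In the Python sketch below, the `probe` callback stands in for a real ICMP ping (which requires raw-socket privileges and a live network); here it simply consults a set of addresses known to answer. All names and addresses are hypothetical:

```python
def poll_devices(devices, probe):
    """Ask each device 'are you there?' via the probe callback
    and record an up/down status for it."""
    status = {}
    for name, address in devices.items():
        status[name] = "up" if probe(address) else "down"
    return status

# Stand-in for real pings: a set of addresses that would respond
reachable = {"10.0.1.1", "10.0.1.2"}
devices = {"rtr-edge-1": "10.0.1.1", "prn-floor2": "10.0.2.15"}
status = poll_devices(devices, lambda addr: addr in reachable)
```

A device marked "down" here would trigger the alerts described in the next section.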

Alerts—24 Hours a Day, 7 Days a Week

IT staff monitor enterprise networks around the clock. Monitoring systems are able to check cloud-based applications as well as LAN gear; they access cloud applications’ HTTP (Hypertext Transfer Protocol) addresses from within browsers to determine whether log-in is possible. Monitoring software can be programmed to alert staff of outages via e-mail, text messaging, and audible alerts. If key cloud-based applications, parts of the network such as e-mail, or the entire network are down, the systems notify staff after hours as well as during work hours.

Organizations define which devices should trigger alerts. Alerts are often programmed differently after hours than during the day. For example, only issues that adversely affect the entire network may trigger an alert after hours. After hours, if on call IT staff can’t clear the problem remotely, they may be expected to come into work to try to resolve the problem during off hours so that the network is available the next morning.

Monitoring software is close to 100 percent accurate, but there are occasional false positive and false negative alerts. Much like strep throat or cancer screenings, where a false positive mistakenly indicates the presence of strep or cancer, a false positive in monitoring means the system reports an outage that did not occur. A false negative is when an outage occurred but the monitoring system didn’t see it: the software falsely thinks the gear is up, and IT staff are not notified of the outage.

Per Jim Chapman, Product Manager at Lexington, Massachusetts-based monitoring software developer, Ipswitch:

“Companies would rather not get any false readings from their monitoring systems. But, if they do get a false reading, which happens from time to time, they would rather get false positives than false negatives. With a false negative, an outage is ignored and thus not resolved.”

Charts and Graphs

As indicated in Figure 2-8, charts and graphs provide visual “pictures” of the health of the local network. They further indicate congestion on ports within switches and routers. Moreover, monthly summaries are available indicating, for example, total percentage of time the network was up. This is especially important to IT and corporate management because outages negatively impact productivity, and may slow the time to market of new and upgraded products.

A screenshot of a window indicating the status of LAN gear is shown.

Figure 2-8 A graphic indicating the status of LAN gear. (Screenshot of Ipswitch © 2018 Ipswitch Corporation)

LAN Monitoring Software in the Cloud

Monitoring software located in the cloud accesses customers’ networks by using agent software, called a collector, connected to firewalls. The collector polls gear such as servers, switches, and router ports on the local network and sends the results back to the cloud-based software. For providers of monitoring software, cloud-based systems have the benefit of faster set-up for each customer: once the basic software is configured, only new customer information needs to be entered. All customers have close to the same features, thereby simplifying both the set-up and the software.

In addition, LAN monitoring suppliers receive a continuous stream of income from customers’ subscriptions. In contrast, with on-site sales, there is a one-time payment for the software plus an annual maintenance fee of approximately 12 percent of the purchase price.

The advantages for cloud providers’ customers are that they avoid a one-time purchase price and the day-to-day maintenance of the software. Instead, they pay monthly fees based on the size of their systems. Customers whose systems are in the cloud are provided with a console with a browser interface so they can see indications of what’s up and what’s not, and print charts and graphical summaries of the status of their gear.

Monitoring software located in the cloud needs the capability to monitor diverse customers located in different locations and distinguish, for example, Customer A from Customer B. This capability is provided by multi-tenant software. Multi-tenant software is able to organize the software by individual companies.
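The core of multi-tenancy is partitioning every record by customer so that one tenant can never see another’s data. A minimal Python sketch (class and names are hypothetical) illustrates the idea:

```python
class MultiTenantMonitor:
    """Sketch of multi-tenant software: every record is keyed by
    the tenant (customer) it belongs to."""

    def __init__(self):
        self._data = {}  # tenant -> {device: status}

    def record(self, tenant: str, device: str, status: str) -> None:
        self._data.setdefault(tenant, {})[device] = status

    def view(self, tenant: str) -> dict:
        # Customer A's console only ever shows Customer A's devices
        return dict(self._data.get(tenant, {}))
```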

IP PBXs—Voice, Video, and Unified Communications

Software for IP phone systems is located on servers that use industry-standard operating systems and often open-source protocols. These systems don’t require a great deal of space, and voice is considered just another application in the data center.

The use of voice telephony in business and commercial organizations is declining as staff rely increasingly on messaging and e-mail services to communicate. As a matter of fact, some high-tech companies such as Alphabet do not supply most employees with telephones. Rather, these employees use e-mail and text messages to communicate with each other and the outside world. If they do need to speak with someone, they use their own cell phone. Most high-tech companies do provide some voice capabilities in conference rooms equipped with audio and videoconferencing gear so that employees can collaborate with each other and with partners.

However, companies that don’t provide telephones at every desk do give each customer support center agent a telephone or a computer with voice calling capabilities. Agents use them to speak with customers and partners whose questions and issues can’t be resolved via text, chat, or e-mail. Sophisticated customer support centers may give agents the flexibility to exchange chat and text messages in addition to speaking directly with callers. Contact centers are also used to make outgoing telemarketing calls.

IP Telephone Systems—Voice and Applications on LANs

When organizations purchase new phone systems, they buy IP-based systems for a number of reasons. Today, employees routinely conduct their day-to-day communications by using e-mail, collaboration tools, and instant messaging. Thus, there is less concern about using one network (the LAN) for both voice and data in the event of a LAN crash. For the most part, voice is used for discussions about complex topics as well as for audio and video conferences. It is also used for time-sensitive purposes such as reporting fires and other emergencies; these outgoing calls can be made on mobile phones as well as office phones.

IP-based voice and data traffic is carried over the LAN. IP PBX systems are easier to maintain and can be more easily integrated with all types of conferencing and contact centers. Moreover, the quality of voice carried on LANs is generally quite good, due in part to improvements in LAN capacity. In addition, LAN reliability has improved to the point that downtime is no longer a significant concern.

While the initial cost of many IP telephone systems is comparable to that of older-style phone systems, ongoing support costs are lower. In addition, sharing centralized applications such as audio and video conferencing and contact center services with remote offices is less complex than it was with older systems.

IP Telephony Manufacturers

Makers of voice telephone systems such as Avaya, Inc. and Mitel Networks Corporation once dominated the market for telephone systems. However, the advent of IP telephony, wherein voice and data are carried on the same network, presented an opportunity to companies such as Cisco Systems and Microsoft that previously sold only data equipment and software.

Microsoft’s VoIP product, Microsoft Teams, replaces Skype for Business. Teams is a cloud-based suite of applications including voice, e-mail, and staff collaboration. Teams also integrates with Office 365, a package of software for spreadsheets, word processing, and presentations. Microsoft has stated its intention to continue its Skype service for consumers and small businesses.

Manufacturers and software developers of IP-based telephone systems include the following:

  • Avaya Communications

  • Cisco Systems, Inc.

  • Microsoft

  • Mitel Networks

  • Nexmo (part of Vonage)

  • Polycom

  • RingCentral

  • Tropo

  • Twilio

  • Vertical

IP Telephony—Converting Voice Signals to Digital Data

Converged phone systems convert analog voice signals to digital bits, compress that data, and then bundle the compressed, digitized bits into packets, essentially transmitting voice as data packets. Data, voice, and video packets often share the same LAN infrastructure (cabling and switches) as that used for normal data traffic. However, voice and video require special treatment. They need to be sent in real time. Impairments such as delay and packet loss caused by network congestion are often not noticeable with data traffic. However, these problems noticeably degrade voice and video quality.

Thus, network capacity is critical for sustaining voice quality. Greater LAN capacity and faster Digital Signal Processors (DSPs) as well as protocols for Quality of Service (QoS) enable local data networks to carry high-quality voice and video. Even though voice quality is generally good in IP systems, the aforementioned problems can occur. Therefore, packets with voice and video are given priority over packets containing bits that are less time-sensitive, such as e-mail.
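Marking voice packets so that switches and routers can prioritize them is commonly done at the sending application. A hedged sketch, assuming a Linux host and DSCP value 46 ("Expedited Forwarding"), the marking conventionally used for voice traffic:

```python
import socket

DSCP_EF = 46  # "Expedited Forwarding", the conventional marking for voice

# A UDP socket such as one carrying an RTP-style voice stream.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The DSCP field occupies the upper six bits of the IP TOS byte,
# so the value is shifted left two bits before being set.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos >> 2)  # → 46: outgoing packets are now marked for priority
sock.close()
```

QoS-aware switches and routers along the path read this marking and queue the packets ahead of less time-sensitive traffic such as e-mail.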

Voice QoS and Security

Keeping networks secure is a difficult, ongoing challenge. Security needs to be built into IP telephony servers and into all of the Layer 2 and Layer 3 switches, the routers, and the firewalls. In particular, many IP systems use common protocols. These protocols are wide open in the sense that many hackers know how they work and put a great deal of effort into finding their vulnerabilities.

Organizations use the following to ensure voice quality:

  • QoS solutions These solutions mark and prioritize voice and are important to ensure minimal delays for acceptable voice quality. See the upcoming sections “Assessing Network Quality by Using Voice Quality Measurements” and “Prioritizing Voice and Video on a VLAN” for information about voice quality measurements and Virtual LANs (VLANs).

  • Compression Compressing and digitizing voice signals impacts quality. For example, the compression algorithm Adaptive Multi-Rate Wideband (AMR-WB), based on the G.722.2 standard, samples voice 16,000 times per second and compresses it to rates as low as 12.65 kilobits per second. At this rate, each voice session uses only 12.65Kbps of bandwidth but provides better audio than earlier compression standards, including G.711, which requires more capacity. G.722.2 provides high-definition (HD) voice and is used for some mobile voice traffic.
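As a rough illustration of why codec choice matters, the payload rates of two codecs can be compared. The 1 Mbps of voice capacity is a hypothetical figure, and the rates exclude IP/UDP/RTP packet overhead, which adds to the actual on-wire bandwidth:

```python
# Payload rate per call for two codecs, in kilobits per second.
CODEC_KBPS = {
    "G.711": 64.0,               # uncompressed 8 kHz PCM
    "G.722.2 (AMR-WB)": 12.65,   # HD voice at one of its lower-rate modes
}

link_kbps = 1000  # hypothetical 1 Mbps of LAN capacity reserved for voice

for codec, rate in CODEC_KBPS.items():
    calls = int(link_kbps // rate)
    print(f"{codec}: {rate} kbps per call, ~{calls} concurrent calls")
# G.711 fits ~15 calls in that capacity; AMR-WB fits ~79.
```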

In addition to the preceding, proxy servers authenticate callers, verifying that they are who they say they are, before calls are sent to their destination. Proxy servers, located in gateways and firewalls, serve as intermediaries between callers and applications or endpoints (the telephones and other devices connected to the LAN).

Assessing Network Quality by Using Voice Quality Measurements

IT staff members can manage VoIP by using software management tools to assess voice and video quality to analyze the following:

  • Packet loss This refers to packets that are dropped when there is network congestion. Packet loss results in uneven voice quality; voice conversations “break up” when packet loss is too high.

  • Latency This refers to the delay (in milliseconds) incurred as voice packets traverse the network. High latency results in long pauses within conversations and clipped words.

  • Jitter This refers to variation in latency between packets, which results in noisy calls that contain pops, clicks, or crackling sounds.

  • Echo This is the annoying effect of hearing your own voice repeated, an issue with which many of us are familiar. It is often caused when voice is translated from a circuit-switched format to the IP format, and it is usually corrected during installation by special echo-canceling software.
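A sketch of how a monitoring tool might derive two of these metrics from observed packets. The sequence numbers and arrival times are hypothetical sample data, and the jitter measure here is a simple average of interarrival variation rather than any particular vendor's formula:

```python
def packet_loss_pct(seq_numbers):
    """Percentage of expected packets that never arrived."""
    expected = max(seq_numbers) - min(seq_numbers) + 1
    return 100.0 * (expected - len(seq_numbers)) / expected

def mean_jitter_ms(arrival_times_ms):
    """Average change between consecutive interarrival gaps."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    variations = [abs(g2 - g1) for g1, g2 in zip(gaps, gaps[1:])]
    return sum(variations) / len(variations)

# Hypothetical capture: packets 3 and 7 were dropped, and the
# surviving packets arrived at uneven intervals.
seqs = [1, 2, 4, 5, 6, 8, 9, 10]
arrivals = [0, 20, 41, 60, 83, 100, 121, 140]  # milliseconds

print(round(packet_loss_pct(seqs), 1))    # → 20.0 (2 of 10 packets lost)
print(round(mean_jitter_ms(arrivals), 1)) # → 3.2 ms of average jitter
```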

Prioritizing Voice and Video on a Virtual Local Area Network

Organizations can program voice telephony as a separate Virtual Local Area Network (VLAN). These “virtual” networks act as separate LANs. IP PBX components use common protocols and control the types of devices that are able to communicate with IP telephones and hold audio and video conferences. See Table 2-3 in the Appendix for additional protocols and VoIP terms.

VLANs perform the following special treatment for IP endpoints:

  • The 802.1p protocol tags voice and multimedia packets so that they are prioritized and experience less delay. The tag distinguishes voice and video packets from less time-sensitive traffic. Tagging protocols such as 802.1p are used for conferencing services as well.

  • VLANs shield endpoints from hackers by allowing only certain types of packets through firewalls to reach them. This is accomplished by means of policies for firewall logical ports that are dedicated to voice and video traffic. Logical ports are defined in software rather than being actual physical ports.
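To make the tagging concrete, the 16-bit tag control field of an 802.1Q header, which carries the 3-bit 802.1p priority, can be packed by hand. The priority and VLAN ID values below are hypothetical (priority 5 is commonly used for voice):

```python
import struct

def dot1q_tci(priority, vlan_id, dei=0):
    """Pack the 3-bit priority, 1-bit DEI flag, and 12-bit VLAN ID
    into the 16-bit 802.1Q tag control field."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    return (priority << 13) | (dei << 12) | vlan_id

# Hypothetical voice VLAN 100, tagged at voice priority 5.
tci = dot1q_tci(priority=5, vlan_id=100)

# TPID 0x8100 identifies the frame as carrying an 802.1Q tag.
tag = struct.pack("!HH", 0x8100, tci)

print(f"{tci:#06x}")   # → 0xa064 (priority 5 in the top bits, VLAN 100 below)
print(len(tag))        # → 4 bytes of tag inserted into the Ethernet frame
```

Switches read the priority bits in this tag to decide which queue a frame belongs in, which is how voice frames move ahead of bulk data on a congested port.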

IP PBX Architecture

IP Private Branch Exchange (IP PBX) architecture varies somewhat among manufacturers, but servers for call processing and gateway functionality are common to all. The gateway converts Voice over Internet Protocol (VoIP) signals to those compatible with the Public Switched Telephone Network (PSTN); gateways are located in a separate device or in a router. In addition, Layer 2 switches transmit calls to other devices connected to the switch and to backbone switches.

Connecting IP Telephones to Layer 2 Switches and RJ45 Jacks

Computers and phones often share the same jack and cabling with the Layer 2 switch. In this configuration, the user’s personal computer is plugged into a data outlet on the back of the telephone, and the telephone is plugged into the nearest RJ45 data jack. (IP telephones are often referred to as endpoints or nodes, terms for items such as PCs and printers connected to LANs.)

Note

For greater redundancy and to create a dedicated path to the Layer 2 switch, the PC and telephone can each use a separate RJ45 jack, cabling, and port on the Layer 2 switch. This requires additional hardware in the switch and an extra cable run from the telephone to the switch. In either case, voice, video, and data share the fiber-optic cabling and ports on Layer 3 switches that transmit traffic between floors and buildings. This is the enterprise backbone.

Communications Servers and Voice as an Application on the LAN

Communications servers perform call processing, sending instructions on how to set up calls. They send instructions for features such as three-party conferencing, speed dial, and transfer. Communications servers host the software for the system, trunking, and station features. These servers specify the trunks over which international, local, and interstate calls should be routed. A trunk is a fiber-optic or copper cabling link to broadband or the Public Switched Telephone Network (PSTN). The servers have lists of user profiles with extension numbers and permissions. Permissions specify who is allowed to use particular features, such as video conferencing and placing calls to international locations.

Most IP servers run on UNIX, Linux, or Microsoft’s Windows operating systems. These operating systems control basic operations, including how information gets on and off the network, how it’s organized in memory, and the administrative interface for staff members to monitor and maintain the system. System administration and programming is performed via a computer logged in to the communications server.

In addition to basic voice features and call processing, some communications servers also hold software for specialized applications, such as audio and video conferencing, speech recognition, contact centers, voicemail, and Unified Communications (UC) systems. In other instances, these applications might be on separate application servers. Whether they reside on separate servers or in the communications server, any device connected to the corporate LAN—even those at remote locations—can be given permission in the communications server to access these applications.

Media Gateways, Protocol Translation, and Signaling

Media gateways contain Digital Signal Processors (DSPs) that compress voice data to make it smaller and more efficient to transmit. DSPs also convert analog voice signals to digital, and vice versa. They then bundle the digital voice signals into packets compatible with LANs. In addition, media gateways include circuit packs with ports for connections to traditional circuit-switched analog and digital trunks.

Media gateways are responsible for managing some security, monitoring call quality, detecting and transmitting off-hook conditions, and handling touch-tone, dial tone, busy signal, and ring-no-answer conditions. Media gateways transmit signals for VoIP such as ringing, call setup, and touch-tone in separate channels from voice calls.

Note

Central Power Distribution and Backup via Power over Ethernet

Every IP telephone needs its own source of electrical power unless it’s connected to a PC’s USB port. To avoid the expense and labor of providing local electricity to each phone at installation, organizations can power phones via the Layer 2 switch to which they are connected. This ensures that all phones are connected to an Uninterruptible Power Supply (UPS) so that they can survive power surges and interruptions. Central power and backup also avoid the problem of ensuring that each phone is near an electrical outlet, is plugged in, and has an electrical cord.

To bring power to groups of endpoints, organizations use 802.3af, the IEEE standard known as Power over Ethernet (PoE) that defines how power can be carried from the Layer 2 switch to the IP telephone by using the same cabling that transmits voice and data. Battery backups and generators are deployed in wiring closets where switches support mission-critical telephones or attendant consoles. PoE is also used to power corporate wireless Wi-Fi antennas and base stations. (See Chapter 7 for more information about corporate wireless networks.)
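A back-of-the-envelope sketch of a PoE power budget, assuming 802.3af's 15.4-watt per-port limit at the switch; the switch's total budget and the per-phone draw are hypothetical figures:

```python
POE_PORT_MAX_W = 15.4    # 802.3af maximum power per port at the switch
SWITCH_BUDGET_W = 370.0  # hypothetical total PoE budget for one switch
PHONE_DRAW_W = 6.5       # hypothetical draw of one IP telephone

# Each phone draws well under the per-port limit...
assert PHONE_DRAW_W <= POE_PORT_MAX_W

# ...but the switch's shared power budget caps how many phones it can
# run at once, regardless of how many ports it has.
max_phones = int(SWITCH_BUDGET_W // PHONE_DRAW_W)
print(max_phones)  # → 56 phones before the shared budget is exhausted
```

Checks like this are why wiring closets that power mission-critical phones also get battery backup sized to the switch's PoE load, not just to the switch itself.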

Session Initiation Protocol—Compatible Trunks

Session Initiation Protocol (SIP) signaling is a key protocol for connecting IP networks together. It is also used to connect enterprise locations with data networks for voice calling. Signaling systems used in office phone systems need to be compatible with signals used in a carrier’s network if voice calls are to be sent over data links. This is so that the equipment can interpret addressing signals, ringing signals, and busy signals sent by office IP PBXs, and vice versa.

Without this compatibility, organizations with IP telephone systems need to support two different sets of trunks, one set for voice and another set for data, or they can use a gateway to translate their non-SIP IP calls to SIP. (Trunks are used to carry voice and data on public networks.) Without SIP, there are extra costs for gateways, and voice quality is impaired: signals are converted to a SIP-compatible format when sent and back to the original VoIP format when received, and because compression does not re-create voice in exactly the same form when it is decompressed, quality is degraded every time voice data is converted.
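For a concrete sense of what SIP signaling looks like, a minimal INVITE request (the message that initiates a call) can be assembled as plain text, since SIP is a text-based protocol. Every address, tag, and branch identifier below is a hypothetical placeholder:

```python
def build_invite(caller, callee, call_id):
    """Assemble a bare-bones SIP INVITE as CRLF-delimited header lines."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP pbx.example.com;branch=z9hG4bK-0001",
        f"From: <sip:{caller}>;tag=abcd",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice@example.com", "bob@example.net", "42@pbx.example.com")
print(msg.splitlines()[0])  # → INVITE sip:bob@example.net SIP/2.0
```

A real INVITE also carries a body describing the media session (codecs, ports), but even this skeleton shows why SIP-compatible trunks interoperate so readily: both ends simply parse the same human-readable headers.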

Direct Inward Dialing

Direct Inward Dialing (DID) is a feature that routes incoming calls directly to a telephone system extension without operator intervention. DID traffic can be carried on existing digital trunks used for incoming and outgoing voice calls and on SIP-type trunks carrying voice and data traffic.

As Figure 2-9 illustrates, DID service is made up of “software” telephone numbers. Each number is not assigned a specific trunk; rather, a few hundred DID numbers share a smaller number of trunk paths. When a DID call reaches a specific slot in the telephone company’s switch, the carrier’s switching equipment looks at the digits dialed and identifies the call as belonging to a particular organization. The telephone company equipment passes the last three or four digits of the dialed number to the organization’s telephone system. The onsite telephone system reads the digits and sends the call directly to the correct telephone.
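The PBX's side of DID routing can be sketched as a simple digit-to-extension lookup; the telephone numbers and extension names below are hypothetical:

```python
# Hypothetical map from the last four dialed digits to an extension.
DID_TO_EXTENSION = {
    "4501": "reception",
    "4502": "ext-102",
    "4503": "ext-103 (support)",
}

def route_did(dialed_digits):
    """Send the call straight to a phone, falling back to an operator
    when the digits don't match any DID number."""
    last_four = dialed_digits[-4:]
    return DID_TO_EXTENSION.get(last_four, "operator")

print(route_did("6175554502"))  # → ext-102 (rings that desk directly)
print(route_did("6175559999"))  # → operator (no matching DID number)
```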


Figure 2-9 DID soft numbers pointed to an organization’s broadband circuits. (Photo by Chun-Tso Lin/123RF)

Unified Communications, Contact Centers, and Video Conferencing

Value-added services add functionality to telephone systems. Unified Communications (UC) combines voicemail, e-mail, instant messaging, audio and desktop video conferencing in a single mailbox. Contact centers manage large numbers of incoming and outgoing telephone calls, texts, and e-mail messages. Voice response units (VRUs) enable callers to enter and access computer information. UC applications are available on IP-wired phones as well as mobile devices.

Integrating Conferencing, Instant Messaging, and E-Mail through UC

Unified Communications systems provide the capability to retrieve e-mail and voicemail messages from a single device such as a PC or a touch-tone telephone. Additionally, they contain presence indicators that show the availability of people in their group.

UC systems enable people who travel to have the convenience of checking only one source for messages and sharing information with others via audio and video conferencing. Retrieving messages from PCs, tablets, and smartphones also affords users the ability to prioritize messages and listen to the most important ones first. It eliminates the need to hear all voicemail messages before getting to the critical ones.

On many Unified Communications systems, when users access messages from a touch-tone phone, text-to-speech software converts e-mail to speech for callers and reads their e-mail messages to them. Employees access messages through e-mail software on their PCs or remotely on their mobile devices. IP PBX providers that supply UC systems provide customers single-source support for their phone and messaging systems. Examples of Unified Communications systems are: 8×8, RingCentral, Mitel, and Vocalocity.

Desktop Video Conferencing

Many UC packages and IP PBXs include desktop video and audio conferencing capabilities. See Figure 2-10 for an example of video conferencing on a personal computer. These require software clients on each personal computer and a server with the software. The conferencing software can be either in the communications server used for the main telephone system, or in a separate server along with other communications applications. Often, as is the case with Microsoft, Mitel and Cisco, the application itself may be hosted in the cloud. Participants in audio conferences with access to collaboration software can also illustrate ideas by using PowerPoint or other types of documents. They can edit as well as view the documents jointly.


Figure 2-10 Desktop video conferencing set-up. (Photos by Andrea De Martin/123RF [top, left]; Mangostar/123RF [top, center]; Dean Drobot/123RF [top, right]; Fizkes/Shutterstock [top, far right]; Agenturfotografin/123RF [bottom, left]; Racorn/123RF [bottom, right])

Using video conferencing capabilities such as this, co-workers with video cameras on their devices can work together on a more personal level. Another advantage is that staff members can take advantage of the same communications capability away from the office as they do in their office. If WebRTC (Real Time Communications) software is embedded in browsers, users with disparate PCs can participate in the same videoconferences.

Note

In addition to the use of licensed video conferencing software, small companies now routinely use Skype for audio and video conferencing because of its low cost (or often no cost) and ubiquity. Universal availability is an important feature when attempting to use video conferencing with people outside of your own organization.

Collaboration Software—Productivity Boosts

Collaboration software enables staff in different locations to collaborate on joint projects and documents. Users can look at the same documents on their PC or tablet screens while on an audio conference call or simply speaking together on the telephone. Staff can additionally add comments to documents or edit them. There are also calendaring functions so that people can coordinate dates for meetings. Collaboration applications are generally Internet-based. To improve security, some collaboration services encrypt texts so that data remains private. They may also have strict password and log-on requirements.

One example of collaboration software is Cisco’s WebEx. WebEx can be accessed from desktops, smartphones, and laptops or from specialized video conferencing equipment for meetings. Its screen sharing enables staff to view the same documents while on a videoconference, an audio conference, or a one-to-one telephone call. WebEx provides strong security features such as password management, encryption (the use of algorithms to scramble bits for added privacy and security), and role-based access to limit or widen access to WebEx for designated groups of employees. Many collaboration tools include an optional whiteboard feature so that a group of remote staff can access documents and add written comments on the whiteboard.

HD video conferencing systems are often integrated with collaboration software, making it easier for people in meetings at different locations to view and edit the same documents.

Collaboration sessions can be held in real time or people can log in at different times to add comments, send text messages, coordinate calendars, or edit documents. Other companies in addition to Cisco that provide collaboration software are Amazon, Asana, Atlassian, Citrix, IBM, Google Drive, Masergy, Microsoft Teams, Oracle, Slack, Trello, and Unify. There are many others, most of them free.

Video Conferencing

Because of the ready availability and consumers’ positive experiences with desktop video conferencing services offered by Google, Amazon, and Skype, staff expect conferencing to be readily available at work. When employees travel with laptops, smartphones, or tablet computers, they often take advantage of formerly residential video offerings to stay in contact with colleagues and customers. These alternative ways to conduct business with distant customers and offices have enhanced collaboration between staff at diverse locations worldwide.

Microsoft Teams’ conferencing service and Cisco’s web-based WebEx audio and video conferencing can be integrated with Microsoft Office 365 applications on users’ desktops.

Video Conferencing Powered by the Cloud

Organizations including Amazon and Google offer video conferencing based in the cloud. Customers who use these services avoid purchasing and maintaining video conferencing equipment. Some of these services offer add-on features such as Unified Communications and screen sharing, where participants can all view the same document. Examples of these services are Cisco’s WebEx, Citrix’s GoToMeeting, West Corporation’s InterCall, and PGi (owned by Siris Capital Group).

Immersive HD Video Conferencing

The terms “immersive” and HD video refer to group video conferencing systems that transmit high-definition video signals. Rooms designed specifically for immersive video conferencing contain a conference table and have specially configured lighting to enhance the video experience for users. Seating at the conference table might be arranged so that all of the chairs face the video screens.

In addition, rather than just containing one video monitor, these rooms have a wall of either three large, flat-panel monitors or, less often, an entire wall-size monitor. Viewing life-size images of remote co-workers, customers, or business partners in high resolution imparts the feeling of actually being in the same room with them. Prices for room size video conferencing gear have dropped and new user interfaces have resulted in easier-to-use systems. In most cases, there is no longer a need for specially trained IT staff to run the video conferences.

If video systems are based on a common signaling protocol such as SIP or H.323, organizations can hold conferences with organizations outside of their own company and with colleagues who might be using another manufacturer’s system. Manufacturers of immersive systems include Polycom, Inc., Tandberg (part of Cisco Systems), and Vidyo, Inc.

Communications Platform as a Service (CPaaS) vs. Hosted IP PBXs

As its name suggests, hosted IP PBX services relieve an organization of the burden of maintaining hardware and software for a phone system and adjunct services including contact centers. CPaaS systems, in addition to providing IP voice features, add customized solutions to complex communications issues between companies and their customers.

Hosted IP PBXs—Providers Managing Software and Gateways

Hosted systems have the same features as on-site IP PBXs, but the server software and gateway equipment are hosted at a provider’s site. This is an increasingly popular method of maintaining telephone systems. Rather than use capital to pay for a telephone system, organizations pay ongoing subscription fees based on the number of employees and the addition of value-added applications. With hosting, vendors easily add new features to their server without customers needing to call their supplier to reprogram their server software.

Customers do have an administrative screen where they can add and delete staff names and specify classes of service—restrictions on where staff can call or services they have permission to access. Customers that use a hosted IP PBX service are connected to their system via high-speed broadband connections. These network connections also transmit busy signals and ring tones between the provider and the customer’s handset.

Hosted VoIP systems are attractive to smaller and medium-sized companies that might not have in-house expertise to manage IP telephony and applications such as collaboration, contact centers, or unified communications. These companies might also be uncertain about their future growth. In addition, hosted systems provide disaster recovery and portability. If the customer loses electricity or their LAN crashes, they can access their telephony functions remotely.

Large companies with specialized needs, such as hospitals and financial organizations, may not choose to use a hosted solution, as it might not fit their special requirements for high-level security and privacy or other specialized features. Hosted systems are designed to provide the same features to all customers. Most do encrypt broadband data on the links between the host and the customer.

VoIP hosting providers include the manufacturers of IP PBXs listed above, as well as telephone companies and cable providers such as AT&T, Verizon, and Comcast Business.

Communications Platform as a Service—Solutions for Communications Problems

CPaaS companies develop software solutions to often-complex communications issues related to voice telephony. For example, Twilio, a communications software developer located in San Francisco, developed the communications software that ride-hailing provider Uber uses for messaging and calls between its drivers and riders. Twilio also developed the messaging software used by Facebook’s WhatsApp.

Most CPaaS solutions are located in the cloud. Genband, Mitel, SAP, and Vonage with its acquisition of Nexmo are other CPaaS providers.

Contact Centers—Efficiencies for Incoming and Outgoing Communications

Contact centers are groups of agents that handle incoming customer service calls, text messages, chat, and web queries, as well as outgoing sales, credit, and collections calls. Contact center software manages incoming and outgoing communications with agents, particularly when there are more calls, text messages, chat sessions, and e-mail than agents can respond to immediately. Importantly, contact centers provide statistics on the utilization of agents and telecommunications lines and on the numbers of e-mail, text, and chat sessions.

The main theory behind grouping agents into “pools” is that large groups of agents can handle more calls than the same number of agents organized into small groups, without overflow of calls between groups. This is analogous to the practice by the United States Post Office of using one long line for postal clerks rather than an individual line for each clerk. With one line for all postal workers, a clerk will be available more quickly from the pool and the same number of clerks will help more people within a given amount of time than by forming separate lines for each clerk.
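The pooling effect can be checked with the Erlang C queueing formula, which gives the probability that an arriving call must wait for an agent. The traffic figures below are hypothetical: two separate groups of 5 agents, each offered 4 erlangs of calls, versus one pooled group of 10 agents offered the combined 8 erlangs.

```python
from math import factorial

def erlang_c(c, a):
    """Probability an arriving call must wait, for c agents offered
    a erlangs of traffic (requires a < c for a stable queue)."""
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

split = erlang_c(5, 4)     # each small group of 5 agents, 4 erlangs
pooled = erlang_c(10, 8)   # one pool of 10 agents, 8 erlangs total

print(round(split, 2), round(pooled, 2))
# The single pool makes callers wait less often, with the same
# total number of agents handling the same total traffic.
assert pooled < split
```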

Contact centers route incoming traffic to the agent that has been idle the longest within the appropriate agent group, based on the telephone number dialed or by the customer’s account or telephone number. If all agents are busy, the contact center shunts the call to a queue, routes the call to an alternate group of agents, or gives the caller the option to leave a voicemail message. Sophisticated call centers additionally route calls to groups of multilingual staff or staff with knowledge about particular products or technologies.
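A sketch of the "longest idle agent" routing step within one agent group, using a heap ordered by the time each agent last became idle; the agent names and times are hypothetical, and queue handling is reduced to a single fallback:

```python
import heapq

def make_group(agents_with_idle_since):
    """Build a heap of (idle_since, agent); the smallest timestamp
    belongs to the agent who has been idle the longest."""
    heap = list(agents_with_idle_since)
    heapq.heapify(heap)
    return heap

def route_call(group, queue):
    """Give the call to the longest-idle agent, or queue it if no
    agent in the group is free."""
    if group:
        _, agent = heapq.heappop(group)
        return agent
    queue.append("caller")
    return None

# Times are hypothetical "minutes since midnight" when each agent went idle.
group = make_group([(940, "dana"), (905, "lee"), (920, "kim")])
queue = []

print(route_call(group, queue))  # → lee (idle since 905, the longest)
print(route_call(group, queue))  # → kim
```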

Callers to contact centers can recognize when they reach one if they hear a message such as the following:

Please hold, and the next available agent will take your call.

Virtual and Dispersed Contact Centers

Agents using the same contact center software can be located in other cities or countries. The software distributes calls based on criteria such as time of day or level of traffic. In addition, agents can work from home via their broadband connection and remotely log in to the contact center system, which then routes calls and messages to them. These are called virtual call centers. Many firms establish virtual call centers to save on real estate, furniture, and utility expenses.

Contact center systems are offered by almost all VoIP manufacturers as well as providers such as software developer Aspect, which offers it independently of the telephone system.

Contact Center Staffing—Do-it-Yourself Transactions

Contact centers are under increasing pressure to increase staff productivity, justify the number of agents hired, and improve the quality of customer service to attract and retain customers. Various companies in the United States and Europe outsource their centers to India, the Philippines, and other countries with lower labor rates, many people with multilingual abilities, and the availability of high-bandwidth networks. Alternatively, organizations may move contact centers to parts of their own country with lower salaries and office rents.

To respond to more customers without adding more staff in the existing center, companies are deploying technologies such as automatic and written responses to e-mail, speech recognition, web-based sales, online forums, and chats. The practice of customers completing transactions without speaking with an agent is referred to as self-service. Some companies operate their sales function almost entirely on the Web. And, of course, Amazon built its entire business model around the Web.

Many of these web-based businesses are well regarded for their customer service, even though they don’t actually involve any human contact. Moreover, many consumers prefer self-service on the Web over listening to long, automated menus and waiting in phone queues.

Contact Center Statistics

Reports on the real-time status of calls and electronic messages are the lifeblood of a contact center and are closely monitored by management. Management then uses these statistics to plan network design and analyze staffing requirements. Statistics are also used as an aid in improving web site design by alerting organizations to the most frequently asked questions. These issues can then be more fully clarified on the web site or via automated e-mail messages designed to explain common procedures for fixing or analyzing software issues.

Statistics are organized into reports that provide real-time and historical data on incoming and outgoing calls and agents’ usage on individual trunks so that managers will know if they have the correct number of broadband trunks. Reports also indicate the number of calls and web hits associated with each toll-free number and web advertisement so that response rates generated by ad campaigns can be analyzed. They additionally alert supervisors to unusually high or low call volumes so that staffing can be adjusted accordingly.

E-mail Response Management Software

E-mail response management software is another tool for increasing self-service. It can be used independently or as part of call center software. E-mail response management software installed on an organization’s server routes e-mail messages to agents by topic, subject, and content. If appropriate, the software can automatically respond to the e-mail. Natural language software for written language is capable of interpreting and extracting concepts in e-mail messages. It can link phrases in a meaningful way. E-mail response management systems also create reports based on the number of messages received, by message subject, date, and content. This provides management with information on which web sections need to be improved.

E-mail response management software pulls messages from managed mailboxes, such as those addressed to generic functions like [email protected], [email protected], and [email protected]. It routes them to groups of agents or responds with an automatic reply, based on the subject, message, and content.
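A minimal sketch of this kind of routing follows. The keyword rules, queue names, and auto-reply text are invented for illustration; production systems apply much richer natural language analysis than simple keyword matching.

```python
# Hypothetical routing rules: keyword -> agent group.
ROUTING_RULES = {
    "refund": "billing-agents",
    "invoice": "billing-agents",
    "password": "support-agents",
    "demo": "sales-agents",
}

# Topics that can be answered without an agent (self-service).
AUTO_REPLIES = {
    "password": "To reset your password, visit the account page.",
}

def route_email(subject, body):
    """Return (queue, auto_reply) for a message, matching keywords
    against the combined subject and body text."""
    text = (subject + " " + body).lower()
    for keyword, queue in ROUTING_RULES.items():
        if keyword in text:
            return queue, AUTO_REPLIES.get(keyword)
    return "general-agents", None  # no rule matched

queue, reply = route_email("Help with my password", "I am locked out")
```

Counting how often each rule fires also yields the per-subject report data described above.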

Automatic Links to Customer Relationship Management

Customer relationship management (CRM) is a customer-centric strategy that aims to make information such as customers’ transaction histories accessible across the entire organization. CRM can comprise many elements, including sales automation with automated proposals and brochures available to representatives. CRM also includes marketing tools that track the impact of direct mail, e-mail, and Internet-based campaigns. In addition, customer service with links to billing, payment status, purchase histories, open issues, and universal queues is considered part of CRM. Major providers of CRM software include Salesforce, SAP, and Oracle, which acquired Siebel Systems.

Telephone systems such as RingCentral automatically link agents handling incoming calls to the CRM information about callers so that agents have ordering and other information needed to answer and resolve customers’ issues more quickly. The caller ID containing the calling party’s telephone number automatically links call center agents to the appropriate CRM data.
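A simplified sketch of this "screen pop" lookup follows, assuming a hypothetical CRM store keyed by the caller's phone number; the record fields are invented for illustration.

```python
# Hypothetical CRM records keyed by the calling party's number.
CRM = {
    "+16175550123": {"name": "Acme Corp",
                     "open_issues": ["order #1832 delayed"]},
}

def screen_pop(caller_id):
    """Look up the calling party's CRM record so the agent sees
    account history as the call is answered."""
    record = CRM.get(caller_id)
    if record is None:
        # Unknown number: the agent gets a blank record to fill in.
        return {"name": "Unknown caller", "open_issues": []}
    return record

pop = screen_pop("+16175550123")
```

In practice the phone system passes the caller ID to the CRM application over an integration API, but the lookup itself is essentially this dictionary access.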

Voice Response Units—Routing and Accessing Information via Touch-Tone or Speech

Voice Response Units (VRUs) provide information to callers based on touch-tone signals or spoken commands. Customers can call their bank or credit card company to ascertain their balance or learn if a payment has been received. They enter their account number and PIN when prompted by an electronic voice, and the VRU speaks the requested information back to them.

Companies justify the expense of speech recognition and VRUs because fewer callers need to speak with live agents, which reduces staffing requirements.

The following are examples of Voice Response Unit applications:

  • Newspaper subscribers can use them to stop and start delivery before and after vacations, or to report missed deliveries.

  • Airlines use them to provide flight information.

  • Prescription drug benefit programs enable people to order refills.

  • Companies use VRUs as adjuncts to, or in some cases substitutes for, company operators. The voice response unit is programmed to answer and route:

    – All calls

    – Calls to particular telephone extensions or departments

    – After-hours calls

  • Here is a classic example:

    Thank you for calling ABC Company. If you know your party’s extension, you may dial it now. For sales, press 1. For customer service, press 2.
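A menu like the classic script above can be sketched as a small routing table. The queue names and the four-digit extension convention are assumptions for illustration.

```python
# Menu choices from the sample script; queue names are invented.
MENU = {"1": "sales-queue", "2": "customer-service-queue"}

def route_digits(digits):
    """Route a caller by the digits keyed: 0 reaches the operator,
    a menu choice goes to its queue, four digits ring an extension."""
    if digits == "0":
        return "operator"
    if digits in MENU:
        return MENU[digits]
    if len(digits) == 4 and digits.isdigit():
        return f"extension-{digits}"   # "dial your party's extension"
    return "replay-menu"  # unrecognized input replays the menu
```

Note that the first branch always gives callers a path to a live operator, a design choice that avoids a common source of caller frustration.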

Confusing, poorly written, and overly long scripts tend to be a source of frustration for callers. Another is the inability, in some organizations, to reach a live operator. In some instances, pressing 0 does nothing except trigger a replay of the initial confusing menu.

Using Speech Recognition to Expand Self-Service

Many call centers add speech recognition to their interactive voice response (IVR) platforms to make them more user-friendly and faster for callers to navigate. Toll-free directory services ask callers to speak the name of the company for which they require a toll-free number, without operator assistance. Local telephone companies use speech recognition so that customers can easily obtain billing information without speaking with a billing representative. Making speech recognition and IVR user friendly is an important factor in ensuring callers’ acceptance of the service.

How Speech Recognition Works

Speech recognition works by first detecting and then capturing parts of spoken words (utterances). After removing background noise, it converts the captured utterances to a digital representation. Capturing the speech and digitally representing it is done by digital signal processors (DSPs), high-speed specialized computer chips. The speech recognition software then breaks the sounds into small chunks and compares various properties of the chunks of sound to large amounts of previously captured data contained in a database.
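As a toy illustration of that compare step, the sketch below frames a digitized utterance, computes a crude per-frame energy feature, and picks the closest stored template. Real recognizers use far richer acoustic features and statistical models; the frame size, templates, and sample values here are invented.

```python
def features(samples, frame_size=4):
    """Break digitized samples into chunks and compute a simple
    average-energy feature for each chunk."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [sum(s * s for s in f) / len(f) for f in frames]

def distance(a, b):
    """Squared distance between two feature sequences."""
    n = min(len(a), len(b))
    return sum((a[i] - b[i]) ** 2 for i in range(n))

def recognize(utterance, templates):
    """Return the word whose stored template is closest to the
    utterance's features (the database-comparison step)."""
    feats = features(utterance)
    return min(templates,
               key=lambda word: distance(feats, features(templates[word])))

# Invented "previously captured" sample data for two words.
TEMPLATES = {"yes": [1, 2, 3, 2, 1, 0, 0, 0],
             "no":  [0, 0, 1, 3, 3, 1, 0, 0]}
word = recognize([1, 2, 3, 2, 1, 0, 0, 1], TEMPLATES)
```

The DSP stage corresponds to producing the `samples` list; everything after that is the software comparison the paragraph describes.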

Speech recognition systems can be either speaker dependent or speaker independent. Systems such as those used by Amtrak, Apple, and the United States toll-free directory are speaker independent: they recognize words spoken by large numbers of people. Originally, speaker independent systems recognized only a limited number of words, such as yes, no, and the numeric digits. However, improved recognition algorithms and faster computer chips now enable speaker independent systems to recognize large vocabularies.

Speaker dependent systems include Nuance’s speech recognition software for consumers. Nuance also offers speech recognition software for specialized professions that use specific vocabularies. This includes radiologists who use speech recognition to dictate reports on results of medical imaging exams such as x-rays and MRIs. Speaker dependent software needs to be “trained” to recognize a particular person’s commands. This is accomplished by having the purchaser of the speech recognition software read a few passages of text until the software recognizes the person’s speech pattern.

Natural language speech recognition systems are those able to recognize speech in which users don’t use specific, predefined commands such as “copy the phrase hello Mary” or “cut yours truly,” as found in desktop speech recognition applications. For example, natural language systems recognize “Turn on the alarm” as well as “Set the alarm.” Digital assistant products such as Amazon’s Alexa are capable of recognizing natural language commands. These systems are speaker independent.

High-speed computer processors perform the digitization and comparisons in milliseconds. They also take into account gender differences and regional accents. Speech recognition software contains different databases of expected responses based on the application. A corporate directory has a different speech database than one for airline scheduling or lost-luggage applications.

Until recently, Nuance had a near monopoly on speech recognition software. Currently, however, machine learning and computer chips with larger memories and faster transistors have led research universities and large companies, including Microsoft and IBM, to develop speech recognition. One application is wearable devices such as Hoboken, NJ-based Essence Group’s emergency alert devices, which enable an incapacitated elderly person to call for help by voice command without using his or her hands.

Speech recognition is integrated in:

  • Home digital assistants

    – Amazon’s Echo and Dot

    – Alphabet’s Google Home

    – Microsoft’s Cortana

    – Apple’s Siri

    – Baidu’s Raven

    – Samsung’s Viv

  • Set-top boxes

    – Apple TV

    – Roku

    – Comcast X1

Speech Recognition in Home Automation Devices

A digital assistant is essentially a computerized home assistant that supplies information in response to users’ spoken questions and requests. Digital assistants are made possible by natural language speech recognition, high-speed computer networks, and machine learning. (See Chapter 1 for “Machine Learning.”)

The large amounts of speech and information captured by search engines and computers are key to the development of digital assistants. For example, Google has troves of data on commonly submitted search requests. In addition, as digital assistants’ databases of known utterances grow, the systems’ speech recognition accuracy will continue to improve.

Digital assistants perform data look-ups using owners’ Wi-Fi networks and broadband links. The look-ups are high-speed data connections to the digital assistant manufacturer’s data centers and its partners’ computers. These computers send back responses to questions such as “Alexa, what is the weather today?” The following is a sampling of requests that digital assistants can respond to:

  • What is the time?

  • What is the time in China?

  • Add eggs to my shopping list.

  • What is the solution to this math problem?

  • How many miles is Australia from New York?

  • Play relaxing music.

  • Play music by Mozart.

  • Connect me to Spotify.

  • Set the alarm for 6:30 AM.
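Loosely, an assistant's handling of requests like these can be sketched as patterns mapped to handlers. The regular expressions and canned replies below are invented for illustration and bear no relation to any vendor's implementation; real assistants use machine-learned language understanding rather than hand-written rules.

```python
import re
from datetime import datetime

# Hypothetical intent table: a pattern over the normalized utterance
# mapped to a handler that produces the spoken reply.
INTENTS = [
    (re.compile(r"\b(set|turn on)\b.*\balarm\b(?:.*?(\d{1,2}:\d{2}))?$"),
     lambda m: f"Alarm set for {m.group(2)}" if m.group(2) else "Alarm set"),
    (re.compile(r"\bwhat is the time\b"),
     lambda m: datetime.now().strftime("The time is %H:%M")),
    (re.compile(r"\bplay\b.*\bmusic\b"),
     lambda m: "Playing music"),
]

def respond(utterance):
    """Run the first handler whose pattern matches. Note that both
    'Turn on the alarm' and 'Set the alarm' reach the same intent."""
    text = utterance.lower().strip()
    for pattern, handler in INTENTS:
        match = pattern.search(text)
        if match:
            return handler(match)
    return "Sorry, I don't know how to help with that."

reply = respond("Set the alarm for 6:30")
```

The manufacturer's data-center computers perform the equivalent matching against vastly larger models, then send the reply back over the owner's broadband link.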

Digital Assistant Software Integrated in Other Devices

Digital assistant software is integrated into smartphones, watches, and computers. An app on the smartphone is required for these integrations. For example, Amazon’s Alexa digital assistant software is integrated into cars, including Ford and Tesla automobiles. There are also agreements to include Amazon’s Alexa software in refrigerators, ovens (“Alexa, turn on the oven to 350 degrees”; “Alexa, heat water for coffee”), thermostats, and lights. All of these integrations are ways for humans to interact with machines using speech as an input technology.

Privacy with Digital Assistants

Privacy is residential customers’ most common worry about home automation devices. According to 2016 research by Parks Associates, a U.S. research and analysis firm covering Internet of Things (IoT), smart home, and connected entertainment technologies, half of consumers stated that privacy is their most pressing concern about home automation devices. Examples of wireless IoT products are software-controlled drones, monitors, and lights.

The term privacy denotes people’s ability to control who can see information about them. There have been fears that Amazon’s digital assistants (Echo and Dot) store customers’ speech. Amazon has stated that it saves only commands directed to its digital assistants, not other random speech spoken in the same room. It has further stated that each user is associated with a randomly selected code and that Amazon does not know which user corresponds to any code linked to customers’ speech.

Possible Long-Term Impact of Digital Assistants

In addition to privacy, concerns about digital assistants revolve around the depersonalization that could result from less need for communication between people. As people increasingly turn to computers for information, will they have less need to speak to each other?

In addition to privacy and depersonalization, there are worries about a possible decrease in children’s ability to do math, because “asking Alexa” provides children with solutions to even complex math problems. It is still too early in the availability of digital assistants to know for certain what their impact will be. In the early days of calculators, many thought children would not learn basic math skills. Each new innovation brings concerns, and the answers are initially unknown.

Appendix

Table 2-2 LAN Protocols, Devices, and Terms

Backbone: The wiring running from floor to floor in single buildings and from building to building within a campus. A backbone connects switches in different wiring closets to one another. Backbones support high concentrations of traffic in carrier and enterprise networks.

Blade server: Computers packaged on individual boards, called blades, that plug into slots in a chassis. A chassis is the metal frame on which components fit, similar to a custom cabinet with slots for specific items. In high-density chassis, blades are arranged vertically; in low-density chassis holding three or four blade servers, they are arranged horizontally. Chassis are placed on racks in data centers. Vertical arrangements conserve space and allow blades to share power supplies, fans, and cabling. In addition, blades can be easily added.

File server: A specialized computer with a large hard drive. File servers provide users with access to documents and applications in central locations on LANs.

Load balancing: The capability of equipment to balance traffic between networks and devices so that one network or device is not overloaded while others carry little or no traffic.

Local Area Network (LAN): A group of devices such as computers, printers, and scanners that can communicate with one another within a limited geographic area, such as a floor, department, or small cluster of buildings.

Layer 2 switch (also called a switching hub): A switch located in a wiring closet that allows multiple simultaneous transmissions within a single LAN. Layer 2 switches provide a dedicated connection during an entire transmission.

Layer 3 switch (also known as a routing switch): A switch that routes traffic across the LAN backbone based on IP (network) addresses. Layer 3 switches are more complex to manage than Layer 2 switches, but they can use alternate paths if one path is out of service. They are located in data centers and link wiring closets and buildings within a campus.

Layer 4 switch (also known as a content switch): A switch located at hosting sites and at corporate and government sites that host their own web pages. Layer 4 switches connect web traffic to the desired web pages by looking at the URL, the web address from which each packet was transferred to the site.

Router: Routers carry traffic between LANs, from enterprises to the Internet, and across the Internet. They are more complex than switches because they maintain routing tables with addresses and perform other functions. Routers select the best available path over which to send data.

Server: A centrally located computer holding common departmental or organizational files such as personnel records, e-mails, sales data, price lists, student information, and medical records. Servers are connected to Layer 2 or Layer 3 switches. Access to servers can be restricted to authorized users only.

Virtual Local Area Network (VLAN): A group of devices, usually personal computers or VoIP devices, whose addresses are programmed as a group in Layer 2 switches. This segregates them from the rest of the corporate network so that all devices in the same VLAN can be given a higher priority or level of security. They are programmed as a separate LAN but are physically connected to the same switch as other devices.

Wide Area Network (WAN): A group of data devices, usually LANs, that communicate with one another over long distances, such as between cities.

Table 2-3 Protocols and VoIP Terms

802.1p/Q: Used to tag certain Virtual LAN traffic to indicate that it is part of a special group. It might tag voice packets to segregate them for special treatment and monitoring. It also contains bits that identify each packet’s priority level.

CoS: Class of Service provides priority to particular types of traffic. Voice or video can be designated with a higher priority than voicemail.

DoS: Denial-of-Service attacks occur when hackers attempt to disrupt communications by bombarding endpoints or proxies with packets.

G.711: Encodes voice signals at 64,000 bits per second, plus a 6- to 21-kilobit header, for VoIP services. It produces good voice quality but uses more network capacity than compression techniques such as G.729. The technique requires 60 milliseconds to process and “look ahead” (examine upcoming speech samples).

G.723.1: A compression protocol that uses small packets with 6.3Kbps compression. Small packets use bandwidth more efficiently than large ones. With the header, total bandwidth is about 16Kbps.

G.729: A voice compression standard used in VoIP. It compresses voice signals from 64,000 bits per second to 8,000 bits per second. The header brings the total bandwidth to about 18,000 bits per second.

H.323: A family of signaling standards for multimedia transmissions over packet networks adopted by the International Telecommunication Union (ITU). Microsoft and Intel adopted the standard in 1996 for sending voice over packet networks. H.323 includes standards for compressing calls and for signaling. It has higher overhead than the newer signaling protocol, SIP.

Presence theft: The impersonation of a legitimate IP telephone or other device by a hacker.

Proxy server: Screens communications between endpoints to verify that the called and calling parties are who they say they are and that no virus will be sent. A proxy server might sit between a VoIP server and external devices that request an audio conference or videoconference session; this is referred to as intermediating sessions. Proxy servers are also used between IP telephones and the Internet.

QoS: Quality of Service guarantees a particular level of service. To meet these guarantees, service providers or IT staff members allocate bandwidth for certain types of traffic.

RTP: Real-time Transport Protocol is an Internet Engineering Task Force (IETF) standardized protocol for transmitting multimedia in IP networks. RTP carries the “bearer” channels, the actual voice, video, and image content, while SIP is commonly used for the signaling that sets up and tears down sessions.

SCCP: Skinny Client Control Protocol is a Cisco proprietary signaling protocol for sending signals between devices in Cisco telephone systems. It is also referred to as “Cisco Skinny.”

SIP: Session Initiation Protocol establishes sessions over IP networks, such as those for telephone calls, audio conferencing, click-to-dial from the Web, and instant messaging exchanges between devices. It is also used to link IP telephones from different manufacturers to SIP-compatible IP telephone systems. It is used in landline and mobile networks.
