CHAPTER 2
Information Systems, IT Architecture, Data Governance, and Cloud Computing

Introduction

One of the most popular business strategies for achieving success is the development of a competitive advantage. Competitive advantage exists when a company has resources and capabilities superior to those of its competitors that allow it to achieve either a lower cost structure or a differentiated product. For long-term business success, companies strive to develop sustainable competitive advantages, or competitive advantages that cannot be easily copied by the competition (Porter, 1998). To stay ahead, corporate leaders must constantly seek new ways to grow their business in the face of rapid technology changes, increasingly empowered consumers and employees, and ongoing changes in government regulation. Effective ways to thrive over the long term are to launch new business models and strategies or devise new ways to outperform competitors. Because these new business models, strategies, and performance capabilities will frequently be the result of advances in technology, the company’s ability to leverage technological innovation over time will depend on its approach to enterprise IT architecture, information management, and data governance. The enterprisewide IT architecture, or simply the enterprise architecture (EA), guides the evolution, expansion, and integration of information systems (ISs), digital technology, and business processes. This guidance enables companies to more effectively leverage their IT capability to achieve maximum competitive advantage and growth over the long term. Information management guides the acquisition, custodianship, and distribution of corporate data and involves the management of data systems, technology, processes, and corporate strategy. Data governance, or information governance, controls enterprise data through formal policies and procedures. One goal of data governance is to provide employees and business partners with high-quality data they can trust and access on demand.

Bad decisions can result from the analysis of inaccurate data, widely referred to as dirty data, and can lead to increased costs, decreased revenue, and legal, reputational, and performance-related consequences. For example, if advertising is directed at the wrong location or the wrong audience, the data collected and analyzed from that campaign will be skewed and the marketing campaign itself will be ineffective. Companies must then undertake costly repairs to their datasets to correct the problems caused by dirty data, resulting in lower customer satisfaction and wasted resources. One example of an organization taking strides to clean up the dirty data collected through inaccurate marketing is the data management platform MEDIATA, which runs bidding systems and ad location services for firms looking to run ads on websites (see Table 2.1). Let’s see how they did this.

TABLE 2.1 Opening Case Overview

Company MEDIATA was launched as Valued Interactive Media (VIM) in 2009. Rebranded in 2013 as MEDIATA
Industry Communications; Advertising
Product Lines Wide range of programmatic solutions and products to provide practical solutions for digital marketing campaigns to deliver successful online advertising campaigns to organizations across Australia, Hong Kong, and New Zealand
Digital Technology Information management and data governance to increase trust and accessibility of data to facilitate a company’s vision
Business Vision Shake up the online advertising industry. Improve transparency and foster greater cooperation between partners
Website www.mediataplatform.com

2.1 IS Concepts and Classification

Before we begin to explore the value of information systems (ISs) to an organization, it’s useful to understand what an IS is, what it does, and what types of ISs are typically found at different levels of an organization.

In addition to supporting decision-making, coordination, and control in an organization, ISs also help managers and workers analyze problems, visualize complex sets of data, and create new products. ISs collect data (input), manipulate them (process), and generate and distribute reports (output); based on those data, specific IT services, such as processing customer orders and generating payroll, are delivered to the organization. Finally, the ISs save (store) the data for future use. In addition to the four functions of IPOS, an information system needs feedback from its users and other stakeholders to help improve future systems, as demonstrated in Figure 2.2.

Illustration of IPOS cycle.

FIGURE 2.2 IPOS cycle.

The following example demonstrates how the components of the IPOS work together: To access a website, Amanda opens an Internet browser using the keyboard and enters a Web address into the browser (input). The system then uses that information to find the correct website (processing) and the content of the desired site is displayed in the Web browser (output). Next, Amanda bookmarks the desired website in the Web browser for future use (storage). The system then records the time that it took to produce the output to compare actual versus expected performance (feedback).
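To make the cycle concrete in code, here is a minimal Python sketch of the same browsing scenario; the function name, the simulated page content, and the timing feedback are illustrative assumptions, not part of any real browser.

```python
import time

stored_bookmarks = []              # storage: data kept for future use

def ipos_cycle(web_address):
    start = time.time()

    # Input: capture raw data typed by the user
    raw_address = web_address.strip()

    # Process: turn the input into something meaningful (simulated here)
    page_content = f"<html>Content fetched for {raw_address}</html>"

    # Output: deliver the result to the user
    print(page_content)

    # Storage: save the bookmark for future use
    stored_bookmarks.append(raw_address)

    # Feedback: compare actual versus expected performance
    elapsed = time.time() - start
    print(f"Feedback: output produced in {elapsed:.4f} seconds")

ipos_cycle("www.example.com")
```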

Components of an IS

A computerized IS consists of six interacting components. Regardless of type and where and by whom they are used within an organization, the components of an IS must be carefully managed to provide maximum benefit to an organization (see Figure 2.3).

Illustration of Components of an IS.

FIGURE 2.3 Components of an IS.

  1. Hardware Any physical device used in a computerized IS. Examples include central processing unit (CPU), sound card, video card, network card, hard drive, display, keyboard, motherboard, processor, power supply, modem, mouse, and printer.
  2. Software A set of machine-readable instructions (code) that makes up a computer application that directs a computer’s processor to perform specific operations. Computer software is nontangible, contrasted with system hardware, which is the physical component of an IS. Examples include Internet browser, operating system (OS), Microsoft Office, Skype, and so on.
  3. People Any person involved in using an IS. Examples include programmers, operators, help desk staff, and end-users.
  4. Procedures Documentation containing directions on how to use the other components of an IS. Examples include operational manual and user manual.
  5. Network A combination of lines, wires, and physical devices connected to each other to create a telecommunications network. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media. Networks can be internal or external. If they are available only internally within an organization, they are called “intranets.” If they are extended to authorized users outside the organization, such as business partners, they are called “extranets.” The best-known example of a computer network is the Internet.
  6. Data Raw or unorganized facts and figures (such as invoices, orders, payments, customer details, product numbers, product prices) that describe conditions, ideas, or objects.

Data, Information, Knowledge, and Wisdom

As you can see in Figure 2.3, data is the central component of any information system. Without data, an IS would have no purpose and companies would be unable to conduct business. Generally speaking, ISs process data into meaningful information that produces corporate knowledge and ultimately creates wisdom that fuels corporate strategy.

Data are the raw material from which information is produced; the quality, reliability, and integrity of the data must be maintained for the information to be useful. Data are the raw facts and figures that are not organized in any way. Examples are the number of hours an employee worked in a certain week or the number of new Ford vehicles sold from the first quarter (Q1) of 2015 through the second quarter (Q2) of 2017 (Figure 2.4).

Illustration of Examples of data, information, knowledge, and wisdom.

FIGURE 2.4 Examples of data, information, knowledge, and wisdom.

Information is an organization’s most important asset, second only to people. Information provides the “who,” “what,” “where,” and “when” of data in a given context. For example, summarizing the quarterly sales of new Ford vehicles from Q1 2015 through Q2 2017 provides information that shows sales have steadily decreased from Q2 2016.

Knowledge is used to answer the question “how.” In our example, it would involve determining how the trend can be reversed, for example, by improving customer satisfaction, adding new features, or adjusting pricing.

Wisdom is more abstract than data and information (that can be harnessed) and knowledge (that can be shared). Wisdom adds value and increases effectiveness. It answers the “why” in a given situation. In the Ford example, wisdom would be corporate strategists evaluating the various reasons for the sales drop, creatively analyzing the situation as a whole, and developing innovative policies and procedures to reverse the recent downward trend in new vehicle sales.
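The data-to-information step can be illustrated with a short Python sketch. The quarterly figures below are invented for illustration (they are not actual Ford sales data); the point is that the raw numbers are data, the summarized trend is information, and deciding how and why to respond remains the human work of knowledge and wisdom.

```python
# Raw data: hypothetical quarterly unit sales (invented for illustration)
quarterly_sales = {
    "Q1 2016": 665_000, "Q2 2016": 670_000, "Q3 2016": 650_000,
    "Q4 2016": 630_000, "Q1 2017": 615_000, "Q2 2017": 600_000,
}

# Information: organize the raw figures to answer "what happened, and when?"
quarters = list(quarterly_sales)
peak = max(quarterly_sales, key=quarterly_sales.get)
latest = quarters[-1]
drop = quarterly_sales[peak] - quarterly_sales[latest]

print(f"Sales peaked in {peak} at {quarterly_sales[peak]:,} units.")
print(f"By {latest} they had fallen by {drop:,} units.")

# Knowledge and wisdom remain human steps: deciding HOW to reverse the
# trend (pricing, features, satisfaction) and WHY it happened at all.
```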

ISs collect or input and process data to create and distribute reports or other outputs based on information gleaned from the raw data to support decision-making and business processes that, in turn, produce corporate knowledge that can be stored for future use. Figure 2.5 shows the input-processing-output-storage (IPOS) cycle.

Illustration of Input-processing-output-storage model.

FIGURE 2.5 Input-processing-output-storage model.

Types of ISs

An IS may be as simple as a single computer and a printer used by one person, or as complex as several thousand computers of various types (tablets, desktops, laptops, mainframes) with hundreds of printers, scanners, and other devices connected through an elaborate network used by thousands of geographically dispersed employees. Functional ISs that support business analysts and other departmental employees range from simple to complex, depending on the type of employees supported. The following examples show the support that IT provides to major functional areas.

  1. Marketing Utilizing IBM software, Bolsa de Comercio de Santiago, a large stock exchange in Chile, is able to process its ever-increasing, high-volume trading in microseconds. The Chilean stock exchange system can do the detective work of analyzing current and past transactions and market information, learning, and adapting to market trends and connecting its traders to business information in real time. Immediate throughput in combination with analytics allows traders to make more accurate decisions.
  2. Sales According to the New England Journal of Medicine, one in five patients suffers from preventable readmissions, which cost taxpayers over $17 billion a year. In the past, hospitals have been penalized for high readmission rates with cuts to the payments they receive from the government (Zuckerman et al., 2016). Using effective management information systems (MISs), the health-care industry can leverage unstructured information in ways not possible before, according to Matt McClelland, manager of information governance for Blue Cross Blue Shield of North Carolina. “With proper support, information governance can bridge the gaps among the need to address regulation and litigation risk, the need to generate increased sales and revenue, and the need to cut costs and become more efficient. When done right, information governance positively impacts every facet of the business,” McClelland said in the Information Governance Initiative (Jarousse, 2016).

Figure 2.6 illustrates the classification of the different types of ISs used in organizations, the typical level of workers who use them and the types of input/output (I/O) produced by each of the ISs. At the operational level of the organization, line workers use transaction processing systems (TPSs) to capture raw data and pass it along (output) to middle managers. The raw data is then input into office automation (OA) and MISs by middle managers to produce information for use by senior managers. Next, information is input into decision support systems (DSSs) for processing into explicit knowledge that will be used by senior managers to direct current corporate strategy. Finally, corporate executives input the explicit knowledge provided by the DSSs into executive information systems (EISs) and apply their experience, expertise, and skills to create wisdom that will lead to new corporate strategies.

Scheme for Hierarchy of ISs, input/output, and user levels.

FIGURE 2.6 Hierarchy of ISs, input/output, and user levels.

Transaction Processing System (TPS)

A TPS is designed to process specific types of data input from ongoing transactions. TPSs can be manual, as when data are typed into a form on a screen, or automated by using scanners or sensors to capture barcodes or other data (Figure 2.7). TPSs are usually operated directly by frontline workers and provide the key data required to support the management of operations.

Photo illustration of Scanners automating the input of data into a transaction processing system.

FIGURE 2.7 Scanners automate the input of data into a transaction processing system (TPS).

A TPS processes organizational data such as sales orders, reservations, stock control, and payments generated by the payroll, accounting, finance, marketing, purchasing, inventory control, and other functional departments. The data are usually obtained through the automated or semiautomated tracking of low-level activities and basic transactions. Transactions are either:

  • internal transactions that originate within the organization or that occur within the organization, for example, payroll, purchases, budget transfers, and payments (in accounting terms, they are referred to as accounts payable); or
  • external transactions that originate from outside the organization, for example, from customers, suppliers, regulators, distributors, and financing institutions.

TPSs are essential systems. Transactions that are not captured can result in lost sales, dissatisfied customers, unrecorded payments, and many other types of data errors with financial impacts. For example, if the accounting department issued a check to pay an invoice (bill) and it was cashed by the recipient, but information about that transaction was not captured, then two things happen. First, the amount of cash listed on the company’s financial statements is incorrect because no deduction was made for the amount of the check. Second, the accounts payable (A/P) system will continue to show the invoice as unpaid, so the accounting department might pay it a second time. Likewise, if services are provided, but the transactions are not recorded, the company will not bill for them and thus lose service revenue.

Batch versus Online Real-Time Processing

Data captured by a TPS are processed and stored in a database; they then become available for use by other systems. Processing of transactions is done in one of two modes:

  1. Batch processing A TPS in batch processing mode collects all transactions for a day, shift, or other time period, and then processes the data and updates the data stores. Payroll processing done weekly or bi-weekly is an example of batch mode.
  2. Online transaction processing (OLTP) or real-time processing The TPS processes each transaction as it occurs, which is what is meant by the term real-time processing. In order for OLTP to occur, the input device or website must be directly linked via a network to the TPS. Airlines need to process flight reservations in real time to verify that seats are available.

Batch processing costs less than real-time processing. A disadvantage is that the data are not current because they are not updated immediately, in real time.
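The difference between the two modes can be sketched in a few lines of Python. This is a toy illustration, with an in-memory balance standing in for the database and transaction log a real TPS would use.

```python
# Toy illustration of the two TPS processing modes; a real TPS would use a
# database and transaction log rather than in-memory variables.

pending_batch = []
account_balance = 1_000.00

def record_for_batch(amount):
    """Batch mode: just collect the transaction; process it later."""
    pending_batch.append(amount)

def run_nightly_batch():
    """Process everything collected during the period in one pass."""
    global account_balance
    for amount in pending_batch:
        account_balance += amount
    pending_batch.clear()

def process_realtime(amount):
    """OLTP / real-time mode: update the data store immediately."""
    global account_balance
    account_balance += amount

record_for_batch(-50.00)      # balance not yet updated
record_for_batch(200.00)
run_nightly_batch()           # balance updated only now
process_realtime(-25.00)      # balance updated immediately
print(f"Balance: {account_balance:.2f}")
```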

Processing Impacts Data Quality

As data are collected or captured, they are validated to detect and correct obvious errors and omissions. For example, when a customer sets up an account with a financial services firm or retailer, the TPS validates that the address, city, and postal code provided are consistent with one another and also that they match the credit card holder’s address, city, and postal code. If the form is not complete or errors are detected, the customer is required to make the corrections before the data are processed any further.
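Below is a hedged sketch of the kind of consistency check described above. The postal-code lookup table, field names, and validation rules are invented for illustration; a production TPS would validate against live reference data and the card issuer’s records.

```python
# Illustrative validation of a new customer account; the postal-code lookup
# table and field names are invented for this example.
CITY_BY_POSTAL_CODE = {"10001": "New York", "60601": "Chicago"}

def validate_account(form):
    errors = []
    # Completeness check: every required field must be present
    for field in ("name", "address", "city", "postal_code"):
        if not form.get(field):
            errors.append(f"{field} is required")
    # Consistency check: city must match the postal code on file
    expected_city = CITY_BY_POSTAL_CODE.get(form.get("postal_code", ""))
    if expected_city and expected_city != form.get("city"):
        errors.append("city and postal code are inconsistent")
    return errors   # an empty list means the data can be processed further

print(validate_account({"name": "A. Lee", "address": "5 Main St",
                        "city": "Boston", "postal_code": "10001"}))
# -> ['city and postal code are inconsistent']
```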

Data errors detected later may be time-consuming to correct or cause other problems. You can better understand the difficulty of detecting and correcting errors by considering identity theft. Victims of identity theft face enormous challenges and frustration trying to correct data about them.

Management Information System (MIS)

An MIS is built on the data provided by TPSs. MISs are management-level systems that are used by middle managers to help ensure the smooth running of an organization in the short to medium term. The highly structured information provided by these systems allows managers to evaluate an organization’s performance by comparing current with previous outputs. Functional areas or departments―accounting, finance, production/operations, marketing and sales, human resources, and engineering and design―are supported by ISs designed for their particular reporting needs. General-purpose reporting systems are referred to as management information systems (MISs). Their objective is to provide reports to managers for tracking operations, monitoring, and control.

Typically, a functional system provides reports about such topics as operational efficiency, effectiveness, and productivity by extracting information from databases and processing it according to the needs of the user. Types of reports include the following:

  • Periodic These reports are created or run according to a pre-set schedule. Examples are daily, weekly, and quarterly. Reports are easily distributed via e-mail, blogs, internal websites (called intranets), or other electronic media. Periodic reports are also easily ignored if workers do not find them worth the time to review.
  • Exception Exception reports are generated only when something is outside the norm, either higher or lower than expected. Sales in hardware stores prior to a hurricane may be much higher than the norm. Or sales of fresh produce may drop during a food contamination crisis. Exception reports are more likely to be read because workers know that some unusual event or deviation has occurred (see the sketch after this list).
  • Ad hoc, or on demand Ad hoc reports are unplanned reports. They are generated to a mobile device or computer on demand as needed. They are generated on request to learn more about a situation, problem, or opportunity.
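As a sketch of the exception report described in this list, the snippet below flags only the products whose weekly sales fall outside an expected range; the thresholds and sales figures are invented for illustration.

```python
# Illustrative exception report: list only items outside the expected range.
expected_range = (80, 120)          # normal weekly unit sales (assumed)
weekly_sales = {"generators": 410, "flashlights": 95,
                "batteries": 310, "lawn chairs": 12}

exceptions = {item: units for item, units in weekly_sales.items()
              if not expected_range[0] <= units <= expected_range[1]}

for item, units in exceptions.items():
    print(f"EXCEPTION: {item} sold {units} units (expected {expected_range})")
```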

Reports typically include interactive data visualizations, such as column and pie charts, as shown in Figure 2.8.

Scheme for a report produced by an MIS.

FIGURE 2.8 Sample report produced by an MIS.

Decision Support System (DSS)

A DSS is a knowledge-based system used by senior managers to facilitate the creation of knowledge and allow its integration into the organization. More specifically, a DSS is an interactive application that supports decision-making by manipulating and building upon the information from an MIS and/or a TPS to generate insights and new information.

Configurations of a DSS range from relatively simple applications that support a single user to complex enterprisewide systems. A DSS can support the analysis and solution of a specific problem, evaluate a strategic opportunity, or support ongoing operations. These systems support unstructured and semistructured decisions, such as make-or-buy-or-outsource decisions, or what products to develop and introduce into existing markets.

Degree of Structure of Decisions

Decisions range from structured to unstructured. Structured decisions are those that have a well-defined method for solving and the data necessary to reach a sound decision. An example of a structured decision is determining whether an applicant qualifies for an auto loan, or whether to extend credit to a new customer―and the terms of those financing options. Structured decisions are relatively straightforward and made on a regular basis, and an IS can ensure that they are done consistently.

At the other end of the continuum are unstructured decisions that depend on human intelligence, knowledge, and/or experience―as well as data and models to solve. Examples include deciding which new products to develop or which new markets to enter. Semistructured decisions fall in the middle of the continuum. DSSs are best suited to support these types of decisions, but they are also used to support unstructured ones. To provide such support, DSSs have certain characteristics to support the decision-maker and the overall decision-making process.

The main characteristic that distinguishes a DSS from an MIS is the inclusion of models. Decision-makers can manipulate models to conduct experiments and sensitivity analyses, for example, what-if and goal seeking. What-if analysis refers to changing assumptions or data in the model to observe the impacts of those changes on the outcome. For example, if sales forecasts are based on a 5% increase in customer demand, a what-if analysis would replace the 5% with higher and/or lower estimates to determine what would happen to sales if demand changed. With goal seeking, the decision-maker has a specific outcome in mind and needs to determine how that outcome could be achieved and whether it is feasible to achieve that desired outcome. A DSS can also estimate the risk of alternative strategies or actions.
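Both analyses can be demonstrated with a very small model. The revenue formula, base demand, and target below are invented assumptions; a real DSS model would be far richer, but the mechanics are the same.

```python
# Toy sales model: revenue = base demand x (1 + demand growth) x unit price.
BASE_DEMAND = 10_000          # units sold last period (illustrative)
UNIT_PRICE = 25.0

def forecast_revenue(demand_growth):
    return BASE_DEMAND * (1 + demand_growth) * UNIT_PRICE

# What-if analysis: vary the demand assumption and observe the outcome.
for growth in (0.03, 0.05, 0.08):
    print(f"If demand grows {growth:.0%}, revenue = {forecast_revenue(growth):,.0f}")

# Goal seeking: what growth rate is needed to reach a target revenue?
target = 280_000.0
required_growth = target / (BASE_DEMAND * UNIT_PRICE) - 1
print(f"To reach {target:,.0f}, demand must grow {required_growth:.1%}")
```

A commercial DSS automates this kind of search over much larger models, but the underlying what-if and goal-seeking logic is the same.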

California Pizza Kitchen (CPK) uses a DSS to support inventory decisions. CPK has over 200 locations in 32 U.S. states and 13 other countries, including 17 California Pizza Kitchen nontraditional, franchise concepts designed for airports, universities, and stadiums. Maintaining optimal inventory levels at all its restaurants was challenging and time-consuming. The original MIS was replaced by a DSS to make it easy for the chain’s managers to maintain updated records, generate reports as and when needed, and make corporate- and restaurant-level decisions. Many CPK restaurants reported a 5% increase in sales after the DSS was implemented.

Executive Information System (EIS)

EISs are strategic-level information systems that help executives and senior managers analyze the environment in which the organization exists. They typically are used to identify long-term trends and to plan appropriate courses of action. The information in such systems is often weakly structured and comes from both internal and external sources. EISs are designed to be operated directly by executives without the need for intermediaries and easily tailored to the preferences of the individual using them. An EIS organizes and presents data and information from both external data sources and internal MIS or TPS in an easy-to-use dashboard format to support and extend the inherent capabilities of senior executives.

Initially, EISs were custom-made for an individual executive. However, a number of off-the-shelf EIS packages now exist and some enterprise-level systems offer a customizable EIS module.

The characteristics of the various types of ISs are summarized in Table 2.2.

TABLE 2.2 Characteristics of Types of Information Systems

Type Characteristics
TPS
  • Used by operations personnel
  • Produce information for other ISs
  • Use internal and external data
  • Efficiency oriented
MIS
  • Used by lower and middle managers
  • Based on internal information
  • Support structured decisions
  • Inflexible
  • Lack analytical capabilities
  • Focus on past and present data
DSS
  • Used by senior managers
  • Support semistructured or unstructured decisions
  • Contain models or formulas that enable sensitivity analysis, what-if analysis, goal seeking, and risk analysis
  • Use internal and external data plus data added by the decision-maker, who may have insights relevant to the decision situation
  • Predict the future
EIS
  • Used by C-level managers
  • Easy-to-use, customizable interface
  • Support unstructured decisions
  • Use internal and external data sources
  • Focus on effectiveness of the organization
  • Very flexible
  • Focus on the future

Here’s an example of how these ISs are used together to add value in an organization. Day-to-day transaction data collected by the TPS are converted into prescheduled summarized reports by middle managers using an MIS. The findings in these reports are then analyzed by senior managers who use a DSS to support their semistructured or unstructured decision-making. DSSs contain models that consist of a set of formulas and functions, such as statistical, financial, optimization, and/or simulation models. Corporations, government agencies, the military, health care, medical research, major league sports, and nonprofits depend on their DSSs to answer what-if questions to help reduce waste in production operations, improve inventory management, support investment decisions, and predict demand and help sustain a competitive edge.

Customer data, sales, and other critical data produced by the DSS are then selected for further analysis, such as trend analysis or forecasting demand and are input into an EIS for use by top level management, who add their experience and expertise to make unstructured decisions that will affect the future of the business.

Figure 2.9 shows how the major types of ISs relate to one another and how data flow among them. In this example,

  1. Data from online purchases are captured and processed by the TPS and then stored in the transactional database.
  2. Data needed for reporting purposes are extracted from the database and used by the MIS to create periodic, ad hoc, or other types of reports.
  3. Data are output to a DSS where they are analyzed using formulas, financial ratios, or models.
Illustration of Flow of data from point of sale (POS) through processing, storage, reporting, decision support, and analysis. Also shows the relationships among different types of ISs.

FIGURE 2.9 Flow of data from point of sale (POS) through processing, storage, reporting, decision support, and analysis. Also shows the relationships among different types of ISs.

ISs Exist within Corporate Culture

It is important to remember that ISs do not exist in isolation. They have a purpose and a social (organizational) context. A common purpose is to provide a solution to a business problem. The social context of the system consists of the values and beliefs that determine what is admissible and possible within the culture of the organization and among the people involved. For example, a company may believe that superb customer service and on-time delivery are critical success factors. This belief system influences IT investments, among other factors.

The business value of IT is determined by the people who use them, the business processes they support, and the culture of the organization. That is, IS value is determined by the relationships among ISs, people, and business processes―all of which are influenced strongly by organizational culture.

In an organization, there may be a culture of distrust between the technology and business employees. No enterprise IT architecture methodology or data governance can bridge this divide unless there is a genuine commitment to change. That commitment must come from the highest level of the organization―senior management. Methodologies cannot solve people problems; they can only provide a framework in which those problems can be solved.

Concept Check 2.1

  1. An information system is a combination of ___________________ and _______________ using the technology to support business processes, operations, management, and decision-making at different levels of an organization.
a. Computers and software
b. Information technology and people’s activities
c. Hardware and software
d. People and networks
Correct or Incorrect?

 

  2. Which of the following are the six components of an information system?
a. Hardware, software, machines, people, data, printers
b. Software, hardware, networks, servers, information, people
c. Data, procedures, people, software, machines, networks
d. Hardware, software, people, procedures, network, data
Correct or Incorrect?

 

  3. A(n) __________ supports line workers who input the day‐to‐day transactions in a firm.
a. TPS
b. MIS
c. EIS
d. DSS
Correct or Incorrect?

 

  4. A(n) _____ is a strategic‐level information system that helps executives and senior managers analyze the environment in which the organization exists.
a. TPS
b. MIS
c. DSS
d. EIS
Correct or Incorrect?

 

2.2 IT Infrastructure, IT Architecture, and Enterprise Architecture

Every enterprise has a core set of ISs and business processes that execute the transactions that keep it in business. Transactions include processing orders, order fulfillment and delivery, purchasing inventory and supplies, hiring and paying employees, and paying bills. To most effectively utilize its IT assets, an organization must create an IT infrastructure, IT architecture, and an enterprise architecture (EA) as shown in Figure 2.10.

Illustration of Comparing IT infrastructure, IT architecture, and enterprise architecture.

FIGURE 2.10 Comparing IT infrastructure, IT architecture, and enterprise architecture.

IT infrastructure is an inventory of the physical IT devices that an organization owns and operates. The IT infrastructure describes an organization’s entire collection of hardware, software, networks, data centers, facilities, and other related equipment used to develop, test, operate, manage, and support IT services. It does NOT include the people or procedures components of an information system.

IT architecture guides the process of planning, acquiring, building, modifying, interfacing, and deploying IT resources in a single department within an organization. The IT architecture should offer a way to systematically identify technologies that work together to satisfy the needs of the departments’ users. The IT architecture is a blueprint for how future technology acquisitions and deployment will take place. It consists of standards, investment decisions, and product selections for hardware, software, and communications. The IT architecture is developed first and foremost based on department direction and business requirements.

Enterprise architecture (EA) reviews all the information systems across all departments in an organization to develop a strategy to organize and integrate the organization’s IT infrastructures to help it meet the current and future goals of the enterprise and maximize the value of technology to the organization. In this way, EA provides a holistic view of an organization with graphic and text descriptions of strategies, policies, information, ISs, and business processes and the relationships between them.

The EA adds value in an organization in that it can provide the basis for organizational change just as architectural plans guide a construction project. Since a poorly crafted enterprise architecture (EA) can also hinder day-to-day operations and efforts to execute business strategy, it is more important than ever before to carefully consider the EA within your organization when deciding on an approach to business, technology, and corporate strategy. Simply put, EA helps solve two critical challenges: where an organization is going, and how it will get there.

The success of EA is measured not only in financial terms, such as profitability and return on investment (ROI), but also in nonfinancial terms, for example, improved customer satisfaction, faster speed to market, and lower employee turnover as diagrammed in Figure 2.11 and demonstrated in IT at Work 2.1.

Illustration of Enterprise architecture success.

FIGURE 2.11 Enterprise architecture success.

EA Helps to Maintain Sustainability

As you read in Chapter 1, the volume, variety, and speed of data being collected or generated have increased dramatically over the past decade. As enterprise ISs become more complex, long-range IT planning is critical. Companies cannot simply add storage, new apps, or data analytics on an as-needed basis and expect those additional IT assets to work with existing systems.

The relationship between complexity and planning for the future is easier to see in physical things such as buildings and transportation systems. For example, if you are constructing a simple holiday cabin in a remote area, there is no need to create a detailed plan for future expansion. On the other hand, if you are building a large commercial development in a highly populated area, you’re not likely to succeed without a detailed project plan. Relating this to the case of enterprise ISs, if you are building a simple, single-user, nondistributed system, you would not need to develop a well-thought-out growth plan. However, this approach would not be feasible to enable you to successfully manage big data, copious content from mobiles and social networks, and data in the cloud. Instead, you would need a well-designed set of plans, or blueprints, provided by an EA to align IT with business objectives by guiding and controlling hardware acquisition, software add-ons and upgrades, system changes, network upgrades, choice of cloud services, and other digital technology investments that you will need to make your business sustainable.

There are two specific strategic issues that the EA is designed to address:

  1. IT systems’ complexity IT systems have become unmanageably complex and expensive to maintain.
  2. Poor business alignment Organizations find it difficult to keep their increasingly expensive IT systems aligned with business needs.

Business and IT Benefits of EA

Having the right EA in place is important for the following reasons:

  • EA cuts IT costs and increases productivity by giving decision-makers access to information, insights, and ideas where and when they need them.
  • EA determines an organization’s competitiveness, flexibility, and IT economics for the next decade and beyond. That is, it provides a long-term view of a company’s processes, systems, and technologies so that IT investments do not simply fulfill immediate needs.
  • EA helps align IT capabilities with business strategy―to grow, innovate, and respond to market demands, supported by an IT practice that is 100% in accord with business objectives.
  • EA can reduce the risk of buying or building systems and enterprise applications that are incompatible or unnecessarily expensive to maintain and integrate.

Developing an Enterprise Architecture (EA)

Developing an EA starts with the organization’s goals (for example, where does it want to be in three years?) and identifies the strategic direction in which it is heading and the business drivers to which it is responding. The goal is to make sure that everyone understands and shares a single vision. As soon as managers have defined this single shared vision of the future, they then consider the impact this vision will have on the business, technical, information, and solutions architectures of the enterprise. This shared vision of the future will dictate changes in all these architectures, assign priorities to those changes, and keep those changes grounded in business value.

According to Microsoft, the EA should include the four different perspectives shown in Table 2.3.

TABLE 2.3 Components of an Enterprise Architecture

Business architecture How the business works. Includes broad business strategies and plans for moving the organization from where it is now to where it wants to be. Processes the business uses to meet its goals.
Application architecture Portfolio of organization’s applications. Includes descriptions of automated services that support business processes; descriptions of interactions and interdependencies between the organization’s ISs.
Information architecture What the organization needs to know to perform its business processes and operations. Includes standard data models; data management policies and descriptions of patterns of information production and use in an organization.
Technology architecture Hardware and software that supports the organization. Examples include desktop and server software; OSs; network connectivity components; printers, modems.

It is important to recognize that the EA must be dynamic, not static. To sustain its effectiveness, it should be an ongoing process of aligning the creation, operation, and maintenance of IT across the organization with the ever-changing business objectives. As business needs change, so must the EA, as demonstrated in IT at Work 2.2.

Concept Check 2.2

  1. A(n) _______________ is an inventory of the physical IT devices that an organization owns and operates.
a. Enterprise architecture
b. IT architecture
c. IT infrastructure
d. Enterprise infrastructure
Correct or Incorrect?

 

  2. A blueprint for how future technology acquisitions and deployment will take place refers to:
a. IT infrastructure
b. IT architecture
c. Enterprise architecture
d. Organizational network
Correct or Incorrect?

 

  3. The two components of an information system that are NOT included in an organization’s IT infrastructure are:
a. Hardware and software
b. Software and people
c. People and procedures
d. Networks and hardware
Correct or Incorrect?

 

  4. An enterprise architecture reviews all the information systems across all departments in an organization to:
a. Develop a strategy to organize and integrate the organization’s IT infrastructure
b. Help meet current and future goals of the enterprise
c. Maximize value of technology to the organization
d. All of the above
e. None of the above
Correct or Incorrect?

 

2.3 Information Management and Data Governance

As shown in Figure 2.3, data is the heart of the business and the central component of an IS. Most business initiatives succeed or fail based on the quality of their data. Effective planning and decision-making depend on systems being able to make data available in usable formats on a timely basis. Almost everyone manages information. You manage your social and cloud accounts across multiple mobile devices and computers. You update or synchronize (“synch”) your calendars, appointments, contact lists, media files, documents, and reports. Your productivity depends on the compatibility of devices and applications and their ability to share data. Not being able to transfer and synch whenever you add a device or app is bothersome and wastes your time. For example, when you switch to the latest mobile device, you might need to reorganize content to make dealing with data and devices easier. To simplify add-ons, upgrades, sharing, and access, you might leverage cloud services such as iTunes, Instagram, Diigo, and Box.

This is just a glimpse at some of the information management situations that organizations face today and shows why a continuous plan is needed to guide, control, and govern IT growth. As with building construction (Figure 2.13), blueprints and models help guide and govern future IT and digital technology investments.

Photo illustration of office executives brainstorming over blueprints and models.

FIGURE 2.13 Blueprints and models, like those used for building construction, are needed to guide and govern an enterprise’s IT assets.

Information Management Harnesses Scattered Data

Business information is generally scattered throughout an enterprise, stored in separate systems dedicated to specific purposes, such as operations, supply chain management, or customer relationship management. Major organizations have over 100 data repositories (storage areas). In many companies, the integration of these disparate systems is limited―as is users’ ability to access all the information they need. As a result, despite all the information flowing through companies, executives, managers, and workers often struggle to find the information they need to make sound decisions or do their jobs. The overall goal of information management is to eliminate that struggle through the design and implementation of a sound data governance program and a well-planned EA.

Providing easy access to large volumes of information is just one of the challenges facing organizations. The days of simply managing structured data are over. Now, organizations must manage semistructured and unstructured content from social and mobile sources even though that data may be of questionable quality.

Information management is critical to data security and compliance with continually evolving regulatory requirements, such as the Sarbanes-Oxley Act, Basel III, the Computer Fraud and Abuse Act (CFAA), the USA PATRIOT Act, and the Health Insurance Portability and Accountability Act (HIPAA).

Issues of information access, management, and security must also deal with information degradation and disorder―where people do not understand what data mean or how the data can be useful.

Reasons for Information Deficiencies

Organizational information and decision support technologies have developed over many decades. During that time management teams’ priorities have changed along with their understanding of the role of IT within the organization; technology has advanced in unforeseeable ways, and IT investments have been increased or decreased based on competing demands on the budget. Other common reasons why information deficiencies are still a problem include:

  1. Data silos Information can be trapped in departmental data silos (also called information silos), such as marketing or production databases. Data silos are illustrated in Figure 2.14. Since silos are unable to share or exchange data, they cannot consistently be updated. When data are inconsistent across multiple enterprise applications, data quality cannot (and should not) be trusted without extensive verification. Data silos exist when there is no overall IT architecture to guide IT investments, data coordination, and communication. Data silos support a single function and, as a result, do not support an organization’s cross-functional needs.

    For example, most health-care organizations are drowning in data, yet they cannot get reliable, actionable insights from these data. Physician notes, registration forms, discharge summaries, documents, and more are doubling every five years. Unlike structured machine-ready data, these are messy data that take too much time and effort for health-care providers to include in their business analysis. So, valuable messy data are routinely left out. Millions of insightful patient notes and records sit inaccessible or unavailable in separate clinical data silos because historically there has been no easy way to analyze the information they contain.

  2. Lost or bypassed data Data can get lost in transit from one system to another. Or, data might never get captured because of inadequately tuned data collection systems, such as those that rely on sensors or scanners. Or, the data may not get captured in sufficient detail, as described in Tech Note 2.2.
  3. Poorly designed interfaces Despite all the talk about user-friendly interfaces, some ISs are horrible to deal with. Poorly designed interfaces or formats that require extra time and effort to figure out increase the risk of errors from misunderstanding the data or ignoring them.
  4. Nonstandardized data formats When users are presented with data in inconsistent or nonstandardized formats, errors increase. Attempts to compare or analyze data are more difficult and take more time. For example, if the Northeast division reports weekly gross sales revenues per product line and the Southwest division reports monthly net sales per product, you cannot compare their performance without converting the data to a common format. Consider the extra effort needed to compare temperature-related sales, such as air conditioners, when some temperatures are expressed in degrees Fahrenheit and others in Centigrade (see the short conversion sketch after Figure 2.14).
  5. Difficult to hit moving targets The information that decision-makers want keeps changing―and changes faster than ISs can respond to because of the first four reasons in this list. Tracking tweets, YouTube hits, and other unstructured content requires expensive investments―which managers find risky in an economic downturn.

FIGURE 2.14 Data (or information) silos are ISs that do not have the capability to exchange data with other systems, making timely coordination and communication across functions or departments difficult.
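To illustrate the nonstandardized-format problem from item 4 above, the short sketch below converts mixed Fahrenheit and Centigrade readings to a single scale before any comparison is attempted; the readings themselves are invented.

```python
# Normalize temperature readings reported in mixed units before comparing
# temperature-related sales; the readings below are invented.
readings = [("Northeast", 86.0, "F"), ("Southwest", 31.5, "C"),
            ("Midwest", 77.0, "F")]

def to_celsius(value, unit):
    # Convert Fahrenheit readings; pass Centigrade readings through unchanged
    return (value - 32) * 5 / 9 if unit == "F" else value

for region, value, unit in readings:
    print(f"{region}: {to_celsius(value, unit):.1f} °C")
```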

These are the data challenges managers have to face when there is little or no information management. Companies undergoing fast growth or merger activity or those with decentralized systems (each division or business unit manages its own IT) will end up with a patchwork of reporting processes. As you would expect, patchwork systems are more complicated to modify, too rigid to support an agile business, and more expensive to maintain.

Factors Driving the Shift from Silos to Sharing and Collaboration

Senior executives and managers are aware of the problems associated with their data silos and information management problems, but they also know about the huge cost and disruption associated with converting to newer IT architectures. The “silo effect” occurs when different departments of an organization do not share data and/or communicate effectively enough to maintain productivity. Surprisingly, 75% of employers believe teamwork and collaboration are essential, but only 18% of employees receive communication evaluations during performance reviews (Marchese, 2016). In the new age of efficiency of service, many companies like Formaspace, an industrial manufacturing and service corporation, must work toward complete cloud integration of old silos to increase customer service and generate more revenue. Enabling applications to interact with one another in an automated fashion to gain better access to data increases meaningful productivity and decreases time and effort spent in manual collaboration efforts. Illustrating how silo integration is essential for a modern corporation, Formaspace IT technician Loddie Alspach reports that in 2015 the company managed to increase revenues by 20% using Amazon-based cloud technology (Shore, 2015). However, companies are struggling to integrate thousands of siloed global applications, while aligning them to business operations. To remain competitive, they must be able to analyze and adapt their business processes quickly, efficiently, and without disruption.

Greater investments in collaboration technologies have been reported by the research firm Forrester (Keitt, 2014). A recent study identified four main factors that have influenced the increased use of cloud technologies, as shown in Table 2.4 (Rai et al., 2015).

TABLE 2.4 Key Factors Leading to Increased Migration to the Cloud

Cost Savings
Efficient Use of Resources
Unlimited Scalability of Resources
Lower Maintenance

Business Benefits of Information Management

Based on the examples you have read, the obvious benefits of information management are:

  1. Improves decision quality Decision quality depends on accurate and complete data.
  2. Improves the accuracy and reliability of management predictions It is essential for managers to be able to predict sales, product demand, opportunities, and competitive threats. Management predictions focus on “what is going to happen” as opposed to financial reporting on “what has happened.”
  3. Reduces the risk of noncompliance Government regulations and compliance requirements have increased significantly in the past decade. Companies that fail to comply with laws on privacy, fraud, anti-money laundering, cybersecurity, occupational safety, and so on face harsh penalties.
  4. Reduces the time and cost of locating and integrating relevant information.

Data Governance: Maintaining Data Quality and Cost Control

The success of every data-driven strategy or marketing effort depends on data governance. Data governance policies must address structured, semistructured, and unstructured data (discussed in Section 2.3) to ensure that insights can be trusted.

Enterprisewide Data Governance

With an effective data governance program, managers can determine where their data are coming from, who owns them, and who is responsible for what―in order to know they can trust the available data when needed. Data governance is an enterprisewide project because data cross boundaries and are used by people throughout the enterprise. New regulations and pressure to reduce costs have increased the importance of effective data governance. Governance eliminates the cost of maintaining and archiving bad, unneeded, or inaccurate data. These costs grow as the volume of data grows. Governance also reduces the legal risks associated with unmanaged or inconsistently managed information.

Three industries that depend on data governance to comply with regulations or reporting requirements are the following:

  • Food industry In the food industry, data governance is required to comply with food safety regulations. Food manufacturers and retailers have sophisticated control systems in place so that if a contaminated food product, such as spinach or peanut butter, is detected, they are able to trace the problem back to a particular processing plant or even the farm at the start of the food chain.
  • Financial services industry In the financial services sector, strict reporting requirements of the Dodd−Frank Wall Street Reform and Consumer Protection Act of 2010 are leading to greater use of data governance. The Dodd−Frank Act regulates Wall Street practices by enforcing transparency and accountability in an effort to prevent another significant financial crisis like the one that occurred in 2008.
  • Health-care industry Data are health care’s most valuable asset. Hospitals have mountains of electronic patient information. New health-care accountability and reporting obligations require data governance models for transparency to defend against fraud and to protect patients’ information.

Master Data and Master Data Management (MDM)

Master data is the term used to describe business-critical information on customers, products and services, vendors, locations, employees, and other things needed for operations and business transactions. Master data are fundamentally different from the high volume, velocity, and variety of big data and traditional data. For example, when a customer applies for automobile insurance, data provided on the application become the master data for that customer. In contrast, if the customer’s vehicle has a device that sends data about his or her driving behavior to the insurer, those machine-generated data are transactional or operational, but not master data.

Data are used in two ways―both depend on high-quality trustworthy data:

  1. For running the business Transactional or operational use
  2. For improving the business Analytic use

Master data are typically quite stable and are often stored in a number of different systems spread across the enterprise. Master data management (MDM) links and synchronizes all critical data from those disparate systems into one file, called a master file, that provides a common point of reference. MDM solutions can be complex and expensive. Given their complexity and cost, most MDM solutions are out of reach for small and medium companies. Vendors have addressed this challenge by offering cloud-managed MDM services. For example, in 2013, Dell Software launched its next-generation Dell Boomi MDM. Dell Boomi provides MDM, data management, and data quality services (DQS)―and they are 100% cloud-based with near real-time synchronization.
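A minimal sketch of what “linking and synchronizing” disparate records into a master file can look like is shown below. The source systems, field names, and the “first trusted value wins” merge rule are simplified assumptions for illustration, not how Dell Boomi or any particular MDM product works.

```python
# Simplified MDM-style consolidation: merge customer records from two
# departmental systems into one master record, keyed on customer ID.
crm_records = {
    "C-1001": {"name": "Dana Ortiz", "email": "dana@example.com"},
}
billing_records = {
    "C-1001": {"name": "Dana M. Ortiz", "postal_code": "60601"},
}

def build_master(*sources):
    master = {}
    for system in sources:                       # earlier sources are trusted first;
        for cust_id, record in system.items():   # later sources only fill in gaps
            entry = master.setdefault(cust_id, {})
            for field, value in record.items():
                entry.setdefault(field, value)
    return master

print(build_master(crm_records, billing_records))
# {'C-1001': {'name': 'Dana Ortiz', 'email': 'dana@example.com',
#             'postal_code': '60601'}}
```

In practice the merge rules (which system wins for which field, how conflicts are logged) are themselves governance decisions, which is why MDM and data governance are so closely linked.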

Data governance and MDM manage the availability, usability, integrity, and security of data used throughout the enterprise. Strong data governance and MDM are needed to ensure data are of sufficient quality to meet business needs. The characteristics and consequences of weak or nonexistent data governance are listed in Table 2.5.

TABLE 2.5 Characteristics and Consequences of Weak or Nonexistent Data Governance and MDM

  • Data duplication causes isolated data silos.
  • Inconsistency exists in the meaning and level of detail of data elements.
  • Users do not trust the data and waste time verifying the data rather than analyzing them for appropriate decision-making.
  • Leads to inaccurate data analysis.
  • Bad decisions are made on perception rather than reality, which can negatively affect the company and its customers.
  • Results in increased workloads and processing time.

Data governance and MDM are a powerful combination. As data sources and volumes continue to increase, so does the need to manage data as a strategic asset in order to extract its full value. Making business data consistent, trusted, and accessible across the enterprise is a critical first step in customer-centric business models. With data governance, companies are able to extract maximum value from their data, specifically by making better use of opportunities that are buried within behavioral data.

Concept Check 2.3

  1. Which of the following is NOT a benefit of information management:
a. Improves decision quality
b. Increases risk of noncompliance
c. Improves accuracy and reliability of management predictions
d. Reduces time and cost of locating and integrating relevant information
Correct or Incorrect?

 

  2. ___________________ is the control of enterprise data through formal policies and procedures to help ensure data can be trusted and are accessible.
a. Data governance
b. Data management
c. Master data management
d. Information management
Correct or Incorrect?

 

  3. Business critical information on customers, products and services, vendors, locations and employees is called:
a. Big data
b. Master data
c. Dirty data
d. Siloed data
Correct or Incorrect?

 

  4. Data governance and MDM benefit the organization by eliminating:
a. Inconsistent data
b. Long processing times
c. Bad decisions based on “dirty data”
d. All of the above
Correct or Incorrect?

 

2.4 Data Centers and Cloud Computing

Data centers and cloud computing are types of IT infrastructures or computing systems. Data center also refers to the building or facility that houses the servers and equipment. In the past, there were few IT infrastructure options. Companies owned their servers, storage, and network components to support their business applications and these computing resources were on their premises. Now, there are several choices for an IT infrastructure strategy―including cloud computing. As is common to IT investments, each infrastructure configuration has strengths, weaknesses, and cost considerations.

Data Centers

Traditionally, data and database technologies were kept in data centers that were typically run by an in-house IT department (Figure 2.15) and consisted of on-premises hardware and equipment that store data within an organization’s local area network.

Photo illustration of row of network servers in a data center.

FIGURE 2.15 A row of network servers in a data center.

Today, companies may own and manage their own on-premises data centers or pay for the use of their vendors’ data centers, such as in cloud computing, virtualization, and software-as-a-service arrangements (Figure 2.16).


FIGURE 2.16 Data centers are the infrastructure underlying cloud computing, virtualization, networking, security, delivery systems, and software-as-a-service.

In an on-premises data center connected to a local area network, it is easier to restrict access to applications and information to authorized, company-approved people and equipment. In the cloud, the management of updates, security, and ongoing maintenance is outsourced to a third-party cloud provider, and data are accessible to anyone with the proper credentials and an Internet connection. This arrangement can make a company more vulnerable because it exposes company data at many more entry and exit points. Here are some examples of data centers.

  • National Climatic Data Center The National Climatic Data Center is an example of a public data center that stores and manages the world’s largest archive of weather data.
  • U.S. National Security Agency The National Security Agency’s (NSA) data center, shown in Figure 2.17 is located in Bluffdale, UT. It is the largest spy data center for the NSA. People who think their correspondence and postings through sites like Google, Facebook, and Apple are safe from prying eyes should rethink that belief. You will read more about reports exposing government data collection programs in Chapter 5.
  • Apple Apple has a 500,000-square-foot data center in Maiden, NC, that houses servers for various iCloud and iTunes services. The center plays a vital role in the company’s back-end IT infrastructure. In 2014 Apple expanded this center with a new, smaller 14,250-square-foot tactical data center that also includes office space, meeting areas, and breakrooms.

FIGURE 2.17 The NSA data center in Bluffdale, UT.

Because the company owns the entire infrastructure, an on-premises data center is better suited to organizations that run many different types of applications and have complex workloads. A data center, like a factory, has limited capacity: once it is built, the amount of storage and the workload the center can handle do not change without purchasing and installing more equipment.

When a Data Center Goes Down, so Does Business

Data center failures disrupt all operations regardless of who owns the data center. Here are two examples.

  • Uber The startup company Uber experienced an hour-long outage in February 2014 that brought its car-hailing service to a halt across the country. The problem was caused by an outage at its vendor’s West Coast data center. Uber users flooded social media sites with complaints about problems launching Uber’s app to summon a driver-for-hire.
  • WhatsApp WhatsApp also experienced a server outage in early 2014 that took the service offline for 2.5 hours. WhatsApp is a smartphone text-messaging service that had been bought by Facebook for $19 billion. “Sorry we currently experiencing server issues. We hope to be back up and recovered shortly,” WhatsApp said in a message on Twitter that was retweeted more than 25,000 times in just a few hours. The company has grown rapidly to 450 million active users within five years, nearly twice as many as Twitter. More than two-thirds of these global users use the app daily. WhatsApp’s server failure drove millions of users to a competitor. Line, a messaging app developed in Japan, added 2 million new registered users within 24 hours of WhatsApp’s outage―the biggest increase in Line’s user base within a 24-hour period.

These outages point to the risks of maintaining the complex and sophisticated technology needed to power digital services used by millions or hundreds of millions of people.

Integrating Data to Combat Data Chaos

An enterprise’s data are stored in many different and often remote locations―at times creating data chaos. Some data may be duplicated so that they are available in the multiple locations that need quick access to them. As a result, the data needed for planning, decision-making, operations, queries, and reporting are scattered or duplicated across numerous servers, data centers, devices, and cloud services. Disparate data must be unified or integrated in order for the organization to function.

Data Virtualization

As organizations have transitioned to a cloud-based infrastructure, data centers have become virtualized. For example, Cisco offers data virtualization, which gives greater IT flexibility. Data virtualization involves abstracting, transforming, merging, and delivering data from disparate sources. Its main goal is to provide a single point of access to the data: by aggregating data from a wide range of sources, users can access applications without knowing the data’s exact location. Using data virtualization methods, enterprises can respond to change more quickly and make better decisions in real time without physically moving their data, which significantly cuts costs. Cisco Data Virtualization makes it possible to:

  • Have instant access to data at any time and in any format.
  • Respond faster to changing data analytics needs.
  • Cut complexity and costs.

Compared to traditional (nonvirtual) data integration and replication methods, data virtualization accelerates time to value with:

  • Greater agility Speeds 5–10 times faster than traditional data integration methods
  • Streamlined approach 50–75% time savings over data replication and consolidation methods
  • Better insight Instant access to data
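
To make the idea of data virtualization concrete, the following is a minimal, vendor-neutral sketch in Python of a thin access layer that merges customer records from a CSV feed and a relational database and exposes them through a single query interface. The class name, source systems, fields, and sample data are hypothetical illustrations, not Cisco’s implementation.

import csv
import io
import sqlite3


class VirtualCustomerView:
    """Presents customers from a CSV feed and a SQL database as one dataset."""

    def __init__(self, csv_text, sqlite_conn):
        self.csv_text = csv_text          # e.g., an export from a hosted CRM
        self.conn = sqlite_conn           # e.g., an on-premises database

    def _from_csv(self):
        for row in csv.DictReader(io.StringIO(self.csv_text)):
            yield {"id": row["id"], "name": row["name"], "source": "crm_csv"}

    def _from_sql(self):
        for cid, name in self.conn.execute("SELECT id, name FROM customers"):
            yield {"id": str(cid), "name": name, "source": "erp_db"}

    def query(self, name_contains=""):
        """Single point of access: callers never see where the data live."""
        for record in list(self._from_csv()) + list(self._from_sql()):
            if name_contains.lower() in record["name"].lower():
                yield record


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO customers VALUES (101, 'Acme Pty Ltd')")
    crm_export = "id,name\n7,Acme Holdings\n8,Beta Media\n"

    view = VirtualCustomerView(crm_export, conn)
    for rec in view.query("acme"):
        print(rec)   # unified records delivered from both sources

The point of the sketch is that the caller queries one interface while the data remain in place in their original systems, which is what distinguishes virtualization from physically copying or consolidating data.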

Software-Defined Data Center

Data virtualization has led to the latest development in data centers—the software-defined data center (SDDC). An SDDC facilitates the integration of the various infrastructure silos within an organization and optimizes the use of resources, balances workloads, and maximizes operational efficiency by dynamically distributing workloads and provisioning networks. The goal of the SDDC is to decrease costs and increase agility, policy compliance, and security by changing the way applications are deployed, operated, managed, and maintained. In addition, by giving organizations their own private cloud, SDDCs offer greater flexibility, allowing on-demand access to data without having to request permission from a cloud provider (see Figure 2.18).


FIGURE 2.18 Corporate IT infrastructures can consist of an on-premises data center and off-premises cloud computing.

The base resources for the SDDC are computation, storage, networking, and security. Typically, the SDDC includes service portals, applications, OSs, VM hardware, hypervisors, physical hardware, software-defined networking, software-defined storage, a security layer, automation and management layers, catalogs, a gateway interface module, and third-party plug-ins (Figure 2.19).


FIGURE 2.19 SDDC infrastructure (adapted from Sturm et al., 2017).

It is estimated that the market for SDDCs will grow from the current level of $22 billion to more than $77 billion in the next five years. As the use of SDDCs grows at this extraordinary rate, data center managers will be called upon to scale their data centers exponentially at a moment’s notice. Unfortunately, this is impossible to achieve using the traditional data center infrastructure. In the SDDC, software placement and optimization decisions are based on business logic, not technical provisioning directives. This requires changes in culture, processes, structure, and technology. The SDDC isolates the application layer from the physical infrastructure layer to facilitate faster and more effective deployment, management, and monitoring of diverse applications. This is achieved by finding each enterprise application an optimal home in a public or private cloud environment or by drawing from a diverse collection of resources.

From a business perspective, moving to an SDDC is motivated by the need to improve security, increase alignment of the IT infrastructure with business objectives, and provision applications more quickly.

Traditional data centers had dedicated, isolated hardware that resulted in poor utilization of resources and very limited flexibility. Second-generation virtualized data centers improved resource use by consolidating virtualized servers. By reducing the steps needed to deploy workloads and by facilitating the definition of applications and their resource needs, the SDDC creates an even more flexible environment in which enterprise applications can be quickly reconfigured and supported to provide infrastructure as a service (IaaS). Transitioning to an SDDC enables an organization to optimize its resource usage, provide capacity on demand, improve business-IT alignment, improve agility and flexibility of operations, and save money (Figure 2.20).


FIGURE 2.20 Evolution of data centers (adapted from Sturm et al., 2017).

Cloud Computing

In a business world where first movers gain the advantage, IT responsiveness and agility provide a competitive edge and lead to sustainable business practices. Yet, many IT infrastructures are extremely expensive to manage and too complex to easily adapt. A common solution is cloud computing. Cloud computing is the general term for infrastructures that use the Internet and private networks to access, share, and deliver computing resources. More specifically, IBM defines cloud computing as “the delivery of on-demand computing resources—everything from applications to data centers—over the Internet on a pay-for-use basis” (IBM, 2016).

Cloud computing is the delivery of computing and storage resources as a service to end-users over a network. Cloud systems are scalable; that is, they can be adjusted to meet changes in business needs. At the extreme, the cloud’s capacity is virtually unlimited, depending on the vendor’s offerings and service plans. A drawback of the cloud is reduced control, because a third party manages it. Unless the company uses a private cloud within its own network, it shares computing and storage resources with other cloud users in the vendor’s public cloud. Public clouds allow multiple clients to access the same virtualized services and utilize the same pool of servers across a public network. In contrast, private clouds are single-tenant environments with stronger security and control for regulated industries and critical data. In effect, private clouds retain all the IT security and control provided by traditional IT infrastructures, with the added advantages of cloud computing.

Selecting a Cloud Vendor

Because the cloud is still a relatively new and evolving business model, the decision to select a cloud service provider should be approached with even greater diligence than other IT decisions. As cloud computing becomes an increasingly important part of the IT delivery model, assessing and selecting the right cloud provider becomes one of the most strategic decisions business leaders undertake. Providers are not created equal, so it is important to investigate each provider’s offerings prior to subscribing. When selecting and investing in cloud services, there are several service factors a vendor needs to address. These evaluation factors are listed in Table 2.6.

TABLE 2.6 Service Factors to Consider when Evaluating Cloud Vendors or Service Providers

Factors Examples of Questions to Be Addressed
Delays What are the estimated server delays and network delays?
Workloads What is the volume of data and processing that can be handled during a specific amount of time?
Costs What are the costs associated with workloads across multiple cloud computing platforms?
Security How are data and networks secured against attacks? Are data encrypted and how strong is the encryption? What are network security practices?
Disaster recovery and business continuity How is service outage defined? What level of redundancy is in place to minimize outages, including backup services in different geographical regions? If a natural disaster or outage occurs, how will cloud services be continued?
Technical expertise and understanding Does the vendor have expertise in your industry or business processes? Does the vendor understand what you need to do and have the technical expertise to fulfill those obligations?
Insurance in case of failure Does the vendor provide cloud insurance to mitigate user losses in case of service failure or damage? This is a new and important concept.
Third-party audit or an unbiased assessment of the ability to rely on the service provided by the vendor Can the vendor show objective proof with an audit that it can live up to the promises it is making?

Vendor Management and Cloud Service Agreements (CSAs)

The move to the cloud is also a move to vendor-managed services and cloud service agreements (CSAs). Also referred to as cloud service level agreements (SLAs), the CSA or SLA is a negotiated agreement between a company and service provider that can be a legally binding contract or an informal contract. You can review a sample CSA used by IBM by visiting http://www-05.ibm.com/support/operations/files/pdf/csa_us.pdf.

Staff experienced in managing outsourcing projects may have the necessary expertise for managing work in the cloud and policing SLAs with vendors. The goal is not negotiating the best CSA terms in the abstract, but negotiating the terms that align most closely with the business needs. For example, if a server becomes nonoperational and does not support a critical business operation, it would not make sense to pay a high premium for reestablishing the server within one hour. On the other hand, if the data on that server support a business process that would effectively shut down the business for as long as the server was inaccessible, it would be prudent to negotiate the fastest possible service in the CSA and pay a premium for that high level of service.
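
This downtime trade-off can be made concrete with simple expected-value arithmetic. The minimal Python sketch below compares the annual premium for a faster recovery tier against the downtime cost it avoids; all figures (outage frequency, restoration times, hourly cost, premium) are hypothetical and would come from the company’s own risk analysis and the provider’s quote.

def expected_downtime_cost(outages_per_year, hours_to_restore, cost_per_hour):
    # Expected annual cost of downtime under one CSA recovery tier.
    return outages_per_year * hours_to_restore * cost_per_hour

# Hypothetical server supporting a critical process: downtime costs $50,000/hour.
standard_tier = expected_downtime_cost(outages_per_year=2, hours_to_restore=8,
                                        cost_per_hour=50_000)
premium_tier = expected_downtime_cost(outages_per_year=2, hours_to_restore=1,
                                      cost_per_hour=50_000)
premium_fee = 100_000   # hypothetical annual surcharge for 1-hour restoration

savings = standard_tier - premium_tier
print(f"Expected annual downtime cost (standard tier): ${standard_tier:,}")
print(f"Expected annual downtime cost (premium tier):  ${premium_tier:,}")
print(f"Premium pays off: {savings > premium_fee} (net ${savings - premium_fee:,})")

For a server that does not support a critical process, the hourly cost would be far lower and the same arithmetic would argue against paying the premium.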

In April 2015, the Cloud Standards Customer Council (CSCC) published the Practical Guide to Cloud Service Agreements, Version 2.0, to reflect changes that have occurred since 2012 when it first published the Practical Guide to Cloud Service Level Agreements. The new guide provides a practical reference to help enterprise IT and business decision-makers analyze CSAs from different cloud service providers. The main purpose of a CSA is to set clear expectations for service between the cloud customer (buyer) and the cloud provider (seller), but CSAs should also exist between a customer and other cloud entities, such as the cloud carrier, the cloud broker, and even the cloud auditor. Although the various service delivery models, that is, IaaS, PaaS, SaaS, and so on, may have different requirements, the guide focuses on the requirements that are common across the various service models (Cloud Standards Customer Council, 2015, p. 4).

Implementing an effective management process is an important step in ensuring internal and external user satisfaction with cloud services. Table 2.7 lists the 10 steps that should be taken by cloud customers to evaluate cloud providers’ CSAs in order to compare CSAs across multiple providers or to negotiate terms with a selected provider.

TABLE 2.7 Ten Steps to Evaluate a CSA

1. Understand roles and responsibilities of the CSA customer and provider
2. Evaluate business-level policies and compliance requirements relevant to the CSA customer
3. Understand service and deployment model differences
4. Identify critical performance objectives such as availability, response time, and processing speed. Ensure they are measurable and auditable
5. Evaluate security and privacy requirements for customer information that has moved into the provider’s cloud and applications, functions, and services being operated in the cloud to provide required service to the customer
6. Identify service management requirements such as auditing, monitoring and reporting, measurement, provisioning, change management, and upgrading/patching
7. Prepare for service failure management by explicitly documenting cloud service capabilities and performance expectations with remedies and limitations for each
8. Understand the disaster recovery plan
9. Develop a strong and detailed governance plan of the cloud services on the customer side
10. Understand the process to terminate the CSA

Cloud Infrastructure

The cloud has greatly expanded the options for enterprise IT infrastructures because any device that accesses the Internet can access, share, and deliver data. Cloud computing is a valuable infrastructure because:

  1. It is dynamic, not static, and provides a way to make applications and computing power available on demand. Applications and power are available on demand because they are provided as a service. For example, any software that is provided on demand is referred to as software as a service (SaaS). Typical SaaS products are Google Apps and www.Salesforce.com. Section 2.5 discusses SaaS and other cloud services.
  2. It helps companies become more agile and responsive while significantly reducing IT costs and complexity through improved workload optimization and service delivery.

Move to Enterprise Clouds

A majority of large organizations have hundreds or thousands of software licenses that support business processes, such as licenses for Microsoft Office, Oracle database management, IBM CRM (customer relationship management), and various network security software. Managing software and software licenses involves deploying, provisioning, and updating them―all of which is time-consuming and expensive. Cloud computing overcomes these problems.

Issues in Moving Workloads from the Enterprise to the Cloud

Building a cloud strategy is a challenge, and moving existing applications to the cloud is stressful. Despite the business and technical benefits, the risk exists of disrupting operations or customers in the process. With the cloud, the network and WAN (wide area network) become an even more critical part of the IT infrastructure. Greater network bandwidth is needed to support the increase in network traffic. And, putting part of the IT architecture or workload into the cloud requires different management approaches, different IT skills, and knowing how to manage vendor relationships and contracts.

Infrastructure Issues

Cloud computing differs from traditional arrangements because it runs on a shared infrastructure, so the setup is less customized to a specific company’s requirements. A comparison that helps in understanding the challenges: outsourcing is like renting an apartment, while the cloud is like getting a room at a hotel.

With cloud computing, it may be more difficult to get to the root of performance problems, like the unplanned outages that occurred with Google’s Gmail and Workday’s human resources apps. The trade-off is cost versus control.

Increasing demand for faster and more powerful computers, and increases in the number and variety of applications are driving the need for more capable IT architectures.

Concept Check 2.4

  1. Data centers and cloud computing are types of:
a. IT infrastructures
b. Hardware and software
c. IT architectures
d. Databases

  2. Data virtualization adds value in an organization in all of the following ways, EXCEPT:
a. Increased agility
b. Streamlined approach to data handling
c. Better business insight
d. Creating more data

  3. ________________________ is the delivery of computing and storage resources as a service to end-users over a network.
a. Internet browser
b. Cloud computing
c. Database management
d. Data governance

  4. A big advantage of cloud computing is:
a. Scalability
b. Limited capacity
c. Easily controlled
d. Exclusivity

2.5 Cloud Services and Virtualization

Managers want streamlined, real-time, data-driven enterprises, yet they may face budget cuts. Sustaining performance requires the development of new business applications and analytics capabilities, which make up the front end, and the data stores and digital infrastructure, or back end, that support them. The back end is where the data reside. The problem is that data may have to navigate through a congested IT infrastructure that was first designed decades ago. These network or database bottlenecks can quickly wipe out the competitive advantages from big data, mobility, and so on. Traditional approaches to increasing database performance―manually tuning databases, adding more disk space, and upgrading processors―are not enough when you are dealing with streaming data and real-time big data analytics. Cloud services help to overcome these limitations. Cloud services are outsourced to a third-party cloud provider who manages the updates, security, and ongoing maintenance.

At first glance, virtualization and cloud computing may appear to be quite similar. However, they are inherently different. Unlike cloud computing, which involves multiple computers or hardware devices sending data through vendor-provided networks, virtualization is the replacement of a tangible physical component with a virtual one. Each of these concepts is described and discussed in the following sections.

Anything as a Service (XaaS) Models

The cloud computing model for on-demand delivery of and access to various types of computing resources also extends to the development of business apps. Figure 2.21 shows four “as a service” (XaaS) solutions based on the concept that the resource―software, platform, infrastructure, or data―can be provided on demand regardless of geolocation. As these “as a service” solutions develop, the focus is changing from massive technology implementation costs to business-reengineering programs that enable XaaS platforms (Fresht, 2014).


FIGURE 2.21 Four as a service solutions: software, platform, infrastructure, and data as a service.

Cloud services are services made available to users on demand via the Internet from a cloud computing provider’s servers instead of being accessed through an organization’s on-premises servers. Cloud services are designed to provide easy, scalable access to applications, resources, and services, and are fully managed by a cloud services provider.

Cloud computing is often referred to as a “stack” because it comprises a broad range of services built on top of one another under the name cloud. These cloud services can be defined as follows:

  • Software as a service (SaaS) is a widely used model in which software is available to users from a service provider as needed. A provider licenses a SaaS application to customers as an on-demand service, through a subscription, a pay-as-you-go model, or free of charge (where revenue can be generated by other means, such as through sale of advertisements).
  • Platform as a service (PaaS) is a computing platform that enables the quick and easy creation, testing, and deployment of web applications without the necessity of buying and maintaining the software and infrastructure underneath it. It is a set of tools and services that make coding and deploying these applications faster and more efficient.
  • Infrastructure as a service (IaaS) is a way of delivering servers, storage, networks, workload balancers, and OSs as an on-demand service.
  • Data as a service (DaaS) is an information provision and distribution model in which data files (including text, images, sounds, and videos) are made available to customers over a network by a service provider.

Software as a Service (SaaS)

SaaS is a rapidly growing method of delivering software and is particularly useful for applications in which there are considerable interactions between the organization and external entities that do not confer a competitive advantage, for example, e-mail and newsletters. It is also useful when an organization needs a particular type of software for a short period of time or for a specific project, and for software that is used periodically, for example, tax, payroll, or billing software. SaaS is not appropriate for applications that require fast processing of real-time data or for applications where regulations do not permit data to be hosted externally.

Other terms for SaaS are on-demand computing and hosted services. The idea is basically the same: Instead of buying and installing expensive packaged enterprise applications, users can access software applications over a network, using an Internet browser. To use SaaS, a service provider hosts the application at its data center and customers access it via a standard Web browser.

The SaaS model was developed to overcome the common challenge to an enterprise of being able to meet fluctuating demands on IT resources efficiently. It is used in many business functions, primarily customer relationship management (CRM), accounting, human resources (HR), service desk management, communication, and collaboration.

There are thousands of SaaS vendors. www.Salesforce.com is one of the most widely known SaaS providers. Other examples are Google Docs and collaborative presentation software Prezi. For instance, instead of installing Microsoft Word on your own computer, and then loading Word to create a document, you use a browser to log into Google Docs. Only the browser uses your computer’s resources.
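
To illustrate what “accessing software over a network” can look like beyond the browser, the following minimal Python sketch calls a hypothetical SaaS provider’s REST API with a subscription key. The endpoint URL, API key, route, and response fields are placeholders for illustration, not a specific vendor’s actual API; the application logic and data storage all run on the provider’s side.

import requests   # third-party HTTP library (pip install requests)

API_KEY = "example-api-key"                      # issued by the SaaS provider
BASE_URL = "https://api.example-saas.com/v1"     # placeholder endpoint


def list_documents():
    # The customer sends an authenticated request; the provider hosts,
    # stores, and processes everything and returns only the results.
    response = requests.get(
        f"{BASE_URL}/documents",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()   # surface provider-side errors
    return response.json()


if __name__ == "__main__":
    for doc in list_documents():
        print(doc.get("title"))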

Platform as a Service (PaaS)

PaaS provides a standard unified platform for developing, testing, and deploying software over the Web. This computing platform allows Web applications to be created quickly and easily without the complexity of buying and maintaining the underlying infrastructure. Without PaaS, the cost of developing some applications would be prohibitive. PaaS offerings typically include databases, Web servers, development tools, and execution runtimes. PaaS is particularly useful when multiple software developers are working on a software development project, when other external parties need to interact with the development process, and when developers want to automate testing and deployment services. It is less useful where application performance needs to be tuned to the underlying hardware and software or where an application must be highly portable in terms of where it is hosted. Some examples of PaaS include Microsoft Azure, www.Force.com, and Google App Engine.

Infrastructure as a Service (IaaS)

Rather than purchasing all the components of their IT infrastructure, organizations can buy their computing resources as a fully outsourced infrastructure as a service (IaaS) on demand. Generally, IaaS can be acquired as a public or private infrastructure or a combination of the two (hybrid). A public IaaS consists of shared resources deployed on a self-service basis over the Internet, a private IaaS is provided on a private network, and a hybrid IaaS combines the two. IaaS is useful where organizations experience significant highs and lows in demand on the infrastructure, for new or existing organizations that have budgetary constraints on hardware investment, and in situations where an organization has temporary infrastructure needs. Some IaaS providers you may be familiar with include Amazon Web Services (AWS) and Rackspace.
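
As a concrete illustration of acquiring infrastructure on demand, the following minimal sketch uses the AWS SDK for Python (boto3) to request and then release a single virtual server. It assumes AWS credentials are already configured on the machine; the AMI ID is a placeholder and the region and instance type are only examples, not recommendations.

import boto3   # AWS SDK for Python (pip install boto3)

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one virtual server on demand instead of purchasing and racking hardware.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image ID
    InstanceType="t3.micro",           # example size; billed per use
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned on-demand instance: {instance_id}")

# When demand drops, the same API releases the capacity (pay-for-use).
ec2.terminate_instances(InstanceIds=[instance_id])

The point is that capacity is code: the same calls can be scripted to scale up for demand spikes and scale back down, which is what makes IaaS attractive for fluctuating or temporary workloads.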

Data as a Service (DaaS)—The New Kid on the Block

DaaS is the newest entrant into the XaaS arena. DaaS enables data to be shared among clouds, systems, apps, and so on regardless of the data source or where they are stored. Data files, including text, images, sound, and video, are made available to customers over a network, typically the Internet. DaaS makes it easier for data architects to select data from different pools, filter out sensitive data, and make the remaining data available on demand.

A key benefit of DaaS is that it transfers the risks and responsibilities associated with data management to a third-party cloud provider. Traditionally, organizations stored and managed their data within a self-contained storage system; however, as data become more complex, they are increasingly difficult and expensive to maintain using the traditional data model. Using DaaS, organizational data are readily accessible through a cloud-based platform and can be delivered to users despite organizational or geographical constraints. This model is growing in popularity as data become more complex, difficult, and expensive to maintain. Some of the most common business applications currently using DaaS are CRM and enterprise resource planning (ERP). For an example of DaaS, see IT at Work 2.3.
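
To illustrate the idea of data being served over a network on demand, the following minimal Python sketch (using the Flask web framework) exposes a small, hypothetical dataset through an HTTP endpoint and filters out a sensitive field before delivery, echoing the point above about data architects filtering sensitive data. The dataset, route, and filtering rule are illustrative only; a commercial DaaS platform would sit on cloud storage and handle authentication, billing, and scale.

from flask import Flask, jsonify, abort   # pip install flask

app = Flask(__name__)

# In a real DaaS offering this would be backed by cloud storage;
# here it is an in-memory stand-in.
DATASETS = {
    "customers": [
        {"id": 1, "name": "Acme Pty Ltd", "email": "ops@acme.example"},
        {"id": 2, "name": "Beta Media", "email": "info@beta.example"},
    ]
}

SENSITIVE_FIELDS = {"email"}   # filtered out before delivery


@app.route("/data/<dataset>")
def serve_dataset(dataset):
    records = DATASETS.get(dataset)
    if records is None:
        abort(404)
    cleaned = [{k: v for k, v in r.items() if k not in SENSITIVE_FIELDS}
               for r in records]
    return jsonify(cleaned)


if __name__ == "__main__":
    app.run(port=5000)   # consumers fetch, e.g., http://localhost:5000/data/customers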

As a Service Models Are Enterprisewide and Can Trigger Lawsuits

The various “as a service” models are used in many aspects of business. You will read how specific services, such as CRM and HR management, are being used for operational and strategic purposes in later chapters. Companies are frequently adopting software, platform, infrastructure, and data management services, and are starting to embrace mobility as a service and big data as a service, because they typically no longer have to worry about the costs of buying, maintaining, or updating their own data servers. Both hardware and human resources expenses can be cut significantly. Service arrangements all require that managers understand the benefits and trade-offs―and how to negotiate effective SLAs and CSAs. Regulations mandate that confidential data be protected regardless of whether the data are on-premises or in the cloud. Therefore, a company’s legal department needs to get involved in these IT decisions. Put simply, moving to cloud services is not simply an IT decision because the stakes around legal and compliance issues are very high.

Going Cloud

Cloud services can advance the core business of delivering superior services to optimize business performance. Cloud can cut costs and add flexibility to the performance of critical business apps. And, it can improve responsiveness to end-consumers, application developers, and business organizations. But to achieve these benefits, there must be IT, legal, and senior management oversight because a company still must meet its legal obligations and responsibilities to employees, customers, investors, business partners, and society.

Virtualization and Virtual Machines

There are many types of virtualization, such as virtual storage devices, virtual desktops, virtual OSs, and virtual servers for network virtualization. You can think of virtualization as a model of a physical component built in computer code―a software program that acts in the same way as the physical component it models. For example, a virtual machine is a software representation of a computer rather than an actual computer, and a virtual server sends and receives signals just like a physical one even though it has no circuitry or other physical components of its own.

You might ask why organizations want to virtualize their physical computing and networking devices. The answer is gross underutilization and inefficient use of resources. Computer hardware was designed to run a single OS and a single app, which leaves most computers vastly underutilized. Virtualization is a technique that creates a virtual (i.e., nonphysical) layer that allows multiple virtual machines (VMs) to run on a single physical machine. The virtual (or virtualization) layer makes it possible for each VM to share the resources of the hardware. Figure 2.22 shows the relationship among the VMs and physical hardware.


FIGURE 2.22 Virtual machines running on a simple computer hardware layer.

What Is a Virtual Machine?

Just as virtual reality is not real but a software-created world, a virtual machine is a software-created computer. Technically, a virtual machine (VM) is created by a software layer, called the virtualization layer, as shown in Figure 2.22. Each VM runs its own Windows or other OS and apps, such as Microsoft Office, as if it were an actual physical computer. A VM behaves exactly like a physical computer and contains its own virtual―that is, software-based―CPU, RAM (random access memory), hard drive, and network interface card (NIC). An OS cannot tell the difference between a VM and a physical machine, nor can applications or other computers on a network tell the difference. Even the VM thinks it is a “real” computer. Through virtualization, users can also pool the resources of multiple physical computers to create a single, more powerful VM.

Virtualization is a concept that has several meanings in IT and therefore several definitions. The major type of virtualization is hardware virtualization, which remains popular and widely used. Virtualization is often a key part of an enterprise’s disaster recovery plan. In general, virtualization separates business applications and data from hardware resources. This separation allows companies to pool hardware resources―rather than dedicate servers to applications―and assign those resources to applications as needed.

Different types of virtualization include:

  • Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device managed from a central console.
  • Server virtualization consolidates multiple physical servers into virtual servers that run on a single physical server.
  • Desktop virtualization is software technology that separates the desktop environment and associated application software from the physical machine that is used to access it.
  • Application virtualization is the practice of running software from a remote server rather than on the user’s computer.
  • Network virtualization combines the available resources in a network by splitting the network load into manageable parts, each of which can be assigned (or reassigned) to a particular server on the network.
  • Hardware virtualization is the use of software to emulate hardware or a total computer environment other than the one the software is actually running in. It allows a piece of hardware to run multiple OS images at once. This kind of software is sometimes known as a virtual machine.

Virtualization Characteristics and Benefits

Virtualization increases the flexibility of IT assets, allowing companies to consolidate IT infrastructure, reduce maintenance and administration costs, and prepare for strategic IT initiatives. Although cost-cutting is a tactical benefit, the more important, strategic reason organizations virtualize is that it enables flexible sourcing and cloud computing.

The characteristics and benefits of virtualization are as follows:

  1. Memory-intensive VMs need a huge amount of RAM (random access memory, or primary memory) because of their massive processing requirements.
  2. Energy-efficient VMs minimize energy consumed running and cooling servers in the data center―representing up to a 95% reduction in energy use per server.
  3. Scalability and load balancing When a big event happens, such as the Super Bowl, millions of people go to a website at the same time. Virtualization provides load balancing to handle the demand for requests to the site. The VMware infrastructure automatically distributes the load across a cluster of physical servers to ensure the maximum performance of all running VMs. Load balancing is key to solving many of today’s IT challenges; a minimal round-robin sketch follows this list.
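
The following minimal Python sketch illustrates the load-balancing idea from item 3 with a simple round-robin rotation. Production platforms such as VMware balance load dynamically based on actual utilization rather than a fixed rotation; the VM names and request volume here are hypothetical.

from itertools import cycle
from collections import Counter

virtual_machines = ["vm-01", "vm-02", "vm-03", "vm-04"]
next_vm = cycle(virtual_machines)          # simple round-robin rotation


def route_request(request_id):
    """Assign one incoming request to the next VM in the rotation."""
    vm = next(next_vm)
    return request_id, vm


if __name__ == "__main__":
    assignments = [route_request(i) for i in range(1_000)]   # burst of traffic
    per_vm = Counter(vm for _, vm in assignments)
    print(per_vm)   # each VM handles roughly the same share of requests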

Virtualization consolidates servers, which reduces the cost of servers, makes more efficient use of data center space, and reduces energy consumption. All of these factors reduce the total cost of ownership (TCO). Over a three-year life cycle, a VM costs approximately 75% less to operate than a physical server. IT at Work 2.4 describes one example of how virtualization can help organizations provide higher levels of customer service and improve productivity.

Concept Check 2.5

  1. A method of delivering servers, storage, networks, workload balancers and operating systems as an on‐demand service is referred to as:
a. Software as a service
b. Platform as a service
c. Infrastructure as a service
d. Data as a service

  2. _____________________ is an information provision and distribution model in which text, images, sounds and videos are made available to customers over a network by a service provider.
a. Software as a service
b. Platform as a service
c. Infrastructure as a service
d. Data as a service

  3. All of the following are useful applications of IaaS EXCEPT:
a. Infrastructure demands remain consistent
b. Infrastructure demands fluctuate
c. Budget constraints are imposed on infrastructure spending
d. Organization has temporary infrastructure needs

  4. Virtualization benefits an organization in the following ways:
a. Reduces costs
b. Makes more efficient use of data center space
c. Reduces energy consumption
d. All of the above

  5. ________________________ is the practice of running software from a remote server rather than on the user’s computer.
a. Storage virtualization
b. Desktop virtualization
c. PC virtualization
d. Application virtualization

Key Terms

ad hoc report

batch processing

cloud computing

cloud service agreements (CSAs)

customer-centric

data

data as a service (DaaS)

data center

data governance

data silo

database

decision support systems (DSS)

dirty data

enterprise architecture (EA)

exception report

executive information systems (EISs)

goal seeking

information

information management

information systems (ISs)

infrastructure as a service (IaaS)

IT infrastructure

IPOS

knowledge

management information systems (MIS)

master data

master data management (MDM)

master file

model

online transaction processing (OLTP)

platform as a service (PaaS)

private cloud

public cloud

real-time processing

service level agreement (SLA)

software as a service (SaaS)

software-defined data center (SDDC)

stack

structured decisions

transaction processing systems (TPS)

unstructured decisions

virtualization

virtual machine (VM)

what-if analysis

wisdom

Assuring Your Learning

References

  1. Bloomberg, J. “Change as Core Competency: Transforming the Role of the Enterprise Architect.” Forbes, June 16, 2016.
  2. Cailean, I. “What Role Do Algorithms Play in Programmatic Advertising?” Trademob, January 6, 2016. http://www.trademob.com/what-role-do-algorithms-play-in-programmatic-advertising
  3. Cloud Standards Customer Council. Practical Guide to Cloud Service Agreements, Version 2.0. April 2015. http://www.cloud-council.org/deliverables/CSCC-Practical-Guide-to-Cloud-Service-Agreements.pdf
  4. Conn, J. “EHRs vs. Paper: A Split-decision on Accuracy.” Modern Healthcare, July 8, 2016.
  5. Fresht, P. “The Ten Tenets Driving the As-a-service Economy.” Horses for Sources, October 6, 2014. http://www.horsesforsources.com/as a service-economy_100614
  6. IBM. “What is Cloud Computing?” IBM, June 6, 2016. https://www.ibm.com/cloud-computing/learn-more/what-is-cloud-computing
  7. Jarousse, L. A. “Information Governance for Hospitals.” Hospitals & Health Networks, February 18, 2016.
  8. Keitt, T. J. “Collaboration Technology Should Be Part of Your Customer Experience Tool Kit.” Forrester.com, June 30, 2014.
  9. Lunden, I. “Enterprise Chat App Slack Ties up with Salesforce in a Deep Product Partnership.” Tech Crunch, September 27, 2016. https://techcrunch.com/2016/09/27/enterprise-chat-app-slack-ties-up-with-salesforce-in-a-deep-platform-partnership
  10. Marchese, L. “How the ‘Silo Effect’ Is Hurting Cross Team Collaboration.” Trello, May 10, 2016.
  11. NextGen Healthcare. “Health Information Exchange (HIE).” NextGen Healthcare, March 31, 2016.
  12. Office of the National Coordinator for Health Information Technology. “Percent of Hospitals, By Type, that Possess Certified Health IT.” Office of the National Coordinator for Health Information Technology, May 31, 2016.
  13. Porter, M. Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, 1998.
  14. Rai, R., G. Sahoo, and S. Mehfuz. “Exploring the Factors Influencing the Cloud Computing Adoption: A Systematic Study on Cloud Migration.” Springerplus, April 25, 2015, 4, 197.
  15. Reeves, M. G. and R. Bowen. “Developing a Data Governance Model in Health Care.” Healthcare Financial Management, February 2013, 67(2): 82–86.
  16. Schneider, M. “Case Study: How MEDIATA Increased Campaign Performance with Hyperlocal Targeting.” Skyhook Wireless, July 22, 2014.
  17. Schneider, M. “Solving the Dirty Data Problem in Location-Based Advertising.” Street Fight, January 7, 2015.
  18. Shore, J. “Cloud-Based Integration Seeks to Tear Down Data Silos.” Tech Target, August 19, 2015.
  19. Sturm, R., C. Pollard, and J. Craig. Application Performance Management in the Digital Enterprise. Elsevier, March 2017.
  20. Zuckerman, R. B., S. H. Sheingold, E. J. Orav, J. Ruhter, and A. M. Epstein. “Readmissions, Observation, and the Hospital Readmissions Reduction Program.” The New England Journal of Medicine, April 21, 2016.