One of the most popular business strategies for achieving success is the development of a competitive advantage. Competitive advantage exists when a company has resources and capabilities superior to those of its competitors that allow it to achieve either a lower cost structure or a differentiated product. For long-term business success, companies strive to develop sustainable competitive advantages, or competitive advantages that cannot be easily copied by the competition (Porter, 1998). To stay ahead, corporate leaders must constantly seek new ways to grow their business in the face of rapid technology changes, increasingly empowered consumers and employees, and ongoing changes in government regulation. Effective ways to thrive over the long term are to launch new business models and strategies or devise new ways to outperform competitors. Because these new business models, strategies, and performance capabilities will frequently be the result of advances in technology, the company's ability to leverage technological innovation over time will depend on its approach to enterprise IT architecture, information management, and data governance.

The enterprisewide IT architecture, or simply the enterprise architecture (EA), guides the evolution, expansion, and integration of information systems (ISs), digital technology, and business processes. This guidance enables companies to more effectively leverage their IT capability to achieve maximum competitive advantage and growth over the long term. Information management guides the acquisition, custodianship, and distribution of corporate data and involves the management of data systems, technology, processes, and corporate strategy. Data governance, or information governance, controls enterprise data through formal policies and procedures. One goal of data governance is to provide employees and business partners with high-quality data they can trust and access on demand.
Bad decisions can result from the analysis of inaccurate data, widely referred to as dirty data, and lead to increased costs, decreased revenue, and legal, reputational, and performance-related consequences. For example, if advertising is conducted in the wrong location for the wrong audience, the data collected from that campaign are inaccurate, and marketing analyses based on them become highly skewed and ineffective. Companies must then begin costly repairs to their datasets to correct the problems caused by dirty data, resulting in a drop in customer satisfaction and wasted resources. One example of an organization taking strides to clean the dirty data collected through inaccurate marketing is the data management platform MEDIATA, which runs bidding systems and ad location services for firms looking to run ads on websites (see Table 2.1). Let's see how they did this.
TABLE 2.1 Opening Case Overview
Company | MEDIATA was launched as Valued Interactive Media (VIM) in 2009. Rebranded in 2013 as MEDIATA |
Industry | Communications; Advertising |
Product Lines | Wide range of programmatic solutions and products to provide practical solutions for digital marketing campaigns to deliver successful online advertising campaigns to organizations across Australia, Hong Kong, and New Zealand |
Digital Technology | Information management and data governance to increase trust and accessibility of data to facilitate a company’s vision |
Business Vision | Shake up the online advertising industry. Improve transparency and foster greater cooperation between partners |
Website | www.mediataplatform.com |
Before we begin to explore the value of information systems (ISs) to an organization, it's useful to understand what an IS is, what it does, and what types of ISs are typically found at different levels of an organization.
In addition to supporting decision-making, coordination, and control in an organization, ISs also help managers and workers analyze problems, visualize complex sets of data, and create new products. ISs collect data (input) and manipulate it (process), then generate and distribute reports (output); based on the data, specific IT services, such as processing customer orders and generating payroll, are delivered to the organization. Finally, the ISs save (store) the data for future use. In addition to the four functions of IPOS, an information system needs feedback from its users and other stakeholders to help improve future systems, as demonstrated in Figure 2.2.
The following example demonstrates how the components of the IPOS work together: To access a website, Amanda opens an Internet browser using the keyboard and enters a Web address into the browser (input). The system then uses that information to find the correct website (processing) and the content of the desired site is displayed in the Web browser (output). Next, Amanda bookmarks the desired website in the Web browser for future use (storage). The system then records the time that it took to produce the output to compare actual versus expected performance (feedback).
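The IPOS cycle in Amanda's example can be sketched as a minimal program. This is an illustration only: the function name, the simulated lookup, and the timing measurement are stand-ins for the real browser machinery described in the text.

```python
import time

# Storage: a stand-in for the system's persistent store
storage = {"bookmarks": [], "feedback": []}

def ipos_cycle(url: str) -> str:
    start = time.perf_counter()

    # Input: the user types a web address into the browser
    request = url.strip().lower()

    # Process: the system resolves the request (simulated lookup)
    site_content = f"<html>Content of {request}</html>"

    # Output: the content is displayed in the web browser
    print(site_content)

    # Storage: the address is bookmarked for future use
    storage["bookmarks"].append(request)

    # Feedback: record how long the cycle took, so actual performance
    # can later be compared against expected performance
    storage["feedback"].append(time.perf_counter() - start)
    return site_content

ipos_cycle("WWW.Example.com")
```

Each comment marks one stage of the cycle; the feedback measurement is what would let the system's operators compare actual versus expected performance over time.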
A computerized IS consists of six interacting components. Regardless of type and where and by whom they are used within an organization, the components of an IS must be carefully managed to provide maximum benefit to an organization (see Figure 2.3).
As you can see in Figure 2.3, data is the central component of any information system. Without data, an IS would have no purpose and companies would be unable to conduct business. Generally speaking, ISs process data into meaningful information that produces corporate knowledge and ultimately creates wisdom that fuels corporate strategy.
Data are the raw material from which information is produced; the quality, reliability, and integrity of the data must be maintained for the information to be useful. Data are the raw facts and figures that are not organized in any way. Examples are the number of hours an employee worked in a certain week or the number of new Ford vehicles sold from the first quarter (Q1) of 2015 through the second quarter (Q2) of 2017 (Figure 2.4).
Information is an organization’s most important asset, second only to people. Information provides the “who,” “what,” “where,” and “when” of data in a given context. For example, summarizing the quarterly sales of new Ford vehicles from Q1 2015 through Q2 2017 provides information that shows sales have steadily decreased from Q2 2016.
Knowledge is used to answer the question "how." In our example, it would involve determining how the trend can be reversed, for example, by improving customer satisfaction, adding new features, or adjusting pricing.
Wisdom is more abstract than data and information (that can be harnessed) and knowledge (that can be shared). Wisdom adds value and increases effectiveness. It answers the “why” in a given situation. In the Ford example, wisdom would be corporate strategists evaluating the various reasons for the sales drop, creatively analyzing the situation as a whole, and developing innovative policies and procedures to reverse the recent downward trend in new vehicle sales.
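The progression from data to information to knowledge can be sketched in a few lines of code. The quarterly figures below are hypothetical, invented purely to illustrate the pattern described in the Ford example; wisdom, the "why" and the creative strategy, remains with the human decision-makers and is not computable.

```python
# Data: raw quarterly counts of new vehicles sold
# (hypothetical figures, in thousands, for illustration only)
data = {
    "2015-Q1": 180, "2015-Q2": 190, "2015-Q3": 195, "2015-Q4": 200,
    "2016-Q1": 205, "2016-Q2": 210, "2016-Q3": 198, "2016-Q4": 185,
    "2017-Q1": 172, "2017-Q2": 160,
}

# Information: put the raw figures in context by computing the
# quarter-over-quarter change in sales
quarters = sorted(data)  # lexicographic order matches chronological order here
changes = {q2: data[q2] - data[q1] for q1, q2 in zip(quarters, quarters[1:])}

# Knowledge: answer a "how" question -- how long has the decline lasted?
streak = 0
for q in reversed(quarters[1:]):
    if changes[q] < 0:
        streak += 1
    else:
        break
print(f"Sales have fallen for {streak} consecutive quarters")
```

With these numbers the code reports a four-quarter decline starting after 2016-Q2, mirroring the information described in the text; deciding what to do about it is the knowledge-and-wisdom layer.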
ISs collect or input and process data to create and distribute reports or other outputs based on information gleaned from the raw data to support decision-making and business processes that, in turn, produce corporate knowledge that can be stored for future use. Figure 2.5 shows the input-processing-output-storage (IPOS) cycle.
An IS may be as simple as a single computer and a printer used by one person, or as complex as several thousand computers of various types (tablets, desktops, laptops, mainframes) with hundreds of printers, scanners, and other devices connected through an elaborate network used by thousands of geographically dispersed employees. Functional ISs that support business analysts and other departmental employees range from simple to complex, depending on the type of employees supported. The following examples show the support that IT provides to major functional areas.
Figure 2.6 illustrates the classification of the different types of ISs used in organizations, the typical level of workers who use them and the types of input/output (I/O) produced by each of the ISs. At the operational level of the organization, line workers use transaction processing systems (TPSs) to capture raw data and pass it along (output) to middle managers. The raw data is then input into office automation (OA) and MISs by middle managers to produce information for use by senior managers. Next, information is input into decision support systems (DSSs) for processing into explicit knowledge that will be used by senior managers to direct current corporate strategy. Finally, corporate executives input the explicit knowledge provided by the DSSs into executive information systems (EISs) and apply their experience, expertise, and skills to create wisdom that will lead to new corporate strategies.
A TPS is designed to process specific types of data input from ongoing transactions. TPSs can be manual, as when data are typed into a form on a screen, or automated by using scanners or sensors to capture barcodes or other data (Figure 2.7). TPSs are usually operated directly by frontline workers and provide the key data required to support the management of operations.
A TPS processes organizational data, for example, sales orders, reservations, stock control, and payments, for payroll, accounting, finance, marketing, purchasing, inventory control, and other functional departments. The data are usually obtained through the automated or semiautomated tracking of low-level activities and basic transactions. Transactions are either:
TPSs are essential systems. Transactions that are not captured can result in lost sales, dissatisfied customers, unrecorded payments, and many other types of data errors with financial impacts. For example, if the accounting department issued a check to pay an invoice (bill) and it was cashed by the recipient, but information about that transaction was not captured, then two things happen. First, the amount of cash listed on the company’s financial statements is incorrect because no deduction was made for the amount of the check. Second, the accounts payable (A/P) system will continue to show the invoice as unpaid, so the accounting department might pay it a second time. Likewise, if services are provided, but the transactions are not recorded, the company will not bill for them and thus lose service revenue.
Data captured by a TPS are processed and stored in a database; they then become available for use by other systems. Processing of transactions is done in one of two modes: batch processing, in which transactions are collected over a period of time and processed together at scheduled intervals, and real-time processing, in which each transaction is processed as soon as it occurs.
Batch processing costs less than real-time processing. A disadvantage is that the data become outdated between runs because they are not updated immediately, in real time.
As data are collected or captured, they are validated to detect and correct obvious errors and omissions. For example, when a customer sets up an account with a financial services firm or retailer, the TPS validates that the address, city, and postal code provided are consistent with one another and also that they match the credit card holder’s address, city, and postal code. If the form is not complete or errors are detected, the customer is required to make the corrections before the data are processed any further.
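The validation step described above can be sketched as a small routine. The reference data, field names, and five-digit postal-code rule are illustrative assumptions, not part of any real TPS; a production system would check against authoritative address and cardholder records.

```python
import re

# Hypothetical reference data: postal codes known to belong to each city
CITY_POSTAL_CODES = {
    "springfield": {"62701", "62702"},
    "riverton": {"82501"},
}

def validate_account_form(form: dict) -> list:
    """Return a list of validation errors; an empty list means the
    form may be processed further."""
    errors = []
    # Completeness check: every required field must be present
    for field in ("address", "city", "postal_code"):
        if not form.get(field, "").strip():
            errors.append(f"missing field: {field}")
    if errors:
        return errors
    # Format check: postal code must be five digits (US-style, for illustration)
    if not re.fullmatch(r"\d{5}", form["postal_code"]):
        errors.append("postal code must be five digits")
    # Consistency check: the city and postal code must agree with one another
    known = CITY_POSTAL_CODES.get(form["city"].lower(), set())
    if known and form["postal_code"] not in known:
        errors.append("postal code does not match city")
    return errors

# A form whose city and postal code are inconsistent is sent back for correction
print(validate_account_form({"address": "12 Elm St",
                             "city": "Springfield",
                             "postal_code": "82501"}))
```

Catching the inconsistency at capture time is far cheaper than correcting it after the bad record has propagated to downstream systems, which is the point of the identity-theft comparison that follows.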
Data errors detected later may be time-consuming to correct or cause other problems. You can better understand the difficulty of detecting and correcting errors by considering identity theft. Victims of identity theft face enormous challenges and frustration trying to correct data about them.
An MIS is built on the data provided by TPSs. MISs are management-level systems that are used by middle managers to help ensure the smooth running of an organization in the short to medium term. The highly structured information provided by these systems allows managers to evaluate an organization's performance by comparing current with previous outputs. Functional areas or departments―accounting, finance, production/operations, marketing and sales, human resources, and engineering and design―are supported by ISs designed for their particular reporting needs. General-purpose reporting systems are referred to as management information systems (MISs). Their objective is to provide reports to managers for tracking operations, monitoring, and control.
Typically, a functional system provides reports about such topics as operational efficiency, effectiveness, and productivity by extracting information from databases and processing it according to the needs of the user. Types of reports include the following:
Reports typically include interactive data visualizations, such as column and pie charts, as shown in Figure 2.8.
A DSS is a knowledge-based system used by senior managers to facilitate the creation of knowledge and allow its integration into the organization. More specifically, a DSS is an interactive application that supports decision-making by manipulating and building upon the information from an MIS and/or a TPS to generate insights and new information.
Configurations of a DSS range from relatively simple applications that support a single user to complex enterprisewide systems. A DSS can support the analysis and solution of a specific problem, evaluate a strategic opportunity, or support ongoing operations. These systems support unstructured and semistructured decisions, such as make-or-buy-or-outsource decisions, or what products to develop and introduce into existing markets.
Decisions range from structured to unstructured. Structured decisions are those that have a well-defined method for solving and the data necessary to reach a sound decision. An example of a structured decision is determining whether an applicant qualifies for an auto loan, or whether to extend credit to a new customer―and the terms of those financing options. Structured decisions are relatively straightforward and made on a regular basis, and an IS can ensure that they are done consistently.
At the other end of the continuum are unstructured decisions that depend on human intelligence, knowledge, and/or experience―as well as data and models to solve. Examples include deciding which new products to develop or which new markets to enter. Semistructured decisions fall in the middle of the continuum. DSSs are best suited to support these types of decisions, but they are also used to support unstructured ones. To provide such support, DSSs have certain characteristics to support the decision-maker and the overall decision-making process.
The main characteristic that distinguishes a DSS from an MIS is the inclusion of models. Decision-makers can manipulate models to conduct experiments and sensitivity analyses, for example, what-if and goal seeking. What-if analysis refers to changing assumptions or data in the model to observe the impacts of those changes on the outcome. For example, if sales forecasts are based on a 5% increase in customer demand, a what-if analysis would replace the 5% with higher and/or lower estimates to determine what would happen to sales if demand changed. With goal seeking, the decision-maker has a specific outcome in mind and needs to determine how that outcome could be achieved and whether it is feasible to achieve that desired outcome. A DSS can also estimate the risk of alternative strategies or actions.
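The two analyses can be sketched against a toy forecasting model. The model, the base sales figure, and the target are all hypothetical; real DSS models would be statistical, financial, optimization, or simulation models, but the mechanics of what-if and goal seeking are the same.

```python
def forecast_sales(base_sales: float, demand_growth: float) -> float:
    """A toy model: next period's sales under an assumed growth
    rate in customer demand."""
    return base_sales * (1 + demand_growth)

BASE = 1_000_000  # hypothetical current sales in dollars

# What-if analysis: replace the assumed 5% demand growth with
# higher and lower estimates and observe the effect on the outcome
for growth in (0.02, 0.05, 0.08):
    print(f"growth {growth:.0%} -> forecast {forecast_sales(BASE, growth):,.0f}")

# Goal seeking: the decision-maker fixes the outcome (sales of $1.1M)
# and asks what input would achieve it; solved here by bisection
def goal_seek(target: float, lo: float = -1.0, hi: float = 1.0) -> float:
    for _ in range(60):
        mid = (lo + hi) / 2
        if forecast_sales(BASE, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

required = goal_seek(1_100_000)
print(f"required demand growth: {required:.1%}")
```

What-if analysis runs the model forward from varied assumptions; goal seeking runs it backward from a desired outcome, which also tells the decision-maker whether that outcome is feasible at all.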
California Pizza Kitchen (CPK) uses a DSS to support inventory decisions. CPK has over 200 locations in 32 U.S. states and 13 other countries, including 17 nontraditional franchise concepts designed for airports, universities, and stadiums. Maintaining optimal inventory levels at all its restaurants was challenging and time-consuming. The original MIS was replaced by a DSS to make it easy for the chain's managers to maintain updated records, generate reports as and when needed, and make corporate- and restaurant-level decisions. Many CPK restaurants reported a 5% increase in sales after the DSS was implemented.
EISs are strategic-level information systems that help executives and senior managers analyze the environment in which the organization exists. They typically are used to identify long-term trends and to plan appropriate courses of action. The information in such systems is often weakly structured and comes from both internal and external sources. EISs are designed to be operated directly by executives without the need for intermediaries and easily tailored to the preferences of the individual using them. An EIS organizes and presents data and information from both external data sources and internal MIS or TPS in an easy-to-use dashboard format to support and extend the inherent capabilities of senior executives.
Initially, EISs were custom-made for an individual executive. However, a number of off-the-shelf EIS packages now exist and some enterprise-level systems offer a customizable EIS module.
The characteristics of each type of IS are summarized in Table 2.2.
TABLE 2.2 Characteristics of Types of Information Systems
Type | Characteristics |
TPS | Used by operations personnel; produce information for other ISs; use internal and external data; efficiency oriented |
MIS | Used by lower and middle managers; based on internal information; support structured decisions; inflexible; lack analytical capabilities; focus on past and present data |
DSS | Used by senior managers; support semistructured or unstructured decisions; contain models or formulas that enable sensitivity analysis, what-if analysis, goal seeking, and risk analysis; use internal and external data plus data added by the decision-maker, who may have insights relevant to the decision situation; predict the future |
EIS | Used by C-level managers; easy-to-use, customizable interface; support unstructured decisions; use internal and external data sources; focus on effectiveness of the organization; very flexible; focus on the future |
Here’s an example of how these ISs are used together to add value in an organization. Day-to-day transaction data collected by the TPS are converted into prescheduled summarized reports by middle managers using an MIS. The findings in these reports are then analyzed by senior managers who use a DSS to support their semistructured or unstructured decision-making. DSSs contain models that consist of a set of formulas and functions, such as statistical, financial, optimization, and/or simulation models. Corporations, government agencies, the military, health care, medical research, major league sports, and nonprofits depend on their DSSs to answer what-if questions to help reduce waste in production operations, improve inventory management, support investment decisions, and predict demand and help sustain a competitive edge.
Customer data, sales, and other critical data produced by the DSS are then selected for further analysis, such as trend analysis or forecasting demand and are input into an EIS for use by top level management, who add their experience and expertise to make unstructured decisions that will affect the future of the business.
Figure 2.9 shows how the major types of ISs relate to one another and how data flow among them.
It is important to remember that ISs do not exist in isolation. They have a purpose and a social (organizational) context. A common purpose is to provide a solution to a business problem. The social context of the system consists of the values and beliefs that determine what is admissible and possible within the culture of the organization and among the people involved. For example, a company may believe that superb customer service and on-time delivery are critical success factors. This belief system influences IT investments, among other factors.
The business value of IT is determined by the people who use them, the business processes they support, and the culture of the organization. That is, IS value is determined by the relationships among ISs, people, and business processes―all of which are influenced strongly by organizational culture.
In an organization, there may be a culture of distrust between the technology and business employees. No enterprise IT architecture methodology or data governance can bridge this divide unless there is a genuine commitment to change. That commitment must come from the highest level of the organization―senior management. Methodologies cannot solve people problems; they can only provide a framework in which those problems can be solved.
Every enterprise has a core set of ISs and business processes that execute the transactions that keep it in business. Transactions include processing orders, order fulfillment and delivery, purchasing inventory and supplies, hiring and paying employees, and paying bills. To most effectively utilize its IT assets, an organization must create an IT infrastructure, IT architecture, and an enterprise architecture (EA) as shown in Figure 2.10.
IT infrastructure is an inventory of the physical IT devices that an organization owns and operates. The IT infrastructure describes an organization’s entire collection of hardware, software, networks, data centers, facilities, and other related equipment used to develop, test, operate, manage, and support IT services. It does NOT include the people or process components of an information system.
IT architecture guides the process of planning, acquiring, building, modifying, interfacing, and deploying IT resources in a single department within an organization. The IT architecture should offer a way to systematically identify technologies that work together to satisfy the needs of the departments’ users. The IT architecture is a blueprint for how future technology acquisitions and deployment will take place. It consists of standards, investment decisions, and product selections for hardware, software, and communications. The IT architecture is developed first and foremost based on department direction and business requirements.
Enterprise architecture (EA) reviews all the information systems across all departments in an organization to develop a strategy to organize and integrate the organization’s IT infrastructures to help it meet the current and future goals of the enterprise and maximize the value of technology to the organization. In this way, EA provides a holistic view of an organization with graphic and text descriptions of strategies, policies, information, ISs, and business processes and the relationships between them.
The EA adds value in an organization in that it can provide the basis for organizational change just as architectural plans guide a construction project. Since a poorly crafted enterprise architecture (EA) can also hinder day-to-day operations and efforts to execute business strategy, it is more important than ever before to carefully consider the EA within your organization when deciding on an approach to business, technology, and corporate strategy. Simply put, EA helps solve two critical challenges: where an organization is going, and how it will get there.
The success of EA is measured not only in financial terms, such as profitability and return on investment (ROI), but also in nonfinancial terms, for example, improved customer satisfaction, faster speed to market, and lower employee turnover as diagrammed in Figure 2.11 and demonstrated in IT at Work 2.1.
As you read in Chapter 1, the volume, variety, and speed of data being collected or generated have increased dramatically over the past decade. As enterprise ISs become more complex, long-range IT planning is critical. Companies cannot simply add storage, new apps, or data analytics on an as-needed basis and expect those additional IT assets to work with existing systems.
The relationship between complexity and planning for the future is easier to see in physical things such as buildings and transportation systems. For example, if you are constructing a simple holiday cabin in a remote area, there is no need to create a detailed plan for future expansion. On the other hand, if you are building a large commercial development in a highly populated area, you're not likely to succeed without a detailed project plan. Relating this to the case of enterprise ISs, if you are building a simple, single-user, nondistributed system, you would not need to develop a well-thought-out growth plan. However, this approach is not feasible when you need to successfully manage big data, copious content from mobile devices and social networks, and data in the cloud. Instead, you would need a well-designed set of plans, or blueprints, provided by an EA to align IT with business objectives by guiding and controlling hardware acquisition, software add-ons and upgrades, system changes, network upgrades, choice of cloud services, and other digital technology investments that you will need to make your business sustainable.
There are two specific strategic issues that the EA is designed to address:
Having the right EA in place is important for the following reasons:
Developing an EA starts with the organization's goals (e.g., Where does it want to be in three years?) and identifies the strategic direction in which it is heading and the business drivers to which it is responding. The goal is to make sure that everyone understands and shares a single vision. As soon as managers have defined this single shared vision of the future, they then consider the impact this vision will have on the business, technical, information, and solutions architectures of the enterprise. This shared vision of the future will dictate changes in all these architectures, assign priorities to those changes, and keep those changes grounded in business value.
According to Microsoft, the EA should include the four different perspectives shown in Table 2.3.
TABLE 2.3 Components of an Enterprise Architecture
Business architecture | How the business works. Includes broad business strategies and plans for moving the organization from where it is now to where it wants to be. Processes the business uses to meet its goals. |
Application architecture | Portfolio of organization’s applications. Includes descriptions of automated services that support business processes; descriptions of interactions and interdependencies between the organization’s ISs. |
Information architecture | What the organization needs to know to perform its business processes and operations. Includes standard data models; data management policies and descriptions of patterns of information production and use in an organization. |
Technology architecture | Hardware and software that supports the organization. Examples include desktop and server software; OSs; network connectivity components; printers, modems. |
It is important to recognize that the EA must be dynamic, not static. To sustain its effectiveness, it should be an ongoing process of aligning the creation, operation, and maintenance of IT across the organization with the ever-changing business objectives. As business needs change, so must the EA, as demonstrated in IT at Work 2.2.
As shown in Figure 2.3, data is the heart of the business and the central component of an IS. Most business initiatives succeed or fail based on the quality of their data. Effective planning and decision-making depend on systems being able to make data available in usable formats on a timely basis. Almost everyone manages information. You manage your social and cloud accounts across multiple mobile devices and computers. You update or synchronize (“synch”) your calendars, appointments, contact lists, media files, documents, and reports. Your productivity depends on the compatibility of devices and applications and their ability to share data. Not being able to transfer and synch whenever you add a device or app is bothersome and wastes your time. For example, when you switch to the latest mobile device, you might need to reorganize content to make dealing with data and devices easier. To simplify add-ons, upgrades, sharing, and access, you might leverage cloud services such as iTunes, Instagram, Diigo, and Box.
This is just a glimpse at some of the information management situations that organizations face today and shows why a continuous plan is needed to guide, control, and govern IT growth. As with building construction (Figure 2.13), blueprints and models help guide and govern future IT and digital technology investments.
Business information is generally scattered throughout an enterprise, stored in separate systems dedicated to specific purposes, such as operations, supply chain management, or customer relationship management. Major organizations have over 100 data repositories (storage areas). In many companies, the integration of these disparate systems is limited―as is users’ ability to access all the information they need. As a result, despite all the information flowing through companies, executives, managers, and workers often struggle to find the information they need to make sound decisions or do their jobs. The overall goal of information management is to eliminate that struggle through the design and implementation of a sound data governance program and a well-planned EA.
Providing easy access to large volumes of information is just one of the challenges facing organizations. The days of simply managing structured data are over. Now, organizations must manage semistructured and unstructured content from social and mobile sources even though that data may be of questionable quality.
Information management is critical to data security and compliance with continually evolving regulatory requirements, such as the Sarbanes-Oxley Act, Basel III, the Computer Fraud and Abuse Act (CFAA), the USA PATRIOT Act, and the Health Insurance Portability and Accountability Act (HIPAA).
Issues of information access, management, and security must also deal with information degradation and disorder―where people do not understand what data mean or how the data can be useful.
Organizational information and decision support technologies have developed over many decades. During that time, management teams' priorities have changed along with their understanding of the role of IT within the organization; technology has advanced in unforeseeable ways; and IT investments have been increased or decreased based on competing demands on the budget. Other common reasons why information deficiencies are still a problem include:
For example, most health-care organizations are drowning in data, yet they cannot get reliable, actionable insights from these data. Physician notes, registration forms, discharge summaries, documents, and more are doubling every five years. Unlike structured machine-ready data, these are messy data that take too much time and effort for health-care providers to include in their business analysis. So, valuable messy data are routinely left out. Millions of insightful patient notes and records sit inaccessible or unavailable in separate clinical data silos because historically there has been no easy way to analyze the information they contain.
These are the data challenges managers have to face when there is little or no information management. Companies undergoing fast growth or merger activity or those with decentralized systems (each division or business unit manages its own IT) will end up with a patchwork of reporting processes. As you would expect, patchwork systems are more complicated to modify, too rigid to support an agile business, and more expensive to maintain.
Senior executives and managers are aware of the problems associated with their data silos and information management problems, but they also know about the huge cost and disruption associated with converting to newer IT architectures. The "silo effect" occurs when different departments of an organization do not share data and/or communicate effectively enough to maintain productivity. Surprisingly, 75% of employers believe teamwork and collaboration are essential, but only 18% of employees receive communication evaluations during performance reviews (Marchese, 2016). In the new age of service efficiency, many companies like Formaspace, an industrial manufacturing and service corporation, must work toward complete cloud integration of old silos to improve customer service and generate more revenue. Enabling applications to interact with one another in an automated fashion to gain better access to data increases meaningful productivity and decreases the time and effort spent on manual collaboration. In an illustration of how silo integration is essential for a modern corporation, Formaspace IT technician Loddie Alspach reports that in 2015 the company increased revenues by 20% using Amazon-based cloud technology (Shore, 2015). However, companies are struggling to integrate thousands of siloed global applications while aligning them to business operations. To remain competitive, they must be able to analyze and adapt their business processes quickly, efficiently, and without disruption.
Greater investments in collaboration technologies have been reported by the research firm Forrester (Keitt, 2014). A recent study identified four main factors that have influenced the increased use of cloud technologies, as shown in Table 2.4 (Rai et al., 2015).
TABLE 2.4 Key Factors Leading to Increased Migration to the Cloud
Cost Savings
Efficient Use of Resources
Unlimited Scalability of Resources
Lower Maintenance
Based on the examples you have read, the obvious benefits of information management are:
The success of every data-driven strategy or marketing effort depends on data governance. Data governance policies must address structured, semistructured, and unstructured data (discussed in Section 2.3) to ensure that insights can be trusted.
With an effective data governance program, managers can determine where their data are coming from, who owns them, and who is responsible for what―in order to know they can trust the available data when needed. Data governance is an enterprisewide project because data cross boundaries and are used by people throughout the enterprise. New regulations and pressure to reduce costs have increased the importance of effective data governance. Governance eliminates the cost of maintaining and archiving bad, unneeded, or inaccurate data. These costs grow as the volume of data grows. Governance also reduces the legal risks associated with unmanaged or inconsistently managed information.
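The kind of catalog a governance program maintains, recording where each dataset comes from, who owns it, and whether it can be trusted, can be sketched in a few lines. This is a toy illustration only; the dataset names, owners, and the `certified` flag are assumptions made for the example, not part of any real governance tool.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """One entry in a simple governance catalog (illustrative only)."""
    name: str           # e.g., "customer_master"
    source_system: str  # where the data originate
    owner: str          # who is accountable for data quality
    certified: bool     # has governance certified the data as trustworthy?

# A toy catalog answering "where do the data come from, and who owns them?"
catalog = [
    DatasetRecord("customer_master", "CRM", "Sales Ops", True),
    DatasetRecord("web_clickstream", "Web logs", "Marketing", False),
]

def trusted_datasets(catalog):
    """Return only the datasets that governance has certified."""
    return [d.name for d in catalog if d.certified]

print(trusted_datasets(catalog))  # ['customer_master']
```

A real program would add quality rules, audit trails, and retention policies, but the core idea is the same: data users consult the catalog rather than guessing whether data can be trusted.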
Three industries that depend on data governance to comply with regulations or reporting requirements are the following:
Master data is the term used to describe business-critical information on customers, products and services, vendors, locations, employees, and other things needed for operations and business transactions. Master data are fundamentally different from the high volume, velocity, and variety of big data and traditional data. For example, when a customer applies for automobile insurance, data provided on the application become the master data for that customer. In contrast, if the customer’s vehicle has a device that sends data about his or her driving behavior to the insurer, those machine-generated data are transactional or operational, but not master data.
Data are used in two ways―both depend on high-quality trustworthy data:
Master data are typically quite stable and are stored in a number of different systems spread across the enterprise. Master data management (MDM) links and synchronizes all critical data from those disparate systems into one file, called a master file, that provides a common point of reference. MDM solutions can be complex and expensive, which puts most of them out of reach for small and medium companies. Vendors have addressed this challenge by offering cloud-managed MDM services. For example, in 2013, Dell Software launched its next-generation Dell Boomi MDM. Dell Boomi provides MDM, data management, and data quality services (DQS)―all 100% cloud-based with near real-time synchronization.
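The linking-and-synchronizing step that MDM performs can be sketched in a few lines. This is a toy illustration, not the Dell Boomi product; the record fields and the "most recent value wins" survivorship rule are assumptions made for the example.

```python
# Two records for the same customer held in disparate systems.
# Field names and timestamps are illustrative assumptions.
crm_record     = {"customer_id": "C100", "name": "J. Smith",
                  "email": "jsmith@old.example.com", "updated": 2019}
billing_record = {"customer_id": "C100", "name": "Jane Smith",
                  "email": "jane.smith@example.com", "updated": 2023}

def merge_records(*records):
    """Toy survivorship rule: for each field, keep the value supplied
    by the most recently updated record."""
    master = {}
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if value is not None:
                master[field] = value  # later (newer) records overwrite older ones
    return master

# The consolidated "golden record" that would go into the master file.
master_file_entry = merge_records(crm_record, billing_record)
print(master_file_entry["email"])  # jane.smith@example.com
```

Real MDM products add matching logic (to decide which records describe the same customer), data quality checks, and ongoing synchronization back to the source systems.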
Data governance and MDM manage the availability, usability, integrity, and security of data used throughout the enterprise. Strong data governance and MDM are needed to ensure data are of sufficient quality to meet business needs. The characteristics and consequences of weak or nonexistent data governance are listed in Table 2.5.
TABLE 2.5 Characteristics and Consequences of Weak or Nonexistent Data Governance and MDM
Data governance and MDM are a powerful combination. As data sources and volumes continue to increase, so does the need to manage data as a strategic asset in order to extract its full value. Making business data consistent, trusted, and accessible across the enterprise is a critical first step in customer-centric business models. With data governance, companies are able to extract maximum value from their data, specifically by making better use of opportunities that are buried within behavioral data.
Data centers and cloud computing are types of IT infrastructures or computing systems. Data center also refers to the building or facility that houses the servers and equipment. In the past, there were few IT infrastructure options. Companies owned their servers, storage, and network components to support their business applications and these computing resources were on their premises. Now, there are several choices for an IT infrastructure strategy―including cloud computing. As is common to IT investments, each infrastructure configuration has strengths, weaknesses, and cost considerations.
Traditionally, data and database technologies were kept in data centers that were typically run by an in-house IT department (Figure 2.15) and consisted of on-premises hardware and equipment that store data within an organization’s local area network.
Today, companies may own and manage their own on-premises data centers or pay for the use of their vendors’ data centers, such as in cloud computing, virtualization, and software-as-a-service arrangements (Figure 2.16).
In an on-premises data center connected to a local area network, it is easier to restrict access to applications and information to authorized, company-approved people and equipment. In the cloud, the management of updates, security, and ongoing maintenance is outsourced to a third-party cloud provider, and data are accessible to anyone with the proper credentials and an Internet connection. This arrangement can make a company more vulnerable because it exposes company data at many more entry and exit points. Here are some examples of data centers.
Because the company alone owns the infrastructure, a data center is more suitable for organizations that run many different types of applications and have complex workloads. A data center, like a factory, has limited capacity. Once it is built, the amount of storage and the workload the center can handle do not change without purchasing and installing more equipment.
Data center failures disrupt all operations regardless of who owns the data center. Here are two examples.
These outages point to the risks of maintaining the complex and sophisticated technology needed to power digital services used by millions or hundreds of millions of people.
An enterprise’s data are stored in many different or remote locations―at times creating data chaos. Some data may be duplicated so that they are quickly available at the multiple locations that need fast response. As a result, the data needed for planning, decision-making, operations, queries, and reporting are scattered or duplicated across numerous servers, data centers, devices, and cloud services. Disparate data must be unified or integrated in order for the organization to function.
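One way to unify disparate data without physically moving them is to put a thin access layer over the sources. The sketch below is illustrative only; the source systems, connector functions, and field names are assumptions made for the example.

```python
class VirtualLayer:
    """Toy single point of access over heterogeneous data sources."""
    def __init__(self):
        self._sources = {}

    def register(self, name, fetch_fn):
        # fetch_fn stands in for a connector to a remote system
        # (CRM, billing, a cloud service, etc.).
        self._sources[name] = fetch_fn

    def query(self, customer_id):
        # Callers need not know where each piece of data physically lives.
        return {name: fetch(customer_id)
                for name, fetch in self._sources.items()}

layer = VirtualLayer()
layer.register("crm",     lambda cid: {"name": "Jane Smith"})
layer.register("billing", lambda cid: {"balance": 125.0})

result = layer.query("C100")
print(result["crm"]["name"], result["billing"]["balance"])
```

The caller issues one query; the layer fans it out to the scattered sources and assembles a unified answer, which is the essence of the data virtualization approach described next.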
As organizations have transitioned to a cloud-based infrastructure, data centers have become virtualized. For example, Cisco offers data virtualization, which gives greater IT flexibility. Data virtualization involves abstracting, transforming, merging, and delivering data from disparate sources. Its main goal is to provide a single point of access to the data: by aggregating data from a wide range of sources, users can access applications without knowing exactly where the data reside. Using data virtualization methods, enterprises can respond to change more quickly and make better decisions in real time without physically moving their data, which significantly cuts costs. Cisco Data Virtualization makes it possible to:
Compared to traditional (nonvirtual) data integration and replication methods, data virtualization accelerates time to value with:
Data virtualization has led to the latest development in data centers—the software-defined data center (SDDC). An SDDC facilitates the integration of an organization’s various infrastructure silos and optimizes the use of resources, balances workloads, and maximizes operational efficiency by dynamically distributing workloads and provisioning networks. The goal of the SDDC is to decrease costs and increase agility, policy compliance, and security in deploying, operating, managing, and maintaining applications. In addition, by providing organizations with their own private cloud, SDDCs offer greater flexibility by giving organizations on-demand access to their data instead of having to request permission from their cloud provider (see Figure 2.18).
The base resources for the SDDC are computation, storage, networking, and security. Typically, the SDDC includes limited functionality of service portals, applications, OSs, VM hardware, hypervisors, physical hardware, software-defined networking, software-defined storage, a security layer, automation and management layers, catalogs, a gateway interface module, and third-party plug-ins (Figure 2.19).
It is estimated that the market for SDDCs will grow from the current level of $22 billion to more than $77 billion in the next five years. As the use of SDDCs grows at this extraordinary rate, data center managers will be called upon to scale their data centers exponentially at a moment’s notice―something that is impossible to achieve with a traditional data center infrastructure. In the SDDC, software placement and optimization decisions are based on business logic, not technical provisioning directives. This requires changes in culture, processes, structure, and technology. The SDDC isolates the application layer from the physical infrastructure layer to facilitate faster and more effective deployment, management, and monitoring of diverse applications. This is achieved by finding each enterprise application an optimal home in a public or private cloud environment or by drawing from a diverse collection of resources.
From a business perspective, moving to an SDDC is motivated by the need to improve security, align the IT infrastructure more closely with business objectives, and provision applications more quickly.
Traditional data centers had dedicated, isolated hardware, which resulted in poor utilization of resources and very limited flexibility. Second-generation virtualized data centers improved resource use by consolidating virtualized servers. By reducing the steps, and therefore the time, needed to deploy workloads, and by simplifying the definition of applications and their resource needs, the SDDC creates an even more flexible environment in which enterprise applications can be quickly reconfigured and supported to provide infrastructure as a service (IaaS). Transitioning to an SDDC enables an organization to optimize its resource usage, provide capacity on demand, improve business-IT alignment, improve the agility and flexibility of operations, and save money (Figure 2.20).
In a business world where first movers gain the advantage, IT responsiveness and agility provide a competitive edge and lead to sustainable business practices. Yet, many IT infrastructures are extremely expensive to manage and too complex to easily adapt. A common solution is cloud computing. Cloud computing is the general term for infrastructures that use the Internet and private networks to access, share, and deliver computing resources. More specifically, IBM defines cloud computing as “the delivery of on-demand computing resources—everything from applications to data centers—over the Internet on a pay-for-use basis” (IBM, 2016).
Cloud computing is the delivery of computing and storage resources as a service to end-users over a network. Cloud systems are scalable; that is, they can be adjusted to meet changes in business needs. At the extreme, the cloud’s capacity is effectively unlimited, depending on the vendor’s offerings and service plans. A drawback of the cloud is reduced control, because a third party manages it. Unless the company uses a private cloud within its network, it shares computing and storage resources with other cloud users in the vendor’s public cloud. Public clouds allow multiple clients to access the same virtualized services and utilize the same pool of servers across a public network. In contrast, private clouds are single-tenant environments with stronger security and control for regulated industries and critical data. In effect, private clouds retain all the IT security and control provided by traditional IT infrastructures with the added advantages of cloud computing.
Because the cloud is still a relatively new and evolving business model, the decision to select a cloud service provider should be approached with even greater diligence than other IT decisions. As cloud computing becomes an increasingly important part of the IT delivery model, assessing and selecting the right cloud provider becomes one of the most strategic decisions business leaders undertake. Providers are not created equal, so it is important to investigate each provider’s offerings prior to subscribing. When selecting and investing in cloud services, there are several service factors a vendor should be asked to address. These evaluation factors are listed in Table 2.6.
TABLE 2.6 Service Factors to Consider when Evaluating Cloud Vendors or Service Providers
Factors | Examples of Questions to Be Addressed |
Delays | What are the estimated server delays and network delays? |
Workloads | What is the volume of data and processing that can be handled during a specific amount of time? |
Costs | What are the costs associated with workloads across multiple cloud computing platforms? |
Security | How are data and networks secured against attacks? Are data encrypted and how strong is the encryption? What are network security practices? |
Disaster recovery and business continuity | How is service outage defined? What level of redundancy is in place to minimize outages, including backup services in different geographical regions? If a natural disaster or outage occurs, how will cloud services be continued? |
Technical expertise and understanding | Does the vendor have expertise in your industry or business processes? Does the vendor understand what you need to do and have the technical expertise to fulfill those obligations? |
Insurance in case of failure | Does the vendor provide cloud insurance to mitigate user losses in case of service failure or damage? This is a new and important concept. |
Third-party audit or an unbiased assessment of the ability to rely on the service provided by the vendor | Can the vendor show objective proof with an audit that it can live up to the promises it is making? |
The move to the cloud is also a move to vendor-managed services and cloud service agreements (CSAs). Also referred to as cloud service level agreements (SLAs), the CSA or SLA is a negotiated agreement between a company and service provider that can be a legally binding contract or an informal contract. You can review a sample CSA used by IBM by visiting http://www-05.ibm.com/support/operations/files/pdf/csa_us.pdf.
Staff experienced in managing outsourcing projects may have the necessary expertise for managing work in the cloud and policing SLAs with vendors. The goal is not building the best CSA terms, but negotiating the terms that align most closely with the business needs. For example, if a server becomes nonoperational and it does not support a critical business operation, it would not make sense to pay a high premium for reestablishing the server within one hour. On the other hand, if the data on the server support a business process that would effectively close down the business for the period of time that it was not accessible, it would be prudent to negotiate the fastest possible service in the CSA and pay a premium for that high level of service.
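The negotiation logic in this example reduces to simple arithmetic: pay a premium for fast recovery only when expected downtime losses exceed it. All figures below are hypothetical; they are chosen only to make the trade-off concrete.

```python
# Hypothetical figures for a business-critical server.
revenue_lost_per_hour = 50_000  # cost while the critical process is down
outages_per_year = 4            # expected outage events per year

def annual_cost(recovery_hours, premium_per_year):
    """Expected yearly cost = downtime losses + SLA premium."""
    return outages_per_year * recovery_hours * revenue_lost_per_hour + premium_per_year

standard_tier = annual_cost(recovery_hours=8, premium_per_year=0)        # 1,600,000
premium_tier  = annual_cost(recovery_hours=1, premium_per_year=100_000)  #   300,000

# For this critical server, the premium pays for itself. For a
# noncritical server (low revenue_lost_per_hour), it would not.
print(premium_tier < standard_tier)  # True
```

Running the same calculation with a small `revenue_lost_per_hour` shows the opposite result, which is exactly the point of aligning CSA terms with business needs rather than buying the best terms available.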
In April 2015, the Cloud Standards Customer Council (CSCC) published the Practical Guide to Cloud Service Agreements, Version 2.0, to reflect changes that have occurred since 2012 when it first published the Practical Guide to Cloud Service Level Agreements. The new guide provides a practical reference to help enterprise IT and business decision-makers analyze CSAs from different cloud service providers. The main purpose of a CSA is to set clear expectations for service between the cloud customer (buyer) and the cloud provider (seller), but CSAs should also exist between a customer and other cloud entities, such as the cloud carrier, the cloud broker, and even the cloud auditor. Although the various service delivery models, that is, IaaS, PaaS, SaaS, and so on, may have different requirements, the guide focuses on the requirements that are common across the various service models (Cloud Standards Customer Council, 2015, p. 4).
Implementing an effective management process is an important step in ensuring internal and external user satisfaction with cloud services. Table 2.7 lists the 10 steps that should be taken by cloud customers to evaluate cloud providers’ CSAs in order to compare CSAs across multiple providers or to negotiate terms with a selected provider.
TABLE 2.7 Ten Steps to Evaluate a CSA
1. Understand the roles and responsibilities of the CSA customer and provider.
2. Evaluate business-level policies and compliance requirements relevant to the CSA customer.
3. Understand service and deployment model differences.
4. Identify critical performance objectives, such as availability, response time, and processing speed, and ensure they are measurable and auditable.
5. Evaluate security and privacy requirements for customer information that has moved into the provider’s cloud and for applications, functions, and services being operated in the cloud to provide the required service to the customer.
6. Identify service management requirements such as auditing, monitoring and reporting, measurement, provisioning, change management, and upgrading/patching.
7. Prepare for service failure management by explicitly documenting cloud service capabilities and performance expectations, with remedies and limitations for each.
8. Understand the disaster recovery plan.
9. Develop a strong and detailed governance plan for the cloud services on the customer side.
10. Understand the process to terminate the CSA.
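Step 4 in the table asks that performance objectives be measurable and auditable. As an illustration, monthly availability can be computed from recorded outage minutes and compared against the promised target. The SLA target and outage figures below are hypothetical.

```python
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def measured_availability(outage_minutes):
    """Fraction of the month the service was actually up."""
    return 1 - outage_minutes / MINUTES_PER_MONTH

sla_target = 0.999  # "three nines," as promised in the hypothetical CSA

outage_minutes_this_month = 90
actual = measured_availability(outage_minutes_this_month)
met = actual >= sla_target

print(round(actual, 5), met)  # 90 outage minutes misses a 99.9% target
```

An objective written this way is auditable: both parties can recompute it from the same outage log, which is what makes remedies for missed targets enforceable.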
The cloud has greatly expanded the options for enterprise IT infrastructures because any device that accesses the Internet can access, share, and deliver data. Cloud computing is a valuable infrastructure because:
A majority of large organizations have hundreds or thousands of software licenses that support business processes, such as licenses for Microsoft Office, Oracle database management, IBM CRM (customer relationship management), and various network security software. Managing software and its licenses involves deploying, provisioning, and updating it―all of which are time-consuming and expensive. Cloud computing overcomes these problems.
Building a cloud strategy is a challenge, and moving existing applications to the cloud is stressful. Despite the business and technical benefits, the risk exists of disrupting operations or customers in the process. With the cloud, the network and WAN (wide area network) become an even more critical part of the IT infrastructure. Greater network bandwidth is needed to support the increase in network traffic. And, putting part of the IT architecture or workload into the cloud requires different management approaches, different IT skills, and knowing how to manage vendor relationships and contracts.
There is a big difference because cloud computing runs on a shared infrastructure, so the arrangement is less customized to a specific company’s requirements. A comparison to help understand the challenges is that outsourcing is like renting an apartment, while the cloud is like getting a room at a hotel.
With cloud computing, it may be more difficult to get to the root of performance problems, like the unplanned outages that occurred with Google’s Gmail and Workday’s human resources apps. The trade-off is cost versus control.
Increasing demand for faster and more powerful computers, and increases in the number and variety of applications are driving the need for more capable IT architectures.
Managers want streamlined, real-time, data-driven enterprises, yet they may face budget cuts. Sustaining performance requires the development of new business applications and analytics capabilities, which comprise the front end, and the data stores and digital infrastructure, or back end, to support them. The back end is where the data reside. The problem is that data may have to navigate through a congested IT infrastructure that was first designed decades ago. These network or database bottlenecks can quickly wipe out the competitive advantages from big data, mobility, and so on. Traditional approaches to increasing database performance―manually tuning databases, adding more disk space, and upgrading processors―are not enough when you are dealing with streaming data and real-time big data analytics. Cloud services help to overcome these limitations. Cloud services are outsourced to a third-party cloud provider who manages the updates, security, and ongoing maintenance.
At first glance, virtualization and cloud computing may appear to be quite similar. However, they are inherently different. Unlike cloud computing, which involves multiple computers or hardware devices sending data through vendor-provided networks, virtualization is the replacement of a tangible physical component with a virtual one. Each of these concepts is described and discussed in the following sections.
The cloud computing model for on-demand delivery of and access to various types of computing resources also extends to the development of business apps. Figure 2.21 shows four “as a service” (XaaS) solutions based on the concept that the resource―software, platform, infrastructure, or data―can be provided on demand regardless of geolocation. As these as-a-service solutions develop, the focus is shifting from massive technology implementation costs to business-reengineering programs that enable XaaS platforms (Fresht, 2014).
Cloud services are services made available to users on demand via the Internet from a cloud computing provider’s servers instead of being accessed through an organization’s on-premises servers. Cloud services are designed to provide easy, scalable access to applications, resources, and services, and are fully managed by a cloud services provider.
Cloud computing is often referred to as a “stack” or broad range of services built on top of each other under the name cloud. These cloud services can be defined as follows:
SaaS is a rapidly growing method of delivering software and is particularly useful for applications that involve considerable interaction between the organization and external entities but do not confer a competitive advantage, for example, e-mail and newsletters. It is also useful when an organization will need a particular type of software for a short period of time or for a specific project, and for software that is used periodically, for example, tax, payroll, or billing software. SaaS is not appropriate for applications that require fast processing of real-time data or for applications where regulation does not permit data to be hosted externally.
Other terms for SaaS are on-demand computing and hosted services. The idea is basically the same: Instead of buying and installing expensive packaged enterprise applications, users can access software applications over a network, using an Internet browser. To use SaaS, a service provider hosts the application at its data center and customers access it via a standard Web browser.
The SaaS model was developed to overcome the common challenge to an enterprise of being able to meet fluctuating demands on IT resources efficiently. It is used in many business functions, primarily customer relationship management (CRM), accounting, human resources (HR), service desk management, communication, and collaboration.
There are thousands of SaaS vendors. www.Salesforce.com is one of the most widely known SaaS providers. Other examples are Google Docs and collaborative presentation software Prezi. For instance, instead of installing Microsoft Word on your own computer, and then loading Word to create a document, you use a browser to log into Google Docs. Only the browser uses your computer’s resources.
PaaS provides a standard unified platform for developing, testing, and deploying software over the Web. This computing platform allows Web applications to be created quickly and easily without the complexity of buying and maintaining the underlying infrastructure. Without PaaS, the cost of developing some applications would be prohibitive. Examples of PaaS components include databases, Web servers, development tools, and execution runtimes. PaaS is particularly useful when multiple software developers are working on a software development project, when other external parties need to interact with the development process, or when developers want to automate testing and deployment services. It is less useful where application performance needs to be tuned to the underlying hardware and software or where an application needs to be highly portable in terms of where it is hosted. Some examples of PaaS include Microsoft Azure Service, www.Force.com, and Google App Engine.
Rather than purchasing all the components of their IT infrastructure, organizations can buy their computing resources as a fully outsourced infrastructure as a service (IaaS) on demand. Generally, IaaS can be acquired as a public or private infrastructure or a combination of the two (hybrid). A public IaaS consists of shared resources deployed on a self-service basis over the Internet, whereas a private IaaS is provided on a private network; a hybrid IaaS combines both. IaaS is useful where organizations experience significant highs and lows in demand on the infrastructure, for new or existing organizations that face budgetary constraints on hardware investment, and in situations where an organization has temporary infrastructure needs. IaaS providers you may be familiar with include Amazon Web Services (AWS) and Rackspace.
DaaS is the newest entrant into the XaaS arena. DaaS enables data to be shared among clouds, systems, apps, and so on regardless of the data source or where they are stored. Data files, including text, images, sound, and video, are made available to customers over a network, typically the Internet. DaaS makes it easier for data architects to select data from different pools, filter out sensitive data, and make the remaining data available on demand.
A key benefit of DaaS is that it transfers the risks and responsibilities associated with data management to a third-party cloud provider. Traditionally, organizations stored and managed their data within a self-contained storage system; however, as data become more complex, they are increasingly difficult and expensive to maintain under the traditional data model. Using DaaS, organizational data are readily accessible through a cloud-based platform and can be delivered to users despite organizational or geographical constraints. This model is growing in popularity as data become more complex, difficult, and expensive to maintain. Some of the most common business applications currently using DaaS are CRM and enterprise resource planning (ERP). For an example of DaaS, see IT at Work 2.3.
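The DaaS tasks mentioned earlier, selecting data from a pool, filtering out sensitive fields, and serving the remainder on demand, can be sketched as follows. The field names and the definition of "sensitive" are illustrative assumptions, not part of any particular DaaS product.

```python
# Fields that must never leave the platform (illustrative policy).
SENSITIVE_FIELDS = {"ssn", "credit_card"}

# A toy data pool; field names are assumptions for the example.
records = [
    {"customer": "C100", "city": "Austin", "ssn": "***", "credit_card": "***"},
    {"customer": "C200", "city": "Denver", "ssn": "***", "credit_card": "***"},
]

def serve_on_demand(records, fields_requested):
    """Return only the requested, non-sensitive fields for each record."""
    allowed = [f for f in fields_requested if f not in SENSITIVE_FIELDS]
    return [{f: r[f] for f in allowed if f in r} for r in records]

result = serve_on_demand(records, ["customer", "city", "ssn"])
print(result)  # ssn is filtered out before the data are delivered
```

The consumer asks for fields by name and receives only what policy allows, regardless of where the underlying records are stored.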
The various as-a-service models are used in many aspects of business. You will read how specific services, such as CRM and HR management, are being used for operational and strategic purposes in later chapters. Companies are increasingly adopting software, platform, infrastructure, and data services, and are starting to embrace mobility as a service and big data as a service, because they typically no longer have to worry about the costs of buying, maintaining, or updating their own data servers. Both hardware and human resources expenses can be cut significantly. Service arrangements all require that managers understand the benefits and trade-offs―and how to negotiate effective SLAs and CSAs. Regulations mandate that confidential data be protected regardless of whether the data are on-premises or in the cloud. Therefore, a company’s legal department needs to get involved in these IT decisions. Put simply, moving to cloud services is not simply an IT decision because the stakes around legal and compliance issues are very high.
Cloud services can advance the core business of delivering superior services to optimize business performance. Cloud can cut costs and add flexibility to the performance of critical business apps. And, it can improve responsiveness to end-consumers, application developers, and business organizations. But to achieve these benefits, there must be IT, legal, and senior management oversight because a company still must meet its legal obligations and responsibilities to employees, customers, investors, business partners, and society.
There are many types of virtualization, such as virtual storage devices, virtual desktops, virtual OSs, and virtual servers for network virtualization. You can think of virtualization as a model of a physical component built in computer code: a software program that acts in the same way as the physical component it is modeling. For example, a virtual machine is a software representation of a computer rather than an actual computer, and a virtual server sends and receives signals just like a physical one, even though it does not have its own circuitry and other physical components.
You might ask why organizations want to virtualize their physical computing and networking devices. The answer is gross underutilization of resources. Computer hardware was designed to run a single OS and a single app, which leaves most computers vastly underutilized. Virtualization is a technique that creates a virtual (i.e., nonphysical) layer that allows multiple virtual machines (VMs) to run on a single physical machine. The virtual (or virtualization) layer makes it possible for each VM to share the resources of the hardware. Figure 2.22 shows the relationship among the VMs and the physical hardware.
Just as virtual reality is not real but a software-created world, a virtual machine is a software-created computer. Technically, a virtual machine (VM) is created by a software layer, called the virtualization layer, as shown in Figure 2.22. Each VM runs its own Windows or other OS and apps, such as Microsoft Office, as if it were an actual physical computer. A VM behaves exactly like a physical computer and contains its own virtual―that is, software-based―CPU, RAM (random access memory), hard drive, and network interface card (NIC). An OS cannot tell the difference between a VM and a physical machine, nor can applications or other computers on a network. Even the VM thinks it is a “real” computer. Through virtualization, users can also pool the resources of multiple physical computers to create a single, more powerful VM.
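The resource-sharing idea behind the virtualization layer in Figure 2.22 can be modeled in a few lines: one physical host's CPU and RAM are carved into several VMs until capacity runs out. This toy model is for illustration only; real hypervisors are far more sophisticated, and the capacity numbers below are made up.

```python
class PhysicalHost:
    """Toy model of one physical machine whose resources are pooled."""
    def __init__(self, cpu_cores, ram_gb):
        self.free_cpu = cpu_cores
        self.free_ram = ram_gb
        self.vms = []

    def create_vm(self, name, cpu, ram):
        """Stand-in for the virtualization layer: carve a software-defined
        machine out of the host's pooled hardware, if capacity remains."""
        if cpu > self.free_cpu or ram > self.free_ram:
            raise RuntimeError("insufficient physical capacity")
        self.free_cpu -= cpu
        self.free_ram -= ram
        self.vms.append(name)
        return name

host = PhysicalHost(cpu_cores=16, ram_gb=64)
host.create_vm("web-vm", cpu=4, ram=16)
host.create_vm("db-vm",  cpu=8, ram=32)
print(host.vms, host.free_cpu, host.free_ram)  # ['web-vm', 'db-vm'] 4 16
```

Two workloads that would each have left a dedicated server mostly idle now share one machine, with capacity left over, which is the utilization argument for virtualization in the previous paragraph.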
Virtualization is a concept that has several meanings in IT and therefore several definitions. The major type of virtualization is hardware virtualization, which remains popular and widely used. Virtualization is often a key part of an enterprise’s disaster recovery plan. In general, virtualization separates business applications and data from hardware resources. This separation allows companies to pool hardware resources―rather than dedicate servers to applications―and assign those resources to applications as needed.
Different types of virtualization include:
Virtualization increases the flexibility of IT assets, allowing companies to consolidate IT infrastructure, reduce maintenance and administration costs, and prepare for strategic IT initiatives. Virtualization is not primarily about cost-cutting, which is a tactical reason. More importantly, for strategic reasons, virtualization is used because it enables flexible sourcing and cloud computing.
The characteristics and benefits of virtualization are as follows:
Virtualization consolidates servers, which reduces the cost of servers, makes more efficient use of data center space, and reduces energy consumption. All of these factors reduce the total cost of ownership (TCO). Over a three-year life cycle, a VM costs approximately 75% less to operate than a physical server. IT at Work 2.4 describes one example of how virtualization can help organizations provide higher levels of customer service and improve productivity.
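The 75% figure implies a simple TCO comparison. The dollar amounts below are hypothetical; only the 75% reduction comes from the text.

```python
# Hypothetical 3-year cost to operate one physical server.
physical_server_tco = 12_000

# "A VM costs approximately 75% less to operate than a physical server."
vm_tco = physical_server_tco // 4  # 3,000

# Savings from consolidating a small server fleet onto VMs.
servers_consolidated = 10
savings = servers_consolidated * (physical_server_tco - vm_tco)
print(savings)  # 90000
```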
ad hoc report
batch processing
cloud computing
cloud service agreements (CSAs)
customer-centric
data
data as a service (DaaS)
data center
data governance
data silo
database
decision support systems (DSS)
dirty data
enterprise architecture (EA)
exception report
executive information systems (EISs)
goal seeking
information
information management
information systems (ISs)
infrastructure as a service (IaaS)
IT infrastructure
IPOS
knowledge
management information systems (MIS)
master data
master data management (MDM)
master file
model
online transaction processing (OLTP)
platform as a service (PaaS)
private cloud
public cloud
real-time processing
service level agreement (SLA)
software as a service (SaaS)
software-defined data center (SDDC)
stack
structured decisions
transaction processing systems (TPS)
unstructured decisions
virtualization
virtual machine (VM)
what-if analysis
wisdom