Chapter 12
Migration Planning

For many organizations, cloud computing is a new approach to delivering information services. These organizations may have built large, complex infrastructures running a wide array of applications using on-premises data centers. Now those same organizations want to realize the advantages of cloud computing.

They can start by building new systems in the cloud. That will bring some benefits, but there are likely further advantages to migrating existing applications from on-premises data centers to the cloud. This requires methodical planning that includes the following:

  • Integrating cloud services with existing systems
  • Migrating systems and data to support a solution
  • Software license mapping
  • Network planning
  • Testing and proof-of-concept development
  • Dependency management planning

This chapter is organized around the first four focus areas. Testing and proof-of-concept development, along with dependency management planning, are discussed within the context of the other focus areas.

Integrating Cloud Services with Existing Systems

Cloud migrations are inherently about incrementally changing existing infrastructure in order to use cloud services to deliver information services. During migrations, some applications will move to the cloud, but those migrated applications may be integrated with other applications still running on premises. You will need to plan a migration carefully to minimize the risk of disrupting services while maximizing the likelihood of successfully moving applications and data to the cloud.

It helps to think of these tasks as part of a four-step migration.

  1. Assess
  2. Plan
  3. Deploy
  4. Optimize

During the assessment phase, take inventory of applications and infrastructure. Document considerations for moving each application to the cloud, including issues such as compliance, licensing, and dependencies. Not all applications are suitable for the cloud; some will be better suited for staying in the current environment. For example, a legacy application running on a mainframe that is scheduled to be removed from service within the next two years is not a good candidate for migrating to the cloud. Identify each application's dependencies on other applications and data.

During the assessment phase, consider migrating one or two applications in an effort to learn about the cloud, develop experience running applications in the cloud, and get a sense of the level of effort required to set up networking and security. This is also a good time to perform a total cost of ownership assessment so the business has an understanding of current costs and anticipated costs after migration. Finally, determine the order in which you will migrate workloads.

In the planning phase you should focus on defining the foundations of your cloud environment, including the resource organization hierarchy, identities, groups, and roles.

There are at least three ways to design a resource hierarchy. An environment-oriented hierarchy separates development, test, and production resources. A function-oriented hierarchy separates business functions into their own units, such as folders. In a granular-oriented hierarchy, additional layers of folders are added beneath the functional breakdown to represent further organizational levels.
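
As a minimal sketch of a function-oriented hierarchy with an extra granular layer, folders can be created with the gcloud CLI. The organization ID, folder names, and folder ID below are hypothetical placeholders, not values from this chapter:

    # Hypothetical organization ID and folder names for illustration only
    gcloud resource-manager folders create \
        --display-name="Finance" --organization=123456789012
    # Nest an environment folder beneath the functional folder
    # (replace FINANCE_FOLDER_ID with the numeric ID returned above)
    gcloud resource-manager folders create \
        --display-name="Finance-Prod" --folder=FINANCE_FOLDER_ID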

With regard to identities, you will need to plan how you will manage user and service account identities, including whether you will use Google Workspace (formerly G Suite) or Cloud Identity domains and whether you will integrate an existing identity provider with your Google Cloud environment. In addition to users, you will need to decide how to structure different types of roles, such as administrators for the resource hierarchy, network administration, and security.

At this stage, you will also plan your network topology and connectivity.

Next, in the deployment phase, you will decide on a deployment approach and begin to move workloads to the cloud. Fully manual deployments are simple and can get fast results, but they are best saved for proof-of-concept work only. For production workloads, it is better to use a service that can automatically replicate existing workloads to Google Cloud. Migrate for Compute Engine migrates VM-based applications, while Migrate for Anthos converts VM workloads to containers running in Google Kubernetes Engine. Cloud SQL provides replication support, which can help with migrating on-premises databases to Google Cloud. Google Cloud VMware Engine provides a managed VMware environment in Google Cloud, so existing VMware workloads can be moved without converting them to a different platform.

It is highly recommended that you use configuration management tools, such as Puppet, Ansible, Chef, or Salt, to automate configuration once you have provisioned virtual machine infrastructure. When deploying to Google Kubernetes Engine, use GKE's deployment mechanisms, such as Kubernetes Deployment manifests, rather than configuring workloads manually.

It is also recommended that you use infrastructure-as-code (IaC) tools to provision cloud resources. Both Terraform and Deployment Manager can be used for IaC.
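
As a simple illustration of the IaC approach, a Deployment Manager deployment can be created and updated from a configuration file. The deployment name and configuration file below are hypothetical placeholders:

    # config.yaml (hypothetical) describes the resources to provision
    gcloud deployment-manager deployments create migration-base --config config.yaml
    # Preview the effect of a change before applying it
    gcloud deployment-manager deployments update migration-base --config config.yaml --preview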

Consider which applications are dependent on specific data sources. Also understand the level of security required for each data source. Confidential and sensitive data will require more attention to security than public information. At this stage, you can decide how you will migrate data, for example, by using gsutil or the Google Cloud Transfer Appliance. Also, if data is being updated during the migration, you will need to develop a way to synchronize the on-premises data with the cloud data after the data has been largely migrated.
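
For example, gsutil rsync copies only new or changed objects, which can help bring the cloud copy up to date after the bulk of the data has been transferred. The local path and bucket name below are placeholders:

    # Recursively synchronize a local directory with a Cloud Storage bucket,
    # copying only objects that are new or have changed
    gsutil -m rsync -r /data/exports gs://example-migration-bucket/exports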

After the data has been migrated, you can move applications. If you are using a lift-and-shift model, then virtual machines can be migrated from on-premises to the cloud. If you want to migrate applications running on VMs to containers running on Kubernetes clusters, you will need to plan for that transition.

Finally, once data and applications are in the cloud, you can shift your focus to optimizing the cloud implementation. For example, during the optimization phase, you may add monitoring and logging to applications. You may improve reliability by using multiregional features such as global load balancing. Some third-party tools, such as ETL tools, can be replaced by GCP services, such as Cloud Dataflow. You can also consider using managed database services instead of managing your own databases.

Migrating Systems and Data to Support a Solution

Part of migration planning is determining which applications should be migrated and in what order. You also need to consider how to migrate data.

Planning for Systems Migrations

A significant amount of effort can go into understanding applications and their dependencies. During the assessment phase, document the characteristics of each application that may be migrated to the cloud. Include at least the following:

  • Consider the criticality of the system. Systems that must be available 24 hours a day, 7 days a week or risk significant adverse impact on the business are highly critical and considered Tier 1. Moderately important applications, such as batch processing jobs that can tolerate some delay or degradation in service, are Tier 2. Tier 3 applications include all others.
  • Document the production level of each application. Typical levels are production, staging, test, and development.
  • Also note whether the application is developed by a third party or in-house and the level of support available. If the third-party vendor that developed the application is out of business or if a custom application has been minimally maintained for extended periods and there is relatively little in-house expertise with the application, document that risk.
  • Consider the service-level agreements (SLAs) that the application has in place. Can this application tolerate downtime without missing SLA commitments? What compliance issues should be considered with moving this application?
  • How well documented is the application? Is there design, runtime, and architecture documentation? What troubleshooting guides are available?
  • If the application were moved to the cloud, what level of effort would be required to make it functional in the cloud? An application running on premises in a Docker container will require minimal changes, while an application running on an operating system not available in the cloud will likely be more problematic.
  • What databases are read from by this application? What databases are written to by this application? Are these high-availability databases? If so, where are failover databases located? What is the recovery time objective in the event of a failover?
  • On what other systems is the application dependent, such as identity management, messaging, monitoring, and log collection? How is the application backed up?
  • How well automated are the deployment and maintenance of the application? Are manual operations needed for normal operations?

The answers to these questions will help you determine the level of risk involved with migrating an application and the level of effort required to perform the migration. The information is also useful for understanding the dependencies between applications. If an application is moved to the cloud, it must be able to access all other systems and data upon which it depends; similarly, systems that depend on the migrated application must be able to continue to access it.

The details collected here can also inform you about tolerable downtime to perform a switchover that stops sending traffic to the on-premises solution and starts sending it to the cloud implementation of the system. If no downtime is acceptable, you will need to run two systems in parallel, one in the cloud and one on premises, before switching over. You should also carefully consider how to manage any state information that is maintained by the system; maintaining consistency in distributed systems is not trivial. If systems have scheduled maintenance windows, those can be used to perform a switchover without risking missed SLA commitments. For Tier 3 applications, a switchover may only require notifying users that the system will be unavailable for a time.

You should understand how to deploy a migrated system in the cloud. If the system's deployment is already automated, then the same processes may be used in the cloud. For example, code could be copied from an on-premises version control repository to one in the cloud. The build, test, and deployment scripts could be copied as well. If a CI/CD system like Jenkins is used, then it will need to be in place in the cloud as well. Alternatively, you could modify the deployment process to use GCP services such as Cloud Build.

Determine how you will monitor the performance of the system after migration. If an application depends on an on-premises log collection service, for example, a comparable service will be needed in the cloud. Cloud Logging would be a good option if you are open to some modification to the system.

Finally, consider any business factors that may influence the ROI of running the application in the cloud. For example, if a hardware refresh is required to continue supporting the application, or if business is being lost because the application cannot scale sufficiently in the current environment, then stakeholders may want to prioritize moving that application.

In addition to considering risks and level of effort required to migrate an application, it is important to understand how data will be moved.

Planning for Data Migration

As you plan for data migrations, you will need to consider factors related to data governance and the way that data is stored. Different types of storage require different procedures to migrate data. This section covers two scenarios: migrating object storage and migrating relational data. These are not the only ways that you might need to migrate data, but they demonstrate some of the factors that you'll need to keep in mind as you plan a data migration.

Data Governance and Data Migration

Before migrating data, you should understand any regulations that cover that data. For example, in the United States, the privacy of personal healthcare data is governed by HIPAA regulations, while in the European Union, the GDPR restricts where data on EU citizens may be stored. Businesses and organizations may have their own data classifications and data governance policies. These should be investigated and, if they exist, considered when planning data migrations.

Migrating Object Storage

Archived data, large object data, and other data that is stored on premises in an object store or filesystem storage may be migrated to Cloud Storage. In these cases, you should do the following (a brief example follows the list):

  • Plan the structure of buckets
  • Determine roles and access controls
  • Understand the time and cost of migrating data to Cloud Storage
  • Plan the order in which data will be transferred
  • Determine the method to transfer the data
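
A minimal sketch of the first two tasks, creating a bucket and granting read-only access to a group; the bucket name and group address are hypothetical placeholders:

    # Create a bucket in a chosen location
    gsutil mb -l US gs://example-archive-bucket
    # Grant the group read-only (objectViewer) access to the bucket
    gsutil iam ch group:data-readers@example.com:objectViewer gs://example-archive-bucket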

When transferring data from an on-premises data center, transferring with gsutil is a good option when the data volume is less than 10 TB and network bandwidth is at least 100 Mbps. If the volume of data is over 20 TB, the Google Transfer Appliance is recommended. When the volume of data is between 10 TB and 20 TB, consider the time needed to transfer the data at your available bandwidth. If the cost and time requirements are acceptable, use gsutil; otherwise, use the Google Transfer Appliance.
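
As a rough rule of thumb, transfer time can be estimated from data volume and sustained bandwidth, ignoring protocol overhead and contention. For example, 10 TB over a 100 Mbps link works out to roughly nine days:

    # Rough estimate: 10 TB over a 100 Mbps link, ignoring overhead
    # 10 TB ≈ 80,000,000 megabits; 80,000,000 / 100 Mbps = 800,000 seconds ≈ 9.3 days
    echo "$(( 10 * 8 * 1000 * 1000 / 100 / 86400 )) days (approximately, integer math)"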

Migrating Relational Data

When migrating relational databases to the cloud, you should consider the volume of data to migrate, but you should also understand how the database is used and any existing SLAs that may constrain your migration options.

One way to migrate a relational database is to export data from the database, transfer the data to the cloud, and then import the data into the cloud instance of the database. This option requires the database to be unavailable to users during the migration. To ensure a consistent export, the database should be locked for writes during the export operation. It can be available for read operations during the export. Once the data is available in the cloud database, database applications can be configured to point to the new database.
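
For a MySQL database, a minimal export-and-import sequence might look like the following. The instance, bucket, and database names are placeholders, and Cloud SQL imports read the export file from Cloud Storage; an actual export may need additional cleanup before import:

    # Export a consistent snapshot of the on-premises database
    mysqldump --databases sales --single-transaction > sales.sql
    # Copy the export file to Cloud Storage
    gsutil cp sales.sql gs://example-migration-bucket/sales.sql
    # Import the file into a Cloud SQL instance
    gcloud sql import sql sales-instance gs://example-migration-bucket/sales.sql --database=sales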

If a database SLA or other requirements do not allow for an export-based migration, you should consider creating a replica of the database in which the replica database is in the Google Cloud. This configuration is referred to as primary/replica or leader/follower, and in general it is the preferred migration method. Whenever there is a change to the primary or leader, the same change is made to the replica or follower instance. Once the database has synchronized the data, database applications can be configured to point to the cloud database.

Google Cloud offers the Database Migration Service for migrating PostgreSQL and MySQL databases to Cloud SQL. At the time of writing, SQL Server is not supported by the Database Migration Service, but it is expected to be supported in the future. The Database Migration Service supports both one-time lift-and-shift migrations and continuous replication. When a migration is created, a read replica is created in Cloud SQL and populated with data from the source database. When you are ready to make Cloud SQL the read/write version of your database, you promote the Cloud SQL instance; it then becomes the primary and is able to accept both reads and writes.

Software Licensing Mapping

Another task in migration planning is understanding the licenses for the software you plan to use in the cloud. Operating system, application, middleware services, and third-party tools may all have licenses.

There are a few different ways to pay for software running in the cloud. In some cases, the cost of licensing is included with cloud service charges. For example, the cost of licensing a Windows server operating system may be included in the hourly charge.

In other cases, you may have to pay for the software directly in one of two ways.

Software vendors may charge based on usage, much like cloud service pricing. For example, you may be charged for each hour the software is in use on an instance. This is called the pay-as-you-go model or metered model.

In other cases, licensing may be based on some resource metric, such as number of cores on a physical server. Even though you have a license to run software on premises, do not assume that the license applies to cloud use. Vendors may have restrictions that limit the use of a license to on-premises infrastructures. It is important to verify if the license can be moved to the cloud or converted to a cloud license.

You may have an existing license that can be used in the cloud, or you may purchase a license from the vendor specifically for use in the cloud. This is sometimes referred to as the bring-your-own-license (BYOL) model. Watch for licenses that are based on physical cores or physical processors, such as those for Microsoft SharePoint or SQL Server. In these cases, you may need to use sole-tenant nodes in Google Cloud. Google has outlined the following steps for bringing an existing license to Google Cloud:

  1. Prepare images according to license requirements.
  2. Activate licenses.
  3. Import virtual disk files and create images.
  4. Create sole-tenant node templates.
  5. Create sole-tenant node groups.
  6. Provision VMs on the node groups with the virtual disk files.
  7. Track license usage.
  8. Report license usage to your vendor.

For additional details, see cloud.google.com/compute/docs/nodes/bringing-your-own-licenses. You can install the IAP Desktop tool (github.com/GoogleCloudPlatform/iap-desktop/releases/tag/2.21.681) to help monitor and report on your license usage on sole-tenant nodes.
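
A minimal sketch of steps 4 through 6 using the gcloud CLI; the template, group, image, and VM names, the zone, and the node type below are hypothetical placeholders:

    # Step 4: Create a sole-tenant node template
    gcloud compute sole-tenancy node-templates create byol-template \
        --node-type=n1-node-96-624 --region=us-central1
    # Step 5: Create a node group from the template
    gcloud compute sole-tenancy node-groups create byol-group \
        --node-template=byol-template --target-size=1 --zone=us-central1-a
    # Step 6: Provision a VM on the node group from an imported image
    gcloud compute instances create licensed-vm \
        --zone=us-central1-a --node-group=byol-group --image=imported-byol-image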

Also consider the fact that licenses for software that you have may be used in the cloud but may not map neatly to new usage patterns. For example, a single site license for an on-premises application may not be appropriate for an application that will run in multiple regions in the cloud.

As you analyze your licenses, you may find that you have more than one option when moving applications to the cloud. For example, you may have the option to bring your own license or pay as you go. You should consider how the application will be used and assess which of the options is better for your organization.

Network Planning

While much of this chapter has focused on application and data migration, it is important to consider networking as well. If you are planning to migrate completely to the cloud and eventually stop using on-premises solutions, you will need to plan your GCP network configuration as well as the transition period during which you will have both on-premises and cloud-based infrastructure. If you intend to continue to use on-premises infrastructure along with cloud infrastructure in a hybrid cloud scenario, you will need to plan your GCP network as well as long-term connectivity between the cloud and your on-premises network.

Network migration planning can be broken down into four broad categories of planning tasks.

  • Virtual private clouds (VPCs)
  • Access controls
  • Scaling
  • Connectivity

Planning for each of these will help identify potential risks and highlight architecture decisions that need to be made.

Virtual Private Clouds

Virtual private clouds are collections of networking components and configurations that organize your infrastructure in the Google Cloud. The components of VPCs include the following:

  • Networks
  • Subnets
  • IP addresses
  • Routes
  • Virtual private networks (VPNs)

VPC infrastructure is built on Google's software-defined networking platform. Google manages the underlying software and hardware that implement VPCs.

Networks use private RFC 1918 address spaces, which are drawn from one of three ranges.

  • 10.0.0.0 to 10.255.255.255, with 16,777,216 available addresses
  • 172.16.0.0 to 172.31.255.255, with 1,048,576 available addresses
  • 192.168.0.0 to 192.168.255.255, with 65,536 available addresses

Within networks, you can create subnets to group resources by region. Google networking can manage subnets automatically, or you can manage your own subnets. Automatically managed subnets are useful when you want a subnet in each region and you are not connecting to other networks. In general, it is recommended that you manage your own subnets, which are called custom mode networks. These provide you with complete control over which regions have subnets and the address ranges of each subnet.
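
For example, a custom mode network with a single regional subnet can be created as follows; the network name, subnet name, region, and address range are illustrative placeholders:

    # Create a custom mode VPC network
    gcloud compute networks create migration-vpc --subnet-mode=custom
    # Add a subnet in a specific region with an address range you control
    gcloud compute networks subnets create app-subnet \
        --network=migration-vpc --region=us-central1 --range=10.10.0.0/20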

VM instances can have both internal and external IP addresses. The internal IP address is used for traffic within the VPC network. The external IP address is optional, and it is used to communicate with external networks. External addresses may be ephemeral or static. As part of your planning process, you should map out the use of external IP addresses and if they should be ephemeral or static. Static IP addresses are used when you need a consistent long-term IP address, for example, for a public website or API endpoint.
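
For example, a static external address can be reserved ahead of time and later assigned to a VM or load balancer; the address name and region below are placeholders:

    # Reserve a regional static external IP address
    gcloud compute addresses create web-frontend-ip --region=us-central1
    # List reserved addresses to confirm the reservation
    gcloud compute addresses list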

Routes are rules for forwarding traffic. Some routes are generated automatically when VPCs are created, but you can also create custom routes. Routes between subnets are created by default. You may want to create custom routes if you need to implement many-to-one NAT or transparent proxies.
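
As an illustration of a custom route for many-to-one NAT, the following sends outbound traffic through a NAT gateway instance; the network, instance, and zone names are hypothetical, and a real deployment would usually scope the route with instance tags:

    # Route outbound traffic through a NAT gateway instance
    gcloud compute routes create nat-route \
        --network=migration-vpc --destination-range=0.0.0.0/0 \
        --next-hop-instance=nat-gateway --next-hop-instance-zone=us-central1-a \
        --priority=800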

VPNs are provided by the Cloud VPN service, which links your Google Cloud VPC to an on-premises network or to another cloud provider, such as AWS or Azure. Cloud VPN uses an IPsec tunnel to secure transmissions. Cloud VPN is currently available as Classic VPN and HA VPN, but some features of Classic VPN are being deprecated on March 31, 2022, and Google encourages customers to use HA VPN. HA VPN provides 99.99 percent availability by using two interfaces and two IP addresses.

Each Cloud VPN tunnel can sustain up to 3 Gbps. If this is not sufficient, you may need to plan for additional tunnels or gateways, or use a Cloud Interconnect connection instead.
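
A minimal sketch of creating an HA VPN gateway; the gateway and network names and the region are placeholders, and the remaining steps, such as creating the peer gateway, tunnels, and Cloud Router configuration, are omitted:

    # Create an HA VPN gateway (two interfaces) in a region
    gcloud compute vpn-gateways create on-prem-ha-vpn \
        --network=migration-vpc --region=us-central1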

Network Access Controls

You should plan how you will control access to network management functions using IAM roles. Some networking-specific roles that you may want to consider assigning are as follows:

  • Network Admin: For full permissions to manage network resources
  • Network Viewer: For read-only access to network resources
  • Security Admin: For managing firewall rules and SSL certificates
  • Compute Instance Admin: For managing VM instances

Firewalls are used to control the flow of traffic between subnets and networks. You should plan what types of traffic will be allowed to enter and leave each subnet. For firewall rule purposes, you will need to define the traffic protocol, such as TCP, UDP, or ICMP; whether the rule applies to incoming (ingress) or outgoing (egress) traffic; and the priority of the rule. Higher-priority rules take precedence over lower-priority rules; in Google Cloud, a lower numeric value indicates a higher priority.
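
As a sketch of both kinds of controls, the following grants the Network Admin role to a group and creates an ingress firewall rule; the project ID, group address, and network name are hypothetical placeholders:

    # Grant the Network Admin role to a network operations group
    gcloud projects add-iam-policy-binding example-project \
        --member="group:netops@example.com" --role="roles/compute.networkAdmin"
    # Allow inbound HTTPS traffic to instances in the network
    gcloud compute firewall-rules create allow-https \
        --network=migration-vpc --direction=INGRESS --allow=tcp:443 \
        --source-ranges=0.0.0.0/0 --priority=1000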

Scaling

Planning for network scalability may entail the use of autoscaling in managed instance groups and, if you have large static objects that are referenced from around the globe, the use of a content distribution network.

Cloud load balancing can distribute traffic and workloads globally using a single anycast IP address.

Application layer load balancing can be done using HTTP(S) layer 7 load balancing, which can distribute traffic across regions, routing requests to the nearest healthy backend. Traffic can also be routed based on content type. For example, video may be distributed globally using Cloud CDN, Google Cloud's content delivery network.

Network load balancing occurs at layer 4. This type of load balancing is useful for dealing with spikes in TCP and IP traffic, load balancing additional protocols, or supporting session affinity.

Also consider how you will manage DNS records for services. Cloud DNS is a Google-managed DNS service that can be used to make your services globally available. Cloud DNS is designed to provide high-availability and low-latency DNS services.
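
For example, a managed public zone and a record pointing at a service endpoint can be created as follows; the zone name, domain, and IP address are placeholders:

    # Create a managed public DNS zone
    gcloud dns managed-zones create example-zone \
        --dns-name="example.com." --description="Public zone for migrated services"
    # Add an A record pointing at a load balancer's static IP
    gcloud dns record-sets create www.example.com. \
        --zone=example-zone --type=A --ttl=300 --rrdatas=203.0.113.10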

Connectivity

If you are maintaining a hybrid cloud, you will want to plan for networking between the Google Cloud and your on-premises data center.

You may want to consider using Cloud Interconnect, which routes traffic directly to Google Cloud without traversing the public internet. Cloud Interconnect is available as Dedicated Interconnect and Partner Interconnect. Dedicated Interconnect provides a direct physical connection between Google's network and your on-premises network. Partner Interconnect provides connectivity through a supported service provider that can physically connect to both your on-premises network and Google's network.

See the Google documentation (cloud.google.com/interconnect/pricing) for details on the capacities and costs of each of these options.

Summary

Migration planning requires broad scope planning that ranges from business service considerations to network design planning. It should include planning for integration with existing systems. This itself is a broad topic within migration planning and is best addressed using a four-step plan that includes assessment, planning, deployment, and optimization.

Before migrating systems and data, it is important to understand dependencies between systems, service-level commitments, and other factors that contribute to the risk of migrating a service. Generally, it is recommended to migrate data first and then migrate applications. Migrating databases takes additional planning to avoid data loss or disruption in services during the migration. Review software licenses during migration planning and determine how you will license software in the cloud. Options include bring-your-own-license, pay-as-you-go, or including the license with other cloud charges. Network planning includes planning virtual private clouds, network access controls, scalability, and connectivity.

Exam Essentials

  • Cloud migrations are inherently about incrementally changing existing infrastructure to use cloud services to deliver information services. You will need to plan a migration carefully to minimize the risk of disrupting services while maximizing the likelihood of successfully moving applications and data to the cloud. For many organizations, cloud computing is a new approach to delivering information services. These organizations may have built large, complex infrastructures running a wide array of applications using on-premises data centers. Now those same organizations want to realize the advantages of cloud computing.
  • Know the four stages of migration planning: assessment, planning, deployment, and optimization. During the assessment phase, take inventory of applications and infrastructure. During the planning stage, you will define fundamental aspects of your cloud services, including the structure of the resource hierarchy as well as identities, roles, and groups. You will also migrate one or two applications in an effort to learn about the cloud and develop experience running applications in the cloud. In the deployment phase, data and applications are moved in a logical order that minimizes the risk of service disruption. Finally, once data and applications are in the cloud, you can shift your focus to optimizing the cloud implementation.
  • Understand how to assess the risk of migrating an application. Considerations include service-level agreements, criticality of the system, availability of support, and quality of documentation. Consider other systems on which the migrating system depends. Consider other applications that depend on the migrating system. Watch for challenging migration operations, such as performing a database replication and then switching to a cloud instance of a database.
  • Understand how to map licensing to the way you will use the licensed software in the cloud. Operating system, application, middleware services, and third-party tools may all have licenses. There are a few different ways to pay for software running in the cloud. In some cases, the cost of licensing is included with cloud service charges. In other cases, you may have to pay for the software directly in one of two ways. You may have an existing license that can be used in the cloud, known as the BYOL model, or you may purchase a license from the vendor specifically for use in the cloud. In other cases, software vendors will charge based on usage, much like cloud service pricing.
  • Know the steps involved in planning a network migration. Network migration planning can be broken down into four broad categories of planning tasks: VPCs, access controls, scaling, and connectivity. Planning for each of these will help identify potential risks and highlight architecture decisions that need to be made. Consider how you will use networks, subnets, IP addresses, routes, and VPNs. Plan for linking on-premises networks to the Google Cloud using either VPNs or Cloud Interconnect.

Review Questions

  1. Your midsize company has decided to assess the possibility of moving some or all of its enterprise applications to the cloud. As the CTO, you have been tasked with determining how much it would cost and what the benefits of a cloud migration would be. What would you do first?
    1. Take inventory of applications and infrastructure, document dependencies, and identify compliance and licensing issues.
    2. Create a request for proposal from cloud vendors.
    3. Discuss cloud licensing issues with enterprise software vendors.
    4. Interview department leaders to identify their top business pain points.
  2. You are working with a colleague on a cloud migration plan. Your colleague would like to start migrating data. You have completed an assessment but no other preparation work. What would you recommend before migrating data?
    1. Migrating applications
    2. Conducting a pilot project
    3. Migrating all identities and access controls
    4. Redesigning relational data models for optimal performance
  3. As the CTO of your company, you are responsible for approving a cloud migration plan for services that include a wide range of data. You are reviewing a proposed plan that includes a data migration plan. Network and security plans are being developed in parallel and are not yet complete. What should you look for as part of the data migration plan?
    1. Database configuration details, including IP addresses and port numbers
    2. Specific firewall rules to protect databases
    3. An assessment of data classifications and regulations relevant to the data to be migrated
    4. A detailed description of current backup operations
  4. A client of yours is prioritizing applications to move to the cloud. One system written in Java is a Tier 1 production system that must be available 24/7; it depends on three Tier 2 services that are running on premises, and two other Tier 1 applications depend on it. Which of these factors is least important from a risk assessment perspective?
    1. The application is written in Java.
    2. The application must be available 24/7.
    3. The application depends on three Tier 2 services.
    4. Two other Tier 1 applications depend on it.
  5. As part of a cloud migration, you will be migrating a relational database to the cloud. The database has strict SLAs, and it should not be down for more than a few seconds a month. The data stores approximately 500 GB of data, and your network has 100 Gbps bandwidth. What method would you consider first to migrate this database to the cloud?
    1. Use a third-party backup and restore application.
    2. Use the MySQL data export program and copy the export file to the cloud.
    3. Set up a replica of the database in the cloud, synchronize the data, and then switch traffic to the instance in the cloud.
    4. Transfer the data using the Google Transfer Appliance.
  6. Your company is running several third-party enterprise applications. You are reviewing the licenses and find that they are transferrable to the cloud, so you plan to take advantage of that option. This form of licensing is known as which one of the following?
    1. Compliant licensing
    2. Bring-your-own-license
    3. Pay-as-you-go license
    4. Metered pricing
  7. Your company is running several third-party enterprise applications. You are reviewing the licenses and find that they are not transferrable to the cloud. You research your options and see that the vendor offers an option to pay based on your level of use of the application in the cloud. What is this option called?
    1. Compliant licensing
    2. Bring-your-own-license
    3. Pay-as-you-go license
    4. Incremental payment licensing
  8. You have been asked to brief executives on the networking aspects of the cloud migration. You want to begin at the highest level of abstraction and then drill down into lower-level components. What topic would you start with?
    1. Routes
    2. Firewalls
    3. VPCs
    4. VPNs
  9. You have created a VPC in Google Cloud, and subnets were created automatically. What range of IP addresses would you not expect to see in use with the subnets?
    1. 10.0.0.0 to 10.255.255.255
    2. 172.16.0.0 to 172.31.255.255
    3. 192.168.0.0 to 192.168.255.255
    4. 201.1.1.0 to 201.2.1.0
  10. During migration planning, you learn that traffic to the subnet containing a set of databases must be restricted. What mechanism would you plan to use to control the flow of traffic to a subnet?
    1. IAM roles
    2. Firewall rules
    3. VPNs
    4. VPCs
  11. During migration planning, you learn that some members of the network management team will need the ability to manage all network components, but others on the team will only need read access to view the state of the network. What mechanism would you plan to use to control the user access?
    1. IAM roles
    2. Firewall rules
    3. VPNs
    4. VPCs
  12. Executives in your company have decided that the company should not route its GCP-only traffic over public internet networks. What Google Cloud service would you plan to use to geographically distribute the workload of an enterprise application?
    1. Global load balancing
    2. Simple network management protocol
    3. Content delivery network
    4. VPNs
  13. Executives in your company have decided to expand operations from just North America to Europe as well. Applications will be run in several regions. All users should be routed to the nearest healthy server running the application they need. What Google Cloud service would you plan to use to meet this requirement?
    1. Global load balancing
    2. Cloud Interconnect
    3. Content delivery network
    4. VPNs
  14. Executives in your company have decided that the company should expand its service offerings to a global market. Your company distributes educational video content online. Maintaining low latency is a top concern. What type of network service would you expect to use to ensure low-latency access to content from around the globe?
    1. Routes
    2. Firewall rules
    3. Content delivery network
    4. VPNs