Chapter 6
Designing Networks

The Google Cloud Professional Architect exam includes questions about networking, especially around creating virtual private clouds in GCP and linking on-premises data centers to GCP resources using VPNs. Load balancing within regions and globally is also covered on the exam. This chapter covers all of these topics from an architecture perspective.

As an architect you should be familiar with the basic abstractions used to design and implement networking. In particular, you should keep in mind the seven-layer Open Systems Interconnection (OSI) Network model whenever designing networks or diagnosing problems with networks.

The seven-layer model consists of the following:

  • Layer 1, Physical, represents the physical base of the network, including cables, radio frequency, voltages, and other aspects of the physical implementation of networking.
  • Layer 2, Data Link, handles data transfer between two nodes in a network as well as error correction for the physical layer. Layer 2 has two sublayers, Media Access Control (MAC) and Logical Link Control (LLC). Switches often, but not always, operate at layer 2.
  • Layer 3, Network, manages packet forwarding using routers. The IP protocol exists at layer 3.
  • Layer 4, Transport, controls data transfer between systems. This layer manages how much data is sent and where it is sent. The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) operate at layer 4.
  • Layer 5, Session, manages sessions or interactions over time between applications. The handshake portion of Transport Layer Security (TLS) operates at layer 5.
  • Layer 6, Presentation, manages the mapping from application representations to network representations. Encryption and decryption of network traffic is performed at layer 6.
  • Layer 7, Application, is the top layer of the OSI network model. Layer 7 provides functionality for applications, like web browsers, to access lower-level network services.

Architects often need to reason about issues at layers 3, 4, and 7, such as when designing subnets, implementing firewall rules, or controlling traffic to applications using a web application firewall.

IP Addressing, Firewall Rules, and Routers

Architects are expected to understand the building blocks of networking. These include IP addresses and classless inter-domain routing (CIDR) block notation, firewall rules to control the flow of traffic, and routers and internetwork communications.

IP Address Structure

When we talk about networking in cloud environments, we are talking about IP networking. An IP network is a set of devices that can communicate with each other directly using internet protocols.

Networks can be partitioned into multiple subsets of devices known as subnets. Subnets allow for more efficient flow of traffic in large networks.

An Internet Protocol (IP) address is an identifier for a device, virtual device, or service on a network using the IP protocol. IP addresses are designed to support the forwarding of network packets along routes from a source to a destination.

IP addresses can be specified using either IPv4 or IPv6. IPv4 uses four octets, such as 192.168.20.10. IPv6 uses eight 16-bit blocks, such as FE80:0000:0000:0000:0202:B3FF:FE1E:8329. An IPv4 address is 32 bits long, and an IPv6 address is 128 bits long. For the purposes of the exam, understanding IPv4 addressing should be sufficient.

When you create a subnet, you will have to specify a range of IP addresses. Any resource that needs an IP address on that subnet will receive an IP address in that range. Each subnet in a VPC should have distinct, nonoverlapping IP ranges.

You can specify an IP range using Classless Inter-Domain Routing (CIDR) notation. This consists of an IPv4 address followed by a /, followed by an integer in the range 0 to 32. The integer specifies the number of bits used to identify the network; the remaining bits are used to determine the host address.

For example, if you specified 172.16.0.0/12, the first 12 bits of the IP address would identify the network; the /12 represents the subnet mask. The remaining 20 bits are used for host addresses. Since there are 20 bits available, there are 2^20 = 1,048,576 addresses in that range, of which 1,048,574 can be assigned to hosts once the network and broadcast addresses are reserved.
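
To make this concrete, here is a minimal sketch of how a CIDR range is supplied when creating a subnet with the gcloud CLI. The network, subnet, and region names are hypothetical.

    # Create a subnet whose addresses come from 10.140.0.0/20
    # (32 - 20 = 12 host bits, so 2^12 = 4,096 addresses).
    gcloud compute networks subnets create example-subnet \
        --network=example-vpc \
        --region=us-central1 \
        --range=10.140.0.0/20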

Public vs. Private Addressing

Within IP networks we have the option of using private IP addresses, public IP addresses, or both. Private IP addresses are non-internet routable addresses that are reserved for internal use. Public IP addresses are used when we want to communicate with the internet or internet-routable addresses.

A single public IP address can be shared by many devices with private IP addresses using a process known as network address translation (NAT). NAT reduces the number of public IP addresses needed in a network.
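
In GCP, managed NAT is provided by the Cloud NAT service, which runs on a Cloud Router. The following is a minimal sketch using hypothetical resource names.

    # Cloud NAT is configured on a Cloud Router in the region.
    gcloud compute routers create example-router \
        --network=example-vpc --region=us-central1
    gcloud compute routers nats create example-nat \
        --router=example-router --region=us-central1 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges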

The Internet Engineering Task Force (IETF) has defined three IP address ranges as private addresses:

  • 10.0.0.0/8: 10.0.0.0 to 10.255.255.255
  • 172.16.0.0/12: 172.16.0.0 to 172.31.255.255
  • 192.168.0.0/16: 192.168.0.0 to 192.168.255.255

The 10.0.0.0/8 range has 16,777,216 addresses, the 172.16.0.0/12 range has 1,048,576 addresses, and the 192.168.0.0/16 range has 65,536 addresses.

Firewall Rules

Firewall rules control network traffic by blocking or allowing traffic into (ingress) or out of (egress) a network, subnet, or device. Two implied firewall rules are defined with VPCs: one blocks all incoming traffic, and the other allows all outgoing traffic. You can change this behavior by defining firewall rules with higher priority.

Firewall rules have a priority specified by an integer from 0 to 65535, with 0 being the highest priority and 65535 being the lowest. The two implied firewall rules have an implied priority of 65535, so you can override those by specifying a rule with a lower number that has a higher priority.

In addition to the two implied rules, which cannot be deleted, there are four default rules assigned to the default network in a VPC. These rules are as follows:

  • default-allow-internal allows ingress connections for all protocols and ports among instances in the network.
  • default-allow-ssh allows ingress connections on TCP port 22 from any source to any instance in the network. This allows users to SSH into Linux servers.
  • default-allow-rdp allows ingress connections on TCP port 3389 from any source to any instance in the network. This lets users use Remote Desktop Protocol (RDP) developed by Microsoft to access Windows servers.
  • default-allow-icmp allows ingress ICMP traffic from any source to any instance in the network.

All of these rules have a priority of 65534, the second-lowest priority.

Firewall rules have several attributes in addition to priority. They are as follows:

  • The direction of traffic: This is either ingress or egress.
  • The action: This is either allow or deny traffic.
  • The target: This defines the instances to which the rule applies.
  • The source or destination: The source applies to ingress rules; the destination applies to egress rules.
  • A protocol specification: This includes TCP, UDP, or ICMP, for example.
  • A port number: A communication endpoint associated with a process.
  • An enforcement status: This allows network administrators to disable a rule without having to delete it.

Firewall rules are global resources that are assigned to VPCs, so they apply to all VPC subnets in all regions. Since they are global resources, they can be used to control traffic between regions in a VPC.
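
The following sketch shows how these attributes come together in a single rule; the rule name, network, and tag are hypothetical.

    # Allow HTTP/HTTPS ingress to instances tagged web-server,
    # overriding the implied deny-ingress rule (priority 65535).
    gcloud compute firewall-rules create allow-web-ingress \
        --network=example-vpc \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:80,tcp:443 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=web-server \
        --priority=1000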

Cloud Router

A router is a device or service that connects multiple networks and enables communication between those networks. Routers may be implemented as physical devices, such as a rack-mounted appliance in a data center, or as a software-defined network service, which is the case with Google Cloud's Cloud Router.

Cloud Router uses the Border Gateway Protocol (BGP) to advertise IP address ranges to other networks and builds custom dynamic routes based on IP address information it receives from other BGP peers. Cloud Router provides routing services for the following:

  • Dedicated Interconnect
  • Partner Interconnect
  • HA VPN
  • Supported router appliances

By default, Cloud Router advertises only subnet routes. You can, however, configure custom route advertisements to advertise only some subnet routes or to advertise additional IP ranges.
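
As a sketch, a Cloud Router can be created and switched to custom advertisements as follows; the router name, ASN, and ranges are hypothetical.

    # Create a Cloud Router, then advertise all subnets plus one
    # additional custom range to BGP peers.
    gcloud compute routers create example-bgp-router \
        --network=example-vpc --region=us-central1 --asn=65001
    gcloud compute routers update example-bgp-router \
        --region=us-central1 \
        --advertisement-mode=CUSTOM \
        --set-advertisement-groups=ALL_SUBNETS \
        --set-advertisement-ranges=192.168.100.0/24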

Cloud Armor

Cloud Armor is a layer 7 web application firewall (WAF) designed to mitigate distributed denial-of-service (DDoS) attacks and prevent other unwanted access to applications, such as cross-site scripting and SQL injection attacks. The preconfigured rules include protection against the Open Web Application Security Project (OWASP) Top 10 threats.

Cloud Armor is configured using security policies that are designed to scrub incoming requests from common layer 7 attacks. Some policies are available preconfigured, and you can manually configure policies as well. In addition to rules defined using the Cloud Armor custom rules language, you can also specify named IP lists to allow traffic only from trusted third parties.
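
As a sketch of how security policies are defined and attached, the following uses hypothetical policy and backend service names; the preconfigured expression shown is one of several that Cloud Armor provides.

    # Create a policy, add a rule that blocks SQL injection attempts,
    # and attach the policy to a backend service.
    gcloud compute security-policies create example-policy \
        --description="Block common layer 7 attacks"
    gcloud compute security-policies rules create 1000 \
        --security-policy=example-policy \
        --expression="evaluatePreconfiguredExpr('sqli-stable')" \
        --action=deny-403
    gcloud compute backend-services update example-backend \
        --security-policy=example-policy --global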

Virtual Private Clouds

VPCs are like a network in a data center; they are network-based organizational structures for controlling access to GCP resources. VPCs organize Compute Engine instances, App Engine Flexible instances, and GKE clusters. They are global resources, so a single VPC can span multiple regions.

A VPC is associated with a project or an organization, and projects can have multiple VPCs. Resources within a VPC can communicate with other resources in the same VPC, subject to firewall rules. Resources can also communicate with Google APIs and services.

VPC Subnets

In Google Cloud, a subnet is a regional resource that has a defined range of IP addresses associated with it. It should be noted that IP ranges are defined for subnets; virtual private clouds (VPCs) do not have IP address ranges associated with them.

A VPC can have subnets in each region to provide private addresses to resources in the region. Since the subnets are part of a larger network, they must have distinct IP address ranges. For example, a VPC with three subnets might use the ranges 10.140.0.0/20, 10.140.16.0/20, and 10.140.32.0/20 for the subnets. When a VPC is created, it can automatically create subnets in each region, or you can specify custom subnet definitions for each region that should have a subnet. If subnets are created automatically, their IP ranges are based on the region; all automatic subnets are drawn from the 10.128.0.0/9 block, with each region receiving a /20 range.

VPCs use routes to determine how to route traffic within the VPC and across subnets. Depending on the VPC's dynamic routing configuration, routers in the VPC learn either regional routes only or multiregional, global routes.

VPC networks come in two modes: auto mode and custom mode. The default network, an auto mode VPC, is created automatically when you enable a project, though this behavior can be turned off using an organization policy constraint. An auto mode VPC creates a subnet in every region, all drawn from the 10.128.0.0/9 range. Custom mode is used for production environments when you want full control of subnetting. Also note that GCP reserves four IP addresses from each subnet and that the smallest subnet allowed in Google Cloud is a /29.
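
A minimal sketch of creating a custom mode VPC (the hypothetical example-vpc used in earlier sketches) with two regional subnets using the aligned ranges from the earlier example:

    # Custom mode: no subnets are created until you define them.
    gcloud compute networks create example-vpc --subnet-mode=custom
    gcloud compute networks subnets create subnet-west \
        --network=example-vpc --region=us-west1 --range=10.140.0.0/20
    gcloud compute networks subnets create subnet-east \
        --network=example-vpc --region=us-east1 --range=10.140.16.0/20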

Shared VPC

Sometimes, it is necessary for resources in different projects to communicate. For example, a data warehouse project may need to access a transactional database in an e-commerce project to load e-commerce data into the data warehouse. For organizational reasons, it may be preferable to keep the e-commerce and data warehouse systems in separate projects.

In general, Google recommends using a single VPC network when it meets your needs because a single VPC network is easier to manage than the alternatives. However, if multiple teams have their own projects, then a single Shared VPC host project with a single Shared VPC network can meet network access requirements without much additional management overhead. With this configuration, resources in those projects can communicate across project boundaries using private IP addresses.

A Shared VPC is a way to connect resources from multiple projects to a common VPC network using private IP addresses. Shared VPCs have one host project and one or more service projects. The host project and service projects must be in the same organization, with one exception: during migrations, a service project may temporarily be in a different organization.

The VPC networks in host projects are known as Shared VPC networks. A Shared VPC network is defined in the host project and centrally shared. There are two options for sharing subnets: share all subnets in the host project, including those created in the future, or share only individually specified subnets.

Organization policy constraints can be used to prevent accidental deletion of host projects, restrict where nonhost projects can be attached as service projects, and constrain which subnets in the host project service projects can use.

Another advantage of Shared VPCs is that you can separate project and network management duties. For example, some administrators may be given privileges to manage network resources, such as firewall rules, while others are given privileges to manage project resources, like instances. A Shared VPC is also one way to allow traffic to flow between instances in different projects.
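
Setting up a Shared VPC involves enabling a host project and attaching service projects; the following is a sketch with hypothetical project IDs, assuming the caller has Shared VPC Admin privileges.

    # Designate the host project, then attach a service project.
    gcloud compute shared-vpc enable example-host-project
    gcloud compute shared-vpc associated-projects add example-service-project \
        --host-project=example-host-project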

Shared VPCs are useful when projects are in the same organization but are not used when traffic needs to flow between projects in different organizations. In that case, VPC network peering can be used.

VPC Network Peering

VPC network peering enables different VPC networks to communicate using private IP address space, as defined in RFC 1918. VPC network peering is used as an alternative to using external IP addresses or using VPNs to link networks.

It is important to note that VPC peering can connect VPCs between organizations; VPC sharing does not operate between organizations.

VPC network peering is typically used in software-as-a-service (SaaS) platforms when the SaaS provider wants to make its services available to customers, which use different organizations within GCP. Also, organizations that have multiple network administrative domains can use VPC network peering to access resources across those domains using private IP addressing.

The following are three primary advantages of VPC network peering:

  • There is lower latency because the traffic stays on the Google network and is not subject to conditions on the public internet.
  • Services in the VPC are inaccessible from the public internet, reducing the attack surface of the organization.
  • There are no egress charges associated with traffic when using VPC network peering.

It is important to note that peered networks manage their own resources, such as firewall rules and routes. This is different from firewall rules and routes in a VPC, which are associated with the entire VPC. Also, there is a maximum of 25 peering connections from a single VPC.

VPC network peering works with Compute Engine, App Engine Flexible Environment, and Google Kubernetes Engine.

VPC network peering requires both sides to set up a peering relationship. Peering is available only when both sides have matching configurations. If one side deletes a peering connection, the connection in the other network enters an inactive mode.

The latency and throughput of peering traffic are the same as those of private traffic within a single network.
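
Because both sides must configure matching peerings, a connection is built with one command in each project; the project and network names below are hypothetical.

    # Run in project-a:
    gcloud compute networks peerings create peer-a-to-b \
        --network=vpc-a \
        --peer-project=project-b \
        --peer-network=vpc-b
    # Run in project-b; the peering becomes active once both sides match:
    gcloud compute networks peerings create peer-b-to-a \
        --network=vpc-b \
        --peer-project=project-a \
        --peer-network=vpc-a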

Hybrid-Cloud Networking

Hybrid-cloud networking is the practice of providing network services between an on-premises data center and a cloud. When two or more public clouds are linked together, that is called a multicloud network. Multicloud networks may also include private data centers. Typically, architects recommend hybrid-cloud or multicloud environments when there are workloads that are especially well suited to run in one environment over another or when they are trying to mitigate the risk of dependency on a particular cloud service. Here are some examples:

  • A batch processing job that uses a custom legacy application designed for a mainframe is probably best run on-premises.
  • Ad hoc batch processing, such as transforming a large number of image files to a new format, is a good candidate for a cloud computing environment, especially when low-cost preemptible VMs are available.
  • An enterprise data warehouse that is anticipated to grow well into petabyte scale is well suited to run in a cloud service such as BigQuery.

Hybrid-Cloud Design Considerations

When workloads are run in different environments, there is a need for reliable networking with adequate capacity. A data warehouse in the cloud may use cloud and on-premises data sources, in which case the network between the on-premises data center and GCP should have sufficient throughput to transfer data for transformation and load operations performed in the cloud.

In addition to throughput, architects need to consider latency. When running a batch processing workflow, latency is less of an issue than when running applications that depend on services both in the cloud and in a local data center. A web application running in GCP may need to call an application programming interface (API) function running on premises to evaluate some business logic that is implemented in a COBOL application running on a mainframe. In this case, the time to execute the function and the round-trip time transmitting data must be low enough to meet the web application's SLAs.

Reliability is also a concern for hybrid-cloud networking. A single network interconnect can become a single point of failure. Using multiple interconnects, preferably from different providers, can reduce the risk of losing internetwork communications. If the cost of maintaining two interconnects is prohibitive, an organization could use a VPN that runs over the public internet as a backup. VPNs do not have the capacity of interconnects, but the limited throughput may be sufficient for short periods of time.

Architects also need to understand when to use different network topologies. Some common topologies are as follows:

  • Mirrored topology: In this topology, the public cloud and private on-premises environments mirror each other. This topology could be used to set up test or disaster recovery environments.
  • Meshed topology: With this topology, all systems within all clouds and private networks can communicate with each other.
  • Gated egress topology: In this topology, on-premises service APIs are made available to applications running in the cloud without exposing them to the public internet.
  • Gated ingress topology: With this topology, cloud service APIs are made available to applications running on premises without exposing them to the public internet.
  • Gated egress and ingress topology: This topology combines gated egress and gated ingress.
  • Handover topology: In this topology, applications running on premises upload data to a shared storage service, such as Cloud Storage, and then a service running in GCP consumes and processes that data. This is commonly used with data warehousing and analytic services.

Depending on the distribution of workloads, throughput and latency requirements, and topology, an architect may recommend one or more of these options supported in GCP.

Hybrid-Cloud Implementation Options

Hybrid-cloud computing is supported by three types of network links:

  • Cloud VPN
  • Cloud Interconnect
  • Direct peering

Each of these options has advantages that favor their use in some cases. Also, there may be situations where more than one of these options is used, especially when functional redundancy is needed.

Cloud VPN

Cloud VPN is a GCP service that provides virtual private networks between GCP and on-premises networks. Cloud VPN is implemented using IPSec VPNs and is available in two types, HA VPN and Classic VPN. Some Classic VPN functionality is scheduled to be deprecated on March 31, 2022.

HA VPN provides an IPSec VPN connection with 99.99 percent availability. HA VPN uses two connections to provide high availability. Each connection has its own external IP address. HA VPN gateways support multiple tunnels. It is possible to configure an HA VPN gateway with just one active tunnel, but that does not meet requirements for the 99.99 percent availability SLA.

Classic VPN uses one network interface and one external IP address and provides 99.9 percent availability.

Each Cloud VPN tunnel supports up to 3 Gbps.

Data is transmitted over the public internet, but the data is encrypted at the origin gateway and decrypted at the destination gateway to protect the confidentiality of data in transit. Encryption is based on the Internet Key Exchange (IKE) protocol.
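
An HA VPN configuration starts with a gateway and at least two tunnels. The following compressed sketch uses hypothetical names and shows only one tunnel; a second tunnel on interface 1 would be needed to qualify for the 99.99 percent SLA.

    # HA VPN gateway in the VPC, a peer gateway definition, and one tunnel.
    gcloud compute vpn-gateways create example-ha-gw \
        --network=example-vpc --region=us-central1
    gcloud compute external-vpn-gateways create example-peer-gw \
        --interfaces=0=203.0.113.10
    gcloud compute vpn-tunnels create example-tunnel-0 \
        --region=us-central1 \
        --vpn-gateway=example-ha-gw \
        --peer-external-gateway=example-peer-gw \
        --peer-external-gateway-interface=0 \
        --interface=0 \
        --ike-version=2 \
        --shared-secret=EXAMPLE_SECRET \
        --router=example-router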

Cloud Interconnect

The Cloud Interconnect service provides high throughput and highly available networking between GCP and on-premises networks. Cloud Interconnect is available in 10 Gbps or 100 Gbps configurations when using a direct connection between a Google Cloud access point and your data center, known as Dedicated Interconnect.

When using a third-party network provider, called a Partner Interconnect, customers have the option of configuring 50 Mbps to 50 Gbps connections.
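
Provisioning a Partner Interconnect begins with a VLAN attachment tied to a Cloud Router, which produces a pairing key you give to the service provider. A sketch with hypothetical names:

    # Create a partner VLAN attachment; the provider completes the link.
    gcloud compute interconnects attachments partner create example-attachment \
        --region=us-central1 \
        --router=example-router \
        --edge-availability-domain=availability-domain-1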

The advantages of using Cloud Interconnect include the following:

  • You can transmit data on private connections. Data does not traverse the public internet.
  • Private IP addresses in Google Cloud VPCs are directly addressable from on-premises devices. There is no need for NAT or a VPN tunnel.
  • You have the ability to scale up Dedicated Interconnects to 80 Gbps using eight 10 Gbps direct interconnects or 200 Gbps using two 100 Gbps interconnects.
  • You have the ability to scale up Partner Interconnects to 80 Gbps using eight 10 Gbps partner interconnects.

A disadvantage of Cloud Interconnect is the additional cost and complexity of managing a direct or partnered connection. If low latency and high availability are not required, then using Cloud VPN will be less expensive and require less management.

An alternative to Cloud Interconnect is direct peering.

Direct Peering

Network peering is a network configuration that allows for routing between networks.

Direct peering is a form of peering that allows customers to connect their networks to a Google network point of access. This kind of connection is not a GCP service—it is a lower-level network connection that is outside of GCP. It works by exchanging Border Gateway Protocol (BGP) routes, which define paths for transmitting data between networks. It does not make use of any GCP resources, like VPC firewall rules or GCP access controls.

Direct peering should be used when you need access to Google Workspace services in addition to Google Cloud services. In other cases, Google recommends using Dedicated Interconnect or Partner Interconnect.

When working with hybrid computing environments, first consider workloads and where they are optimally run and how data is exchanged between networks. This can help you determine the best topology for the hybrid or multicloud network.

There are three options for linking networks: Dedicated/Partner Interconnect, VPN, and direct peering. Interconnects provide high throughput, low latency, and high availability. VPNs are a lower-cost option that does not require managing dedicated physical connections, but throughput is lower. A third, not generally recommended, option is direct peering. This is an option when requirements dictate that the connection between networks be at the level of exchanging BGP routes.

Service-Centric Networking

Networking has traditionally been device-centric, with IP addresses assigned to physical or virtual devices. This model does not always work well in the cloud. One of the advantages of using managed cloud services is that they abstract away implementation details, like the type and number of servers supporting a service. For example, when you use BigQuery for data analysis, you do not need to configure servers to run your queries, and you do not need to specify an IP address when using this service. While this is advantageous from a management perspective, it means you do not have access to IP-based network controls.

Google Cloud provides several private access options for resources in a VPC to access APIs and services without requiring an external IP address.

Private Service Connect for Google APIs

Private Service Connect for Google APIs allows users to connect to Google APIs and services through an endpoint within their VPC network without the need for an external IP address. The endpoint forwards traffic to the appropriate API or service. Clients can be GCP resources and on-premises systems. GCP resources may or may not have an external IP address.

Private Service Connect endpoints are configured to access one of two bundles of APIs. The All APIs bundle (all-apis) provides access to the same APIs as private.googleapis.com; the VPC-SC bundle (vpc-sc) provides access to the same APIs as restricted.googleapis.com.
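
Creating an endpoint involves reserving a global internal address and pointing a forwarding rule at an API bundle; the names and address below are hypothetical.

    # Reserve an internal address for the endpoint, then create it.
    gcloud compute addresses create example-psc-address \
        --global \
        --purpose=PRIVATE_SERVICE_CONNECT \
        --addresses=10.100.0.2 \
        --network=example-vpc
    gcloud compute forwarding-rules create examplepsc \
        --global \
        --network=example-vpc \
        --address=example-psc-address \
        --target-google-apis-bundle=all-apis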

Private Service Connect for Google APIs with Consumer HTTP(S)

Private Service Connect for Google APIs with consumer HTTP(S) is used to connect to Google APIs and services using internal HTTP(S) load balancers. Clients can be in GCP or on premises.

Private Google Access

Private Google Access is used to connect resources that have only internal IP addresses to the external IP addresses of GCP APIs and services through the VPC's default internet gateway. This private access option is used when GCP resources do not have external IP addresses.

Private Google Access is enabled at the VPC subnet level. Private Google Access does not enable APIs; you will need to do that separately. Your network will need to have routes for the destination IP range used by Google APIs and services. If you use the private.googleapis.com or restricted.googleapis.com domain name, you have to set up DNS records to direct traffic to the IP addresses of those domains.
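
Enabling the option is a per-subnet update, as in this sketch with a hypothetical subnet name.

    # Turn on Private Google Access for an existing subnet.
    gcloud compute networks subnets update example-subnet \
        --region=us-central1 \
        --enable-private-ip-google-access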

Private Google Access for On-Premises Hosts

Private Google Access for on-premises hosts is used to connect on-premises hosts to Google APIs and services through a VPC network. On-premises clients may have external IP addresses, but they are not required.

Cloud VPN and Cloud Interconnect can be used with Private Google Access for on-premises hosts. This allows on-premises hosts to use internal IP addresses to reach Google services.

Private Service Connect for Published Services

Private Service Connect for published services is used to connect to services in another VPC without using an external IP address. The service being accessed must be published using the Private Service Connect for service producers service.

Private Service Access

Private service access is used to connect from resources in your VPC to services hosted in a Google-managed or third-party network, such as Cloud SQL, using internal IP addresses. This is implemented using a VPC Network Peering connection. The GCP VM instances connecting to the services may have an external IP address, but they do not need one.

Serverless VPC Access

Serverless VPC Access is used to connect from a serverless environment in GCP to resources in a VPC using an internal address. This option supports Cloud Run, App Engine Standard, and Cloud Functions.
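
Access is provided by a connector in the VPC, which serverless services then reference. A sketch with hypothetical names; the /28 range must not overlap existing subnets.

    # Create the connector, then reference it from a Cloud Run service.
    gcloud compute networks vpc-access connectors create example-connector \
        --region=us-central1 \
        --network=example-vpc \
        --range=10.8.0.0/28
    gcloud run deploy example-service \
        --image=gcr.io/example-project/example-image \
        --region=us-central1 \
        --vpc-connector=example-connector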

Load Balancing

Load balancing is the practice of distributing work across a set of resources. GCP provides five different load balancers for different use cases. To determine which load balancer is an appropriate choice in a given scenario, you will have to consider three factors.

  • Is the workload distributed to servers within a region or across multiple regions?
  • Does the load balancer receive traffic from internal GCP resources only or from external sources as well?
  • What protocols does the load balancer need to support?

The answers to these questions will help you determine when to use each of the five types:

  • Network TCP/UDP
  • Internal TCP/UDP
  • HTTP(S)
  • SSL Proxy
  • TCP Proxy

Regional Load Balancing

The two regional load balancers are Network TCP/UDP and Internal TCP/UDP. Both work with TCP and UDP protocols as their names imply.

Network TCP/UDP

The Network TCP/UDP load balancer distributes workload based on IP protocol, address, and port. This load balancer uses forwarding rules to determine how to distribute traffic. Forwarding rules use the IP address, protocol, and ports to determine which servers, known as a target pool, should receive the traffic.

The Network TCP/UDP load balancer is a nonproxied, pass-through load balancer, which means that it passes data through without modification. This load balancer only distributes traffic to servers within the region where the load balancer is configured.

All traffic from the same connection is routed to the same instance. This can lead to imbalance if long-lived connections tend to be assigned to the same instance. This is an external-facing resource based in a specific region.
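
A basic configuration uses a target pool and a regional forwarding rule; the pool, instance, and rule names below are hypothetical.

    # Create a target pool, add instances, and forward TCP port 80 to it.
    gcloud compute target-pools create example-pool --region=us-central1
    gcloud compute target-pools add-instances example-pool \
        --instances=web-1,web-2 \
        --instances-zone=us-central1-a
    gcloud compute forwarding-rules create example-netlb-rule \
        --region=us-central1 \
        --ports=80 \
        --target-pool=example-pool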

Internal TCP/UDP

The Internal TCP/UDP load balancer is the only internal load balancer. It is used to distribute traffic from GCP resources, and it allows for load balancing using private IP addresses. It is a regional load balancer.

Instances of the Internal TCP/UDP load balancer support routing either TCP or UDP packets but not both. Traffic passes through the Internal TCP/UDP load balancer and is not proxied.

The Internal TCP/UDP load balancer is a good choice when distributing workload across a set of backend services that run on a Compute Engine instance group in which all the backend instances are assigned private IP addresses.

Internal TCP/UDP load balancers route traffic within a VPC. Network TCP/UDP load balancers operate outside of VPCs; they handle traffic that can originate anywhere on the internet, from VMs in VPCs with external addresses, or from VMs in VPCs through NAT.

When traffic needs to be distributed across multiple regions, then one of the global load balancers should be used.

Global Load Balancing

The three global load balancers are the HTTP(S), SSL Proxy, and TCP Proxy load balancers. All global load balancers require the use of the Premium Tier of network services.

HTTP(S) Load Balancing

The HTTP(S) load balancer is used when you need to distribute HTTP and HTTPS traffic globally, or at least across two or more regions.

HTTP(S) load balancers use forwarding rules to direct traffic to a target HTTP proxy. These proxies consult a URL map, which determines which backend service to send the request to based on the URL. For example, requests for www.example.com/documents can be routed to the backend servers that serve that kind of request, while requests for www.example.com/images are routed to a different backend service.

The backend service then routes each request to an instance within the target group based on capacity, health status, and zone.

In the case of HTTPS traffic, the load balancer terminates TLS using SSL certificates installed on the load balancer; if traffic between the load balancer and the backends is also encrypted, certificates are needed on the backend instances as well. The backend for an HTTP(S) load balancer can also be a Cloud Storage bucket.
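
A minimal external HTTP load balancer wires together a health check, a backend service, a URL map, a proxy, and a global forwarding rule. All names below are hypothetical; an HTTPS variant would add a certificate and use a target HTTPS proxy instead.

    # Health check and backend service holding an instance group.
    gcloud compute health-checks create http example-check --port=80
    gcloud compute backend-services create example-backend \
        --protocol=HTTP --health-checks=example-check --global
    gcloud compute backend-services add-backend example-backend \
        --instance-group=example-ig \
        --instance-group-zone=us-central1-a --global
    # URL map, proxy, and a global forwarding rule on port 80.
    gcloud compute url-maps create example-map --default-service=example-backend
    gcloud compute target-http-proxies create example-proxy --url-map=example-map
    gcloud compute forwarding-rules create example-http-rule \
        --global --target-http-proxy=example-proxy --ports=80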

SSL Proxy Load Balancing

The SSL Proxy load balancer terminates SSL/TLS traffic at the load balancer and distributes traffic across the set of backend servers. After the SSL/TLS traffic has been decrypted, it can be transmitted to backend servers using either TCP or SSL. SSL is recommended. Also, this load balancer is recommended for non-HTTPS traffic; HTTPS traffic should use the HTTP(S) load balancer.

The SSL Proxy load balancer distributes traffic to the closest region that has capacity. Another advantage of this load balancer is that it offloads SSL encryption/decryption from backend instances.

TCP Proxy Load Balancing

TCP Proxy Load Balancing lets you use a single IP address for all users regardless of where they are on the globe, and it will route traffic to the closest instance.

TCP Proxy load balancers should be used for non-HTTPS and non-SSL TCP traffic.

GCP provides load balancers tailored for regional and global needs as well as specialized to protocols. When choosing a load balancer, consider the geographic distribution of backend instances, the protocol used, and whether the traffic is from internal GCP resources or potentially from external devices.

Additional Network Services

Service Directory

Service Directory is a managed service for centralizing information about your services. Specifically, it manages metadata about services by allowing you to publish, discover, and connect to services. Service Directory is essentially an endpoint registry.

Service Directory supports workloads in Compute Engine and Kubernetes Engine. It also supports services in your on-premises data center and third-party clouds.

Cloud CDN

Cloud CDN is a content delivery network managed by Google Cloud. As with any content delivery network, Cloud CDN provides the means to distribute content across the globe in ways to minimize latency when accessing that data.

Cloud CDN works with external HTTP(S) Load Balancing. The load balancer provides a public IP address while the CDN backend is responsible for providing content.

Cloud CDN content can come from several sources, including Compute Engine instance groups, zonal network endpoint groups, App Engine, Cloud Run, Cloud Functions, and Cloud Storage.
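
Cloud CDN is enabled on the load balancer's backend; a one-line sketch against the hypothetical backend service used in the earlier load balancing example:

    # Enable Cloud CDN caching for a global backend service.
    gcloud compute backend-services update example-backend \
        --global --enable-cdn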

Cloud DNS

Cloud DNS is a managed global domain name service used to publish domain names.

DNS is a hierarchical, distributed database that uses authoritative servers to hold DNS name records and uses nonauthoritative servers to cache DNS data for improved performance.

DNS has several types of records, including A records, which are address records that map domain names to IP addresses. CNAME, or canonical name, records store aliases. MX records are mail exchange records, while NS records are name server records that assign a DNS zone to an authoritative server.

Cloud DNS supports public and private zones. Public zones are visible to the internet. Private zones are visible only from specified VPCs.
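
Publishing a zone and an A record might look like the following sketch; the zone and domain names are hypothetical, and newer gcloud releases support creating record sets directly as shown.

    # Create a public zone and add an A record for www.
    gcloud dns managed-zones create example-zone \
        --dns-name="example.com." \
        --description="Example public zone"
    gcloud dns record-sets create www.example.com. \
        --zone=example-zone --type=A --ttl=300 \
        --rrdatas=203.0.113.20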

Summary

VPCs are virtual private clouds that define a network associated with a project. VPCs have subnets. Subnets are assigned IP ranges, and all instances within a subnet are assigned IP addresses from its range. VPCs can share resources by setting up Shared VPCs. Shared VPCs have one host project and one or more service projects.

VPC network peering enables different VPC networks to communicate using a private IP address space, as defined in RFC 1918. VPC network peering is used as an alternative to using external IP addresses or using VPNs to link networks.

The flow of traffic within a VPC is controlled by firewall rules. Two implied rules allow all outgoing traffic and deny most incoming traffic. Implied rules cannot be deleted, but they can be overridden by higher-priority rules. When subnets are automatically created for a VPC, a set of default rules are created to allow typical traffic patterns, such as using SSH to connect to an instance.

Hybrid-cloud networking is the practice of providing network services between an on-premises data center and a cloud. Design considerations include latency, throughput, reliability, and network topology. Hybrid-cloud networks can be implemented using Cloud VPN, Cloud Interconnect, and direct peering.

Google Cloud provides service-centric networking services for controlling access to APIs and services.

Load balancing is the practice of distributing work across a set of resources. GCP provides five different load balancers: Network TCP/UDP, Internal TCP/UDP, HTTP(S), SSL Proxy, and TCP Proxy Load Balancing. Choose a load balancer based on regional or multiregional distribution of traffic, protocol, and internal or external traffic.

Exam Essentials

  • Understand virtual private clouds. Virtual private clouds are like a network in a data center; they are network-based organizational structures for controlling access to GCP resources. VPCs are global resources, so a single VPC can span multiple regions; subnets, by contrast, are regional resources.
  • Know VPCs may be shared. Shared VPCs include a host VPC and one or more service VPCs. Shared VPCs are used to make resources in one project accessible to resources in other projects. Another advantage of Shared VPCs is that you can separate project and network management duties.
  • Know what firewall rules are and how to use them. Firewall rules control network traffic by blocking or allowing traffic to (ingress) or from (egress) a network. Firewall rules in Google Cloud are defined at the network level, but connections are allowed or denied on a per-instance basis. Two implied rules allow all outgoing traffic and deny most incoming traffic. Implied rules cannot be deleted, but they can be overridden by higher-priority rules. When subnets are automatically created for a VPC, default rules are created to allow typical traffic patterns. These rules include default-allow-internal, default-allow-ssh, default-allow-rdp, and default-allow-icmp.
  • Know CIDR block notation. You can specify an IP range using CIDR notation. This consists of an IPv4 address followed by a /, followed by an integer. The integer specifies the number of bits used to identify the network; the remaining bits are used to determine the host address.
  • Understand why hybrid-cloud networking is needed. When workloads are run in different environments, there will be a need for reliable networking with adequate capacity. Key considerations include latency, throughput, reliability, and network topology.
  • Understand hybrid-cloud connectivity options and their pros and cons. Three ways to implement hybrid-cloud connectivity are Cloud VPN, Cloud Interconnect, and direct peering. Cloud VPN is a GCP service that provides virtual private networks between GCP and on-premises networks using the public internet. The Cloud Interconnect service provides high throughput and highly available networking between GCP and an on-premises network using private network connections.
  • Know service-centric networking options for private access. Private access options allow VMs in a VPC to reach APIs and services without requiring an external IP address. Serverless VPC access is used with Cloud Run, App Engine Standard, and Cloud Functions.
  • Know the five types of load balancers and when to use them. The five types of load balancers are Network TCP/UDP, Internal TCP/UDP, HTTP(S), SSL Proxy, and TCP Proxy. Choosing among these requires understanding if traffic will be distributed within a single region or across multiple regions, which protocols are used, and whether the traffic is internal or external to GCP.

Review Questions

  1. Your team has deployed a VPC with default subnets in all regions. The lead network architect at your company is concerned about possible overlap in the use of private addresses. How would you explain how you are dealing with the potential problem?
    1. You inform the network architect that you are not using private addresses at all.
    2. When default subnets are created for a VPC, each region is assigned a different IP address range.
    3. You have increased the size of the subnet mask in the CIDR block specification of the set of IP addresses.
    4. You agree to assign new IP address ranges on all subnets.
  2. A data warehouse service running in GCP has all of its resources in a single project. The e-commerce application has resources in another project, including a database with transaction data that will be loaded into the data warehouse. The data warehousing team would like to read data directly from the database using extraction, transformation, and load processes that run on Compute Engine instances in the data warehouse project. Which of the following network constructs could help with this?
    1. Shared VPC
    2. Regional load balancing
    3. Direct peering
    4. Cloud VPN
  3. An intern working with your team has changed some firewall rules. Prior to the change, all Compute Engine instances on the network could connect to all other instances on the network. After the change, some nodes cannot reach other nodes. What might have been the change that causes this behavior?
    1. One or more implied rules were deleted.
    2. The default-allow-internal rule was deleted.
    3. The default-allow-icmp rule was deleted.
    4. The priority of a rule was set higher than 65535.
  4. The network administrator at your company has asked that you configure a firewall rule that will always take precedence over any other firewall rule. What priority would you assign?
    1. 0
    2. 1
    3. 65534
    4. 65535
  5. During a review of a GCP network configuration, a developer asks you to explain CIDR notation. Specifically, what does the 8 mean in the CIDR block 172.16.10.2/8?
    1. 8 is the number of bits used to specify a host address.
    2. 8 is the number of bits used to specify the subnet mask.
    3. 8 is the number of octets used to specify a host address.
    4. 8 is the number of octets used to specify the subnet mask.
  6. Several new firewall rules have been added to a VPC. Several users are reporting unusual problems with applications that did not occur before the firewall rule changes. You'd like to debug the firewall rules while causing the least impact on the network and doing so as quickly as possible. Which of the following options is best?
    1. Set all new firewall priorities to 0 so that they all take precedence over other rules.
    2. Set all new firewall priorities to 65535 so that all other rules take precedence over these rules.
    3. Disable one rule at a time to see whether that eliminates the problems. If needed, disable combinations of rules until the problems are eliminated.
    4. Remove all firewall rules and add them back one at a time until the problems occur and then remove the latest rule added back.
  7. An executive wants to understand what changes in the current cloud architecture are required to run compute-intensive machine learning workloads in the cloud and have the models run in production using on-premises servers. The models are updated daily. There is no network connectivity between the cloud and on-premises networks. What would you tell the executive?
    1. Implement additional firewall rules.
    2. Use global load balancing.
    3. Use hybrid-cloud networking.
    4. Use regional load balancing.
  8. To comply with regulations, you need to deploy a disaster recovery site that has the same design and configuration as your production environment. You want to implement the disaster recovery site in the cloud. Which topology would you use?
    1. Gated ingress topology
    2. Gated egress topology
    3. Handover topology
    4. Mirrored topology
  9. Network engineers have determined that the best option for linking the on-premises network to GCP resources is by using an IPSec VPN. Which GCP service would you use in the cloud?
    1. Cloud IPSec
    2. Cloud VPN
    3. Cloud Interconnect IPSec
    4. Cloud VPN IKE
  10. Network engineers have determined that a link between the on-premises network and GCP will require an 8 Gbps connection. Which option would you recommend?
    1. Cloud VPN
    2. Partner Interconnect
    3. Dedicated Interconnect
    4. Hybrid Interconnect
  11. Network engineers have determined that a link between the on-premises network and GCP will require a connection between 60 Gbps and 80 Gbps. Which hybrid-cloud networking services would best meet this requirement?
    1. Cloud VPN
    2. Cloud VPN and Dedicated Interconnect
    3. Dedicated Interconnect and Partner Interconnect
    4. Cloud VPN, Dedicated Interconnect, and Partner Interconnect
  12. The director of network engineering has determined that any links to networks outside of the company data center will be implemented at the level of BGP routing exchanges. What hybrid-cloud networking option should you use?
    1. Direct peering
    2. Indirect peering
    3. Global load balancing
    4. Cloud IKE
  13. A startup is designing a social site dedicated to discussing global political, social, and environmental issues. The site will include news and opinion pieces in text and video. The startup expects that some stories will be exceedingly popular, and others won't be, but they want to ensure that all users have a similar experience with regard to latency, so they plan to replicate content across regions. What load balancer should they use?
    1. HTTP(S)
    2. SSL Proxy
    3. Internal TCP/UDP
    4. TCP Proxy
  14. As a developer, you foresee the need to have a load balancer that can distribute load using only private RFC 1918 addresses. Which load balancer would you use?
    1. Internal TCP/UDP
    2. HTTP(S)
    3. SSL Proxy
    4. TCP Proxy
  15. After a thorough review of the options, a team of developers and network engineers have determined that the SSL Proxy load balancer is the best option for their needs. What other GCP service must they have to use the SSL Proxy load balancer?
    1. Cloud Storage
    2. Cloud VPN
    3. Premium Tier networking
    4. TCP Proxy Load Balancing
  16. You want to connect to access Cloud Storage APIs from a Compute Engine VM that has only an internal IP address. What GCP service would you use to enable that access?
    1. Private Service Connect for Google APIs
    2. Dedicated Interconnect
    3. Partner Interconnect
    4. HA VPN