© Julian Soh, Marshall Copeland, Anthony Puca, and Micheleen Harris 2020
J. Soh et al., Microsoft Azure, https://doi.org/10.1007/978-1-4842-5958-0_13

13. Network Platform as a Service

Julian Soh1 , Marshall Copeland2, Anthony Puca3 and Micheleen Harris1
(1)
Washington, WA, USA
(2)
Texas, TX, USA
(3)
Colorado, CO, USA
 

An Azure Virtual Network (VNet) should be your starting point before deploying any virtual machine, web app, or storage container. The virtual networks in your Azure subscription are the global and local networks used for connectivity in the cloud. Internally, a VNet provides secure private connections for all of your business resources. Externally, it supports exposing public endpoints to the Internet, where they can be reached by legitimate clients as well as targeted by cyberattacks.

This chapter focuses on Azure's cloud-native networking services (i.e., platform as a service). You'll learn about each service, how it is used in the Azure platform, and why it is helpful to your business. In Chapter 2, you were given an overview of Azure networking by creating a virtual network for all services and then dividing that network into subnets.

The IP address space of a virtual network is divided using Classless Inter-Domain Routing (CIDR). A CIDR range is written as an address followed by a slash and a prefix length (e.g., /16 or /24 for IPv4, up to /128 for IPv6).

CIDR eases the work of large network routers by allowing contiguous address blocks to be aggregated into a single route, which keeps routing tables from growing unnecessarily. This aggregation of IP addresses through CIDR reduces the size of the routing tables, and CIDR notation is used inside Microsoft Azure for all VNets.
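For example, a /16 address space contains 65,536 addresses and can be carved into 256 /24 subnets of 256 addresses each. The following short sketch uses only Python's standard library ipaddress module to illustrate the arithmetic (the 10.0.0.0/16 range is just an example):

import ipaddress

# A /16 address space, a common choice for an Azure VNet
vnet = ipaddress.ip_network("10.0.0.0/16")
print(vnet.num_addresses)        # 65536 total addresses

# Carve the /16 into /24 subnets: 256 subnets of 256 addresses each
subnets = list(vnet.subnets(new_prefix=24))
print(len(subnets))              # 256
print(subnets[0], subnets[1])    # 10.0.0.0/24 10.0.1.0/24

Note that Azure reserves a few addresses in every subnet for platform use, so the usable count per subnet is slightly lower than the raw CIDR math suggests.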

This chapter will not turn you into a network architect, but you will learn about the advantages of using the Azure networking platform services, including how they help route IP traffic and improve security.

Figure 13-1 visualizes the layers of service in the Azure network.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig1_HTML.jpg
Figure 13-1

Layers of Microsoft Azure network security visualization

When you're working in the Azure portal, you can create a TCP/IP network. Figure 13-2 is a screenshot of the final page of the Azure portal creation wizard. The view shows a large virtual network being created, with many subnets and default network security services, including distributed denial of service (DDoS) protection and firewalls.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig2_HTML.jpg
Figure 13-2

Azure portal view of virtual network platform services

You should refer back to Figure 13-2 as you continue through this chapter and learn about the network services and security features and how they support Microsoft Azure's network layer of defense. We don't want to go too deeply into a network-layer, cybersecurity-focused discussion; however, a high-level reference to the Open Systems Interconnection (OSI) model is helpful.

Table 13-1 provides a conceptual view of the communication framework; you may reference this information as you read through this chapter.
Table 13-1

OSI Model to Reference Azure Network Services Support

Layer Name      Number   Description
Application     7        Data: applications with which users and computers interact
Presentation    6        Data: usable data; encryption/decryption
Session         5        Data: synchronization; controls ports, sessions, and connections
Transport       4        Segments: host-to-host data over the TCP or UDP protocols
Network         3        Packets: IP addressing for data delivery
Data Link       2        Frames: logical link for data (MAC address, NIC)
Physical        1        Bits: bit stream for the network interface card (NIC)

Note

An excellent and free computer security glossary is at https://csrc.nist.gov/Glossary.

Azure DDoS Protection

Azure DDoS (distributed denial of service) protection mitigation is enabled at the Basic tier as part of the network platform (i.e., whenever you build a VNet). The service continuously monitors traffic while providing defense against common network-level attacks. As a reminder, the National Institute of Standards and Technology (NIST) defines a DDoS attack as "a denial of service technique that uses numerous hosts to perform the attack." A simpler definition is an overwhelming malicious attack intended to disrupt services and make them unavailable. The scale of the threat becomes clear when you consider globally coordinated DDoS attacks, whose disruptive traffic is sometimes measured in terabits per second.

Before you can use the paid tier of the Azure DDoS protection service, the purchase of a protection plan must be completed so that it appears as an option when you use the portal to create a VNet.

Figure 13-3 shows purchasing the service in the Azure portal. The DDoS protection service carries a monthly recurring fee, but if you use it for only a portion of the month, Azure prorates the bill for the hours and data used.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig3_HTML.jpg
Figure 13-3

Azure portal view to purchase the DDoS protection plan
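If you prefer automation over the portal, the protection plan can also be created and associated with a VNet programmatically. The following is a minimal sketch, assuming the azure-identity and azure-mgmt-network Python packages; the subscription ID, resource group, and resource names are placeholders, and exact method and property names may vary between SDK versions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RG, LOCATION = "rg-network-demo", "westus2"  # illustrative names

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (purchase) the DDoS protection plan
plan = client.ddos_protection_plans.begin_create_or_update(
    RG, "ddos-plan-demo", {"location": LOCATION}
).result()

# Create a VNet and associate it with the plan
vnet = client.virtual_networks.begin_create_or_update(
    RG, "vnet-demo",
    {
        "location": LOCATION,
        "address_space": {"address_prefixes": ["10.0.0.0/16"]},
        "subnets": [{"name": "web", "address_prefix": "10.0.1.0/24"}],
        "enable_ddos_protection": True,
        "ddos_protection_plan": {"id": plan.id},
    },
).result()
print(vnet.provisioning_state)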

You are billed a service fee plus a data processing fee for the protected resources in the virtual network. The data processing charge is calculated on the egress (outgoing) bandwidth from any protected VM (i.e., traffic from the VM to the Internet). IP traffic that travels across an ExpressRoute circuit or virtual private network (VPN) gateway is not counted. As you tally the Azure services that contribute to your overall charges, note that the real-time analysis and the storage of application traffic for historical analysis are included in the data processing charges.

We all want to know how much a network cloud service costs. The worst answer is, "it depends." At the time of writing, the retail charge is $2,944 per month to protect 100 Azure resources. These resources (e.g., IaaS VMs) can be spread across a single Azure tenant or multiple Azure subscriptions. Every resource over 100 costs an additional $30 per month.
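As a quick worked example based only on those retail figures (and excluding the separate data processing charges), protecting 130 resources would cost roughly $2,944 + 30 × $30 = $3,844 per month. The same arithmetic in a trivial Python sketch:

# Rough monthly estimate from the retail figures quoted above;
# excludes data processing charges, which are billed separately.
BASE_MONTHLY_FEE = 2944      # USD, covers the first 100 protected resources
OVERAGE_PER_RESOURCE = 30    # USD per month per resource beyond 100

def monthly_estimate(protected_resources: int) -> int:
    overage = max(0, protected_resources - 100) * OVERAGE_PER_RESOURCE
    return BASE_MONTHLY_FEE + overage

print(monthly_estimate(100))   # 2944
print(monthly_estimate(130))   # 3844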

The list of protected Azure network platforms continues to expand as services are added and improved. The resources that benefit from DDoS protection include
  • Application Gateway

  • Application Gateway (with WAF)

  • IaaS virtual machine attached to a Public IP

  • Load balancers

  • Azure Service Fabric

  • IaaS network virtual appliance (i.e., Marketplace NVA)

Azure Service Fabric is a cluster platform that, as a service, scales and runs applications on the operating system of your choice. DevOps teams can deploy applications written in any programming language to Service Fabric using guest executables and containers. You can optionally implement service communication using the Service Fabric SDK (software development kit), which supports .NET and Java.

Web Application Firewall

Malicious attacks attempt to exploit commonly known vulnerabilities. The Azure Web Application Firewall (WAF) follows defined rules based on the Open Web Application Security Project (OWASP) core rule sets 3.1, 3.0, or 2.2.9. The firewall also allows the security administrator to create custom rules that allow or deny IP traffic.
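To make rule-based inspection concrete, here is a toy Python sketch of the idea. It is not the OWASP Core Rule Set (the real rules number in the hundreds and use anomaly scoring); the two signatures below are deliberately naive and purely illustrative.

import re

# Toy signatures for illustration only
TOY_RULES = [
    ("sql-injection", re.compile(r"('|--|\bunion\b\s+select\b)", re.IGNORECASE)),
    ("xss", re.compile(r"<script\b", re.IGNORECASE)),
]

def inspect(query_string: str) -> str:
    for name, pattern in TOY_RULES:
        if pattern.search(query_string):
            return f"block ({name})"    # a matching rule blocks the request
    return "allow"

print(inspect("id=42"))                        # allow
print(inspect("id=1 UNION SELECT password"))   # block (sql-injection)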

Creating a WAF can easily be completed following the Azure portal journey views, as shown in Figure 13-4.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig4_HTML.jpg
Figure 13-4

Azure portal view to create a web application firewall policy

The WAF portal configuration enables the OWASP policies to be applied to different Azure services, including Azure Front Door (discussed later in this chapter) and Azure Content Delivery Network (CDN). After you select the WAF policy type and provide a name, the default managed rules are enabled, as shown in Figure 13-5.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig5_HTML.jpg
Figure 13-5

Azure portal showing options to configure default OWASP managed rules

Note

You can learn more about the OWASP and the ModSecurity Core Rule Set used in Azure WAF at https://owasp.org/www-project-modsecurity-core-rule-set/.

The Azure WAF supports creating custom rules but also comes with built-in rules that reduce security risk based on the OWASP collection of top-10 application security risks. The top-10 list also helps educate developers and make them more aware of web application security. The list from 2017 includes
  • A1-Injection

  • A2-Broken Authentication

  • A3-Sensitive Data Exposure

  • A4-XML External Entities (XXE)

  • A5-Broken Access Control

  • A6-Security Misconfiguration

  • A7-Cross-Site Scripting (XSS)

  • A8-Insecure Deserialization

  • A9-Using Components with Known Vulnerabilities

  • A10-Insufficient Logging and Monitoring

The program accepts contributions to its top 10 list.

Application Gateway

The cloud-native Azure Application Gateway provides Transport Layer Security (TLS) protocol termination (sometimes still referred to by the older term Secure Sockets Layer (SSL) offloading). Azure Application Gateway manages web traffic based on HTTP request headers or URI paths, so it is marketed as a decision-based routing gateway. It routes traffic at the top of the OSI model, layer 7 (see Table 13-1), and is used primarily for web traffic, with load-balancing capability built in.
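To picture what decision-based routing means in practice, consider a sketch of URI path-based routing. This is a conceptual Python illustration only; the pool names and path prefixes are hypothetical, and the real gateway evaluates the rules you configure in Azure.

# Conceptual illustration of layer 7 path-based routing
ROUTING_RULES = [
    ("/images/", "image-backend-pool"),
    ("/video/", "video-backend-pool"),
]
DEFAULT_POOL = "default-backend-pool"

def choose_backend_pool(request_path: str) -> str:
    for prefix, pool in ROUTING_RULES:
        if request_path.startswith(prefix):
            return pool                 # first matching path prefix wins
    return DEFAULT_POOL

print(choose_backend_pool("/images/logo.png"))   # image-backend-pool
print(choose_backend_pool("/api/orders"))        # default-backend-pool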

A traditional third-party (on-premises) application gateway, also called an application proxy, works at OSI layer 4 (TCP and UDP). Its support for ephemeral IP addresses and port translation covers protocols used for instant messaging, BitTorrent, and File Transfer Protocol (FTP). Be aware that the Azure Application Gateway's total cost has a basic tier (small, medium, large) and a firewall-integrated WAF tier (small, medium, large), and both tiers also carry a data processing cost.

During the creation process, the front-end tab defines an IP address that is typically set to public; you can create a new public IP address to keep it separate from the back end. Then add routing rules and the configuration for your back-end servers, as shown in Figure 13-6. In this chapter's examples, we use the Microsoft IIS configuration templates from the GitHub repository.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig6_HTML.jpg
Figure 13-6

Azure portal create an application gateway

Load Balancers

Azure load balancers work at OSI layer 4 (refer to Table 13-1) to distribute incoming traffic across virtual machines. It is important to understand that the layer 4 protocols, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), are what attackers probe when they port-scan to identify open network ports. Configuring the load balancer to narrowly allow only the traffic you need reduces this security risk.

A load-balancing rule is defined by the following five-tuple.
  • source IP address

  • source port number

  • destination IP address

  • destination port number

  • protocol type

Data that arrives at the load balancer's front end (inbound) is distributed to the back-end pool (outbound). Azure load balancers use health probes to determine which virtual machines in a scale set are healthy and should receive traffic.
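The following conceptual Python sketch shows why the five-tuple matters: hashing it maps each flow consistently to one back-end instance. This is an illustration of the idea, not Azure's actual hashing algorithm, and the back-end names are hypothetical.

import hashlib

BACKENDS = ["vm-0", "vm-1", "vm-2"]   # hypothetical back-end pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    # Packets from the same flow produce the same hash, so they always
    # land on the same back-end instance.
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = hashlib.sha256(flow).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(pick_backend("203.0.113.7", 50123, "52.0.0.10", 443, "TCP"))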

A public load balancer supports Internet traffic from a public IP address to a back-end pool of servers. The public IP is separated from the internal IP using network address translation (NAT) . An internal load balancer uses private IP addresses only to route traffic inside a virtual network.

The health probes supported by the load balancer include the following (a minimal probe endpoint is sketched after the list).
  • HTTPS (HTTP probe with TLS wrapper)

  • HTTP

  • TCP
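
An HTTP or HTTPS probe issues a GET request to a path you specify and treats a success response as healthy. As a minimal sketch (the /health path and port 8080 are illustrative choices, not Azure requirements), a back-end VM could expose a probe endpoint using only Python's standard library:

from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            self.send_response(200)   # probe succeeds; VM stays in rotation
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()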

Creating a network to support a load balancer is discussed in Chapter 11; once the networks are configured, you can manually create the external or internal load balancer in the Azure portal, as shown in Figure 13-7.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig7_HTML.jpg
Figure 13-7

Azure portal view to create the Azure load balancer

Azure load balancers add a layer of security in the native cloud network platform and, when narrowly configured, also reduce the risk of DDoS attacks. Permitting only the intended IP traffic and denying everything else reduces the risk of server or traffic overload. There are two purchasing options, or SKUs, to choose from: Basic and Standard.

Customization is very granular with the Standard load balancer configuration. Also, Azure virtual machine scale sets and availability sets can be connected to only one load balancer SKU, Basic or Standard, but not both. Once you select a SKU, it cannot be changed; the load balancer can only be deleted and rebuilt.

Standard load balancers have many features, but the following are a few key features to help guide your decision.
  • Back-end pool size of up to 1,000 instances

  • Internal load balancer for HA ports

  • Outbound NAT rule configuration

  • TCP reset on idle connections

  • Multiple front ends for both inbound and outbound traffic

Azure Front Door Service

The Azure Front Door service operates at OSI application layer 7 (see Table 13-1) and is classified as an application delivery network. Front Door uses Microsoft's global deployment model to enable high availability for business-critical web applications. It includes features of the Azure Application Gateway (an OSI layer 7 service), an acceleration platform, and a global HTTP(S) load balancer. Front Door enables you to build applications that provide
  • Built-in DDoS protection

  • Application layer security

  • Caching

  • High availability

Front Door services support web and mobile apps, cloud services, and virtual machines. You can also include on-premises services in your Front Door architecture for hybrid deployments or for migration strategies to the Azure cloud. If you do not currently have a Front Door service, follow the exercise to create the service for testing.

Exercise: Create Front Door Service
Remember the two web apps that you created in Chapter 12? In this exercise, you deploy the Azure Front Door service and place these web apps behind it.
  1.

    In the Home view of your Azure portal, select Create a resource. Enter Front Door, and select Create.

     
  2.

    On the Basics tab of the creation journey, enter a new resource group and location, and then click Next: Configuration. Your screen should look similar to the following screenshot.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Figa_HTML.jpg
  3.

    In the Step 1 square, click + to enter the front-end host name, similar to what's shown in the following screenshot. Notice that you have the option to enable session affinity (i.e., sticky connections); leave it disabled for this exercise. There is also an option to enable WAF; leave it disabled as well. Click Add.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Figb_HTML.jpg
  4.

    The next step is to add back-end pools. In the Step 2 square, click + in the top-right of the screen, and enter a unique name for the back-end pool, similar to what’s shown in the following screenshot.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Figc_HTML.jpg
  5.

    Click + Add a backend to add the first web app from the earlier steps. Your screen should look similar to the following screenshot.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Figd_HTML.jpg
  6.

    Select App service from the first drop-down menu. The back-end host name fills in automatically. Your screen should look similar to the following screenshot. Click Add.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Fige_HTML.jpg
  7.

    You need to add the second web app from West Europe. Click + and add a back end. Select Backend host name and then select the other web app. Your screen should look similar to the following screenshot. Select Add.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Figf_HTML.jpg
  8.

    With both web apps (see Chapter 12) configured in the back-end pool, select Add to save the pool, and then move on to configure the routing rules.

     
  9.

    Enter a unique routing rule name. The routing rule forwards requests from the front end to the back-end pool. Leave the other settings at their defaults for this exercise. Click Add.

     
  10.

    Your screen should show that the front end, back end, and routing rules are configured. Click Next: Tags.

     
  11.

    Use the drop-down menu to select the Front Door app and location, and then click Next: Review + create. Select Create.

     
  12.

    The Azure portal changes to indicate that your Front Door deployment is underway. Wait for it to complete, and then select Go to resource to view the service.

     
  13.

    Your Front Door screen should look similar to the following screenshot.

     
../images/336094_2_En_13_Chapter/336094_2_En_13_Figg_HTML.jpg

Azure Firewall

The Azure Firewall Manager portal view shown in Figure 13-8 displays a preview feature; however, the portal changes over time. As new features are added, the portal real estate is consolidated or reorganized, while the underlying Azure Firewall APIs continue to be supported. The current API reference is at https://docs.microsoft.com/en-us/rest/api/firewall/azurefirewalls.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig8_HTML.jpg
Figure 13-8

Azure firewall manager view from Azure portal

Rules that allow traffic through the Azure Firewall are sometimes called a whitelist, and rules that block traffic may be referred to as a blacklist. Rules are evaluated in priority order; once a rule matches, no further rules are tested. The rules are similar to the network security group (NSG) rules from Chapters 2 and 7. A rule collection has a priority number (100–65,000) and an action (allow/deny), and each rule in it contains a name, protocol, source type, source IP, destination type, destination address, and destination ports.
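The following conceptual Python sketch illustrates priority-ordered, first-match evaluation; the rule values are hypothetical and greatly simplified compared to real Azure Firewall rules.

# Conceptual sketch of first-match rule evaluation by priority
RULES = [
    {"name": "allow-web", "priority": 100, "action": "allow", "dest_ports": {80, 443}},
    {"name": "deny-all", "priority": 65000, "action": "deny", "dest_ports": None},  # None = any port
]

def evaluate(dest_port: int) -> str:
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if rule["dest_ports"] is None or dest_port in rule["dest_ports"]:
            return f"{rule['action']} ({rule['name']})"   # first match wins
    return "deny (implicit)"

print(evaluate(443))    # allow (allow-web)
print(evaluate(3389))   # deny (deny-all)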

Figure 13-9 shows the Azure portal to add a network rule collection.
../images/336094_2_En_13_Chapter/336094_2_En_13_Fig9_HTML.jpg
Figure 13-9

Azure portal to add network rule collection with attributes

Summary

In this chapter, you learned about the Azure network platform and its many services, including an out-of-the-box global network service that you can leverage. You learned about the DDoS protection built into Azure and additional options for granular control and analysis. Then, you learned about the security features in the Azure Web Application Firewall and in load balancers.

You learned about the use of application gateways to improve the customer experience. You went through an exercise to create Azure Front Door service, which included many Azure features like DDoS protection and web application firewalls. The final topic covered using Azure Firewall to create custom rules to allow or deny network traffic.
