Azure Virtual Network (VNet) should be the starting point before deploying any virtual machine, web app, or storage container. The virtual networks in your Azure subscription are the global and local networks used for connectivity in the cloud. Internally, a VNet enables secure private connections for all of your business resources. Externally, it supports exposing public endpoints to the Internet, where they can be reached by legitimate clients and targeted by cyberattacks.
This chapter focuses on the Azure cloud-native networking services (i.e., platform as a service). You'll learn about each service, how it is used in the Azure platform, and why it is helpful to your business. In Chapter 2, you were given an overview of Azure networking by creating a virtual network for all services and then dividing that network into subnets.
A VNet's IP address space is divided using Classless Inter-Domain Routing (CIDR). The size of an address range is represented by a slash and a prefix length (i.e., /16, /24, /128, and so on).
CIDR reduces the load on large network routers by aggregating contiguous IP address ranges into single routes, which shrinks the size of the routing tables. CIDR notation is used inside Microsoft Azure for all VNets.
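As a quick illustration using Python's standard `ipaddress` module, a /16 address space can be subdivided into /24 subnets (as you do when carving a VNet into subnets), and CIDR aggregation works in reverse, summarizing contiguous ranges into one routing-table entry. The address ranges below are hypothetical.

```python
import ipaddress

# A hypothetical VNet address space: 10.0.0.0/16 holds 65,536 addresses.
vnet = ipaddress.ip_network("10.0.0.0/16")
print(vnet.num_addresses)      # 65536

# Divide it into /24 subnets of 256 addresses each.
subnets = list(vnet.subnets(new_prefix=24))
print(len(subnets))            # 256
print(subnets[0])              # 10.0.0.0/24
print(subnets[1])              # 10.0.1.0/24

# CIDR aggregation: two contiguous /24 routes collapse into one /23 entry,
# which is how CIDR shrinks routing tables.
merged = list(ipaddress.collapse_addresses(subnets[:2]))
print(merged[0])               # 10.0.0.0/23
```

The same arithmetic applies at any prefix length; a router advertising the /23 supernet replaces two table entries with one.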
This chapter will not make you a network architect, but you will learn about the advantages of using Azure networking platform services, including how they help route IP traffic and improve security.
You should reference Figure 13-2 as you continue through this chapter and learn about the network services and security features, and how they support Microsoft Azure's network layer of defense. We don't want to go too deeply into a network-layer, cybersecurity-focused discussion; however, a high-level reference to the Open Systems Interconnection (OSI) model is helpful.
Table 13-1. OSI Model to Reference Azure Network Services Support

| Layer Name | Number | Description |
|---|---|---|
| Application | 7 | Data: applications that users and computers interact with |
| Presentation | 6 | Data: usable data, encryption/decryption |
| Session | 5 | Data: synchronization; controls ports, sessions, and connections |
| Transport | 4 | Segments: host-to-host data over TCP or UDP protocols |
| Network | 3 | Packets: IP addressing for data delivery |
| Data Link | 2 | Frames: MAC address (NIC); logical link for data |
| Physical | 1 | Bits: bitstream for the network interface card (NIC) |
An excellent and free computer security glossary is at https://csrc.nist.gov/Glossary.
Azure DDoS Protection
Azure DDoS (distributed denial of service) protection mitigation is enabled with the Basic tier as part of the network platform (i.e., when you build a VNet). The service continuously monitors traffic while providing defense against common network-level attacks. As a reminder, according to the National Institute of Standards and Technology (NIST), a DDoS attack is "a denial of service technique that uses numerous hosts to perform the attack." A simpler definition is an overwhelming malicious traffic flood intended to disrupt services and make them unavailable. The concept becomes clear when you consider the magnitude of globally coordinated DDoS attacks, whose network bandwidth disruption is sometimes measured in terabits per second.
Before you can use the Azure DDoS protection network platform service at the Standard tier, the purchase must be completed so that it appears as an option when you use the portal to create a VNet.
You are billed a service fee plus a data processing fee for the protected resources in the virtual network. The data processing fee is calculated on the egress (outgoing) bandwidth from any VM (i.e., traffic from the VM to the Internet). IP traffic that travels across an ExpressRoute or virtual private network (VPN) gateway is not counted. As you continue learning which Microsoft Azure services add to overall charges, note that the real-time analysis and the storage of application traffic for historical analysis are part of the data processing charges.
We all want to know how much a network cloud service costs. The worst answer is, "it depends." The (retail) monthly charge at the time of writing is $2,944/month to protect 100 Azure resources. These resources (e.g., IaaS VMs) can be spread across a single Azure tenant or multiple Azure subscriptions. Every resource over 100 costs another $30/month per resource.
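The pricing model just described can be sketched as a short calculation. This is a minimal illustration of the figures quoted above (flat fee covering 100 resources, $30 per additional resource); the function name is hypothetical, and the separate data processing charges are deliberately omitted.

```python
def ddos_standard_monthly_fee(protected_resources: int) -> float:
    """Estimate the DDoS protection service fee described in the text:
    a flat $2,944/month covers the first 100 resources, and each resource
    beyond 100 adds $30/month. Data processing charges are billed
    separately and are not included here."""
    BASE_FEE = 2944.0            # covers up to 100 protected resources
    OVERAGE_PER_RESOURCE = 30.0  # each resource beyond the first 100
    extra = max(0, protected_resources - 100)
    return BASE_FEE + extra * OVERAGE_PER_RESOURCE

print(ddos_standard_monthly_fee(80))   # 2944.0 (under the 100-resource cap)
print(ddos_standard_monthly_fee(125))  # 3694.0 (25 extra resources x $30)
```

Because the base fee is flat, the per-resource cost drops as you approach 100 protected resources, which is worth factoring into a consolidation decision.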
Azure resources that can be protected include the following:

- Application Gateway
- Application Gateway (with WAF)
- IaaS virtual machine attached to a public IP
- Load balancers
- Azure Service Fabric
- IaaS network virtual appliance (i.e., a Marketplace NVA)
Azure Service Fabric is a platform of clustered nodes that, as a service, runs and scales applications on the OS of your choice. DevOps teams can deploy applications in any programming language on Service Fabric using guest executables and containers. You can optionally implement communication using the Service Fabric SDK (software development kit), which supports .NET and Java.
Web Application Firewall
Malicious attacks attempt to exploit commonly known vulnerabilities. The Azure Web Application Firewall (WAF) follows defined rules based on the Open Web Application Security Project (OWASP) core rule sets 3.1, 3.0, or 2.2.9. The firewall also allows the security administrator to create custom rules that allow or deny IP traffic.
You can learn more about the OWASP and the ModSecurity Core Rule Set used in Azure WAF at https://owasp.org/www-project-modsecurity-core-rule-set/.
The OWASP Top 10 application security risks are as follows:

- A1-Injection
- A2-Broken Authentication
- A3-Sensitive Data Exposure
- A4-XML External Entities (XXE)
- A5-Broken Access Control
- A6-Security Misconfiguration
- A7-Cross-Site Scripting (XSS)
- A8-Insecure Deserialization
- A9-Using Components with Known Vulnerabilities
- A10-Insufficient Logging and Monitoring
The program accepts contributions to its top 10 list.
Application Gateway
The cloud-native Azure Application Gateway provides Transport Layer Security (TLS) protocol termination (sometimes referred to by the older term Secure Sockets Layer (SSL) offloading). Azure Application Gateway manages web traffic based on HTTP request headers or URI paths, so it is marketed as a decision-based routing gateway. It routes traffic at the top of the OSI model, layer 7 (see Table 13-1). It is used primarily for web traffic and includes built-in load-balancing capabilities.
The traditional third-party (on-premises) application gateway, also called an application proxy, works at OSI layer 4 (TCP and UDP). It supports ephemeral IP addresses and port translation for protocols used by instant messaging, BitTorrent, and File Transfer Protocol (FTP). Be aware that the Application Gateway total cost has a basic version (small, medium, large) and an integrated web application firewall version (small, medium, large), and both versions have a data processing cost.
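The layer-7, path-based routing decision described above can be sketched in a few lines. This is a simplified illustration, not the gateway's actual implementation: the path patterns and back-end pool names are hypothetical, and a real Application Gateway also inspects headers, terminates TLS, and load-balances within each pool.

```python
# Hypothetical path-based routing rules: first match wins, with a
# default pool for unmatched requests.
PATH_RULES = [
    ("/images/", "image-server-pool"),
    ("/video/", "video-server-pool"),
]
DEFAULT_POOL = "web-server-pool"

def route_request(uri_path: str) -> str:
    """Return the back-end pool name for a request URI path."""
    for prefix, pool in PATH_RULES:
        if uri_path.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(route_request("/images/logo.png"))   # image-server-pool
print(route_request("/video/intro.mp4"))   # video-server-pool
print(route_request("/index.html"))        # web-server-pool
```

The point is that the routing decision uses request content (the URI path), which is only visible at layer 7, whereas a layer-4 load balancer sees only addresses, ports, and protocol.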
Load Balancers
Azure load balancers work at OSI layer 4 (refer to Table 13-1) to distribute incoming traffic across different virtual machines. It is important to understand that layer 4 protocols, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), are where hackers perform port scanning as they try to identify open network ports. Restricting access with a configuration that narrowly allows traffic reduces the security risk.

The load balancer distributes traffic using a five-tuple hash composed of the following:

- Source IP address
- Source port number
- Destination IP address
- Destination port number
- Protocol type
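A rough sketch of five-tuple distribution follows: hashing the five fields means the same flow always lands on the same back-end VM, while different flows spread across the pool. This is illustrative only; the VM names are hypothetical, and Azure's load balancer uses its own internal hash, not the one shown here.

```python
import hashlib

BACKENDS = ["vm-0", "vm-1", "vm-2"]  # hypothetical back-end pool

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    """Map a five-tuple to a back-end VM deterministically."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return BACKENDS[digest % len(BACKENDS)]

# The same five-tuple always routes to the same VM (flow affinity).
a = pick_backend("203.0.113.7", 50123, "10.0.1.4", 443, "TCP")
b = pick_backend("203.0.113.7", 50123, "10.0.1.4", 443, "TCP")
print(a == b)  # True
```

Note that a client that reconnects from a different source port produces a different hash and may land on a different VM, which is why session affinity is a separate, opt-in feature.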
Data that arrives at the load balancer's front end (inbound) is distributed to the back-end pool (outbound). Azure load balancers use health probes to determine which virtual machines in an Azure scale set should receive traffic.
A public load balancer supports Internet traffic from a public IP address to a back-end pool of servers. The public IP is separated from the internal IP using network address translation (NAT) . An internal load balancer uses private IP addresses only to route traffic inside a virtual network.
Health probes support three protocols:

- HTTPS (HTTP probe with TLS wrapper)
- HTTP
- TCP
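The essence of the TCP probe type listed above is simple: the back end is healthy if a TCP connection to its port completes within a timeout. The sketch below demonstrates this against a throwaway local listener; it is a conceptual illustration, not the probe implementation Azure uses.

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate with a hypothetical local back end.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(tcp_probe("127.0.0.1", port))   # True: the port accepts connections
listener.close()
print(tcp_probe("127.0.0.1", port))   # False: the listener is gone
```

HTTP and HTTPS probes go one step further: after connecting, they issue a request and require a 200-class response, so they can detect an application that is up but unhealthy.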
Azure load balancers add a layer of security in the native cloud network platform and also reduce the risk of DDoS attacks when they are narrowly configured. Permitting only the intended IP traffic and denying the rest reduces the security risk of server or traffic overload. There are two purchasing options, or SKUs, to choose from: Basic and Standard.
Customization is very granular with the Standard load balancer configuration. Also, Azure virtual machine scale sets and availability sets can be connected to only one load balancer, Basic or Standard, but not both. Once you select a SKU, it cannot be changed; the load balancer can only be deleted and rebuilt.
Standard SKU features include the following:

- Back-end pool size of up to 1,000 instances
- Internal load balancer for HA ports
- Outbound NAT rule configuration
- Reset capability for idle TCP connections
- Multiple front-end support, inbound and outbound
Azure Front Door Service
The Azure Front Door service features include the following:

- Built-in DDoS protection
- Application layer security
- Caching
- High availability
Front Door services support web and mobile apps, cloud services, and virtual machines. Also, you can include on-premises services in your Front Door architecture for hybrid deployments or migration strategies to the Azure cloud. If you do not currently have a Front Door service, follow the exercise to create the service for testing.
- 1.
In the Home view of your Azure portal, select Create a resource. Enter Front Door, and select Create.
- 2.
In the Basics tab, enter a new resource group and location, and then click Next : Configuration. Your screen should look similar to the following screenshot.
- 3.
In the Step 1 square, click + to enter the front-end host name, similar to what’s shown in the following screenshot. Notice that you have the option to enable session affinity (i.e., sticky connections; leave disabled for this exercise). Also, there is an option to enable WAF (leave it disabled for this exercise). Click Add.
- 4.
The next step is to add back-end pools. In the Step 2 square, click + in the top-right of the screen, and enter a unique name for the back-end pool, similar to what’s shown in the following screenshot.
- 5.
Click + Add a backend to add the first web app from the earlier steps. Your screen should look similar to the following screenshot.
- 6.
Select App service from the first drop-down menu. The back-end host name fills in automatically. Your screen should look similar to the following screenshot. Click Add.
- 7.
You need to add the second web app from West Europe. Click + and add a back end. Select Backend host name and then select the other web app. Your screen should look similar to the following screenshot. Select Add.
- 8.
With both web apps configured for the back-end pool (see Chapter 12), select Add. Then begin configuring the routing rules by selecting Add.
- 9.
Enter a unique routing rule name. This connects the front-end request to forward to the back-end pool. Leave the other features at their default for this exercise. Click Add.
- 10.
Your screen should show that the front end, back end, and routing rules are configured. Click Next : Tags.
- 11.
Use the drop-down menu to select the Front Door app and location, click Next : Review + create, and then select Create.
- 12.
The Azure portal changes to indicate that your Front Door deployment is underway. Wait for it to complete, and then select Go to resource to view the service.
- 13.
Your Front Door screen should look similar to the following screenshot.
Azure Firewall
The rules that allow traffic through the Azure firewall are sometimes called a whitelist, and rules that block traffic may be referred to as a blacklist. Rules have different priorities; they are tested in priority order, and once a rule matches, no further rules are tested. The rules are similar to a network security group (NSG) from Chapters 2 and 7. Rules contain a name, protocol, source type, source IP, destination type, destination address, and destination ports, and must include a priority number (100–65000) and an action (allow/deny).
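The priority-ordered, first-match-wins evaluation described above can be sketched as follows. This is a simplified conceptual model, not Azure Firewall's implementation: the rule fields are reduced to a destination port, the rule names are hypothetical, and the implicit default-deny at the end is an assumption for illustration.

```python
# Hypothetical rules, deliberately listed out of priority order to show
# that the priority number, not list position, decides precedence.
RULES = [
    {"name": "deny-db",   "priority": 200, "dest_port": 1433, "action": "deny"},
    {"name": "allow-web", "priority": 100, "dest_port": 443,  "action": "allow"},
]

def evaluate(dest_port: int) -> str:
    """Test rules in priority order (lower number first); first match wins."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if rule["dest_port"] == dest_port:
            return rule["action"]   # matched: no further rules are tested
    return "deny"                    # assumed implicit default deny

print(evaluate(443))   # allow
print(evaluate(1433))  # deny
print(evaluate(8080))  # deny (no rule matched)
```

Because evaluation stops at the first match, a broad low-priority-number rule can shadow a more specific rule below it, so priority numbering deserves as much review as the rules themselves.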
Summary
In this chapter, you learned about the Azure network platform and its many services, including an out-of-the-box global network service that you can leverage. You learned about the DDoS protection built-in to Azure and additional options for granular control and analysis. Then, you learned about the security features in Azure Web Application Firewall and load balancers.
You learned about the use of application gateways to improve the customer experience. You went through an exercise to create Azure Front Door service, which included many Azure features like DDoS protection and web application firewalls. The final topic covered using Azure Firewall to create custom rules to allow or deny network traffic.