Chapter 4: Azure Network Security

In Chapter 1, An Introduction to Azure Security, we briefly touched on network security in Azure, but only discussed how network security is handled by Microsoft inside Azure data centers. As the network also falls under the shared responsibility model, in this chapter we will discuss network security from the user's perspective and how to handle the parts of security we are responsible for.

We will cover the following topics in this chapter:

  • Understanding Azure Virtual Network
  • Considering other virtual network security options
  • Azure DDoS protection
  • Azure Bastion
  • Hub-and-spoke network topology
  • Understanding Azure Application Gateway
  • Understanding Azure Front Door

Understanding Azure Virtual Network

The first step in the transition from an on-premises environment to the cloud is Infrastructure as a Service (IaaS). One of the key elements of IaaS is Virtual Networks (VNets). VNets are a virtual representation of our local network with IP address ranges, subnets, and all other network components that we would find in local infrastructure. Recently, we have seen a lot of cloud network components introduced to on-premises networks as well, with the introduction of Software-Defined Networking (SDN) in Windows Server 2016.

Before we start looking at VNet security, let's remember that naming standards should be applied to all Azure resources, and networking is no exception. As environments grow, this will help you have better control over your environment, easier management, and more insight into your security posture.

Each VNet that we create is a completely isolated piece of a network in Azure. We can create multiple VNets inside one subscription, or even multiple VNets inside one region. There is no direct communication between any VNets, even those created inside a single subscription or region, unless configured otherwise. The first thing that needs to be configured for a VNet is the IP address range. The next thing we need is a subnet with its own range. One VNet can have multiple subnets. Each subnet must have its own IP address range within the VNet's IP address range and cannot overlap with other subnets in the same VNet.

One thing we need to consider when defining the IP address range is that it should not overlap with other VNets we use. Even when there is no initial requirement to create a connection between different VNets, this may become a requirement in the future.

Important Note

VNets with overlapping IP address ranges cannot be connected to each other.

VNets are used for communication between Azure resources over private IP addresses. Primarily, they're used for communication between Azure Virtual Machines (VMs), but other resources can be configured to use private IP addresses for communication as well.

Communication between Azure VMs occurs over a Network Interface Card (NIC). Each VM can be assigned one or more NICs, depending on the VM size; larger sizes allow more NICs to be associated with a VM. Each NIC can be assigned a private and a public IP address. A private IP address is required, and a public IP address is optional. As an NIC must have a private IP address, it must be associated with a VNet and a subnet within that VNet.

As a first line of defense, we can use a Network Security Group (NSG) to control traffic for Azure VMs. NSGs can be used to control inbound and outbound traffic. Default inbound and outbound rules are created during the NSG's creation, but we can change (or remove) these rules and create additional rules based on our requirements. The default inbound rules are shown in the following screenshot:

Figure 4.1 – Inbound security rules

The default inbound rules will allow any traffic coming from within the VNet and any traffic forwarded from Azure Load Balancer. All other traffic will be blocked.

Conversely, the default outbound rule will allow almost any outbound traffic. The default rules will allow any outgoing traffic to a VNet or the internet, as in the following screenshot:

Figure 4.2 – Outbound security rules

To add a new inbound rule, we need to define the source, source port range, destination, destination port range, protocol, action, priority, and name. Optionally, we can add a description that will help us understand why this rule was created. An example of how to create a rule to allow traffic over port 443 (HTTPS) is shown in the following screenshot:

Figure 4.3 – Adding new inbound security rules

Alternatively, we can create the same rule with Azure PowerShell:

  1. First, we need to create a resource group where resources will be deployed:

    New-AzResourceGroup -Name "Packt-Security" `
    -Location "westeurope"

  2. Next, we need to deploy our VNet:

    New-AzVirtualNetwork -Name "Packt-VNet" `
    -ResourceGroupName "Packt-Security" `
    -Location "westeurope" `
    -AddressPrefix 10.11.0.0/16

  3. And finally, we deploy an NSG and create a rule:

    New-AzNetworkSecurityGroup -Name "nsg1" `
    -ResourceGroupName "Packt-Security" `
    -Location "westeurope"

    $nsg = Get-AzNetworkSecurityGroup -Name 'nsg1' `
    -ResourceGroupName 'Packt-Security'

    $nsg | Add-AzNetworkSecurityRuleConfig `
    -Name 'Allow_HTTPS' `
    -Description 'Allow_HTTPS' `
    -Access Allow -Protocol Tcp `
    -Direction Inbound `
    -Priority 100 `
    -SourceAddressPrefix Internet `
    -SourcePortRange * `
    -DestinationAddressPrefix * `
    -DestinationPortRange 443 `
    | Set-AzNetworkSecurityGroup

In order to add a new outbound rule, we need to define the same options as for an inbound rule. An example of how to create a rule to deny traffic over port 22 is shown in the following screenshot:

Figure 4.4 – Adding new outbound security rules

Note that priority plays a very important role when it comes to NSGs. A lower number means higher priority, and a higher number means lower priority. If two rules contradict each other, the rule with the lower number takes precedence. For example, if we create a rule to allow traffic over port 443 with a priority of 100 and a rule to deny traffic over port 443 with a priority of 400, traffic will be allowed, as the Allow rule has the higher priority.
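
To illustrate, the following is a minimal Azure PowerShell sketch; the NSG name, rule names, and priorities are illustrative, and it assumes a fresh NSG with no conflicting rules. Because priority 100 is evaluated before 400, HTTPS traffic ends up allowed:

# Hypothetical demo NSG; names and priorities are illustrative
$demoNsg = New-AzNetworkSecurityGroup -Name 'nsg-demo' `
-ResourceGroupName 'Packt-Security' -Location 'westeurope'

# Priority 100 - allow HTTPS from the internet
$demoNsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow_443' `
-Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
-SourceAddressPrefix Internet -SourcePortRange * `
-DestinationAddressPrefix * -DestinationPortRange 443 | Out-Null

# Priority 400 - deny the same traffic; it never applies, because 100 is evaluated first
$demoNsg | Add-AzNetworkSecurityRuleConfig -Name 'Deny_443' `
-Access Deny -Protocol Tcp -Direction Inbound -Priority 400 `
-SourceAddressPrefix Internet -SourcePortRange * `
-DestinationAddressPrefix * -DestinationPortRange 443 | Out-Null

# Persist both rules
$demoNsg | Set-AzNetworkSecurityGroup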

Again, we can use Azure PowerShell to create an outbound rule:

$nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow_SSH' `
-Description 'Allow_SSH' `
-Access Allow -Protocol Tcp `
-Direction Outbound -Priority 100 `
-SourceAddressPrefix VirtualNetwork -SourcePortRange * `
-DestinationAddressPrefix * -DestinationPortRange 22 `
| Set-AzNetworkSecurityGroup

An NSG can be associated with subnets and NICs. An NSG associated at the subnet level applies its rules to all the devices in that subnet. When an NSG is associated with an NIC, the rules apply only to that NIC. It's recommended to associate NSGs with subnets rather than NICs for simpler management. Managing traffic at the NIC level is easy when we have only a few VMs, but when the number of VMs grows to dozens, hundreds, or even thousands, it becomes very hard. It's much better to group VMs with similar requirements into a subnet and associate the NSG at the subnet level.

In order to associate an NSG with a subnet, follow these steps:

  1. Go to the Subnet section under NSG1 and select Associate, as in the following screenshot:
Figure 4.5 – NSG - Subnets blade

  2. Next, we select the VNet, as in the following screenshot:
Figure 4.6 – VNet association with NSG

  3. Finally, we select the subnet and confirm.
Figure 4.7 – Subnet association with NSG

Now, all of the Azure VMs added to this subnet will have all the NSG rules applied immediately.

The Azure PowerShell script to associate an NSG with a subnet is the following:

$vnet = Get-AzVirtualNetwork -Name 'Packt-VNet' `
-ResourceGroupName 'Packt-Security'

Add-AzVirtualNetworkSubnetConfig -Name FrontEnd `
-AddressPrefix 10.11.0.0/24 -VirtualNetwork $vnet

$subnet = Get-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet -Name FrontEnd

$nsg = Get-AzNetworkSecurityGroup `
-ResourceGroupName 'Packt-Security' -Name 'nsg1'

$subnet.NetworkSecurityGroup = $nsg

Set-AzVirtualNetwork -VirtualNetwork $vnet

Let's take a simple three-tier application architecture as an example. Here, we would have VMs accessible from outside (over the internet), and these VMs should be placed in a DMZ subnet, which would be associated with an NSG that would allow such traffic. Next, we would have an application tier, which would allow traffic inside a VNet but no direct access over the internet.

The application tier would be associated with an appropriate subnet, which would (with the NSG on a subnet level) deny traffic over the internet but allow any traffic coming from the DMZ. Lastly, we would have a database tier, which would allow only traffic coming from the application tier, using the NSG associated with a subnet level. This way, any request would be able to reach the DMZ tier. Once a request is validated, it can pass to the application tier and, from there, it can reach the database tier. No direct communication is allowed between the DMZ and database tiers, and a direct request is not allowed to go from the internet to the application or database tiers.

Figure 4.8 – 3-tier network setup

For more granular security, resources can be associated with Application Security Groups (ASGs). Using NSGs and ASGs together, we can create additional security rules with more network filtering options. Resources are associated with an ASG, and the ASG is then referenced in an NSG rule to allow traffic to reach only that specific group of resources inside the VNet or subnet. The association is usually made based on workload type, so traffic can be filtered at an additional level. Combining NSGs and ASGs provides better control: traffic is controlled not only through network segmentation but at the workload level as well, which allows specific traffic to be permitted only for some resources inside the same network.
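
As a hedged sketch (the ASG name, rule name, and priority are illustrative assumptions), an ASG can be created and then referenced as the destination of an NSG rule, so the rule follows the workload rather than an IP range:

# Create an ASG for web-tier workloads (illustrative name)
$webAsg = New-AzApplicationSecurityGroup -ResourceGroupName 'Packt-Security' `
-Name 'asg-web' -Location 'westeurope'

# Reference the ASG as the destination of an NSG rule instead of an IP range
$nsg = Get-AzNetworkSecurityGroup -Name 'nsg1' -ResourceGroupName 'Packt-Security'
$nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow_HTTPS_to_Web' `
-Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
-SourceAddressPrefix Internet -SourcePortRange * `
-DestinationApplicationSecurityGroup $webAsg -DestinationPortRange 443 `
| Set-AzNetworkSecurityGroup

# NICs are then joined to the ASG, for example when creating them:
# New-AzNetworkInterface ... -ApplicationSecurityGroup $webAsg

Because the rule targets the ASG rather than an address range, VMs can be added to or removed from the web tier without editing the NSG itself.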

We will now be looking at connecting on-premises networks with Azure.

Connecting on-premises networks with Azure

In most cases, we already have some sort of local infrastructure and want to use the cloud as a hybrid where we combine cloud and on-premises resources. In such cases, we need to think about how we are going to access VNet from our local network.

There are three options available:

  • Point-to-Site connection (P2S) is usually used for management and/or end-user connections. It enables you to create a connection from a single on-premises computer to an Azure VNet. The connection is secure but not persistent, so it shouldn't be used for production purposes, only to perform management and maintenance tasks or to access applications.
  • Site-to-Site connection (S2S) is a persistent connection that enables a network-to-network connection; in this case, from an on-premises network to a VNet, where all on-premises devices can connect to Azure resources and vice versa. Using S2S enables you to extend local infrastructure to Azure, use a hybrid cloud, and take advantage of the best things both on-premises and cloud networks can offer.
  • ExpressRoute is a direct connection from a local data center to Azure. It doesn't traverse the public internet and offers a much better connection. Compared to an S2S connection, ExpressRoute offers more reliability and speed with lower network latency.

Next, we will be checking out how to create an S2S connection.

Creating an S2S connection

In order to create an S2S connection, several resources must be created. First, we need to create a Virtual Network Gateway (VNG). During the process of creating a VNG, we need to define a subscription, a name for the VNG, the region where it will be created, and a VNet must be selected.

The VNet that we can select is limited to the region where the VNG will be created. A separate gateway subnet must be defined, so we can either select an existing one or it will be created automatically if it doesn't exist on the selected VNet.

In the same section, we need to define the public IP address (create a new one or select an existing one) and select to enable (or disable) active-active mode or BGP. An example is shown in the screenshot that follows.

The following details need to be filled in:

  • Subscription
  • Instance details
  • Public IP address

You can see an example of this in the following screenshot:

Figure 4.9 – Creating a VNG

Another resource we need to create is a Local Network Gateway (LNG). To create an LNG, we need to define the following:

  • Name
  • IP address
  • Address space
  • Subscription
  • Resource group
  • Location
  • BGP settings

The BGP settings are optional. The IP address we need to define is the public IP address of our VPN device, and the address range is the address range of our local network. An example is shown in the following screenshot:

Figure 4.10 – Creating an LNG

After a VNG and an LNG are created, we need to create a connection in VNet:

  1. Under Connection settings in the VNet blade, add a new connection.
  2. The following parameters need to be defined: Name, Connection type, Virtual network gateway, Local network gateway, Shared key (PSK), and IKE Protocol.
  3. Subscription, Resource group, and Location will be locked and will use the same options as the ones assigned to the selected VNet:
Figure 4.11 – Creating a VPN connection

After a connection is created in Azure, we still need to create a connection on our local VPN device. It's highly recommended to only use supported devices (most industry leaders are supported, such as Cisco, Palo Alto, and Juniper, to name a few). When configuring a connection on a local VPN device, we need to take into account all the parameters used on the Azure side.
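
The Azure side of this setup can also be scripted. The following is a minimal, hedged Azure PowerShell sketch; the gateway and connection names, the device public IP, the on-premises address range, and the shared key are illustrative values, it assumes a GatewaySubnet already exists in Packt-VNet, and the VNG deployment itself can take a long time to complete:

# Existing VNet and its gateway subnet (assumed to be present)
$vnet = Get-AzVirtualNetwork -Name 'Packt-VNet' -ResourceGroupName 'Packt-Security'
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet

# Public IP and IP configuration for the VNG
$gwPip = New-AzPublicIpAddress -Name 'vng-pip' -ResourceGroupName 'Packt-Security' `
-Location 'westeurope' -AllocationMethod Dynamic
$gwIpConfig = New-AzVirtualNetworkGatewayIpConfig -Name 'vng-ipconfig' `
-SubnetId $gwSubnet.Id -PublicIpAddressId $gwPip.Id

# Virtual network gateway (route-based VPN)
$vng = New-AzVirtualNetworkGateway -Name 'Packt-VNG' -ResourceGroupName 'Packt-Security' `
-Location 'westeurope' -IpConfigurations $gwIpConfig `
-GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# Local network gateway describing the on-premises VPN device and address space
$lng = New-AzLocalNetworkGateway -Name 'Packt-LNG' -ResourceGroupName 'Packt-Security' `
-Location 'westeurope' -GatewayIpAddress '203.0.113.10' -AddressPrefix '192.168.0.0/16'

# The S2S (IPsec) connection itself
New-AzVirtualNetworkGatewayConnection -Name 'Packt-S2S' -ResourceGroupName 'Packt-Security' `
-Location 'westeurope' -VirtualNetworkGateway1 $vng -LocalNetworkGateway2 $lng `
-ConnectionType IPsec -SharedKey 'ReplaceWithYourPSK'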

Once a connection is configured on both sides, the tunnel is up, and we should be able to access Azure resources from an on-premises network and from Azure to a local network. Of course, we can control how traffic flows and what can access what, how, and under which conditions.

We have now seen how to create connections between Azure VNets and local networks, but we often need to connect one VNet with another VNet. Of course, it's still important to keep the same level of security, even if everything is inside Azure. In the next section, we'll discuss how to connect networks in such a situation.

Connecting a VNet to another VNet

In a case where we have multiple VNets in Azure, we may need to create a connection between them in order to allow services to communicate between networks. There are two ways in which we can achieve this goal:

  • The first one would be to create an S2S between the VNets. In this case, the process is very similar to creating an S2S between a VNet and a local network. We need to create a VNG for both VNets, but we don't need an LNG. When creating a connection, we need to select VNet-to-VNet in Connection types and select appropriate VNGs.
  • Another option would be to create VNet peering. An S2S connection is secure and encrypted, but it passes over the internet. Peering uses an Azure backbone network to route traffic, and it never leaves Azure. This makes peering even safer.

To create peering between VNets, we need to carry out the following steps:

  1. Go to the Peerings section in the VNet blade and add a new peering.
  2. We need to define the name and the VNet we want to create a connection to.
  3. Other settings are also present, such as whether we want to allow connections to go both ways, or whether we want to allow forwarded traffic.

A peering example is shown in the following screenshot:

Figure 4.12 – Creating VNet-to-VNet peering

It's very important to understand additional security settings in VNet peering and how they affect network traffic.

The network access settings will define traffic access from one VNet to another. For example, we may want to enable access from VNet A to VNet B. But, because of security settings, we want to block access from VNet B to VNet A. This way, resources in VNet A will be able to access resources in VNet B, but resources in VNet B will not be able to access resources in VNet A.

Figure 4.13 – VNet peering

In the next section, we will define how we handle forwarded traffic. Let's say that VNet A is connected to VNet B and VNet C. There is no connection between VNet B and VNet C. With these settings, we define whether we want to allow traffic from VNet B to reach VNet C via VNet A. The same thing can be defined the other way around.

Figure 4.14 – VNet peering with multiple VNets

The gateway transit setting controls whether a peering connection can use a gateway in the peered network to reach other networks. For example, if VNet A is peered with VNet B, and VNet B is connected to an on-premises network (or another VNet), this setting defines whether traffic from VNet A can reach the on-premises network through VNet B's gateway. Compared to the previous example, one of the VNets is simply replaced with the on-premises network: if there is a connection between the on-premises network and VNet A, and a peering between VNet A and VNet B, gateway transit decides whether traffic from VNet B can reach the on-premises network.
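
In Azure PowerShell, these settings map to switches on the peering itself. The following hedged sketch (the VNet names are assumptions, and the gateway-related switches require a VPN gateway to exist in VNet A) shows a two-way peering with forwarded traffic and gateway transit enabled:

# Illustrative VNets; names are assumptions
$vnetA = Get-AzVirtualNetwork -Name 'VNet-A' -ResourceGroupName 'Packt-Security'
$vnetB = Get-AzVirtualNetwork -Name 'VNet-B' -ResourceGroupName 'Packt-Security'

# Peering from A to B: allow forwarded traffic and offer A's gateway to B
Add-AzVirtualNetworkPeering -Name 'A-to-B' -VirtualNetwork $vnetA `
-RemoteVirtualNetworkId $vnetB.Id `
-AllowForwardedTraffic -AllowGatewayTransit

# Peering from B to A: use the gateway in VNet A to reach networks behind it
Add-AzVirtualNetworkPeering -Name 'B-to-A' -VirtualNetwork $vnetB `
-RemoteVirtualNetworkId $vnetA.Id `
-AllowForwardedTraffic -UseRemoteGateways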

In the next section, we will be discussing another important security option, which is service endpoints in VNets.

VNet service endpoints

VNet service endpoints enable us to extend some Platform as a Service (PaaS) services to use private address spaces. With service endpoints, we connect services (that don't have this option by default) to our VNet, enabling them to communicate over a private network. This way, traffic is never exposed publicly, and data exchange is carried out over the Microsoft Azure backbone network.

Only some Azure services are supported when it comes to service endpoints. The list of services is subject to change and new services can be added over time. For more details, check the Microsoft documentation for service endpoints.

The first security benefit from using service endpoints is definitely that data never leaves the private space. Let's say that we have Azure App Service and Azure SQL Database connected to the VNet with service endpoints. This way, all communication between the web application on App Service and the database on Azure SQL Database would be done securely over the Azure backbone network. No data would be exposed publicly, as is the case when using the same services without endpoints.

Without this feature, both services would only have public IP addresses and communication between them going over the internet. Even though there are ways of doing this securely, with communication being sent encrypted over HTTPS, using service endpoints partly removes the security risk in this communication.

But the security benefits of using service endpoints don't stop there. As services connected to VNet with service endpoints are assigned to a specific subnet, all security rules associated with this subnet are applied to our services as well. If an NSG blocks specific traffic on our subnet, the same traffic will be blocked for PaaS services as well.

We can enable service endpoints on VNet either during the creation of a VNet or at a later time. Service endpoints are enabled on a subnet level, and this can be done either on a VNet or subnet configuration. Follow these steps to enable service endpoints in VNet:

  1. Go to the VNet blade and select Service endpoints. Click Add and select the subnet and services you want to use, as in the following screenshot:
Figure 4.15 – Adding PaaS service endpoints

  2. Go to the subnet configuration and select the services, as in the following screenshot:
Figure 4.16 – Enabling service endpoints on a subnet

Enabling service endpoints on a VNet and subnet is only half the job. We need to enable settings on a PaaS service for the service endpoint to take effect. When enabling the service endpoint in the service settings, only subnets with enabled service endpoints will show up.
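
The subnet-level part of this configuration can also be scripted. The following is a minimal sketch assuming the Packt-VNet/FrontEnd subnet created earlier and the Microsoft.Sql endpoint; the PaaS-side setting (for example, the virtual network rule on an Azure SQL server) still has to be configured separately:

# Enable the Microsoft.Sql service endpoint on the FrontEnd subnet
$vnet = Get-AzVirtualNetwork -Name 'Packt-VNet' -ResourceGroupName 'Packt-Security'

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'FrontEnd' `
-AddressPrefix 10.11.0.0/24 `
-ServiceEndpoint 'Microsoft.Sql' `
| Set-AzVirtualNetwork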

Service endpoints allow us to combine PaaS with IaaS. However, there are additional options when it comes to integrating PaaS with a private VNet using private endpoints. Let's take a look at this security enhancement as well.

Private endpoints

Private endpoints enable further integration between Azure PaaS services and VNets. Whereas service endpoints allow secure communication between PaaS and IaaS, private endpoints fully integrate PaaS into the VNet. Service endpoints allow communication over the Microsoft backbone network, but PaaS services are still available over the internet. A private endpoint integrates the service into the VNet: the service is assigned a private IP address, and all communication is done over the private network (VNet).

Using private endpoints, PaaS workloads can be accessed exclusively over a private network and are never exposed to access over the internet. This provides an additional network security layer and mitigates the risk of publicly exposing services (even behind a firewall). Services configured to use private endpoints can be accessed from the same VNet, a peered VNet, or on-premises using S2S or ExpressRoute, if other security rules (such as NSGs, for example) allow it. Not all PaaS services support private endpoints; new services are added all the time, so it's recommended to check the Microsoft documentation for the currently supported list.
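
As a hedged sketch, this is roughly how a private endpoint for an existing Azure SQL server could be created with Azure PowerShell. The server name, endpoint name, and subnet are assumptions, private endpoint network policies may need to be disabled on the subnet first, and the private DNS zone that usually accompanies this setup is omitted:

# Assumed existing resources: an Azure SQL server and the Packt-VNet/FrontEnd subnet
$sqlServer = Get-AzSqlServer -ResourceGroupName 'Packt-Security' -ServerName 'packt-sql-srv'
$vnet = Get-AzVirtualNetwork -Name 'Packt-VNet' -ResourceGroupName 'Packt-Security'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'FrontEnd'

# Connection object pointing at the SQL server's 'sqlServer' sub-resource
$plsConnection = New-AzPrivateLinkServiceConnection -Name 'sql-plsc' `
-PrivateLinkServiceId $sqlServer.ResourceId -GroupId 'sqlServer'

# The private endpoint itself, placed in the chosen subnet
New-AzPrivateEndpoint -ResourceGroupName 'Packt-Security' -Name 'sql-pe' `
-Location 'westeurope' -Subnet $subnet `
-PrivateLinkServiceConnection $plsConnection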

With private endpoints, we complete the network section that is available directly on Azure VNet settings. However, there are other things we need to consider when it comes to network security. Let's see what else is available to increase network security in Azure.

Considering other VNet security options

For additional security and traffic control, a Network Virtual Appliance (NVA) can be used. An NVA can be deployed from Azure Marketplace. Once deployed, you will realize that an NVA is, in fact, an Azure VM with a third-party firewall installed. Most industry leaders are present in Azure Marketplace and we can deploy firewall solutions that we are used to in an on-premises environment. It's important to mention that we don't have to decide between NSGs or NVAs; these can be combined for additional security.

Additional network security can be achieved with Azure Firewall as well. Azure Firewall is a firewall as a service. It allows better network control than an NSG and can be compared to an NVA solution in many aspects. But Azure Firewall also has a few advantages compared to an NVA, such as built-in high availability, the option to deploy to multiple availability zones, and cloud scalability. This means that no load balancers are needed. We can span Azure Firewall across multiple Availability Zones (and achieve an SLA of 99.99%), and scaling is configured to automatically accommodate any change in network traffic. Some options that are supported include application filtering, network traffic filtering, FQDN tags, service tags, outbound SNAT support, inbound DNAT support, and multiple public IP addresses. With these options, we can have complete control of network traffic in our VNet. It's important to mention that Azure Firewall is compliant with many security standards, including SOC 1 Type 2, SOC 2 Type 2, SOC 3, PCI DSS, and ISO 27001, 27018, 20000-1, 22301, 9001, and 27017.

Next, we will be looking at how to deploy and configure Azure Firewall with PowerShell.

Azure Firewall deployment and configuration

This example – to deploy and configure Azure Firewall – requires Azure PowerShell. However, Azure Firewall can be configured and deployed through the Azure portal as well.

Azure Firewall deployment

In order to deploy Azure Firewall, we need to set up the required network and infrastructure:

  1. First, we need to create subnets, create a VNet, and associate the subnets with the VNet:

    $FWsub = New-AzVirtualNetworkSubnetConfig -Name AzureFirewallSubnet `
    -AddressPrefix 10.0.1.0/26

    $Worksub = New-AzVirtualNetworkSubnetConfig `
    -Name Workload-SN `
    -AddressPrefix 10.0.2.0/24

    $Jumpsub = New-AzVirtualNetworkSubnetConfig `
    -Name Jump-SN `
    -AddressPrefix 10.0.3.0/24

    $testVnet = New-AzVirtualNetwork -Name Packt-VNet `
    -ResourceGroupName Packt-Security `
    -Location "westeurope" `
    -AddressPrefix 10.0.0.0/16 `
    -Subnet $FWsub, $Worksub, $Jumpsub

  2. Next, we need to deploy Azure VM, which will be used as a jump box (the VM we connect to in order to perform admin tasks on other VMs in the network; we don't connect to other VMs directly, but only through a jump box):

    New-AzVm -ResourceGroupName Packt-Security `
    -Name "Srv-Jump" `
    -Location "westeurope" `
    -VirtualNetworkName Packt-VNet `
    -SubnetName Jump-SN `
    -OpenPorts 3389 `
    -Size "Standard_DS2"

  3. After the jump box, we create a test VM:

    $NIC = New-AzNetworkInterface `
    -Name Srv-work `
    -ResourceGroupName Packt-Security `
    -Location "westeurope" `
    -SubnetId $testVnet.Subnets[1].Id

    $VirtualMachine = New-AzVMConfig `
    -VMName Srv-Work `
    -VMSize "Standard_DS2"

    $VirtualMachine = Set-AzVMOperatingSystem `
    -VM $VirtualMachine `
    -Windows -ComputerName Srv-Work `
    -Credential (Get-Credential) `
    -ProvisionVMAgent -EnableAutoUpdate

    $VirtualMachine = Add-AzVMNetworkInterface `
    -VM $VirtualMachine `
    -Id $NIC.Id

    $VirtualMachine = Set-AzVMSourceImage `
    -VM $VirtualMachine `
    -PublisherName 'MicrosoftWindowsServer' `
    -Offer 'WindowsServer' `
    -Skus '2016-Datacenter' `
    -Version latest

    New-AzVM -ResourceGroupName Packt-Security `
    -Location "westeurope" `
    -VM $VirtualMachine -Verbose

  4. Finally, we deploy Azure Firewall:

    $FWpip = New-AzPublicIpAddress `
    -Name "fw-pip" `
    -ResourceGroupName Packt-Security `
    -Location "westeurope" `
    -AllocationMethod Static `
    -Sku Standard

    $Azfw = New-AzFirewall -Name Test-FW01 `
    -ResourceGroupName Packt-Security `
    -Location "westeurope" `
    -VirtualNetworkName Packt-VNet `
    -PublicIpName fw-pip

    $AzfwPrivateIP = $Azfw.IpConfigurations.PrivateIPAddress

Next, we will look at the Azure Firewall configuration.

The Azure Firewall configuration

After Azure Firewall is deployed, it doesn't actually do anything. We need to create a configuration and rules in order for Azure Firewall to be effective:

  1. First, we will create a new route table with the BGP propagation disabled:

    $routeTableDG = New-AzRouteTable `
    -Name Firewall-rt-table `
    -ResourceGroupName Packt-Security `
    -Location "westeurope" `
    -DisableBgpRoutePropagation

    Add-AzRouteConfig -Name "DG-Route" `
    -RouteTable $routeTableDG `
    -AddressPrefix 0.0.0.0/0 `
    -NextHopType "VirtualAppliance" `
    -NextHopIpAddress $AzfwPrivateIP `
    | Set-AzRouteTable

    Set-AzVirtualNetworkSubnetConfig `
    -VirtualNetwork $testVnet `
    -Name Workload-SN `
    -AddressPrefix 10.0.2.0/24 `
    -RouteTable $routeTableDG `
    | Set-AzVirtualNetwork

  2. Next, we create an application rule that allows outbound access to www.google.com:

    $AppRule1 = New-AzFirewallApplicationRule `
    -Name Allow-Google `
    -SourceAddress 10.0.2.0/24 `
    -Protocol http, https `
    -TargetFqdn www.google.com

    $AppRuleCollection = New-AzFirewallApplicationRuleCollection `
    -Name App-Coll01 -Priority 200 `
    -ActionType Allow -Rule $AppRule1

    $Azfw.ApplicationRuleCollections = $AppRuleCollection

    Set-AzFirewall -AzureFirewall $Azfw

  3. We then create a rule to allow a DNS on port 53:

    $NetRule1 = New-AzFirewallNetworkRule `
    -Name "Allow-DNS" `
    -Protocol UDP -SourceAddress 10.0.2.0/24 `
    -DestinationAddress 209.244.0.3,209.244.0.4 `
    -DestinationPort 53

    $NetRuleCollection = New-AzFirewallNetworkRuleCollection `
    -Name RCNet01 -Priority 200 `
    -Rule $NetRule1 -ActionType "Allow"

    $Azfw.NetworkRuleCollections = $NetRuleCollection

    Set-AzFirewall -AzureFirewall $Azfw

  4. And then we need to assign a DNS to an NIC:

    $NIC.DnsSettings.DnsServers.Add("209.244.0.3")
    $NIC.DnsSettings.DnsServers.Add("209.244.0.4")
    $NIC | Set-AzNetworkInterface

Try connecting to the jump box, and then from the jump box to the test VM. From the test VM, try resolving multiple URLs. Only www.google.com should succeed, as all outbound traffic is denied except for the explicit allow rule we created.

It's important to remember that Azure Firewall offers a Premium SKU, which provides additional features, including TLS Inspection, IDPS, URL filtering, and web categories. TLS Inspection is used to analyze encrypted outbound data by decrypting outbound traffic, processing the data, and encrypting it again before forwarding it to its final destination. Intrusion Detection and Prevention System (IDPS) monitors all network activities, provides analyses, and detects potential malicious activities. URL filtering extends the standard capability of FQDN filtering (www.packt.com) to consider the entire URL (https://www.packtpub.com/product/mastering-azure-security/9781839218996). Web categories provide the option to allow or deny access to certain website categories, such as gambling or social media. Azure Firewall helps us to control and inspect traffic, but there are many other threats that can disrupt and endanger our network communication. Let's take a look at a solution that can help us mitigate DDoS attacks.

Azure DDoS protection

Distributed Denial of Service (DDoS) is one of the most common cyber attacks. A DDoS attack attempts to overload system resources and make a system unavailable to legitimate users. An attack can target any endpoint that is publicly reachable through the internet.

Azure DDoS protection comes in two different flavors: Basic and Standard.

Every property in Azure is protected by DDoS Basic protection at no additional cost. To protect its customers and prevent impact on other customers, Basic protection provides defense against common network-layer attacks with always-on traffic monitoring and real-time mitigation. It requires no additional configuration or any user action; it is a built-in service protecting all Azure services, both IaaS and PaaS.

The standard plan provides additional functionalities, including the following:

  • Guaranteed availability
  • Cost protection
  • Custom mitigation policies
  • Metrics and alerts
  • Mitigation reports and flow logs
  • DDoS rapid response support

Azure DDoS Protection Standard is a tenant-wide service protecting up to 100 public IP addresses by default, with an additional charge for each public IP address over 100. There is no need to deploy an instance in each subscription; one protection plan can cover endpoints across multiple subscriptions in the same tenant.

However, the Standard plan comes in a bundle of 100 IP addresses by default and should be used only when multiple endpoints require protection. For resource-specific attacks on the application layer, take a look at the Web Application Firewall with Application Gateway and Front Door (later in this chapter).
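
A minimal Azure PowerShell sketch of enabling Standard protection could look like the following; the plan name is illustrative, and the plan is then linked to an existing VNet:

# Create a DDoS protection plan (one plan can cover VNets across subscriptions in the tenant)
$plan = New-AzDdosProtectionPlan -ResourceGroupName 'Packt-Security' `
-Name 'Packt-DdosPlan' -Location 'westeurope'

# Link the plan to an existing VNet and turn Standard protection on
$vnet = Get-AzVirtualNetwork -Name 'Packt-VNet' -ResourceGroupName 'Packt-Security'
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork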

Azure Bastion

When running IaaS, exposing management ports such as RDP (port 3389) or SSH (port 22) is not a good idea. Bad actors constantly scan public networks in search of exposed endpoints. If they detect such an open port, they will trigger a brute-force attack in the hope of gaining access to a service. This is usually mitigated by creating a jump box, a VM that enables us to securely connect to it before connecting to other VMs on the network.

Azure Bastion is a service that provides the ability to connect to our VMs using the browser and Azure portal. Similar to a jump box, it provides a secure way to connect to our virtual network. But unlike a jump box (which we need to maintain and update), Azure Bastion is a fully managed service. With Azure Bastion, we are able to securely access VMs over RDP/SSH from the Azure portal over TLS.
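
Deployment requires a dedicated subnet named AzureBastionSubnet and a Standard public IP. A hedged Azure PowerShell sketch (the Bastion name, public IP name, and subnet range are illustrative) follows:

# Bastion requires a subnet named exactly 'AzureBastionSubnet' in the target VNet
$vnet = Get-AzVirtualNetwork -Name 'Packt-VNet' -ResourceGroupName 'Packt-Security'
Add-AzVirtualNetworkSubnetConfig -Name 'AzureBastionSubnet' `
-AddressPrefix 10.11.250.0/26 -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork

# Standard-SKU public IP for the Bastion host
New-AzPublicIpAddress -Name 'bastion-pip' -ResourceGroupName 'Packt-Security' `
-Location 'westeurope' -AllocationMethod Static -Sku Standard

# Create the Bastion host (deployment can take a while)
New-AzBastion -ResourceGroupName 'Packt-Security' -Name 'Packt-Bastion' `
-PublicIpAddressRgName 'Packt-Security' -PublicIpAddressName 'bastion-pip' `
-VirtualNetworkRgName 'Packt-Security' -VirtualNetworkName 'Packt-VNet'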

Hub-and-spoke network topology

For large and enterprise organizations, a hybrid cloud can become complex, and hard to manage and secure. With multiple VNets and hybrid cloud implementation, it can become difficult to monitor network traffic or even know the exact traffic flow. For complex network topologies, it is recommended to implement the hub-and-spoke model. In this model, we have a central point (hub) to which all on-premises connections and VNets (spokes) are connected. This way, traffic is easy to monitor, inspect, and manage.

There are two possible implementations for the hub-and-spoke topology in Azure.

Hub VNet

Hub VNet implementation has a single VNet and a central network where everything else is connected:

Figure 4.17 – Hub virtual network

All other networks (spokes) are connected to the hub VNet. On-premises networks are connected over VPN Gateway (or ExpressRoute), and VNets are connected with peering. A hub network usually also hosts shared network resources, such as Azure Firewall, Azure Bastion, and Azure DDoS Protection. All traffic goes through the hub network, which enables us to easily manage network traffic and monitor it in a central location.

Let's say we need to connect from an on-premises network to one of the VNets in Azure. The on-premises network is connected to the hub over VPN, and the VNet is connected to the hub over peering. If traffic needs to go from one network to another, it needs to go through the hub network. Using the hub network, we can define what types of traffic are allowed as well as monitor and inspect network packages.

A similar process can be applied when only VNets are in place. All VNets are only connected to hub networks over peering. If traffic needs to go from one network to another, it needs to go through the hub network.

Azure Virtual WAN is an alternative to the previous design, replacing the hub VNet with a managed service. All on-premises networks and VNets are still connected to the hub, but instead of managing hub networks ourselves, we have a managed service in place. Besides not managing hub networks, another benefit of this design is easier connectivity of networks across regions. Under Azure Virtual WAN, we can have multiple hubs in different regions for connecting VNets in the corresponding region. Communication between regions is done over a connection between hubs (over the Azure backbone network). All hubs are still managed in a central location, in Azure Virtual WAN.

Let's move on to networking in PaaS and see what else is available, besides securing PaaS with service endpoints. We can have better network control and prevent unwanted traffic even with publicly available endpoints.

Understanding Azure Application Gateway

The next Azure service that can help increase security is Application Gateway. Application Gateway is a web-traffic load balancer that enables traffic management for web applications. It operates at layer 7 (L-7, the application layer), which means that it supports URL-based routing and can route requests based on the URI path or host header.

Application Gateway supports Secure Sockets Layer/Transport Layer Security (SSL/TLS) termination at the gateway. After the gateway, traffic flows unencrypted to the backend servers, which are relieved of encryption and decryption overhead. However, if this is not an option on account of security, compliance, or any other requirements, full end-to-end encryption is supported as well.

Application Gateway also supports scalability and zone redundancy. Scalability allows autoscaling depending on the traffic load, and zone redundancy allows the service to be deployed to multiple availability zones in order to provide better fault resiliency and remove the need to deploy the service to multiple zones manually.

Overall, Azure Application Gateway is an L-7 load balancer and we could question the security aspects of it (if we exclude SSL/TLS termination), as it's more a question of reliability and availability. However, Application Gateway has an amazing security feature called Azure Web Application Firewall (WAF). WAF protects web applications against common exploits and vulnerabilities.

WAF is based on the Open Web Application Security Project (OWASP) and is updated to address the latest vulnerabilities. As it's PaaS, all updates are done automatically without any user configuration. From a policy perspective, we can create multiple custom policies and apply different sets of policies to different web applications.

WAF can operate in two modes – detection and prevention. In detection mode, WAF will detect all suspicious requests but will not stop them, only log them. It's important to mention that WAF can be integrated with different logging tools, so logs can be stored for auditing purposes. In prevention mode, WAF will also block any malicious request, return a 403 unauthorized access exception, and close the connection. Prevention mode also logs all attacks.
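
As a hedged sketch, this is roughly how a WAF configuration in prevention mode could be defined with Azure PowerShell and passed to a new application gateway; the rule set version, SKU, and gateway name are illustrative, and the rest of the gateway definition (IP configurations, listeners, backend pools, and so on) is omitted:

# WAF configuration: prevention mode with the OWASP 3.1 rule set (illustrative values)
$wafConfig = New-AzApplicationGatewayWebApplicationFirewallConfiguration `
-Enabled $true `
-FirewallMode 'Prevention' `
-RuleSetType 'OWASP' `
-RuleSetVersion '3.1'

# A WAF-enabled SKU is required for the gateway itself
$sku = New-AzApplicationGatewaySku -Name 'WAF_v2' -Tier 'WAF_v2' -Capacity 2

# Both objects are then passed to New-AzApplicationGateway, for example:
# New-AzApplicationGateway -Name 'Packt-AppGw' -ResourceGroupName 'Packt-Security' `
#     -Location 'westeurope' -Sku $sku -WebApplicationFirewallConfiguration $wafConfig ...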

Attacks are categorized by four severity levels:

  • Critical (5)
  • Error (4)
  • Warning (3)
  • Notice (2)

Each level has a severity value and the threshold for blocking is 5. So, a single critical issue is enough to block a session with the value 5, but at least two error issues are needed to block a session, as one error with a value of 4 is below the threshold.

WAF works as a filter before Application Gateway – it will process a request, decide whether it's valid, and, based on this decision, it will allow the request to proceed to Application Gateway or reject the request. Once the request is allowed by WAF, Application Gateway acts as a normal L-7 load balancer, as if WAF was turned off. You can see that in the following screenshot:

Figure 4.18 – Application Gateway traffic flow

Some of the attacks that can be detected and prevented with WAF are listed here:

  • SQL injection
  • Cross-site scripting
  • Command injection
  • Request smuggling
  • Response splitting
  • HTTP protocol violations and anomalies
  • Protection against crawlers and scanners
  • Geo-filter traffic

WAF on Application Gateway supports logging options to Azure Monitor, diagnostic logs to storage accounts, and integration with security tools such as Azure Security Center or Azure Sentinel.

Understanding Azure Front Door

Azure Front Door works very similarly to Application Gateway but on a different level. Like Application Gateway, it's an L-7 load balancer with an SSL offload. The difference is that Application Gateway works with services in a single region, whereas Azure Front Door allows us to define, manage, and monitor routing on a global level. With Azure Front Door, we can ensure the highest availability using global distribution. A similar thing can be achieved with Azure Traffic Manager (in terms of global distribution), but this service lacks L-7 load balancing and SSL offloading.

Azure Front Door effectively combines the capabilities of Application Gateway and Traffic Manager to provide an L-7 load balancer with global distribution. It's also important to mention that WAF is available on Azure Front Door. Using WAF on Azure Front Door, we can provide web application protection for globally distributed applications.
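
A hedged sketch of creating a WAF policy for Front Door with Azure PowerShell follows; it assumes the Az.FrontDoor module, the names are illustrative, and the policy still has to be attached to the Front Door frontend host afterward:

# Managed rule set providing the OWASP-style protections (illustrative version)
$managedRule = New-AzFrontDoorWafManagedRuleObject -Type 'DefaultRuleSet' -Version '1.0'

# WAF policy in prevention mode (Front Door WAF policy names allow letters and numbers only)
New-AzFrontDoorWafPolicy -ResourceGroupName 'Packt-Security' -Name 'PacktFrontDoorWaf' `
-EnabledState Enabled -Mode Prevention -ManagedRule $managedRule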

Summary

In this chapter, we addressed network security, having covered the management of cloud identities earlier in the book. We need to remember that network security doesn't stop with IaaS and VNets. Network security basics are usually associated with VNets and NSGs, but even with IaaS it doesn't stop there: we have options to extend protection with an NVA or Azure Firewall. With PaaS, we can leverage VNet service endpoints and private endpoints, and extend security further with services such as Application Gateway or Azure Front Door.

However, with all these restrictions limiting who, how, when, and from where we can access our resources, we still need to handle sensitive information and data. The next chapter will address how we can manage certificates, secrets, passwords, and connection strings using Azure Key Vault.

Questions

As we conclude, here is a list of questions for you to test your knowledge regarding this chapter's material. You will find the answers in the Assessments section of the Appendix:

  1. We can control traffic in virtual networks with…

A. A network interface

B. A Network Security Group (NSG)

C. An Access Control List (ACL)

  2. What type of connection is available with on-premises networks?

A. Point-to-Site

B. Site-to-Site

C. VNet-to-VNet

  3. A connection between VNets can be made with…

A. VNet-to-VNet

B. VNet peering

C. Both of the above

D. None of the above

  4. Which feature allows us to connect PaaS services to a VNet?

A. Service connection

B. Service endpoints

C. ExpressRoute

  5. When multiple networks are involved…

A. We can define a route

B. Traffic is blocked by default

C. Traffic is allowed by default

D. A and B are correct

E. A and C are correct

  6. What type of attack cannot be blocked with Application Gateway?

A. SQL injection (SQLi)

B. Cross-Site Scripting (XSS)

C. Distributed Denial of Service (DDoS)

D. HTTP protocol violations
