© Julian Soh, Marshall Copeland, Anthony Puca, and Micheleen Harris 2020
J. Soh et al., Microsoft Azure, https://doi.org/10.1007/978-1-4842-5958-0_2

2. Overview of Azure Infrastructure as a Service (IaaS) Services

Julian Soh (Washington, USA), Marshall Copeland (Texas, USA), Anthony Puca (Colorado, USA), and Micheleen Harris (Washington, USA)

The National Institute of Standards and Technology (NIST), a division of the US Department of Commerce, defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Within the NIST definition of cloud computing, three service models exist: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).

IaaS is defined as the consumer’s ability to provision processing, storage, networks, and other fundamental computing resources, where the consumer can deploy and run arbitrary software, which includes operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components.

When this chapter was written, Microsoft Azure had 170 services, of which 55 were IaaS. In this chapter, we review the newest and most popular IaaS services and the major changes made to the existing ones. IaaS services run across all of Microsoft’s various Azure clouds and regions. There are 15 compute, 19 networking, and 16 storage services. A full list of services by category is at https://docs.microsoft.com/en-us/azure/index#pivot=products.

Azure Compute includes the following services.
  • Linux virtual machines

  • Windows virtual machines

  • Virtual machine availability sets

  • Virtual machine scale sets

  • Dedicated hosts

  • Proximity placement groups

  • Azure Batch

  • Azure Service Fabric

  • Azure Kubernetes Service (AKS)

  • CycleCloud

  • Azure VMware Solutions by CloudSimple

Azure Virtual Machines

Each of the Azure compute services offers different scalability options and service-level agreements (SLAs), ranging from 99% to 99.99%. We’ll address this for each service reviewed in this chapter.

Each Azure virtual machine (VM) provides anywhere from 1 to 480 CPU cores, or up to 960 CPU threads, among the most compute power available from any public cloud. Memory for a single system ranges from 1 GB to 24 TB, and local compute storage ranges from 4 GB to 64 TB with up to 160,000 IOPS (input/output operations per second). This does not include the cloud storage options discussed later in this chapter.

Azure compute services offer networking speeds of up to 100 Gbps over InfiniBand interconnects.

One of the most overlooked aspects of virtual machines in Azure is the series. Too often, administrators provision virtual machines based solely on the number of cores and amount of RAM, without understanding the underlying hardware architecture that the virtual machines reside on. Microsoft provides 12 hardware platforms (series) for hosting virtual machines, each with a very specific purpose, and their costs differ drastically.

Not all virtual machines are available in each Azure region. As discussed in Chapter 1, your workload may need to be in a specific Azure region due to hardware availability. For a detailed breakdown of the virtual machine series, refer to https://azure.microsoft.com/en-us/pricing/details/virtual-machines/series/. This page outlines not only the different underlying hardware architectures but also what each series is optimized for. The following list describes the virtual machine series; a short sketch after the list shows how to check which sizes a given region offers.
  • A-series: Entry-level economical VMs for dev/test. Development and test servers, low traffic web servers, small to medium databases, servers for proofs-of-concept, and code repositories

  • Bs-series: Economical burstable VMs. Development and test servers, low-traffic web servers, small databases, microservices, servers for proofs-of-concept, build servers.

  • D-series: General-purpose compute. Enterprise-grade applications, relational databases, in-memory caching, and analytics. The latest generations are ideal for applications that demand faster CPUs, better local disk performance, or more memory.

  • DC-series: Protect data in use. Confidential querying in databases, creation of scalable, confidential consortium networks, and secure multiparty machine learning algorithms. The DC-series VMs are ideal for building secure enclave-based applications to protect customers’ code and data while it’s in use.

  • E-series: Optimized for in-memory hyper-threaded applications. SAP HANA, SAP S/4 HANA, SQL Hekaton, and other large in-memory business-critical workloads.

  • F-series: Compute optimized virtual machines. Batch processing, web servers, analytics, and gaming.

  • G-series: Memory and storage optimized virtual machines. Large SQL and NoSQL databases, ERP, SAP, and data warehousing solutions.

  • H-series: High-performance computing virtual machines. Fluid dynamics, finite element analysis, seismic processing, reservoir simulation, risk analysis, electronic design automation, rendering, Spark, weather modeling, quantum simulation, computational chemistry, and heat-transfer simulation.

  • Ls-series: Storage-optimized virtual machines. NoSQL databases such as Cassandra, MongoDB, Cloudera, and Redis. Data warehousing applications and large transactional databases are great use cases as well.

  • M-series: Memory-optimized virtual machines. SAP HANA, SAP S/4 HANA, SQL Hekaton, and other large in-memory business-critical workloads requiring massive parallel compute power.

  • Mv2-series: Largest memory-optimized virtual machines. SAP HANA, SAP S/4 HANA, SQL Hekaton, and other large in-memory business-critical workloads requiring massive parallel compute power.

  • N-series: GPU-enabled virtual machines. Simulation, deep learning, graphics rendering, video editing, gaming, and remote visualization.
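
Before settling on a series, it is worth confirming what a target region actually offers. The following is a minimal sketch using the Azure SDK for Python; it assumes the azure-identity and azure-mgmt-compute packages are installed, and the subscription ID and region shown are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Enumerate the VM sizes offered in one region; not every series is available everywhere
for size in compute_client.virtual_machine_sizes.list(location="eastus"):
    print(f"{size.name}: {size.number_of_cores} cores, {size.memory_in_mb} MB RAM")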

Note

Azure services that support a single solution can span multiple Azure regions.

Don’t worry about having all your cloud resources or services in the same Azure region. While some workloads may need this, traversing the Azure global network to get the service you need is not an issue, thanks to high throughput and extremely low latency. For example, in the United States, a connection from the West Coast to the East Coast can be made in less than 60 ms (milliseconds). From Colorado, a user can connect to the East Coast in less than 45 ms. Your latency may vary depending upon your location, the Internet service provider you use to connect to Azure, the connection type, and so forth. The point is to be aware that your cloud services may be in multiple regions, not just the one closest to you. This is discussed later in this chapter.

Azure virtual machines provide on-demand compute resources at the scale, size, and price that meet a customer’s budget. You can choose from a large variety of hardware architectures while also designing for whatever availability needs you have. SLAs range from 99% for a single virtual machine to 99.95% when you deploy two or more virtual machines in an availability set; this is covered in more detail in Chapter 10.

When building out Azure virtual machines, there are a few items that the administrator should consider; a minimal provisioning sketch follows the list.
  • Naming conventions

  • The Azure region

  • The storage container configuration hosting the virtual machine

  • The virtual machine series

  • The size of the virtual machine

  • The operating system

  • The configuration of the virtual machine

  • The ongoing monitoring and management of the virtual machine
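
The following is a minimal provisioning sketch that touches most of the items in the preceding list, using the azure-identity and azure-mgmt-compute packages. It assumes a resource group and a network interface already exist; the names, region, size, image, and credentials are illustrative placeholders, not recommendations.

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = compute_client.virtual_machines.begin_create_or_update(
    "rg-vm-demo",                                            # existing resource group
    "vm-web-eus-01",                                         # naming convention: role-region-sequence
    {
        "location": "eastus",                                # Azure region
        "hardware_profile": {"vm_size": "Standard_D2s_v3"},  # series and size
        "storage_profile": {
            "image_reference": {                             # operating system image
                "publisher": "Canonical",
                "offer": "UbuntuServer",
                "sku": "18.04-LTS",
                "version": "latest",
            },
            "os_disk": {
                "create_option": "FromImage",
                "managed_disk": {"storage_account_type": "Premium_LRS"},
            },
        },
        "os_profile": {
            "computer_name": "vm-web-eus-01",
            "admin_username": "azureuser",
            "admin_password": "<a-strong-password>",
        },
        "network_profile": {
            "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
        },
    },
)
vm = poller.result()                                         # block until provisioning completes
print(vm.name, vm.provisioning_state)

Ongoing monitoring and management are handled separately, for example through Azure Monitor and Update Management, once the machine is running.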

Half of the Azure compute virtual machines run a flavor of Linux. Microsoft has the following distributions available in the virtual machine gallery, which is where Microsoft publishes images.
  • Linux on Azure - Endorsed Distributions

  • SUSE - Azure Marketplace - SUSE Linux Enterprise Server

  • Red Hat - Azure Marketplace - Red Hat Enterprise Linux 7.2

  • Canonical - Azure Marketplace - Ubuntu Server 16.04 LTS

  • Debian - Azure Marketplace - Debian 8 “Jessie”

  • FreeBSD - Azure Marketplace - FreeBSD 10.4

  • CoreOS - Azure Marketplace - CoreOS (Stable)

  • RancherOS - Azure Marketplace - RancherOS

  • Bitnami - Bitnami Library for Azure

  • Mesosphere - Azure Marketplace - Mesosphere DC/OS on Azure

  • Docker - Azure Marketplace - Azure Container Service with Docker Swarm

  • Jenkins - Azure Marketplace - CloudBees Jenkins Platform

Azure Batch

Azure Batch is a form of high-performance computing (HPC) optimized for parallel workloads. It allows customers to deploy their software across pools of compute nodes (virtual machines) and schedule jobs to run when they want. Azure Batch allows large-scale execution when workloads are time-consuming and the processing can be scaled out across multiple systems. Azure Batch only charges you for the compute, storage, and networking resources consumed while you are using the service; there is no separate charge for the Batch service itself. Batch supports two modes of execution.
  • Intrinsically parallel workloads are processed as several independent parts that may access shared data but do not communicate with each other.

  • Tightly coupled workloads require the parts to communicate with one another, typically through the Message Passing Interface (MPI) API. The HPC and GPU VM series drastically improve performance for this style of workload.

Azure Batch supports additional workloads, including, but not limited to, financial risk modeling using Monte Carlo simulations, VFX and 3D image rendering, image analysis and processing, media transcoding, genetic sequence analysis, optical character recognition (OCR), data ingestion, processing, ETL operations, software test execution, finite element analysis, fluid dynamics, multinode AI training, and execution of R algorithms.
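
As a rough sketch of the Batch programming model, the following Python uses the azure-batch package (the classic BatchServiceClient API) to create a pool, a job, and one independent task per frame, the intrinsically parallel pattern described previously. The account name, key, URL, pool size, image, and command line are illustrative placeholders.

from azure.batch import BatchServiceClient
from azure.batch import batch_auth
import azure.batch.models as batchmodels

credentials = batch_auth.SharedKeyCredentials("<batch-account>", "<account-key>")
batch_client = BatchServiceClient(credentials, batch_url="https://<batch-account>.<region>.batch.azure.com")

# Pool of compute nodes that the tasks will run on
batch_client.pool.add(batchmodels.PoolAddParameter(
    id="render-pool",
    vm_size="STANDARD_D2S_V3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical", offer="ubuntuserver", sku="18.04-lts", version="latest"),
        node_agent_sku_id="batch.node.ubuntu 18.04"),
    target_dedicated_nodes=4))

# Job bound to the pool
batch_client.job.add(batchmodels.JobAddParameter(
    id="render-job",
    pool_info=batchmodels.PoolInformation(pool_id="render-pool")))

# Intrinsically parallel: one independent task per frame, no inter-task communication
tasks = [batchmodels.TaskAddParameter(
             id=f"frame-{i}",
             command_line=f"/bin/bash -c 'echo rendering frame {i}'")
         for i in range(100)]
batch_client.task.add_collection("render-job", tasks)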

Azure Service Fabric

To understand what Azure Service Fabric does, you must first understand the difference between monolithic and microservice applications. Briefly, monolithic applications are massive; they have numerous components that require updating the entire application at once, which incurs high risk due to potential issues and downtime. Microservice applications are made of several small services that communicate with each other. Since these scenario-focused services are separate, they can be updated and scaled independently, which reduces risk, increases flexibility, and provides a better long-term approach.

Azure Service Fabric is a service that provides packaging, deployment, and management of scalable and reliable microservices. While most of Service Fabric is viewed as a PaaS service because it is a platform on which to build highly scalable and resilient applications, it includes Service Fabric clusters, which leverage Azure compute and provide the ability to scale out to thousands of machines.

Azure CycleCloud

Azure CycleCloud is a new service that allows administrators to manage high-performance computing (HPC) clusters, also referred to as big compute. CycleCloud supports deployment orchestration of all the necessary services, such as compute, networking, and storage. Deployment optimization, automation of operations like autoscaling, and delegation of administrators to clusters based on various constraints (including cost) are just a few of CycleCloud’s capabilities.

CycleCloud’s key advantage is that it is an open architecture, which allows any job-scheduler to be used with it. There are also advanced policy and governance features, such as cost reporting and controls, usage reporting, AD/LDAP integration, monitoring and alerting, and audit/event logging.

Azure VMware Solutions

Azure VMware Solutions (AVS) by CloudSimple is a fully managed service that allows customers to run their VMware-based virtual machines in Azure at any scale, without the lengthy and costly process of procuring and deploying various vendors’ hyper-converged solutions. AVS is an Azure service that enables you to bring your VMware-based environments to Azure without major modifications. This gives customers the ability to use the same operating framework (processes, training, code, scripts, and so forth) that they have been using in their on-premises or hosted VMware environments, now in Azure.

Common use cases for this solution include urgent datacenter expansion, where customers don’t want to, or don’t have time to, train their personnel on Azure Resource Manager tooling. Another frequent use case for Azure VMware Solutions is datacenter retirement. As the need to shut down datacenters increases due to high operating costs and low optimization scenarios, customers can quickly move their VMware assets to Azure without retooling or retraining. A VMware-focused hybrid architecture between on-premises and Azure is another popular reason for using AVS. The hybrid model of on-premises VMware with AVS facilitates backups, disaster recovery models, operations, and compliance because the platform is the same in both locations.

Azure VMware Solutions includes VMware vSphere, vCenter, vSAN, NSX-T, and their corresponding tools. Azure VMware Solutions runs natively on Azure bare metal, not Microsoft Hyper-V hosts, so customers pay the same for a host regardless of the number of virtual machines running on it. VMware workloads on Azure are easily modernized through integration with Azure services such as Azure Active Directory, Azure AI, and Analytics.

Customers deploy Azure VMware Solutions through the Azure portal. Microsoft provides and supports the management systems, networking services, operating platform, and back-end infrastructure required to run native VMware environments at scale in Azure.​ This service is built on a deep partnership with VMware and is part of the VMware cloud verified program​.

Azure Storage Services

Azure Storage is a group of various Microsoft-managed services. These services include Azure Blobs, Azure Data Lake Storage, Azure Files, Azure Disks, Azure Archive, Azure Queues, and Azure Tables. Azure Storage services can be connected to public IPs, creating the debate on whether they’re PaaS or IaaS services. For the sake of this book, and NIST’s definition, we’re going to treat Azure Storage as an IaaS service since you cannot run virtual machines without it!

All Azure Storage services keep a minimum of three copies of the data blocks stored. Administrators can choose between the Azure Storage redundancy tiers shown in Table 2-1.
Table 2-1 Azure Storage Redundancy Tiers

Storage Redundancy Name                                   Durability of Objects over a Given Year
Locally Redundant Storage (LRS)                           99.999999999% (11 9s)
Zone Redundant Storage (ZRS)                              99.9999999999% (12 9s)
Geographically Redundant Storage (GRS)                    99.99999999999999% (16 9s)
Geographically Zone Redundant Storage (GZRS)              99.99999999999999% (16 9s)
Read-Access Geographically Redundant Storage (RA-GRS)     99.99999999999999% (16 9s)

Locally redundant storage (LRS) places three copies of your data in one datacenter, providing resiliency against a drive failure or other unplanned outage within that datacenter. Zone-redundant storage (ZRS) spreads three copies of your data across three availability zones (physically separate datacenters) within the primary region, so the data survives the loss of an entire zone. Geo-redundant storage (GRS) keeps three copies in the primary region, as LRS does, and replicates them asynchronously to a paired Azure region hundreds of miles away, for a total of six copies.

The replicas made in GRS are not accessible to the customer; they are designed for business continuity or disaster recovery (BC/DR) purposes. Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region and is also replicated to a second geographic region for protection from regional disasters.

Read-access geo-redundant storage (RA-GRS) provides the same six copies across two regions as GRS does, but the replica in the secondary Azure region is available for read access.
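
To make the redundancy choice concrete, here is a minimal sketch using the azure-identity and azure-mgmt-storage packages that creates a general-purpose v2 account with GZRS. The subscription ID, resource group, account name, and region are placeholders, and the SKU name can be swapped for Standard_LRS, Standard_ZRS, Standard_GRS, or Standard_RAGRS.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = storage_client.storage_accounts.begin_create(
    "rg-storage-demo",
    "stgzrsdemo001",                       # storage account names must be globally unique
    {
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_GZRS"},  # the redundancy tier is chosen via the SKU
    },
)
account = poller.result()
print(account.name, account.sku.name)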

All Azure Storage is encrypted at rest using 256-bit AES encryption, one of the strongest block ciphers available and FIPS 140-2 compliant. Azure Storage encryption is enabled by default, regardless of storage tier, for all Azure Storage accounts, and all Azure Storage redundancy options utilize it. No coding or configuration is necessary to leverage it. Azure Storage encryption is free and does not impact storage performance. It can use Microsoft-managed keys, or customers can use their own keys via Azure Key Vault.

Azure Storage services are billed differently than many other services in Azure because they are always in use. While you can turn off a virtual machine and not pay for the compute, you cannot turn off storage. Storage is only billed for the blocks used. If you have reserved a 1 TB Azure disk, you only pay for the part of the 1 TB that is used. This gets even more complicated when you’re using workloads that are deduped or compressed, and the target amount of storage is hard to estimate. Certain Azure Storage services have tiers, also creating a tiered price model. For example, solid-state disks (SSDs) are more expensive than spinning disk drives. Another example is Azure Blob tiers, where moving from “hot” to “cool” to “archive” changes the price accordingly.

A detailed list of Azure Storage services is at https://azure.microsoft.com/en-us/services/storage/. Keep in mind that these storage services are available to any workload you run in Azure.

Blob Storage

Azure Blob storage is Microsoft’s object storage solution for the cloud; it is optimized for storing massive amounts of unstructured data, such as images, video, log files, backups, and streaming content. Azure Blob storage supports three access tiers: hot, cool, and archive. Blob-level tiering allows block blobs to be moved between tiers programmatically as usage patterns change.
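
As a minimal sketch of blob-level tiering with the azure-storage-blob and azure-identity packages (the account URL, container, blob, and file names are placeholders), a block blob can be uploaded and then demoted as its access pattern cools off:

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="backups", blob="db-2020-06-01.bak")

with open("db-2020-06-01.bak", "rb") as data:
    blob.upload_blob(data, overwrite=True)  # lands in the account's default (hot) tier

# Move the block blob down the tiers as it ages
blob.set_standard_blob_tier("Cool")
blob.set_standard_blob_tier("Archive")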

Hot Access Tier

The hot access tier is optimized for storing data that is accessed frequently. It has the highest storage cost and the lowest access cost. Data can be migrated from the hot tier to the cool or archive tiers.

Cool Access Tier

The cool access tier is optimized for storing data that is infrequently accessed and stored for at least 30 days. This tier has lower storage costs than the hot access tier but higher access costs, making backups a good use case since they are not frequently accessed. Unlike many other cloud service providers, Microsoft Azure’s hot and cool tiers have the same performance. The difference between these tiers is their respective SLAs; the hot access tier has a 99.99% RA-GRS SLA, while the cool access tier has a 99.9% RA-GRS SLA.

Archive Access Tier

The archive access tier is unique in several ways. It is the only tier that cannot be set as the default access tier when the storage account is deployed; it must be configured at the individual blob level. It has the lowest storage cost yet the highest cost for access. Rarely accessed data, such as legal hold material, evidence, other compliance data types, and long-term backups, is the ideal use case.

Data stored in the archive access tier should be kept for at least 180 days; otherwise, there may be an early deletion charge. Finally, the data should tolerate flexible retrieval requirements, because rehydrating an archived blob takes hours instead of seconds.

The Azure Blob Storage archive access tier is supported by many third-party hardware and software vendors, such as those who make backup solutions. At the time of writing, there were 69 partner solutions in the Azure Marketplace. These partner solutions are found both in the Azure portal and on the Microsoft Azure Marketplace website under the Storage section at https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/storage?search=storage&page=1.

Storage Explorer

Storage Explorer is a stand-alone application supported on all major operating systems, including Windows, macOS, and Linux. Storage Explorer enables users to manage blobs, files, queues, and tables; upload, download, and copy managed disks; and create snapshots. Because of these capabilities, you can use Storage Explorer to migrate data from on-premises to Azure and to migrate data across Azure regions.

Data Lake Storage Gen2

Azure Data Lake (ADL) Storage Gen2 is the successor to Azure Data Lake Storage Gen1. It is highly scalable and cost-effective because it is built on top of Azure Blob Storage. It supports fine-grained access control lists (ACLs), Azure Active Directory integration, Azure Storage encryption, automated lifecycle policy management, atomic file operations, no limits on data store size, optimizations for Spark and Hadoop integration, and tiered pricing.

The addition of a hierarchical namespace to Blob storage allows Azure Data Lake Storage Gen2 to treat operations the same way you do on a file system. You can delete a directory, which deletes all child objects. The need to enumerate all child objects is gone in Azure Data Lake Storage Gen2, making operations exponentially faster.

Azure Data Lake Storage Gen2 allows POSIX security ACLs to be applied to the folder or file level, which allows more granular permissions and security on the overall solution. These permissions are configurable through Azure Storage Explorer or frameworks like Hive and Spark.
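
The following is a small sketch of these concepts using the azure-storage-filedatalake and azure-identity packages; the account URL, filesystem, directory, and ACL string are placeholders, and it assumes the storage account was created with the hierarchical namespace enabled.

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
filesystem = service.get_file_system_client("analytics")

# Directory operations are atomic under the hierarchical namespace
directory = filesystem.create_directory("raw/2020/06")

# POSIX-style ACL applied at the directory level
directory.set_access_control(acl="user::rwx,group::r-x,other::---")

file_client = directory.create_file("events.json")
file_client.upload_data(b'{"event": "sample"}', overwrite=True)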

Other key features include Hadoop compatibility through an interface that emulates the Hadoop Distributed File System (HDFS) and the ABFS driver, which is optimized for big data analytics.

Managed Disks

Azure Disk Storage offers persistent, high-performance disk solutions managed by Microsoft. It provides scalability to 50,000 disks in a single subscription within a single Azure region. Azure managed disks have a 99.999% availability SLA. Azure disks integrate with availability sets and zones, are supported by Azure Backup, provide administrators with fine-grained role-based access control (RBAC), and support two different types of disk encryption: server-side encryption and Azure Disk Encryption. Azure Disk Encryption uses BitLocker for Windows volumes and DM-Crypt for Linux volumes.

When a virtual machine is built, three kinds of disks are presented to the administrator: OS disks, data disks, and temporary disks. Every VM that is created gets an OS disk; this is where the operating system is installed, and it has a maximum size of 2 TB. Administrators have the option of adding a data disk at virtual machine deployment or after the fact. Data disks are managed disks attached to the virtual machine to store data, and they appear as local drives. Data disks are SCSI disks that have a maximum capacity of 35 TB. Temporary disks provide short-term storage for workloads. Temporary disks are deleted during a maintenance event, such as a reboot for patching, and the temporary disk is re-created at each OS boot. Windows virtual machines default the temporary disk to D:, and Linux virtual machines default it to /dev/sdb. This is ideal for applications that need a location to swap to; many graphics-intensive apps have this requirement, and leveraging the temporary disk yields faster I/O than the OS disk. The size of the virtual machine determines the number of data disks that you can attach to it and the type of storage you can use to host the disks.
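
As a minimal sketch using the azure-identity and azure-mgmt-compute packages (the resource group, disk name, size, and SKU are placeholders), an empty premium SSD data disk can be created and later attached to a VM by adding it to the VM's storage profile:

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create an empty 128 GB premium SSD managed data disk
poller = compute_client.disks.begin_create_or_update(
    "rg-vm-demo",
    "vm-web-eus-01-data01",
    {
        "location": "eastus",
        "sku": {"name": "Premium_LRS"},
        "disk_size_gb": 128,
        "creation_data": {"create_option": "Empty"},
    },
)
disk = poller.result()
print(disk.id)
# The disk is then attached by adding it to the VM's storage_profile.data_disks
# (with a LUN and create_option "Attach") via virtual_machines.begin_create_or_update.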

Queue Storage

Azure Queue Storage is designed to support standard queuing scenarios, such as decoupling application components to increase scalability and tolerance for failures, load leveling, and building process workflows. It provides asynchronous message queueing for communication between application components and a consistent programming model across other Azure Storage services.

Queue Storage messages are not always delivered in a first in, first out (FIFO) fashion. It is one of the two queue services offered by Microsoft Azure; the other is Azure Service Bus. Queue Storage is part of the Azure Storage service fabric. Storage queues should be used when many gigabytes of messages must be stored in a queue, when tracking the progress of messages in a queue is desired, or when a server-side log of all transactions is required.
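
A minimal producer/consumer sketch with the azure-storage-queue package follows; the connection string, queue name, and message body are placeholders.

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>", "orders")
queue.create_queue()

# Producer: decouple the front end from the worker by enqueueing work items
queue.send_message('{"order_id": 1001, "action": "provision-vm"}')

# Consumer: messages are leased, processed, then deleted; FIFO ordering is not guaranteed
for message in queue.receive_messages(messages_per_page=16):
    print("processing", message.content)
    queue.delete_message(message)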

Azure Files

Azure Files is a new Azure Storage service that allows users to expose file shares from Azure via SMB (Server Message Block). These shares are available to all major operating systems, both Microsoft and non-Microsoft. The Azure Files service allows customers to cache local copies of the data using Azure File Sync to minimize latency between on-premises and the cloud, providing a pseudo local file server experience. The data shared via Azure Files is encrypted at rest and in transit via SMB 3.0 and HTTPS.
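
A minimal sketch with the azure-storage-file-share package follows; the connection string, share, directory, and file names are placeholders. The same share can also be mounted directly over SMB from Windows, macOS, or Linux.

from azure.storage.fileshare import ShareClient

share = ShareClient.from_connection_string("<storage-connection-string>", "teamdocs")
share.create_share()

# Create a directory and upload a file into it
directory = share.get_directory_client("reports")
directory.create_directory()

file_client = directory.get_file_client("q2-summary.txt")
file_client.upload_file(b"Quarterly summary goes here")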

Azure Files has two tiers: Standard and Premium. The primary difference is the underlying storage architecture. Standard Azure Files shares reside on Microsoft’s lowest-cost storage, whereas Premium shares reside on SSD-based storage designed to support I/O-intensive workloads that require file share semantics with significantly higher throughput and low latency.

Azure Files has multiple parts to its pricing structure, including the cost of storage, and the cost of ingress, egress, reads, writes, and so forth. For a detailed breakdown, refer to https://azure.microsoft.com/en-us/pricing/details/storage/files/. To model your own scenario, use the Azure Calculator at https://azure.microsoft.com/en-us/pricing/calculator/.

Data Box

Azure Data Box is a physical device-based storage solution that allows customers to copy and ship large amounts of data to Microsoft. Customers that have low-bandwidth challenges, large amounts of data, or time constraints on their data uploads to Azure are candidates for Azure Data Box. Customers order a Data Box solution that fits their storage needs; upon receiving it, they fill it with their data, and then ship it to Microsoft. Once Microsoft receives the Data Box solution, it inspects and uploads your data, and then wipes the device. Azure Data Box has three solutions.
  • Data Box Disk provides customers up to five 8 TB SSDs, totaling 40 TB (35 TB usable) per order. These 2.5-inch drives use 128-bit encryption. Data is copied over a USB/SATA II or III interface using Robocopy or similar tools.

  • Data Box provides customers a 50 lb., 100 TB (80 TB usable) enclosure per order that is AES 256-bit encrypted for copying data and safely shipping it to Azure. Data is copied over 1×1/10 Gbps RJ45 or 2×10 Gbps SFP+ interfaces using SMB or NFS protocols.

  • Data Box Heavy is a self-contained, 500 lb. device capable of storing 1 PB (800 TB usable) of data secured with AES 256-bit encryption. Data is copied over 4×1 Gbps RJ45 or 4×40 Gbps QSFP+ interfaces.

All Azure Data Box solutions support Azure Block Blob, Page Blob, Azure Files, and managed disks. Data can only be accessed with a secure key provided via the Azure portal. Once your data is uploaded to Azure, the Data Box solutions are wiped clean and sanitized in accordance with NIST 800-88 R1 standards.

Ephemeral OS Disks

Ephemeral OS disks, which are available only on virtual machine series that support Premium Storage, are a new option that places the OS disk on the local storage of the host rather than persisting it to Azure Storage. Ephemeral OS disks are free. They can be used with customer images or Azure Gallery images. They provide lower latency than a standard OS or data disk and let you reimage or scale out a virtual machine deployment or scale set more quickly. Ephemeral OS disks are like temporary disks, but for where the OS resides. They suit use cases in which the OS is always expected to start from a known state, such as a “non-persistent” virtual desktop or server.
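
As a small illustrative fragment (property names follow the Azure Resource Manager model used by azure-mgmt-compute, and the image shown is a placeholder), an ephemeral OS disk is requested through the OS disk's diff_disk_settings when the VM or scale set is created:

# Fragment of the storage_profile passed to virtual_machines.begin_create_or_update
storage_profile = {
    "image_reference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "2019-Datacenter",
        "version": "latest",
    },
    "os_disk": {
        "create_option": "FromImage",
        "caching": "ReadOnly",                      # ephemeral OS disks require read-only caching
        "diff_disk_settings": {"option": "Local"},  # keep the OS disk on the node's local storage
    },
}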

Azure Networking Services

Azure has 19 networking services, including application-delivery services (such as Application Gateway and Front Door), virtual machine–specific services like Azure Virtual Network and Azure Load Balancer, and unrestricted-scalability services like Azure Virtual WAN. Azure networking services allow you to configure any type of connectivity, security, or availability model. Building this kind of networking used to take hundreds of hours, the coordination of numerous vendors, and lengthy procurement and deployment times. Today, Azure networking services can be deployed in minutes and reconfigured, redeployed, and removed with the same elasticity and billing model as other Azure services.

Since the first edition of this book, several new Azure networking services have emerged. Each of them satisfies regulatory compliance requirements and is audited against standards regularly. For more information on the regulatory compliance of any Azure service, please refer to www.microsoft.com/en-us/TrustCenter/CloudServices/Azure/default.aspx.

The rest of this chapter discusses new or enhanced services in Azure. There is more information on some of these services in Chapter 9.

Azure Virtual Network

Azure Virtual Network is the core component of all Azure IaaS services, and it is increasingly used by PaaS services as Azure Private Link and Azure service endpoints become broader offerings. Azure Virtual Network allows virtual machines to communicate with each other, the Internet, on-premises networks, and so forth. The key benefits of Azure Virtual Network are the scale, availability, and isolation offered by Azure, along with being cloud native. An Azure virtual network has an address space and subnets, and it resides within a region, a subscription, and a management group. It is also free of charge.

Compute, network, data, analytics, identity, containers, web, and hosted Azure services can be deployed to an Azure virtual network. More than 20 Azure services support Azure Virtual Network deployments. Communication between these services is enabled via Point-to-Site (P2S), Site-to-Site (S2S), or ExpressRoute connectivity models. Traffic is filtered on an Azure virtual network by network security groups (NSG) or by network virtual appliances (NVA), which are virtual machines performing the functions of a firewall but running code usually provided by mainstream firewall manufacturers (Cisco, Palo Alto Networks, Riverbed, Fortinet, Barracuda, and Check Point, to name just a few).

Finally, Azure virtual networks allow routing of traffic between subnets, in Azure or on-premises, by using routing tables or Border Gateway Protocol (BGP) routes.
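
The following is a minimal sketch using the azure-identity and azure-mgmt-network packages that creates a virtual network with two subnets; the subscription ID, resource group, names, region, and address prefixes are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = network_client.virtual_networks.begin_create_or_update(
    "rg-network-demo",
    "vnet-hub",
    {
        "location": "eastus",
        "address_space": {"address_prefixes": ["10.10.0.0/16"]},
        "subnets": [
            {"name": "snet-app", "address_prefix": "10.10.1.0/24"},
            {"name": "snet-data", "address_prefix": "10.10.2.0/24"},
        ],
    },
)
vnet = poller.result()
print(vnet.name, [subnet.name for subnet in vnet.subnets])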

For more information on Azure networking, please refer to Chapter 3 and Chapter 7.

Azure Application Gateway and Web Application Firewall

When we talk about load balancers, we reference layer 4, the transport layer, and layer 7, the application layer, of the Open Systems Interconnection (OSI) model. Azure Application Gateway is a layer 7 load balancer. This is frequently confused with layer 4 load balancers, which are the most common; they route based on IP address or port data. Azure Load Balancer is layer 4. Azure Application Gateway can use URLs or HTTP headers to make routing decisions. Azure Application Gateway provides layer 7 load balancing and routing much the way Microsoft Forefront Unified Access Gateway (UAG) and Microsoft Intelligent Application Gateway (IAG) provided these services as licensed products.

Azure Application Gateway and Azure Web Application Firewall have a second edition, known as v2, which was released in 2019 and provides many features beyond the v1 edition. Autoscaling and zone redundancy are key benefits of v2, while User Defined Routing (UDR) support was initially a key difference between v1 and v2. UDR support on Application Gateway v2 is available via PowerShell, and v2 is expected to provide all of v1’s functionality.

Azure DDoS Protection

DDoS (distributed denial-of-service) attacks are commonplace. They are in the news regularly, and companies can incur major financial losses from being the target of one. It is imperative for a cloud service provider (CSP) to protect itself and its customers from any kind of bad actor that tries to harm their digital infrastructure. Almost anyone can fall victim to a DDoS attack, especially because attacks can be hosted in countries that ignore them, launched for as little as $5, and easily purchased as a web service from a browser.

DDoS attacks cost companies lost revenue and sales, downtime, brand damage and lower brand value, operational expenses, and countless personnel hours spent mitigating or recovering from them. In 2019, TechHQ reported that DDoS attacks cost US businesses $10 billion per year, and the average business lost $218,339. Go to https://techhq.com/2019/03/ddos-attacks-cost-us-businesses-10bn-per-year/ for the article on the significance and growing landscape of DDoS attacks.

Azure DDoS Protection comes in two tiers: Basic and Standard. Every Azure customer gets Basic automatically, for free, and it’s always on. Azure’s global network is a key element used in the Standard tier to mitigate DDoS attacks against workloads. The Standard tier provides benefits specifically for customers using Azure Virtual Network resources: Azure network resources that expose workloads via public IP addresses are protected by policies tuned with insights uncovered through machine learning on network traffic. DDoS Standard surfaces traffic insights through the Azure Monitor service. Several additional services are available with Standard DDoS Protection, including access to DDoS subject-matter experts (SMEs) during an attack, logs for SIEM integration, and post-attack mitigation reports.

ExpressRoute

Azure ExpressRoute is a private, dedicated, high-bandwidth, low-latency, SLA-backed connection into the Microsoft Azure global network. ExpressRoute allows connectivity not only to Azure resources but to Office 365 and Dynamics 365 as well. ExpressRoute uses private peering for connectivity to Azure virtual networks, and Microsoft peering for connectivity to PaaS and SaaS workloads, such as Cosmos DB, Azure SQL, Office 365, and Dynamics 365.

Azure Firewall

Azure Firewall is a stateful packet inspection (SPI) firewall managed by Microsoft. It has unrestricted scalability and is highly available by default. It provides all the services you would expect from an enterprise-class firewall, but as a managed service. It includes a new companion service, Firewall Manager, where administrators can manage firewalls at scale, across Azure regions and subscriptions, with the ability to centrally manage the firewalls’ configurations and routing via global and local policies.

Azure Firewall supports the following capabilities.
  • Built-in high availability

  • Availability zones

  • Unrestricted cloud scalability

  • Application FQDN filtering rules

  • Network traffic filtering rules

  • FQDN tags

  • Service tags

  • Threat intelligence

  • Outbound SNAT support

  • Inbound DNAT support

  • Multiple public IP addresses

  • Azure monitor logging

Azure Front Door

Azure Front Door allows the global management and routing of customers’ web traffic. Front Door provides several ways to route traffic in the most efficient manner possible to the client. Front Door analyzes latency, priority, weight, and session affinity to determine the best routing for the traffic to be optimized. Front Door is highly available and can withstand an entire Azure region failure.

Azure Internet Analyzer

Azure Internet Analyzer provides customers the means to benchmark, or measure, the performance of network changes made within their environment. Internet Analyzer combines customers’ data with Microsoft’s analytics to optimize network routing and topology. Internet Analyzer embeds a JavaScript client in a web application for customers to use for various measurements, such as latency. This new service allows customers to experiment with “what if” scenarios before making major changes to their network, with the goal of understanding whether the changes provide any performance gains.

Azure CDN

Azure Content Delivery Network (CDN) is a global caching service that allows high bandwidth and low latency to end users. It uses the closest point of presence (POP) to allow users to download data while minimizing the impact on their experience in a transparent fashion.

Microsoft Update is a great example of its usage. It’s a predictable payload that can easily be pre-staged. This allows consumers to download their update data not from Redmond, WA, or their nearest Azure region, but instead from a local point of presence that minimizes latency to the data/service.

Azure Content Delivery Network should be evaluated whenever dynamic content needs to be served, and the audience is geographically distributed, even within a single country.

Azure Load Balancer

Azure Load Balancer is a layer 4 load balancer in the OSI model. Azure Load Balancer acts as a public load balancer and translates private IPs for virtual machines to public IPs on the Azure edge; this capability, network address translation (NAT), allows virtual machines to have Internet access. Load Balancer also has an internal mode, where it uses private IP addresses on both the outside and inside, such as when you need a load balancer inside a virtual network or between on-premises and a virtual network. The following are some of the use cases for Azure Load Balancer.
  • Load balancing internal and external traffic to Azure virtual machines

  • Increasing availability by distributing resources within and across zones

  • Configuring outbound connectivity for Azure virtual machines

  • Using health probes to monitor load-balanced resources

  • Employing port forwarding to access virtual machines in a virtual network by public IP address and port

  • Enabling support for load-balancing of IPv6

Standard Load Balancer provides multidimensional metrics through Azure Monitor. These metrics can be filtered, grouped, and broken out for a given dimension. They provide current and historical insights into the performance and health of your service. Azure Load Balancer allows customers to:
  • Load balance services on multiple ports, multiple IP addresses, or both.

  • Move internal and external load balancer resources across Azure regions.

  • Load balance TCP and UDP flow on all ports simultaneously using HA ports.

Azure Load Balancer is priced on two tiers: Basic and Standard. Basic is free, while Standard is priced by a combination of the number of load balancing rules and the amount of data processed. For more pricing information, see https://azure.microsoft.com/en-us/pricing/details/load-balancer/.

Traffic Manager

Azure Traffic Manager is a DNS-based load balancer that allows the optimization of traffic to the most appropriate resources, determined by priority, weight, performance, geography, or subnets. Traffic Manager is the Microsoft version of Global Traffic Management (GTM) as an IaaS service. It is fully managed, highly available, and hosted in Azure. Common use cases for Traffic Manager include load balancing between on-premises and Azure for mission-critical applications. Traffic Manager supports both Microsoft and non-Microsoft endpoints, as well as hybrid scenarios such as bursting into Azure for increased scale of a given workload.

VPN Gateway

A VPN gateway allows encrypted communication to flow between two Azure virtual networks or an Azure virtual network and an on-premises network. VPN gateways are made up of virtual machines deployed to a special subnet designed for routing. Because of this deployment model, the virtual machines are not configurable. Instead, all VPN gateway configuration is done through the Azure portal or as infrastructure as code.

A VPN gateway takes approximately 45 minutes to deploy due to the nature of its architecture and the VPN gateway subnet creation. Finally, VPN gateways support P2S, S2S, or ExpressRoute connectivity models. Bandwidth on VPN gateways can vary from 100 Mbps to 10 Gbps, depending upon the VPN gateway deployed. For a detailed breakdown of VPN gateway throughput, encryption used, and limitations, please refer to https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways.

Summary

Azure has grown substantially, both in capability and customer adoption, since we wrote the first book in 2015. Azure IaaS has seen an explosion in networking, storage, and security-related solutions. Several non-Microsoft vendors now offer their products or services on the Azure platform. Everything in Azure is built from the ground up with security as the top priority. Zero trust, which is outside the scope of this book, is largely based on the principle of securing applications through several mechanisms, including identity and multifactor authentication, rather than relying solely on network segmentation; an architecture that relies on segmentation alone doesn’t work in today’s cybersecurity landscape. Hence, the intense investment in securing networks by the world’s largest software provider is evident. This chapter provided an overview of some of the more popular services released over the past five years. Information on many of these cloud services is covered later in this book.
