We’ve covered some good ground so far with hosting applications on Infrastructure as a Service (IaaS). We’re now going to step into Platform as a Service (PaaS) with Azure App Service. Developers who traditionally had web apps hosted on an Internet Information Services (IIS) server – even a cloud-based VM with IaaS – are moving their applications to App Service, which brings even more benefits than IaaS does.
It’s important to understand that Azure App Service is for more than just hosting web apps, so we’ll start with an overview of App Service as a whole before turning our focus to web apps. We’ll also take what we learned about containers in the previous chapter and show how containers and App Service can work together to bring you even more value.
By the end of this chapter, you’ll have a solid understanding of Azure App Service. You’ll also understand how you can manage your web applications throughout their life cycle in the cloud, including configuring, scaling, and deploying changes in a controlled and non-disruptive way.
In this chapter, we will cover the following main topics:

- Exploring Azure App Service
- Exploring App Service web apps
- Configuring and logging with App Service web apps
- Scaling App Service web apps
- Exploring App Service deployment slots
The code files for this chapter can be downloaded from https://github.com/PacktPublishing/Developing-Solutions-for-Microsoft-Azure-AZ-204-Exam-Guide/tree/main/Chapter03
Code in Action videos for this chapter: https://bit.ly/3qPjR7R
Azure App Service is an HTTP-based PaaS offering on which you can host web applications, RESTful APIs, and mobile backends, as well as automate business processes with WebJobs. With App Service, you can develop in many of the most common languages, including .NET, Java, Node.js, and Python. With WebJobs, you can run background automation tasks using PowerShell scripts, Bash scripts, and more. Because App Service is PaaS, you get a fully managed service – infrastructure maintenance and patching are handled by Azure – so you can focus on development activities.
If your app runs in a Docker container, you can host the container on App Service as well – you can even run multi-container applications with Docker Compose. As early as Chapter 1, Azure and Cloud Fundamentals, we alluded to App Service allowing you to scale – automatically or manually – and your application can be hosted anywhere within the global Azure infrastructure, with high availability options.
In addition to the features covered in this chapter, App Service also offers App Service Environments (ASEs), which provide a fully isolated and dedicated environment for securely running apps when you need very high scale, secure and isolated network access, and high compute utilization.
From a compliance perspective, App Service is International Organization for Standardization (ISO), Payment Card Industry (PCI), and System and Organization Control (SOC)-compliant. A good resource on compliance and privacy is Microsoft Trust Center (https://www.microsoft.com/trust-center).
Azure Marketplace was mentioned in both previous chapters as a source for resource images. Application templates – for WordPress, among others – can also be found within the marketplace. With the rich Azure ecosystem, there are many other integrations for convenience and security (including Visual Studio and Visual Studio Code integrations), and the list is growing all the time.
App Service also provides continuous integration and continuous deployment (CI/CD) capabilities by allowing you to connect your app to Azure DevOps, GitHub, Bitbucket, FTP, or a local Git repository. App Service can then automatically sync with code changes you make, based on the source control repository and branch you specify.
App Service is charged based on the compute resources you use. Those resources are determined by the App Service plan on which you run your applications. App Service apps always run in an App Service plan, so this seems like the logical point at which to introduce App Service plans.
If you’re familiar with the concept of a server farm or cluster, where a collection of powerful servers provide functionality beyond that of a single machine, App Service plans should make sense (in fact, the resource type for App Service plans is Microsoft.Web/serverfarms). As we just mentioned briefly, an App Service plan defines the compute resources web apps use. I use the plural context because – just like in a server farm – you can have multiple apps using the same pool of compute resources, which is defined by the App Service plan. App Service plans define which operating system to use, the region in which the resources are created, the number of VM instances (under the hood, VM instances are running, but this is PaaS, so they’re maintained for you), the size of those VMs, and the pricing tier.
As you might expect by now, some pricing tiers provide access to features that aren’t available in others. For example, the Free and Shared tiers run your apps on the same VMs as other App Service apps – including other customers’ apps – and are intended for testing and development scenarios. These tiers also allocate resource quotas on the VM, meaning you can’t scale out. The remaining tiers run your apps on dedicated VMs; only apps you specifically place within the same App Service plan share those VMs. On top of that, the Isolated and IsolatedV2 tiers run on dedicated Azure Virtual Networks (VNets), providing network isolation as well as compute isolation, along with the maximum scale-out capabilities. Azure Function apps also have the option to run in an App Service plan.
A common misunderstanding is that you need one App Service plan per App Service application. This is not always necessary (although you can’t mix Windows and Linux apps within the same App Service plan, so you would need multiple plans in that scenario). Remember – an App Service plan defines a set of resources that can be used by one or more applications. If you have multiple applications that aren’t resource-intensive and you have compute to spare within an App Service plan, by all means consider adding those applications to the same plan. One way to think of an App Service plan is as the unit of scale for App Service applications: if your plan has five VM instances, your application or applications will run across all five of those instances, and if you configure the plan with autoscaling, all the applications within it will scale together based on those autoscale settings. Within the Azure portal, App Service plans are described as representing the collection of physical resources that are used to host your apps:
Figure 3.1 – The Azure portal description of App Service plans
We’ll explore the Azure portal experience of creating an App Service plan here since this will make it easier to illustrate the configuration options:
Figure 3.2 – App Service plan details within the Azure portal
We’re going to be making use of staging slots and autoscale later in this chapter, so select the least expensive Production tier that provides these features. For me, that’s S1:
Figure 3.3 – App Service Spec Picker within the Azure portal
Notice that, depending on which tier you selected, the option to enable Zone redundancy is disabled, because that’s only available in higher tiers. Make a note of the SKU code, not the name. In this example, the SKU code is S1, not just Standard:
Figure 3.4 – SKU code showing in the Azure portal
az appservice plan create -n "<plan name>" -g "<resource group>" --sku "<SKU code>" --is-linux
Alternatively, use the following PowerShell command:
New-AzAppServicePlan -Name "<plan name>" -ResourceGroupName "<resource group>" -Tier "<SKU code>" -Location "<region>" -Linux
The CLI accepts a location but doesn’t require one, because it inherits the location from the resource group; PowerShell requires the location to be specified.
Now that we’ve explored the App Service plans that provide the underlying compute resources for your apps, we can move on to App Service web apps and put these App Service plans to good use.
Originally, App Service was only able to host web apps on Windows, but since 2017, App Service has been able to natively host web apps on Linux for supported application stacks. You can get a list of the available runtimes for Linux by using the following CLI command:
az webapp list-runtimes --linux -o tsv
The versions you see relate to the built-in container images that App Service uses behind the scenes. If your application requires a runtime that isn’t supported, you can deploy the web app with a custom container image, specifying the image source as an Azure Container Registry (ACR) instance (which we’ll do shortly), Docker Hub, or another private registry.
Now, let’s create a basic web app using the Azure portal since – as with App Service plans – it’s easier to illustrate certain elements:
Select the .NET runtime stack that matches the version we used in the containers demo in the previous chapter (it was .NET 6.0 at the time of writing), then select Linux for the Operating System option, along with your appropriate region. Notice that the Linux App Service plan has already been selected for you in the Linux Plan field and that you can’t select the Windows one, despite it being in the same subscription and region (the resource group doesn’t matter).
Although we’re using pre-created App Service plans, notice that you can create a new one at this point. If you were to use the az webapp up CLI command (don’t do it right now), it would automatically create a new resource group, App Service plan, and web app.
Figure 3.5 – Web app starter page content
az webapp create -n "<globally unique name>" -g "<resource group>" --plan "<name of the Windows App Service plan previously created>"
Alternatively, use the following PowerShell command:
New-AzWebApp -Name "<globally unique app name>" -ResourceGroupName "<resource group>" -AppServicePlan "<name of the Windows App Service plan previously created>"
A location isn’t required here since it will inherit from the App Service plan (App Service plans will only be available within the same subscription and region).
With that, we’ve created some App Service plans and web apps. Now, let’s deploy some very basic code to one of our web apps:
git clone https://github.com/PacktPublishing/Developing-Solutions-for-Microsoft-Azure-AZ-204-Exam-Guide
Feel free to either work from the Chapter03/1-hello-world directory or create a new folder and copy the contents to it.
az webapp up -n "<name of the Windows web app>" --html -b
Here, we added the -b argument (the short form of --launch-browser) to open the app in the default browser after deployment. It’s optional, but it saves time, as you’d want to browse to the app now anyway. The --html argument skips app-type detection and deploys the code as a static HTML app.
We will only be using the Linux App Service for the rest of this chapter, so the Windows one is no longer required unless you want to compare the experience with Linux/containers as we go along.
That was about as simple as it can get. We’re not going to run through every different type of deployment (deploying using a Git repository, for example), but feel free to check out the Microsoft documentation on that, should you wish. We’ll talk about CI/CD toward the end of this book as well. For now, the last deployment method we will look at before moving on is custom containers.
We’re going to reuse the aspnetapp sample we downloaded in the previous chapter, create an ACR, store our container image there, and then use that to deploy the containerized application to our App Service. Let’s get started:
Figure 3.6 – Copying the aspnetapp folder within VS Code
Figure 3.7 – New folder structure with the copied aspnetapp folder in VS Code
az acr create -g "<resource group>" -n "<ACR name>" --sku "Basic"
az acr update -n "<ACR name>" --admin-enabled true
az acr credential show -n "<ACR name>" --query "passwords[0].value"
az acr build --image "chapter3:1.0.0" --image "chapter3:latest" --registry "<ACR name>" --file "Dockerfile" .
We used two tags to illustrate how you can version your images with semantic versioning while also making sure that the most recent version is tagged with the latest tag as well. A link to information on semantic versioning can be found in the Further reading section of this chapter.
az webapp config container set -g "<resource group>" -n "<app-service>" -r "https://<ACR name>.azurecr.io" -i "<ACR name>.azurecr.io/chapter3:latest" -u "<ACR name>" -p "<password obtained from step 6>"
At the moment, anybody with a browser and an internet connection could access your web app if they had the URL. Now, let’s learn how authentication and authorization work with App Service so that we can require users to authenticate before being able to view our shiny new containerized web app.
Many web frameworks bundle authentication (signing users in) and authorization (granting access to those who should have it) features, which could be used to handle our application’s authentication and authorization. You could even write your own tools if you want maximum control. As you may imagine, though, the more you handle yourself, the more management falls on you – keeping your security solution patched with the latest updates, for example.
With App Service, you can make use of its built-in authentication and authorization capabilities so that users can sign in and use your app by writing minimal code (or none at all if the out-of-the-box features give you what you need). App Service uses federated identity, which means that a third-party identity provider – Google, for example – manages the user accounts and authentication flow, and App Service gets the resulting token for authorization.
Once you enable the authentication and authorization module (which we will do shortly), all incoming HTTP requests will pass through it before being handled by your application. The module does several things for you:
On Windows App Service apps, the module runs as a native IIS module in the same sandbox as your application code. On Linux and container apps, the module runs in a separate container, isolated from your code. Because the module doesn’t run in-process, there’s no direct integration with specific language frameworks; instead, the relevant information your app may need is passed through in request headers. This makes it a good time to explain the authentication flow.
It’s important to understand, at least to some extent, what the authentication flow looks like with App Service. The flow is the same regardless of the identity provider, but it differs depending on whether or not you sign in with the provider’s SDK. With the provider’s SDK, your code handles the sign-in process – often referred to as client flow; without the provider’s SDK, App Service handles the sign-in process – often referred to as server flow. We’ll discuss some of the theory first, before checking it out in practice.
The first thing to know is that the different identity providers have different sign-in endpoints. Here are the currently generally available (GA) identity providers and their respective sign-in endpoints:

- Microsoft identity platform: /.auth/login/aad
- Facebook: /.auth/login/facebook
- Google: /.auth/login/google
- Twitter: /.auth/login/twitter
- GitHub: /.auth/login/github
The following diagram illustrates the different steps of the authentication flow, both using and not using the provider SDK:
Figure 3.8 – Authentication flow steps
You can configure the behavior of App Service when incoming requests aren’t authenticated. If you allow unauthenticated requests, unauthenticated traffic is passed through to your application code, while authenticated traffic is passed along by App Service with the authentication information in the HTTP headers. If you set App Service to require authentication, any unauthenticated traffic is rejected before it reaches your application. The rejection can be a redirect to /.auth/login/<provider> for whichever provider you choose, or you can configure it to be an HTTP 401 or 403 response for all requests.
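These options correspond to the unauthenticatedClientAction setting in the App Service authSettingsV2 configuration. The following is a toy sketch of the decision – our own simplification for illustration, not App Service’s actual implementation:

```shell
# Toy sketch (not App Service code) of how the module treats an
# unauthenticated request, keyed on the unauthenticatedClientAction
# values from the authSettingsV2 configuration.
handle_unauthenticated() {
  case "$1" in
    AllowAnonymous)      echo "pass request through to application code" ;;
    RedirectToLoginPage) echo "302 redirect to /.auth/login/<provider>" ;;
    Return401)           echo "401 Unauthorized" ;;
    Return403)           echo "403 Forbidden" ;;
  esac
}

handle_unauthenticated RedirectToLoginPage
```

Authenticated requests, by contrast, always flow through to your application with the token information attached in headers.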
Seeing the authentication flow and authorization behavior in action will help cement your understanding of the topic, so let’s configure our App Service to make use of the authentication and authorization module. We’re going to use the Azure portal for this exercise, as that will be easier to illustrate and understand. The exam doesn’t require you to know all the commands to set this up programmatically; you just need to have some understanding of the setup and behavior. We’ll also go into more detail in Chapter 7, Implementing User Authentication and Authorization:
For simplicity, we’re going to create a new app registration, because a detailed conversation about app registrations and service principals will come in Chapter 7, Implementing User Authentication and Authorization.
Figure 3.9 – In-browser developer tool’s Network tab
Although I’m referring to the in-browser developer tool, you’re more than welcome to use other tools if you wish.
Figure 3.10 – In-browser developer tools showing a 302 status code
If you haven’t connected the dots yet, we can review the authentication settings for our app and see that we configured unauthenticated requests to receive an HTTP 302 Found response and redirect to the identity provider (Microsoft, in our example):
Figure 3.11 – Authentication settings summary showing the 302 status configuration
Figure 3.12 – In-browser developer tool showing the redirect URI and client ID
At this point, you may want to clear the network log just before finishing the sign-in process, so that you start from a clean log when we sign in. You don’t have to, but entries are easier to select when there are fewer of them.
Figure 3.13 – Permissions requested by the app registration
On both the Decoded token and Claims tabs, any app-specific roles assigned to your account would look as follows:
Figure 3.14 – Decoded token and claims entries showing assigned roles
I hope that going into that extra bit of detail and showing the authentication flow in action helped your understanding more than simply telling you the steps. We’ll now move on to the final topic of our App Service exploration by briefly looking at some of the available networking features.
Unless you’re using an ASE, which is the network-isolated SKU we mentioned earlier in this chapter, App Service deployments exist in a multitenant network. Because of this, we can’t connect our App Service directly to our network. Instead, there are networking features available to control inbound and outbound traffic and allow our App Service to connect to our network.
First, let’s talk about outbound communication. App Service roles that host our workload are called workers; the roles that handle incoming requests are called frontends. The Free and Shared App Service plans’ SKUs use multitenant workers (that is, the same worker VMs will be running multiple customer workloads). Other SKUs will run on workers that are dedicated to a single App Service plan.
This leads us to a quick mention of worker VM types – the Free, Shared, Basic, Standard, and Premium SKUs all use the same type of worker VM, PremiumV2 uses a different VM type, and PremiumV3 uses yet another.
Why is this important? Because if you scale your App Service to an SKU that uses a different worker VM type, the outbound IP addresses of your App Service will change. Several IP addresses get used for outbound calls from your app, which you can see in the Azure portal by going to the Properties blade under the Outbound IP addresses heading. Alternatively, you can use the following CLI command to list them for your app:
az webapp show -g "<resource group>" -n "<app-service>" --query outboundIpAddresses -o tsv
If you want a list of all possible outbound IP addresses your app could use – including those it would use if you scaled up to another SKU – check the Additional Outbound IP Addresses heading in the same portal location, or use the following CLI command:
az webapp show -g "<resource group>" -n "<app-service>" --query possibleOutboundIpAddresses -o tsv
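Both queries return comma-separated lists, so a quick way to see which additional addresses a scale-up could introduce is to diff the two. The following sketch uses made-up sample addresses in place of the live az output:

```shell
# Sample data standing in for the output of the two az queries above.
current="20.0.0.1,20.0.0.2"
possible="20.0.0.1,20.0.0.2,20.0.0.3,20.0.0.4"

# comm needs sorted input, so split each list into sorted lines
echo "$current"  | tr ',' '\n' | sort > current.txt
echo "$possible" | tr ',' '\n' | sort > possible.txt

# Lines unique to possible.txt are the outbound IPs your app would
# only pick up after moving to a different worker VM type
extra=$(comm -13 current.txt possible.txt)
echo "$extra"
rm current.txt possible.txt
```

With the sample data, this prints 20.0.0.3 and 20.0.0.4 – useful input for any downstream firewall allow-lists.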
To allow your app to make outbound calls to a specific TCP endpoint, you can use the Hybrid Connections feature. At a very high level, you install a relay agent called Hybrid Connection Manager (HCM) on a Windows Server 2012 or newer machine within the network that you want to connect to – which can be on-premises, so long as outbound traffic to Azure over port 443 is allowed. If you’re already aware of the Azure Relay service, this feature is built on its Hybrid Connections capability, but specialized for App Service, supporting only outbound calls to a specific TCP host and port. Both App Service and the HCM make outbound calls to the relay, providing your app with a TCP tunnel to a fixed host and port on the other side of the HCM. When a DNS request from your app matches a configured Hybrid Connection endpoint, the outbound TCP traffic is redirected through the Hybrid Connection.
The other networking feature for outbound traffic is VNet integration, which allows your app to securely make outbound calls to resources in or through your Azure virtual network (VNet) but doesn’t grant inbound access. If you integrate with a VNet in the same region, you need a dedicated subnet in that VNet. If you connect to a VNet in another region (or a classic VNet in the same region), you need a VNet gateway created in the target VNet.
Unlike outbound IP addresses, each App Service has just a single inbound IP address. There are several features for handling inbound traffic, just as there are for outbound. If you configure your app with SSL, you can use the app-assigned address feature, which supports any IP-based SSL needs you may have and gives your app a dedicated inbound IP address that isn’t shared (if you delete the SSL binding, a new inbound IP address is assigned). Access restrictions let you filter inbound requests using a list of allow and deny rules, similar to how you would with a network security group (NSG). Finally, the private endpoint feature allows private and secure inbound connections to your app via Azure Private Link. It uses a private IP address from your VNet, which effectively brings the app into your VNet – popular when you only want inbound traffic to come from within your VNet.
There’s much more to Azure networking, but these are the headlines specific to Azure App Service. As you may imagine, there’s a lot more to learn about the features we’ve just discussed here. A link to App Service networking can be found in the Further reading section of this chapter, should you wish to dig deeper.
This ends our exploration of Azure App Service. Armed with this understanding, the remainder of this chapter should be a breeze in comparison. Now that we’ve gone into some depth regarding web apps, let’s look at some additional configuration options, as well as how to configure logging for our web app.
It’s important to understand how to configure application settings and how your app makes use of them, which we will build on in the last section of this chapter. There are also various types of logging available with App Service, some of which are only available on Windows, and which can be generated and stored in different ways. So, let’s take a look.
In the previous exercise, we navigated to the Configuration blade of our App Service to view an application configuration setting. We did the same thing in the previous chapter, without explaining the relevance of those settings in any detail. We’ll fill this gap now.
In App Service, application settings are passed as environment variables to your application at runtime. If you’re familiar with ASP.NET or ASP.NET Core and the appsettings.json or web.config files, these work in a similar way, but the App Service variables override the appsettings.json and web.config variables. You could have development settings in these files for connecting to local resources such as a local MySQL database, for example, but have production settings stored safely in App Service – they are always encrypted at rest and transmitted over an encrypted channel.
For Linux apps and custom containers (like ours), App Service uses the --env flag to pass the application settings to the container, which sets the environment variables on that container. Let’s check these settings out:
{
"name": "MY_CUSTOM_GREETING",
"value": "Hello, World!",
"slotSetting": false
}
Don’t worry about what slotSetting means for now – we’ll cover it later in this chapter.
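The override behavior described above can be mimicked locally in plain shell. This is purely an illustration of the precedence (MY_CUSTOM_GREETING is just our example setting), not App Service itself:

```shell
# Simulates an app reading a setting: the environment variable injected
# by App Service (via --env on the container) wins over a file-based
# default such as a value in appsettings.json.
file_default="Greeting from appsettings.json"
MY_CUSTOM_GREETING="Hello, World!"   # would be injected by App Service

# Use the environment variable if set, otherwise fall back to the file
effective="${MY_CUSTOM_GREETING:-$file_default}"
echo "$effective"
```

Unset MY_CUSTOM_GREETING and the fallback value is printed instead – the same shape of behavior your app sees when a setting is removed from App Service.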
If we were in this same area with a Windows App Service app, we would also have a Default documents tab, which would allow us to define a prioritized list of documents to display when navigating to the root URL for the website. The first file in the list that matches is used.
Figure 3.15 – The Index.cshtml file within VS Code
Figure 3.16 – h3 heading showing the new environment variable value
az acr build --image "chapter3:1.1.0" --image "chapter3:latest" --registry "<ACR name>" --file "Dockerfile" .
az webapp restart -g "<resource group>" -n "<app-service>"
Figure 3.17 – Application setting value showing through application code
az webapp config appsettings set -g "<resource group>" -n "<app-service>" --settings "MY_CUSTOM_GREETING=Oh, hello again!"
Give the App Service a few moments to restart, then refresh the website for the app and see that your new value has been implemented.
One final configuration you should be aware of is cross-origin resource sharing (CORS), which is supported for RESTful APIs in App Service. At a high level, CORS-aware browsers prevent web pages from requesting restricted resources from a different domain from the one that served the page. By default, cross-domain requests (Ajax requests, for example) are forbidden by the same-origin policy, which prevents malicious code from accessing sensitive data on another site. There may be times when you want sites from other domains to access your app (if your App Service hosts an API, for example). In that case, you can configure CORS to allow requests from one or more (or all) domains.
In terms of the flow, the browser makes what’s known as a pre-flight request to the app URL using the OPTIONS verb to determine whether it has permission to perform the action. This request includes headers detailing the origin and the request method (GET, PUT, and so on). The response shows which actions (if any) the app is willing to accept. Although our app isn’t an API, we can still use it to prove the most basic functionality:
az webapp cors add -g "<resource group>" -n "<app-service>" --allowed-origins "http://somedomain.notreal"
curl -v -X OPTIONS "https://<app-service>.azurewebsites.net" -H "Access-Control-Request-Method: GET" -H "Origin: http://someotherdomain.notreal"
Notice the lack of helpful information in the response other than the line that contains something similar to {"code":400,"message":"The origin 'http://someotherdomain.notreal' is not allowed."}.
curl -v -X OPTIONS "https://<app-service>.azurewebsites.net" -H "Access-Control-Request-Method: GET" -H "Origin: http://somedomain.notreal"
Notice that this time, the response includes the Access-Control-Allow-Origin: http://somedomain.notreal response header. This is enough to tell us that CORS is working without actually creating an API. API Management will be covered in Chapter 11, Implementing API Management, so there’s no need to go through this at this point. In a real-world situation, the JavaScript client would send the pre-flight request using the OPTIONS verb (as we did with cURL), the server would respond with what it’s willing to accept (if anything), and, if the action is allowed, the actual request would then be made.
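The behavior we observed in the two cURL calls boils down to a simple origin check on the server side. Here’s a toy simulation of that check – our own sketch, not App Service’s implementation:

```shell
# Toy server-side CORS check: emit Access-Control-Allow-Origin only
# when the request's Origin header matches the allowed-origins list.
allowed_origins="http://somedomain.notreal"

preflight() {
  origin="$1"
  match="no"
  for a in $allowed_origins; do
    if [ "$origin" = "$a" ]; then match="yes"; fi
  done
  if [ "$match" = "yes" ]; then
    echo "Access-Control-Allow-Origin: $origin"
  else
    echo "The origin '$origin' is not allowed."
  fi
}

preflight "http://someotherdomain.notreal"   # rejected, as in our first cURL call
preflight "http://somedomain.notreal"        # allowed, as in our second cURL call
```

Adding more space-separated entries to allowed_origins mirrors what az webapp cors add does for the real app.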
If you’re wondering why we haven’t touched on the App Configuration feature, that’s because we will look at it in more detail in Chapter 8, Implementing Secure Cloud Solutions. For now, we can move on to the topic of logging.
There are various types of logging available within App Service – some are Windows-specific, while others are available for both Windows and Linux:

- Application logging (Windows and Linux) – messages generated by your application code, written to the filesystem and/or Azure Blob Storage
- Web server logging (Windows) – raw HTTP request data in the W3C extended log file format
- Detailed error messages (Windows) – copies of the .html error pages that would have been sent to the client for HTTP errors (status code 400 or greater)
- Failed request tracing (Windows) – detailed tracing information on failed requests, including a trace of the IIS components that processed the request
- Deployment logging (Windows and Linux) – logs generated when you publish content to an app
For logs stored within the App Service filesystem, you can access them via their direct URLs. For Windows apps, the URL for the diagnostic dump is https://<app-service>.scm.azurewebsites.net/api/dump. For Linux/container apps, the URL is https://<app-service>.scm.azurewebsites.net/api/logs/docker/zip. Within the portal, you can use Advanced Tools to access further information and the links just mentioned.
Let’s enable application logging in our app and have our code generate a log message to see this in action. We’ll start in the portal because that’s easier to show the options that are different between Linux and Windows apps:
To illustrate the differences between Linux and Windows apps, this is what you’d see if you went to the same location from a Windows app:
Figure 3.18 – App Service logging options for a Windows App Service
_logger.LogInformation("This is my custom information level log message.");
Then, save the file.
az acr build --image "chapter3:1.2.0" --image "chapter3:latest" --registry "<ACR name>" --file "Dockerfile" .
Then, restart the App Service however you wish (remember, we configured it to use the latest tag, so a restart is enough).
{"EventId":0,"LogLevel":"Information","Category":"aspnetapp.Pages.IndexModel","Message":"This is my custom information level log message.","State":{"Message":"This is my custom information level log message.","{OriginalFormat}":"This is my custom information level log message."}}
Now that we’ve got a good understanding of some key concepts of App Service and have run through some detailed topics and enabled logging, we’ll look at a topic that was very briefly touched on in Chapter 1, Azure and Cloud Fundamentals: scaling.
In Chapter 1, Azure and Cloud Fundamentals, we mentioned that the cloud offers elasticity so that it can scale and use as much capacity as you need when you need it. We specifically touched on scaling up (that is, vertical scaling) and scaling out (that is, horizontal scaling). Let’s jump into the portal once more and take a look:
Figure 3.19 – Custom metric condition visual
You can view any autoscale actions through the Run history tab, as well as via the App Service Activity log. Here are a few quick points to keep in mind when using autoscale outside of a self-learning scenario such as ours:
One important point to remember: scaling rules are created on the App Service plan rather than on an individual App Service (because, as we know, the App Service plan provides the resources). When the plan increases its instance count, all of your App Services in that plan will run across that many instances – not just the App Service that triggered the scaling.
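Conceptually, each scale-out rule is just a threshold comparison over a metric window, applied to the plan’s instance count within its configured bounds. A toy evaluation – our own simplification, not Azure’s autoscale engine – might look like this:

```shell
# Toy scale-out evaluation: add one instance when average CPU over the
# metric window exceeds the rule's threshold, capped at the maximum.
cpu_avg=75          # average CPU % over the metric window (sample value)
threshold=70        # scale-out threshold from the rule
instances=2         # current instance count on the plan
max_instances=5     # upper bound configured on the autoscale setting

if [ "$cpu_avg" -gt "$threshold" ] && [ "$instances" -lt "$max_instances" ]; then
  instances=$((instances + 1))
fi
echo "instance count: $instances"
```

A matching scale-in rule would do the reverse against a lower threshold and the minimum bound – and, as noted above, the new instance count applies to every app in the plan.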
So far, any impactful changes we’ve pushed to the App Service would cause the service to restart, which would lead to downtime. This is not desirable in most production environments. App Service has a powerful feature called deployment slots to allow you to test changes before they hit production, control how much traffic gets routed to each deployment slot, and then promote those changes to production with no downtime. Let’s wrap up this chapter by learning about deployment slots.
The first thing to know about deployment slots is that they are live apps with hostnames, content, and configuration settings. In a common modern development workflow, you’d deploy code through whatever means to a non-production deployment slot (often called staging, although this could be any name and there may be multiple slots between that and production) to test and validate. From there, you may start increasing the percentage of traffic that gets routed to the staging slot or you may just swap the slots – whatever was in production goes to staging and whatever was in staging goes to production, with no downtime.
Because it is just a swap, if something unexpected does happen as a result, you can swap the slots back and everything would return to how it was before the swap occurred. Several actions take place during a swap, including the routing rules changing once all the slots have warmed up. There's a documentation link in the Further reading section of this chapter, should you wish to explore this further.
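Both the gradual traffic routing and the swap itself can be driven from the Azure CLI. The following is a minimal sketch with placeholder resource names; the 20% split is just an example value:

```shell
# Route 20% of production traffic to the staging slot for validation
az webapp traffic-routing set \
  --resource-group "<resource group>" \
  --name "<app-service>" \
  --distribution staging=20

# Once validated, swap staging into production.
# Running the same command again swaps them back (the rollback case).
az webapp deployment slot swap \
  --resource-group "<resource group>" \
  --name "<app-service>" \
  --slot "staging" \
  --target-slot "production"
```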
We spoke about application configuration settings earlier in this chapter, but we purposely didn’t address what the slotSetting meant. With each deployment slot being its own app, they can have their own application configuration as well. If a setting isn’t configured as a deployment slot setting, that setting will follow the app when it gets swapped. If the setting is configured as a deployment slot setting, the setting will always be applied to whichever app is in that specific slot. This is helpful when there are environment-specific settings; you can have the same code in staging and production, but the settings will change, depending on which deployment slot that code is running from.
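As a hedged sketch of how a setting gets pinned to a slot, the Azure CLI exposes this via the `--slot-settings` parameter (as opposed to `--settings`, which creates settings that follow the app on swap). The setting name and value here are hypothetical examples:

```shell
# Mark a setting as a *slot setting* so it sticks to the staging slot
# during swaps, rather than following the app into production
az webapp config appsettings set \
  --resource-group "<resource group>" \
  --name "<app-service>" \
  --slot "staging" \
  --slot-settings ENVIRONMENT_NAME=Staging
```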
Different App Service plan tiers have a different number of deployment slots available, so that could be a consideration when deciding on which tier to select or scale to. As with some other settings we’ve discussed, Windows apps have an additional setting that’s not available with Linux/container apps: auto-swap.
Under the Configuration blade of a Windows App Service, under the General settings tab, you’ll see the option to enable auto-swap when code is pushed to that slot. For example, if you enable this setting (again, only available on Windows App Service apps) on the staging slot, each time you deploy code to that slot, once everything is ready, App Service will automatically swap that slot with the slot you specify in the settings. Don’t be disheartened if you want something like that but you’re using Linux/container apps – CI/CD pipelines can do so much more than that, and we’ll go into some detail at the end of this book on this kind of automation.
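The same behavior can also be configured from the Azure CLI; the following is a sketch with placeholder names (the target slot defaults to production unless otherwise specified):

```shell
# Enable auto-swap on the staging slot so that deployments to it are
# swapped into production automatically once warm-up completes
# (Windows App Service apps only)
az webapp deployment slot auto-swap \
  --resource-group "<resource group>" \
  --name "<app-service>" \
  --slot "staging"
```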
For now, let’s see deployment slots in action:
You can also use the following CLI command:
az webapp deployment slot create -g "<resource group>" -n "<app-service>" -s "staging" --configuration-source "<app-service>"
Alternatively, you can use the following PowerShell command:
New-AzWebAppSlot -ResourceGroupName "<resource group>" -Name "<app-service>" -Slot "staging"
This shows how you could test changes in the staging slot/app before pushing it to production. The documentation also explains how you can use a query string in a link to the App Service, which users could use to opt into the staging/beta/preview app experience.
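The opt-in mechanism the documentation describes is the `x-ms-routing-name` query parameter. As a quick sketch (with a placeholder hostname):

```shell
# Opt into the staging slot experience explicitly via the routing parameter
curl "https://<app-service>.azurewebsites.net/?x-ms-routing-name=staging"

# Opt back into the production slot
curl "https://<app-service>.azurewebsites.net/?x-ms-routing-name=self"
```

App Service sets a routing cookie in the response, so the browser keeps getting the chosen slot on subsequent requests.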
If you wanted to, you could revert the changes by swapping the slots again. If our application setting had been configured as a slot setting, we wouldn't have noticed any changes because, rather than following the app, the setting would have remained with the specific slot.
One final point to note is that although the default behavior is for all the slots of an App Service to share the same App Service plan, this can be changed by assigning a different App Service plan to each slot individually.
With that final point, we have come to the end of our exploration of App Service – congratulations! Many of the concepts we've explored here will help with the topics covered throughout the rest of this book, as later chapters will dive deeper into, or reference, concepts we've already covered in some detail.
In this chapter, we dived into Azure App Service by looking at some fundamental features, such as App Service plans, as well as some basics of App Service web apps. We then delved into authentication and authorization, stepped through the authentication flow, and provided a summary of some networking features. Once our app was up and running, we went into some detail about configuration options and how application settings can be used by the application and are exposed as environment variables. We learned about the different types of built-in logging available with App Service and went through an exercise to have our application code log messages that App Service could process. Then, we learned how to automatically scale our App Service based on conditions and rules to make use of the elasticity that the cloud offers. Finally, we walked through how to make use of deployment slots to avoid downtime, control how changes are rolled out, and how to roll back changes, should this be required.
I hope that the topics and exercises we went through in this chapter have helped you understand the concepts that will be discussed later in this book. If you understand the fundamental concepts, you are much better prepared for the exam, which may contain some abstract questions that require this kind of understanding, rather than just example questions.
In the next chapter, we will introduce Azure Functions, explain what they do, and compare them with other services. We'll also cover scaling. Then, we'll start looking at developing Azure functions with triggers and bindings, before moving on to developing stateful durable functions.
Answer the following questions to test your knowledge of this chapter:
To learn more about the topics that were covered in this chapter, take a look at the following resources: