© Julian Soh, Marshall Copeland, Anthony Puca, and Micheleen Harris 2020
J. Soh et al.Microsoft Azurehttps://doi.org/10.1007/978-1-4842-5958-0_20

20. Developing and Deploying Azure-based Applications

Julian Soh1 , Marshall Copeland2, Anthony Puca3 and Micheleen Harris1
(1)
Washington, WA, USA
(2)
Texas, TX, USA
(3)
Colorado, CO, USA
 

Introduction

Azure is often referred to as one of only a few hypercloud offerings today. The term hypercloud indicates that it has a worldwide footprint, access to almost limitless resources (primarily compute, memory, and storage), and all of it on-demand and deployable within minutes.

This core characteristic of Azure brings new capabilities to developers. Applications can be more robust and resilient because resources can be scaled up as needed, requests can be rerouted away from regions experiencing technical issues caused by natural threats, and response times can be significantly improved because data and transactions are located closer to the end user's geographic location. There is also the ability to manage and deploy new versions of applications with minimal or no service interruption.

Service-level agreements (SLAs) for applications have been around for a long time and are highly dependent on the underlying infrastructure. In this chapter, we cover built-in capabilities in Azure that help improve SLAs impacted by application deployment efforts.

Trends in Cloud-based Application Development

The characteristics and capabilities of a hypercloud like Azure change the way developers think about designing and deploying applications, and thus drive certain trends we see today. Although many changes affect developers and the way things are traditionally done, the major trends identified and discussed in this chapter are
  • Platform as a service (PaaS)

  • Azure Web Apps (as a PaaS)

  • Containers (as a PaaS)

  • Built-in monitoring, debugging, and performance insights

We selected these topics because they are prominent trends in cloud computing today. In the case of Azure, these are native capabilities that can be easily adopted and provide significant improvements in SLAs with minimal effort. You will see how this is accomplished through the hands-on exercises in this chapter.

Platform as a Service (PaaS)

In Chapter 12, we introduced the topic of PaaS and made the case for its adoption. As a recap, we said that PaaS provided a layer of abstraction from the underlying infrastructure that forms the foundation of any application. No matter how advanced an application is, or how many different acronyms for new technologies and methods we hear today, the bottom line is that everything is supported by computers that are connected via network cables and powered from an outlet. Updating and replacing aging or faulty hardware, adding new hardware, and making sure the power is not disrupted by a vacuum cleaner, for example (yes, this has happened to one of the authors in an enterprise setting), are all considered “busy work” that does not contribute to value-added advancement but are very important nonetheless.

In this section on PaaS, we are going to explore additional SLA-contributing capabilities that developers who are deploying applications on Azure should take advantage of.

Slots on Azure Web Apps

Slots are a unique aspect of Azure Web Apps that is easily overlooked. They are easy to understand and adopt, contribute to the quality assurance (QA) effort, and reduce downtime when it comes time to cut over from one version of an application to another.

Problems like bugs may be caught during testing, but there are usability and user interface (UI) issues that surface only during user acceptance testing. Errors may also surface when an application is deployed from the dev/test environment to the production environment because of different settings like security or hardware. That is why most organizations try to maintain at least two identical infrastructures for the test and the production environments. This is a costly approach both in terms of capital expense and the labor needed to synchronize two environments. Slots in Azure Web Apps address these issues.

Hands-on with Slots on Azure Web Apps

Think of slots in Azure Web Apps as mirror environments of the web app. Every web app has one slot when it is first deployed. This default production slot is the one developers publish their web applications to. Subsequent slots can be configured as mirror environments for development, testing, QA, and pre-production or staging purposes. The best part is that slots cost nothing extra for web apps in the Standard or Premium App Service Plans, and since Standard is the minimum plan recommended for production environments, there is no reason not to use this zero-dollar feature. Now that we have briefly made the financial case for slots, the next exercise shows how to get started with them quickly.
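If you prefer to script these operations, slots can also be created from the Azure CLI. The following is a minimal sketch; the resource group name (rg-webapps) and web app name (myfeedbackform) are hypothetical placeholders, and the commands require an Azure subscription.

```shell
# Create a Staging slot, cloning configuration from the production slot
az webapp deployment slot create \
    --resource-group rg-webapps \
    --name myfeedbackform \
    --slot staging \
    --configuration-source myfeedbackform

# List all slots for the web app
az webapp deployment slot list \
    --resource-group rg-webapps \
    --name myfeedbackform \
    --output table
```

The portal steps below accomplish the same thing interactively.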
  1.

    In the Azure portal, go to the web app for the Sentiment Analysis Feedback form you developed in Chapter 15.

     
  2.

    In the Overview pane, note that the App Service Plan is Standard or Premium. If you followed the exercise verbatim, the App Service Plan for this project should already be Standard. The reason we point this out is that slots are available only in the Standard plan or higher; slots are not available in the Basic App Service Plan.

     
  3.

    Select Deployment slots under the Deployment section in the menu on the left, as referenced in Figure 20-1.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig1_HTML.jpg
Figure 20-1

Add a deployment slot via the Azure portal

  4.

    Note that there is one active running slot and that 100% of the traffic is currently being routed to it. Click Add Slot at the top of the pane, as shown in Figure 20-1.

     
  5.

    A new pane opens. Type Staging as the name of this new slot, and in the Clone settings from dropdown box, select the default running slot. In our example, since our running slot is called myfeedbackform, it is the only other option from which you can clone settings.

     
  6.

    Click Add.

     
  7.

    It will take some time for the slot to be created. Once it is done, click Close to close the pane that appeared in step 5.

     
  8.

    You will now see two slots, both running. The one you just created has the slot name appended to the default slot name, separated by a dash (-). The newly created slot will have 0% of the traffic routed to it.

     
  9.

    Open another browser tab and go to the URL of the new slot (e.g., https://myfeedbackform-staging.azurewebsites.net). It should open to a generic landing page stating that the App Service is running, indicating that the slot is live.

     
  10.

    Go back to the portal and click the staging deployment slot, as shown in Figure 20-2. You will be redirected to a new Web App page, as if you were configuring a new and separate web app.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig2_HTML.jpg
Figure 20-2

Clicking the newly created staging web app

  11.

    Get the publishing profile of the staging web app by clicking Get Publish Profile at the top of the pane.

     
  12.

    Launch Visual Studio and open the Sentiment Analysis Feedback form project.

     
  13.

    Make a modification to the project to simulate a new version being released to staging for QA before moving the application to production. For this exercise, we opened the Default.aspx page and changed lines 7 and 15 to describe the application. Figure 20-3 shows these changes.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig3_HTML.jpg
Figure 20-3

Changes we made to Default.aspx to simulate staging a new release

Note

For something more visual, we also made a change to the code-behind of Default.aspx to show the web app's process ID by using System.Diagnostics. We display the process ID in a label control (named lbl_ProcessID). Our code for Default.aspx.cs looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Diagnostics;
namespace Sentiment_Feedback
{
    public partial class _Default : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            int nProcessID = Process.GetCurrentProcess().Id;
            lbl_ProcessID.Text = nProcessID.ToString();
        }
    }
}
  14.

    Publish this version of the Sentiment Analysis Feedback form using the publishing profile for the Staging slot. Remember that you downloaded the publish profile for the Staging slot in step 11.

     
  15.

    Once the application is published, you should be automatically directed to the URL of the application; if not, refresh the page or open a new browser tab and go to the URL for the staging web app. Any of these methods will show that the application is now published on the same web app as the production site but with a different URL.

     
  16.

    Take note of the differences between the version of the application in the production slot and the one in the staging slot by identifying the changes you made in step 13. Better yet, if you modified the application to display the process ID of the web app, take note of the process IDs of the two versions. They should be different, indicating that these are separate applications running in different application pools.

     
  17.

    At this point, you can continue testing all the application's capabilities in the staging slot by submitting feedback and confirming that the application works.

     

What have you accomplished at this point? First, you created a separate slot on the same web app and duplicated the settings from the production slot. That means you know there are no configuration settings that may be incompatible with this new version.

Second, you can conduct user acceptance and QA testing without interrupting the production site.

Note

While there is no additional cost in deploying to additional slots in an Azure web app, and even though one of the best use-case scenarios for slots is QA and user acceptance testing that usually occurs during staging, remember that all slots are supported by the same underlying resources defined by the App Service Plan. Thus, there are scenarios that may not be a good fit for slots. Stress testing is one such example. In a stress test, you are simulating traffic to an application to ensure that it works under peak loads. Unless you have configured autoscaling for the App Service Plan, stress testing a slot affects the version of the application in the production slot because the test cannibalizes the compute and memory resources shared between all slots.

Next, let’s assume that the version of the application that we deployed to the staging slot in the preceding exercise has passed user acceptance testing and QA. It is now time to move the new version, currently in the staging slot, into production.
  1.

    In the Azure portal, in the configuration pane for the web app, select Deployment slots in the menu on the left.

     
  2.

    Then click the Swap menu option at the top of the pane, as shown in Figure 20-4.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig4_HTML.jpg
Figure 20-4

Swapping slots

  3.

    By default, when the swap pane opens, it has the source of the swap set as the staging slot, and the target is the production slot. For now, ignore the Perform swap with preview checkbox and leave it unchecked.

     
  4.

    Click Swap at the bottom of the pane.

     
  5.

    When the swap is completed successfully, you will be notified via a status bar at the bottom of the swap pane, as shown in Figure 20-5.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig5_HTML.jpg
Figure 20-5

Successfully swapped slots

  6.

    Refresh the production site, and you will see that the application is now the version that was previously in the staging slot, and vice versa.

     

At this point, you have successfully deployed the latest version of the application by just swapping slots. URLs and settings were preserved, and there was no downtime when the swap occurred.

If you need to quickly roll back to the previous version of the application for whatever reason, all you have to do is reverse the swap by clicking Swap and using the same source and target options. Since the previous version is now in the staging slot, initiating a new swap simply reverses the first one. Hopefully, you will not have to do this because you had ample opportunity to carry out user acceptance testing and QA before the first swap!
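The swap and its rollback can also be scripted with the Azure CLI, which is handy for automated release pipelines. A sketch, again using hypothetical resource and app names, and requiring an Azure subscription:

```shell
# Swap the staging slot into production; the old production version moves to staging
az webapp deployment slot swap \
    --resource-group rg-webapps \
    --name myfeedbackform \
    --slot staging \
    --target-slot production
# Running the same command again reverses the swap (i.e., rolls back)
```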

Note

There are several settings that do not get swapped but remain with the slot. For example, if you have a custom domain name for the production slot (a very likely scenario, covered in Chapter 12), this custom domain does not get swapped and remains with the production slot. Therefore, when a user visits the custom domain URL, it resolves to the new version of the application that now occupies the production slot. If the custom domain configuration were swapped as well, and the staging slot did not have a custom domain, the custom URL would point to the staging slot that now holds the previous version of the application, which would defeat the purpose of the swap. Aside from custom domains, the following settings are also not swapped and remain with their respective slots:

  • Publishing endpoints

  • Custom domain names

  • SSL certificates and bindings

  • Scale settings

  • WebJobs schedulers

Containers

We assume that you have some understanding of containerization, but here is a quick primer.

Nothing has revolutionized the IT industry more than the introduction of virtualization technology. Virtualization has helped us maximize limited hardware resources by significantly increasing the density of workloads that can be supported. For IT professionals, this usually takes the form of a hypervisor that creates multiple virtual machines (VMs) that share a physical infrastructure. As you saw in Chapter 2, hyperclouds such as Azure have maximized and automated the power of virtualization to provide IaaS for the masses.

For developers, virtualization occurs at the software layer, unlike IaaS, which occurs at the operating system (OS) layer, but both virtualization layers can be combined. Figure 20-6 shows the evolution of virtualization as well as the high-level differences and implementation options for IaaS virtualization and application virtualization using containers.
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig6_HTML.png
Figure 20-6

Evolution of virtualization

Prior to any form of virtualization technology, applications were tightly bound to the OS, which was in turn tightly bound to the physical hardware. Capacity and compatibility between applications determined the number of applications that could run on any given hardware, so a large hardware footprint was needed. For example, in Figure 20-6, during pre-virtualization, applications A/B/C are compatible with each other and thus can occupy the same physical infrastructure. Applications D/E/F are also compatible but need to occupy new hardware because there are not enough resources on A/B/C's infrastructure. Applications G and H are not compatible with each other because of conflicts in their respective software dependencies, and thus must reside on their own separate infrastructure. This has the side effect of unused capacity that still consumes resources such as power and cooling, because capacity is defined at the hardware layer.

With the introduction of IaaS virtualization, the OS can be virtualized into many VMs of different sizes. Applications that are compatible with each other can share a VM as long as there is enough capacity on the VM. Following along with the same example from before, applications G and H are still not compatible so we assign them their own VMs, choosing to have a larger VM for application G with spare capacity at the VM level for future applications, but scaling out a small VM for application H and leaving additional capacity on the physical infrastructure to spin up more VMs in future.

Containerizing applications addresses the interapplication compatibility issue. Each application is packaged in its own container together with all of its dependencies, and since containers are isolated from one another, they can coexist on the same physical hardware. Container virtualization software separates the OS from the applications, so you can install the container software directly onto a physical host and its OS or onto VMs, as shown in Figure 20-6. The goal is to optimize the underlying hardware resources by increasing the density of applications running on them.
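To make the packaging idea concrete, here is a minimal, hypothetical Dockerfile for an ASP.NET Framework application like ours. The base image tag and publish path are illustrative; Visual Studio generates an equivalent file automatically in a later exercise.

```dockerfile
# The base image bundles the OS libraries, IIS, and the ASP.NET runtime,
# so the container carries all of the application's dependencies with it
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# Copy the published web application into the IIS web root
COPY ./bin/Release/Publish/ /inetpub/wwwroot
```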

Containers in Azure

If we leverage IaaS in Azure, we can bring up Linux VMs and deploy a client-server architecture for containers, like Kubernetes (K8S) on Linux.

Alternatively, we can consume containerization as a PaaS by using Azure Kubernetes Service (AKS) and Azure Container Registry.

Note

Azure Container Service (ACS) was retired on January 31, 2020. AKS was introduced in 2017 and has completely replaced ACS. See https://azure.microsoft.com/en-us/updates/azure-container-service--retire-on-january-31-2020/.

If your organization is already using on-premises containerization technology such as Red Hat’s OpenShift, you can port your solution to Azure Red Hat OpenShift, which is an Azure PaaS jointly managed by Microsoft and Red Hat.

The bottom line is that you can take advantage of containerization without the overhead of managing an IaaS infrastructure for containers since the service is a PaaS offering in Azure.

Hands-on with Docker Images and the Azure Container Registry

In this exercise, we take our Sentiment Analysis feedback application and make it a container image.
  1.

    Go to http://docker.com/products/docker-desktop. Download and install Docker Desktop. Windows 10 Pro or Enterprise edition is required if you are doing this on a Windows machine. A reboot after the installation may also be required.

     
  2.

    Launch Visual Studio and open the Sentiment Analysis Feedback project.

     
  3.
    Right-click the project, select Add, as referenced in Figure 20-7, and then select Docker Support.
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig7_HTML.jpg
    Figure 20-7

    Adding Docker Support to an existing project

     
  4.

    Select Windows as the target OS when you see the Windows or Linux prompt.

     
  5.

    This will take some time, as Visual Studio pulls the Docker images and dependencies during the build process. You can monitor the download and extraction process in the separately launched console window, as seen in Figure 20-8 (this window may be minimized or hidden, so you may not see it pop up).

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig8_HTML.jpg
Figure 20-8

Docker downloading and extracting images

  6.

    Build and debug the application using Docker by making the selection in the menu. Make sure that Docker is selected, as seen in Figure 20-9, and then click the play button.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig9_HTML.jpg
Figure 20-9

Set to Debug and Docker as the target

  7.

    Visual Studio will build the application, deploy it to a Docker container, and launch the app in a browser. Debug the application in the browser to make sure everything works.

     
  8.

    In Visual Studio, press Ctrl+Q and type Containers in the search box. The Containers window opens, and you will see the resources and dependencies that are in the container, as shown in Figure 20-10.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig10_HTML.jpg
Figure 20-10

Browsing the environment, ports, and files in the container

Stop the debugging once you have tested the application. Let’s publish this application as a container image to Azure.
  1.

    Right-click the project in Solution Explorer, select Publish, and then click Start.

     
  2.

    Select Container Registry as the publish target, and then select Create New Azure Container Registry.

     
  3.

    Click Create Profile, as referenced in Figure 20-11.

     
  4.
    Enter a globally unique DNS prefix, subscription, an existing or new resource group, SKU, and registry location. Then click Create.
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig11_HTML.jpg
    Figure 20-11

    Creating a new Container Registry in Azure

     
  5.

    In the next screen, click Publish. This deployment may take a while. Monitor the Output window in Visual Studio and the subsequent command windows that Docker Desktop launches during the deployment.

     
  6.

    Docker pushes the application to Azure Container Registry.

     

Now that we have successfully published the container image for our application, we can use that image on any host capable of running Docker images.
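For example, any Docker-capable host that has authenticated to the registry can pull and run the image. A sketch, assuming a hypothetical registry named juliancontainerregistry and an image named sentiment_feedback (substitute your own names):

```shell
# Authenticate Docker to the private registry, then pull and run the image
az acr login --name juliancontainerregistry
docker pull juliancontainerregistry.azurecr.io/sentiment_feedback:latest
docker run -d -p 8080:80 juliancontainerregistry.azurecr.io/sentiment_feedback:latest
```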

Hands-on with Azure Kubernetes Service (AKS)

AKS is a hosted PaaS that you can use to run your containerized applications. In the previous exercise, we added container support to our Sentiment Feedback application, tested it, and then pushed it to our private Azure Container Registry. In this exercise, we publish the container to AKS so that the application goes live to the public.

For this exercise, you have the option to use the Azure CLI, but for the following steps, we use Cloud Shell from the portal.
  1.

    In the Azure portal, click the Cloud Shell icon in the top menu, as shown in Figure 20-12.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig12_HTML.jpg
Figure 20-12

Launching Cloud Shell from the portal

  2.

    At the time of this writing, running Windows and Linux node pools in the same cluster is still in preview, so you must activate preview features. In Cloud Shell, make sure that you are using Bash (not PowerShell) as the environment, and type the following.

     
az extension add --name aks-preview
Note

Read about Windows Server container support and check when it moves from Preview to General Availability at https://azure.microsoft.com/en-us/blog/announcing-the-preview-of-windows-server-containers-support-in-azure-kubernetes-service/

  3.

    Check for available updates. Enter the following.

     
az extension update --name aks-preview
  4.
    Enable the Windows Preview feature. Enter the following command to do so, but be aware that enabling it may take a while.
    az feature register --name WindowsPreview --namespace Microsoft.ContainerService
     
  5.
    Wait for the Windows Preview feature to be enabled. You can validate that it is enabled by repeatedly entering the following command until you see the state change from Registering to Registered, as shown in Figure 20-13.
    az feature list -o table --query "[?contains(name,'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"
     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig13_HTML.jpg
Figure 20-13

State of Windows Preview feature changing from Registering to Registered

  6.

    Once the Windows Preview feature is registered, enter the following command to refresh the registration.

     
az provider register --namespace Microsoft.ContainerService
  7.
    You are now ready to create the AKS cluster, which will contain both Linux and Windows nodes. If you are going to use an existing resource group, skip to the next step. Otherwise, if you want to put the AKS cluster in a new resource group, type the following.
    az group create --name <Enter_Name_Of_Your_Resource_Group> --location <Enter_the_region>
    E.g., az group create --name rg-containers --location westus
     
  8.
    Create the AKS cluster with Kubernetes version 1.13.5 or above. To check which versions are available by region, use the following command:
    az aks get-versions --location <location> --output table
    E.g., az aks get-versions --location westus --output table
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig14_HTML.jpg
    Figure 20-14

    Kubernetes versions in the West US region and identifying the latest non-preview version

     
  9.

    As seen in Figure 20-14, we identified that the latest non-preview version of Kubernetes in the West US region is 1.15.7, which is greater than 1.13.5, so we use this version for our cluster.

     
  10.
    Create the AKS cluster.
    PASSWORD_WIN="<Select_a_Password>"
    az aks create \
        --resource-group <Resource_Group_Name_From_Step_7> \
        --name <Name_Of_Cluster> \
        --node-count 1 \
        --kubernetes-version 1.15.7 \
        --generate-ssh-keys \
        --windows-admin-password $PASSWORD_WIN \
        --windows-admin-username azureuser \
        --enable-vmss \
        --enable-addons monitoring \
        --network-plugin azure
    E.g.,
    PASSWORD_WIN="P@ssw0rd1234"
    az aks create \
        --resource-group rg-containers \
        --name JulianK8S \
        --node-count 1 \
        --kubernetes-version 1.15.7 \
        --generate-ssh-keys \
        --windows-admin-password $PASSWORD_WIN \
        --windows-admin-username azureuser \
        --enable-vmss \
        --enable-addons monitoring \
        --network-plugin azure

    Note If you already have an existing AKS cluster prior to enabling the Windows Preview feature, you are unable to apply the Windows Preview features to this cluster. You need to create a new cluster after the Windows Preview feature has been enabled.

     
  11.
    Once the AKS cluster has been created, add a Windows node pool. You need the Windows node for our .NET Sentiment Feedback project.
    az aks nodepool add \
        --resource-group <Resource_Group_Name_From_Step_7> \
        --cluster-name <Name_Of_Cluster> \
        --os-type Windows \
        --name npwin \
        --node-count 1 \
        --kubernetes-version 1.15.7
    E.g.,
    az aks nodepool add \
        --resource-group rg-containers \
        --cluster-name JulianK8S \
        --os-type Windows \
        --name npwin \
        --node-count 1 \
        --kubernetes-version 1.15.7
     
  12.
    Connect the AKS cluster to the Azure Container Registry from the previous hands-on exercise, to which we pushed the Sentiment Feedback image.
    az aks update -n <Cluster_Name> -g <Resource_Group_Name_From_Step_7> \
        --attach-acr <acrName>
    E.g.,
    az aks update -n JulianK8S -g rg-containers \
        --attach-acr JulianContainerRegistry
     
  13.

    Enter the following command to determine the details of the Windows node within the cluster so that you can use them in the YAML file in the next step. This ensures that the application is deployed to the correct node in the cluster. Locate the information in the output, as shown in Figure 20-15.

     
kubectl get nodes --show-labels
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig15_HTML.jpg
Figure 20-15

Locating the node information in the output

  14.
    Create the following YAML file using Visual Studio Code (https://code.visualstudio.com/Download) or your preferred lightweight IDE, and save the file locally. You will upload the file in the next step. Alternatively, you can use the vi editor in Cloud Shell to create the YAML file instead of uploading it. Make sure that the YAML file references the correct Windows node label that you determined in step 13, as shown in the following code and in Figure 20-16.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sentimentfeedback
      labels:
        app: sentimentfeedback
    spec:
      replicas: 1
      template:
        metadata:
          name: sentimentfeedback
          labels:
            app: sentimentfeedback
        spec:
          nodeSelector:
            "beta.kubernetes.io/os": windows
          containers:
          - name: sentimentfeedback
            image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
            resources:
              limits:
                cpu: 1
                memory: 800Mi
              requests:
                cpu: .1
                memory: 300Mi
            ports:
              - containerPort: 80
      selector:
        matchLabels:
          app: sentimentfeedback
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sentimentfeedback
    spec:
      type: LoadBalancer
      ports:
      - protocol: TCP
        port: 80
      selector:
        app: sentimentfeedback
     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig16_HTML.jpg
Figure 20-16

Visual Studio Code used to create the YAML file

  15.
    Upload the YAML file to the Cloud Shell by clicking the upload option in the Cloud Shell menu, as referenced in Figure 20-17.
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig17_HTML.jpg
    Figure 20-17

    Uploading files to the Cloud Shell environment

     
  16.

    You are now ready to deploy the Sentiment Feedback application by issuing the following command. It may take a while for the application to be deployed, because the cluster first needs to pull the container image, and then the load balancer will need to provision an external IP address for the application.

     
kubectl create -f ./sentimentfeedback.yaml
  17.
    A deployment and a service will be created for our application. To monitor the deployment, issue the following command. Initially, the External-IP column shows Pending while the container is being pulled and the load balancer acquires an IP address, but eventually an IP address is assigned, as referenced in Figure 20-18, and at that point, the application is accessible.
    kubectl get service <servicename>
    E.g.,
    kubectl get service sentimentfeedback
     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig18_HTML.jpg
Figure 20-18

The assigned External-IP Address for the application
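Instead of rerunning the command manually, you can also ask kubectl to watch the service until the load balancer assigns the address:

```shell
# Streams updates until EXTERNAL-IP changes from <pending> to a real address
kubectl get service sentimentfeedback --watch
```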

  18.

    To test the application, open a browser window and enter the external IP address in the address bar. You should see the Sentiment Feedback application that you developed in Chapter 15.

     

What we accomplished in the two hands-on labs was to take the Sentiment Feedback application developed in Chapter 15 and create a Docker image, which we then pushed to Azure Container Registry.

We then deployed an AKS cluster with nodes that support Windows containers and attached AKS to ACR so that the Sentiment Feedback application image could be pulled and deployed to the Windows node in the AKS cluster.

This chapter has provided you with the knowledge and hands-on experience to deploy containerized applications; an in-depth exploration of all the capabilities of Kubernetes and Docker is outside the scope of this book. However, you now have the fundamentals required to fully explore the benefits of containerization with other resources dedicated to the topic.

Troubleshooting and Monitoring AKS

The last topic in this chapter is the built-in performance monitoring and insights associated with Azure-based applications. These capabilities can help troubleshoot issues and monitor performance.

You can manage, troubleshoot, and monitor AKS via Cloud Shell using traditional tools like kubectl. For example, Kubernetes publishes a good cheat sheet of frequently used kubectl commands and actions at https://kubernetes.io/docs/reference/kubectl/cheatsheet/ that can be used in Cloud Shell.
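For example, a few commands from that cheat sheet are particularly useful for troubleshooting a deployment like the one in this chapter (the names match our YAML file; the pod name is a placeholder you would take from the pod listing):

```shell
# List the pods backing the deployment and their status
kubectl get pods -l app=sentimentfeedback
# Show scheduling events and errors for the deployment
kubectl describe deployment sentimentfeedback
# Stream application logs from a specific pod
kubectl logs <pod-name>
```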

Alternatively, the Azure portal also provides a graphical interface to AKS.

Hands-on Monitoring and Troubleshooting AKS

  1.

    In the Azure portal, navigate to the AKS resource and click Monitor Container. Alternatively, you can click Insights in the menu along the left under Monitoring, as shown in Figure 20-19.

     
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig19_HTML.jpg
Figure 20-19

Monitoring and insights for the AKS resource in the Azure portal

  2.
    You will see a dashboard with all the utilization information for the AKS cluster, such as node memory utilization, pod information, and so forth. You can also click Time range at the top of the dashboard, as referenced in Figure 20-20, to show only the statistics for a particular time frame.
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig20_HTML.jpg
    Figure 20-20

    Setting the time range for the AKS dashboard

     
  3.

    Click Deployments, and you will see the Sentiment Feedback application that you deployed in the previous exercise show up in the list of deployments. If you issue a command like kubectl delete deployment sentimentfeedback in Cloud Shell and then click Refresh on this dashboard, the deployment disappears from the list.

     
  4.
    Next, click the Containers tab to see the container images running in this AKS resource, as shown in Figure 20-21.
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig21_HTML.jpg
    Figure 20-21

    Monitoring the container images in this AKS resource

     
  5.
    Click View live data. You can see events and logging information, if any. This applies not only to Containers, but also to Nodes and Controllers. Figure 20-22 shows a screenshot of the controllers' live data displaying an error, allowing you to troubleshoot the issue.
    ../images/336094_2_En_20_Chapter/336094_2_En_20_Fig22_HTML.jpg
    Figure 20-22

    Looking at the live data of an AKS resource

     
Note

As an example, the event message in Figure 20-22 reports a node mismatch. This error told us that the application could not be deployed either because the node specified in the YAML file could not be found or because the node AKS was trying to deploy the application to was incompatible. The final determination was that the application was being pushed to the Linux node instead of the Windows node, which allowed us to locate an omission in the YAML file.

Even though we only looked at some of the monitoring capabilities of AKS, almost all resources in Azure, especially PaaS offerings like AKS and ACR, have monitoring tools that you can access from the menu in the portal, as shown in Figure 20-23. Feel free to explore what is monitored in each resource and find out how to customize or drill down into the details.
../images/336094_2_En_20_Chapter/336094_2_En_20_Fig23_HTML.jpg
Figure 20-23

Built-in monitoring for Azure resources, especially PaaS

Summary

This chapter demonstrated two PaaS offerings that are prominent and popular among application developers: Azure Web Apps and containerization with Docker, ACR, and AKS. These are modern approaches to deploying and monitoring applications. Technologies such as Kubernetes are natively deployed in Azure, and without significant configuration, you can use all the traditional commands in Cloud Shell to manage the service.

In the next chapter, you will look at some automation features that take advantage of these new methods of application deployment and learn how approaches such as continuous integration and continuous deployment (CI/CD) are integrated into Azure.
