Chapter 6

Delivering datacenter efficiency

In this chapter, we take a deep dive into some examples of datacenters in action, look at existing frameworks for delivering datacenter efficiency, explore examples of good, efficient IT practices in action, and walk through a scenario for moving a workload from your on-premises infrastructure to Azure while maintaining a process framework.

Snowflake datacenters

Take a moment and ask yourself, “When was the last time I worked for an organization that didn’t have a datacenter?” Explore that thought and reflect on how good or bad it was. Think about whether you had structured cabling; if so, was it labelled correctly? Did you have regular hardware refresh practices, and what was it like to deploy new servers? Also, what have been the common elements of all the datacenters you have worked with?

It’s no secret that enterprises all over the globe have datacenters—big, small, or globally distributed. There are countless types of datacenters, and every single one is slightly (or in some cases significantly) different. In fact, datacenters are like snowflakes—each one is unique!

Why is uniqueness important? Because each datacenter is unique, we need to identify key components and pillars to help you focus on what datacenter efficiency looks like for you and give you a path that you can follow, with slight deviations, to begin to achieve efficiencies.

Before you embark on the journey, let us clear up one thing that’s ingrained in every fiber of every aspect of IT: Change is scary, and change is hard. However, as we continually mention in this book and chapter, change is required to get to modern IT and to delivering datacenter efficiency.

As we alluded to in Chapter 1, change becomes the focal point of delivering datacenter efficiency. It becomes the one challenge that you need to conquer above all else. In fact, through every iteration, change can take on a different formation that you need to conquer all over again. Change can shapeshift based on the problem you need to solve, and you need to potentially change your approach in whatever form that may take.

Datacenter efficiency isn’t only about improving the hardware or using the latest, greatest next-gen piece of software. It’s more fundamental, and a good comparison is exercising the human body.

If you try to run a 5-mile race on day one of exercising, would you be able to do that? (If you can, you’re lucky!) Chances are that you would need to train your body to have sufficient stamina and lung capacity to run a race. The training required to be successful is pretty specific. What about a boxing match? Would you be able to last 15 rounds based on the same training you did to run 5 miles? Would you even be able to run 20 miles on the training you did to run 5 miles, or would change be required?

The point is that to meet the goals, you need to change. Unfortunately, you can’t entirely map out the changes in advance; you will work from rough frameworks to achieve your goal.

Try thinking of change in terms of levels. As you recognize and build efficiencies into one level, you proceed to the next. But change also is bidirectional; sometimes you need to go down to the original level to fix areas to deliver the efficiency in the next higher level.

Figure 6-1 shows an example of these levels with respect to delivering datacenter efficiency.

This diagram shows two levels: Level 1 includes people, process, and technology, and level 2 includes compute, network, storage, security, automation, and identity.
Figure 6-1 Building blocks of delivering efficiency

If you start on level 1, you can train people in the newer technologies and modern modes of administration, and you can give them challenges to meet ambitious service level agreements (SLAs) for the business. You can review and improve how a service is delivered or optimize the tasks done in a process, and you can implement new technology to help you achieve items within people and process.

After you have some of these foundations, you can proceed to level 2 and start improving your compute, network, storage, and other environments using the newly skilled people, the updated processes, and the updated technology. When something in level 2 is proving difficult to deliver efficiency on, review the foundation components in level 1 and decide whether there is more you could do at that level to strengthen delivering efficiency in level 2. As you walk through all aspects of the IT organization, start at level 1, examine and scrutinize what is in place, and start the change.

Datacenter transformation versus digital transformation

In almost every executive briefing, whitepaper, or blog post about modern IT, you see something about digital transformation. Digital transformation is critical for modern IT, and it’s absolutely critical for businesses in general to go through digital transformation because the potential of becoming stagnant and missing a large percentage of markets is too great.

However, among all this digital transformation exists a core problem much like the chicken and egg scenario. Which should come first: datacenter transformation or digital transformation? Given that datacenters run businesses, to be successful at digital transformation you need to at least begin addressing the topic of datacenter transformation. If we play devil’s advocate, though, we suggest that maybe you need to begin digital transformation to be able to have datacenter transformation.

The truth is that you need to examine both paradigms of transformation and map out intersecting paths. For example, in recent years customers have begun looking at their aging datacenters, specifically their secondary (usually disaster recovery) datacenters. These secondary datacenters typically incur almost equal amounts of cost to build when compared to the primary datacenter, often ranging into the tens of millions of dollars. They also require significant budgets to maintain and operate. In theory, as the production IT system grows, so should the investment into the secondary datacenter. If a disaster rolls in, you might find that the multimillion-dollar investment can’t meet the RTO/RPO defined in the financially backed SLAs the company has written. This is a significant problem.

Now introduce a public cloud technology like Azure and a need to digitally transform, which makes life for the IT organization easier, makes the business always available, and makes recovering from a disaster easier. The IT organization could essentially drop its entire secondary datacenter, replicate the entire infrastructure to Azure using Azure Site Recovery, and have a fully automated failover capability that can be tested and refined through each pass.

The company has begun to achieve datacenter efficiency through both a digital transformation (the introduction of automation and education for their employees) and datacenter transformation (using Azure as the secondary datacenter), which saves millions of dollars annually.

There are a million examples that we could use. Let’s take a look at our case study and focus on a challenge that IT staff at Fourth Coffee had, and let’s work toward solving that issue with practical implementations.

Fourth Coffee—eating their elephant

We’ve frequently referred to eating the elephant throughout this book, but it’s for a good reason. It’s the reminder that regardless of the subject of the chapter or section, the work involved isn’t easy, and it takes time to achieve. It is a step-by-step process often done as little changes (or bites).

You might remember from Chapter 2 that Fourth Coffee wants to undergo a transformation project. The project encompasses a vast array of goals, both business and IT related. To help you understand delivering datacenter efficiency a little more, we’re looking at some of their items and breaking them down further so that we can give some practical advice on how the company might achieve the goals.

If we look at some of the items Fourth Coffee wants to achieve and distill them into one specific item that we can use as a beacon for delivering datacenter efficiency, we would say that Fourth Coffee wants to deliver a better customer experience. This focal point allows us to break down the problem further by trying to understand the elements that would make a better customer experience. In Chapter 2, we established that items like a mobile ordering experience, always having the right stock so customers can have their beverages, bringing more venues to the customer, and providing customized experiences would go a long way toward achieving this.

Stop and think: is this a digital transformation, a datacenter transformation, or just a matter of delivering datacenter efficiency? In our opinion, it’s all three. Why? Because to achieve anything on the list, Fourth Coffee has to involve aspects of level 1 and level 2 as we outlined earlier in this chapter.

Now let’s take one single item from Fourth Coffee’s transformation goals: the mobile ordering experience. This gives us a framework for the other items we want to achieve. If we think about potential architectures that would support the goal, the technologies and entry points would be similar to those for the other items.

Fourth Coffee faces two problems in achieving the goal of a mobile ordering experience:

  • The infrastructure is not built to handle it.

  • The IT team is not skilled to support it.

If we don’t address these two issues, this project would fail—maybe not on day 1 but very soon into the project because the complexities of building a system, let alone modifying a datacenter to support it, can be overwhelming.

Let’s break the problem into these two parts and walk through how we could digitally transform to achieve this goal, perform some datacenter transformation, and achieve efficiency during the course of the project.

The infrastructure is not built to handle it

We’ve mentioned that Fourth Coffee has aging hardware and old software. The company has complicated managing their environment by piecing the components together over the last few decades. Adding the infrastructure to host a mobile ordering experience would likely tip the infrastructure into oblivion. The real question we need to solve is how can Fourth Coffee do more with less, or how does Fourth Coffee operate this infrastructure efficiently?

Remember one thing: terminology and your viewpoint can lead you to make poor choices. An example is negatively judging words like “aging hardware and old software.” If you take a negative attitude, you ultimately start thinking about replacing hardware and software, which leads to extra cost, and you might not approach the problem as efficiently as you need to. If you take those kinds of terms at face value and realize that the hardware and software are still tangible assets with value, you can look at efficiency in a very different manner.

Here’s a practical example: a server that was bought three years ago still has use. It can help you in building a resource pool, for example. Unfortunately, sometimes to drive efficiency, you must incur costs. Everything must be assessed for its end need and from there you decide what can be done.

In the Fourth Coffee scenario, we have a couple of items we must examine to understand how we can become more efficient. Let’s take the aging fiber channel SAN: it was purchased seven years ago and hosts hundreds of gigabytes (GB) of data. It runs at 4 Gbps and has reached its capacity for additional trays. Its extended warranty has expired, and it’s becoming a liability for the company. There is only one sensible thing for Fourth Coffee to do: replace it, which incurs cost. The challenge is deciding what to replace it with. SANs have been a cornerstone of enterprises for decades, but they also introduce dependencies on vendors, storage types, storage cards, specialized administrators, and so on.

To drive datacenter efficiency, you must take a step back and look at a straightforward concept: standardization. Standardization helps drive datacenter efficiency by using commodity hardware with no specialization to reach the demands of the business. Standardization simplifies the administration patterns, the support process, the maintenance cycles, and coverage in disasters.

What does this look like for Fourth Coffee? In the Windows Server 2016 world, we introduce the Storage Spaces Direct feature, which enables us to create highly reliable, scalable, and performant storage solutions built on commodity hardware. Rather than spending millions of dollars on specialized SAN hardware, Fourth Coffee could purchase a solution at a fraction of the cost. The company also could build a Storage Spaces Direct solution in virtual machines in Azure, which lends portability to the solution (that is, build it on premises and migrate to Azure later).

The administration concepts are based on PowerShell. The technology is Microsoft’s and already makes up a large percentage of Fourth Coffee’s IT estate. The hardware is commodity; if the company can’t get the previous type of disk, it just needs to match the speed and choose a different vendor. If Fourth Coffee loses a node in a Storage Spaces Direct cluster, it can build a new one again with commodity hardware. Connectivity is provided by Ethernet rather than specialized fiber channel cards. Storage Spaces Direct also provides a large degree of redundancy across nodes by replicating data synchronously. Fourth Coffee also can implement a hyper-converged infrastructure that drives more efficiency with this technology. Figure 6-2 shows a layout of Storage Spaces Direct.

This diagram shows a sample Storage Spaces Direct cluster with four nodes.
Figure 6-2 Storage Spaces Direct system

The efficiency is driven from not having to get specialized knowledge to manage a system and not having to pay a premium for specialized hardware. Efficiency is also realized by simplifying the management experience, using interfaces that lend themselves to automation, reducing the power consumption of the storage, and so much more.
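To give a sense of how approachable those automation-friendly interfaces are, the following is a minimal sketch of standing up a Storage Spaces Direct cluster with PowerShell. The node names, cluster name, and volume size are hypothetical, and a production build would include validation and networking steps specific to your hardware:

# Hypothetical node names; assumes the Failover Clustering feature is already installed on each node
$nodes = 'S2D-N1','S2D-N2','S2D-N3','S2D-N4'

# Validate the nodes for a Storage Spaces Direct deployment
Test-Cluster -Node $nodes -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

# Create the cluster without automatically adding any eligible storage
New-Cluster -Name S2DCluster -Node $nodes -NoStorage

# Enable Storage Spaces Direct and carve out a resilient volume from the pool
Enable-ClusterStorageSpacesDirect
New-Volume -FriendlyName 'SQLData' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 1TB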

Fourth Coffee also should review its database solution for the point-of-sale (POS) system and force the upgrade to the latest version of SQL Server. This gives access to better features for performance and high availability. Straight away, Fourth Coffee can migrate away from a physical cluster and use SQL Always On technology.

This new database system can be built in virtual machines. SQL Always On also allows for third-leg asynchronous copies for disaster recovery. In combination with Hyper-V, Scale-Out File Services, and Storage Spaces Direct, Fourth Coffee can build a highly efficient infrastructure to host the SQL database. Include Azure in the equation, and we can span our database to the cloud for an asynchronous disaster recovery copy.

Figure 6-3 shows the sample solution, including the spoke for the public cloud. It demonstrates a fully hyper-converged infrastructure that uses commodity hardware to deliver unparalleled performance and reliability. The sample uses SQL Server in virtual machines for flexibility, with availability sets to ensure that the SQL nodes remain available in the event of a physical node failure. The databases reside on the Storage Spaces Direct pool with multiple copies spanned across the nodes for high availability, over a 10 Gb Ethernet, Remote Direct Memory Access (RDMA)-enabled backbone.

This diagram shows a sample hyper-converged infrastructure.
Figure 6-3 Hyper-converged infrastructure with SQL AlwaysOn and third spoke to Azure

All this drives efficiency in the datacenter from hardware usage to energy to solution cost. Then we look at the benefits that updated modern software will deliver, like SQL Always On and the asynchronous replication or Hyper-V replication to make our disaster recovery process easier. What becomes even more useful is that no matter its location, the administration experience remains similar, and that reduces the learning curve required to adopt this technology.

We could keep examining the infrastructure and highlighting how we could deliver more efficiencies; however, we need to take a look at another aspect of delivering datacenter efficiency in the next section. We will, of course, revisit this later in practical examples of achieving our goals and provide more details to expand on the delivery of a mobile ordering system.

The IT team is not skilled to support it

Although updating hardware and software and reprovisioning existing assets will deliver some efficiency to our datacenter, those things are only part of the journey. Efficiency can be driven from standardization and, more importantly, automation. We could spend a lot of time automating elements of the infrastructure and deliver a large degree of efficiency from that alone.

Remember, a person’s time has a cost, and although it’s a cheesy saying, time is money! Saving someone’s time saves money, and if that person has more time to work on higher-value tasks that improve IT, the department can deliver more services to the business with fewer resources. Standardization levels the playing field and simplifies the challenge of delivering efficiency, and automation saves you the time spent on mundane, highly repetitive tasks.

We mentioned in an earlier chapter that at Fourth Coffee the administrator for Exchange, Eddie, is a superstar. Although we know he’s great, no one fully understands what he does. He’s always busy with tasks: helping users with their mailbox quotas, setting up new users, changing email addresses, ensuring the message queues are being processed, and making sure the databases are replicating in their database availability groups.

Eddie has no automation skills and has not invested the time to increase his knowledge of PowerShell. Knowing this, we could train Eddie on PowerShell and teach him tool building so that he can automate a large percentage of his tasks. Let’s focus on one scenario: changing a user’s email address. Manually, this task takes five minutes to log on to the console, find the user, modify the property, and save the changes. With a PowerShell script, the task takes 30 seconds. If Eddie has to run this task manually 20 times a day, it consumes more than an hour and a half of his day! With automation, it consumes 10 minutes, giving Eddie 90 minutes more per day to handle other tasks. If he goes one step further, he could build a self-service portal around the automated task so that he wouldn’t even need to spend 10 minutes doing that job.
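As a minimal sketch of what such a script might look like (the account name and address here are hypothetical, and an Exchange environment would typically also use Set-Mailbox for additional proxy addresses):

param
(
    [string]$SamAccountName,
    [string]$NewEmailAddress
)
# Update the primary email address stored on the Active Directory user object
Set-ADUser -Identity $SamAccountName -EmailAddress $NewEmailAddress
Write-Verbose "Changed email address for $SamAccountName to $NewEmailAddress"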

By introducing standardization and automation, Eddie changes how he operates to deliver more efficiency. Fourth Coffee could apply this effort to the entire IT staff and development staff to deliver the efficiencies required for modern IT and the mobile ordering project.

Bringing datacenter efficiency to Fourth Coffee

Take a quick look at Figure 6-4, which shows Fourth Coffee’s current POS architecture.

The figure shows the Fourth Coffee point-of-sale architecture.
Figure 6-4 Fourth Coffee’s point-of-sale architecture

In this section, we take you through the practicality of looking at the three basic pillars of people, process, and technology and show you how to achieve efficiency across areas like compute, network, storage, and so on, starting from the current state of the POS system and evolving to the mobile ordering system.

People investment

Before tackling any technology, you must invest in people and their skills. It’s common in an enterprise for a team to get stuck and not necessarily be aware of all the latest and greatest improvements and features from a technology or vendor group. They often can get siloed into what they have to deal with day to day.

Investing in people is simply a fact of life: if you don’t do it, someone else will, and there has to be some expectation that people keep themselves relatively up to date.

Here’s an example: An organization gives its employees some time every Friday to identify new technology that could help them in their daily activities. If no new technology can be identified, there’s an opportunity for the employees to identify how they can improve on the area and to give critical feedback.

The skills we want to encourage organizations to develop can be broken down into two main areas: soft skills and technical skills. We discuss items within each as we move through this chapter.

Soft skills

There are many skills an IT organization requires investment in. We highlight a few key areas where training employees can lead to greater productivity and efficiency for the IT organization:

  • Feedback: Teaching people to give concise, relevant feedback is not an easy task, but doing so leads to treasure! The rewards are simply too great to count when you have a method of getting the right information to affect positive changes in the business.

  • Presentation skills: Developing your team’s ability to articulate and deliver a message to the correct audience can be critical for the success of any project. When people successfully deliver a message, they can win the investment they require. When they’re ineffective in delivering the message, it can set the IT department back.

Note

James Whittaker provides a great reference for developing presentation skills. You can find some of the amazing material he has released at the following link:

news.microsoft.com/stories/people/james-whittaker.html

  • Unconscious bias: This is a curious skill to develop. It involves helping people understand that they can unknowingly judge conversations and ideas based on the people involved. Training people to recognize this behavior and correct it can lead to greater efficiency in team collaboration.

Technical skills investment

Unfortunately, developing soft skills is only half the battle! Technical skills help the IT organization perform tasks more efficiently and build out tools that can be reused by the rest of the organization, allowing cross-pollination of skills throughout the IT silos.

In this section, we identify and dive into some of the technical skill areas that we feel greatly benefit organizations. We try to break down each area of investment to describe what it is, what it looks like, and what it can do, so you can prioritize where to spend your time.

Graphical User Interface (GUI) Tools

What is it?

Windows Server has long shipped with a set of GUI tools to perform administration tasks for managing servers. Some of these tools work on local and remote systems, but others work only on local machines.

What does it look like?

There have been numerous attempts to standardize the look and experience of GUI management tools, including the Control Panel, Microsoft Management Console (MMC), Server Manager, and, more recently, Honolulu. The result is that there is no single coherent GUI experience for managing Windows Server. But that is not why we don’t use GUIs in this book. If you want to pursue modern IT, we strongly recommend against using GUIs to do management for the following reasons:

  • Quality control: If you have a change you need to perform on 100 servers and that change involves 100 mouse clicks, what are the chances you’ll reproduce those mouse clicks in order 100 times? (Hint: 0)

  • Lack of productivity: Imagine the effort required to run a GUI on 100 machines to perform an operation. Now imagine the effort required if you have 1,000 machines or if you need to do this work for the thousands of test VMs that you create every week.

  • Lack of peer review/auditing: How confident are you giving a new-hire admin privileges to 100 production machines and having them perform arbitrary GUI operations? Can you peer review or audit their mouse clicks?

What can it do?

GUI tools are great tools for learning a technology or running a single server for a small business, but they have minimal to no role in modern IT.

Where do I learn more?

Don’t.

PowerShell1

1 Most of the things we’ll be doing with PowerShell require Administrator privileges, so start PowerShell by right-clicking the icon and selecting Run As Administrator.

What is it?

PowerShell is a powerful automation framework that is cross platform. Companies can utilize this framework to build tooling to help them more effectively administer their entire IT Enterprise.

What does it look like?

PowerShell is built around cmdlets, which perform actions and are grouped into modules. Cmdlet names usually follow a Verb-Noun format.

For example, if you want to get all the processes on a system, you use the Get-Process cmdlet as follows:

Get-Process

Or

Get-Process -Name lsass

Another example: if you want to retrieve the last 10 events in the Security event log for a Windows server, you use the Get-EventLog cmdlet as follows:

Get-Eventlog -LogName Security -Newest 10

The output is shown in Figure 6-5.

This figure shows the output of Get-EventLog.
Figure 6-5 Output of Get-EventLog

If you used the optional -ComputerName parameter on Get-Eventlog, you could target remote computers. You could even wrap this in a script and collect log data from 100 machines automatically, search for a pattern in the data collected, and trigger an alert somewhere if a condition is detected!
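A minimal sketch of that idea might look like the following, assuming a hypothetical Servers.txt file that lists the machines to check:

# Read the list of computers and check each one for recent failure audit events
Get-Content .\Servers.txt | ForEach-Object {
    $computer = $_
    $events = Get-EventLog -ComputerName $computer -LogName Security -Newest 10
    if ($events | Where-Object { $_.EntryType -eq 'FailureAudit' })
    {
        # In a real solution this could send an email or raise a monitoring alert
        Write-Warning "Failure audit events found on $computer"
    }
}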

PowerShell is all about a set of small tools that can be stitched together using the pipeline character (|). It’s designed to create an environment where you can think about what you want, type it, and get the results. Imagine that you want to Get all the Processes Where the HandleCount was greater than 2000, sort them by HandleCount, and then Format it as a Table showing the Name, ID, and HandleCount. Here is how you do that in PowerShell:

Get-Process |
   Where HandleCount -gt 2000 |
   Sort HandleCount |
   Format-Table Name,ID,HandleCount

Note

There is a lot of PowerShell in this book, so let’s make sure you know what you’re looking at when you see the examples. The examples show cmdlet parameters using three forms:

  • No quotes: Get-Service -Name ALG

  • Single quotes: Get-Service -Name 'Time Broker'

  • Double quotes: $x = 'Broker'; Get-Service -Name "Time $x"

Here is what is going on:

  • When a parameter value is a single word with no spaces, it does not need quotes, and we don’t include them to make it easier to read.

  • When a parameter value is multiple words and does not have a variable to be expanded, we use single quotes. This suppresses variable expansion and is best practice to avoid unintended actions.

  • When a parameter value includes a variable name to be expanded (for example, $x), we use double quotes, which allow the variable to be expanded.

What can it do?

You can use PowerShell to manage an environment that spans multiple clouds. For example, you can use it to create users, create virtual machines, fail over a database, and perform many other tasks. PowerShell can reduce the amount of time required for standard day-to-day administration tasks and improve the efficiency of the IT team.

Where do I learn more?

The help that ships with PowerShell is amazingly good. Start by using the Get-Help cmdlet and take it from there. If you prefer watching videos, many people have successfully used a series of Microsoft Virtual Academy video talks to learn PowerShell. The best way to find these is to go to the Microsoft Virtual Academy (mva.microsoft.com) and search for “Snover PowerShell.” Look for the following:

  • Getting Started with Microsoft PowerShell

  • Advanced Tools & Scripting with PowerShell 3.0 Jump Start

PowerShell Desired State Configuration (DSC)

What is it?

Desired State Configuration (DSC) is an essential part of the configuration, management, and maintenance of Windows-based servers. It allows a PowerShell script to specify the configuration of the machine using a declarative model in a simple standard way that is easy to maintain and understand. This is a subset of PowerShell; in a sense, you could consider this the next level of PowerShell skills.

What does it look like?

Following is a sample DSC configuration layout that installs the IIS Role on servers dedicated to being a web server and Hyper-V on servers dedicated to being a VM Host:

Configuration Lab1 {

    Node $AllNodes.Where{$_.Role -eq 'WebServer'}.NodeName
    {
        WindowsFeature IISInstall {
            Name    = 'Web-Server'
            Ensure  = 'Present'
        }

    }
    Node $AllNodes.Where{$_.Role -eq 'VMHost'}.NodeName
    {
        WindowsFeature HyperVInstall {
            Name    = 'Hyper-V'
            Ensure  = 'Present'
        }
    }
}

$MyData =
@{
    AllNodes =
    @(
        @{
            NodeName     = 'Web-1'
            Role         = 'WebServer'
        },
        @{
            NodeName     = 'Web-2'
            Role         = 'WebServer'
        },
        @{
            NodeName     = 'VM-2'
            Role         = 'VMHost'
        }
    )
}

Lab1 -ConfigurationData $MyData -Verbose

The very last line does the following:

  • It creates a subdirectory using the name of the configuration, 'Lab1'.

  • It generates a set of Managed Object Format (MOF) files, one for each node to be configured, into that subdirectory (for example, .\Lab1\Web-1.mof).

The MOF files are compiled versions of the configuration and are read and interpreted by the Local Configuration Manager engine on the machine to be configured. At this stage, you have defined the configurations you want. Later we show you how to implement those configurations using the Start-DSCConfiguration cmdlet.
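As a quick preview of that step, a minimal sketch (assuming the MOF files were generated into a local .\Lab1 folder) looks like this:

# Push the compiled configuration to each node and wait for the results
Start-DscConfiguration -Path .\Lab1 -Wait -Verbose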

What can it do?

PowerShell DSC is a very powerful tool that can help you deploy software and maintain state on machines in a standardized manner. You can use it to integrate into the CI/CD process to ensure that when a piece of software gets updated you can update the DSC configuration scripts to deploy the new software.

Where do I learn more?

Again, the help that ships with PowerShell is your best starting point. Type

>  help about_DesiredStateConfiguration

and take it from there.

If you prefer watching videos, many people have successfully used a series of Microsoft Virtual Academy talks on PowerShell to learn PowerShell and DSC. The best way to find these is to go to the Microsoft Virtual Academy (mva.microsoft.com) and search for “Snover PowerShell.” Specifically, look for the following:

  • Getting Started with PowerShell Desired State Configuration

  • Advanced PowerShell Desired State Configuration (DSC) and Custom Resources

Cloud Shell

What is it?

Cloud Shell gives you a console window that runs in a browser. It’s currently available from the Azure Portal and will also be available from other websites in the future. Cloud Shell enables you to run a Bash/Linux or a PowerShell/Windows session.

Each environment includes the latest version of the Azure tools and other tools, including gvim, git, and sqlcmd. Interestingly, the PowerShell/Windows environment allows you also to run Bash, and the Bash/Linux environment allows you also to run PowerShell.

The PowerShell environment mounts Azure as a drive, which allows you to navigate and interact with Azure resources the same way you would a file system. For example,

> dir azure:*VirtualMachines* |group PowerState

Note

Although they were designed to make it easy to manage Azure Resources, these are fully functional environments. You could install the AWS PowerShell cmdlets and manage AWS using Cloud Shell.
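For example, a minimal sketch of pulling the AWS cmdlets into a Cloud Shell session might look like the following (installing to the current user scope is an assumption about how you would persist tooling in your session):

# Install the AWS Tools for PowerShell from the PowerShell Gallery and confirm they loaded
Install-Module -Name AWSPowerShell -Scope CurrentUser -Force
Get-AWSPowerShellVersion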

What does it look like?

Figure 6-6 shows Cloud Shell in action directly from the browser. It looks and feels like a traditional PowerShell window! We are listing the status of the virtual machines in our Azure subscription.

The figure shows Cloud Shell in action. In the top part of the screenshot, you can still navigate resources. The bottom part of the screen shows an example of listing virtual machines in Azure.
Figure 6-6 Cloud Shell in action

From Cloud Shell, you can administer your Azure Resources and perform tasks as necessary.

What can it do?

Ultimately Cloud Shell gives you the ability to use familiar tooling like PowerShell to manage a complex environment like Azure.

Where do I learn more?

Visit docs.microsoft.com/en-us/azure/cloud-shell/overview to get more information about Cloud Shell.

Azure Automation

What is it?

Whereas PowerShell is a task automation and configuration management framework, Azure Automation (AA) is a task automation and configuration management solution built on top of PowerShell. AA is a product designed to be used by teams to manage production environments. AA runs as a cloud service in Azure but can manage any Windows, Linux, or MacOS machine anywhere—whether it is running in Azure, in AWS, in GCE, or on premises (via an Azure Automation Gateway).

AA is the solution that transforms ad hoc scripting into formal production management. It enables teams to create formal repositories of production scripts under source code control (no more losing track of your scripts, accidental deletions, or questions about who/when/how did this production script change). It provides a secure central repository for managed assets (for example, credentials to be used to manage systems, common parameters to be used by all scripts). It provides scheduling and integration with other products/web services so that scripts can be run on regular intervals or in response to events. It also provides logging and output management so that you know when scripts were run, and you can look back on previous runs to examine the script results.

What does it look like?

AA has many different sections to it, including source control, DSC, runbooks, and assets. Figure 6-7 shows AA and some demo runbooks. The figure also shows the menu of other options you could potentially configure and use.

The figure shows Azure Automation in Azure.
Figure 6-7 Azure Automation

What can it do?

AA was built to automate processes that can be executed in PowerShell and Python across private and public environments! You can use it to create users, provision infrastructure, and do an endless number of other tasks. You also can integrate AA with Microsoft Operations Management and Security Suite to respond to alerts and trigger actions based on the data. You can use it via webhooks, which opens up another significant amount of potential integration points. AA also can leverage DSC and enforce configurations to act as a pull server across private and public clouds.
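As a small example of that kind of integration, starting an existing runbook from PowerShell might look like the following sketch (the resource group, automation account, runbook name, and parameter are hypothetical):

# Kick off a runbook job in an Azure Automation account and pass it a parameter
Start-AzureRmAutomationRunbook -ResourceGroupName 'FourthCoffee-RG' `
    -AutomationAccountName 'FourthCoffeeAutomation' `
    -Name 'Disable-InactiveAccounts' `
    -Parameters @{ DaysInactive = 30 }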

Where do I learn more?

Visit azure.microsoft.com/en-us/services/automation/ for more information.

Process investments

Every organization strives to build procedures for administration tasks that are performed. We have built hundreds of procedures for organizations that try to standardize how something is done. IT departments need this standardization so they can ensure that if they have 500 servers to deploy, each one gets deployed in the same way. This leads to stability in the environment and reliability of the service the IT team is offering.

However, something almost always happens to the process. It gets defined, it gets implemented, and eventually it gets ignored.

Why? In our experience, process gets ignored because it never had a life cycle applied to it. Like all things we talk about throughout the book, a life cycle is required to build successful processes.

The life cycle of a process

A life cycle is cyclical. It must feed itself to continue to be useful and to meet the needs for which it was defined. As with most life cycles, we can identify four main pillars that you can map to the DevOps pillars (review Chapter 2 for the DevOps life cycle):

  • Documenting a process: Documenting the process is simple; you record the steps taken to complete the task. However, there is a twist, which is understanding why you are performing the task. Each step in the process should have a reason for why it is done. This helps you shape an efficient process and can also lead to being objective about what you are trying to develop.

  • Evaluating a process: When the initial process is “built,” walking through the process becomes important. Does it meet the needs of what you are trying to achieve? Is it effective in achieving these tasks? And finally, what can you automate (what steps could you script or have a workflow take over)?

  • Feedback: Remember that although you can create great processes for people to follow, if they don’t reflect how the work is actually done or how it evolves, the cool thing you’ve created will be useless. During the feedback process, you need to learn how the process operates in the wild, whether it meets its needs, and what issues it doesn’t take care of.

  • Improvements: Implementing improvements is the process of widening the scope or automating more tasks as the governance changes. Building improvements leads to efficiencies not present today.

As we mentioned, process requires a life cycle, which you should review regularly to ensure it is still meeting the needs. When talking with customers, we often refer to process building as being like writing a script. Today we can write a script to perform a task, and with the knowledge we have today it will be written in a particular way. As our skills improve in scripting, we revisit the script and improve on its performance and reliability. We may even increase the scope of the script. Over time if we examine script v1 versus script v5, we’ll see that it has become a very different entity.

Technology investments

One of the promises of this chapter is to help you deliver datacenter efficiency. We have talked about two of our level 1 (or base) layers: people and process. We can invest in people, and we can build amazing processes, but if we don’t have the technology to support them, our datacenter transformation and delivering any type of efficiency will be hampered.

In this section, we show you how you can eat the elephant with technology by giving you lots of practical examples that will save you time and money and deliver efficiency. Each time you approach and implement one thing, you take another bite of the elephant.

You might be wondering what this has to do with the mobile order solution example. Well, to implement a modern solution like a mobile ordering system, we need modern people to manage and maintain it, modern processes to support it, and modern technology for it to be implemented on. We discuss it in more detail later in this chapter, but first we need to demonstrate how we can achieve efficiency in our pillars described in level 2.

Each area we focus on highlights some common areas you can target with examples to improve efficiency.

Automation technologies

Fourth Coffee used to use a wide range of GUI tools to manage its environment. The problem was that the productivity and quality of IT operations was very low. Now Fourth Coffee has adopted a “script everything” culture that slowed things down at first but then rapidly paid for itself as the productivity and quality of IT operations soared and the willingness of the team to take on big initiatives increased.

The point of this section is straightforward: decide on your automation tool and implement it! Automation should translate the processes you have defined and improved into tools that give IT staff back time and reduce the amount of unnecessary manual effort tasks require.

There are many tools available; we focus on PowerShell in our examples because of its general portability between systems and environments and the extent of its ecosystem.

When working with PowerShell, an important concept to bear in mind is related to toolmaking. Every PowerShell script you create should take the form of a tool for reusability!

Take this example:

Get-Service -Name Spooler

If you save this to a script, it will only ever be able to query the Spooler service. Now, if you change it to the following:

Param
(
     [string]$serviceName,
     [string]$computerName
)

Get-Service -ComputerName $computerName -Name $serviceName

In the example, the computerName and the serviceName are parameterized so that you can query potentially any service on any computer. Although it’s a simple example, it demonstrates the principle of examining a simple one-liner and translating it into something reusable for a wider purpose.
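If you save that parameterized version as, say, Get-RemoteService.ps1 (a hypothetical name), you can reuse it for any service on any machine:

.\Get-RemoteService.ps1 -ServiceName Spooler -ComputerName PrintServer01
.\Get-RemoteService.ps1 -ServiceName W32Time -ComputerName dc01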

Identity investments

Fourth Coffee used to manage identities using GUI tools. The problem was that this was inefficient, error-prone, and ill-suited to bulk operations or certain operations. Now Fourth Coffee does all of its identity management via PowerShell, which allows the company to enforce corporate identity policies, ensure consistency of operations, automate bulk operations, and quickly produce novel solutions to unique problems or opportunities.

Managing users and groups is critical to the proper operations and security of an enterprise. It also can be a painstaking, error-prone process if you perform it manually. For example, it can take up to five minutes to create a user as you log on to the domain controller, open the Active Directory Users and Computers tool, navigate to the organizational unit you want to create the user in, and then run through the user creation options. If you also include other systems like Exchange and a login for a point-of-sale system, it can take longer; five minutes can extend to 30 minutes if you don’t get distracted. Now consider what happens when you hire a bunch of people.2

2 The cmdlets work on Active Directory, so you have to have the ActiveDirectory module installed on your machine. If you don’t have these on your machine, you have to install them. On a Windows 10 Client machine, you need to find, download, and install the Remote Server Administration Tools (RSAT). On Windows Server 2016, you can install them with the command Install-WindowsFeature RSAT-AD-PowerShell.

Modern IT required Fourth Coffee to get very good at automating identity so that the company could keep up with the needs of the business and to address security concerns.

Creating Users

Creating a user in PowerShell requires a few cmdlets to complete the entire process.

First, you use the cmdlet New-ADUser with the following syntax:

New-ADUser -Name 'Joe Bloggs' -UserPrincipalName joe.bloggs@fourthcoffee.com

The user is created, but because of the way New-ADUser creates the account, it does not have a password and therefore can’t be enabled. You need to use the Set-ADAccountPassword cmdlet as follows:

Set-ADAccountPassword -Identity 'Joe Bloggs'

This prompts you for a new password to set for the user. If you want to get fancier, you can encode the password as a secure string and use it as part of the command as follows:

$securePW = ConvertTo-SecureString -String 'Password01' -AsPlainText -Force

If we use the $securePW variable now, which stores the secure string we need, we can pass it to the -NewPassword parameter of the Set-ADAccountPassword cmdlet rather than being prompted for the password. Now the syntax will be as follows:

Set-ADAccountPassword -Identity 'Joe Bloggs' -NewPassword $securePW

You also could use the -AccountPassword parameter of the New-ADUser cmdlet and eliminate this line completely.
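A minimal sketch of that combined form, reusing the $securePW variable from above (the UPN shown is an assumption based on the Fourth Coffee domain used elsewhere in this chapter):

# Create the user and set the password in a single call; the account still needs to be enabled afterward
New-ADUser -Name 'Joe Bloggs' -UserPrincipalName joe.bloggs@fourthcoffee.com -AccountPassword $securePW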

Finally, you use the Enable-AdAccount cmdlet to enable the account as follows:

Enable-AdAccount -Identity 'Joe Bloggs'

These individual steps work! Now let’s make them useful by building a tool script that creates an AD user with extra properties. Take the following code and save it to a file named New-TeamMate.ps1:

param
(
    [string]$GivenName,
    [string]$Surname,
    [string]$EmployeeID,
    [string]$Department
)
<# Make this script more readable by creating a hashtable whose names
   match the parameters of New-AdUser and then 'splat' them.
   Get Details by typing:
   > Help about_Splatting
#>
$pw = 'Password123!'
$accountPassword = ConvertTo-SecureString -String $pw -AsPlainText -Force

$param = @{
    GivenName   = $GivenName
    Surname     = $Surname
    Name        = "$Surname, $GivenName"
    SamAccountName = "$GivenName.$Surname"
    EmployeeID  = $EmployeeID
    Department  = $Department
    Office      = '41/5682'
    City        = 'Redmond'
    State       = 'Washington'
    Company     = 'Fourth Coffee'
    Country     = 'US'
    AccountPassword = $accountPassword
}


New-AdUser @param
Enable-AdAccount -Identity $param.SamAccountName
Write-Verbose "Added $GivenName $Surname"

When you execute this script, you can pass the parameters as follows:

./New-TeamMate.ps1 -GivenName John -Surname Doe -EmployeeID 12345 -Department IT

This creates a user with a default password and default settings for office, city, and so on.

Once Fourth Coffee delivers its new customer-focused applications, the company will be hiring a lot of people, so you should modify this script to work against a CSV file like this:

JPS> cat .\NewHire.csv
GivenName,SurName,EmployeeID,Department
Mary,Breve,3000,IT
Sandeep,Java,3001,Development
Sarah,Bean,3002,Accounting

All you need to do is modify the parameter block and wrap the rest of the script in a process script block:

param
(
    [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [string]$GivenName,
    [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [string]$Surname,
    [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [string]$EmployeeID,
    [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
    [string]$Department
)
process
{
<# Make this script more readable by creating a hashtable whose names
   match the parameters of New-AdUser and then 'splat' them.
   Get Details by typing:
   JPS> Help about_Splatting
#>
$PW = 'Password123!'
$accountPassword = ConvertTo-SecureString -String $PW -AsPlainText -Force

$param = @{
    GivenName   = $GivenName
    Surname     = $Surname
    Name        = "$Surname, $GivenName"
    SamAccountName = "$GivenName.$Surname"
    EmployeeID  = $employeeID
    Department  = $department
    Office      = '41/5682'
    City        = 'Redmond'
    State       = 'Washington'
    Company     = 'Fourth Coffee'
    Country     = 'US'
    AccountPassword = $accountPassword
}
New-AdUser @param
Enable-AdAccount -Identity $param.SamAccountName
Write-Verbose "Added $GivenName $Surname"
}

This enables you to do the following:

JPS> Import-Csv '.\NewHire.csv' | .\New-TeamMate.ps1 -Verbose

VERBOSE: Added Mary Breve
VERBOSE: Added Sandeep Java
VERBOSE: Added Sarah Bean

Congratulations! You are well on your way to massive, no-drama productivity through the magic of PowerShell.

You could continually evolve this script with things like enabling the user principal in other systems like Exchange or a custom Human Resources application.

Disabling User Accounts Not Active in 30 Days

Another common task is identifying users who haven’t logged on in more than 30 days and disabling them. First, you can use the Search-ADAccount cmdlet3 as follows:

3 By default, the Active Directory LastLogonTimeStamp attribute is correct, but it has a very slow update frequency (think weeks). It’s not a real-time logon tracking mechanism. You can learn more details by reading the blog at blogs.technet.microsoft.com/askds/2009/04/15/the-lastlogontimestamp-attribute-what-it-was-designed-for-and-how-it-works/.

Search-ADAccount -AccountInactive -TimeSpan 30.00:00:00 -UsersOnly

This gives you a list of accounts that have not logged on in more than 30 days. You can disable these accounts by using the Disable-ADAccount cmdlet. Fourth Coffee has a policy of disabling inactive accounts every month. It implements this using the scheduling capabilities of AA and the following script:

Search-ADAccount -AccountInactive -TimeSpan 30.00:00:00 -UsersOnly |
    Disable-ADAccount

Group Management

You also could automate aspects of group management. If you need to add a user to a group, you can use PowerShell to do it. Following is a basic script that takes a username and a group and adds the user to the group. Although there are many different ways of doing this, and PowerShell probably now has tools to do it in a single command, we’re showing you this method to demonstrate the principle of taking a multistep task and turning it into a script:

param
(
    [string]$Useraccount,
    [string]$domaingroup
)
$user = Get-ADUser -Identity $useraccount
$group = Get-ADGroup -Identity $domaingroup
Add-ADGroupMember -Identity $group -Members $user

You can call this script AddGroupMembers.ps1 and use the following syntax to execute it:

.\AddGroupMembers.ps1 -UserAccount 'Joe Bloggs' -DomainGroup 'Domain Admins'

We have examined three identity areas where we can build an automated solution that matches the functionality of the manual process and delivers efficiency by reducing the amount of time it takes to perform the task. These tasks also could be ported to AA and wrapped with a self-service portal to give more power to end users and free up time for IT staff to perform more important tasks.

Security investments

Fourth Coffee used to buy a bunch of security products and hope that spending lots of money made them secure. The problem is that it didn’t. Fourth Coffee now invests in its security people, giving them training and the mission to secure the company. The company leverages the built-in security capabilities of Windows 10, WS2016, and Azure—such as Shielded VMs, Device Guard, Credential Guard, Windows Defender, and Azure Security Center—to secure systems. Fourth Coffee also adopted a hunter mentality by actively baselining the signature of normal operations and looking for deviations that might indicate an attacker. Fourth Coffee’s confidence in their security is now based in competence rather than hope.
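As one small, hedged illustration of that baselining mindset (not Fourth Coffee’s actual tooling), you could capture a baseline of the processes running on a server and then look for anything new that appears later:

# Capture a one-time baseline of running process names
Get-Process | Select-Object -ExpandProperty Name -Unique |
    Set-Content .\ProcessBaseline.txt

# Later: flag any process that was not present in the baseline
$baseline = Get-Content .\ProcessBaseline.txt
Get-Process | Where-Object { $baseline -notcontains $_.Name } |
    Select-Object Name, Id, Path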

Reviewing Logs Across Multiple Computers

A common task is searching for events across multiple computers to see whether a failure is widespread.

Say you want to look at the last 10 entries in the application log from a list of servers listed in a file named AllServers.txt. You can use the Invoke-Command and Get-Eventlog cmdlets as follows:

Invoke-Command -ComputerName (Cat AllServers.Txt) {
    Get-Eventlog -LogName Application -Newest 10 |
    Select MachineName, EventID, Source, Message
} |Format-Table

The output is displayed as follows:

MachineName              EventID  Source                                Message
-----------              -------  ------                                -------
sq101.fourthcoffee.com      4098  Group Policy Registry                 The computer 'AllowKMSUpgrade'...
sq101.fourthcoffee.com      1704  SceCli                                Security policy in the Group po...
sq101.fourthcoffee.com       916  ESENT                                 services (896,G,0) The beta fea...
sq101.fourthcoffee.com       916  ESENT                                 svchost (4968,G,0) The beta fea...
sq101.fourthcoffee.com        31  Microsoft-Windows-Spell-Checking      Failed to update 1 user custom...
sq101.fourthcoffee.com       916  ESENT                                 SettingSyncHost (10220,G,0) The...
sq101.fourthcoffee.com       916  ESENT                                 SettingSyncHost (10220,G,0) The...
sq101.fourthcoffee.com      1001  Windows Error Reporting               Fault bucket 146962385495655736...
sq101.fourthcoffee.com     16384  Software Protection Platform Service  Successfully scheduled Software...
sq101.fourthcoffee.com      1003  Software Protection Platform Service  The Software Protection service...
dc01.fourthcoffee.com         15  SecurityCenter                        Updated Windows Defender status...
dc01.fourthcoffee.com         15  SecurityCenter                        Updated Windows Defender status...
dc01.fourthcoffee.com        916  ESENT                                 SettingSyncHost (10440,G,0) The...
dc01.fourthcoffee.com       1001  Windows Error Reporting               Fault bucket 128049445031, type...
dc01.fourthcoffee.com       1001  Windows Error Reporting               Fault bucket 128049445031, type...
dc01.fourthcoffee.com       1001  Windows Error Reporting               Fault bucket 128049445031, type...
dc01.fourthcoffee.com       1001  Windows Error Reporting               Fault bucket 128049445031, type...
dc01.fourthcoffee.com       1001  Windows Error Reporting               Fault bucket 128049445031, type...
dc01.fourthcoffee.com       1001  Windows Error Reporting               Fault bucket 128049445031, type...
...

Get-EventLog is a good simple cmdlet, but it is better to invest in learning the Get-WinEvent cmdlet, which is more capable and much faster. The Event ID for a failed logon is 4625, but Get-EventLog does not allow you to specify that ID, so you would have to use the following:

Get-EventLog -LogName Security | where {$_.EventID -eq 4625}

Here is a fast and effective way to get the failed login events from all servers.

Invoke-Command -ComputerName (Cat AllServers.Txt) {
    Get-WinEvent -FilterHashTable @{LogName='security'; id=4625 }
}

Port Scanner

Often when you deploy systems, you run into connectivity problems, or you need to test your firewall rules to see whether a port responds; in either case, you need to simulate a connection. You can use PowerShell to implement a lightweight port scanner as follows:

param
(
    [string]$ComputerName,
    [string[]]$Ports
)
$ErrorActionPreference = 'SilentlyContinue'
Write-Host 'Attempting to Connect...'
foreach ($port in $Ports)
{
    $test = Test-NetConnection -Port $port -ComputerName $ComputerName
    if ($test.TcpTestSucceeded -eq $false)
    {
        Write-Host "Connection to $ComputerName failed, Port $port is not listening" `
            -ForegroundColor Red -BackgroundColor Black
    }
    else
    {
        Write-Host "Connection to $ComputerName succeeded, Port $port is listening" `
            -ForegroundColor Green -BackgroundColor Black
    }
}

The syntax for scanning multiple ports would be as follows:

.\New-BasicPortScanner.ps1 -ComputerName TestMachine -Ports 80,53,1024

This is obviously a simple scanner, but it again highlights the potential tasks you can automate to become more efficient.

Firewall

Fourth Coffee uses Group Policy to turn off all its firewalls instead of trying to ensure the firewalls are properly configured. Fourth Coffee needs a quick way to identify, across multiple computers, whether a firewall rule has been applied.

First, you should check whether the firewall is turned on. Checking this on a remote machine involves using the Invoke-Command cmdlet and the Get-NetFirewallProfile cmdlet as follows:

Invoke-command -computername dc01 -scriptblock {Get-NetFirewallProfile}

The output for the domain profile is listed here:

Name                            : Domain
Enabled                         : True
DefaultInboundAction            : NotConfigured
DefaultOutboundAction           : NotConfigured
AllowInboundRules               : NotConfigured
AllowLocalFirewallRules         : NotConfigured
AllowLocalIPsecRules            : NotConfigured
AllowUserApps                   : NotConfigured
AllowUserPorts                  : NotConfigured
AllowUnicastResponseToMulticast : NotConfigured
NotifyOnListen                  : False
EnableStealthModeForIPsec       : NotConfigured
LogFileName                     : %systemroot%\system32\LogFiles\Firewall\pfirewall.log
LogMaxSizeKilobytes             : 4096
LogAllowed                      : False
LogBlocked                      : False
LogIgnored                      : NotConfigured
DisabledInterfaceAliases        : {NotConfigured}
PSComputerName                  : dc01

Now that you know the firewall is enabled, you can check the rules and the ruleset, and you can filter on inbound-only rules using the following syntax:

Invoke-Command -ComputerName dc01 -ScriptBlock {Get-NetFirewallProfile `
-Name Domain | Get-NetFirewallRule | where {$_.Direction -eq 'Inbound'}}

This command lists all the rules and their state (allowed or denied). You can perform further checks or loop across all profiles to determine all the rule states.
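
As a starting point, here is a minimal sketch of such a loop. It assumes the same AllServers.txt list used earlier in this chapter and an output path of your choosing; both are illustrative only.

# A sketch: report enabled inbound Allow rules for every profile on every server.
# AllServers.txt and the CSV path are assumptions for illustration.
$servers = Get-Content .\AllServers.txt
$report = Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-NetFirewallProfile | ForEach-Object {
        $profileName = $_.Name
        $_ | Get-NetFirewallRule |
            where { $_.Direction -eq 'Inbound' -and $_.Enabled -eq 'True' -and $_.Action -eq 'Allow' } |
            Select-Object @{n='Profile';e={$profileName}}, DisplayName, Enabled, Action
    }
}
$report | Export-Csv .\FirewallRuleReport.csv -NoTypeInformation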

Azure—Network Security Groups

If you’re working in Azure, you can generate new network security groups and rules and enforce traffic policies across an entire estate in a matter of minutes.

You can use the New-AzureRmNetworkSecurityRuleConfig cmdlet to generate a rule when you're building a group from scratch; for an existing group, the Add-AzureRmNetworkSecurityRuleConfig cmdlet adds the rule, and Set-AzureRmNetworkSecurityGroup commits the change. In this example, we want to allow port 22 for SSH:

$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName FourthCoffeeNSG `
    -Name FrontEnd-Net

Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name ssh-rule `
    -Description 'Allow SSH' -Access Allow -Protocol Tcp -Direction Inbound `
    -Priority 100 -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 22

Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg

We can loop this across multiple existing network security groups using PowerShell and enforce any new security boundaries across the estate.
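
Here is a minimal sketch of that loop, assuming every network security group in the FourthCoffeeNSG resource group should carry the same SSH rule:

# A sketch: apply the same SSH allow rule to every NSG in a resource group.
# Assumes no rule named ssh-rule exists yet in each group.
$nsgs = Get-AzureRmNetworkSecurityGroup -ResourceGroupName FourthCoffeeNSG
foreach ($nsg in $nsgs)
{
    Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name ssh-rule `
        -Description 'Allow SSH' -Access Allow -Protocol Tcp -Direction Inbound `
        -Priority 100 -SourceAddressPrefix Internet -SourcePortRange * `
        -DestinationAddressPrefix * -DestinationPortRange 22 | Out-Null

    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg
}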

Finding Which Patches are Deployed

Unpatched servers are the number-one cause of preventable security breaches. The day after the CEO of a public company was fired over a security breach caused by unpatched servers, Charlotte was called into the Fourth Coffee CEO's office. She walked out of that office with what she referred to as her "Salary Continuation Program": a clear mission to inventory every system in the enterprise and to ensure that each one was patched within 60 days.

Charlotte’s hair was on fire, and Eddie turned out to be a bucket of water. Moments after her meeting with the CEO, Charlotte held an emergency staff meeting to discuss how the team would get on top of this issue and find out where they were. Charlotte wanted everyone engaged on the problem and was growing irritated as she watched Eddie typing away on his laptop; she assumed he was writing an email. Minutes later, Eddie interrupted and brought the meeting to a halt by saying the single word, “DONE!”.

While everyone else had been arguing about charters, timelines, and whether to bring in contractors, Eddie wrote the following script:

$allComputers = Get-ADComputer -Filter 'ObjectClass -eq "computer"'
$patches = Get-CimInstance -ComputerName $allComputers.Name `
    -ClassName Win32_QuickFixEngineering
$patches | Export-Csv .\Patches.csv
Invoke-Item .\Patches.csv

Eddie said, “I just ran a script to generate an Excel spreadsheet that shows every computer and every patch installed on those computers.” When Joe asked how long the company had been able to do that, Eddie replied, “About 45 seconds. I just finished the script. This will give us quick answers, but if you give me a few more minutes, I’ll write some helper scripts to generate reports as well.”
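
The helper reports Eddie mentions can be equally short. The following sketch builds on the $patches and $allComputers variables from his script; the KB number is hypothetical and used only for illustration.

# Patch count per computer
$patches | Group-Object PSComputerName |
    Select-Object Name, Count |
    Sort-Object Count |
    Export-Csv .\PatchCountPerComputer.csv -NoTypeInformation

# Computers missing a specific (hypothetical) patch
$patched = ($patches | where { $_.HotFixID -eq 'KB4012212' }).PSComputerName
$allComputers.Name | where { $_ -notin $patched }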

Storage investments

Fourth Coffee used to use expensive Storage Area Networks (SANs, or $ANs) connected over Fibre Channel. The problem was that the "gold-plated" hardware was extremely expensive, and finding people who could support and debug Fibre Channel issues was a challenge. Fourth Coffee moved to Storage Spaces Direct, which uses high-volume/low-cost components and Ethernet. Now the company has great reliability and performance at a fraction of the cost, and it can easily hire people to support and diagnose issues.

Checking Storage Pool Health on Remote Machines

If you want to look at the health of storage pools that you've built on various machines, you can connect to each machine with a CIM session created by New-CimSession and then pass that session to the Get-StoragePool cmdlet to retrieve the data from the remote machine.

$cimsession = New-CimSession -ComputerName sql01
Get-StoragePool -CimSession $cimsession

The output of the command is shown here:

FriendlyName OperationalStatus HealthStatus IsPrimordial IsReadOnly PSComputerName
------------ ----------------- ------------ ------------ ---------- --------------
Primordial   OK                Healthy      True         False      sql01
sqldata      Read-only         Unknown      False        True       sql01
sqldata      Read-only         Unknown      False        True       sql01
Primordial   OK                Healthy      True         False      sql01

If you observe the output, you see the health status listed so you can determine what is healthy, what is unhealthy, and what is in an unknown status and requires further investigation.

You can create a script that loops across multiple machines and generates a report for all the storage pools so that you have one place to look instead of manually performing the task on each machine.
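
A minimal sketch of that report follows; the Servers.txt list and the CSV path are assumptions.

$servers = Get-Content .\Servers.txt
$sessions = New-CimSession -ComputerName $servers

# Report every non-primordial pool and its health state in one CSV
Get-StoragePool -CimSession $sessions |
    where { -not $_.IsPrimordial } |
    Select-Object PSComputerName, FriendlyName, OperationalStatus, HealthStatus |
    Export-Csv .\StoragePoolHealth.csv -NoTypeInformation

Remove-CimSession -CimSession $sessions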

Creating a File Share

To rapidly create file shares, you can use the New-SmbShare cmdlet, like so:

New-SmbShare -Name 'UserStore' -Path 'D:\UserStore' `
    -FullAccess 'FourthCoffee\Administrators', 'FourthCoffee\Domain Users', `
    'FourthCoffee\Domain Admins'

You also can do this with remote machines by creating a CIM Session to a remote machine and executing the New-SmbShare cmdlet with the -cimsession parameter.
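
For example, here is a minimal sketch; the file server name fs01 is an assumption.

# Create the same share on a remote file server over a CIM session
$cimsession = New-CimSession -ComputerName fs01
New-SmbShare -Name 'UserStore' -Path 'D:\UserStore' `
    -FullAccess 'FourthCoffee\Administrators' -CimSession $cimsession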

Creating a Virtual Disk

If Fourth Coffee implemented Storage Spaces Direct on its hosts and needed to create a new virtual disk for users to store data on, the company could do that rapidly in PowerShell too. Using the New-VirtualDisk cmdlet as follows would create a 100 GB disk ready for use:

New-VirtualDisk -StoragePoolFriendlyName StoragePool01 `
    -FriendlyName UserDataStore10 -Size 100GB -ProvisioningType Fixed `
    -ResiliencySettingName Simple

You could easily parameterize this, or write code that enumerates the resources and checks what is available in the pool before provisioning.
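
As a sketch of that idea, the following checks the pool's free space before provisioning; the pool name and size mirror the example above.

$pool = Get-StoragePool -FriendlyName StoragePool01
$freeBytes = $pool.Size - $pool.AllocatedSize

if ($freeBytes -gt 100GB)
{
    New-VirtualDisk -StoragePoolFriendlyName StoragePool01 `
        -FriendlyName UserDataStore10 -Size 100GB -ProvisioningType Fixed `
        -ResiliencySettingName Simple
}
else
{
    Write-Warning 'StoragePool01 does not have 100 GB free'
}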

Resizing a Disk

If you’re running out of space on your storage virtual disks, you can use PowerShell to increase the amount of space available using the Resize-VirtualDisk cmdlet:

Resize-VirtualDisk -FriendlyName 'UserDataStore10' -Size (200GB)
Network investments

There are a variety of network automation investments you could also make. In this section, we discuss a few options.

Change the IP Address

A simple task like changing the IP address of a machine can take a minute or two on any machine. You can use the New-NetIPAddress cmdlet to change the IP address of a machine as part of a setup script. We show the basic usage of the New-NetIPAddress cmdlet here:

New-NetIPAddress -InterfaceAlias 'Wired Ethernet Connection' `
    -IPAddress '172.18.0.100' -PrefixLength 24 -DefaultGateway 172.18.0.254

Then you can update the DNS servers the network interface card uses to the appropriate DNS servers using the Set-DnsClientServerAddress cmdlet like this:

Set-DnsClientServerAddress -InterfaceAlias 'Wired Ethernet Connection' `
    -ServerAddresses 172.18.0.1, 172.18.0.2

You also can run these commands through a CIM session. For example, you can provision a machine on a DHCP network and then remotely connect to it and change it to a static IP address.
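
A minimal sketch of that scenario follows; the computer name, interface alias, and addresses are assumptions.

# Convert a DHCP-provisioned machine to a static address over a CIM session
$cimsession = New-CimSession -ComputerName newserver01

New-NetIPAddress -InterfaceAlias 'Wired Ethernet Connection' `
    -IPAddress '172.18.0.101' -PrefixLength 24 -DefaultGateway 172.18.0.254 `
    -CimSession $cimsession

Set-DnsClientServerAddress -InterfaceAlias 'Wired Ethernet Connection' `
    -ServerAddresses 172.18.0.1, 172.18.0.2 -CimSession $cimsession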

Flush DNS

Occasionally DNS on a machine doesn't flush its cache quickly enough, and you need to force a flush of the cache to allow proper name resolution. You can use the Clear-DnsClientCache cmdlet to clear the DNS cache locally or remotely.

For a local machine DNS cache, simply run the following from an elevated PowerShell:

Clear-DnsClientCache

For a remote machine, create a CIM session and then execute the following:

$cimsession = New-CimSession -ComputerName dc01
Clear-DnsClientCache -CimSession $cimsession

A valid day-to-day use of this would be when you’re working with Azure Site Recovery. As part of the recovery plan, you need to clear clients’ DNS cache for them to properly resolve to the newly recovered servers in the disaster recovery site.

Capture Network Trace

Capturing traffic on a system is a useful troubleshooting technique when you're analyzing problems like latency or disconnects in your application. PowerShell enables you to build a tool that captures network traffic data using the following steps:

  1. Build a timestamp using the Get-Date cmdlet:

    $timestamp = Get-Date -Format 'yyyy-MM-dd_HH-mm-ss'
  2. Create the capture session and give it a friendly name using the New-NetEventSession:

    New-NetEventSession -Name NetCapture `
        -LocalFilePath "d:\temp\$env:computername-netcap-$timestamp.etl" -MaxFileSize 512
  3. Specify the type of capture provider you want to use. In this case, add a packet capture provider to the NetCapture session using the Add-NetEventPacketCaptureProvider cmdlet:

    Add-NetEventPacketCaptureProvider -SessionName NetCapture
  4. Start the capture using the Start-NetEventSession cmdlet:

    Start-NetEventSession -Name NetCapture
  5. Stop the capture using the Stop-NetEventSession cmdlet:

    Stop-NetEventSession -Name NetCapture

CIM Sessions can be used to perform this on remote computers.
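
Because the NetEventPacketCapture cmdlets are CIM-based, they also accept a -CimSession parameter, so a remote capture can look like the following sketch; the computer name and file path are assumptions.

$cimsession = New-CimSession -ComputerName dc01

New-NetEventSession -Name NetCapture -LocalFilePath 'd:\temp\dc01-netcap.etl' `
    -MaxFileSize 512 -CimSession $cimsession
Add-NetEventPacketCaptureProvider -SessionName NetCapture -CimSession $cimsession
Start-NetEventSession -Name NetCapture -CimSession $cimsession

# ...reproduce the problem, then stop the capture...
Stop-NetEventSession -Name NetCapture -CimSession $cimsession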

Compute investments

Day to day, you need to do many tasks related to the compute aspects of an infrastructure. This could mean creating virtual machines or rebalancing virtual machine clusters to ensure proper distribution of resources.

Virtual Machine Deployment

A key to driving efficiency in an organization is the delivery of resources in a timely fashion. There are many examples across IT organizations of virtual machines being requested for applications, and it may take several weeks before they are deployed.

This is one process where an automated framework should be implemented with a self-service portal to minimize the IT resources involved in the overall deployment. This does not mean IT resources will allow any virtual machines to be deployed to their environment; rather that users will request a new virtual machine, and IT resources will be notified to approve or deny the request. If the request is approved, the virtual machine will be deployed based on the parameters; if the request is denied, a notification will be sent to the end user who submitted the request.

Figure 6-8 shows a flow chart of the process you could use for virtual machine provisioning.

This diagram shows a process for automated virtual machine deployment. It covers a user requesting a virtual machine, a notification task and approval steps. After approval it will deploy the virtual machine to the correct environment. Or if denied it will notify the user the request has been denied
Figure 6-8 Process for automated virtual machine deployment

Figure 6-9 shows a sample architecture you could use to take the request in and perform the deployment task. You use on-premises and public cloud resources to achieve the solution and leverage the tooling that you have (such as ServiceNow, SharePoint, AA, and PowerShell).

This figure shows a sample architecture that you could use to deploy virtual machines across public and private clouds. It utilizes a simple SharePoint request form with built in SharePoint workflows, once the workflow is complete and approved it will trigger an Azure Automation workflow to perform all necessary tasks, such as raising a service request for tracking and deploying into an environment.
Figure 6-9 Sample architecture using private and public cloud technologies to deploy virtual machines across clouds

Note

Everything shown here is an example of how you might implement it or achieve these efficiencies. How you choose to implement such elements in your infrastructure will require the appropriate planning for your systems, tools, and processes.

Over the next few pages, we show you some of the sample PowerShell we could have used as a base to deploy virtual machines across some of the virtualization environments we find in organizations today. In our architecture in Figure 6-9, we use a SharePoint webpage that controls the type of data we get; it will have a drop-down list for the environment people want this deployed to.

We can abstract the cloud environments from the users by using terminology like Production, Test, Dev, and so on instead of Hyper-V, VMware, Azure, and AWS. Similarly, we can abstract all the information we require to make proper choices for deployment from the user and make those forms user friendly.

When it gets into AA, our control scripts parse out the data and call the relevant deployment script to ensure that it goes to the right environment. We can even get our control scripts to check environment capacity before deploying and make alternative decisions based on what the scripts discover.
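
A control script can be as simple as a switch statement. The following sketch is illustrative only; the environment names and the helper deployment scripts it calls are hypothetical.

param
(
    [Parameter(Mandatory=$true)]
    [string]$VMName,
    [Parameter(Mandatory=$true)]
    [ValidateSet('Production','Test','Dev')]
    [string]$Environment
)

# Route the request to the deployment script for the chosen environment
switch ($Environment)
{
    'Production' { .\Deploy-ToVMware.ps1 -VMName $VMName }
    'Test'       { .\Deploy-ToHyperV.ps1 -VMName $VMName }
    'Dev'        { .\Deploy-ToAzure.ps1  -VMName $VMName }
}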

Deploying to Hyper-V

Here is an example of a PowerShell command you could use to create a virtual machine in Hyper-V. This creates a hash table of the parameters that will be used to create the virtual machine:

$VMName = 'Server01'

$VM = @{
  Name = $VMName
  MemoryStartupBytes = 2147483648
  Generation = 2
  NewVHDPath = "C:\Virtual Machines\$VMName\$VMName.vhdx"
  NewVHDSizeBytes = 53687091200
  BootDevice = 'VHD'
  Path = "C:\Virtual Machines\$VMName"
  SwitchName = (Get-VMSwitch | Select-Object -First 1).Name
}

New-VM @VM

We can wrap this with parameters to capture all the variables required and deploy the virtual machine.
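
A minimal sketch of that wrapper, with illustrative defaults, might look like this:

param
(
    [Parameter(Mandatory=$true)]
    [string]$VMName,
    [long]$MemoryStartupBytes = 2GB,
    [long]$NewVHDSizeBytes = 50GB,
    [string]$VMRoot = 'C:\Virtual Machines'
)

$VM = @{
    Name               = $VMName
    MemoryStartupBytes = $MemoryStartupBytes
    Generation         = 2
    NewVHDPath         = "$VMRoot\$VMName\$VMName.vhdx"
    NewVHDSizeBytes    = $NewVHDSizeBytes
    BootDevice         = 'VHD'
    Path               = "$VMRoot\$VMName"
    SwitchName         = (Get-VMSwitch | Select-Object -First 1).Name
}

New-VM @VM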

Normally Hyper-V is managed using System Center Virtual Machine Manager, which also has PowerShell cmdlets to deploy a virtual machine. A sample script for that would be like this:

param
(
    [string]$VMMServer,
    [string]$VMName,
    [string]$TemplateName = 'WindowsServer2016Datacenter'
)

Get-VMMServer -ComputerName $VMMServer

$VMTemplate = Get-Template -VMMServer $VMMServer | where { $_.Name -eq $TemplateName }
$VMHostGroup = Get-VMHostGroup -Name 'All Hosts'

$HostRatings = @(Get-VMHostRating -DiskSpaceGB 120 -Template $VMTemplate `
    -VMHostGroup $VMHostGroup -VMName $VMName | where { $_.Rating -gt 0 })

If ($HostRatings.Count -eq 0)
{
    throw 'No hosts meet the requirements for the virtual machine'
}

$VMHost = $HostRatings[0].VMHost
$VMPath = $HostRatings[0].VMHost.VMPaths[0]
$VMJobGroup = [System.Guid]::NewGuid()

New-VM -Template $VMTemplate -Name $VMName -Description 'New VM' -VMHost $VMHost `
    -Path $VMPath -JobGroup $VMJobGroup -RunAsynchronously `
    -ComputerName '*' -JoinWorkgroup 'WORKGROUP' -RunAsSystem -StopAction SaveVM

The deployment requires the VMM server name; in this example, we default the template to the Windows Server 2016 template that has been prestaged in VMM. We could add more parameters, such as joining the machine to the domain, and even have post-deployment scripts run. VMM also provides the host rating system, which we use to determine the best host placement, turning yet another manual job into an automated one.

Deploying to VMware

VMware also can be driven from PowerShell by using the PowerCLI module. Similar to System Center Virtual Machine Manager, we can discover our hosts and, in VMware's case, the datastores, and dynamically choose where the virtual machine should be deployed. The following basic script shows how you can rapidly deploy a Windows Server 2016 virtual machine into a VMware estate.

param
(
    [string]$vcenter,
    [string]$VMName,
    [string]$Password = 'Password123!'
)

$secpass = ConvertTo-SecureString -String $Password -AsPlainText -Force
$domaincreds = New-Object System.Management.Automation.PSCredential(
    'FOURTHCOFFEE\Administrator', $secpass)

Connect-VIServer -Server $vcenter
$VMHOST = 'VMHOST01'
$DATASTORE = 'SSDStore01'

New-OSCustomizationSpec -Name 'WindowsServer2016' -FullName $VMName `
    -OrgName 'FourthCoffee' -OSType Windows -ChangeSid `
    -AdminPassword $secpass -Domain 'FOURTHCOFFEE' -DomainCredentials $domaincreds `
    -AutoLogonCount 1

$OSSpecs = Get-OSCustomizationSpec -Name 'WindowsServer2016'

$VMTemplate = Get-Template -Name 'Server2016Template'

New-VM -Name $VMName -Template $VMTemplate -OSCustomizationSpec $OSSpecs `
    -VMHost $VMHOST -Datastore $DATASTORE

Deploying to Azure

There are multiple ways of deploying a virtual machine to Azure. Here are the three most common:

  • The portal

  • PowerShell/CLI

  • ARM templates

You can automate virtual machine deployment with PowerShell/CLI or ARM templates. Sometimes you even can use a combination of both.

For the first example, we show a sample using PowerShell that could be used in AA to deploy a Windows Server Virtual machine.

In the first section, we ask for the parameters, and we introduce more complex PowerShell features like mandatory parameters and the ValidateNotNullOrEmpty() attribute to ensure we don't leave a parameter that needs a value empty. Then we move on to collect our credentials using the Azure Automation cmdlet Get-AutomationPSCredential. We then build the virtual machine configuration and pass it to the final New-AzureRmVM cmdlet.

param
(
[Parameter(Mandatory=$true)]
    [string]$VMname,
    [ValidateNotNullOrEmpty()]
    [string]$AzureRegion = "westus2",
    [ValidateNotNullOrEmpty()]
    [string]$ImageName = "WindowsServer2016",
    [ValidateNotNullOrEmpty()]
    [string]$VMSize = "Standard_D1",
    [ValidateNotNullOrEmpty()]
    [string]$ResourceGroup

)

If ($ImageName -eq "WindowsServer2016")
{
   $PublisherName = "MicrosoftWindowsServer"
   $offer = "WindowsServer"
   $skus = "2016-Datacenter"
}

$cred = Get-AutomationPSCredential -Name "Azure"
$vm = New-AzureRmVMConfig -VMName $VMName -VMSize $VMSize

$vm = Set-AzureRmVMOperatingSystem `
    -VM $vm `
    -Windows `
    -ComputerName $VMName `
    -Credential $cred `
    -ProvisionVMAgent -EnableAutoUpdate

$vm = Set-AzureRmVMSourceImage `
    -VM $vm `
    -PublisherName $PublisherName `
    -Offer $Offer `
    -Skus $skus `
    -Version latest

$vm = Set-AzureRmVMOSDisk `
    -VM $vm `
    -Name "$($VMName)_OSDisk" `
    -DiskSizeInGB 128 `
    -CreateOption FromImage `
    -Caching ReadWrite

 # $nic is assumed to be a network interface created earlier in the runbook,
 # for example with New-AzureRmNetworkInterface
 $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

 New-AzureRmVM -ResourceGroupName $resourcegroup -Location $AzureRegion -VM $vm

ARM templates are slightly different, however. They’re defined in a JSON file. This file describes the infrastructure to be deployed. It also includes taking parameters, defining usable variables, and providing outputs. This is infrastructure as code, and when we start to adopt concepts like DevOps into Fourth Coffee’s IT organization, this will be a cornerstone of driving consistency and efficiency throughout the company’s environments.

The following is a sample ARM template for deploying a Windows Virtual Machine:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.
json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": { "type": "string" },
    "adminPassword": { "type": "securestring" }
  },
"variables": {
    "vnetID": "[resourceId('Microsoft.Network/virtualNetworks','myVNet')]",
    "subnetRef": "[concat(variables('vnetID'),'/subnets/mySubnet')]",
  },
  "resources": [
    {
      "apiVersion": "2016-03-30",
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "myPublicIPAddress",
      "location": "[resourceGroup().location]",
      "properties": {
        "publicIPAllocationMethod": "Dynamic",
        "dnsSettings": {
          "domainNameLabel": "myresourcegroupdns1"
        }
      }
    },
    {
      "apiVersion": "2016-03-30",
      "type": "Microsoft.Network/virtualNetworks",
      "name": "myVNet",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
        "subnets": [
          {
            "name": "mySubnet",
            "properties": { "addressPrefix": "10.0.0.0/24" }
          }
        ]
      }
    },
    {
      "apiVersion": "2016-03-30",
      "type": "Microsoft.Network/networkInterfaces",
      "name": "myNic",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/publicIPAddresses/', 'myPublicIPAddress')]",
        "[resourceId('Microsoft.Network/virtualNetworks/', 'myVNet')]"
      ],
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses','myPublicIPAddress')]" },
              "subnet": { "id": "[variables('subnetRef')]" }
            }
          }
        ]
      }
    },
    {
      "apiVersion": "2016-04-30-preview",
      "type": "Microsoft.Compute/virtualMachines",
      "name": "myVM",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Network/networkInterfaces/', 'myNic')]"
      ],
      "properties": {
        "hardwareProfile": { "vmSize": "Standard_DS1" },
        "osProfile": {
          "computerName": "myVM",
          "adminUsername": "[parameters('adminUsername')]",
          "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "MicrosoftWindowsServer",
            "offer": "WindowsServer",
            "sku": "2012-R2-Datacenter",
            "version": "latest"
          },
          "osDisk": {
            "name": "myManagedOSDisk",
            "caching": "ReadWrite",
            "createOption": "FromImage"
          }
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces','myNic')]"
            }
          ]
        }
      }
    }
  ]
}

We can use the CLI, PowerShell, the REST API, or the portal to deploy this template.

When considering automation in Azure, ARM templates are the preferred choice. Even if you use AA as the controlling engine, you can store your templates in a public repository and have AA call the repository during runbook execution and deploy the template.
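
Deploying a stored template with PowerShell is then a single cmdlet call. The following is a sketch; the resource group name and repository URL are assumptions.

$templateUri = 'https://raw.githubusercontent.com/fourthcoffee/templates/master/windowsvm.json'

# Deploy the template stored in the public repository into an existing resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName FourthCoffeeRG `
    -TemplateUri $templateUri `
    -adminUsername 'fcadmin' `
    -adminPassword (Read-Host -AsSecureString 'Admin password')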

Deploying to AWS

If you want to deploy to AWS, you can use PowerShell to achieve this. The following is a base script that will deploy an AWS virtual machine. In the first section, we ask for the parameters, and we use the more complex PowerShell features like mandatory parameters and the ValidateNotNullOrEmpty() attribute. We proceed to build the credentials to connect to AWS; the Get-AutomationPSCredential cmdlet is Azure Automation–specific, allowing us to store the credential and retrieve it securely when our script executes. Then we retrieve the image and build the virtual machine.

param (
    [Parameter(Mandatory=$true)]
    [string]$VMname,
    [ValidateNotNullOrEmpty()]
    [string]$AWSRegion = "us-west-2",
    [ValidateNotNullOrEmpty()]
    [string]$EC2ImageName = "WINDOWS_2012R2_BASE",
    [ValidateNotNullOrEmpty()]
    [string]$MinCount = 1,
    [ValidateNotNullOrEmpty()]
    [string]$MaxCount = 1,
    [ValidateNotNullOrEmpty()]
    [string]$InstanceType = "t2.micro"
    )


$AwsCred = Get-AutomationPSCredential -Name "AwsCred"
$AwsAccessKeyId = $AwsCred.UserName
$AwsSecretKey = $AwsCred.GetNetworkCredential().Password


Set-AWSCredentials -AccessKey $AwsAccessKeyId -SecretKey $AwsSecretKey -StoreAs AWSProfile
Set-DefaultAWSRegion -Region $AWSRegion


$ami = Get-EC2ImageByName $EC2ImageName -ProfileName AWSProfile -ErrorAction Stop


Write-host "Creating new AWS Instance..."
$NewVM = New-EC2Instance `
    -ImageId $ami.ImageId `
    -MinCount $MinCount `
    -MaxCount $MaxCount `
    -InstanceType $InstanceType `
    -ProfileName AWSProfile `
    -ErrorAction Stop
 $InstanceID = $NewVM.Instances.InstanceID
 $NewVM


Write-host "Applying new VM Name...."
New-EC2Tag -Resource $InstanceID -Tag @( @{ Key = "Name" ; Value = $VMname}) -ProfileName AWSProfile
Write-host ("Successfully created AWS VM: " + $VMname)

Because this is a public cloud, we don't have to worry about capacity issues, but we do have to worry about cost! That is why we have approval steps in the SharePoint workflow and why the control scripts probe the ticket in ServiceNow to confirm that it has been approved.

Containers

Although deploying virtual machines in an automated fashion definitely leads to greater efficiency for the datacenter, another transformation could take place to push Fourth Coffee to the next level.

In the next set of examples, we slightly move away from PowerShell to take a look at infrastructure as code with Dockerfiles. A Dockerfile describes the environment you want to deploy, including adding files to the container and executing or installing anything required to run the application. Here’s an example:

FROM microsoft/windowsservercore
ADD ApacheInstall.ps1 /windows/temp/ApacheInstall.ps1
ADD VCRedistInstall.ps1 /windows/temp/VCRedistInstall.ps1
RUN powershell.exe -executionpolicy bypass c:\windows\temp\ApacheInstall.ps1
RUN powershell.exe -executionpolicy bypass c:\windows\temp\VCRedistInstall.ps1
WORKDIR /Apache24/bin
CMD /Apache24/bin/httpd.exe -w

In our example, we use the windowsservercore base image for our container, copy in two scripts that deploy the Apache web server and the Visual C++ Redistributable, run those scripts to install the software, and then start Apache.

The ApacheInstall.ps1 file would look as follows:

Invoke-WebRequest -Method Get `
    -Uri http://www.apachelounge.com/download/VC14/binaries/httpd-2.4.25-win64-VC14.zip `
    -OutFile c:\apache.zip

Expand-Archive -Path c:\apache.zip -DestinationPath c:\
Remove-Item c:\apache.zip -Force

As you can see, we’re downloading the file from the public Apache source and extracting it.

Every time we deploy this container, it’s provisioned in this exact manner—pulling our base image, downloading the components required, installing them, and then launching the Apache web server.

If we scale out the container instances, the exact same procedure would occur. Of course, we could commit our changes to the container and have our custom “base” image to launch each and every time.

Then why use Dockerfiles? Why not just build our own application into a custom image and deploy that custom image each time? Well, that is exactly like the process of a virtual machine deployment for an application: someone has to deploy the app and commit the image each time. With the Dockerfile, we write out the steps to build the image (or add a PowerShell script!) and let the build process do the work.

Think of it in terms of DevOps. More importantly, what if Fourth Coffee chooses to implement a CI/CD pipeline?

The development team needs an environment where they can deploy and test their application. If we are close to capacity on the virtualization platforms they have implemented, requesting additional resources to try something new will be a difficult ask. Given that a container footprint is considerably less, we could use that technology to give them an environment easily. What’s more, if they use a cloud-based container service like Azure Container Service and then deploy to Docker on premises, it won’t matter because they will describe their environment through Dockerfiles or Docker Compose files.

Following is a sample Docker Compose file that references a folder where the web Dockerfile exists and the db Dockerfile exists, comprising the “application”:

version: '2.1'
services:
 web:
  build: ./web
  ports:
   - "80:80"
  depends_on:
   - db
  tty:
    true
 db:
  build: ./db
  expose:
   - "1433"
  tty:
    true
networks:
 default:
  external:
   name: "nat"

The development team and the operations team work together to build a small container host environment and begin describing, in Dockerfiles, the applications and operating systems that will run their software. To tie all parts of an application together (that is, the database and the front end), they describe them in a Docker Compose file that references the Dockerfiles.

These Docker Compose files can be “played” against any Docker container deployment and will build out the service. The development team can verify its functionality, while operations can determine how to monitor it. Best of all, once all the verification is done, the same process applies to moving between the development, test, and production environments.
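
Bringing the composed application up is a single command from a PowerShell prompt; this sketch assumes the Docker Compose file sits in the current directory.

# Build the images described by the Dockerfiles and start the services
docker-compose up -d --build

# Check that the web and db services are running
docker-compose ps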

When the development team releases a new update based on feedback and monitoring data, the Docker Compose file gets updated to point to the new application, and a redeploy happens.

In a DevOps pipeline scenario, the development team would integrate the creation of the Docker Compose files and deployment into the entire process so that when they commit code, it gets validated under automated tests (which may involve deploying to a container). Then it’s deployed into the test environments and, if it passes another set of tests, finally moves into the production environment. All this happens without IT operations being directly involved. The operations team would have been involved from the start of planning and working out this process, however.

Remember also that this process can be carried out across cloud environments. For example, the dev and test environments might be on premises, and the production environment might be in the cloud. The number of scripts we need to support multiple environments is considerably smaller with containers than it is for deploying virtual machines.

For CI/CD, Figure 6-10 shows how we would build a pipeline with Visual Studio Team Services.

This figure shows the Visual Studio Team Services integration with containers for CI/CD. It shows code being pushed into version control, going through a build cycle, creating release artifacts, and being published to an Azure container registry, which is then used to deploy to an Azure App Service.
Figure 6-10 Visual Studio Team Services integration with containers for CI/CD

The code is written and committed to a source control repository. The automation build and test begin to ensure code functionality. Once it is ready and has passed the tests, it moves on to release management, which pushes the image to the container registry and then can invoke a deployment process with the latest software.

This helps bring a modern IT environment closer to Fourth Coffee. When they choose to implement a mobile ordering system, the container element will allow for great efficiency and agility for all teams.

Over the course of this chapter, we have talked about the areas Fourth Coffee will need to invest in to deliver on datacenter efficiency. There are millions of examples available on the internet for almost any scenario you can think of. A lot of the technical samples work today; however, like all things in IT, they need to be updated over time as the technology changes. If Fourth Coffee applies the life cycle principles we've discussed, then updates will happen naturally, enabling the company to eat the elephant and deliver on the promises of a modern IT infrastructure.

The mobile ordering system

What does this all mean for Fourth Coffee and their drive to digital transformation via their mobile ordering system? Charlotte knew that to deliver such a system effectively she first needed to free up budget, time, and resources. She accomplished that by doing the following things:

  • Using Software as a Service (SaaS) whenever possible: She had to spend money to make this happen, but the expense got the company on the latest technology; ensured that they were always secure, patched, and up to date; and freed people’s time.

  • Using Azure whenever possible: By using Azure, Fourth Coffee was able to close a datacenter and eliminate a set of servers and software (such as backup systems, management servers, and so on) that were not contributing to customer value. Once again, the company moved to use the latest technology, which ensured that they were always secure, patched, up to date, and people had more time to address other issues.

  • Using Windows Server 2016: The dramatic efficiencies made possible by WS2016 infrastructure as well as the increased security and automation allowed Fourth Coffee to eliminate its expensive SAN and VMware components.

  • Investing in the technical and cultural training of the staff: Expectations and job responsibilities were changing, and Charlotte gave her staff motivation, time, and resources to make those changes.

Design choices

During the inception phase of introducing a mobile ordering experience, Fourth Coffee identified some design choices that would modernize the IT experience and deliver the desired efficiency from datacenter transformation. Here we discuss some of those choices to highlight how they affected the design and how they led to greater efficiency in the datacenter.

Identity in a mobile app world

One of the first aspects of adopting a mobile experience must involve handling identity. For example, when a user signs up for the Fourth Coffee mobile experience, is an Active Directory account created, or is an identity table built in the database with authentication protocols to support it? Choosing how you handle customers' identity and what information you store about them also affects what governance requirements need to be implemented. And those decisions affect the choice of vendors and technology, which must be able to meet those requirements so that Fourth Coffee is covered.

For authentication, Fourth Coffee wanted to move away from Kerberos- or NTLM-based protocols to adopt something like Security Assertion Markup Language (SAML) for authentication and authorization in a mobile app world. One option was to use Microsoft Accounts or Facebook to provide and authenticate the user into the mobile order experience. This solution would simplify the coding practices because Fourth Coffee would adopt an open standard and not have to build custom identity providers and authentication systems.

As customers sign up, they can select from the identity providers (Microsoft, Facebook, Google, and so on) that Fourth Coffee has enabled, and data passes between the mobile order application and the identity provider. Fourth Coffee retrieves relevant information about the user from the identity provider so that it can store it in the database; however, no security information (passwords or multifactor authentication codes) is stored because it is maintained by the identity provider. The information that Fourth Coffee stores is used to present the end user's experience and to allow the company to tag additional behavioral data in the future.

Security

Security is a complex topic. How do we stop attacks on Fourth Coffee’s systems while maintaining a positive working experience for the employees and a usable consumer experience? Part of the evolution to modern operating systems and application stacks is to help address that very need and minimize the exposure footprint.

Security considerations were also a deciding factor in choosing a cloud-based deployment for the Fourth Coffee mobile ordering experience. The cloud provides access to platforms, tools, and techniques that are built in or readily available. On premises, the same types of platforms and tools take time, money, and considerable effort to maintain and operate.

Fourth Coffee exposed the application to handle the mobile ordering experience via the application gateway technology in Azure. With the Web Application Firewall enabled, attacks like SQL injection are mitigated out of the box without the Fourth Coffee team having to touch any code.

Other factors, like governance and how it's implemented, play an important part in the design. For example, if Fourth Coffee stores credit card information, the company needs to comply with PCI DSS. Azure has PCI DSS certification for the platform, which means Fourth Coffee just needs to implement the controls as part of the application. And if Fourth Coffee uses some of the Azure PaaS services, they might only need to check a box to enable the feature.

For operations, Fourth Coffee leveraged tools like Log Analytics and Azure Security Center to quickly identify threats and correlate activities across the entire IT infrastructure, no matter the location.

Monitoring

Modernizing provides a huge step forward in identifying problems in the application or infrastructure that could negatively affect the user experience.

For example, by using Application Insights, Fourth Coffee can get real-time data from inside the application on how quickly a method is executing or the exact failure of a stack call. Using this data, Fourth Coffee also can collect trace information for the developers to use to determine and resolve the problem.

We also can use Azure Monitor to gain deeper insight into the workings of Azure PaaS services, our container host infrastructure, our network connections, and practically any source we can extract or push data from. Similar to the developer experience with Application Insights, we can track and correlate problems rapidly and resolve them.

Payment processing

Modernizing the payment processing environment provides greater agility and minimizes dependencies on older technology. Fourth Coffee won’t need to have connectivity back to the corporate network to talk to the old payment processing server.

In most cases, the improvement could be a few lines of code to reflect the call to the payment processor. The goal is to simplify the code and build a robust application while meeting the needs of the customer.

Also, when Fourth Coffee moves to a modern payment processor, the company technically doesn’t need to store credit card data. Instead, it can store value amounts once the processor has billed the customer through the system. This can simplify the governance requirements.

The architecture

Figure 6-11 shows a sample architecture Fourth Coffee could use to handle two elements at once: updating the POS system and introducing the mobile ordering experience.

This figure shows the new architecture for the point-of-sale and mobile ordering experience. It shows the core components using Azure SQL and Azure SQL Datawarehouse; it also uses Application Gateway and Azure Kubernetes Service for running the application. It finally uses Application Insights and log analytics for in-depth telemetry of their entire infrastructure and application.
Figure 6-11 Mobile ordering experience architecture

Fourth Coffee chose to combine the update of the POS system with the introduction of the mobile ordering experience; the company updated and ported the code to execute in containers. The team also built an application front end, which is hosted in a container, and updated the database structure and code to support Azure SQL.

The efficiencies

The efficiencies are numerous; the following sections describe how Fourth Coffee has driven efficiency and modernized its infrastructure and application.

Use of containers

Containers drive a large amount of efficiency within an IT organization. First, with the declarative infrastructure as code (IaC) approach, Fourth Coffee knows exactly what it's getting every single time the team deploys, no matter the destination infrastructure. The deployment process is simplified because the team describes what the deployment should look like. And because Fourth Coffee standardized on the Docker container format, the team can deploy to any infrastructure (private, public, or managed), so they're truly cloud agnostic.

Because containers are immutable and generally stateless (or keep their state externalized), each one can be treated as a shell that can be re-created at a moment's notice. The consequence is that Fourth Coffee doesn't have to back up every running container; the team can simply redeploy and reconnect to the state store. Scaling to meet performance demand is also simplified: because the desired state of the container is already defined in the IaC configuration, increasing the number of containers serving the application takes seconds rather than minutes. Monitoring becomes simpler, too, when two practices are adopted: using the container orchestrator to monitor the running applications and employing application monitoring such as Application Insights.

Azure SQL

Azure SQL provides a managed SQL infrastructure that can scale to meet the performance demands of Fourth Coffee's application. The company no longer needs to patch or back up the database or provide any special tooling to do so because those functions are built into the service, which saves time and money. Other elements of the Azure data suite also integrate natively with Azure SQL, which simplifies data access and transfer.

Azure SQL Data Warehouse

One of the hard things about looking retrospectively at any business is having to keep information, and that requires a lot of storage. With on-premises systems, the solution was usually some sort of SAN storage. With Azure SQL Data Warehouse, Fourth Coffee can have practically unlimited data storage with native integration to Azure SQL, which reduces the load of data transfer. Fourth Coffee also can integrate natively with Power BI to generate reports easily, and the company has access to multiple tools for data analytics and machine learning. Reducing specialized tooling, complex integration, and the management of these services drives far more efficiency than manually building and maintaining them and acquiring the skills needed to run production-grade infrastructure.

Application Gateway

No matter how good the application developer might be, infrastructure safeguards are always needed. The application gateway reduces the need for specialized knowledge and hardware to achieve application high availability and throughput. It also blocks a range of attacks that might have been missed in a code security scan.

Payment Processor

Using a cloud payment processor inherently drives efficiency because Fourth Coffee doesn't need to maintain a specialized payment infrastructure and can potentially reduce the governance burden. Redirecting to the processor usually takes just a few lines of code, which simplifies the code base. Cloud payment processors also adopt newer payment technologies and expose them through their interfaces, which developers can consume with small code fragments.

Final thoughts

We’ve discussed a lot of different concepts in this chapter. The key take-away is that at every step, you can make an assessment about where you stand today across the pillars of people, process, and technology and invest in each as required. The examples we’ve provided are simple and intended to begin stirring your mind about what you need to change for your organization.

Technologies will change between the time of writing and when you read this because of the rapid advances being made in the cloud. Processes and the skills required to deliver datacenter efficiency will also need to change. Embrace the rapid and ever-changing nature of cloud but do so at a pace that your organization can connect with. Otherwise, all the efficiencies that come with delivering datacenter efficiency will never be realized.
