Chapter 14

Enhancing Your Troubleshooting Toolset with PowerShell

WHAT’S IN THIS CHAPTER?

  • Introducing PowerShell
  • Getting started with PowerShell
  • Using PowerShell to investigate server issues
  • Proactively tuning SQL Server performance with PowerShell

WROX.COM CODE DOWNLOADS FOR THIS CHAPTER

The wrox.com code downloads for this chapter are found at www.wrox.com/remtitle.cgi?isbn=1118177657 on the Download Code tab. The code is in the Chapter 14 download and individually named according to the code file names used throughout the chapter.

INTRODUCING POWERSHELL

System administrators are realizing the productivity gains that can be achieved from PowerShell, the unified scripting environment provided by Microsoft. Reusing scripts of commonly executed tasks is much faster and less error prone than repeatedly clicking through dialogs.

PowerShell was first unveiled to the world, under the code name Monad, at the Microsoft Professional Developers Conference in 2003. Following a beta release, version 1 was officially released in 2006, and since then PowerShell has become a deeply integrated component of Windows. Indeed, SQL Server 2012 takes PowerShell so seriously that PowerShell 2.0 is a prerequisite; SQL Server will not install if PowerShell is not already present on the system. The SQL Server and PowerShell integration that started with the Database Engine now extends to SSIS and SSAS in SQL Server 2012. PowerShell itself remains under rapid, active development; the PowerShell 3.0 release is expected in late 2012.

This chapter looks at the latest version of PowerShell available at the time of writing, version 2.0, and demonstrates why it is such a useful tool for your troubleshooting toolset. You’ll see how simple it is to construct scripts for use in your environment that can be run with minimal impact on system performance. Even better, you can save these scripts for later use. That way, when you find yourself in the heat of a production issue, you will have all your diagnostic scripts ready to run quickly and effectively.

GETTING STARTED WITH POWERSHELL

PowerShell is an object-oriented command-line shell and scripting engine created by Microsoft and built on top of the .NET Framework. It provides full access to COM, WMI, the Windows Registry, and the .NET Framework.

The PowerShell product team created a design that enables developers to extend and enhance the environment with new cmdlets, providers, and modules. This deep out-of-the-box Windows integration and the extensibility of the framework mean that you can create scripts to perform nearly any conceivable task.

You can think of PowerShell as an alternative view of your environment, as opposed to the familiar GUI experience. Every time you run a cmdlet, PowerShell responds with a result that you can evaluate and investigate further; and because everything returned is an object, you can drill down for further details. You can run cmdlets repeatedly and scroll through their history in the command window to see how you got to where you are. If you find you are repeatedly running the same pattern of cmdlets, you can package them up into a script or function and reuse complex composite scripts of cmdlets with a single new script or function name.

As mentioned earlier, PowerShell operates on objects. This is a very important aspect of its design that makes life simpler for you because you can take the results of one cmdlet and pass them straight into the next cmdlet as input. This is known as the pipeline and it differentiates PowerShell from other command shells, which have traditionally relied on text streams.

The productivity benefits that result from using PowerShell — automating repetitive tasks and reducing errors — are obvious, but PowerShell is also great for investigation and troubleshooting, and its low-overhead command-line interface is a real asset when you are working on a live issue. Imagine a situation in which you need to investigate a poorly performing server that you have not worked on before. With the following simple command you can quickly identify where the server resources are being consumed. This example shows how to determine the top ten processes consuming the most CPU:

Get-Process | Sort-Object cpu -Descending | Select-Object -First 10

This simple statement has taken all the processes running on the computer, sorted them by CPU utilization in descending order, and then selected the top 10. You don’t have to wait for a user interface to load, which avoids putting additional strain on the server.

Another question you may wish to answer while troubleshooting is when the computer last booted. You can also achieve this with a simple script (code file: PS_LastBootTime01.PS1):

$wmiBootTime = (get-wmiobject Win32_OperatingSystem).lastbootuptime;
[DateTime]$lastBootUpTime = 
[Management.ManagementDateTimeConverter]::ToDateTime($wmiBootTime);
$uptime = (Get-Date) - $lastBootUpTime;
Write-Host $lastBootUpTime;
Write-Host $uptime;

In the preceding example I created a variable and stored the last boot-up time, which is initially a string when retrieved from WMI. I then converted it into a DateTime object. I calculated the uptime by using the Get-Date cmdlet, subtracting the date on which the machine was rebooted. I then displayed the last boot-up time and the uptime in the console.

You can build a suite of such statements and scripts that you want to run later, and you can easily package them into a script that you can run on a server. Don’t worry if these scripts look a little daunting right now. This chapter explains in detail how scripts are composed, and by the end of it, you will have enough knowledge to understand numerous useful cmdlets and scripts to help with troubleshooting and diagnostics.

The PowerShell Environment

PowerShell 2 is pre-installed on Windows 7 and Windows Server 2008 R2 operating systems. Although PowerShell can be installed onto earlier versions of Windows, it is not immediately obvious from where you can obtain the download. This is because the actual product name that is indexed by all the popular Internet search engines is “Windows Management Framework,” not PowerShell. You can find the PowerShell download from the following link: http://support.microsoft.com/kb/968929.

Once installed, you can either use PowerShell in interactive command-line mode (the shell), or use a script editor to create script files containing multiple statements to execute. To launch the interactive command line, run PowerShell.exe or find the “Windows PowerShell” shortcut that is installed in your Start menu. There is an alternate icon in the Start menu called “Windows PowerShell Modules”; this is similar to the standard “Windows PowerShell” item, but while loading the shell it also imports all the modules in the PowerShell Modules folder. The following PowerShell variable allows you to discover where the PowerShell modules are located on your machine:

$env:PSModulePath

The environment variables contain many configuration settings for your system. You can discover all of them by listing the contents of the env: drive as shown below:

PS > get-childitem env:

When you install PowerShell packs as part of Windows features, their modules are installed here, which provides a quick way to ensure that every Windows feature with PowerShell support has all its functionality available to you when you load the command line.

All the commands that you will be familiar with from cmd.exe are available, such as DIR. You will also have access to a wealth of new commands that are provided with PowerShell; these commands are commonly known as cmdlets.

PowerShell ships with a host application, the PowerShell Integrated Scripting Environment (ISE). This is installed by default in Windows 7 and can be enabled by installing the “Windows PowerShell Integrated Scripting Environment (ISE)” feature on Windows Server 2008 R2. The PowerShell ISE enables you to create script files, which by convention have a .PS1 extension. You can also debug and run scripts from within this environment; although this is a significant step up from using Notepad and the command window in PowerShell 1.0, there is still room for improvement.

I prefer to use PowerGUI, a free PowerShell scripting environment that is available for download from www.powergui.org. PowerGUI also offers a wealth of additional cmdlets written by the PowerGUI community, providing functionality that is not available out-of-the-box with PowerShell. If you also use Visual Studio, you’re in luck, because Adam Driscoll has created a Visual Studio extension (VSX) for PowerShell that uses PowerGUI. You can download the VSX from http://visualstudiogallery.msdn.microsoft.com/01516103-d487-4a7e-bb40-c15ec709afa3; this extension enables you to create, debug, and manage scripts within the Visual Studio IDE.

A growing community of users has contributed additional cmdlets and modules. The best-known and best-supported set of community functionality is available at http://pscx.codeplex.com. The PowerShell Community Extensions (PSCX) package is worth downloading and keeping locally because it contains numerous cmdlets, functions, and modules that facilitate working with many aspects of Windows. A set specific to SQL Server, known as the SQL Server PowerShell Extensions (SQLPSX), is available from http://sqlpsx.codeplex.com.

Security was an important consideration when the product team created PowerShell. With such an integrated and extensible scripting and command-line interface, it was critical to consider how users would be protected from unauthorized scripts executing. A key feature of this is the Execution Policy, which is a systemwide configuration that specifies the type of scripts that can be executed on the computer. Table 14-1 summarizes the four possible values for the Execution Policy.

TABLE 14-1: PowerShell Execution Policies

Execution Policy Description
Restricted No scripts can be run. Windows PowerShell can be used only in interactive mode.
AllSigned Only scripts signed by a trusted publisher can be run.
RemoteSigned Downloaded scripts must be signed by a trusted publisher before they can be run.
Unrestricted No restrictions; all scripts can be run.

You can check the current status of the Execution Policy on your computer with the Get-ExecutionPolicy cmdlet. The following example shows how my computer is currently configured: downloaded scripts must be signed in order to run, but scripts created locally don’t have to be signed.

PS > Get-ExecutionPolicy
RemoteSigned
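If you need to change the policy, use the Set-ExecutionPolicy cmdlet from an elevated PowerShell session. The following is a minimal example that applies the same configuration as my machine:

Set-ExecutionPolicy RemoteSigned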

The Basics — Cmdlets, Variables, Advanced Functions, and Modules

PowerShell is created upon a few core building blocks. A thorough understanding of these constituent parts will make understanding the environment a lot easier, so the following sections describe these core components.

Cmdlets

Cmdlets follow a naming convention to assist with identification. This convention is [Verb]-[Noun], such as Get-Process, Get-Help, and Remove-Item.

Cmdlets are .NET classes that provide a single piece of functionality and are either provided with the PowerShell environment or provided by .NET developers and installed into the environment. They can take PowerShell objects as input and provide PowerShell objects as output. This ability to take and return PowerShell objects enabled the framework creators to offer what’s known as a pipeline. Using pipelines, you can construct single lines of PowerShell that contain multiple cmdlets, each passing the output as input to the next, with cmdlets participating in the pipeline separated by the bar operator (|). Very powerful functionality can be composed in a single line of PowerShell using this functionality. The following example uses the Get-Process cmdlet to list all the running processes on the computer, and then the output of the Get-Process cmdlet is connected to the Export-Csv cmdlet, which exports the list of processes to a CSV file:

Get-Process | Export-Csv .\processes.csv -NoTypeInformation

Discovering functionality is important in the command line because, unlike in a GUI where you can visually scan through the menu items, you need to use the discoverability tools provided with PowerShell. The first cmdlet to be aware of is Get-Help, which takes the name of a cmdlet or topic as its first parameter. You can also provide a few switches: -examples to display the examples; -detailed for further information; and -full for all the technical information. The following example uses Get-Help to retrieve all the information about the Get-Process cmdlet:

Get-Help Get-Process -full

In order to “Get-Help” you need to know the cmdlets that are available, and PowerShell provides the Get-Command cmdlet for just that purpose. If you are trying to find all the cmdlets available within a module, you can specify the module name to filter the results to only the commands that exist within the module. The module name for SQL Server 2012 is SQLPS, so you can find all the cmdlets provided with SQL Server 2012 with the following:

Get-Command -Module sqlps

If you don’t have the sqlps module available, this command returns nothing. You can make the module available for use by importing it as shown below:

Import-Module sqlps

Aliases give cmdlets alternative names, a behavior the PowerShell team has embraced. As mentioned earlier, for example, DIR is available in PowerShell. It isn’t in fact the DIR command you’ll find in cmd.exe but rather a PowerShell cmdlet called Get-ChildItem that has been aliased to DIR for backward compatibility. Interestingly, Get-ChildItem is also aliased to LS, which is the equivalent command on UNIX platforms.
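You can explore aliases with the Get-Alias cmdlet. The first command below looks up what DIR points to, and the second lists every alias defined for Get-ChildItem:

Get-Alias DIR
Get-Alias -Definition Get-ChildItem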

Variables

After you have written cmdlet statements, you need to be able to store data so that it can be retrieved and manipulated later in your scripts. PowerShell provides variables for this purpose. Variables always start with a $ symbol, followed by the name you choose. For example, to store the top 10 processes by CPU usage you can use the following:

$TopTenProcessesByCPU = Get-Process | Sort-Object cpu -Descending | Select-Object -First 10

Now that the value is stored in a variable, it is available for retrieval and further use through your script.

You can discover all the variables that are available using the Get-Variable cmdlet. Table 14-2 shows some of the more important variables that were returned after I ran this on my machine.

TABLE 14-2: Common PowerShell Variables

Variable Name Description
$_ Current item in the pipeline
$args Array of arguments
$Error Array of errors
$FALSE Boolean False
$HOME Folder containing the current user’s profile
$Host This is a reference to the host of this runspace.
$null References to the null variable always return the null value. Assignments have no effect.
$PID Current process ID
$PROFILE Path to the active Profile
$PSCulture Culture of the current Windows PowerShell session
$PSEmailServer Variable to hold the Email Server. This can be used instead of the HostName parameter in the Send-MailMessage cmdlet.
$PSHOME Parent folder of the host application of this Runspace
$PSUICulture UI Culture of the current Windows PowerShell Session
$PSVersionTable Version information for current PowerShell session
$PWD PowerShell Working Directory
$StackTrace Detailed StackTrace for the last error
$TRUE Boolean True

The most important variable to familiarize yourself with is $_, which is the current object in the pipeline. This is very handy when you want to iterate through all the items that are being sent through the pipeline. The following example uses a Where-Object cmdlet to filter the output on every process that has been passed along the pipeline where the WorkingSet of the current process in the pipeline is greater than 100MB:

Get-Process | Where-Object {$_.WorkingSet -gt 100MB}

Note that this example uses the -gt comparison operator. PowerShell provides a set of comparison operators that need to be used when comparing objects. Table 14-3 lists the commonly used comparison operators.

TABLE 14-3: PowerShell Comparison Operators

Operator Description
-eq Equal to
-ne Not equal to
-gt Greater than
-ge Greater than or equal to
-lt Less than
-le Less than or equal to
-like Matches using the (*) wildcard character
-notlike Does not match using the (*) wildcard character
-match Matches a string using a regular expression
-notmatch Does not match a string using a regular expression
-contains Includes an identical value
-notcontains Does not include an identical value
-band Bitwise AND
-bor Bitwise OR
-bxor Bitwise XOR

It is often useful to know the makeup of a variable. You can discover the structure of an object using the Get-Member cmdlet. The following code demonstrates the creation of a new string object and then passing it into the Get-Member cmdlet:

PS > [String]$NewString | Get-Member

Knowing the members available on an object is very useful because you can start using them. For instance, in the following example I have a Statement string and want to return the last word. By inspecting the members on the $Statement variable, I can see that there is a Length property, an IndexOf method, and a Substring method, which I can use together to return the part of the string I’m interested in (code file: PS_StringManipulation01.PS1):

$Statement = "PowerShell Rocks"
$Statement.SubString($Statement.IndexOf(" ") + 1, $Statement.Length - $Statement.IndexOf(" ") - 1)

Advanced Functions

Once you have used a snippet of PowerShell code a few times you’ll probably want a way of refactoring that code into something that you can reuse easily. Fortunately, PowerShell 2.0 makes this very easy to accomplish with a new feature called advanced functions. I’m going to use a trivial example to demonstrate this functionality. Assume that you regularly filter the Get-Process cmdlet for all processes starting with SQL, and you want to be able to return all these processes with a simple cmdlet named Get-SQLProcess. The following code listing shows how this advanced function is created:

function Get-SQLProcess
{
<#
.SYNOPSIS
Retrieves processes starting with the term SQL

.DESCRIPTION
The Get-SQLProcess function uses the Get-Process Cmdlet to retrieve all the
processes that start with SQL from the local computer.
#>
    [CmdletBinding()]
    Param()
    Process
    {
    Get-Process SQL*
    }
}

After this code has been executed by the PowerShell environment, you’ll be able to call this function; and because it uses the CmdletBinding attribute, which differentiates the advanced function from a standard function, PowerShell will treat it like a compiled cmdlet — meaning it will have autocomplete and be listed among the available cmdlets. I’ve also written some simple documentation for the preceding example, so this function now has help as well. The following listing shows the documentation that is displayed for this function when Get-Help is called on it:

PS > Get-Help Get-SQLProcess
 
NAME
    Get-SQLProcess
 
SYNOPSIS
    Retrieves processes starting with the term SQL
 
SYNTAX
    Get-SQLProcess [<CommonParameters>]
 
DESCRIPTION
    The Get-SQLProcess function uses the Get-Process Cmdlet to retrieve all the 
    processes that start with SQL from the local computer.
 
RELATED LINKS
 
REMARKS
    To see the examples, type: "get-help Get-SQLProcess -examples".
    For more information, type: "get-help Get-SQLProcess -detailed".
    For technical information, type: "get-help Get-SQLProcess -full".

Modules

Modules are another new feature introduced in PowerShell 2.0. Before modules were available, developers who wanted to introduce new functionality into PowerShell were required to use snap-ins, which were created in the C# programming language, compiled, and then imported into the PowerShell host. This was difficult and required the assistance of an experienced C# developer to introduce new functionality. Modules are designed to make this easier, as a module is a package that can contain members such as cmdlets, providers, functions, variables, aliases, and so on. There are four types of module, described in the following sections.

Script Modules

A PowerShell script that has a .PSM1 file extension is known as a script module. Modules contain PowerShell script that can be shared within an organization and, if required, further afield. Adding advanced functions as shown in the last section enables the script author to create cmdlet-like functionality and share it.

Script modules are the most accessible way to create a module because any valid PowerShell script can simply be saved into a file with a .PSM1 extension and then be used as any other module type.
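As a minimal sketch, you could save the Get-SQLProcess function from the previous section into a script module and import it; the C:\Scripts\MyTools.psm1 path here is just an example:

# Save a function into a script module file (path is an example)
Set-Content -Path C:\Scripts\MyTools.psm1 `
    -Value 'function Get-SQLProcess { Get-Process SQL* }'

# Import the module and call the function like any other cmdlet
Import-Module C:\Scripts\MyTools.psm1
Get-SQLProcess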

Binary Modules

A binary module contains compiled .NET code and is built into an assembly (.dll). This is essentially a replacement for the snap-in functionality provided in PowerShell 1.0. The disadvantage of binary modules is that the assistance of an experienced C# developer is required to create them. However, if significant intellectual property is contained within the module, this may be the best approach because the code can be obfuscated before it is distributed.

Manifest Modules

Manifest modules are used to describe the contents of a module. They can contain the prerequisites (PowerShell version, .NET Framework version, etc.) and processing directives such as scripts, formats, and types. Restrictions can be applied, such as which members of the module to export, which is useful when creating a clean API for others to use. By convention, manifest files have a .psd1 extension, and formatting and type files have a .ps1xml extension.
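The following is a sketch of creating a manifest with the New-ModuleManifest cmdlet; the path, version, author, and description values are examples only, and in PowerShell 2.0 you may be prompted for any mandatory values you omit:

New-ModuleManifest -Path C:\Scripts\MyTools.psd1 `
    -ModuleToProcess "MyTools.psm1" `
    -ModuleVersion "1.0" `
    -Author "Your Name" `
    -Description "Helper functions for SQL Server troubleshooting"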

Dynamic Modules

Dynamic modules are created on demand using the New-Module cmdlet. These modules live in memory only for the session in which the PowerShell host is active, and because of this transient nature they are not exposed through the Get-Module cmdlet.
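For example, the following creates a temporary module from a script block; piping it to Import-Module makes it visible to Get-Module for the rest of the session (the TempTools name is an example):

# Create an in-memory module; it disappears when the session ends
New-Module -Name TempTools -ScriptBlock {
    function Get-SQLProcess { Get-Process SQL* }
} | Import-Module

Get-SQLProcess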

Working with Modules

To identify the modules available on your system, you can use the Get-Module cmdlet with the -ListAvailable parameter, which returns a list of all the modules that can be imported into the session. The following example shows the SQL modules available on my system:

PS > get-module -listavailable sql*
 
ModuleType Name                      ExportedCommands
---------- ----                      ----------------
Manifest   SQLASCMDLETS              {}
Manifest   SQLPS                     {}

Once you have identified a module that you would like to use, you can import it using the Import-Module cmdlet, providing either the name of the module, if it is listed in the available modules, or the full path of the module. When importing a module, there is a parameter called -DisableNameChecking that prevents PowerShell from warning about command verbs that are not on the predefined, approved list from the PowerShell product team.

SQL Server 2012 comes with two modules, SQLPS and SQLASCMDLETS; the following example script imports both modules into the active session:

Import-Module SQLPS,SQLASCMDLETS -DisableNameChecking

I like to inspect the cmdlets that are available when importing a module for the first time so that I can find out what I can do with the module. This is possible using the Get-Command cmdlet with the -module parameter to filter the cmdlets listed to only those for the given module. The following example shows the cmdlets available within the SQLASCMDLETS module:

PS SQLSERVER:> Get-Command -Module SQLASCMDLETS
 
CommandType     Name 
-----------     ---- 
Cmdlet          Add-RoleMember 
Cmdlet          Backup-ASDatabase 
Cmdlet          Invoke-ASCmd 
Cmdlet          Invoke-ProcessCube 
Cmdlet          Invoke-ProcessDimension 
Cmdlet          Invoke-ProcessPartition 
Cmdlet          Merge-Partition 
Cmdlet          New-RestoreFolder 
Cmdlet          New-RestoreLocation 
Cmdlet          Remove-RoleMember 
Cmdlet          Restore-ASDatabase

If you no longer want to reference a module imported into a session, you can remove it using the Remove-Module cmdlet, providing the name of the module you wish to remove.

Signing PowerShell Scripts

Modules are designed to be shared and should therefore be digitally signed so that users who have locked down their computer with Set-ExecutionPolicy will still be able to execute the script — unless the computer is locked down to Restricted, in which case no scripts can be executed. Signing a script is an involved process requiring code-signing certificates and is outside the scope of this book, but you can obtain more information from TechNet magazine at the following link: http://technet.microsoft.com/en-us/magazine/2008.04.powershell.aspx.
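To give you a flavor of what is involved, once a code-signing certificate is installed you can sign a script with the Set-AuthenticodeSignature cmdlet. This sketch assumes a code-signing certificate already exists in your personal certificate store, and the module path is an example:

# Pick up the first code-signing certificate from the personal store
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1

# Attach the digital signature to the script file
Set-AuthenticodeSignature -FilePath C:\Scripts\MyTools.psm1 -Certificate $cert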

Working Remotely

In the first version of PowerShell, users were constrained to working on the local computer. This was fine if you had a single computer to manage, but many administrators work with multiple servers in complex computing environments, and having to remote onto each computer in order to use PowerShell is a major limitation. This was addressed in version 2.0 with PowerShell remoting, which is built on WinRM. WinRM is a Windows service that runs on the target computer and allows a remote computer running PowerShell 2.0 to communicate with it and execute PowerShell 2.0 scripts on it from the local computer.

From an administrator’s perspective, this opens up a wealth of possibilities — from the simplicity of now having a single management computer from which to operate, to being able to construct a full monitoring framework within PowerShell.

To set up PowerShell remoting you must have PowerShell 2.0 installed on all the participating computers. There are a couple of steps that you must perform to have everything running:

1. On the target computer, run PowerShell as Administrator (this step won’t work if you are not running with elevated privileges). Run the Enable-PSRemoting -force cmdlet to set up the WinRM (Windows Remote Management) service and the firewall configuration to allow remoting commands to pass through:
PS WSMan:\localhost\Client> Enable-PSRemoting -force
WinRM already is set up to receive requests on this machine.
WinRM has been updated for remote management.
Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.
2. On the target computer, ensure that the computer from which you’ll be executing scripts is a trusted host by setting the TrustedHosts item, as shown here:
PS C:\> cd wsman:\localhost\client
PS WSMan:\localhost\Client> Set-Item TrustedHosts SQL2012 -Force
PS WSMan:\localhost\Client> Get-Item TrustedHosts
   WSManConfig: Microsoft.WSMan.Management\WSMan::localhost\Client

Name                      Value
----                      -----
TrustedHosts              SQL2012

After completing these steps you can run remote scripts. You can do this interactively by creating a session with the New-PSSession cmdlet and then entering the remote session with the Enter-PSSession cmdlet. Alternatively, you can invoke scripts on the remote machine with the Invoke-Command cmdlet. I like to test the remote session using Invoke-Command and request the hostname to ensure that I’ve connected to the remote computer correctly, as shown in the following example:

$sessions = New-PSSession -ComputerName SQL2012
Invoke-Command -Session $sessions -ScriptBlock {Hostname}
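You can also work interactively on the remote computer with Enter-PSSession. The following sketch runs a command against the same SQL2012 host; the prompt changes to show the remote computer name while the session is active:

Enter-PSSession -ComputerName SQL2012
Get-EventLog application -Newest 10
Exit-PSSession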

What’s New in SQL Server 2012

Prior to SQL Server 2012, the SQL Server team shipped a PowerShell mini-shell called SQLPS. This has changed in SQL Server 2012: SQLPS is now a standard PowerShell 2.0 host that imports the SQLPS module. This makes life a lot easier if you are already using a standard PowerShell host, because you can simply import the SQLPS and SQLASCMDLETS modules to gain all the SQL Server provider functionality within another PowerShell host.

SQL Server 2012 ships with the following 40 cmdlets.

SQLPS Cmdlets:

  Add-SqlAvailabilityDatabase
  Add-SqlAvailabilityGroupListenerStaticIp
  Backup-SqlDatabase
  Convert-UrnToPath
  Decode-SqlName
  Disable-SqlHADRService
  Enable-SqlHADRService
  Encode-SqlName
  Invoke-PolicyEvaluation
  Invoke-Sqlcmd
  Join-SqlAvailabilityGroup
  New-SqlAvailabilityGroup
  New-SqlAvailabilityGroupListener
  New-SqlAvailabilityReplica
  New-SqlHADREndpoint
  Remove-SqlAvailabilityDatabase
  Remove-SqlAvailabilityGroup
  Remove-SqlAvailabilityReplica
  Restore-SqlDatabase
  Resume-SqlAvailabilityDatabase
  Set-SqlAvailabilityGroup
  Set-SqlAvailabilityGroupListener
  Set-SqlAvailabilityReplica
  Set-SqlHADREndpoint
  Suspend-SqlAvailabilityDatabase
  Switch-SqlAvailabilityGroup
  Test-SqlAvailabilityGroup
  Test-SqlAvailabilityReplica
  Test-SqlDatabaseReplicaState

SQLASCMDLETS Cmdlets:

  Add-RoleMember
  Backup-ASDatabase
  Invoke-ASCmd
  Invoke-ProcessCube
  Invoke-ProcessDimension
  Invoke-ProcessPartition
  Merge-Partition
  New-RestoreFolder
  New-RestoreLocation
  Remove-RoleMember
  Restore-ASDatabase

Of these cmdlets, 22 are for managing the new High Availability features and 11 are for Analysis Services, neither of which I’m covering in this chapter. The other notable cmdlets from a troubleshooting and performance perspective are for backup and restore. SQL Server backup and restore cmdlets are a welcome addition to SQL Server 2012; previously, as a script author you had two options:

  • Use Invoke-Sqlcmd and write T-SQL that you execute from a PowerShell script.
  • Load the SMO objects and use the backup and restore functionality provided through the SMO library. This was my preferred approach, but it required a significant scripting effort, somewhere in the neighborhood of 20–50 lines of script to handle a typical backup or restore scenario.

Backup-SqlDatabase

The following script shows how simple database backups can be using the SQL Server provider and the new Backup-SqlDatabase cmdlet to iterate through all the databases, creating a folder for each one and then generating a backup named with a timestamp. For Get-ChildItem to return database objects, run the script from the Databases node of the SQL Server provider (code file: PS_SQLBackup01.PS1):

foreach($database in (Get-ChildItem))
{
    $dbName = $database.Name
    $timeStamp = Get-Date -FORMAT yyyyMMddHHmmss
    $backupFolder = "c:\backups\$dbName"

    if((Test-Path $backupFolder) -eq $False)
    {
        New-Item -type directory -path $backupFolder
    }

    Backup-SqlDatabase -Database $dbName -BackupFile "$backupFolder\$dbName-$timeStamp.bak"
}
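Because the script relies on Get-ChildItem returning database objects, set your location to the Databases node of the SQL Server provider before running it. The following sketch assumes a default instance on the local machine and an example script path:

Import-Module SQLPS -DisableNameChecking
Set-Location SQLSERVER:\SQL\localhost\DEFAULT\Databases
& C:\Scripts\PS_SQLBackup01.PS1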

Restore-SqlDatabase

The new cmdlets also make restoring a database very straightforward. The following script demonstrates how to restore the Adventure Works database:

Restore-SqlDatabase -Database AdventureWorks `
-BackupFile "C:\Backups\AdventureWorks\AdventureWorks-20120220005537.bak"

You may notice the “`” (backtick) character after AdventureWorks. This is not a mistake or a typo; it is a continuation character, which means that you can type a line in the PowerShell command line, press Return, and PowerShell will continue accepting input for the previous line. This is very useful when single statements become too long to read comfortably.

More than 40 parameters are available for Restore-SqlDatabase, enabling the creation of some very complex restore scripting. Anything that is possible within the SQL Server Management Studio user interface or through T-SQL is available here, with the added benefit of simple file management. This makes complex restores using multiple files, such as restoring many log files, a relative breeze.
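For example, a restore that applies a transaction log backup on top of a full backup might look like the following; the file names are hypothetical, and the -NoRecovery and -RestoreAction parameters control the restore sequence:

Restore-SqlDatabase -Database AdventureWorks -NoRecovery `
-BackupFile "C:\Backups\AdventureWorks\AdventureWorks-Full.bak"

Restore-SqlDatabase -Database AdventureWorks -RestoreAction Log `
-BackupFile "C:\Backups\AdventureWorks\AdventureWorks-Log.trn"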

USING POWERSHELL TO INVESTIGATE SERVER ISSUES

PowerShell provides full integration with the Windows Management Instrumentation (WMI) framework. WMI is a very powerful way to query the Windows system for detailed information. Traditionally, access to WMI has not been easy, and was mainly achieved via a programming interface. PowerShell opens up access to WMI, making it a useful resource for your troubleshooting efforts.

Interrogating Disk Space Utilization

In this example, you are going to see how to use the WMI Win32_LogicalDisk class to retrieve information about disk space utilization on the server. Initially, the raw information provided by the class is returned, and then the data is used to produce a detailed analysis of the disk space available:

Get-wmiobject win32_logicaldisk

Running the Get-WmiObject cmdlet and providing the win32_logicaldisk class returns the following results on my computer:

DeviceID     : C:
DriveType    : 3
ProviderName :
FreeSpace    : 105582891008
Size         : 128742060032
VolumeName   :
 
DeviceID     : D:
DriveType    : 5
ProviderName :
FreeSpace    : 0
Size         : 45471744
VolumeName   : VBOXADDITIONS_4.

The default list view of the results is not very readable for this output, but you can improve it by formatting the output as a table:

Get-wmiobject win32_logicaldisk | format-table -autosize

On my computer this statement produces the following output:

DeviceID DriveType ProviderName    FreeSpace         Size VolumeName
-------- --------- ------------    ---------         ---- ----------
C:               3              105582891008 128742060032
D:               5                         0     45471744 VBOXADDITIONS_4.

You may want to filter this information so that you are only displaying fixed disks. This can be achieved by using the -Filter parameter to return fixed disks only:

Get-WmiObject Win32_logicaldisk -Filter  "DriveType = 3" | Format-Table -autosize

This is getting better, but the results are being presented with the free space and the size in bytes. I can’t remember the last time when bytes were a meaningful unit of measure for a hard disk, so you can modify the script to format the output in GB (code file: PS_DiskInfo01.PS1):

$diskinfo = Get-WmiObject Win32_logicaldisk -Filter "DriveType = 3"
$diskinfo | foreach-object {$_.FreeSpace = $_.FreeSpace / 1GB; `
$_.Size = $_.Size / 1GB}
$diskinfo | format-table -autoSize

We are now presented with output showing the total capacity of each drive and the available space in GB, as shown when run on my system below:

DeviceID     : C:
DriveType    : 3
ProviderName :
FreeSpace    : 388
Size         : 466
VolumeName   : OS
 
DeviceID     : H:
DriveType    : 3
ProviderName :
FreeSpace    : 303
Size         : 466
VolumeName   : DATA

Interrogating Current Server Activity

When checking whether a remote computer is responsive, a good first place to investigate is the ping response. Using the win32_pingstatus WMI class, this is relatively straightforward. The data returned from this class is structured and can then be used within other parts of the investigation or recorded back to disk in a file, e-mailed to a recipient, or stored within a database. The following example checks the status code (0 if successful) of a ping test on a server called SQL2012 (replace SQL2012 with a machine name on your network):

Get-WmiObject win32_pingstatus -Filter "Address='SQL2012'" `
| Select-Object statuscode

Assuming a status code of 0 is returned from the server, you can now turn your attention to services running on the computer. To find out which services are running, execute the following:

Get-Service | Where-Object { $_.Status -eq 'Running' }

The Get-Service cmdlet is very handy for simple service queries. Unfortunately, it doesn’t provide access to every property of the service. I often find it useful to see the service state and the start mode. Fortunately, this information is available on the Win32_Service WMI class, and the following script shows an example of querying this class to find services that start automatically with Windows but are currently stopped. This may be useful if a server is not behaving as expected and you suspect that a service that is usually started at boot time is currently not running.

Get-WmiObject -Class Win32_Service -Property Name,State,StartMode `
-Filter "StartMode='Auto' AND State='Stopped'" `
| Select-Object -Property Name,StartMode,State

Finding out which processes are active is useful during troubleshooting. Often, a process has run away with CPU or memory and is therefore putting pressure on the system, causing other vital services to be starved of resources. Once again PowerShell jumps to the rescue with Get-Process, which returns process information. A busy system will have many processes running, so I like to run two queries — the first finds the top 10 processes by CPU utilization, and the second finds the top 10 processes by memory utilization. The following example queries the top 10 processes by CPU utilization and displays the results from my SQL2012 Virtual Machine:

PS C:> get-process | Sort-Object CPU -Descending | Select-Object -First 10
 
Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    388      50    41052      56776   222    48.50   3660 iexplore
    783      53    41408      58104   287    28.92   2452 explorer
    889      78   101688     144016   548    27.94   2208 Ssms
    914      47   196188     216300   639    27.03   2252 powershell
     57       7     1352       5072    70    12.00   3580 notepad
    637      55    58648      73928   621     7.56   3028 SQLPS
    564      43    16048      29524   177     5.97   3396 iexplore
     39       6     2028       5036    57     2.97   3008 conhost
    223      17     7816      17984   111     1.02   3968 notepad
     79       9     1392       4480    71     0.19   2632 VBoxTray

This isn’t a busy system, so the usual suspects such as Internet Explorer (iexplore) and Windows Explorer (explorer) are high in the list. The following example shows a similar query, this time querying by memory utilization:

PS C:> get-process | Sort-Object WS -Descending | Select-Object -First 10
 
Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    639      88   444404     308232  -502            1268 sqlservr
    816      47   200100     220736   640    28.13   2252 powershell
    818      77    99328     143800   547    28.22   2208 Ssms
    618      52   101812      74352  1906            1452 ReportingServicesService
    637      55    58648      73928   621     7.56   3028 SQLPS
    784      54    41748      58472   288    29.61   2452 explorer
    388      50    40948      56748   221    48.52   3660 iexplore
   1690      54    75844      54556   674            1312 msmdsrv
   1061      52    28388      35400   432             848 svchost
    311      26    31896      33084   534            1336 SMSvcHost

In this case, the processes consuming the largest working set of memory are different from the processes consuming the most CPU. Finding processes that are using excessive CPU or memory gives you a scope for further investigation.

Interrogating for Warnings and Errors

When a server is not behaving as expected, an error has often occurred. Inspecting the Windows Event Log is a good place to start an investigation into what may have gone wrong. PowerShell has full support for the Windows Event Log. The Get-EventLog cmdlet lists all the items within an Event Log. The following example shows the list of items within the application Event Log filtered to those whose source is “MSSQLSERVER” and are an error:

Get-Eventlog application `
| Where-Object {$_.Source -eq "MSSQLSERVER" -and $_.EntryType -eq "Error"}

SQL Server also has an Error Log that often contains more detailed information pertaining to SQL Server–specific issues. The following example shows how to query the SQL Server Error Log filtering the results to just the errors (code file: PS_ReadSQLErrorLog01.PS1):

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
$server = New-Object "Microsoft.SqlServer.Management.Smo.Server" "(local)"
$server.ReadErrorLog() | Where-Object {$_.Text -like "Error:*"}

My machine has a single default SQL Server instance. If you are using a named instance, replace “(local)” with the name of your SQL Server instance.

Using the SQL Server Management Objects it is also possible to quickly filter the available jobs, identifying those that failed the last time they ran and are currently enabled. This is a useful script to run on a regular basis because SQL jobs often contain important management and administration tasks that are automated, and quickly identifying a failed job will save significant time diagnosing issues later. The following script example shows how to query the SMO objects for SQL Server jobs whose last run outcome failed and are still enabled (code file: PS_ReadFailedJobs01.PS1):

[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
$server = New-Object "Microsoft.SqlServer.Management.Smo.Server"  "(local)"
$server.jobserver.jobs | where-object {$_.lastrunoutcome -eq "Failed" -and $_.isenabled -eq $TRUE}

Interrogating Server Performance

PowerShell 2.0 introduced integration with Windows performance counters. This means that you can quickly view all the performance counters on your machine:

Get-Counter -listSet * | Select-Object -ExpandProperty Paths

You could then find all of the SQL Server–specific counters:

Get-Counter -listSet * | Select-Object -ExpandProperty Paths | where-object {$_ -like "*SQL*"}

It is also possible to get a sample of a performance counter:

Get-Counter 'Processor(*)\% Processor Time'

You can even leave PowerShell listening for the counter and collecting samples until you press Ctrl+C:

Get-Counter 'Processor(*)\% Processor Time' -continuous

The advantage of configuring your performance counters through PowerShell is that you can create a gold set of performance counters that you are interested in monitoring on every computer in your environment and then save the script to run on each computer, ensuring that all the computers capture the same statistics.
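The following is a minimal sketch of such a gold set; the counter paths are common Windows counters but should be verified on your machines, and the output path is an example. Export-Counter writes the samples to a .blg file that you can open later in Performance Monitor:

$goldSet = '\Processor(_Total)\% Processor Time',
           '\Memory\Available MBytes',
           '\PhysicalDisk(_Total)\Avg. Disk sec/Read'

# Capture a sample every 15 seconds for an hour and save the results to disk
Get-Counter -Counter $goldSet -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path C:\PerfLogs\GoldSet.blg -FileFormat blg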

PROACTIVELY TUNING SQL SERVER PERFORMANCE WITH POWERSHELL

Now that you’ve seen some of the building blocks of PowerShell it’s time to see how easy it is to combine all this together and build some more functional scripts. Remember, everything can be broken back down to cmdlets and variables; so if you find that your script is getting too complex and difficult to work with, break it back down and build it back up.

In this next section we will examine some scripts that can help with SQL Server tuning as we look at index maintenance, managing disk space, and scripting out DDL. To round off the chapter, we’ll look at how PowerShell scripts can be scheduled to run automatically.

Index Maintenance

To ensure maximum performance, index maintenance is an important job when managing a SQL Server. Traditionally, a DBA may have written some T-SQL to perform index maintenance, but this is not a simple task, and it is probably not one best suited to T-SQL because of the procedural nature of the work. PowerShell offers an elegant solution to index maintenance in collaboration with the SQL Server SMO library. The following script shows how a fairly comprehensive index maintenance routine can be performed in just a few lines of PowerShell. The script iterates through all the indexes within all the tables of a user database and, based on the fragmentation level of each index, performs one of three tasks (code file: PS_ManageFragmentation01.PS1):

  • If the index is less than 5% fragmented, then do nothing because the performance benefit of maintaining the index on this occurrence is negligible.
  • If the index is between 5% and 30% fragmented, then perform an index reorganization.
  • If the index is greater than 30% fragmented, then perform an index rebuild.
[string] $ServerName = $args[0]
[string] $TargetDatabaseName = $args[1]

[Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
$targetDB = $server.Databases[$TargetDatabaseName]

foreach ($table in $targetDB.Tables)
{
    foreach ($index in $table.Indexes)
    {
        #AverageFragmentation is reported as a percentage (0-100)
        $fragmentation = $index.EnumFragmentation()
        $averageFragmentation = $fragmentation.Rows[0].AverageFragmentation

        #Less than 5% fragmented: leave the index alone
        if($averageFragmentation -lt 5)
        {
            continue
        }

        #Between 5% and 30% fragmented: reorganize
        if($averageFragmentation -ge 5 -and $averageFragmentation -lt 30)
        {
            $index.Reorganize()
            continue
        }

        #30% or more fragmented: rebuild
        $index.Rebuild()
    }
}
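The script takes the server name and the target database name as positional arguments, so you might invoke it as follows (the instance and database names are examples):

.\PS_ManageFragmentation01.PS1 "localhost" "AdventureWorks"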

Managing Disk Space Utilization of Backups

Working with the file system is very straightforward with PowerShell. In the following example, I have created a small script that removes transaction log backups that are older than two days, differential backups that are older than eight days, and full backups that are older than 91 days. By splitting the Get-ChildItem query into three queries based on file extension and last write time, it was simply a matter of setting the number of days to subtract from the current date to define the retention period for each backup type (code file: PS_DeleteOldBackups01.PS1):

Get-ChildItem .\Backups -include *.trn -recurse `
      | Where-Object { $_.lastwritetime -lt (Get-Date).AddDays(-2) } `
      | Remove-Item

Get-ChildItem .\Backups -include *.dif -recurse `
      | Where-Object { $_.lastwritetime -lt (Get-Date).AddDays(-8) } `
      | Remove-Item

Get-ChildItem .\Backups -include *.bak -recurse `
      | Where-Object { $_.lastwritetime -lt (Get-Date).AddDays(-91) } `
      | Remove-Item

Extracting DDL Using SMO

A misplaced index or a poorly defined table are two examples of how changing the DDL of a database can have heavy performance implications. With this in mind, I created the following script, which can be scheduled to run once a day and will extract all the DDL objects from the database and store them on disk against the day on which they were extracted. If poor performance is identified, it’s easy to compare the files between days to identify whether any DDL changes caused the performance of the database to degrade.

This is a long script but it combines all the concepts and techniques discussed so far in this chapter to dump all the database objects to file (code file: PS_ExtractDDL01.PS1):

#Helper function to script the DDL object to disk
function Write-DDLOutput ($filename, $object)
{
    New-Item $filename -type file -force | Out-Null

    #Specify the filename
    $ScriptingOptions.FileName = $filename

    #Assign the scripting options to the Scripter
    $Scripter.Options = $ScriptingOptions

    #Script the object
    $Scripter.Script($object)
}

#Load the SMO assembly
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null

#Create all the global vars we need
$Server = New-Object ("Microsoft.SqlServer.Management.Smo.Server")
$Scripter = New-Object ("Microsoft.SqlServer.Management.Smo.Scripter")
$ScriptingOptions = New-Object ("Microsoft.SqlServer.Management.SMO.ScriptingOptions")
$Scripter.Server = $Server

#Specifies the root folder that we'll store the scripts in. This will probably become a param in future
$RootBackupFolder = "C:\SqlBackups\DDL"

#Get the day of the week so that we can create a folder for each day
$Today = [System.DateTime]::Today.DayOfWeek

#Store today's backup folder
$DDLBackupFolder = Join-Path -Path $RootBackupFolder -ChildPath $Today

#Check if today's folder exists
if ([System.IO.Directory]::Exists($DDLBackupFolder))
{
    #If it does, delete its contents
    Remove-Item (Join-Path -Path $DDLBackupFolder -ChildPath *) -Recurse
}
else
{
    #Otherwise create it
    [System.IO.Directory]::CreateDirectory($DDLBackupFolder) | Out-Null
}

#Setup the scripting options
$ScriptingOptions.AppendToFile = $true
$ScriptingOptions.FileName = $filename
$ScriptingOptions.ToFileOnly = $true
$ScriptingOptions.ScriptData = $false

#Loop through all the user databases to script them out
foreach ($database in ($Server.databases | where {$_.IsSystemObject -eq $false -and $_.IsDatabaseSnapshot -eq $false}))
{
    $databaseBackupFolder = Join-Path -Path $DDLBackupFolder -ChildPath $database.Name

    #This will be the database create script
    Write-DDLOutput (Join-Path -Path ($databaseBackupFolder) -ChildPath ($database.Name + ".sql")) $database

    $ProgrammabilityBackupFolder = Join-Path -Path $databaseBackupFolder -ChildPath "Programmability"

    $DefaultsBackupFolder = Join-Path -Path $ProgrammabilityBackupFolder -ChildPath "Defaults"
    foreach ($default in $database.Defaults)
    {
        #Generate a filename for the default and script it
        Write-DDLOutput (Join-Path -Path ($DefaultsBackupFolder) -ChildPath ($default.Schema + "." + $default.Name + ".sql")) $default
    }

    #Create folders to store the functions in
    $FunctionsBackupFolder = Join-Path -Path $ProgrammabilityBackupFolder -ChildPath "Functions"
    $ScalarFunctionsBackupFolder = Join-Path -Path $FunctionsBackupFolder -ChildPath "Scalar-valued Functions"
    $TableValuedFunctionsBackupFolder = Join-Path -Path $FunctionsBackupFolder -ChildPath "Table-valued Functions"

    foreach ($function in $database.UserDefinedFunctions | where {$_.IsSystemObject -eq $false})
    {
        #Script the functions into folders depending upon type. We're only interested in scalar and table
        switch ($function.FunctionType)
        {
            scalar
            {
                #Generate a filename for the scalar function
                $filename = Join-Path -Path ($ScalarFunctionsBackupFolder) -ChildPath ($function.Schema + "." + $function.Name + ".sql")
            }
            table
            {
                #Generate a filename for the table-valued function
                $filename = Join-Path -Path ($TableValuedFunctionsBackupFolder) -ChildPath ($function.Schema + "." + $function.Name + ".sql")
            }
            default { continue }
        }

        #Script the function
        Write-DDLOutput $filename $function
    }

    $RulesBackupFolder = Join-Path -Path $ProgrammabilityBackupFolder -ChildPath "Rules"
    foreach ($rule in $database.Rules)
    {
        #Script the rule
        Write-DDLOutput (Join-Path -Path ($RulesBackupFolder) -ChildPath ($rule.Schema + "." + $rule.Name + ".sql")) $rule
    }

    #Create a folder to store the sprocs in
    $StoredProceduresBackupFolder = Join-Path -Path $ProgrammabilityBackupFolder -ChildPath "Stored Procedures"

    #Loop through the sprocs to script them out
    foreach ($storedProcedure in $database.StoredProcedures | where {$_.IsSystemObject -eq $false})
    {
        #Script the sproc
        Write-DDLOutput (Join-Path -Path ($StoredProceduresBackupFolder) -ChildPath ($storedProcedure.Schema + "." + $storedProcedure.Name + ".sql")) $storedProcedure
    }

    #Create folders to store the table scripts
    $TablesBackupFolder = Join-Path -Path $databaseBackupFolder -ChildPath "Tables"
    $TableIndexesBackupFolder = Join-Path -Path $TablesBackupFolder -ChildPath "Indexes"
    $TableKeysBackupFolder = Join-Path -Path $TablesBackupFolder -ChildPath "Keys"
    $TableConstraintsBackupFolder = Join-Path -Path $TablesBackupFolder -ChildPath "Constraints"
    $TableTriggersBackupFolder = Join-Path -Path $TablesBackupFolder -ChildPath "Triggers"

    #Loop through the tables to script them out
    foreach ($table in $database.Tables | where {$_.IsSystemObject -eq $false})
    {
        #Script the table
        Write-DDLOutput (Join-Path -Path ($TablesBackupFolder) -ChildPath ($table.Schema + "." + $table.Name + ".sql")) $table

        foreach ($Constraint in $table.Checks)
        {
            #Script the constraint
            Write-DDLOutput (Join-Path -Path ($TableConstraintsBackupFolder) -ChildPath ($table.Schema + "." + $table.Name + "." + $Constraint.Name + ".sql")) $Constraint
        }

        foreach ($index in $table.Indexes)
        {
            #Generate a filename for the index, keeping primary keys in their own folder
            switch ($index.IndexKeyType)
            {
                DriPrimaryKey
                {
                    $filename = Join-Path -Path ($TableKeysBackupFolder) -ChildPath ($table.Schema + "." + $table.Name + "." + $index.Name + ".sql")
                }
                default
                {
                    $filename = Join-Path -Path ($TableIndexesBackupFolder) -ChildPath ($table.Schema + "." + $table.Name + "." + $index.Name + ".sql")
                }
            }

            #Script the index
            Write-DDLOutput $filename $index
        }

        foreach ($trigger in $table.Triggers)
        {
            #Script the trigger
            Write-DDLOutput (Join-Path -Path ($TableTriggersBackupFolder) -ChildPath ($table.Schema + "." + $table.Name + "." + $trigger.Name + ".sql")) $trigger
        }
    }

    #Create folders to store the view scripts
    $ViewsBackupFolder = Join-Path -Path $databaseBackupFolder -ChildPath "Views"
    $ViewKeysBackupFolder = Join-Path -Path $ViewsBackupFolder -ChildPath "Keys"
    $ViewIndexesBackupFolder = Join-Path -Path $ViewsBackupFolder -ChildPath "Indexes"
    $ViewTriggersBackupFolder = Join-Path -Path $ViewsBackupFolder -ChildPath "Triggers"

    #Loop through the views to script them out
    foreach ($view in $database.Views | where {$_.IsSystemObject -eq $false})
    {
        #Script the view
        Write-DDLOutput (Join-Path -Path ($ViewsBackupFolder) -ChildPath ($view.Schema + "." + $view.Name + ".sql")) $view

        foreach ($index in $view.Indexes)
        {
            #Generate a filename for the index, keeping primary keys in their own folder
            switch ($index.IndexKeyType)
            {
                DriPrimaryKey
                {
                    $filename = Join-Path -Path ($ViewKeysBackupFolder) -ChildPath ($view.Schema + "." + $view.Name + "." + $index.Name + ".sql")
                }
                default
                {
                    $filename = Join-Path -Path ($ViewIndexesBackupFolder) -ChildPath ($view.Schema + "." + $view.Name + "." + $index.Name + ".sql")
                }
            }

            #Script the index
            Write-DDLOutput $filename $index
        }

        foreach ($trigger in $view.Triggers)
        {
            #Script the trigger
            Write-DDLOutput (Join-Path -Path ($ViewTriggersBackupFolder) -ChildPath ($view.Schema + "." + $view.Name + "." + $trigger.Name + ".sql")) $trigger
        }
    }
}

Scheduling Script Execution

There are two methods for scheduling script execution. The first is to use Windows Task Scheduler, which is useful if you don’t have SQL Server installed on the server from which you wish to execute the PowerShell script. You can simply add a new task to the Scheduler and execute PowerShell.exe, passing the script you want to execute as a parameter.
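For example, the program and arguments for the scheduled task might look like the following command line; the script path is hypothetical:

PowerShell.exe -ExecutionPolicy RemoteSigned -File "C:\Scripts\PS_SQLBackup01.PS1"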

For servers on which you have SQL Server 2008 R2 or later installed, you also have the option to execute PowerShell as a SQL Server Agent job. This is easily achieved by creating a new step for a job and selecting PowerShell from the Type drop-down box. You can then enter the PowerShell script into the Command text box.

Unfortunately, using PowerShell job steps in SQL Server 2008 R2 was not very useful because they called into PowerShell version 1.0, so many scripts and modules would not work properly. If you want to execute scripts written for PowerShell 2.0 from SQL Server 2008 R2, you are best off using the PowerShell.exe method described earlier. Fortunately, this is resolved in SQL Server 2012, which loads PowerShell 2.0.

The advantage of using the SQL Server job agent is that you may already have jobs running, and this approach enables you to manage all your jobs in one place. You can also use the logging functionality already built into the SQL Job engine to monitor the execution of the PowerShell script.

SUMMARY

This chapter introduced you to PowerShell’s core concepts and provided examples of how PowerShell can be used in troubleshooting and performance analysis scenarios. The scope of possibilities for using PowerShell in a SQL Server environment is almost endless; and where the product team hasn’t provided full functionality, the user community has filled the gap in the form of the SQLPSX and PSCX libraries on CodePlex.

You are now aware of how you can build your own library of useful functionality within a module, and you have the tools and knowledge to share these with other members of your team or the wider population.

PowerShell administration is still in its early stages, and PowerShell is being actively developed by the SQL Server team and the Windows team, as well as by other product groups. Now is the perfect time to begin using this rich shell and scripting environment and the trusted toolkit it provides, enabling you to be more efficient as a DBA and to maintain a calm and confident attitude when facing production issues.
