Chapter 2
PowerShell

The need for basic PowerShell skills is sprouting up everywhere in operating systems, such as Windows Server 2016, and in customized line-of-business applications. In fact, many configuration changes can be accomplished only via PowerShell. Throughout this book, you will be shown various scripts and commands. This chapter is not going to teach you everything there is to know about PowerShell; its goal is to provide you with enough background information to enable you to understand what is going on in the various commands and scripts you will see throughout this book and online. Being able to find commands and understand documentation will enable you to develop your own scripts and functions to automate your day-to-day responsibilities in a consistent and methodical way. This chapter will also prepare you with an excellent base of knowledge if you decide to dive into the exciting and complex world of programming with Windows PowerShell.

What Is PowerShell?

PowerShell was introduced in 2006. It reminded many people of the old DOS prompt because of its command-line interface (CLI), but PowerShell isn’t really a CLI. It is an object-oriented administrative automation engine (see Figure 2.1). PowerShell has a CLI, but PowerShell can also be the backend to a graphical user interface (GUI).


FIGURE 2.1 The Windows PowerShell console on Windows Server 2016

A great example of the flexibility and utility of PowerShell is how PowerShell can be hosted by other applications. Many line-of-business (LOB) applications are written specifically to act as wrappers around PowerShell. These applications require specific classes, modules, properties, and settings. If PowerShell gets updated to a version that is incompatible with the LOB application, that application may fail. You always want to check the manufacturer of any LOB software to ensure compatibility prior to upgrading.

Typically, a GUI doesn't offer every possible configuration setting. Many of the more advanced features, or the ones that Microsoft wanted tucked away, can be configured only by using the CLI. As operating systems and enterprise applications evolve, we are seeing an increased turn toward automation and configuration standards. This provides reproducibility; in other words, having configuration done via a script makes it easier to ensure uniform configurations throughout all of your systems. Scripts also provide excellent and detailed documentation of what configurations have been applied to systems, and if a system needs to be brought online quickly, you can use these scripts to automate much of that process.

Forward Compatible

PowerShell is forward compatible, which means that any script you may have created in an older version of PowerShell should run in a newer version. However, even though the modules and classes of PowerShell 1.0 are still included in PowerShell 5.0, newer operating systems may not use these old modules or classes. Older scripts may seem to run, but you may not have the pieces in the OS needed to provide a result. That means your scripts might not behave as expected or might simply refuse to run.

PowerShell Versions

PowerShell has 32-bit and 64-bit versions. Modern Microsoft operating systems are typically 64-bit; the 32-bit version exists for compatibility, such as when the shell is hosted inside a 32-bit application. PowerShell version 1 was 32-bit only. Later versions carry an (x86) designation for the 32-bit edition, which appears in the application name as Windows PowerShell (x86) or Windows PowerShell ISE (x86). When you run the PowerShell application, the title bar at the top of the screen displays the same name. Figure 2.2 shows the differences in the names.


FIGURE 2.2 The 32-bit and 64-bit versions of PowerShell

If you are using a 32-bit operating system, you can run only 32-bit applications. That means you will only be able to run the 32-bit version of PowerShell. On a 64-bit operating system, such as Windows Server 2016, you can use either one, but we strongly encourage you to use the 64-bit version whenever possible.

Running and Customizing PowerShell

If you have an operating system that uses User Account Control (UAC), such as Windows Server 2016, PowerShell will not open as an administrator. To run PowerShell with full administrative credentials, right-click the icon and select Run As Administrator from the shortcut menu. This change will be displayed in the title bar and the PowerShell CLI, as illustrated in Figure 2.3.


FIGURE 2.3 PowerShell’s Run As Administrator

Customizing the PowerShell Console

Few things are worse than spending countless hours debugging a script only to find out the problem was something silly, like a single quotation mark being replaced by a grave accent or a curly brace being mistaken for an open parenthesis. Each of these characters is used in different situations, and if you swap one for the other, you may get unexpected results, typically displayed as a bunch of error messages that can be difficult to understand. Changing the font and the font size makes it much easier to distinguish these characters.

To change the font, right-click the PowerShell window and select Properties. Then select the Font tab. Raster fonts seem to be particularly prone to confusion, so you may want to select a TrueType font.

You can also control the size of the shell window. Most people like to have a big window to work in, but they don't like horizontal scroll bars. You can go into the Layout tab and adjust the window size. You may notice there is a buffer size as well. You typically want the Width value for both Buffer Size and Window Size to be the same; this fills the width of the window without a horizontal scroll bar appearing. The exact values will differ, depending on the resolution. The Height values, however, don't need to be the same. In fact, a large height buffer gives you a vertical scroll bar, which means you will be able to scroll up and down further before the shell deletes previous lines.

Cutting and Pasting in PowerShell

You can also perform cut and paste operations in PowerShell, but they work a bit differently from what you might expect.

Note that when you copy, whatever is highlighted goes into the Clipboard. That means that if you drag your mouse across and get just the middle of several lines of text, that is the only text that will actually be copied. Be sure to select exactly what you want when you highlight, or you could get some unexpected results.

If you want to enable a more traditional method of selection, where you can grab entire lines and not just stacked sections of lines, go into the properties of your PowerShell console window and on the Options tab, select Enable Line Wrapping Selection. If copy and paste does not work at all, go into the properties of your PowerShell window and make sure that QuickEdit mode is enabled.

Using PowerShell Integrated Scripting Environment (ISE)

Regular PowerShell looks like the old DOS prompt. PowerShell ISE has a console window too, but in ISE you also get a script window, where you can load and edit scripts and text files. Depending on the OS and the screen resolution, additional add-ons may be visible. You can select View to see what you can make visible. You can also select Add-ons and make different selections, depending on your needs. Refer to Figure 2.4 to look at the default Windows Server 2016 PowerShell ISE layout with the Show Script Pane view option selected.


FIGURE 2.4 PowerShell ISE on Windows Server 2016

Exploring the Command Add-On Pane

One of the most popular add-ons usually included with the ISE is the Command add-on. This gives you an alphabetized reference for commands. These commands are typically called cmdlets in PowerShell. These cmdlets are included with the various modules you may have loaded on your local system.

Windows Server 2016 has several modules installed by default. You will also typically get additional modules when you install different roles or install additional applications and services. You can also get modules provided by third parties who have written their own. Figure 2.5 shows just the Command add-on pane in PowerShell ISE with “All” selected.


FIGURE 2.5 Command add-on pane with All selected

When you select a module, only the cmdlets for that module will be shown in the command pane.

Another handy add-on is the Script pane. This lets you have scripts loaded, and you can edit and run entire scripts without the need to cut and paste. You can also highlight particular lines of the script and execute just the lines that are highlighted.

To run the entire contents of the Script pane, you can click the green Play button on the top of the screen or you can simply press F5. If you want to execute just the lines you have highlighted, you can press the Play button with the small text document behind it or press F8. If your script needs to be stopped, you can press the red square or you can press Ctrl+Break.

Other options are available for ISE. If you select Tools and then Options, you can do tons of customization of the fonts and have extensive control over the colors of text. When you select Manage Themes, you can select between several defaults. You can also import and export themes for further customization. See Figure 2.6 to examine the ISE Colors and Fonts tab.


FIGURE 2.6 ISE Colors and Fonts tab

On the General Settings tab of the Options dialog box, as shown in Figure 2.7, you have several additional options for script behavior, including outlining, line numbers, detecting duplicate files, and offering to save the script prior to running it. You can also modify the location of the Script pane between the top, the right, and maximized.


FIGURE 2.7 ISE General Settings

The middle of the General Settings tab is where you can configure IntelliSense. When IntelliSense detects that you are typing a command, it will autofill and allow you to quickly select between the different commands available that match what you have typed. You can adjust the IntelliSense timeout, which controls how long the IntelliSense suggestion is displayed. The default is just 3 seconds, and the range offered in the dialog box is between 1 and 5 seconds. You can set the timeout to other values using the following command:

$host.PrivateData.IntellisenseTimeoutInSeconds = X

where X is replaced with the number of seconds to display the IntelliSense suggestion.
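You can read the current value from the same $host.PrivateData property before you change it. For example:

```powershell
# Display the current IntelliSense timeout (the default is 3 seconds)
$host.PrivateData.IntellisenseTimeoutInSeconds

# Extend it to 5 seconds
$host.PrivateData.IntellisenseTimeoutInSeconds = 5
```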

Setting Up PowerShell ISE Profiles

Re-creating your favorite scripting environment every time you launch PowerShell or PowerShell ISE can be difficult. A way to retain these settings, session by session, is to use PowerShell profiles.

A PowerShell profile is a script that executes every time PowerShell starts. A profile can load commands, functions, variables, snap-ins, aliases, modules, and PowerShell drives, along with any other session-specific elements you want available in each and every session.

These PowerShell profiles are stored as files. You can have several profile files, and you can even have profiles that are specific to a particular host. There are several that can be associated with your session, and they are listed in precedence order. The first profile listed has the highest precedence. These profiles are stored in various locations. Here are the basic profile file paths:

  • Current User, Current Host: $Home\[My ]Documents\WindowsPowerShell\Profile.ps1
  • Current User, All Hosts: $Home\[My ]Documents\Profile.ps1
  • All Users, Current Host: $PsHome\Microsoft.PowerShell_profile.ps1
  • All Users, All Hosts: $PsHome\Profile.ps1

These paths use two variables:

  • $Home: This stores the current user’s home directory location.
  • $PsHome: This points to the PowerShell installation directory.

Typically, the Current User, Current Host profile is what is known as your PowerShell profile. The paths for these profiles are stored in the $Profile automatic variable. You can use the $Profile variable to look at a path, and you can use the $Profile variable in a command.

To view the paths of all the profiles associated with the current session, use the following command:

$Profile | Get-Member -Type NoteProperty

You can open the file that $Profile points to in Notepad using the following command:

Notepad $profile

You can also test whether a given profile file exists on the local computer by entering this:

Test-Path $profile.AllUsersAllHosts

To create a profile without overwriting an existing profile, use the following:

if (!(Test-Path $Profile)) { New-Item -Type File -Path $Profile -Force }

The if statement checks whether a profile already exists at that path. If it doesn't, the command creates a new, empty profile for you.

If you want to create a new All Users profile, you need to run PowerShell using the Run As Administrator option. This is done by right-clicking the PowerShell icon and selecting Run As Administrator.

Editing Profiles

Profiles are just text files. You can edit them in any text editor that doesn’t embed extra information. Notepad is a perfectly good editor for PowerShell profiles. To open the current user’s profile in something like Notepad, enter the following:

Notepad $Profile

If you want to edit other profiles, you just specify the profile name. For example, to open the profile that is used for all of the users on all of the host applications, you can enter the following:

Notepad $profile.AllUsersAllHosts

Initially, the profile will be blank.

Maybe you want a customized prompt that tells you the current computer name and the current path. PowerShell draws its prompt by calling a function named prompt, so you can redefine that function in your profile:

function prompt { $env:COMPUTERNAME + "\" + (Get-Location) + "> " }

If you want to open PowerShell using Run As Administrator automatically, you can use the following:

Function Open-AsAdmin {Start-Process PowerShell -Verb RunAs}

Once you have made the appropriate changes, you simply save the profile file and then restart PowerShell.
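Putting the pieces from this section together, a minimal profile might look like the following sketch. The np alias and the exact prompt format are just examples, not required settings:

```powershell
# Sample Profile.ps1

# PowerShell calls a function named prompt to draw the prompt;
# this version shows COMPUTERNAME\current-path>
function prompt { "$env:COMPUTERNAME\$(Get-Location)> " }

# Launch a new elevated PowerShell window on demand
function Open-AsAdmin { Start-Process PowerShell -Verb RunAs }

# A personal alias: np opens Notepad
Set-Alias -Name np -Value notepad.exe
```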

Setting Up Execution Policies

You don't want to allow just anyone to execute scripts or run scripts from unknown or untrusted sources. Execution policies specify whether a user can load configuration files, such as profiles. They also determine whether you are even allowed to run scripts, which scripts you can run, and whether the scripts have to be digitally signed with a digital certificate before they are allowed to run. Policies are configured with the Set-ExecutionPolicy cmdlet.

The execution policy can be set for a particular PowerShell session, for the current user, or for the local machine. The execution policy does not need to be set in the PowerShell profile because its setting is stored in the Registry. However, session execution policies are exceptions; they exist only during the session and are not stored in the Registry. When you exit the session, the execution policy associated with the session is deleted.

Remember that the execution policy sets the behavior for processing scripts. If you have a determined user or you are the determined user, you can enter all of the commands into the console. Execution policies help make users aware of the security context of their scripts and help them avoid running inappropriate scripts accidentally.

The Restricted execution policy does not let you run scripts, but you can run individual commands. You are blocked from running all script files.

Set-ExecutionPolicy Restricted

With AllSigned, you can run scripts, but all scripts and configuration files must be signed by a trusted publisher. This includes scripts that you have written and have on the local computer.

Set-ExecutionPolicy AllSigned

This is the default policy for Windows Server 2012 R2 and Windows Server 2016:

Set-ExecutionPolicy RemoteSigned

This policy mandates that any script or configuration file downloaded from the Internet has to be signed by a trusted publisher. If you want to run an unsigned downloaded script, you can unblock that script using the Unblock-File cmdlet. Any scripts that you have created on the local system will run without signing.
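For example, if you have reviewed a downloaded script and decided to trust it, you can clear its "downloaded from the Internet" flag so RemoteSigned will allow it to run (the path here is a hypothetical example):

```powershell
# Clear the "downloaded" flag on a script you have reviewed and trust
Unblock-File -Path C:\Scripts\DownloadedScript.ps1
```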

The following will let the user run anything. It will warn users if they try to run scripts or configuration files that were downloaded, but it will not block their execution.

Set-ExecutionPolicy Unrestricted

This is the most dangerous policy. It will run anything and everything without any prompts.

Set-ExecutionPolicy Bypass

Undefined execution policies are typically ignored. If all of the applied policies are set to undefined, your system will use the default execution policy which, in Windows Server 2016, is RemoteSigned.

Set-ExecutionPolicy Undefined
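To see which policy is actually in effect, and at which scope, you can query every scope at once. Both of the following use standard parameters of the execution policy cmdlets:

```powershell
# List the policy at every scope; the first scope that isn't Undefined wins
Get-ExecutionPolicy -List

# Set a policy for this session only (Process scope, never written to the Registry)
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
```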

Recording PowerShell Sessions

You may find it necessary to record PowerShell sessions. The transcription operations capture all input, and any output displayed on the console, and store it in a file. To enable transcripts, you can enter the following:

Start-Transcript C:\mytranscript.txt

You can use the Help Start-Transcript command to view the various options. This example will create a transcription file and store it at C:\mytranscript.txt. PowerShell will overwrite any file that already exists. To avoid overwriting the file, you can use the -NoClobber parameter; if the specified file already exists, -NoClobber will cause the command to fail. If you want to specify only a directory and have PowerShell automatically name the files, you can use the -OutputDirectory parameter. If you want to append to an existing file, instead of creating a new file, you can use the -Append parameter.
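A short sketch combining these parameters (the paths here are examples):

```powershell
# Fail rather than overwrite if the file already exists
Start-Transcript -Path C:\Transcripts\Session.txt -NoClobber

# Or append to the existing file instead
Start-Transcript -Path C:\Transcripts\Session.txt -Append

# Or let PowerShell generate the file name inside a directory
Start-Transcript -OutputDirectory C:\Transcripts
```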

To stop recording the transcript, you can simply close your console session, or you can use the Stop-Transcript cmdlet. Note that this will stop all transcriptions from all sessions. There are other options, so you are encouraged to look through the help files with Help Start-Transcript and Help Stop-Transcript.

Using Aliases and Getting Help

PowerShell offers plenty of ways to make using commands easier.

Using CMD.EXE-Like Commands in PowerShell

When you run PowerShell for the first time, it may remind you of the old DOS command prompt. In fact, many of the same commands seem to be supported. Here are some of the commands that you may remember that still seem to operate:

MKDIR
DIR
CD
PING
IPCONFIG

In many cases, these commands are the actual commands and haven't changed. That is because they are external commands that PowerShell passes to external applications to process. Examples would be IPCONFIG and PING.

But not all of the older commands will work the way you may anticipate. For example, the DIR command is used to display the contents of the current directory. You can also use several options to do sorting, show file ownership, display the folder listing in a wide format, or display only files that have certain attributes, such as hidden files.

A great example is the DIR /S Importantfile.txt command. This command is used to find every occurrence of that particular filename within a particular directory, as well as all the subdirectories underneath the current directory. This is known as a recursive search.

Here is what happens when you run the command from within cmd.exe instead of PowerShell:

       Dir /s Importantfile.txt

      Volume in drive C is OSDisk
      Volume Serial Number is 8636-D98D

      Directory of C:\templates\HR
      03/05/2017  11:56 AM               480 importantfile.txt
                    1 File(s)            480 bytes
      Directory of C:\templates\sales
      03/05/2017  11:56 AM               480 importantfile.txt
                    1 File(s)            480 bytes
      Total Files Listed:
                    2 File(s)            960 bytes
             0 Dir(s)  377,296,039,936 bytes free

This is a pretty useful result. This could prove vital in a script where you may want to try to consolidate a bunch of files into a single location. In Figure 2.8, you can see the result of performing the same command in PowerShell.


FIGURE 2.8 Dir /S in PowerShell

As you can see, Dir /S is completely misunderstood by PowerShell. It thinks you are trying to get a listing of the contents of the C:\s folder. It tells you that it can't find the path because it doesn't exist.

Dir /S fails because many of the “old” commands use an alias to redirect them to new PowerShell cmdlets. The aliases are used because the old “tried and true” commands don’t always follow the PowerShell verb-noun format. Any options that the user sends to these aliases are processed by that underlying PowerShell cmdlet. If you want to find help on DIR in PowerShell, you can simply enter the following:

                    Help DIR

Here is a partial list of the output:

NAME
    Get-ChildItem

SYNOPSIS
    Gets the files and folders in a file system drive.

SYNTAX
    Get-ChildItem [[-Filter] <String>] [-Attributes {ReadOnly | Hidden | System | Directory | Archive | Device | Normal |

This will give you the default help information for the Get-ChildItem PowerShell cmdlet. When you use DIR, PowerShell takes the information you provided and sends it to the Get-ChildItem cmdlet. Get-ChildItem does not support the /S switch. That is why you get the error and your script fails.

You need to remember that “tried and true” commands are typically either a call to an external application or an alias to an internal PowerShell command.

If you want to find out what aliases are available and which PowerShell cmdlet they really call, simply enter the following to see all the aliases in all the modules that are available in the current session:

Get-Alias
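You can also look up a single alias, or go the other direction and find every alias that points to a given cmdlet. Both are standard uses of Get-Alias:

```powershell
# What does DIR really call?
Get-Alias dir

# Which aliases resolve to Get-ChildItem?
Get-Alias -Definition Get-ChildItem
```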

Exploring a Get-Help Example

You can request help by prefacing any command with Get-Help, Help, or Man. The output is mostly the same, but the behavior differs. If you use Get-Help, all of the help output is dumped right to your console and will likely scroll off the screen; you can then scroll up and down to look at the particular area of information you need. If you use Help or Man, one screen of information is displayed at a time, and you can press almost any key to get the next screen. If you press Ctrl+C, the output stops and you go back to the command prompt.

A way to display the help file in a separate window that you can keep up on the screen, or even move to a different monitor, is to use the -ShowWindow parameter. Figure 2.9 shows the output from Get-Help Get-ChildItem -ShowWindow.


FIGURE 2.9 -ShowWindow parameter

You can take this window and use it as a constant reference when you are creating your scripts or typing directly into your console. It is searchable so you can use it to quickly pinpoint exactly what you are looking to accomplish.

PowerShell help will show examples. The problem is that you need to scroll past all of the syntax information to find just the examples. If you want to jump right to the example code, simply change your help request to something similar to the following:

Get-Help Dir -Examples

Here is a relevant portion of the output:

Example 2: Get all files with the specified file extension in the current directory and subdirectories
        PS C:\> Get-ChildItem -Path "*.txt" -Recurse -Force

This command gets all of the .txt files in the current directory and its subdirectories. The Recurse parameter directs Windows PowerShell to get objects recursively, and it indicates that the subject of the command is the specified directory and its contents. The Force parameter adds hidden files to the display.

Now you are getting somewhere. You can try the following:

Get-ChildItem -Recurse

And that provides the same output as DIR /S, if it were run from cmd.exe.

Because you know that DIR is an alias for Get-ChildItem, let’s see what happens if you try this command:

DIR -Recurse

If you try it, you will find that it is the same result.

Getting Get-Help Updates

PowerShell help files haven't been included with the operating system since PowerShell 3.0. If you run PowerShell as an administrator, you may notice that the system attempts to download the help files from an online service owned by Microsoft. If you have PowerShell modules provided by a third-party vendor, they can also be updated with downloadable help. You do have to run this with credentials that are part of the local Administrators group because the PowerShell core command help is stored under your system directory. If PowerShell is unable to download the updated help files, it will generate a default help display for the commands in any module that lacks updates.

Not all modules support updating their help files. You can get a list of modules that have pointers to updatable help by entering the following:

Get-Module -ListAvailable | Where HelpInfoUri

If you want to update your help files immediately, you can perform the following command:

Update-Help

The Update-Help command will look for all of your installed modules in the default module path on your system. This path is stored in the environment variable $env:PSModulePath. To view this path, use the following:

$env:PSModulePath

Here is a typical output:

C:\Users\Administrator\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules

If you want to add an additional, temporary path that exists only during this session, modify this environment variable as follows:

$env:PSModulePath = $env:PSModulePath + ";f:\OurAddedPath"

If you want to make the change permanent, you will need to add this command to the profile.

If you want to update a module that isn’t in your module path, you can import the module into the current session and then use the Update-Help command. You can import the module with the following command:

Import-Module "D:\LOBModules\WeBought\LOBModule"

If you run Update-Help more than once in a 24-hour period, nothing will actually be updated. There is also a 1 GB limit on uncompressed help content. If you don't want to wait the 24 hours or you want to bypass the 1 GB limit, you can use the following command:

Update-Help -Force

This Update-Help -Force command can be added to your PowerShell profile to ensure that your help is always updated.
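If you do add it to your profile, one option is to run it as a background job so a slow or failed download doesn't delay shell startup. This pattern is a suggestion, not something mandated by the cmdlet:

```powershell
# In your profile: refresh help in the background instead of blocking startup
Start-Job -ScriptBlock { Update-Help -Force -ErrorAction SilentlyContinue } | Out-Null
```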

Updating Help for Servers Without Internet Access

Many times, you are going to have servers that are for internal use only and these systems don’t have direct Internet access. The good news is that Microsoft has addressed this concern with the Save-Help cmdlet. You will download the help file to a file share on an Internet-connected machine. You can then copy the files to a system that is reachable by the internal machines. Here is an example:

Save-Help -DestinationPath \\SMBFileServer01\Sharename\PSHelpFolder -Credential Domainname\Username

This will download the help files to a file share on SMBFileServer01. You also need to ensure that the credentials used are members of the local Administrators group on each machine or are domain administrators in the domain where all of the computers are members. Also know that Update-Help and Save-Help will update files only for modules that are installed on the local system.
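On the internal servers, you then point Update-Help at that share rather than at the Internet, using the same share path from the example above:

```powershell
# Update help on a disconnected server from the internal file share
Update-Help -SourcePath \\SMBFileServer01\Sharename\PSHelpFolder -Credential Domainname\Username
```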

Accessing Online Help Files

If you don’t want to download updates, but want access to the latest version of the help file, you can add the –Online parameter. Here is an example:

Get-Help Get-ChildItem -Online

Understanding Cmdlet Syntax

PowerShell cmdlets typically follow a Verb-Noun format and can include a number of mandatory and optional parameters. Update-Help follows that format. You may also notice that not all verbs are actually English verbs: New-VM is a valid cmdlet, but New is not an English-language verb.

Be aware that cmdlets are generally not case-sensitive. There are some rare exceptions, but the following examples are functionally identical:

Get-Vm
get-vm
GeT-vM

As you can tell, odd casing in your cmdlets can make them very difficult to read and troubleshoot. Traditionally, you capitalize the first letter of each word that is joined together to make a cmdlet or parameter name. There are also various conventions for variables, functions, and modules. Traditionally, the first character of a variable is not capitalized, but subsequent words are capitalized. The variable $computerList follows this traditional practice. It is still easy to read because additional words are capitalized, but it is a bit more obvious that this is a variable because the first word is not capitalized.

Interpreting the Syntax

You will need to know which parameters are mandatory, which are optional, and which parameters won’t work together with other parameters.

Let’s examine a sample of syntax that is generated by the Help Get-Eventlog cmdlet.

NAME
    Get-EventLog

SYNOPSIS
    Gets the events in an event log, or a list of the event logs, on the local or remote computers.

SYNTAX
    Get-EventLog [-LogName] <String> [[-InstanceId] <Int64[]>] [-After <DateTime>] [-AsBaseObject]
    [-Before <DateTime>] [-ComputerName <String[]>] [-EntryType {Error | Information | FailureAudit |
    SuccessAudit | Warning}] [-Index <Int32[]>] [-Message <String>] [-Newest <Int32>]
    [-Source <String[]>] [-UserName <String[]>] [<CommonParameters>]

    Get-EventLog [-AsString] [-ComputerName <String[]>] [-List] [<CommonParameters>]

You can identify parameters because they are prefaced with a hyphen. Look at the following example:

Get-EventLog -LogName Security

The Get-EventLog cmdlet here has a single parameter, -LogName. The string Security tells PowerShell which event log it is supposed to "get." This command will get the Security event log and dump its contents on your console screen. Many people who are new to PowerShell get confused by parameters. Look at this broken command:

Get-EventLog -Security

You have swapped the parameter value for the name of the parameter. The PowerShell module that hosts this cmdlet has no idea what -Security means, because that isn't one of the accepted parameters; Security is treated as a parameter name because it has the hyphen in front of it.

You also need to determine which parameters are mandatory and which are optional. Anything surrounded entirely by square brackets is optional. Anything not entirely surrounded by square brackets is mandatory.

[-optionalstuffhere]
[-Optionalstuff] mandatorystuff

Let’s examine the very first parameter associated with the Get-EventLog cmdlet:

Get-EventLog [-LogName] <string>

This syntax block says that the word -LogName is optional, but the <string> portion isn't surrounded by square brackets. You always have to have a <string> for this cmdlet. That means that any time you use the Get-EventLog cmdlet, you must include a string that has the name of the log. Even though the string value is required, you don't need to include, in this instance, the parameter name -LogName. That is shown by the fact that -LogName is completely surrounded by square brackets.

The rest of the parameters are completely surrounded by square brackets. That means all of the other parameters are optional. Note that some of these optional parameters, when used, have mandatory values.
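For example, combining the mandatory log name with two of the optional parameters from the syntax block might look like this:

Get-EventLog -LogName Security -Newest 10 -EntryType FailureAudit

This retrieves only the ten most recent failure-audit entries instead of dumping the entire log to the console.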

Let’s look at an optional parameter with a mandatory value block from the same Get-EventLog cmdlet:

[-After <DateTime>] 

Notice how the entire parameter, -After and its <DateTime> value, is completely surrounded by square brackets. That tells you that the parameter as a whole is optional. But note that <DateTime> doesn’t have its own square brackets. That means any time you use the -After parameter, it is mandatory that you include a value for the <DateTime>. If the value were optional, the syntax block would look like this:

[-After [<DateTime>]]

Because -LogName is listed first, it is also known as a positional parameter. You don’t have to state that the first parameter value you are sending, in this case Security, is the value associated with -LogName, because -LogName is the first parameter the command expects.

In certain cmdlets, if you are careful, you can pass many parameters without labeling them, provided you supply them in a very specific order. However, doing so makes the script almost illegible. Do yourself a favor and always include the parameter names in written scripts so they become self-documenting. Including parameter names also means you can put the parameters in any order; PowerShell knows which value goes with which parameter because you helpfully identified each one.
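To see the positional behavior in action, the following two commands are equivalent:

Get-EventLog Security
Get-EventLog -LogName Security

The first relies on Security landing in the first position; the second is self-documenting.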

Using Spaces in Cmdlets

PowerShell uses spacing to separate cmdlets from parameters and parameters from values. You do need to be cautious where you place spaces, but you can put as many as you like, where spaces are allowed. Here is an example:

Get-EventLog                      -LogName                        Security

This is perfectly acceptable to PowerShell because the spaces, or blocks of spaces, are located where PowerShell expects a single space. You have to ensure that the space is in the correct location. Consider the following cmdlet examples:

Get-Eventlog - LogName Security
Get-Event Log -LogName Security
Get- EventLog -Logname Security
Get-EventLog -Log Name Security

These are all invalid and will produce errors because spaces have been placed where PowerShell isn’t expecting them. If you get too creative, the command becomes hard to read, and PowerShell may think you are passing additional values or some other token.

You also want to avoid mixing spaces and tabs to make your code line up. Use one or the other. This is particularly important when you are copying code from one script to another. Mixing tabs and spaces frequently leads to failures.

Passing Multiple Values to a Parameter

There are many instances where you will want to provide multiple values to a parameter. Part of your Get-Eventlog syntax includes the following notation:

[-ComputerName <String[]>]

Notice that this entire parameter, including the string, is optional, because the entire thing is surrounded by brackets. Also notice that the <String[]> has little square brackets inside of it. When you see the two square brackets displayed in this manner, it means that you can pass multiple values using a comma-separated list. Examine the following code:

Get-EventLog Security -ComputerName Server01, Server02, Server03

This tells the Get-EventLog cmdlet to get the Security log from three different servers. Also notice that the syntax for the -LogName parameter is listed as [-LogName] <String>. The lack of the small square brackets here means you can supply only a single value for the -LogName parameter. Functionally, that means you can get only one named event log, such as Security or Application, but not both. -ComputerName <String[]> tells you that you can get that single log from multiple computers.

Another way to load multiple values into a parameter is to read a list of values from a file, one value per line. Here is an example:

Get-EventLog Security -ComputerName (Get-Content C:\computerlist.txt)

This is known as a parenthetical command: you place a command inside parentheses to provide values to a parameter. Get-Content reads the file, one line at a time, and passes each line as a separate value to the -ComputerName parameter. Parenthetical commands work just like the math rules you learned in school: whatever is inside the parentheses is evaluated first, and that result becomes the value handed to the parameter.

You can also place values into variables, and then the variable can pass the values to the parameter. Look at the following commands:

$computers = Get-Content C:\Computerlist.txt
Get-EventLog -LogName Security -ComputerName $computers

We will talk more about variables a bit later, but the first line loads the variable called $computers with the computer names read from the text file. The second line then uses this variable as the value for the -ComputerName parameter.

Using Show-Command

PowerShell can automatically take a cmdlet and display it in a dialog box with areas for each of the parameters. Look at the following command:

Show-Command Get-EventLog

When this command is executed, you will get a dialog box as shown in Figure 2.10. We have added values in the ComputerName and LogName block to illustrate how those are populated. By default, all the parameters will have blank values.

Screenshot of the dialog box produced by Show-Command Get-EventLog.

FIGURE 2.10 Show-Command Get-EventLog

This shows you all of the parameters that are specific to this command, and it will let you fill out each of them. Note that LogName has an asterisk; that asterisk means the parameter is mandatory.

The tabs across the top of the dialog box correspond to the cmdlet’s parameter sets (for Get-EventLog, LogName and List). For parameters that accept multiple values, such as ComputerName, you can enter the values separated by commas.

When you select Copy, PowerShell will copy the resultant command in your Clipboard. If you select Run, it will put the resultant command in the PowerShell console that was used to launch the Show-Command. This is how the result will appear:

Get-EventLog -LogName security -ComputerName Server01, Server02, Server04^M

The ^M at the end represents the Enter character. If you then press Enter on your keyboard, the command will execute.

Using -WhatIf

-WhatIf is a handy parameter. It lets you see the result of a cmdlet without the cmdlet making any changes to your system. Its syntax is -WhatIf[:{$true | $false}]; the colon is mandatory if you pass $true or $false, and if you don’t pass a value, it defaults to $true. This helps you verify that the command you wrote gives you the desired output and results. Nothing is changed and no actions are performed when you use a cmdlet with the -WhatIf parameter.

You will see whatever output would be generated if the cmdlet executed. This can be a big help when you are unsure of the exact format. This can also help you out a lot if you decide to do cmdlets without labeling the parameters and instead rely on parameter positions. Putting the parameter in the wrong order is very common, and -WhatIf can help prevent you from making a serious configuration error. Of course, Microsoft recommends that you always put parameter labels on any written script or reference file. Doing so makes it much easier to read and troubleshoot. Then you can use -Whatif as a failsafe to ensure you get the expected output.

Note the following example and the result:

Remove-Item C:\nano\nano-srv02.vhd -WhatIf
What if: Performing the operation "Remove File" on target "C:\nano\nano-srv02.vhd".

The file was never removed, but it shows what would have happened if the cmdlet had been executed.

Using -Confirm

This parameter helps mitigate risks by asking for confirmation prior to running a command:

-Confirm[:{$true | $false}]

It will temporarily override the $ConfirmPreference variable, which has a default value of High. This preference is compared to the estimated risk potential of a cmdlet. If the risk potential is equal to or greater than the $ConfirmPreference setting, the cmdlet will always ask for confirmation unless you add -Confirm:$false. Other, less risky cmdlets will typically suppress confirmation.
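For example, to suppress the prompt that a high-impact operation would otherwise raise, you can pass $false explicitly (the file path here is just an illustration):

Remove-Item C:\nano\old-test.vhd -Confirm:$false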

The -Confirm parameter is also useful if you are doing mass changes, possibly as part of a loop. It will ask you to confirm for each operation. This can help prevent applying incorrect configurations to items that may not be readily obvious, like contents read from a file or items that are identified by calculation or other less visible means.

When you use -Confirm in ISE, you will get a dialog box as displayed in Figure 2.11.

Screenshot of the confirmation dialog box displayed when the -Confirm parameter is used in the ISE.

FIGURE 2.11 -Confirm parameter in ISE

When you use the -Confirm parameter in a regular PowerShell console, you will see the following:

Remove-Item C:\nano\nano-srv05.vhd -Confirm
Are you sure you want to perform this action?
Performing the operation "Remove File" on target "C:\nano\nano-srv05.vhd".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):

It doesn’t matter if you get the dialog box or the text on the console; the options presented are identical and have identical results.

If you select Yes, the operation will be performed; if it is part of a loop, you will receive additional confirmation prompts. If you select Yes To All, the operation will be performed, including any looping, and further confirmation prompts will be suppressed for this cmdlet’s operation. If you select No, the current operation will not be performed, but you may be prompted again if the cmdlet is performing multiple operations, such as in a loop. If you select No To All, all operations will cease for this cmdlet and you will not see any subsequent prompts.

The Suspend option puts the current cmdlet on hold and starts a nested PowerShell session, indicated by two additional greater-than symbols (>>) in the command prompt. In this nested session, you can run additional cmdlets and scripts. When you are done with the nested session, you can leave it by entering Exit, which returns you to the -Confirm prompt. You will then need to choose one of the confirmation options, as previously discussed. This gives you an opportunity to load up some variables or do other tasks needed to ensure the cmdlet will work when you return.

The ? option for this -Confirm prompt will display help for the confirm choices.

The default action is Y, or Yes, which is what gets sent to the console each time you press Enter. Be cautious! If you are holding down the Enter key, it will automatically confirm Yes and the cmdlet will execute.

All About “About” Files

Get-Help is very useful for finding specific information on particular cmdlets. Microsoft has also included “About” files that explain PowerShell concepts, covering items such as scripting techniques, scripting languages, operators, and others. The About help files do not support -Full or -Examples, because they cover only concepts and topics. They do support -Online and -ShowWindow.

To see a listing of all of the locally available About files, simply enter the following:

Get-Help About

Get-Help About can be used when you want to see the About file for a particular topic. You simply would add the topic name. For example:

Get-Help About_Aliases

This help topic will have a short description and a long description. Here is a sample portion of the output:

PS C:\Users\Administrator> Get-Help about_Aliases

TOPIC
    about_aliases

SHORT DESCRIPTION
    Describes how to use alternate names for cmdlets and commands in Windows
    PowerShell.

LONG DESCRIPTION
    An alias is an alternate name or nickname for a cmdlet or for a command element, such as a function, script, file, or executable file. You can use the alias instead of the command name in any Windows PowerShell commands.
    To create an alias, use the New-Alias cmdlet. For example, the following
    command creates the "gas" alias for the Get-AuthenticodeSignature cmdlet:
        New-Alias -Name gas -Value Get-AuthenticodeSignature

    After you create the alias for the cmdlet name, you can use the alias instead of the cmdlet name. For example, to get the Authenticode signature for the SqlScript.ps1 file, type:
        Get-AuthenticodeSignature SqlScript.ps1
    Or, type:
        gas SqlScript.ps1

Looking at this small portion of about_Aliases, you can start to see how to make your own alias. You can create an alias for cmdlets, scripts, functions, or even executables. Reading further in the About file will tell you what you need to do to create an alias.
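As a quick sketch of what the About file describes, you could create your own alias for Get-EventLog (the alias name gel here is just an example):

New-Alias -Name gel -Value Get-EventLog
gel -LogName Security

The alias lasts only for the current session unless you add it to your profile.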

Understanding Shortened Command Syntax

PowerShell tries to be very accommodating. Microsoft knows there are hundreds of cmdlets. Microsoft has also spent a great deal of effort trying to make these cmdlets fairly intuitive. When you are using the same commands over and over, typing the entire command becomes tedious, especially since some of these commands are quite long. You also run into the issue of not being entirely certain of the exact cmdlet syntax you should use. Microsoft has included a shortened syntax, as well as aliases and tab-completion to make your jobs a bit easier.

Shortened command syntax with tab completion means you can type part of a command and then press the Tab key to ask PowerShell to look at all of the session-loaded modules to try to figure out which command you are trying to use. If there are several choices, you can keep pressing the Tab key to cycle through the commands until you find the one you want. You can also do this with parameters.

Figuring out the exact command or parameter name with tab completion provides two advantages: you can get the correct cmdlet or parameter and the complete cmdlet and parameter name will be displayed so the text is easier to read, understand, maintain, and troubleshoot. Here is an example:

Get-Service MpsSVC -ComputerName Boston-Srv01

If you just typed G and then pressed the Tab key, you would have to cycle through all of the cmdlets, verbs, and aliases that start with the letter G. The list is quite long. The problem is you are too ambiguous. You need to disambiguate, or type enough letters so PowerShell has a better idea as to which command you are looking to use. Get- is pretty easy to understand and remember because it is used all the time. You will just type Get-S. There are a lot of Get commands where the “noun” starts with the letter S. If you kept pressing the Tab key over and over, you would eventually get there; but to save time, let’s disambiguate even more by adding a few more characters. Be aware that PowerShell will be sifting through all the modules you have loaded for this particular session. Depending on the profile, defaults, and the modules located and loaded in the $env:PSModulePath variable, the number of letters you need to type to thin down the list may vary by system, session, and profile.

The next thing you would like in your command is the name of the service. If you read the Get-Help Get-Service information, you will see that you can simply press the Enter key and the Get-Service command will give you a list of services that are running on the local machine. If you just press the Tab key instead, the services will be listed automatically until you see the one you want.

Remember that PowerShell doesn’t possess magic powers that automatically reach out to remote machines to guess what you mean to type. You can write scripts and functions that help automate the process, but you may need to start a session on the remote machine to load all the modules or import the modules to your local console. Of course, you can simply just type the command as well. The end result is the same. Our tab completion exercise on your cmdlet now looks like this:

Get-Ser MP TAB TAB 

This turns into Get-Service MpsSvc.

The next bit is a parameter. You will find that -ComputerName becomes unique when you merely type -C and press Tab. Pressing the spacebar and then pressing the Tab key again won’t give you a list of computers you can use; here you will simply have to know the needed value.

So, what will happen if you just type enough characters to disambiguate the sections but you don’t press the Tab key? This is how it would appear:

Get-ser mp -c boston-srv01

That’s not very easy to read. If you press the Enter key, here is the result:

get-ser : The term 'get-ser' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ get-ser mp -c boston-srv01
+ ~~~~~~
    + CategoryInfo          : ObjectNotFound: (get-ser:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

The little squiggle underneath + get-ser tells you that PowerShell has no cmdlet called get-ser in any of the loaded modules. This is an important concept: typing just enough characters to disambiguate a cmdlet works for tab completion, but the shortened form is not itself a valid command. Abbreviated parameter names do work, but cmdlet names must be complete or replaced with an alias.

Exploring PowerShell Command Concepts

Using the alias Help, instead of Get-Help, tells PowerShell to display only one screen full of information at a time. You can use Get-Help or Help to discover more commands. Remember that asking for help doesn’t make any changes to your system. You can guess all you want to try to discover exactly what it is you are trying to accomplish.

Let’s say that you want to change the MAC address of a network adapter. You can start off pretty basic with the following command:

Get-Command *adapter

The * is a wildcard character that says to get any command that ends with adapter. Removing the Hyper-V specific stuff at the bottom, here is the output:

CommandType    Name                           Version   Source
-----------   ----                            -------   ------
Function      Add-NetEventNetworkAdapter      1.0.0.0   NetEventPacketCapture 
Function      Add-NetEventVmNetworkAdapter    1.0.0.0   NetEventPacketCapture 
Function      Disable-NetAdapter              2.0.0.0   NetAdapter            
Function      Enable-NetAdapter               2.0.0.0   NetAdapter            
Function      Get-NetAdapter                  2.0.0.0   NetAdapter            
Function      Get-NetEventNetworkAdapter      1.0.0.0   NetEventPacketCapture 
Function      Get-NetEventVmNetworkAdapter    1.0.0.0   NetEventPacketCapture 
Function      Remove-NetEventNetworkAdapter   1.0.0.0   NetEventPacketCapture 
Function      Remove-NetEventVmNetworkAdapter 1.0.0.0   NetEventPacketCapture 
Function      Rename-NetAdapter               2.0.0.0   NetAdapter            
Function      Restart-NetAdapter              2.0.0.0   NetAdapter            
Function      Set-NetAdapter                  2.0.0.0   NetAdapter            

Set-NetAdapter looks promising. Let’s run Help Set-NetAdapter. Here is the relevant portion of the output:

Set-NetAdapter [-Name] <String[]> [-AsJob] [-CimSession <CimSession[]>] [-IncludeHidden] [-MacAddress <String>] [-NoRestart]  [-PassThru] [-ThrottleLimit <Int32>] [-VlanID <UInt16>] [-Confirm] [-WhatIf] [<CommonParameters>]

The -MacAddress parameter is the one you want. Remember, PowerShell doesn’t yet have mind-reading powers. Give Cortana a bit of time.

If you do the same command but add the -Examples parameter and scroll a bit, you discover the following:

   Example 2: Set the MAC address of the specified network adapter

    Set-NetAdapter -Name "Ethernet 1" -MacAddress "00-10-18-57-1B-0D"
    This command sets the MAC address of the network adapter named Ethernet 1.

So, now you know that you can change the MAC address with a simple command. You can use the same technique to discover all sorts of additional commands. The idea is that if you want to accomplish a specific task, you can usually find a command that does the job.
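If a wildcard search returns too many results, Get-Command also lets you filter by verb or noun. For example:

Get-Command -Verb Set -Noun NetAdapter*

This narrows the list to Set- commands whose noun starts with NetAdapter.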

Implementing Pipelines

You frequently want to chain commands together, with the output of one command becoming the input to the next. PowerShell makes this easy with the vertical pipe (|) character. On many keyboards, this character is on the same key as the backslash character (\), only shifted.

You can use the pipe character to connect several commands together. They are evaluated from left to right: the output of the command on the left is placed into the pipeline and sent as input to the command on its right. If you have multiple pipe characters, each stage is still evaluated in order, from left to right.

Each time you press the Enter key, you will run the entire pipeline and any final output of the last command will be displayed. Not all commands will have a displayable output. Look at the following example:

Get-EventLog Security | Out-File C:\SecurityEvents.txt

This will get the contents of the local Security event log and put them in the pipeline. This data is then fed to the Out-File command, which takes the contents of the pipeline as its input source. The end result is that the contents of the Security event log are written to a text file, because that is the nature of the Out-File cmdlet. Most PowerShell commands do not produce text files; they produce objects.

Exploring Objects and Members

Objects have components called members. Members are the various parts that make up an object and may include properties, events, and methods. You can use the pipeline to get information about the members of a particular object by getting the object and then piping the results into the Get-Member cmdlet, as illustrated:

Get-Service | Get-Member

Be aware that the first command, Get-Service, will run. In this example, it isn’t too dangerous; but if you were looking at the members of a destructive command, such as Remove-Item, you will actually remove the specific item. The -WhatIf parameter won’t work because it only provides a text output to the console and doesn’t produce any actual output to the Get-Member cmdlet.

You also need to ensure that the output of the first command will match the expected input of the next command in the pipeline. Examine the following code:

Get-Service | Set-ACL

The Get-Service command’s output doesn’t match the input requirements of the Set-ACL command. This will just produce an error for each of the objects the Get-Service command places into the pipeline. When you are using pipelines, you always need to match the output of the previous command to the expected input of the next command.
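One way to check what a command expects (a useful habit, though not the only approach) is to ask Get-Help about a specific parameter and read its “Accept pipeline input?” line:

Get-Help Set-Acl -Parameter AclObject

If a parameter doesn’t accept pipeline input, or expects a different object type than the previous command produces, the pipeline will fail.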

As you move back to your original command, Get-Service | Get-Member, you will see that the members of the objects that Get-Service places in the pipeline have properties, events, and methods.

Exploring Properties, Events, and Methods

Properties describe the various attributes of an object. Using Get and Set commands will typically work with properties. Some of the properties of the Service object are MachineName, StartType, and CanShutdown. These properties can be used to instruct PowerShell what to display or manipulate.

Events can be triggered as an operation does something to an object. Opening a file or running a process may trigger an event. The only event listed as a member from Get-Service is the Disposed event. Disposed tells you that the script has been instructed to free up external resources such as file handles, database connections, or TCP ports. When you get deeper into PowerShell programming, the event of Disposed can help you ensure these resources are released.

This can be particularly important if you get a bit too creative in copying snippets of scripts for reuse. Frequently, administrators will grab promising sections but forget to grab the garbage cleanup sections. This results in resource consumption, without resource release. This can make systems unstable or actually cause the exhaustion of resources, leading to a possible crash. As you get more advanced into PowerShell scripting, you’ll want to ensure you always clean up your mess, and the Disposed event lets you know that cleanup has occurred.

Methods are used when you want to tell an object to perform some type of action. Close, Pause, Start, and Stop are all examples of methods associated with the Service object. These will, obviously, help you close, pause, start, and stop a service. Pretty handy. Maybe you should add that to your library of cool cmdlets.
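To call a method, you wrap the command that produces the object in parentheses and use dot notation. As a sketch (stopping a service requires an elevated console, and Spooler is just an example service):

(Get-Service -Name Spooler).Stop()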

Properties, events, and methods are very specific to each type of object. Remember that it is the command that produces the objects. Some commands will produce multiple types of objects. If you use a Get-Member command on a pipeline with multiple types of objects, you will get separate member lists for each type of object. If you didn’t send this to a text file, you can pull in so many objects and members that it will overwhelm the console buffer and will produce unusable output. It looks pretty cool, but it’s ultimately pointless.

Performing Object Sorting

Visualizing objects as a table in a spreadsheet can be useful. Each column will have a different property, and each row identifies the particular object. Running a command that returns several objects will be like adding rows to the table.

For example, Get-Service, if dumped into a spreadsheet, would create a table as illustrated in Table 2.1.

TABLE 2.1: Get-Service Objects

STATUS   NAME          DISPLAYNAME
Stopped  AJRouter      AllJoyn Router Service
Stopped  ALG           Application Layer Gateway Service
Stopped  AppIDSvc      Application Identity
Running  Appinfo       Application Information
Stopped  AppMgmt       Application Management
Stopped  AppReadiness  App Readiness

Only some of the objects were included because there are more than 200. Each row in the table is an object. Each column is a property of that object. Not all properties are displayed by default, but they are still included in the objects that are put into the pipeline. This group of objects is called a collection, or an array, of objects.

You can use PowerShell to pull a list of objects into a pipeline and then sort the objects according to whatever criteria you need. Frequently, cmdlets will automatically sort objects in the pipeline alphabetically by the name of the object. That is what the default is on the Get-Service cmdlet in Table 2.1.

You can instruct PowerShell to sort on different properties or even a combination of properties if you know the particular name of the desired property of an object. By default, string properties aren’t case-sensitive and are sorted in ascending order. The objects sorted are based on the default properties of an object type.

You have the ability to change the defaults to meet your particular needs. You will use the Sort-Object cmdlet. This cmdlet has an alias of simply Sort. Here are some examples of the Sort-Object cmdlet:

Get-Service | Sort-Object -Property Name -Descending
Get-Service | Sort-Object Name -Descending
Get-Service | Sort-Object -Descending

All three examples do the same thing because the name is the default sorting key. If you look at Help Sort-Object, you will get the following syntax:

Sort-Object [[-Property] <Object[]>] [-CaseSensitive] [-Culture <String>] [-Descending] [-InputObject <PSObject>] [-Unique] [<CommonParameters>]

You can explore the deeper syntax with Get-Help Sort-Object -Full, but some parameters are immediately useful.

  • [[-Property] <Object[]>] tells you that you can specify one or more properties. The parameter is optional, and you can pass multiple properties in a comma-separated list. If you pass multiple properties, objects are sorted by the first listed property; objects that tie on the first property are then sorted by the second listed property; if ties remain, the third listed property breaks them, and so on.
  • [-Unique] is an optional parameter that looks through the pipeline and keeps only the unique members of the pipeline collection. Any duplicates are simply discarded. The comparison is not case-sensitive.

So, if you wanted to sort the services based on status and then by name, you would use the following command:

Get-Service | Sort-Object -Property Status, Name

Status                  Name                           DisplayName

Stopped            AJRouter                 AllJoyn Router Service
Stopped                 ALG      Application Layer Gateway Service
Stopped            AppIDSvc                   Application Identity
Stopped             AppMgmt                 Application Management
Stopped        AppReadiness                          App Readiness
Stopped          AppVClient                 Microsoft App-V Client
Stopped             AppXSvc       AppX Deployment Service (AppXSV)
Stopped    AudioEndpointBu…         Windows Audio Endpoint Builder
Stopped            Audiosrv                          Windows Audio
Stopped            AxInstSV            ActiveX Installer (AxInstSV

If you examined the pipeline, you would see the objects and their properties sorted by the Status property, followed by the Name property.

Measuring Objects

You may find it useful in your scripts to measure the various objects. This can include the number of objects in a pipeline. You will use the Measure-Object cmdlet. Here is the syntax:

Measure-Object [[-Property] <String[]>] [-Average] [-InputObject <PSObject>] [-Maximum] [-Minimum] [-Sum] [<CommonParameters>]

The Measure-Object cmdlet, by default, will count just the number of objects in a collection. You can also perform four other types of measurements: the average, the sum, the maximum, and the minimum. Note the output of the following command:

Get-Process | Measure-Object -Property WorkingSet -Minimum -Maximum -Average -Sum

Count    : 70
Average  : 40100776.2285714
Sum      : 2807054336
Maximum  : 684228608
Minimum  : 4096
Property : WorkingSet

The working set is the amount of RAM associated with a particular process. This command will gather all of the running processes in a system. You have 70. These objects, with all their properties, will then be handed over as a collection in the pipeline to the Measure-Object cmdlet.

You also provided instructions to measure the objects based on the object’s WorkingSet property. The Measure-Object cmdlet will then show you the average amount of memory consumed by all 70 processes. It also shows you the largest amount of RAM assigned to a process, the Maximum. It shows the least amount of RAM assigned to a process working set, the Minimum. Potentially, the most useful would be the Sum. This is the total amount of RAM that is assigned to all the processes on this system. You can use this information to determine the minimum amount of RAM needed to support the running processes without the need to start sending RAM to the page file.
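The same technique works with any numeric property. For instance, to total and average the size of the files in a folder (the folder path is just an illustration):

Get-ChildItem C:\Windows -File | Measure-Object -Property Length -Sum -Average

Here the Length property of each file object takes the place of WorkingSet.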

Using Select-Object to Select a Subset of Objects in a Pipeline

You may not want to look at all the objects in a pipeline collection. This is particularly true if you only want to look at the top or bottom number of objects in a collection. To do that, you use the Select-Object cmdlet. Here is the syntax:

 Select-Object [[-Property] <Object[]>] [-ExcludeProperty <String[]>] [-ExpandProperty <String>] [-First <Int32>] [-InputObject <PSObject>] [-Last <Int32>] [-Skip <Int32>] [-Unique] [-Wait] [<CommonParameters>]

Select-Object will go into the pipeline collection and allow you to select the first, or last, of however many objects. These correspond to the rows in your collection. You can also select specific properties to include and exclude. For example, if you want to look at the top five processes consuming RAM on your system, you use the following command:

Get-Process | Sort-Object -Property WorkingSet -Descending | Select-Object -Property WorkingSet, ProcessName -First 5

WorkingSet      ProcessName
----------      -----------
 635871232       powershell
 177840128             vmms
 147591168   powershell_ise
 143896576           LobAPP
 112848896    ServerManager

You can select specified properties of an object. You can specify -First and -Last to display a certain number from the top or the bottom of the collection. Note that if you don’t sort the collection before you select the objects, they will be in random order. You can skip a particular number of objects with the -Skip parameter. You can also select only unique values with the -Unique parameter.

Select-Object will remove all nonspecified properties when you use the -Property parameter. If you want to view all of the properties but selectively remove specific ones, use the -ExcludeProperty parameter.
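
Here is a sketch of those parameters in action (the process names and counts on your system will differ):

```powershell
# Skip the two largest consumers and show the next three.
Get-Process | Sort-Object -Property WorkingSet -Descending |
    Select-Object -Property WorkingSet, ProcessName -Skip 2 -First 3

# Show each distinct process name only once.
Get-Process | Select-Object -Property ProcessName -Unique

# Keep every property of one process except its Modules collection.
Get-Process | Select-Object -Property * -ExcludeProperty Modules -First 1
```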

When you use pipelines, especially pipes that go to other pipes that go to other pipes, you have to be careful about the type and format of data. It is typically easier to troubleshoot one command at a time. You should take the first cmdlet and run it alone and see its outcome. Once that cmdlet works properly, you add the next command and work with that until it provides the outcome you need. You should continue this process as you build your ultimate command.

You can also type each command on a separate line so you can keep the various cmdlets straight in your head. If you end a command line with a pipe character, if you don't put in all of the mandatory parameters, or if you don't close all of your quotation marks, brackets, or braces, you will go into the extended prompt mode. This is illustrated here:

PS C:\Users\Administrator> get-process|get-member |
>> sort-object '
>> name
>> '

In the first line, you pressed the Enter key after the pipe character. That told PowerShell there was more to come. You ended the second line with a single quote and you didn’t close that quote until the fourth line. PowerShell understands that there is more to come, so you continue to get prompted. When you finally press the Enter key on the fourth line, PowerShell isn’t expecting any other parameters or characters and finally executes the commands. You will see similar behavior any time you enter a command where you don’t include mandatory parameters or you fail to include characters, such as closing quotes. If you find yourself stuck in extended prompt jail, with no idea what is to come next, you can always leave the extended prompt mode by pressing Ctrl+C. None of the commands you were carefully building will execute, and you will return to the normal command prompt.

Using File Input and Output Operations

Sometimes you want to save the results of a command or script to a file. You can use the redirection operator, or greater-than sign (>). Here is an example:

Get-Process | Sort-Object -Property WorkingSet -Descending | Select-Object -Property WorkingSet, ProcessName -First 5 > "c:\Top 5 Processes Consuming RAM.txt"

This is a quick way to take whatever would have been displayed on the console and dump it directly into a file. The only issue is you are just blindly dumping the information. If you want more control, you should use the Out-File command. Here is the syntax:

Out-File [-FilePath] <String> [[-Encoding] {unknown | string | unicode | bigendianunicode | utf8 | utf7 | utf32 | ascii | default | oem}] [-Append] [-Confirm] [-Force] [-InputObject <PSObject>] [-NoClobber] [-NoNewline] [-WhatIf] [-Width <Int32>]  [<CommonParameters>]

Here is an example of using the parameters of the Out-File command where you want to ensure you don’t overwrite an existing file:

Get-Process | Sort-Object -Property WorkingSet -Descending | Select-Object -Property WorkingSet, ProcessName -First 5 | Out-File "c:\Top 5 Processes Consuming RAM.txt" -NoClobber

By default, Out-File will use Unicode encoding for the text file. This can be a problem for some search programs, so you can add the -Encoding ASCII parameter to write plain ASCII instead. You may also want to avoid newline characters; the -NoNewline parameter suppresses them, putting everything on one long line instead of a line-separated list of values.

You also need to pay attention to the width of the output. By default, the output is truncated based on the characteristics of the host. The default for the PowerShell console is 80 characters. Note that any characters in a line beyond that are truncated and not word-wrapped. If you have more than 80 characters per line, you will need to use the -Width parameter, or you will lose any characters after the first 80 per line.
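
Combining these parameters, a minimal sketch (the file path is just an example) might look like this:

```powershell
# Write ASCII-encoded text and allow lines up to 200 characters
# before Out-File truncates them.
Get-Process |
    Out-File -FilePath "C:\ProcessReport.txt" -Encoding ASCII -Width 200
```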

Converting Objects to Different Formats

PowerShell pipeline objects can be in several different formats. If you are trying to save these objects to a file, you may need to convert from the object’s native format. PowerShell uses two verbs for object conversion, ConvertTo and Export. If you execute a Get-Command ConvertTo-*, you will find six cmdlets and one function. The command and output are shown here:

PS C:\Users\Administrator> Get-Command ConvertTo-*

CommandType                         Name  Version                        Source
-----------                         ----  -------                        ------
   Function    ConvertTo-HgsKeyProtector  1.0.0.0                     HgsClient
     Cmdlet                ConvertTo-Csv  3.1.0.0  Microsoft.PowerShell.Utility
     Cmdlet               ConvertTo-Html  3.1.0.0  Microsoft.PowerShell.Utility
     Cmdlet               ConvertTo-Json  3.1.0.0  Microsoft.PowerShell.Utility
     Cmdlet       ConvertTo-SecureString  3.0.0.0 Microsoft.PowerShell.Security
     Cmdlet       ConvertTo-TpmOwnerAuth  2.0.0.0         TrustedPlatformModule
     Cmdlet                ConvertTo-Xml  3.1.0.0  Microsoft.PowerShell.Utility

Our primary focus revolves around converting pipeline contents to CSV, HTML, or XML.

Using ConvertTo-CSV

The ConvertTo-Csv cmdlet is pretty basic. Here is the syntax:

ConvertTo-Csv [-InputObject] <psobject> [[-Delimiter] <char>] [-NoTypeInformation]  [<CommonParameters>]

The value for the parameter -InputObject is the contents of the PowerShell pipeline. It is referenced as the <psobject> in the syntax. This piped-in object can be identified with a legacy $_ or the more recent $PSItem.

Many administrators still use the older $_ to use the object in the pipeline, but $PSItem is the current identifier. Either reference works in your expressions, but you may see one or the other in scripts, help files, and About_ documents so you have to be familiar with both. We recommend that you transition to $PSItem, but there aren’t any advertised plans to retire $_.

The -Delimiter parameter lets you change the character that separates the various values. If you don't want commas to separate your values but want a semicolon instead, you can use the -Delimiter ";" parameter.
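
For example, to produce semicolon-separated output instead of comma-separated output (useful in locales where the comma is the decimal separator), you could run a sketch like this:

```powershell
# Convert three processes to semicolon-delimited strings.
Get-Process | Select-Object -Property ProcessName, Id -First 3 |
    ConvertTo-Csv -Delimiter ";" -NoTypeInformation
```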

The ConvertTo-Csv command will take the objects and convert them into comma-separated values. Each object will be converted into a string, and these strings will replace the contents of the pipeline. This ConvertTo-Csv command will not provide any output to the console. You are converting the contents of the pipeline to CSV format and replacing the current contents of the pipeline with resultant CSV strings. You will have a single string for each object.

The very first thing placed in the pipeline is the type information. You may not want this information in either the pipeline or in an output text file. You can suppress it with the -NoTypeInformation parameter. The following is an example of the output with the type information:

Get-EventLog System | Select-Object EventId, EntryType -First 3 | ConvertTo-Csv | Out-File "C:\Events.txt"

This command provides the following output in your text file:

#TYPE Selected.System.Diagnostics.EventLogEntry
"EventID","EntryType"
"7040","Information"
"7040","Information"
"7040","Information"

If you want to import this CSV into something like Excel, you will need to clean it. Notice the difference when you remove the type information:

Get-EventLog System | Select-Object EventId, EntryType -First 3 | ConvertTo-Csv -NoTypeInformation | Out-File "C:\Events.txt"

"EventID","EntryType"
"7040","Information"
"7040","Information"
"7040","Information"

This result is much easier to manipulate and import. Remember that ConvertTo-Csv changes the contents of the pipeline from a collection of objects into a collection of strings. All methods and actions are discarded. The first object in the pipeline has its properties used to define the field headers, with the values following. If subsequent objects in the pipeline don't have a property that was defined by the first object, or have no value for a defined property, the place where the value would be stored is filled with a null value, represented by two adjacent commas. If you have mixed objects in the pipeline and subsequent objects have additional properties that weren't included in the first object, those additional properties are simply discarded.

Using Export-Csv

Export-Csv creates a CSV file of the objects in the pipeline. Here is the syntax:

Export-Csv -InputObject <PSObject> [[-Path] <String>] [-LiteralPath <String>] [-Force] [-NoClobber] [-Encoding <String>] [-Append] [-UseCulture] [-NoTypeInformation] [-WhatIf] [-Confirm] [<CommonParameters>]

These parameters are similar to what you have previously seen with the Out-File cmdlet. An important point is that you don't want to format the output prior to conversion. If you do, the Export-Csv cmdlet will convert the formatting properties to a CSV file instead of the object properties. You can select the properties of the object by using the Select-Object cmdlet, because when you select specific properties, all of the other properties are removed.
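
Putting that advice together, a minimal sketch (the file path is an assumption) looks like this. Note that Select-Object, not a Format-* cmdlet, trims the properties before the export:

```powershell
# Select the object properties first, then export. Do NOT pipe
# Format-Table output into Export-Csv, or you will export the
# formatting objects instead of the process data.
Get-Process | Select-Object -Property ProcessName, Id, WorkingSet |
    Export-Csv -Path "C:\Processes.csv" -NoTypeInformation
```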

Using ConvertTo-Html

The ConvertTo-Html cmdlet converts PowerShell objects in the pipeline to either an HTML page or an HTML fragment. Here is the syntax:

ConvertTo-Html [-InputObject <PSObject>] [[-Property] <Object[]>] [[-Body] <String[]>] [[-Head] <String[]>] [[-Title] <String>] [-As <String>] [-CssUri <Uri>] [-PostContent <String[]>] [-PreContent <String[]>] [<CommonParameters>]

Get-EventLog System | Select-Object EventId, EntryType -First 3 | ConvertTo-Html | Out-File "C:\Events.htm"

This produces the following output:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>HTML TABLE</title>
</head><body>
<table>
<colgroup><col/><col/></colgroup>
<tr><th>EventID</th><th>EntryType</th></tr>
<tr><td>7040</td><td>Information</td></tr>
<tr><td>7040</td><td>Information</td></tr>
<tr><td>7040</td><td>Information</td></tr>
</table>
</body></html>

This is displayed in a browser as illustrated in Figure 2.12.

Screenshot of ConvertTo-Html output in the default table format, which converts PowerShell objects in the pipeline to either an HTML page or an HTML fragment.

FIGURE 2.12 ConvertTo-Html output in the default table format

Note that you can use the -Head, -Body, and -Title parameters to replace any of these default entries with a custom value of choice. The -As parameter lets you choose between a table and a list. Table is the default and will be used if you omit the -As parameter. Figure 2.13 shows the same output with an -As List parameter added.

Screenshot of the same ConvertTo-Html output with an -As List parameter added.

FIGURE 2.13 ConvertTo-Html with -As List

You can also use a -Fragment parameter, which produces only an HTML table. All of the other HTML elements, such as <Head>, <Body>, etc., are discarded.
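
As an illustrative sketch, you might combine a custom title with pre-content, or emit only the table fragment for embedding in a larger page:

```powershell
# Full page with a custom title and a heading above the table.
Get-EventLog System | Select-Object EventId, EntryType -First 3 |
    ConvertTo-Html -Title "System Events" -PreContent "<h1>Latest Events</h1>" |
    Out-File "C:\Events.htm"

# Only the <table> element, suitable for embedding elsewhere.
Get-EventLog System | Select-Object EventId, EntryType -First 3 |
    ConvertTo-Html -Fragment
```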

There is no equivalent cmdlet to export to an HTML formatted file. You can perform a redirect, as illustrated here:

Get-EventLog System | Select-Object EventId, EntryType -First 3 | ConvertTo-Html > c:\TopSystemLogs.html

Using ConvertTo-Xml

ConvertTo-XML will take objects in the PowerShell pipeline and convert them to an XML representation of the objects. When there are several objects in the pipeline, ConvertTo-Xml will create a single XML document that includes all of the objects. This cmdlet takes the created XML and replaces the content currently in the pipeline. Here is the syntax:

ConvertTo-Xml [-Depth <Int32>] [-InputObject] <PSObject> [-NoTypeInformation] [-As <String>] [<CommonParameters>]

-Depth controls the level of conversion when an object’s properties contain other objects. The lower object, in turn, may have properties that contain even more objects. You need to ensure that when you have objects that contain other objects, you let PowerShell know the depth of conversion you want; otherwise, you will lose the XML representation of these contained objects. This setting can be overridden for the object types in the Types.ps1xml files.

Types.ps1xml files allow you to add additional members to object types in PowerShell. This allows the addition of extended type data, consisting of additional properties and methods, to objects. The Types.ps1xml file is located in the PowerShell installation directory and is loaded any time a PowerShell session is started. You will also load a Types.ps1xml file when you import a module into a session. You can also temporarily add extended type data by using the Update-TypeData cmdlet; this data is not saved to a file and is discarded when the session closes.

-NoTypeInformation removes the type attribute from the object nodes. You have seen the results of this parameter in the ConvertTo-Csv cmdlet.

The -As parameter instructs PowerShell to convert to one of three formats. -As String converts the objects into a single string. -As Stream converts the objects in the pipeline into an array of strings. -As Document returns an XMLDocument object. -As Document is the default and will be used in the absence of any -As parameter being specified in the cmdlet.

Remember that ConvertTo-Xml converts the objects in the pipeline and then replaces the pipeline with the converted objects. The cmdlet doesn’t save anything to a file. Saving the results will require additional cmdlets or redirection. Here is example code where you specified the -As Document parameter.

Get-EventLog System | Select-Object EventId, EntryType -First 3 | ConvertTo-Xml -As Document | Out-File "C:\Events.htm"

This produces the following output:

xml                            Objects
---                            -------
version="1.0" encoding="utf-8" Objects

The input objects, in this case, don’t seem to convert well into a standard XML document. The following is the output with the -As String parameter that loads the entire pipeline into a single string:

<?xml version="1.0" encoding="utf-8"?>
<Objects>
  <Object Type="System.Management.Automation.PSCustomObject">
    <Property Name="EventID" Type="System.Int32">7036</Property>
    <Property Name="EntryType" Type="System.Diagnostics.EventLogEntryType">Information</Property>
  </Object>
  <Object Type="System.Management.Automation.PSCustomObject">
    <Property Name="EventID" Type="System.Int32">7036</Property>
    <Property Name="EntryType" Type="System.Diagnostics.EventLogEntryType">Information</Property>
  </Object>
  <Object Type="System.Management.Automation.PSCustomObject">
    <Property Name="EventID" Type="System.Int32">7036</Property>
    <Property Name="EntryType" Type="System.Diagnostics.EventLogEntryType">Information</Property>
  </Object>
</Objects>

Here is the output with the -As Stream parameter, which loads each object as a separate string that is stored as an array:

<?xml version="1.0" encoding="utf-8"?>
<Objects>
<Object Type="System.Management.Automation.PSCustomObject">
  <Property Name="EventID" Type="System.Int32">7036</Property>
  <Property Name="EntryType" Type="System.Diagnostics.EventLogEntryType">Information</Property>
</Object>
<Object Type="System.Management.Automation.PSCustomObject">
  <Property Name="EventID" Type="System.Int32">7036</Property>
  <Property Name="EntryType" Type="System.Diagnostics.EventLogEntryType">Information</Property>
</Object>
<Object Type="System.Management.Automation.PSCustomObject">
  <Property Name="EventID" Type="System.Int32">7036</Property>
  <Property Name="EntryType" Type="System.Diagnostics.EventLogEntryType">Information</Property>
</Object>
</Objects>

Using Export-Clixml

Export-Clixml is quite similar to ConvertTo-Xml; but just as with Export-Csv, the output will be saved to a file and won’t just replace the contents of the pipeline. Here is the syntax:

Export-Clixml [-Depth <Int32>] [-Path] <String> -InputObject <PSObject> [-Force] [-NoClobber] [-Encoding <String>] [-WhatIf] [-Confirm] [<CommonParameters>]

Encrypting an Exported Credential Object with Export-Clixml

One frequent use of Export-Clixml is to export credentials in an encrypted format. This allows you to store credentials you would use in a script without exposing the credentials in cleartext inside the body of a script or in the pipeline itself.

To get the credential, you can use the Get-Credential cmdlet to pop up a dialog box and put the username and password into a variable. Get-Credential can use a generic credential dialog box, a custom dialog box with a message, or it can prompt the user via the command line. The command-line prompting requires a Registry entry.

Here is the required code to allow command-line prompting for credentials:

Set-ItemProperty "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds" -Name "ConsolePrompting" -Value $True

Here is the syntax:

Get-Credential [-Credential] <PSCredential> [<CommonParameters>]

The simplest form is getting the credential and storing it into a variable as follows:

$CredentialStorageVariable = Get-Credential -Credential "Contoso\ServiceAcct01"

This prompts the user for a username and password and then creates a PSCredential object. Since you used the -Credential parameter, the user name field will already be populated, but it is still editable. If you leave off the -Credential parameter, all of the fields will be blank. Refer to Figure 2.14 to see the Get-Credential dialog box.

Screenshot of the Get-Credential dialog box to enter your credentials.

FIGURE 2.14 Get-Credential dialog box

You can also create the dialog box with a custom message. This is called a MessageSet. This is accomplished using the following code:

$credentialStorageVariable = Get-Credential -Message "We need your credentials to connect to the remote server" 

The custom message is displayed, as illustrated in Figure 2.15.

Screenshot of the Get-Credential MessageSet dialog box, where we need to enter our credentials to connect to the remote server.

FIGURE 2.15 Get-Credential MessageSet dialog box

The PSCredential object is then stored in the identified variable. In this case, you store the results in the $credentialStorageVariable. This is just a variable name you created. You can call the variable whatever you like.

The PSCredential object includes only two properties, UserName and Password. You can view the contents of both by simply typing the variable on its own line:

$credentialStorageVariable
UserName                                     Password
--------                                     --------
Contoso\ServiceAcct01    System.Security.SecureString

Note that the username is stored in cleartext in the variable, but the password is stored as a secure string. You can access these values individually by specifying the variable and then appending the name of the member you want, prefaced with a period. If you want to look at the value of the Username property of the variable, you can state the variable and append the property. Here is how you can view just the contents of the UserName and the Password property of your credential variable:

$credentialStorageVariable.UserName

Contoso\ServiceAcct01

$credentialStorageVariable.Password

System.Security.SecureString

Saving the Credentials to an XML File

Once you have the credential stored in a variable, you can then export the credentials into an XML file. Here is the code showing these two operations:

$credentialStorageVariable = Get-Credential
$credentialStorageVariable | Export-Clixml c:\OurCredentialFile

Note that you can replace the path with another variable to make this more modular. This command encrypts the object using the Windows Data Protection API.

If you load the resultant file into a text editor, this is what you will see:

<Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04">
  <Obj RefId="0">
    <TN RefId="0">
      <T>System.Management.Automation.PSCredential</T>
      <T>System.Object</T>
    </TN>
    <ToString>System.Management.Automation.PSCredential</ToString>
    <Props>
      <S N="UserName">Contoso\ServiceAcct01</S>
      <SS N="Password">01000000d08c9ddf0115d1118c7a00c04fc297eb010000000cf4a23dfab49a4c85b4026968824dc300000000020000000000106600000001000020000000285a7b2ffb2022b66f3a89d321fcc13535f7fa75abff48479265484b9aae9b34000000000e8000000002000020000000e802cd61458770bd5213de0bdc0944722abf28a6c86d7d81c8bb9ba96112860630000000594c5e154bdec29b151d55c654de32ddfc222a3cd0a20fbedb6485e440bd516c3cdc225b722636f1d02edb4fe027227f4000000025b101384a5ead762f526d315f71b1291c54368f6ffffaee4a6027aa17e9529bdf8bb26d498a715aec9e56bdb5c7ff497e3ca27383d0894169220c5c8e8cb55a</SS>
    </Props>
  </Obj>
</Objs>

The username is still in cleartext, but the password is encrypted.

To import the credential into the script, simply load a variable with the object using the Import-Clixml command, as shown here:

$newCredentialVariable = Import-Clixml c:\OurCredentialFile

The Password property is still encrypted, but the credentials can now be used throughout your script.
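
For example, the imported credential can be handed to any cmdlet that accepts a -Credential parameter. The server name and cmdlet choice in this sketch are assumptions for illustration:

```powershell
# Reload the encrypted credential and use it for a remote command.
# "Server01" is a hypothetical computer name.
$newCredentialVariable = Import-Clixml c:\OurCredentialFile
Invoke-Command -ComputerName Server01 -Credential $newCredentialVariable -ScriptBlock {
    Get-Service -Name Spooler
}
```

Keep in mind that the Windows Data Protection API ties the encrypted password to the user account and computer that created the file, so the XML file can be decrypted only by that same user on that same machine.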

Importing Data into PowerShell

When you import data from a file, or other external storage, you will be converting this formatted data back into objects. These objects can then be loaded into the pipeline and passed to other commands. PowerShell understands a number of formats, but not all formats are equally import friendly.

There is a difference between importing data and reading data. Importing data is like reading a CSV file into a spreadsheet program, such as Microsoft Excel. With Excel, you will have column headings, typically taken from the first line of the file. This gives PowerShell the property-to-value mapping: the header lists all of the properties these objects will use. As the rest of the file is imported, the CSV values are added to the appropriate properties. This makes it easy to pass the objects, with their properties intact, to other commands for further processing.

If you use the Get-Content cmdlet, you aren’t importing the file, you are merely reading it. No properties are defined, and the values are all loaded together. This is similar to simply reading the file with a basic text editor, such as Notepad. No intelligence is attached to match properties and values. The data that is read in may not have the expected structure, and you will need to ensure that your script gets the type of information it accepts.

Here is the syntax for the Import-CSV command:

Import-Csv [[-Path] <String[]>] [[-Delimiter] <Char>] [-Encoding {Unicode | UTF7 | UTF8 | ASCII | UTF32 | BigEndianUnicode | Default | OEM}] [-Header <String[]>] [-LiteralPath <String[]>] [<CommonParameters>]

When Import-Csv is used, it will first try to read the header of the input file to determine the properties of the objects it is importing. If there is no header, or it is empty, Import-Csv will insert a default header row name and will display a message. Before PowerShell version 3, the script would simply fail.
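
If a file has no header row, you can supply the property names yourself with the -Header parameter. Here is a sketch; the file contents and path are assumptions:

```powershell
# Suppose C:\NoHeader.csv contains lines like: "svchost","1234"
# Supply the column names since the file has no header row.
Import-Csv -Path "C:\NoHeader.csv" -Header "ProcessName","Id"
```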

Processing Pipeline Data

The PowerShell pipeline can hold many objects with a variety of members. You may not want all of the objects that are loaded into the pipeline. This is particularly true if you have some complex sorting or processing further down in the script. You need to eliminate those objects that are unneeded. This process of selectively removing objects is called filtering.

When you want to remove an object, you typically compare it to some criteria. If the object matches the criteria, it will be evaluated as $True. Objects that come out as $True will be allowed to remain in the pipeline. If the object doesn’t match your criteria, the object will be evaluated as $False and it will be removed.
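
Filtering is most often done with the Where-Object cmdlet, which evaluates your criteria against each object in the pipeline. As a brief preview sketch:

```powershell
# Keep only processes whose working set exceeds 100 MB;
# objects for which the script block evaluates to $False are dropped.
Get-Process | Where-Object { $PSItem.WorkingSet -gt 100MB }
```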

Using Comparison Operators

Comparison operators come in many forms. They can be case-sensitive or case-insensitive. Some comparison operators allow wildcards. Some comparison operators will look for more than just string values stored in the properties of an object. Some will use regular expressions. It is useful to test comparison operators while writing scripts to ensure you are getting the value you expect. If you use a scalar value, in other words a single value, these comparison operators will return a $True or $False value. $True and $False are known as Boolean values. Look at the following code:

10 -eq 10

The -eq operator means equals. This will evaluate as $True. One-to-one comparisons return only Boolean values. The two values you are comparing have names: in this case, the one on the right is the testing value, the data you are testing, and the one on the left is the reference value. Both of these values are also known as operands.

If you have multiple values to compare, PowerShell will return any values that match, instead of returning $True or $False. Look at the following code and its result:

10,8,35,17,99,8,17,888,786 -eq 8

8
8

This is comparing a collection of numbers to the number 8. Note that none of the values are enclosed in quotation marks. If the values were surrounded by quotes, they wouldn't be numbers; they would be strings. We will discuss data types a bit later.

Because the values have to match exactly, you get two results back. If no values match, this operation returns nothing. Let's see what happens when you wrap it all up in quotation marks and evaluate the values as a bunch of characters, also known as strings:

" 8","8 ", " 8 ", "888" -eq "8"

This returns nothing because none of the values exactly match. Enclosing the values in quotation marks makes them strings of text. A space is a character, just like an 8. When you compare SPACE8 and 8SPACE to 8, they obviously don't match. You will see a comparison operator that can use wildcards a bit later.

This type of comparison, taking an expected value and comparing it to the actual contents of a variable or specific text, is a very useful troubleshooting technique. PowerShell will let you enter most comparisons directly into the console to test your logic. This is useful when building scripts to ensure you are evaluating the correct values and are getting the expected results.

Here are some basic comparison operators you can use to get started:

  • -eq: This means equals. 10 -eq 10 comes out as true.
  • -ne: This means not equal. 11 -ne 10 comes out as true.
  • -gt: This means greater than. 11 -gt 10 comes out as true.
  • -lt: This is a lowercase L, but you can put it in uppercase. This means less than. 11 -lt 10 is false.
  • -le: This means less than or equal to. 10 -le 10 is true, but so is 2 -le 10.
  • -ge: This means greater than or equal to. 11 -ge 10 is true, but so is 200 -ge 10.

All of these operators are case-insensitive. "A" -eq "a" is true. If you want your comparisons to be case-sensitive, you should precede the operator with a c. In other words, -eq becomes -ceq. If you want to make case-insensitivity explicit, precede the operator with an i. In other words, -eq becomes -ieq.
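
You can verify this behavior directly at the console:

```powershell
"A" -eq "a"    # True  - the default comparison is case-insensitive
"A" -ceq "a"   # False - the c prefix forces case sensitivity
"A" -ieq "a"   # True  - the i prefix makes insensitivity explicit
```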

It is important to understand that these comparisons all require exact values. None of these comparison operators allow for any wildcards.

Using Wildcards and the -like Operator

Sometimes you will want to allow a variety of values to match. This is where you can use wildcards. Wildcards can be used only with the -like operator and its variants, such as -clike. These are similar to the -eq operator, but you can use wildcards. The wildcard character * matches any number of characters, while ? matches exactly one character.

  • "AAA" -like "*" is True.
  • "AAA" -like "*a" is True.
  • "aAA" -like "A?A" is True.
  • "ABA" -like "A?A" is also True.

Consider the following:

"Wyoming" -like "W*om?ng" 

The long form of what you are saying with this comparison is “Does the first value have a W in the first position, with any number of any characters between the W and the o? Is o directly followed by m? The next character after the m doesn’t matter, but there has to be a character, and the string’s next two characters have to be n and g. Also, there can’t be anything after the g in the first value.” The comparison will evaluate as True. The operator -clike does the same thing but is case-sensitive.

 "Wadfakdsfbasd123sdfasjomXng" -clike "W*om?ng" 

This comparison will also be evaluated as True.

Note that if you try 1 -gt "a", you will get an error, because the 1 without quotes around it is interpreted as a System.Int32 value, whereas "a" is a string. When you compare literal values, the type of the first value determines the expected type of the second value, so PowerShell tries to convert "a" to an integer and fails; some types are simply not compatible with others.

If you try "1" -gt "a", you are comparing the character "1" to the character "a". This evaluates as $False because the character "1" sorts before the character "a" in PowerShell. If you try "a" -gt 1, you will get $True because the "a" value is a string type, so PowerShell will assume 1 is also a string.

Type mismatch is a common mistake in PowerShell. If the values are in a variable, PowerShell may be able to convert. Obviously, the value of “ABCE” will never automatically convert to a 32-bit integer. It is good practice to test comparisons against known values to ensure you have the correct type and that your operators are acting as expected.

Exploring Common Data Types

There are many data types in PowerShell, some of which are quite obscure. You can also create your own data types. Here are the most common ones you will use:

  • [string]: Fixed-length string of Unicode characters
  • [char]: Unicode 16-bit character
  • [byte]: 8-bit unsigned integer
  • [int]: 32-bit signed integer
  • [long]: 64-bit signed integer
  • [decimal]: 128-bit decimal value
  • [single]: Single-precision 32-bit floating-point number
  • [double]: Double-precision 64-bit floating-point number
  • [DateTime]: Holds the date and the time. Be careful of the format.
  • [xml]: XML object
  • [array]: Array of values, like rows and columns of a spreadsheet
  • [hashtable]: Hash table object
  • [void]: Discards the value

Consider the following:

[string] $myStringVar = "123.456"

This will load the variable $myStringVar with the characters 123.456. Remember this is just a bunch of characters. It is not a number. What do you expect will happen with the following commands?

[string]$MyStringVar = "123.456"
213.45 -gt $MyStringVar

PowerShell will look at the first value, 213.45, and decide that this is a [double] data type because it isn't surrounded by quotation marks and has a decimal point. When PowerShell gets to the second value, it sees that the data type has been cast as a [string]; but when looking at the value, PowerShell realizes that it can be converted to a [double] because it doesn't contain anything but numbers and a decimal point. Because 213.45 is larger than 123.456, the comparison evaluates as True. Note that $myStringVar remains a data type of [string]. The conversion is only for this comparison.

You need to be cautious when you convert to an [int] data type from a fractional value, such as a [double], or from a [string] that contains only numeric characters and a decimal point. PowerShell will perform a Round() operation, rounding the value up or down. The value 123.1 will convert to 123, and the value 123.9 will convert to 124. (Midpoint values are rounded to the nearest even number, so 123.5 converts to 124, but 124.5 also converts to 124.) This can give you unexpected results that can be tricky to troubleshoot.

If you just want to truncate the decimals to the right of the decimal point, you will need to do a separate operation and call the system math function. Examine the following code:

$myVar = 1234.6
[int]$myVar

This will give the result of 1235 because PowerShell, in the background, performs a Round() operation during the cast. (Note that if you had declared $myVar as [long], the rounding would have happened at assignment and the fractional part would already be gone.) PowerShell doesn’t have separate math functions of its own, so you will need to call [System.Math], or simply [Math]. Because Truncate is a static method, you separate the class name from the method name with double colons (::) when you call it. Static simply means the method already exists on the [System.Math] class itself, so you can use its Truncate method without creating an object.

Here is the code example:

$myVar = 1234.6
[Math]::Truncate($myVar)

This will return the value of 1234. It simply slices off everything to the right of the decimal point. Note that this is also a temporary operation: $myVar still equals 1234.6. The result of the Truncate method applies only to that single call; the value of the variable itself doesn’t change. If you wanted to make the change permanent, you would need to assign the value directly back to the variable, either by typing the characters or by loading $myVar from another variable with the [int] data type. You could also simply use this operation to load a different variable:

$myTruncatedValueVar = [Math]::Truncate($myVar)

If you want to get a list of other static methods inside of [System.Math], you can use the following code:

[System.Math] | Get-Member -Static

This will give you a list of many mathematic methods.
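For example, here are a few of the static methods you will find in that list (a quick sketch; the values shown are what these .NET methods return):

[Math]::Floor(1234.6)    # 1234 - always rounds down
[Math]::Ceiling(1234.2)  # 1235 - always rounds up
[Math]::Abs(-42)         # 42
[Math]::Pow(2, 10)       # 1024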

Determining Data Type with -is

If you want to determine a value’s data type, you can use the type operator -is. You can use the following code to verify the data type of your variable, as shown here:

[string] $myStringVar = "123.456"
$myStringVar -is [String]

This will evaluate as True.

$myMysteryVar = "1234.5678"
$myMysteryVar -is [String]

This will also evaluate as True because you placed the characters inside of quotation marks. You can permanently convert the data type to an [int] by doing the following operation:

$myMysteryVar = "12345.2343"
$myMysteryVar = 12345
$myMysteryVar -is [string]
$myMysteryVar -is [int]

The first -is comparison will evaluate as False, and the second will evaluate as True. $myMysteryVar was a string; but when you assigned 12345 without quotes or a decimal point, it became an integer. What will be the outcome following the execution of the following lines of code?

$myStringVar = "12345"
$myIntVar = 98765
$myStringVar = $MyIntVar
$myStringVar -is [string]

This will evaluate as False. You replaced the value in $myStringVar with the contents of $myIntVar. PowerShell knows that the content of $myIntVar is [int] data type. Even though $myStringVar started off as a [string], PowerShell converted it to an [int] and replaced the value. This automatic conversion can be very handy when it is done intentionally. If you don’t pay attention to data type, you can end up with scripts that sometimes work and sometimes don’t. You should test with known values and be particularly careful when handling input data, especially data manually entered by people.

You can also invert this type operator by using -isnot. Note that the type operators, -is and -isnot, will return only True or False.
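For example, a quick check with both operators (using the string variable from earlier) looks like this:

[string] $myStringVar = "123.456"
$myStringVar -is [int]        # False
$myStringVar -isnot [int]     # True
$myStringVar -isnot [string]  # False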

Finding Portions of Strings with -match

Sometimes you need to find out whether certain text is contained within a string or a collection of strings. The -match operator can’t search against anything except strings, and it treats the pattern on the right as a regular expression. Examine the following code:

"January" -match "Jan"

This is a scalar input, which means it is a single value rather than part of an array or collection of data; a single reference value is the important part. When you run this code, it will evaluate as True. Because the input is scalar, this operation will also populate the $Matches automatic variable. You can view the value of $Matches here:

$Matches

Name          Value
----          -----
   0            Jan

That shows that Jan did match something in the string. But what if you wanted to see what exactly you matched? Examine the following code:

"Srv-Den01", "DenaliRRAS04", "DC-Hedenar-05", "Lon-Win16-CA-05" -match "dEn"

This provides the following result:

Srv-Den01
DenaliRRAS04
DC-Hedenar-05

Because you are comparing your test value to multiple reference values (an array), you will not get a True or False. Instead, you will get a list of all of the matches. Notice that -match is case-insensitive by default, which is why "dEn" matched the "Den" in each name. This form will not populate the $Matches automatic variable, so if you run $Matches again to view the variable’s value, it will still say Jan.

You can invert the selection to find which strings do not match with the following code:

"Srv-Den01", "DenaliRRAS04", "DC-Hedenar-05", "Lon-Win16-CA-05" -notmatch "dEn"

with the output of:

Lon-Win16-CA-05

This shows you the reference values that did not match. Because the reference values are in an array, you will not get a True or False; you will get a list of every reference value that satisfies the operator, even if only one does. You can use the -match operator to quickly locate strings for further manipulation.

Using the Containment Operators -contains and -notcontains

Containment operators return only Boolean results, True or False. Many people confuse them with the -like operator covered earlier. When you use -contains with a single value, one of the reference values (the left operand) has to exactly match the right operand, or test value. Here’s an example:

"Mark" -contains "M"

This will evaluate as False because it isn’t the exact match. What about the following?

"M","Mark" -contains "m"

This evaluates as True because one of the reference values exactly matches the test value. So, this is much like -eq, except "M","Mark" -eq "m" will return M, because when you compare against an array, or collection, of reference values, -eq returns all the reference values that match.

The advantage of -contains is that it returns only a Boolean value of True or False. You won’t have to do additional processing if you have more than one match. Remember that if the -like operator matches more than one reference value, it will not return True or False. When you match more than one reference, -like will return all the matching values.
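To see the difference side by side, consider the following comparisons against a small array of hypothetical server names:

$servers = "Srv-Den01", "Srv-Den02", "Srv-Lon01"
$servers -like "Srv-Den*"       # Returns Srv-Den01 and Srv-Den02, not a Boolean
$servers -contains "Srv-Den01"  # Returns True - an exact match
$servers -contains "Srv-Den"    # Returns False - not an exact match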

What happens if you compare an array to an array? Consider the following code:

"M", "Mark" -contains "m", "mar", "M", "Mark"

This always returns False because when the test values, the values on the right of the comparison, are an array, PowerShell shifts to what is called reference equality. What this means is that all of the attributes and properties of the reference value, the operands on the left, have to match all of the properties of the operands on the right. Look at the following code example:

"M","Mark" -contains "M","Mark"

Oddly enough, this also evaluates as False. Reference equality means that all attributes and properties of the reference value must match the test values’ attributes and properties exactly. It looks like they match, but due to the strangeness that is reference equality, they are not exactly the same.

This is noteworthy because many administrators use the -contains or -notcontains operator to try to compare an array to an array, and they run into the wall of reference equality and can’t figure out why their code fails. Digging into all the nuances of reference equality is beyond the scope of this chapter. Most administrators will just use -like; just remember that -like doesn’t always return a Boolean True or False, so you may need to code additional steps to provide a Boolean return.

Using the -in and -notin Operators

These operators will always return a Boolean True or False value. The test value is on the left this time, and the reference value is on the right. This is exactly the opposite of how -contains is set up. Also, the -in operator will use reference equality if your test value is an array. Just remember, the test value in this command is on the left. Consider the following code:

"J" -in "Jan", "January", "J" 

This evaluates as True because your test value, “J”, exactly matches at least one of your reference values. Consider the following code:

"J" -in "Jan", "January" 

This will evaluate as False because your test value, “J”, doesn’t exactly match any of the reference values. It is similar to -like, except you can’t use wildcards and you will return only a Boolean True or False.

If you switch the reference and the test values, things go off into reference equality land as now the test values are an array.

"Jan", "January", "Janus", "Bob" -in "Jan"
"Jan", "January", "Janus", "Bob" -in "Jan", "January", "Janus", "Bob"

Both of these will return False. The primary point is you need to know that the reference and test values are on opposite ends with the -in and -notin operators. Unless you understand how reference equality works, you need to ensure you have a single test value for both -in and -contains. This will give you a nice, clean Boolean True or False. If you try to get too clever and start using arrays as your test value, the results may be unexpected.

Using the -replace Operator

You may get data that requires you to change the input values to something else. For example, you may need to change a server’s name value from “PHX-SER-01” to “SEASRV-01”. This is what you can do:

"Phx-Ser-01" -replace "PHX-SER","SEASRV"

SEASRV-01

The format is INPUTSTRING -replace "MATCHME" , "Replacement".

Another interesting thing is the replacement doesn’t have to be the same size. Here’s an example:

"ABCDEFG" -replace "de","ILOVECOOKIES"

This will return ABCILOVECOOKIESFG.
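The match pattern in -replace is actually a regular expression, so you can also do pattern-based replacements. Here is a small sketch; note that regular-expression special characters, such as the dot, must be escaped with a backslash when you want them matched literally:

"Server01" -replace "\d+","99"  # Returns Server99 - \d+ matches the digits
"A.B.C" -replace "\.","-"       # Returns A-B-C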

You can also use -replace to delete text by leaving off the replacement value. Look at the following code and result:

"ABCDEFG" -replace "de"
ABCFG

The -replace operator gives you a lot of power to manipulate items.

Using Variables

Variables are areas of memory that contain something. They can contain different types of data, as previously discussed. You will use variables to hold items that are used with commands, and you typically use variables to store the results of commands and provide input into other commands.

Variables are identified as a text string that starts with a $. The name $myVariable is one example. Note the variable itself is a text string, but that doesn’t necessarily mean that the variable contains a text string. Think of a variable as a pointer to an area in memory that has some value, even if the value is NULL.

A variable’s name is a text string and is not case-sensitive. Technically, a variable name can include all sorts of characters, including spaces and special characters, but placing them in a variable’s name causes mass confusion as your code becomes difficult to read and difficult to use. Unless you wrap the name in braces, PowerShell will refuse any variable name that contains spaces or special characters, with the exception of the underscore (_) character. If you insist on using characters such as a hyphen or a space, you will need to enclose the name in braces, as illustrated:

${my poorly-chosen variable name} = "This is a really bad idea."

Sometimes, you have to reference a variable that may have special characters. A good example is the environmental variable ${ENV:ProgramFiles(x86)}. If you needed to get a list, using your old friend Get-ChildItem, you would need to do the following:

Get-ChildItem ${ENV:ProgramFiles(x86)}

It is also a great idea to have a naming standard for variables. You should aim for names that make the purpose of a variable obvious: $v is much harder to understand than $listOfServers. If there are several administrators, or coders, you want an approved naming standard that is intuitive, documented, and rigorously enforced.

Exploring Types of PowerShell Variables

There are three types of variables: preference, automatic, and user-created. You already explored preference variables when you set up your custom PowerShell console. These variables are automatically created and populated with default values when your PowerShell session starts. You can change these values inside of your session, but the changes are lost when you close your session. If you want these preference variables to keep your changes between sessions, you will need to add the changes to your profiles.
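For example, you can view and change the $ErrorActionPreference preference variable, and the relevant About file lists all the rest:

$ErrorActionPreference               # View the current value; Continue is the default
$ErrorActionPreference = "Stop"      # Change it for this session only
Get-Help about_Preference_Variables  # See the full list of preference variables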

Automatic variables are created by PowerShell and are automatically updated when PowerShell needs to keep track of something. $Matches is an example of an automatic variable. Users can’t directly change these variables, but actions you do with cmdlets and operators, such as -match, will cause PowerShell to automatically change the values.

User-created variables are created from the console and from within scripts and functions. These variables exist only during the session and will be forgotten unless you save them in your profile.

Clearing and Removing Variables

To delete the value of a variable, you can simply set its value to $null, or you can use the Clear-Variable cmdlet, as illustrated here:

$removeMyValue = $NULL
Clear-Variable -Name removeMyValueToo

Notice that these variables still exist; their values have simply been set to $null. They are still taking up space.

If you want to actually remove a variable, you can use the following (note that Remove-Variable takes the variable’s name without the $ prefix):

Remove-Variable -Name myUnneededVar

This will clear the value and remove the variable from memory. You still need to be aware of scoping because if you remove a variable that is locally scoped, only the locally scoped variable will be cleared. If there is a parent variable of the same name, you will now see that variable in the local scope.

Using the Variable Drive

PowerShell creates a virtual drive that acts like a drive with a file system. It holds all the variables, and their assigned values, that exist in your current session. You can change to the Variable: drive by treating it like a file system drive, as follows:

Cd Variable:

This will, of course, call an alias. If you want to do this without the alias, you can do the following:

Set-Location Variable:

You can see the contents of this Variable: drive by changing your location and running the Dir or ls alias, or by using your friend Get-ChildItem, as illustrated here:

Set-location Variable:
Dir
Get-ChildItem Variable:

Using Environmental Variables

PowerShell will store environmental variables in another PowerShell drive called Env:. This drive is used to store information such as the Windows installation directory, the user profile directory, and the location of the temp directory. To view its contents, you can perform the same operations, as shown here:

Set-location env:
Dir
Get-ChildItem env:
Get-Item env:

These objects in the Env: drive won’t have child items, so Get-Item and Get-ChildItem will return the same information.

Environmental variables are inherited by child sessions, which allows a parent session to share values with its children. You can view and manipulate environmental variables by prefacing the variable name with $env:, as shown here:

$env:Tmp
$windowsdirectory = $env:windir
$env:myNewEnvironmentalVariable = "Data that is available to the parent and child"

Using Functions

So far, you have executed cmdlets pretty much one at a time. Functions let you collect any number of PowerShell statements, give this series of statements a name, and execute them, one after the other. You can pass parameters to a function. You can make your own parameters for your function. You can take the output of a function and put it in a variable, or load up a pipeline, or even send the output to other cmdlets or functions.

Seeing Them in Action

Simply enter Get-Help about_Functions to see that a function is a block of code that is given a name; the About file also shows the basic format of a function. Here is the code you would use to create your function:

Function Snag-SecurityLog {Get-EventLog Security}

This command will create a function called Snag-SecurityLog. You can then create an alias to call your function:

New-Alias -Name View-SecurityLog -Value Snag-SecurityLog

So now you can enter View-SecurityLog to call the Snag-SecurityLog function that runs the Get-EventLog cmdlet. This isn’t the most efficient way to get the logs, and it seems rather redundant. If you dig a bit deeper into the About_Functions file, you will find the following section:

Using Splatting to Represent Command Parameters
    You can use splatting to represent the parameters of a command.
    This feature is introduced in Windows PowerShell 3.0.
    Use this technique in functions that call commands in the session. You do not need to declare or enumerate the command parameters, or change the function when command parameters change.

    The following sample function calls the Get-Command cmdlet. The command uses @Args to represent the parameters of Get-Command.
        function Get-MyCommand { Get-Command @Args }

Splatting

Splatting sounds strange, but by looking at the sample, and maybe digging a bit further, you will find that you can modify your function so that you can pass parameters to it without having to declare them, or even to know how many parameters there will be. So now your function and alias can be changed as follows:

Function View-ALog {Get-EventLog @Args}
New-Alias -Name Grab-Log -Value View-ALog

You have now modified your function and alias to give you the opportunity to use a single alias and pass a parameter to the function. This new alias can let you tell the function which log you are looking to retrieve. Here are some examples of what your newly discovered alias and function skills have provided:

Grab-Log Security
Grab-Log Application
Grab-Log System

This is not designed for efficiency but is designed to show you the breadth of information that is available in the About files and how to use these files to discover new ways of performing needed operations.

Creating Functions

Functions can be rather simple. If you want to find out how much RAM is being consumed by PowerShell, via a function, it could look like this:

PS C:\> Function Pull-ShellRam {Get-Process PowerShell}
PS C:\> Pull-ShellRam

Handles  NPM(K)    PM(K)    WS(K)    VM(M)   CPU(s)     Id  ProcessName
-------  ------    -----    -----    -----   ------     --  -----------
    657      22   50836     2967      571      0.72    312  powershell

You can name your function whatever you like. It is best practice to follow the standard verb-noun convention currently used by PowerShell. The verb should state what action your function is performing. The noun should identify the item that you are doing the action against.

Here is the syntax for a function:

function [<scope:>]<name> [([type]$parameter1[,[type]$parameter2])]
{
  param([type]$parameter1 [,[type]$parameter2])
  dynamicparam {<statement list>}
  begin {<statement list>}
  process {<statement list>}
  end {<statement list>}
}

This function can hold parameters. You can declare a single parameter, or several parameters separated by commas, in the parentheses that immediately follow the function name where you declare your function. If you declare your parameters there, you can’t declare any additional parameters in the param() block inside the body of the function.

You open the body of the function with a brace ({) and start adding statements. You will return to the different kinds of parameters and parameter declarations in a moment.

The Begin, Process, and End blocks are used when a function processes pipeline input, which will be discussed in a moment.

After the brace ({), you will add any number of statements. You will need to add a semicolon (;) between statements if the statements are on the same line. You can also place each statement on a separate line and then you won’t have to use the semicolon. This can enhance readability. It is also a good idea to indent the various sections to make them easier to read. When you indent, use either all spaces or all tabs. If you combine the two, the spaces could be interpreted oddly if you cut and paste between scripts. It could lead the script to think the space was identifying a parameter. If you use either all spaces or all tabs for your indentation, you won’t have that problem.

To aid in readability, you can add comments to document your script. These comments are prefaced by a pound sign (#). Anything to the right of the pound sign is ignored on that line. If you want your comment to cover multiple lines, you need to use the # before each line. You can make a block of comments by starting the block with a less than sign and a pound sign (<#). You would then put in any number of comments on as many lines as desired. You will then close with another pound sign and a greater than sign (#>).

You have the basics of a function. You have the name and you have comments. You also have code pieces on one line and on separate lines. This process is illustrated here in pseudocode:

Function Dostuff-OurCoolThing
{
    # Here is a comment on a single line. Our function is designed to process your
    # cool thing. On a broken line, you have to have another comment mark.
    <# This is a comment block that you started.
       Our comments can be endless, and your spacing and tab location don't matter, as
       the comment block ends when you end it. #>
    Put-Codeline1 ; Put-Codeline2
    Put-Codeline3NoSemi
    Put-Codeline4NoSemi
}

You need to keep in the habit of commenting everything about your functions and scripts. You also need to establish standards for spacing, variable names, function names, and basically everything else. This will make your code much easier to create, use, troubleshoot, and maintain.

Using Parameters

It is a best practice to send data to a function only via parameters. This makes it easy to document and makes the function self-contained. It also mimics how the rest of PowerShell operates, preserving the familiar environment.

You can have parameters assigned to functions. These parameters can be named, like most cmdlet parameters, or you can make them positional. Positional parameters are useful when you always feed a function the same expected parameters in the same order. They can make a call more difficult to read, but they reduce the amount of data you need to send.

NAMED PARAMETERS

Named parameters are like the parameters you used earlier in the chapter. They will have a name and can have a value, or an array of values, assigned. You can name them inside or outside of the braces that start your code area. Here is an example of declaring the parameters when you declare a function:

Function Display-Values ($Parameter1, $Parameter2)
{
    $var1 = $Parameter1
    Write-Host ("This is from var1 "+$var1)
    Write-Host ("This is from Parameter2 "+$Parameter2)
    Write-Host($Parameter1,$Parameter2)
}

For the function to work, you have to load the code into your session. In ISE, you can just type the code in, select it, and press F8. If it is the only thing in your script window, you can press F5, which will run everything in the script window.

In the regular console, you can enter only one line of code at a time, so to define this function you would have to put everything on a single line. That mega-line would be very difficult to read and could require extensive scrolling to ensure you typed everything correctly.

Normally, you would save your function code as a .ps1 file, also known as a script, and then load and run the script. This loads the function into your session by placing it into the Function: drive for your scope. You can view this drive just as you did earlier for the Variable: drive.
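For example, assuming you saved the function in a script at the hypothetical path C:\Scripts\Display-Values.ps1, you would dot-source the script (a leading dot and a space before the path) so the function loads into your current session rather than into a temporary script scope:

. C:\Scripts\Display-Values.ps1
Get-ChildItem Function:Display-Values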

You would call this function and pass the parameters as follows:

PS C:\Windows\system32> Display-Values -Parameter1 Hello -Parameter2 World

This is from var1 Hello
This is from Parameter2 World
Hello World

Notice that parameters are passed with a space in between each parameter. You can also leave out the parameter names and rely on the position in the order they were defined. This is what it looks like when you leave off the names of the parameters:

PS C:\Windows\system32> Display-Values Learning PowerShell

This is from var1 Learning
This is from Parameter2 PowerShell
Learning PowerShell

Again, it is important that you notice the parameters are passed to the function separated with a space. If you pass the parameter values with a comma, all of the values will be added to the first parameter as an array.

If you decide to declare parameters inside your function body, you can’t also declare them when the function itself is declared; PowerShell will throw an error telling you exactly that. Below, you will define the parameters in the function body. Note that the parameter names are just names; a parameter’s position is determined by the order in which the parameters are declared, and the first parameter that is passed lands in position 0.

In this example, the order of the output is varied to illustrate that you can use the parameters in any order. A default value is even assigned to one of the parameters when it is declared. Pay attention to the commas between the parameters during declaration:

Function Display-Values 
{
   Param(
         $Parameter1, $Parameter2,
         $Parameter3,
         [String]$Parameter4 = "Nano",
         $Parameter5
        )
  Write-Host($Parameter3,$Parameter4,$Parameter1,$Parameter2,$Parameter5)
}

PS C:\Windows\system32> Display-Values Server 2016 Windows

Windows Nano Server 2016

Running the function is identical. Any parameter that isn’t passed a value when you call the function, and isn’t assigned a default value, is set to $NULL.

It is also important to realize that if you do pass a parameter that has a default value, the passed value will overwrite the default value. Using the same function, let’s add a fourth passed parameter:

PS C:\Windows\system32> Display-Values Server 2016 Windows Installation
Windows Installation Server 2016 

That output may have been unexpected if you assumed that defining the fourth parameter with a default value would somehow protect it from being overwritten. It doesn’t; a passed positional value wins. With this strange output order, you would need to pass the parameters in the following way to make your cool sentence without overwriting the default value:

PS C:\Windows\system32> Display-Values Server 2016 Windows -Parameter5 Installation

Windows Nano Server 2016 Installation

This illustrates why naming the parameters as you pass them makes commands so much easier to read. You can also pass named parameters in any order, as long as you specify each parameter’s name, and you can supply only the parameters you actually need.

MANDATORY PARAMETERS

Many times, your functions will be worthless if they don’t get the parameters they need. You can define, in the [Parameter()] attribute, the properties associated with a parameter. One property is Position; another is Mandatory. You will set a parameter as mandatory in your code block, as follows:

Function Show-OurValues
{
    Param ($Param1,
           [Parameter(Mandatory = $True)][string]$StringParam
          )
    Write-Host ($Param1,$StringParam)
 }

Show-OurValues PassingJustTheFirstParam

Because you passed only the first parameter, position 0, and the second parameter, position 1, is mandatory, the console will ask you for the second value. In ISE, you will get a dialog box, as shown in Figure 2.16.

Screenshot of a dialog box where the first parameter, position 0, and the second parameter, position 1, is mandatory, the console will ask for the second value.

FIGURE 2.16 You forgot the mandatory parameter.

Notice that ISE is nice enough to tell you which parameter you forgot. This illustrates why it is important to have meaningful parameter names. You will also get a similar output on the console. You can also provide a help message that will be displayed only when you forget a parameter, as shown here:

Param
(
[Parameter(Mandatory=$True, HelpMessage="Enter one or more AD site names, separated by a comma.")] [String[]] $townName
)

You have to use [Parameter(Mandatory = $True)] $myParameterName to modify the parameter’s properties. You can also set other properties, like its position:

Param (
[Parameter(Mandatory = $True, Position = 1)] $myParameter
)

This block shows that you can set multiple parameter properties in the Parameter section. This example will set $myParameter as the second parameter. Remember that positions start with 0. The parameter will also be mandatory. Of course, if you reference the named parameter by name, PowerShell will ignore the position of that named parameter and go with the direct value assignment.

POSITIONAL PARAMETERS

The parameters that you create are assigned a position based on the order in which they are defined, or by hard-coding the Position property, as shown earlier. All parameters are positional by default, but there is another way to receive them. With this method, you don’t give the parameters names at all; the arguments are automatically stored in an array. You saw an example of this before with splatting, but instead of passing the arguments on with @Args, you will read them from the $args array.

Whatever you pass to the function will be stored in the $args array, indexed by position starting at 0, so the first argument is in $args[0]. To add to the fun, if you use the Get-Help cmdlet, it will display the Position attribute, but this value is incremented by 1. Therefore, the first positional parameter, position 0, will show the parameter attribute “Position? 1”. This can be rather confusing, and if you forget it, it can lead to some interesting troubleshooting.

Here is an example function:

Function Add-Domain
{
$FQDN = $args[0]+".Contoso.com"
$FQDN
}

PS C:\Windows\system32> Add-Domain Server15

Server15.Contoso.com

If you fail to pass the parameter value, the $args[0] will be $NULL and the output will reflect it:

PS C:\Windows\system32> Add-Domain

Contoso.com
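Because every argument lands in the $args array, you can also loop through it. Here is a small sketch of a hypothetical function that echoes each positional argument:

Function Show-Args
{
    "You passed $($args.Count) argument(s)."
    for ($i = 0; $i -lt $args.Count; $i++)
    {
        "args[$i] = $($args[$i])"
    }
}
Show-Args Alpha Beta

This returns a count of 2, followed by args[0] = Alpha and args[1] = Beta.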

SWITCH PARAMETERS

Switch parameters act like a light switch. If the switch parameter is passed, it becomes $True, regardless of the actual value passed, unless you explicitly pass $False. When a parameter is defined as a [switch] parameter, it defaults to $False.

If you don’t send the switch parameter when you call the function, it will remain $False. If you do pass the parameter to your function, even with no value, PowerShell will evaluate the parameter as being set to $True. Again, you can pass the parameter as being $False, and it will remain $False.

This gives you the ability to write code that ignores the value of your switch parameter and performs an operation only if the switch parameter is passed. You can also have code in your function that operates only if the switch parameter’s value remains $False. Passing the switch parameter “switches” the value. Here is how you set up a switch parameter and run some sample calls to see whether -DomainParam was passed to the function:

Function Check-Domain
{
param (
        [switch]$DomainParam #This sets -DomainParam to $False
      )
If ($DomainParam -eq $True) {"There is a domain."}
else {"No Domain Found."}
}
PS C:\Windows\system32> Check-Domain

No Domain Found.

PS C:\Windows\system32> Check-Domain Value1 Value2 Value3

No Domain Found.

PS C:\Windows\system32> Check-Domain -DomainParam

There is a domain.

You can pass a Boolean value to a parameter by adding it after the parameter name, as shown here:

PS C:\Windows\system32> Check-Domain -DomainParam:$False

No Domain Found.

PS C:\Windows\system32> Check-Domain -DomainParam:$True

There is a domain.

PS C:\Windows\system32> Check-Domain -DomainParam:$Grapefruit

There is a domain.

Remember, passing the switch parameter with any value besides $False will always evaluate as $True. If statements will be discussed a bit later in the chapter.

Sending Pipeline Objects to a Function with Begin, Process, and End

You can pipeline objects to a function. The Begin statement list executes only once, at the beginning of the function, before anything has been pulled from the pipeline. Once the Begin statement list is done, the Process statement list runs once for each object in the pipeline. As objects come off the pipeline, the current object is referenced by the $PSItem automatic variable or the older $_ automatic variable; both refer to the current object in the PowerShell pipeline. Just remember that $PSItem is supported only in PowerShell 3.0 and later.

Once all of the items are processed, the End statement runs once. If you don’t include Begin, Process, or End keywords, every statement will be treated as an End statement list. Here is some sample code:

Function Examine-Pipeline
{
    Begin   {
             $myVar = "Nothing first pulled from the pipeline --->$PSItem<---"
             $myVar
            }
    Process {
             $myVar = "Value from the pipeline $PSItem"
             $myVar
            }
    End     {
             $myVar = "This only executes at the end"
             $myVar
            }
}
PS C:\Windows\system32> 2,4,8 | Examine-Pipeline

Nothing first pulled from the pipeline ---><---
Value from the pipeline 2
Value from the pipeline 4
Value from the pipeline 8
This only executes at the end 

Viewing All Functions in a Session

Functions are stored in the Function: drive, just like the Variable: drive you saw earlier. You can view all of the functions loaded in a session by changing to that drive and using the alias Dir, or you can simply use the following command:

Get-ChildItem -Path Function:

This command will also work for the other drives because, as you learned earlier, Dir is simply an alias for Get-ChildItem.

Formatting Output

PowerShell has many different ways to present output to the console. Table 2.2 lists these format cmdlets with their aliases.

TABLE 2.2: Output Formats

CMDLET ALIAS
Format-Wide FW
Format-List FL
Format-Table FT

These format cmdlets share a parameter called -Property, which holds the list of properties you want to display. You can modify the output by passing the various attributes you want displayed. Each format type displays specific default properties if you don't supply any. Format-Wide displays only a single property; Format-List and Format-Table can display several.

Using Format-Wide

Format-Wide displays a single property of each object, spread across the width of the console. Here is an example of the Format-Wide cmdlet:

PS C:\Users\Administrator> Get-ChildItem |Format-Wide

  Directory: C:\Users\Administrator

Documents                                                                     Desktop
Dropbox                                                                     Downloads
Links                                                                       Favorites
Pictures                                                                        Music
Searches                                                                       Videos

Format-Wide tries to fill up the entire width of the console, which is why it is called wide. It can lead to some interesting-looking output.
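If the defaults aren't what you want, Format-Wide also accepts -Property and -Column parameters. Here is a quick sketch:

```powershell
# Display only the Name property, forced into three columns
Get-ChildItem | Format-Wide -Property Name -Column 3
```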

Using Format-List

Format-List shows many properties of an object. Each property will be labeled and on a separate line. If you want to limit what is shown, you can specify the individual properties by passing values to the -Property parameter. Let’s compare the default properties as shown here:

PS C:\Users\Administrator> Get-ChildItem |Format-List

    Directory: C:\Users\Administrator

Name           : Contacts
CreationTime   : 2/24/2017 2:03
LastWriteTime  : 3/17/2017 7:53
LastAccessTime : 2/24/2017 2:03
Mode           : d-r---
LinkType       : 
Target         : {}

This displays only the default properties of the Contacts folder. Now you will tell Format-List to show all of the properties of the same object:

Get-ChildItem | Format-List -Property *

PSPath            : Microsoft.PowerShell.Core\FileSystem::C:\Users\Administrator\Contacts
PSParentPath      : Microsoft.PowerShell.Core\FileSystem::C:\Users\Administrator
PSChildName       : Contacts
PSDrive           : C
PSProvider        : Microsoft.PowerShell.Core\FileSystem
PSIsContainer     : True
Mode              : d-r---
BaseName          : Contacts
Target            : {}
LinkType          :
Name              : Contacts
Parent            : Administrator
Exists            : True
Root              : C:\
FullName          : C:\Users\Administrator\Contacts
Extension         :
CreationTime      : 2/24/2017 2:03
CreationTimeUtc   : 2/24/2017 9:03
LastAccessTime    : 2/24/2017 2:03
LastAccessTimeUtc : 2/24/2017 9:03
LastWriteTime     : 3/17/2017 7:35
LastWriteTimeUtc  : 3/17/2017 14:35
Attributes        : ReadOnly, Directory

You can selectively filter to whatever properties you want to view by simply adding a comma-separated list of values to the -Property parameter, with full wildcard support.
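For example, a wildcard in the -Property list pulls in every matching property at once:

```powershell
# Name plus every property ending in "Time" (CreationTime, LastWriteTime, and so on)
Get-ChildItem | Format-List -Property Name, *Time
```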

Using Format-Table

Format-Table is used for tabular output. Remember that each format has its own defaults. Here is the same directory listing displayed by a Format-Table:

Get-ChildItem | Format-Table

Mode           LastWriteTime   Length    Name
------     -----------------   ------    ----
d-r---   3/17/2017   7:35 AM             Contacts
d-r---   3/17/2017   7:35 AM             Desktop
d-r---   3/17/2017   7:35 AM             Documents
d-r---   3/17/2017   7:35 AM             Downloads
d-r---   3/20/2017  11:55 AM             Dropbox
d-r---   3/19/2017   7:35 AM             Favorites
d-r---   3/11/2017   7:35 AM             Links
d-r---   2/19/2017   7:35 AM             Music
d-r---   2/14/2017   7:35 AM             Pictures
d-r---   2/19/2017   7:35 AM             Saved Games
d-r---   2/19/2017   7:35 AM             Searches
d-r---   2/19/2017   7:35 AM             Videos

There are other options for setting up output formatting. The appropriate type depends on the user’s need and the type of data. You can filter by passing the -Property parameter, which helps reduce the clutter of your output.
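Here is a quick sketch that trims the table down to three properties; the -AutoSize parameter also tightens the column widths to fit the data:

```powershell
Get-ChildItem | Format-Table -Property Name, Mode, LastWriteTime -AutoSize
```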

Using Loops

Sometimes when scripting, you may need to do the same operation over and over, but with different objects. You may have a pipeline filled with objects that need to be manipulated. You may need to refill the pipeline over and over. You can accomplish this by creating a variety of loops and conditional loops. This is where individual command and variable elements really come together.

Using the For Loop

The for loop will run a block of code a specific number of times. This is useful for running the same code over and over, or for processing members of an array that match a particular characteristic. If you want to do the same thing for all members of an array, you will probably be better off using the foreach loop that is discussed next.

Here is the syntax for the for loop:

for (<init>; <condition>; <repeat>)
          {<statement list>}

The for loop will start with an initiation section that has one or more commands. If you use multiple commands, you will need to separate the commands by a comma. This section is used to initialize a variable with the starting value that is used to keep track of how many times you step through the loop.

The condition section has some type of Boolean comparison or condition, typically checking whether you have executed the loop enough times. If this comparison is True, the statement list runs once, and then the commands in the repeat section run.

The repeat section is used to typically increment the variable that was set in the init section. Then the condition section is performed again. If the condition is still true, the statements in the statement list, known as the command block, will run again and the repeat section will run again. This repeats until the condition evaluates to $False, in which case the for loop ends.

The statement list section will contain code that is executed each time the loop condition evaluates to True. You can also change the variable being tested in the condition section inside the statement list section.

The init, condition, and repeat sections are separated by a semicolon, but you can also separate them with carriage returns. At a minimum, you have to have these three sections, surrounded by parentheses, and you must have a command in the statement list section. Here is an example:

PS C:\Users\Administrator> For ($i=1; $i -lt 3; $i++) {"The counter is at $i"}

The counter is at 1
The counter is at 2

The first time through, $i was set to 1. When you did the comparison, $i was less than 3, so the command block ran and produced your output. Then you went to the repeat section, where $i was incremented by 1, making $i equal to 2. If you want to increment by more than 1, you can use $i+=5, which increases the value of $i by 5. Here, you will just increment by 1.

PowerShell will then test the condition again. This time $i = 2, so the comparison is still True, the code block runs again, and $i gets incremented to 3.

When PowerShell does the comparison, it finds that $i, with a value of 3, is no longer less than 3, so the for loop ends.
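As mentioned earlier, the init and repeat sections can each hold multiple comma-separated commands. Here is a sketch that steps two counters at once:

```powershell
# Two counters: $i climbs while $j falls, both updated each pass
For (($i = 1), ($j = 10); $i -lt 4; $i++, $j--)
 {"i is at $i and j is at $j"}
```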

Using the Foreach Loop

The foreach loop is used to run through all the members of an array. This loop will run commands against each item. The items will be identified by a variable that doesn’t need to be declared. This variable will represent each item, one at a time, in the array. Unlike the for loop, foreach doesn’t need to know the number of times it needs to run though the loop and doesn’t need any initialization of counting variables. Here is the syntax:

foreach ($<item> in $<collection>){<statement list>}

You can set up an array and run through the processing. Note that in this example, you will use a variable that is created just for the foreach loop:

$ourCityArray = "Paris","Perth","Atlanta","Phoenix"
Foreach ($magicCreatedVariable in $ourCityArray)
{"The city here is $magicCreatedVariable."}
"There are no more cities."

The city here is Paris.
The city here is Perth.
The city here is Atlanta.
The city here is Phoenix.
There are no more cities.

You can also use a cmdlet instead of an array. Here, you will use Get-ChildItem to view all of the functions in your scope that start with the letter G, as illustrated here:

Foreach ($Functions in Get-ChildItem -Path Function: -Name -Include G*) {$Functions}

Get-Verb
G:
Get-IseSnippet
Get-FileHash

You aren’t limited to having just a single statement in the statement list. You can also include foreach inside the command pipeline. When you do this, foreach doesn’t need the variable or the array identified. It will simply pull each item from the pipeline values provided by the previous command and run the statement items against each item as illustrated here:

Get-ChildItem -Path ENV: -Include "*Win*" -Name | foreach {"[ENV]:$PSItem"}

[ENV]:windir

This will go through the environmental variables and display any that have “Win” somewhere in their name.

You can also use -Begin, -Process, and -End command blocks. This is similar to the Begin, Process, and End portions of the function block you saw earlier. The -Begin section is processed just once, prior to pulling objects from the pipeline. The -Process block is executed once per item. The -End block is performed only once, after all of the objects in the pipeline have been processed. You can view the code here:

$citiesVisited = "Paris","Perth","Atlanta","Phoenix"
$citiesVisited | ForEach-Object -Begin {Write-Host("AD Site Cities")} -Process {Write-Host($PSItem)} -End {Get-Date}

AD Site Cities
Paris
Perth
Atlanta
Phoenix
Wednesday, March 23, 2022 6:48:07 PM

Using the If Statement

The If statement is used to run code blocks based on the results of a Boolean conditional test. The If statement gives you three options, which can be combined and nested to multiple levels:

  • Run a code block if the condition test result evaluates to $True
  • Run a code block if the condition test result evaluates to $True and all the previous conditions evaluated to $False
  • Run a code block if all of the previous conditions test result evaluated to $False

Here is the syntax:

if (<test1>) {<statement list 1>} [elseif (<test2>) {<statement list 2>}] [else {<statement list 3>}]

These If statements can be pretty simple, as illustrated here:

If ($a -eq 3) {Write-Host "$a equals 3."}

The code block will be executed only if the evaluation results in True. You can also use an If statement where you have one block run if the condition is True, and a different code block if the condition evaluates as False.

If ($a -gt 3)
 {
  Write-Host "Variable a is greater than 3."
 }
Else
 {
  Write-Host "Variable a is 3 or less, or Variable a is empty."
 }

The Else portion of this code will run only if the prior If statement is $False. This could be because the $a variable is 3 or less. It could also be due to the fact that $a is $Null.

If you want the Else statement to test another condition prior to running the code block, you could put in another If statement, but PowerShell has Elseif. With Elseif, the second condition will occur only if the first condition evaluates as $False.

If ($a -lt 10)
 {
   Write-Host "Site Link cost is less than 10."
 }
Elseif ($a -eq $Null)
 {
   Write-Host "Site Link cost is Null."
 }
Elseif ($a -lt 21)
 {
   Write-Host "Site Link cost is between 10 and 20."
 }
Else
 {
   Write-Host "Site Link cost is greater than 20."
 }

Any time your condition is evaluated as True, the associated code block will run and none of the remaining Else or ElseIf statements will be evaluated. You can place an Else statement at the end that will only be executed if all the previous conditional statements evaluated as $False.

If you find that you are using a lot of Elseif statements in your code, you should probably use a Switch statement instead.

Using the Switch Statement

The switch statement is not like a switch parameter. The switch statement will specify a test value and will then contain multiple conditions. If the test value matches the conditions, the associated action will execute. Unlike the Elseif statements, all switch conditions are typically tested. Here is the basic syntax:

Switch (<test-value>)  
 {  
     <reference-value> {<action>}  
     <reference-value> {<action>}  
}

The actual switch syntax is a bit more involved, and you will examine it in a moment.

You need to understand that the test value is checked against each reference value, even if the test value matched a previous reference value in the same switch block. If the test value matches the reference value, the action code block is performed. If the test value doesn't match any of the switch reference values, none of the blocks are performed for that test value. See the following code:

Switch (7)
{
  2  {"This matches the reference value two"}
  94 {"This matches ninety-four"}
  7  {"This is matching seven"}
  4  {"This matches four"}
  7  {"This matched seven again"}
}

This is matching seven
This matched seven again

Here is an example of code processing an array of location codes and showing what happens if the test value doesn’t match any reference values:

Switch ("Perth","Phoenix","Dallas")
 {
   Wyoming {"This matches Wyoming"}
   Perth {"This matches Perth"}
   Phoenix {"This matches Phoenix"}
   Perth {"This matches Perth again"}
}

This matches Perth
This matches Perth again
This matches Phoenix

The switch values can be in any order, but each test value is tested by every reference value, even when the test value matches other reference values in the same switch block.

If you want to prevent matching on multiple reference values, you can add a Break to the switch, as illustrated here:

Switch ("Perth","Phoenix","Dallas")
 {
  Wyoming {"This matches Wyoming"}
  Perth {"This matches Perth";Break}
  Phoenix {"This matches Phoenix"}
  Perth {"This matches Perth again"}
}

This matches Perth

When PowerShell is executing a switch block and it hits a Break in the switch reference values, the Switch block will stop evaluating that test value and will exit, even if the test value will match later reference values in the same switch block or there are more test values to check.

If you want to stop further processing of just that particular test value, but still want to move on to any additional test values, you would use Continue, as illustrated here:

Switch ("Perth","Phoenix","Dallas")
 {
  Wyoming {"This matches Wyoming"}
  Perth {"This matches Perth";Continue}
  Phoenix {"This matches Phoenix"}
  Perth {"This matches Perth again"}
}

This matches Perth
This matches Phoenix

You can also identify a default switch that will be used if the test value doesn’t match any other conditions, as illustrated here:

Switch ("Perth","Phoenix","Dallas")
 {
   Wyoming {"This matches Wyoming"}
   Perth {"This matches Perth"}
   Phoenix {"This matches Phoenix"}
   Perth {"This matches Perth again"}
   Default {"We don't match anything"}
}

This matches Perth
This matches Phoenix
We don't match anything 

There can be only a single default statement in each switch statement, and each switch statement must include at least one condition statement.

The actual Switch syntax will take parameters that identify regular expressions, a wildcard, or an exact value. Only one of these parameters is used; if more than one is specified, only the last one takes effect. You can also make the test value case-sensitive.

switch [-regex|-wildcard|-exact][-casesensitive] (<value>) 

You can also use a file instead of just a value.

switch [-regex|-wildcard|-exact][-casesensitive] -file filename

In either case, the switch statement is followed by the code block:

{ 
"string"|number|variable|{ expression } { statementlist } 
default { statementlist }
}
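To see one of these parameters in action, here is a sketch using -Wildcard to route server names by naming pattern:

```powershell
Switch -Wildcard ("DC-01","FS-Atlanta","Web-07")
{
    "DC-*"  {"Domain controller found"}
    "FS-*"  {"File server found"}
    Default {"No matching role"}
}
```

This produces one line per test value: the first matches DC-*, the second matches FS-*, and the third falls through to Default.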

If you use switch after a pipeline, the values will be passed to the switch and processed in order. If you hit a Break, you will stop processing the switch block, even if additional objects are in the pipeline. Because the pipeline isn’t empty, this can lead to unexpected results to cmdlets that are next in line to extract objects from the pipeline.
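One way to feed pipeline values into a switch is through the $input automatic variable inside a function; the following sketch shows Break abandoning the remaining pipeline objects:

```powershell
Function Test-Pipeline
{
  Switch ($input)
  {
    "Perth"   {"This matches Perth";Break}
    "Phoenix" {"This matches Phoenix"}
  }
}
"Perth","Phoenix" | Test-Pipeline
```

Only "This matches Perth" is displayed; the Break exits the switch before "Phoenix" is ever tested.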

Using the While Loop

Using While loops is an easy alternative to using for loops. Here is the syntax:

While (<Condition>) {<statement list>}

As long as the condition is True, PowerShell loops through the statement list. At the end of each pass, the condition is evaluated again; if it is no longer True, the loop ends. Note that the condition is evaluated only at the beginning of each pass. If statements inside the loop temporarily make the condition False but it becomes True again before the pass finishes, the loop continues; the evaluation means only that the condition was True at the time it was checked. Here is some code:

While ($count -ne 5)
{
$count++
Write-Host "The count is "$count
}

The count is 1
The count is 2
The count is 3
The count is 4
The count is 5

Notice that the first time you run the code, the variable $count is $Null. Also, you need to notice that when the count becomes 5, the loop stops only after the condition is checked. When $count equals 5, you still finish the statement list even though you no longer match the condition. If you had placed the $count++ statement after the Write-Host statement, you would have a different output:

While ($Count -ne 5)
{
Write-Host "The count is "$Count
$Count++
}

The count is 
The count is 1
The count is 2
The count is 3
The count is 4

If you want to write the While loop in a single line, you should separate the different statement lines with a semicolon, as illustrated here:

While ($Count -ne 5) {$Count++ ; Write-Host "The count is "$Count}

It is critical to understand that if your condition never evaluates to False, the loop will run forever.

Using the Where-Object Method

If statements are quite powerful, but they can require quite a bit of coding. Where statements are used when you want to select objects in a collection based on a property value. Where-Object has several syntax options, depending on the comparison operator you are using. Here is one of them:

Where-Object [-Property] <String> [[-Value] <Object>] -EQ [-InputObject <PSObject>] [<CommonParameters>]

The comparison operator (-EQ in this example) is what typically differs between the various syntax versions of Where-Object.

To view the entire syntax list, enter Help Where-Object.

PowerShell provides two ways to use Where. The first is to use a script block. A script block allows you to specify the name of the property, a comparison operator, and the reference value. Here is an example with a script block:

Get-Service | Where-Object {$PSItem.Status -eq "Stopped"}

The other format is the comparison statement format. Here is the same command:

Get-Service | Where Status -eq "Stopped"

Both methods work, and there is no difference in the output.

Managing Remote Systems via PowerShell

PowerShell provides a means of managing thousands of systems to do thousands of different things. So far, you have pretty much stayed on the local system. Typically, you will want to connect to remote systems.

By default, your systems are locked down from remote PowerShell access. The way access is enabled will depend on whether your target systems are part of an Active Directory domain or are just in a workgroup. Let’s explore domain-joined systems first.

Using Enable-PSRemoting

You will need to run PowerShell as an administrator. PowerShell remoting relies on the WinRM service, so it is important to ensure that the service is set to start automatically. You will also need to create firewall rules that allow PowerShell to connect to the system. Fortunately, Microsoft has made it easy to do both operations with a single command:

Enable-PSRemoting -Force

This command runs the Set-WSManQuickConfig cmdlet. That cmdlet is responsible for starting the service, changing the startup to automatic, and enabling a firewall exception. Here is a list of the other things it does:

  1. Creates a listener that accepts requests from any IP address.
  2. Registers the Microsoft.PowerShell and Microsoft.PowerShell.Workflow session configurations.
  3. Registers Microsoft.PowerShell32 session configuration on 64-bit computers.
  4. Enables all session configurations.
  5. Changes security descriptor on all session configurations to permit remote access.
  6. Finally, restarts the WinRM service so all the configuration changes will take effect.

Enable-PSRemoting has several options. Because Enable-PSRemoting starts listening services, you typically don't want to run it on systems that are used only to send commands; there is no reason to have services listening for remote PowerShell connections on a machine that will never receive them.

If you want to disable PSRemoting, you should use the following command:

Disable-PSRemoting -Force

Remoting to Workgroup Servers

If the target server is not domain-joined, you will need to run Enable-PSRemoting on both the target system and the system you will use to run your console. You will also need to configure the TrustedHosts setting for WsMan. To do this, use the following command:

Set-Item WSMan:\localhost\Client\TrustedHosts *

This will allow any system to connect. The user will still need to have local administrator credentials on the managed machine. If you want to restrict management computers, you can replace the * with a comma-separated list of IP addresses or host names of systems you trust to manage this remote machine.
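For example, to trust only two specific management stations (the names here are placeholders for your own):

```powershell
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "AdminPC01,10.0.0.25"
```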

Once the configuration is changed, you will need to restart the WinRM service. This can be accomplished by using the following:

Restart-Service WinRM 

You will need to run this command on both the local and the remote system. You can test to see if you have communication by using this command:

Test-WsMan Server06.contoso.com

Of course, you will need to replace the computer name used in the example with the actual name. This command tests whether the WinRM service is running, and then it displays the WS-Management identity schema, the protocol version, the product vendor, and the product version.

Running PowerShell Commands on Remote Systems

If you want to start an interactive session on the remote computer, use the following:

Enter-PSSession Server15.contoso.com

You can use the computer name or the IP address. This cmdlet will change your prompt to reflect the remote system, as illustrated here:

PS C:\Users\Administrator> Enter-PSSession 10.102.50.50
[10.102.50.50]: PS C:\Users\Administrator\Documents>

Any commands you use will be executed on the remote system. Any output displayed to the console, including errors, will be displayed locally. To exit the session, use the following command:

Exit-PSSession 

Not all cmdlets need to have a remote session. To find a list of cmdlets that do not require a session, you can use the following:

Get-Command | where {$PSItem.Parameters.Keys -contains "ComputerName" -and $PSItem.Parameters.Keys -NotContains "Session"}

This will give you a list of commands that have the -ComputerName parameter but do not have a -Session parameter. These commands do not need the WinRM service running, do not need to be configured for PowerShell remoting, and do not have to meet the remoting system requirements.

If you need to run single commands that require the ability to remote, you can use the Invoke-Command cmdlet, as shown here:

Invoke-Command -ComputerName Server12.contoso.com -ScriptBlock {Get-ChildItem C:\}

A script block is just a list of statements, very similar to a function. It can receive parameters. Unlike a function, the parameters have to be included inside the braces. Script blocks also support Begin, Process, and End keywords.

These Invoke-Command cmdlets are designed to send a single cmdlet to one or more computers. If you want to send the command to several, just place the computer names or IP addresses, separated by a comma, in place of the single computer name. The connection only lasts long enough to send the command and receive whatever output is returned.
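Here is a sketch sending one cmdlet to three machines at once (the server names are examples):

```powershell
Invoke-Command -ComputerName Server01, Server02, Server03 -ScriptBlock {Get-Service WinRM}
```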

Running Remote Scripts on Remote Computers

If you want to run a local script on one or more remote computers, use the -FilePath parameter. Here is an example:

Invoke-Command -ComputerName Server01, Server02, LocalHost -FilePath C:\scriptstorage\myscript.ps1

With -FilePath, the script needs to exist only on the local computer; Invoke-Command reads it locally and sends its contents to each remote machine. You can be clever and store script paths and names in a variable, and then pass the file path using the variable instead of the literal path.
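Here is a sketch of the variable approach; the path and server names are just examples:

```powershell
$scriptPath = "C:\scripts\myscript.ps1"
Invoke-Command -ComputerName Server01, Server02 -FilePath $scriptPath
```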

To interrupt a remote command, press Ctrl+C. The interrupt will be sent to the remote computer.

Establishing Persistent Remote Connections

If you want to run a series of commands, and you want to share data between these commands, you need to establish a persistent session. This is done by creating a session with New-PSSession and then passing it to the -Session parameter of Invoke-Command, as illustrated here:

$mySession = New-PSSession -ComputerName Server01, Server02, Server03

This command creates a persistent remote connection to three servers and saves the PSSession to a variable $mySession. You can use this variable to send identical commands to all of the systems at the same time. If you use variables, these variables will be the same on all the sessions. This makes it easy to pass and use variables without wondering if the variables exist in the remote session. Here is an example:

Invoke-Command -Session $mySession -ScriptBlock {$services = Get-Service}

This code uses the $mySession variable, created earlier, to send the contents of the script block to each machine. This will generate a list of all the services on each computer. Each system’s services will be stored in a variable called $services on that machine. If you want to perform additional manipulation using the list of these services (maybe using a While, Switch, If, or ElseIf statement), you can target individual sessions to individual servers or do the same command on all the servers at once using this variable. You can have other variables that identify other servers to allow selective invocation of commands, without getting bogged down keeping track of the various sessions or having to deal with different variables not matching the expected names on these remote machines.

Because this is a persistent connection, you can run additional commands against these systems, and any variables or data created for each session will be available until you disconnect the session. If you want to include the local system, you can use either a dot (.) or the term LocalHost in the -ComputerName parameter when creating the session.

Using PowerShell Direct

PowerShell Direct is used to manage Hyper-V virtual machines from the host system. Both the Hyper-V host and the guest machines need to be running Windows 10 or Windows Server 2016. The host and guest do not need network connectivity, network configuration, or even a network adapter. You do have to be logged on to the Hyper-V host as an administrator and have user credentials on the virtual machine. The virtual machine needs to be local to the Hyper-V host, and it needs to be booted. Here is the command:

Enter-PSSession -VMName Server01.contoso.com

This is a direct interactive session with the virtual machine. You can also connect by GUID, replacing -VMName with -VMId and entering the virtual machine's GUID instead of the name.

You will then have an interactive session with the virtual machine. To exit the session, do the following:

Exit-PSSession

You can run script blocks and scripts on the virtual machines pretty much the same way you do with regular PSSessions. You just replace the -ComputerName parameter with the -VMName parameter, as shown here:

Invoke-Command -VMName Server01 -FilePath C:\scriptstorage\myscript.ps1
Invoke-Command -VMName Server12 -ScriptBlock {Get-ChildItem C:\}

The Bottom Line

  • Customize the PowerShell and PowerShell ISE environments. Microsoft PowerShell and PowerShell ISE are great environments for creating and managing Windows Server 2016 systems. Having the appropriate modules, functions, variables, and configuration settings preloaded into profiles can speed the development process. You’ll want to have everything set up properly to help maximize your workflow.
    • Master It  You have just set up your Windows Server 2016 system and need to customize Windows PowerShell so it will automatically be in Run As Administrator mode. You also need to determine which modules can have their help files automatically updated online. Finally, you need to start a transcript of your current PowerShell session in a text file called C:\PowerShellTranscript.txt that will close when you exit your session.
    • Solution  Run PowerShell as an administrator. You can accomplish this by right-clicking the PowerShell icon and selecting Run As Administrator. Then modify your profile to include Function Open-AsAdmin {Start-Process PowerShell -Verb RunAs}. Identify the modules that have the ability to update online by using Get-Module -ListAvailable |Where HelpInfoURI.

      Start the transcript by using Start-Transcript C:\PowerShellTranscript.txt. The transcript will end when you close out of your session.

  • Perform command discovery and interpret PowerShell syntax notation and concept documentation.  Frequently, you will be tasked with creating PowerShell configuration with limited documentation. You need to master the skill of command discovery when how-to documents are difficult to find.
    • Master It  Find the PowerShell commands that will create a formatted list that contains only the network interface aliases and the IP addresses for all the network adapters on the local server. The command should save the list to C:\Networkadapters.txt.
    • Solution  Use Get-Command -Noun *IPAddress* to find the commands related to IP addressing. Use Get-NetIPAddress | FL to see all the properties, and then assemble them into Get-NetIPAddress | fl InterfaceAlias, IPAddress > C:\Networkadapters.txt to create the text file that lists all of the interface aliases and assigned IP addresses.
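Get-Member is another discovery tool worth knowing: piping a command's output into it lists every property you can select by name. A sketch of the discovery workflow described in the solution, with Get-Member added as an assumption about how you might narrow down the property names:

```powershell
# Find cmdlets whose noun mentions IP addressing
Get-Command -Noun *IPAddress*

# List the properties available on the output objects
Get-NetIPAddress | Get-Member -MemberType Property

# Select just the two properties of interest and save them to a file
Get-NetIPAddress | Format-List InterfaceAlias, IPAddress > C:\Networkadapters.txt
```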
  • Write and analyze code that supports functions, loops, comparisons, pipeline processing, variables, and scripts.  There are many building blocks to creating useful scripts. You will frequently need to assemble scripts using a variety of components to produce useful output. Developing the skills to create new basic scripts is critical. You will also frequently need to read and understand external scripts before you run them in your production environment.
    • Master It  Locate all the systems in the domain and create a web page that lists all of the installed hotfixes on each running machine. Have a separate web page for each machine. Display the Hotfix ID and when each hotfix was installed. Create a single, separate web page that lists all the machines that do not respond to the probe.
    • Solution
      $hosts = (Get-ADComputer -Filter * | Select-Object -ExpandProperty DNSHostName)
      foreach ($hostname in $hosts)
           {
             If (Test-Connection $hostname -Quiet)
               {
                 Get-HotFix -ComputerName $hostname |
                   Select-Object HotFixID, InstalledOn |
                   ConvertTo-Html -Fragment |
                   Out-File "C:\$hostname Hotfixes.html"
               }
             Else {
                     "$hostname doesn't respond" | Out-File C:\Unresponsive.html -Append
                  }
           }
  • Manage remote servers with PowerShell.  You will frequently need to send the same command to multiple machines. Many of these commands will require an interactive session and the use of credentials.
    • Master It  You need to disable the use of SMB version 1 on all of your running servers throughout your domain. Use a script to locate all of the hosts in your domain that are running the WinRM service. Open a remote session with each of them and disable the SMB1 protocol without the need for confirmation.
    • Solution
      $hosts = (Get-ADComputer -Filter * | Select-Object -ExpandProperty DNSHostName)
      foreach ($hostname in $hosts)
           {
             If (Test-Connection $hostname -Quiet)
               {
                 If ((Get-Service -ComputerName $hostname winrm | Select-Object -ExpandProperty Status) -eq "Running")
                   {
                     Invoke-Command -ComputerName $hostname -ScriptBlock {Set-SmbServerConfiguration -EnableSMB1Protocol $False -Force}
                   }
               }
           }
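After the script runs, you can spot-check any server to confirm that SMB1 is actually disabled. A minimal sketch (the server name Server01 is illustrative):

```powershell
# Query the SMB server configuration on a remote host;
# EnableSMB1Protocol should now report False
Invoke-Command -ComputerName Server01 -ScriptBlock {
    Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
}
```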