IN THIS CHAPTER
This chapter explains how PowerShell can be used to manage the Windows file system, with examples throughout. We discuss the core cmdlets; look at file system navigation; manage drives, folders, and files; and show how to work with XML and CSV files. In addition, this chapter presents a working file-management script based on a real-world situation.
The goal is to give you a chance to learn how PowerShell scripting techniques can be applied to meet real-world file system automation needs.
PowerShell provides a set of core cmdlets that can be used to manage all the various data stores that are supported via PowerShell Providers, which were discussed in Chapter 4, “Other Key PowerShell Concepts.” These core cmdlets can be used to manage all the supported data stores, so a consistent user experience is provided. If you need to create something in the registry or in the file system, the command and its syntax are similar.
Some of the parameters to these cmdlets, and even some of the cmdlets themselves, might require different arguments or might not work at all, depending on which provider is currently being used. For example, Select-String works differently in the file system provider than in the registry provider. Unfortunately, no help is currently available that dynamically updates itself to tell you what will and will not work.
The following command retrieves a list of the core cmdlets for manipulating data stores available via PowerShell providers:
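The exact command from the book is not reproduced here, but one way to sketch it is to filter Get-Command on the nouns shared by the core data-store cmdlets (the noun list below is illustrative, not exhaustive):

```powershell
# List cmdlets whose nouns are shared across PowerShell providers
Get-Command -Noun Item, ItemProperty, ChildItem, Content, Location, Path, PSDrive |
    Sort-Object Noun |
    Format-Table Name, Noun -AutoSize
```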
PowerShell has a built-in provider, the PowerShell FileSystem Provider, for interfacing with the Windows file system. The abstraction layer this provider furnishes between PowerShell and the Windows file system gives the file system the appearance of a hierarchical data store. Therefore, interfacing with the file system is the same as with any other data store that’s accessible through a PowerShell provider.
Being able to move around in a file system is one of the most essential tasks in PowerShell. Several core cmdlets deal with file system navigation, and for individuals more familiar with DOS or UNIX/Linux shells, PowerShell provides familiar aliases for most of them. The two key cmdlets for file system navigation are Set-Location (alias cd) and Get-Location (alias pwd).
The Get-Location cmdlet retrieves the current working directory. If you have changed your default PowerShell prompt, Get-Location gives you a quick reminder of your current location as you move through the file system.
The Set-Location cmdlet accepts a parameter that sets the working directory to the value specified. Using Set-Location enables you to move through the file system, so that when you need to access a file or run a script, only the relative file name needs to be used (versus the full path).
Tab completion works well with Set-Location. Start typing the desired path, and then hit the TAB key. Additionally, tab completion can even complete a path containing subdirectories. For example, if you want to change to the directory “C:\Documents and Settings\userABC\Desktop,” you can simply type Set-Location "C:\Docu*\userA*\Desk*" and press TAB. As long as the preceding string leads to a unique path, PowerShell auto-completes the entire string.
The Push-Location cmdlet, along with the Pop-Location cmdlet reviewed in the next section, is useful when navigating to and from multiple directories. Using Push-Location, we can have PowerShell remember certain directories for later use. Here’s a simple example of “pushing” the current directory onto the list of locations.
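A minimal transcript sketch of the push/pop pattern (the paths are illustrative):

```powershell
PS C:\> Push-Location C:\Windows   # remember C:\ on the location stack, then move
PS C:\Windows> Get-Location

Path
----
C:\Windows

PS C:\Windows> Pop-Location        # return to the remembered directory
PS C:\>
```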
PowerShell drives can be created and removed, which is handy when you’re working with a location or set of locations frequently. Instead of having to change the location or use an absolute path, you can create new drives (also referred to as “mounting a drive” in PowerShell) as shortcuts to those locations.
To add a new drive, or a shortcut in this case, use the New-PSDrive cmdlet, shown in the following example:
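A sketch of mounting and removing such a drive (the drive name and root path are hypothetical):

```powershell
# Mount a frequently used folder as its own drive
New-PSDrive -Name docs -PSProvider FileSystem -Root 'C:\Users\userABC\Documents'

Set-Location docs:        # navigate using the new drive
Set-Location C:\          # leave the drive before removing it
Remove-PSDrive -Name docs # remove the shortcut when finished
```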
Let’s look at how to manage folders in PowerShell. Some cmdlets behave a bit differently depending on the provider. Dealing with folders while in a file system provider is easy to comprehend. In most cases, the arguments passed to the cmdlet are similar when dealing with a folder versus a file. This section examines folder management, and the next section covers file management.
To create a new folder, the New-Item cmdlet is used. We pass it the Type parameter with a value of Directory to indicate this will be a directory.
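For example (the folder name is hypothetical):

```powershell
# Create a directory named "testing" under the current location
New-Item testing -Type Directory
```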
Removing a folder is just as easy, using the Remove-Item cmdlet with the proper parameters and values.
When creating a file or a folder, you specify the type as either directory or file. We see how to use New-Item to create a file later.
The previous example shows what happens when we try to delete a directory that has some contents. If the testing directory were empty, there would be no output.
Remember to use the WhatIf and Confirm parameters, when available, before making any kind of change, such as deleting. You are either presented with the details of what PowerShell would have done had you entered the command as is, or prompted with an “Are you sure?” confirmation before PowerShell continues with the command entered.
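A quick sketch of both parameters against the hypothetical testing folder from earlier:

```powershell
# Preview the deletion without making any change
Remove-Item testing -WhatIf

# Ask for confirmation before actually deleting
Remove-Item testing -Confirm
```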
Let’s look at how to move a folder. Moving a directory means relocating it from one directory to another. In other words, you won’t typically use the Move-Item cmdlet simply to move the folder test to test2 within the same location; that is basically renaming a folder, which we cover just after this.
In the previous example, we moved a directory to a new location, and it’s important to check that it has actually been moved. Notice that we first used Get-Item, and then Get-ChildItem. Get-Item simply lists an item itself, whereas Get-ChildItem lists the item’s children, such as the files within a directory.
When dealing with the Move-Item cmdlet, if you want to move directories recursively, especially to reproduce the directory structure from the source at the destination, it is easier to use the DOS xcopy command. PowerShell can handle this kind of operation, but several lines of scripting are required.
In contrast with moving a folder, where it may go from one directory to a completely different location, renaming is more appropriate when the location of a folder remains constant and only its name changes. We say it is more appropriate to use Rename-Item, but Move-Item would be equally useful.
You might need to check whether a directory actually exists, especially in scripts dealing with any kind of file system management task. Test-Path returns a Boolean value (either “True” or “False”) depending on the result of the test.
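For example:

```powershell
PS > Test-Path C:\Windows
True
PS > Test-Path C:\FolderThatDoesNotExist
False
```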
A good practical example is checking for the existence of a directory before continuing to run a script. In this case, we create a simple function to show how this works. We see that our function ends when we test for the existence of the file: if the file doesn’t exist, which we determine using if (!(Test-Path ...)), the statement block runs, outputting some text, and then the break statement is invoked, which exits the function.
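A sketch of the kind of function described above (the function and path names are hypothetical):

```powershell
function Test-WorkingDirectory ($path) {
    if (!(Test-Path $path)) {
        "$path does not exist."
        break                     # exit the function here
    }
    "$path exists; continuing with the rest of the function..."
}
```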
We looked at how to manage folders with PowerShell. Now, we look at how to manage files. Because of how PowerShell tries to provide a common experience to users, the cmdlets used will be the same for managing files versus managing folders. The only differences are found in the values passed to the arguments of the cmdlets.
The process to create a new file is basically the same as creating a new folder except a different value for a parameter is used.
The syntax for removing a file is exactly the same as removing a folder because there isn’t a parameter for specifying whether we are attempting to remove a file or a folder.
Moving an item and renaming an item are two different things.
If we want to rename an item, we should not attempt to move it, but use Rename-Item instead.
Let’s examine the content in a file. To read the data inside a file, we use the Get-Content cmdlet.
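For example, using the string.txt file referenced later in this section:

```powershell
# Each line of the file comes back as a separate string object
Get-Content string.txt
```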
To write to a file, we have two cmdlets: Set-Content writes data to the file, and Add-Content appends data to the file. It is important to remember the difference between the two, or you risk losing data. Writing data to a file can do one of two things: create the file with the appropriate data, or overwrite any existing data in the file. We use the Set-Content cmdlet to demonstrate both scenarios, along with the file string.txt we saw in the previous section.
Add-Content can do two things: It can add data to an existing file, but it can also create a new file and add the data to it.
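A minimal sketch of both cmdlets together (the file content is illustrative):

```powershell
Set-Content string.txt "1st line"   # creates the file, or overwrites it
Add-Content string.txt "2nd line"   # appends to the existing file
Get-Content string.txt              # shows both lines
```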
Fortunately, there is a cmdlet to search for data within items. Select-String enables you to search the contents of a file passed as an argument. You will normally use this cmdlet to search through files that you pass via the pipeline.
Searching for strings in multiple files: As we mentioned, we can look through data passed via the pipeline by passing the files to search to the Select-String cmdlet. This is useful when searching recursively through a file system: we use Get-ChildItem to get the objects we are interested in, and then pass those objects along to Select-String.
PS > get-childitem . -rec *.txt | foreach-object {select-string $_ -patt "2nd"}
string.txt:2:2nd line
string2.txt:1:2nd line
foobar\string2.txt:1:2nd line
PS >
Dealing with XML files is another of PowerShell’s strengths. You can load XML data for handy viewing, and you can also edit it.
In the first part of this section, we walk through various XML tasks with PowerShell: creating an XML file, and then editing it by adding to it and removing from it. While doing so, we reference an article that originally appeared in the February 2008 issue of TechNet Magazine, which you can find online at http://technet.microsoft.com/en-us/magazine/cc194420.aspx.
The online document used VBScript and the older Microsoft.XMLDOM object to handle XML tasks. We are going to use PowerShell and the .NET Framework’s System.Xml.XmlDocument class to provide an updated example.
XML files are case-sensitive. A node named “Test” will not be seen by PowerShell as the same as “TEST.”
The 2.0 CTP2 has a few new features relating to XML:
• A new cmdlet named ConvertTo-Xml. As of the release of the 2.0 CTP2, the built-in help was not complete for this new cmdlet.
We want to end up with an XML document that looks like this:
Here’s the code we use to create the previous document.
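The book’s actual sample document is not reproduced here; the following is a sketch of the same technique using System.Xml.XmlDocument, with hypothetical element names (computers, computer):

```powershell
# Build a small XML document in memory and save it to disk
$xml = New-Object System.Xml.XmlDocument

$root = $xml.CreateElement('computers')
[void]$xml.AppendChild($root)

$node = $xml.CreateElement('computer')
$node.SetAttribute('name', 'Server01')
[void]$root.AppendChild($node)

$xml.Save("$pwd\computers.xml")
```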
If we use Get-Content to load our newly created XML document, we see that we achieved the desired result shown in the previous example.
We have our XML document, but now we want to modify it by adding to the original XML file.
We have added to our XML document; now, let’s show how to modify existing data in our XML file.
We have just modified an existing node, and our XML document looks like the following:
Let’s look at how we can delete an existing node from our XML file.
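The add, modify, and delete steps described in the preceding paragraphs can be sketched as follows (the file and element names are hypothetical):

```powershell
[xml]$xml = Get-Content .\computers.xml

# Add a new child node
$new = $xml.CreateElement('computer')
$new.SetAttribute('name', 'Server02')
[void]$xml.computers.AppendChild($new)

# Modify an existing node's attribute
$xml.computers.computer[0].name = 'Server01-renamed'

# Remove the node we just added
[void]$xml.computers.RemoveChild($new)

$xml.Save("$pwd\computers.xml")
```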
Our document looks like this with a child node removed, which is what our document looked like when we first created it:
Before we show how to load an XML file, let’s note what we did in the previous sections: We used the Get-Content cmdlet to read the contents of an XML-formatted file. This method provides us with the contents of the file, but it isn’t useful for anything except reading. The proper way to use the XML features included with PowerShell is the [XML] type shortcut (also known as a type accelerator).
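For example (the file name is hypothetical):

```powershell
# Cast the file's text to the XML type
[xml]$xml = Get-Content .\computers.xml
$xml.GetType().FullName   # System.Xml.XmlDocument
```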
In the previous section, we loaded an XML file into the $xml variable. Let’s look at what we can do with our new variable.
We can walk through all the XML nodes in the document. An example that could be handy is looking through the XML output of the new Server Manager included with Windows 2008 for the roles and features on a system.
We cannot split the Import-CliXml and Export-CliXml cmdlets into their own sections, so we cover them together here. These cmdlets are provided to handle structured data, helping to move data in and out of files. This can help, for example, when you need to store data for later use. When it comes time to reconstruct the original object, the original properties and values can be accessed, but some of the methods might no longer be available.
Let’s create a simple string and a datetime object, and then use Export-CliXml to write each object to an XML-formatted file.
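A sketch of this step (the file names match the ones discussed next):

```powershell
$string = "Hello, world"
$date = Get-Date

$string | Export-Clixml string.xml
$date   | Export-Clixml date.xml
```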
We’ve created two XML-formatted files, so let’s take a look at the contents of each.
Some of the key things to take out of the previous contents follow:
• From string.xml: The pair <S> and </S> indicates that this is a string object.
• From date.xml: The pair <DT> and </DT> indicates that this is a datetime object.
From there, we can take these XML files and move them around from one system to another.
If we use the Import-CliXml cmdlet, we can reconstruct the original objects again.
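Continuing the earlier sketch:

```powershell
$string = Import-Clixml string.xml
$date = Import-Clixml date.xml

$string.GetType().Name   # String
$date.GetType().Name     # DateTime
```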
In the previous example, we see we now have our original objects back again. If the XML is imported into another system, the specific .NET class used would also need to exist on the destination system.
PowerShell also has some built-in functionality for dealing with comma-separated value (CSV) files. Import-Csv reads in the file and creates objects based on the contents of the file.
The 2.0 CTP2 has a few new features relating to CSV:
• A new cmdlet named ConvertTo-Csv. As of the release of the 2.0 CTP2, the built-in help was not complete for this new cmdlet.
• The existing Import-Csv and Export-Csv cmdlets now have a new Delimiter parameter, which provides support for delimiters other than “,”. See the built-in help for the cmdlets for more details.
We look at Import-Csv and show an example of reading in a file.
Because these are objects, we can use cmdlets such as Select-Object to get just specific properties. For example, our server.csv file might contain all kinds of information regarding each server, but we might need only a printout of the server column.
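A sketch of that filtering step (server.csv and its Server column are assumed from the text):

```powershell
Import-Csv server.csv | Select-Object Server
```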
We also said that objects are created when using Import-Csv. Using Get-Member, we can see the CSV columns are added as properties to a custom object that PowerShell creates.
You can also use PowerShell to create a CSV file from objects passed along the pipeline. For example, you can get a listing of all the processes on the machine and write out particular objects to a CSV file.
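A minimal sketch of exporting process objects (the output file name is hypothetical):

```powershell
# Write every process object's properties out to a CSV file
Get-Process | Export-Csv processes.csv
```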
The previous example dumps a lot of information into the CSV file, so we might want to filter out some of the information by using Select-Object.
ProvisionWebFolders.ps1 is a script provided as a complete working example of PowerShell being put to use for task automation. A working copy of this script can be found at www.informit.com/title/9780768687187. You need to provide two parameters to run this script. First, TemplatePath should have its argument set to the source path of the template folder structure copied to new users’ Web folders. Second, ImportFile should have its argument set to the name of the CSV import file used to define new users and their Web folder locations. The command to run the ProvisionWebFolders.ps1 script, with sample output shown in Figure 8.1, follows:
The ProvisionWebFolders.ps1 script performs the following sequence of actions:
1. The script verifies that the template folder path exists.
2. The script verifies that the import folder path exists.
3. The script imports the CSV file into the $Targets variable.
4. For each user in $Targets, the script copies the template folder structure to the new user’s Web folder.
5. The script sets permissions on each folder, such as the following:
• Administrators: Owner
• Administrators: FullControl
• System: FullControl
• NewUser: FullControl
The first code sample contains the header for the ProvisionWebFolders.ps1 script. This header includes information about what the script does, when it was updated, and the script’s author. Just after the header are the script’s parameters:
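A sketch of the parameter block being described; the parameter names match the script, but the messages are paraphrased, not copied from the actual source:

```powershell
param(
    # Throw an error (after printing a red message) if the argument is missing
    [string]$TemplatePath = $(throw Write-Host `
        "Please specify the path of the template folder structure." `
        -ForegroundColor Red),
    [string]$ImportFile = $(throw Write-Host `
        "Please specify the CSV file that defines the new users." `
        -ForegroundColor Red)
)
```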
Notice how the throw keyword is used in the param declaration to generate an error when a parameter does not have a defined argument. This technique forces a parameter to be defined by stopping execution of the script and providing the script operator with information about the required parameter using the Write-Host cmdlet. When using Write-Host, you can use the ForegroundColor parameter, as shown in the previous code sample, to control the color of output text. This feature is handy for focusing attention on details of the script status, as shown in Figure 8.2:
Next, as seen in the following code sample, the script loads the needed file system management functions into its scope:
The preceding functions are used to make file system permission changes. These functions are explained in Chapter 9, “PowerShell and Permissions.”
The next code sample contains the beginning of the script’s automation portion. First, the script checks whether the string contained in the $TemplatePath variable is a valid folder path. Then, the script checks whether the string contained in the $ImportFile variable is a valid file path. To perform these tests, the if statements make use of the Test-Path cmdlet, a handy cmdlet that can verify whether a folder or file (-PathType Container or Leaf) is valid. If either of these paths is invalid, script execution is halted, and information about the invalid paths is returned to the script operator:
In the next code sample, the rest of the variables used in the script are defined. The first variable, $Owner, is used by the script to define the owner for each user’s Web folder structure, which in this case is the local Administrators group. Then, the variable $Targets is defined using the Import-Csv cmdlet, which reads values from the import CSV file ($ImportFile) into the $Targets variable, used to provision the new users’ Web folders:
In the following code sample, the script uses the path and username information contained in the $Target variable to construct the final destination path using the Join-Path cmdlet. Then, the script uses the Copy-Item cmdlet to copy the template folders to the destination path:
Next, the script uses the Set-Owner function to change ownership of the user’s Web folder structure to the local Administrators group:
You might be wondering why the code for Set-Owner is enclosed in a script block. The dot (.) call operator preceding the script block tells PowerShell to run the script block within the current scope; if the call operator isn’t used, PowerShell doesn’t run the script block. The reason for creating an independent script block for the Set-Owner code is to ensure that the trap statement is scoped only to this block of code.
In the following code sample, notice that the Administrators group is added to the root folder’s security descriptor before inherited permissions are cleared:
The Clear-Inherit function clears inherited permissions from the root folder, subfolders, and files, in addition to explicitly defined permissions on all subfolders and files. If the Administrators group didn’t have explicitly defined rights on the root folder, the rest of the script wouldn’t run because of a lack of rights.
Explicitly defined permissions are permissions that are directly defined for a user on an object. Implicitly defined permissions are permissions that are either inherited or defined through membership of a group.
In the last code sample, the SYSTEM account and the user are granted FullControl over the user’s Web folder, and the script notifies the script operator of its completion:
We first looked at the core cmdlets used when dealing with file systems and discussed how these core cmdlets provide a common experience across different providers, not just the file system.
Then, we focused on how to manage the Windows file system using PowerShell, where we showed how to complete some of the most common administrative tasks done with drives, folders, and files.
This chapter provided an overview of how to manage CSV and XML files using cmdlets created specifically to deal with these specially formatted files.
Finally, the chapter presented a complete file system management script gathering several key concepts already reviewed in the book. Modularizing scripts can provide various advantages, such as reusing code and helping to make scripts more readable.