Monday, 21 September 2015

Next-Level Scripting - Using Export-CSV With Custom Objects

Welcome to the next post in my Next-Level Scripting series where I show you the PowerShell scripting techniques that take your scripts to the next level - a level defined by concise, well-written code that conforms to best practices and uses the built-in tools in PowerShell whenever possible to handle common scripting tasks.

Sometimes it’s Easy, Sometimes it Ain’t

One of the most common things you will need to do as a Next-Level scripter is collect information from various sources and then display that information on the screen and/or export it to a file.  As we’ll see, this is not very difficult to do when you’re just collecting information from one place using one cmdlet but when you’re getting information from multiple sources it becomes significantly more challenging.

Let’s look at the easy scenario first.  We just want a list of services whose name begins with “Hyper-V” as well as the state of the service and its name.  We can just stick with Get-Service and then export straight to CSV as shown below.

Get-Service | where {$_.DisplayName -like "Hyper-V*"} | select Name,Status,DisplayName |
    Export-Csv C:\Temp\Services.csv -NoTypeInformation

Now let’s imagine a different scenario where we want to do the following:

  1. Read list of usernames from a file
  2. Find the account corresponding to that name in Active Directory
  3. Get the profile path for the account
  4. Get the home drive path for the account
  5. Check if the roaming profile path exists on a file server
  6. Check if the home drive folder exists on a file server

Since we are pulling information from various sources, we can’t just export straight to CSV like we did in the first example.  Without knowing exactly what to do, this scenario can appear very daunting.  Armed with the right knowledge however, it becomes a cookie cutter approach that you can easily copy between scripts for widely varying purposes.

Custom Objects and Hashtables

In this example, we are going to be working with hashtables, arrays and custom objects.  Now if you don’t understand exactly what arrays, hashtables and custom objects are, don’t worry too much.  You can essentially copy the approach I’m going to show you and with a few basic modifications use it for just about any scenario where you want to collect and display information from multiple different sources.

001  $allResults = @()
002
003  foreach ($user in (Get-Content C:\Temp\Users.txt))
004  {
005      $thisUserInfo = @{}
006
007      $thisUser = Get-ADUser $user -Properties ProfilePath,HomeDrive
008
009      $thisUserInfo.UserName = $thisUser.SamAccountName
010      $thisUserInfo.RoamingProfilePath = $thisUser.ProfilePath
011      $thisUserInfo.HomeDrivePath = $thisUser.HomeDrive
012      $thisUserInfo.ProfileExists = Test-Path \\FileServer1\Profiles\$user
013      $thisUserInfo.HomeDriveExists = Test-Path \\FileServer2\HomeDrives\$user
014
015      $object = New-Object PsObject -Property $thisUserInfo
016
017      $allResults += $object
018  }
019
020  $allResults
021  $allResults | Export-Csv C:\Temp\Results.csv -NoTypeInformation

Let’s look at the important lines in this script.

  • Line 1: Create an empty array which will hold the results for every user
  • Line 5: Create a hashtable which will hold the results for the current user we are processing.  It’s important to understand that this hashtable only holds the results for the current user and is recreated each time the script hits line 5
  • Lines 9 through 13:  Populate the hashtable with the various bits and pieces of information we are looking for.  The names you choose for these values will become the column headers in your results
  • Line 15: Create a custom object and populate it with the information from the hashtable
  • Line 17: Copy the newly created object into the allResults array
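As a side note, on PowerShell 3.0 and later you can collapse the hashtable-plus-New-Object pattern into the [PSCustomObject] type accelerator.  Here's a minimal sketch of the same idea with placeholder data standing in for the real AD lookups:

```powershell
# Hypothetical stand-in for the per-user loop above; 'alice' and 'bob'
# are placeholder names and the Test-Path checks are stubbed out.
$allResults = foreach ($user in 'alice','bob') {
    [PSCustomObject]@{
        UserName      = $user
        ProfileExists = $false   # placeholder for the real Test-Path check
    }
}
$allResults.Count
```

A nice bonus of [PSCustomObject] is that it preserves the property order you wrote, so your CSV columns come out in the order you declared them, whereas a plain hashtable makes no ordering promise.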

And you can see the results of your magnificent script at the end when you display the allResults array on the screen:

image

And then to cap it off, your concise and beautiful output is ready for export to a CSV file which we do on line 21.

Friday, 14 August 2015

You Can’t Always Trust Perfmon

As a diligent IT professional, you decide you want to run perfmon to monitor the memory usage on a few of your servers.  Not being 100% familiar with what all the counters mean, you do a bit of research and decide that one of the counters you want to look at is Committed Bytes.  Good choice, this is an important counter which shows you how much memory Windows has committed to make available to processes if and when they need it.  Wanting to know what perfmon has to say about this counter, you open it up and see the following description:

Committed Bytes is the amount of committed virtual memory, in bytes. Committed memory is the physical memory which has space reserved on the disk paging file(s)

Unfortunately, this description (and the description for Commit Limit and % Committed Bytes In Use) is wrong.  Interestingly, if you go all the way back to Windows 2003 (and possibly further than that) and skip forward to Windows 10, the same incorrect description is there all the way through.  This is a good example of how difficult it is to truly understand the counters relating to memory management.  Let’s take a look at what is wrong with this description.

Spare Me The Details!

If you don’t care too much about the details, the easy way of showing that the description is wrong is to remove your page file and see if Committed Bytes is zero.  After all, the implication in the description is that committed memory has space reserved in the page file.  Therefore, without a page file you can’t reserve said space in the page file and therefore can’t have any Committed Bytes.  If you turn off your page file, you’ll see that you will still have a non-zero value for Committed Bytes just like you did with the page file still present.  Case closed!

Give Me The Details!

Ok, you asked for it – things are going to get technical now.  Let’s first look at what memory commitment is.  When a process starts up, it needs memory.  However, it doesn’t know exactly how much memory it’s going to need ahead of time.  It might need 100 MB or it might need 500 MB depending on load.  Let’s say that Windows has 800 MB of free memory when the process first starts up.  No problems here – even under full load the 500 MB requirement of our fictitious process can be met.  But what if it only needs the full 500 MB four days after starting up?  By that time, there may only be 100 MB of available memory in Windows at which point our process is going to be very unhappy when it asks for the full 500 MB and the allocation fails.  Avoiding this situation is what memory commitment is all about.  It allows a process to reserve a given amount of memory and Windows will promise to make the memory available should the process need it, even if this request comes weeks after the process first started.  The key, of course, is for Windows to ensure that it doesn’t agree to provide more memory to processes than it can physically provide.  We’ll look at how it does this in the next paragraph when we talk about the Commit Limit.

By this point, you’ve probably realized that the concept of “reservation” from the built-in perfmon description is accurate.  What’s not accurate is where the reservation is held.  To see why this is the case, we first need to understand where this committed memory is being drawn from.  Windows has a value known as the Commit Limit.  This is the maximum amount of memory that Windows can promise to make available to processes and is composed (approximately) of the amount of physical RAM plus the size of the paging file(s).  For example, if you have 4 GB of RAM and a 2 GB Page File, you have a 6 GB Commit Limit.  Related to the Commit Limit, we have the Commit Charge.  The Commit Charge is how much memory Windows has promised to make available to processes if and when they need it and is what is represented by the Committed Bytes counter we’re discussing in this post.

And now we get to the crux of the matter.  When Windows says, “I solemnly promise to reserve 500 MB of memory for you” (making the Committed Bytes counter go up by 500 MB as well) it’s not reserving that space in the page file, contrary to what the description says.  What it is instead doing is setting aside 500 MB of memory for the process and then adding 500 MB to the Commit Charge.  If and when the process actually uses that memory, the page file may come into play or it may not, depending on what the Windows memory manager decides is best at that point in time.  If a request for memory from a process will make the Commit Charge exceed the Commit Limit, the request will fail (or the page file will need to be expanded).

So in summary – the built-in description for Committed Bytes states that committed memory is memory which has space reserved in the page file.  In this post we have seen that committed memory is a memory reservation, represented by Committed Bytes, which is charged against the Commit Limit and that there is no specific tie-in to where that memory reservation is being held.
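If you want to check these values on one of your own servers, the counters in question live under the Memory object (\Memory\Committed Bytes and \Memory\Commit Limit) and can be read with Get-Counter.  The Commit Limit arithmetic from the example above can be sketched in PowerShell like this – the 4 GB and 2 GB figures are the example's, not a real measurement:

```powershell
# Commit Limit ≈ physical RAM + page file size (figures from the example above)
$physicalRam = 4GB
$pageFile    = 2GB
$commitLimit = $physicalRam + $pageFile
"Commit Limit: $($commitLimit / 1GB) GB"   # → Commit Limit: 6 GB
```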

Monday, 3 August 2015

Modifying an INI File

Occasionally, you might find yourself needing to modify a file which has an ini-type format, i.e. setting=value.  For example, you might be doing an unattended install of SQL using a script which needs to modify certain values in the configuration file.  Or perhaps you’re installing Micros-Retail’s XStore product which makes extensive use of files that use this format.

There are two methods I commonly use to do this type of thing, and which method I choose depends on how many values I’m modifying.  If it’s just one value, I’ll use the one-liner method but if I’m modifying many values I use a custom function I wrote to do this.  For the purposes of this post, let’s assume we’re customizing a SQL install and want to vary the instance name with each install.

The One-Liner Method

$iniFile = 'C:\Temp\SQLInstall.ini'
(Get-Content $iniFile) |
    foreach { $_ -replace "INSTANCENAME=.+","INSTANCENAME=MYINSTANCE" } |
    Set-Content $iniFile

Pretty straightforward – any line matching the pattern “INSTANCENAME=.+” will be replaced with “INSTANCENAME=MYINSTANCE”.  While this approach works well for replacing one value, I like to move this work to a function when I’m replacing multiple values.  So let’s expand our scenario to include some other values that we want to modify and see how it looks with a function doing the work.
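You can see the replacement behaviour on a single string before running it against a file – this is just an illustrative snippet, not part of the install script:

```powershell
# -replace swaps the matched portion of the string for the new text
$line = "INSTANCENAME=MSSQLSERVER"
$line -replace "INSTANCENAME=.+", "INSTANCENAME=MYINSTANCE"
# → INSTANCENAME=MYINSTANCE
```

One thing to note about the pattern: because .+ requires at least one character after the equals sign, a line that reads just INSTANCENAME= with no value won't match and therefore won't be replaced.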

Using a Function

The basic approach with this function is to create a hashtable of the values we want to replace in the file and then pass the hashtable to a function which then does a search and replace and updates the file.  Let’s take a look at how it all works.

Since the function is expecting to receive a hashtable, let’s start by creating the input it’s expecting and then take a look at the function. 

$iniFile = 'C:\Temp\SQLInstall.ini'

$sqlInstallParams = @{
    "SQLSVCACCOUNT=" = "SQLServiceAccount";
    "INSTANCENAME=" = "MYINSTANCE";
    "SQLBACKUPDIR=" = "C:\Backups";
    "SQLUSERDBDIR=" = "C:\SQL\DB";
    "SQLUSERDBLOGDIR=" = "C:\SQL\DB\Logs";
    "SQLTEMPDBDIR=" = "C:\SQL\TempDB"
}

#Now call custom function to update the config file using the hashtable values
Set-IniValue -targetFile $iniFile -hashTable $sqlInstallParams

Nothing much to this step – we just create a basic hashtable with the values we want to replace.  What I like about this approach is that it allows you to see what’s being modified in an easy-to-read format.  Now let’s take a look at the function we just called in Step 1.

function Set-IniValue ([string]$targetFile, [hashtable]$hashTable)
{
    # Get contents of current target file
    $content = Get-Content $targetFile

    # Loop through hash table of new values and compare each line of
    # file with each key from hash table

    # If no match is found, add line to text file
    $hashTable.GetEnumerator() | ForEach-Object {
        if (!(Select-String -Pattern $_.Key -Path $targetFile -Quiet))
        {
            $newText = $_.Key + $_.Value
            Add-Content $targetFile $newText
        }
        else
        {
            foreach ($line in $content)
            {
                # If line is found, replace it with key and new value
                if ($line -match $_.Key)
                {
                    $newText = $_.Key + $_.Value
                    [IO.File]::ReadAllText($targetFile).Replace($line,$newText) |
                        Set-Content $targetFile -Force
                }
            }
        }
    }
}

The function starts off by checking if any of the entries we want to modify don’t exist in the ini file.  If any are found which don’t exist, they are added.  It then checks for any matches with the current key of the hashtable and modifies that line with the corresponding value from the hashtable, saving the ini file after each change.  And that’s pretty much it – I’ve used this function thousands of times over the years and it has never let me down!

Cleanup

Set-Content adds a blank line to the end of the file each time it is run so if you’re modifying a few values you can end up with a number of blank lines at the end of the file.  If you want to clean this up, you can add this line after the function call to remove these blank lines.

(Get-Content $iniFile) | where {$_.Trim() -ne "" } | Set-Content $iniFile

Thursday, 25 June 2015

Displaying Script Progress With a Function

In my experience, if you are writing scripts that will be used by other people they will almost invariably be intimidated by the console, even if they work in IT.  The fact is, the vast majority of people who work on Windows spend most of their time in the GUI and only a tiny fraction of time in any kind of command prompt/console.  Therefore, if you’re writing a script that other people are going to be running, and the script performs multiple actions, I find it very helpful to display in clear detail what stage the script is at and what it’s doing.  To this end, I wrote this function which I use in certain scripts which displays the script progress.  Let’s take a look.

Before we look at the code, let’s go over one of the fundamental rules of script writing and programming in general – don’t repeat yourself.  Let me repeat that because it’s so important – don’t repeat yourself!  What this means is that if you find yourself writing the same line of code more than once in a script, you need to write a function instead and call it.  So let’s say you want to produce an output like this:

image

You might be tempted to write something like this to get the output shown above:


Write-Host -ForegroundColor Green '**********************************'
Write-Host -ForegroundColor Green 'STAGE 1 - COPYING FILES'
Write-Host -ForegroundColor Green '**********************************'
#Code to copy files goes here

Write-Host -ForegroundColor Green "`n**********************************"
Write-Host -ForegroundColor Green 'STAGE 2 - QUERYING EVENT LOG'
Write-Host -ForegroundColor Green '**********************************'
#Code to query event logs goes here

Write-Host -ForegroundColor Green "`n**********************************"
Write-Host -ForegroundColor Green 'STAGE 3 - SETTING REGISTRY VALUES'
Write-Host -ForegroundColor Green '**********************************'
#Code to set reg values goes here

This approach works but notice how often you’re repeating the same thing.  Also, what happens if your script has 15 stages and you need to add a new stage between 3 and 4?  Now you have to find all the references to stage numbers and manually change them to be in order.  A better approach is to delegate this task to a function.  We need two things to make this happen:

  1. A way to keep track of the stage number
  2. A way to keep track of the stage description, eg. “Copying Files”

Let’s take a look at the function to do this, along with the code needed to produce the output previously discussed:

001  function Write-Stage([string]$description, [string]$fgColor = 'Green', [int]$length = 35)
002  {
003      "`n`n"
004      Write-Host -ForegroundColor $fgColor $("*" * $length)
005      Write-Host -ForegroundColor $fgColor "STAGE $stage - $description"
006      Write-Host -ForegroundColor $fgColor $("*" * $length)
007      $script:stage++
008  }
009
010  $script:stage = 1
011
012  Write-Stage -description "COPYING FILES"
013  # Code to copy files goes here
014
015  Write-Stage -description "QUERYING EVENT LOG"
016  # Code to query event log goes here
017
018  Write-Stage -description "SETTING REGISTRY VALUES"
019  # Code to set reg values goes here

$script:stage = 1

Write-Stage -description "COPYING FILES"
# Code to copy files goes here

Write-Stage -description "QUERYING EVENT LOG"
# Code to query event log goes here

Write-Stage -description "SETTING REGISTRY VALUES"
# Code to set reg values goes here

Let’s take a look at what is going on here.  Our function accepts three parameters:

  1. Stage description ($description)
  2. Text foreground colour ($fgColor, defaults to Green)
  3. Length of the string of asterisks above and below the stage description ($length, defaults to 35)

Line 004 simply repeats the asterisk the number of times specified by the $length variable.  Line 005 echoes the script stage number and description and then line 006 echoes the asterisk the same number of times again.

So, each time we want to display our progress we simply call the function with the appropriate description and it appears in all its glory without endless repetition of asterisks all over your script.

Now you might be wondering – how does the script know what stage number to display?  After all, we never pass it as a parameter to the function.  The answer to that lies in the variable defined on line 010.  Notice that it is not simply defined as $stage but rather as $script:stage.  This gives the variable script-wide scope which means that we can modify it inside the Write-Stage function and it will be available outside of the function.  So we start off by setting it to 1 and then each time the function is called, it is incremented by 1.  With this approach, if you ever need to add extra stages it’s not a big deal because the stage number is not hard-coded anywhere like it was in the first code example I showed you.
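Here's the scoping behaviour in isolation, stripped of the Write-Host decoration – a minimal sketch with a made-up function name:

```powershell
# $script: gives the variable script-wide scope, so the function and the
# top level of the script share one counter.
$script:stage = 1

function Step-Stage {
    "Currently at stage $stage"   # reads the script-scoped variable
    $script:stage++               # increments it for the next call
}

Step-Stage
Step-Stage
"Final value: $stage"
```

Because the increment targets $script:stage explicitly, each call bumps the shared counter rather than creating a new function-local variable of the same name.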

You can also modify the appearance of the progress display by passing different values for the length and foreground colour parameters.  So you could do something like this:


Write-Stage -description "COPYING FILES"
# Code to copy files goes here

Write-Stage -description "QUERYING EVENT LOG" -fgColor Magenta
# Code to query event log goes here

Write-Stage -description "SETTING REGISTRY VALUES"
# Code to set reg values goes here

And get output like this:

image

And that’s it!  Now you have a concise and efficient way to display the progress of a script and hopefully reduce the intimidation factor a non-command-line guru might feel when running one of your masterpieces.

Friday, 22 May 2015

Next-Level Scripting – Remember Get-Member

Welcome to the next post in my Next-Level Scripting series where I show you the PowerShell scripting techniques that take your scripts to the next level - a level defined by concise, well-written code that conforms to best practices and uses the built-in tools in PowerShell whenever possible to handle common scripting tasks.

What You Don’t Know

One of the most important skills to have as a Next-Level Scripter is the ability to find the answers to what you don't know.  What don't you know exactly?  A lot!  I've been writing PowerShell scripts for years and I still regularly use the techniques I will outline in this post to discover information that I don't know.  And I think it's safe to say that I will never stop using these techniques because there is just too much "stuff" to know it all.  And there’s nothing wrong with that so please don’t ever think that your scripting skills are lacking if you regularly have to look things up!

Warning – Mild Developer-Speak Ahead

I have to apologize but it's time for some developer-speak.  Don't worry though, there are only two main terms you need to understand as your foundation - methods and properties.

To explain these terms, let's take a step back from the world of computers and look at a real-world example to which pretty much everyone can relate - the car.  In developer-speak the car is known as an object.  Every car (object) has, among other things, a colour, weight and model.  These are fixed attributes that you can't change at will and are what are known as properties.  In developer-speak, the colour is a property of the car object.

But, there are also things that you can change and to make these changes you need to actually do something.  For example, the car can accelerate but it doesn't just do that by itself; you need to press the gas pedal to make it accelerate.  In the world of objects this is what is known as a Method.  Once again, to put it in developer speak: you use the accelerate method to make the car object accelerate.

And that's all you really need to know about objects, methods and properties for the purposes of this post!  See, it wasn't too bad was it?
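To translate the analogy into actual PowerShell, here's a throwaway car object – the property and method names are made up purely for illustration:

```powershell
# Properties: fixed attributes you read off the object
$car = [PSCustomObject]@{
    Colour = 'Red'
    Model  = 'Roadster'
}

# Methods: things the object can *do*; here we bolt one on for demonstration
$car | Add-Member -MemberType ScriptMethod -Name Accelerate -Value { "Vroom!" }

$car.Colour         # → Red
$car.Accelerate()   # → Vroom!
```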

Exploring Objects

Remember the car object from our example above?  When you work with PowerShell you are working with objects as well and all of these objects have methods and properties.  The question though when you're dealing with these objects is, "What can I do with this object via its methods and properties?"  Like I said in the introduction to this post, there is absolutely no way you can know what all the properties and methods are for everything you'll ever come across and you therefore must know how to find this information out.

Enter Get-Member…this cmdlet is what shows you the methods and properties of an object.  To get it to tell you this information you need to tell it what object you're interested in.  Let's look at a simple example to show how this is done.

You've been asked to write a script to find the date created for every subfolder and file of a given folder.  You've never done this before but suspect that there is a property which contains the creation date of a folder/file.  Your mission then is to find this property (I know you could easily look this up on the internet but if you want to truly learn PowerShell you need to figure out how to do these things yourself - it's the best way to learn).

You know about the Get-ChildItem cmdlet so you decide that's a good place to start. You run it against your folder of interest and examine the output.

image_thumb17

Well that's a little disappointing - no mention of creation date, only modified date.  But does this mean you should throw your monitor at the wall and go home weeping?  No!  In most cases, there is way more information you can retrieve and PowerShell is only showing what it has been programmed to show you by default.

As you've probably figured by now, this is one of those times where you need to run Get-Member.  Remember how I said above that you must tell Get-Member what object you're interested in?  That object will come from Get-ChildItem via this command:

Get-ChildItem C:\CorpMiscellaneous | Get-Member

And here’s the output:

image_thumb23

This command retrieves all the available methods and properties of the objects returned by Get-ChildItem – in this case, the files and folders inside C:\CorpMiscellaneous.  Since we're trying to retrieve a property and not a method, we can ignore all the methods in the list and focus only on the properties, highlighted in red.  And lo and behold, what is the 2nd property we see in the list, highlighted in green?  CreationTime!  Pleased with this discovery, you decide that you want to report on the name of the file and the creation time as well as sort the results by the creation time.  Let's try this out:

Get-ChildItem C:\CorpMiscellaneous | Select-Object Name, CreationTime | Sort-Object CreationTime

And here’s the output:

image_thumb26

In this example, we used the Select-Object cmdlet to specify the properties we want to retrieve from each object, in this case Name and CreationTime (you could have specified any valid property in this command).  We then pipe the results to Sort-Object and tell it to sort the results based on CreationTime.  If you wanted to save the results to a file you could also add Export-CSV at the end of the command.

And that's all there is to it.  You can use this approach for any object to discover what you can do with it and in doing so, take the next step on your way to becoming a Next-Level Scripter.

Tuesday, 12 May 2015

Quick and Easy Check For Nested Groups

I sometimes find myself needing to check if a group contains any other nested groups.  I don’t necessarily need to know who the members are of each nested group, I just want to know if it contains other groups and what those groups are named.  You can use this quick one-liner to get a list of any groups that belong to the group specified, in this case Citrix_Users.

Get-ADGroupMember Citrix_Users | where {$_.objectClass -ne "user"} | select -ExpandProperty Name

Each AD object has an objectClass property associated with it.  All this one-liner does is get each member of the specified group and, if its objectClass is not user, display the member’s name.  Depending on how frequently you use this, you could add a parameter for the group name and even add it as a function to your PowerShell profile so that it’s loaded automatically each time you run PowerShell.
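Wrapping the one-liner up as just suggested might look like this – a sketch where Get-NestedGroup is a made-up name, and the ActiveDirectory module must be available when the function is actually called:

```powershell
# Hypothetical wrapper around the one-liner above; requires the
# ActiveDirectory module (Get-ADGroupMember) at call time.
function Get-NestedGroup ([string]$groupName) {
    Get-ADGroupMember $groupName |
        Where-Object { $_.objectClass -ne 'user' } |
        Select-Object -ExpandProperty Name
}
```

With that in your profile, Get-NestedGroup Citrix_Users behaves exactly like the original one-liner.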

Friday, 8 May 2015

Next-Level Scripting - Strongly Type Your Variables

Welcome to the next post in my Next-Level Scripting series where I show you the PowerShell scripting techniques that take your scripts to the next level – a level defined by concise, well-written code that conforms to best practices and uses the built-in tools in PowerShell whenever possible to handle common scripting tasks.

Go Easy On Your Keyboard

By “strongly type”, I don’t mean pound on your keyboard as you type each variable.  What do I mean then?  For us sysadmins who don’t have a development background, it might come as a surprise to find out that each and every variable that we declare in PowerShell has a data type associated with it.  Your variable, $userName, is not just a variable – it’s a string variable.  Admittedly, this sounds like developer stuff and it kind of is, but if you want to take your scripts to the next level it’s a concept you need to understand.  In this post we’ll examine how our variables get assigned a data type and how and why we should control that process.

Automatic For the Variables

If the news that all your variables have a data type comes as a surprise to you, it’s because PowerShell does a very good job of automatically assigning the correct data type to your variables without you ever knowing about it.  For example, if you create a variable called $fileName and set it to C:\Temp\MyFile.log, PowerShell will figure out the best data type for it and without any fanfare or big announcement, assign it that type.  This doesn’t mean you can’t find out what type your variable is, you just have to know the right command.  Let’s take a look at some examples.


$fileName = "C:\Temp\MyFile.log"
$fileName.GetType().Name

Perhaps not surprisingly, the method to find out the type of a variable is GetType().  Let’s see the output from running this.

image

There you go – the type of our variable is String.  Notice how we never told PowerShell to do this, it just figured out what type it should be and did it.

Let’s try one more.


$fileCount = 100
$fileCount.GetType().Name

image

This time, our variable was assigned the type Int32.  Once again, job well done by PowerShell – this is indeed an integer and it figured it out for us.

Seems Like PowerShell Is Really Good At Variable Typing

It is!  But as a next-level scripter you should get into the habit of strongly typing your variables because things don’t always go as planned.  Let’s see how we do this by revisiting our examples from above.


[string]$fileName = "C:\Temp\MyFile.log"
[int32]$fileCount = 100

As you can see, we simply need to put the type of our variable in square brackets before the variable name.  This way, we are telling PowerShell what type to make the variable and completely eliminating the risk of an unintended data type being assigned.  This is known as variable casting.
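Casting also converts compatible values on assignment, which you can verify with the GetType() method from earlier.  A quick sketch:

```powershell
[int32]$fileCount = "100"       # the string "100" is converted to an integer
$fileCount.GetType().Name       # → Int32

[string]$fileName = 12345       # and a number can be cast the other way
$fileName.GetType().Name        # → String
```

Note that the type constraint sticks to the variable, so any later assignment of a value that can't be converted to that type will fail.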

User Input

You may know very clearly what type of variable you want but once you hand over your script to someone else you can never count on them getting it right every time the script is run.  Strongly typing your variables is a great way to easily protect your script against unexpected user input.  Let’s look at an example where our hapless user accidentally provides the wrong input.

param
(
    [Parameter(Mandatory = $true)]
    [int32]$daysToRetrieve
)

In this very simple example, our script accepts one parameter – the number of days to retrieve Event Log records for (if you’re not familiar with the parameter block shown above, see here).  Obviously, this input needs to be a number.  But you can also write a number as a word and how do you know that someone, at some point, won’t type out the word “six” instead of the number 6?  You don’t, so prepare for it by strongly typing the variable.  Let’s see what happens when we provide the wrong input.

image

Denied!  PowerShell rejects the input with the message “Input string was not in a correct format”.  The word “six” describes a number but it isn’t an integer, and since we told PowerShell that this is the type the variable must be, it will not accept anything that it can’t convert to an integer.

Let’s imagine for a second that you didn’t strongly type your variable.  PowerShell would have accepted “six” and then proceeded to run whatever command you had in your script to retrieve the event log records.  The command would very likely have failed anyway at this point so what did we gain by catching the mistake earlier?  It is much, much better to catch this kind of thing before your script starts doing its work.  What if your script was doing something other than just retrieving information like deleting files, or rebooting computers, or deleting user accounts from AD?  The last thing you want is the wrong type of information being fed to those commands because the consequences could be disastrous.  When you catch this kind of thing before the script actually does anything, you are ensuring that you don’t have a crazy runaway script causing havoc because of something you didn’t expect and couldn’t have planned for.
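You can reproduce the rejection outside of a param block too – the cast fails the moment the bad value arrives, before any of your script's real work has run.  An illustrative snippet:

```powershell
try {
    # same conversion the typed parameter performs
    [int32]$daysToRetrieve = 'six'
    "Accepted: $daysToRetrieve"
}
catch {
    "Rejected before any work was done: $($_.Exception.Message)"
}
```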

How Do I Know What Type To Use?

When you use data types in PowerShell you’re actually using .Net classes.  PowerShell has a number of shortcuts, known as type accelerators, which allow you to specify a data type with less typing.  For example, in our script above when we used [Int32], we were actually using the [System.Int32] data type.  Thanks to the built-in type accelerators though, you only need to type [Int32] and PowerShell does the rest for you.  Here’s a list of some of the more common data types you would use:

[int] 32-bit signed integer
[string] Fixed-length string of Unicode characters
[bool] True/false value
[double] Double-precision 64-bit floating point number
[decimal] 128-bit decimal value
[array] Array of values
[xml] Xmldocument object
[hashtable] Hashtable object (similar to a Dictionary object)

You can also get a full list of the available type accelerators by typing this command:

[psobject].Assembly.GetType("System.Management.Automation.TypeAccelerators")::get

One other method is to not strongly type your variable, let PowerShell do it for you, and then check what data type PowerShell automatically assigned.  You would do this using the GetType() method that I showed above.

Monday, 4 May 2015

Debugging Without The ISE

Sometimes when you’re debugging a PowerShell script you may find that you need to run the script in a separate PowerShell instance instead of in the ISE.  For example, you may want to test something like transcription which doesn’t work in the ISE.  Problem is, when you’re not using the ISE you lose the ability to easily insert a breakpoint into your script and poke around to check whatever it is you want to check.  One solution is to write a bunch of lines echoing what each variable is set to at that point.  But that’s time consuming, especially if you have a lot of variables you’re interested in.  A more efficient solution is to use a very handy method which allows you to temporarily stop a PowerShell script in mid-execution so you can examine those variables dynamically to your heart’s content.

The automatic $host variable has a host (pardon the choice of words) of properties and methods, one of which is EnterNestedPrompt.  What this allows you to do is stop script execution at any given point and enter a prompt inside the script with all the variables available at that point in the script accessible to you.  Let’s look at the very simplistic example shown below:


$a = 1
$b = $a + 1

Write-Host "Entering nested prompt..."
$host.EnterNestedPrompt()
Write-Host "Out of nested prompt"
$c = $b + 1
$c

All we’re doing here is setting some very basic variables on lines 1 and 2.  But the magic happens on line 5 where we enter the nested prompt.  Let’s take a look at the script output when this is run:

image

Notice how after the line “Entering nested prompt” the PS prompt changes from PS C:\Temp> to PS C:\Temp>>.  The script is paused at this point and we’re now inside the nested prompt. We can now type anything we want with all the variables active in the script at that time available to us.  Notice how I typed $a and $b and got the correct values echoed back.  Very handy!  Once you’re done, just type exit and the script continues.