Best Practices for PowerShell Scripting (Part 6)

If you would like to read the other parts in this article series please go to:

Throughout this article series, I have talked about a number of different best practices for PowerShell scripting, especially with regard to building PowerShell functions. In this article, I want to wrap things up by talking about a few more best practices.

I want to start out by talking a bit more about version control. In my previous article, I mentioned that I recommend assigning version numbers to scripts and to functions. That way, if something unexpectedly goes wrong with your code, it is relatively easy to go back and look at what has changed since the last version in which the code was known to be working correctly.
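One common convention (though certainly not the only one) is to record the version number and a brief change history in the script's comment-based help block. The script name, dates, and history entries below are hypothetical, just to illustrate the idea:

```powershell
<#
.SYNOPSIS
    Creates user mailboxes from a CSV input file.
.NOTES
    Version : 1.8
    Author  : (your name)
    History : 1.7 - Fixed null check on the -Identity parameter
              1.8 - Added transcript logging
#>
```

Because the version lives inside the file itself, anyone opening the script can see at a glance which revision they are looking at, even if the file has been copied or renamed.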

There is, however, one additional aspect of the versioning process that I neglected to mention in that article. Before you modify a working script or function, it is important to make a backup of the current version in case something goes wrong. In fact, it is a good idea to retain copies of the last several versions of your code, because you never know when you may need to reference them. After all, bugs do not always show up right away.

While I am on the subject of versioning, it is worth noting that some people build standalone functions that exist as independent files. There is nothing wrong with this practice, especially if you have functions that you use frequently. It is important to understand, however, that an external function is a dependency for the scripts that use it. As such, it is a good idea to document within the script (through the use of comments) which external functions are used, and which versions of those functions the script is designed to work with. Otherwise, you can end up in a situation in which a function gets updated and one of the scripts that depends on it breaks as a result.

While this concept probably seems really obvious, you have to remember that not every PowerShell script gets run every day. What happens if six months pass before the script is run? Chances are that nobody will immediately remember that a dependency function was modified. At first, everybody will be trying to figure out why the script malfunctioned when it ran perfectly the last time that it was used. Documenting dependencies and dependency versions within the script can help to reduce the amount of time that it takes to troubleshoot.

A dependency comment can be something as simple as “This script uses My-Function and has been confirmed to work with version 1.8 of that function”. This type of comment can be a big clue to anyone who has to troubleshoot the script. Assuming that the script has worked fine in the past, whoever is troubleshooting it would probably see this comment and immediately begin checking the version numbers of the dependency functions that are currently in use.
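You can take this a step further and have the script verify the dependency version at run time. The sketch below assumes the function file is dot-sourced from the script's own folder and that it exposes a version variable; `My-Function`, `MyFunctions.ps1`, and `$MyFunctionVersion` are all hypothetical names used for illustration:

```powershell
# Dependency: My-Function (MyFunctions.ps1), confirmed to work with version 1.8
. "$PSScriptRoot\MyFunctions.ps1"

# Optional guard, assuming MyFunctions.ps1 sets $MyFunctionVersion
if ($MyFunctionVersion -ne '1.8') {
    Write-Warning "This script was tested with My-Function 1.8, but version $MyFunctionVersion is loaded."
}
```

Even if you only emit a warning rather than stopping the script, that one line can save whoever runs the script six months from now a lot of head scratching.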

External functions aren’t the only types of dependencies that can exist for a PowerShell script. A script can have external dependencies that are not specifically related to PowerShell. Let me give you an example. When Microsoft introduced Exchange Server 2007, there were several administrative tasks that could only be performed through PowerShell. Many Exchange administrators either wrote or downloaded scripts to help them with common administrative tasks that could not be performed through the Exchange Server GUI.

Today, Exchange Server still makes extensive use of PowerShell. However, a lot has changed in the eight years or so since the release of Exchange Server 2007. There are some PowerShell scripts that work great in Exchange 2007, but will not work with some of the newer versions of Exchange. Similarly, there are PowerShell scripts that have been written for Exchange Server 2013 that won’t work with older versions of Exchange.

My point is that in situations like this, Exchange Server essentially becomes a dependency. And keep in mind that I am only using Exchange Server as an example; the same basic concept can apply to just about any application. If a PowerShell script is designed to work with a specific version of an application, then it is important to make note of that as part of the script’s documentation. Of course, this alone may sometimes be inadequate. It may not be enough to simply state that a particular script is designed to work with Exchange Server 2016. There may be other details that need to be included in your script’s documentation. Let me give you a few examples.

If a PowerShell script is designed to help you service, configure, or maintain an application (such as Exchange Server), then it isn’t just the application’s version that is relevant, but also the application’s configuration. Depending on what a script is designed to do, the way that an application is configured can mean the difference between the script working or failing.

One of the best examples that I can think of is that of server roles. Many versions of Exchange Server have been designed to be deployed in role-specific configurations. With that said, imagine what would happen if you tried to run a script that targets the mailbox database on a server that does not have the mailbox role installed. If there are role-specific or configuration-specific requirements, then those requirements should be documented in the script.
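A script can also check for such requirements before doing any work. The sketch below assumes it is running in the Exchange Management Shell, where the `Get-ExchangeServer` cmdlet is available and its `ServerRole` property lists the roles installed on a server; treat the exact check as an assumption to adapt to your environment:

```powershell
# Requires: Exchange Management Shell. This script targets the Mailbox role.
$server = Get-ExchangeServer -Identity $env:COMPUTERNAME

if ($server.ServerRole -notmatch 'Mailbox') {
    throw "This script must run on a server holding the Mailbox role. Detected roles: $($server.ServerRole)"
}
```

Failing fast with a message that names the missing role is far friendlier than letting the script die halfway through with a cryptic error.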

Another important consideration is that of file locations. Imagine for a moment that you have a CSV file filled with user names and you are using a PowerShell script to create Active Directory accounts for those users based on the file’s contents. In a situation like this, the script won’t work if it can’t find the file. It is therefore a good idea to document the file’s required location and format.
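Beyond documenting the file's location and format, the script can validate both before it touches Active Directory. The path and column names below are hypothetical placeholders for whatever your documentation specifies:

```powershell
# Required input: C:\Scripts\NewUsers.csv with columns FirstName, LastName, SamAccountName
$csvPath = 'C:\Scripts\NewUsers.csv'

if (-not (Test-Path -Path $csvPath)) {
    throw "Input file not found: $csvPath. See the script header for the required location and format."
}

$users = Import-Csv -Path $csvPath

# Fail early if the expected columns are missing from the file
$required = 'FirstName', 'LastName', 'SamAccountName'
$missing  = $required | Where-Object { $_ -notin $users[0].PSObject.Properties.Name }

if ($missing) {
    throw "CSV file is missing required column(s): $($missing -join ', ')"
}
```

Checking the column names up front means a malformed file produces one clear error instead of hundreds of half-created accounts.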

You should also consider where a script needs to be run from. Suppose that a script is designed to perform some sort of maintenance task against an SQL Server. Can the script be run locally from your administrative workstation, or does it need to be run on the SQL server? This sort of detail definitely needs to be documented.
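If a script genuinely must run on a particular machine, you can enforce that requirement as well as document it. The server name below is hypothetical:

```powershell
# This script is designed to run directly on the SQL Server (hypothetical name: SQL01)
$expectedHost = 'SQL01'

if ($env:COMPUTERNAME -ne $expectedHost) {
    throw "This script must be run on $expectedHost, not on $env:COMPUTERNAME."
}
```

A one-line guard like this turns "why is the script failing from my workstation?" into an answer the script gives itself.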

Finally, what permissions are required in order for the script to work correctly? Many PowerShell scripts must be run with administrative permissions, but are there any additional permissions required? For example, does the administrator who is running the script also need to be an Exchange Server administrator, or will regular, run-of-the-mill administrative permissions suffice? Any time there are special permissions required, those requirements should definitely be documented.
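For the common case of requiring an elevated session, PowerShell can check this itself using the .NET `WindowsPrincipal` class. Application-specific roles (such as Exchange permissions) generally still need to be spelled out in the documentation, since they are harder to test for generically:

```powershell
# Verify that the script is running in an elevated (administrator) session
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)

if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    throw 'This script requires an elevated PowerShell session. See the script header for any additional permission requirements.'
}
```

On newer versions of PowerShell, a `#Requires -RunAsAdministrator` statement at the top of the script accomplishes the same thing declaratively.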

What all of this really boils down to is that you never want to leave anyone guessing. Building error checking into your scripts will go a long way toward preventing failures (or at least providing the person running the script with a meaningful explanation of why it failed and what they can do about it). Even so, you can’t build error checking for every possible situation. Documentation is what fills in the gaps that error checking cannot cover.
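The two techniques work best together: wrap risky operations in try/catch, and use the catch block to point the operator back at the documented requirements. The service name below is hypothetical:

```powershell
try {
    # Hypothetical maintenance task that can fail for environmental reasons
    Stop-Service -Name 'MSExchangeTransport' -ErrorAction Stop
}
catch {
    Write-Error "Could not stop the transport service: $($_.Exception.Message). Verify that the script is running elevated on the correct server (see the script header)."
    exit 1
}
```

Note the `-ErrorAction Stop`: without it, many cmdlet failures are non-terminating and would sail straight past the catch block.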

Conclusion

As you can see, there are a number of different best practices for PowerShell scripting. Ultimately, however, you should adopt the best practices that are the best fit for your own coding style.

