Sunday, March 26, 2017

Put down your laptop, turn off your phone, turn off your brain and get into the wilderness

There was a time, before I was a father and before I was a husband, when I would go to work all day, come home, and then continue to work until I went to sleep. At the time this did not seem like a horrible life to me. I loved what I did. Troubleshooting, designing, and implementing systems were a lot of fun and brought me a tremendous sense of achievement. In fact, I still love to work.

As time went on this sort of schedule ebbed and flowed. I would have some weeks where I would do this and then some when I would be more social after work and find other things to do besides IT.

Now that I am a father and a husband, I can no longer afford that non-stop working attitude and lifestyle. It would prohibit me from being "present" with my family when I am home; that is not fair to them, and it is also no way to live. Truth be told, I promised my wife I would put my phone away while I am home so that I am not distracted by it. This is necessary since anyone who works in IT, like me, likely has a bit of an Internet addiction. With that said, I still log on to my MacBook whenever I have a free moment at home or in my "free" time. It is not because I necessarily need to do work; it is because I love to.

Recently I began to feel the dreaded feeling of being burnt out. I would go to sleep exhausted. I would wake up exhausted. I was eating like crap. I was sleeping like crap. I was constantly distracted by racing thoughts, both at work and at home. It was time for a much needed change.

I decided to take up something that would get me away from technology and everything else in life, at least for a little while. I needed an escape from both work and home, so I looked toward the wilderness and began to hike at the break of dawn two to three times a week. I go on my hikes without any technological device, taking only some water and maybe a blanket to lie on a rock with if needed. At first it was hard. My mind was still distracted, thinking about work and all the responsibilities I have outside of it. Soon, though, I learned to enjoy the wild by actively paying attention not just to my thoughts but also to my surroundings: looking at the beauty of nature, watching animals, watching a stream flow, or just meditating on top of a mountain. Now, when I hike it is a real break from it all, and it recharges my batteries well.

As we get older in IT, our responsibilities outside of work begin to take up a lot of our time, but it is important to find time to recharge and get away to let your brain rest. I encourage anyone who works long hours in an office, or really anywhere, to get out and immerse themselves in the wild, even for a little while.

Friday, March 24, 2017

Using the PowerShell #requires statement in your script

A few years ago I was working with a team of sysadmins. We were all learning PowerShell together and having a blast playing with different modules and trying to figure out what sorts of things we could automate. One day I downloaded a module from the PSGallery, installed it, and wrote a nice little script. My colleague asked if he could borrow it, so he copied the script and ran it, but his console immediately threw a nasty error message. Something along the lines of...

PS /Users/dan> Get-Something                                                    
Get-Something : The term 'Get-Something' is not recognized as the name of a 
cmdlet, function, script file, or operable program. Check the spelling of the 
name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-Something
+ ~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-Something:String) [], Comma 
    + FullyQualifiedErrorId : CommandNotFoundException

Ahh, the dreaded error of not recognizing a cmdlet, function, script file, or operable program. We have all seen this before. For a novice to PowerShell it can be confusing and annoying. The error was thrown because my colleague did not have that module installed and imported into his session. Fortunately, there is an easy way to ensure this doesn't happen again for him or any user who runs a script.

The #requires statement is something we can place at the top of a script to ensure the script cannot run unless the conditions it states are met. This can be done for modules, PSSnapins, RunAsAdministrator, or a specific PowerShell version, as the help explains:

    #Requires -Version <N>[.<n>]
    #Requires -PSSnapin <PSSnapin-Name> [-Version <N>[.<n>]]
    #Requires -Modules { <Module-Name> | <Hashtable> }
    #Requires -ShellId <ShellId>
    #Requires -RunAsAdministrator

So how could I have avoided the nasty error my colleague received? By sticking this at the top of my script:

#requires -Modules ModuleName

As long as the module is installed on the system the script is executing on, the module will automatically be imported into the session. If it is not installed, an error is thrown stating that the script cannot run because the specified modules are missing.

As an example I created a script called 'sample.ps1' and placed #requires -Modules what at the top, so that the script will not run if the module 'what' is not installed. When I attempt to run sample.ps1 I get this error:

PS /Users/dan> ./sample.ps1                                                     
./sample.ps1 : The script 'sample.ps1' cannot be run because the following      
modules that are specified by the "#requires" statements of the script are 
missing: what.
At line:1 char:1
+ ./sample.ps1
+ ~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (sample.ps1:String) [], Scr 
    + FullyQualifiedErrorId : ScriptRequiresMissingModules

Well, at least that is a much more informative error than "not recognized as the name of a cmdlet, function, script file, or operable program", right? Now the user knows which module needs to be installed on their machine before they can run the script.
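Multiple #requires conditions can also be combined at the top of a single script. As a quick sketch (the ActiveDirectory module here is just an example; substitute whatever your script actually depends on):

```powershell
#requires -Version 5.1
#requires -Modules ActiveDirectory
#requires -RunAsAdministrator

# If any of the conditions above are not met, PowerShell refuses to run
# the script and reports exactly which requirement is missing.
Get-ADUser -Filter * | Select-Object -First 5
```

This makes a script's dependencies explicit and self-documenting, which is exactly what my colleague would have needed.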

Creating a CIFS share on a NetApp Clustered Mode storage system with PowerShell

In a Windows environment, CIFS shares are a common occurrence and a necessity for many services, such as Active Directory and print services. Setting up a share on a Windows server is a fairly simple task: we create a folder, enable sharing, and set permissions. NetApp has long been a leader in storage technology, and its systems can also serve CIFS shares. My experience with NetApp has been extremely positive, as their storage systems are very solid and rarely have outages. Today I will show how we can create a share on a NetApp Clustered Mode system with NetApp's PowerShell module.

For this example I am connecting to a management interface on a NetApp storage SVM (storage virtual machine) named FAS-SVM. The share, named 'finance', will be used by an Active Directory group named 'DOMAIN\FinanceGroup' to share their files.

1. First we log in to the system with the Connect-NcController command. This creates a persistent session with FAS-SVM so that, going forward, any commands we run will be executed against this system.

C:\> Connect-NcController -Name fas-svm | ft -AutoSize

Name     Address  Vserver Version
----     -------  ------- -------
fas-svm1 fas-svm1         NetApp Release 9.1

2. Next, we will create a directory on 'vol1' for our CIFS share named 'finance'. I set the permissions to 777 so that everyone has access to the directory.

C:\> New-NcDirectory -Path /vol/vol1/finance -Permission 777 | ft -AutoSize

Name    Type      Size   Created  Modified Owner Group Perm Empty
----    ----      ----   -------  -------- ----- ----- ---- -----
finance directory 4 KB 3/24/2017 3/24/2017     0     0  777 True

3. Time to create the CIFS share on the 'finance' folder. We will use the Add-NcCifsShare command for this. Note that there are several parameters we can use; one important one is -ShareProperties. From the module help we can see our options:

-ShareProperties <String[]>
    The list of properties for this CIFS share. Possible values:
    "oplocks"        - This specifies that opportunistic locks (client-side caching) are enabled on this share.
    "browsable"      - This specifies that the share can be browsed by Windows clients.
    "showsnapshot"   - This specifies that Snapshots can be viewed and traversed by clients.
    "changenotify"   - This specifies that CIFS clients can request for change notifications for directories on this share.
    "homedirectory"  - This specifies that the share is added and enabled as part of the CIFS home directory feature. The configuration of this share should be done using CIFS home directory
    feature interface.
    "attributecache" - This specifies that connections through this share are caching attributes for a short time to improve performance.

For this example, let's give the CIFS share the 'browsable' and 'showsnapshot' properties.

C:\> Add-NcCifsShare  -Name finance -Path '/vol1/finance' -ShareProperties @("browsable","showsnapshot") -Comment 'Finance share' | ft -AutoSize

CifsServer ShareName    Path              Comment
---------- ---------    ----              -------
FAS-SVM   finance      /vol1/finance     Finance share

4. Now that our 'finance' share is created, we will set the CIFS permissions. We will use the Add-NcCifsShareAcl command to give the AD security group 'DOMAIN\FinanceGroup' full control, specifying 'windows' for the -UserGroupType parameter.

C:\> Add-NcCifsShareAcl -Share finance -UserOrGroup 'DOMAIN\FinanceGroup' -Permission full_control -UserGroupType windows | ft -AutoSize

Share        UserOrGroup         Permission
-----        -----------         ----------
finance      DOMAIN\FinanceGroup full_control

Done! Pretty simple, eh? You can see how easily we could put these steps into a function for repeatable use in PowerShell.

To summarize, these are the steps we took:
  • Connected to our Netapp system
  • Created a directory on a volume
  • Created a CIFS share on that directory
  • Added CIFS permissions using an Active Directory security group
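To illustrate that last point, the four steps could be wrapped into a function roughly like the one below. This is only a sketch: it assumes the NetApp PowerShell Toolkit is loaded, and New-NcCifsShareForGroup, along with its parameter set, is a hypothetical name of my own choosing.

```powershell
# Hypothetical helper wrapping the four steps above; assumes the NetApp
# PowerShell Toolkit (DataONTAP module) is installed and imported.
function New-NcCifsShareForGroup {
    param (
        [Parameter(Mandatory)][string]$Controller,   # e.g. 'fas-svm'
        [Parameter(Mandatory)][string]$Volume,       # e.g. 'vol1'
        [Parameter(Mandatory)][string]$ShareName,    # e.g. 'finance'
        [Parameter(Mandatory)][string]$AdGroup       # e.g. 'DOMAIN\FinanceGroup'
    )

    # 1. Connect to the SVM management interface
    Connect-NcController -Name $Controller | Out-Null

    # 2. Create the directory on the volume
    New-NcDirectory -Path "/vol/$Volume/$ShareName" -Permission 777 | Out-Null

    # 3. Create the CIFS share on that directory
    Add-NcCifsShare -Name $ShareName -Path "/$Volume/$ShareName" `
        -ShareProperties @("browsable","showsnapshot") | Out-Null

    # 4. Grant the AD group full control on the share
    Add-NcCifsShareAcl -Share $ShareName -UserOrGroup $AdGroup `
        -Permission full_control -UserGroupType windows
}
```

It could then be reused for any share, for example: New-NcCifsShareForGroup -Controller fas-svm -Volume vol1 -ShareName finance -AdGroup 'DOMAIN\FinanceGroup'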

Thursday, March 23, 2017

Installing software packages remotely using Chocolatey and PowerShell

The capability to install software remotely on many machines at once is still one of the most magical things a system administrator can do. Nowadays we take it for granted, thanks to all the different tools available, but it is still one of the coolest automated tasks in my opinion.

Today, we will use two tools to accomplish this: PowerShell and Chocolatey. Both automate a lot of tasks on their own, but combining them gives an admin even more power and agility to push out software quickly and easily.

In this example I have three Windows workstations in an Active Directory domain, and I would like to push Firefox to all three at the same time. To do this we will use PowerShell's Invoke-Command, which lets us create remote PowerShell sessions on many computers simultaneously. From there we run choco install on each workstation, which tells Chocolatey to install Firefox.

First we open PowerShell on our control machine and create an array with the hostnames of the workstations we want to install Firefox on.

$workstations = ("computer1","computer2","computer3")

Next, let's run Invoke-Command using the $workstations variable for -ComputerName. You will notice that we pipe choco install firefox -y to Select-String. This is because we want to parse only the relevant part of the output, specifically the line that includes "Chocolatey installed"; that line tells us whether the installation succeeded. Note that you may want to use the -Credential parameter on Invoke-Command if the account you are logged into on the control machine does not have local administrator access on the workstations. Finally, we use Select-Object to display each machine's computer name alongside its installation result.

Invoke-Command -ComputerName $workstations -ScriptBlock {choco install firefox -y | Select-String -Pattern "Chocolatey installed" } | select PSComputername,Line

After running Invoke-Command we are greeted with success. Each machine successfully installed Firefox.

PSComputerName Line
-------------- ----
computer1      Chocolatey installed 1/1 packages. 0 packages failed.
computer2      Chocolatey installed 1/1 packages. 0 packages failed.
computer3      Chocolatey installed 1/1 packages. 0 packages failed.
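One caveat: if a workstation is offline or does not have WinRM enabled, Invoke-Command throws an error for that machine while still running on the rest. A variation that records connection failures alongside the install results might look like this (a sketch; it assumes, as above, that Chocolatey is already installed on each workstation):

```powershell
# Collect install results and note any unreachable machines
$workstations = ("computer1","computer2","computer3")

$results = Invoke-Command -ComputerName $workstations -ErrorAction SilentlyContinue `
    -ErrorVariable connErrors -ScriptBlock {
        choco install firefox -y | Select-String -Pattern "Chocolatey installed"
    }

# Results from the machines that responded
$results | Select-Object PSComputerName, Line

# Any machine that could not be reached shows up in $connErrors
$connErrors | ForEach-Object { Write-Warning $_.Exception.Message }
```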

Wednesday, March 22, 2017

One shell to rule them all

One shell to rule them all
One shell to find them
One shell to bring them all 
and in the darkness bind them.

Sit back and close your eyes. Imagine being a system administrator in a time where you only need to learn one shell to manage any operating system. A time where you can log in via SSH to a Windows or Linux server and have the exact same shell experience. Now open your eyes. We are on the cusp of this dream.

With the eventual production release of PowerShell on Linux (and Mac), sysadmins will finally have a shell and phenomenal automation language on both Windows and Linux. While the alpha release works on Linux, it probably isn't quite safe to be writing production code with it just yet.

Will it replace Bash for all? No, but I see no reason it can't be an alternative for anyone who wants it to be. The vast majority of current Bash users will likely never migrate, and that is OK; admins are free to use the tools they desire. For PowerShell users, though, it is an easy choice, since we already know how easy it is to use. I do believe some open-minded Bash users, after learning a bit of PowerShell, will quickly fall in love with what it does well and Bash does not: handling structured data. Whether that be JSON, XML, CSV, etc., PowerShell is undeniably a beautiful tool for just that.
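As a quick illustration of that last point, here is what working with structured data looks like in PowerShell: the text becomes objects with real properties, not strings to be cut apart.

```powershell
# Parse JSON into objects, filter on a property, then emit CSV
$json = '[{"name":"web01","cpu":4},{"name":"web02","cpu":8}]'
$servers = $json | ConvertFrom-Json

# Objects, not text: property access just works
$servers | Where-Object { $_.cpu -ge 8 } | Select-Object name

# Converting to another structured format is one cmdlet away
$servers | ConvertTo-Csv -NoTypeInformation
```

No awk, no cut, no regex gymnastics; the same approach works for XML and CSV input as well.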

These are such interesting times to be in the field of IT.

Monday, March 20, 2017

Death of the Windows or Linux Sysadmin

When I first started my IT career years ago, I would generally come across two different types of system administrators: those who used Windows and those who used Linux. I rarely found someone who used both frequently. I spent my first four years in an all-Windows environment, learning technologies such as Active Directory and Exchange, among others. Sure, I knew what Linux was, but I had no desire to take on learning an entirely different operating system from what I already knew, not to mention there was no business need at the time.

As I changed jobs and started working in different environments, I was introduced to Linux because it was used for web servers, LDAP, SMTP, and the like in those organizations. At first it was daunting, because once you know one OS, trying to learn something that looks and feels much different can be quite difficult. Have a knowledgeable Windows administrator who is used to a GUI log into a completely CLI-based server for the first time and you will send them to a psychiatrist quickly. To be fair, expecting a Linux admin to log in to a Windows server and accomplish a complex task will make them look downright stupid if they have no experience doing it. This illustrates the toughest obstacle in learning a different OS when you know the other one so well: yourself.

I recently had an intern ask me about what I thought he should learn if he wanted to become a system administrator. This is actually a pretty easy question because if you search for jobs in areas such as New York, you will find what employers are looking for in terms of skills. So I wrote him a list:

OS - Windows, Linux (CentOS, RedHat)
Web servers - IIS, Apache
Programming - Bash, Python, PowerShell
Cloud - AWS, Azure
Networking - Study CCNA material

There is a lot you could add to this list, but I think it is a good start for a novice. The important part is that it is cross-platform, forcing the student not to get so comfortable with one OS that they never want to learn the other: not focusing solely on one OS, but both. Employers want to hire IT professionals who can operate on both Windows and Linux, but why? To an employer, a system is a system. If they are using both operating systems (and most of them are), then you are of little use to them if you only know one.

So this latest generation of sysadmins may prove to be the last who only know one OS, at least if they want to remain viable candidates for jobs. Sysadmins will always have a preferred OS, but the luxury of knowing only one or the other will soon be over.

Friday, March 17, 2017

Creating software packages with Chocolatey Business

At a few of the organizations I worked for, we had a network share specifically for software. We would take any software installation media we used and dump it there to be reused when necessary. Sound familiar? There are many software deployment solutions now that can help manage software packaging and deployment, but perhaps none better for Windows administrators than Chocolatey.

Chocolatey is one of the most widely used package management tools among Windows administrators, and for good reason. While the open source version is a completely command-line-based package manager, Chocolatey for Business (C4B) is moving toward complete software management and has both CLI and GUI options to address varying skill sets and preferences in the enterprise. Perhaps best of all, it integrates well with configuration management tools such as Puppet, Chef, Ansible, and SCCM. Chocolatey is built on top of the NuGet packaging technology created by Microsoft, which is also used by public repository sites such as the PSGallery. Even without a configuration management solution, Chocolatey can be used with an internal repository and deployed with other tools, such as PowerShell.

Today, we will go over how to create a Chocolatey package with the Business version of Chocolatey from an installer file. By using the command "choco new" we can quickly create a Chocolatey package that will be ready for distribution.


For this example I have an installer for Notepad++ that I downloaded and placed into the folder C:\Demo. I will use this installer to create our Chocolatey package.

PS C:\Demo> dir

    Directory: C:\Demo

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        3/17/2017   2:57 PM        2982992 npp.7.3.3.Installer.exe

First let's run the command choco new --file='npp.7.3.3.Installer.exe' --build-package, telling Chocolatey to create a package from the installer file npp.7.3.3.Installer.exe. Note that we can also pass a package name (for example, choco new notepadplusplus --file=...); if we don't, Chocolatey derives a name from the installer file, which is not always what is preferred.

PS C:\Demo> choco new --file='npp.7.3.3.Installer.exe' --build-package
Chocolatey v0.10.3 Business
Creating a new package specification at C:\Demo\npp.Installer
Generating package from custom template at 'C:\ProgramData\chocolatey\templates\NewFileInstaller'.
Generating template to a file
 at 'C:\Demo\npp.Installer\npp.installer.nuspec'
Generating template to a file
 at 'C:\Demo\npp.Installer\tools\chocolateyinstall.ps1'
Successfully generated npp.Installer package specification files
 at 'C:\Demo\npp.Installer'
Attempting to build package from 'npp.Installer.nuspec'.
Successfully created package 'C:\Demo\npp.Installer\npp.installer.7.3.3.nupkg'

The generated .nuspec file is an XML file that contains metadata about the package, such as its name and version. You will notice that Chocolatey was able to pull the version information from the installer file, which is very handy; you can also specify the version explicitly with the --version parameter. Keeping the package version correct is important, as Chocolatey uses this information to upgrade packages when a new version is available in a repository via the choco upgrade command. The most important file created, though, is the .nupkg file, which is now all that is needed in order to install Notepad++ with Chocolatey.
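For reference, a minimal .nuspec for a package like this might look roughly like the following. The field values here are illustrative, not the exact contents of the file choco new generated:

```xml
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2015/06/nuspec.xsd">
  <metadata>
    <!-- id and version drive 'choco install' and 'choco upgrade' -->
    <id>notepadplusplus</id>
    <version>7.3.3</version>
    <title>Notepad++</title>
    <authors>Notepad++ team</authors>
    <description>Notepad++ text editor, packaged for Chocolatey.</description>
  </metadata>
  <files>
    <!-- ship the installer and chocolateyinstall.ps1 inside the package -->
    <file src="tools\**" target="tools" />
  </files>
</package>
```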

We can now try to install it on our local machine to ensure the package installs correctly.

We will run the command choco install notepadplusplus --source=.\notepadplusplus -y. It is important to point the source at the folder containing our NuGet package for this test; otherwise Chocolatey will attempt to install from our default repository.

PS C:\Demo> choco install notepadplusplus --source=.\notepadplusplus -y
Chocolatey v0.10.3 Business
Installing the following packages:
By installing you accept licenses for the packages.

notepadplusplus v7.3.3
notepadplusplus package files install completed. Performing other installation steps.
Installing npp.7.3.3.Installer.exe...
npp.7.3.3.Installer.exe has been installed.
 The install of notepadplusplus was successful.
  Software install location not explicitly set, could be in package or
  default install location if installer.

Chocolatey installed 1/1 packages. 0 packages failed.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Success! The Notepad++ package installed successfully. Now we can deploy it from our internal repository to machines in the environment.

So to summarize:
  • Downloaded installation file
  • Used "choco new" to create the NuGet specifications and NuGet package
  • Used "choco install" to install the package locally
To give Chocolatey a try or to learn about additional features, check out Chocolatey's website.

Wednesday, March 15, 2017

Quickly find where your vCenter VM is running using PowerShell and PowerCLI

vCenter is the glue of a solid vSphere environment: if it stops working, so do other components like vMotion and DRS (pretty important). Most environments run vCenter on a VM within vSphere, not on a physical box. So if vCenter stops working to the point where it cannot be accessed via the web client, the Windows client, or PowerShell, you will probably want to access the VM console to troubleshoot. Chances are, though, that you do not keep tabs on which ESXi host the vCenter VM is running on. If you have more than a few ESXi hosts in your environment, finding that host manually can be painful. So instead of going through that hassle, let's automate it.

To do this, let's use PowerShell and PowerCLI to build a small, simple function that checks each ESXi host to see if the vCenter VM is running there.

The flow of the function is fairly simple:
  • Connect to the ESXi host
  • Attempt to find the vCenter VM by name
  • If the vCenter VM is found, write the ESXi host to output
  • If it is not found, move on to the next ESXi host
In order to find vCenter, I created the small PowerShell function below, but first let's step through some of the commands it uses.

Connect-VIServer is used to connect to vCenter or an ESXi host with a credential. Note this function assumes you have the same root password on each ESXi host.

Connect-VIServer -Server $item -credential $RootCredential -ErrorAction stop

Next, Get-VM is used to look for the vCenter VM on the current ESXi host (held in the $item variable as the foreach loop iterates). If found, the name of the vCenter VM and its VMHost are written to output.

Get-VM -Name $VCenterVMName | Select-Object Name, VMHost

Finally, after attempting Get-VM, we disconnect from the current ESXi host with Disconnect-VIServer.

Disconnect-VIServer -Force -Confirm:$false
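Putting those pieces together, the full function might look something like this. It is a sketch reconstructed from the commands above, with parameter names matching the usage example that follows, and it assumes VMware PowerCLI is installed:

```powershell
# Sketch of the function described above; assumes VMware PowerCLI is loaded.
function Get-VCenterEsxiHost {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)][string]$VCenterVMName,
        [Parameter(Mandatory)][string[]]$ESXiHosts,
        [Parameter(Mandatory)][pscredential]$RootCredential
    )

    foreach ($item in $ESXiHosts) {
        try {
            # Connect directly to this ESXi host (same root password assumed everywhere)
            Connect-VIServer -Server $item -Credential $RootCredential -ErrorAction Stop | Out-Null

            # Look for the vCenter VM on this host; stay quiet if it is not here
            $vm = Get-VM -Name $VCenterVMName -ErrorAction SilentlyContinue
            if ($vm) {
                $vm | Select-Object Name, VMHost
            }
        }
        finally {
            # Always drop the connection before moving to the next host
            Disconnect-VIServer -Server $item -Force -Confirm:$false -ErrorAction SilentlyContinue
        }
    }
}
```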

Now let's use the function!

We will create a $hosts variable with the hostnames of our ESXi hosts, use Get-Credential to capture our ESXi username and password (likely root), and then use the Get-VCenterEsxiHost function to find our vCenter VM.

Running the function, it looks like our vCenter is on host1:

$hosts = @("host1","host2","host3")
$Credential = Get-Credential

Get-VCenterEsxiHost -VCenterVMName 'vcenterhost' -ESXiHosts $hosts -RootCredential $Credential

  Name        VMHost
  ----        ------
  vcenterhost host1

There we have it. A very simple PowerShell function to find our vCenter.
