Tuesday, June 20, 2017

Windows 10 in-place upgrade with PowerShell and MDT



In this article, I will demonstrate how to use Microsoft Deployment Toolkit (MDT) and PowerShell to create a reusable in-place upgrade process for domain-joined computers. The process is completely automated: no end-user interaction is necessary, and it can run against any remote computer. Although I have not tested it at that scale, this function should theoretically be able to upgrade hundreds of workstations simultaneously, given sufficient compute resources.

While adoption of Windows 10 for businesses has been growing, many workstations still run Windows 7 or Windows 8. For mass in-place upgrades, System Center Configuration Manager (SCCM) is the most widely used option as it allows administrators to push out the upgrade easily. For organizations that do not use SCCM, such as small to medium-sized businesses, there are other viable options, notably using MDT along with PowerShell.

Please note this solution will not be a fit for every organization. It requires the use of the Remote Desktop Protocol (RDP) on each machine to launch the upgrade process, and it is widely known that RDP is not entirely secure. RDP is needed because the MDT upgrade process requires a logged-on user to launch the LiteTouch.vbs file. With that said, there are ways to reduce the risk, such as using public key infrastructure (PKI) and enabling RDP only during the upgrade process. I also recommend changing the password on the account connecting via RDP immediately after the upgrade is complete.
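For reference, the command run over that RDP session boils down to launching LiteTouch.vbs from the deployment share; the server and share names here are example values:

# Kick off the MDT upgrade task sequence from the deployment share
cscript.exe '\\MDTServer\DeploymentShare$\Scripts\LiteTouch.vbs'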

Read more at 4Sysops.com

Wednesday, June 7, 2017

A safer way to patch ESXi using PowerCLI and VUM

Patching vSphere is fairly straightforward using vSphere Update Manager (VUM). You can let vCenter/VUM automate the patching of an entire datacenter or cluster if you want. Many VMware professionals prefer to have more control over how their clusters get patched, and with good reason. Yes, vCenter can use DRS to figure out how many hosts can be remediated at once while keeping your cluster running, but that is a bit scary if you ask me, and unless you have a massive cluster, it is not worth the time savings in my opinion. I prefer to patch each host one by one and, after each installation, test vMotioning a VM to the host to ensure it is functioning correctly.

So I created a little function to do just that: Install-VUMPatch. You can grab it from my GitHub repo below. I included a good amount of error checking so that, if anything goes wrong with a patch installation, the function stops and asks the user whether to halt or continue.


#Requires -Modules VMware.VimAutomation.Core
#Requires -Modules VMware.VumAutomation

function Install-VUMPatch
{
    [CmdletBinding()]
    param
    (
    [Parameter(Mandatory=$true)]
    [string]$VCenter,

    [Parameter(Mandatory=$true)]
    [pscredential]$Credential,

    [Parameter(Mandatory=$true)]
    [string]$ClusterName,

    [Parameter(Mandatory=$false)]
    [string]$BaselineName = 'Critical Host Patches (Predefined)',

    [Parameter(Mandatory=$true)]
    [string]$VM
    )
    begin 
    {
        ##Try connecting to vcenter
        try
        {
            Connect-VIServer $VCenter -Credential $Credential -ErrorAction Stop
        }
        catch
        {
            $ErrorMessage = $_.Exception.Message
            # Throw a terminating error; 'break' outside a loop would exit the calling scope
            throw $ErrorMessage
        }
    }
    process 
    {
        Try 
        {
            # Put baseline into variable and validate existence for later use
            $Baseline = Get-Baseline -Name $BaselineName -ErrorAction stop
            # Attach baseline to all hosts in cluster
            Attach-Baseline -Entity $ClusterName -Baseline $Baseline -ErrorAction stop
            # Test compliance against all hosts in cluster
            Test-Compliance -Entity $ClusterName -UpdateType HostPatch -Verbose -ErrorAction stop
            # Build array of noncompliant hosts
            $VMHosts = (Get-Compliance -Entity $ClusterName -Baseline $Baseline -ComplianceStatus NotCompliant -ErrorAction Stop).Entity.Name
            #Copy patches to noncompliant hosts
            Copy-Patch -Entity $VMhosts -Confirm:$false -ErrorAction stop
        }
        catch
        {
            $ErrorMessage = $_.Exception.Message
            Write-Error $ErrorMessage
            Write-Warning 'Unable to attach baseline, test compliance, or stage patches'
            return
        }
        # For each noncompliant host install patches
        foreach ($VMhost in $VMHosts)
        {
            Write-Output "Patching $VMHost"
            try 
            {   
                # Put VMHost in maintenance mode
                Set-VMHost $VMhost -State Maintenance -Confirm:$false -ErrorAction Inquire | Select-Object Name,State | Format-Table -AutoSize
                # Remediate VMHost
                $UpdateTask = Update-Entity -Baseline $baseline -Entity $vmhost -RunAsync -Confirm:$false -ErrorAction Stop
                Start-Sleep -Seconds 5
                # Wait for patch task to complete
                while ($UpdateTask.PercentComplete -ne 100)
                {   
                    Write-Progress -Activity "Waiting for $VMhost to finish patch installation" -PercentComplete $UpdateTask.PercentComplete 
                    Start-Sleep -seconds 10
                    $UpdateTask = Get-Task -id $UpdateTask.id
                }
                # Check to see if remediation was successful
                if ($UpdateTask.State -ne 'Success')
                {
                    Write-Warning "Patch for $VMHost was not successful"
                    Read-Host 'Press enter to continue to next host or CTRL+C to exit script'
                    Continue
                }
                # Check to see if host is now in compliance
                $CurrentCompliance = Get-Compliance -Entity $VMHost -Baseline $Baseline -ErrorAction Stop
                if  ($CurrentCompliance.Status -ne 'Compliant')
                {
                    Write-Warning "$VMHost is not compliant"
                    Read-Host 'Press enter to continue to next host or CTRL+C to exit script'
                    Continue
                }
                # Set VMHost out of maintenance mode 
                Set-VMHost $vmhost -State Connected -Confirm:$false -ErrorAction Inquire | Select-Object Name,State | Format-Table -AutoSize
                # VMotion VM to VMHost and sleep for 3 seconds
                Move-VM -VM $VM -Destination $VMhost -Confirm:$false -ErrorAction Stop | Out-Null
                Start-Sleep -seconds 3
                # Test network connectivity to the VM to ensure the VMHost is operating correctly
                if (-not (Test-Connection $VM -Count 4 -Quiet -ErrorAction Stop))
                {
                    Write-Warning "$VM did not respond to ping after vMotion to $VMHost"
                }
                Write-Output "$VMHost patch successful."
            }
            catch 
            {
                $ErrorMessage = $_.Exception.Message
                Write-Warning $ErrorMessage 
                # Comment out the Read-Host if you do not want the script to prompt after an error. 
                Read-Host -Prompt 'Press enter to continue to next VMHost or CTRL + C to exit' 
                Continue
            }
        }
    }
    end 
    {
        Disconnect-VIServer -Confirm:$false -Force
        Write-Output 'Script completed'
    }
}

Monday, June 5, 2017

Capture network traces with the PowerShell module NetEventPacketCapture



Every network admin will at some point need to capture and view network events to help troubleshoot network issues. The PowerShell module NetEventPacketCapture is an interesting option for capturing network traces.

IT professionals have many tools for capturing and viewing network traffic. Tools such as Wireshark and Netmon have been staples for performing network traces. Starting with Windows 7/Server 2008 R2, the netsh trace command became available to allow capturing traces via the command line.

The NetEventPacketCapture module


One tool I have recently started using is the PowerShell NetEventPacketCapture module to capture and show trace events. Microsoft released the module with Windows 8.1/2012 R2, so although it has been around for a few years, it is still not a widely used tool. One of the main reasons this module appeals to me is that you can do many of these tasks within PowerShell without having to use other tools.

To create a trace log (.etl file), you must use four cmdlets from the NetEventPacketCapture module. In addition, you need a tool to view the trace file. This is the bare-minimum process for capturing a network event trace (a full example follows the list):

  • Use New-NetEventSession to create a trace session. For remote traces, you can use the -CimSession parameter.
  • Use Add-NetEventProvider to add an event-tracing provider to the session you created. For instance, the provider "Microsoft-Windows-TCPIP" traces TCP/IP events.
  • Start-NetEventSession will begin logging live events to the .etl file.
  • Stop-NetEventSession will end the trace session.
  • Finally, to view the .etl file, you can use a number of tools. In this article, I will use the Get-WinEvent cmdlet in PowerShell.
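Here is a minimal sketch of that sequence for a local trace; the session name, output path, and capture duration are example values:

$Session = New-NetEventSession -Name 'Trace1' -LocalFilePath 'C:\Traces\Trace1.etl'
Add-NetEventProvider -Name 'Microsoft-Windows-TCPIP' -SessionName 'Trace1'
Start-NetEventSession -Name 'Trace1'
# Capture for 30 seconds, then stop the session
Start-Sleep -Seconds 30
Stop-NetEventSession -Name 'Trace1'
# Reading an .etl file with Get-WinEvent requires the -Oldest switch
Get-WinEvent -Path 'C:\Traces\Trace1.etl' -Oldest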

Read more on 4Sysops.com

Sunday, May 21, 2017

Using the Windows Defender PowerShell cmdlets

There are several ways to manage and configure Windows Defender, such as System Center Configuration Manager (SCCM), Desired State Configuration (DSC), Intune, and Group Policy. The Defender PowerShell module is another tool you can use. In this article, I will provide an introduction to the Defender module and examples of using its commands.

With the release of the Windows 10 Anniversary Update, Microsoft has improved its antivirus (AV) solution by adding features, including the ability to perform offline scans, cloud integration, and enhanced notifications, as noted here. One advantage of Windows Defender over third-party AV products is Defender's built-in PowerShell support.

Running Get-Command -Module Defender shows the cmdlets you can use to work with Defender. Essentially, you can manage preferences, threats, definitions, scans, and get the current status of Windows Defender.
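As a quick taste, here is a minimal sampling of the Defender module's cmdlets, run with default values:

Get-MpComputerStatus                # current Defender status, definition ages, last scan times
Get-MpPreference                    # current preference settings, such as exclusions
Update-MpSignature                  # update the AV definitions
Start-MpScan -ScanType QuickScan    # kick off a quick scan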



Read more on 4Sysops.com

Thursday, May 18, 2017

What version of SMB are my clients connecting to my Windows server with?

In light of the recent WannaCry attack, this simple PowerShell one-liner will give you some insight into which SMB version your clients are connecting to your Windows servers with. It works against any server running 2012 and up and returns the current SMB connections for all 2012 servers in your Active Directory domain.
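A minimal sketch of the idea, assuming the ActiveDirectory module and PowerShell remoting are available; Get-SmbSession reports the dialect (SMB version) of each client session:

$Servers = (Get-ADComputer -Filter "OperatingSystem -like '*2012*'").Name
Invoke-Command -ComputerName $Servers { Get-SmbSession } |
    Select-Object PSComputerName, ClientComputerName, ClientUserName, Dialect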




Wednesday, May 17, 2017

The best training experience I ever had

First off, before I tell my story, I will admit that I am very lucky, in more than one way. The story I will tell is not the norm; it is an exception.

I am fortunate enough to work at a phenomenal organization. It is an organization that treats its employees VERY well. The pay is great, the benefits are excellent, and the atmosphere is beautiful. Each year we are given an opportunity to take training in a particular area, online or in person, with all expenses paid. As an IT professional, I have attended many classroom trainings. Past trainings have covered VMware, Red Hat, NetApp, PHP, and SharePoint, among others. Some have been good; others not so good.

For those who have attended training, you know how it goes. Courses are usually geared toward certifications, which means they try to cram as much material down your throat as they can at a breakneck pace. By the third day, your brain is wiped out, and you start to spend a lot of time checking work emails instead of engaging in the classroom.

Last fall I decided I wanted to take a class in PowerShell DSC, in particular DevOps Management Fundamentals using Desired State Configuration (DSC), with the instructor Jason Helmick. There were a few reasons for this choice. First, I had wanted to start using DSC for a while but just hadn't had the time to sit down and learn it. I use PowerShell a lot, so I knew I would be able to pick it up fairly easily. Second, the training is not geared toward a certification. I hoped this meant there would be time to dive a little deeper into DSC and PowerShell (I was right). Third, I have used Puppet and Ansible, but I work primarily in Windows; they both support Windows, but I would rather use a Windows-based config management solution. Fourth, the instructor was Jason Helmick, whom I consider a "higher-up" in the PowerShell community, so I knew I would be getting expert training. I had actually seen him teach a session at Ignite in Chicago a few years back.

When I arrived in Phoenix on day one of the training, I walked in and met Jason. Anyone who has met Jason knows he is extremely down to earth, smart, easy to talk to, and fun. Having seen him at Ignite, I felt really lucky to have a chance to learn from him for a week. As I sat down, I asked him how many other students would be attending. He said, "Just one." For an IT training to have only two attendees is not just rare, it's unheard of. Most trainings would get cancelled with only two attendees, but since it was a fairly new course, they decided to run it.

As we started getting into DSC, I soon learned this was not just another training; it was a once-in-a-lifetime opportunity to learn from someone who knows a lot more than I do. With only two students, it was easy to ask questions and get help on the labs. Not only that, but Jason was so open to sharing his knowledge that discussions about DSC turned into discussions about PowerShell in general, which is just about my favorite topic in the world. I learned more in-depth PowerShell that week than I had in the previous year, because I had a great instructor I could easily question without feeling like I was wasting the other students' time. It was awesome.

As the week ended, I felt comfortable using DSC, and I was excited to go back home and start implementing it. Before I left, though, I wanted to pick Jason's brain about one more topic: technical writing. I had long been interested in starting a blog and trying to establish myself as a technical author, and here was a guy who is already a respected author at Pluralsight, which is the best IT training site in my opinion. Jason was really supportive of my ambition to become an author and gave me a few great nuggets of advice to help me get started in the field.

When I returned home, I immediately started working on this blog and reached out to a few sites to propose ideas for articles. I was fortunate enough to be given the chance to write for 4sysops and Tom's IT Pro recently, which has been an awesome experience, and I owe it all to Jason. I would not have tried writing without inspiration from a great instructor whom I was fortunate enough to have direct access to for a week.

In closing, I know I am fortunate for this training experience, but I guess the moral of the story is this: if you have a chance to learn from someone who knows more than you do, do not waste that opportunity. Ask questions and learn; most professionals will be more than willing to share, especially in the PowerShell community. This is why PSHSummit is such a popular conference.

Wednesday, May 10, 2017

Automating Windows updates using the PowerShell PSWindowsUpdate module

I will admit it. I love PowerShell, and if my choice is between PowerShell and another tool to do just about anything, I am choosing PowerShell, even if it may not be the "best" tool for the job. The PowerShell community is getting larger and larger as great developers add quality modules to the PSGallery, but perhaps my favorite module so far is PSWindowsUpdate. It was created by Michal Gajda and is one of the most popular modules in the gallery (222k downloads).

For the most part, I use only two cmdlets from PSWindowsUpdate: Invoke-WUInstall and Get-WUInstall. Invoke-WUInstall allows you to kick off the installation of patches remotely, and it works beautifully.

Get-WUInstall actually downloads and installs the updates. To install all available updates and reboot when finished, you can run Get-WUInstall -AcceptAll -AutoReboot locally.

How does it work?

A look inside the Invoke-WUInstall code shows that the remoting is actually done via the Task Scheduler. A scheduled task is created on the remote computer and runs under the SYSTEM account, because certain Windows Update methods are not available over PowerShell remoting (a pretty cool way to get around that). The scheduled task runs a PowerShell command you specify. In my case, I use Invoke-WUInstall -ComputerName <ComputerName> -Script {ipmo PSWindowsUpdate; Get-WUInstall -AcceptAll -AutoReboot | Out-File C:\PSWindowsUpdate.log} -Confirm:$false -Verbose. This allows me to start the update process on remote machines and log the output. One drawback of Invoke-WUInstall is that it does not monitor the output of the update process after you run it. My workaround is adding a few lines to Get-WUInstall that email me when the computer has finished installing updates, including which updates were installed and whether they were successful.

In this example, I want to install updates on all servers in my Active Directory domain at the same time:
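A minimal sketch of that, assuming the ActiveDirectory module is available and every server can run the scheduled task:

# Collect server names from AD, then kick off updates on all of them at once
$Servers = (Get-ADComputer -Filter "OperatingSystem -like '*Server*'").Name
Invoke-WUInstall -ComputerName $Servers -Script {
    Import-Module PSWindowsUpdate
    Get-WUInstall -AcceptAll -AutoReboot | Out-File C:\PSWindowsUpdate.log
} -Confirm:$false -Verbose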



Monday, May 8, 2017

Using PowerShell to test Active Directory-integrated DNS resolution

Anyone who has worked with Active Directory knows that AD is dependent on its associated DNS zones and records. If for some reason these stop resolving, all hell breaks loose in the environment.

To monitor that these necessary zones are resolving in DNS, I turned to PowerShell and wrote a simple script, run from a client machine, that tests resolution of the _tcp, _msdcs, _udp, _sites, DomainDnsZones, and ForestDnsZones zones.

$Domain = 'domain.com'
$Zones = ('_tcp.','_msdcs.','_udp.','_sites.','domaindnszones.','forestdnszones.')

foreach ($Zone in $Zones)
{
    try
    {
        if (Resolve-DnsName -Name "$Zone$Domain" -ErrorAction Stop)
        {
            Write-Output "$Zone$Domain Resolved"
        }
    }
    catch
    {
        Write-Warning "$Zone$Domain not resolving"
    }
}



Wednesday, May 3, 2017

How to Deploy Virtual Machines in vSphere Using PowerCLI





When I started deploying servers, the process involved racking the hardware, connecting it to the network, inserting a CD/DVD, installing the operating system and drivers, configuring network settings in the OS, and then installing and configuring services such as Active Directory or Exchange. These tasks were all done using a GUI. Needless to say, this process has become archaic.

vSphere made the process of building a server much simpler by leveraging virtual machines, but many users still rely on the GUI of the Windows vSphere client for bringing up new systems. In vSphere, servers can be built quickly and easily using PowerCLI. Code is king when deploying servers, and using a GUI lacks scalability.

Using New-VM

In PowerCLI, the New-VM cmdlet is used to create a new virtual machine. A few of the important things that can be set with New-VM are the target host, datastore, CPU count, memory and disk sizes, and network, as shown in the sketch below.
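A minimal sketch with assumed example values:

# Host, datastore, and network names are assumptions for illustration
New-VM -Name 'Server01' -VMHost 'esxi01.domain.com' -Datastore 'Datastore1' `
    -NumCpu 2 -MemoryGB 8 -DiskGB 60 -NetworkName 'VM Network'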
Continue reading on Tom's IT Pro:

Thursday, April 27, 2017

Using VMware vSphere Update Manager with PowerCLI


In a vSphere environment, VMware states that vSphere Update Manager (VUM) is the preferred method of upgrading and patching vSphere. Fortunately for PowerShell users, PowerCLI supports performing the functions of VUM.
Using VUM to upgrade ESXi hosts in a GUI is a relatively straightforward process, which Jim Jones demonstrates here on 4sysops. Using PowerCLI, I will show you how to update a single ESXi host and an entire cluster. Please note I am using PowerShell v5.1, PowerCLI v6.3, and vSphere v6 in these examples.
Update Manager baselines
VUM uses baselines, which are groups of patches that you can "attach" to a template, virtual machine, ESXi host, cluster, datacenter, folder, or vApp. After a baseline is attached to one of these entities, you can scan to see whether it is in compliance, that is, whether it is missing any patches in the baseline that apply to it. Below you can see how to retrieve compliance information about a host with the Get-Compliance cmdlet.
C:\> $Baseline = Get-Baseline -Name 'Critical Host Patches (Predefined)'
C:\> Get-Compliance -Entity VMHost-1 -Baseline $Baseline
Entity                       Baseline                               Status
------                       --------                               ------
VMHost-1                     Critical Host Patches (Predefined)     Compliant

In this article, I will be using the "Critical Host Patches" baseline exclusively. This is a built-in baseline that includes any critical patch for your ESXi hosts.
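A minimal sketch for remediating a single host follows; the host name is an example value, and in practice you would re-run Get-Compliance after remediation:

C:\> Attach-Baseline -Entity 'VMHost-1' -Baseline $Baseline
C:\> Test-Compliance -Entity 'VMHost-1' -UpdateType HostPatch
C:\> Set-VMHost 'VMHost-1' -State Maintenance -Confirm:$false
C:\> Update-Entity -Entity 'VMHost-1' -Baseline $Baseline -Confirm:$false
C:\> Set-VMHost 'VMHost-1' -State Connected -Confirm:$false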
Read more on 4Sysops.com

Import hosted Chocolatey packages into Microsoft Deployment Toolkit

For users of Microsoft Deployment Toolkit (MDT), the ability to separate applications from the OS during deployment is a great feature. It is a much easier way to manage and deploy packages during the imaging process. Thankfully for Chocolatey users, MDT allows admins to create applications that have no source files, in this case just a command like "choco install dropbox -y".

Most organizations that use Chocolatey have their own hosted NuGet server from which they deploy packages. In this example, I have set up a Chocolatey simple server. To see what packages are on your hosted server, you can run "choco list --source=<server>".

So if you are using MDT and host a NuGet server, how can you quickly import all your packages into MDT? We can use the MDT PSSnapin and the Chocolatey CLI.


In this example, I have my own hosted NuGet server, "nugetserver.domain.com". I create a new PS drive for my MDT share named "MDT", use "choco list --source=nugetserver.domain.com" to get a list of my hosted packages, and then loop through them to create an MDT application in the subfolder "test" for each package.
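A minimal sketch of the idea, assuming MDT is installed locally; the deployment share path is an example value:

Add-PSSnapin Microsoft.BDD.PSSnapin
New-PSDrive -Name 'MDT' -PSProvider MDTProvider -Root 'D:\DeploymentShare'

# choco list -r returns "name|version" pairs; keep just the package names
$Packages = choco list --source=nugetserver.domain.com -r | ForEach-Object { ($_ -split '\|')[0] }
foreach ($Package in $Packages)
{
    # -NoSource creates an MDT application with only a command line, no source files
    Import-MDTApplication -Path 'MDT:\Applications\test' -Name $Package -ShortName $Package `
        -Enable 'True' -CommandLine "choco install $Package -y" -NoSource -Verbose
}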

Just like that, Chocolatey deliciousness.


Thursday, April 20, 2017

Possessor of many skills, master of none - the IT generalist

The one single piece of technology I have spent the most time learning and exploring is PowerShell. Since I began using it years ago, I quickly understood how awesome and useful it was for doing my job as a Windows systems admin/engineer. More than anything else I do, I love building tools and automating things in IT operations. I find it extremely fulfilling. There is no greater feeling than taking a monotonous task and making it easily repeatable, to the point where you no longer have to worry about it because PowerShell just does it. It is fitting that my most coveted IT skill is one where deep knowledge and expertise alone can't really get you a job, because it is simply the means of using other technologies like Active Directory, SharePoint, Exchange, and many others.

I have always been most interested in understanding the gist of things, focusing on the breadth rather than the depth of a given technology. I have found that this usually does not bode well in job interviews. Inevitably, the interviewer will ask what key technologies I know, and I always have a crappy answer, because I do not consider myself an "expert" in any one technology, outside of a language that is not used to create applications. Sure, I know Windows. I am a VCP, so I am familiar with VMware. I dabble in Linux. But I probably can't talk in depth with professionals who use these exclusively or extensively.

I have written some code in Python. I have created algorithms to use with big data. I can troubleshoot Outlook issues. I can deploy a simple Exchange environment. I have worked with SharePoint. I can write a simple bash script. I have done desktop support. I can troubleshoot and replace hardware. I can write an SQL query. I can write some HTML. I can set up a SAN. I took a class in PHP. I have deployed Puppet and Ansible. I know a bit of Cisco.

I love learning new technologies and playing with them, but most of the time that is where it stops.

So I am the possessor of many skills and the master of none. The IT generalist.

Tuesday, April 18, 2017

The new local user and group cmdlets in PowerShell 5.1



With the recent release of PowerShell 5.1, part of Windows Management Framework (WMF) 5.1, Microsoft introduced new cmdlets for working with local user and group accounts, including Get-LocalUser, New-LocalUser, Remove-LocalUser, New-LocalGroup, Add-LocalGroupMember, and Get-LocalGroupMember. In this article, I will explore how to use these cmdlets by showing a few simple examples as well as how to perform some advanced tasks.
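As a quick taste, here is a minimal sketch that creates a local user and adds it to the local Administrators group; the account name and description are example values:

# Prompt for a password and create the local user
$Password = Read-Host -AsSecureString -Prompt 'Password for LabUser'
New-LocalUser -Name 'LabUser' -Password $Password -Description 'Test account'
# Add the user to the local Administrators group and list the members
Add-LocalGroupMember -Group 'Administrators' -Member 'LabUser'
Get-LocalGroupMember -Group 'Administrators'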

Prior to this release, performing tasks with local users and groups from the Windows command line could be cumbersome. It was necessary to revert to commands such as net user, VB scripting, or the Active Directory Service Interfaces (ADSI) WinNT provider, as Sitaram showed here on 4sysops.

Read more on 4sysops.com

Wednesday, April 12, 2017

Using PSReleaseTools to install latest PowerShell v6 release on Mac

I have been thoroughly enjoying the use of PowerShell on my new MacBook since it arrived a few months ago. Each release gets better and better. One thing that annoyed me was constantly having to install the latest release from GitHub. Luckily, Jeff Hicks created a nifty module for that named PSReleaseTools. While this is a great tool for grabbing the latest PowerShell v6 package, it does not actually install the package on your machine.

For this reason, I went ahead and created a small function that leverages PSReleaseTools and the Mac command-line tool installpkg to somewhat automate the process of grabbing the latest version of PS and installing it. I say "somewhat" because it appears installpkg requires you to use sudo when installing a package, so that is part of the function. Keep in mind I threw this together this morning, so it does not have much error checking or many best practices applied, and there is much to be improved. It obviously requires you to install installpkg, which you can download here: https://github.com/henri/installpkg
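A minimal sketch of the function; the download path is an example value, and the PSReleaseTools cmdlet names are as I understand the module:

function Install-LatestPowerShell
{
    Import-Module PSReleaseTools
    # Download the latest macOS .pkg from the PowerShell GitHub releases
    Get-PSReleaseAsset -Family MacOS | Save-PSReleaseAsset -Path ~/Downloads
    # Grab the most recently downloaded package
    $Pkg = Get-ChildItem ~/Downloads/powershell*.pkg | Sort-Object LastWriteTime | Select-Object -Last 1
    # installpkg appears to require sudo to install a package
    sudo installpkg $Pkg.FullName
}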



Tuesday, April 11, 2017

How Chocolatey Business saved me from a Patch Tuesday disaster

First off, I will admit it: I have bad luck with Patch Tuesday and WSUS servers. Twice in the last two years, my WSUS server has decided to crash prior to pushing out patches to my servers on a Patch Tuesday. Perhaps this is just my experience, but it seems I need to rebuild my WSUS server at least once a year due to some bizarre bug that hits me. I normally research the error but after a while realize it is just easier to rebuild. Needless to say, the WSUS gods hate me.

Tonight, I first got hit with this pretty little number - http://myitforum.com/myitforumwp/2017/04/11/errors-during-wsus-update-synchronization-for-april-2017-updates/

After resolving it with the workaround, my WSUS synced updates successfully but was still acting funny, as I received errors about it not being able to download update files. I then realized the server had crapped out two days earlier; no clients had reported in since then, and I just had not noticed until now.

So here I was, an hour before my scheduled outage, with no WSUS server to hand out updates. Sh*t! Normally I would resort to copying the .msu files to each server and then strictly using PsExec and PowerShell for this, but tonight another solution came to mind: Chocolatey.

I remembered that Chocolatey can actually create packages from .msu files, and since Microsoft only hands out one big patch a month now for 2008/2012 servers, all I had to do was create a package from each .msu file I needed and push them out.

So I downloaded the April 2017 patches for my servers and ran:

choco new --file=<.msu file> --build-package

Like magic, my packages were created. I pushed them to my hosted NuGet server and then deployed them using PsExec (PS remoting does not seem to be an option with wusa.exe). All in all, the process actually took less time than my normal routine of using Invoke-WUInstall from the PSWindowsUpdate module.
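The push itself is a one-liner per batch of servers; a sketch in which the server names, package name, and feed URL are all example values:

# PsExec runs choco as SYSTEM on each server
psexec \\server01,server02 -s choco install april-2017-rollup -y --source=http://nugetserver.domain.com/chocolatey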

Moral of this story is, WSUS is about as dependable as the weather so always have a backup method of deploying patches.

Sunday, March 26, 2017

Put down your laptop, turn off your phone, turn off your brain and get into the wilderness

There was a time, before I was a father and before I was a husband, when I would go to work all day, come home, and then continue to work until I went to sleep. At the time, this did not seem like a horrible life to me. I loved what I did. Troubleshooting, designing, and implementing systems was a lot of fun and brought me a tremendous sense of achievement. In fact, I still love to work.

As time went on, this sort of schedule ebbed and flowed. Some weeks I would do this, and some weeks I would be more social after work and find other things to do besides IT.

Now that I am a father and a husband, I can no longer afford that non-stop working attitude and lifestyle. It would prohibit me from being "present" with my family when I am home; that is not fair to them, and it is also no way to live. Truth be told, I promised my wife I would put my phone away while I am home so that I am not distracted by it. This is necessary since anyone who works in IT likely has an Internet addiction like me. With that said, I still log on to my MacBook whenever I have a free moment at home or in my "free" time, not because I necessarily need to do work, but because I love to.

Recently I began to feel the dreaded feeling of being burnt out. I would go to sleep exhausted. I would wake up exhausted. I was eating like crap. I was sleeping like crap. I was constantly distracted by nonstop thinking, both at work and at home. It was time for a much-needed change.

I decided to take up something that would get me away from technology and everything else in life, at least for a little while. I needed an escape from both work and home, so I looked toward the wilderness and began to hike at the break of dawn two to three times a week. I go on my hikes without any technological device, taking only some water and maybe a blanket to lie on a rock with if needed. At first it was hard. My mind was still distracted, thinking about work and all my responsibilities outside of work. Soon after, though, I learned to enjoy the wild by actively paying attention to my thoughts and to my surroundings: looking at the beauty of nature, watching animals, watching a stream flow, or just meditating on top of a mountain. Now when I hike, it is a real break from it all, and it recharges my batteries well.

As we get older in IT, our responsibilities outside of work begin to take up a lot of our time, but it is important to find time to recharge and get away to let your brain rest. I encourage anyone who works a lot, in an office or otherwise, to immerse yourself in the wild, even for a little while.



Friday, March 24, 2017

Using the PowerShell #requires statement in your script

A few years ago, I was working with a team of sysadmins. We were all learning PowerShell together and were having a blast playing with different modules and trying to figure out what sort of things we could automate. One day I downloaded a module from the PSGallery, installed it, and wrote a nice little script. My colleague asked if he could borrow it, so he installed the module and ran the script, but immediately his console threw a nasty error message. Something along the lines of...

PS /Users/dan> Get-Something                                                    
Get-Something : The term 'Get-Something' is not recognized as the name of a 
cmdlet, function, script file, or operable program. Check the spelling of the 
name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Get-Something
+ ~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-Something:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

Ahh, the dreaded error of not recognizing a cmdlet, function, script file, or operable program. We have all seen it before. For a PowerShell novice, this can be confusing and annoying. The error is thrown because my colleague did not have that module imported into his session. Fortunately, there is an easy way to ensure this doesn't happen again for him or any user who runs a script.

The #requires statement is something we can place at the top of a script to ensure the script cannot run unless the conditions it states are met. This can be done for modules, PSSnapins, RunAsAdministrator, or a specific PowerShell version, as the help explains.

    #Requires -Version <N>[.<n>]
    #Requires -PSSnapin <PSSnapin-Name> [-Version <N>[.<n>]]
    #Requires -Modules { <Module-Name> | <Hashtable> }
    #Requires -ShellId <ShellId>
    #Requires -RunAsAdministrator

So how could I have avoided the nasty error my colleague received? By sticking this at the top of my script:

#requires -Modules ModuleName

As long as the module is installed on the system the script is executing on, the module will automatically be imported into the session. If it is not installed, an error will be thrown stating that the script cannot run because the specified modules are missing.

As an example, I created a script called 'sample.ps1' and placed #requires -Modules what at the top, so that the script will not run if the module 'what' is not in the session. When I attempt to run sample.ps1, I get this error:

PS /Users/dan> ./sample.ps1                                                     
./sample.ps1 : The script 'sample.ps1' cannot be run because the following      
modules that are specified by the "#requires" statements of the script are 
missing: what.
At line:1 char:1
+ ./sample.ps1
+ ~~~~~~~~~~~~
    + CategoryInfo          : ResourceUnavailable: (sample.ps1:String) [], ScriptRequiresException
    + FullyQualifiedErrorId : ScriptRequiresMissingModules

Well, at least that is a much more informative error than "not recognized as the name of a cmdlet, function, script file, or operable program", right? Now the user knows which modules they need installed on their machine before they can run the script.

Creating a CIFS share on Netapp Clustered Mode storage system with PowerShell

In a Windows environment, CIFS shares are a common occurrence and a necessity for many services, such as Active Directory and print services. Setting up a share on a Windows server is a fairly simple task: we create a folder, enable sharing, and set permissions. NetApp has long been a leader in storage technology and also has the capability to serve CIFS shares. My experience with NetApp has been extremely positive, as their storage systems are very solid and rarely have outages. Today I will show how we can create a share on a NetApp clustered-mode system with NetApp's PowerShell module.

Prerequisites:
For this example, I am connecting to a management interface on a NetApp storage virtual machine (SVM) named FAS-SVM. The share, named 'finance', will be used by an Active Directory group named 'DOMAIN\FinanceGroup' to share their files.

1. First, we log in to the system with the Connect-NcController command. This creates a persistent session with FAS-SVM so that, going forward, any commands we run will be against this system.

C:\> Connect-NcController -Name fas-svm | ft -AutoSize

Name                 Address      Vserver  Version
----                 -------      -------  -------
fas-svm1             172.16.80.10 fas-svm1 NetApp Release 9.1


2. Next, we will create a directory on 'vol1' for our CIFS share, named 'finance'. I set the permissions to 777 so that everyone has access to the directory.

C:\> New-NcDirectory -Path /vol/vol1/finance -Permission 777 | ft -AutoSize

Name    Type      Size   Created  Modified Owner Group Perm Empty
----    ----      ----   -------  -------- ----- ----- ---- -----
finance directory 4 KB 3/24/2017 3/24/2017     0     0  777 True

3. Time to create the CIFS share on the 'finance' folder. We will use the Add-NcCifsShare command for this. Note that there are several parameters we can use; one important parameter is -ShareProperties. From the module help, we can see our options:

-ShareProperties <String[]>
    The list of properties for this CIFS share. Possible values:
    "oplocks"        - This specifies that opportunistic locks (client-side caching) are enabled on this share.
    "browsable"      - This specifies that the share can be browsed by Windows clients.
    "showsnapshot"   - This specifies that Snapshots can be viewed and traversed by clients.
    "changenotify"   - This specifies that CIFS clients can request for change notifications for directories on this share.
    "homedirectory"  - This specifies that the share is added and enabled as part of the CIFS home directory feature. The configuration of this share should be done using CIFS home directory
    feature interface.
    "attributecache" - This specifies that connections through this share are caching attributes for a short time to improve performance.

For this example, let's make the CIFS share browsable and show snapshots.

C:\> Add-NcCifsShare -Name finance -Path '/vol1/finance' -ShareProperties @("browsable","showsnapshot") -Comment 'Finance share' | ft -AutoSize

CifsServer ShareName    Path              Comment
---------- ---------    ----              -------
FAS-SVM   finance      /vol1/finance     Finance share

4. Now that our 'finance' share is created, we will set our CIFS permissions. We will use the Add-NcCifsShareAcl command to give the AD security group 'DOMAIN\FinanceGroup' full control, specifying 'windows' for the -UserGroupType parameter.

C:\> Add-NcCifsShareAcl -Share finance -UserOrGroup 'DOMAIN\FinanceGroup' -Permission full_control -UserGroupType windows | ft -AutoSize

Share        UserOrGroup         Permission
-----        -----------         ----------
finance      DOMAIN\FinanceGroup full_control

Done! Pretty simple, eh? You can see how we could easily put these steps into a function for repeatable use in PowerShell; a sketch follows the summary below.

To summarize the steps this is what we did:
  • Connected to our Netapp system
  • Created a directory on a volume
  • Created a CIFS share on that directory
  • Added CIFS permissions using an Active Directory security group
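
A minimal sketch of such a function, assuming the NetApp PowerShell Toolkit (DataONTAP module) and that the volume is junctioned at /vol/<volume>; all names are example values:

#Requires -Modules DataONTAP
function New-NcSimpleCifsShare
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true)][string]$Controller,
        [Parameter(Mandatory=$true)][string]$Volume,
        [Parameter(Mandatory=$true)][string]$ShareName,
        [Parameter(Mandatory=$true)][string]$Group
    )
    # Connect to the SVM management interface
    Connect-NcController -Name $Controller | Out-Null
    # Create the directory that will back the share
    New-NcDirectory -Path "/vol/$Volume/$ShareName" -Permission 777 | Out-Null
    # Create the share and give the AD group full control
    Add-NcCifsShare -Name $ShareName -Path "/$Volume/$ShareName" -ShareProperties @('browsable','showsnapshot')
    Add-NcCifsShareAcl -Share $ShareName -UserOrGroup $Group -Permission full_control -UserGroupType windows
}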
