Hyper-V – Don’t forget these simple administrative points! Part 1.


Thought I’d put together a quick guide of often forgotten/overlooked configuration items. They may seem benign, but they can pop up at exactly the wrong time and bite you in the proverbial. I’ve seen most of these over the years and still run into them from time to time!

1. DO configure all your Hyper-V hosts as Server Core

There is absolutely no good reason to install GUI/Desktop Experience. If you are still not sure of your PowerShell commands, use ‘SCONFIG’ to set an initial IP address, open firewall rules for remote management and join the domain.

As the old adage goes – practice makes perfect. None of us started out knowing PowerShell off the top of our heads, so roll up your sleeves and give it a try. Once you’ve used it for a while, it’ll become second nature.
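As a starting point, the initial host configuration covers the same ground as SCONFIG. A minimal sketch – the interface alias, addresses and domain name below are placeholders for your environment:

```powershell
# Set a static management IP (alias and addresses are examples)
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 10.0.0.10 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 10.0.0.5

# Allow remote management
Enable-PSRemoting -Force
Enable-NetFirewallRule -DisplayGroup 'Remote Event Log Management'

# Join the domain and reboot
Add-Computer -DomainName 'yourdomain.local' -Restart
```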

2. DO run AV on your Hyper-V hosts

The exclusions are well known – published on TechNet for all to use – HERE. There is nothing worse these days than making it easy for malware to spread. Be mindful of lateral movement in your organisation – think of malware intrusion not as an ‘IF’ but as a ‘WHEN’. Plan ahead and don’t make things easy for them.
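If you run Windows Defender on the hosts, the published exclusions can be applied with Add-MpPreference. A sketch only – check the paths against the TechNet list and your actual VM storage locations:

```powershell
# Default Hyper-V data paths (adjust if your VMs live elsewhere)
Add-MpPreference -ExclusionPath 'C:\ProgramData\Microsoft\Windows\Hyper-V'
Add-MpPreference -ExclusionPath 'C:\ClusterStorage'

# Virtual disk and saved state file types
Add-MpPreference -ExclusionExtension 'vhd','vhdx','avhd','avhdx','vsv','iso'

# Hyper-V management and worker processes
Add-MpPreference -ExclusionProcess 'vmms.exe','vmwp.exe'
```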

3. DO turn on local firewalls

Again, group policy makes it easy to apply common rule sets to domain machines. Don’t leave the gates open for the sake of a few minutes’ configuration work.
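Verifying locally is quick – push the rule sets via group policy, then confirm on each host that all profiles are actually enabled:

```powershell
# Ensure all firewall profiles are enabled on the host
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled True

# Quick check
Get-NetFirewallProfile | Format-Table Name, Enabled
```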

4. DO plan your Hyper-V deployment ahead of time

Bad configuration is often the result of bad planning. Even if you are only deploying a few hosts in a small cluster, approach it exactly as you would a 32-host deployment. Draw up your design according to the requirements. Even if the requirements are on the slim side, document your process and decision making.

Try to think of scale: if the company grows, would it be easy to add another few hosts later on?

5. DO connect a pair of 1GbE onboard NICs – for dedicated management use

You’ve invested in 10GbE – you have a pair of NICs and you are going to use them. There are usually 2 x 1GbE NICs onboard doing nothing: configure them as your main OS management network. Don’t put a vSwitch on top – keep it simple and easy to reach. If you do encounter any vLAN/vSwitch/vAnything issues down the line, you’ll be glad you can still manage your server without the need for clunky iLO/DRAC sessions.
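If you want resilience across the pair, a simple switch-independent team (still no vSwitch on top) might look like this – team name, member names and addresses are examples, not prescriptions:

```powershell
# Team the two onboard 1GbE NICs for management (names are examples)
New-NetLbfoTeam -Name 'MGMT-Team' -TeamMembers 'NIC1','NIC2' -TeamingMode SwitchIndependent

# Then assign your management IP to the team interface as normal
New-NetIPAddress -InterfaceAlias 'MGMT-Team' -IPAddress 10.0.0.10 -PrefixLength 24 -DefaultGateway 10.0.0.1
```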

6. DO check your cluster metrics!

Cluster creation is pretty smart – it’ll auto-assign the networks – but it’s not infallible. Get PowerShell open and check them HERE. It might be an old article, but it is still relevant. Make sure your CSV traffic is going where you want it to go; in the event of a redirect, you don’t want it going over the wrong network.
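Checking takes seconds – the cluster network with the lowest metric carries CSV/cluster traffic, so make sure the AutoMetric assignments landed where you expect:

```powershell
# List cluster networks with their roles and metrics
Get-ClusterNetwork | Format-Table Name, Role, AutoMetric, Metric

# Manually pin a network to the lowest metric if needed
# (network name and value are examples; setting Metric disables AutoMetric)
(Get-ClusterNetwork 'CSV-Network').Metric = 900
```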


7. DO make the most of your Microsoft Licences!

So many people overlook System Center, believing it to be a set of expensive management tools. If you have Datacenter licensing, Software Assurance and the right CALs, then you are set to use the full suite of products, plus some of the MDOP goodies that have been written for you.


I’m sure you are all familiar with the components now. Quite simply, Hyper-V without VMM is like toast without butter – dry and boring!

Take your time planning System Center and the investment will pay dividends. Alternatively, get a consultant in to do the leg work for you! 🙂 You can contact me on the site.

Part 2 here…




Configuring DPM Agents, Non-domain using Certificates

Hi All,

There are lots of articles on this; Microsoft’s own documentation is far more detailed for 2012 R2 than it is for 2016, but the concept remains the same.

My problems started because the certificate authority forming the chain (an offline root CA plus a domain enterprise subordinate CA) could not be contacted by agents on the other side of a firewall – at least not without making Swiss cheese of the firewall by allowing port 80 to the CRL distribution point, or standing up an alternative CRL location.

Follow the cert template setup guide here.

In the settings for the template, change the subject name handling to ‘Supply in the request’ – these certs are for non-domain machines.


Make sure permissions on the template allow Read and Enroll. Then we can add the template to the CA for use, and head for the MMC console with the local computer certificate snap-in on your DPM server.

Add the cert to your DPM server following the guide above, generate the BIN file and copy it to the agent.

If your agent is behind a firewall, make sure to open the additional TCP port 6076 required for the certificate communication.

If your non-domain agent is unable to access the CRL distribution point (required for initial verification of the cert), then you will need to manually import the CRL files. Open Explorer to your cert server – \\yourcertserver\certenroll – where you will find your CRL files.

Import your CRL files (copied from the CA) using:

certutil -addstore CA "C:\CRLfolderpath\CRLfilename.CRL"

Remember to import the full chain of valid CRL files. Once they are added in, run your DPM agent setup command – you will then have a BIN file with which to complete setup.
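If there are several CRL files sitting in the certenroll share, a short loop saves typing out each certutil line – the share path below is an example:

```powershell
# Import every CRL from the CA's certenroll share into the local CA store
Get-ChildItem '\\yourcertserver\certenroll' -Filter '*.crl' | ForEach-Object {
    certutil -addstore CA $_.FullName
}
```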


Bizarrely – Microsoft doesn’t support backup of machines in a “Perimeter Network” – https://technet.microsoft.com/en-us/library/hh757801(v=sc.12).aspx

I’m not sure about anyone else, but I make extensive use of VLANs, which are subject to firewall rule sets, and agents work fine in these scenarios. So why would a “perimeter network” be any different?

I suspect this is only noted because MS have not fully tested agents in this scenario, but I do not see any reason why it should not work.

However, follow best practice and remember that if backup of a system is essential, you are best sticking to the official supported guidelines 🙂

Qlik Sense – PostgreSQL, PGPASS and backups…

A little off my usual subject matter, but I was tasked with this recently and couldn’t find anything really detailing it all in one place.

Anyway, those of you familiar (or not) with this DB will be looking for an easy way to back it up. Well, there isn’t one – it’s a good old-fashioned dump-to-flat-file product.

Command lines to do this are detailed here: Qlik Sense Backup Guide

pg_dump.exe -h localhost -p 4432 -U postgres -b -F t -f “c:\QSR_backup.tar” QSR

As you will read there, when you back up you can either be prompted for a password (???) or specify it in plain text (sigh). To store the password securely, you need to use a PGPASS file.

PGPASS files are detailed here: PostgreSQL docs

The file format needs to be one entry per line:

hostname:port:database:username:password

What it doesn’t clearly explain is that this is a lookup list (think HOSTS file). If you were to use the pg_dump command in the example above, your PGPASS file should read (password is a placeholder):

localhost:4432:QSR:postgres:yourpassword

For multiple databases across multiple hosts, list each entry in the file; when pg_dump.exe references this information, it looks up the password based on the hostname, port, database and username given on the command line.


I like to work with service accounts for applications, so it makes sense to put the pgpass.conf file in a folder with restricted permissions – so only that service account or a domain admin can get at it. Run your script using this account or one with permissions to read the file.

Scripting Backup

My preference here is to call the script on a schedule. I’m using PowerShell, as I have multiple nodes in the site and services need to be stopped on every node for the backup to take place.


# Script blocks to stop and start the Qlik services on a node
$scriptblock1 = {
    Stop-Service QlikSenseEngineService
    Stop-Service QlikSensePrintingService
    Stop-Service QlikSenseProxyService
    Stop-Service QlikSenseSchedulerService
    Stop-Service QlikSenseServiceDispatcher
    Get-Service QlikSenseRepositoryService | Stop-Service
}
$scriptblock2 = {
    Get-Service QlikSenseRepositoryService | Start-Service
    Get-Service qlik* | Where-Object Status -eq 'Stopped' | Start-Service
}

# Shutdown services on primary & secondary nodes
Invoke-Command -ScriptBlock $scriptblock1
Invoke-Command -ComputerName secondnode -ScriptBlock $scriptblock1

# Call backup file
(Start-Process -FilePath "cmd.exe" -ArgumentList '/c c:\backup.bat' -Wait -PassThru).ExitCode

# Start services up on primary & secondary nodes
Invoke-Command -ScriptBlock $scriptblock2
Invoke-Command -ComputerName secondnode -ScriptBlock $scriptblock2

# Move backup to another location (paths are examples - set for your environment)
$backupfile = 'd:\backups\QSR.tar'
$destination = '\\backuptarget\qlik'
Move-Item $backupfile -Destination $destination -Force

The script calls a backup.bat file which contains the following:

SET PGPASSFILE=C:\securefolder\pgpass.conf
"C:\Path to Qliksense\Repository\PostgreSQL\9.3\bin\pg_dump.exe" -h localhost -p 4432 -U yourDBadminuser -b -F t -f "d:\backups\QSR.tar" QSR

The file sets the PGPASSFILE reference to your pgpass.conf, then calls pg_dump.exe with the required parameters, so it can look up the password and output your backup to D:\backups.

PowerShell picks the file up and moves it elsewhere. I suggest moving it into the same file structure as your Root, Apps, Logs, CustomData & Static content (specified on install), so that you can use your preferred backup software (DPM etc.) to protect it.





Deploying custom registry keys – to use with System Center

It can be useful to automatically tattoo servers on deployment with a custom set of registry keys – environment, service and component.


AssetName,Environment,Service,Component

Unfortunately we had a bunch of legacy servers out there, with a flaky app holding this information centrally.

Not only did we want this information in SCSM, but also available for SCOM and SCCM to use for different purposes. So, armed with a CSV of data (in the format above), I needed to apply this quickly to a few hundred VMs.

# Set variables - registry path is an example, point it wherever you want the keys
$ENV = 'HKLM:\SOFTWARE\ServiceData'
# Script block to run remotely on each server
$SCRIPT = {
    $ENV = $args[0]
    $ENVVAL = $args[1]
    $SERVAL = $args[2]
    $COMVAL = $args[3]
    New-ItemProperty -Path $ENV -Name Environment -PropertyType String -Value $ENVVAL -Force
    New-ItemProperty -Path $ENV -Name Service -PropertyType String -Value $SERVAL -Force
    New-ItemProperty -Path $ENV -Name Component -PropertyType String -Value $COMVAL -Force
}
# Import CSV file
$list = Import-Csv C:\temp\ServiceData\servicedata.csv
# Pipe variable contents and invoke script
$list | ForEach-Object {
    $obj = $_
    Invoke-Command -ComputerName $obj.AssetName -ScriptBlock $SCRIPT -ArgumentList $ENV,$obj.Environment,$obj.Service,$obj.Component
}
# End of script

The script above sets a variable for the reg path, then a script block, which is passed to each server remotely using Invoke-Command.

The remote script sets its variables from the arguments it receives in the loop. The CSV data is formatted as the example table above, so the command connects to each computer (defined as AssetName), sends the script ($SCRIPT) and passes the reg path, Environment, Service & Component data as argument positions 0, 1, 2 & 3.

At the other end, the passed script runs; for line 1 of the example CSV above, it would execute:

 New-ItemProperty -Path $ENV -Name Environment -PropertyType String -Value $ENVVAL -Force
 New-ItemProperty -Path $ENV -Name Service -PropertyType String -Value $SERVal -Force
 New-ItemProperty -Path $ENV -Name Component -PropertyType String -Value $COMVal -Force

It will loop round and apply each server in turn. Yes, it’s raw and there’s no error handling, but you could easily put a try/catch in there to verify the server can be contacted, plus output the results to a file etc.

Now you can build dynamically adjusting patch groups in SCCM based on Environment & Service, gather service data into SCSM, and customise SCOM monitoring & alerting based on Environment – plus set scheduled maintenance mode in SCOM for these groups when they patch.

After all, you don’t want to be dragged out of bed for a non-prod server going offline or a routine patch cycle.

Deploy DPM Remote Management Console 2016 + UR2

Unlike all the other System Center products, which normally accept a straightforward setup.exe /install /client and install silently, DPM is different (no shock there, then!).

After a long search for documentation on the available install switches, I was led to a blog post by Steve Buchanan covering the 2012 console install.

So, 2016 follows the same principle, but for some very bizarre reason – source media contains:

2012 Console,

2012 SP1 Console,

2012 R2 Console and….

2016 Console.

The only command line for install – Setup.exe /i /cc /client – installs all 4 versions – FAIL.

So, as far as I can see, the only way round it is to live with it, remove the unnecessary components after install, then apply UR2.

Follow Steve’s post for getting it into Config Manager (I’m not rewriting his post). In the source directory, add your source media, a copy of the UR2 console patch (you can extract the file and grab the MSP – it’s called DPMMANAGEMENTSHELL2016-KB3209593.MSP) and finally a batch file for install, and reference that instead.

So your file layout should contain the source media, the extracted MSP and your install batch file alongside each other.


In your batch file:

start /wait cmd /c "Setup.exe /i /cc /client"

start /wait cmd /c "msiexec.exe /x {DFF93860-2113-4207-A7AC-3901ABCE8002} /passive"

start /wait cmd /c "msiexec.exe /x {FF6E79E3-66E5-4079-BE10-2B9CFBE3B458} /passive"

start /wait cmd /c "msiexec.exe /x {88E17747-6E2C-48A0-88CC-396AC8D9C5BB} /passive"

start /wait cmd /c "msiexec.exe /f {BF23ED54-5484-4AC1-8EA7-6ACAFBBA6A45} /qn"

start /wait cmd /c "msiexec.exe /update DPMMANAGEMENTSHELL2016-KB3209593.MSP /qb"

So, we install all the consoles, then remove three of the four versions. For me this caused the 2016 console icons to go awry – hence a quick repair of the 2016 install before finally applying the UR2 MSP.

Don’t forget to reference Visual C++ 2008 Redistributable x64 in your dependencies list in SCCM – otherwise it won’t install 🙂



System Center 2016 UR3

It’s out now –


A lot of good VMM fixes in there – which I will be testing soon. Bulk host agent update script is in Charbel’s blog here: https://charbelnemnom.com/2017/05/update-rollup-3-for-system-center-2016-is-now-available-sysctr-systemcenter-scvmm/

Details of SCOM fixes in Kevin’s blog here: http://kevingreeneitblog.blogspot.co.uk/2017/05/scom-2016-update-rollup-3-ur3-now.html

I’m a little disappointed to see DPM missed an update in UR3. VMware support is still missing from 2016, but all will be forgiven if it turns up in UR4 along with fixes for the woes currently experienced with UR2:

Tape library sharing – a 2012 OS cannot remove TL sharing, and re-establishing TL on a 2016 OS required a manual DB clean-out (with Premier Support).

Console Crashing on PG alteration – requires DLL from MS (see my previous posts)

Mount points, whilst supported for the storage (see my other posts), uncover a known issue with DPM mounting the VHDX files for Modern Backup Storage – the workaround is to add a drive letter to the storage.

If you don’t urgently need supported SQL 2016 backups or SharePoint 2016 protection from DPM, I would seriously consider sticking with UR1 for now.

Roll on UR4! 🙂



DPM 2016 agent installations – Making your life easier with SCCM

Take the pain away from manual deployment – grab the agent and put it into SCCM. The command lines for agent install (2016 UR2) are:

DPMAgentInstaller_KB3209593_AMD64.exe /q /IAcceptEULA

DPMAgentInstaller_KB3209593.exe /q /IAcceptEULA (for x86)

Just make sure all the agent pre-reqs are in place (WMF 4.0 for 2008 R2, etc.) and make detection of those a pre-req for the SCCM deployment.

If you know which DPM server you are going to protect with, simply append the server name to the install command above – that will open the ports and leave the agent ready to be attached.

If you don’t know just yet, run a second SCCM task to call a batch file running SetDpmServer.exe (in the DPM agent Bin directory) to configure the agent.

Run an “Application deployment type compliance details” report in SCCM, using your target collections, application, deployment type and status of “Success” to generate a CSV file of the installed agents.

Take the computer name column in Excel, append your domain name (using CONCATENATE) and put the resulting list into a .txt file (no headings or any other info required).
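The Excel step can also be scripted. Assuming the exported report has a ComputerName column (check the actual column header in your export – it varies by report), something like this produces the txt file directly:

```powershell
# 'ComputerName' is a hypothetical column name - match it to your SCCM report export
Import-Csv 'C:\temp\report.csv' |
    ForEach-Object { "$($_.ComputerName).yourdomain.local" } |
    Set-Content 'C:\temp\agents.txt'
```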

Open the DPM console – select Install, then Attach agents, click Add From File and point it at your txt file.

Output from the SCCM report, manipulate and import – about ten minutes’ work, saving many hours of manual config.

Job Done!