Moving Shared VHDX files between Host clusters

Shared VHDX files in Hyper-V are useful for VM guest clusters – SQL, file servers and so forth. However, when moving from cluster to cluster, you need to plan for an outage in order to move the shared disks.

For me it makes the most sense to let SCVMM move the VMs. To do this, I need to disconnect the shared VHDX files and reconnect them after the move. This is cleaner than offline copying/importing and trying to clean up the VM config manually afterwards.

An example is below:

#Set Variables for VM cluster nodes
$currenthost='Host1'
$currenthost2='Host2'
$newhost='Host3'
$newhost2='Host4'
$vmnode1='VMname1'
$vmnode2='VMname2'
$storage='ClusterStorage\myvolume\folder'
$newstorage='ClusterStorage\mynewvolume\folder'

#Verify locations for shared disks in SCVMM - alter controller locations below
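#A quick sanity check first - list the current disks and their controller locations
#(assumes the Hyper-V PowerShell module is available on the machine you run this from)
Get-VMHardDiskDrive -ComputerName $currenthost -VMName $vmnode1 | Select-Object ControllerType,ControllerNumber,ControllerLocation,Path
Get-VMHardDiskDrive -ComputerName $currenthost2 -VMName $vmnode2 | Select-Object ControllerType,ControllerNumber,ControllerLocation,Path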

#Disconnect Disks from VM Node1
Remove-VMHardDiskDrive -ComputerName $currenthost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2
Remove-VMHardDiskDrive -ComputerName $currenthost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3
Remove-VMHardDiskDrive -ComputerName $currenthost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4
Remove-VMHardDiskDrive -ComputerName $currenthost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5

#Disconnect Disks from VM Node2
Remove-VMHardDiskDrive -ComputerName $currenthost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2
Remove-VMHardDiskDrive -ComputerName $currenthost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3
Remove-VMHardDiskDrive -ComputerName $currenthost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4
Remove-VMHardDiskDrive -ComputerName $currenthost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5

#Open SCVMM, refresh VMs. Start storage migration (cluster to cluster in SCVMM)

#Copy Clustered disks from location A to B
Copy-Item \\$currenthost\c$\$storage\*.* -Destination \\$newhost\c$\$newstorage\ -Force

#Once Copy completed - reconnect clustered disks from new location

#Add Disks to VM Node1
Add-VMHardDiskDrive -ComputerName $newhost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -Path c:\$newstorage\shared_disk_1.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName $newhost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3 -Path c:\$newstorage\shared_disk_2.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName $newhost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4 -Path c:\$newstorage\shared_disk_3.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName $newhost -VMName $vmnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5 -Path c:\$newstorage\shared_disk_4.vhdx -SupportPersistentReservations

#Add Disks to VM Node2
Add-VMHardDiskDrive -ComputerName $newhost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -Path c:\$newstorage\shared_disk_1.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName $newhost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3 -Path c:\$newstorage\shared_disk_2.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName $newhost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4 -Path c:\$newstorage\shared_disk_3.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -ComputerName $newhost2 -VMName $vmnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5 -Path c:\$newstorage\shared_disk_4.vhdx -SupportPersistentReservations

#Power on VMs
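#e.g. (names as set above):
Start-VM -ComputerName $newhost -Name $vmnode1
Start-VM -ComputerName $newhost2 -Name $vmnode2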

Hyper-V – Don’t forget these simple administrative points! Part 2.

Continued from Part 1 – here.

8. DO use SCCM/WSUS for patching

For heaven's sake, DON'T have firewall rules in your environment allowing all servers to talk outbound to the internet. WHEN attackers infiltrate your environment, you'll have given them an easy exit with your data.

Block all internet-bound traffic from your servers and monitor the blocking. Allow only key services, routed via a proxy – by exception! Get WSUS to collect the patches you need and distribute them internally in a safe and controlled manner.
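If you want a host-level equivalent, a minimal sketch with the Windows Firewall cmdlets is below – the proxy address and port are hypothetical, and a default-block outbound policy needs careful testing before rollout:

#Default-deny outbound, then allow only the proxy (address/port are examples)
Set-NetFirewallProfile -All -DefaultOutboundAction Block
New-NetFirewallRule -DisplayName 'Allow outbound to proxy' -Direction Outbound -Action Allow -Protocol TCP -RemoteAddress 10.0.0.10 -RemotePort 8080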

9. DO use VLANs on your Hyper-V switches – Get VMM to do the donkey work

Networks & IP pools in VMM are a wonderful thing. Set a VLAN, static MAC and static IPs – all handled automatically at build. Make the most of this feature to keep tabs on your infrastructure. There's no need to go back and forth to your IPAM tools for VM deployments.

Removing native networks and DHCP prevents a poorly configured VM from gaining access to anything it shouldn’t be talking to.
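As a rough sketch of what the VMM side looks like (names, subnet and ranges are illustrative, assuming the VMM PowerShell module is loaded):

$logicalNetwork = Get-SCLogicalNetwork -Name 'Servers'
$definition = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork
$gateway = New-SCDefaultGateway -IPAddress '10.0.2.1' -Automatic
New-SCStaticIPAddressPool -Name 'Servers-Pool' -LogicalNetworkDefinition $definition -Subnet '10.0.2.0/24' -IPAddressRangeStart '10.0.2.50' -IPAddressRangeEnd '10.0.2.199' -DefaultGateway $gateway -DNSServer '10.0.1.2','10.0.1.3'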

10. DON’T Mix and match CPU/Memory types in a cluster

Whilst it's entirely possible to do so, you'll end up compromising the performance of your VMs. Everything will require CPU compatibility mode enabled (a VM shutdown is required to enable it).
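If you do end up mixed, enabling compatibility mode looks roughly like this (VM and host names illustrative; note the required shutdown):

Stop-VM -Name 'VMname1' -ComputerName 'Host1'
Set-VMProcessor -VMName 'VMname1' -ComputerName 'Host1' -CompatibilityForMigrationEnabled $true
Start-VM -Name 'VMname1' -ComputerName 'Host1'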

You will also be adding complexity to cluster capacity calculations. Draining a host with 1 TB of RAM across a selection of 256 GB hosts will be possible (provided no single VM has more than 256 GB of memory assigned), but sustaining node failures will become a complex calculation, rather than something you can judge at a glance.

If you are purchasing new nodes to expand your environment, then group them together and create a new cluster. Utilise live storage migrations between clusters – or, if using SMB3 storage, simply live migrate the VMs between clusters.

11. SQL per-CPU host licensing on Hyper-V – beware!

Example: you have a Hyper-V cluster with 10 nodes, 2 processors per node (20 processors total), and you decide you want to use 2 nodes for SQL Enterprise licensing on a per-host-CPU basis (4 CPU licences).

In short, a VM with a host-based licence could potentially exist on any of the 10 nodes in your cluster. The settings on the VM defining its preferred host do not matter; it could be powered on on any of the ten hosts, and as such you would likely be seen to have fallen short of the required licences.

You would be best off removing 3 nodes from your existing cluster and creating a separate Hyper-V cluster dedicated to the SQL workloads and licences: two nodes running VMs, with a single empty host (which allows for node failure/maintenance). This clearly shows your used hosts and your hot standby for a failure.

If in doubt – contact a licensing specialist company and get it checked and double-checked. You do not want a bill in the post for the 8 remaining nodes 🙂

12. DO configure everything in PowerShell and DO retain it elsewhere

There are many ways to automate the build of your hosts – SCVMM/SCCM or third-party products, even LUN cloning (if booting from SAN). However, if you are building manually for any reason at all, then prepare the config in PowerShell and retain it afterwards. If the worst occurs, you can be back up and running in no time at all.
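Even a simple capture of the finished host's config, stored off-box, pays for itself later – a quick sketch (the file share path is illustrative):

Get-NetIPConfiguration -Detailed | Out-File "\\fileserver\hostbuilds\$env:COMPUTERNAME-network.txt"
Get-VMSwitch | Export-Clixml "\\fileserver\hostbuilds\$env:COMPUTERNAME-vmswitch.xml"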

13. DO have a separate VLAN and address range for your host networks and DO keep them in step!

Make life easier for yourself. If you have 10 nodes, get 10 consecutive IP addresses from each of the ranges you may require (Management, Cluster-HB, Live Migration etc.) and make them share the same last octet. Example:

Node1:

  • Mgmt: 10.0.1.11
  • ClusterHB: 172.10.0.11
  • Live Migration: 192.168.8.11

Node2:

  • Mgmt: 10.0.1.12
  • ClusterHB: 172.10.0.12
  • Live Migration: 192.168.8.12

And so on… This will make your life easier in the long run. Keep these subnets away from any other use – so when you grow, you have more consecutive IPs available.

If you ever need to revisit and reconfigure anything remotely via PowerShell, complexity will be reduced.
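Keeping the octets in step also means host addressing can be generated rather than looked up – a sketch using the example ranges above:

#Node N gets .(10+N) in each range
1..10 | ForEach-Object {
    [pscustomobject]@{
        Node          = "Node$_"
        Mgmt          = "10.0.1.$(10 + $_)"
        ClusterHB     = "172.10.0.$(10 + $_)"
        LiveMigration = "192.168.8.$(10 + $_)"
    }
}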

14. DO standardise your naming convention and use scripting/deployment tools!

As per the above points, name your network adapters the same on every node – example:

  • MGMT-Team-NIC1, MGMT-Team-NIC2 –> MGMT-Team
  • LM-Team-NIC1, LM-Team-NIC2 –> LM-Team
  • SMB-NIC1, SMB-NIC2
  • iSCSI-NIC1, iSCSI-NIC2

etc…

Again, when using PowerShell to build/configure, it is easy to maintain the convention, and it will save you countless config headaches over time.
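For example, renaming and teaming on a fresh node might look like this (the in-box adapter names are illustrative, using the built-in LBFO cmdlets):

Rename-NetAdapter -Name 'Ethernet' -NewName 'MGMT-Team-NIC1'
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'MGMT-Team-NIC2'
New-NetLbfoTeam -Name 'MGMT-Team' -TeamMembers 'MGMT-Team-NIC1','MGMT-Team-NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic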

Deploying custom registry keys – to use with System Center

It can be useful to tattoo servers automatically on deployment with a custom registry key set – environment, service and component.

Example:

AssetName   Environment   Service   Component
SERVER01    PROD          WEB APP   WFE
SERVER02    UAT           WEB APP   WFE
SERVER03    DR            WEB APP   WFE
SERVER04    PROD          WEB APP   APP
SERVER05    UAT           WEB APP   APP
SERVER06    DR            WEB APP   APP
SERVER07    PROD          WEB APP   SQL
SERVER08    PROD          WEB APP   SQL
SERVER09    UAT           WEB APP   SQL
SERVER10    DR            WEB APP   SQL

Unfortunately we had a bunch of legacy servers out there, with this information held centrally in a flaky app.

Not only did we want this information in SCSM, but we also wanted it available for SCOM and SCCM to use for different purposes. So, armed with a CSV of data (in the format above), I needed to get this applied quickly to a few hundred VMs.

#Set Variables
$RegPath = "HKLM:\SOFTWARE\MyCompanyName"
$Script = {
    #Map the arguments passed in via -ArgumentList
    $RegPath = $args[0]
    $EnvVal  = $args[1]
    $SerVal  = $args[2]
    $ComVal  = $args[3]
    #Create the key first in case it does not already exist
    if (-not (Test-Path $RegPath)) { New-Item -Path $RegPath -Force | Out-Null }
    New-ItemProperty -Path $RegPath -Name Environment -PropertyType String -Value $EnvVal -Force
    New-ItemProperty -Path $RegPath -Name Service -PropertyType String -Value $SerVal -Force
    New-ItemProperty -Path $RegPath -Name Component -PropertyType String -Value $ComVal -Force
}

# Import CSV file
$list = Import-Csv C:\temp\ServiceData\servicedata.csv

# Pipe the list and invoke the script block on each server
$list | ForEach-Object {
    $obj = $_
    Invoke-Command -ComputerName $obj.AssetName -ScriptBlock $Script -ArgumentList $RegPath,$obj.Environment,$obj.Service,$obj.Component
}

# End of Script

The script above sets a variable for the registry path, then defines a script block ($Script) which will be passed to each server remotely using Invoke-Command.

The script block assigns its variables from the arguments received via -ArgumentList in the loop. The CSV data is formatted as the example table above, so the command connects to the computer (defined as AssetName), sends the script block ($Script) and passes the registry path, Environment, Service & Component data as argument positions 0, 1, 2 & 3.

At the other end, the passed script runs; for line 1 of the example CSV above, it would effectively be:

$RegPath = 'HKLM:\SOFTWARE\MyCompanyName'
$EnvVal = 'PROD'
$SerVal = 'WEB APP'
$ComVal = 'WFE'
if (-not (Test-Path $RegPath)) { New-Item -Path $RegPath -Force | Out-Null }
New-ItemProperty -Path $RegPath -Name Environment -PropertyType String -Value $EnvVal -Force
New-ItemProperty -Path $RegPath -Name Service -PropertyType String -Value $SerVal -Force
New-ItemProperty -Path $RegPath -Name Component -PropertyType String -Value $ComVal -Force

It will proceed to loop round and apply the values to each server in turn. Yes, it's raw and there's no error handling, but you could easily put a try/catch in there to verify each server can be contacted, plus output the results to a file – as sketched below.
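A minimal version of the same loop with a try/catch and a results file might look like this (the log path is illustrative):

$list | ForEach-Object {
    $obj = $_
    try {
        Invoke-Command -ComputerName $obj.AssetName -ScriptBlock $Script -ArgumentList $RegPath,$obj.Environment,$obj.Service,$obj.Component -ErrorAction Stop
        "$($obj.AssetName) - OK" | Out-File C:\temp\ServiceData\results.log -Append
    }
    catch {
        "$($obj.AssetName) - FAILED: $_" | Out-File C:\temp\ServiceData\results.log -Append
    }
}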

Now you can build out dynamically adjusting patch groups in SCCM based on Environment & Service, gather data into SCSM for services, and customise SCOM monitoring & alerting based on Environment – plus set scheduled maintenance mode in SCOM for these groups when they patch.
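On the SCOM side, scheduled maintenance mode for a patching server can be scripted along these lines (a sketch; the server name and window are illustrative):

$instance = Get-SCOMClassInstance -Name 'SERVER01.mydomain.local'
Start-SCOMMaintenanceMode -Instance $instance -EndTime (Get-Date).AddHours(2) -Reason PlannedOther -Comment 'Patch window'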

After all, you don't want to be dragged out of bed for a non-prod server going offline or a routine patch cycle.

System Center 2016 UR3

It’s out now –

https://support.microsoft.com/en-hk/help/4020906/update-rollup-3-for-system-center-2016

A lot of good VMM fixes in there – which I will be testing soon. Bulk host agent update script is in Charbel’s blog here: https://charbelnemnom.com/2017/05/update-rollup-3-for-system-center-2016-is-now-available-sysctr-systemcenter-scvmm/

Details of SCOM fixes in Kevin’s blog here: http://kevingreeneitblog.blogspot.co.uk/2017/05/scom-2016-update-rollup-3-ur3-now.html

I'm a little disappointed to see DPM miss an update in UR3. VMware support is still missing from 2016 – but all will be forgiven if it turns up in UR4, along with fixes for the woes currently experienced with UR2:

Tape library sharing – a 2012 OS cannot remove TL sharing, and re-establishing TL sharing on a 2016 OS required a manual DB cleanout (with Premier Support).

Console crashing on protection group (PG) alteration – requires a DLL from MS (see my previous posts).

Mount points, whilst supported for the storage (see my other posts), uncover a known issue with DPM mounting the VHDX files for Modern Backup Storage – the workaround is to add a drive letter to the storage.

If you don’t urgently need supported SQL 2016 backups / SharePoint 2016 protection from DPM, I would seriously consider sticking to UR1 for now.

Roll on UR4! 🙂