Disable & Move old AD Computer Objects

A quick script to take a list from CSV and disable, then move objects in AD…

Remember everyone – don’t just cut & paste scripts from the internet and run them without understanding what they are doing first. If in doubt – DON’T run it!

Always build in sanity checking and also test on a sample set of data/test environment.

#Check AD record, then disable and move each computer object.
$computerlist = Import-Csv c:\temp\computers.csv
$computerlist | ForEach-Object {
    $adobj = $_

    #Confirm the computer object exists before touching it.
    try
    {
        $adcomp = Get-ADComputer -Identity $adobj.computername -ErrorAction Stop
        $errorvar = '0'
    }
    catch
    {
        Write-Host "Unable to find a computer" ($adobj.computername)
        $errorvar = '1'
    }

    #Disable the computer account.
    if ($errorvar -ne '1')
    {
        try
        {
            Set-ADComputer -Identity $adcomp -Enabled $false -ErrorAction Stop
            $errorvar = '0'
        }
        catch
        {
            Write-Host "Unable to disable AD account for" ($adobj.computername)
            $errorvar = '1'
        }
    }

    #Move the object to the holding OU.
    if ($errorvar -ne '1')
    {
        try
        {
            Move-ADObject -Identity $adcomp -TargetPath 'OU=TBDelete,DC=mydomain,DC=local' -ErrorAction Stop
            $errorvar = '0'
        }
        catch
        {
            Write-Host "Unable to move AD account for" ($adobj.computername)
            $errorvar = '1'
        }
    }

    #Clear the lookup result before the next row (may not exist if the lookup failed).
    Clear-Variable -Name adcomp -ErrorAction SilentlyContinue
}
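For reference, the CSV only needs a computername column header (that is the property name the script reads), for example:

computername
OLDPC001
OLDPC002

The hostnames above are just placeholders. If you want a dry run first, Set-ADComputer and Move-ADObject both support -WhatIf.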

Moving Shared VHDX files between Host clusters

Shared VHDX files in Hyper-V are useful for VM guest clusters – SQL, file servers and so forth. However, when moving VMs from one host cluster to another, you need to plan for an outage in order to move the shared disks.

For me it makes the most sense to let SCVMM move the VMs. To do this, I need to disconnect the shared VHDX files and reconnect them after the move. This is cleaner than copying/importing the VMs offline and trying to clean up the VM config manually afterwards.
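Before disconnecting anything, it’s worth confirming exactly which disks on each node are the shared ones and where they sit on the SCSI controllers. A minimal check along these lines should do it – the host and VM names are the same placeholders used in the script below, and I’m assuming the SupportPersistentReservations property is what flags a shared VHDX in the Get-VMHardDiskDrive output on your hosts:

#List the disks on a guest cluster node and flag the shared ones
Get-VMHardDiskDrive -ComputerName 'Host1' -VMName 'VMname1' |
    Select-Object VMName, ControllerType, ControllerNumber, ControllerLocation, Path, SupportPersistentReservations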

The full example is below:

#Set Variables for VM cluster nodes
$currenthost='Host1'
$currenthost2='Host2'
$newhost='Host3'
$newhost2='Host4'
$vmnode1='VMname1'
$vmnode2='VMname2'
#Paths are relative to C:\ on each host – no leading backslash, so the path joins below don't double up
$storage='ClusterStorage\myvolume\folder'
$newstorage='ClusterStorage\mynewvolume\folder'

#Verify locations for shared disks in SCVMM - alter controller locations below

#Disconnect Disks from VM Node1
Remove-VMHardDiskDrive -computername $currenthost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 
Remove-VMHardDiskDrive -computername $currenthost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3 
Remove-VMHardDiskDrive -computername $currenthost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4 
Remove-VMHardDiskDrive -computername $currenthost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5

#Disconnect Disks from VM Node2
Remove-VMHardDiskDrive -computername $currenthost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2
Remove-VMHardDiskDrive -computername $currenthost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3 
Remove-VMHardDiskDrive -computername $currenthost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4 
Remove-VMHardDiskDrive -computername $currenthost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5

#Open SCVMM, refresh VMs. Start storage migration (cluster to cluster in SCVMM)

#Copy Clustered disks from location A to B
Copy-Item \\$currenthost\c$\$storage\*.* -Destination \\$newhost\c$\$newstorage\ -Force

#Once Copy completed - reconnect clustered disks from new location

#Add Disks to VM Node1
Add-VMHardDiskDrive -computername $newhost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -Path c:\$newstorage\shared_disk_1.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -computername $newhost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3 -Path c:\$newstorage\shared_disk_2.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -computername $newhost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4 -Path c:\$newstorage\shared_disk_3.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -computername $newhost -vmname $VMnode1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5 -Path c:\$newstorage\shared_disk_4.vhdx -SupportPersistentReservations
 
#Add Disks to VM Node2
Add-VMHardDiskDrive -computername $newhost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 -Path c:\$newstorage\shared_disk_1.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -computername $newhost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 3 -Path c:\$newstorage\shared_disk_2.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -computername $newhost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 4 -Path c:\$newstorage\shared_disk_3.vhdx -SupportPersistentReservations
Add-VMHardDiskDrive -computername $newhost2 -vmname $VMnode2 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 5 -Path c:\$newstorage\shared_disk_4.vhdx -SupportPersistentReservations

#Power on VMs
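#For example – a minimal sketch assuming the Hyper-V Start-VM cmdlet and that you bring the guest cluster nodes up one at a time:
Start-VM -ComputerName $newhost -Name $vmnode1
Start-VM -ComputerName $newhost2 -Name $vmnode2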

DNS PTR record checking

I was recently asked how to find missing PTR records, so here it goes – a first draft.

#Check A records on the DNS server itself and look for a matching PTR for each one.
$DNSsvr='YourDNS'
$DNSAZone='Your Main A Zone FQDN here'
$DNSrecords = Get-DnsServerResourceRecord -ZoneName $DNSAZone -ComputerName $DNSsvr -RRType A
$DNSrecords | ForEach-Object {
    $dnsobj = $_

    #Build the expected /24 reverse lookup zone name from the record's IP address.
    $IPsplit = ($dnsobj.RecordData).IPv4Address.IPAddressToString -split "\."
    $PTRZone = $IPsplit[2]+'.'+$IPsplit[1]+'.'+$IPsplit[0]+'.in-addr.arpa'

    #Does the reverse lookup zone exist?
    try
    {
        $CHK1 = Get-DnsServerZone -ComputerName $DNSsvr -Name $PTRZone -ErrorAction Stop
        $errorvar = '0'
    }
    catch
    {
        Write-Host "Unable to find a reverse lookup zone for" $PTRZone "for record" ($dnsobj.HostName)
        $errorvar = '1'
    }

    #Does the PTR record exist in that zone?
    if ($errorvar -ne '1')
    {
        try
        {
            $RevDNSrecords = Get-DnsServerResourceRecord -ZoneName $CHK1.ZoneName -ComputerName $DNSsvr -RRType Ptr -Name $IPsplit[3] -ErrorAction Stop
            $errorvar = '0'
        }
        catch
        {
            Write-Host "Unable to find a record for" ($dnsobj.HostName) "in" $PTRZone
            $errorvar = '1'
        }
    }

    #Clear the zone lookup before the next record (may not exist if the zone lookup failed).
    Clear-Variable -Name CHK1 -ErrorAction SilentlyContinue
}
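If you want to go a step further and create the missing PTRs rather than just report on them, something along these lines could sit in the inner catch block (or run afterwards against your list of misses). This is a hedged sketch only – it assumes the /24 reverse zone already exists and that the A record’s HostName is a plain host name in the forward zone, so test it against a lab zone first:

#Create the missing PTR for one A record (assumes an existing /24 reverse zone)
Add-DnsServerResourceRecordPtr -ComputerName $DNSsvr -ZoneName $PTRZone -Name $IPsplit[3] -PtrDomainName ($dnsobj.HostName + '.' + $DNSAZone)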

 

Hyper-V and HP c7000 Flex 10D, some thoughts…

I like the good old HP chassis, a reliable piece of kit and relatively straightforward to administer and maintain. On the host & networking side, networks are provided by modules in the rear of the chassis, allocated over the chassis fabric to each blade.

In essence, 2 x rear chassis Flex-10 modules (each with 8 x ports – FCoE, 10GbE or 1GbE SFP+ modules) provide resiliency to the server blades in the front of the chassis. Upstream you can have LACP connections for maximum throughput. Links are aggregated into shared uplink sets (SUS), and you can specify the allowed VLAN tags across these uplink sets, which can then be dragged and dropped into server profiles to give the blades connectivity.

Each Flex module can present 4 virtual functions (VFs) to a server. These can be 3 x Ethernet + 1 x FCoE, 4 x Ethernet, and so on. You assign this via a server profile in Virtual Connect Manager.

As these are mirrored for availability, you don’t have 8 Ethernet connections to play with – you have 4 pairs. So, how best to utilise these connections? Answer: it depends.

4. If you have FCoE or iSCSI storage, you are immediately going to lose 1 pair of connections to facilitate this, leaving you with 3 pairs.

3. Now you are left with choices to make about splitting up your traffic. These decisions will be driven by the back-end network infrastructure connecting to the Flex-10s. If you have physically separate networks for internet-facing traffic (IF/DMZ), then you will have at least 1 pair of ports on the Flex side connected to this, and you will therefore lose another pair on the host side.

2. and 1. You need host management, Live Migration, heartbeat and trusted/non-internet traffic for your VMs from the remaining 2 pairs of NICs. However you go about this, you will likely need to use LBFO teams and vSwitches (2012 R2) or SET (2016) over your pairs, carving out vEthernets for management, Live Migration & heartbeat and VM networks – a rough sketch is below.
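As a rough sketch of the 2016/SET version – the switch, vEthernet and VLAN names here are made-up placeholders, and the physical NIC names will differ on your blades:

#Create a SET switch over one pair of Flex VFs, then carve out host vEthernets
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'Flex-NIC-A','Flex-NIC-B' -EnableEmbeddedTeaming $true -AllowManagementOS $false

#Host-facing vEthernets for management, Live Migration and cluster heartbeat
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'Management'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'LiveMigration'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'ConvergedSwitch' -Name 'Heartbeat'

#Tag each vEthernet with the appropriate VLAN (IDs are examples only)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Heartbeat' -Access -VlanId 30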

Just be cautious about how you allocate bandwidth to the host network functions. Remember that inter-node CSV traffic is vital to maintaining storage connectivity, and you don’t want your design to allow an out-of-control VM to consume all the available bandwidth and choke host connectivity.

Likewise, a live migration needs to be fast, but not to the detriment of your workloads. Test, find the balance, tune and then prepare for distribution.
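On the bandwidth side, minimum bandwidth weights on the host vEthernets are one way to protect those functions. Again, this is a hedged sketch using the placeholder adapter names above – weight-based QoS behaviour differs between LBFO vSwitches and SET, so check it against the QoS reading linked below before relying on it:

#Reserve a relative minimum share of the converged switch for host traffic (weights are examples only)
Set-VMNetworkAdapter -ManagementOS -Name 'Management' -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name 'Heartbeat' -MinimumBandwidthWeight 5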

More reading on QoS in Switch Embedded Teaming is here…