Hyper-V and HP c7000 Flex 10D, some thoughts…

I like the good old HP c7000 chassis: a reliable piece of kit and relatively straightforward to administer and maintain. On the host and networking side, networks are provided by interconnect modules in the rear of the chassis and allocated over the chassis fabric to each blade.

In essence, 2 x rear-chassis Flex-10D modules (each with 8 x ports taking FCoE, 10GbE or 1GbE SFP+ modules) provide resilient connectivity to the server blades in the front of the chassis. Upstream, you can use LACP connections for maximum throughput. Links are aggregated into Shared Uplink Sets (SUS), and you can specify the VLAN tags allowed across each uplink set; those networks can then be dragged and dropped into server profiles to give the blades connectivity.

Each Flex module can present 4 VFs (virtual functions) to a server. These can be 3 x Ethernet + 1 x FCoE, 4 x Ethernet, and so on. You allocate them via a server profile in Virtual Connect Manager.

As these are mirrored for availability, you don’t have 8 Ethernet functions to play with – you have 4 pairs. So how best to use these connections? The answer: it depends.

4. If you have FCoE or iSCSI storage, you immediately lose one pair of connections to carry it, leaving you with 3 pairs.

3. You are now left with choices about how to split up your traffic, and those decisions will be driven by the back-end network infrastructure connecting to the Flex-10 modules. If you have physically separate networks for internet-facing traffic (IF/DMZ), then at least one pair of ports on the Flex side will be connected to them, and you will therefore be forced to give up another pair on the host side.

2. and 1. You need host management, Live Migration, heartbeat and trusted/non-internet traffic for your VMs from the remaining two pairs of NICs. However you go about this, you will likely need LBFO teams and vSwitches (2012 R2) or SET (2016) over those pairs, carving out vEthernets for management, Live Migration and heartbeat alongside the VM networks – see the sketch below.
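
As a rough illustration of that carve-up on 2016, a converged SET switch with host vEthernets might look like the sketch below. The switch name, adapter names and VLAN IDs are assumptions for the example, not a prescription, and you should check that weight-based minimum bandwidth is supported with SET on your build (DCB is the usual alternative in RDMA designs):

# Minimal sketch (Windows Server 2016): a SET team over one pair of FlexNICs, with
# host vEthernets for Management, Live Migration and Cluster HB. All names/VLANs are examples.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Flex-NIC-A","Flex-NIC-B" `
    -EnableEmbeddedTeaming $true -MinimumBandwidthMode Weight -AllowManagementOS $false

# Carve out the host-facing vEthernets
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "LiveMigration"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "ClusterHB"

# Tag each host vEthernet with its VLAN (placeholder IDs)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ClusterHB"     -Access -VlanId 30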

Just be cautious about how you allocate bandwidth to the host network functions. Remember that inter-node CSV traffic is vital to maintaining storage connectivity, and you don’t want your design to allow an out-of-control VM to consume all the available bandwidth and choke host connectivity.

Likewise, a live migration needs to be fast, but not to the detriment of your workloads. Test, find the balance, tune, and then prepare to roll the configuration out across your hosts.
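
As a starting point for that tuning, and assuming the switch above was created with -MinimumBandwidthMode Weight, relative weights can be set per host vEthernet. The figures below are arbitrary examples to adjust after testing, not recommendations:

# Sketch only: relative minimum-bandwidth weights for the host vEthernets.
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "ClusterHB"     -MinimumBandwidthWeight 15
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 25
# Everything else on the switch (i.e. VM traffic) falls under the default flow
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 55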

More reading on QoS in Switch Embedded Teaming is here…

Hyper-V & Cisco UCS

All,

A quick note on my preferred network config for Cisco UCS hosts: follow Cisco’s recommendations, use the hardware you have available to you, and let your networks take advantage of the fabric’s native HA:

Name                                          Fabric   HA              vSwitch   Use
SMB 1                                         A        SMB Multipath   No        Host Only
SMB 2                                         B        SMB Multipath   No        Host Only
iSCSI 1                                       A        MPIO            No        Host Only
iSCSI 2                                       B        MPIO            No        Host Only
iSCSI 1                                       A        MPIO            Yes       Guest Only
iSCSI 2                                       B        MPIO            Yes       Guest Only
Management                                    A/B      UCS             No        Host Only
Live Migration                                A/B      UCS             No        Host Only
Cluster HB                                    A/B      UCS             No        Host Only
VM Networks (Trusted)                         A/B      UCS             Yes       Guest Only
VM Networks (Internet Facing / Non Trusted)   A/B      UCS             Yes       Guest Only

Let the UCS do the hard work of maintaining connectivity – don’t bother with LBFO teams, vEthernets and so on; just assign as many vNICs as your design requires and keep all your host and guest traffic separate, as sketched below.
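
As a rough sketch of what that looks like on the host (the adapter and switch names here are placeholders, not the names UCS will actually present):

# Sketch: UCS fabric failover provides the resilience, so no LBFO/SET team is built.
# Rename the presented vNICs to match the table, then bind vSwitches straight to them.
Rename-NetAdapter -Name "Ethernet 7" -NewName "VMNet-Trusted"
Rename-NetAdapter -Name "Ethernet 8" -NewName "VMNet-DMZ"

New-VMSwitch -Name "VM-Trusted" -NetAdapterName "VMNet-Trusted" -AllowManagementOS $false
New-VMSwitch -Name "VM-DMZ"     -NetAdapterName "VMNet-DMZ"     -AllowManagementOS $false

# Host iSCSI stays off the vSwitch and uses MPIO across the Fabric A/B vNICs
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI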

The example above shows iSCSI presented for both host and guest use, plus SMB for host storage. FCoE storage (preferred) would be provided by HBAs and would not appear in your Ethernet NIC list.

If you have a NetApp and the System Center suite as well, then you should be reading up on FlexPod here.

Hyper-V – Don’t forget these simple administrative points! Part 2.

Continued from Part 1 – here.

8. DO use SCCM/WSUS for patching

For heaven’s sake, DON’T have firewall rules in your environment that allow all servers to talk outbound to the internet. WHEN attackers infiltrate your environment, you’ll have given them an easy exit with your data.

Block all internet-bound traffic from your servers and monitor the blocks. Allow only key services out, by exception, and send them via a proxy. Let WSUS collect the patches you need and distribute them internally in a safe and controlled manner.
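
For reference, the client side of the WSUS piece is normally delivered by GPO; the registry equivalent looks roughly like the sketch below (the server URL and port are made-up internal values):

# Sketch of the WSUS client settings a GPO would normally push. Placeholder URL/port.
$wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
New-Item -Path $wu -Force | Out-Null
New-Item -Path "$wu\AU" -Force | Out-Null
Set-ItemProperty -Path $wu -Name WUServer       -Value "http://wsus.internal:8530"
Set-ItemProperty -Path $wu -Name WUStatusServer -Value "http://wsus.internal:8530"
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -Type DWord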

9. DO use VLANs on your Hyper-V switches – get VMM to do the donkey work

Networks and IP pools in VMM are a wonderful thing: set a VLAN, a static MAC and static IPs, and it is all handled automatically at build time. Make the most of this feature to keep tabs on your infrastructure – there’s no need to go back and forth to your IPAM tools for VM deployments.

Removing native (untagged) networks and DHCP prevents a poorly configured VM from gaining access to anything it shouldn’t be talking to.
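
In VMM PowerShell terms, that ends up looking something like the sketch below (logical network, host group, VLAN and ranges are all illustrative):

# Sketch: a VLAN-backed logical network with a static IP pool in VMM.
$ln  = New-SCLogicalNetwork -Name "Tenant-Trusted"
$sub = New-SCSubnetVLan -Subnet "10.0.50.0/24" -VLanID 50
$def = New-SCLogicalNetworkDefinition -Name "Tenant-Trusted-Def" -LogicalNetwork $ln `
           -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -SubnetVLan $sub
New-SCStaticIPAddressPool -Name "Tenant-Trusted-Pool" -LogicalNetworkDefinition $def `
    -Subnet "10.0.50.0/24" -IPAddressRangeStart "10.0.50.10" -IPAddressRangeEnd "10.0.50.200"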

10. DON’T mix and match CPU/memory types in a cluster

Whilst it’s entirely possible to do so, you’ll end up compromising on the performance of your VMs: everything will need CPU compatibility mode enabled (which requires the VM to be shut down), as shown below.
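
For reference, compatibility mode is a per-VM processor setting and the VM has to be off when it changes; something like:

# Sketch: enable processor compatibility for migration on a (powered-off) VM.
# "App-VM01" is a placeholder name.
Stop-VM -Name "App-VM01"
Set-VMProcessor -VMName "App-VM01" -CompatibilityForMigrationEnabled $true
Start-VM -Name "App-VM01"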

You will also be adding complexity to cluster capacity planning. Draining a host with 1 TB of RAM across a selection of 256 GB hosts will be possible (provided no single VM has more than 256 GB of memory assigned), but working out whether you can sustain node failures becomes a complex calculation rather than something you can tell at a glance.

If you are purchasing new nodes to expand your environment, group them together and create a new cluster. Use live storage migrations between clusters – or, if you are using SMB3 storage, simply live migrate the VMs between clusters.

11. SQL per-CPU host licencing on Hyper-V – beware!

Example: you have a Hyper-V cluster with 10 nodes and 2 processors per node (20 processors in total), and you decide you want to use 2 nodes for SQL Server Enterprise licensed on a per-host-CPU basis (4 CPU licences).

In short, a VM whose licence is covered by a host could potentially exist on any of the 10 nodes in your cluster. The settings on the VM defining which host it should reside on do not matter; it could be powered on on any of the ten hosts, and as such you would likely be seen to have fallen short of the required licences.

You would be better off removing 3 nodes from your existing cluster and creating a separate Hyper-V cluster dedicated to the SQL workloads and licences: two nodes running VMs plus a single empty host (which allows for a node failure or maintenance). This clearly shows your in-use hosts and your hot standby for a failure.

If in doubt, contact a licencing specialist company and get it checked and double-checked. You do not want a bill in the post for the 8 remaining nodes 🙂

12. DO configure everything in PowerShell and DO retain it elsewhere

There are many ways to automate the build of your hosts – SCVMM/SCCM, third-party products, even LUN cloning (if you boot from SAN). However, if you are building manually for any reason at all, prepare the config in PowerShell and retain it afterwards; if the worst occurs, you can be back up and running in no time at all. A minimal example follows below.
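
What that retained config looks like will vary with your design, but even a short re-runnable fragment kept in source control or on a file share covers the basics. Everything below (adapter, team and switch names) is an example, not a standard:

# Sketch of a build fragment worth keeping with the server records.
Rename-NetAdapter -Name "Ethernet 5" -NewName "VM-Team-NIC1"
Rename-NetAdapter -Name "Ethernet 6" -NewName "VM-Team-NIC2"

New-NetLbfoTeam -Name "VM-Team" -TeamMembers "VM-Team-NIC1","VM-Team-NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

New-VMSwitch -Name "VM-Switch" -NetAdapterName "VM-Team" -AllowManagementOS $false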

13. DO have a separate VLAN and address range for your host networks and DO keep them in step!

Make life easier for yourself. If you have 10 nodes, take 10 consecutive IP addresses from each of the ranges you require (management, cluster HB, live migration etc…) and give each node the same last octet in every range. Example:

Node1:

  • Mgmt: 10.0.1.11
  • ClusterHB: 172.10.0.11
  • Live Migration: 192.168.8.11

Node2:

  • Mgmt: 10.0.1.12
  • ClusterHB: 172.10.0.12
  • Live Migration: 192.168.8.12

And so on… this will make your life easier in the long run. Keep these subnets away from any other use, so that when you grow you have more consecutive IPs available.

If you ever need to revisit and reconfigure anything remotely via PowerShell, the consistency reduces the complexity – see the sketch below.
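
Keeping the last octet in step also means per-node addressing can be derived rather than typed. A rough sketch, assuming hostnames like Node1–Node10, the example subnets above and placeholder interface aliases:

# Sketch: derive this node's addresses from its number, e.g. Node2 -> .12
$nodeNumber = [int]($env:COMPUTERNAME -replace '\D', '')
$octet      = 10 + $nodeNumber
New-NetIPAddress -InterfaceAlias "MGMT-Team"     -IPAddress "10.0.1.$octet"    -PrefixLength 24 -DefaultGateway 10.0.1.1
New-NetIPAddress -InterfaceAlias "ClusterHB-NIC" -IPAddress "172.10.0.$octet"  -PrefixLength 24
New-NetIPAddress -InterfaceAlias "LM-Team"       -IPAddress "192.168.8.$octet" -PrefixLength 24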

14. DO standardise your naming convention and use scripting/deployment tools!

As per the points above, name your network adaptors the same on every node – for example:

  • MGMT-Team-NIC1, MGMT-Team-NIC2 –> MGMT-Team
  • LM-Team-NIC1, LM-Team-NIC2 –> LM-Team
  • SMB-NIC1, SMB-NIC2
  • iSCSI-NIC1, iSCSI-NIC2

etc…

Again, when using PowerShell to build and configure, it is easy to maintain the convention, and it will save you countless config headaches over time – for example, the mapping sketch below.
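
For instance, a simple map makes the renaming repeatable on every node (the "Ethernet N" names below are placeholders for whatever the OS enumerates on your hardware):

# Sketch: apply the naming convention from a simple map on each node.
$nicMap = @{
    "Ethernet"   = "MGMT-Team-NIC1"
    "Ethernet 2" = "MGMT-Team-NIC2"
    "Ethernet 3" = "LM-Team-NIC1"
    "Ethernet 4" = "LM-Team-NIC2"
    "Ethernet 5" = "SMB-NIC1"
    "Ethernet 6" = "SMB-NIC2"
}
foreach ($old in $nicMap.Keys) {
    Rename-NetAdapter -Name $old -NewName $nicMap[$old]
}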