Hyper-V and HP c7000 Flex 10D, some thoughts…

I like the good old HP chassis: a reliable piece of kit and relatively straightforward to administer and maintain. On the host and networking side, networks are provided by interconnect modules in the rear of the chassis and are allocated over the chassis fabric to each blade.

In essence, two rear-chassis Flex 10 modules (each with 8 ports, populated with FCoE, 10GbE or 1GbE SFP+ modules) provide resilient connectivity to the server blades in the front of the chassis. Upstream you can use LACP connections for maximum throughput. Links are aggregated into shared uplink sets (SUS), and you can specify which VLAN tags are allowed across each uplink set; the resulting networks can then be dragged and dropped into server profiles to give the blades connectivity.

Each Flex module can present 4 VFs (virtual functions) to a server. These can be 3 x Ethernet + 1 x FCoE, 4 x Ethernet, and so on. You allocate them via a server profile in Virtual Connect Manager.

As these are mirrored across the two modules for availability, you don’t have 8 independent Ethernet connections to play with – you have 4 pairs. So, how best to utilise these connections? Answer: it depends.

Pair 4. If you have FCoE or iSCSI storage, you are immediately going to lose one pair of connections to carry it, leaving you with 3 pairs.

Pair 3. Now you are left with choices about how to split up your traffic. These decisions will be driven by the back-end network infrastructure connecting to the Flex 10s. If you have physically separate networks for Internet-facing traffic (IF/DMZ), then you will have at least one pair of ports on the Flex side connected to them, and you will therefore be forced to give up another pair on the host side.

Pairs 2 and 1. You still need host management, Live Migration, cluster heartbeat and trusted/non-Internet traffic for your VMs from the remaining 2 pairs of NICs. However you go about this, you will likely end up using LBFO teams and vSwitches (2012 R2) or SET (2016) over these pairs, carving out vEthernets for management, Live Migration, heartbeat and VM networks – see the sketch below.
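
For illustration, here is a minimal PowerShell sketch of that converged setup. The NIC, team, switch and vEthernet names and the VLAN IDs are placeholders for your own values, not anything specific to this kit.

# 2012 R2: LBFO team over one pair, then a converged vSwitch on top of the team interface.
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# 2016: Switch Embedded Teaming instead - no LBFO team required.
# New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC3","NIC4" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Carve out host vEthernets for management, Live Migration and cluster heartbeat.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Tag each vEthernet onto its VLAN (example IDs only).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 30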

Just be cautious about how you allocate bandwidth to the host network functions. Remember that inter-node CSV traffic is vital to maintaining storage connectivity, and you don’t want your design to allow an out-of-control VM to consume all the available bandwidth and choke host connectivity.
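
One way to enforce that is with minimum bandwidth weights on the converged switch. A sketch, assuming the Weight-mode switch and vEthernet names from the example above; the numbers are purely illustrative:

# Relative shares, not hard caps - host functions keep a guaranteed slice under contention.
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 25

# Everything else (the VM traffic) falls into the default flow, so one busy VM can't starve the host vEthernets.
Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 60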

Likewise, a live migration needs to be fast, but not to the detriment of your workloads. Test, find the balance, tune, and then roll the settings out across the rest of your hosts.
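
A couple of knobs worth knowing for that tuning exercise; the values are illustrative, not recommendations:

# Cap how many live migrations a host will perform at once.
Set-VMHost -MaximumVirtualMachineMigrations 2

# If live migration runs over SMB, an SMB bandwidth limit can stop it flooding the link
# (requires the SMB Bandwidth Limit feature: Install-WindowsFeature FS-SMBBW).
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB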

More reading on QoS in Switch Embedded Teaming is here…

 

 


Author: Hyper-Vine

Microsoft Certified Solutions Expert - Cloud Platform & Infrastructure. System Center Admin - 2007, 2012 & 2016. Infrastructure Consultant.
