Cisco UCS: Virtual Interface Cards & VM-FEX

Hello once again! Today I decided to talk about some Cisco innovations around the UCS platform. I’m going to try my best to keep this post high-level and EASY to understand, as most things “virtual” can get fairly complex.

First up is the Virtual Interface Card (VIC). This is Cisco’s answer to 1:1 mapped mezzanine cards in blade servers and other “virtual connectivity” mezzanine solutions. Instead of having a single MEZZ/NIC mapped to a specific internal/external switch/interconnect, we developed an adapter optimized for virtualized environments. At the heart of this technology are FCoE and 10GBASE-KR backplane Ethernet. In the case of the VIC 1240, we have 4x 10G connections to the FEX, and that traffic rides FCoE until it reaches the fabric interconnect outside the chassis. The internal mapping to the server/blade allows you to dynamically create up to 128 PCIe virtual interfaces. Now here is the best part: you can define the interface type (NIC/HBA) and the identity (MAC/WWN). What does that mean? Easy policy-based, stateless, and agile server provisioning. Does one really need 128 interfaces per server??? Perhaps in an ESX host you want the “flexibility and scale”. Oh yeah, there is ANOTHER VIC that supports 256 vNICs and has 80Gbps to the backplane!!! That model is the VIC 1280.
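To make the “policy-based, stateless” idea concrete, here is a rough Python sketch of what a vNIC definition boils down to. To be clear, this is just an illustration of the concept, not real UCS Manager code; the class and field names are mine (the 00:25:B5 MAC prefix is the familiar UCS default pool prefix).

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative model only: each PCIe virtual interface the VIC presents is
# just an interface type plus an identity, defined in policy rather than
# burned into the hardware. Not an actual UCS API.

@dataclass
class VirtualInterface:
    if_type: Literal["NIC", "HBA"]   # presented to the OS as Ethernet or FC
    identity: str                    # MAC for a NIC, WWN for an HBA
    fabric: Literal["A", "B"]        # active path on the A or B side

# A service profile carves vNICs/vHBAs out of the VIC's pool of interfaces.
esx_host_profile = [
    VirtualInterface("NIC", "00:25:B5:00:00:01", "A"),          # management
    VirtualInterface("NIC", "00:25:B5:00:00:02", "B"),          # vMotion
    VirtualInterface("HBA", "20:00:00:25:B5:00:00:03", "A"),    # boot LUN
]

# Because identities live in the profile, the same blade can be re-provisioned
# with a different profile (different MACs/WWNs) without touching hardware.
```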

NOTE: 8 interfaces are reserved for internal use on both the 128- and 256-interface VICs, and the actual number of vNICs presented to the server may be limited by the OS.

Update: 

Just had a great conversation with a customer today and I want to take a minute to break down the math.

Today we have the 2208 FEX (I/O) module for the 5108 chassis. Each one supports 80G (8×10G) of uplinks to the Fabric Interconnect. This gives a total of 160G to each chassis if all uplinks are utilized.

On the back side of each 2208 I/O module are 32 10G ports (downlinks), for a total of 320G to the midplane. We are now at 640G total (A/B side). Take the total number of blades per chassis and multiply it by 80G: 8 (blades) * 80G (eight 10G traces per blade) = 640G. 🙂

Just keep in mind that the eight traces to each blade are 4x10G on the (A) side and 4x10G on the (B) side.
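If you want to double-check the math yourself, here it is as a few lines of plain Python. The constants are just the numbers from this post, nothing pulled from a UCS tool:

```python
# Chassis bandwidth sanity check for a 5108 with two 2208 I/O modules.
UPLINKS_PER_2208 = 8             # 8x10G to the Fabric Interconnect
DOWNLINKS_PER_2208 = 32          # 32x10G to the midplane
PORT_SPEED_G = 10
FEX_PER_CHASSIS = 2              # one per side (A and B)
BLADES_PER_CHASSIS = 8
TRACES_PER_BLADE_PER_SIDE = 4    # 4x10G on (A) + 4x10G on (B)

uplink_total = FEX_PER_CHASSIS * UPLINKS_PER_2208 * PORT_SPEED_G
midplane_total = FEX_PER_CHASSIS * DOWNLINKS_PER_2208 * PORT_SPEED_G
per_blade = 2 * TRACES_PER_BLADE_PER_SIDE * PORT_SPEED_G
all_blades = BLADES_PER_CHASSIS * per_blade

print(f"Uplinks to FIs: {uplink_total}G")     # 160G
print(f"Midplane total: {midplane_total}G")   # 640G
print(f"Per blade:      {per_blade}G")        # 80G
print(f"All blades:     {all_blades}G")       # 640G -- matches the midplane
```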

OK great, I’ve got all this bandwidth in the chassis, so what can I do with it? How about we carve out some vNICs. With the VIC 1240 mezz card you get 128 vNICs and 40Gb to the fabric. Not good enough? How about the VIC 1280 with 256 vNICs and 80Gb to the fabric. Just remember that each vNIC has an active path mapped to one side (A/B) and can fail over to the other side in the event of an issue. All the vNICs active on the (A) side share a hardware port channel, and the same holds true for the (B) side vNICs.
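Here is a tiny illustrative model of that A/B behavior (hypothetical names, just to show the active/standby mapping):

```python
# Each vNIC is pinned to an active fabric and fails over to the other side.
# The active-side traffic rides the hardware port channel of 4x10G traces.
vnics = {
    "vnic0": {"active": "A", "standby": "B"},
    "vnic1": {"active": "B", "standby": "A"},
}

def surviving_path(vnic: str, failed_side: str) -> str:
    """Return the fabric a vNIC uses after one side fails."""
    path = vnics[vnic]
    return path["standby"] if path["active"] == failed_side else path["active"]

# If fabric A fails, vnic0 fails over to B; vnic1 was already active on B.
assert surviving_path("vnic0", "A") == "B"
assert surviving_path("vnic1", "A") == "B"
```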

So Shaun, what’s your point with all this math? Choice and flexibility. You want 20Gb to the blade, you got it. You want 40G to the blade, done. 80G to the blade, no problem. 160G to the blade? OK, but it has to be a full-width blade. <GRIN>
