Game On Old Friend
It’s hard to believe that it’s been almost 2 years since I passed the R/S lab and my digits (40755) were assigned. I remember the numbers had just passed 40k, and I was really hoping to get 40007.
This way I could be 007. <GRIN>
Now I’m ready for the next challenge. My motivation for CCIE DC was simple. First, I wanted to challenge myself yet again. Second, I feel strongly that a deep understanding of UCS and virtualization helps me stay relevant in private cloud conversations, which all the cool kids are having. Finally, I suck at storage. Storage is to me what green kryptonite is to Clark.
All that said, I also miss behind-the-wheel configuration and troubleshooting. I’m a pre-sales SE and spend most of my time these days in design sessions, product updates, and evangelizing new solutions. What better way to get serious hands-on time than a CCIE lab?
Right before Christmas 2014, I took the CCIE DC written and failed it by one or two questions. I was so upset about carrying that disappointment through the holidays. January 8th was my date of redemption, and I passed with a 953/1000.
I purchased workbooks from INE, leveraged their All Access Pass program, and have about half the lab gear in one of our Cisco offices. I just don’t have enough juice. <FACEPALM>
I’m also going to leverage VIRL and UCS Emulator for my studies.
Now it’s time to lock down and get this lab banged out in November. T-Minus 4 months… #TickTock
CCIE Data Center Lab Exam v1.0
Lab Equipment and Software Versions
Passing the lab exam requires a depth of understanding difficult to obtain without hands-on experience. Early in your preparation you should arrange access to equipment similar to that used on the exam, and listed below.
The lab exam tests any feature that can be configured on the equipment and the NXOS versions indicated below. Occasionally, you may see more recent NXOS versions installed in the lab, but you will not be tested on the new features of a release unless indicated below.
- Cisco Catalyst Switch 3750
- Cisco 2511 Terminal Server
- MDS 9222i
- (1) Sup
- (1) 32 Port 10Gb (F1 Module)
- (1) 32 Port 10Gb (M1 Module)
- Nexus 1000v
- UCS C200 Series Server
- UCS-6248 Fabric Interconnects
- UCS-5108 Blade Chassis
- B-200 Series Blades
- Palo mezzanine card
- Emulex mezzanine card
- Cisco Application Control Engine Appliance – ACE4710
- Dual attached JBODs
- NXOS v6.x on Nexus 7000 Switches
- NXOS v5.x on Nexus 5000 Switches
- NXOS v4.x on Nexus 1000v
- NXOS v5.x on MDS 9222i Switches
- UCS Software release 2.x Fabric Interconnect
- Software Release A5(1.0) for ACE 4710
- Cisco Data Center Manager software v5.x
Today was a BIG day for us at Cisco. We announced our next wave of UCS products and continue building our data center innovation superhighway. Did we announce one product? NO! We announced four major UCS products today at #UCSGRANDSLAM and it was AWESOME! I had known about this stuff for months but had to keep quiet. As you can imagine, I was at the point of imploding because I just wanted to share this info with EVERYONE. Here is a quick recap of the UCS portfolio expansion announced today.
- UCS Mini provides the full power of Cisco Unified Computing in a smaller, all-in-one solution that is simple and easy to manage, yet expandable. Great for IoT/IoE local processing (fog computing) and ROBO customers.
- UCS M-Series Modular Servers for Online Content Providers and Cloud Service Providers and for distributed applications in Industrial High Performance Computing (HPC) and Enterprise Grid. What about dedicated hosting and cloud services?
- Cisco UCS C3160 Rack Server is a modular, capacity-optimized solution ideal for distributed data analytics, unstructured data repositories and media streaming and transcoding. I have one customer looking at this now for vSAN.
- Cisco M4 Generation UCS Rack and Blade Servers are armed with the latest processing power providing increased performance, efficiency and computing density. Intel Haswell architecture, E5 v3.
All that said, I’m ecstatic about today’s announcement and can’t wait to hear from our customers on the challenges that can be overcome with these latest additions to the UCS family. I think about five short years ago when naysayers said Cisco had NO PLACE IN THE SERVER MARKET. They were WRONG! We are #1 in the US and #2 worldwide in the x86 blade server market. I’m confident we’ll be the #1 server vendor worldwide in no time at all.
As soon as the video of today’s announcement is posted, I’ll link it here. Stay tuned!
Hello once again! Today I decided to talk about some Cisco innovations around the UCS platform. I’m going to try my best to keep this post high-level and EASY to understand, as most things “virtual” can get fairly complex.
First up is the Virtual Interface Card (VIC). This is Cisco’s answer to 1:1 mapped mezzanine cards in blade servers and other “virtual connectivity” mezzanine solutions. Instead of having a single MEZZ/NIC mapped to a specific internal/external switch/interconnect, we developed a vNIC optimized for virtualized environments. At the heart of this technology are FCoE and 10GBASE-KR backplane Ethernet. In the case of the VIC 1240, we have 4x 10G connections to the FEX; this connectivity is FCoE until the traffic reaches the fabric interconnect outside the chassis. The internal mapping to the server/blade allows you to dynamically create up to 128 PCIe virtual interfaces. Now here is the best part: you can define the interface type (NIC/HBA) and the identity (MAC/WWN). What does that mean? Easy policy-based, stateless, and agile server provisioning. Does one really need 128 interfaces per server??? Perhaps in an ESX host you want the “flexibility and scale.” Oh yeah, there is ANOTHER VIC that supports 256 vNICs and has 80Gbps to the backplane!!! That model is the VIC 1280.
NOTE: 8 interfaces are reserved on both the 128/256 VICs for internal use and the actual number of vNICs presented to the server may be limited by the OS.
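The note above boils down to simple subtraction. Here is a back-of-napkin sketch (model names and counts taken from this post; the OS may present even fewer):

```python
# vNIC budget on the VIC 1240 / VIC 1280, per the note above:
# 8 of the PCIe virtual interfaces on each card are reserved for internal use.
TOTAL_INTERFACES = {"VIC 1240": 128, "VIC 1280": 256}
RESERVED = 8

# Usable count = total PCIe virtual interfaces minus the reserved ones.
usable = {model: total - RESERVED for model, total in TOTAL_INTERFACES.items()}

for model, count in usable.items():
    print(f"{model}: {count} vNICs/vHBAs presentable to the server (at most)")
```

So even after the reservation, you still have well over 100 interfaces per card to carve up.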
Just had a great conversation with a customer today and I want to take a minute to break down the math.
Today we have the 2208 FEX (I/O) module for the 5108 chassis. Each one supports 80G (8x10G) of uplinks to the Fabric Interconnect. This gives a total of 160G to each chassis if all uplinks are utilized.
On the back side of each 2208 I/O are 32 10G ports (downlinks), for a total of 320G to the midplane. We are now at 640G total (A/B side). Take the total number of blades per chassis and multiply that by 80G: 8 (blades) * 80G (eight traces of 10G per blade) = 640G. 🙂
Just keep in mind that the eight traces to each blade are 4x10G on the (A) side and 4x10G on the (B) side.
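The math above can be sketched in a few lines. This is just my arithmetic from the numbers in this post, not an official sizing tool:

```python
# 5108 chassis bandwidth math with two 2208 FEX modules (A side + B side).
UPLINKS_PER_2208 = 8      # 8x10G uplinks per FEX to the Fabric Interconnect
DOWNLINKS_PER_2208 = 32   # 32x10G downlinks per FEX to the midplane
LINK_GBPS = 10
FEX_PER_CHASSIS = 2       # A side and B side
BLADES = 8
TRACES_PER_BLADE = 8      # 4x10G to the A side + 4x10G to the B side

uplink_total = UPLINKS_PER_2208 * LINK_GBPS * FEX_PER_CHASSIS      # 160G northbound
midplane_total = DOWNLINKS_PER_2208 * LINK_GBPS * FEX_PER_CHASSIS  # 640G to the midplane
per_blade = TRACES_PER_BLADE * LINK_GBPS                           # 80G per blade

# Sanity check: 8 blades * 80G per blade should equal the 640G midplane total.
assert midplane_total == BLADES * per_blade
print(f"Uplinks: {uplink_total}G, midplane: {midplane_total}G, per blade: {per_blade}G")
```

Note the midplane (640G) is oversubscribed 4:1 against the uplinks (160G) if every link is lit, which is the usual design trade-off.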
OK great, I’ve got all this bandwidth in the chassis. What can I do with it? How about we carve out some vNICs. With the VIC 1240 mezz card you get 128 vNICs and 40Gb to the fabric. Not good enough? How about the VIC 1280, with 256 vNICs and 80Gb to the fabric. Just remember that each vNIC will have an active path mapped to one side (A/B) and can fail over to the other side in the event of an issue. All the (A)-side active vNICs are in a hardware port channel; the same holds true for the (B)-side vNICs.
So Shaun, what’s your point with all this math? Choice and flexibility. You want 20Gb to the blade? You got it. 40G to the blade? Done. 80G to the blade? No problem. 160G to the blade? OK, but it has to be a full-width blade. <GRIN>
Cisco impresses with UCS:
If you’re tempted to think of Cisco’s Unified Computing System (UCS) as just another blade server, don’t. In fact, if you just want a bunch of blades for your computer room, don’t call Cisco; Dell, HP, and IBM all offer simpler and more cost-effective options.
But, if you want an integrated compute farm consisting of blade servers and chassis, Ethernet and Fibre Channel interconnects, and a sophisticated management system, then UCS might be for you.