Browsed by
Tag: CCIE DC

CCIE Data Center: Version 2.0

Whoa… Déjà Vu

This all seems so familiar…

OH YEA! I went through this once before already. I took the CCIE R/S version 3 with the high (naive) hopes of passing it my first attempt. #n00b

The challenge I had with the R/S v4 update was that it felt like the content managers had a serious case of ADD: open-ended questions, no open-ended questions, troubleshooting, etc. It was frustrating to experience every possible derivative of the v4 lab. I’m just glad I passed before the v5 lab blueprint was out.

Now here I am, ready to rock the lab in January, and Cisco announces a v2 lab update. Don’t get me wrong, I really dig the changes. I only wish it had happened sooner, so I’d be studying the new (relevant) curriculum.

Let’s start out with the domain changes.

Domain comparison between CCIE Data Center v1.0 and CCIE Data Center v2.0

CCIE Data Center v1.0

  1. Cisco Data Center Architecture
  2. Cisco Data Center Infrastructure-Cisco NX-OS
  3. Cisco Storage Networking
  4. Cisco Data Center Virtualization
  5. Cisco Unified Computing System
  6. Cisco Application Networking Services

CCIE Data Center v2.0

  1. Cisco Data Center L2/L3 Technologies
  2. Cisco Data Center Network Services
  3. Data Center Storage Networking and Compute
  4. Data Center Automation and Orchestration
  5. Data Center Fabric Infrastructure
  6. Evolving Technologies

Thoughts: The focus is on skills & technologies vs. hardware. I like what I see so far. You still need to possess design, implementation, and troubleshooting skills, just with less emphasis on knowing all the intricacies of a certain product. Adding things like automation, cloud, and ACI to the blueprint is a VERY good idea since those subjects are top of mind with customers.

Topics no longer included in CCIE Data Center v2.0

  • Implement Data Center application high availability and load balancing
  • Implement FCIP features

Thoughts: No more ACE/WAAS/FCIP. Yea, that’s a good thing considering ACE went EoL back in 2013. I just don’t see enough customers using FCIP these days, so I guess that’s also a good one to remove.

Lab Equipment & Software List

[Image: CCIE DC v1.0 vs. v2.0 lab equipment & software list]

Thoughts: If you look at the updated 2.0 lab hardware, there is no MDS at all. Goodbye 9222i, you will be missed. IP Storage FTW!

The new thing that catches my eye is the update to the next-gen FEX (2300) and N5K (5600). I’m very happy about this, as the 5672 has been a great (low latency, ~1us) L2/native L3 ToR for storage. Deep buffers (25MB per 12 ports of 10G) help, and it doesn’t hurt that this switch supports unified ports (Ethernet/FC/FCoE).

The servers have been refreshed to M4s, the M-Series (cloud-scale workloads) chassis has been added, and the Emulex mezzanine card has been removed.

Now my favorite part: the networking gear update. N9Ks + ACI were added, and the 7K was updated to a 7004 with SUP2E (more VDCs) and F3s. I’m glad to see the M/F line-card mix replaced, given the complexity of having to remember which cards had what capabilities. The F1s really needed to go!

The Diagnostic Module

Thoughts: This is probably the most controversial change. I know this is the direction to align with the other CCIE tracks; however, this is also the area where many candidates will have MANY questions.

Let me post (inline) all that I have on the subject. In many ways this feels like real-world scenarios: I get this all the time from customers, and it’s like figuring out a puzzle. I love doing this in the real world; I just hope the exam’s diagnostic section captures that experience naturally.

Diagnostic Module Details

The new Diagnostic module, which is 60 minutes long, assesses the skills required to properly diagnose network issues without device access. These skills include:

  • Analyze
  • Correlate – discerning multiple sources of documentation (for example, e-mail threads, network topology diagrams, console outputs, logs, and even traffic captures)

In the Diagnostic module, candidates need to choose between pre-defined options to indicate:

  • What the root cause of an issue is
  • Where the issue is located in the diagram
  • Which critical piece of information allows us to identify the root cause
  • Which piece of information is missing to be able to identify the root cause

The Configuration and Troubleshooting module consists of one topology, similar to CCIE Data Center v1.0, and is seven hours long. At the beginning of the module, the candidate has a full overview of the entire module and can choose whether or not to work on items in sequence, depending on the candidate’s comfort level, the overall scenario, and question interdependencies.

The Diagnostic module and the Configuration and Troubleshooting module are delivered in a fixed sequence: the candidate starts the day with the one-hour Diagnostic module, followed by the seven-hour Configuration and Troubleshooting module. The entire Lab exam lasts up to eight hours. Note that candidates are not allowed to go back and forth between modules.

For the Diagnostic module, no device access is provided. Candidates are given various pieces of information (for example, emails, debug outputs, and network diagrams: the kind of information provided to a Data Center support engineer assisting a customer in determining the root cause of an issue, or shared by a colleague who is stuck troubleshooting an issue).

Within the Diagnostic module, items are presented in a format similar to the Written exam. The module includes multiple-choice, drag-and-drop, and even point-and-click style items. The major difference between the Written exam and the Diagnostic module is that items in the Diagnostic module (called troubleshoot tickets) contain a set of documents that the candidate must consult in order to understand and identify the root cause of the issue presented. Candidates need to analyze and correlate information (after discerning between valuable and worthless pieces of information) in order to make the right choice among the pre-defined options provided.

The troubleshoot tickets will not require candidates to type in order to provide the answer. All tickets will be close-ended so grading will be deterministic, ensuring a fair and consistent scoring process. The new module allows us to grant credit to candidates who are able to accurately identify the root cause of a networking issue, but fail to resolve it within specific constraints (as in the Configuration and Troubleshooting module).

Real-life experience is certainly the best training to prepare for this module. Candidates with limited experience should focus on discovering, practicing and applying efficient and effective troubleshooting methodologies that are used for any realistic networking challenge.

Passing Criteria

In order to pass the Lab exam, the candidate must meet both of the following conditions:

  • The minimum cut-score of each individual module must be achieved.
  • The total score of both modules together must be above the minimum value of the combined cut-score.

The point value(s) of the items in each module are known to the candidate. Note that points are only granted when all requirements (and, where present, restrictions) of an item are met. There is no partial scoring for any item.


Closing Thoughts: I would like to think that I’ll pass the CCIE DC 1.0 lab on the 1st attempt this January. If not, I’ll have until July 22nd to pass the current blueprint. After that… I’ll have to figure out if I want to adapt and conquer v2 or just move on to something else like the CCDE.

Important Dates:

 

CCIE Data Center Written Exam v1.0 (350-080 CCIE DC)

Last day to test: July 22, 2016

CCIE Data Center Lab Exam v1.0

Last day to test: July 22, 2016

 

CCIE Data Center Written Exam v2.0 (400-151 CCIE DC)

Available for testing: July 25, 2016

CCIE Data Center Lab Exam v2.0

Available for testing: July 25, 2016

Reference Links: https://learningcontent.cisco.com/cln_storage/text/cln/marketing/ccie-dc-examtopic-delta-v1-v2-01.pdf

CCIE Data: Lab Blueprint 1.1c Implementing Port Channels

CCIE Data Center Lab Blueprint

1.1c Implementing Port Channels

 

ConfigBytes #2

Port Channels

A port channel bundles physical links into a channel group to create a single logical link that provides the aggregate bandwidth of up to 16 physical links. If a member port within a port channel fails, the traffic previously carried over the failed link switches to the remaining member ports within the port channel.

  • F and M Series line card ports cannot be mixed in a port channel.
  • On a single switch, the port-channel compatibility parameters (speed, duplex, etc.) must be the same among all port-channel members on the physical switch.
  • Use port channels for resiliency and aggregation of throughput.
  • 8 member links per port channel prior to NX-OS 5.1
  • 16 member links with NX-OS 5.1 and later
  • L2 & L3 port channels are available on NX-OS
  • Port-channel interface ID range is 1-4096
  • Configuration changes made to the logical port-channel interface are inherited by the individual member interfaces.
  • You can use static port channels, with no associated aggregation protocol, for a simplified configuration. For more flexibility, use LACP. When you use LACP, the link passes protocol packets. You cannot configure LACP on shared interfaces.
  • PAgP is NOT supported on NX-OS
  • The port channel is operationally up when at least one of the member ports is up and that port’s status is channeling. The port channel is operationally down when all member ports are operationally down.
Note: After a Layer 2/3 port becomes part of a port channel, all configuration must be done on the port channel; you can no longer apply configuration to individual port-channel members.
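The basics above can be sketched as a minimal NX-OS configuration (interface and channel-group numbers are examples only, not from any lab topology):

```
feature lacp

! Layer 2 port channel using LACP
interface Ethernet1/1-2
  switchport
  channel-group 10 mode active

! Layer 3 static port channel (no aggregation protocol)
interface port-channel 20
  no switchport
  ip address 10.1.1.1/30
interface Ethernet1/3-4
  no switchport
  channel-group 20 mode on
```

Remember that once the members are bundled, further changes go on the port-channel interface, not on the individual members.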

Compatibility Requirements

When you add an interface to a channel group, the software checks certain interface attributes to ensure that the interface is compatible with the channel group. For example, you cannot add a Layer 3 interface to a Layer 2 channel group. The Cisco NX-OS software also checks a number of operational attributes for an interface before allowing that interface to participate in the port-channel aggregation.

The compatibility check includes the following operational attributes:

  • (Link) speed capability
  • Access VLAN
  • Allowed VLAN list
  • Check rate mode
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration
  • Layer 3 ports—Cannot have subinterfaces
  • MTU size
  • Media type, either copper or fiber
  • Module Type
  • Network layer
  • Port mode
  • SPAN—Cannot be a SPAN source or a destination port
  • Speed configuration
  • Storm control
  • Tagged or untagged
  • Trunk native VLAN

Use the show port-channel compatibility-parameters command to see the full list of compatibility checks that Cisco NX-OS uses.

 

You can only add interfaces configured with the channel mode set to on to static port channels, and you can only add interfaces configured with the channel mode as active or passive to port channels that are running LACP. You can configure these attributes on an individual member port. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel.

 

Alternatively, you can force ports with incompatible parameters to join the port channel if the following parameters are the same:

  • (Link) speed capability
  • Speed configuration
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration
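As a sketch, forcing a member with otherwise incompatible parameters into an existing group looks like this (interface and group numbers are examples):

```
interface Ethernet1/5
  channel-group 10 force
```

Even after a force, if an incompatible attribute is configured on the port, the software will still suspend that port in the port channel.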

 

Port Channel Load Balancing

  • Port channels provide load balancing by default
  • Port-channel load balancing uses L2 (MAC), L3 (IP), or L4 (port) information to select the link
  • Source, destination, or both source and destination
  • Per switch (global) or per module; per-module configuration takes precedence over per-switch
  • L3 default is src/dst IP address
  • L2/non-IP default is src/dst MAC address
  • 6.0(1) added F Series line card L2 load balancing
  • Must be in the default VDC to configure

You can configure load balancing either for the entire system or for specific modules. Port-channel load balancing is a global setting that applies across all VDCs.
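As a sketch, the commands involved look like this (the module number is an example; configure from the default VDC):

```
! Global: hash on L3 + L4, source and destination
port-channel load-balance src-dst ip-l4port

! Per-module override (takes precedence over the global setting)
port-channel load-balance src-dst mac module 3
```

Verify the active method with show port-channel load-balance.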

If the ingress traffic is Multiprotocol Label Switching (MPLS) traffic, the software looks under the labels for the IP address on the packet.

The load-balancing algorithms that use port channels do not apply to multicast traffic. Regardless of the load-balancing algorithm you have configured, multicast traffic uses the following methods for load balancing with port channels:

  • Multicast traffic with Layer 4 information—Source IP address, source port, destination IP address, destination port
  • Multicast traffic without Layer 4 information—Source IP address, destination IP address
  • Non-IP multicast traffic—Source MAC address, destination MAC address
Note: Devices that run Cisco IOS can optimize the behavior of the member port ASICs when a single member fails if you enter the port-channel hash-distribution command. The Cisco Nexus 7000 Series device performs this optimization by default and does not require or support this command.

Cisco NX-OS Release 6.1(3) supports a new Result Bundle Hash (RBH) mode to improve load balancing on port-channel members on Cisco Nexus 7000 M Series I/O XL modules and on F Series modules. With the new RBH modulo mode, the RBH result is based on the actual count of port-channel members.

 

LACP


  • LACP is disabled by default; you must enable the feature first (feature lacp)
  • Up to 16 active interfaces with NX-OS 5.1 and later
  • 8 active, 8 standby before 5.1
  • Modes are active, passive, or on (static port channel, no LACP)
  • On mode (a static port channel) is the DEFAULT mode

Both the passive and active modes allow LACP to negotiate between ports to determine if they can form a port channel based on criteria such as the port speed and the trunking state.

 

The passive mode is useful when you do not know whether the remote system, or partner, supports LACP.

 

Ports can form an LACP port channel when they are in different LACP modes if the modes are compatible as in the following examples:

 

  • A port in active mode can form a port channel successfully with another port that is in active mode.
  • A port in active mode can form a port channel with another port in passive mode.
  • A port in passive mode cannot form a port channel with another port that is also in passive mode, because neither port will initiate negotiation.
  • A port in on mode is not running LACP and cannot form a port channel with another port that is in active or passive mode.

 

The LACP system ID is the combination of the LACP system priority and the MAC address. System priority values range from 1 to 32,768; a lower value means a higher system priority, with 1 being the highest.

 

Port priority values range from 1 to 65,535. Port priority + port number (interface ID) = LACP port ID. A lower port ID value means a higher priority to be chosen as a forwarding/active link vs. a standby link. The default port priority is 32,768.
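A quick sketch of both knobs (the values and interface are examples only):

```
! Lower value = higher system priority
lacp system-priority 100

! Prefer this member for the active (vs. standby) role
interface Ethernet1/1
  lacp port-priority 200
```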

 

Prerequisites for Port Channeling

Port channeling has the following prerequisites:

  • You must be logged onto the device.
  • If necessary, install the Advanced Services license and enter the desired VDC.
  • All ports in the channel group must be in the same VDC.
  • All ports for a single port channel must be either Layer 2 or Layer 3 ports.
  • All ports for a single port channel must meet the compatibility requirements. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • You must configure load balancing from the default VDC.

Guidelines and Limitations

Port channeling has the following configuration guidelines and limitations:

  • The LACP port-channel minimum links and maxbundle feature is not supported for host interface port channels.
  • You must enable LACP before you can use that feature.
  • You can configure multiple port channels on a device.
  • Do not put shared and dedicated ports into the same port channel. (See “Configuring Basic Interface Parameters,” for information about shared and dedicated ports.)
  • For Layer 2 port channels, ports with different STP port path costs can form a port channel if they are compatibly configured with each other. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • In STP, the port-channel cost is based on the aggregated bandwidth of the port members.
  • After you configure a port channel, the configuration that you apply to the port channel interface affects the port channel member ports. The configuration that you apply to the member ports affects only the member port where you apply the configuration.
  • LACP does not support half-duplex mode. Half-duplex ports in LACP port channels are put in the suspended state.
  • You must remove the port-security information from a port before you can add that port to a port channel. Similarly, you cannot apply the port-security configuration to a port that is a member of a channel group.
  • Do not configure ports that belong to a port channel group as private VLAN ports. While a port is part of the private VLAN configuration, the port channel configuration becomes inactive.
  • Channel member ports cannot be a source or destination SPAN port.
  • You cannot configure the ports from an F1 and an M1 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • You cannot configure the ports from an M1 and M2 series linecard in the same port channel.
  • You cannot configure the ports from an F2e and an F3 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • Beginning with Cisco NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F1 series linecard.
  • F1 Series modules do not support load balancing of non-IP traffic based on a MAC address. If ports on an F1 Series module are used in a port channel and non-IP traffic is sent over the port channel, Layer 2 traffic might get out of order.
  • Only F Series and the XL type of M Series modules support the RBH modulo mode.
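The LACP minimum-links and maxbundle features called out above are configured on the port-channel interface; a sketch with example values:

```
interface port-channel 10
  lacp min-links 2
  lacp max-bundle 8
```

With min-links, the port channel stays down until at least that many member links bundle; max-bundle caps the number of active members, with the remainder held in standby.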

 

Feature History for Configuring Port Channels

  • 6.2(2) – Display policy errors on interfaces and VLANs: added the show interface status error policy command.
  • 6.2(2) – Prevent traffic drop during bidirectional flow on F2 or F2e modules: added the asymmetric keyword to the port-channel load-balance command to improve load balancing across port channels.
  • 6.1(3) – Result Bundle Hash load balancing: support for the RBH modulo mode to improve load balancing across port channels.
  • 6.1(3) – Minimum links for FEX fabric port channel: this feature was introduced.
  • 6.1(1) – Port-channel hash distribution: support for fixed and adaptive modes.
  • 6.0(1) – Load balancing: added support for F2 modules.
  • 5.2(1) – Port channels: support increased to 528 port channels.
  • 5.1(1) – Minimum links and maxbundle for LACP: this feature was introduced.
  • 4.2(1) – Port channels: support increased to 256 port channels.
  • 4.0(1) – Port channels: this feature was introduced.

 

Example Lab Question and Configuration

 

Port Channel Task

Assuming that more links will be added later, with the desire for minimal traffic disruption (LACP), configure the following:

Configure trunking on port channel 100 from N7K1 to UCS FI-A, and ensure that the same port channel number is used later from the UCS side.

 

interface Ethernet1/22
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 100 mode active   ! LACP
  no shutdown

 

Configure trunking on port channel 200 from N7K1 to UCS FI-B, and ensure that the same port channel number is used later from the UCS side.

 

interface Ethernet1/24
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 200 mode active   ! LACP
  no shutdown

 

Ensure that both of these port channels transition immediately to a state of forwarding traffic.

interface port-channel 100
  spanning-tree port type edge trunk

interface port-channel 200
  spanning-tree port type edge trunk

 

Ensure that the N7K1 is the primary device in LACP negotiation. Ensure that the hashing algorithm takes L3 and L4 for both source and destination into account.

lacp system-priority 1   ! lower system priority value = higher priority (range 1-32768)

port-channel load-balance src-dst ip-l4port

 

Trunk only previously created VLANs 100,200,300,400,500 southbound from N7K1 to both FIs.

 

Verify with show port-channel summary.
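Beyond the summary, a few standard NX-OS show commands are worth having at your fingertips here:

```
show port-channel summary
show port-channel traffic
show port-channel load-balance
show lacp neighbor
show lacp counters
```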

 

DocCD: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/interfaces/configuration/guide/b-Cisco-Nexus-7000-Series-NX-OS-Interfaces-Configuration-Guide/b-Cisco-Nexus-7000-Series-NX-OS-Interfaces-Configuration-Guide-6x_chapter_0111.html