
The Journey to CCIE #2 Starts Now


Game On Old Friend


It’s hard to believe that it’s been almost 2 years since I passed the R/S lab and my digits (40755) were assigned. I remember the numbers had just passed 40k, and I was so hoping to get 40007.

This way I could be 007. <GRIN>

Now I’m ready for the next challenge. My motivation for CCIE DC was simple. First, I wanted to challenge myself yet again. Second, I feel strongly that a deep understanding of UCS and virtualization helps me stay relevant in the private cloud conversations all the cool kids are having. Finally, I suck at storage. Storage is to me what green kryptonite is to Clark.


All that said, I also miss behind-the-wheel configuration and troubleshooting. I’m a pre-sales SE and spend most of my time these days in design sessions, product updates, and evangelizing new solutions. What better way to get serious hands-on time than a CCIE lab?

Right before Christmas 2014, I took the CCIE DC written and failed it by 1-2 questions. I was so upset about carrying that disappointment through the holidays. Jan 8th was my date of redemption and I passed with a 953/1000.

I purchased workbooks from INE and leveraged their All Access Pass program, and I have about half the lab gear in one of our Cisco offices. Just don’t have enough juice. <FACEPALM>

I’m also going to leverage VIRL and the UCS Platform Emulator for my studies.

Now it’s time to lock down and get this lab banged out in November. T-Minus 4 months… #TickTock


CCIE Data Center Lab Exam v1.0 

Lab Equipment and Software Versions

Passing the lab exam requires a depth of understanding difficult to obtain without hands-on experience. Early in your preparation you should arrange access to equipment similar to that used on the exam, and listed below.

The lab exam tests any feature that can be configured on the equipment and the NXOS versions indicated below. Occasionally, you may see more recent NXOS versions installed in the lab, but you will not be tested on the new features of a release unless indicated below.

  • Cisco Catalyst 3750 Switch
  • Cisco 2511 Terminal Server
  • MDS 9222i
  • Nexus 7009
    • (1) Sup
    • (1) 32-port 10Gb (F1 module)
    • (1) 32-port 10Gb (M1 module)
  • Nexus 5548
  • Nexus 2232
  • Nexus 1000v
  • UCS C200 Series Server
    • VIC card for C-Series
  • UCS 6248 Fabric Interconnects
  • UCS 5108 Blade Chassis
    • B200 Series Blades
    • Palo mezzanine card
    • Emulex mezzanine card
  • Cisco Application Control Engine Appliance – ACE4710
  • Dual-attached JBODs

Software Versions

  • NXOS v6.x on Nexus 7000 Switches
  • NXOS v5.x on Nexus 5000 Switches
  • NXOS v4.x on Nexus 1000v
  • NXOS v5.x on MDS 9222i Switches
  • UCS Software release 2.x Fabric Interconnect
  • Software Release A5(1.0) for ACE 4710
  • Cisco Data Center Manager software v5.x

ACE!? Really!??!?!?


#CCIEDC

CCIE Data: Lab Blueprint 1.1c Implementing Port Channels


CCIE Data Center Lab Blueprint

1.1c Implementing Port Channels


ConfigBytes #2

Port Channels

A port channel bundles physical links into a channel group to create a single logical link that provides the aggregate bandwidth of up to 16 physical links. If a member port within a port channel fails, the traffic previously carried over the failed link switches to the remaining member ports within the port channel.
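As a quick illustration, here’s a minimal sketch of bundling two links into one logical interface (the interface and channel-group numbers are hypothetical, not from the lab):

interface ethernet 1/1-2
  channel-group 10
! NX-OS creates interface port-channel 10 automatically; with no mode
! specified, the channel mode defaults to on (static, no LACP)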

  • F and M Series line card ports cannot be mixed in the same port channel.
  • On a single switch, the port-channel compatibility parameters (speed, duplex, etc.) must be the same among all port-channel members.
  • Use port channels for resiliency and aggregation of throughput.
  • Up to 8 member links per port channel prior to NX-OS 5.1; up to 16 member links with NX-OS 5.1 and later.
  • Both L2 and L3 port channels are available on NX-OS.
  • Port-channel interface IDs range from 1 to 4096.
  • Configuration changes made to the logical port-channel interface are inherited by the individual member interfaces.
  • You can use static port channels, with no associated aggregation protocol, for a simplified configuration. For more flexibility, you can use LACP. When you use LACP, the link passes protocol packets. You cannot configure LACP on shared interfaces.
  • PAgP is NOT supported on NX-OS.
  • The port channel is operationally up when at least one of the member ports is up and that port’s status is channeling. The port channel is operationally down when all member ports are operationally down.
Note: After a Layer 2/3 port becomes part of a port channel, you can no longer apply configurations to the individual member ports; you must apply the configuration to the entire port channel.
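To make the note concrete, here’s a minimal sketch (the port-channel number and VLAN list are hypothetical): the change is made once on the logical interface and is inherited by every member port.

interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10,20
! member ports (e.g., Ethernet1/1-2) inherit the trunk settings automatically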


Compatibility Requirements

When you add an interface to a channel group, the software checks certain interface attributes to ensure that the interface is compatible with the channel group. For example, you cannot add a Layer 3 interface to a Layer 2 channel group. The Cisco NX-OS software also checks a number of operational attributes for an interface before allowing that interface to participate in the port-channel aggregation.

The compatibility check includes the following operational attributes:

  • (Link) speed capability
  • Access VLAN
  • Allowed VLAN list
  • Check rate mode
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration
  • Layer 3 ports—Cannot have subinterfaces
  • MTU size
  • Media type, either copper or fiber
  • Module Type
  • Network layer
  • Port mode
  • SPAN—Cannot be a SPAN source or a destination port
  • Speed configuration
  • Storm control
  • Tagged or untagged
  • Trunk native VLAN

Use the show port-channel compatibility-parameters command to see the full list of compatibility checks that the Cisco NX-OS uses.


You can only add interfaces configured with the channel mode set to on to static port channels, and you can only add interfaces configured with the channel mode as active or passive to port channels that are running LACP. You can configure these attributes on an individual member port. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel.


Alternatively, you can force ports with incompatible parameters to join the port channel if the following parameters are the same (see the sketch after this list):

  • (Link) speed capability
  • Speed configuration
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration
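Here’s a minimal sketch of the force option (the interface and channel-group numbers are hypothetical); the keyword only helps when the six parameters above match:

interface ethernet 1/5
  channel-group 20 force
! the port joins port-channel 20 despite otherwise incompatible attributes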


Port Channel Load Balancing

  • Port channels provide load balancing by default.
  • Port-channel load balancing uses L2 (MAC), L3 (IP), or L4 (port) information to select the outgoing link.
  • The hash can use SRC, DST, or both SRC and DST fields.
  • Configurable per switch (global) or per module; the per-module setting takes precedence over the per-switch setting.
  • The L3 default is SRC/DST IP address.
  • The L2/non-IP default is SRC/DST MAC address.
  • NX-OS 6.0(1) is required for F Series line card L2 load balancing.
  • You must be in the default VDC to configure load balancing.

You can configure load balancing either for the entire system or for specific modules, regardless of the VDC. Port-channel load balancing is a global setting across all VDCs.
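A hedged sketch of both scopes, entered from the default VDC (the module slot is hypothetical):

! system-wide (all modules)
port-channel load-balance src-dst ip-l4port
! per-module override for slot 4; takes precedence over the global setting
port-channel load-balance src-dst mac module 4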

If the ingress traffic is Multiprotocol Label Switching (MPLS) traffic, the software looks under the labels for the IP address on the packet.

The load-balancing algorithms that use port channels do not apply to multicast traffic. Regardless of the load-balancing algorithm you have configured, multicast traffic uses the following methods for load balancing with port channels:

  • Multicast traffic with Layer 4 information—Source IP address, source port, destination IP address, destination port
  • Multicast traffic without Layer 4 information—Source IP address, destination IP address
  • Non-IP multicast traffic—Source MAC address, destination MAC address
Note: Devices that run Cisco IOS can optimize the behavior of the member port ASICs after a single-member failure if you enter the port-channel hash-distribution command. The Cisco Nexus 7000 Series device performs this optimization by default and does not require or support this command.

Cisco NX-OS Release 6.1(3) supports a new Result Bundle Hash (RBH) mode to improve load balancing on port-channel members on Cisco Nexus 7000 M Series I/O XL modules and on F Series modules. With the new RBH modulo mode, the RBH result is based on the actual count of port-channel members.


LACP


  • The feature is disabled by default; you must enable it first (feature lacp).
  • Up to 16 active interfaces with NX-OS 5.1 and later.
  • 8 active and 8 standby interfaces before 5.1.
  • Modes are active, passive, or on (static port channel, no LACP).
  • On mode (static port channel) is the default.
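A minimal sketch of enabling the feature and running LACP in active mode (the interface and channel-group numbers are hypothetical):

feature lacp
interface ethernet 1/10
  channel-group 30 mode active
! use "mode passive" to respond to, but never initiate, LACP negotiation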

Both the passive and active modes allow LACP to negotiate between ports to determine if they can form a port channel based on criteria such as the port speed and the trunking state.


The passive mode is useful when you do not know whether the remote system, or partner, supports LACP.


Ports can form an LACP port channel when they are in different LACP modes if the modes are compatible as in the following examples:


  • A port in active mode can form a port channel successfully with another port that is in active mode.
  • A port in active mode can form a port channel with another port in passive mode.
  • A port in passive mode cannot form a port channel with another port that is also in passive mode, because neither port will initiate negotiation.
  • A port in on mode is not running LACP and cannot form a port channel with another port that is in active or passive mode.


The LACP system ID is the combination of the LACP system priority and the MAC address. System priority values range from 1 to 32768; a lower value means a higher system priority, with 1 being the highest.


Port priority values range from 1 to 65535. The port priority combined with the port number (interface ID) forms the LACP port ID.

A lower port ID means a higher priority to be chosen for forwarding (active versus standby links). The default port priority is 32768.
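A short sketch of nudging the LACP election and link selection (the values are hypothetical):

! global: a lower value means a higher system priority
lacp system-priority 100
! per interface: a lower resulting port ID is preferred for the active set
interface ethernet 1/10
  lacp port-priority 200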


Prerequisites for Port Channeling

Port channeling has the following prerequisites:

  • You must be logged onto the device.
  • If necessary, install the Advanced Services license and enter the desired VDC.
  • All ports in the channel group must be in the same VDC.
  • All ports for a single port channel must be either Layer 2 or Layer 3 ports.
  • All ports for a single port channel must meet the compatibility requirements. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • You must configure load balancing from the default VDC.

Guidelines and Limitations

Port channeling has the following configuration guidelines and limitations:

  • The LACP port-channel minimum links and maxbundle feature is not supported for host interface port channels (see the sketch after this list for regular port channels).
  • You must enable LACP before you can use that feature.
  • You can configure multiple port channels on a device.
  • Do not put shared and dedicated ports into the same port channel. (See “Configuring Basic Interface Parameters,” for information about shared and dedicated ports.)
  • For Layer 2 port channels, ports with different STP port path costs can form a port channel if they are compatibly configured with each other. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • In STP, the port-channel cost is based on the aggregated bandwidth of the port members.
  • After you configure a port channel, the configuration that you apply to the port channel interface affects the port channel member ports. The configuration that you apply to the member ports affects only the member port where you apply the configuration.
  • LACP does not support half-duplex mode. Half-duplex ports in LACP port channels are put in the suspended state.
  • You must remove the port-security information from a port before you can add that port to a port channel. Similarly, you cannot apply the port-security configuration to a port that is a member of a channel group.
  • Do not configure ports that belong to a port channel group as private VLAN ports. While a port is part of the private VLAN configuration, the port channel configuration becomes inactive.
  • Channel member ports cannot be a source or destination SPAN port.
  • You cannot configure the ports from an F1 and an M1 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • You cannot configure the ports from an M1 and M2 series linecard in the same port channel.
  • You cannot configure the ports from an F2e and an F3 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • Beginning with Cisco NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F1 series linecard.
  • F1 Series modules do not support load balancing of non-IP traffic based on a MAC address. If ports on an F1 Series module are used in a port channel and non-IP traffic is sent over the port channel, Layer 2 traffic might get out of order.
  • Only F Series and the XL type of M Series modules support the RBH modulo mode.
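Tying back to the minimum links and maxbundle bullet above, here’s a minimal sketch on a regular (non-host-interface) LACP port channel (the values are hypothetical):

interface port-channel 100
  lacp min-links 2
  lacp max-bundle 8
! the port channel stays down until at least 2 members bundle,
! and at most 8 members are active at one time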


Feature History for Configuring Port Channels

Feature Name | Release | Feature Information
Display policy errors on interfaces and VLANs | 6.2(2) | Added the show interface status error policy command.
Prevent traffic drop during bidirectional flow on F2 or F2e modules | 6.2(2) | Added the asymmetric keyword to the port-channel load-balance command to improve load balancing across port channels.
Result Bundle Hash load balancing | 6.1(3) | Support for the RBH modulo mode to improve load balancing across port channels.
Minimum links for FEX fabric port channel | 6.1(3) | This feature was introduced.
Port channel hash distribution | 6.1(1) | Support for port-channel hash distribution fixed and adaptive modes.
Load balancing supports F2 modules | 6.0(1) | Added support for F2 modules in load balancing across port channels.
Port channels | 5.2(1) | Support increased to 528 port channels.
Minimum links and maxbundle for LACP | 5.1(1) | This feature was introduced.
Port channels | 4.2(1) | Support increased to 256 port channels.
Port channels | 4.0(1) | This feature was introduced.


Example Lab Question and Configuration


Port Channel Task

Assuming that more links will be added later and that minimal traffic disruption is desired (i.e., use LACP), configure the following:

Configure trunking on port channel 100 from N7K1 to UCS FI-A, and ensure that the same port channel number is used later from the UCS side.


interface Ethernet1/22
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 100 mode active
  no shutdown

(channel-group mode active = LACP)

Configure trunking on port channel 200 from N7K1 to UCS FI-B, and ensure that the same port channel number is used later from the UCS side.


interface Ethernet1/24
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 200 mode active
  no shutdown

Ensure that both of these port channels transition immediately to a state of forwarding traffic:

interface port-channel 100
  spanning-tree port type edge trunk
interface port-channel 200
  spanning-tree port type edge trunk

Ensure that N7K1 is the primary device in the LACP negotiation, and ensure that the hashing algorithm takes L3 and L4 information for both source and destination into account:

lacp system-priority 1
(A lower system-priority value means a higher priority; the range is 1-32768.)

port-channel load-balance src-dst ip-l4port

Trunk only previously created VLANs 100,200,300,400,500 southbound from N7K1 to both FIs.


Verify with show port-channel summary.
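A few more standard show commands that can help verify this task (a sketch, not an exhaustive list):

show port-channel summary
show port-channel traffic
show lacp neighbor
show spanning-tree interface port-channel 100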


DocCD: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/interfaces/configuration/guide/b-Cisco-Nexus-7000-Series-NX-OS-Interfaces-Configuration-Guide/b-Cisco-Nexus-7000-Series-NX-OS-Interfaces-Configuration-Guide-6x_chapter_0111.html


Data Center: Nexus vPC Technology


Hi Cisco friends! I had a great question from a customer today regarding failure scenarios and vPC. On the surface, I thought this was an easy one. However, when I gave it deeper thought, I realized it really depends on the type of failure. Was the failure on the peer link, the peer keepalive, a vPC member port, or the worst case, a dual-active/double failure?

Let’s go through some of the failure examples.

vPC Member Port Failure
If one vPC member port goes down – for instance, if a link from a NIC goes down – the member is removed from the PortChannel without bringing down the vPC entirely. In addition, the switch on which the remaining port is located will allow frames to be sent from the peer link to the vPC orphan port. The Layer 2 forwarding table for the switch that detected the failure is also updated to point the MAC addresses that were associated with the vPC port to the peer link.

vPC Complete Dual-Active Failure (Double Failure)
If both the peer link and the peer-keepalive link are disconnected, the Cisco Nexus switch does not bring down the vPC, because each Cisco Nexus switch cannot discriminate between a vPC device reload and a combined peer-link and peer-keepalive-link failure.

The main problem with a dual-active scenario is the lack of synchronization between the vPC peers over the peer link. This behavior causes IGMP snooping to malfunction, which in turn causes multicast traffic to drop. As described previously, a vPC topology intrinsically protects against loops in dual-active scenarios. Each vPC peer, upon losing peer-link connectivity, starts forwarding BPDUs on vPC member ports. With the peer-switch feature, both vPC peers send BPDUs with the same bridge ID to help ensure that the downstream device does not detect a spanning-tree misconfiguration. When the peer link and the peer-keepalive link are simultaneously lost, both vPC peers become operational primary.

vPC Peer-Link Failure
To prevent problems caused by dual-active devices, vPC shuts down vPC member ports on the secondary switch when the peer link is lost but the peer keepalive is still present.

When the peer link fails, the vPC peers verify their reachability over the peer-keepalive link, and if they can communicate they take the following actions:

● The operational secondary vPC peer (which may not match the configured secondary, because vPC is nonpreemptive) brings down the vPC member ports, including the vPC member ports located on the fabric extenders in the case of a Cisco Nexus 5000 Series design with fabric extenders in straight-through mode.

● The secondary vPC peer brings down the vPC VLAN SVIs: that is, all SVIs for the VLANs that happen to be configured on the vPC peer link, whether or not they are used on a vPC member port.

Note: To keep the SVI interface up when a peer link fails, use the command dual-active exclude interface-vlan.
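As a sketch, that command lives under the vPC domain (the domain ID and VLAN range are hypothetical):

vpc domain 10
  dual-active exclude interface-vlan 100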

At the time of this writing, if the peer link is lost first, the vPC secondary shuts down the vPC member ports. If this failure is followed by a vPC peer-keepalive failure, the vPC secondary keeps the interfaces shut down. This behavior may change in the future with the introduction of the autorecovery feature, which will allow the secondary device to bring up the vPC ports as a result of this sequence of events.

vPC Peer-Keepalive Failure

If connectivity of the peer-keepalive link is lost but peer-link connectivity is not changed, nothing happens; both vPC peers continue to synchronize MAC address tables, IGMP entries, and so on. The peer-keepalive link is mostly used when the peer link is lost, and the vPC peers use the peer keepalive to resolve the failure and determine which device should shut down the vPC member ports.

Best Practices:

Define a vPC domain. The domain ID should match between the two peers, but MUST NOT match between the 7Ks and 5Ks in a double-sided vPC design. This step is required:
(config)# vpc domain <id>

Define the role priority. The lower priority value wins the primary role; try to match your STP root bridge with the primary role. If you use the peer-switch feature, the STP root will be the same on both peers:
(config-vpc-domain)# role priority <value>

If the roles shift after a failure (vPC roles are not preemptive), you would need to change the operational primary’s role priority to a value of 36767 and shut/no shut the peer link to restore the originally configured primary.

If the vPC peer switch is also performing L3 switching, the peer-gateway command is recommended.

The vpc peer-gateway feature allows HSRP routers to accept frames destined for their vPC peers. This feature extends the virtual MAC address functionality to the paired router’s MAC address. The feature is needed when certain storage/load-balancing vendors break RFC rules by ignoring the ARP reply from the HSRP active router and replying directly to the host. Without this feature enabled, such packets could traverse the peer link and end up being dropped.
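Pulling these recommendations together, here’s a hedged sketch of a vPC domain configuration (the domain ID, priority, keepalive addressing, and port-channel numbers are all hypothetical):

vpc domain 10
  role priority 100
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
  peer-switch
  peer-gateway
interface port-channel 1
  switchport mode trunk
  vpc peer-link
interface port-channel 100
  vpc 100

The peer-keepalive should ride an out-of-band path (the mgmt0 interface in the management VRF is the common choice) so it survives a peer-link failure.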

Enable vPC AutoRecovery
“(config-vpc-domain)# auto-recovery”

Beginning with Cisco NX-OS Release 5.2(1), you can configure the Cisco Nexus 7000 Series device to restore vPC services when its peer fails to come online by using the auto-recovery command. You must save this setting in the startup configuration. On reload, if the peer link is down and three consecutive peer-keepalive messages are lost, the secondary device assumes the primary STP role and the primary LACP role. The software reinitializes the vPCs, bringing up its local ports. Because there are no peers, the consistency check is bypassed for the local vPC ports. The device elects itself to be STP primary regardless of its role priority and also acts as the master for LACP port roles.

ARP SYNC
The ARP table sync feature overcomes the delay involved in ARP table restoration that can be triggered when one of the switches in the vPC domain goes offline and comes back online, and also when there are peer-link port-channel flaps. Enabling ARP synchronization on a vPC domain improves convergence times for unicast traffic.

To enable Address Resolution Protocol (ARP) synchronization between the virtual port channel (vPC) peers, use the ip arp synchronize command. To disable ARP synchronization, use the no form of this command.

(config-vpc-domain)# ip arp synchronize

Content Source:
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/design_guide_c07-625857.pdf
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/command/reference/vpc/n5k-vpc_cmds_i.html#wp1316724