Category: Cisco




(K)ey (R)einstallation (A)tta(C)(K)

Breaking WPA2 by forcing nonce reuse

It’s been a long day, and I wanted to have some fun with this post. I was onsite with several customers today when the news broke publicly. I only knew about it at a high level in the morning and didn’t have time to digest the magnitude or the details of the vulnerability until this evening.

You see, for me this feels somewhat like déjà vu. I remember the day it was discovered that WEP had a key weakness in its security algorithm. That weakness was simple: collect enough of the 3-byte Initialization Vectors (IVs) that are transmitted in clear text, and you could use commercial off-the-shelf hardware (Atheros chipset) and software (BackTrack, now known as Kali Linux / aircrack-ng / JTR) to crack the key. It was stupid simple to execute this attack, and it ultimately was the demise of WEP.
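Some back-of-the-envelope math on why those 3-byte IVs doomed WEP: a 24-bit IV space is tiny, so keystream reuse (an IV collision) is nearly guaranteed after a modest amount of traffic. The packet counts below are purely illustrative, a birthday-bound sketch rather than an exact attack model.

```python
# Birthday-bound estimate of WEP IV collisions. A 3-byte IV gives only
# 2**24 (~16.7M) possible values, so a busy network repeats one quickly.
from math import exp

IV_SPACE = 2 ** 24  # 24-bit initialization vector

def iv_collision_probability(packets: int) -> float:
    """Approximate P(at least two packets share an IV)."""
    return 1.0 - exp(-packets * (packets - 1) / (2 * IV_SPACE))

print(iv_collision_probability(5_000))   # already better-than-even odds
print(iv_collision_probability(40_000))  # effectively certain
```

A few thousand frames is seconds of traffic on a loaded AP, which is why collecting IVs was all it took.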

Fast forward 11+ years and here we are talking about another major vulnerability affecting pretty much EVERY wireless network deployed. The saving grace? This is NOWHERE near as bad as the WEP exploit, and it can be fixed.
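To see what “forcing nonce reuse” buys an attacker, here’s a miniature sketch: a stream cipher XORs plaintext with a keystream derived from the key and nonce, so reusing the nonce repeats the keystream, and XORing two ciphertexts cancels it out entirely. The toy keystream below stands in for the real AES-CCMP keystream; KRACK itself manipulates the WPA2 4-way handshake to force this condition rather than attacking the cipher directly.

```python
# Nonce reuse demo: same keystream on two frames leaks plaintext structure.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)          # stands in for the per-nonce keystream
frame1 = b"attack at dawn!!"
frame2 = b"retreat at noon!"

ct1 = xor_bytes(frame1, keystream)  # first encrypted frame
ct2 = xor_bytes(frame2, keystream)  # nonce reused -> identical keystream

# The attacker never learns the key or keystream, yet the keystream
# cancels out, exposing the XOR of the two plaintexts:
assert xor_bytes(ct1, ct2) == xor_bytes(frame1, frame2)
```

With known or guessable plaintext in one frame, that XOR recovers the other frame outright, which is why nonce reuse is fatal to a stream cipher.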

Just the Facts

Next Steps






I wholeheartedly agree with you Mathy!


Catalyst 9300: Hands-On Review


Cisco Catalyst 9300 (First Impressions)

I received an email from our awesome lobby ambassador about two packages that arrived in the Malvern office. I didn’t remember what I had ordered and quickly forgot about the packages because it was such a chaotic week. When I finally made my way to the office, I saw the boxes in the mail room and thought “NO! That can’t be them already…”. Upon closer inspection, they were in fact the Catalyst 9300s I had ordered.

Now, I don’t get as excited about gear as I used to. Perhaps it’s because I know what’s in store… software updates, uncomfortably high temps in my home office, figuring out why a certain command syntax isn’t working for me, lots of reading. In other words… WORK!

Then again, this is the Cat9K, the cornerstone of Cisco’s SDA fabric, and something our customers are really curious about, so it’s totally worth it!

Here’s what I received.

2x (C9300-24P) w/ DNA Advantage Licensing

Standard (zero-cost) StackPower (30 cm) and StackWise-480 (50 cm) cables. Single PSU (715W) per switch. 24 ports of 10/100/1000 Mbps PoE and an 8x 10 Gbps network module (NM).

Let’s start with the new and simplified licensing model for the Catalyst 9Ks.

There are four licenses for the C9300.

Network Essentials
Network Advantage
DNA Essentials 
DNA Advantage

Network Essentials and Network Advantage are perpetual, platform-based base licenses. These licenses are locked to the hardware. Between them, the base licensing packages cover switching fundamentals, management automation, troubleshooting, and advanced switching features.

DNA Essentials and DNA Advantage are term-based (3-, 5-, or 7-year) licenses. In addition to on-box capabilities, the features available with these packages provide Cisco innovations on the switch, as well as on Cisco DNA Center and APIC-EM. Think of this much like Cisco ONE for the Cat3850s.

Licensing Combinations

                     Cisco DNA Essentials   Cisco DNA Advantage
Network Essentials   Yes                    No
Network Advantage    Yes                    Yes

Essentials and Advantage Package Features


Network Essentials

Network Advantage

Cisco DNA Essentials

Cisco DNA Advantage

Switch features

Switch fundamentals
Spanning Tree Protocol (STP), Rapid STP (RSTP), VLAN Trunking Protocol (VTP), trunking, Private VLAN (PVLAN), dynamic voice VLAN, IPv6, PnP, Cisco Discovery Protocol, 802.1Q tunneling (Q-in-Q), Routed Access – OSPF and RIP, Policy-Based Routing (PBR), Virtual Router Redundancy Protocol (VRRP), Internet Group Management Protocol (IGMP), PIM Stub, Weighted Random Early Detection (WRED), First Hop Security (FHS), 802.1X, MACsec-128, Control Plane Policing (CoPP), Cisco TrustSec® SGT Exchange Protocol (SXP), IP SLA Responder, SSO, EIGRP Stub, Microflow Policing, Class-Based Weighted Fair Queuing (CBWFQ), hierarchical QoS (H-QoS), Application Reporting, Syslog, SNMP

Advanced switch capabilities and scale
BGP, EIGRP, Hot Standby Router Protocol (HSRP), IS-IS, Bootstrap Router (BSR), Multicast Source Discovery Protocol (MSDP), Bidirectional PIM (PIM-BIDIR), Label Switched Multicast (LSM), IP SLA, Full OSPF

Network segmentation
VPN Routing and Forwarding (VRF), Virtual Extensible LAN (VXLAN), Cisco Locator/ID Separation Protocol (LISP), Cisco TrustSec, SD-Wireless, Multiprotocol Label Switching (MPLS), Layer 3 VPN (L3VPN), Multicast VPN (mVPN)

Optimized network deployments
mDNS gateway

Netconf/YANG, PnP Agent, ZTP/Open PnP

Advanced automation
Containers, Python, Cisco IOS Embedded Event Manager (EEM), Autonomic Networking Infrastructure

Telemetry and visibility
Streaming telemetry, sampled NetFlow, Switched Port Analyzer (SPAN), Remote SPAN (RSPAN)

Advanced telemetry and visibility
Flexible NetFlow, Wireshark

Optimized telemetry and visibility
Encapsulated Remote SPAN (ERSPAN), Application Visibility and Control (AVC), NBAR2

High availability and resiliency
NSF, Graceful Insertion and Removal (GIR)


Advanced security
Encrypted Traffic Analytics (ETA)

Cisco DNA Center Features

Day 0 network bring-up automation
Cisco Network Plug-n-Play application, network settings, device credentials

Element management
Discovery, inventory, topology, software image, licensing, and configuration management

Network monitoring
Product Security Incident Response Team (PSIRT) compliance, end-of-life/end-of-sale reporting, telemetry quotient, client 360, device 360, top talkers/ NetFlow/streaming telemetry collection and correlation

Static QoS configuration and monitoring
EasyQoS application

Policy-based automation
SD-Access, group-based policy for access, app prioritization, monitoring, and path selection;
SD-Access with Integrated Wireless

Network assurance and analytics
Insights driven from analytics and machine learning for the network, clients and applications that cover onboarding, connectivity, and performance

A couple of takeaways from this features-and-licensing eye chart:

DNA Advantage is REQUIRED for SMU (hot patching), Encrypted Traffic Analytics, ERSPAN, and AVC/NBAR. DNA Essentials is REQUIRED for advanced network automation and programmability.

All that said, let’s get into my initial impressions of this switch.

  • Design: Very clean industrial design. The top cover almost looks white, but it’s just a light shade of silver. Intuitive LED icon indicators. Clean angles, and it’s not as deep as I thought it would be. In fact, width and depth are identical at 17.5″. Height is the standard 1RU, or 1.73″.

  • Air Flow: Port-side intake and rear exhaust. It also appears that near the front (port side) there are additional intake vents on the side. Fan noise was very low when the room was properly cooled but, as expected, the fan speed and noise ramped up when the room reached 80+℉.
  • StackWise-480 and StackPower: Data stacking (480 Gbps) and power stacking use the identical cables and procedure as the 3850. You can stack up to eight switches in a data stack, and four switches in a ring StackPower topology or eight in a star StackPower topology.
  • Network Modules/Uplinks: Interestingly enough, the network modules are backwards compatible with the existing NMs for the 3850s. I thought that was cool because I have a ton of 3850 NMs, so I tried them out. Worked 100%. Another observation: the C9300 has a spring-loaded mechanism that makes removing the modules seamless and natural. It’s almost as if a helping hand was inside the chassis saying “here’s your network module Shaun, please take good care of it for me”. <GRIN>
    • The hardware installation guide stated the NMs were “hot swappable”, so of course I tried this without gracefully powering down the NM and it worked as expected.
    • “The network module is hot-swappable. If you remove a module, replace it with another network module or a blank module.”

  • Code: I noticed some strange behavior with the factory-loaded 16.5.1a (Everest), so I upgraded to 16.6.1 (Everest), which seemed to correct the issue.
    • Just like the Cat3850, you have install or bundle mode, with install mode being the default and recommended mode.
    • New command syntax (compared to the 3850 running IOS-XE 3.6) for software install/upgrade:
    • “request platform software package install switch all file flash:xxx.bin auto-copy”
  • System Memory: 16 GB of flash and 8 GB of DRAM. So, plenty of memory on this platform.
  • RFID tag: I couldn’t for the life of me find the RFID tag. I pinged the Cat9K BU and they enlightened me: #1, my RFID/NFC reader/writer was not compatible with this type of tag (EPC Gen2 / ISO 18000-6C compliant), and #2, the tag is in stealth mode under the front bezel. See image for details.
  • Open IOS-XE: One word, AMAZING! I have waited so long for on-box/off-box programmability on the Catalyst platforms, and it’s finally here. You get on-box Python, a Bash shell, NETCONF (SSH)/RESTCONF (HTTPS)/YANG, LXC, and SMU/hot patching. This ain’t your mommy/daddy’s switch.
  • ASIC: Doppler/UADP v2.0 programmable ASIC, more buffer and line rate. NUFF SAID!
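To make the NETCONF/YANG bullet above concrete: IOS-XE returns YANG-modeled XML, which you can pick apart with plain standard-library tooling. A hedged sketch follows; the reply below is hand-written sample data (not captured from a real Cat9300), and against a live switch you would retrieve it over SSH port 830 with a NETCONF client such as ncclient.

```python
# Parse a (sample) NETCONF <get-config> reply using the
# Cisco-IOS-XE-native YANG model's namespace.
import xml.etree.ElementTree as ET

SAMPLE_REPLY = """
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <data>
    <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
      <hostname>cat9300-lab</hostname>
    </native>
  </data>
</rpc-reply>
"""

NS = {
    "nc": "urn:ietf:params:xml:ns:netconf:base:1.0",
    "native": "http://cisco.com/ns/yang/Cisco-IOS-XE-native",
}

root = ET.fromstring(SAMPLE_REPLY)
hostname = root.find("nc:data/native:native/native:hostname", NS).text
print(hostname)  # -> cat9300-lab
```

Swap the sample string for a live reply and the same three lines of parsing work unchanged; that model-driven consistency is what makes the off-box story so compelling.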

In summary, I’m excited more than ever for the future of networking and where we go with SDA! From what I experienced with the Cat9300, the BU has done an amazing job delivering on the next generation of enterprise switches and set a very high bar for the competition.

One more thing…
From what I can tell, the C9300 is also less expensive than the C3850.

The future of networking is now!

Reference Links

Release Notes for IOS-XE 16.6.1 (Everest):
The Network. Intuitive.


A New Network for a New Era

Well, the cat is finally out of the bag…

I’ve been biting my lips for the last several months working on campus designs with customers. That’s because internally at Cisco, all the buzz was around bringing SDN and most importantly intent driven networking to the campus in a BIG way. This is very much akin to how Cisco transformed the data center with ACI. In fact, I’ve heard verbatim from customers “why doesn’t Cisco have an ACI like solution for the campus?”.

Like I said earlier, I had to bite my lip each time I heard this comment unless we went through the mutual NDA process, and even then we provided only a brief glimpse of what was coming.

I’d like to focus on ACI fabric automation and deployment when I draw a comparison to what I envision software defined access (SD-Access/SDA) will be.

In an ACI data center, I simply cable my spine/leaf switches and plug my APIC controllers into the leaf switches. I then go through a 5-minute setup process to define my credentials, TEP pool, infrastructure VLAN ID, and a couple of other simple prompts on the APIC controller.

At this point, my ACI fabric is ready to go and all I need to do is register my leaf switches to the fabric, give them a name and ID, and I’m off to the object/policy creation steps. Once my policy model and objects are set, it really becomes rinse and repeat. The key with this intent-based networking is agility and automation at scale.

I didn’t have to give each leaf a management IP, specify VLANs, credentials, access methods, trunk ports, set up routing protocols, etc… While that’s how I’ve been doing things for over two decades, recently my eyes were opened to what happens to that traditional/static model at scale. Quite frankly, it falls apart unless you have some awesome scripting folks automating box-by-box configs with tools like Ansible/Jinja/Python.
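What that box-by-box automation looks like in miniature: render a per-switch config from one template plus a table of per-device values. Real deployments typically reach for Ansible + Jinja2; `string.Template` keeps this sketch dependency-free, and the device names and IPs are made up for illustration.

```python
# Render per-device configs from one template and a list of variables.
from string import Template

TEMPLATE = Template("""\
hostname $hostname
interface Vlan$mgmt_vlan
 ip address $mgmt_ip 255.255.255.0
 no shutdown
""")

devices = [
    {"hostname": "leaf-101", "mgmt_vlan": "100", "mgmt_ip": "10.0.100.101"},
    {"hostname": "leaf-102", "mgmt_vlan": "100", "mgmt_ip": "10.0.100.102"},
]

configs = {d["hostname"]: TEMPLATE.substitute(d) for d in devices}
print(configs["leaf-101"])
```

The point isn’t the templating engine, it’s that device-specific values live in data, not in hand-typed configs, so scaling from 2 switches to 200 changes nothing but the list.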

In addition, native/embedded security is critical to detect and mitigate threats in the campus network. Detecting threats in encrypted traffic is a pretty amazing “nerd knob”.

In closing, I see a bright future for the campus network.  A future where the campus wired/wireless/WAN have embedded security functionality, deep contextual information (abstract subnet/vlan ID) of attached devices, is intent driven to allow automation at scale, and intuitive enough to deliver actionable and predictive insights.

If you’re going to Cisco Live next week, expect some major deep dive sessions on Cat9K, DNA, and more.


#WeAreCisco #Innovation

#CiscoDNA #NetworkIntuitive

Links & References

CCIE DNA: Reality or Myth?



It all started at #CLUS

Unfortunately, I was unable to attend Cisco Live US in Las Vegas this year. Don’t shed any tears for me as I was fortunate enough to have customers, friends, and co-workers attend. They got me some sweet swag and provided a play-by-play as things unfolded.

One such morsel of information was regarding a “CCIE DNA” or “CCIE GUI”.

At first I was just sitting in front of my monitor, drifting off into space, thinking about what the format of such a practical exam would look like. Would it be exploratory, like my transition experience from R/S v3 to v4 (open-ended questions, remove open-ended questions, add troubleshooting, leverage virtual and physical environments, etc.)?

Then I envisioned an entire exam based on APIC-EM/APIC-DC, NFV, Postman, and lots of mouse clicking. It was this very thought that made me break out in a cold sweat at the possibility of CLI withdrawal.

This was roughly 6 weeks ago… Now that the dust has settled, I decided to dig into this “rumor” a little more. I was especially motivated after I observed confusion in the twittersphere today.


  • At #CLUS 2016 our commander-in-chief, Mr. Chuck Robbins, provided insight into the importance of the Digital Network Architecture (DNA). It’s not so much a product as it is embracing emerging technologies such as automation, mobility, cloud, IoT, and analytics. In addition, Chuck discussed how important emerging technologies are and how we’ve never brought the application and the network together from a visibility perspective.
  • My understanding is Chuck also discussed a DNA user group that would be certifying engineers with reference to the CCIE tracks. I believe this is where some folks walked away with the thought that Chuck announced a standalone CCIE DNA track.
  • I did some fact-finding with our very own CCDE/CCAr program manager, Elaine Lopes (@elopes01), and the reality is somewhere in the middle.

The plan is to incorporate the DNA architecture and other evolving technologies into the pertinent CCxE tracks vs. being a separate track.

I can already see hints of this in the current (v2.1) CCDE written blueprint I downloaded. There’s a new section in this version labeled “5.0: Evolving Technologies”. While it doesn’t explicitly state “DNA”, it does cover network programmability/SDN and cloud, which are core to DNA.


The “evolving technologies” section is NOT isolated to the CCDE either!
You can read more about it at Elaine’s blog titled “Myth Busters & Evolving Technologies” 


Disclaimer: This is the current plan as I know it. However, as with anything in our field it’s always subject to change. <GRIN>

My 2c FWIW

I’m excited that we’re putting evolving technologies into the various blueprints. There isn’t a day that goes by where a customer conversation doesn’t include leveraging cloud workloads, making sense of all the analytical (especially infosec) data collected, network programmability, or “SDN”.

In addition, I feel strongly that using the generic topic of “Evolving Technologies” gives the CCxE program managers the ability to keep the exams fresh and relevant. That’s at least the case for the written exams; how evolving technologies will be incorporated into the practical is still TBD.

My thought is that the CCxE tracks will start to incorporate DNA into both the written and practicals. How that story unfolds will be one that I’ll watch closely and post updates on.

I’m waiting for a CCIE R/S candidate to say “Gomez, you got an instance of APIC-EM I can lab on?”.


CCIE Data Center: Version 2.0


Woah… Deja Vu


This all seems so familiar…

OH YEA! I went through this once before already. I took the CCIE R/S version 3 lab with the high (naive) hopes of passing it on my first attempt. #n00b

The challenge I had with the R/S v4 update was that it felt like the content managers had a serious case of ADD: open-ended questions, no open-ended questions, troubleshooting, etc… It was frustrating that I had to experience every possible derivative of the v4 lab. I’m just glad I passed before the v5 lab blueprint was out.

Now here I am, ready to rock the lab in January and we announce a v2 lab update. Don’t get me wrong, I really dig the changes. I only wish it happened sooner, so I’d be studying for the new (relevant) curriculum.

Let’s start out with the domain changes.

Domain comparison between CCIE Data Center v1.0 and CCIE Data Center v2.0

CCIE Data Center v1.0

  1. Cisco Data Center Architecture
  2. Cisco Data Center Infrastructure-Cisco NX-OS
  3. Cisco Storage Networking
  4. Cisco Data Center Virtualization
  5. Cisco Unified Computing System
  6. Cisco Application Networking Services

CCIE Data Center v2.0

  1. Cisco Data Center L2/L3 Technologies
  2. Cisco Data Center Network Services
  3. Data Center Storage Networking and Compute
  4. Data Center Automation and Orchestration
  5. Data Center Fabric Infrastructure
  6. Evolving Technologies

Thoughts: Focus on skills and technologies vs. hardware. I like what I see so far. You still need to possess design, implementation, and troubleshooting skills, just with less emphasis on knowing all the intricacies of a certain product. Adding things like automation, cloud, and ACI to the blueprint is a VERY good idea since those subjects are top of mind with customers.

Topics no longer included in CCIE Data Center v2.0

  • Implement Data Center application high availability and load balancing
  • Implement FCIP features

Thoughts: No more ACE/WAAS/FCIP. Yea, that’s a good thing considering ACE went EoL back in 2013. I just don’t see enough customers using FCIP these days, so I guess that’s also a good one to remove.

Lab Equipment & Software List

[Screenshot: CCIE DC v2.0 lab equipment and software list]

Thoughts: If you look at the updated 2.0 lab hardware, there is no MDS at all. Goodbye 9222i, you will be missed. IP Storage FTW!

The new thing that catches my eye is the update to the next-gen FEX (2300) and N5K (5600). I’m very happy about this, as the 5672 has been a great low-latency (~1 µs) L2/native-L3 ToR for storage. Deep buffers (25 MB per 12 ports of 10G) help, and it doesn’t hurt that this switch supports unified ports (Ethernet/FC/FCoE).

The servers have been refreshed to M4s, the M-Series (cloud-scale workloads) chassis has been added, and the Emulex mezzanine card has been removed.

Now my favorite part: the networking gear update. N9Ks + ACI were added, and the 7K was updated to a 7004 with the Sup2E (more VDCs) and F3 line cards. Glad to see the M/F line cards replaced, given the complexity of having to remember which cards had what capabilities. The F1s really needed to go!

The Diagnostic Module

Thoughts: This is probably the most controversial change. I know this is the direction to align with the other CCIE tracks; however, this is also the area in which many candidates will have MANY questions.

Let me post (inline) all that I have on the subject. In many ways this feels like real-world scenarios: I get this all the time from customers and it’s like figuring out a puzzle. I love doing this in the real world; I just hope the exam diagnostic section captures this experience naturally.

Diagnostic Module Details

The new Diagnostic module, which is 60 minutes long, focuses on the skills required to properly diagnose network issues without device access. These skills include:

  • Analyze
  • Correlate – discerning multiple sources of documentation (for example, e-mail threads, network topology diagrams, console outputs, logs, and even traffic captures)

In the Diagnostic module, candidates need to make choices between pre-defined options to indicate:

  • What the root cause of an issue is
  • Where the issue is located in the diagram
  • What critical piece of information allows us to identify the root cause
  • What piece of information is missing to be able to identify the root cause

The Configuration and Troubleshooting module consists of one topology, similar to CCIE Data Center v1.0. The length of the Configuration and Troubleshooting module is seven hours. At the beginning of the module, the candidate has a full overview of the entire module and can choose whether to work on items in sequence, depending on the candidate’s comfort level, the overall scenario, and question interdependencies.

The Diagnostic and Configuration and Troubleshooting modules in the Lab exam are delivered in a fixed sequence: the candidate starts the day with the 1-hour Diagnostic module, which is followed by the 7-hour Configuration and Troubleshooting module. The entire Lab exam lasts up to eight hours. Note that candidates are not allowed to go back and forth between modules.

For the Diagnostic module, no device access is provided. Candidates are provided various pieces of information (for example, emails, debug outputs, and network diagrams, such as the information provided to a Data Center support engineer assisting a customer in determining the root cause of an issue, or information provided by a colleague who is stuck on a troubleshooting issue).

Within the Diagnostic module, the items are presented in a similar format as within the Written exam. The module includes multiple-choice, drag-and-drop, or even point-and-click style items. The major difference between the Written exam and the Diagnostic module is that the items in the Diagnostic module (called troubleshoot tickets) contain a set of documents that the candidate must consult in order to understand and identify the root cause of the issue presented. Candidates need to analyze and correlate information (after discerning between valuable and worthless pieces of information) in order to make the right choice among the pre-defined options provided.

The troubleshoot tickets will not require candidates to type in order to provide the answer. All tickets will be close-ended so grading will be deterministic, ensuring a fair and consistent scoring process. The new module allows us to grant credit to candidates who are able to accurately identify the root cause of a networking issue, but fail to resolve it within specific constraints (as in the Configuration and Troubleshooting module).

Real-life experience is certainly the best training to prepare for this module. Candidates with limited experience should focus on discovering, practicing and applying efficient and effective troubleshooting methodologies that are used for any realistic networking challenge.

Passing Criteria

In order to pass the Lab exam, the candidate must meet both of the following conditions:

  • The minimum cut-score of each individual module must be achieved.
  • The total score of both modules together must be above the minimum value of the combined cut-score.

The point values of the items in each module are known to the candidate. Note that points are only granted when all requirements (and sometimes restrictions) of the item are met. There is no partial scoring for any item.
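The two passing conditions above combine as a simple AND, which is easy to express as a predicate. The module names, cut-scores, and point totals below are invented for illustration; Cisco does not publish the real values.

```python
# Both conditions must hold: every module meets its own cut-score,
# AND the combined total meets the combined cut-score.
def passes_lab(scores: dict, cut_scores: dict, combined_cut: int) -> bool:
    each_module_ok = all(scores[m] >= cut_scores[m] for m in cut_scores)
    combined_ok = sum(scores.values()) >= combined_cut
    return each_module_ok and combined_ok

cuts = {"diagnostic": 16, "config_tshoot": 55}       # hypothetical values
scores = {"diagnostic": 18, "config_tshoot": 62}

print(passes_lab(scores, cuts, combined_cut=78))     # both conditions met
```

Note the asymmetry this captures: a strong total score cannot rescue a failed module, and passing every module individually still fails if the combined total falls short.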


Closing Thoughts: I would like to think that I’ll pass the CCIE DC 1.0 lab on the 1st attempt this January. If not, I’ll have until July 22nd to pass the current blueprint. After that… I’ll have to figure out if I want to adapt and conquer v2 or just move on to something else like the CCDE.

Important Dates:


CCIE Data Center Written Exam v1.0 (350-080 CCIE DC)

Last day to test: July 22, 2016

CCIE Data Center Lab Exam v1.0

Last day to test: July 22, 2016


CCIE Data Center Written Exam v2.0 (400-151 CCIE DC)

Available for testing: July 25, 2016

CCIE Data Center Lab Exam v2.0

Available for testing: July 25, 2016

Reference Links:

ConfigBytes: ASA 5506x w/ FirePOWER Services



Getting Started with the ASA5506x & FirePOWER Services


Official Quick Start Guide:

FirePOWER User Guide:

FirePOWER Services for ASA Data Sheet:


TL;DR Key Points

  • Since the ASA5506x doesn’t have built-in switch capabilities (yet), you will need an L2 switch to connect the management interface (used by the FirePOWER services module) and your inside ASA interface. If you have an L3 switch, the FirePOWER management interface can be on a different subnet from your inside ASA interface.
  • Download the ASDM 7.4(3) image, ASA 9.4(1)3, and the latest FirePOWER/SourceFire sensor patch. Place these files on the ASA flash, upgrade, and point to the new ASDM file.
  • Create a username/password w/ PRIV 15 for ASDM access. “username Wu-Tang password KillaBeesOnTheSwarm privilege 15”
  • I highly recommend using the ASA Startup Wizard; this is much easier than a console session (“session sfr console”) to the FirePOWER services module for setting up management.
  • The default username/password for the SourceFire module is admin/Sourcefire.
  • Upgrade FirePOWER through ASDM or FireSIGHT. Remember, you can use either ASDM or FireSIGHT to manage the FirePOWER services.
  • Install your FirePOWER licenses
  • Don’t forget to configure a service policy on the ASA to redirect traffic to the FirePOWER module.
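That last bullet boils down to just a few lines. Pulled out of the full config below, the relevant pieces of the service policy are:

```
class-map global-class-SF
 match any
policy-map global_policy
 class global-class-SF
  sfr fail-close
service-policy global_policy global
```

The “fail-close” keyword means the ASA drops matched traffic if the FirePOWER module goes down; “fail-open” would let traffic pass uninspected instead, so choose based on whether availability or inspection wins in your environment.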


Final Config

5506xFPS(config)# sh run
: Saved
: Serial Number: <removed>
: Hardware: ASA5506, 4096 MB RAM, CPU Atom C2000 series 1250 MHz, 1 CPU (4 cores)
ASA Version 9.4(1)3
hostname 5506xFPS
domain-name cisco.lab
enable password <removed>
xlate per-session deny tcp any4 any4
xlate per-session deny tcp any4 any6
xlate per-session deny tcp any6 any4
xlate per-session deny tcp any6 any6
xlate per-session deny udp any4 any4 eq domain
xlate per-session deny udp any4 any6 eq domain
xlate per-session deny udp any6 any4 eq domain
xlate per-session deny udp any6 any6 eq domain
interface GigabitEthernet1/1
nameif outside
security-level 0
ip address dhcp setroute
interface GigabitEthernet1/2
no nameif
no security-level
no ip address
interface GigabitEthernet1/3
no nameif
no security-level
no ip address
interface GigabitEthernet1/4
no nameif
no security-level
no ip address
interface GigabitEthernet1/5
no nameif
no security-level
no ip address
interface GigabitEthernet1/6
no nameif
no security-level
no ip address
interface GigabitEthernet1/7
no nameif
no security-level
no ip address
interface GigabitEthernet1/8
description Inside_2
nameif inside2
security-level 100
ip address
interface Management1/1
no nameif
no security-level
no ip address
boot system disk0:/asa941-3-lfbff-k8.SPA
ftp mode passive
clock timezone EST -5
clock summer-time EDT recurring
dns server-group DefaultDNS
domain-name cisco.lab
same-security-traffic permit inter-interface
same-security-traffic permit intra-interface
pager lines 24
logging enable
logging buffer-size 8192
logging asdm-buffer-size 250
logging console emergencies
logging asdm alerts
mtu outside 1500
mtu inside2 1500
icmp unreachable rate-limit 1 burst-size 1
icmp deny any outside
asdm image disk0:/asdm-743.bin
no asdm history enable
arp timeout 14400
no arp permit-nonconnected
nat (inside2,outside) after-auto source dynamic any interface
route inside2 1
route inside2 1
route inside2 1
timeout xlate 3:00:00
timeout pat-xlate 0:00:30
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
timeout tcp-proxy-reassembly 0:01:00
timeout floating-conn 0:00:00
user-identity default-domain LOCAL
http server enable
http inside2
no snmp-server location
no snmp-server contact
sysopt noproxyarp outside
service sw-reset-button
crypto ipsec security-association pmtu-aging infinite
crypto ca trustpoint ASDM_Launcher_Access_TrustPoint_0
enrollment self
fqdn none
subject-name CN=,CN=5506xFPS
crl configure
crypto ca trustpoint ASDM_TrustPoint0
crl configure
crypto ca trustpoint ASDM_TrustPoint1
enrollment terminal
crl configure
crypto ca trustpool policy
crypto ca certificate chain ASDM_Launcher_Access_TrustPoint_0
telnet timeout 5
ssh scopy enable
ssh stricthostkeycheck
ssh pubkey-chain
ssh timeout 5
ssh version 2
ssh key-exchange group dh-group1-sha1
console timeout 0
dhcpd address inside2
dhcpd dns interface inside2
dhcpd lease 28800 interface inside2
dhcpd enable inside2
threat-detection basic-threat
threat-detection statistics port
threat-detection statistics protocol
threat-detection statistics access-list
threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200
ntp server source outside prefer
dynamic-access-policy-record DfltAccessPolicy
username asa password encrypted privilege 15
username admin password encrypted privilege 15
class-map inspection_default
match default-inspection-traffic
class-map global-class-SF
match any
policy-map type inspect dns preset_dns_map
message-length maximum client auto
message-length maximum 512
policy-map global_policy
description Global+SF
class global-class-SF
sfr fail-close
class inspection_default
inspect dns preset_dns_map
inspect esmtp
inspect ftp
inspect h323 h225
inspect h323 ras
inspect ip-options
inspect netbios
inspect rsh
inspect rtsp
inspect sqlnet
inspect sunrpc
inspect tftp
inspect xdmcp
policy-map type inspect dns migrated_dns_map_1
message-length maximum client auto
message-length maximum 512
service-policy global_policy global
prompt hostname context
no call-home reporting anonymous
profile CiscoTAC-1
no active
destination address http
destination address email
destination transport-method http
subscribe-to-alert-group diagnostic
subscribe-to-alert-group environment
subscribe-to-alert-group inventory periodic monthly 8
subscribe-to-alert-group configuration periodic monthly 8
subscribe-to-alert-group telemetry periodic daily
hpm topN enable
: end

Video Example of URL Filtering with FirePOWER

Hope this latest #ConfigBytes was helpful!

The Journey to CCIE #2 Starts Now


Game On Old Friend



It’s hard to believe that it’s been almost 2 years since I passed the R/S lab and my digits (40755) were assigned. I remember the numbers had just passed 40k, and I was so hoping to get 40007.

This way I could be 007. <GRIN>

Now I’m ready for the next challenge. My motivation for CCIE DC was simple. First, I wanted to challenge myself yet again. Second, I feel strongly that a deep understanding of UCS and virtualization helps me stay relevant when it comes to private cloud conversations, which all the cool kids are having. Finally, I suck at storage. If storage were a weakness of mine, it would be like green kryptonite to Clark.

All that said, I also miss the behind-the-wheel configuration and troubleshooting. I’m a pre-sales SE and spend most of my time these days in design sessions, product updates, and evangelizing new solutions. What better way to get serious hands-on than a CCIE lab?

Right before Christmas 2014, I took the CCIE DC written and failed it by 1-2 questions. I was so upset about carrying that disappointment through the holidays. Jan 8th was my date of redemption and I passed with a 953/1000.

I purchased workbooks from INE, leveraged their All Access Pass program, and have about half the lab gear in one of our Cisco offices. Just don’t have enough juice. <FACEPALM>

I’m also going to leverage VIRL and the UCS Platform Emulator for my studies.

Now it’s time to lock down and get this lab banged out in November. T-Minus 4 months… #TickTock


CCIE Data Center Lab Exam v1.0 

Lab Equipment and Software Versions

Passing the lab exam requires a depth of understanding that is difficult to obtain without hands-on experience. Early in your preparation, you should arrange access to equipment similar to that used on the exam, listed below.

The lab exam tests any feature that can be configured on the equipment and the NXOS versions indicated below. Occasionally, you may see more recent NXOS versions installed in the lab, but you will not be tested on the new features of a release unless indicated below.

  • Cisco Catalyst Switch 3750
  • Cisco 2511 Terminal Server
  • MDS 9222i
  • Nexus 7009
    • (1) Sup
    • (1) 32 Port 10Gb (F1 Module)
    • (1) 32 Port 10Gb (M1 Module)
  • Nexus 5548
  • Nexus 2232
  • Nexus 1000v
  • UCS C200 Series Server
    • VIC card for C-Series
  • UCS-6248 Fabric Interconnects
  • UCS-5108 Blade Chassis
    • B-200 Series Blades
    • Palo mezzanine card
    • Emulex mezzanine card
  • Cisco Application Control Engine Appliance – ACE4710
  • Dual attached JBODs

Software Versions

  • NXOS v6.x on Nexus 7000 Switches
  • NXOS v5.x on Nexus 5000 Switches
  • NXOS v4.x on Nexus 1000v
  • NXOS v5.x on MDS 9222i Switches
  • UCS Software release 2.x Fabric Interconnect
  • Software Release A5(1.0) for ACE 4710
  • Cisco Data Center Manager software v5.x

ACE!? Really!??!?!?



ConfigBytes: Nexus 6000/5600 Latency & Buffer Monitor



Episode 2
Platforms: Nexus 6000 & 5600 (UPC-based ASIC)


Latency Monitor:

Full Documentation

The switch latency monitoring feature marks each ingress and egress packet with a timestamp value. To calculate the latency of each packet through the system, the switch compares the ingress timestamp with the egress timestamp. The feature lets you display historical latency averages between all pairs of ports, as well as real-time latency data.

You can use the latency measurements to identify which flows are impacted by latency issues. In addition, the statistics generated by the switch latency monitoring feature allow you to plan network topologies, manage incident response, and identify root causes for application issues in the network. You can also use the statistics to provide a Service Level Agreement (SLA) for latency-intensive applications.

Configuration Example for Switch Latency Monitoring

Requires 7.x code

This example shows how to configure switch latency monitoring:

switch(config)# hardware profile latency monitor base 800
switch(config)# interface ethernet 1/1
switch(config-if)# packet latency interface ethernet 1/2 mode linear step 40
switch(config-if)# packet latency interface ethernet 1/3-4 mode exponential step 40
switch(config-if)# packet latency interface ethernet 1/5 mode custom low 40 high 1200
switch(config)# interface ethernet 2/1
switch(config-if)# packet latency interface ethernet 1/1 mode exponential step 80
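To make the linear vs. exponential step modes above a little more concrete, here’s a rough Python sketch of how histogram bin boundaries could be laid out from a base and step. This is illustrative only, not NX-OS code: the exact binning the hardware uses is in the latency monitoring documentation, and the doubling behavior for exponential mode is my assumption.

```python
# Illustrative sketch: latency histogram bin boundaries for the two
# step modes, using the base 800 / step 40 values from the example.
# (The doubling behavior for exponential mode is an assumption.)

def linear_bins(base_ns, step_ns, count=8):
    """Bin boundaries that grow by a fixed step."""
    return [base_ns + i * step_ns for i in range(count)]

def exponential_bins(base_ns, step_ns, count=8):
    """Bin boundaries whose spacing doubles at each step."""
    return [base_ns + step_ns * (2 ** i - 1) for i in range(count)]

print(linear_bins(800, 40, 4))       # [800, 840, 880, 920]
print(exponential_bins(800, 40, 4))  # [800, 840, 920, 1080]
```

Either way, the idea is the same: linear mode gives you evenly spaced buckets, while exponential mode keeps fine resolution near the base latency and coarse resolution for outliers.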

Buffer Utilization Histogram:

Full Documentation

The Buffer Utilization Histogram feature enables you to analyze the maximum queue depths and buffer utilization in the system in real time. Instantaneous (real-time) buffer utilization information is supported by the hardware. You can use software to obtain a history of buffer usage by polling the hardware at regular intervals. Obtaining a historic timeline of buffer usage gives a better picture of the traffic pattern in the system and helps with traffic engineering. Ultimately, you are able to make better use of the hardware buffer resources.

On the Cisco Nexus device, every three ports of 40 Gigabit Ethernet or every 12 ports of 10 Gigabit Ethernet have access to a shared 25 Mb packet buffer. 15.6 Mb are reserved for ingress and 8.6 Mb are reserved for egress. The remaining space is used for SPAN and control packets.
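As a quick sanity check on those numbers, the leftover space for SPAN and control traffic works out like this (a trivial sketch; the figures are simply the ones quoted above):

```python
# Quick arithmetic on the shared-buffer carve-up described above
# (figures as stated in the text; units are megabits).
total_mb = 25.0
ingress_mb = 15.6
egress_mb = 8.6

span_and_control_mb = round(total_mb - ingress_mb - egress_mb, 1)
print(span_and_control_mb)  # 0.8 Mb left for SPAN and control packets
```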

The Buffer Utilization Histogram enables you to do the following:

  • Configure buffer utilization history measurements on the interested ports.
  • View buffer utilization over an interval of time.
  • Configure either a slow or a fast polling mode.
  • Copy collected statistics to the buffer_util_stats file on the bootflash drive every hour to allow for later analysis. Each hour, the collected statistics are appended to the end of the file with a header containing a timestamp and the interface name.

Configuration Example for Buffer Utilization:

Requires 7.x code

switch# configure terminal
switch(config)# interface ethernet 1/1
switch(config-if)# hardware profile buffer monitor

Output Examples for Buffer Utilization Histogram

2015-05-14 10.05.56 am

Write Histogram Data to File & Syslog Alert via EEM/Python

Python Script:

import sys
import re
from cisco import cli

def parse_and_print_interface(input_string):
    print "Received input - {0}".format(input_string)
    # Pull the interface name (e.g. Ethernet1/1) out of the syslog message
    result = re.findall(r'\bEthernet\w+\W+\w+', input_string)
    print result
    show_cli_cmd = "show hardware profile buffer monitor interface " + result[0] + " history detail"
    print show_cli_cmd
    # Append a timestamped buffer histogram snapshot to bootflash
    time1 = cli("show clock")
    buffer_output = cli(show_cli_cmd)
    with open("/bootflash/EEM_buffer_log", "a") as target:
        target.write(time1 + buffer_output + "\n")
    # Append a timestamped burst-counter snapshot as well
    time2 = cli("show clock")
    burst_output = cli("show interface burst-counters")
    with open("/bootflash/EEM_burst_log", "a") as target:
        target.write(time2 + burst_output + "\n")

def main():
    parse_and_print_interface(sys.argv[1])

if __name__ == "__main__":
    main()

EEM Script:

event manager applet burst_monitor

  event syslog pattern "bigsurusd"

  action 1 cli source -l "$_syslog_msg"

CCIE Data: Lab Blueprint 1.1c Implementing Port Channels


CCIE Data Center Lab Blueprint

1.1c Implementing Port Channels


ConfigBytes #2

Port Channels

A port channel bundles physical links into a channel group to create a single logical link that provides the aggregate bandwidth of up to 16 physical links. If a member port within a port channel fails, the traffic previously carried over the failed link switches to the remaining member ports within the port channel.

  • F and M Series line card ports cannot be mixed in the same port-channel.
  • On a single switch, the port-channel compatibility parameters (speed, duplex, etc.) must be the same among all the port-channel members.
  • Use port-channels for resiliency and aggregation of throughput.
  • Up to 8 member links per port-channel prior to NX-OS 5.1; 16 member links with NX-OS 5.1 and later.
  • L2 & L3 port-channels available on NX-OS
  • Port-channel interface ID range 1-4096
  • Configuration changes made to the logical port-channel interface are inherited by the individual member interfaces.
  • You can use static port channels, with no associated aggregation protocol, for a simplified configuration. For more flexibility, you can use LACP. When you use LACP, the link passes protocol packets. You cannot configure LACP on shared interfaces.
  • PAgP is NOT supported on NX-OS.
  • The port channel is operationally up when at least one of the member ports is up and that port’s status is channeling. The port channel is operationally down when all member ports are operationally down.

Note: After a Layer 2/3 port becomes part of a port channel, you can no longer apply configurations to the individual member ports; you must apply the configuration to the entire port channel.

2015-04-06 08.14.44 am

Compatibility Requirements

When you add an interface to a channel group, the software checks certain interface attributes to ensure that the interface is compatible with the channel group. For example, you cannot add a Layer 3 interface to a Layer 2 channel group. The Cisco NX-OS software also checks a number of operational attributes for an interface before allowing that interface to participate in the port-channel aggregation.

The compatibility check includes the following operational attributes:

  • (Link) speed capability
  • Access VLAN
  • Allowed VLAN list
  • Check rate mode
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration
  • Layer 3 ports—Cannot have subinterfaces
  • MTU size
  • Media type, either copper or fiber
  • Module Type
  • Network layer
  • Port mode
  • SPAN—Cannot be a SPAN source or a destination port
  • Speed configuration
  • Storm control
  • Tagged or untagged
  • Trunk native VLAN

Use the show port-channel compatibility-parameters command to see the full list of compatibility checks that the Cisco NX-OS uses.


You can only add interfaces configured with the channel mode set to on to static port channels, and you can only add interfaces configured with the channel mode as active or passive to port channels that are running LACP. You can configure these attributes on an individual member port. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel.


Alternatively, you can force ports with incompatible parameters to join the port channel if the following parameters are the same:

  • (Link) speed capability
  • Speed configuration
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration


Port Channel Load Balancing

  • Port channels provide load balancing by default
  • Port-channel load balancing uses L2 (MAC), L3 (IP), or L4 (port) to select the link
  • SRC or DST or both SRC and DST
  • Per switch (global) or per module. Per module takes precedence over per switch
  • L3 default is SRC/DST IP address
  • L2/non-IP default is SRC/DST MAC address
  • 6.0(1) for F series line card L2 load balancing
  • Must be in the default VDC to configure

You can configure load balancing either for the entire system or for specific modules, regardless of the VDC. Port-channel load balancing is a global setting across all VDCs.

If the ingress traffic is Multiprotocol Label Switching (MPLS) traffic, the software looks under the labels for the IP address on the packet.

The load-balancing algorithms that use port channels do not apply to multicast traffic. Regardless of the load-balancing algorithm you have configured, multicast traffic uses the following methods for load balancing with port channels:

  • Multicast traffic with Layer 4 information—Source IP address, source port, destination IP address, destination port
  • Multicast traffic without Layer 4 information—Source IP address, destination IP address
  • Non-IP multicast traffic—Source MAC address, destination MAC address
Note: Devices that run Cisco IOS can optimize the behavior of the member port ASICs after a single-member failure if you enter the port-channel hash-distribution command. The Cisco Nexus 7000 Series device performs this optimization by default and does not require or support this command.

Cisco NX-OS Release 6.1(3) supports a new Result Bundle Hash (RBH) mode to improve load balancing on port-channel members on Cisco Nexus 7000 M Series I/O XL modules and on F Series modules. With the new RBH modulo mode, the RBH result is based on the actual count of port-channel members.
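The core idea behind hash-based member selection, and the RBH modulo mode in particular, can be sketched in plain Python. This is not the ASIC hash (I’m using CRC32 as a stand-in, and the field names and values are made up for illustration); the point is that the hash result is reduced over the actual number of member links, and the same flow always maps to the same member.

```python
# Illustrative sketch of port-channel member selection: hash the
# flow's src/dst fields and take the result modulo the actual member
# count. CRC32 is a stand-in for the hardware hash.
import zlib

def pick_member(src_ip, dst_ip, src_port, dst_port, num_members):
    key = "{}|{}|{}|{}".format(src_ip, dst_ip, src_port, dst_port)
    rbh = zlib.crc32(key.encode())  # stand-in "result bundle hash"
    return rbh % num_members        # modulo over the member count

# The same flow always lands on the same member link:
a = pick_member("10.0.0.1", "10.0.0.2", 49152, 443, 4)
b = pick_member("10.0.0.1", "10.0.0.2", 49152, 443, 4)
print(a == b)  # True
```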






LACP

  • LACP is disabled by default. You must enable the feature first.
  • Up to 16 active interfaces with NX-OS 5.1 and later
  • 8 active and 8 standby links before 5.1
  • Modes are active, passive, or on (static port-channel, NO LACP)
  • On mode (static port channel) is the DEFAULT mode

Both the passive and active modes allow LACP to negotiate between ports to determine if they can form a port channel based on criteria such as the port speed and the trunking state.


The passive mode is useful when you do not know whether the remote system, or partner, supports LACP.


Ports can form an LACP port channel when they are in different LACP modes if the modes are compatible as in the following examples:


  • A port in active mode can form a port channel successfully with another port that is in active mode.
  • A port in active mode can form a port channel with another port in passive mode.
  • A port in passive mode cannot form a port channel with another port that is also in passive mode, because neither port will initiate negotiation.
  • A port in on mode is not running LACP and cannot form a port channel with another port that is in active or passive mode.
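The pairing rules above boil down to two conditions: neither side can be in on mode (static mode never speaks LACP), and at least one side must actively initiate. A minimal sketch:

```python
# Sketch of the LACP mode-pairing rules: "on" never negotiates LACP,
# and at least one side must be active to initiate negotiation.

def can_form_lacp(mode_a, mode_b):
    if "on" in (mode_a, mode_b):
        return False                         # static mode, no LACP
    return "active" in (mode_a, mode_b)      # someone must initiate

print(can_form_lacp("active", "active"))    # True
print(can_form_lacp("active", "passive"))   # True
print(can_form_lacp("passive", "passive"))  # False
print(can_form_lacp("on", "active"))        # False
```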


The LACP system ID is the combination of the LACP system priority and the MAC address. System priority values range from 1-32,768; a lower value means a higher system priority, with 1 being the highest.

Port priority values range from 1-65535. Port priority + port number (interface ID) = LACP port ID.

A lower port ID value means a higher priority to be chosen for forwarding (active vs. standby links). The default port priority is 32,768.
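Because both identifiers are "priority first, then tie-breaker," the comparisons fall out naturally if you model them as tuples. A quick sketch (the MAC addresses, priorities, and port numbers below are made-up example values):

```python
# Sketch of LACP tie-breaking: lower (system_priority, MAC) wins the
# system election; lower (port_priority, port_number) is preferred
# when choosing active vs. standby links. Values are hypothetical.

def system_id(priority, mac):
    return (priority, mac)

def pick_active_ports(ports, max_active):
    """ports: list of (port_priority, port_number) tuples."""
    return sorted(ports)[:max_active]

a = system_id(1, "00:11:22:33:44:55")       # e.g. lacp system-priority 1
b = system_id(32768, "00:aa:bb:cc:dd:ee")   # default priority
print(min(a, b))  # the priority-1 system wins the negotiation

links = [(32768, 3), (32768, 1), (100, 7)]
print(pick_active_ports(links, 2))  # [(100, 7), (32768, 1)]
```

This is also why setting lacp system-priority 1 on N7K1 (as in the lab task below in this post’s example) guarantees it controls the negotiation.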


Prerequisites for Port Channeling

Port channeling has the following prerequisites:

  • You must be logged onto the device.
  • If necessary, install the Advanced Services license and enter the desired VDC.
  • All ports in the channel group must be in the same VDC.
  • All ports for a single port channel must be either Layer 2 or Layer 3 ports.
  • All ports for a single port channel must meet the compatibility requirements. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • You must configure load balancing from the default VDC.

Guidelines and Limitations

Port channeling has the following configuration guidelines and limitations:

  • The LACP port-channel minimum links and maxbundle feature is not supported for host interface port channels.
  • You must enable LACP before you can use that feature.
  • You can configure multiple port channels on a device.
  • Do not put shared and dedicated ports into the same port channel. (See “Configuring Basic Interface Parameters,” for information about shared and dedicated ports.)
  • For Layer 2 port channels, ports with different STP port path costs can form a port channel if they are compatibly configured with each other. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • In STP, the port-channel cost is based on the aggregated bandwidth of the port members.
  • After you configure a port channel, the configuration that you apply to the port channel interface affects the port channel member ports. The configuration that you apply to the member ports affects only the member port where you apply the configuration.
  • LACP does not support half-duplex mode. Half-duplex ports in LACP port channels are put in the suspended state.
  • You must remove the port-security information from a port before you can add that port to a port channel. Similarly, you cannot apply the port-security configuration to a port that is a member of a channel group.
  • Do not configure ports that belong to a port channel group as private VLAN ports. While a port is part of the private VLAN configuration, the port channel configuration becomes inactive.
  • Channel member ports cannot be a source or destination SPAN port.
  • You cannot configure the ports from an F1 and an M1 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • You cannot configure the ports from an M1 and M2 series linecard in the same port channel.
  • You cannot configure the ports from an F2e and an F3 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • Beginning with Cisco NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F1 series linecard.
  • F1 Series modules do not support load balancing of non-IP traffic based on a MAC address. If ports on an F1 Series module are used in a port channel and non-IP traffic is sent over the port channel, Layer 2 traffic might get out of order.
  • Only F Series and the XL type of M Series modules support the RBH modulo mode.
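One of the guidelines above notes that the STP cost of a port channel is based on the aggregate bandwidth of its members. Assuming the 802.1t "long" path-cost method (cost = 20,000,000,000 / bandwidth in kbps), the effect is easy to see; the helper below is just an illustration of that formula, not NX-OS output:

```python
# Sketch of STP port-channel cost under the 802.1t long path-cost
# method: cost = 20,000,000,000 / bandwidth in kbps, so bundling
# links lowers the cost proportionally.

def long_path_cost(bandwidth_kbps):
    return 20_000_000_000 // bandwidth_kbps

ten_gig = 10_000_000  # 10 Gbps expressed in kbps
print(long_path_cost(ten_gig))      # 2000 for a single 10G link
print(long_path_cost(4 * ten_gig))  # 500 for a 4 x 10G port channel
```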


Feature History for Configuring Port Channels

  • 6.2(2): Display policy errors on interfaces and VLANs. Added the show interface status error policy command.
  • 6.2(2): Prevent traffic drop during bidirectional flow on F2 or F2e modules. Added the asymmetric keyword to the port-channel load-balance command to improve load balancing across port channels.
  • 6.1(3): Result Bundle Hash load balancing. Support for the RBH modulo mode to improve load balancing across port channels.
  • 6.1(3): Minimum links for FEX fabric port channel. This feature was introduced.
  • 6.1(1): Port channel hash distribution. Support for fixed and adaptive mode.
  • 6.0(1): Load balancing supports F2 modules. Added support for F2 modules in load balancing across port channels.
  • 5.2(1): Port channels. Support increased to 528 port channels.
  • 5.1(1): Minimum links and maxbundle for LACP. This feature was introduced.
  • 4.2(1): Port channels. Support increased to 256 port channels.
  • 4.0(1): Port channels. This feature was introduced.


Example Lab Question and Configuration


Port Channel Task

Assuming that more links will be added later, with the desire for minimal traffic disruption (LACP), configure the following:

Configure trunking on port channel 100 from N7K1 to UCS FI-A, and ensure that the same port channel number is used later from the UCS side.


interface Ethernet1/22
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 100 mode active (LACP)
  no shutdown

Configure trunking on port channel 200 from N7K1 to UCS FI-B, and ensure that the same port channel number is used later from the UCS side.


interface Ethernet1/24
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 200 mode active (LACP)
  no shutdown


Ensure that both of these port channels transition immediately to a state of forwarding traffic.

“Int port-channel 100” & “Int port-channel 200”

“spanning-tree port type edge trunk”


Ensure that the N7K1 is the primary device in LACP negotiation. Ensure that the hashing algorithm takes L3 and L4 for both source and destination into account.

“lacp system-priority 1” Lower system priority value = higher priority


“port-channel load-balance src-dst ip-l4port”


Trunk only previously created VLANs 100,200,300,400,500 southbound from N7K1 to both FIs.


Verify with "show port-channel summary"