
The Network. Intuitive.

A New Network for a New Era

Well, the cat is finally out of the bag…

I've been biting my lip for the last several months while working on campus designs with customers. That's because internally at Cisco, all the buzz was around bringing SDN and, most importantly, intent-driven networking to the campus in a BIG way. This is very much akin to how Cisco transformed the data center with ACI. In fact, I've heard verbatim from customers: "why doesn't Cisco have an ACI-like solution for the campus?"

Like I said earlier, I had to bite my lip each time I heard this comment unless we went through the mutual NDA process, and even then we provided only a brief glimpse of what was coming.

I'd like to focus on ACI fabric automation and deployment as I draw a comparison to what I envision Software-Defined Access (SD-Access/SDA) will be.

In an ACI data center, I simply cable my spine/leaf switches and plug my APIC controllers into the leaf switches. I then go through a five-minute setup process to define my credentials, TEP pool, infrastructure VLAN ID, and a couple of other simple prompts on the APIC controller.

At this point, my ACI fabric is ready to go and all I need to do is register my leaf switches to the fabric, give them a name and ID, and I'm off to the object/policy creation steps. Once my policy model and objects are set, it really becomes rinse and repeat. The key with this intent-based networking is agility and automation at scale.

I didn't have to give each leaf a management IP, specify VLANs, credentials, access methods, or trunk ports, set up routing protocols, etc. While that's how I've been doing things for over two decades, recently my eyes were opened to what happens to that traditional/static model at scale. Quite frankly, it falls apart unless you have some awesome scripting folks automating box-by-box configs with tools like Ansible/Jinja/Python.

In addition, native/embedded security is critical to detect and mitigate threats in the campus network. Detecting threats in encrypted traffic is a pretty amazing “nerd knob”.

In closing, I see a bright future for the campus network. A future where the campus wired/wireless/WAN has embedded security functionality and deep contextual information (abstracting subnet/VLAN IDs) about attached devices, is intent-driven to allow automation at scale, and is intuitive enough to deliver actionable and predictive insights.

If you’re going to Cisco Live next week, expect some major deep dive sessions on Cat9K, DNA, and more.

-shaun

#WeAreCisco #Innovation

#CiscoDNA #NetworkIntuitive


CCIE DNA: Reality or Myth?


It all started at #CLUS

Unfortunately, I was unable to attend Cisco Live US in Las Vegas this year. Don’t shed any tears for me as I was fortunate enough to have customers, friends, and co-workers attend. They got me some sweet swag and provided a play-by-play as things unfolded.

One such morsel of information was regarding a “CCIE DNA” or “CCIE GUI”.

At first I just sat in front of my monitor drifting off into space, thinking about what the format of such a practical exam would look like. Would it be exploratory like my transition experience from R/S v3 to v4 (open-ended questions, remove open-ended questions, add troubleshooting, leverage virtual & physical environments, etc.)?

Then I envisioned an entire exam based on APIC-EM/APIC-DC, NFV, Postman, and lots of mouse clicking. It was this very thought that made me break out in a cold sweat at the possibility of CLI withdrawal.

This was roughly 6 weeks ago… Now that the dust has settled, I decided to dig into this “rumor” a little more. I was especially motivated after I observed confusion in the twittersphere today.

Reality

  • At #CLUS 2016 our commander in chief, Mr. Chuck Robbins, provided insight into the importance of the Digital Network Architecture (DNA). It's not so much a product as an embrace of emerging technologies such as automation, mobility, cloud, IoT, and analytics. Chuck also discussed how we've never brought the application and the network together from a visibility perspective.
  • My understanding is Chuck also discussed a DNA user group that would be certifying engineers with reference to the CCIE tracks. I believe this is where some folks walked away thinking Chuck had announced a standalone CCIE DNA track.
  • I did some fact finding with our very own CCDE/CCAr program manager, Elaine Lopes @elopes01 and the reality is somewhere in the middle. 

The plan is to incorporate the DNA architecture and other evolving technologies into the pertinent CCxE tracks vs. being a separate track.

I can already see hints of this in the current (v2.1) CCDE written blueprint I downloaded. There's a new section labeled "5.0: Evolving Technologies". While it doesn't explicitly say "DNA", it does cover network programmability/SDN and cloud, which are core to DNA.


The “evolving technologies” section is NOT isolated to the CCDE either!
You can read more about it at Elaine’s blog titled “Myth Busters & Evolving Technologies” 


Disclaimer: This is the current plan as I know it. However, as with anything in our field it’s always subject to change. <GRIN>

My 2c FWIW

I'm excited that we're putting evolving technologies into the various blueprints. There isn't a day that goes by where a customer conversation doesn't include leveraging cloud workloads, making sense of all the analytical (especially infosec) data collected, network programmability, or "SDN".

In addition, I feel strongly that using the generic topic of "Evolving Technologies" gives the CCxE program managers the ability to keep the exams fresh and relevant. This is at least the case for the written exams; how evolving technologies are incorporated into the practical is still TBD.

My thought is that the CCxE tracks will start to incorporate DNA into both the written and practicals. How that story unfolds will be one that I’ll watch closely and post updates on.

I’m waiting for a CCIE R/S candidate to say “Gomez, you got an instance of APIC-EM I can lab on?”.


CCIE Data Center: Version 2.0

Woah… Deja Vu


This all seems so familiar…

OH YEA! I went through this once before already. I took the CCIE R/S version 3 lab with the high (naive) hopes of passing it on my first attempt. #n00b

The challenge I had with the R/S v4 update was that it felt like the content managers had a serious case of ADD. Open-ended questions, no open-ended questions, troubleshooting, etc… It was frustrating that I had to experience every possible derivative of the v4 lab. I'm just glad I passed before the v5 lab blueprint was out.

Now here I am, ready to rock the lab in January, and we announce a v2 lab update. Don't get me wrong, I really dig the changes. I only wish they had happened sooner, so I'd be studying the new (relevant) curriculum.

Let’s start out with the domain changes.

Domain comparison between CCIE Data Center v1.0 and CCIE Data Center v2.0

CCIE Data Center v1.0

  1. Cisco Data Center Architecture
  2. Cisco Data Center Infrastructure-Cisco NX-OS
  3. Cisco Storage Networking
  4. Cisco Data Center Virtualization
  5. Cisco Unified Computing System
  6. Cisco Application Networking Services

CCIE Data Center v2.0

  1. Cisco Data Center L2/L3 Technologies
  2. Cisco Data Center Network Services
  3. Data Center Storage Networking and Compute
  4. Data Center Automation and Orchestration
  5. Data Center Fabric Infrastructure
  6. Evolving Technologies

Thoughts: Focus on skills & technologies vs. hardware. I like what I see so far. You still need to possess design, implementation, and troubleshooting skills, just with less emphasis on knowing all the intricacies of a certain product. Adding things like automation, cloud, and ACI to the blueprint is a VERY good idea since these subjects are top of mind with customers.

Topics no longer included in CCIE Data Center v2.0

  • Implement Data Center application high availability and load balancing
  • Implement FCIP features

Thoughts: No more ACE/WAAS/FCIP. Yea, that’s a good thing considering ACE went EoL back in 2013. I just don’t see enough customers using FCIP these days, so I guess that’s also a good one to remove.

Lab Equipment & Software List

[Table: CCIE DC v2.0 lab equipment and software list]

Thoughts: If you look at the updated 2.0 lab hardware, there is no MDS at all. Goodbye 9222i, you will be missed. IP Storage FTW!

The new thing that catches my eye is the update to the next-gen FEX (2300) and N5K (5600). I'm very happy about this, as the 5672 has been a great (low latency/1us) L2/native L3 ToR for storage. Deep buffers (25 MB per 12 ports of 10G) help, and it doesn't hurt that this switch supports unified ports (Ethernet/FC/FCoE).

The servers have been refreshed to M4s, the M-Series (cloud-scale workloads) chassis has been added, and the Emulex mezzanine card has been removed.

Now my favorite part: the networking gear update. N9Ks + ACI were added, and the 7K was updated to a 7004 with SUP2E (more VDCs) and F3 line cards. Glad to see the M/F line card mix replaced; the complexity of remembering which cards had what capabilities was a pain. The F1s really needed to go!

The Diagnostic Module

Thoughts: This is probably the most controversial change. I know this is the direction to align with the other CCIE tracks; however, this is also the area in which many candidates will have MANY questions.

Let me post (inline) all that I have on the subject. In many ways this feels like real-world scenarios: I get this all the time from customers, and it's like figuring out a puzzle. I love doing this in the real world; I just hope the exam diagnostic section captures this experience naturally.

Diagnostic Module Details

The new Diagnostic module, which is 60 minutes long, assesses the skills required to properly diagnose network issues without device access. These skills include:

  • Analyze
  • Correlate: discern among multiple sources of documentation (for example, e-mail threads, network topology diagrams, console outputs, logs, and even traffic captures)

In the Diagnostic module, candidates need to make choices between pre-defined options to indicate:

  • What is the root cause of an issue
  • Where the issue is located in the diagram
  • What critical piece of information allows us to identify the root cause
  • What piece of information is missing to be able to identify the root cause

The Configuration and Troubleshooting module consists of one topology, similar to CCIE Data Center v1.0. The length of the Configuration and Troubleshooting module is seven hours. At the beginning of the module, the candidate has a full overview of the entire module and can choose whether or not to work on items in sequence, depending on the candidate's comfort level, the overall scenario, and question interdependencies.

The Diagnostic and Configuration and Troubleshooting modules in the Lab exam are delivered in a fixed sequence: the candidate starts the day with the 1-hour Diagnostic module, followed by the 7-hour Configuration and Troubleshooting module. The entire Lab exam lasts up to eight hours. Note that candidates are not allowed to go back and forth between modules.

For the Diagnostic module, no device access is provided. Candidates are given various pieces of information (for example, emails, debug outputs, and network diagrams of the kind provided to a Data Center support engineer assisting a customer in determining the root cause of an issue, or the information handed over by a colleague who is stuck on a troubleshooting issue).

Within the Diagnostic module, the items are presented in a similar format as within the Written exam. The module includes multiple-choice, drag-and-drop, or even point-and-click style items. The major difference between the Written exam and the Diagnostic module is that the items in the Diagnostic module (called troubleshoot tickets) contain a set of documents that the candidate must consult in order to understand and identify the root cause of the issue presented. Candidates need to analyze and correlate information (after discerning between valuable and worthless pieces of information) in order to make the right choice among the pre-defined options provided.

The troubleshoot tickets will not require candidates to type in order to provide the answer. All tickets will be closed-ended, so grading will be deterministic, ensuring a fair and consistent scoring process. The new module allows us to grant credit to candidates who are able to accurately identify the root cause of a networking issue but fail to resolve it within specific constraints (as in the Configuration and Troubleshooting module).

Real-life experience is certainly the best training to prepare for this module. Candidates with limited experience should focus on discovering, practicing and applying efficient and effective troubleshooting methodologies that are used for any realistic networking challenge.

Passing Criteria

In order to pass the Lab exam, the candidate must meet both of the following conditions:

  • The minimum cut-score of each individual module must be achieved.
  • The total score of both modules together must be above the minimum value of the combined cut-score. The point value(s) of the items in each module is known to the candidate. Note that points are only granted when all requirements (and sometimes restrictions) of the item are met. There is no partial scoring for any items.


Closing Thoughts: I would like to think that I’ll pass the CCIE DC 1.0 lab on the 1st attempt this January. If not, I’ll have until July 22nd to pass the current blueprint. After that… I’ll have to figure out if I want to adapt and conquer v2 or just move on to something else like the CCDE.

Important Dates:

 

CCIE Data Center Written Exam v1.0 (350-080 CCIE DC)

Last day to test: July 22, 2016

CCIE Data Center Lab Exam v1.0

Last day to test: July 22, 2016

 

CCIE Data Center Written Exam v2.0 (400-151 CCIE DC)

Available for testing: July 25, 2016

CCIE Data Center Lab Exam v2.0

Available for testing: July 25, 2016

Reference Links: https://learningcontent.cisco.com/cln_storage/text/cln/marketing/ccie-dc-examtopic-delta-v1-v2-01.pdf

ConfigBytes: ASA 5506x w/ FirePOWER Services

#ConfigBytes

Getting Started with the ASA5506x & FirePOWER Services

 

Official Quick Start Guide:

http://www.cisco.com/c/en/us/td/docs/security/asa/quick_start/5506X/5506x-quick-start.html

FirePOWER User Guide:

http://www.cisco.com/c/en/us/td/docs/security/firesight/541/firepower-module-user-guide/asa-firepower-module-user-guide-v541.html

FirePOWER Services for ASA Data Sheet:

http://www.cisco.com/c/en/us/products/collateral/security/asa-5500-series-next-generation-firewalls/datasheet-c78-733916.html

 

TL;DR Key Points

  • Since the ASA5506x doesn't have built-in switch capabilities (yet), you will need an L2 switch to connect the Management interface (used by the FirePOWER services module) and your inside ASA interface for management. If you have an L3 switch, the FirePOWER management interface can be on a different subnet from your inside ASA interface.
  • Download the ASDM 7.4(3) image, ASA 9.4(1)3, and the latest FirePOWER/SourceFire sensor patch (5.4.1.2 at this time). Place these files on the ASA flash, upgrade, and point to the new ASDM file.
  • Create a username/password w/ PRIV 15 for ASDM access. "username Wu-Tang password KillaBeesOnTheSwarm privilege 15"
  • I highly recommend using the ASDM Startup Wizard; this is much easier than a console session ("session sfr console") to the FirePOWER services module for setup of management.
  • Default username/password for the SourceFire module is admin/Sourcefire
  • Upgrade FirePOWER through ASDM or FireSIGHT. Remember you can use ASDM or FireSIGHT to manage the FirePOWER services.
  • Install your FirePOWER licenses
  • Don't forget to configure a service policy on the ASA to redirect traffic to the FirePOWER module (see the snippet below).
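
Here's the traffic-redirect piece, distilled from the full config at the end of this post. This sketch assumes fail-close behavior (traffic is dropped if the SFR module goes down); use "sfr fail-open" instead if you'd rather let traffic keep flowing when the module is unavailable:

class-map global-class-SF
 match any
policy-map global_policy
 class global-class-SF
  sfr fail-close
service-policy global_policy global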

[Diagram: lab topology]

Final Config

5506xFPS(config)# sh run
: Saved
: Serial Number: <removed>
: Hardware: ASA5506, 4096 MB RAM, CPU Atom C2000 series 1250 MHz, 1 CPU (4 cores)
:
ASA Version 9.4(1)3
!
hostname 5506xFPS
domain-name cisco.lab
enable password <removed>
xlate per-session deny tcp any4 any4
xlate per-session deny tcp any4 any6
xlate per-session deny tcp any6 any4
xlate per-session deny tcp any6 any6
xlate per-session deny udp any4 any4 eq domain
xlate per-session deny udp any4 any6 eq domain
xlate per-session deny udp any6 any4 eq domain
xlate per-session deny udp any6 any6 eq domain
names
!
interface GigabitEthernet1/1
nameif outside
security-level 0
ip address dhcp setroute
!
interface GigabitEthernet1/2
shutdown
no nameif
no security-level
no ip address
!
interface GigabitEthernet1/3
shutdown
no nameif
no security-level
no ip address
!
interface GigabitEthernet1/4
shutdown
no nameif
no security-level
no ip address
!
interface GigabitEthernet1/5
shutdown
no nameif
no security-level
no ip address
!
interface GigabitEthernet1/6
shutdown
no nameif
no security-level
no ip address
!
interface GigabitEthernet1/7
shutdown
no nameif
no security-level
no ip address
!
interface GigabitEthernet1/8
description Inside_2
nameif inside2
security-level 100
ip address 10.100.220.1 255.255.255.0
!
interface Management1/1
management-only
no nameif
no security-level
no ip address
!
boot system disk0:/asa941-3-lfbff-k8.SPA
ftp mode passive
clock timezone EST -5
clock summer-time EDT recurring
dns server-group DefaultDNS
domain-name cisco.lab
same-security-traffic permit inter-interface
same-security-traffic permit intra-interface
pager lines 24
logging enable
logging buffer-size 8192
logging asdm-buffer-size 250
logging console emergencies
logging asdm alerts
mtu outside 1500
mtu inside2 1500
icmp unreachable rate-limit 1 burst-size 1
icmp deny any outside
asdm image disk0:/asdm-743.bin
no asdm history enable
arp timeout 14400
no arp permit-nonconnected
!
nat (inside2,outside) after-auto source dynamic any interface
route inside2 10.0.0.0 255.0.0.0 10.100.220.2 1
route inside2 172.16.0.0 255.240.0.0 10.100.220.2 1
route inside2 192.168.0.0 255.255.0.0 10.100.220.2 1
timeout xlate 3:00:00
timeout pat-xlate 0:00:30
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
timeout tcp-proxy-reassembly 0:01:00
timeout floating-conn 0:00:00
user-identity default-domain LOCAL
http server enable
http 10.100.220.0 255.255.255.0 inside2
no snmp-server location
no snmp-server contact
sysopt noproxyarp outside
service sw-reset-button
crypto ipsec security-association pmtu-aging infinite
crypto ca trustpoint ASDM_Launcher_Access_TrustPoint_0
enrollment self
fqdn none
subject-name CN=10.100.220.1,CN=5506xFPS
keypair ASDM_LAUNCHER
crl configure
crypto ca trustpoint ASDM_TrustPoint0
crl configure
crypto ca trustpoint ASDM_TrustPoint1
enrollment terminal
crl configure
crypto ca trustpool policy
crypto ca certificate chain ASDM_Launcher_Access_TrustPoint_0
<removed>
quit
telnet timeout 5
ssh scopy enable
ssh stricthostkeycheck
ssh pubkey-chain
server 10.100.220.153
ssh timeout 5
ssh version 2
ssh key-exchange group dh-group1-sha1
console timeout 0
dhcpd address 10.100.220.10-10.100.220.199 inside2
dhcpd dns 216.144.187.199 8.8.8.8 interface inside2
dhcpd lease 28800 interface inside2
dhcpd enable inside2
!
threat-detection basic-threat
threat-detection statistics port
threat-detection statistics protocol
threat-detection statistics access-list
threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200
ntp server 129.6.15.30 source outside prefer
dynamic-access-policy-record DfltAccessPolicy
username asa password encrypted privilege 15
username admin password encrypted privilege 15
!
class-map inspection_default
match default-inspection-traffic
class-map global-class-SF
match any
!
!
policy-map type inspect dns preset_dns_map
parameters
message-length maximum client auto
message-length maximum 512
policy-map global_policy
description Global+SF
class global-class-SF
sfr fail-close
class inspection_default
inspect dns preset_dns_map
inspect esmtp
inspect ftp
inspect h323 h225
inspect h323 ras
inspect ip-options
inspect netbios
inspect rsh
inspect rtsp
inspect sqlnet
inspect sunrpc
inspect tftp
inspect xdmcp
policy-map type inspect dns migrated_dns_map_1
parameters
message-length maximum client auto
message-length maximum 512
!
service-policy global_policy global
prompt hostname context
no call-home reporting anonymous
call-home
profile CiscoTAC-1
no active
destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
destination address email callhome@cisco.com
destination transport-method http
subscribe-to-alert-group diagnostic
subscribe-to-alert-group environment
subscribe-to-alert-group inventory periodic monthly 8
subscribe-to-alert-group configuration periodic monthly 8
subscribe-to-alert-group telemetry periodic daily
hpm topN enable
Cryptochecksum:8c074bd2be57c9a8df6e364e77b07ae7
: end

Video Example of URL Filtering with FirePOWER

Hope this latest #ConfigBytes was helpful!

The Journey to CCIE #2 Starts Now

Game On Old Friend


 

It's hard to believe that it's been almost 2 years since I passed the R/S lab and my digits (40755) were assigned. I remember the numbers had just passed 40k, and I was so hoping to get 40007.

This way I could be 007. <GRIN>

Now I'm ready for the next challenge. My motivation for CCIE DC was simple. First, I wanted to challenge myself yet again. Second, I feel strongly that a deep understanding of UCS & virtualization helps me stay relevant when it comes to private cloud conversations, which all the cool kids are having. Finally, I suck at storage. Storage is to me what green kryptonite is to Clark.


All that said, I also miss the behind-the-wheel configuration and troubleshooting. I'm a pre-sales SE and spend most of my time these days in design sessions, product updates, and evangelizing new solutions. What better way to get serious hands-on time than a CCIE lab?

Right before Christmas 2014, I took the CCIE DC written and failed it by 1-2 questions. I was so upset about carrying that disappointment through the holidays. Jan 8th was my date of redemption and I passed with a 953/1000.

I purchased workbooks from INE and leveraged their All Access Pass program. I also have about half the lab gear in one of our Cisco offices; I just don't have enough juice to power it all. <FACEPALM>

I'm also going to leverage VIRL and the UCS Platform Emulator for my studies.

Now it’s time to lock down and get this lab banged out in November. T-Minus 4 months… #TickTock

 

CCIE Data Center Lab Exam v1.0 

Lab Equipment and Software Versions

Passing the lab exam requires a depth of understanding difficult to obtain without hands-on experience. Early in your preparation you should arrange access to equipment similar to that used on the exam, and listed below.

The lab exam tests any feature that can be configured on the equipment and the NXOS versions indicated below. Occasionally, you may see more recent NXOS versions installed in the lab, but you will not be tested on the new features of a release unless indicated below.

  • Cisco Catalyst Switch 3750
  • Cisco 2511 Terminal Server
  • MDS 9222i
  • Nexus 7009
    • (1) Sup
    • (1) 32-port 10Gb (F1 module)
    • (1) 32-port 10Gb (M1 module)
  • Nexus 5548
  • Nexus 2232
  • Nexus 1000v
  • UCS C200 Series Server
    • VIC card for C-Series
  • UCS-6248 Fabric Interconnects
  • UCS-5108 Blade Chassis
    • B-200 Series Blades
    • Palo mezzanine card
    • Emulex mezzanine card
  • Cisco Application Control Engine Appliance – ACE4710
  • Dual attached JBODs

Software Versions

  • NXOS v6.x on Nexus 7000 Switches
  • NXOS v5.x on Nexus 5000 Switches
  • NXOS v4.x on Nexus 1000v
  • NXOS v5.x on MDS 9222i Switches
  • UCS Software release 2.x Fabric Interconnect
  • Software Release A5(1.0) for ACE 4710
  • Cisco Data Center Manager software v5.x

ACE!? Really!??!?!?


#CCIEDC

ConfigBytes: Nexus 6000/5600 Latency & Buffer Monitor

#CONFIGBYTES

Episode 2
Platforms: Nexus 6000 & 5600 (UPC based ASIC)

 

Latency Monitor:

Full Documentation

The switch latency monitoring feature marks each ingress and egress packet with a timestamp value. To calculate the latency for each packet in the system, the switch compares the ingress timestamp with the egress timestamp. The feature allows you to display historical latency averages between all pairs of ports, as well as real-time latency data.

You can use the latency measurements to identify which flows are impacted by latency issues. In addition, the statistics generated by the switch latency monitoring feature allow you to plan network topologies, manage incident responses, and identify root causes for application issues in the network. You can also use the statistics to provide a Service Level Agreement (SLA) for latency-intensive applications.

Configuration Example for Switch Latency Monitoring

Requires 7.x code

This example shows how to configure switch latency monitoring:

switch(config)# hardware profile latency monitor base 800
switch(config)# interface ethernet 1/1
switch(config-if)# packet latency interface ethernet 1/2 mode linear step 40
switch(config-if)# packet latency interface ethernet 1/3-4 mode exponential step 40
switch(config-if)# packet latency interface ethernet 1/5 mode custom low 40 high 1200
switch(config)# interface ethernet 2/1
switch(config-if)# packet latency interface ethernet 1/1 mode exponential step 80

Buffer Utilization Histogram:

Full Documentation

The Buffer Utilization Histogram feature enables you to analyze the maximum queue depths and buffer utilization in the system in real time. Instantaneous or real time buffer utilization information is supported by the hardware. You can use software to obtain the history of the buffer usage by polling the hardware at regular intervals. Obtaining an historic timeline of the buffer usage provides a better picture of the traffic pattern in the system and helps in traffic engineering. Ultimately, you are able to make better use of the hardware buffer resources.

On the Cisco Nexus device, every three ports of 40 Gigabit Ethernet or every 12 ports of 10 Gigabit Ethernet have access to a shared 25 Mb packet buffer. 15.6 Mb is reserved for ingress and 8.6 Mb is reserved for egress; the remaining space (roughly 0.8 Mb) is used for SPAN and control packets.

The Buffer Utilization Histogram enables you to do the following:

  • Configure buffer utilization history measurements on the interested ports.
  • View buffer utilization over an interval of time.
  • Configure either a slow or a fast polling mode.
  • Copy collected statistics to the buffer_util_stats file on the bootflash drive every hour to allow for later analysis. The collected statistics are appended to the end of the file after an hour and a timestamp is placed in the header that has the interface name.

Configuration Example for Buffer Utilization:

Requires 7.x code

switch# configure terminal
switch(config)# interface ethernet 1/1
switch(config-if)# hardware profile buffer monitor
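
To verify what the switch is collecting, you can pull the histogram history and the burst counters straight from the CLI (these are the same show commands the EEM/Python example below wraps):

switch# show hardware profile buffer monitor interface ethernet 1/1 history detail
switch# show interface burst-counters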

Output Examples for Buffer Utilization Histogram

[Screenshot: buffer utilization histogram output]

Write Histogram Data to File & Syslog Alert via EEM/Python

Python Script:

# Runs under the switch's onboard Python 2.7 interpreter.
# EEM invokes it as: source <script>.py -l "$_syslog_msg", so the
# triggering syslog message arrives in sys.argv[2].
import re
import sys

from cisco import cli


def parse_and_print_interface(input_string):
    print "Received input - {0}".format(input_string)
    # Pull the interface name (e.g. "Ethernet1/1") out of the syslog message
    result = re.findall(r'\bEthernet\w+\W+\w+', input_string)
    print result
    if not result:
        return
    show_cli_cmd = "show hardware profile buffer monitor interface " + result[0] + " history detail"
    # show_cli_cmd = "show hardware profile buffer monitor all history detail"
    print show_cli_cmd
    # Append a timestamped buffer-histogram snapshot to bootflash
    time1 = cli("show clock")
    buffer_history = cli(show_cli_cmd)
    target = open("/bootflash/EEM_buffer_log", "a")
    target.write(time1 + buffer_history + "\n")
    target.close()
    # Append a timestamped burst-counter snapshot as well
    time2 = cli("show clock")
    burst_counters = cli("show interface burst-counters")
    target = open("/bootflash/EEM_burst_log", "a")
    target.write(time2 + burst_counters + "\n")
    target.close()


def main():
    print sys.argv
    parse_and_print_interface(sys.argv[2])


if __name__ == "__main__":
    sys.exit(main())

EEM Script:

event manager applet burst_monitor
  event syslog pattern "bigsurusd"
  action 1 cli source nameofbufferscript.py -l "$_syslog_msg"


CCIE Data: Lab Blueprint 1.1c Implementing Port Channels

CCIE Data Center Lab Blueprint

1.1c Implementing Port Channels

 

ConfigBytes #2

Port Channels

A port channel bundles physical links into a channel group to create a single logical link that provides the aggregate bandwidth of up to 16 physical links. If a member port within a port channel fails, the traffic previously carried over the failed link switches to the remaining member ports within the port channel.

  • F and M series line card port members cannot be mixed in a port channel.
  • On a single switch, the port-channel compatibility parameters (speed, duplex, etc.) must be the same among all the port-channel members on the physical switch.
  • Use port channels for resiliency and aggregation of throughput.
  • 8 member links per port channel prior to NX-OS 5.1; 16 member links in NX-OS 5.1 and later.
  • L2 & L3 port channels are available on NX-OS.
  • Port-channel interface ID range is 1-4096.
  • Configuration changes made to the logical port-channel interface are inherited by the individual member interfaces (see the sketch below).
  • You can use static port channels, with no associated aggregation protocol, for a simplified configuration. For more flexibility, you can use LACP. When you use LACP, the link passes protocol packets. You cannot configure LACP on shared interfaces.
  • PAgP is NOT supported on NX-OS.
  • The port channel is operationally up when at least one of the member ports is up and that port's status is channeling. The port channel is operationally down when all member ports are operationally down.

Note: After a Layer 2/3 port becomes part of a port channel, all configuration must be done on the port-channel interface; you can no longer apply configuration to individual port-channel members.
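
As a minimal sketch of that inheritance behavior (the interface and channel numbers are just examples), anything applied to the logical port-channel interface flows down to its members:

feature lacp
interface ethernet 1/1-2
  channel-group 10 mode active
interface port-channel 10
  switchport
  switchport mode trunk

Here the trunk configuration on port-channel 10 is automatically applied to e1/1 and e1/2; there's no need to touch the member interfaces again.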


Compatibility Requirements

When you add an interface to a channel group, the software checks certain interface attributes to ensure that the interface is compatible with the channel group. For example, you cannot add a Layer 3 interface to a Layer 2 channel group. The Cisco NX-OS software also checks a number of operational attributes for an interface before allowing that interface to participate in the port-channel aggregation.

The compatibility check includes the following operational attributes:

  • (Link) speed capability
  • Access VLAN
  • Allowed VLAN list
  • Check rate mode
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration
  • Layer 3 ports—Cannot have subinterfaces
  • MTU size
  • Media type, either copper or fiber
  • Module Type
  • Network layer
  • Port mode
  • SPAN—Cannot be a SPAN source or a destination port
  • Speed configuration
  • Storm control
  • Tagged or untagged
  • Trunk native VLAN

Use the show port-channel compatibility-parameters command to see the full list of compatibility checks that the Cisco NX-OS uses.

 

You can only add interfaces configured with the channel mode set to on to static port channels, and you can only add interfaces configured with the channel mode as active or passive to port channels that are running LACP. You can configure these attributes on an individual member port. If you configure a member port with an incompatible attribute, the software suspends that port in the port channel.

 

Alternatively, you can force ports with incompatible parameters to join the port channel if the following parameters are the same:

  • (Link) speed capability
  • Speed configuration
  • Duplex capability
  • Duplex configuration
  • Flow-control capability
  • Flow-control configuration

 

Port Channel Load Balancing

  • Port channels provide load balancing by default
  • Port-channel load balancing uses L2 (MAC), L3 (IP), or L4 (port) to select the link
  • SRC or DST or both SRC and DST
  • Per switch (global) or per module. Per module takes precedence over per switch
  • L3 default is SRC/DST IP address
  • L2/non-IP default is SRC/DST MAC address
  • 6.0(1) for F series line card L2 load balancing
  • Must be in the default VDC to configure

You can configure load balancing either for the entire system or for specific modules, regardless of the VDC; the port-channel load-balancing setting is global across all VDCs.

If the ingress traffic is Multiprotocol Label Switching (MPLS) traffic, the software looks under the labels for the IP address on the packet.

The load-balancing algorithms that use port channels do not apply to multicast traffic. Regardless of the load-balancing algorithm you have configured, multicast traffic uses the following methods for load balancing with port channels:

  • Multicast traffic with Layer 4 information—Source IP address, source port, destination IP address, destination port
  • Multicast traffic without Layer 4 information—Source IP address, destination IP address
  • Non-IP multicast traffic—Source MAC address, destination MAC address
Note: Devices that run Cisco IOS can optimize the behavior of the member port ASICs after a single-member failure if you enter the port-channel hash-distribution command. The Cisco Nexus 7000 Series device performs this optimization by default and does not require or support this command.

Cisco NX-OS Release 6.1(3) supports a new Result Bundle Hash (RBH) mode to improve load balancing on port-channel members on Cisco Nexus 7000 M Series I/O XL modules and on F Series modules. With the new RBH modulo mode, the RBH result is based on the actual count of port-channel members.
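
As a quick sketch, setting the global hash to use L3 + L4 information for both source and destination (the same method the lab example at the end of this post calls for) and then verifying it:

switch(config)# port-channel load-balance src-dst ip-l4port
switch# show port-channel load-balance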

 

LACP


  • Feature is disabled by default; you must enable it first (feature lacp).
  • Up to 16 active interfaces with NX-OS 5.1 and later.
  • 8 active and 8 standby interfaces before 5.1.
  • Modes are active, passive, or on (static port channel, NO LACP).
  • On mode (a static port channel) is the DEFAULT mode.

Both the passive and active modes allow LACP to negotiate between ports to determine if they can form a port channel based on criteria such as the port speed and the trunking state.

 

The passive mode is useful when you do not know whether the remote system, or partner, supports LACP.

 

Ports can form an LACP port channel when they are in different LACP modes if the modes are compatible as in the following examples:

 

  • A port in active mode can form a port channel successfully with another port that is in active mode.
  • A port in active mode can form a port channel with another port in passive mode.
  • A port in passive mode cannot form a port channel with another port that is also in passive mode, because neither port will initiate negotiation.
  • A port in on mode is not running LACP and cannot form a port channel with another port that is in active or passive mode.

 

The LACP System ID is the combination of the LACP system priority and the MAC address. Valid system priority values are 1-32,768; a lower value means a higher system priority, with 1 being the highest priority.

 

Port priority values are from 1-65535. Port priority + port number (interface ID) = LACP Port ID. A lower Port ID value means a higher priority to be chosen for forwarding/active vs. standby links. The default port priority is 32,768.
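
A short sketch of tuning both knobs (the priority values here are arbitrary examples):

switch(config)# lacp system-priority 1
switch(config)# interface ethernet 1/1
switch(config-if)# lacp port-priority 100

Verify the resulting system ID with "show lacp system-identifier".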

 

Prerequisites for Port Channeling

Port channeling has the following prerequisites:

  • You must be logged onto the device.
  • If necessary, install the Advanced Services license and enter the desired VDC.
  • All ports in the channel group must be in the same VDC.
  • All ports for a single port channel must be either Layer 2 or Layer 3 ports.
  • All ports for a single port channel must meet the compatibility requirements. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • You must configure load balancing from the default VDC.

Guidelines and Limitations

Port channeling has the following configuration guidelines and limitations:

  • The LACP port-channel minimum links and maxbundle feature is not supported for host interface port channels.
  • You must enable LACP before you can use that feature.
  • You can configure multiple port channels on a device.
  • Do not put shared and dedicated ports into the same port channel. (See “Configuring Basic Interface Parameters,” for information about shared and dedicated ports.)
  • For Layer 2 port channels, ports with different STP port path costs can form a port channel if they are compatibly configured with each other. See the “Compatibility Requirements” section for more information about the compatibility requirements.
  • In STP, the port-channel cost is based on the aggregated bandwidth of the port members.
  • After you configure a port channel, the configuration that you apply to the port channel interface affects the port channel member ports. The configuration that you apply to the member ports affects only the member port where you apply the configuration.
  • LACP does not support half-duplex mode. Half-duplex ports in LACP port channels are put in the suspended state.
  • You must remove the port-security information from a port before you can add that port to a port channel. Similarly, you cannot apply the port-security configuration to a port that is a member of a channel group.
  • Do not configure ports that belong to a port channel group as private VLAN ports. While a port is part of the private VLAN configuration, the port channel configuration becomes inactive.
  • Channel member ports cannot be a source or destination SPAN port.
  • You cannot configure the ports from an F1 and an M1 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • You cannot configure the ports from an M1 and M2 series linecard in the same port channel.
  • You cannot configure the ports from an F2e and an F3 series linecard in the same port channel because the ports will fail to meet the compatibility requirements.
  • Beginning with Cisco NX-OS Release 5.1, you can bundle up to 16 active links into a port channel on the F1 series linecard.
  • F1 Series modules do not support load balancing of non-IP traffic based on a MAC address. If ports on an F1 Series module are used in a port channel and non-IP traffic is sent over the port channel, Layer 2 traffic might get out of order.
  • Only F Series and the XL type of M Series modules support the RBH modulo mode.

 

Feature History for Configuring Port Channels

  • 6.2(2): Added the show interface status error policy command to display policy errors on interfaces and VLANs.
  • 6.2(2): Added the asymmetric keyword to the port-channel load-balance command to prevent traffic drops during bi-directional flows on F2/F2e modules.
  • 6.1(3): Added the RBH modulo mode (Result Bundle Hash load balancing) to improve load balancing across port channels.
  • 6.1(3): Introduced minimum links for FEX fabric port channels.
  • 6.1(1): Added support for port-channel hash distribution fixed and adaptive modes.
  • 6.0(1): Added support for F2 modules in port-channel load balancing.
  • 5.2(1): Port-channel support increased to 528 port channels.
  • 5.1(1): Introduced minimum links and maxbundle for LACP.
  • 4.2(1): Port-channel support increased to 256 port channels.
  • 4.0(1): Port channels feature introduced.

 

Example Lab Question and Configuration

 

Port Channel Task

Assuming that more links will be added later, with the desire for minimal traffic disruption (LACP), configure the following:

Configure trunking on port channel 100 from N7K1 to UCS FI-A, and ensure that the same port channel number is used later from the UCS side.

 

interface Ethernet1/22
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 100 mode active    (LACP)
  no shutdown

 

Configure trunking on port channel 200 from N7K1 to UCS FI-B, and ensure that the same port channel number is used later from the UCS side.

 

interface Ethernet1/24
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200,300,400,500
  channel-group 200 mode active    (LACP)
  no shutdown

 

Ensure that both of these port channels transition immediately to a state of forwarding traffic.

interface port-channel 100
  spanning-tree port type edge trunk
interface port-channel 200
  spanning-tree port type edge trunk

 

Ensure that N7K1 is the primary device in LACP negotiation. Ensure that the hashing algorithm takes L3 and L4 for both source and destination into account.

lacp system-priority 1    (lower system priority value = higher priority; range is 1-32768)
port-channel load-balance src-dst ip-l4port

 

Trunk only previously created VLANs 100,200,300,400,500 southbound from N7K1 to both FIs.

 

Verify with "show port-channel summary"; sample output below.
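
For reference, a healthy bundle in that summary looks roughly like this (abbreviated, illustrative output; SU = Layer 2 and up, P = member up and channeling):

N7K1# show port-channel summary
Group Port-Channel  Type  Protocol  Member Ports
100   Po100(SU)     Eth   LACP      Eth1/22(P)
200   Po200(SU)     Eth   LACP      Eth1/24(P)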

 

DocCD: http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/nx-os/interfaces/configuration/guide/b-Cisco-Nexus-7000-Series-NX-OS-Interfaces-Configuration-Guide/b-Cisco-Nexus-7000-Series-NX-OS-Interfaces-Configuration-Guide-6x_chapter_0111.html

 

Cisco Smart Install

This is my first post in a new series called “Config Bytes”.

My objective is simple. Take a technology that I’m working on with a customer and post the data points.

Overview:

A global company asked me if there was an easy way to provision switches for rapid deployment. They are somewhat limited on networking personnel, and it would save the team some time if they could automate the staging of switches before deployment. The basic requirements were a standardized image (depending on the platform) and an initial config for access switches. I had two viable solutions to match these requirements: 1) Prime Infrastructure Plug & Play, or 2) Smart Install.


Smart Install:

Since the launch of the 3850/3650 access layer switches, we've had slides that mention all the value-add features of the Catalyst line. One of those bullet points was Smart Install, and I remember it for the 3750X as well. At the end of 2014, we put out an updated configuration guide for Smart Install, which I used as the basis for my design and configuration. http://goo.gl/mtYrha

You can read up on all the details, but let me summarize a few key points.

  • Smart Install is a plug-and-play configuration and image-management feature that provides zero-touch deployment (ZTD) for new switches. You can ship a switch to a location, place it in the network, and power it on with no configuration required on the device.
  • There are two roles in the switch infrastructure: "clients" & "director".
  • The director can be a multilayer switch or a router.
  • Clients connect to the director and pull down their image and config without any intervention (ZTD).
  • If a client switch was already deployed, you must "wr erase" and reload without a startup-config for Smart Install to work. Out of the box, no intervention is required.
  • If using an L3 switch as the director, the Smart Install "vstack" VLAN must be up or the director can fall back to a client role. Just make sure the VLAN has at least one access port up/up if using that SVI for the director.
  • TFTP and DHCP services are required; however, they can co-reside on the director. This is how I configured it in the example below.
  • Make sure your director device has plenty of flash memory to store the images and configs. If you have many different PIDs, you're going to need more flash. I found that 2GB on the 3650/4500X was sufficient for my customer.
  • Be patient while the image is loaded to the client. This process takes time (sometimes up to an hour).
  • I found that using the .tar format for the images worked the best. I'm not even sure if the .bin format is supported.
  • If you want to verify the supported clients on the director, use the command "show vstack group built-in ?"

Table A-1 Supported Switches

Switch  Can be Director?  Can be Client? 
Catalyst 6500 Supervisor Engine 2T-10GE Yes No
Catalyst 4500 Supervisor Engine 6E, 6LE, 7E, 7LE Yes No
Catalyst 3850 Yes Yes
Catalyst 3750-X Yes Yes
Catalyst 3750-E Yes Yes
Catalyst 3750 Yes Yes
Catalyst 3650 Yes Yes
Catalyst 3560-X Yes Yes
Catalyst 3560-E Yes Yes
Catalyst 3560-C No Yes
Catalyst 3560 Yes Yes
Catalyst 2960-S No Yes
Catalyst 2960-SF No Yes
Catalyst 2960-C No Yes
Catalyst 2960-P No Yes
Catalyst 2960 No Yes
Catalyst 2975 No Yes
IE 2000 Yes Yes
IE 3000 Yes Yes
IE 3010 Yes Yes
SM-ES2 SKUs No Yes
SM-ES3 SKUs No Yes
NME-16ES-1G-P No Yes
SM-X-ES3 SKUs Yes Yes

Table A-2 Supported Routers 

Router  Can be Director?  Can be Client? 
Cisco 3900 Series Integrated Services Routers G2 Yes No
Cisco 2900 Series Integrated Services Routers G2 Yes No
Cisco 1900 Series Integrated Services Routers G2 Yes No
Cisco 3800 Series Integrated Services Routers Yes No
Cisco 2800 Series Integrated Services Routers Yes No
Cisco 1800 Series Integrated Services Routers Yes No

Table A-3 Minimum Software Releases for Directors and Clients

Directors  Minimum Software Release 
Catalyst 6500 Supervisor Engine 2T-10GE Cisco IOS Release 15.1(1)SY
Catalyst 4500 Supervisor Engine 7E and 7LE Cisco IOS Release XE 3.4SG
Catalyst 4500 Supervisor Engine 6E and 6LE Cisco IOS Release 15.1(2)SG
Catalyst 3850 Cisco IOS Release 3.2(0)SE
Catalyst 3650 Cisco IOS Release 3.3(0)SE
Cisco 3900, 2900, and 1900 Series Integrated Services Routers G2 Cisco IOS Release 15.1(3)T
Cisco 3800, 2800, and 1800 Series Integrated Services Routers Cisco IOS Release 15.1(3)T
Catalyst 3750-E, 3750, 3560-E, and 3560 Switches Cisco IOS Release 12.2(55)SE
Catalyst 3750-X and 3560-X Switches Cisco IOS Release 12.2(55)SE
SM-X-ES3 SKUs Cisco IOS Release 15.0(2)EJ

Table A-4 Minimum Software Releases for Clients

Smart-Install Capable Clients1 Minimum Software Release 
Catalyst 3750-E, 3750, 3560-E, and 3560 Switches Cisco IOS Release 12.2(52)SE
Catalyst 3750-X and 3560-X Switches Cisco IOS Release 12.2(53)SE2
Catalyst 3560-C Compact Switches Cisco IOS Release 12.2(55)EX
Catalyst 2960 and 2975 Switches Cisco IOS Release 12.2(52)SE
Catalyst 2960-S Switches Cisco IOS Release 12.2(53)SE1
Catalyst 2960-C Compact Switches Cisco IOS Release 12.2(55)EX1
Catalyst 2960-SF Cisco IOS Release 15.0(2)SE
Catalyst 2960- P Cisco IOS Release 15.2(2)SE
IE 2000 Cisco IOS Release 15.2(2)SE
IE 3000 Cisco IOS Release 15.2(2)SE
IE 3010 Cisco IOS Release 15.2(2)SE
SM-ES3 SKUs, NME-16ES-1G-P Cisco IOS Release 12.2(52)SE
SM-ES2 SKUs Cisco IOS Release 12.2(53)SE1
SM-X-ES3 SKUs Cisco IOS Release 15.0(2)EJ


Configuration Example:

n3tArk_3850#sh run | s vstack

description SmartInstall_vstack_lan
description smart_install_vstack_mgmt
vstack group custom 2960c product-id
 image flash:c2960c405-universalk9-tar.152-3.E.tar
 config flash:smartinstall_config_2960c.txt
 match WS-C2960C-12PC-L
vstack dhcp-localserver smart_install
 address-pool 192.168.200.0 255.255.255.0
 file-server 192.168.200.1
 default-router 192.168.200.1
vstack director 192.168.200.1
vstack basic

n3tArk_3850#sh run int vlan 1

interface Vlan1
description smart_install_vstack_mgmt
ip address 192.168.200.1 255.255.255.0

n3tArk_3850#sh run | s tftp

ip tftp source-interface Vlan777
tftp-server client_cfg.txt
tftp-server flash:smartinstall_config_2960c.txt
tftp-server flash:c2960c405-universalk9-tar.152-3.E.tar
tftp-server flash:2960c-imagelist.txt

n3tArk_3850#sh vstack status
SmartInstall: ENABLED


n3tArk_3850#sh vstack download-status
SmartInstall: ENABLED


 

That’s pretty much it! Here is a link to a YouTube video I created to show how easy this is to get up and running. https://www.youtube.com/watch?v=sOGMhTOt7Vs

Hope this was helpful. Please leave feedback in the comments section if I missed any key points or if you want me to elaborate more on something specific.

shaun

VIRL is HERE!


VIRL is HERE along with a new logo.

Dec 1st (aka Cyber Monday) brings us many good deals, including $50 off the $199 Personal Edition price (use code virl50 at checkout).

If you have not seen my previous posts on CML: basically, VIRL is the same as CML, just without TAC support and with limited scale (15 nodes). If you don't want to read through my previous posts, I'll summarize below.

http://www.4g1vn.com/2014/07/virlcml-update/ 
http://www.4g1vn.com/2014/09/cml-1-0-first-impressions-getting-started/

 

What is VIRL?

VIRL enables users to rapidly design, configure and simulate network topologies. The VIRL virtualization framework provides a platform for high-fidelity network simulations that can be used for hands-on training, education, testing and development.

  • VIRL provides the ability to design network topologies with a GUI
  • VIRL Personal Edition provides IOSv, IOS XRv, CSR1000v, and NX-OSv!
  • You can integrate real network environments with your virtual network simulations

 

More information about VIRL

  1. VIRL website: http://virl.cisco.com
  2. VIRL Community Support: http://virl-dev-innovate.cisco.com/
  3. Pricing:
    • $199.99 for VIRL Personal Edition Annual Subscription License
    • $79.99 for VIRL Personal Edition Academic Version (students & teachers)  Annual Subscription License
  4. Other promos: First 25 purchasers of Personal Edition and the Academic Version will get free VIRL t-shirts

Requirements

Verify that your PC or laptop meets the following minimum requirements:

• Host system must be able to access the Internet periodically

• Four CPU cores and 8GB of DRAM – more resources allows for larger simulations

• Intel VT-x / EPT or AMD-V / RVI virtualization extensions present and enabled in the BIOS

• 50GB of free disk space for installation

You must purchase and install one of the following supported hypervisors in order to run Cisco VIRL:

• VMware Fusion Pro v5.02 or later (including v6.x or v7.x)

• VMware Workstation v8.04 or later (including v9.x and 10.x)

• VMware Player v5.02 or later (including v6.x)

• ESXi 5.1 / 5.5 using the vSphere Client: ESXi 5.1U2 (Build 1483097) or ESXi 5.5U1 (Build 1623387)

These Hypervisors are not included as part of Cisco VIRL and must be purchased separately.