
VIRL/CML Update


Virtual Internet Routing Lab/Cisco Modeling Lab:

UPDATE (08.07.2014):

Cisco Modeling Labs 1.0 Corporate Edition
Available August 11, 2014

This is an excerpt from an email one of my colleagues received today.

“We are very excited to announce that Cisco Modeling Labs 1.0 Corporate Edition is expected to ship on Monday, August 11th (if this changes we’ll let you know).

As you know, Cisco Modeling Labs 1.0 Corporate Edition is a game changing product with powerful virtualization features that provide corporations and service providers around the world with agility, flexibility and cost savings.

Product information can be found at the following locations:

Thank you again for your patience and continued interest in Cisco Modeling Labs 1.0 Corporate Edition.

The CML Team”

PREFACE

I wanted to take a few moments to give an update on CML/VIRL. I have had many inquiries from my clients about CML/VIRL, so it makes sense to summarize those conversations and post something for those of us who can’t wait to get our hands on the first customer ship (FCS) of CML/VIRL. ***IMPORTANT*** CML is the TAC-supported version of VIRL. Just keep this in mind when we get to the “When” section of this post.

Who?

First off, what the heck is CML/VIRL? CML started off as a project called Virtual Internet Routing Lab (VIRL) and is a graphical front end to virtualized networking devices. Hold the phone! That sounds like IOU/IOU-WEB or GNS3/Dynamips… What makes CML/VIRL better? Glad you asked. With GNS3/Dynamips you’re emulating the ASIC/control-plane CPU hardware and running an actual IOS image on each emulated node. Each node is emulated, and in the case of GNS3/Dynamips your choices are limited to older hardware such as the 2600 and 7200 series routers. Only the monolithic IOS images are supported, not the newer IOS-XE images found on newer routers such as the ASR1K and 4451-X.

I have ALWAYS had GNS3/Dynamips on my laptops as a quick-and-dirty syntax validation tool and for small scenario recreates (BGP peering configs, ACL validation, etc.). It’s especially useful in offline environments, like studying for the CCIE R&S on a coast-to-coast flight. But alas, all was not good. My hardware choices were limited and the topologies were small, because if they grew too big my MacBook would become a personal space heater. This brings me to my biggest complaint about Dynamips: performance. Because the complete hardware is emulated to accommodate the original IOS image, it’s as slow as a Smart ForTwo (slowest 0-60 car).

With CML/VIRL, each node is a virtual appliance that runs images designed for x86. The hypervisor is KVM/QEMU, OpenStack handles orchestration, VM Maestro is the graphical topology editing tool, and AutoNetKit is used for network configuration and rendering. This allows your lab/topology to scale much better than Dynamips or IOU, perform better, and introduce other appliances into the environment, such as a Linux jumpbox or any other appliances we decide to support (no commitments here, just theory).

Now for a dose of sad news: L2 appliances are not supported in CML/VIRL.
Perhaps in the future this may be different, but for now it’s L3 only. You do get the vSwitch within KVM, so it’s not a total bust. In fact, this is a critical component for connecting your lab devices together and for connecting the virtual world to the physical world. Also, there is no serial interface support, Ethernet only. Again, perhaps this will change down the road…

What?

What virtual appliances will be supported?

1) IOS-XE: VM CSR1000v
2) IOS-XR: VM XRVR
3) NX-OS: VM vNXOS
4) IOS: VM vIOS
5) Servers/3rd-party appliances

The host OS is Ubuntu Server 12.04.2.

When/Where?

This is my number one question from clients. I personally first saw VIRL back at Cisco Live US 2013 in Orlando, FL. They had a demo setup just outside the WISP labs. That year I hosted my own WISP lab (Nexus 3548 Algo Boost technology) and got to spend a decent amount of time playing with the beta and talking to the team. Keep this key factor in mind: there are two VIRL platforms.

1) Individual customers

2) Corporate customers

For individual customers the target date is TBD (it was originally July 30th, 2014), and VIRL will be available to ANY customer registered on DevNet. This release is community supported. For more information on DevNet, please take a look at https://developer.cisco.com/site/devnet/home/index.gsp. For corporate customers joining the “dev-innovate” program (http://dev-innovate.cisco.com/), VIRL will be included in the software bundle. For corporate customers looking for a TAC-supported VIRL, that is the CML product, and the target date has moved from July 15th to August 11th, 2014. ***Disclaimer*** Target dates are subject to change.

Why?

Because you’re tired of getting burns from your laptop after running a 14-node simulation in GNS3/Dynamips, or you don’t want to deal with getting the right image for IOU/IOL (Cisco employees only, of course) <GRIN> In all seriousness, I personally have been looking for something more realistic/serious for a test/dev environment. There are many times when customers ask for input on designs and I want to validate a theory via syntax before making a recommendation. I have done this for over 7 years with either real hardware (sometimes VERY expensive) or Dynamips, albeit at a much smaller scale and with a limited node selection. My other major driver for VIRL is that one of my customers is an ASR 9K shop and I don’t know IOS-XR that well. What better way to learn and save on my home lab electricity bill? Here are some of the “official” use cases.

  • Learn or provide training on new IOS versions or releases without the cost of purchasing, deploying, and maintaining expensive hardware
  • Stage and/or develop device configurations ahead of actual deployments
  • Test new software capabilities without impacting actual networks or hardware
  • Evaluate changes to network architectures or configurations – what-if scenarios
  • Troubleshoot or diagnose control- or management-plane issues without scheduling network maintenance windows
  • Create and connect virtual instances of new hardware or solutions to real, existing networks to evaluate their impact, performance, or behavior

Each of these activities – prior to VIRL – required dedicated hardware resources that were both static and costly. VIRL, on the other hand, allows complete flexibility in the architectures that can be created, limited only by the compute resources (significantly less costly than network hardware) that can be dedicated to it.

How?

Deploy the Ubuntu VIRL OVA of course… Just kidding! While getting started really is that easy, getting everything set up and configured is a little more involved. There will also be a bare-metal installer. I’ll be updating this section shortly, when more public information is available. The resource requirements are pretty high for laptop/individual deployments. For example, XRv requires 1.3GB of RAM (real or swap). Keep this in mind, as it will limit how many VMs can be run. Real-world customer deployments will be based on properly sized UCS servers. Mac (OS X 10.7+)/PC (Windows 7) minimum requirements: 8-16GB RAM (4-8GB for VIRL) and 20GB of disk space.

Summary:

VIRL enables customers to support many critical missions – designing, training, release-testing, configuration-staging, and others – without the expense of buying and staging real network hardware.  Networks of any complexity can be created and tested using the same software that will run on the real routing platforms.

The cost of network hardware required for training, testing, certification, pre-deployment, and other non-production activities can be a significant burden for customers – from the lone individual studying for CCIE to the largest of SPs.  VIRL, by providing the ability to deploy large, multi-OS virtual networks on comparatively inexpensive compute platforms, can significantly reduce both capital expenses and the expenses – both monetary and time-wise – associated with deploying hardware for non-production activities.

If you found this post to be helpful, please leave feedback.

Thanks!


CCIE #40755 (Routing & Switching)


“It’s gonna take time, a whole lot of precious time, it’s going to take patience and time to do it right child.”
“It’s gonna take money, a whole lot of spending money, it’s going to take plenty of money, to do it right”

-George Harrison
Song: Got My Mind Set on You

I’m pretty sure George had the ladies on his mind and NOT the CCIE when he recorded that song. I can tell you no other lyrics resonate as strongly as these when it comes to my personal journey of becoming inducted into the League of Extraordinary Engineers. Yes my friends, after 5+ LONG years, I’m officially in da club. My number is 40755, and oh boy does it feel AWESOME.

Because this journey was very difficult (I would go so far as to say it’s the most difficult educational challenge I have ever committed myself to), it’s only right that I share my story with other CCIE candidates to instill hope and encouragement. If it were easy, everyone would be a CCIE. Just keep that in mind as you embark on your own journey.

And so the story begins in 2008, when I passed the CCIE R&S written and had only a small window to take the v3 lab. This was sometime in September, if I recall correctly. I was naive in thinking this was going to be cake; I mean, how hard could this lab really be? I thought I might only need 1-2 attempts and would have it done by the end of the year, no problem. Well, my first lab was v3 (lab guide printed on REAL paper in a binder) and I actually did pretty well. My major issues were managing the clock and weakness on certain security-related services. Other than that it was a noble attempt. This gave me confidence, but when I went to reschedule I realized something awful. The blueprint had changed and there were no more seats left for the v3 lab. Now here comes the madness: I was offered a “free” beta lab for the v4, and I accepted the challenge. Let’s just say that after taking the v4 beta, I was humbled in the most extreme way. Then began a series of radical format changes to the lab: open-ended questions, troubleshooting, removal of open-ended questions. I tried very hard to adapt to these changes, but as a poor test taker to begin with, it was very challenging to say the least.

I was working at a small ISP in Central, PA at the time of this endeavor. God opened up a great door of opportunity in August of 2010 and I jumped in feet first… Where did I go??? CISCO!!!

While this major transition was occurring, we were also expecting our third child. I started on August 1st and Leo was born on August 28th. Man, life was crazy, and through all this I stuck to my studies. I forget the details, but since my CCIE written was first passed in 2008, I had to take the written again before I could schedule another lab. I did this in December of 2010 and would wait a full year before taking the v4 exam again. My third attempt was in November of 2011, and this is where it gets interesting. I took the lab in San Jose instead of RTP this time. I flew out of Philadelphia airport and my laptop was stolen out of my checked luggage. The TSA agent even left one of those “inspected by TSA” tickets in the bag. It was a surgical strike, as only my laptop and power cable were removed from the bag. All my study notes were on that laptop… Needless to say, this was one heck of a trip. I did not pass, but did OK. The troubleshooting section was VERY tough.

Now pay attention, because this is where I made the biggest mistake. I took almost a full year before my next attempt. NEVER DO THIS!!! If you can manage it, keep coming back every 30-60 days; no more than 90 days. Things just got so busy between life and work that I waited yet ANOTHER year before diving back in. By this time RTP had a new proctor (David), and let me tell you all this: he is by far my favorite proctor. David constantly encouraged me and drove me to keep coming back ASAP. With his encouragement and such a strong support system behind me, I was able to pass on my 3rd consecutive attempt. It feels great to have my life back and to focus on the most important thing that was neglected… my family. While my wife and children supported me through this endeavor, there is no doubt that it took its toll on all of us. I could not have done this without the support of my family, friends, and colleagues. THANK YOU!!!

Passing lab experience:

September 28th, 2013

I drove down to RTP, NC from Central PA early Friday morning. My stomach was bothering me the night before, probably due to nerves. I get so sick just thinking about the exam that I was miserable every time I went to Building 3. I got to RTP at about 3pm on Friday and ate a bland meal at the Chipotle in Morrisville. I went back to the hotel room, practiced INE labs, and reviewed my TS notes. My weak area is still services, because there are so many and being an expert in all of them is impossible (at least for me), but there are some I take pride in my knowledge of, like EEM and multicast. Here’s the worst part: I could NOT sleep. I think I may have had 45 minutes to an hour, but that’s it. No matter what I tried, I could not fall asleep. In addition, my stomach was a wreck. I drank half a bottle of Pepto in hopes of relief. It did not come… Now, for those of you who know me: I don’t drink or smoke. Heck, eating some spicy food is about as risky a move as I make when it comes to what goes into my body. I had NEVER drunk anything like Red Bull or Monster in my life. Those of you who know me would probably say that I’m wired to begin with. Why the heck would I even need something like that in the first place? Well, this morning I did, and my buddy John told me it had helped him get through the lab the week before. So I drove to Sheetz early in the morning and bought a Red Bull and a Starbucks energy drink. I settled on the Starbucks and drank the whole can. It was tasty, but what the heck was 80mg of caffeine going to do to me? I’ll tell you what it did. I became Beavis, aka Cornholio. I was so wired within 30 minutes of drinking it that I forgot I was even tired. When I got to Building 3 we all went in and I began right away. Thanks to the power of caffeine, I was typing at like 150 WPM. I hit some major roadblocks in TS, but the energy infusion was too powerful an ally for TS to overcome. Based on my results, I felt that Starbucks and I had conquered TS.
OK, well, perhaps the Holy Spirit and me, because there were some miraculous things that happened in the last 15-20 minutes.

I don’t even waste time; I jump right into configuration, and heck, I don’t think I even used the bathroom up to this point. No time for potty breaks. I get my configuration and my smile is ear to ear after reading through it. Let’s just say this: it was a test that jived with my skills. I felt good about the objective this config set before me. I felt like I was running on autopilot. My typing is loud and fast, and I’m starting to feel bad because none of the other candidates are using earplugs. I must have sounded like an old-school author with his typewriter. By lunch I’m done with all L2/L3 and have started on some of the services. Best time I’d had yet. Lunch is quick and I get back to it. By 1:30, I’m done with everything I could possibly configure. I take the next 45 minutes for verification, config backups, and a reload. I’m pretty sure I ended the lab a little after 2pm. My heart was still racing, but something strange happened to my body. My guess is all the caffeine wore off, as well as the adrenaline, and I was crashing. I actually went into the break room and sat in a chair for a quick power nap. David stopped by and we talked a little about the lab. I felt really good about it and told him, “If I don’t pass it this time, you might see a grown man crying.” To which he replied, “That’s nothing new.” Now comes the worst part… WAITING. I grab some food and head back to the hotel room. My intention was to eat and sleep, but again I could not fall asleep. My body and mind are a complete disaster. I’m waiting for this email with the results, and it probably won’t be till tomorrow that I find out if I did it. So, I do something that I have not really done in the last 5 years: enjoy life’s simple pleasures. I go to the local movie theater and see Riddick. It was OK, but no Pitch Black. By this time you would think sleep was inevitable, right? WRONG! I can’t sleep one wink. I get in the shower at 3:30am and check out of the hotel by 4am.
I’m on the road heading back to PA. I keep checking my email every chance I get; still nothing. I stop in VA for some rest and decide to check my email. THIS IS IT! I have a message. The anticipation is killing me; do I even want to look at this now… I did, and this is what I got!

  •  Your CCIE status is Certified ( CCIE# 40755 )
  • Your next CCIE Recertification due by September 28, 2015

I notify everyone via FB, Twitter, text, IM, calls, you name it. Then I crash in the car, only to wake up at like 10am. My excitement level at this point is sky high. I can’t contain myself when talking to people on the phone. I’m thinking about all the things I wanted to do when I passed: get a custom tag with my number, finally buy the pinball machine I have talked about for years. But the most important thing was this… reconnect with my wife and family. When I reflected on my attitude, especially when studying for each lab attempt, it was like I was a non-existent husband/father. So, it’s with great happiness and peace that I enjoy life again and return home, both physically and mentally.

In closing, I leave you candidates to be with the following wisdom.

1) Be prepared to make great sacrifices on this journey

2) Never give up

3) While it’s one of the most challenging journeys you can embark on, it’s also the most rewarding

4) Never give up

5) Always keep in perspective that all your hard work will make you a better engineer, regardless of whether you pass or not

6) Never give up

7) If you need a boost, drink some serious caffeine before taking the lab.

8) NEVER GIVE UP!

I want to again thank God, my family, friends, colleagues, and INE for the support and encouragement that was essential to my success. Oh! One more thing…

“And this time I know it’s for real, The feelings that I feel, I know if I put my mind to it, I know that I really can do it”

Man, that song was really made for CCIE candidates.


CCIE Studies: Performance Routing PfR/OER


Prologue

Hey, fellow CCIE candidates and networking geeks. Today I want to step deep into the realm of PfR, or Performance Routing. First, let’s go back in time to its predecessor, Optimized Edge Routing, or OER. As crazy as this sounds, OER came out in 2006 with IOS 12.3. So, technically, before all this SDN fanfare, Cisco had actually decoupled the control plane (part of it at least) from the data plane with OER/PfR back in the dizay.

DID THAT JUST BLOW YOUR MIND? THAT JUST HAPPENED! <GRIN>

OER/PfR was created to help with a major issue that plagues many mid-market customers to this day: proper load sharing and/or balancing at the edge of the network. Who wants to have redundant Internet connections, possibly even with diverse providers, and have one of those connections sit there idle until something blows up? The short answer: pretty much nobody. You’re paying for that circuit; you should be using it. Well, Shaun, why not just use BGP? That’s a great question! You sure could, advertising part of your networks out one connection and the remaining networks out the other. That would achieve a level of load sharing inbound to the enterprise. Traffic egressing the enterprise could also be split across the two connections. Sometimes the issue with BGP peering is the complexity and the requirements. When I worked at the SP, a class C (/24) was the longest prefix that you could advertise. I heard it’s now a /23, but that has not been confirmed. Working with ARIN for a direct assignment of two IPv4 /24s will be an exercise in patience. Remember, we are running out of IPv4 space; perhaps you could get an IPv6 block for half price… J/K. All that said, it can be a pain in the you-know-what to make this happen, and not all companies have the resources to manage that type of edge peering agreement with the providers.
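For illustration only, here is a minimal sketch of that split-advertisement BGP approach. All addresses, AS numbers, and prefix-list names are hypothetical, and a real deployment would also advertise a covering aggregate out both links for redundancy:

```
! Hypothetical AS 64512 holding 198.51.100.0/24 and 203.0.113.0/24,
! sending one /24 toward each upstream to share inbound load.
router bgp 64512
 network 198.51.100.0 mask 255.255.255.0
 network 203.0.113.0 mask 255.255.255.0
 neighbor 192.0.2.1 remote-as 64496
 neighbor 192.0.2.1 prefix-list TO-ISP-A out
 neighbor 192.0.2.5 remote-as 64499
 neighbor 192.0.2.5 prefix-list TO-ISP-B out
!
ip prefix-list TO-ISP-A permit 198.51.100.0/24
ip prefix-list TO-ISP-B permit 203.0.113.0/24
```

Inbound traffic for each /24 then prefers the provider it was advertised to; there is no awareness of link health or performance, which is exactly the gap OER/PfR fills.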

Well that’s where OER/PfR comes into play. Let’s keep this simple because OER/PfR can be quite a deep subject. Rather than base forwarding decisions on destination and lowest cost metric, why not take a path’s characteristics into consideration such as jitter, delay, utilization, load distribution, packet loss/health, or even MOS score? That’s the power of OER/PfR!!!

This is right from Cisco.com.
http://www.cisco.com/en/US/products/ps8787/products_ios_protocol_option_home.html

“PfR can also improve application availability by dynamically routing around network problems like black holes and brownouts that traditional IP routing may not detect. In addition, the intelligent load balancing capability of PfR can optimize path selection based on link use or circuit pricing.”

So, what did we do without BGP or OER/PfR? Typically, static routes, with a floating static route for the redundant link and IP SLA/object tracking for state monitoring (far-end reachability). Again, we are paying for something we can’t use. To quote Brian Dennis from INE: “It’s something we always accepted, like STP. You’re paying for something you can’t use.” The good news: you don’t need to live in that world any more. We have evolved, with technologies like FabricPath/TRILL, vPC, OER/PfR, and SDN. Man, it’s a good time to be into networking!
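As a sketch of that pre-PfR pattern (addresses, SLA number, and track number are all hypothetical):

```
! Probe far-end reachability on the primary link every 10 seconds.
ip sla 1
 icmp-echo 192.0.2.1 source-interface Serial1/1
 frequency 10
ip sla schedule 1 life forever start-time now
!
! Tie a track object to the probe result.
track 1 ip sla 1 reachability
!
! Primary default route follows the tracked probe; the floating static
! (administrative distance 250) is installed only when the track goes down.
ip route 0.0.0.0 0.0.0.0 192.0.2.1 track 1
ip route 0.0.0.0 0.0.0.0 192.0.2.5 250
```

The backup circuit sits completely idle until the probe fails, which is precisely the waste the quote above is complaining about.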

Let’s think about some use cases: Internet connection load sharing/balancing, application-specific traffic steering based on performance (latency), loss/delay-sensitive hosted IP telephony traffic, leveraging burstable circuits, etc.

In summary, PfR allows the network to intelligently choose link resources as needed to reduce operational costs. Sounds like a sales pitch, right? Well, I am a Cisco SE after all; it’s in my DNA. Plus, I found that ditty in one of the PfR FAQs.

OK, now that you have a good background on the origins of OER/PfR, let’s talk about the major difference between OER and PfR. In short, OER was destination-prefix based, and PfR expanded the capabilities to include route control on a per-application basis.

Let’s also get one major thing out of the way before we drill into the specifics. With a holistic view of the EDGE network, you’re able to accomplish this level of traffic engineering on a per-application basis. If something goes wrong within the PfR network devices, traffic FALLS BACK to old-school forwarding. Got that? No catastrophic failure where the routers are sticking their hands up screaming for help.

Requirements:

OK, let’s talk a little about the components required for a PfR edge network.

***IOS 15.1+ minimum recommended for production network***

Versioning: Major versions must match! If running 12.4(T), the version is 2.x. If running IOS 15, the version is 3.x. It’s OK to mix, say, a 2.1 and a 2.2, but not a 2.x and a 3.x; that is NOT supported.
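A quick way to sanity-check this on the MC (the Version fields also show up in the sample output later in this post) is to filter the master status output:

```
! On the MC: shows the master version plus a Version column per border router.
show oer master | include Version
!
! On a border router, the status output reports its own version:
show oer border
```

If the MC shows 3.x and a border shows 2.x (or vice versa), the peering will not come up cleanly.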

Border Router (BR): Sits in the data plane of the edge network; monitors prefixes and reports back to the MC.

Master Controller (MC): Centralized control plane for processing and the database for statistics collection.

1x Internal Interface: BRs peer with each other ONLY over internal interfaces (directly connected or via tunnel). Also used between BR and MC.

2x External Interfaces: OER/PfR expects traffic to flow between internal and external interfaces.
Route Control: Parent Route REQUIRED! This explanation is right from the Cisco FAQ.

A parent route is a route that is equal to, or less specific than, the destination prefix of the traffic class being optimized by Performance Routing. The parent route should have a route through the Performance Routing external interfaces. All routes for the parent prefix are called parent routes. For Performance Routing to control a traffic class on a Performance Routing external interface, the parent route must exist on the Performance Routing external interface. BGP and Static routes qualify as Performance Routing parent routes. In Cisco IOS Release 12.4(24)T and later releases, any route in RIB, with an equal or less specific mask than the traffic class, will qualify as a parent route.

For any route that PfR modifies or controls (BGP, Static, PIRO, EIGRP, PBR), having a Parent prefix in the routing table eliminates the possibility of a routing loop occurring. This is naturally a good thing to prevent in routed networks.
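As a hypothetical illustration, if PfR is optimizing the traffic class 10.10.10.0/24, either of the following static routes would qualify as a parent route, since each is equal to or less specific than the traffic class and points out an external interface:

```
! A default route out an external interface (the least specific possible parent)
ip route 0.0.0.0 0.0.0.0 Serial1/1
!
! ...or a less-specific covering route out the other external interface
ip route 10.10.0.0 255.255.0.0 Serial1/2
```

Without one of these in the RIB, PfR will refuse to control 10.10.10.0/24 at all, which is the loop-prevention behavior described above.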

Now, since I’m an active CCIE candidate, I’m gonna say this: IOS 12.4(T) has bugs with PfR. For one, the operative command syntax is still “oer,” and certain functionality just seems downright broken. My lab consists of real 3560s and ISR routers, so it’s not like emulation/GNS3/Dynamips is my issue. I cannot stress this enough: if doing a POC in a non-PROD environment, feel free to use IOS 12.4(T). In a real-world production environment, never settle for less than 15.1. The ASR 1K requires IOS XE 2.6 or higher for PfR support.

Hardware Platform Support: ISR G1 (RIP), G2, ASR, 7600, Cat6500, and 7200s (RIP)
Classic IOS Feature Set Required: SP Services/Advance IP/Enterprise/Advance Enterprise
Universal IOS Image: Data Package required

Configuration:

Pfr faq fig3.jpg

I was going to use a complex CCIE sample config, but there are so many good examples of PfR already on the Cisco PfR Wiki.

http://docwiki.cisco.com/wiki/PfR:Solutions

Instead, let me concentrate on the basic requirements starting with the border router.

BR Config: 

key chain PFR
 key 1
  key-string PFR

oer border
 logging
 local Loopback0
 master 8.8.8.8 key-chain PFR

ip route 0.0.0.0 0.0.0.0 Serial1/2 (PARENT ROUTE)
ip route 0.0.0.0 0.0.0.0 Serial1/1 (PARENT ROUTE)

MC Config: 

oer master
 logging
 !
 border 8.8.8.8 key-chain PFR
  interface Serial1/2 external
  interface Serial1/1 external
  interface Serial1/0 internal
 !
 learn
  throughput
  periodic-interval 0
  monitor-period 1
 mode route control
 resolve utilization priority 1 variance 10
 no resolve delay
 no resolve range

THAT’S IT!!! 

Granted, this is the most basic form of route control, but it will inject a route for the monitored prefix based on interface throughput utilization. I believe the default is 75% utilized.

Here are some useful commands to monitor/troubleshoot PfR.

“show pfr/oer master”

OER state: ENABLED and ACTIVE
Conn Status: SUCCESS, PORT: 3949
Version: 2.2
Number of Border routers: 1
Number of Exits: 2
Number of monitored prefixes: 1 (max 5000)
Max prefixes: total 5000 learn 2500
Prefix count: total 1, learn 1, cfg 0
PBR Requirements met
Nbar Status: Inactive

Border Status UP/DOWN AuthFail Version
8.8.8.8 ACTIVE UP 03:29:17 0 2.2

Global Settings:
max-range-utilization percent 20 recv 0
mode route metric bgp local-pref 5000
mode route metric static tag 5000
trace probe delay 1000
logging
exit holddown time 60 secs, time remaining 0

Default Policy Settings:
backoff 300 3000 300
delay relative 50
holddown 300
periodic 0
probe frequency 56
number of jitter probe packets 100
mode route control
mode monitor both
mode select-exit good
loss relative 10
jitter threshold 20
mos threshold 3.60 percent 30
unreachable relative 50
resolve utilization priority 1 variance 10

Learn Settings:
current state : STARTED
time remaining in current state : 115 seconds
throughput
no delay
no inside bgp
no protocol
monitor-period 1
periodic-interval 0
aggregation-type prefix-length 24
prefixes 100
expire after time 720

“show pfr/oer master border detail” 

Border Status UP/DOWN AuthFail Version
8.8.8.8 ACTIVE UP 03:31:46 0 2.2
Se1/2 EXTERNAL UP
Se1/1 EXTERNAL UP
Se1/0 INTERNAL UP

External Capacity Max BW BW Used Load Status Exit Id
Interface (kbps) (kbps) (kbps) (%)
——— ——– —— ——- ——- —— ——
Se1/2 Tx 1544 1158 0 0 UP 2
Rx 1544 0 0
Se1/1 Tx 1544 1158 0 0 UP 1
Rx 1544 0 0

“show ip cache flow”

IP packet size distribution (25713 total packets):
1-32 64 96 128 160 192 224 256 288 320 352 384 416 448 480
.000 .040 .000 .200 .000 .001 .000 .000 .000 .000 .000 .000 .000 .000 .000

512 544 576 1024 1536 2048 2560 3072 3584 4096 4608
.003 .000 .007 .000 .743 .000 .000 .000 .000 .000 .000

IP Flow Switching Cache, 4456704 bytes
2 active, 65534 inactive, 1007 added
16475 ager polls, 0 flow alloc failures
Active flows timeout in 1 minutes
Inactive flows timeout in 15 seconds
IP Sub Flow Cache, 533256 bytes
2 active, 16382 inactive, 1151 added, 1007 added to flow
0 alloc failures, 0 force free
1 chunk, 1 chunk added
last clearing of statistics never
Protocol Total Flows Packets Bytes Packets Active(Sec) Idle(Sec)
——– Flows /Sec /Flow /Pkt /Sec /Flow /Flow
TCP-Telnet 10 0.0 256 144 0.1 19.5 6.9
TCP-other 59 0.0 68 110 0.2 9.0 2.3
ICMP 13 0.0 1470 1500 1.3 52.3 3.5
Total: 82 0.0 313 1146 1.8 17.1 3.1

SrcIf SrcIPaddress DstIf DstIPaddress Pr SrcP DstP Pkts

“show pfr/oer master traffic-class”

OER Prefix Statistics:
Pas – Passive, Act – Active, S – Short term, L – Long term, Dly – Delay (ms),
P – Percentage below threshold, Jit – Jitter (ms),
MOS – Mean Opinion Score
Los – Packet Loss (packets-per-million), Un – Unreachable (flows-per-million),
E – Egress, I – Ingress, Bw – Bandwidth (kbps), N – Not applicable
U – unknown, * – uncontrolled, + – control more specific, @ – active probe all
# – Prefix monitor mode is Special, & – Blackholed Prefix
% – Force Next-Hop, ^ – Prefix is denied

DstPrefix Appl_ID Dscp Prot SrcPort DstPort SrcPrefix
Flags State Time CurrBR CurrI/F Protocol
PasSDly PasLDly PasSUn PasLUn PasSLos PasLLos EBw IBw
ActSDly ActLDly ActSUn ActLUn ActSJit ActPMOS ActSLos ActLLos
——————————————————————————–
7.7.7.0/24 N defa N N N N
INPOLICY 0 8.8.8.8 Se1/2 STATIC
U U 0 0 0 0 0 0
U U 0 0 N N N N

“show oer border routes static”

Flags: C – Controlled by oer, X – Path is excluded from control,
E – The control is exact, N – The control is non-exact

Flags Network Parent Tag
CE 7.7.7.0/24 0.0.0.0/0 5000

Epilogue:

Well folks, that’s all the steam I have left after pouring out my heart on PfR/OER. I hope this post was informative. Please drop me a line if you have any questions or if I was not clear on any of my points. I appreciate any and all feedback. In my mind, Cisco gave us a glimpse into the future of networking way back in 2006. With data center technologies evolving on a daily basis, it’s only a matter of time before there is an MC for the entire enterprise network rather than just the edge. Heck, Google is doing that already with 25% of all Internet traffic TODAY! Until next time, keep those blinky lights flashing.

shaun

Cisco UCS: Virtual Interface Cards & VM-FEX


Hello once again! Today I decided to talk about some Cisco innovations around the UCS platform. I’m going to try my best to keep this post high-level and EASY to understand, as most things “virtual” can get fairly complex.

First up is the Virtual Interface Card (VIC). This is Cisco’s answer to 1:1-mapped blade mezzanine cards in blade servers and other “virtual connectivity” mezzanine solutions. Instead of having a single mezz/NIC mapped to a specific internal/external switch/interconnect, we developed a vNIC optimized for virtualized environments. At the heart of this technology are FCoE and 10GBASE-KR backplane Ethernet. In the case of the VIC 1240, we have 4x 10G connections to the FEX; this connectivity is FCoE until the traffic gets to the fabric interconnect outside the chassis. The internal mapping to the server/blade allows you to dynamically create up to 128 PCIe virtual interfaces. Now here is the best part: you can define the interface type (NIC/HBA) and the identity (MAC/WWN). What does that mean? Easy policy-based, stateless, and agile server provisioning. Does one really need 128 interfaces per server??? Perhaps in an ESX host you want the “flexibility and scale.” Oh yeah, there is ANOTHER VIC that supports 256 vNICs and has 80Gbps to the backplane!!! That model is the VIC 1280.

NOTE: 8 interfaces are reserved on both the 128/256 VICs for internal use, and the actual number of vNICs presented to the server may be limited by the OS.

Update: 

Just had a great conversation with a customer today and I want to take a minute to break down the math.

Today we have the 2208 FEX (I/O) module for the 5108 chassis. Each one supports 80G (8×10) of uplinks to the Fabric Interconnect. This gives a total of 160G to each chassis if all uplinks are utilized.

On the back side of each 2208 I/O are 32 10G ports (downlinks), for a total of 320G to the midplane. We are now at 640G total (A/B side). Take the total number of blades per chassis and multiply that by 80G: 8 (blades) * 80G (eight traces of 10G per blade) = 640G. 🙂

Just keep in mind that the eight traces to each blade are 4x10G on the (A) side and 4x10G on the (B) side.

OK great, I’ve got all this bandwidth in the chassis; what can I do with it? How about we carve out some vNICs. With the VIC 1240 mezz card you get 128 vNICs and 40Gb to the fabric. Not good enough? How about the VIC 1280 with 256 vNICs and 80Gb to the fabric. Just remember that each vNIC will have an active path mapped to one side (A/B) and can fail over to the other side in the event of an issue. All the (A) side active vNICs are in a hardware port channel; the same holds true for the (B) side vNICs.

So Shaun, what’s your point with all this math? Choice and flexibility. You want 20Gb to the blade, you got it. You want 40G to the blade, done. 80G to the blade, no problem. 160G to the blade, OK, but it has to be a full-width blade. <GRIN>
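To sanity-check the arithmetic above, here is a quick Python sketch (the constants just mirror the numbers in this post; nothing here is a Cisco API):

```python
# UCS 5108 chassis bandwidth math from the post above.
BLADES = 8
TRACES_PER_BLADE = 8   # 4x10G to the A-side IOM + 4x10G to the B-side IOM
GBPS_PER_TRACE = 10

per_blade_gbps = TRACES_PER_BLADE * GBPS_PER_TRACE   # 80G per blade
midplane_total = BLADES * per_blade_gbps             # 640G across the chassis

# Cross-check against the IOM downlink view: two 2208s, 32x 10G downlinks each
iom_total = 2 * 32 * GBPS_PER_TRACE                  # also 640G

print(per_blade_gbps, midplane_total, iom_total)     # 80 640 640
```

Both views of the midplane (per-blade traces vs. IOM downlinks) land on the same 640G, which is the point of the exercise.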

Cisco: Algo Boost Nexus 3548 Preview/Unbox

I got something very cool last week. It came overnight from my good friend Frank in NY. What we have here is a very special privilege, folks. It’s a prototype of the Nexus 3548 ultra-low-latency switch using our custom ASIC called Algo Boost/Monticello. Instead of killing you with all the details, I decided to create a video of the un-boxing and a walkthrough of the special features. Enjoy!

https://www.youtube.com/user/4g1vn/featured


Cisco: Jabber Video for TelePresence

Experience telepresence with your family/friends/coworkers. Try our free Jabber Video client today. HD video camera recommended.

https://www.ciscojabbervideo.com/home

http://www.cisco.com/en/US/prod/collateral/ps7060/ps11303/ps11310/ps11328/data_sheet_c78-628609.html

Jabber Video system requirements

Windows

Windows 7, Vista, or XP (SP 2 or newer), with:
• OpenGL 1.2 or newer
• For 720p HD calls, Intel Core2Duo @ 1.2 GHz or better
• For VGA calls, Intel Atom @ 1.6 GHz or better

Webcam (built-in or external; you’ll need an HD webcam for the other side to see you in HD)

Broadband Internet connection with a recommended bandwidth of 768 kbps upstream and downstream. A 720p HD call will require approximately 1.2 Mbps upstream and downstream.

Mac

Apple Intel x86 processor computer, running OS X 10.6 (Snow Leopard) or newer, with:
• For 720p HD calls, Intel Core2Duo @ 1.2 GHz or better
• For optimal performance, we recommend Intel Core2Duo @ 2 GHz, with 2MB L2 cache per core

Webcam (built-in or external; you’ll need an HD webcam for the other side to see you in HD)

Broadband Internet connection with a recommended bandwidth of 768 kbps upstream and downstream. A 720p HD call will require approximately 1.2 Mbps upstream and downstream.


CCIE: QoS

Hold-Queue & Hardware TX Ring:

TX-Ring DEFAULT on an 1841 is 128 packets on a FastEthernet interface.
“tx-ring-limit X”, verify with “sh controller fa 0/1 | in tx”

The FIFO ingress queue is 75 packets by default, and the output queue is 40 packets on an 1841 FastEthernet interface.

“hold-queue X in|out” verify with “sh interface fa0/1 | in queue”

Keep in mind that the software queue is only invoked when the hardware queue (TX-Ring/FIFO) is full. CPU/packet spikes can tie up CPU cycles, causing the router to fall back to the software queues.

WFQ: Fair-queue can be configured using the following commands. FLOW BASED (IP S/D, Port S/D, Protocol type)

 bandwidth 128 (helps WFQ choose best settings, but does not directly influence the algorithm)
 tx-ring-limit 1 (forces the software queue to take effect)
 tx-queue-limit 1
 fair-queue 16 128 8 (16 packets, 128 conversations, 8 RSVP queues)
 hold-queue 256 out
 ip mtu 996 (+ 4B overhead due to HDLC) This is L3 fragmentation and is NOT recommended because it's going to reduce effective throughput for large size packets.

Tc = Bc/CiR

1536000 bits per second, 1 sec = 1000ms, 1000B (MAX SIZE), 1000B * 8 = 8000 bits
8000/1536000 ≈ .0052; .0052 * 1000(ms) ≈ 5.2ms
Now let's say I want a Tc of 8 ms. Use this formula CiR * (8/1000)
1536000 * .008 = 12288 (Bc)

8ms = 12288/1536000

If we need to use a TC of 12ms on the same pvc:

Bc = CIR x (TC/1000)
Bc = 1536000 x (12/1000)
Bc = 18432
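The Tc/Bc arithmetic above generalizes nicely; here is a small Python sketch of the same formulas (function names are mine, not IOS commands):

```python
# Shaping math: Tc = Bc / CIR, so Bc = CIR * (Tc/1000) when Tc is in ms.
def bc_for_tc(cir_bps: int, tc_ms: float) -> int:
    """Committed burst (bits) needed to hit a target interval Tc."""
    return int(cir_bps * tc_ms / 1000)

def tc_ms(cir_bps: int, bc_bits: int) -> float:
    """Interval (ms) that results from a given Bc at a given CIR."""
    return bc_bits / cir_bps * 1000

print(bc_for_tc(1536000, 8))            # 12288, matching the 8 ms example
print(bc_for_tc(1536000, 12))           # 18432, matching the 12 ms example
print(round(tc_ms(1536000, 8000), 1))   # 5.2 ms for a 1000B (8000-bit) packet
```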
Legacy RTP Prioritization and Reserved Queue:
ip rtp priority range-start range-end BW
ip rtp reserved range-start range-end BW  
max-reserved-bandwidth percentage up to 100 (default is 75%)
Selective Packet Discard (Input Queue Priority): Input FIFO queue modification. Prioritizes control plane protocols such as HSRP, BGP updates, IGPs, PIM, and L2 keepalives over other process-switched or erroneous packets.
***HIDDEN IOS COMMAND***
  spd enable
  spd headroom 120
  ip spd mode aggressive (normal and aggressive modes) Malformed packets are dropped as soon as the hold queue grows above the minimum threshold. 
  ip spd queue max-thres 150
 "sh ip spd" to verify configuration.
Payload Compression on Serial Interfaces: STAC: CPU intensive, replaces repetitive data with an index value. Predictor: memory intensive, not as effective as the STAC/LZ algorithm. Only allowed on HDLC/PPP/FR links with 2Mbps or less of bandwidth. HDLC only supports STAC; PPP supports both STAC and Predictor. Something to remember is that with MQC vs. legacy QoS, packets are compressed BEFORE the burst or queue weight is calculated. 
Configs:
int ser 0/1/0
encap hdlc
compress stac

int ser 0/0/0
 frame-relay map ip 155.17.0.5 205 broadcast ietf payload-compression FRF9 stac one-way-negotiation

int ser 0/1/0
encap ppp
compress predictor

Verify with "sh compress detail" and "sh frame-relay map".
Test with repetitive data ping. "ping x.x.x.x size 500 rep 100 data ABAB"
TCP/RTP Header Compression:
  int ser 0/1/0
  ip tcp header-compression
  ip tcp compression-connections 32 (TCP/RTP is bi-directional requires a context on each side)
  ip rtp header-compression
  ip rtp compression-connections 32
Verify with "sh ip rtp/tcp header-compression"
MLP (multilink PPP): 
Configure with either "ppp multilink group#" & "int multilink group#" or 
"ppp multilink", int virtual-templateX, "multilink virtual-template X" (Single Interface in MLP group) or
Dialer interface
LFI: "ppp multilink fragment", "ppp multilink interleave" Use WFQ (fair-queue) on the virtual link to further give voice packets a better chance of being serviced. 
Also, I don't believe interleaving will work with FIFO!   
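Pulling the MLP/LFI pieces above together, a minimal sketch might look like this (interface numbers, group number, and addressing are illustrative):

int multilink1
 ip address 10.1.1.1 255.255.255.252
 fair-queue (give interleaved voice packets a better shot)
 ppp multilink
 ppp multilink group 1
 ppp multilink fragment delay 10
 ppp multilink interleave
!
int ser 0/1/0
 encap ppp
 ppp multilink
 ppp multilink group 1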
Frame-Relay Broadcast Queue:
  Broadcast queue 0/64, broadcasts sent/dropped 22932/0, interface broadcasts 5735
Modify with “frame-relay broadcast-queue 16 1280 10” (16-packet queue total for ALL PVCs, 1280 bytes per second, 10 packets per second).



							
CCIE: MPLS

MPLS: Autoconfig (enables LDP on all interfaces) is only available when using OSPF as the IGP.

LDP sends discovery packets via UDP to 224.0.0.2 (all routers) on port 646. The router ID is the highest loopback but can be forced with “mpls ldp router-id x.x.x.x force”. To use the physical address of the interface (rather than a loopback that may lack reachability), use this command on the interface: “mpls ldp discovery transport-address interface”. Once communication is established via TCP 646, authentication is verified (MD5 only). After the peer is established, prefix/label information is exchanged and the LFIB is built.
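A quick sketch of those LDP knobs in config form (interface, neighbor address, and password are illustrative):

mpls label protocol ldp
mpls ldp router-id Loopback0 force
!
int gi0/0
 mpls ip
 mpls ldp discovery transport-address interface (use the physical address for the TCP session)
!
mpls ldp neighbor 10.0.0.2 password cisco (MD5 on the TCP 646 session)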

MPLS-VPN

Two Labels: Transport and VPN Label

View the transport label with “sh mpls forwarding-table” and the VPN label with “sh ip bgp vpnv4 vrf XXX”

OSPF on MPLS VPN: the MP-BGP cloud is a super area 0 (super backbone), with routes appearing as Type 3 LSAs. Same VPN and same domain ID (process ID): Type 3. Different domain ID: Type 5.

Creating a Sham-Link

Sham-links allow the MPLS network to override backdoor links.
Before you create a sham-link between PE routers in an MPLS VPN, you must:
  • Configure a separate /32 address on the remote PE so that OSPF packets can be sent over the VPN backbone to the remote end of the sham-link. The /32 address must meet the following criteria:
    • Belong to a VRF.
    • Not be advertised by OSPF.
    • Be advertised by BGP.

You can use the /32 address for other sham-links.

  • Associate the sham-link with an existing OSPF area.
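Putting that checklist together, a sham-link sketch on one PE might look like this (VRF name, area, ASN, and endpoint /32s are all illustrative; mirror the commands on the remote PE):

int loopback100
 ip vrf forwarding CUST-A
 ip address 10.99.1.1 255.255.255.255 (in the VRF, advertised by BGP, NOT by OSPF)
!
router ospf 10 vrf CUST-A
 area 1 sham-link 10.99.1.1 10.99.2.2
!
router bgp 65000
 address-family ipv4 vrf CUST-A
  network 10.99.1.1 mask 255.255.255.255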
EIGRP: Site of Origin – SoO
Used between the PE and CE to prevent route feedback and loops. This could be accomplished with tags and filters, but that is too complex. Multi-homed CEs and CEs with backdoor links are ideal candidates. SoO is also used in BGP when the same ASN is used at all remote locations.
CE: The same ASN on both sides will not allow BGP prefixes to be advertised because of BGP’s loop prevention (same ASN in the path). You can override this on the PE with the neighbor statement and the “as-override” command. “allowas-in” is another option but is NOT RECOMMENDED.
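A hedged sketch of those PE-side knobs (ASNs, addresses, and names are illustrative):

router bgp 65000
 address-family ipv4 vrf CUST-A
  neighbor 192.168.1.2 as-override (all CE sites reuse AS 65100; rewrite it so prefixes are accepted)
!
route-map SET-SOO permit 10
 set extcommunity soo 65000:100
!
int gi0/1
 ip vrf forwarding CUST-A
 ip vrf sitemap SET-SOO (applies the SoO to routes learned from this CE)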
CCIE: OSPF

The Basics:

Link state routing protocol. Uses IP protocol 89. Hellos sent on 224.0.0.5.

Uses Dijkstra SPF algorithm independently on each router against the local LSDB to calculate the best routes.

Hellos sent every 10 seconds on LAN and 30 seconds on WAN interfaces. Dead time is 4x hello, so 40sec and 120 sec respectively.

Router ID:

1) Configured “router id”
2) Highest loopback
3) Highest non-loopback interface IP in the up/up state.

Hello Process Sanity check:

Pass authentication (verify with “debug ip ospf adj”)
Same primary subnet (no secondaries used for neighbor)
Same OSPF area
Same OSPF area type (NSSA, STUB, etc…)
No duplicate RID’s
Hello/Dead times match

On a multiaccess network (LAN), a DR is used to reduce LSDB flooding, similar in concept to a BGP route reflector. The DR also creates a Type 2 LSA for the subnet. Non-DR routers flood updates to the DR using 224.0.0.6 (AllDRouters); the DR acks with a unicast and floods a new update to 224.0.0.5. The highest priority wins the DR election; the highest loopback/RID is the tie-breaker.

SPF Calculation: Lowest cost to destination. Uses OUTGOING interface cost.

Design:
Using areas will allow your routers to have smaller per-area LSDBs, requiring less memory.
Faster SPF computation due to the smaller LSDB.
A link failure in one area only requires a partial SPF computation in other areas.
ROUTES CAN ONLY BE SUMMARIZED ON AN ABR OR ASBR; this helps shrink the LSDB and improve SPF computation. “summary-address” is only used on the ASBR; “area X range” is used on the ABR. Make sure that the area specified is where the actual routes reside/originate.

E1= Include end-to-end metric
E2= Use metric calculated by ASBR only. (DEFAULT)

The big thing to remember is that the ABR will not pass the dense Type 1 & 2 LSAs, instead using a summary LSA (Type 3).

Let’s review LSA types real quick.
T1: Router – RID, interface IPs, neighbors, and stub networks (subnets with no other OSPF routers) – one per router
T2: Network – created by the DR on a subnet; lists the subnet and the router interfaces on it, including the DR – one per transit network (subnet with two or more routers).
T3: Summary – Created by ABR to summarize T1 & T2. Defines subnets and cost but not the topology.
T4: ASBR Summary – Host route to reach ASBR
T5: AS External – Created by ASBR’s for external routes redistributed into OSPF.
T7: NSSA External

Stub Area:

Prevents T5 LSAs from entering the area; the ABR advertises a default instead. Totally stubby areas also prevent T3 LSAs from entering the area. NSSA allows routes to be redistributed into the stub area as T7s.

Interface Network Types:

non-broadcast: DR/BDR election, neighbor statement required, unicast hellos, no next-hop modification, so all spokes require recursive lookup
point-to-multipoint: no DR/BDR election, no neighbor, multicast hellos to 224.0.0.5, stub endpoint advertisement (/32) instead of a transit network.

Auto-cost Reference Bandwidth: change the bandwidth on the local router to see the updated cost. It should be consistent across all routers to prevent SPF-based loops. Interface cost = reference bandwidth / interface bandwidth (this can also be used for P-to-MP neighbor costs).
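That cost formula is easy to sketch in Python (integer division with a floor of 1, mirroring how IOS clamps the cost; the function name is mine):

```python
# OSPF interface cost = reference bandwidth / interface bandwidth, minimum 1.
def ospf_cost(reference_bw_mbps: int, interface_bw_mbps: int) -> int:
    return max(1, reference_bw_mbps // interface_bw_mbps)

print(ospf_cost(100, 100))      # FastEthernet at the default 100 Mbps reference -> 1
print(ospf_cost(100, 10))       # 10 Mbps Ethernet -> 10
print(ospf_cost(100000, 1000))  # GigE after raising the reference to 100G -> 100
print(ospf_cost(100, 1000))     # GigE at the default reference clamps to 1
```

The last line is why the post says the reference bandwidth should be raised (and kept consistent) in networks with links faster than 100 Mbps.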

Capability Transit: uses non-backbone areas if a shorter path exists for a summary LSA (inter-area); on by default. If you want to force the traffic to take the area 0 path, issue “no capability transit” on both ends.

Demand Circuit:

On point-to-point interfaces, only one end of the demand circuit must be configured with the ip ospf demand-circuit command. Periodic hello messages are suppressed and periodic refreshes of link-state advertisements (LSAs) do not flood the demand circuit. This command allows the underlying data link layer to be closed when the topology is stable. In point-to-multipoint topology, only the multipoint end must be configured with this command.

Paranoid flooding: Every 30 minutes re-flood by default. Disable with interface level: “ip ospf flood-reduction”, verify with DoNotAge (DNA) in OSPF LSA Database.

“Flood-War” in debug output is an indication of identical router IDs competing; it is a loop prevention mechanism.

Conditional Default Route:

router ospf 1
 default-information originate always route-map TRACK
!
ip prefix-list TRACK seq 5 permit 10.17.1.0/24
!
route-map TRACK permit 10
 match ip address prefix-list TRACK
!
interface fa0/1
 ip add 10.17.1.1 255.255.255.0
Reliable Conditional Default:
ip sla 1
 icmp-echo 10.17.1.254
 timeout 2000
 frequency 5
ip sla schedule 1 life forever start-time now
!
track 1 rtr 1
ip route 127.100.100.10 255.255.255.255 null0 track 1
!
ip prefix-list TRACK seq 5 permit 127.100.100.10/32
!
route-map TRACK permit 10
 match ip address prefix-list TRACK
!
router ospf 1
 default-information originate always route-map TRACK

STUB AREAS:

Allows filtering of the database based on the role of the LSA. The stub flag is sent as part of the hellos, so both sides must agree.

Stub areas remove external T5 LSAs and replace them with a default. A T5 LSA carries the advertising router ID and a forward address of 0.0.0.0; when the forward address is 0.0.0.0, traffic is directed to the advertising router ID. Essentially, this requires the router ID to be found in the database via a T1 LSA, which causes redundant information in the database (T1, T4, and T5 entries). Specifically, the T5s and T4s are replaced with a default.

This also implies that since T5’s are filtered, redistribution cannot occur in a STUB area. The workaround? NSSA.

Totally Stubby Areas:

“area x stub no-summary”: Inter-area LSAs (T3s) are removed and replaced with a default. Configured on the ABR only.

 

Not So Stubby Areas (NSSA):

“area x nssa”: This generates a T7 LSA instead of a T5. These have N1 and N2 subtypes, much like E1 and E2. N1 considers the full path, where N2 considers only the ASBR metric and not the cost to get to the ASBR.

When traversing into area 0, the T7 is converted into a T5. An NSSA does NOT automatically generate a default route, but one can be added. It is important to note that if there are multiple ABRs, the one with the highest router ID will do the translation.

To instruct the ABR to NOT preserve the value in the forward address field when translating T7 to T5:
“area x nssa translate type7 suppress-fa”

 

Not So Totally-Stubby Areas (ARE YOU FREAKING KIDDING ME???!!!):

As if NSSA, STUB, and Totally-Stubby were not confusing enough, we have “not so totally-stubby areas”. WTF!!!

Basically a combination of Totally Stubby and NSSA. T3, T4, and T5 LSAs are replaced with a T3 default, but redistribution into the area as T7s is still allowed.

Nuff said!
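The whole stub zoo condenses to four knobs; a side-by-side cheat-sheet sketch (area numbers are illustrative, one variant per area):

router ospf 1
 area 1 stub (every router in area 1)
 area 2 stub no-summary (ABR only; totally stubby)
 area 3 nssa (redistribution allowed in as T7)
 area 4 nssa no-summary (not so totally-stubby)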

Summary Routes:

Create the summary (area x range x.x.x.x) in the AREA WITH THE ROUTES BEING SUMMARIZED!
When a summary is created on an ABR, a Null0 discard route is also created. This could cause a black hole. Override with “no discard-route internal”.
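A quick sketch of the ABR summary plus the discard-route override (prefix and area are illustrative):

router ospf 1
 area 1 range 172.16.0.0 255.255.0.0 (configured on the ABR, for routes originating in area 1)
 no discard-route internal (suppresses the automatic Null0 route)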

OSPF Resource Limits:

Limit LSA’s in the database: “max-lsa 10000” NON-SELF-GENERATED
Limit Redistribution: “redistribute maximum-prefix 1000”
Limit processor: “process-min-time percent 25”

Verify with “sh ip ospf”
DNS Lookup on Neighbors: “ip ospf name-lookup”
Add a local host entry with “ip host R1 1.1.1.1”