Month: March 2012



Hold-Queue & Hardware TX Ring:

The hardware TX ring defaults to 128 packets on an 1841 FastEthernet interface.
Tune with “tx-ring-limit X”; verify with “sh controller fa 0/1 | in tx”

The software FIFO input hold queue is 75 packets by default, and the output queue is 40 packets on an 1841 FastEthernet interface.

Tune with “hold-queue X in|out”; verify with “sh interface fa0/1 | in queue”

Keep in mind that the software queue is only invoked when the hardware queue (TX-RING/FIFO) is full. CPU/packet spikes can tie up CPU cycles, causing the router to fall back to the software queues.

WFQ: Fair queuing is FLOW BASED (source/destination IP, source/destination port, protocol type) and can be configured using the following commands:

 bandwidth 128 (helps WFQ choose the best settings, but does not directly influence the algorithm)
 tx-ring-limit 1 (forces the software queue to take effect)
 tx-queue-limit 1
 fair-queue 16 128 8 (16-packet congestive discard threshold, 128 conversations, 8 RSVP queues)
 hold-queue 256 out
 ip mtu 996 (+ 4B overhead due to HDLC). This is L3 fragmentation and is NOT recommended, because it reduces effective throughput for large packets.
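WFQ's flow classification can be pictured as a hash of the packet's flow identifiers into a fixed pool of conversation queues (128 in the config above). The exact IOS hash is not public, so this is only a hypothetical sketch of the idea:

```python
# Illustrative sketch only -- IOS's real WFQ hash is not public.
# WFQ maps each packet's flow identifiers (source/destination IP,
# source/destination port, protocol) into one of N conversation queues.

def wfq_queue(src_ip, dst_ip, src_port, dst_port, proto, n_queues=128):
    """Map a 5-tuple flow to a conversation queue index (stand-in hash)."""
    flow = (src_ip, dst_ip, src_port, dst_port, proto)
    return hash(flow) % n_queues

# Packets of the same flow always land in the same queue, so a greedy
# flow can only congest its own conversation queue.
q_a = wfq_queue("10.0.0.1", "10.0.0.2", 5000, 80, 6)
q_b = wfq_queue("10.0.0.1", "10.0.0.2", 5000, 80, 6)
assert q_a == q_b
```

When active conversations exceed the queue count, unrelated flows simply collide into the same queue, which is why the conversation count (128 here) is tunable.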

Tc = Bc/CIR

Example: CIR = 1536000 bits per second, 1 sec = 1000ms. A 1000B (MAX SIZE) packet is 1000B * 8 = 8000 bits.
8000/1536000 = .0052, * 1000(ms) ≈ 5.2ms
Now let's say I want a Tc of 8 ms. Use the formula Bc = CIR * (8/1000):
1536000 * .008 = 12288 (Bc)

Check: 8ms = 12288/1536000

If we need to use a Tc of 12ms on the same PVC:

Bc = CIR x (Tc/1000)
Bc = 1536000 x (12/1000)
Bc = 18432
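The Bc/Tc arithmetic above fits in a few lines (a quick sanity-check calculator, not IOS behavior):

```python
def bc_bits(cir_bps, tc_ms_val):
    """Committed burst in bits for a shaper interval: Bc = CIR * (Tc/1000)."""
    return int(cir_bps * tc_ms_val / 1000)

def tc_ms(cir_bps, bc):
    """Shaper interval in ms: Tc = Bc / CIR."""
    return bc / cir_bps * 1000

assert bc_bits(1536000, 8) == 12288    # the 8 ms example above
assert bc_bits(1536000, 12) == 18432   # the 12 ms example above
assert tc_ms(1536000, 12288) == 8.0    # check: Tc = Bc/CIR
```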
Legacy RTP Prioritization and Reserved Queue:
ip rtp priority range-start range-end BW
ip rtp reserved range-start range-end BW  
max-reserved-bandwidth percentage up to 100 (default is 75%)
Selective Packet Discard (Input Queue Priority): Modifies the input FIFO queue to favor control plane protocols such as HSRP, BGP updates, IGPs, PIM, L2 keepalives, etc., over other process-switched or erroneous packets.
  spd enable
  spd headroom 120
  ip spd mode aggressive (normal and aggressive modes; in aggressive mode, malformed packets are dropped as soon as the hold queue grows above the minimum threshold)
  ip spd queue max-threshold 150
Verify the configuration with "sh ip spd".
Payload Compression on Serial Interfaces:
STAC: CPU intensive; replaces repetitive data with an index value (Lempel-Ziv based).
Predictor: Memory intensive; not as effective as the STAC/LZ algorithm.
Only allowed on HDLC/PPP/FR links with 2Mbps or less of bandwidth. HDLC only supports STAC; PPP also supports Predictor. Something to remember is that with MQC vs. legacy QoS, packets are compressed BEFORE the burst or queue weight is calculated.
int ser 0/1/0
encap hdlc
compress stac

int ser 0/0/0
 frame-relay map ip 205 broadcast ietf payload-compression FRF9 stac one-way-negotiation

int ser 0/1/0
encap ppp
compress predictor

Verify with "sh compress detail" and "sh frame-relay map".
Test with repetitive data ping. "ping x.x.x.x size 500 rep 100 data ABAB"
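The repetitive-data ping works as a test precisely because LZ coders collapse repeated patterns. As an illustration, here is zlib's DEFLATE (also LZ-based, not the actual STAC code) on a 500-byte repetitive payload versus a random one:

```python
import os
import zlib

repetitive = b"\xab\xab" * 250   # 500 bytes of the ABAB pattern, like the test ping
random_ish = os.urandom(500)     # 500 bytes of incompressible data

rep_out = len(zlib.compress(repetitive))
rnd_out = len(zlib.compress(random_ish))

# The repeated pattern collapses to a handful of bytes; random data
# barely compresses (it can even grow slightly from framing overhead).
assert rep_out < 30
assert rnd_out > 450
```

This is why a default-pattern ping can make a link look "more compressible" than real traffic ever will be.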
TCP/RTP Header Compression:
  int ser 0/1/0
  ip tcp header-compression
  ip tcp compression-connections 32 (TCP/RTP header compression is bi-directional and requires a context on each side)
  ip rtp header-compression
  ip rtp compression-connections 32
Verify with "sh ip rtp/tcp header-compression"
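The payoff from RTP header compression is easy to quantify: the 40-byte IP/UDP/RTP header shrinks to roughly 2 bytes (4 with UDP checksums). For a G.729 call (20-byte payload at 50 packets per second), the layer-3 math works out as follows (L2 framing overhead ignored for simplicity):

```python
def rtp_stream_bps(payload_bytes, header_bytes, pps):
    """Layer-3 bandwidth of an RTP stream; L2 framing is not counted."""
    return (payload_bytes + header_bytes) * 8 * pps

G729_PAYLOAD = 20   # bytes per packet at 50 pps
IP_UDP_RTP   = 40   # 20 IP + 8 UDP + 12 RTP
CRTP         = 2    # compressed header (4 with UDP checksum)

assert rtp_stream_bps(G729_PAYLOAD, IP_UDP_RTP, 50) == 24000  # 24 kbps uncompressed
assert rtp_stream_bps(G729_PAYLOAD, CRTP, 50) == 8800         # ~8.8 kbps with cRTP
```

The smaller the payload relative to the header, the bigger the win, which is why cRTP targets voice rather than bulk data.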
MLP (Multilink PPP):
Configure with either "ppp multilink group X" & "int multilink X", or
"ppp multilink", "int virtual-template X", "multilink virtual-template X" (single interface in the MLP group), or
a dialer interface.
LFI: "ppp multilink fragment", "ppp multilink interleave". Use WFQ (fair-queue) on the virtual link to further give voice packets a better chance of being serviced.
Also, I don't believe interleaving will work with FIFO!
Frame-Relay Broadcast Queue:
  Broadcast queue 0/64, broadcasts sent/dropped 22932/0, interface broadcasts 5735
Modify with "frame-relay broadcast-queue 16 1280 10" (queue size of 16 packets total for ALL PVCs, 1280 bytes per second, 10 packets per second).

CCIE: Multicast

Preface: To clear old entries in the multicast table, use “clear ip mroute *”. This command usually allows changes to be synced, but not always. In the worst-case scenario, you may have to reload the device. Modifying a working multicast environment is not recommended if you cannot interrupt traffic forwarding. Be sure to schedule a maintenance window in a REAL production environment.

PIM: Signaling protocol that uses the unicast routing table to perform RPF checks.

Dense mode: Flood to all multicast-enabled interfaces and let downstream routers prune back. A pruned interface is excluded from the OIL for that group. This can result in excessive flooding, since prune state expires in 3 minutes by default, after which flooding out that interface resumes. This is a plug & play method, but it is not scalable. State Refresh is enabled by default to send control messages (every 60 seconds) and keep interfaces pruned if necessary.

RPF Failure: This is one of the most common issues in a multicast routing domain. Unicast shortest paths not matching the multicast distribution tree is a common example. A simple fix would be a static mroute: “ip mroute x.x.x.x (correct RPF interface)”.

Be very careful with static routes in a multicast environment, as they change the local router's perception of the shortest path.

In the output of “sh ip mroute”, look for an (S,G) entry that has an incoming interface (towards the source) of Null. This is an indication that the multicast path differs from the unicast route table entry for the source.

To see the drops, use “debug ip mpacket” and enable process switching on the multicast interface with “no ip mroute-cache”.
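The RPF check itself reduces to a simple comparison: the packet must arrive on the interface the route lookup says is the shortest path back to the source. A simplified sketch (real routers use longest-prefix matching; the interface and address names here are hypothetical):

```python
def rpf_check(source, arrival_iface, unicast_table, static_mroutes=None):
    """Accept a multicast packet only if it arrived on the interface that
    the route lookup points back toward the source. Static mroutes (the
    "ip mroute" fix) take precedence over the unicast table when present."""
    table = dict(unicast_table)
    if static_mroutes:
        table.update(static_mroutes)   # static mroutes win the RPF lookup
    return table.get(source) == arrival_iface

routes = {"10.1.1.1": "Serial0/0"}     # hypothetical unicast best path
assert rpf_check("10.1.1.1", "Serial0/0", routes)
assert not rpf_check("10.1.1.1", "FastEthernet0/0", routes)  # RPF failure
# The static-mroute fix: point RPF at the interface traffic really arrives on
assert rpf_check("10.1.1.1", "FastEthernet0/0", routes,
                 {"10.1.1.1": "FastEthernet0/0"})
```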

PIM Assert Message vs. PIM DR:
This is something that took me some time to fully understand. On a multi-access (LAN) network, one router may win the assert process, while another may become the IGMP querier (PIM DR or IGMPv2 querier). The winner of the assert is responsible for forwarding multicast on the LAN, while the IGMP querier is responsible for managing the IGMP process and sending IGMP query messages on the LAN.

IGMPv1 had no querier election, so it required a PIM DR.


interface loopback 0
ip pim sparse-dense-mode (required for the dense-mode Auto-RP groups; make sure to use “no ip pim dm-fallback” in a live network. You could also define a static RP for the Auto-RP groups with the “override” option. The best option for Auto-RP is sparse mode with “ip pim autorp listener”.)

ip pim send-rp-announce loopback 0 scope 12 (cRP)
ip pim send-rp-discovery loopback 0 scope 12 (Mapping Agent; selects the best RP for each group range)

Negative ACL: a “deny” entry will cause the group to fall back to dense mode. Effectively, a single cRP could announce a deny any and cause all groups to be treated as dense, because in the processing order negative/deny entries are evaluated first.

Filter Auto-RP messages with TTL scoping (a low number for the boundary threshold) or “ip multicast boundary”. A multicast boundary filters at the control plane (PIM/IGMP/Auto-RP) and the data plane (multicast route state). With IOS 12.3(17)T and higher, the in/out keywords are possible: “in” affects the control plane and “out” affects the data plane.

Bootstrap Router (BSR): 

Standards based for PIMv2, does not use any dense mode groups like Auto-RP.

Configure a candidate RP with “ip pim rp-candidate interface [group-list | interval | priority]”
Configure the BSR (MAPPING) with “ip pim bsr-candidate interface [hash-mask-length | priority]”

Filtering BSR: Filter RP info with “ip pim bsr-border” on the edge of the multicast domain.

Stub Multicast (IGMP Helper):

Head-end/Hub runs sparse mode PIM. Remote/stub uses dense mode. The remote router acts as a “dumb” packet forwarder.

R1: Stub/Remote

int fa0/0
ip pim dense-mode
ip igmp helper-address

int ser 0/0/0
ip pim dense-mode
ip add

R5: Hub
int ser 0/0/0
ip pim sparse-mode
ip add
ip pim neighbor-filter 7

access-list 7 deny
access-list 7 permit any

SW1: Client side
int fa 0/1
ip pim dense-mode
ip pim neighbor-filter 8 (prevents R1 and SW1 from becoming PIM neighbors)

access-list 8 deny any

IGMP Timers:

Reports are sent asynchronously, so some might be missed by the router. On a shared segment, one IGMP querier is elected and sends membership queries to hosts. The lowest IP address wins the election; this is confusing because the PIM DR is elected by the highest IP. 60 seconds is the default query interval, and the querier timeout is 2x that value (120). IGMPv1 has no leave-group message, which introduces leave latency.

“ip igmp querier-timeout”

MTRACE: Traces from the leaf to the root. “mtrace (leaf) (group)” output will trace back to the root (RP). Perform on the RP.



MPLS: Autoconfig (enables LDP on all interfaces) is only available when using OSPF as the IGP.

LDP sends discovery packets via UDP to 224.0.0.2 (all routers) on port 646. The router ID is the highest loopback but can be forced with “mpls ldp router-id x.x.x.x force”. To use the address of the physical interface (rather than a loopback that may lack reachability), use “mpls ldp discovery transport-address interface” on the interface. Once communication is established via TCP 646, authentication is verified (MD5 only). After the peering is established, prefix/label information is exchanged and the LFIB is built.


Two Labels: Transport and VPN Label

View the transport label with “sh mpls forwarding-table” and the VPN label with “sh ip bgp vpnv4 vrf XXX”

OSPF on MPLS VPN: The MP-BGP cloud is a super area 0 (super backbone), and routes are treated as Type-3 LSAs. Same VPN, same DOMAIN_ID (PROCESS ID): Type 3; different domain ID: Type 5.

Creating a Sham-Link

Sham-links allow the MPLS network to override backdoor links.
Before you create a sham-link between PE routers in an MPLS VPN, you must:
  • Configure a separate /32 address on the remote PE so that OSPF packets can be sent over the VPN backbone to the remote end of the sham-link. The /32 address must meet the following criteria:
    • Belong to a VRF.
    • Not be advertised by OSPF.
    • Be advertised by BGP.

You can use the /32 address for other sham-links.

  • Associate the sham-link with an existing OSPF area.
EIGRP: Site of Origin – SoO
Used between the PE and CE to prevent route feedback and loops. It could be accomplished with tag and filter, but that is too complex. Multi-homed CEs and CEs with backdoor links are ideal candidates. SoO is also used in BGP when the same ASN is used at all remote locations.
CE: The same ASN on both sides will not allow BGP prefixes to be advertised because of BGP's loop prevention (same ASN in the path). You can override this on the PE with the neighbor statement and the “as-override” command. “Allowas-in” is another option but is NOT RECOMMENDED.
New Edition to my Neo Collection!


Not sure if anyone really cares, but I’m a huge NeoGeo fan/collector. Today I found quite a surprise at a local gaming store: an English version of SvC Chaos (SNK vs. Capcom) for the NeoGeo AES. Needless to say, I did not hesitate for even a second and bought it. It’s in pristine condition, and I’ll probably spend the rest of the night obsessing over how beautiful the condition is. Enjoy these pics, and thank you Play N Trade!

Game Data: 
N.American version
708 MEGS (Mbits) / 88.5MB
Released: Nov. 2003 (AES), July 2003 (MVS)