CCIE: QoS

Hold-Queue & Hardware TX Ring:

The default TX-ring size on an 1841 FastEthernet interface is 128 packets.
Tune with "tx-ring-limit X"; verify with "sh controller fa 0/1 | in tx".

The software FIFO input hold queue defaults to 75 packets; the output hold queue defaults to 40 packets on an 1841 FastEthernet interface.

"hold-queue X in|out" — verify with "sh interface fa0/1 | in queue"

Keep in mind that the software queue is only invoked when the hardware queue (TX-ring/FIFO) is full. CPU or packet spikes can tie up CPU cycles, pushing the router into software queueing.
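
For example, a quick sketch tuning both queues on one interface (the values are illustrative, not recommendations):

int fa 0/1
 tx-ring-limit 2 (shrink the hardware ring so the software queue engages sooner)
 hold-queue 150 out (deepen the software output queue)

Verify both with the show commands above.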

WFQ: fair queueing can be configured with the commands below (see the combined sketch after the list). It is FLOW BASED (source/destination IP, source/destination port, protocol type).

 bandwidth 128 (helps WFQ choose best settings, but does not directly influence the algorithm)
 tx-ring-limit 1 (forces the software queue to take effect)
 tx-queue-limit 1
 fair-queue 16 128 8 (congestive discard threshold of 16 packets, 128 dynamic conversations, 8 RSVP queues)
 hold-queue 256 out
 ip mtu 996 (+4B of HDLC overhead) This is L3 fragmentation and is NOT recommended because it reduces effective throughput for large packets.
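
Putting the list above together, a minimal sketch on a serial interface (the interface number is illustrative):

int ser 0/1/0
 bandwidth 128
 tx-ring-limit 1
 fair-queue 16 128 8
 hold-queue 256 out

Verify with "sh queueing interface ser 0/1/0".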

Tc = Bc/CIR

1536000 bits per second; 1 sec = 1000 ms; 1000B (max packet size) * 8 = 8000 bits
8000/1536000 = .0052 s, roughly 5 ms
Now let's say I want a Tc of 8 ms. Rearrange the formula to Bc = CIR * (8/1000):
1536000 * .008 = 12288 bits (Bc)

8ms = 12288/1536000

If we need to use a Tc of 12 ms on the same PVC:

Bc = CIR x (Tc/1000)
Bc = 1536000 x (12/1000)
Bc = 18432 bits
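
To apply that Bc on the PVC, a minimal sketch using legacy Frame Relay traffic shaping (the DLCI and class name are illustrative):

int ser 0/0/0
 frame-relay traffic-shaping
 frame-relay interface-dlci 205
  class TC12MS

map-class frame-relay TC12MS
 frame-relay cir 1536000
 frame-relay bc 18432 (Tc = 18432/1536000 = 12 ms)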
Legacy RTP Prioritization and Reserved Queue:
ip rtp priority starting-port port-range BW
ip rtp reserve starting-port port-range BW
max-reserved-bandwidth percentage, up to 100 (default is 75%)
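
A quick sketch on a serial interface (the ports are the common RTP range; the bandwidth figure is illustrative):

int ser 0/1/0
 ip rtp priority 16384 16383 128 (UDP 16384-32767 gets a 128 kbps priority queue)
 max-reserved-bandwidth 100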
Selective Packet Discard (Input Queue Priority): a modification of the input FIFO queue that prioritizes control-plane traffic such as HSRP, BGP updates, IGPs, PIM, and L2 keepalives, while preferentially discarding ordinary process-switched or erroneous packets.
***HIDDEN IOS COMMAND***
  spd enable
  spd headroom 120
  ip spd mode aggressive (modes are normal and aggressive; in aggressive mode, malformed packets are dropped as soon as the hold queue grows above the minimum threshold)
  ip spd queue max-threshold 150
 "sh ip spd" to verify configuration.
Payload Compression on Serial Interfaces:
  STAC: CPU intensive; replaces repetitive data with an index value (LZ algorithm).
  Predictor: memory intensive; not as effective as STAC/LZ.
  Only allowed on HDLC/PPP/FR links with 2 Mbps or less of bandwidth.
  HDLC only supports STAC; PPP also supports Predictor.
  Something to remember: with MQC vs. legacy QoS, packets are compressed BEFORE the burst or queue weight is calculated.
Configs:
int ser 0/1/0
encap hdlc
compress stac

int ser 0/0/0
 frame-relay map ip 155.17.0.5 205 broadcast ietf payload-compression FRF9 stac one-way-negotiation

int ser 0/1/0
encap ppp
compress predictor

Verify with "sh compress detail" and "sh frame-relay map".
Test with repetitive data ping. "ping x.x.x.x size 500 rep 100 data ABAB"
TCP/RTP Header Compression:
  int ser 0/1/0
  ip tcp header-compression
  ip tcp compression-connections 32 (TCP/RTP compression is bidirectional and requires a context on each side)
  ip rtp header-compression
  ip rtp compression-connections 32
Verify with "sh ip rtp/tcp header-compression"
MLP (Multilink PPP):
Configure with either "ppp multilink group #" & "int multilink #", or
"ppp multilink" + "int virtual-template X" + "multilink virtual-template X" (single interface in the MLP group), or
a dialer interface.
LFI: "ppp multilink fragment delay" and "ppp multilink interleave". Use WFQ (fair-queue) on the virtual link to give voice packets a better chance of being serviced.
Also, I don't believe interleaving will work with FIFO!   
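
A minimal LFI sketch using the multilink-group method (group number, fragment delay, and bandwidth are illustrative; some IOS versions spell the keyword "fragment-delay"):

int multilink 1
 bandwidth 128
 fair-queue (WFQ so interleaved voice gets serviced)
 ppp multilink
 ppp multilink fragment delay 10
 ppp multilink interleave

int ser 0/1/0
 encap ppp
 ppp multilink
 ppp multilink group 1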
Frame-Relay Broadcast Queue:
  Broadcast queue 0/64, broadcasts sent/dropped 22932/0, interface broadcasts 5735 (from "sh interface" output)
Modify with "frame-relay broadcast-queue 16 1280 10" (16-packet queue shared by ALL PVCs, 1280 bytes/sec, 10 packets/sec).
