This is an annotated pathchar trace from a Linux laptop on the Qualcomm internal network to the San Diego Road Runner system. This test was run in the late evening of June 23, 1997. Pathchar takes a long time, especially when the packet loss rate is high, as it often is along this path; it typically takes minutes to complete the probing at one hop. So you must take the results with a grain of salt, as network conditions (e.g., routes) can easily change faster than pathchar can finish. The best examples of this are the links on which the propagation delay appears to be negative.

For some reason, pathchar didn't map the IP addresses back to domain names. So I've looked them up manually and filled them in as the pathchar program should have shown them.

Read the Pathchar Notes for a description of the numbers in the following output.

My annotations are in italics. More extensive comments follow --Phil

dhcp-855107462# ./pathchar -i 51 -m 1500 204.210.37.154
pathchar to 204.210.37.154 (204.210.37.154)
 doing 32 probes at each of 64 to 1500 by 44
 0 localhost				133 MHz laptop running Linux
 |   6.3 Mb/s,   408 us (2.72 ms)	Enet from my office
 1 gateway100.qualcomm.com (129.46.100.1) cisco router
 |   5.1 Mb/s,   -39 us (5.01 ms)	an enet in the machine room
 2 krypton-e0.qualcomm.com (129.46.54.3) cisco router (firewall)
 |   8.4 Mb/s,   217 us (6.88 ms)	our "dmz" enet, with web servers, etc.
 3 kryptonite.qualcomm.com (192.35.156.1) cisco "border" router
 |    39 Mb/s,   231 us (7.64 ms)	our DS3 radio link to SDSC
 4 sdsc-qualcomm-ds3.cerf.net (134.24.47.100)
 |    81 Mb/s,   501 us (8.79 ms)
 5 atm1-0.svnode.sd.cerf.net (134.24.32.6)
 |    45 Mb/s,   1.17 ms (11.4 ms)
 6 pos5-0-0.ana.la.cerf.net (134.24.32.10)
 |    60 Mb/s,   4.04 ms (19.7 ms)
 7 hssi2-0.maew2.sf.cerf.net (134.24.29.14) CERFnet exit router at MAE-West
 |    17 Mb/s,   597 us (21.6 ms)	the gigaswitch at MAE-West
 8 fddi0/0.cpe5.hay.mci.net (198.32.136.116) MCI entry router at MAE-West
 |   ?? b/s,   869 us (23.1 ms),  1% dropped
 9 core1-hssi3-0.SanFrancisco.mci.net (204.70.1.205)
 |    17 Mb/s,   4.24 ms (32.2 ms),  +q 1.28 ms (2.81 KB) *2
10 core2-hssi-2.LosAngeles.mci.net (204.70.1.154)
 |   126 Mb/s,   145 us (32.6 ms),  +q 1.07 ms (16.9 KB) *2
11 border7-fddi-0.LosAngeles.mci.net (204.70.170.51) MCI's pop in LA
 |   1.9 Mb/s,   4.41 ms (47.8 ms),  +q 23.3 ms (5.49 KB),  7% dropped
					RR's link to MCI
12 time-inc-new-media.LosAngeles.mci.net (204.70.252.106) RR's Cisco 7507
13  * 1   460 553      46
14:  32   152  27      48
The TASes did not respond reliably with ICMP messages, so pathchar went into the weeds. I killed it at this point.

And here is a pathchar run from a Linux system at home through Road Runner towards Qualcomm. This was run at approximately the same time as the trace above. The target of this trace was a Linux system on our DMZ. I skipped the first several hops to avoid the usual braindamage in the cable routers and TASes that keeps them from reliably generating the ICMP messages that pathchar requires.

Keep in mind that every test packet had to pass over the cable modem's polled reverse channel, with all its attendant variable delays. So the numbers here are somewhat more suspect than those taken from the other direction, especially since the route through CERFnet did not appear congested when probed from the other end.

marge# ./pathchar -f 4 -i 51 -m 1500 192.35.156.12
pathchar to 192.35.156.12 (192.35.156.12)
 doing 32 probes at each of 64 to 1500 by 44
 3 204.210.0.254 (204.210.0.254)	RR Cisco 7507 router
 |   2.3 Mb/s,   9.62 ms (62.1 ms),  +q 23.3 ms (6.74 KB) RR link to MCI
 4 204.70.252.105 (204.70.252.105)	MCI's router in Los Angeles
 |   ?? b/s,   -2642 us (54.3 ms),  8% dropped
 5 204.70.170.49 (204.70.170.49)
 |   7.4 Mb/s,   4.86 ms (65.7 ms),  +q 26.0 ms (24.0 KB),  13% dropped
 6 204.70.1.153 (204.70.1.153)
 |    14 Mb/s,   -356 us (65.8 ms),  +q 25.9 ms (45.1 KB),  7% dropped
 7 204.70.1.206 (204.70.1.206)
 |   ?? b/s,   -1442 us (62.9 ms),  3% dropped MAE-West gigaswitch
 8 134.24.88.100 (134.24.88.100)
 |   7.9 Mb/s,   3.53 ms (71.5 ms),  +q 12.2 ms (12.1 KB) *2
 9 134.24.29.13 (134.24.29.13)
 |   ?? b/s,   1.31 ms (72.3 ms),  1% dropped
10 134.24.32.9 (134.24.32.9)
 |    78 Mb/s,   581 us (73.6 ms),  +q 8.80 ms (86.1 KB)
11 134.24.32.5 (134.24.32.5)
 |    39 Mb/s,   -259 us (73.4 ms),  +q 7.98 ms (38.6 KB) DS3 link to QCOM 
12 134.24.47.200 (134.24.47.200)
 |   ?? b/s,   -880 us (66.0 ms)
13 192.35.156.12 (192.35.156.12)
13 hops, rtt 32.2 ms (66.0 ms), bottleneck 2.3 Mb/s, pipe 12098 bytes

It is interesting that the pathchar runs in both directions made roughly the same estimate of the bandwidth of the RR/MCI link: about 2 megabits/sec. It was also pretty clear that this link was rather heavily loaded, and that can cause the bandwidth estimate to be on the low side. The reason lies in the way pathchar works: it sends packets of a variety of sizes and looks for a correlation between packet size and delay. It takes the minimum delay for each size and assumes that it corresponds to the transmit queue being empty; chances are good that if you send enough packets, sooner or later one will get sent when the queue really is empty. But if the link stays busy for the entire test, pathchar instead measures the bandwidth available at the time of the test, which will be less than the link's true capacity. In any event, the figures shown are consistent with some small number of T1s, perhaps 2 or 3.
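To make that concrete, here is a minimal Python sketch of the estimation step just described. The probes structure and the estimate_link function are invented for illustration; real pathchar also subtracts the previous hop's fit so that the slope reflects a single link, a step this sketch skips.

# probes maps packet size in bytes -> list of measured delays in seconds
# for one hop, e.g. probes = {64: [...], 108: [...], ..., 1500: [...]}
def estimate_link(probes):
    # Keep only the minimum delay seen at each size, the sample most
    # likely to have been sent when the transmit queue was empty.
    sizes = sorted(probes)
    mins = [min(probes[s]) for s in sizes]

    # Least-squares fit of: delay ~ fixed_latency + size / bandwidth
    n = len(sizes)
    mean_s = sum(sizes) / n
    mean_d = sum(mins) / n
    slope = (sum((s - mean_s) * (d - mean_d) for s, d in zip(sizes, mins)) /
             sum((s - mean_s) ** 2 for s in sizes))
    intercept = mean_d - slope * mean_s

    # slope is seconds per byte of serialization time
    bandwidth_bps = 8.0 / slope
    return bandwidth_bps, intercept

As noted above, if the queue never quite empties during the test, even these per-size minimums include some queueing delay, and the resulting estimate reflects the bandwidth left over by competing traffic rather than the link's raw capacity.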

Pathchar to an @Home subscriber

I couldn't resist a pathchar run to an @Home subscriber, just for comparison. This particular @Home subscriber is my father, who lives just north of Baltimore, Maryland. I haven't had time to fully annotate this script yet, but note the nice fat pipes all the way across the country. Also note the pipe size at the end -- this is how large your TCP receive window has to be to take full advantage of the available bandwidth.
pathchar to 24.3.11.51 (24.3.11.51)
 doing 32 probes at each of 64 to 1500 by 44
 0 localhost
 |   6.3 Mb/s,   417 us (2.74 ms)
 1 129.46.100.1 (129.46.100.1)
 |   5.1 Mb/s,   -50 us (5.00 ms)
 2 129.46.54.3 (129.46.54.3)
 |   8.3 Mb/s,   199 us (6.84 ms)
 3 192.35.156.1 (192.35.156.1)
 |    39 Mb/s,   220 us (7.59 ms)
 4 134.24.47.100 (134.24.47.100)
 |   107 Mb/s,   520 us (8.74 ms)
 5 134.24.32.6 (134.24.32.6)
 |    35 Mb/s,   1.14 ms (11.4 ms) San Diego to LA
 6 134.24.32.10 (134.24.32.10)
 |    43 Mb/s,   3.99 ms (19.6 ms),  1% dropped LA to SF
 7 134.24.29.14 (134.24.29.14)
 |    81 Mb/s,   456 us (20.7 ms) MAE-West gigaswitch
 8 198.32.136.70 (198.32.136.70)
 |    43 Mb/s,   407 us (21.8 ms)
 9 172.16.4.2 (172.16.4.2)
 |    30 Mb/s,   29.2 ms (80.5 ms) obviously the cross-country link
10 172.16.0.58 (172.16.0.58)
 |    47 Mb/s,   96 us (80.9 ms),  +q 1.31 ms (7.62 KB) *2,  2% dropped
11 10.0.236.6 (10.0.236.6)
 |    31 Mb/s,   284 us (81.9 ms),  +q 1.75 ms (6.73 KB) *2
12 24.3.0.240 (24.3.0.240)
 |   4.5 Mb/s,   1.89 ms (88.3 ms),  +q 3.40 ms (1.91 KB) *2
13 24.3.11.51 (24.3.11.51)
13 hops, rtt 77.5 ms (88.3 ms), bottleneck 4.5 Mb/s, pipe 49481 bytes
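
The pipe figure on that last line is just the path's bandwidth-delay product: the bottleneck bandwidth times the round-trip time, which is how much data must be in flight at once to keep the slowest link busy. Here is a quick check in Python, with the numbers copied from the summary line above; the small discrepancy comes from pathchar rounding the printed figures.

# Bandwidth-delay product for the @Home path, from the summary line above.
bottleneck_bps = 4.5e6    # slowest link on the path, in bits per second
rtt_seconds = 0.0883      # round-trip time, 88.3 ms

pipe_bytes = bottleneck_bps / 8 * rtt_seconds
print(round(pipe_bytes))  # about 49700; pathchar reported 49481 bytes

A TCP receive window smaller than this keeps the sender from ever filling the bottleneck link, which is why the window size matters so much on a long, fat path like this one.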