U-verse data rates and artificial limits

My U-verse Residential Gateway maintains a VDSL2 link to an AT&T VRAD about 438 meters from our house. All three U-verse services (video, telephone and Internet access) are carried in Internet Protocol packets and Ethernet frames over this single link. It's an elegant concept, really. Especially for a guy like me who saw the potential for this kind of thing way back in the 1980s when I first got interested in the Internet.

VDSL2 (ITU-T G.993.2) is much like ordinary DSL, only faster. It is also limited to shorter distances, which is why AT&T had to deploy so many VRADs: to shorten the copper loop to each customer. As far as I can tell, VDSL2 does not use ATM internally, unlike conventional DSL. This is a major win, if true. "Ordinary" DSL modems actually carry 53-byte ATM cells, each with a 48-byte payload and a 5-byte header, so nearly 10 percent of the link capacity is wasted in ATM overhead that rarely provides any benefit. [1] DSL carries Ethernet frames over ATM with AAL5 encapsulation, which adds its own overhead, though not nearly as much as ATM itself. And of course most DSL services implement PPPoE, meaning yet another layer of mostly useless overhead.
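The overhead figures are simple arithmetic: 5 header bytes in every 53-byte cell, before AAL5 padding even enters the picture. A quick sketch (it ignores LLC/SNAP encapsulation bytes for simplicity):

```python
# ATM cell overhead: each 53-byte cell carries only 48 bytes of payload.
CELL_SIZE = 53
HEADER = 5
PAYLOAD = CELL_SIZE - HEADER  # 48 bytes

atm_overhead = HEADER / CELL_SIZE
print(f"ATM header overhead: {atm_overhead:.1%}")  # -> 9.4%

# AAL5 adds an 8-byte trailer, and the result is padded out to a whole
# number of cells; PPPoE adds 8 more bytes per frame on top of all this.
frame = 1500                  # bytes of payload in a full Ethernet frame
aal5 = frame + 8              # payload plus AAL5 trailer
cells = -(-aal5 // PAYLOAD)   # cells needed, rounded up (padding)
wire_bytes = cells * CELL_SIZE
print(f"{frame} payload bytes take {wire_bytes} bytes on the wire "
      f"({wire_bytes / frame - 1:.1%} total overhead)")
```

For a full-size frame the cell padding pushes the total overhead past the raw 9.4% header tax.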

At installation in mid-November 2009, my U-verse VDSL2 link ran at 25208 kb/s down and 2040 kb/s up. This was a standard profile reported by many other U-verse users. I had the fastest Internet service they offered: 18 Mb/s down and 1.5 Mb/s up. In late January 2010, AT&T quietly upgraded the speed of my VDSL2 link to 32200 kb/s down and 5040 kb/s up. Others have also reported these increases in their areas. Not until April did they finally tell me that I could buy their "Turbo Max" service: 24 Mb/s down and 3 Mb/s up. I immediately subscribed.

The new downstream limit still reserves 8 Mb/s for video and voice, so unless I'm watching or recording at least one HDTV channel I'm forced to waste part of the pipe even when I could use it. Video control uses negligible upstream capacity and voice uses only 100 kb/s at most, so in the upstream direction I'm forced to waste 2 Mb/s all the time. That's on a copper pair where the engineers had to pull out all the stops -- including deploying all those VRADs that so many find so objectionable in their yards -- to achieve those wire speeds. Why do they then turn around and waste part of that hard-fought capacity?
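The reserved slices fall straight out of the sync rates. A sketch using the figures above:

```python
# Link sync rates vs. the Internet tier actually sold (figures from the text).
down_sync, up_sync = 32.2, 5.04    # VDSL2 link, Mb/s
down_tier, up_tier = 24.0, 3.0     # "Turbo Max" Internet service, Mb/s

down_reserved = down_sync - down_tier   # held back for video + voice
up_reserved = up_sync - up_tier

print(f"Downstream reserved: {down_reserved:.1f} Mb/s")  # ~8.2
print(f"Upstream reserved:   {up_reserved:.2f} Mb/s")    # ~2.04
# Upstream video control plus voice needs well under 0.2 Mb/s combined,
# so nearly all of that 2 Mb/s upstream reservation sits idle.
```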

I determined the data rate for U-verse video streams by tracing the IP multicast video packets to the set top box. A typical standard definition TV channel (NASA TV) generates 1.66 Mb/s and a typical HD TV channel (SyFy HD) generates 5.766 Mb/s.
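The measurement itself is straightforward: capture the multicast packets for one channel and divide total bits by elapsed time. A minimal sketch of that calculation -- the packet list here is made up for illustration; in practice the timestamps and lengths would come from a capture tool like tcpdump:

```python
# Each entry: (timestamp in seconds, IP packet length in bytes).
# These numbers are illustrative, not an actual U-verse trace.
packets = [(0.000, 1356), (0.005, 1356), (0.010, 1356), (0.015, 1356)]

def stream_rate_mbps(pkts):
    """Average bit rate over the capture interval, in Mb/s."""
    elapsed = pkts[-1][0] - pkts[0][0]
    total_bits = sum(length * 8 for _, length in pkts)
    return total_bits / elapsed / 1e6

print(f"{stream_rate_mbps(packets):.2f} Mb/s")
```

Average the result over a minute or more of capture and the rate settles down to a stable per-channel figure.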

Artificial limits

It seems to me that while it has made definite progress, AT&T is still stuck firmly in the past. [2]

Now that AT&T has finally built a high speed residential packet switched network, it still treats it as though it were the circuit-switched network it replaced. The great thing about a packet switched network like the Internet is its dynamic use of bandwidth: capacity is continually shared by all the applications. You never have to reserve capacity for an application that isn't using it, or let capacity sit idle when someone could be using it.

A "triple play" service like U-verse can benefit greatly from this flexibility, but AT&T has hobbled it by artificially limiting how much of the VDSL2 link can be used for Internet access and/or video. Not only is much of the channel forced to remain idle when it could be used for Internet traffic, but the video service is also arbitrarily limited to two HD and two SD channels at any time. The user isn't even given the option to slow down his Internet downloads to get another TV channel. Why?

(The reason is not "Quality of Service". U-verse already uses QoS mechanisms, specifically IEEE 802.1p priority tags, to mark its own voice and video packets as more important than those to or from the user's own network (including, incidentally, any VoIP adapters the user might have on a competing service like Vonage). Nothing the user does on his computer can degrade the quality of a U-verse phone call or video stream.)

This 2 HD + 2 SD limit is probably the most common complaint I see on the U-verse discussion groups. Although we have only a single TV and DVR, there will probably be times when we'd like to record more than two HD channels at once. It would be nice to have that ability, and it would put U-verse well ahead of cable systems where you're limited by the number of RF receivers and demodulators you can put into each set-top box.

U-verse's chief competitor, cable, is (currently) a broadcast system so it has no inherent limit on how many different channels can be watched at once. Although existing DVRs like the TiVo may have hardware limits (such as only two tuners), these can be overcome with additional DVRs. This is not an option in U-verse; it's my understanding that you simply cannot get more than one DVR per household.

It's possible that these limits are due to capacity limitations elsewhere in the U-verse network, but I tend not to think so. U-verse is a Fiber To The Node (FTTN) system, so the bottleneck is invariably the link between the node and the user, since that link reuses old existing copper. The nodes are connected to their upstream facilities by new fiber, and while fiber capacity isn't infinite, it is huge compared to that of even the latest VDSL2 modems. It is also relatively easy to upgrade capacity on existing fiber facilities, since doing so usually entails activating spare strands rather than installing new cables.

The bandwidth demands of individual users are inherently bursty so their VDSL2 links are idle much of the time. So when their traffic is combined at the VRAD, the total is considerably less than the sum of the individual link speeds.
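This is just statistical multiplexing. A toy simulation -- the 5% duty cycle is my own assumption, not a measured figure -- shows why the aggregate demand at the VRAD stays far below the sum of the link speeds:

```python
import random

random.seed(1)
LINK = 32.2          # per-subscriber VDSL2 downstream, Mb/s
HOMES = 200
DUTY_CYCLE = 0.05    # assumed: each link busy ~5% of the time (bursty use)

# Sample the instantaneous aggregate demand at the VRAD many times.
samples = []
for _ in range(10_000):
    active = sum(random.random() < DUTY_CYCLE for _ in range(HOMES))
    samples.append(active * LINK)

peak_sum = HOMES * LINK
print(f"Sum of link speeds: {peak_sum:.0f} Mb/s")
print(f"Typical aggregate:  {sum(samples) / len(samples):.0f} Mb/s")
print(f"Worst sample seen:  {max(samples):.0f} Mb/s")
```

Even the worst sample comes nowhere near the sum of the individual link speeds, which is exactly why oversubscribing the backhaul is safe.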

I brought this up in an email conversation with one of AT&T's "level 2" support people. He tried to tell me that the network equipment was like the engine in a sports car: you don't want to drive it at the red line all the time because that'll wear it out. I don't know if he was told to use that analogy or if he came up with it on his own, but needless to say it's a pretty silly one, and completely inapplicable.

He then claimed, rather weakly, that backhaul capacity considerations from the VRAD limit how much can be offered to each individual subscriber. This argument might even have begun to hold water except for the numbers he then provided. The VRADs, he said, are connected by 10 gigabit Ethernet over fiber, and each VRAD serves upwards of 200 homes. Let's see... 10 gigabits shared among 200 homes is 50 megabits per home. My VDSL2 link runs at 32.2 Mb/s. Um, what's the problem again...?
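His own numbers undercut the argument. Using the figures he quoted:

```python
backhaul = 10_000   # VRAD uplink, Mb/s (10 gigabit Ethernet)
homes = 200
link = 32.2         # my VDSL2 downstream sync rate, Mb/s

per_home = backhaul / homes
print(f"{per_home:.0f} Mb/s of backhaul per home")        # 50
print(f"One link vs. its backhaul share: {link / per_home:.2f}x")  # 0.64x

# Even if every one of the 200 homes ran flat out simultaneously,
# the total (200 * 32.2 = 6440 Mb/s) still fits inside the 10 Gb/s uplink.
```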

The whole point is that it doesn't really matter how fast or slow the backhaul from the VRAD may be. With modern Internet routers and priority QoS mechanisms there's no reason to force capacity to remain idle when a user could be using it. Not unless, of course, you're trying to maintain the public impression that broadband capacity is really scarce and expensive.

Footnotes

[1] All that overhead in ATM provides only one real advantage: the ability to multiplex several streams of data of different priorities over a relatively slow link in a way that minimizes latency for the high priority traffic. The selection is made on a cell-by-cell basis, and the small cell (53 bytes) means that the high priority traffic doesn't have to wait long for a low priority frame to finish transmission.

The combination of user Internet (computer) traffic with Voice over IP (VoIP) would seem to be an ideal application for this feature, but I have yet to find a single DSL service that actually takes advantage of it. They just send everything over a single permanent virtual circuit (PVC), so you pay all the overhead and reap none of the benefits.

ATM priority multiplexing is most useful on slow links. It becomes less useful as link speeds increase because you don't have to wait very long even for a full Ethernet frame (1500 bytes) to finish transmission. This is undoubtedly a big reason that VDSL/VDSL2 doesn't use ATM at all.
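The footnote's point is easy to make concrete: the worst-case wait for high-priority traffic is one cell time (or, without ATM, one full-frame time), and that wait shrinks linearly with link speed. A sketch:

```python
def serialization_ms(size_bytes, link_mbps):
    """Time to clock size_bytes onto a link, in milliseconds."""
    return size_bytes * 8 / (link_mbps * 1e6) * 1e3

# Representative link speeds: slow legacy DSL, typical ADSL, VDSL2.
for link in (0.384, 1.5, 25.0):
    cell = serialization_ms(53, link)     # one ATM cell
    frame = serialization_ms(1500, link)  # one full Ethernet frame
    print(f"{link:6.3f} Mb/s: 53-byte cell {cell:6.3f} ms, "
          f"1500-byte frame {frame:6.3f} ms")
```

At 384 kb/s a voice packet could sit behind a full frame for over 30 ms, so cell-by-cell preemption matters; at VDSL2 speeds the frame clears in about half a millisecond and ATM buys essentially nothing.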

[2] For decades AT&T tried to ignore or even kill packet switched technology. When the ARPANET started in the late 1960s, AT&T refused an offer by the US government to manage it; the contract went to Bolt, Beranek and Newman (BBN) instead. History was repeating itself; when Alexander Graham Bell offered to sell his telephone invention to Western Union, then by far the dominant US telecommunications company, they rejected it as useless. After all, they said, "we have plenty of messenger boys!" When the Internet began to command general attention in the mid 1990s, some friends of mine at Bell Labs got panicked calls from their Board asking why they hadn't been told about "this Internet thing". Believe me, it wasn't because my friends hadn't been trying...


Last modified: Mon Jul 12 02:08:02 PDT 2010