Wireless Digital Communication: A View
Based on Three Lessons Learned
Andrew J. Viterbi
San Diego, CA
While digital modulation is also certain to become the universal choice for two-way personal and mobile communication, and while the two techniques just mentioned will be equally important here for the same reasons, there is another even more critical requirement for such applications: namely, affording many users simultaneous and equal access to the network. Whatever multiple access technique is employed, the ultimate performance limitation is the system's susceptibility to interference. In narrowband multiple access techniques, whether frequency division or time division, this interference manifests itself as inter-user, intersymbol and multipath interference. Mitigation of the detrimental effects of these disturbances is limited to equalization and to isolation by frequency reuse with sufficient spatial separation. The first may be difficult to implement, particularly in mobile vehicles, while the second reduces spectrum efficiency.
The pitfall of the computer-communication analogy is the exporting not only of hardware capabilities but of system concepts as well from the computer to the communication arena. Thus, for example, the concept of OSI layering, which is a logical extension of software hierarchies in computers, presents a deceptively oversimplified view of communication systems and networks. The design of the lowest (or physical) layer has a much more profound impact on the feasibility of communication processes at higher layers than can be explained solely by issues of speed and memory. For an understanding of this key point, it is necessary to explore more fully the fundamentals of communications which by a remarkable coincidence were revealed in a research publication in 1948, the same year as the discovery of the transistor and at the same research organization, the Bell Telephone Laboratories in Murray Hill, New Jersey.
These three lessons are transforming all the communication products and services which surround us.
The first lesson has radically changed the design approach to satellite communication over the past two decades. It has also had a major impact on the wireline modem industry and is just beginning to be felt as an economic factor in the recording industry as well. Each field has applied Shannon's first lesson and assigned its own jargon to its implementation. In satellite communication the techniques are called "forward error correction (FEC) with soft decision decoding". In high quality wireline modems they are known as "maximum-likelihood sequence estimation" and "trellis coding", and in magnetic recording as "partial response maximum-likelihood (PRML) detection". More interesting is the impact of these techniques:
(a) In recording they are replacing the very inefficient "peak detector" and increasing recording densities several-fold.
(b) In wireline modems, adaptive linear equalization in the seventies raised the data rate of a 3 kHz leased line from 1200 to 9600 bit/s; enhancement by trellis coding in the eighties has produced transmission rates as high as 19.2 kbit/s.
(c) For satellites, convolutional codes with soft decision decoders have reduced the link budget requirement for digital communication by 6 dB and thus ensured the feasibility of a new billion dollar industry: VSATs providing up to 64 kbps service using only 1.2 m to 2.4 m dishes. The impact of this technology on satellite digital TV broadcasting is just over the horizon. Mobile satellite communication in the heretofore underused Ku-band spectrum is even more impressive: a one-watt transmitter provides both position-location and messaging communications over one transponder of a conventional non-processing satellite for tens of thousands of trucks on the highways of North America today, of Europe shortly, and possibly of Japan in the not-too-distant future.
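The common thread in these three applications is the soft-decision part of Shannon's first lesson: deciding on the raw analog channel values rather than on pre-sliced bits recovers a substantial fraction of the available coding gain. The following sketch (my illustration, not from the article; the repetition code and noise level are assumed for simplicity, standing in for the convolutional codes actually used) compares the two decision styles on the same noisy channel.

```python
import random

# Illustrative sketch: one information bit is sent as a 5-symbol
# repetition code over an additive Gaussian noise channel, then decoded
# (1) by hard-slicing each symbol and majority-voting, and
# (2) by summing the raw analog values before a single decision.
# Soft combining should yield a visibly lower bit error rate.
random.seed(1)

N = 20000          # information bits simulated
REP = 5            # repetition factor (assumed toy code)
SIGMA = 1.0        # noise standard deviation; signal amplitude is +/-1

hard_errs = soft_errs = 0
for _ in range(N):
    bit = random.choice([0, 1])
    tx = 1.0 if bit else -1.0
    rx = [tx + random.gauss(0.0, SIGMA) for _ in range(REP)]
    # Hard decision: slice each received symbol first, then majority vote.
    hard = sum(1 if r > 0 else -1 for r in rx) > 0
    # Soft decision: add the analog values, decide once at the end.
    soft = sum(rx) > 0
    hard_errs += hard != (bit == 1)
    soft_errs += soft != (bit == 1)

print(f"hard-decision BER: {hard_errs / N:.4f}")
print(f"soft-decision BER: {soft_errs / N:.4f}")
```

The same principle, applied to convolutional codes with maximum-likelihood decoding, is what buys the 6 dB link-budget reduction cited above.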
The second lesson of information theory is just now being learned. Digital compression of speech and video has been a research topic for at least three decades. Bit rate reductions of almost two orders of magnitude are now the norm. High definition television (HDTV) broadcasting is now a major R & D thrust of the world's largest electronic manufacturers. Until recently an all-digital approach, which completely separates video-source compression coding from transmission coding, was scoffed at as hopelessly impractical, not because of doubts about the feasibility of implementing such compression factors, but because of the difficulty of transmitting 20 to 60 Mbps of digital information and, especially, the cost of consumer receivers capable of handling it. I believe my company, QUALCOMM, was the first to advocate such a system primarily for direct broadcast satellite (DBS) applications. On the strength of our proposal to the US Defense Advanced Research Projects Agency (DARPA), in 1989 we were awarded one of three grants for HDTV signal processing technology, the other two going to well-established teams headed by David Sarnoff Laboratories and by MIT, both advocating analog-digital hybrid methodologies. Since then numerous well-known participants have joined us on the "All Digital Bandwagon," including General Instruments, NBC, Thomson S.A., Philips N.V., Sarnoff and most recently the AT&T-Zenith joint venture. Some are concentrating on DBS and others on terrestrial or cable delivery. All seem to be learning the second lesson.
Which brings us finally to Shannon's third lesson, not yet as widely accepted as the first two. For nearly half a century, military communication specialists, who deal in particularly pernicious and malignant interference, have intuitively understood that by proper processing at both transmitter and receiver, these could be tamed into behaving like the most benign of interference, thermal (White Gaussian) noise. (It is, in fact, no coincidence that just prior to the research which led to his 1948 publication, Shannon's primary preoccupation was military communication and secrecy.)  But personal, wireless and cellular communication also must deal with a variety of interference forms, primarily "the four multiples":
a. Multiple-user access
b. Multiple cell-sites
c. Multiple-path propagation
d. Multiple media
Through the use of spread spectrum techniques, the detrimental effect of all four can be mitigated and in some cases, (b) and (c) in particular, can even be used to improve communication performance. For example, with wideband spread spectrum modulation, the multiple paths of multipath propagation can be isolated and through diversity combining can be used to advantage. Actually, this is but one of several diversity techniques utilized in a well-designed spread spectrum cellular system. Another is cell-site antenna diversity combining and a third the combining of signals from and to two or more cell sites through a technique known as "soft-handoff." Equally important unique features of spread spectrum CDMA are interference reduction through "voice activity gating" and "cell-site sectorization gain." What is left, after these mitigating techniques have been exhausted, is the benign additive Gaussian noise which is itself mitigated by FEC coding.
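The multipath-diversity claim above can be made concrete with a small numerical sketch (my illustration; the three ray amplitudes are assumed values). A narrowband receiver sees only the phasor sum of the unresolved rays, which can nearly cancel; a wideband receiver that resolves the rays and combines them by weight (as in a RAKE receiver with maximal-ratio combining) obtains the sum of the ray energies, independent of the phases.

```python
import cmath
import random

# Hypothetical sketch of multipath diversity: three propagation rays with
# fixed amplitudes but random carrier phases (modeling changing path
# lengths). Narrowband reception sees the phasor sum; resolved rays can
# be maximal-ratio combined into a phase-independent energy sum.
random.seed(3)

GAINS = (1.0, 0.6, 0.3)   # ray amplitudes (assumed)
TRIALS = 10000

worst_narrow = float("inf")
for _ in range(TRIALS):
    rays = [cmath.rect(g, random.uniform(0.0, 2.0 * cmath.pi))
            for g in GAINS]
    # Narrowband: rays superpose before detection -> phasor-sum power,
    # which fades deeply when the phases oppose one another.
    worst_narrow = min(worst_narrow, abs(sum(rays)) ** 2)

# Resolved and maximal-ratio combined: received power is proportional to
# the sum of ray energies, no matter what the phases do.
mrc_power = sum(g * g for g in GAINS)

print(f"worst narrowband power over {TRIALS} fades: {worst_narrow:.4f}")
print(f"combined power of resolved rays:            {mrc_power:.4f}")
```

The deep fades in the first figure are exactly the multipath interference that narrowband systems must equalize away; the spread-spectrum receiver turns the same physics into a diversity gain.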
Most important is the gain in "Reuse Factor," a cellular parameter which is inversely proportional to the number of different frequency assignments necessary to guarantee that neighboring cells are assigned disjoint frequency bands. Reuse factors of 1/19, 1/12, 1/7, 1/4, and 1/3 have been used or proposed for progressively more optimistic designs of FDMA and TDMA systems. CDMA systems employ ubiquitous frequency reuse and thus have a factor of 1. Ubiquitous and universal frequency reuse applies to all users in the assigned spectrum or channel, to all cell sites, and to all media, terrestrial and satellite (both geostationary and low earth orbit); in fact, with CDMA, seamless handover between media becomes possible.
There is, however, one major impediment to universal reuse which must be overcome, particularly in terrestrial systems. Known as the "Near-Far" problem, this refers to the condition where some users are much closer to the base station than others, thus introducing excessive interference. This has been solved in cellular systems by implementing several levels of rapid, tight, power control. Experimental systems have demonstrated control of power up to 100 dB in fractions of a second, with a total variation in controlled power on the order of +/- 1.5 dB.
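The closed-loop portion of such power control can be sketched in a few lines (my toy model, not QUALCOMM's actual algorithm; the target SNR, step size, path loss, and link-budget constant are all assumed). The cell site compares each mobile's received signal quality against a target and commands fixed up/down steps, walking a strong near-in user down to the same received level as a weak distant one.

```python
# Hedged sketch of closed-loop power control for the near-far problem.
# All numbers below are assumptions for illustration only.
target_db = 7.0        # desired received SNR at the cell site
step_db = 1.0          # size of each up/down power command
path_loss_db = 110.0   # this mobile's propagation loss
LINK_CONST_DB = 100.0  # toy link-budget constant folding in noise/gains

tx_power_db = 23.0     # mobile starts at an arbitrary transmit power (dBm)
for _ in range(200):   # rapid updates, many per second in a real system
    received_snr_db = tx_power_db - path_loss_db + LINK_CONST_DB
    # The cell site sends a 1-bit command each update: raise power if the
    # received SNR is below target, lower it otherwise.
    tx_power_db += step_db if received_snr_db < target_db else -step_db

final_snr_db = tx_power_db - path_loss_db + LINK_CONST_DB
print(f"converged transmit power: {tx_power_db:.1f} dBm")
print(f"received SNR at cell site: {final_snr_db:.1f} dB")
```

The loop converges to within one step of the target and then dithers around it, consistent with the roughly +/- 1.5 dB residual variation quoted above.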
This is not the forum for detailed analyses; these have been presented by my associates and myself in other venues and journal publications. To provide some measure of the capacity of CDMA, we quote the current goal of the QUALCOMM CDMA system:
CAPACITY (CDMA) ≈ 1 Bit/Sec/Hz/Cell
This presupposes a Voice Activity factor of 1/2 and Sectorization Gain on the order of 4 to 6 dB. (This is by no means a physical limitation; increases by a factor of 2 to 4, or even more, may well be feasible in second generation systems.)
By comparison, the current equivalent capacity of analog AMPS, (assuming the same 10 K bit/sec voice quality) is
CAPACITY (AMPS) ≈ 1/21 Bit/Sec/Hz/Cell
(Narrow-band AMPS may, at best, double this capacity given that it triples the calls per current channel but undoubtedly will require a lower reuse factor.) North American TDMA will provide at best
CAPACITY (N.A.-TDMA) ≈ 1/7 Bit/Sec/Hz/Cell
assuming the current reuse factor of 1/7, provided the current C/I = 18 dB requirement can be tolerated. European (GSM) TDMA is designed to provide
CAPACITY (GSM-TDMA) ≈ 1/10 Bit/Sec/Hz/Cell
assuming a reuse factor of 1/4.
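The non-CDMA figures above follow from simple arithmetic, which the short check below reproduces. The channelization parameters it uses (30 kHz AMPS channels, 3 TDMA slots per 30 kHz carrier, 8 GSM slots per 200 kHz carrier) are assumed standard values and are not stated in this article; everything is normalized to the same 10 kbit/s voice quality quoted above.

```python
from fractions import Fraction as F

# Worked check of the quoted capacity figures in bit/s/Hz/cell.
VOICE_RATE = 10_000  # bit/s per voice circuit (same quality for all)

def capacity(circuits, bandwidth_hz, reuse):
    """Capacity per cell: circuits * rate / bandwidth, scaled by reuse."""
    return F(circuits * VOICE_RATE, bandwidth_hz) * reuse

amps = capacity(1, 30_000, F(1, 7))      # 1 analog call per 30 kHz channel
na_tdma = capacity(3, 30_000, F(1, 7))   # 3 digital slots per 30 kHz
gsm = capacity(8, 200_000, F(1, 4))      # 8 digital slots per 200 kHz

print(f"AMPS:     {amps} bit/s/Hz/cell")     # 1/21
print(f"NA-TDMA:  {na_tdma} bit/s/Hz/cell")  # 1/7
print(f"GSM-TDMA: {gsm} bit/s/Hz/cell")      # 1/10
```

Note that the entire factor-of-7 or factor-of-4 penalty in the TDMA entries comes from the reuse factor, which is precisely the term that CDMA's universal frequency reuse eliminates.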
What is it about TDMA which prevents it from achieving a higher reuse factor? In TDMA, users in the same cell are kept from interfering (much) with one another by assigning them disjoint frequency and time slots. Once all frequency slots assigned to a given cell have been allocated, the neighboring cells must be assigned another set of frequency slots and so forth until we reach a distant enough cell that the original frequency can be reused. What keeps this from happening too soon (and therefore with higher reuse factor) is that both the multi-user interference and the multipath interference (that which has not been neutralized by equalization) are seriously detrimental, unlike the benign noise interference characteristic of CDMA.
In short, the principal drawback of TDMA (and FDMA) is that while its proponents have learned the first two lessons of Information Theory (they do employ soft decision decoding and equalization and they do separate voice compression-Vocoding-from channel transmission-processing and FEC soft-decision decoding), they have missed the third and most important lesson on rendering the interference benign. This is only possible with CDMA and spread spectrum, the logical choice for personal, mobile and wireless digital access.
Though capacity and call quality may be the primary concern, other desirable features in personal and mobile communication include:
a) Transmitter power requirements of subscriber units.
b) Cell-site costs, which are dominated by RF and analog circuitry.
c) Transition plan for gradual and profitable conversion from analog cellular and coexistence with existing systems.
d) Security and privacy.
CDMA excels over all other multiple access techniques in providing the best solution for all of the above. Without embarking on a detailed exploration of why and how, suffice it to say again that all are facilitated by the wideband benign interference properties of CDMA of which we have spoken, and that universal frequency reuse avoids the complicated issue of frequency-management planning when additional cells are introduced.
No assessment is complete without a comparison of implementation complexity and cost. Doubtless, CDMA is conceptually more difficult to understand. But difficulty of concept should not be confused with difficulty of implementation. Were this so, the CD-player would never have become the popular low-priced consumer product it is. And this brings me back to my original premise, that advances in digital communication in the latter half of this century were guided by the lessons of information theory but fueled by the spectacular progress in solid state electronics. Specifically, the "smart" and "difficult" algorithms are relegated to the solid state circuitry, where levels of integration and speed double every two years, making yesterday's hopelessly complex and costly implementation into the high-volume low-cost microchips and ASIC implementations of today. What differentiates CDMA from other multiple access techniques is mostly contained in these chips. At the same time, the conventional analog and RF circuitry in the cell-site is considerably reduced because it is shared among more users, with user separation performed at baseband, again in the same powerful chips. The subscriber equipment complexity is also reduced, primarily because of reduced transmitter power requirements.
Let me conclude with a personal observation. I have been privileged to participate in the communication engineering profession and the telecommunication industry over the majority of the period during which we have learned how to apply the key lessons of Shannon's Information Theory. I firmly believe that the final decade of the half century will bring this knowledge to complete fruition in the form of ubiquitous digital communication products and services undreamed of as I entered the field.
K. Kobayashi, Computers and Communications: A Vision of C&C, MIT Press, Cambridge, MA, 1986.
C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, Vol. 27, pp. 379-423 and 623-656, July and October, 1948.
J. A. Heller and I. M. Jacobs, "Viterbi Decoding for Satellite and Space Communication," IEEE Transactions on Communications Technology, Vol. COM-19, pp. 835-848, October, 1971.
G. D. Forney, Jr., "Maximum-Likelihood Sequence Estimation of Digital Sequences in the Presence of Intersymbol Interference," IEEE Transactions on Information Theory, Vol. IT-18, pp. 363-378, May, 1972.
G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE Transactions on Information Theory, Vol. IT-28, pp. 55-67, January, 1982.
R. Wood, "Magnetic Megabits," IEEE Spectrum, Vol. 27, pp. 32-38, May, 1990.
Having previously doubled the communication ranges of space missions such as Voyager and Galileo.
I. M. Jacobs et al., "The Application of a Novel Two-Way Mobile Satellite Communications and Vehicle Tracking System to the Transportation Industry," IEEE Transactions on Vehicular Technology, Vol. 40, pp. 57-63, February, 1991.
QUALCOMM, Inc., "Technical Proposal for an HDTV Receiver/Processor," submitted to DARPA, Arlington, VA, February, 1989.
United Press International National Wire Service, "Pentagon Names More HDTV Contractors," October 26, 1989.
New York Times, "Advanced TV Testing Set Amid Tumult on Technology," pp. C1 and C6, November 15, 1990, and "AT&T and Zenith in Venture - Plan to Jointly Build All-Digital System for Advanced TV," pp. C1 and C18, December 18, 1990.
R. Price (Ed.), "A Conversation with Claude Shannon," IEEE Communications Magazine, Vol. 22, pp. 123-126, May, 1984.
W. C.-Y. Lee, Mobile Cellular Telecommunications Systems, McGraw-Hill, New York, 1989.
This factor is effectively reduced by the increase in interference from neighboring cells, but still remains greater than 3/5. See also Reference .
IEEE GLOBECOM Workshop on CDMA in Satellite and Terrestrial Applications, San Diego, CA, December, 1990.
K. S. Gilhousen et al., "On the Capacity of a Cellular CDMA System," IEEE Transactions on Vehicular Technology, Vol. VT-40, May, 1991.