Phil Karn

28 Dec 2000

*28 Feb 2001 note:* Walker has revised his document since
I wrote this critique, so some of the page and section numbers may have
changed. But as of this date, it still contains the same fundamental errors.

The VMSK website includes the document *Some notes on Shannon's Limit* by Hal Walker. It is riddled with fundamental errors almost from the first line.

Starting at the top of page 1, paragraph A, formula (1) giving the
classic form of Shannon's Limit is correct. However, use of the term
"filter bandwidth" is wrong. Shannon's formula gives the capacity of a
*band-limited noisy channel*, so the bandwidth term in the
formula is that of the channel. Shannon says nothing about
transmitter filters, and he doesn't care whether you use them or not.

Formula (2) is just a restatement of formula (1) with different terms. For some reason, Walker uses two different terms to represent channel bandwidth: W and B. We'll see why later on.
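Shannon's formula is easy to sanity-check numerically. A minimal sketch in Python (the function name and the 3 kHz / 30 dB example are my illustration, not figures from Walker's document):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Capacity C = W * log2(1 + S/N) of a band-limited AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 3 kHz channel at 30 dB SNR (S/N = 1000 as a linear ratio):
c = shannon_capacity(3000, 1000)
print(round(c))  # ≈ 29902 bps
```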

Reading on, Walker's cluelessness becomes clear:

> W is the Nyquist bandwidth, defined as the minimum bandwidth that can be used to pass the signal. The Nyquist Sampling theorem is drived [sic] from that bandwidth. Basically the theorem says "you must sample once every bit to see if it is there or not".

W is indeed the minimum bandwidth, but Walker's understanding of the Nyquist theorem is completely wrong.

Nyquist's theorem is best known today for its application to digital audio systems: in order to reproduce all of the frequencies in a band-limited analog signal, the sampling rate must be at least twice the signal bandwidth. For example, compact discs sample at 44,100 Hz, so they can reproduce audio signals up to 22,050 Hz. (The actual cutoff is usually cited as 20 kHz because of imperfect filters in the sampling process.)

But the problem Nyquist worked on in the 1920s was actually the reverse
of its more modern application to digital audio. It was in fact the
*very same* problem we have here. He wanted to know how fast
you could send digital signals over a telegraph line with limited
bandwidth. His well-established answer: the maximum signalling rate is
twice the baseband bandwidth. E.g., if the telegraph line passes
frequencies from DC to 5 kHz, then you can change the telegraph signal
no more than 10,000 times per second without having your individual
signalling elements smeared together. When this baseband signal is
translated to RF with a balanced modulator, the occupied bandwidth
doubles. That makes the maximum signalling rate equal to the occupied
RF bandwidth, or 1 bps/Hz for a binary signal.
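The telegraph arithmetic above can be written out directly (a trivial sketch; the 5 kHz figure is the example from the text):

```python
baseband_hz = 5_000                  # line passes DC to 5 kHz
max_rate = 2 * baseband_hz           # Nyquist: 10,000 signal changes/s
rf_bandwidth_hz = 2 * baseband_hz    # balanced modulator doubles occupancy
efficiency = max_rate / rf_bandwidth_hz
print(max_rate, efficiency)          # 10000 1.0 -> 1 bps/Hz for binary signalling
```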

Continuing on:

> bits/symbol = bandwidth efficiency = bit rate/(noise) bandwidth

and:

> bits/symbol = bit rate/Nyquist bandwidth = bit rate/sampling rate
>
> These equalities are true only if the bandwidth of the filter equals the Nyquist bandwidth. If not, the bandwidth efficiency does not = bits/symbol.

If the term *noise bandwidth* is replaced with the correct
*channel bandwidth*, then this is actually true. However, the
channel bandwidth must be *at least* the Nyquist bandwidth of
the modulation scheme for it to work. (More on this later.)

Walker goes on to analyze 1024QAM, taking its efficiency as 10
bps/Hz. This is a little questionable, as the bandwidth occupied by
1024QAM (and most forms of digital modulation) is ill-defined
without filtering. But 10 bps/Hz is the maximum efficiency possible
when 1024QAM is filtered to its minimum Nyquist bandwidth. And for
*any* form of modulation that achieves 10 bps/Hz, Shannon
requires a minimum Eb/No of 20 dB. Because this is a theoretical limit
that assumes the use of error control coding, uncoded 1024QAM actually
requires quite a bit more than 20 dB in practice.
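The 20 dB figure follows from solving formula (2) for Eb/No at a given spectral efficiency η = fb/B: η = log2(1 + η·Eb/No) gives Eb/No = (2^η − 1)/η. A quick check (the function name is mine):

```python
import math

def min_ebno_db(eta: float) -> float:
    """Shannon minimum Eb/No in dB at spectral efficiency eta (bps/Hz)."""
    return 10 * math.log10((2**eta - 1) / eta)

print(round(min_ebno_db(10), 1))  # 20.1 dB -- the floor for 10 bps/Hz
```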

Continuing:

> D) Still considering 1024 QAM, now double the receiver noise bandwidth (cutting the bandwidth efficiency in half) and the value of W in half so that (fb/W) and (fb/B) = 5, then, in (4):
>
> (4) 5 = log2 { 1 + 5 Eb/N0 }
>
> Solving through, Shannon's Limit would then be 7.9 dB. This implies that broadening the filter's noise bandwidth improves the transmission system. This is obviously incorrect. If it were correct, engineers wouldn't try so hard to obtain narrow band filters, they would merely use broader and broader filters.

Actually, it *is* correct, if we again replace "noise
bandwidth" with the correct "channel bandwidth". By adding forward
error correction coding that doubles the occupied bandwidth (i.e., to
5 bps/Hz), the required Eb/No *does* drop from 20 dB to only
7.9 dB. Again, this is only a theoretical limit; any real system would
require a little more.
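The drop from 20 dB to 7.9 dB can be verified with the same rearrangement, Eb/No = (2^η − 1)/η (the rate-1/2 framing is my illustration of "doubling the occupied bandwidth"):

```python
import math

# Doubling the occupied bandwidth (e.g. with rate-1/2 FEC) halves the
# spectral efficiency from 10 to 5 bps/Hz, and the Shannon floor drops:
floor_10 = 10 * math.log10((2**10 - 1) / 10)  # uncoded-efficiency case
floor_5  = 10 * math.log10((2**5 - 1) / 5)    # after bandwidth doubling
print(round(floor_10, 1), round(floor_5, 1))  # 20.1 7.9
```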

Yes, some engineers do try hard to obtain narrow band RF digital
modulation. They're the ones who don't understand the implications of
Shannon's channel capacity theorem. Those who *do* understand
it know that going to *wider* bandwidths actually increases
spectral efficiency in many real-world applications. The first person
to recognize this was John P. Costas, who wrote the seminal paper
"Poisson, Shannon and the Radio Amateur" in the *Proceedings
of the IRE* in December 1959. Not until the 1990s was the technology
ready for these ideas to bear fruit in Qualcomm's highly successful
IS-95 CDMA digital cellular system, which transmits voice on 1.25 MHz
wide channels.

Shannon's theorem is all about the tradeoffs between bandwidth efficiency, power efficiency and interference resistance that modulation and forward error correction coding make possible. But Walker seems to know nothing about FEC coding. Indeed, he doesn't even understand the Nyquist theorem, which was the foundation for Shannon's work 20 years later on the limits of coding.

In section F, Walker evaluates the Shannon equation using his claimed spectral efficiency for VMSK/2, 100 bps/Hz. He gets an enormous Eb/No ratio of 300 dB and offers it as proof that the Shannon equation is being used incorrectly.

This calculation is wrong. 301 dB is the minimum
*signal-to-noise* ratio required for 100 bps/Hz; the minimum
Eb/No is 281 dB or 20 dB lower. (The 20 dB difference comes from the
100 bps/Hz spectral efficiency ratio). 281 dB is still a huge value,
but it *is* the correct result! The fact that VMSK/2 works (or
is claimed to work) with much less power hardly proves that the
Shannon equation is widely misunderstood. It merely proves that
VMSK/2 does not achieve 100 bps/Hz -- Walker's claims notwithstanding.
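Both numbers can be reproduced in a few lines (a sketch; Python's integers handle 2^100 exactly):

```python
import math

eta = 100  # Walker's claimed spectral efficiency for VMSK/2, in bps/Hz
snr_db  = 10 * math.log10(2**eta - 1)          # minimum signal-to-noise ratio
ebno_db = 10 * math.log10((2**eta - 1) / eta)  # minimum Eb/No, 20 dB lower
print(round(snr_db), round(ebno_db))  # 301 281
```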

Continuing on to section G:

> G) Instead, let Bits/Symbol (fb/W) = 1 and Bandwidth efficiency (fb/B) = 100. Then B does not equal W and in (8):
>
> (8) 1 = log2 { 1 + 100 Eb/N0 }
>
> Shannon's Limit with this equation is reached when Eb/N0 is about -20 dB (with Q = 100 b/s/Hz). This answer agrees with the SNR limit for VMSK, Eq. 9 below. But this is also a very controversial answer, since it is below the accepted minimum for Shannon's Limit.

The "accepted minimum" that Walker alludes to is Shannon's
firmly-established proof that *no* digital communication system
can operate without errors at an Eb/No of less than -1.6 dB, and then
only when infinite bandwidth is available. Walker's answer is
"controversial" for a simple reason: it's completely wrong!
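The −1.6 dB floor follows from letting the bandwidth grow without bound: as the spectral efficiency η goes to zero, (2^η − 1)/η → ln 2. A one-line check:

```python
import math

# Infinite-bandwidth limit of the Shannon floor: Eb/No -> ln 2
ultimate_db = 10 * math.log10(math.log(2))
print(round(ultimate_db, 2))  # -1.59 dB
```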

Walker went *completely* off the deep end when he used two
separate terms (B & W) to represent bandwidth in his equation (2).
His mistake is in setting B to only a tiny fraction of the VMSK/2
signal's Nyquist bandwidth, W. In so doing, he captures only a tiny
fraction of the total VMSK signal power. When B is set to its true
value, the "controversy" disappears. When one accepts that VMSK/2
requires at least as much RF bandwidth as BPSK, because it *is*
BPSK (plus a narrow clock component), it is also seen to require the
same Eb/No as BPSK (plus the energy wasted on the clock that carries
no information).

I could slog through the rest of Walker's document, but there's little point in doing so. It's already clear he hasn't a clue what he's talking about.

At the core of the VMSK controversy are Walker's claims of high bandwidth efficiency. But what is "bandwidth", anyway?

It turns out that there are several definitions of bandwidth.
Which you choose makes a big difference. Shannon's formula, being a
theoretical limit, uses an idealized definition: the channel passes
every signal within its bandwidth, and it *completely* blocks
every signal outside it.

Walker describes his filters in terms of their noise bandwidths. The textbooks define the "noise bandwidth" of a filter as the bandwidth of a "perfect" rectangular filter with the same amplitude as the center of the real filter's passband, and the same area under the amplitude curve as the real filter.
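That textbook definition can be checked numerically. A sketch for a single-pole RC lowpass (the corner frequency and integration step are illustrative choices of mine), whose noise bandwidth is analytically (π/2)·fc:

```python
import math

fc = 1000.0           # 3 dB corner frequency, Hz
df, f_max = 1.0, 1e6  # crude midpoint-rule integration out to 1 MHz

# B_n = integral of |H(f)|^2 df divided by |H(f0)|^2; here |H(0)|^2 = 1
# and |H(f)|^2 = 1 / (1 + (f/fc)^2) for the single-pole lowpass.
n = int(f_max / df)
b_n = sum(1.0 / (1.0 + ((i + 0.5) * df / fc) ** 2) for i in range(n)) * df

print(round(b_n), round(math.pi / 2 * fc))  # both near 1570 Hz
```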

Because no real filter has a rectangular response, signals outside the noise bandwidth of the real filter are only attenuated, not removed entirely. Walker's inability to understand the implications here lies at the crux of his delusions about VMSK. His special filters may indeed have noise bandwidths of only a few kilohertz, but they do not define the bandwidth of his VMSK signal. It's clear from their response curves that his filters pass enough of the wideband VMSK signal (what he calls "grass") to permit his demodulators to work.

A better definition of "signal bandwidth" would allow arbitrary amounts of attenuation, noise, or interference to be introduced into the channel outside the claimed signal bandwidth without affecting the operation of the link. When this is done to VMSK, it quickly becomes clear that its true bandwidth is far greater than Walker claims it to be. (Walker himself has admitted that "ordinary" narrowband filters completely destroy his modulation.) Because any real-world advantages of VMSK would come from an ability to tolerate "close packing" of other signals, it follows that defining VMSK's bandwidth by the noise bandwidth of its filters or its appearance on a spectrum analyzer is meaningless.