8.3 Digital Communication and Modulation
Introduction to Digital Communication
Digital communication involves transmitting information using a finite set of discrete symbols (typically binary digits - 0 and 1). This paradigm offers significant advantages over analog communication, including superior immunity to noise, the ability to apply powerful error correction coding, ease of encryption, and efficient multiplexing. The core of any digital communication system is the modulator, which maps discrete symbols onto analog waveforms suitable for transmission over a physical channel. This section details fundamental digital modulation techniques, their performance analysis, optimal receiver design, and the theoretical limits governing all communication systems, as established by Shannon.
1. Digital Modulation Techniques
Digital modulation involves varying a parameter (amplitude, frequency, or phase) of a carrier wave according to a digital bit stream.
1.1 Binary Modulation Schemes
1.1.1 Amplitude Shift Keying (ASK or BASK)
Principle: The amplitude of the carrier is switched between two levels (e.g., On and Off) to represent binary symbols 1 and 0.
Mathematical Representation:
For bit '1': s1(t)=Acos(2πfct),0≤t≤Tb
For bit '0': s0(t)=0,0≤t≤Tb where Tb is the bit duration.
Generation: Can be generated by simply turning the carrier oscillator on and off (On-Off Keying - OOK).
Bandwidth: Approximately B≈2Rb, where Rb=1/Tb is the bit rate.
Characteristics: Simple but inefficient in power and bandwidth. Highly susceptible to amplitude noise and fading.
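As a minimal sketch of OOK generation (all waveform parameters here — amplitude, carrier frequency, sample rate, bit duration — are illustrative choices, not from the text), the carrier is simply gated on for bit 1 and off for bit 0:

```python
import numpy as np

def ook_modulate(bits, A=1.0, fc=10.0, fs=200.0, Tb=0.1):
    """On-Off Keying: transmit the carrier for bit 1, silence for bit 0."""
    t = np.arange(0, Tb, 1.0 / fs)              # sample instants within one bit
    carrier = A * np.cos(2 * np.pi * fc * t)
    return np.concatenate([carrier if b else np.zeros_like(carrier) for b in bits])

wave = ook_modulate([1, 0, 1])                  # bit 0 maps to a silent interval
```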
1.1.2 Frequency Shift Keying (FSK or BFSK)
Principle: The frequency of the carrier is shifted between two discrete values to represent binary symbols.
Mathematical Representation:
For bit '1': s1(t)=Acos(2πf1t)
For bit '0': s0(t)=Acos(2πf0t)
Frequency Separation: The two frequencies f1 and f0 are chosen to be orthogonal over the bit period Tb. The minimum separation for orthogonality (Coherent FSK) is Δf = |f1 − f0| = 1/(2Tb).
Bandwidth: Using Carson's rule approximation, B ≈ 2Δf + 2Bm ≈ 2Δf + 2Rb. With the minimum coherent separation Δf = Rb/2, this gives B ≈ 3Rb.
Characteristics: More robust to amplitude variations than ASK but requires more bandwidth than PSK for the same data rate.
1.1.3 Phase Shift Keying (PSK or BPSK)
Principle: The phase of the carrier is shifted between two values (typically 0° and 180°) to represent binary symbols.
Mathematical Representation:
For bit '1': s1(t)=Acos(2πfct+0)=Acos(2πfct)
For bit '0': s0(t)=Acos(2πfct+π)=−Acos(2πfct)
General Form: sBPSK(t)=Ad(t)cos(2πfct), where d(t) is a polar NRZ signal taking values +1 or -1.
Constellation Diagram: Two points on the real axis at (+A, 0) and (-A, 0).
Bandwidth: B≈2Rb (same as BASK).
Characteristics: Most robust of the three basic binary schemes in an AWGN channel due to its constant envelope and maximum separation between symbol points.
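The polar-NRZ view of BPSK above can be sketched directly (parameter values are illustrative): each bit scales the carrier by +1 or -1, so consecutive opposite bits produce waveform segments that are exact negatives of each other.

```python
import numpy as np

def bpsk_modulate(bits, A=1.0, fc=10.0, fs=200.0, Tb=0.1):
    """BPSK: s(t) = A d(t) cos(2 pi fc t), with d(t) = +1 for bit 1, -1 for bit 0."""
    t = np.arange(0, Tb, 1.0 / fs)
    carrier = A * np.cos(2 * np.pi * fc * t)
    return np.concatenate([(1 if b else -1) * carrier for b in bits])

w = bpsk_modulate([1, 0])                       # a 180-degree phase flip between bits
```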
1.2 M-ary Modulation Schemes
M-ary schemes transmit k=log2M bits per symbol, increasing spectral efficiency at the cost of requiring higher SNR for the same error probability.
1.2.1 M-ary Phase Shift Keying (M-PSK)
Principle: The phase of the carrier takes on one of M equally spaced values: si(t) = A cos(2π fc t + 2πi/M), i = 0, 1, ..., M−1.
Common Types:
QPSK (Quadrature PSK, M=4): Four phases (0°, 90°, 180°, 270°). Transmits 2 bits/symbol. It can be viewed as two independent BPSK streams on in-phase (I) and quadrature (Q) carriers: sQPSK(t) = (A/√2)[I(t)cos(2π fc t) − Q(t)sin(2π fc t)], where I(t) and Q(t) are the data streams taking values ±1.
8-PSK, 16-PSK, etc.
Bandwidth Efficiency: Symbol rate Rs=Rb/log2M. Bandwidth B≈2Rs. Therefore, spectral efficiency increases with M.
Trade-off: For a fixed symbol energy, the points on the constellation get closer as M increases, making the system more susceptible to noise.
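The shrinking-distance trade-off can be made concrete: for points equally spaced on a circle of radius √Es, the minimum distance is 2√Es·sin(π/M), which falls as M grows. A small check (the symbol energy Es = 1 is an arbitrary normalization):

```python
import numpy as np

def mpsk_min_distance(M, Es=1.0):
    """Minimum Euclidean distance between M-PSK points on a circle of radius sqrt(Es)."""
    pts = np.sqrt(Es) * np.exp(2j * np.pi * np.arange(M) / M)
    return float(abs(pts[1] - pts[0]))          # adjacent points are the closest pair

d4, d8, d16 = (mpsk_min_distance(M) for M in (4, 8, 16))
```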
1.2.2 M-ary Frequency Shift Keying (M-FSK)
Principle: One of M distinct carrier frequencies is transmitted during each symbol period.
Orthogonality Condition: To ensure minimum probability of error, the frequency separation between any two tones must be Δf = n/(2Ts), where Ts is the symbol period and n is an integer (often n = 1).
Bandwidth: Approximately B≈M⋅Δf. Bandwidth increases with M, making M-FSK bandwidth inefficient.
Trade-off: M-FSK is power efficient. For a fixed probability of error, required SNR per bit decreases as M increases. It trades bandwidth for power.
1.2.3 Quadrature Amplitude Modulation (QAM)
Principle: Information is encoded in both the amplitude and phase of the carrier. It is a hybrid of ASK and PSK.
Signal Representation: si(t)=Aicos(2πfct+θi)=Iicos(2πfct)−Qisin(2πfct) where Ii and Qi are the in-phase and quadrature amplitudes defining a point in the 2D constellation.
Constellation: Points are arranged in a rectangular grid (e.g., 4-QAM is identical to QPSK, 16-QAM, 64-QAM, 256-QAM).
Bandwidth Efficiency: Same as M-PSK for the same M, but QAM places points farther apart for a given average power, making it more power-efficient than M-PSK for M > 4.
Application: The workhorse of modern high-speed wired and wireless standards (Wi-Fi, DSL, 4G/5G cellular).
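The power-efficiency claim above can be checked numerically: when both constellations are normalized to unit average energy, the rectangular 16-QAM grid keeps its points farther apart than 16-PSK does. A sketch (the normalization convention is an assumption for the comparison):

```python
import numpy as np

def qam_constellation(M):
    """Rectangular M-QAM grid (M a perfect square), scaled to unit average energy."""
    m = int(round(np.sqrt(M)))
    levels = np.arange(-(m - 1), m, 2)          # e.g. [-3, -1, 1, 3] for 16-QAM
    pts = np.array([complex(i, q) for i in levels for q in levels])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def min_distance(pts):
    """Smallest pairwise distance in a constellation."""
    return min(abs(a - b) for i, a in enumerate(pts) for b in pts[i + 1:])

qam16 = qam_constellation(16)
psk16 = np.exp(2j * np.pi * np.arange(16) / 16)   # unit-energy 16-PSK for comparison
```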
2. Error Probability Analysis for PSK
2.1 Bit Error Rate (BER) for BPSK in AWGN
Assumptions: Coherent detection, Additive White Gaussian Noise (AWGN) channel.
Result: The probability of bit error (BER) is given by: Pb(BPSK) = Q(√(2Eb/N0)), where:
Eb is the energy per bit.
N0/2 is the two-sided noise power spectral density.
Q(z) = (1/√(2π)) ∫_z^∞ e^(−x²/2) dx is the Gaussian Q-function.
Interpretation: The argument of the Q-function, √(2Eb/N0), is proportional to the distance between the two constellation points relative to the noise standard deviation. A higher Eb/N0 yields a lower BER.
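The closed-form BER can be cross-checked against a Monte Carlo simulation of the equivalent baseband model (transmit ±√Eb, add Gaussian noise of variance N0/2, decide by sign). The Eb/N0 operating point and bit count below are arbitrary illustrative choices:

```python
import math
import numpy as np

def Q(z):
    """Gaussian Q-function, Q(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def bpsk_ber_sim(EbN0_dB, nbits=200_000, seed=1):
    """Monte Carlo BER for baseband BPSK (+/- sqrt(Eb)) in AWGN of variance N0/2."""
    rng = np.random.default_rng(seed)
    EbN0 = 10 ** (EbN0_dB / 10)
    bits = rng.integers(0, 2, nbits)
    s = 2 * bits - 1                            # polar symbols, Eb = 1
    noise = rng.normal(0.0, math.sqrt(1 / (2 * EbN0)), nbits)
    decisions = (s + noise > 0).astype(int)     # sign detector
    return float(np.mean(decisions != bits))

theory = Q(math.sqrt(2 * 10 ** (6 / 10)))       # Pb at Eb/N0 = 6 dB
sim = bpsk_ber_sim(6)                           # should land close to theory
```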
2.2 Symbol Error Rate (SER) for QPSK
Relation to BPSK: Since QPSK can be decomposed into two independent BPSK streams in I and Q channels, the bit error probability is the same as for BPSK, assuming Gray coding (where nearest-neighbor symbol errors cause only one bit error).
Symbol Error Probability: Ps ≈ 2Q(√(Es/N0)) for high SNR, where Es = 2Eb is the energy per symbol.
The factor of 2 arises because an error can occur in the I channel OR the Q channel.
2.3 General Trend for M-PSK and M-QAM
For a fixed Eb/N0, the probability of error increases as M increases (constellation points get closer).
To maintain the same Pb, Eb/N0 must be increased as M increases.
Exact formulas involve the Q-function and depend on the specific constellation geometry.
3. Matched Filter Receiver
3.1 The Concept of Optimal Detection
Problem: Given a received signal r(t)=si(t)+n(t) over an interval 0≤t≤Ts, where si(t) is one of M possible symbols and n(t) is AWGN, determine which si(t) was sent with minimum probability of error.
Solution: The optimal receiver structure consists of two parts: a matched filter followed by a sampler and decision device.
3.2 Matched Filter Definition
Objective: To maximize the Signal-to-Noise Ratio (SNR) at the sampling instant t=Ts.
Definition: A filter whose impulse response h(t) is the time-reversed and delayed version of the signal waveform it is designed to detect. h(t)=ks(Ts−t) where k is a constant (often set to 1) and Ts is the symbol duration.
Frequency Response: H(f)=kS∗(f)e−j2πfTs, where S∗(f) is the complex conjugate of the signal's spectrum.
Output at Sampling Instant:
Signal component (taking k = 1): ys(Ts) = ∫0^Ts s(τ) h(Ts − τ) dτ = ∫0^Ts s²(τ) dτ = Es (the signal energy).
Noise power: σn² = N0 Es / 2.
Maximum SNR: SNRmax = (ys(Ts))² / σn² = 2Es / N0.
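The peak-at-Es property can be demonstrated numerically: convolving a waveform with its time reverse and sampling at t = Ts recovers the signal energy. The sinusoidal pulse and sampling parameters below are assumed for illustration:

```python
import numpy as np

fs, Ts = 1000.0, 0.1
t = np.arange(0, Ts, 1 / fs)                    # 100 samples over one symbol
s = np.cos(2 * np.pi * 50 * t)                  # waveform the filter is matched to
h = s[::-1]                                     # matched filter: h(t) = s(Ts - t)
y = np.convolve(s, h) / fs                      # filter output (Riemann sum of the integral)
Es = np.sum(s ** 2) / fs                        # signal energy
peak = y[len(s) - 1]                            # output sampled at t = Ts
```

The maximum of the whole output sequence also occurs at that sampling instant, which is why the sampler is placed at t = Ts.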
3.3 Correlation Receiver - An Equivalent Implementation
The operation of the matched filter is equivalent to calculating the correlation between the received signal r(t) and a replica of the expected signal si(t): li = ∫0^Ts r(t) si(t) dt.
The receiver computes these correlations li for all M possible symbols and decides in favor of the symbol that yields the largest correlation output.
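A minimal correlation receiver for a two-symbol alphabet can be sketched as follows (the two orthogonal tones, noise level, and seed are illustrative assumptions): correlate the noisy received waveform against each replica and choose the largest output.

```python
import numpy as np

def correlation_receiver(r, replicas, dt):
    """Correlate r(t) with each candidate waveform and decide for the largest."""
    corr = [float(np.sum(r * s) * dt) for s in replicas]
    return int(np.argmax(corr))

fs, Ts = 1000.0, 0.1
t = np.arange(0, Ts, 1 / fs)
s0 = np.cos(2 * np.pi * 40 * t)                 # two tones with an integer number of
s1 = np.cos(2 * np.pi * 50 * t)                 # cycles in Ts, hence orthogonal
rng = np.random.default_rng(0)
r = s1 + 0.5 * rng.normal(size=t.size)          # symbol 1 sent through AWGN
decision = correlation_receiver(r, [s0, s1], 1 / fs)
```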
3.4 Importance
The matched filter is the fundamental building block for optimal detection in AWGN. It is used in virtually all modern digital receivers.
4. Source Coding Principles
Purpose: To represent the output of a source (e.g., speech, text, image) with the minimum number of bits without losing the required information. This is data compression.
Core Idea: Remove redundancy (statistical or deterministic) from the source data.
Types:
Lossless Compression: The original data can be perfectly reconstructed from the compressed data (e.g., ZIP, PNG, FLAC). Essential for text and executable files.
Lossy Compression: Some information is intentionally discarded, typically details that are less perceptually important (e.g., JPEG, MP3, MPEG). Used for audio, images, and video where perfect reconstruction is not critical.
Key Metric: Compression Ratio.
Fundamental Limit - Source Coding Theorem (Shannon's First Theorem):
For a discrete memoryless source with entropy H (bits/symbol), the source can be losslessly compressed to an average code length arbitrarily close to H bits per symbol, but no lower.
Entropy H: The theoretical minimum average number of bits needed to represent each symbol: H = −Σ(i=1..M) Pi log2 Pi, where Pi is the probability of symbol i.
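The entropy formula is a one-liner; a quick sketch shows that a fair binary source needs a full bit per symbol, while a skewed source needs less (and is therefore compressible):

```python
import math

def entropy(probs):
    """H = -sum p_i log2 p_i in bits/symbol; zero-probability terms contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_fair = entropy([0.5, 0.5])                    # 1 bit/symbol: incompressible
H_skew = entropy([0.9, 0.1])                    # well under 1 bit/symbol
```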
5. Pulse Code Modulation (PCM)
PCM is the standard method for converting an analog signal into a digital bit stream.
5.1 The PCM Process (Three Key Steps)
Sampling:
The continuous-time signal m(t) is measured at regular intervals.
Nyquist-Shannon Sampling Theorem: To avoid aliasing and allow perfect reconstruction, the sampling frequency fs must be at least twice the highest frequency component fm in the signal. fs≥2fm
Quantization:
Each sample (with infinite precision) is mapped to the nearest value from a finite set of discrete quantization levels.
This step is irreversible and introduces quantization error (noise). The error is bounded by ±Δ/2, where Δ is the step size between levels.
Signal-to-Quantization-Noise Ratio (SQNR): For a uniform quantizer with L levels driven by a full-scale sinusoid, SQNR (dB) ≈ 6.02n + 1.76, where n = log2 L is the number of bits per sample.
Encoding:
Each quantized level is represented by a unique binary code word (e.g., 8-bit code for 256 levels). The output is a serial bit stream.
5.2 Key Parameters
Bit Rate: Rb=fs×n bits/second.
Bandwidth Requirement: Minimum theoretical bandwidth to transmit the PCM signal is Bmin = Rb/2 = fs·n/2 Hz.
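The 6.02n + 1.76 dB rule can be checked empirically by quantizing a full-scale sine wave with a uniform rounding quantizer (the test signal and quantizer details below are illustrative assumptions):

```python
import numpy as np

def sine_sqnr_db(n_bits, n_samples=100_000):
    """Empirical SQNR of an n-bit uniform quantizer driven by a full-scale sine."""
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t)               # full-scale input in [-1, 1]
    delta = 2.0 / 2 ** n_bits                   # step size over the [-1, 1] range
    xq = np.clip(np.round(x / delta) * delta, -1.0, 1.0)
    return float(10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2)))
```

For 8 bits the rule predicts about 49.9 dB, and each extra bit should buy roughly 6 dB.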
6. Shannon-Hartley Channel Capacity Theorem
This is the fundamental theorem of information theory, defining the ultimate limit of reliable communication over a noisy channel.
Statement: The channel capacity C (in bits per second) of a continuous channel of bandwidth B Hz, perturbed by additive white Gaussian noise of power spectral density N0/2, and limited in power to P, is given by: C = B log2(1 + P/(N0B)) = B log2(1 + SNR), where SNR = P/(N0B) is the signal-to-noise ratio.
Profound Implications:
Achievability: It is possible to transmit information at any rate R<C with an arbitrarily small probability of error, by using a sufficiently sophisticated coding scheme.
Converse: It is impossible to achieve reliable transmission at a rate R>C, no matter what coding scheme is used.
Trade-offs: Capacity can be increased by:
Increasing bandwidth B.
Increasing signal power P.
Decreasing noise power spectral density N0.
Spectral Efficiency Limit: The maximum number of bits per second per Hz (b/s/Hz) is: ηmax = C/B = log2(1 + SNR). This sets the target for all modulation and coding schemes.
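The capacity formula is easy to evaluate; the operating point below (a 1 MHz channel at 20 dB SNR) is an illustrative example, not from the text:

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B log2(1 + SNR), with SNR given in dB."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

C = capacity_bps(1e6, 20)                       # capacity of a 1 MHz, 20 dB channel
eta = C / 1e6                                   # spectral efficiency limit, b/s/Hz
```

Note that doubling B doubles C, while the SNR term only grows logarithmically, which is why bandwidth is such a valuable resource.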
7. Multiplexing Concept
Multiplexing is the technique of combining multiple signals into one signal over a shared medium to utilize the channel efficiently.
7.1 Types of Multiplexing
7.1.1 Frequency Division Multiplexing (FDM)
Principle: The total available bandwidth is divided into non-overlapping frequency sub-bands. Each user's signal is modulated onto a different carrier frequency within its assigned sub-band. All signals are transmitted simultaneously.
Application: Traditional radio/TV broadcasting, analog telephone trunk lines, DSL.
7.1.2 Time Division Multiplexing (TDM)
Principle: The total time is divided into repeating frames. Each frame is divided into fixed-length time slots. Each user is assigned one or more time slots per frame and transmits its entire signal bandwidth during that slot. Users transmit sequentially.
Synchronization: Requires precise timing (synchronization) between transmitter and receiver.
Application: Digital telephony (T1/E1 lines), SONET/SDH, the basic principle behind cellular network time slots.
7.1.3 Code Division Multiple Access (CDMA)
Principle: All users transmit simultaneously over the entire frequency band. Users are separated by assigning them unique, nearly orthogonal spreading codes. The receiver uses the corresponding code to extract the desired signal.
Key Feature: Provides soft capacity and resistance to interference.
Application: 3G cellular systems (UMTS, cdma2000), GPS.
7.1.4 Orthogonal Frequency Division Multiplexing (OFDM)
Principle: A special form of FDM where the sub-carriers are orthogonal to each other, allowing their spectra to overlap without interference. This provides very high spectral efficiency and robustness against frequency-selective fading.
Application: Modern high-speed standards (Wi-Fi 802.11a/g/n/ac/ax, 4G/LTE, 5G, DSL, DVB).
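In practice the orthogonal sub-carriers of OFDM are generated with an inverse FFT and separated again with a forward FFT. A toy sketch (the sub-carrier count, QPSK mapping, and cyclic-prefix length are illustrative assumptions) shows that, absent channel distortion, the FFT at the receiver recovers every sub-carrier symbol exactly despite their overlapping spectra:

```python
import numpy as np

N = 8                                           # toy number of sub-carriers
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, (2, N))
syms = (2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)   # one QPSK symbol per sub-carrier
tx = np.fft.ifft(syms)                          # OFDM modulation = inverse FFT
frame = np.concatenate([tx[-2:], tx])           # prepend a 2-sample cyclic prefix
rx = np.fft.fft(frame[2:])                      # strip the CP; FFT separates sub-carriers
```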
Conclusion: Digital modulation forms the basis of modern communication, translating bits into waveforms. The performance of these schemes under noise is precisely characterized by error probability analysis, with the matched filter providing the optimal receiver structure. Techniques like PCM enable analog-to-digital conversion, while multiplexing allows efficient shared use of channels. The entire field operates under the ultimate theoretical boundaries set by Shannon's Channel Capacity Theorem, which defines the frontier between possible and impossible communication.