
Digital Communications, J. Proakis, 5th Edition (2007)


Digital Communications, Fifth Edition

John G. Proakis
Professor Emeritus, Northeastern University
Department of Electrical and Computer Engineering, University of California, San Diego

Masoud Salehi
Department of Electrical and Computer Engineering, Northeastern University

DIGITAL COMMUNICATIONS, FIFTH EDITION

Published by McGraw-Hill, a business unit of The McGraw-Hill Companies, Inc., 1221 Avenue of the Americas, New York, NY 10020. Copyright © 2008 by The McGraw-Hill Companies, Inc. All rights reserved. Previous editions © 2001 and 1995. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written consent of The McGraw-Hill Companies, Inc., including, but not limited to, in any network or other electronic storage or transmission, or broadcast for distance learning. Some ancillaries, including electronic and print components, may not be available to customers outside the United States.

This book is printed on acid-free paper.

1 2 3 4 5 6 7 8 9 0 DOC/DOC 0 9 8 7

ISBN 978-0-07-295716-7
MHID 0-07-295716-6

Global Publisher: Raghothaman Srinivasan
Executive Editor: Michael Hackett
Director of Development: Kristine Tibbetts
Developmental Editor: Lorraine K. Buczek
Executive Marketing Manager: Michael Weitz
Senior Project Manager: Kay J. Brimeyer
Lead Production Supervisor: Sandy Ludovissy
Associate Design Coordinator: Brenda A. Rolwes
Cover Designer: Studio Montage, St. Louis, Missouri
Compositor: ICC Macmillan
Typeface: 10.5/12 Times Roman
Printer: R. R. Donnelley, Crawfordsville, IN
Cover Image: Chart located at top left (Figure 8.9-6): ten Brink, S. (2001). "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Transactions on Communications, vol. 49, pp. 1727–1737.

Library of Congress Cataloging-in-Publication Data

Proakis, John G.
  Digital communications / John G. Proakis, Masoud Salehi. 5th ed.
  p. cm.
  Includes index.
  ISBN 978-0-07-295716-7 / ISBN 0-07-295716-6 (hbk. : alk. paper)
  1. Digital communications. I. Salehi, Masoud. II. Title.
  TK5103.7.P76 2008
  621.382 dc22
  2007036509

www.mhhe.com

DEDICATION

To Felia, George, and Elena
  John G. Proakis

To Fariba, Omid, Sina, and My Parents
  Masoud Salehi

BRIEF CONTENTS

Preface
Chapter 1 Introduction
Chapter 2 Deterministic and Random Signal Analysis
Chapter 3 Digital Modulation Schemes
Chapter 4 Optimum Receivers for AWGN Channels
Chapter 5 Carrier and Symbol Synchronization
Chapter 6 An Introduction to Information Theory
Chapter 7 Linear Block Codes
Chapter 8 Trellis and Graph Based Codes
Chapter 9 Digital Communication Through Band-Limited Channels
Chapter 10 Adaptive Equalization
Chapter 11 Multichannel and Multicarrier Systems
Chapter 12 Spread Spectrum Signals for Digital Communications
Chapter 13 Fading Channels I: Characterization and Signaling
Chapter 14 Fading Channels II: Capacity and Coding
Chapter 15 Multiple-Antenna Systems
Chapter 16 Multiuser Communications
Appendix A Matrices
Appendix B Error Probability for Multichannel Binary Signals
Appendix C Error Probabilities for Adaptive Reception of M-Phase Signals
Appendix D Square Root Factorization
References and Bibliography
Index
CONTENTS

Preface

Chapter 1 Introduction
  1.1 Elements of a Digital Communication System
  1.2 Communication Channels and Their Characteristics
  1.3 Mathematical Models for Communication Channels
  1.4 A Historical Perspective in the Development of Digital Communications
  1.5 Overview of the Book
  1.6 Bibliographical Notes and References

Chapter 2 Deterministic and Random Signal Analysis
  2.1 Bandpass and Lowpass Signal Representation: 2.1-1 Bandpass and Lowpass Signals / 2.1-2 Lowpass Equivalent of Bandpass Signals / 2.1-3 Energy Considerations / 2.1-4 Lowpass Equivalent of a Bandpass System
  2.2 Signal Space Representation of Waveforms: 2.2-1 Vector Space Concepts / 2.2-2 Signal Space Concepts / 2.2-3 Orthogonal Expansions of Signals / 2.2-4 Gram-Schmidt Procedure
  2.3 Some Useful Random Variables
  2.4 Bounds on Tail Probabilities
  2.5 Limit Theorems for Sums of Random Variables
  2.6 Complex Random Variables: 2.6-1 Complex Random Vectors
  2.7 Random Processes: 2.7-1 Wide-Sense Stationary Random Processes / 2.7-2 Cyclostationary Random Processes / 2.7-3 Proper and Circular Random Processes / 2.7-4 Markov Chains
  2.8 Series Expansion of Random Processes: 2.8-1 Sampling Theorem for Band-Limited Random Processes / 2.8-2 The Karhunen-Loève Expansion
  2.9 Bandpass and Lowpass Random Processes
  2.10 Bibliographical Notes and References
  Problems

Chapter 3 Digital Modulation Schemes
  3.1 Representation of Digitally Modulated Signals
  3.2 Memoryless Modulation Methods: 3.2-1 Pulse Amplitude Modulation (PAM) / 3.2-2 Phase Modulation / 3.2-3 Quadrature Amplitude Modulation / 3.2-4 Multidimensional Signaling
  3.3 Signaling Schemes with Memory: 3.3-1 Continuous-Phase Frequency-Shift Keying (CPFSK) / 3.3-2 Continuous-Phase Modulation (CPM)
  3.4 Power Spectrum of Digitally Modulated Signals: 3.4-1 Power Spectral Density of a Digitally Modulated Signal with Memory / 3.4-2 Power Spectral Density of Linearly Modulated Signals / 3.4-3 Power Spectral Density of Digitally Modulated Signals with Finite Memory / 3.4-4 Power Spectral Density of Modulation Schemes with a Markov Structure / 3.4-5 Power Spectral Densities of CPFSK and CPM Signals
  3.5 Bibliographical Notes and References
  Problems

Chapter 4 Optimum Receivers for AWGN Channels
  4.1 Waveform and Vector Channel Models: 4.1-1 Optimal Detection for a General Vector Channel
  4.2 Waveform and Vector AWGN Channels: 4.2-1 Optimal Detection for the Vector AWGN Channel / 4.2-2 Implementation of the Optimal Receiver for AWGN Channels / 4.2-3 A Union Bound on the Probability of Error of Maximum Likelihood Detection
  4.3 Optimal Detection and Error Probability for Band-Limited Signaling: 4.3-1 Optimal Detection and Error Probability for ASK or PAM Signaling / 4.3-2 Optimal Detection and Error Probability for PSK Signaling / 4.3-3 Optimal Detection and Error Probability for QAM Signaling / 4.3-4 Demodulation and Detection
  4.4 Optimal Detection and Error Probability for Power-Limited Signaling: 4.4-1 Optimal Detection and Error Probability for Orthogonal Signaling / 4.4-2 Optimal Detection and Error Probability for Biorthogonal Signaling / 4.4-3 Optimal Detection and Error Probability for Simplex Signaling
  4.5 Optimal Detection in Presence of Uncertainty: Noncoherent Detection: 4.5-1 Noncoherent Detection of Carrier Modulated Signals / 4.5-2 Optimal Noncoherent Detection of FSK Modulated Signals / 4.5-3 Error Probability of Orthogonal Signaling with Noncoherent Detection / 4.5-4 Probability of Error for Envelope Detection of Correlated Binary Signals / 4.5-5 Differential PSK (DPSK)
  4.6 A Comparison of Digital Signaling Methods: 4.6-1 Bandwidth and Dimensionality
  4.7 Lattices and Constellations Based on Lattices: 4.7-1 An Introduction to Lattices / 4.7-2 Signal Constellations from Lattices
  4.8 Detection of Signaling Schemes with Memory: 4.8-1 The Maximum Likelihood Sequence Detector
  4.9 Optimum Receiver for CPM Signals: 4.9-1 Optimum Demodulation and Detection of CPM / 4.9-2 Performance of CPM Signals / 4.9-3 Suboptimum Demodulation and Detection of CPM Signals
  4.10 Performance Analysis for Wireline and Radio Communication Systems: 4.10-1 Regenerative Repeaters / 4.10-2 Link Budget Analysis in Radio Communication Systems
  4.11 Bibliographical Notes and References
  Problems

Chapter 5 Carrier and Symbol Synchronization
  5.1 Signal Parameter Estimation: 5.1-1 The Likelihood Function / 5.1-2 Carrier Recovery and Symbol Synchronization in Signal Demodulation
  5.2 Carrier Phase Estimation: 5.2-1 Maximum-Likelihood Carrier Phase Estimation / 5.2-2 The Phase-Locked Loop / 5.2-3 Effect of Additive Noise on the Phase Estimate / 5.2-4 Decision-Directed Loops / 5.2-5 Non-Decision-Directed Loops
  5.3 Symbol Timing Estimation: 5.3-1 Maximum-Likelihood Timing Estimation / 5.3-2 Non-Decision-Directed Timing Estimation
  5.4 Joint Estimation of Carrier Phase and Symbol Timing
  5.5 Performance Characteristics of ML Estimators
  5.6 Bibliographical Notes and References
  Problems

Chapter 6 An Introduction to Information Theory
  6.1 Mathematical Models for Information Sources
  6.2 A Logarithmic Measure of Information
  6.3 Lossless Coding of Information Sources: 6.3-1 The Lossless Source Coding Theorem / 6.3-2 Lossless Coding Algorithms
  6.4 Lossy Data Compression: 6.4-1 Entropy and Mutual Information for Continuous Random Variables / 6.4-2 The Rate Distortion Function
  6.5 Channel Models and Channel Capacity: 6.5-1 Channel Models / 6.5-2 Channel Capacity
  6.6 Achieving Channel Capacity with Orthogonal Signals
  6.7 The Channel Reliability Function
  6.8 The Channel Cutoff Rate: 6.8-1 Bhattacharyya and Chernov Bounds / 6.8-2 Random Coding
  6.9 Bibliographical Notes and References
  Problems

Chapter 7 Linear Block Codes
  7.1 Basic Definitions: 7.1-1 The Structure of Finite Fields / 7.1-2 Vector Spaces
  7.2 General Properties of Linear Block Codes: 7.2-1 Generator and Parity Check Matrices / 7.2-2 Weight and Distance for Linear Block Codes / 7.2-3 The Weight Distribution Polynomial / 7.2-4 Error Probability of Linear Block Codes
  7.3 Some Specific Linear Block Codes: 7.3-1 Repetition Codes / 7.3-2 Hamming Codes / 7.3-3 Maximum-Length Codes / 7.3-4 Reed-Muller Codes / 7.3-5 Hadamard Codes / 7.3-6 Golay Codes
  7.4 Optimum Soft Decision Decoding of Linear Block Codes
  7.5 Hard Decision Decoding of Linear Block Codes: 7.5-1 Error Detection and Error Correction Capability of Block Codes / 7.5-2 Block and Bit Error Probability for Hard Decision Decoding
  7.6 Comparison of Performance between Hard Decision and Soft Decision Decoding
  7.7 Bounds on Minimum Distance of Linear Block Codes: 7.7-1 Singleton Bound / 7.7-2 Hamming Bound / 7.7-3 Plotkin Bound / 7.7-4 Elias Bound / 7.7-5 McEliece-Rodemich-Rumsey-Welch (MRRW) Bound / 7.7-6 Varshamov-Gilbert Bound
  7.8 Modified Linear Block Codes: 7.8-1 Shortening and Lengthening / 7.8-2 Puncturing and Extending / 7.8-3 Expurgation and Augmentation
  7.9 Cyclic Codes: 7.9-1 Cyclic Codes - Definition and Basic Properties / 7.9-2 Systematic Cyclic Codes / 7.9-3 Encoders for Cyclic Codes / 7.9-4 Decoding Cyclic Codes / 7.9-5 Examples of Cyclic Codes
  7.10 Bose-Chaudhuri-Hocquenghem (BCH) Codes: 7.10-1 The Structure of BCH Codes / 7.10-2 Decoding BCH Codes
  7.11 Reed-Solomon Codes
  7.12 Coding for Channels with Burst Errors
  7.13 Combining Codes: 7.13-1 Product Codes / 7.13-2 Concatenated Codes
  7.14 Bibliographical Notes and References
  Problems

Chapter 8 Trellis and Graph Based Codes
  8.1 The Structure of Convolutional Codes: 8.1-1 Tree, Trellis, and State Diagrams / 8.1-2 The Transfer Function of a Convolutional Code / 8.1-3 Systematic, Nonrecursive, and Recursive Convolutional Codes / 8.1-4 The Inverse of a Convolutional Encoder and Catastrophic Codes
  8.2 Decoding of Convolutional Codes: 8.2-1 Maximum-Likelihood Decoding of Convolutional Codes - The Viterbi Algorithm / 8.2-2 Probability of Error for Maximum-Likelihood Decoding of Convolutional Codes
  8.3 Distance Properties of Binary Convolutional Codes
  8.4 Punctured Convolutional Codes: 8.4-1 Rate-Compatible Punctured Convolutional Codes
  8.5 Other Decoding Algorithms for Convolutional Codes
  8.6 Practical Considerations in the Application of Convolutional Codes
  8.7 Nonbinary Dual-k Codes and Concatenated Codes
  8.8 Maximum a Posteriori Decoding of Convolutional Codes - The BCJR Algorithm
  8.9 Turbo Codes and Iterative Decoding: 8.9-1 Performance Bounds for Turbo Codes / 8.9-2 Iterative Decoding for Turbo Codes / 8.9-3 EXIT Chart Study of Iterative Decoding
  8.10 Factor Graphs and the Sum-Product Algorithm: 8.10-1 Tanner Graphs / 8.10-2 Factor Graphs / 8.10-3 The Sum-Product Algorithm / 8.10-4 MAP Decoding Using the Sum-Product Algorithm
  8.11 Low Density Parity Check Codes: 8.11-1 Decoding LDPC Codes
  8.12 Coding for Bandwidth-Constrained Channels - Trellis Coded Modulation: 8.12-1 Lattices and Trellis Coded Modulation / 8.12-2 Turbo-Coded Bandwidth Efficient Modulation
  8.13 Bibliographical Notes and References
  Problems

Chapter 9 Digital Communication Through Band-Limited Channels
  9.1 Characterization of Band-Limited Channels
  9.2 Signal Design for Band-Limited Channels: 9.2-1 Design of Band-Limited Signals for No Intersymbol Interference - The Nyquist Criterion / 9.2-2 Design of Band-Limited Signals with Controlled ISI - Partial-Response Signals / 9.2-3 Data Detection for Controlled ISI / 9.2-4 Signal Design for Channels with Distortion
  9.3 Optimum Receiver for Channels with ISI and AWGN: 9.3-1 Optimum Maximum-Likelihood Receiver / 9.3-2 A Discrete-Time Model for a Channel with ISI / 9.3-3 Maximum-Likelihood Sequence Estimation (MLSE) for the Discrete-Time White Noise Filter Model / 9.3-4 Performance of MLSE for Channels with ISI
  9.4 Linear Equalization: 9.4-1 Peak Distortion Criterion / 9.4-2 Mean-Square-Error (MSE) Criterion / 9.4-3 Performance Characteristics of the MSE Equalizer / 9.4-4 Fractionally Spaced Equalizers / 9.4-5 Baseband and Passband Linear Equalizers
  9.5 Decision-Feedback Equalization: 9.5-1 Coefficient Optimization / 9.5-2 Performance Characteristics of DFE / 9.5-3 Predictive Decision-Feedback Equalizer / 9.5-4 Equalization at the Transmitter - Tomlinson-Harashima Precoding
  9.6 Reduced Complexity ML Detectors
  9.7 Iterative Equalization and Decoding - Turbo Equalization
  9.8 Bibliographical Notes and References
  Problems

Chapter 10 Adaptive Equalization
  10.1 Adaptive Linear Equalizer: 10.1-1 The Zero-Forcing Algorithm / 10.1-2 The LMS Algorithm / 10.1-3 Convergence Properties of the LMS Algorithm / 10.1-4 Excess MSE due to Noisy Gradient Estimates / 10.1-5 Accelerating the Initial Convergence Rate in the LMS Algorithm / 10.1-6 Adaptive Fractionally Spaced Equalizer - The Tap Leakage Algorithm / 10.1-7 An Adaptive Channel Estimator for ML Sequence Detection
  10.2 Adaptive Decision-Feedback Equalizer
  10.3 Adaptive Equalization of Trellis-Coded Signals
  10.4 Recursive Least-Squares Algorithms for Adaptive Equalization: 10.4-1 Recursive Least-Squares (Kalman) Algorithm / 10.4-2 Linear Prediction and the Lattice Filter
  10.5 Self-Recovering (Blind) Equalization: 10.5-1 Blind Equalization Based on the Maximum-Likelihood Criterion / 10.5-2 Stochastic Gradient Algorithms / 10.5-3 Blind Equalization Algorithms Based on Second- and Higher-Order Signal Statistics
  10.6 Bibliographical Notes and References
  Problems

Chapter 11 Multichannel and Multicarrier Systems
  11.1 Multichannel Digital Communications in AWGN Channels: 11.1-1 Binary Signals / 11.1-2 M-ary Orthogonal Signals
  11.2 Multicarrier Communications: 11.2-1 Single-Carrier Versus Multicarrier Modulation / 11.2-2 Capacity of a Nonideal Linear Filter Channel / 11.2-3 Orthogonal Frequency Division Multiplexing (OFDM) / 11.2-4 Modulation and Demodulation in an OFDM System / 11.2-5 An FFT Algorithm Implementation of an OFDM System / 11.2-6 Spectral Characteristics of Multicarrier Signals / 11.2-7 Bit and Power Allocation in Multicarrier Modulation / 11.2-8 Peak-to-Average Ratio in Multicarrier Modulation / 11.2-9 Channel Coding Considerations in Multicarrier Modulation
  11.3 Bibliographical Notes and References
  Problems

Chapter 12 Spread Spectrum Signals for Digital Communications
  12.1 Model of Spread Spectrum Digital Communication System
  12.2 Direct Sequence Spread Spectrum Signals: 12.2-1 Error Rate Performance of the Decoder / 12.2-2 Some Applications of DS Spread Spectrum Signals / 12.2-3 Effect of Pulsed Interference on DS Spread Spectrum Systems / 12.2-4 Excision of Narrowband Interference in DS Spread Spectrum Systems / 12.2-5 Generation of PN Sequences
  12.3 Frequency-Hopped Spread Spectrum Signals: 12.3-1 Performance of FH Spread Spectrum Signals in an AWGN Channel / 12.3-2 Performance of FH Spread Spectrum Signals in Partial-Band Interference / 12.3-3 A CDMA System Based on FH Spread Spectrum Signals
  12.4 Other Types of Spread Spectrum Signals
  12.5 Synchronization of Spread Spectrum Systems
  12.6 Bibliographical Notes and References
  Problems

Chapter 13 Fading Channels I: Characterization and Signaling
  13.1 Characterization of Fading Multipath Channels: 13.1-1 Channel Correlation Functions and Power Spectra / 13.1-2 Statistical Models for Fading Channels
  13.2 The Effect of Signal Characteristics on the Choice of a Channel Model
  13.3 Frequency-Nonselective, Slowly Fading Channel
  13.4 Diversity Techniques for Fading Multipath Channels: 13.4-1 Binary Signals / 13.4-2 Multiphase Signals / 13.4-3 M-ary Orthogonal Signals
  13.5 Signaling over a Frequency-Selective, Slowly Fading Channel - The RAKE Demodulator: 13.5-1 A Tapped-Delay-Line Channel Model / 13.5-2 The RAKE Demodulator / 13.5-3 Performance of RAKE Demodulator / 13.5-4 Receiver Structures for Channels with Intersymbol Interference
  13.6 Multicarrier Modulation (OFDM): 13.6-1 Performance Degradation of an OFDM System due to Doppler Spreading / 13.6-2 Suppression of ICI in OFDM Systems
  13.7 Bibliographical Notes and References
  Problems

Chapter 14 Fading Channels II: Capacity and Coding
  14.1 Capacity of Fading Channels: 14.1-1 Capacity of Finite-State Channels
  14.2 Ergodic and Outage Capacity: 14.2-1 The Ergodic Capacity of the Rayleigh Fading Channel / 14.2-2 The Outage Capacity of Rayleigh Fading Channels
  14.3 Coding for Fading Channels
  14.4 Performance of Coded Systems in Fading Channels: 14.4-1 Coding for Fully Interleaved Channel Model
  14.5 Trellis-Coded Modulation for Fading Channels: 14.5-1 TCM Systems for Fading Channels / 14.5-2 Multiple Trellis-Coded Modulation (MTCM)
  14.6 Bit-Interleaved Coded Modulation
  14.7 Coding in the Frequency Domain: 14.7-1 Probability of Error for Soft Decision Decoding of Linear Binary Block Codes / 14.7-2 Probability of Error for Hard-Decision Decoding of Linear Block Codes / 14.7-3 Upper Bounds on the Performance of Convolutional Codes for a Rayleigh Fading Channel / 14.7-4 Use of Constant-Weight Codes and Concatenated Codes for a Fading Channel
  14.8 The Channel Cutoff Rate for Fading Channels: 14.8-1 Channel Cutoff Rate for Fully Interleaved Fading Channels with CSI at Receiver
  14.9 Bibliographical Notes and References
  Problems

Chapter 15 Multiple-Antenna Systems
  15.1 Channel Models for Multiple-Antenna Systems: 15.1-1 Signal Transmission Through a Slow Fading Frequency-Nonselective MIMO Channel / 15.1-2 Detection of Data Symbols in a MIMO System / 15.1-3 Signal Transmission Through a Slow Fading Frequency-Selective MIMO Channel
  15.2 Capacity of MIMO Channels: 15.2-1 Mathematical Preliminaries / 15.2-2 Capacity of a Frequency-Nonselective Deterministic MIMO Channel / 15.2-3 Capacity of a Frequency-Nonselective Ergodic Random MIMO Channel / 15.2-4 Outage Capacity / 15.2-5 Capacity of MIMO Channel When the Channel Is Known at the Transmitter
  15.3 Spread Spectrum Signals and Multicode Transmission: 15.3-1 Orthogonal Spreading Sequences / 15.3-2 Multiplexing Gain Versus Diversity Gain / 15.3-3 Multicode MIMO Systems
  15.4 Coding for MIMO Channels: 15.4-1 Performance of Temporally Coded SISO Systems in Rayleigh Fading Channels / 15.4-2 Bit-Interleaved Temporal Coding for MIMO Channels / 15.4-3 Space-Time Block Codes for MIMO Channels / 15.4-4 Pairwise Error Probability for a Space-Time Code / 15.4-5 Space-Time Trellis Codes for MIMO Channels / 15.4-6 Concatenated Space-Time Codes and Turbo Codes
  15.5 Bibliographical Notes and References
  Problems

Chapter 16 Multiuser Communications
  16.1 Introduction to Multiple Access Techniques
  16.2 Capacity of Multiple Access Methods
  16.3 Multiuser Detection in CDMA Systems: 16.3-1 CDMA Signal and Channel Models / 16.3-2 The Optimum Multiuser Receiver / 16.3-3 Suboptimum Detectors / 16.3-4 Successive Interference Cancellation / 16.3-5 Other Types of Multiuser Detectors / 16.3-6 Performance Characteristics of Detectors
  16.4 Multiuser MIMO Systems for Broadcast Channels: 16.4-1 Linear Precoding of the Transmitted Signals / 16.4-2 Nonlinear Precoding of the Transmitted Signals - The QR Decomposition / 16.4-3 Nonlinear Vector Precoding / 16.4-4 Lattice Reduction Technique for Precoding
  16.5 Random Access Methods: 16.5-1 ALOHA Systems and Protocols / 16.5-2 Carrier Sense Systems and Protocols
  16.6 Bibliographical Notes and References
  Problems

Appendix A Matrices
  A.1 Eigenvalues and Eigenvectors of a Matrix
  A.2 Singular-Value Decomposition
  A.3 Matrix Norm and Condition Number
  A.4 The Moore-Penrose Pseudoinverse

Appendix B Error Probability for Multichannel Binary Signals

Appendix C Error Probabilities for Adaptive Reception of M-Phase Signals
  C.1 Mathematical Model for an M-Phase Signaling Communication System
  C.2 Characteristic Function and Probability Density Function of the Phase
  C.3 Error Probabilities for Slowly Fading Rayleigh Channels
  C.4 Error Probabilities for Time-Invariant and Ricean Fading Channels

Appendix D Square Root Factorization

References and Bibliography

Index

PREFACE

It is a pleasure to welcome Professor Masoud Salehi as a coauthor to the fifth edition of Digital Communications. This new edition has undergone a major revision and reorganization of topics, especially in the area of channel coding and decoding. A new chapter on multiple-antenna systems has been added as well.

The book is designed to serve as a text for a first-year graduate-level course for students in electrical engineering. It is also designed to serve as a text for self-study and as a reference book for the practicing engineer involved in the design and analysis of digital communications systems. As to background, we presume that the reader has a thorough understanding of basic calculus and elementary linear systems theory and prior knowledge of probability and stochastic processes.

Chapter 1 is an introduction to the subject, including a historical perspective and a description of channel characteristics and channel models.

Chapter 2 contains a review of deterministic and random signal analysis, including bandpass and lowpass signal representations, bounds on the tail probabilities of random variables, limit theorems for sums of random variables, and random processes.

Chapter 3 treats digital modulation techniques and the power spectrum of digitally modulated signals.

Chapter 4 is focused on optimum receivers for additive white Gaussian noise (AWGN) channels and their error rate performance. Also included in this chapter is an introduction to lattices and signal constellations based on lattices, as well as link budget analyses for wireline and radio communication systems.

Chapter 5 is devoted to carrier phase estimation and time synchronization methods based on the maximum-likelihood criterion. Both decision-directed and non-decision-directed methods are described.
Chapter 6 provides an introduction to topics in information theory, including lossless source coding, lossy data compression, channel capacity for different channel models, and the channel reliability function.

Chapter 7 treats linear block codes and their properties. Included is a treatment of cyclic codes, BCH codes, Reed-Solomon codes, and concatenated codes. Both soft decision and hard decision decoding methods are described, and their performance in AWGN channels is evaluated.

Chapter 8 provides a treatment of trellis codes and graph-based codes, including convolutional codes, turbo codes, low density parity check (LDPC) codes, trellis codes for band-limited channels, and codes based on lattices. Decoding algorithms are also treated, including the Viterbi algorithm and its performance on AWGN channels, the BCJR algorithm for iterative decoding of turbo codes, and the sum-product algorithm.

Chapter 9 is focused on digital communication through band-limited channels. Topics treated in this chapter include the characterization and signal design for band-limited channels, the optimum receiver for channels with intersymbol interference and AWGN, and suboptimum equalization methods, namely, linear equalization, decision-feedback equalization, and turbo equalization.

Chapter 10 treats adaptive channel equalization. The LMS and recursive least-squares algorithms are described together with their performance characteristics. This chapter also includes a treatment of blind equalization algorithms.

Chapter 11 provides a treatment of multichannel and multicarrier modulation. Topics treated include the error rate performance of multichannel binary signals and M-ary orthogonal signals in AWGN channels; the capacity of a nonideal linear filter channel with AWGN; OFDM modulation and demodulation; bit and power allocation in an OFDM system; and methods to reduce the peak-to-average power ratio in OFDM.

Chapter 12 is focused on spread spectrum signals and systems, with emphasis on direct sequence and frequency-hopped spread spectrum systems and their performance. The benefits of coding in the design of spread spectrum signals are emphasized throughout this chapter.

Chapter 13 treats communication through fading channels, including the characterization of fading channels and the key parameters of multipath spread and Doppler spread. Several statistical channel fading models are introduced, with emphasis placed on Rayleigh fading, Ricean fading, and Nakagami fading. An analysis of the performance degradation caused by Doppler spread in an OFDM system is presented, and a method for reducing this performance degradation is described.

Chapter 14 is focused on capacity and code design for fading channels. After introducing ergodic and outage capacities, coding for fading channels is studied. Bandwidth-efficient coding and bit-interleaved coded modulation are treated, and the performance of coded systems in Rayleigh and Ricean fading is derived.

Chapter 15 provides a treatment of multiple-antenna systems, generally called multiple-input, multiple-output (MIMO) systems, which are designed to yield spatial signal diversity and spatial multiplexing. Topics treated in this chapter include detection algorithms for MIMO channels, the capacity of MIMO channels with AWGN without and with signal fading, and space-time coding.
Chapter 16 treats multiuser communications, including the topics of the capacity of multiple-access methods, multiuser detection methods for the uplink in CDMA systems, interference mitigation in multiuser broadcast channels, and random access methods such as ALOHA and carrier-sense multiple access (CSMA).

With 16 chapters and a variety of topics, the instructor has the flexibility to design either a one- or two-semester course. Chapters 3, 4, and 5 provide a basic treatment of digital modulation/demodulation and detection methods. Channel coding and decoding treated in Chapters 7, 8, and 9 can be included along with modulation/demodulation in a one-semester course. Alternatively, Chapters 9 through 12 can be covered in place of channel coding and decoding. A second-semester course can cover the topics of communication through fading channels, multiple-antenna systems, and multiuser communications.

The authors and McGraw-Hill would like to thank the following reviewers for their suggestions on selected chapters of the fifth edition manuscript: Paul Salama, Indiana University/Purdue University, Indianapolis; Dimitrios Hatzinakos, University of Toronto; and Ender Ayanoglu, University of California, Irvine.

Finally, the first author wishes to thank Gloria Doukakis for her assistance in typing parts of the manuscript. We also thank Patrick Amihood for preparing several graphs in Chapters 15 and 16 and Apostolos Rizos and Kostas Stamatiou for preparing parts of the Solutions Manual.

Chapter 1
Introduction

In this book, we present the basic principles that underlie the analysis and design of digital communication systems. The subject of digital communications involves the transmission of information in digital form from a source that generates the information to one or more destinations. Of particular importance in the analysis and design of communication systems are the characteristics of the physical channels through which the information is transmitted. The characteristics of the channel generally affect the design of the basic building blocks of the communication system. Below, we describe the elements of a communication system and their functions.

1.1 ELEMENTS OF A DIGITAL COMMUNICATION SYSTEM

Figure 1.1-1 illustrates the functional diagram and the basic elements of a digital communication system. The source output may be either an analog signal, such as an audio or video signal, or a digital signal, such as the output of a computer, that is discrete in time and has a finite number of output characters. In a digital communication system, the messages produced by the source are converted into a sequence of binary digits. Ideally, we should like to represent the source output (message) by as few binary digits as possible. In other words, we seek an efficient representation of the source output that results in little or no redundancy. The process of efficiently converting the output of either an analog or digital source into a sequence of binary digits is called source encoding or data compression.

The sequence of binary digits from the source encoder, which we call the information sequence, is passed to the channel encoder. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy in the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel.
Thus, the added redundancy serves to increase the reliability of the received data and improves the fidelity of the received signal. In effect, redundancy in the information sequence aids the receiver in decoding the desired information sequence.

FIGURE 1.1-1 Basic elements of a digital communication system.

For example, a (trivial) form of encoding of the binary information sequence is simply to repeat each binary digit m times, where m is some positive integer. More sophisticated (nontrivial) encoding involves taking k information bits at a time and mapping each k-bit sequence into a unique n-bit sequence, called a code word. The amount of redundancy introduced by encoding the data in this manner is measured by the ratio n/k. The reciprocal of this ratio, namely k/n, is called the rate of the code or, simply, the code rate.
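To make the notion of code rate concrete, here is a minimal Python sketch (our own illustration, not from the text) of the trivial repetition code just described: each information bit is repeated m = 3 times, giving a code rate of k/n = 1/3, and the decoder takes a majority vote after the channel flips each coded bit with probability p.

```python
import random

def repetition_encode(bits, m=3):
    """Rate-1/m repetition code: repeat each information bit m times."""
    return [b for b in bits for _ in range(m)]

def bsc(bits, p):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def repetition_decode(coded, m=3):
    """Majority-vote decoding of each block of m received bits."""
    return [int(sum(coded[i:i + m]) > m // 2) for i in range(0, len(coded), m)]

random.seed(1)
info = [random.randint(0, 1) for _ in range(100_000)]
received = bsc(repetition_encode(info, m=3), p=0.1)
decoded = repetition_decode(received, m=3)
errors = sum(a != b for a, b in zip(info, decoded))
# Without coding, the bit error rate would be p = 0.1; majority voting over
# m = 3 copies reduces it to about 3p^2(1-p) + p^3 = 0.028.
print(f"decoded bit error rate = {errors / len(info):.4f}")
```

The simulated error rate drops from the raw flip probability to roughly 0.028, illustrating how controlled redundancy buys reliability at the cost of a lower code rate.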
The binary sequence at the output of the channel encoder is passed to the digital modulator, which serves as the interface to the communication channel. Since nearly all the communication channels encountered in practice are capable of transmitting electrical signals (waveforms), the primary purpose of the digital modulator is to map the binary information sequence into signal waveforms. To elaborate on this point, let us suppose that the coded information sequence is to be transmitted one bit at a time at some uniform rate R bits per second (bits/s). The digital modulator may simply map the binary digit 0 into a waveform s_0(t) and the binary digit 1 into a waveform s_1(t). In this manner, each bit from the channel encoder is transmitted separately. We call this binary modulation. Alternatively, the modulator may transmit b coded information bits at a time by using M = 2^b distinct waveforms s_i(t), i = 0, 1, ..., M - 1, one waveform for each of the 2^b possible b-bit sequences. We call this M-ary modulation (M > 2). Note that a new b-bit sequence enters the modulator every b/R seconds. Hence, when the channel bit rate R is fixed, the amount of time available to transmit one of the M waveforms corresponding to a b-bit sequence is b times the time period in a system that uses binary modulation.

The communication channel is the physical medium that is used to send the signal from the transmitter to the receiver. In wireless transmission, the channel may be the atmosphere (free space). On the other hand, telephone channels usually employ a variety of physical media, including wire lines, optical fiber cables, and wireless (microwave radio). Whatever the physical medium used for transmission of the information, the essential feature is that the transmitted signal is corrupted in a random manner by a variety of possible mechanisms, such as additive thermal noise generated by electronic devices; man-made noise, e.g., automobile ignition noise; and atmospheric noise, e.g., electrical lightning discharges during thunderstorms.

At the receiving end of a digital communication system, the digital demodulator processes the channel-corrupted transmitted waveform and reduces the waveforms to a sequence of numbers that represent estimates of the transmitted data symbols (binary or M-ary). This sequence of numbers is passed to the channel decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data.

A measure of how well the demodulator and decoder perform is the frequency with which errors occur in the decoded sequence. More precisely, the average probability of a bit-error at the output of the decoder is a measure of the performance of the demodulator-decoder combination. In general, the probability of error is a function of the code characteristics, the types of waveforms used to transmit the information over the channel, the transmitter power, the characteristics of the channel (i.e., the amount of noise, the nature of the interference), and the method of demodulation and decoding. These items and their effect on performance will be discussed in detail in subsequent chapters.
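As a preview of how such error probabilities are estimated, the sketch below runs a Monte Carlo simulation of binary antipodal signaling (a modulation method treated in Chapters 3 and 4, not in this chapter) over an additive Gaussian noise channel and compares the measured bit error rate with the theoretical value Q(sqrt(2 Eb/N0)). The parameter values and function names are arbitrary illustrations.

```python
import math
import random

def ber_bpsk_awgn(ebn0_db, n_bits=200_000, seed=2):
    """Monte Carlo estimate of the bit error rate of binary antipodal
    signaling on an AWGN channel."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)          # Eb/N0 as a linear ratio
    sigma = math.sqrt(1 / (2 * ebn0))    # noise std. dev. for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        s = 1.0 if bit else -1.0         # map 1 -> +1, 0 -> -1
        r = s + rng.gauss(0, sigma)      # received sample r = s + n
        errors += (r > 0) != bool(bit)   # threshold detector at zero
    return errors / n_bits

for ebn0_db in (0, 4, 8):
    # Theory (derived in Chapter 4): Pb = Q(sqrt(2 Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))
    pb = 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))
    print(f"Eb/N0 = {ebn0_db} dB: simulated {ber_bpsk_awgn(ebn0_db):.5f}, theory {pb:.5f}")
```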
As a final step, when an analog output is desired, the source decoder accepts the output sequence from the channel decoder and, from knowledge of the source encoding method used, attempts to reconstruct the original signal from the source. Because of channel decoding errors and possible distortion introduced by the source encoder and, perhaps, the source decoder, the signal at the output of the source decoder is an approximation to the original source output. The difference, or some function of the difference, between the original signal and the reconstructed signal is a measure of the distortion introduced by the digital communication system.

1.2 COMMUNICATION CHANNELS AND THEIR CHARACTERISTICS

As indicated in the preceding discussion, the communication channel provides the connection between the transmitter and the receiver. The physical channel may be a pair of wires that carry the electrical signal, or an optical fiber that carries the information on a modulated light beam, or an underwater ocean channel in which the information is transmitted acoustically, or free space over which the information-bearing signal is radiated by use of an antenna. Other media that can be characterized as communication channels are data storage media, such as magnetic tape, magnetic disks, and optical disks.

One common problem in signal transmission through any channel is additive noise. In general, additive noise is generated internally by components such as resistors and solid-state devices used to implement the communication system. This is sometimes called thermal noise. Other sources of noise and interference may arise externally to the system, such as interference from other users of the channel. When such noise and interference occupy the same frequency band as the desired signal, their effect can be minimized by the proper design of the transmitted signal and its demodulator at the receiver. Other types of signal degradations that may be encountered in transmission over the channel are signal attenuation, amplitude and phase distortion, and multipath distortion.

The effects of noise may be minimized by increasing the power in the transmitted signal. However, equipment and other practical constraints limit the power level in the transmitted signal. Another basic limitation is the available channel bandwidth. A bandwidth constraint is usually due to the physical limitations of the medium and the electronic components used to implement the transmitter and the receiver. These two limitations constrain the amount of data that can be transmitted reliably over any communication channel, as we shall observe in later chapters. Below, we describe some of the important characteristics of several communication channels.

Wireline Channels

The telephone network makes extensive use of wire lines for voice signal transmission, as well as data and video transmission. Twisted-pair wire lines and coaxial cable are basically guided electromagnetic channels that provide relatively modest bandwidths. Telephone wire generally used to connect a customer to a central office has a bandwidth of several hundred kilohertz (kHz). On the other hand, coaxial cable has a usable bandwidth of several megahertz (MHz). Figure 1.2-1 illustrates the frequency range of guided electromagnetic channels, which include waveguides and optical fibers.

FIGURE 1.2-1 Frequency range for guided wire channel.

Signals transmitted through such channels are distorted in both amplitude and phase and further corrupted by additive noise. Twisted-pair wireline channels are also prone to crosstalk interference from physically adjacent channels. Because wireline channels carry a large percentage of our daily communications around the country and the world, much research has been performed on the characterization of their transmission properties and on methods for mitigating the amplitude and phase distortion encountered in signal transmission. In Chapter 9, we describe methods for designing optimum transmitted signals and their demodulation; in Chapter 10, we consider the design of channel equalizers that compensate for amplitude and phase distortion on these channels.

Fiber-Optic Channels

Optical fibers offer the communication system designer a channel bandwidth that is several orders of magnitude larger than coaxial cable channels. During the past two decades, optical fiber cables have been developed that have a relatively low signal attenuation, and highly reliable photonic devices have been developed for signal generation and signal detection. These technological advances have resulted in a rapid deployment of optical fiber channels, both in domestic telecommunication systems as well as for transcontinental communication. With the large bandwidth available on fiber-optic channels, it is possible for telephone companies to offer subscribers a wide array of telecommunication services, including voice, data, facsimile, and video.

The transmitter or modulator in a fiber-optic communication system is a light source, either a light-emitting diode (LED) or a laser. Information is transmitted by varying (modulating) the intensity of the light source with the message signal. The light propagates through the fiber as a light wave and is amplified periodically (in the case of digital transmission, it is detected and regenerated by repeaters) along the transmission path to compensate for signal attenuation. At the receiver, the light intensity is detected by a photodiode, whose output is an electrical signal that varies in direct proportion to the power of the light impinging on the photodiode. Sources of noise in fiber-optic channels are photodiodes and electronic amplifiers.

Wireless Electromagnetic Channels

In wireless communication systems, electromagnetic energy is coupled to the propagation medium by an antenna which serves as the radiator. The physical size and the configuration of the antenna depend primarily on the frequency of operation. To obtain efficient radiation of electromagnetic energy, the antenna must be longer than 1/10 of the wavelength.
Consequently, a radio station transmitting in the amplitude-modulated (AM) frequency band, say at f_c = 1 MHz [corresponding to a wavelength of λ = c/f_c = 300 meters (m)], requires an antenna of at least 30 m. Other important characteristics and attributes of antennas for wireless transmission are described in Chapter 4.

Figure 1.2-2 illustrates the various frequency bands of the electromagnetic spectrum. The mode of propagation of electromagnetic waves in the atmosphere and in free space may be subdivided into three categories, namely, ground-wave propagation, sky-wave propagation, and line-of-sight (LOS) propagation.

FIGURE 1.2-2 Frequency range for wireless electromagnetic channels. [Adapted from Carlson (1975), 2nd edition, © McGraw-Hill Book Company. Reprinted with permission of the publisher.]

In the very low frequency (VLF) and audio frequency bands, where the wavelengths exceed 10 km, the earth and the ionosphere act as a waveguide for electromagnetic wave propagation. In these frequency ranges, communication signals practically propagate around the globe. For this reason, these frequency bands are primarily used to provide navigational aids from shore to ships around the world. The channel bandwidths available in these frequency bands are relatively small (usually 1–10 percent of the center frequency), and hence the information that is transmitted through these channels is of relatively slow speed and generally confined to digital transmission. A dominant type of noise at these frequencies is generated from thunderstorm activity around the globe, especially in tropical regions. Interference results from the many users of these frequency bands.

Ground-wave propagation, as illustrated in Figure 1.2-3, is the dominant mode of propagation for frequencies in the medium frequency (MF) band (0.3–3 MHz). This is the frequency band used for AM broadcasting and maritime radio broadcasting. In AM broadcasting, the range with ground-wave propagation of even the more powerful radio stations is limited to about 150 km. Atmospheric noise, man-made noise, and thermal noise from electronic components at the receiver are dominant disturbances for signal transmission in the MF band.

FIGURE 1.2-3 Illustration of ground-wave propagation.

Sky-wave propagation, as illustrated in Figure 1.2-4, results from transmitted signals being reflected (bent or refracted) from the ionosphere, which consists of several layers of charged particles ranging in altitude from 50 to 400 km above the surface of the earth. During the daytime hours, the heating of the lower atmosphere by the sun causes the formation of the lower layers at altitudes below 120 km. These lower layers, especially the D-layer, serve to absorb frequencies below 2 MHz, thus severely limiting sky-wave propagation of AM radio broadcast. However, during the nighttime hours, the electron density in the lower layers of the ionosphere drops sharply and the frequency absorption that occurs during the daytime is significantly reduced. As a consequence, powerful AM radio broadcast stations can propagate over large distances via sky wave over the F-layer of the ionosphere, which ranges from 140 to 400 km above the surface of the earth.

FIGURE 1.2-4 Illustration of sky-wave propagation.

A frequently occurring problem with electromagnetic wave propagation via sky wave in the high frequency (HF) range is signal multipath.
Signal multipath occurs when the transmitted signal arrives at the receiver via multiple propagation paths at different delays. It generally results in intersymbol interference in a digital communication system. Moreover, the signal components arriving via different propagation paths may add destructively, resulting in a phenomenon called signal fading, which most people have experienced when listening to a distant radio station at night when sky wave is the dominant propagation mode. Additive noise in the HF range is a combination of atmospheric noise and thermal noise.

Sky-wave ionospheric propagation ceases to exist at frequencies above approximately 30 MHz, which is the end of the HF band. However, it is possible to have ionospheric scatter propagation at frequencies in the range 30–60 MHz, resulting from signal scattering from the lower ionosphere. It is also possible to communicate over distances of several hundred miles by use of tropospheric scattering at frequencies in the range 40–300 MHz. Troposcatter results from signal scattering due to particles in the atmosphere at altitudes of 10 miles or less. Generally, ionospheric scatter and tropospheric scatter involve large signal propagation losses and require a large amount of transmitter power and relatively large antennas.

Frequencies above 30 MHz propagate through the ionosphere with relatively little loss and make satellite and extraterrestrial communications possible. Hence, at frequencies in the very high frequency (VHF) band and higher, the dominant mode of electromagnetic propagation is LOS propagation. For terrestrial communication systems, this means that the transmitter and receiver antennas must be in direct LOS with relatively little or no obstruction. For this reason, television stations transmitting in the VHF and ultra high frequency (UHF) bands mount their antennas on high towers to achieve a broad coverage area.

In general, the coverage area for LOS propagation is limited by the curvature of the earth. If the transmitting antenna is mounted at a height h m above the surface of the earth, the distance to the radio horizon, assuming no physical obstructions such as mountains, is approximately d = √(15h) km. For example, a television antenna mounted on a tower of 300 m in height provides a coverage of approximately 67 km. As another example, microwave radio relay systems used extensively for telephone and video transmission at frequencies above 1 gigahertz (GHz) have antennas mounted on tall towers or on the top of tall buildings.

The dominant noise limiting the performance of a communication system in VHF and UHF ranges is thermal noise generated in the receiver front end and cosmic noise picked up by the antenna.
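A quick numerical check of the radio-horizon approximation d = √(15h) quoted above (h in meters, d in kilometers); the helper name and the second tower height are our own illustrative choices:

```python
import math

def radio_horizon_km(height_m):
    """Approximate LOS radio horizon d = sqrt(15 h) km for an antenna at
    height h meters, assuming a smooth earth and no obstructions."""
    return math.sqrt(15 * height_m)

print(f"300 m tower -> {radio_horizon_km(300):.0f} km coverage")       # about 67 km
# Two antennas are in LOS when the sum of their horizons exceeds the path length:
print(f"link span for two 50 m towers: {2 * radio_horizon_km(50):.0f} km")
```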
Proakis-27466bookSeptember 25, 200711:6Chapter One: Introductionhave been used in experimental communication systems, such as satellite-to-satellite links. Underwater Acoustic Channels Over the past few decades, ocean exploration activity has been steadily increasing. Coupled with this increase is the need to transmit data, collected by sensors placed under water, to the surface of the ocean. From there, it is possible to relay the data via a satellite to a data collection center. Electromagnetic waves do not propagate over long distances under water except at extremely low frequencies. However, the transmission of signals at such low frequencies is prohibitively expensive because of the large and powerful transmitters required. The attenuation of electromagnetic waves in water can be expressed in terms of the skin depth, which is the distance a signal is attenuated by 1/e. For seawater, the skin depth = 250/ f , where f is expressed in Hz and is in m. For example, at 10 kHz, the skin depth is 2.5 m. In contrast, acoustic signals propagate over distances of tens and even hundreds of kilometers. An underwater acoustic channel is characterized as a multipath channel due to signal reections from the surface and the bottom of the sea. Because of wave motion, the signal multipath components undergo time-varying propagation delays that result in signal fading. In addition, there is frequency-dependent attenuation, which is approximately proportional to the square of the signal frequency. The sound velocity is nominally about 1500 m/s, but the actual value will vary either above or below the nominal value depending on the depth at which the signal propagates. Ambient ocean acoustic noise is caused by shrimp, sh, and various mammals. Near harbors, there is also man-made acoustic noise in addition to the ambient noise. In spite of this hostile environment, it is possible to design and implement efcient and highly reliable underwater acoustic communication systems for transmitting digital signals over large distances. Storage Channels Information storage and retrieval systems constitute a very signicant part of datahandling activities on a daily basis. Magnetic tape, including digital audiotape and videotape, magnetic disks used for storing large amounts of computer data, optical disks used for computer data storage, and compact disks are examples of data storage systems that can be characterized as communication channels. The process of storing data on a magnetic tape or a magnetic or optical disk is equivalent to transmitting a signal over a telephone or a radio channel. The readback process and the signal processing involved in storage systems to recover the stored information are equivalent to the functions performed by a receiver in a telephone or radio communication system to recover the transmitted information. Additive noise generated by the electronic components and interference from adjacent tracks is generally present in the readback signal of a storage system, just as is the case in a telephone or a radio communication system. The amount of data that can be stored is generally limited by the size of the disk or tape and the density (number of bits stored per square inch) that can be achieved by9 28. Proakis-27466bookSeptember 25, 20071011:6Digital Communicationsthe write/read electronic systems and heads. For example, a packing density of 109 bits per square inch has been demonstrated in magnetic disk storage systems. 
The speed at which data can be written on a disk or tape and the speed at which it can be read back are also limited by the associated mechanical and electrical subsystems that constitute an information storage system. Channel coding and modulation are essential components of a well-designed digital magnetic or optical storage system. In the readback process, the signal is demodulated and the added redundancy introduced by the channel encoder is used to correct errors in the readback signal.1.3 MATHEMATICAL MODELS FOR COMMUNICATION CHANNELSIn the design of communication systems for transmitting information through physical channels, we nd it convenient to construct mathematical models that reect the most important characteristics of the transmission medium. Then, the mathematical model for the channel is used in the design of the channel encoder and modulator at the transmitter and the demodulator and channel decoder at the receiver. Below, we provide a brief description of the channel models that are frequently used to characterize many of the physical channels that we encounter in practice. The Additive Noise Channel The simplest mathematical model for a communication channel is the additive noise channel, illustrated in Figure 1.31. In this model, the transmitted signal s(t) is corrupted by an additive random noise process n(t). Physically, the additive noise process may arise from electronic components and ampliers at the receiver of the communication system or from interference encountered in transmission (as in the case of radio signal transmission). If the noise is introduced primarily by electronic components and ampliers at the receiver, it may be characterized as thermal noise. This type of noise is characterized statistically as a Gaussian noise process. Hence, the resulting mathematical model for the channel is usually called the additive Gaussian noise channel. Because this channel model applies to a broad class of physical communication channels and because of its mathematical tractability, this is the predominant channel model used in our communication system analysis and design. Channel attenuation is easily incorporated into the model. When the signal undergoes attenuation in transmission through theFIGURE 1.31 The additive noise channel. 29. Proakis-27466bookSeptember 25, 200711:6Chapter One: Introduction11 FIGURE 1.32 The linear lter channel with additive noise.channel, the received signal is r (t) = s(t) + n(t)(1.31)where is the attenuation factor. The Linear Filter Channel In some physical channels, such as wireline telephone channels, lters are used to ensure that the transmitted signals do not exceed specied bandwidth limitations and thus do not interfere with one another. Such channels are generally characterized mathematically as linear lter channels with additive noise, as illustrated in Figure 1.32. Hence, if the channel input is the signal s(t), the channel output is the signal r (t) = s(t) c(t) + n(t) = c( )s(t ) d + n(t)(1.32)where c(t) is the impulse response of the linear lter and denotes convolution. The Linear Time-Variant Filter Channel Physical channels such as underwater acoustic channels and ionospheric radio channels that result in time-variant multipath propagation of the transmitted signal may be characterized mathematically as time-variant linear lters. Such linear lters are characterized by a time-variant channel impulse response c( ; t), where c( ; t) is the response of the channel at time t due to an impulse applied at time t . 
The Linear Time-Variant Filter Channel

Physical channels such as underwater acoustic channels and ionospheric radio channels that result in time-variant multipath propagation of the transmitted signal may be characterized mathematically as time-variant linear filters. Such linear filters are characterized by a time-variant channel impulse response c(τ; t), where c(τ; t) is the response of the channel at time t due to an impulse applied at time t − τ. Thus, τ represents the "age" (elapsed-time) variable. The linear time-variant filter channel with additive noise is illustrated in Figure 1.3-3. For an input signal s(t), the channel output signal is

    r(t) = s(t) ⋆ c(τ; t) + n(t) = ∫ c(τ; t) s(t − τ) dτ + n(t)    (1.3-3)

FIGURE 1.3-3 Linear time-variant filter channel with additive noise.

A good model for multipath signal propagation through physical channels, such as the ionosphere (at frequencies below 30 MHz) and mobile cellular radio channels, is a special case of (1.3-3) in which the time-variant impulse response has the form

    c(τ; t) = Σ_{k=1}^{L} a_k(t) δ(τ − τ_k)    (1.3-4)

where the {a_k(t)} represent the possibly time-variant attenuation factors for the L multipath propagation paths and {τ_k} are the corresponding time delays. If (1.3-4) is substituted into (1.3-3), the received signal has the form

    r(t) = Σ_{k=1}^{L} a_k(t) s(t − τ_k) + n(t)    (1.3-5)

Hence, the received signal consists of L multipath components, where the kth component is attenuated by a_k(t) and delayed by τ_k.

The three mathematical models described above adequately characterize the great majority of the physical channels encountered in practice. These three channel models are used in this text for the analysis and design of communication systems.

1.4 A HISTORICAL PERSPECTIVE IN THE DEVELOPMENT OF DIGITAL COMMUNICATIONS

It is remarkable that the earliest form of electrical communication, namely telegraphy, was a digital communication system. The electric telegraph was developed by Samuel Morse and was demonstrated in 1837. Morse devised the variable-length binary code in which letters of the English alphabet are represented by a sequence of dots and dashes (code words). In this code, more frequently occurring letters are represented by short code words, while letters occurring less frequently are represented by longer code words. Thus, the Morse code was the precursor of the variable-length source coding methods described in Chapter 6. Nearly 40 years later, in 1875, Emile Baudot devised a code for telegraphy in which every letter was encoded into fixed-length binary code words of length 5. In the Baudot code, binary code elements are of equal length and designated as mark and space.

Although Morse is responsible for the development of the first electrical digital communication system (telegraphy), the beginnings of what we now regard as modern digital communications stem from the work of Nyquist (1924), who investigated the problem of determining the maximum signaling rate that can be used over a telegraph channel of a given bandwidth without intersymbol interference. He formulated a model of a telegraph system in which a transmitted signal has the general form

    s(t) = Σ_n a_n g(t − nT)    (1.4-1)

where g(t) represents a basic pulse shape and {a_n} is the binary data sequence of {±1} transmitted at a rate of 1/T bits/s. Nyquist set out to determine the optimum pulse shape that was band-limited to W Hz and maximized the bit rate under the constraint that the pulse caused no intersymbol interference at the sampling times kT, k = 0, ±1, ±2, .... His studies led him to conclude that the maximum pulse rate is 2W pulses/s. This rate is now called the Nyquist rate. Moreover, this pulse rate can be achieved by using the pulses g(t) = (sin 2πWt)/(2πWt). This pulse shape allows recovery of the data without intersymbol interference at the sampling instants.
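The zero-ISI property of the pulse g(t) = (sin 2πWt)/(2πWt) is easy to verify numerically: g equals 1 at t = 0 and vanishes at every other multiple of the sampling interval 1/2W, so neighboring pulses do not interfere at the sampling instants. A small sketch, with an arbitrary choice of W:

```python
import math

def g(t, W):
    """Nyquist's pulse g(t) = sin(2*pi*W*t) / (2*pi*W*t), with g(0) = 1."""
    x = 2 * math.pi * W * t
    return 1.0 if x == 0 else math.sin(x) / x

W = 1000.0            # bandwidth in Hz; the sampling interval is 1/(2W)
T = 1 / (2 * W)
samples = [g(k * T, W) for k in range(-3, 4)]
print([round(v, 12) for v in samples])   # 1 at k = 0, essentially 0 elsewhere
```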
Nyquist's result is equivalent to a version of the sampling theorem for band-limited signals, which was later stated precisely by Shannon (1948b). The sampling theorem states that a signal of bandwidth $W$ can be reconstructed from samples taken at the Nyquist rate of $2W$ samples/s using the interpolation formula
$$s(t) = \sum_{n} s\!\left(\frac{n}{2W}\right) \frac{\sin[2\pi W(t - n/2W)]}{2\pi W(t - n/2W)} \tag{1.4-2}$$

In light of Nyquist's work, Hartley (1928) considered the issue of the amount of data that can be transmitted reliably over a band-limited channel when multiple amplitude levels are used. Because of the presence of noise and other interference, Hartley postulated that the receiver can reliably estimate the received signal amplitude to some accuracy, say $A_{\delta}$. This investigation led Hartley to conclude that there is a maximum data rate that can be communicated reliably over a band-limited channel when the maximum signal amplitude is limited to $A_{\max}$ (fixed power constraint) and the amplitude resolution is $A_{\delta}$.

Another significant advance in the development of communications was the work of Kolmogorov (1939) and Wiener (1942), who considered the problem of estimating a desired signal waveform $s(t)$ in the presence of additive noise $n(t)$, based on observation of the received signal $r(t) = s(t) + n(t)$. This problem arises in signal demodulation. Kolmogorov and Wiener determined the linear filter whose output is the best mean-square approximation to the desired signal $s(t)$. The resulting filter is called the optimum linear (Kolmogorov–Wiener) filter.

Hartley's and Nyquist's results on the maximum transmission rate of digital information were precursors to the work of Shannon (1948a,b), who established the mathematical foundations for information transmission and derived the fundamental limits for digital communication systems. In his pioneering work, Shannon formulated the basic problem of reliable transmission of information in statistical terms, using probabilistic models for information sources and communication channels. Based on such a statistical formulation, he adopted a logarithmic measure for the information content of a source. He also demonstrated that the effect of a transmitter power constraint, a bandwidth constraint, and additive noise can be associated with the channel and incorporated into a single parameter, called the channel capacity. For example, in the case of an additive white (spectrally flat) Gaussian noise interference, an ideal band-limited channel of bandwidth $W$ has a capacity $C$ given by
$$C = W \log_2\!\left(1 + \frac{P}{W N_0}\right) \ \text{bits/s} \tag{1.4-3}$$
where $P$ is the average transmitted power and $N_0$ is the power spectral density of the additive noise. The significance of the channel capacity is as follows: If the information rate $R$ from the source is less than $C$ ($R < C$), then it is theoretically possible to achieve reliable (error-free) transmission through the channel by appropriate coding. On the other hand, if $R > C$, reliable transmission is not possible regardless of the amount of signal processing performed at the transmitter and receiver. Thus, Shannon established basic limits on communication of information and gave birth to a new field that is now called information theory.

Another important contribution to the field of digital communication is the work of Kotelnikov (1947), who provided a coherent analysis of the various digital communication systems based on a geometrical approach. Kotelnikov's approach was later expanded by Wozencraft and Jacobs (1965).
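As a concrete illustration of Equation 1.4-3, the short sketch below evaluates the capacity of an ideal band-limited AWGN channel; the bandwidth and signal-to-noise ratio are hypothetical values roughly representative of a voiceband telephone channel.

```python
import numpy as np

def awgn_capacity(W, P, N0):
    """C = W log2(1 + P/(W N0)) of Eq. 1.4-3, in bits/s."""
    return W * np.log2(1.0 + P / (W * N0))

W = 3400.0                          # channel bandwidth, Hz
N0 = 1.0                            # noise power spectral density (normalized)
P = 10 ** (30 / 10) * W * N0        # average power chosen to give a 30 dB SNR
print(f"{awgn_capacity(W, P, N0):.0f} bits/s")   # about 33900 bits/s
```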
Following Shannon's publications came the classic work of Hamming (1950) on error-detecting and error-correcting codes to combat the detrimental effects of channel noise. Hamming's work stimulated many researchers in the years that followed, and a variety of new and powerful codes were discovered, many of which are used today in the implementation of modern communication systems.

The increase in demand for data transmission during the last four decades, coupled with the development of more sophisticated integrated circuits, has led to the development of very efficient and more reliable digital communication systems. In the course of these developments, Shannon's original results, and the generalization of his results on maximum transmission limits over a channel and on bounds on the achievable performance, have served as benchmarks for any given communication system design. The theoretical limits derived by Shannon and the other researchers who contributed to the development of information theory serve as an ultimate goal in the continuing efforts to design and develop more efficient digital communication systems.

There have been many new advances in the area of digital communications following the early work of Shannon, Kotelnikov, and Hamming. Some of the most notable advances are the following:

- The development of new block codes by Muller (1954), Reed (1954), Reed and Solomon (1960), Bose and Ray-Chaudhuri (1960a,b), and Goppa (1970, 1971).
- The development of concatenated codes by Forney (1966a).
- The development of computationally efficient decoding of Bose–Chaudhuri–Hocquenghem (BCH) codes, e.g., the Berlekamp–Massey algorithm (see Chien, 1964; Berlekamp, 1968).
- The development of convolutional codes and decoding algorithms by Wozencraft and Reiffen (1961), Fano (1963), Zigangirov (1966), Jelinek (1969), Forney (1970b, 1972, 1974), and Viterbi (1967, 1971).
- The development of trellis-coded modulation by Ungerboeck (1982), Forney et al. (1984), Wei (1987), and others.
- The development of efficient source encoding algorithms for data compression, such as those devised by Ziv and Lempel (1977, 1978) and Linde et al. (1980).
- The development of low-density parity check (LDPC) codes and the sum-product decoding algorithm by Gallager (1963).
- The development of turbo codes and iterative decoding by Berrou et al. (1993).

1.5 OVERVIEW OF THE BOOK

Chapter 2 presents a review of deterministic and random signal analysis. Our primary objectives in this chapter are to review basic notions in the theory of probability and random variables and to establish some necessary notation.

Chapters 3 through 5 treat the geometric representation of various digital modulation signals, their demodulation, their error rate performance in additive white Gaussian noise (AWGN) channels, and methods for synchronizing the receiver to the received signal waveforms.

Chapters 6 to 8 treat the topics of source coding, channel coding and decoding, and basic information-theoretic limits on channel capacity, source information rates, and channel coding rates.

The design of efficient modulators and demodulators for linear filter channels with distortion is treated in Chapters 9 and 10. Channel equalization methods are described for mitigating the effects of channel distortion.

Chapter 11 is focused on multichannel and multicarrier communication systems, their efficient implementation, and their performance in AWGN channels.
Chapter 12 presents an introduction to direct sequence and frequency hopped spread spectrum signals and systems and an evaluation of their performance under worst-case interference conditions.

The design of signals and coding techniques for digital communication through fading multipath channels is the focus of Chapters 13 and 14. This material is especially relevant to the design and development of wireless communication systems.

Chapter 15 treats the use of multiple transmit and receive antennas for improving the performance of wireless communication systems through signal diversity and for increasing the data rate via spatial multiplexing. The capacity of multiple-antenna systems is evaluated, and space-time codes are described for use in multiple-antenna communication systems.

Chapter 16 presents an introduction to multiuser communication systems and multiple access methods. We consider detection algorithms for uplink transmission, in which multiple users transmit data to a common receiver (a base station), and evaluate their performance. We also present algorithms for suppressing multiple access interference in a broadcast communication system, in which a transmitter employing multiple antennas transmits different data sequences simultaneously to different users.

1.6 BIBLIOGRAPHICAL NOTES AND REFERENCES

There are several historical treatments regarding the development of radio and telecommunications during the past century. These may be found in the books by McMahon (1984), Millman (1984), and Ryder and Fink (1984). We have already cited the classical works of Nyquist (1924), Hartley (1928), Kotelnikov (1947), Shannon (1948), and Hamming (1950), as well as some of the more important advances that have occurred in the field since 1950. The collected papers by Shannon have been published by IEEE Press in a book edited by Sloane and Wyner (1993) and previously in Russia in a book edited by Dobrushin and Lupanov (1963). Other collected works published by the IEEE Press that might be of interest to the reader are Key Papers in the Development of Coding Theory, edited by Berlekamp (1974), and Key Papers in the Development of Information Theory, edited by Slepian (1974).

2 DETERMINISTIC AND RANDOM SIGNAL ANALYSIS

In this chapter we present the background material needed in the study of the following chapters. The analysis of deterministic and random signals and the study of different methods for their representation are the main topics of this chapter. In addition, we introduce and study the main properties of some random variables frequently encountered in the analysis of communication systems. We continue with a review of random processes, properties of lowpass and bandpass random processes, and series expansions of random processes.

Throughout this chapter, and the book, we assume that the reader is familiar with the properties of the Fourier transform as summarized in Table 2.0-1 and the important Fourier transform pairs given in Table 2.0-2. In these tables we have used the following signal definitions:
$$\Pi(t) = \begin{cases} 1 & |t| < \frac{1}{2} \\ \frac{1}{2} & t = \pm\frac{1}{2} \\ 0 & \text{otherwise} \end{cases} \qquad \mathrm{sinc}(t) = \begin{cases} \dfrac{\sin \pi t}{\pi t} & t \neq 0 \\ 1 & t = 0 \end{cases}$$
and
$$\mathrm{sgn}(t) = \begin{cases} 1 & t > 0 \\ -1 & t < 0 \\ 0 & t = 0 \end{cases} \qquad u_{-1}(t) = \begin{cases} 1 & t > 0 \\ \frac{1}{2} & t = 0 \\ 0 & t < 0 \end{cases}$$

TABLE 2.0-1  Properties of the Fourier transform.

TABLE 2.0-2  Important Fourier transform pairs (excerpt).

| $x(t)$ | $X(f)$ |
| --- | --- |
| $t e^{-\alpha t} u_{-1}(t)$, $\alpha > 0$ | $\dfrac{1}{(\alpha + j2\pi f)^2}$ |
| $e^{-\alpha \lvert t\rvert}$, $\alpha > 0$ | $\dfrac{2\alpha}{\alpha^2 + (2\pi f)^2}$ |
| $e^{-\pi t^2}$ | $e^{-\pi f^2}$ |
| $\mathrm{sgn}(t)$ | $\dfrac{1}{j\pi f}$ |
| $u_{-1}(t)$ | $\dfrac{1}{2}\,\delta(f) + \dfrac{1}{j2\pi f}$ |
| $\delta'(t)$ | $j2\pi f$ |
| $\delta^{(n)}(t)$ | $(j2\pi f)^n$ |
| $\dfrac{1}{t}$ | $-j\pi\,\mathrm{sgn}(f)$ |
| $\sum_{n=-\infty}^{\infty} \delta(t - nT_0)$ | $\dfrac{1}{T_0}\sum_{n=-\infty}^{\infty} \delta\!\left(f - \dfrac{n}{T_0}\right)$ |
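Transform pairs such as these can be spot-checked numerically. The sketch below approximates the Fourier integral of $e^{-\pi t^2}$ by a Riemann sum built on the FFT and compares it with $e^{-\pi f^2}$ from Table 2.0-2; the grid parameters are arbitrary choices for the example.

```python
import numpy as np

dt = 0.01
t = np.arange(-10.0, 10.0, dt)
x = np.exp(-np.pi * t**2)

# Riemann-sum approximation of X(f) = integral of x(t) exp(-j 2 pi f t) dt;
# the phase factor accounts for the grid starting at t[0] rather than 0.
f = np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
X = np.fft.fftshift(np.fft.fft(x)) * dt * np.exp(-1j * 2 * np.pi * f * t[0])

print(np.max(np.abs(X.real - np.exp(-np.pi * f**2))))  # near machine precision
```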
2.1 BANDPASS AND LOWPASS SIGNAL REPRESENTATION

In this section we show that any bandpass signal can be represented in terms of an equivalent lowpass signal, called the lowpass equivalent of the original bandpass signal. This result makes it possible to work with the lowpass equivalents of bandpass signals instead of working with the bandpass signals directly, which greatly simplifies their handling: signal processing algorithms can be applied to the lowpass equivalents at much lower sampling rates, which in turn results in lower rates for the sampled data.

The Fourier transform of a signal provides information about the frequency content, or spectrum, of the signal. The Fourier transform of a real signal $x(t)$ has Hermitian symmetry, i.e., $X(-f) = X^*(f)$, from which we conclude that $|X(-f)| = |X(f)|$ and $\angle X(-f) = -\angle X(f)$. In other words, for real $x(t)$, the magnitude of $X(f)$ is even and its phase is odd. Because of this symmetry, all information about the signal is contained in the positive (or negative) frequencies, and in particular $x(t)$ can be perfectly reconstructed by specifying $X(f)$ for $f \geq 0$. Based on this observation, for a real signal $x(t)$ we define the bandwidth as the smallest range of positive frequencies such that $X(f) = 0$ when $|f|$ is outside this range. It is clear that the bandwidth of a real signal is one-half of its entire frequency support.

A lowpass, or baseband, signal is a signal whose spectrum is located around the zero frequency. For instance, speech, music, and video signals are all lowpass signals, although they have different spectral characteristics and bandwidths. Usually lowpass signals are low-frequency signals, which means that in the time domain they are slowly varying, with no jumps or sudden variations. The bandwidth of a real lowpass signal is the minimum positive $W$ such that $X(f) = 0$ outside $[-W, +W]$. For these signals the frequency support, i.e., the range of frequencies for which $X(f) \neq 0$, is $[-W, +W]$. An example of the spectrum of a real-valued lowpass signal is shown in Figure 2.1-1, where the solid line shows the magnitude spectrum $|X(f)|$ and the dashed line indicates the phase spectrum $\angle X(f)$.

FIGURE 2.1-1  The spectrum of a real-valued lowpass (baseband) signal.

We also define the positive spectrum and the negative spectrum of a signal $x(t)$ as
$$X_+(f) = X(f)\, u_{-1}(f), \qquad X_-(f) = X(f)\, u_{-1}(-f)$$

EXAMPLE 2.1-1. Let $x(t) = m(t)\cos 2\pi f_0 t$ and $y(t) = m(t)\sin 2\pi f_0 t$, where $m(t)$ is a real lowpass signal such that $M(f) = 0$ for $|f| > W$, and $f_0 > W$. Comparing these relations with Equation 2.1-12, we conclude that
$$x_i(t) = m(t), \quad x_q(t) = 0, \qquad y_i(t) = 0, \quad y_q(t) = -m(t)$$
or, equivalently,
$$x_l(t) = m(t), \qquad y_l(t) = -jm(t)$$
Note that here
$$\langle x_l, y_l \rangle = j\int_{-\infty}^{\infty} m^2(t)\, dt = j\mathcal{E}_m$$
Therefore,
$$\langle x, y \rangle = \tfrac{1}{2}\,\mathrm{Re}\left[ \langle x_l, y_l \rangle \right] = \tfrac{1}{2}\,\mathrm{Re}\left( j\mathcal{E}_m \right) = 0$$
This means that $x(t)$ and $y(t)$ are orthogonal, but their lowpass equivalents are not orthogonal.
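The lowpass equivalent can also be computed numerically. One convenient route, sketched below rather than taken from the text, forms the analytic signal with scipy.signal.hilbert and then removes the carrier; for the signal $x(t) = m(t)\cos 2\pi f_0 t$ of Example 2.1-1 the result is approximately $m(t)$, i.e., $x_l(t) = m(t)$. The carrier, bandwidth, and message are hypothetical values.

```python
import numpy as np
from scipy.signal import hilbert

fs, f0, W = 1e6, 100e3, 10e3               # sample rate, carrier, message bandwidth
t = np.arange(0.0, 1e-3, 1.0 / fs)
m = np.sinc(2 * W * (t - 0.5e-3))          # real lowpass message m(t)
x = m * np.cos(2 * np.pi * f0 * t)         # bandpass signal x(t) = m(t) cos(2 pi f0 t)

z = hilbert(x)                             # analytic signal x(t) + j x_hat(t)
xl = z * np.exp(-1j * 2 * np.pi * f0 * t)  # lowpass equivalent x_l(t)
print(np.max(np.abs(xl - m)))              # small (edge effects aside): x_l = m
```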
2.1-4 Lowpass Equivalent of a Bandpass System

A bandpass system is a system whose transfer function is located around a frequency $f_0$ (and its mirror image $-f_0$). More formally, we define a bandpass system as a system whose impulse response $h(t)$ is a bandpass signal. Since $h(t)$ is bandpass, it has a lowpass equivalent, denoted by $h_l(t)$, where
$$h(t) = \mathrm{Re}\left[ h_l(t)\, e^{j2\pi f_0 t} \right] \tag{2.1-27}$$
If a bandpass signal $x(t)$ passes through a bandpass system with impulse response $h(t)$, then obviously the output will be a bandpass signal $y(t)$. The relation between the spectra of the input and the output is given by
$$Y(f) = X(f) H(f) \tag{2.1-28}$$
Using Equation 2.1-5, we have
$$\begin{aligned}
Y_l(f) &= 2Y(f + f_0)\, u_{-1}(f + f_0) \\
&= 2X(f + f_0) H(f + f_0)\, u_{-1}(f + f_0) \\
&= \tfrac{1}{2}\left[ 2X(f + f_0)\, u_{-1}(f + f_0) \right]\left[ 2H(f + f_0)\, u_{-1}(f + f_0) \right] \\
&= \tfrac{1}{2}\, X_l(f) H_l(f)
\end{aligned} \tag{2.1-29}$$
where we have used the fact that for $f > -f_0$, which is the range of frequencies of interest, $u_{-1}^2(f + f_0) = u_{-1}(f + f_0) = 1$. In the time domain we have
$$y_l(t) = \tfrac{1}{2}\, x_l(t) \star h_l(t) \tag{2.1-30}$$
Equations 2.1-29 and 2.1-30 show that when a bandpass signal passes through a bandpass system, the input-output relation between the lowpass equivalents is very similar to the relation between the bandpass signals, the only difference being that for the lowpass equivalents a factor of $\frac{1}{2}$ is introduced.

2.2 SIGNAL SPACE REPRESENTATION OF WAVEFORMS

Signal space (or vector) representation of signals is a very effective and useful tool in the analysis of digitally modulated signals. We cover this important approach in this section, show that any set of signals is equivalent to a set of vectors, and show that signals have the same basic properties as vectors. We study methods of determining an equivalent set of vectors for a set of signals and introduce the notion of the signal space representation, or signal constellation, of a set of waveforms.

2.2-1 Vector Space Concepts

A vector $v$ in an $n$-dimensional space is characterized by its $n$ components $v_1, v_2, \ldots, v_n$. Let $v$ denote a column vector, i.e., $v = [v_1\ v_2\ \cdots\ v_n]^t$, where $A^t$ denotes the transpose of the matrix $A$. The inner product of two $n$-dimensional vectors $v_1 = [v_{11}\ v_{12}\ \cdots\ v_{1n}]^t$ and $v_2 = [v_{21}\ v_{22}\ \cdots\ v_{2n}]^t$ is defined as
$$\langle v_1, v_2 \rangle = \sum_{i=1}^{n} v_{1i}\, v_{2i}^* = v_2^H v_1 \tag{2.2-1}$$
where $A^H$ denotes the Hermitian transpose of the matrix $A$, i.e., the result of first transposing the matrix and then conjugating its elements. From the definition of the inner product of two vectors it follows that
$$\langle v_1, v_2 \rangle = \langle v_2, v_1 \rangle^* \tag{2.2-2}$$
and therefore
$$\langle v_1, v_2 \rangle + \langle v_2, v_1 \rangle = 2\,\mathrm{Re}\left[ \langle v_1, v_2 \rangle \right] \tag{2.2-3}$$
A vector may also be represented as a linear combination of orthogonal unit vectors, i.e., an orthonormal basis $e_i$, $1 \leq i \leq n$:
$$v = \sum_{i=1}^{n} v_i e_i \tag{2.2-4}$$
where, by definition, a unit vector has length unity and $v_i$ is the projection of the vector $v$ onto the unit vector $e_i$, i.e., $v_i = \langle v, e_i \rangle$. Two vectors $v_1$ and $v_2$ are orthogonal if $\langle v_1, v_2 \rangle = 0$. More generally, a set of $m$ vectors $v_k$, $1 \leq k \leq m$, is orthogonal if $\langle v_i, v_j \rangle = 0$ for all $1 \leq i, j \leq m$ with $i \neq j$. The norm of a vector $v$ is denoted by $\|v\|$ and is defined as
$$\|v\| = \langle v, v \rangle^{1/2} = \left( \sum_{i=1}^{n} |v_i|^2 \right)^{1/2} \tag{2.2-5}$$
which in $n$-dimensional space is simply the length of the vector. A set of $m$ vectors is said to be orthonormal if the vectors are orthogonal and each vector has unit norm. A set of $m$ vectors is said to be linearly independent if no vector can be represented as a linear combination of the remaining vectors.

Any two $n$-dimensional vectors $v_1$ and $v_2$ satisfy the triangle inequality
$$\|v_1 + v_2\| \leq \|v_1\| + \|v_2\| \tag{2.2-6}$$
with equality if $v_1$ and $v_2$ are in the same direction, i.e., $v_1 = a v_2$ where $a$ is a positive real scalar. The Cauchy–Schwarz inequality states that
$$|\langle v_1, v_2 \rangle| \leq \|v_1\| \cdot \|v_2\| \tag{2.2-7}$$
with equality if $v_1 = a v_2$ for some complex scalar $a$.
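These definitions translate directly into code. The short sketch below checks Equations 2.2-2, 2.2-6, and 2.2-7 on random complex vectors; note that NumPy's vdot conjugates its first argument, so $\langle v_1, v_2 \rangle = v_2^H v_1$ corresponds to vdot(v2, v1).

```python
import numpy as np

rng = np.random.default_rng(1)
v1 = rng.normal(size=4) + 1j * rng.normal(size=4)
v2 = rng.normal(size=4) + 1j * rng.normal(size=4)

inner = np.vdot(v2, v1)                    # <v1, v2> = v2^H v1, Eq. 2.2-1
norm = np.linalg.norm

assert np.isclose(inner, np.conj(np.vdot(v1, v2)))   # Hermitian symmetry, Eq. 2.2-2
assert abs(inner) <= norm(v1) * norm(v2) + 1e-12     # Cauchy-Schwarz, Eq. 2.2-7
assert norm(v1 + v2) <= norm(v1) + norm(v2) + 1e-12  # triangle inequality, Eq. 2.2-6
print("inner product identities hold")
```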
The norm square of the sum of two vectors may be expressed as
$$\|v_1 + v_2\|^2 = \|v_1\|^2 + \|v_2\|^2 + 2\,\mathrm{Re}\left[ \langle v_1, v_2 \rangle \right] \tag{2.2-8}$$
If $v_1$ and $v_2$ are orthogonal, then $\langle v_1, v_2 \rangle = 0$ and, hence,
$$\|v_1 + v_2\|^2 = \|v_1\|^2 + \|v_2\|^2 \tag{2.2-9}$$
This is the Pythagorean relation for two orthogonal $n$-dimensional vectors.

From matrix algebra, we recall that a linear transformation in an $n$-dimensional vector space is a matrix transformation of the form $v' = Av$, where the matrix $A$ transforms the vector $v$ into some vector $v'$. In the special case where $v' = \lambda v$, i.e., $Av = \lambda v$ where $\lambda$ is some scalar, the vector $v$ is called an eigenvector of the transformation and $\lambda$ is the corresponding eigenvalue.

Finally, let us review the Gram–Schmidt procedure for constructing a set of orthonormal vectors from a set of $n$-dimensional vectors $v_i$, $1 \leq i \leq m$. We begin by arbitrarily selecting a vector from the set, say $v_1$. By normalizing its length, we obtain the first vector, say,
$$u_1 = \frac{v_1}{\|v_1\|} \tag{2.2-10}$$
Next, we may select $v_2$ and first subtract the projection of $v_2$ onto $u_1$. Thus, we obtain
$$u_2' = v_2 - \langle v_2, u_1 \rangle u_1 \tag{2.2-11}$$
Then we normalize the vector $u_2'$ to unit length. This yields
$$u_2 = \frac{u_2'}{\|u_2'\|} \tag{2.2-12}$$
The procedure continues by selecting $v_3$ and subtracting its projections onto $u_1$ and $u_2$. Thus, we have
$$u_3' = v_3 - \langle v_3, u_1 \rangle u_1 - \langle v_3, u_2 \rangle u_2 \tag{2.2-13}$$
Then the orthonormal vector $u_3$ is
$$u_3 = \frac{u_3'}{\|u_3'\|} \tag{2.2-14}$$
By continuing this procedure, we construct a set of $N$ orthonormal vectors, where $N \leq \min(m, n)$.
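A compact implementation of Equations 2.2-10 through 2.2-14 is sketched below; the tolerance used to discard linearly dependent vectors and the test vectors themselves are arbitrary choices.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize vectors via Eqs. 2.2-10 to 2.2-14, dropping dependent ones."""
    basis = []
    for v in vectors:
        u = v.astype(complex)
        for e in basis:
            u = u - np.vdot(e, u) * e      # subtract the projection <v, e> e
        n = np.linalg.norm(u)
        if n > tol:                        # keep only independent directions
            basis.append(u / n)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([2.0, 1.0, 1.0])]           # third vector = first + second
print(len(gram_schmidt(vs)))               # 2, consistent with N <= min(m, n)
```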
2.2-2 Signal Space Concepts

As in the case of vectors, we may develop a parallel treatment for a set of signals. The inner product of two generally complex-valued signals $x_1(t)$ and $x_2(t)$ is denoted by $\langle x_1(t), x_2(t) \rangle$ and defined as
$$\langle x_1(t), x_2(t) \rangle = \int_{-\infty}^{\infty} x_1(t)\, x_2^*(t)\, dt \tag{2.2-15}$$
similar to Equation 2.1-22. The signals are orthogonal if their inner product is zero. The norm of a signal is defined as
$$\|x(t)\| = \left( \int_{-\infty}^{\infty} |x(t)|^2\, dt \right)^{1/2} = \sqrt{\mathcal{E}_x} \tag{2.2-16}$$
where $\mathcal{E}_x$ is the energy in $x(t)$. A set of $m$ signals is orthonormal if the signals are orthogonal and their norms are all unity. A set of $m$ signals is linearly independent if no signal can be represented as a linear combination of the remaining signals. The triangle inequality for two signals is simply
$$\|x_1(t) + x_2(t)\| \leq \|x_1(t)\| + \|x_2(t)\| \tag{2.2-17}$$
and the Cauchy–Schwarz inequality is
$$|\langle x_1(t), x_2(t) \rangle| \leq \|x_1(t)\| \cdot \|x_2(t)\| = \sqrt{\mathcal{E}_{x_1}\mathcal{E}_{x_2}} \tag{2.2-18}$$
or, equivalently,
$$\left| \int_{-\infty}^{\infty} x_1(t)\, x_2^*(t)\, dt \right| \leq \left[ \int_{-\infty}^{\infty} |x_1(t)|^2\, dt \right]^{1/2} \left[ \int_{-\infty}^{\infty} |x_2(t)|^2\, dt \right]^{1/2} \tag{2.2-19}$$
with equality when $x_2(t) = a x_1(t)$, where $a$ is any complex number.

2.2-3 Orthogonal Expansions of Signals

In this section, we develop a vector representation for signal waveforms, and thus we demonstrate an equivalence between a signal waveform and its vector representation. Suppose that $s(t)$ is a deterministic signal with finite energy
$$\mathcal{E}_s = \int_{-\infty}^{\infty} |s(t)|^2\, dt \tag{2.2-20}$$
Furthermore, suppose that there exists a set of functions $\{\phi_n(t),\ n = 1, 2, \ldots, K\}$ that are orthonormal in the sense that
$$\langle \phi_n(t), \phi_m(t) \rangle = \int_{-\infty}^{\infty} \phi_n(t)\, \phi_m^*(t)\, dt = \begin{cases} 1 & m = n \\ 0 & m \neq n \end{cases} \tag{2.2-21}$$
We may approximate the signal $s(t)$ by a weighted linear combination of these functions, i.e.,
$$\hat{s}(t) = \sum_{k=1}^{K} s_k \phi_k(t) \tag{2.2-22}$$
where $\{s_k,\ 1 \leq k \leq K\}$ are the coefficients in the approximation of $s(t)$. The approximation error incurred is
$$e(t) = s(t) - \hat{s}(t)$$
Let us select the coefficients $\{s_k\}$ so as to minimize the energy $\mathcal{E}_e$ of the approximation error. Thus,
$$\mathcal{E}_e = \int_{-\infty}^{\infty} |s(t) - \hat{s}(t)|^2\, dt \tag{2.2-23}$$
$$= \int_{-\infty}^{\infty} \left| s(t) - \sum_{k=1}^{K} s_k \phi_k(t) \right|^2 dt \tag{2.2-24}$$
The optimum coefficients in the series expansion of $s(t)$ may be found by differentiating Equation 2.2-23 with respect to each of the coefficients $\{s_k\}$ and setting the first derivatives to zero. Alternatively, we may use a well-known result from estimation theory based on the mean square error criterion, which, simply stated, is that the minimum of $\mathcal{E}_e$ with respect to the $\{s_k\}$ is obtained when the error is orthogonal to each of the functions in the series expansion. Thus,
$$\int_{-\infty}^{\infty} \left[ s(t) - \sum_{k=1}^{K} s_k \phi_k(t) \right] \phi_n^*(t)\, dt = 0, \qquad n = 1, 2, \ldots, K \tag{2.2-25}$$
Since the functions $\{\phi_n(t)\}$ are orthonormal, Equation 2.2-25 reduces to
$$s_n = \langle s(t), \phi_n(t) \rangle = \int_{-\infty}^{\infty} s(t)\, \phi_n^*(t)\, dt, \qquad n = 1, 2, \ldots, K \tag{2.2-26}$$
Thus, the coefficients are obtained by projecting the signal $s(t)$ onto each of the functions $\{\phi_n(t)\}$. Consequently, $\hat{s}(t)$ is the projection of $s(t)$ onto the $K$-dimensional signal space spanned by the functions $\{\phi_n(t)\}$, and therefore it is orthogonal to the error signal $e(t) = s(t) - \hat{s}(t)$, i.e., $\langle e(t), \hat{s}(t) \rangle = 0$. The minimum mean square approximation error is
$$\mathcal{E}_{\min} = \int_{-\infty}^{\infty} e(t)\, s^*(t)\, dt \tag{2.2-27}$$
$$= \int_{-\infty}^{\infty} |s(t)|^2\, dt - \int_{-\infty}^{\infty} \sum_{k=1}^{K} s_k \phi_k(t)\, s^*(t)\, dt \tag{2.2-28}$$
$$= \mathcal{E}_s - \sum_{k=1}^{K} |s_k|^2 \tag{2.2-29}$$
which is nonnegative, by definition. When the minimum mean square approximation error $\mathcal{E}_{\min} = 0$,
$$\mathcal{E}_s = \sum_{k=1}^{K} |s_k|^2 = \int_{-\infty}^{\infty} |s(t)|^2\, dt \tag{2.2-30}$$
Under the condition that $\mathcal{E}_{\min} = 0$, we may express $s(t)$ as
$$s(t) = \sum_{k=1}^{K} s_k \phi_k(t) \tag{2.2-31}$$
where it is understood that equality of $s(t)$ to its series expansion holds in the sense that the approximation error has zero energy. When every finite energy signal can be represented by a series expansion of the form in Equation 2.2-31 for which $\mathcal{E}_{\min} = 0$, the set of orthonormal functions $\{\phi_n(t)\}$ is said to be complete.

EXAMPLE 2.2-1. TRIGONOMETRIC FOURIER SERIES: Consider a finite energy real signal $s(t)$ that is zero everywhere except in the range $0 \leq t \leq T$ and has a finite number of discontinuities in this interval. Its periodic extension can be represented in a Fourier series as
$$s(t) = \sum_{k=0}^{\infty} \left[ a_k \cos\frac{2\pi kt}{T} + b_k \sin\frac{2\pi kt}{T} \right] \tag{2.2-32}$$
where the coefficients $\{a_k, b_k\}$ that minimize the mean square error are given by
$$a_0 = \frac{1}{T}\int_0^T s(t)\, dt$$
$$a_k = \frac{2}{T}\int_0^T s(t)\cos\frac{2\pi kt}{T}\, dt, \qquad k = 1, 2, 3, \ldots \tag{2.2-33}$$
$$b_k = \frac{2}{T}\int_0^T s(t)\sin\frac{2\pi kt}{T}\, dt, \qquad k = 1, 2, 3, \ldots$$
The set of functions $\left\{ 1/\sqrt{T},\ \sqrt{2/T}\cos(2\pi kt/T),\ \sqrt{2/T}\sin(2\pi kt/T) \right\}$ is a complete set for the expansion of periodic signals on the interval $[0, T]$, and, hence, the series expansion results in zero mean square error.

EXAMPLE 2.2-2. EXPONENTIAL FOURIER SERIES: Consider a general finite energy signal $s(t)$ (real or complex) that is zero everywhere except in the range $0 \leq t \leq T$ and has a finite number of discontinuities in this interval. Its periodic extension can be represented in an exponential Fourier series as
$$s(t) = \sum_{n=-\infty}^{\infty} x_n e^{j2\pi nt/T} \tag{2.2-34}$$
where the coefficients $\{x_n\}$ that minimize the mean square error are given by
$$x_n = \frac{1}{T}\int_0^T s(t)\, e^{-j2\pi nt/T}\, dt \tag{2.2-35}$$
The set of functions $\left\{ (1/\sqrt{T})\, e^{j2\pi nt/T} \right\}$ is a complete set for the expansion of periodic signals on the interval $[0, T]$, and, hence, the series expansion results in zero mean square error.
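The expansion coefficients of Equation 2.2-35 can be computed numerically. The sketch below expands one period of a square wave (an arbitrary test signal) in the exponential Fourier series of Example 2.2-2 and measures the residual mean square error of the truncated expansion.

```python
import numpy as np

T, N = 1.0, 25                       # period and number of harmonics kept
t = np.linspace(0.0, T, 2000, endpoint=False)
dt = t[1] - t[0]
s = np.where(t < T / 2, 1.0, -1.0)   # square wave over one period

n = np.arange(-N, N + 1)
basis = np.exp(1j * 2 * np.pi * np.outer(n, t) / T)   # e^{j 2 pi n t / T}
x = (basis.conj() @ s) * dt / T                       # coefficients x_n, Eq. 2.2-35
s_hat = (x @ basis).real                              # truncated series expansion

print(np.mean((s - s_hat) ** 2))     # residual MSE, which shrinks as N grows
```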
2.2-4 Gram–Schmidt Procedure

Now suppose that we have a set of finite energy signal waveforms $\{s_m(t),\ m = 1, 2, \ldots, M\}$ and we wish to construct a set of orthonormal waveforms. The Gram–Schmidt orthogonalization procedure allows us to construct such a set. This procedure is similar to the one described in Section 2.2-1 for vectors. We begin with the first waveform, $s_1(t)$, which is assumed to have energy $\mathcal{E}_1$. The first orthonormal waveform is simply constructed as
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{\mathcal{E}_1}} \tag{2.2-36}$$
Thus, $\phi_1(t)$ is simply $s_1(t)$ normalized to unit energy. The second waveform is constructed from $s_2(t)$ by first computing the projection of $s_2(t)$ onto $\phi_1(t)$, which is
$$c_{21} = \langle s_2(t), \phi_1(t) \rangle = \int_{-\infty}^{\infty} s_2(t)\, \phi_1^*(t)\, dt \tag{2.2-37}$$
Then $c_{21}\phi_1(t)$ is subtracted from $s_2(t)$ to yield
$$\phi_2'(t) = s_2(t) - c_{21}\phi_1(t) \tag{2.2-38}$$
This waveform is orthogonal to $\phi_1(t)$, but it does not have unit energy. If $\mathcal{E}_2$ denotes the energy of $\phi_2'(t)$, i.e.,
$$\mathcal{E}_2 = \int_{-\infty}^{\infty} |\phi_2'(t)|^2\, dt$$
the normalized waveform that is orthogonal to $\phi_1(t)$ is
$$\phi_2(t) = \frac{\phi_2'(t)}{\sqrt{\mathcal{E}_2}} \tag{2.2-39}$$
In general, the orthogonalization of the $k$th function leads to
$$\phi_k(t) = \frac{\phi_k'(t)}{\sqrt{\mathcal{E}_k}} \tag{2.2-40}$$
where
$$\phi_k'(t) = s_k(t) - \sum_{i=1}^{k-1} c_{ki}\phi_i(t) \tag{2.2-41}$$
$$c_{ki} = \langle s_k(t), \phi_i(t) \rangle = \int_{-\infty}^{\infty} s_k(t)\, \phi_i^*(t)\, dt, \qquad i = 1, 2, \ldots, k-1 \tag{2.2-42}$$
$$\mathcal{E}_k = \int_{-\infty}^{\infty} |\phi_k'(t)|^2\, dt \tag{2.2-43}$$
Thus, the orthogonalization process is continued until all the $M$ signal waveforms $\{s_m(t)\}$ have been exhausted and $N \leq M$ orthonormal waveforms have been constructed. The dimensionality $N$ of the signal space will be equal to $M$ if all the signal waveforms are linearly independent, i.e., if none of the signal waveforms is a linear combination of the others.

EXAMPLE 2.2-3. Let us apply the Gram–Schmidt procedure to the set of four waveforms illustrated in Figure 2.2-1. The waveform $s_1(t)$ has energy $\mathcal{E}_1 = 2$, so that
$$\phi_1(t) = \frac{1}{\sqrt{2}}\, s_1(t)$$
Next we observe that $c_{21} = 0$; hence, $s_2(t)$ and $\phi_1(t)$ are orthogonal. Therefore, $\phi_2(t) = s_2(t)/\sqrt{\mathcal{E}_2} = \frac{1}{\sqrt{2}}\, s_2(t)$. To obtain $\phi_3(t)$, we compute $c_{31}$ and $c_{32}$, which are $c_{31} = \sqrt{2}$ and $c_{32} = 0$. Thus,
$$\phi_3'(t) = s_3(t) - \sqrt{2}\,\phi_1(t) = \begin{cases} -1 & 2 \leq t \leq 3 \\ 0 & \text{otherwise} \end{cases}$$
Since $\phi_3'(t)$ has unit energy, it follows that $\phi_3(t) = \phi_3'(t)$. In determining $\phi_4(t)$, we find that $c_{41} = -\sqrt{2}$, $c_{42} = 0$, and $c_{43} = 1$. Hence,
$$\phi_4'(t) = s_4(t) + \sqrt{2}\,\phi_1(t) - \phi_3(t) = 0$$
Consequently, $s_4(t)$ is a linear combination of $\phi_1(t)$ and $\phi_3(t)$, and hence $\phi_4(t) = 0$. The three orthonormal functions are illustrated in Figure 2.2-1(b).

Once we have constructed the set of orthonormal waveforms $\{\phi_n(t)\}$, we can express the $M$ signals $\{s_m(t)\}$ as linear combinations of the $\{\phi_n(t)\}$. Thus, we may write
$$s_m(t) = \sum_{n=1}^{N} s_{mn}\phi_n(t), \qquad m = 1, 2, \ldots, M \tag{2.2-44}$$
Based on the expression in Equation 2.2-44, each signal may be represented by the vector
$$s_m = [s_{m1}\ s_{m2}\ \cdots\ s_{mN}]^t \tag{2.2-45}$$
or, equivalently, as a point in the $N$-dimensional (in general, complex) signal space with coordinates $\{s_{mn},\ n = 1, 2, \ldots, N\}$. Therefore, a set of $M$ signals $\{s_m(t)\}_{m=1}^{M}$ can be represented by a set of $M$ vectors $\{s_m\}_{m=1}^{M}$ in the $N$-dimensional space, where $N \leq M$. The corresponding set of vectors is called the signal space representation, or constellation, of $\{s_m(t)\}_{m=1}^{M}$. If the original signals are real, then the corresponding vector representations are in $\mathbb{R}^N$; if the signals are complex, then the vector representations are in $\mathbb{C}^N$. Figure 2.2-2 illustrates the process of obtaining the vector equivalent from a signal (signal-to-vector mapping) and vice versa (vector-to-signal mapping). From the orthonormality of the basis $\{\phi_n(t)\}$ it follows that
$$\mathcal{E}_m = \int_{-\infty}^{\infty} |s_m(t)|^2\, dt = \sum_{n=1}^{N} |s_{mn}|^2 = \|s_m\|^2 \tag{2.2-46}$$
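The same procedure can be run numerically on sampled waveforms; relative to the vector case, only the inner product changes, becoming the integral of Equation 2.2-15. In the sketch below, the four pulse shapes are an assumed reading of Figure 2.2-1, chosen to be consistent with the coefficients quoted in Example 2.2-3, and should not be taken as the exact waveforms of the figure.

```python
import numpy as np

# Assumed pulse shapes: s1 = 1 on [0,2); s2 = 1 on [0,1), -1 on [1,2);
# s3 = 1 on [0,2), -1 on [2,3); s4 = -1 on [0,3).
dt = 1e-3
t = np.arange(0.0, 3.0, dt)
s1 = np.where(t < 2, 1.0, 0.0)
s2 = np.where(t < 1, 1.0, np.where(t < 2, -1.0, 0.0))
s3 = np.where(t < 2, 1.0, -1.0)
s4 = -np.ones_like(t)

def inner(x, y):
    return np.sum(x * np.conj(y)) * dt              # <x, y>, Eq. 2.2-15

basis = []
for s in (s1, s2, s3, s4):
    phi = s - sum(inner(s, p) * p for p in basis)   # Eq. 2.2-41
    E = inner(phi, phi).real                        # Eq. 2.2-43
    if E > 1e-9:
        basis.append(phi / np.sqrt(E))              # Eq. 2.2-40

print(len(basis))                                    # 3: s4(t) is dependent
print([round(inner(s3, p).real, 3) for p in basis])  # [1.414, 0.0, 1.0]
```

The printed coefficients reproduce the vector $s_3 = (\sqrt{2}, 0, 1)^t$ of the following example.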
FIGURE 2.2-1  Gram–Schmidt orthogonalization of the signals $\{s_m(t),\ m = 1, 2, 3, 4\}$ in (a) and the corresponding orthonormal basis in (b).

The energy in the $m$th signal is simply the square of the length of the vector or, equivalently, the square of the Euclidean distance from the origin to the point $s_m$ in the $N$-dimensional space. Thus, any signal can be represented geometrically as a point in the signal space spanned by the orthonormal functions $\{\phi_n(t)\}$. From the orthonormality of the basis it also follows that
$$\langle s_k(t), s_l(t) \rangle = \langle s_k, s_l \rangle \tag{2.2-47}$$
This shows that the inner product of two signals is equal to the inner product of the corresponding vectors.

FIGURE 2.2-2  Vector-to-signal (a) and signal-to-vector (b) mappings.

EXAMPLE 2.2-4. Let us obtain the vector representation of the four signals shown in Figure 2.2-1(a) by using the orthonormal set of functions in Figure 2.2-1(b). Since the dimensionality of the signal space is $N = 3$, each signal is described by three components. The signal $s_1(t)$ is characterized by the vector $s_1 = (\sqrt{2}, 0, 0)^t$. Similarly, the signals $s_2(t)$, $s_3(t)$, and $s_4(t)$ are characterized by the vectors $s_2 = (0, \sqrt{2}, 0)^t$, $s_3 = (\sqrt{2}, 0, 1)^t$, and $s_4 = (-\sqrt{2}, 0, 1)^t$, respectively. These vectors are shown in Figure 2.2-3. Their lengths are $\|s_1\| = \sqrt{2}$, $\|s_2\| = \sqrt{2}$, $\|s_3\| = \sqrt{3}$, and $\|s_4\| = \sqrt{3}$, and the corresponding signal energies are $\mathcal{E}_k = \|s_k\|^2$, $k = 1, 2, 3, 4$.

FIGURE 2.2-3  The four signal vectors represented as points in three-dimensional space.

We have demonstrated that a set of $M$ finite energy waveforms $\{s_m(t)\}$ can be represented by a weighted linear combination of orthonormal functions $\{\phi_n(t)\}$ of dimensionality $N \leq M$. The functions $\{\phi_n(t)\}$ are obtained by applying the Gram–Schmidt orthogonalization procedure on $\{s_m(t)\}$. It should be emphasized, however, that the functions $\{\phi_n(t)\}$ obtained from the Gram–Schmidt procedure are not unique. If we alter the order in which the orthogonalization of the signals $\{s_m(t)\}$ is performed, the orthonormal waveforms will be different, and the corresponding vector representation of the signals $\{s_m(t)\}$ will depend on the choice of the orthonormal functions $\{\phi_n(t)\}$. Nevertheless, the dimensionality $N$ of the signal space will not change, and the vectors $\{s_m\}$ will retain their geometric configuration; i.e., their lengths and their inner products will be invariant to the choice of the orthonormal functions $\{\phi_n(t)\}$.

EXAMPLE 2.2-5. An alternative set of orthonormal functions for the four signals in Figure 2.2-1(a) is illustrated in Figure 2.2-4(a). By using these functions to expand $\{s_n(t)\}$, we obtain the corresponding vectors $s_1 = (1, 1, 0)^t$, $s_2 = (1, -1, 0)^t$, $s_3 = (1, 1, 1)^t$, and $s_4 = (-1, -1, 1)^t$, which are shown in Figure 2.2-4(b). Note that the vector lengths are identical to those obtained from the orthonormal functions $\{\phi_n(t)\}$.

FIGURE 2.2-4  An alternative set of orthonormal functions for the four signals in Figure 2.2-1(a) and the corresponding signal points.
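The invariance claim is easy to verify numerically: the two vector sets must have identical Gram matrices (all lengths and inner products). The sign pattern of the second set below is reconstructed under that constraint, so it should be read as an assumption consistent with Examples 2.2-4 and 2.2-5.

```python
import numpy as np

r2 = np.sqrt(2.0)
A = np.array([[r2, 0, 0], [0, r2, 0], [r2, 0, 1], [-r2, 0, 1]])  # Example 2.2-4
B = np.array([[1, 1, 0], [1, -1, 0], [1, 1, 1], [-1, -1, 1]])    # Example 2.2-5 (signs assumed)

# Equal Gram matrices mean equal lengths and pairwise inner products.
print(np.allclose(A @ A.T, B @ B.T))   # True: geometric configuration preserved
```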
Bandpass and Lowpass Orthonormal Basis
Let us consider the case in which the signal waveforms are bandpass and represented as
$$s_m(t) = \mathrm{Re}\left[ s_{ml}(t)\, e^{j2\pi f_0 t} \right], \qquad m = 1, 2, \ldots, M \tag{2.2-48}$$
where $\{s_{ml}(t)\}$ denote the lowpass equivalent signals. Recall from Section 2.1-1 that if two lowpass equivalent signals are orthogonal, the corresponding bandpass signals are orthogonal as well.