Digital Signal Processing: Principles, Algorithms and Applications, 5th edition

Published by Pearson (February 19, 2021) © 2022

  • John G. Proakis, Northeastern University
  • Dimitris G. Manolakis, Massachusetts Institute of Technology

eTextbook

per month

  • Anytime, anywhere learning with the Pearson+ app
  • Easy-to-use search, navigation and notebook
  • Simpler studying with flashcards

Print

$79.99

  • Hardcover, paperback or looseleaf edition
  • Affordable rental option for select titles
  • Free shipping on looseleafs and traditional textbooks

For courses in electrical and computer engineering and digital signal processing.

Balanced coverage of digital signal processing theory and practical applications

Digital Signal Processing presents fundamental concepts and techniques of discrete-time signals, systems and modern digital processing, as well as algorithms and applications. It covers both time-domain and frequency-domain methods for analyzing linear, discrete-time systems. Rigorous and challenging, the text offers examples and over 500 problems that emphasize software implementation of digital signal processing algorithms.

The 5th Edition includes a new chapter on multirate digital filter banks and wavelets as well as new state-of-the-art topics.

Hallmark features of this title

  • Practical applications include examples, over 500 homework problems, and computer problems.
  • Describes techniques for converting analog signals to digital form, the analysis of linear time-invariant discrete-time systems and signals in the time domain, and bilateral and unilateral z-transform methods.
  • Analyzes signals and systems in the frequency domain, the Fourier series and Fourier transform for continuous-time and discrete-time signals, and the sampling and reconstruction of continuous-time signals.
  • Examines realizations of IIR and FIR systems and techniques to design digital filters.
  • Looks at sampling-rate conversion and applications to multirate digital signal processing.
  • Discusses linear prediction, optimum linear filters, and single-channel adaptive filters based on the LMS and RLS algorithms (a brief LMS sketch follows this list).
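To give a concrete flavor of the LMS adaptive filtering mentioned in the last bullet, here is a minimal NumPy sketch of LMS-based system identification. The unknown filter h_true, the signal lengths and the step size are illustrative assumptions made here, not values or code taken from the book.

```python
import numpy as np

def lms_identify(x, d, num_taps=4, mu=0.05):
    """Adapt an FIR filter so its output tracks the desired signal d.

    x  : input seen by both the unknown system and the adaptive filter
    d  : desired signal (the unknown system's output, possibly noisy)
    mu : step size controlling convergence speed vs. steady-state error
    """
    w = np.zeros(num_taps)                          # adaptive coefficients
    for n in range(num_taps, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]     # most recent samples, newest first
        e = d[n] - w @ x_vec                        # a-priori estimation error
        w = w + 2 * mu * e * x_vec                  # LMS coefficient update
    return w

# Hypothetical system-identification setup (values are illustrative only)
rng = np.random.default_rng(0)
h_true = np.array([0.7, -0.3, 0.2, 0.1])            # "unknown" FIR system
x = rng.standard_normal(5000)
d = np.convolve(x, h_true, mode="full")[:len(x)]    # its response to x
print(lms_identify(x, d))                           # should approach h_true
```

The step size mu sets the usual tradeoff: larger values adapt faster but leave more steady-state error, smaller values do the opposite.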

New and updated features of this title

  • NEW: Chapter 12 covers multirate digital filter banks and wavelets. It explains two-channel quadrature mirror filter (QMF) banks and multichannel filter banks that eliminate aliasing and provide perfect reconstruction of signals, and it treats the design of FIR filters for both two-channel and multichannel banks; a minimal numerical sketch of perfect reconstruction follows this list. Wavelets and the discrete wavelet transform are the focus of the second part of the chapter, which describes the construction of the discrete wavelet transform and the connections between wavelets and filter banks.
  • NEW: ARMA model parameter estimation is included in the detailed examination of power spectrum estimation.
  • NEW: Introduces the short-time Fourier transform and the sparse FFT algorithm in its examination of DFT properties and applications and of efficient computation of the DFT.
  • NEW: Discusses reverberation filters in its coverage of frequency-domain analysis of discrete-time LTI systems.
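As a small illustration of the perfect-reconstruction property mentioned in the Chapter 12 bullet above, the sketch below passes a signal through a two-channel analysis/synthesis filter bank built from the orthogonal Haar filter pair, with decimation and expansion by 2 in each branch. The Haar filters and the test signal are assumptions chosen here for brevity; they are not designs or code from the text.

```python
import numpy as np

def down2(v):            # keep every other sample (decimation by 2)
    return v[::2]

def up2(v):              # insert a zero after each sample (expansion by 2)
    u = np.zeros(2 * len(v))
    u[::2] = v
    return u

# Orthogonal Haar analysis filters: h0 lowpass, h1 highpass
h0 = np.array([1.0,  1.0]) / np.sqrt(2)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)
# Synthesis filters: time-reversed analysis filters (paraunitary choice)
g0, g1 = h0[::-1], h1[::-1]

x = np.random.default_rng(1).standard_normal(64)    # arbitrary test signal

# Analysis: filter, then decimate each channel by 2
c0 = down2(np.convolve(x, h0))
c1 = down2(np.convolve(x, h1))

# Synthesis: expand by 2, filter, and add the two channels
y = np.convolve(up2(c0), g0) + np.convolve(up2(c1), g1)

# Perfect reconstruction up to a one-sample delay
print(np.allclose(y[1:len(x) + 1], x))               # True
```

Within numerical precision the output reproduces the input delayed by one sample: the aliasing introduced by the decimators cancels between the two channels, which is the behavior a perfect-reconstruction bank is designed to achieve.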
Table of contents

  1. Introduction
    • 1.1 Signals, Systems, and Signal Processing
      • 1.1.1 Basic Elements of a Digital Signal Processing System
      • 1.1.2 Advantages of Digital over Analog Signal Processing
    • 1.2 Classification of Signals
      • 1.2.1 Multichannel and Multidimensional Signals
      • 1.2.2 Continuous-Time Versus Discrete-Time Signals
      • 1.2.3 Continuous-Valued Versus Discrete-Valued Signals
      • 1.2.4 Deterministic Versus Random Signals
    • 1.3 Summary
    • Problems
  2. Discrete-Time Signals and Systems
    • 2.1 Discrete-Time Signals
      • 2.1.1 Some Elementary Discrete-Time Signals
      • 2.1.2 Classification of Discrete-Time Signals
      • 2.1.3 Simple Manipulations of Discrete-Time Signals
    • 2.2 Discrete-Time Systems
      • 2.2.1 Input-Output Description of Systems
      • 2.2.2 Block Diagram Representation of Discrete-Time Systems
      • 2.2.3 Classification of Discrete-Time Systems
      • 2.2.4 Interconnection of Discrete-Time Systems
    • 2.3 Analysis of Discrete-Time Linear Time-Invariant Systems
      • 2.3.1 Techniques for the Analysis of Linear Systems
      • 2.3.2 Resolution of a Discrete-Time Signal into Impulses
      • 2.3.3 Response of LTI Systems to Arbitrary Inputs: The Convolution Sum
      • 2.3.4 Properties of Convolution and the Interconnection of LTI Systems
      • 2.3.5 Causal Linear Time-Invariant Systems
      • 2.3.6 Stability of Linear Time-Invariant Systems
      • 2.3.7 Systems with Finite-Duration and Infinite-Duration Impulse Response
    • 2.4 Discrete-Time Systems Described by Difference Equations
      • 2.4.1 Recursive and Nonrecursive Discrete-Time Systems
      • 2.4.2 Linear Time-Invariant Systems Characterized by Constant-Coefficient Difference Equations
      • 2.4.3 Application of LTI Systems for Signal Smoothing
    • 2.5 Implementation of Discrete-Time Systems
      • 2.5.1 Structures for the Realization of Linear Time-Invariant Systems
      • 2.5.2 Recursive and Nonrecursive Realizations of FIR Systems
    • 2.6 Correlation of Discrete-Time Signals
      • 2.6.1 Crosscorrelation and Autocorrelation Sequences
      • 2.6.2 Properties of the Autocorrelation and Crosscorrelation Sequences
      • 2.6.3 Correlation of Periodic Sequences
      • 2.6.4 Input-Output Correlation Sequences
    • 2.7 Summary
    • Problems
    • Computer Problems
  3. The z-Transform and Its Application to the Analysis of LTI Systems
    • 3.1 The z-Transform
      • 3.1.1 The Direct z-Transform
      • 3.1.2 The Inverse z-Transform
    • 3.2 Properties of the z-Transform
    • 3.3 Rational z-Transforms
      • 3.3.1 Poles and Zeros
      • 3.3.2 Pole Location and Time-Domain Behavior for Causal Signals
      • 3.3.3 The System Function of a Linear Time-Invariant System
    • 3.4 Inversion of the z-Transform
      • 3.4.1 The Inverse z-Transform by Contour Integration
      • 3.4.2 The Inverse z-Transform by Power Series Expansion
      • 3.4.3 The Inverse z-Transform by Partial-Fraction Expansion
      • 3.4.4 Decomposition of Rational z-Transforms
    • 3.5 Analysis of Linear Time-Invariant Systems in the z-Domain
      • 3.5.1 Response of Systems with Rational System Functions
      • 3.5.2 Transient and Steady-State Responses
      • 3.5.3 Causality and Stability
      • 3.5.4 Pole-Zero Cancellations
      • 3.5.5 Multiple-Order Poles and Stability
      • 3.5.6 Stability of Second-Order Systems
    • 3.6 The One-sided z-Transform
      • 3.6.1 Definition and Properties
      • 3.6.2 Solution of Difference Equations
      • 3.6.3 Response of Pole-Zero Systems with Nonzero Initial Conditions
    • 3.7 Summary
    • Problems
    • Computer Problems
  4. Frequency Analysis of Signals
    • 4.1 The Concept of Frequency in Continuous-Time and Discrete-Time Signals
      • 4.1.1 Continuous-Time Sinusoidal Signals
      • 4.1.2 Discrete-Time Sinusoidal Signals
      • 4.1.3 Harmonically Related Complex Exponentials
      • 4.1.4 Sampling of Analog Signals
      • 4.1.5 The Sampling Theorem
    • 4.2 Frequency Analysis of Continuous-Time Signals
      • 4.2.1 The Fourier Series for Continuous-Time Periodic Signals
      • 4.2.2 Power Density Spectrum of Periodic Signals
      • 4.2.3 The Fourier Transform for Continuous-Time Aperiodic Signals
      • 4.2.4 Energy Density Spectrum of Aperiodic Signals
    • 4.3 Frequency Analysis of Discrete-Time Signals
      • 4.3.1 The Fourier Series for Discrete-Time Periodic Signals
      • 4.3.2 Power Density Spectrum of Periodic Signals
      • 4.3.3 The Fourier Transform of Discrete-Time Aperiodic Signals
      • 4.3.4 Convergence of the Fourier Transform
      • 4.3.5 Energy Density Spectrum of Aperiodic Signals
      • 4.3.6 Relationship of the Fourier Transform to the z-Transform
      • 4.3.7 The Cepstrum
      • 4.3.8 The Fourier Transform of Signals with Poles on the Unit Circle
      • 4.3.9 Frequency-Domain Classification of Signals: The Concept of Bandwidth
      • 4.3.10 The Frequency Ranges of Some Natural Signals
    • 4.4 Frequency-Domain and Time-Domain Signal Properties
    • 4.5 Properties of the Fourier Transform for Discrete-Time Signals
      • 4.5.1 Symmetry Properties of the Fourier Transform
      • 4.5.2 Fourier Transform Theorems and Properties
    • 4.6 Summary
    • Problems
    • Computer Problems
  5. Frequency-Domain Analysis of LTI Systems
    • 5.1 Frequency-Domain Characteristics of Linear Time-Invariant Systems
      • 5.1.1 Response to Complex Exponential and Sinusoidal Signals: The Frequency Response Function
      • 5.1.2 Steady-State and Transient Response to Sinusoidal Input Signals
      • 5.1.3 Steady-State Response to Periodic Input Signals
      • 5.1.4 Steady-State Response to Aperiodic Input Signals
    • 5.2 Frequency Response of LTI Systems
      • 5.2.1 Frequency Response of a System with a Rational System Function
      • 5.2.2 Computation of the Frequency Response Function
    • 5.3 Correlation Functions and Spectra at the Output of LTI Systems
    • 5.4 Linear Time-Invariant Systems as Frequency-Selective Filters
      • 5.4.1 Ideal Filter Characteristics
      • 5.4.2 Lowpass, Highpass, and Bandpass Filters
      • 5.4.3 Digital Resonators
      • 5.4.4 Notch Filters
      • 5.4.5 Comb Filters
      • 5.4.6 Reverberation Filters
      • 5.4.7 All-Pass Filters
      • 5.4.8 Digital Sinusoidal Oscillators
    • 5.5 Inverse Systems and Deconvolution
      • 5.5.1 Invertibility of Linear Time-Invariant Systems
      • 5.5.2 Minimum-Phase, Maximum-Phase, and Mixed-Phase Systems
      • 5.5.3 System Identification and Deconvolution
      • 5.5.4 Homomorphic Deconvolution
    • 5.6 Summary
    • Problems
    • Computer Problems
  6. Sampling and Reconstruction of Signals
    • 6.1 Ideal Sampling and Reconstruction of Continuous-Time Signals
    • 6.2 Discrete-Time Processing of Continuous-Time Signals
    • 6.3 Sampling and Reconstruction of Continuous-Time Bandpass Signals
      • 6.3.1 Uniform or First-Order Sampling
      • 6.3.2 Interleaved or Nonuniform Second-Order Sampling
      • 6.3.3 Bandpass Signal Representations
      • 6.3.4 Sampling Using Bandpass Signal Representations
    • 6.4 Sampling of Discrete-Time Signals
      • 6.4.1 Sampling and Interpolation of Discrete-Time Signals
      • 6.4.2 Representation and Sampling of Bandpass Discrete-Time Signals
    • 6.5 Analog-to-Digital and Digital-to-Analog Converters
      • 6.5.1 Analog-to-Digital Converters
      • 6.5.2 Quantization and Coding
      • 6.5.3 Analysis of Quantization Errors
      • 6.5.4 Digital-to-Analog Converters
    • 6.6 Oversampling A/D and D/A Converters
      • 6.6.1 Oversampling A/D Converters
      • 6.6.2 Oversampling D/A Converters
    • 6.7 Summary
    • Problems
    • Computer Problems
  7. The Discrete Fourier Transform: Its Properties and Applications
    • 7.1 Frequency-Domain Sampling: The Discrete Fourier Transform
      • 7.1.1 Frequency-Domain Sampling and Reconstruction of Discrete-Time Signals
      • 7.1.2 The Discrete Fourier Transform (DFT)
      • 7.1.3 The DFT as a Linear Transformation
      • 7.1.4 Relationship of the DFT to Other Transforms
    • 7.2 Properties of the DFT
      • 7.2.1 Periodicity, Linearity, and Symmetry Properties
      • 7.2.2 Multiplication of Two DFTs and Circular Convolution
      • 7.2.3 Additional DFT Properties
    • 7.3 Linear Filtering Methods Based on the DFT
      • 7.3.1 Use of the DFT in Linear Filtering
      • 7.3.2 Filtering of Long Data Sequences
    • 7.4 Frequency Analysis of Signals Using the DFT
    • 7.5 The Short-Time Fourier Transform
    • 7.6 The Discrete Cosine Transform
      • 7.6.1 Forward DCT
      • 7.6.2 Inverse DCT
      • 7.6.3 DCT as an Orthogonal Transform
    • 7.7 Summary
    • Problems
    • Computer Problems
  8. Efficient Computation of the DFT: Fast Fourier Transform Algorithms
    • 8.1 Efficient Computation of the DFT: FFT Algorithms
      • 8.1.1 Direct Computation of the DFT
      • 8.1.2 Divide-and-Conquer Approach to Computation of the DFT
      • 8.1.3 Radix-2 FFT Algorithms
      • 8.1.4 Radix-4 FFT Algorithms
      • 8.1.5 Split-Radix FFT Algorithms
      • 8.1.6 Implementation of FFT Algorithms
      • 8.1.7 Sparse FFT Algorithm
    • 8.2 Applications of FFT Algorithms
      • 8.2.1 Efficient Computation of the DFT of Two Real Sequences
      • 8.2.2 Efficient Computation of the DFT of a 2N-Point Real Sequence
      • 8.2.3 Use of the FFT Algorithm in Linear Filtering and Correlation
    • 8.3 A Linear Filtering Approach to Computation of the DFT
      • 8.3.1 The Goertzel Algorithm
      • 8.3.2 The Chirp-z Transform Algorithm
    • 8.4 Quantization Effects in the Computation of the DFT
      • 8.4.1 Quantization Errors in the Direct Computation of the DFT
      • 8.4.2 Quantization Errors in FFT Algorithms
    • 8.5 Summary
    • Problems
    • Computer Problems
  9. Implementation of Discrete-Time Systems
    • 9.1 Structures for the Realization of Discrete-Time Systems
    • 9.2 Structures for FIR Systems
      • 9.2.1 Direct-Form Structure
      • 9.2.2 Cascade-Form Structures
      • 9.2.3 Frequency-Sampling Structures
      • 9.2.4 Lattice Structure
    • 9.3 Structures for IIR Systems
      • 9.3.1 Direct-Form Structures
      • 9.3.2 Signal Flow Graphs and Transposed Structures
      • 9.3.3 Cascade-Form Structures
      • 9.3.4 Parallel-Form Structures
      • 9.3.5 Lattice and Lattice-Ladder Structures for IIR Systems
    • 9.4 Representation of Numbers
      • 9.4.1 Fixed-Point Representation of Numbers
      • 9.4.2 Binary Floating-Point Representation of Numbers
      • 9.4.3 Errors Resulting from Rounding and Truncation
    • 9.5 Quantization of Filter Coefficients
      • 9.5.1 Analysis of Sensitivity to Quantization of Filter Coefficients
      • 9.5.2 Quantization of Coefficients in FIR Filters
    • 9.6 Round-Off Effects in Digital Filters
      • 9.6.1 Limit-Cycle Oscillations in Recursive Systems
      • 9.6.2 Scaling to Prevent Overflow
      • 9.6.3 Statistical Characterization of Quantization Effects in Fixed-Point Realizations of Digital Filters
    • 9.7 Summary
    • Problems
    • Computer Problems
  10. Design of Digital Filters
    • 10.1 General Considerations
      • 10.1.1 Causality and Its Implications
      • 10.1.2 Characteristics of Practical Frequency-Selective Filters
    • 10.2 Design of FIR Filters
      • 10.2.1 Symmetric and Antisymmetric FIR Filters
      • 10.2.2 Design of Linear-Phase FIR Filters Using Windows
      • 10.2.3 Design of Linear-Phase FIR Filters by the Frequency-Sampling Method
      • 10.2.4 Design of Optimum Equiripple Linear-Phase FIR Filters
      • 10.2.5 Design of FIR Differentiators
      • 10.2.6 Design of Hilbert Transformers
      • 10.2.7 Comparison of Design Methods for Linear-Phase FIR Filters
    • 10.3 Design of IIR Filters From Analog Filters
      • 10.3.1 IIR Filter Design by Approximation of Derivatives
      • 10.3.2 IIR Filter Design by Impulse Invariance
      • 10.3.3 IIR Filter Design by the Bilinear Transformation
      • 10.3.4 Characteristics of Commonly Used Analog Filters
      • 10.3.5 Some Examples of Digital Filter Designs Based on the Bilinear Transformation
    • 10.4 Frequency Transformations
      • 10.4.1 Frequency Transformations in the Analog Domain
      • 10.4.2 Frequency Transformations in the Digital Domain
    • 10.5 Summary
    • Problems
    • Computer Problems
  11. Multirate Digital Signal Processing
    • 11.1 Introduction
    • 11.2 Decimation by a Factor D
    • 11.3 Interpolation by a Factor I
    • 11.4 Sampling Rate Conversion by a Rational Factor I/D
    • 11.5 Implementation of Sampling Rate Conversion
      • 11.5.1 Polyphase Filter Structures
      • 11.5.2 Interchange of Filters and Downsamplers/Upsamplers
      • 11.5.3 Sampling Rate Conversion with Cascaded Integrator Comb Filters
      • 11.5.4 Polyphase Structures for Decimation and Interpolation Filters
      • 11.5.5 Structures for Rational Sampling Rate Conversion
    • 11.6 Multistage Implementation of Sampling Rate Conversion
    • 11.7 Sampling Rate Conversion of Bandpass Signals
    • 11.8 Sampling Rate Conversion by an Arbitrary Factor
      • 11.8.1 Arbitrary Resampling with Polyphase Interpolators
      • 11.8.2 Arbitrary Resampling with Farrow Filter Structures
    • 11.9 Applications of Multirate Signal Processing
      • 11.9.1 Design of Phase Shifters
      • 11.9.2 Interfacing of Digital Systems with Different Sampling Rates
      • 11.9.3 Implementation of Narrowband Lowpass Filters
      • 11.9.4 Subband Coding of Speech Signals
    • 11.10 Summary
    • Problems
    • Computer Problems
  12. Multirate Digital Filter Banks and Wavelets
    • 12.1 Multirate Digital Filter Banks
      • 12.1.1 DFT Filter Banks
      • 12.1.2 Polyphase Structure of the Uniform DFT Filter Bank
      • 12.1.3 An Alternative Structure of the Uniform DFT Filter Bank
    • 12.2 Two-Channel Quadrature Mirror Filter Bank
      • 12.2.1 Elimination of Aliasing
      • 12.2.2 Polyphase Structure of the QMF Bank
      • 12.2.3 Condition for Perfect Reconstruction
      • 12.2.4 Linear Phase FIR QMF Bank
      • 12.2.5 IIR QMF Bank
      • 12.2.6 Perfect Reconstruction in Two-Channel FIR QMF Bank
      • 12.2.7 Two-Channel Paraunitary QMF Bank
      • 12.2.8 Orthogonal and Biorthogonal Two-channel FIR Filter Banks
      • 12.2.9 Two-Channel QMF Banks in Subband Coding
    • 12.3 M-Channel Filter Banks
      • 12.3.1 Polyphase Structure for the M-Channel Filter Bank
      • 12.3.2 M-Channel Paraunitary Filter Banks
    • 12.4 Wavelets and Wavelet Transforms
      • 12.4.1 Ideal Bandpass Wavelet Decomposition
      • 12.4.2 Signal Spaces and Wavelets
      • 12.4.3 Multiresolution Analysis and Wavelets
      • 12.4.4 The Discrete Wavelet Transform
    • 12.5 From Wavelets to Filter Banks
      • 12.5.1 Dilation Equations
      • 12.5.2 Orthogonality Conditions
      • 12.5.3 Implications of Orthogonality and Dilation Equations
    • 12.6 From Filter Banks to Wavelets
    • 12.7 Regular Filters and Wavelets
    • 12.8 Summary
    • Problems
    • Computer Problems
  13. Linear Prediction and Optimum Linear Filters
    • 13.1 Random Signals, Correlation Functions, and Power Spectra
      • 13.1.1 Random Processes
      • 13.1.2 Stationary Random Processes
      • 13.1.3 Statistical (Ensemble) Averages
      • 13.1.4 Statistical Averages for Joint Random Processes
      • 13.1.5 Power Density Spectrum
      • 13.1.6 Discrete-Time Random Signals
      • 13.1.7 Time Averages for a Discrete-Time Random Process
      • 13.1.8 Mean-Ergodic Process
      • 13.1.9 Correlation-Ergodic Processes
      • 13.1.10 Correlation Functions and Power Spectra for Random Input Signals to LTI Systems
    • 13.2 Innovations Representation of a Stationary Random Process
      • 13.2.1 Rational Power Spectra
      • 13.2.2 Relationships Between the Filter Parameters and the Autocorrelation Sequence
    • 13.3 Forward and Backward Linear Prediction
      • 13.3.1 Forward Linear Prediction
      • 13.3.2 Backward Linear Prediction
      • 13.3.3 The Optimum Reflection Coefficients for the Lattice Forward and Backward Predictors
      • 13.3.4 Relationship of an AR Process to Linear Prediction
    • 13.4 Solution of the Normal Equations
      • 13.4.1 The Levinson-Durbin Algorithm
    • 13.5 Properties of the Linear Prediction-Error Filters
    • 13.6 AR Lattice and ARMA Lattice-Ladder Filters
      • 13.6.1 AR Lattice Structure
      • 13.6.2 ARMA Processes and Lattice-Ladder Filters
    • 13.7 Wiener Filters for Filtering and Prediction
      • 13.7.1 FIR Wiener Filter
      • 13.7.2 Orthogonality Principle in Linear Mean-Square Estimation
      • 13.7.3 IIR Wiener Filter
      • 13.7.4 Noncausal Wiener Filter
    • 13.8 Summary
    • Problems
    • Computer Problems
  14. Adaptive Filters
    • 14.1 Applications of Adaptive Filters
      • 14.1.1 System Identification or System Modeling
      • 14.1.2 Adaptive Channel Equalization
      • 14.1.3 Suppression of Narrowband Interference in a Wideband Signal
      • 14.1.4 Adaptive Line Enhancer
      • 14.1.5 Adaptive Noise Cancelling
      • 14.1.6 Adaptive Arrays
    • 14.2 Adaptive Direct-Form FIR Filters - The LMS Algorithm
      • 14.2.1 Minimum Mean-Square-Error Criterion
      • 14.2.2 The LMS Algorithm
      • 14.2.3 Related Stochastic Gradient Algorithms
      • 14.2.4 Properties of the LMS Algorithm
    • 14.3 Adaptive Direct-Form Filters - RLS Algorithms
      • 14.3.1 RLS Algorithm
      • 14.3.2 The LDU Factorization and Square-Root Algorithms
      • 14.3.3 Fast RLS Algorithms
      • 14.3.4 Properties of the Direct-Form RLS Algorithms
    • 14.4 Adaptive Lattice-Ladder Filters
      • 14.4.1 Recursive Least-Squares Lattice-Ladder Algorithms
      • 14.4.2 Other Lattice Algorithms
      • 14.4.3 Properties of Lattice-Ladder Algorithms
    • 14.5 Stability and Robustness of Adaptive Filter Algorithms
    • 14.6 Summary
    • Problems
    • Computer Problems
  15. Power Spectrum Estimation
    • 15.1 Estimation of Spectra from Finite-Duration Observations of Signals
      • 15.1.1 Computation of the Energy Density Spectrum
      • 15.1.2 Estimation of the Autocorrelation and Power Spectrum of Random Signals: The Periodogram
      • 15.1.3 The Use of the DFT in Power Spectrum Estimation
    • 15.2 Nonparametric Methods for Power Spectrum Estimation
      • 15.2.1 The Bartlett Method: Averaging Periodograms
      • 15.2.2 The Welch Method: Averaging Modified Periodograms
      • 15.2.3 The Blackman and Tukey Method: Smoothing the Periodogram
      • 15.2.4 Performance Characteristics of Nonparametric Power Spectrum Estimators
      • 15.2.5 Computational Requirements of Nonparametric Power Spectrum Estimates
    • 15.3 Parametric Methods for Power Spectrum Estimation
      • 15.3.1 Relationships Between the Autocorrelation and the Model Parameters
      • 15.3.2 The Yule-Walker Method for the AR Model Parameters
      • 15.3.3 The Burg Method for the AR Model Parameters
      • 15.3.4 Unconstrained Least-Squares Method for the AR Model Parameters
      • 15.3.5 Sequential Estimation Methods for the AR Model Parameters
      • 15.3.6 Selection of AR Model Order
      • 15.3.7 MA Model for Power Spectrum Estimation
      • 15.3.8 ARMA Model for Power Spectrum Estimation
      • 15.3.9 Some Experimental Results
    • 15.4 ARMA Model Parameter Estimation
    • 15.5 Filter Bank Methods
      • 15.5.1 Filter Bank Realization of the Periodogram
      • 15.5.2 Minimum Variance Spectral Estimates
    • 15.6 Eigenanalysis Algorithms for Spectrum Estimation
      • 15.6.1 Pisarenko Harmonic Decomposition Method
      • 15.6.2 Eigen-decomposition of the Autocorrelation Matrix for Sinusoids in White Noise
      • 15.6.3 MUSIC Algorithm
      • 15.6.4 ESPRIT Algorithm
      • 15.6.5 Order Selection Criteria
      • 15.6.6 Experimental Results
    • 15.7 Summary
    • Problems
    • Computer Problems
  A. Random Number Generators
  B. Tables of Transition Coefficients for the Design of Linear-Phase FIR Filters

References and Bibliography

Answers to Selected Problems

Index

About our authors

Known as a digital communications expert, inspiring educator and prolific writer, John G. Proakis has helped shape electrical engineering and digital communications programs and written textbooks that have influenced graduate students worldwide. Dr. Proakis earned a reputation for inspired teaching and supervision of students over an academic career that began in 1969 in the Electrical Engineering Department at Northeastern University, MA, USA. As chair of Northeastern's Department of Electrical and Computer Engineering, he helped transform the department from a teaching environment into a dynamic, research-active one, and he also served as associate dean and director of Northeastern's Graduate School of Engineering. Of his 10 textbooks on digital communication and signal processing, Digital Communications (McGraw Hill) is perhaps the best known. Considered the most influential resource on the topic and now in its 5th edition, it has educated generations of students and engineers in the fundamentals of the digital information age. His other influential textbooks include Introduction to Digital Signal Processing (Prentice Hall), Communication Systems Engineering (Prentice Hall) and Fundamentals of Communication Systems (Prentice Hall). Dr. Proakis has also expanded engineering education beyond theory to laboratory experiments and simulation techniques using computers and software; his textbooks in this area include Digital Signal Processing Using MATLAB (CL-Engineering) and Contemporary Communication Systems Using MATLAB and Simulink (Cengage Learning). Through these approachable books, he has helped expose students early to the MATLAB development and simulation tool they will likely need throughout their professional careers. Dr. Proakis also served as editor of the five-volume Wiley Encyclopedia of Telecommunications. An IEEE Life Fellow and recipient of the IEEE Signal Processing Society Education Award (2004), he is Professor Emeritus at Northeastern University and an Adjunct Professor at the University of California, San Diego, CA, USA.

Dr. Dimitris G. Manolakis, a senior staff member in the Applied Space Systems Group, joined Lincoln Laboratory at the Massachusetts Institute of Technology in 1999 and has combined an extensive research career with a commitment to education. His work has included the exploration and development of techniques in digital signal processing, adaptive filtering, array processing, pattern recognition and remote sensing. His recent research has focused on algorithms for hyperspectral target detection and on modeling of spatio-temporal count data from down-looking sensors. Throughout his career, Dr. Manolakis has been involved in educating future engineers. He has taught undergraduate and graduate courses at the University of Athens, where he earned a bachelor's degree in physics and a doctorate in electrical engineering; at Northeastern University, where he is an adjunct professor; and at Boston College and Worcester Polytechnic Institute. In addition, through an in-house technical education program, he conducts courses in digital and statistical signal processing and adaptive filtering that explain fundamental principles and concepts to Lincoln Laboratory staff members embarking on research in these areas. In 2013, Dr. Manolakis received an IEEE Signal Processing Society Education Award for his dedication to advancing education through the development of curriculum materials, the publication of scholarly texts and teaching.

Dr. Manolakis is a prolific writer. He has authored or coauthored more than 135 articles on topics ranging from digital signal processing to hyperspectral remote sensing of chemical plumes to hyperspectral image processing for automatic target detection; these articles have been cited in almost 5000 scientific publications. In addition, he has coauthored 3 textbooks that are widely used in academia: Digital Signal Processing: Principles, Algorithms, and Applications (Prentice Hall, 2006, 4th ed.), which has been translated into 6 languages and cited 41,000 times; Statistical and Adaptive Signal Processing (Artech House, 2005) and Applied Digital Signal Processing (Cambridge University Press, 2011).


Pearson+

All in one place. Pearson+ offers instant access to eTextbooks, videos and study tools in one intuitive interface. Students choose how they learn best with enhanced search, audio and flashcards. The Pearson+ app lets them read where life takes them, no wi-fi needed. Students can access Pearson+ through a subscription or their MyLab or Mastering course.


Pearson eTextbook: What’s on the inside just might surprise you

They say you can’t judge a book by its cover. It’s the same with your students. Meet each one right where they are with an engaging, interactive, personalized learning experience that goes beyond the textbook to fit any schedule, any budget, and any lifestyle.