Time Series Analysis: Univariate and Multivariate Methods (Classic Version), 2nd edition

Published by Pearson (March 14, 2018) © 2019

  • William W.S. Wei

For courses in Time Series Analysis.

A modern classic

Time Series Analysis, 2nd Edition is a thorough introduction to both time-domain and frequency-domain analysis of univariate and multivariate time series, with coverage of the most recently developed techniques in the field. Its broad coverage of methodology makes it a useful reference for researchers and practitioners in the applied sciences who analyze and model time series data.

This title is part of the Pearson Modern Classics series. Pearson Modern Classics are acclaimed titles at a value price.

Hallmark features of this title

  • Theory and applications are featured in a balanced presentation throughout.
  • Abundant examples show the operational details and purpose of a variety of univariate and multivariate time series methods.
  • Numerous figures, tables and real-life time series data sets illustrate the models and methods useful for analyzing, modeling and forecasting data collected sequentially in time.

New and updated features of this title

  • Exercises are now provided at the end of each chapter. They include theoretical questions as well as numerical and data-analysis problems.
  • All methods are illustrated with examples drawn from many historical and recently updated real-life data sets; the data sets in Chapters 6, 7 and 8, for example, have all been updated and reanalyzed.
  • To help a wide variety of readers, appendices on multivariate linear regression models and canonical correlation analysis were added to support the treatment of vector time series models in Chapter 16 and state space models in Chapter 18.
  • For motivation and clarification, many illustrations and examples were added throughout the chapters.
  • New coverage of the following recent important advances in the field:
    • Model building in the presence of outliers
    • Unit root tests for both non-seasonal and seasonal models
    • Time series regression and GARCH models (these two advances are previewed in the brief sketch following this list)
    • Cointegration and common factors
    • Equivalent representations of a vector ARMA model
    • Long memory and nonlinear processes
    • Aggregation and various time series tests
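
For readers curious how two of the advances above look in practice, the sketch below runs an augmented Dickey-Fuller unit root test and fits a GARCH(1, 1) model on simulated data. The book itself is software-neutral, so this is purely a hypothetical illustration: the choice of Python and the third-party statsmodels and arch packages is an assumption of this sketch, not something prescribed by the text.

    # A minimal sketch (assumed tooling, not from the book): a unit root test
    # and a GARCH(1,1) fit using the statsmodels and arch packages.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller
    from arch import arch_model

    rng = np.random.default_rng(0)

    # A random walk has a unit root by construction.
    random_walk = np.cumsum(rng.standard_normal(500))

    # Augmented Dickey-Fuller test: the null hypothesis is that a unit root
    # is present, so a large p-value means nonstationarity is not rejected.
    adf_stat, p_value, *_ = adfuller(random_walk, regression="c")
    print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")

    # Returns with a volatility regime shift, then a GARCH(1,1) fit: the
    # conditional variance depends on the previous squared shock (alpha)
    # and the previous conditional variance (beta).
    returns = rng.standard_normal(1000) * np.repeat([0.5, 2.0], 500)
    result = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    print(result.params)  # estimates of mu, omega, alpha[1], beta[1]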

Table of Contents

  1. Overview
    • 1.1 Introduction
    • 1.2 Examples and Scope of This Book
  2. Fundamental Concepts
    • 2.1 Stochastic Processes
    • 2.2 The Autocovariance and Autocorrelation Functions
    • 2.3 The Partial Autocorrelation Function
    • 2.4 White Noise Processes
    • 2.5 Estimation of the Mean, Autocovariances, and Autocorrelations
      • 2.5.1 Sample Mean
      • 2.5.2 Sample Autocovariance Function
      • 2.5.3 Sample Autocorrelation Function
      • 2.5.4 Sample Partial Autocorrelation Function
    • 2.6 Moving Average and Autoregressive Representations of Time Series Processes
    • 2.7 Linear Difference Equations
  3. Stationary Time Series Models
    • 3.1 Autoregressive Processes
      • 3.1.1 The First-Order Autoregressive AR(1) Process
      • 3.1.2 The Second-Order Autoregressive AR(2) Process
      • 3.1.3 The General pth-Order Autoregressive AR(p) Process
    • 3.2 Moving Average Processes
      • 3.2.1 The First-Order Moving Average MA(1) Process
      • 3.2.2 The Second-Order Moving Average MA(2) Process
      • 3.2.3 The General qth-Order Moving Average MA(q) Process
    • 3.3 The Dual Relationship Between AR(p) and MA(q) Processes
    • 3.4 Autoregressive Moving Average ARMA(p, q) Processes
      • 3.4.1 The General Mixed ARMA(p, q) Process
      • 3.4.2 The ARMA(1, 1) Process
  4. Nonstationary Time Series Models
    • 4.1 Nonstationarity in the Mean
      • 4.1.1 Deterministic Trend Models
      • 4.1.2 Stochastic Trend Models and Differencing
    • 4.2 Autoregressive Integrated Moving Average (ARIMA) Models
      • 4.2.1 The General ARIMA Model
      • 4.2.2 The Random Walk Model
      • 4.2.3 The ARIMA(0, 1, 1) or IMA(1, 1) Model
    • 4.3 Nonstationarity in the Variance and the Autocovariance
      • 4.3.1 Variance and Autocovariance of the ARIMA Models
      • 4.3.2 Variance Stabilizing Transformations
  5. Forecasting
    • 5.1 Introduction
    • 5.2 Minimum Mean Square Error Forecasts
      • 5.2.1 Minimum Mean Square Error Forecasts for ARMA Models
      • 5.2.2 Minimum Mean Square Error Forecasts for ARIMA Models
    • 5.3 Computation of Forecasts
    • 5.4 The ARIMA Forecast as a Weighted Average of Previous Observations
    • 5.5 Updating Forecasts
    • 5.6 Eventual Forecast Functions
    • 5.7 A Numerical Example
  6. Model Identification
    • 6.1 Steps for Model Identification
    • 6.2 Empirical Examples
    • 6.3 The Inverse Autocorrelation Function (IACF)
    • 6.4 Extended Sample Autocorrelation Function and Other Identification Procedures
      • 6.4.1 The Extended Sample Autocorrelation Function (ESACF)
      • 6.4.2 Other Identification Procedures
  7. Parameter Estimation, Diagnostic Checking, and Model Selection
    • 7.1 The Method of Moments
    • 7.2 Maximum Likelihood Method
      • 7.2.1 Conditional Maximum Likelihood Estimation
      • 7.2.2 Unconditional Maximum Likelihood Estimation and Backcasting Method
      • 7.2.3 Exact Likelihood Functions
    • 7.3 Nonlinear Estimation
    • 7.4 Ordinary Least Squares (OLS) Estimation in Time Series Analysis
    • 7.5 Diagnostic Checking
    • 7.6 Empirical Examples for Series W1–W7
    • 7.7 Model Selection Criteria
  8. Seasonal Time Series Models
    • 8.1 General Concepts
    • 8.2 Traditional Methods
      • 8.2.1 Regression Method
      • 8.2.2 Moving Average Method
    • 8.3 Seasonal ARIMA Models
    • 8.4 Empirical Examples
  9. Testing for a Unit Root
    • 9.1 Introduction
    • 9.2 Some Useful Limiting Distributions
    • 9.3 Testing for a Unit Root in the AR(1) Model
      • 9.3.1 Testing the AR(1) Model without a Constant Term
      • 9.3.2 Testing the AR(1) Model with a Constant Term
      • 9.3.3 Testing the AR(1) Model with a Linear Time Trend
    • 9.4 Testing for a Unit Root in a More General Model
    • 9.5 Testing for a Unit Root in Seasonal Time Series Models
      • 9.5.1 Testing the Simple Zero Mean Seasonal Model
      • 9.5.2 Testing the General Multiplicative Zero Mean Seasonal Model
  10. Intervention Analysis and Outlier Detection
    • 10.1 Intervention Models
    • 10.2 Examples of Intervention Analysis
    • 10.3 Time Series Outliers
      • 10.3.1 Additive and Innovational Outliers
      • 10.3.2 Estimation of the Outlier Effect When the Timing of the Outlier Is Known
      • 10.3.3 Detection of Outliers Using an Iterative Procedure
    • 10.4 Examples of Outlier Analysis
    • 10.5 Model Identification in the Presence of Outliers
  11. Fourier Analysis
    • 11.1 General Concepts
    • 11.2 Orthogonal Functions
    • 11.3 Fourier Representation of Finite Sequences
    • 11.4 Fourier Representation of Periodic Sequences
    • 11.5 Fourier Representation of Nonperiodic Sequences: The Discrete-Time Fourier Transform
    • 11.6 Fourier Representation of Continuous-Time Functions
      • 11.6.1 Fourier Representation of Periodic Functions
      • 11.6.2 Fourier Representation of Nonperiodic Functions: The Continuous-Time Fourier Transform
    • 11.7 The Fast Fourier Transform
  12. Spectral Theory of Stationary Processes
    • 12.1 The Spectrum
      • 12.1.1 The Spectrum and Its Properties
      • 12.1.2 The Spectral Representation of Autocovariance Functions: The Spectral Distribution Function
      • 12.1.3 Wold’s Decomposition of a Stationary Process
      • 12.1.4 The Spectral Representation of Stationary Processes
    • 12.2 The Spectrum of Some Common Processes
      • 12.2.1 The Spectrum and the Autocovariance Generating Function
      • 12.2.2 The Spectrum of ARMA Models
      • 12.2.3 The Spectrum of the Sum of Two Independent Processes
      • 12.2.4 The Spectrum of Seasonal Models
    • 12.3 The Spectrum of Linear Filters
      • 12.3.1 The Filter Function
      • 12.3.2 Effect of Moving Average
      • 12.3.3 Effect of Differencing
    • 12.4 Aliasing
  13. Estimation of the Spectrum
    • 13.1 Periodogram Analysis
      • 13.1.1 The Periodogram
      • 13.1.2 Sampling Properties of the Periodogram
      • 13.1.3 Tests for Hidden Periodic Components
    • 13.2 The Sample Spectrum
    • 13.3 The Smoothed Spectrum
      • 13.3.1 Smoothing in the Frequency Domain: The Spectral Window
      • 13.3.2 Smoothing in the Time Domain: The Lag Window
      • 13.3.3 Some Commonly Used Windows
      • 13.3.4 Approximate Confidence Intervals for Spectral Ordinates
    • 13.4 ARMA Spectral Estimation
  14. Transfer Function Models
    • 14.1 Single-Input Transfer Function Models
      • 14.1.1 General Concepts
      • 14.1.2 Some Typical Impulse Response Functions
    • 14.2 The Cross-Correlation Function and Transfer Function Models
      • 14.2.1 The Cross-Correlation Function (CCF)
      • 14.2.2 The Relationship between the Cross-Correlation Function and the Transfer Function
    • 14.3 Construction of Transfer Function Models
      • 14.3.1 Sample Cross-Correlation Function
      • 14.3.2 Identification of Transfer Function Models
      • 14.3.3 Estimation of Transfer Function Models
      • 14.3.4 Diagnostic Checking of Transfer Function Models
      • 14.3.5 An Empirical Example
    • 14.4 Forecasting Using Transfer Function Models
      • 14.4.1 Minimum Mean Square Error Forecasts for Stationary Input and Output Series
      • 14.4.2 Minimum Mean Square Error Forecasts for Nonstationary Input and Output Series
      • 14.4.3 An Example
    • 14.5 Bivariate Frequency-Domain Analysis
      • 14.5.1 Cross-Covariance Generating Functions and the Cross-Spectrum
      • 14.5.2 Interpretation of the Cross-Spectral Functions
      • 14.5.3 Examples
      • 14.5.4 Estimation of the Cross-Spectrum
    • 14.6 The Cross-Spectrum and Transfer Function Models
      • 14.6.1 Construction of Transfer Function Models through Cross-Spectrum Analysis
      • 14.6.2 Cross-Spectral Functions of Transfer Function Models
    • 14.7 Multiple-Input Transfer Function Models
  15. Time Series Regression and GARCH Models
    • 15.1 Regression with Autocorrelated Errors
    • 15.2 ARCH and GARCH Models
    • 15.3 Estimation of GARCH Models
      • 15.3.1 Maximum Likelihood Estimation
      • 15.3.2 Iterative Estimation
    • 15.4 Computation of Forecast Error Variance
    • 15.5 Illustrative Examples
  16. Vector Time Series Models
    • 16.1 Covariance and Correlation Matrix Functions
    • 16.2 Moving Average and Autoregressive Representations of Vector Processes
    • 16.3 The Vector Autoregressive Moving Average Process
      • 16.3.1 Covariance Matrix Function for the Vector AR(1) Model
      • 16.3.2 Vector AR(p) Models
      • 16.3.3 Vector MA(1) Models
      • 16.3.4 Vector MA(q) Models
      • 16.3.5 Vector ARMA(1, 1) Models
    • 16.4 Nonstationary Vector Autoregressive Moving Average Models
    • 16.5 Identification of Vector Time Series Models
      • 16.5.1 Sample Correlation Matrix Function
      • 16.5.2 Partial Autoregression Matrices
      • 16.5.3 Partial Lag Correlation Matrix Function
    • 16.6 Model Fitting and Forecasting
    • 16.7 An Empirical Example
      • 16.7.1 Model Identification
      • 16.7.2 Parameter Estimation
      • 16.7.3 Diagnostic Checking
      • 16.7.4 Forecasting
      • 16.7.5 Further Remarks
    • 16.8 Spectral Properties of Vector Processes
    • Supplement 16.A Multivariate Linear Regression Models
  17. More on Vector Time Series
    • 17.1 Unit Roots and Cointegration in Vector Processes
      • 17.1.1 Representations of Nonstationary Cointegrated Processes
      • 17.1.2 Decomposition of Z_t
      • 17.1.3 Testing and Estimating Cointegration
    • 17.2 Partial Process and Partial Process Correlation Matrices
      • 17.2.1 Covariance Matrix Generating Function
      • 17.2.2 Partial Covariance Matrix Generating Function
      • 17.2.3 Partial Process Sample Correlation Matrix Functions
      • 17.2.4 An Empirical Example: The U.S. Hog Data
    • 17.3 Equivalent Representations of a Vector ARMA Model
      • 17.3.1 Finite-Order Representations of a Vector Time Series Process
      • 17.3.2 Some Implications
  18. State Space Models and the Kalman Filter
    • 18.1 State Space Representation
    • 18.2 The Relationship between State Space and ARMA Models
    • 18.3 State Space Model Fitting and Canonical Correlation Analysis
    • 18.4 Empirical Examples
    • 18.5 The Kalman Filter and Its Applications
    • Supplement 18.A Canonical Correlations
  19. Long Memory and Nonlinear Processes
    • 19.1 Long Memory Processes and Fractional Differencing
      • 19.1.1 Fractionally Integrated ARMA Models and Their ACF
      • 19.1.2 Practical Implications of the ARFIMA Processes
      • 19.1.3 Estimation of the Fractional Difference
    • 19.2 Nonlinear Processes
      • 19.2.1 Cumulants, Polyspectrum, and Tests for Linearity and Normality
      • 19.2.2 Some Nonlinear Time Series Models
    • 19.3 Threshold Autoregressive Models
      • 19.3.1 Tests for TAR Models
      • 19.3.2 Modeling TAR Models
  20. Aggregation and Systematic Sampling in Time Series
    • 20.1 Temporal Aggregation of the ARIMA Process
      • 20.1.1 The Relationship of Autocovariances between the Nonaggregate and Aggregate Series
      • 20.1.2 Temporal Aggregation of the IMA(d, q) Process
      • 20.1.3 Temporal Aggregation of the AR(p) Process
      • 20.1.4 Temporal Aggregation of the ARIMA(p, d, q) Process
      • 20.1.5 The Limiting Behavior of Time Series Aggregates
    • 20.2 The Effects of Aggregation on Forecasting and Parameter Estimation
      • 20.2.1 Hilbert Space
      • 20.2.2 The Application of Hilbert Space in Forecasting
      • 20.2.3 The Effect of Temporal Aggregation on Forecasting
      • 20.2.4 Information Loss Due to Aggregation in Parameter Estimation
    • 20.3 Systematic Sampling of the ARIMA Process
    • 20.4 The Effects of Systematic Sampling and Temporal Aggregation on Causality
      • 20.4.1 Decomposition of Linear Relationship between Two Time Series
      • 20.4.2 An Illustrative Underlying Model
      • 20.4.3 The Effects of Systematic Sampling and Temporal Aggregation on Causality
    • 20.5 The Effects of Aggregation on Testing for Linearity and Normality
      • 20.5.1 Testing for Linearity and Normality
      • 20.5.2 The Effects of Temporal Aggregation on Testing for Linearity and Normality
    • 20.6 The Effects of Aggregation on Testing for a Unit Root
      • 20.6.1 The Model of Aggregate Series
      • 20.6.2 The Effects of Aggregation on the Distribution of the Test Statistics
      • 20.6.3 The Effects of Aggregation on the Significance Level and the Power of the Test
      • 20.6.4 Examples
      • 20.6.5 General Cases and Concluding Remarks
    • 20.7 Further Comments

References

Appendix

  • Time Series Data Used for Illustrations
  • Statistical Tables

Author Index

Subject Index
