STAT 804

Lecture 16 Notes

Distribution theory for sample autocovariances

The simplest statistic to consider is

\begin{displaymath}\frac{1}{T} \sum X_t X_{t+k}
\end{displaymath}

where the sum extends over those t for which the data are available. If the series has mean 0 then the expected value of this statistic is simply

\begin{displaymath}\frac{T-k}{T} C_X(k)
\end{displaymath}

which, for T large compared to k, differs negligibly from $C_X(k)$. To compute the variance we begin with the second moment, which is

\begin{displaymath}\frac{1}{T^2} \sum_s\sum_t {\rm E}(X_sX_{s+k}X_t X_{t+k})
\end{displaymath}

The expectations in question involve fourth order product moments of X and depend on the distribution of the X's, not just on $C_X$. However, for the interesting case of white noise, we can compute the expected value. For k > 0 it suffices to consider s < t and s = t, since the s > t cases follow by swapping s and t in the s < t case. For s < t the variable $X_s$ is independent of all three of $X_{s+k}$, $X_t$ and $X_{t+k}$, so the expectation factors into a product containing the factor ${\rm E}(X_s)=0$. For s = t we get ${\rm E}(X_s^2){\rm E}(X_{s+k}^2)=\sigma^4$, and so the second moment is

\begin{displaymath}\frac{T-k}{T^2}\sigma^4
\end{displaymath}

This is also the variance since, for k > 0 and white noise, $C_X(k)=0$ and hence the statistic has mean 0.
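
For T large relative to k this variance is essentially $\sigma^4/T$, so under white noise the standard deviation of the lag k sample autocovariance is approximately

\begin{displaymath}\sqrt{{\rm Var}({\hat C}_X(k))} \approx \sigma^2/\sqrt{T} \, .
\end{displaymath}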

For k=0 and $s \neq t$ the expectation is simply $\sigma^4$, while for s=t we get ${\rm E}(X_t^4)\equiv \mu_4$. Thus the variance of the sample variance (when the mean is known to be 0) is

\begin{displaymath}\frac{T-1}{T} \sigma^4 + \mu_4/T - \sigma^4 = (\mu_4-\sigma^4)/T \, .
\end{displaymath}

For the normal distribution the fourth moment $\mu_4$ is given simply by $3\sigma^4$.
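
Substituting this value, for normal white noise the variance of the sample variance reduces to

\begin{displaymath}(\mu_4 - \sigma^4)/T = 2\sigma^4/T \, .
\end{displaymath}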

Having computed the variance it is usual to look at the large sample distribution theory. For k=0 the usual central limit theorem applies to $\sum X_t^2 / T$ (in the case of white noise) to prove that

\begin{displaymath}\sqrt{T}({\hat C}_X(0) -\sigma^2)/\sqrt{\mu_4-\sigma^4} \to N(0,1) \, .
\end{displaymath}

The presence of $\mu_4$ in the formula shows that the approximation, and any standard errors based on it, is quite sensitive to the assumption of normality.
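
For instance (an illustrative non-normal choice, not from the notes), centred exponential innovations have $\mu_4 = 9\sigma^4$, so the variance of ${\hat C}_X(0)$ becomes

\begin{displaymath}(\mu_4-\sigma^4)/T = 8\sigma^4/T \, ,
\end{displaymath}

four times the value $2\sigma^4/T$ obtained under normality.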

For k > 0 the theorem needed is called the m-dependent central limit theorem (the summands $X_tX_{t+k}$ form a k-dependent series); it shows that

\begin{displaymath}\sqrt{T} {\hat C}_X(k)/\sigma^2 \to N(0,1) \, .
\end{displaymath}

In each of these cases the assertion is simply that the statistic in question divided by its standard deviation has an approximate normal distribution.

The sample autocorrelation at lag k is

\begin{displaymath}{\hat\rho}(k) = {\hat C}_X(k)/{\hat C}_X(0) \, .
\end{displaymath}

For k > 0, still for white noise, we can apply Slutsky's theorem to conclude that

\begin{displaymath}\sqrt{T}\,{\hat\rho}(k) = \sqrt{T}\,
{\hat C}_X(k)/{\hat C}_X(0) \to N(0,1) \, .
\end{displaymath}

This justifies drawing lines at $\pm 2/\sqrt{T}$ to carry out a test, at approximately the 5% level, of the hypothesis that the X series is white noise based on the kth sample autocorrelation.
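
To make the band concrete, here is a small simulation sketch (plain Python with numpy; the helper sample_acf and the choices T=500, K=20 are mine, not part of the notes). It generates Gaussian white noise, computes the mean-corrected sample autocorrelations, and counts how many fall outside $\pm 2/\sqrt{T}$; for white noise roughly 5% of lags should.

\begin{verbatim}
import numpy as np

def sample_acf(x, max_lag):
    # Sample autocorrelations rho_hat(1), ..., rho_hat(max_lag),
    # computed after subtracting the sample mean.
    x = np.asarray(x, dtype=float)
    T = len(x)
    x = x - x.mean()
    c0 = np.sum(x * x) / T
    return np.array([np.sum(x[:T - k] * x[k:]) / T / c0
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
T, K = 500, 20
x = rng.standard_normal(T)        # white noise
rho = sample_acf(x, K)
band = 2.0 / np.sqrt(T)           # the +/- 2/sqrt(T) lines
print("lags outside the band:", int(np.sum(np.abs(rho) > band)), "of", K)
\end{verbatim}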

It is possible to verify that subtraction of $\bar X$ from the observations before computing the sample covariances does not change the large sample approximations, although it does affect the exact formulas for moments.

When the X series is actually not white noise the situation is more complicated. Consider as an example the model

\begin{displaymath}X_t = \phi X_{t-1} + \epsilon_t
\end{displaymath}

with $\epsilon$ being white noise. Taking

\begin{displaymath}{\hat C}_X(k) = \frac{1}{T} \sum_t X_t X_{t+k}
\end{displaymath}

we find, using the representation $X_t = \sum_{u \ge 0} \phi^u \epsilon_{t-u}$, that

\begin{displaymath}T^2{\rm E}({\hat C}_X(k)^2) = \sum_s\sum_t \sum_{u_1} \sum_{u_2} \sum_{v_1} \sum_{v_2}
\phi^{u_1+u_2+v_1+v_2}
{\rm E}(\epsilon_{s-u_1}\epsilon_{s+k-u_2}
\epsilon_{t-v_1}
\epsilon_{t+k-v_2})
\end{displaymath}

The expectation is 0 unless either all 4 indices on the $\epsilon$'s are the same or the indices come in two pairs of equal values. The first case requires $u_1=u_2-k$ and $v_1=v_2-k$ and then $s-u_1=t-v_1$. The second case requires one of three pairs of equalities: $s-u_1=t-v_1$ and $s-u_2=t-v_2$, or $s-u_1=t+k-v_2$ and $s+k-u_2=t-v_1$, or $s-u_1=s+k-u_2$ and $t-v_1=t+k-v_2$, along with the restriction that the four indices not all be equal. The actual moment is then $\mu_4$ when all four indices are equal and $\sigma^4$ when there are two distinct pairs. It is now possible to do the sums using geometric series identities and compute the variance of ${\hat C}_X(k)$. It is not particularly enlightening to finish the calculation in detail.
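
Rather than finishing the algebra, one can check numerically how far the white noise formula is off for an AR(1). The following Monte Carlo sketch (Python with numpy; the choices $\phi=0.7$, k=1, T=200 and the helper ar1_path are my own, for illustration only) estimates ${\rm Var}({\hat C}_X(k))$ by simulation and compares it with the white noise value $\sigma^4/T$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
phi, sigma, T, k, nrep = 0.7, 1.0, 200, 1, 5000

def ar1_path(T, phi, sigma, rng, burn=200):
    # Simulate X_t = phi * X_{t-1} + eps_t, discarding a burn-in
    # so that the retained stretch is approximately stationary.
    eps = sigma * rng.standard_normal(T + burn)
    x = np.zeros(T + burn)
    for t in range(1, T + burn):
        x[t] = phi * x[t - 1] + eps[t]
    return x[burn:]

chat = np.empty(nrep)
for r in range(nrep):
    x = ar1_path(T, phi, sigma, rng)
    chat[r] = np.sum(x[:T - k] * x[k:]) / T   # C-hat_X(k), mean known to be 0

print("Monte Carlo Var(C-hat_X(k)) :", chat.var())
print("white noise value sigma^4/T :", sigma ** 4 / T)
\end{verbatim}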

There are versions of the central limit theorem called mixing central limit theorems which can be used for ARMA(p,q) processes in order to conclude that

\begin{displaymath}({\hat C}_X(k)-C_X(k))/\sqrt{{\rm Var}({\hat C}_X(k))}
\end{displaymath}

has asymptotically a standard normal distribution and that the same is true when the standard deviation in the denominator is replaced by an estimate. To get from this to distribution theory for the sample autocorrelation is easiest when the true autocorrelation is 0.

The general tactic is the $\delta$ method, or Taylor expansion. In this case, for each sample size T, you have two estimates, say $N_T$ and $D_T$, of two parameters. You want distribution theory for the ratio $R_T = N_T/D_T$. The idea is to write $R_T=f(N_T,D_T)$ where f(x,y)=x/y and then make use of the fact that $N_T$ and $D_T$ are close to the parameters they estimate. In our case $N_T$ is the sample autocovariance at lag k, which is close to the true autocovariance $C_X(k)$, while the denominator $D_T$ is the sample autocovariance at lag 0, a consistent estimator of $C_X(0)$.

Write

\begin{eqnarray*}f(N_T,D_T)& = & f(C_X(k),C_X(0)) \\
& & + (N_T-C_X(k))D_1f(C_X(k),C_X(0)) \\
& & + (D_T-C_X(0))D_2f(C_X(k),C_X(0))
+\mbox{remainder}
\end{eqnarray*}


If we can use a central limit theorem to conclude that

\begin{displaymath}(\sqrt{T}(N_T-C_X(k)), \sqrt{T}(D_T-C_X(0)))
\end{displaymath}

has an approximately bivariate normal distribution and if we can neglect the remainder term then

\begin{displaymath}\sqrt{T}(f(N_T,D_T)-f(C_X(k),C_X(0))) = \sqrt{T}({\hat\rho}(k)-\rho(k))
\end{displaymath}

has approximately a normal distribution. The notation here is that $D_jf$ denotes differentiation with respect to the jth argument of f. For f(x,y) = x/y we have $D_1f = 1/y$ and $D_2f = -x/y^2$. When $C_X(k)=0$ the term involving $D_2f$ vanishes and we simply get the assertion that

\begin{displaymath}\sqrt{T}({\hat\rho}(k)-\rho(k))
\end{displaymath}

has the same asymptotic normal distribution as $\sqrt{T}\,{\hat C}_X(k)/C_X(0)$; for white noise, where $C_X(0)=\sigma^2$, this is the N(0,1) limit obtained earlier.
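
For general $C_X(k)$ the same expansion, using both derivatives, gives the linearization

\begin{displaymath}\sqrt{T}({\hat\rho}(k)-\rho(k)) \approx
\frac{\sqrt{T}({\hat C}_X(k)-C_X(k))}{C_X(0)}
- \frac{C_X(k)}{C_X(0)^2}\,\sqrt{T}({\hat C}_X(0)-C_X(0)) \, ,
\end{displaymath}

so that in general the limit law involves the joint behaviour of the two sample autocovariances; it reduces to the single term above only because $C_X(k)=0$.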

Similar ideas can be used for the estimated sample partial ACF.

Portmanteau tests

In order to test the hypothesis that a series is white noise using the distribution theory just given, you have to produce a single statistic on which to base your test. Rather than pick a single value of k, it has been suggested to use a sum of squares, or a weighted sum of squares, of the ${\hat\rho}(k)$.

A typical statistic is

\begin{displaymath}T\sum_{k=1}^K {\hat\rho}^2(k)
\end{displaymath}

which, for white noise, has approximately a $\chi_K^2$ distribution. (This fact relies on an extension of the previous computations to conclude that

\begin{displaymath}\sqrt{T}({\hat \rho}(1), \ldots , {\hat \rho}(K))
\end{displaymath}

has approximately a standard multivariate normal distribution. This, in turn, relies on computation of the covariance between ${\hat C}_X(j)$ and ${\hat C}_X(k)$.)

When the parameters in an ARMA(p,q) have been estimated by maximum likelihood the degrees of freedom must be adjusted to K-p-q. The resulting test is the Box-Pierce test; a refined version which takes better account of finite sample properties is the Box-Pierce-Ljung test. S-Plus plots the P-values from these tests for 1 through 10 degrees of freedom as part of the output of arima.diag.
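
A sketch of the computation (in Python rather than the S-Plus used in the course; the helper portmanteau and the choices of seed and K are mine, and the refined statistic $T(T+2)\sum_{k=1}^K {\hat\rho}^2(k)/(T-k)$ is the usual form of the Box-Pierce-Ljung refinement of $T\sum_{k=1}^K {\hat\rho}^2(k)$):

\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def portmanteau(x, K, p=0, q=0):
    # Box-Pierce and Ljung-Box statistics with chi-square(K - p - q) P-values.
    x = np.asarray(x, dtype=float)
    T = len(x)
    x = x - x.mean()
    c0 = np.sum(x * x) / T
    rho = np.array([np.sum(x[:T - k] * x[k:]) / T / c0
                    for k in range(1, K + 1)])
    bp = T * np.sum(rho ** 2)                                        # Box-Pierce
    lb = T * (T + 2) * np.sum(rho ** 2 / (T - np.arange(1, K + 1)))  # Ljung-Box
    df = K - p - q
    return {"Box-Pierce": (bp, chi2.sf(bp, df)),
            "Ljung-Box": (lb, chi2.sf(lb, df))}

rng = np.random.default_rng(2)
print(portmanteau(rng.standard_normal(300), K=10))
\end{verbatim}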


Richard Lockhart
1999-10-13