STAT 804: 97-3

Assignment 1

1.
Let $\epsilon_t$ be a Gaussian white noise process. Define

\begin{displaymath}X_t=\epsilon_{t-2}+4\epsilon_{t-1}+6\epsilon_t
+4\epsilon_{t+1}+\epsilon_{t+2} .\end{displaymath}

Compute and plot the autocovariance function of X.

Solution:

\begin{displaymath}R_X(h) = \begin{cases}70\sigma^2,&h=0\\
56\sigma^2,&h=1\\ 28\sigma^2,&h=2\\ 8\sigma^2,&h=3\\
\sigma^2,&h=4\\ 0,&h \ge 5
\end{cases}\end{displaymath}
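
For the plot, here is a minimal Python sketch (not part of the original solution) that computes $R_X(h)=\sigma^2\sum_j a_j a_{j+h}$ for the weights $(1,4,6,4,1)$; the choice $\sigma^2=1$ is just for illustration.

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

a = np.array([1.0, 4.0, 6.0, 4.0, 1.0])   # filter weights in the definition of X_t
sigma2 = 1.0                              # white noise variance, free to choose

# R_X(h) = sigma^2 * sum_j a_j a_{j+h}; it vanishes once h >= 5
lags = np.arange(0, 8)
acvf = np.array([sigma2 * np.sum(a[:len(a) - h] * a[h:]) if h < len(a) else 0.0
                 for h in lags])
print(dict(zip(lags.tolist(), acvf.tolist())))  # 70, 56, 28, 8, 1, 0, 0, 0

plt.stem(lags, acvf)
plt.xlabel("lag h")
plt.ylabel("R_X(h)")
plt.show()
\end{verbatim}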

2.
Suppose that $\epsilon_t$ are uncorrelated and have mean 0 with finite variance. Verify that $X_t=\epsilon_t\epsilon_{t-1}$ is stationary and that it is wide sense white noise assuming that the $\epsilon$ sequence is iid.

Solution: I should have simply asked the question with ``suppose that $\epsilon_t$ are iid with mean 0 and variance $\sigma^2$'' instead of the first sentence. We have ${\rm E}(X_t) = {\rm E}(\epsilon_t){\rm E}(\epsilon_{t-1})=0$. The autocovariance function of $X$ is

\begin{displaymath}R_X(h)=\begin{cases}
{\rm E}(\epsilon_t^2) {\rm E}(\epsilon_{t-1}^2),& h=0\\
{\rm E}(\epsilon_{t+1}){\rm E}(\epsilon_t^2){\rm E}(\epsilon_{t-1}),& h=1\\
{\rm E}(\epsilon_{t+h}){\rm E}(\epsilon_{t+h-1}){\rm E}(\epsilon_t){\rm E}(\epsilon_{t-1}),& h\ge 2
\end{cases}=
\begin{cases}
\sigma^4,& h=0\\
0,& h=1\\
0,& h\ge 2
\end{cases}\end{displaymath}

Thus $X_t$ is second order (wide sense) white noise. In fact, from question 4 this sequence is strictly stationary, but it is not strict sense white noise because the $X_t$ are not independent.
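
Although not required, a quick Python simulation (illustrative values only) checks that the sample mean, variance and low order autocovariances of $X_t=\epsilon_t\epsilon_{t-1}$ behave as computed above.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
eps = rng.normal(0.0, sigma, 200000)   # iid N(0, sigma^2) noise
x = eps[1:] * eps[:-1]                 # X_t = eps_t * eps_{t-1}

print(x.mean())                        # close to 0
print(x.var(), sigma**4)               # close to sigma^4
for h in (1, 2, 3):
    print(h, np.cov(x[:-h], x[h:])[0, 1])   # close to 0 for every h >= 1
\end{verbatim}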

3.
Suppose that

\begin{displaymath}X_t=\rho_1X_{t-1}+\rho_2X_{t-2}+\epsilon_t\end{displaymath}

where $\epsilon_t$ is an iid mean 0 sequence with variance $\sigma_\epsilon^2$. Compute the autocovariance function and plot the results for $\rho_1=0.2$ and $\rho_2=0.1$. I have shown in class that the roots of a certain polynomial must have modulus more than 1 for there to be a stationary solution X for this difference equation. Translate the conditions on the roots $1/\alpha_1, 1/\alpha_2$ to get conditions on the coefficients $\rho_1,\rho_2$ and plot in the $\rho_1,\rho_2$ plane the region for which this process can be rewritten as a causal filter applied to the noise process $\epsilon_t$.

Solution: This is my rephrasing of the question. To compute the autocovariance function you have two possibilities. First you can factor

\begin{displaymath}(I-\rho_1 B - \rho_2B^2) = (I-\alpha_1B)(I-\alpha_2B)
\end{displaymath}

with $1/\alpha_1$ and $1/\alpha_2$ the roots of $1-\rho_1x -\rho_2x^2=0$ and then write, as in class,

\begin{displaymath}X_t = \sum_{k=0}^\infty a_k \epsilon_{t-k}
\end{displaymath}

where

\begin{displaymath}a_k = \sum_{\ell=0}^k \alpha_1^\ell \alpha_2^{k-\ell}
\end{displaymath}

The autocovariance function is then

\begin{displaymath}C_X(k) = \sigma_\epsilon^2 \sum_{j=0}^\infty a_j a_{j+k}
\end{displaymath}

This would be rather tedious to compute; you would have to decide how many terms to take in the infinite sums.

The second possibility is the recursive method:

\begin{displaymath}C_X(h) = {\rm Cov} (X_{t+h},X_t) = \rho_1 C_X(h-1) + \rho_2 C_X(h-2)
\end{displaymath}

To get started you need values for $C_X(0)$ and $C_X(1)$. The simplest thing to do, since the value of $\sigma_\epsilon^2$ is free to choose when you plot, is to assume $C_X(0) = 1$, so that you are computing the autocorrelation function. To get $C_X(1)$ put $h=1$ in the recursion above and get

\begin{displaymath}C_X(1) = \rho_1 +\rho_2 C_X(1)
\end{displaymath}

so that $\rho_X(1) = \rho_1/(1-\rho_2)$. Divide the recursion by $C_X(0)$ to see that it becomes

\begin{displaymath}\rho_X(h) = \rho_1 \rho_X(h-1) + \rho_2 \rho_X(h-2) \, .
\end{displaymath}

You can use this for $h \ge 2$. Note that my choice of the symbol $\rho$ for coefficients in the recursion was silly.
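
A short Python sketch of this recursive computation and the requested plot for $\rho_1=0.2$ and $\rho_2=0.1$ (the number of lags shown is arbitrary):

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

rho1, rho2 = 0.2, 0.1                 # coefficients requested in the question
H = 20                                # how many lags to show (arbitrary)

acf = np.empty(H + 1)
acf[0] = 1.0                          # take C_X(0) = 1, i.e. work with correlations
acf[1] = rho1 / (1.0 - rho2)          # from the h = 1 case of the recursion
for h in range(2, H + 1):
    acf[h] = rho1 * acf[h - 1] + rho2 * acf[h - 2]

plt.stem(np.arange(H + 1), acf)
plt.xlabel("lag h")
plt.ylabel("rho_X(h)")
plt.show()
\end{verbatim}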

Now the roots $1/\alpha_i$ are of the form

\begin{displaymath}\frac{ \rho_1 \pm \sqrt{\rho_1^2 + 4\rho_2}}{-2\rho_2}
\end{displaymath}

The stationarity conditions are that both of these roots must be larger than 1 in modulus.

If $\rho_1^2 + 4\rho_2 \ge 0$ then the two roots are real. Set them equal to 1 and then to -1 to get the boundary of the region of interest:

\begin{displaymath}\rho_1 \pm \sqrt{\rho_1^2 + 4\rho_2} = -2\rho_2
\end{displaymath}

gives $\rho_1^2 + 4\rho_2^2 +4\rho_1\rho_2 = \rho_1^2 + 4\rho_2$ or, for $\rho_2 \ne 0$, $\rho_1+\rho_2 = 1$. Similarly, setting the root equal to $-1$ gives

\begin{displaymath}\rho_2-\rho_1 = 1
\end{displaymath}

It is now not hard to check that the inequalities

\begin{displaymath}\rho_1+\rho_2 < 1
\end{displaymath}


\begin{displaymath}\rho_2-\rho_1 <1
\end{displaymath}

and

\begin{displaymath}\rho_1^2 + 4\rho_2 \ge 0
\end{displaymath}

guarantee, for $\rho_2 \ne 0$, that the roots have absolute value more than 1.

When the discriminant $\rho_1^2 + 4\rho_2$ is negative the two roots are complex conjugates

\begin{displaymath}\frac{\rho_1}{-2\rho_2} \pm i\frac{\sqrt{-4\rho_2 -\rho_1^2}}{-2\rho_2}
\end{displaymath}

and have modulus squared

\begin{displaymath}1/\vert\rho_2\vert
\end{displaymath}

which will be more than 1 provided $\vert\rho_2\vert < 1$.

Finally for $\rho_2=0$ the process is simply an AR(1) which will be stationary for $\vert\rho_1\vert < 1$. Putting together all these limits gives a triangle in the $\rho_1,\rho_2$ plane bounded by the lines $\rho_1+\rho_2 = 1$, $\rho_2-\rho_1 = 1$ and $\rho_2=-1$.
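
The region itself is easy to draw; a Python sketch (axis limits chosen arbitrarily) shading the set where $\rho_1+\rho_2 < 1$, $\rho_2-\rho_1 < 1$ and $\rho_2 > -1$:

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

r1 = np.linspace(-2.5, 2.5, 501)
r2 = np.linspace(-1.5, 1.5, 501)
R1, R2 = np.meshgrid(r1, r2)
inside = (R1 + R2 < 1) & (R2 - R1 < 1) & (R2 > -1)

plt.contourf(R1, R2, inside.astype(float), levels=[0.5, 1.5])
plt.plot([-2, 0, 2, -2], [-1, 1, -1, -1])   # boundary: the triangle with
                                            # vertices (-2,-1), (0,1), (2,-1)
plt.xlabel("rho_1")
plt.ylabel("rho_2")
plt.show()
\end{verbatim}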

4.
Suppose that $X_t$ is strictly stationary. If $g$ is some function from $R^{p+1}$ to $R$ show that

\begin{displaymath}Y_t=g(X_t,X_{t-1},\ldots,X_{t-p})\end{displaymath}

is strictly stationary. What property must g have to guarantee the analogous result with strictly stationary replaced by $2^{\rm nd}$ order stationary?

Solution: You must prove the following assertion: for any $k$ and any $A \subset R^k$ we have

\begin{displaymath}P((Y_{t+1}, \ldots,Y_{t+k}) \in A) = P((Y_{1}, \ldots,Y_{k}) \in A)
\end{displaymath}

(for the mathematically inclined you need this for ``Borel sets $A$''.) Define $g^*$ by

\begin{displaymath}g^*(x_{1-p},\ldots, x_{k}) = (g(x_1,x_0,\ldots,x_{1-p}),\ldots,
g(x_{k},\ldots,x_{k-p}))
\end{displaymath}

so that

\begin{displaymath}(Y_{t+1}, \ldots,Y_{t+k}) = g^*(X_{t+1-p},\ldots,X_{t+k})
\end{displaymath}

and

\begin{displaymath}(Y_1,\ldots,Y_k) = g^*(X_{1-p},\ldots,X_{k})
\end{displaymath}

Then

\begin{displaymath}P((Y_{t+1}, \ldots,Y_{t+k})\in A) = P((X_{t+1-p},\ldots,X_{t+k}) \in B)
\end{displaymath}

where

\begin{displaymath}B=(g^*)^{-1}(A)
\end{displaymath}

is the inverse image of $A$ under the map $g^*$. In fact the probability on the right is the definition of the probability on the left!

(REMARK: A number of students worried about whether or not you could take this $(g^*)^{-1}(A)$; I suspect they were worried about the existence of a so-called functional inverse of $g^*$. The latter exists only if $g^*$ is a bijection: one-to-one and onto. But the inverse image $B$ of $A$ exists for any $g^*$; it is defined as $\{x: g^*(x)\in A\}$. As a simple example, if $g^*(x) = x^2$ then there is no functional inverse of $g^*$ but, for instance,

\begin{displaymath}(g^*)^{-1}([1,4]) = \{x: 1 \le x^2 \le 4\} =
[-2,-1]\cup[1,2]\end{displaymath}

so that the inverse image of [1,4] is perfectly well defined.)

For the special case $t=0$ we also get

\begin{displaymath}P((Y_1,\ldots,Y_k)\in A ) = P((X_{1-p},\ldots,X_k) \in B)
\end{displaymath}

But since X is stationary

\begin{displaymath}P((X_{t+1-p},\ldots,X_{t+k}) \in B) = P((X_{1-p},\ldots,X_k) \in B)
\end{displaymath}

from which we get the desired result.

For the second part, if $g$ is affine, that is $g(x_1,\ldots,x_{p+1}) = A x+b$ for some $1\times (p+1)$ vector $A$ and a constant $b$, then $Y$ will have stationary mean and covariance if $X$ does. In fact I think the condition is necessary but do not know a complete proof.
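
To see why the affine case works (a small calculation not asked for in the question): write $X^{(t)}=(X_t,\ldots,X_{t-p})^T$, so that $Y_t = AX^{(t)}+b$ and

\begin{eqnarray*}{\rm E}(Y_t) & = & A\,{\rm E}(X^{(t)}) + b = A(\mu_X,\ldots,\mu_X)^T + b \cr
{\rm Cov}(Y_{t+h},Y_t) & = & A\,{\rm Cov}(X^{(t+h)},X^{(t)})\,A^T .
\end{eqnarray*}

The matrix ${\rm Cov}(X^{(t+h)},X^{(t)})$ has $(i,j)$ entry $C_X(h-i+j)$, which does not depend on $t$, so neither the mean nor the autocovariance of $Y$ does.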

5.
Suppose that $\epsilon_t$ is an iid mean 0 variance $\sigma_\epsilon^2$ sequence and that $a_t; t = 0, \pm 1, \pm2 ,\ldots$ are constants. Define

\begin{displaymath}X_t = \sum a_s \epsilon_{t-s}
\end{displaymath}

(a)
Derive the autocovariance of the process X.

Solution:

\begin{displaymath}C_X(h) = {\rm Cov}( \sum_s a_s \epsilon_{t+h-s}, \sum_u a_u \epsilon_{t-u})
\end{displaymath}

simplifies, since ${\rm Cov}(\epsilon_{t+h-s},\epsilon_{t-u})$ is $\sigma_\epsilon^2$ when $u=s-h$ and 0 otherwise, to

\begin{displaymath}C_X(h) = \sigma_\epsilon^2 \sum_s a_s a_{s-h} \, .
\end{displaymath}

(b)
Show that $\sum a_s^2 < \infty$ implies

\begin{displaymath}\lim_{N\to\infty} E[(X_t - \sum_{-N}^N a_s \epsilon_{t-s})^2] = 0
\end{displaymath}

This condition shows that the infinite sum defining $X$ converges ``in the sense of mean square''. It is possible to prove that this means that $X$ can be defined properly. [Note: I don't expect much rigour in this calculation.]

Solution: I had in mind the simple calculation

\begin{displaymath}X_t -\sum_{-N}^N a_s \epsilon_{t-s} = \sum_{\vert s\vert>N} a_s \epsilon_{t-s}
\end{displaymath}

which has mean 0 and variance

\begin{displaymath}\sigma_\epsilon^2\sum_{\vert s\vert>N} a_s^2
\end{displaymath}

The latter quantity converges to 0 since

\begin{displaymath}\sum_{\vert s\vert>N} a_s^2 = \sum_s a_s^2 - \sum_{-N}^N a_s^2 \to 0
\end{displaymath}
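
As a concrete illustration (my own example, with $a_s = 2^{-\vert s\vert}$ and $\sigma_\epsilon^2=1$), the tail sums above can be computed directly and seen to shrink:

\begin{verbatim}
sigma2 = 1.0
a = lambda s: 0.5 ** abs(s)            # a square-summable choice of coefficients

total = sigma2 * sum(a(s) ** 2 for s in range(-500, 501))   # essentially the full sum
for N in (1, 2, 5, 10, 20):
    partial = sigma2 * sum(a(s) ** 2 for s in range(-N, N + 1))
    print(N, total - partial)          # E[(X_t - S_N)^2], decreasing to 0
\end{verbatim}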

More rigour requires the following ideas. I had no intention for students to discover or use these ideas but some, at least, were interested to know.

Let $L^2$ be the set of all random variables $X$ such that $E(X^2) < \infty$, where we agree to regard two random variables $X_1$ and $X_2$ as being the same if $E((X_1-X_2)^2)=0$. (Literally we define them to be equivalent in this case and then let $L^2$ be the set of equivalence classes.) It is a mathematical fact about $L^2$ that it is a Banach space, that is, a complete normed vector space with the norm defined by $\vert\vert X\vert\vert = \sqrt{E(X^2)}$. The important point is that any Cauchy sequence in $L^2$ converges to some limit.

Define $S_N = \sum_{-N}^N a_s \epsilon_{t-s}$ and note that for $N_1 < N_2$ we have

\begin{displaymath}\vert\vert S_{N_2}-S_{N_1}\vert\vert^2 = \sigma_\epsilon^2\sum_{N_1 < \vert s\vert \le N_2} a_s^2 \le
\sigma_\epsilon^2\sum_{\vert s\vert>N_1}a_s^2
\end{displaymath}

which shows that $S_N$ is Cauchy because the sum $\sum_s a_s^2$ converges. Thus there is an $S_\infty$ such that $S_N \to S_\infty$ in $L^2$, which means

\begin{displaymath}E((S_\infty - S_N)^2) \to 0
\end{displaymath}

This $S_\infty$ is precisely our definition of $X_t$.

6.
Given a stationary mean 0 series $X_t$ with autocorrelation $\rho_k$, $k=0, \pm 1, \ldots$ and a fixed lag $d$ find the value of $A$ which minimizes the mean squared error

\begin{displaymath}E[(X_{t+d}-AX_t)^2]
\end{displaymath}

and for the minimizing $A$ evaluate the mean squared error in terms of the autocorrelation and the variance of $X_t$.

Solution: I added the mean 0 later because you need it and I had forgotten it. You get

\begin{eqnarray*}E[(X_{t+d}-AX_t)^2] & = & E(X_{t+d}^2) -2AE(X_{t+d}X_t) + A^2 E(X_t^2) \cr
& = & C_X(0) -2A C_X(d) +A^2 C_X(0)
\end{eqnarray*}


This quadratic is minimized when its derivative $-2C_X(d)+2AC_X(0)$ is 0, which happens when

\begin{displaymath}A=C_X(d)/C_X(0) =\rho_d
\end{displaymath}

Put in this value for $A$ to get a mean squared error of

\begin{displaymath}C_X(0) -2\rho_d C_X(d) + \rho_d^2C_X(0) = C_X(0)(1-2\rho_d^2+\rho_d^2)
\end{displaymath}

or just

\begin{displaymath}C_X(0)(1-\rho_d^2)\, .
\end{displaymath}
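
A trivial numerical check (with made-up values of $C_X(0)$ and $\rho_d$) that the quadratic in $A$ is minimized at $\rho_d$ with minimum value $C_X(0)(1-\rho_d^2)$:

\begin{verbatim}
import numpy as np

C0, rho_d = 2.0, 0.4                  # made-up values of C_X(0) and rho_d
Cd = rho_d * C0                       # C_X(d)

A = np.linspace(-1.0, 1.0, 2001)
mse = C0 - 2 * A * Cd + A ** 2 * C0   # E[(X_{t+d} - A X_t)^2]

print(A[np.argmin(mse)], rho_d)             # minimizer is (numerically) rho_d
print(mse.min(), C0 * (1 - rho_d ** 2))     # minimum matches C_X(0)(1 - rho_d^2)
\end{verbatim}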

7.
Suppose $X_t$ is a stationary Gaussian series with mean $\mu_X$ and autocovariance $R_X(k)$, $k=0, \pm 1, \ldots$. Show that $Y_t=\exp(X_t)$ is stationary and find its mean and autocovariance.

Solution: The stationarity comes from question 4. To compute the mean and covariance of $Y$ we use the fact that the moment generating function of a $N(\mu,\sigma^2)$ random variable is $\exp(\mu s + \sigma^2s^2/2)$. Since $E(Y_t)$ is just the mgf of $X_t$ at $s=1$ we see that the mean of $Y$ is just $\exp(\mu_X+R_X(0)/2)$. To compute the covariance we need

\begin{displaymath}E(Y_tY_{t+h}) = E(\exp(X_t+X_{t+h}))
\end{displaymath}

which is just the mgf of $X_t+X_{t+h}$ at 1. Since $X_t+X_{t+h}$ is $N(2\mu_X, 2R_X(0)+2R_X(h))$ we see that the autocovariance of $Y$ is

\begin{displaymath}C_Y(h) = \exp(2\mu_X +R_X(0) +R_X(h)) - \exp(2(\mu_X+R_X(0)/2))
\end{displaymath}

or

\begin{displaymath}C_Y(h) = \exp(2\mu_X +R_X(0))(\exp(R_X(h))-1)
\end{displaymath}
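
These formulas can be checked by simulation; a Python sketch using a Gaussian AR(1) for $X_t$ (all parameter values are just illustrative):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu, phi, sx = 0.5, 0.6, 1.0            # mean, AR(1) coefficient, sd of X (illustrative)
n, burn = 200000, 1000
e = rng.normal(0.0, sx * np.sqrt(1 - phi ** 2), n + burn)

x = np.empty(n + burn)
x[0] = mu + rng.normal(0.0, sx)        # start from the stationary distribution
for t in range(1, n + burn):
    x[t] = mu + phi * (x[t - 1] - mu) + e[t]
x = x[burn:]
y = np.exp(x)

R0, R1 = sx ** 2, sx ** 2 * phi        # R_X(0) and R_X(1) for this AR(1)
print(y.mean(), np.exp(mu + R0 / 2))                   # mean of Y
print(np.cov(y[:-1], y[1:])[0, 1],
      np.exp(2 * mu + R0) * (np.exp(R1) - 1))          # C_Y(1)
\end{verbatim}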

8.
The semivariogram of a stationary process X is

\begin{displaymath}\gamma_X(m) = \frac{1}{2} E[(X_{t+m}-X_t)^2] \, .
\end{displaymath}

(Without the 1/2 it's called the variogram.) Evaluate $\gamma$ in terms of the autocovariance of X.

Solution:

\begin{eqnarray*}\gamma_X(m) & = & \frac{1}{2} E[(X_{t+m}-X_t)^2] \cr
& = & \frac{1}{2} (C_X(0) +C_X(0) -2 C_X(m)) \cr
& = & C_X(0)(1-\rho_X(m)) \, .
\end{eqnarray*}




Richard Lockhart
1999-11-05