Stat 804
Lecture 12 Notes

Large Sample Theory for Conditional Likelihood:

We have data $X=(Y,Z)$ and study the conditional likelihood, score, Fisher information, and mle: $ \ell_{Y\vert Z}(\theta)$, $U_{Y\vert Z}(\theta)$, ${\cal I}_{Y\vert Z}(\theta)$ and $\hat\theta$. In general, standard maximum likelihood theory may be expected to apply to these conditional objects:

1.
$P(\ell_{Y\vert Z}(\theta_0) > \ell_{Y\vert Z}(\theta)) \to 1$ for each fixed $\theta \neq \theta_0$ as the ``sample size'' (often measured by the Fisher information) tends to infinity.

2.
$\text{E}_\theta\left[ U_{Y\vert Z}(\theta)\vert Z\right]=0$

3.
$\hat\theta$ is consistent: it converges to the true value as the Fisher information tends to infinity.

4.
The usual Bartlett identities hold. For example:

\begin{displaymath}{\cal I}_{Y\vert Z}(\theta) \equiv \text{Var}\left[ U_{Y\vert Z}(\theta)\vert Z\right]
= \text{E}\left[ -\frac{\partial}{\partial\theta} U_{Y\vert Z}(\theta)\Big\vert Z\right]
\end{displaymath}

5.
The error in the mle has approximately the form (see the Taylor expansion sketched after this list)

\begin{displaymath}\hat\theta - \theta \approx \left({\cal I}_{Y\vert Z}(\theta)\right)^{-1} U_{Y\vert Z}(\theta)
\end{displaymath}

6.
The mle is approximately normal:

\begin{displaymath}\left({\cal I}_{Y\vert Z}(\theta)\right)^{1/2} \left(\hat\theta - \theta\right) \approx
MVN(0,I)
\end{displaymath}

(where $I$ is the identity matrix).

7.
The conditional Fisher information can be estimated by the observed information:

\begin{displaymath}\left({\cal I}_{Y\vert Z}(\theta)\right)^{-1}\left( - \frac{\partial}{\partial\theta}
U_{Y\vert Z}(\theta)\Big\vert_{\theta=\hat\theta}\right) \to I
\end{displaymath}

8.
The log-likelihood ratio is approximately $\chi^2$, where $\theta_0$ is the true value and $p$ is the dimension of $\theta$:

\begin{displaymath}2( \ell_{Y\vert Z}(\hat\theta) - \ell_{Y\vert Z}(\theta_0)) \Rightarrow \chi_p^2
\end{displaymath}
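To see where 5) comes from, expand the score in a one-term Taylor series about the true $\theta$ (a sketch, not a proof). Since the score vanishes at the mle,

\begin{displaymath}0 = U_{Y\vert Z}(\hat\theta) \approx U_{Y\vert Z}(\theta)
+ \frac{\partial}{\partial\theta} U_{Y\vert Z}(\theta) \left(\hat\theta - \theta\right)
\approx U_{Y\vert Z}(\theta) - {\cal I}_{Y\vert Z}(\theta)\left(\hat\theta - \theta\right)
\end{displaymath}

where the last step uses 4) and 7) to replace the derivative of the score by $-{\cal I}_{Y\vert Z}(\theta)$. Solving for $\hat\theta - \theta$ gives 5); standardizing by $\left({\cal I}_{Y\vert Z}(\theta)\right)^{1/2}$ and applying a central limit theorem to the score then gives 6).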

In the previous lecture I showed you 2) and 4) in this list. Today we look at 5), 6) and 7) in the context of the AR(1) model $X_t=\rho X_{t-1} + \epsilon_t$.
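As a concrete instance (a sketch assuming the errors $\epsilon_t$ are iid $N(0,\sigma^2)$ with $\sigma^2$ known, and conditioning on $Z = X_0$): with $Y = (X_1,\ldots,X_T)$ the conditional log-likelihood is

\begin{displaymath}\ell_{Y\vert Z}(\rho) = -\frac{T}{2}\log(2\pi\sigma^2)
- \frac{1}{2\sigma^2}\sum_{t=1}^{T}\left(X_t - \rho X_{t-1}\right)^2
\end{displaymath}

so that

\begin{displaymath}U_{Y\vert Z}(\rho) = \frac{1}{\sigma^2}\sum_{t=1}^{T} X_{t-1}\left(X_t - \rho X_{t-1}\right)
\qquad \text{and} \qquad
\hat\rho = \frac{\sum_{t=1}^{T} X_{t-1}X_t}{\sum_{t=1}^{T} X_{t-1}^2},
\end{displaymath}

with observed information $-\frac{\partial}{\partial\rho} U_{Y\vert Z}(\rho) = \sum_{t=1}^{T} X_{t-1}^2/\sigma^2$.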
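The following small simulation is my own illustrative sketch (Python with NumPy, not part of the original notes): it checks 6) and 7) numerically by standardizing the mle error with the square root of the observed information, which should produce approximately standard normal draws.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(804)

def standardized_error(T, rho, sigma=1.0):
    # Simulate X_t = rho * X_{t-1} + eps_t starting from X_0 = 0
    # (we condition on Z = X_0).
    eps = rng.normal(0.0, sigma, T)
    x = np.empty(T + 1)
    x[0] = 0.0
    for t in range(T):
        x[t + 1] = rho * x[t] + eps[t]
    xlag, y = x[:-1], x[1:]
    # Conditional mle and observed information from the AR(1)
    # formulas above.
    rho_hat = np.sum(xlag * y) / np.sum(xlag ** 2)
    obs_info = np.sum(xlag ** 2) / sigma ** 2
    return np.sqrt(obs_info) * (rho_hat - rho)

# Item 6 predicts these are approximately N(0,1) draws.
z = np.array([standardized_error(T=500, rho=0.6) for _ in range(2000)])
print(z.mean(), z.std())   # should be close to 0 and 1
\end{verbatim}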


Richard Lockhart
1999-11-01