Paper 2, Section I, J

Statistical Modelling
Part II, 2010

Suppose you have a parametric model consisting of probability mass functions $f(y ; \theta)$, $\theta \in \Theta \subset \mathbb{R}$. Given a sample $Y_{1}, \ldots, Y_{n}$ from $f(y ; \theta)$, define the maximum likelihood estimator $\hat{\theta}_{n}$ for $\theta$ and, assuming standard regularity conditions hold, state the asymptotic distribution of $\sqrt{n}\left(\hat{\theta}_{n}-\theta\right)$.
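For reference, a sketch of the standard definitions being asked for (standard notation, not taken from the question itself):

```latex
% Maximum likelihood estimator: a maximiser of the log-likelihood
\hat{\theta}_n \in \arg\max_{\theta \in \Theta} \sum_{i=1}^{n} \log f(Y_i;\theta).

% Under standard regularity conditions, with i(\theta) the Fisher
% information of a single observation,
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta\bigr)
  \xrightarrow{d} N\!\bigl(0,\; i(\theta)^{-1}\bigr).
```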

Compute the Fisher information of a single observation in the case where $f(y ; \theta)$ is the probability mass function of a Poisson random variable with parameter $\theta$. If $Y_{1}, \ldots, Y_{n}$ are independent and identically distributed random variables having a Poisson distribution with parameter $\theta$, show that $\bar{Y}=\frac{1}{n} \sum_{i=1}^{n} Y_{i}$ and $S=\frac{1}{n-1} \sum_{i=1}^{n}\left(Y_{i}-\bar{Y}\right)^{2}$ are unbiased estimators for $\theta$. Without calculating the variance of $S$, show that there is no reason to prefer $S$ over $\bar{Y}$.
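One possible route for these computations (a sketch; it uses the score-variance form of the Fisher information, valid under the regularity conditions assumed above):

```latex
% Poisson log-likelihood of one observation, and its score:
\log f(y;\theta) = y \log\theta - \theta - \log y!,
\qquad
\frac{\partial}{\partial\theta}\log f(Y;\theta) = \frac{Y}{\theta} - 1.

% Fisher information of a single observation:
i(\theta) = \operatorname{Var}\!\left(\frac{Y}{\theta} - 1\right)
          = \frac{\operatorname{Var}(Y)}{\theta^{2}}
          = \frac{1}{\theta}.

% Unbiasedness: E\bar{Y} = EY_1 = \theta, and since the sample variance
% with divisor n-1 is unbiased for the population variance,
% ES = \operatorname{Var}(Y_1) = \theta.
```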

[You may use the fact that the asymptotic variance of $\sqrt{n}\left(\hat{\theta}_{n}-\theta\right)$ is a lower bound for the variance of any unbiased estimator.]
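The hint points at comparing $\operatorname{Var}(S)$ with the lower bound $i(\theta)^{-1}/n = \theta/n$, which $\bar{Y}$ attains since $\operatorname{Var}(\bar{Y}) = \theta/n$. A quick Monte Carlo check (a sketch only, with illustrative parameter choices $\theta = 4$, $n = 10$) makes the ordering visible numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 4.0, 10, 20000

# Draw `reps` independent Poisson(theta) samples, each of size n.
samples = rng.poisson(theta, size=(reps, n))

ybar = samples.mean(axis=1)       # sample means, one per replication
s = samples.var(axis=1, ddof=1)   # unbiased sample variances (divisor n-1)

# Both estimators are unbiased for theta...
print(ybar.mean(), s.mean())      # both close to 4.0
# ...but Ybar attains the lower bound theta/n = 0.4, while S has a
# much larger variance, so there is no reason to prefer S.
print(ybar.var(), s.var())
```

The empirical variance of $\bar{Y}$ comes out near $0.4$, while that of $S$ is roughly an order of magnitude larger, consistent with the bound in the hint.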