A3.12 B3.15

Principles of Statistics
Part II, 2002

(i) Describe in detail how to perform the Wald, score and likelihood ratio tests of a simple null hypothesis $H_{0}: \theta=\theta_{0}$ given a random sample $X_{1}, \ldots, X_{n}$ from a regular one-parameter density $f(x ; \theta)$. In each case you should specify the asymptotic null distribution of the test statistic.
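For reference, the three statistics are usually written as follows (a sketch; $\ell_n$ denotes the log-likelihood, $U_n(\theta)=\ell_n'(\theta)$ the score, $i(\theta)$ the Fisher information per observation and $\hat{\theta}$ the maximum likelihood estimator; normalisation conventions vary between texts):

```latex
W_n = n\, i(\hat{\theta})\,(\hat{\theta}-\theta_0)^2, \qquad
S_n = \frac{U_n(\theta_0)^{2}}{n\, i(\theta_0)}, \qquad
\Lambda_n = 2\,\bigl\{\ell_n(\hat{\theta}) - \ell_n(\theta_0)\bigr\},
```

and under $H_0$ each converges in distribution to $\chi^2_1$ as $n \to \infty$.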

(ii) Let $X_{1}, \ldots, X_{n}$ be an independent, identically distributed sample from a distribution $F$, and let $\hat{\theta}\left(X_{1}, \ldots, X_{n}\right)$ be an estimator of a parameter $\theta$ of $F$.

Explain what is meant by: (a) the empirical distribution function of the sample; (b) the bootstrap estimator of the bias of $\hat{\theta}$, based on the empirical distribution function. Explain how a bootstrap estimator of the distribution function of $\hat{\theta}-\theta$ may be used to construct an approximate $1-\alpha$ confidence interval for $\theta$.
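A minimal numerical sketch of (a), (b) and the interval construction (the data, resample count `B` and level `alpha` are illustrative; the interval shown is the basic bootstrap interval obtained by inverting the estimated quantiles of $\hat{\theta}-\theta$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=50)    # illustrative sample from some F
B = 2000                        # illustrative number of bootstrap resamples

theta_hat = np.mean(x) ** 2     # plug-in estimator; here theta-hat = xbar^2

# (a)/(b): resampling n points with replacement from the observed sample
# is exactly sampling from the empirical distribution function.
boot = np.array([np.mean(rng.choice(x, size=x.size, replace=True)) ** 2
                 for _ in range(B)])

# Bootstrap bias estimate: E*[theta-hat*] - theta-hat
bias_boot = boot.mean() - theta_hat

# Approximate 1-alpha interval: use quantiles of theta-hat* - theta-hat
# as a proxy for the distribution of theta-hat - theta.
alpha = 0.05
lo, hi = np.quantile(boot - theta_hat, [alpha / 2, 1 - alpha / 2])
ci = (theta_hat - hi, theta_hat - lo)
```

The quantiles of the resampled differences stand in for the unknown distribution function of $\hat{\theta}-\theta$, which is the substitution the question asks about.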

Suppose the parameter of interest is $\theta=\mu^{2}$, where $\mu$ is the mean of $F$, and the estimator is $\hat{\theta}=\bar{X}^{2}$, where $\bar{X}=n^{-1} \sum_{i=1}^{n} X_{i}$ is the sample mean.

Derive an explicit expression for the bootstrap estimator of the bias of $\hat{\theta}$ and show that it is biased as an estimator of the true bias of $\hat{\theta}$.
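A sketch of how this computation is usually organised (writing $\hat{\sigma}^{2}=n^{-1}\sum_{i=1}^{n}(X_i-\bar{X})^{2}$ for the plug-in variance under the empirical distribution):

```latex
b_{\mathrm{boot}}
  = \mathbb{E}^{*}\!\bigl[\bar{X}^{*2}\bigr] - \bar{X}^{2}
  = \operatorname{Var}^{*}\!\bigl(\bar{X}^{*}\bigr)
  = \frac{\hat{\sigma}^{2}}{n},
```

whereas the true bias is $\mathbb{E}[\bar{X}^{2}]-\mu^{2}=\sigma^{2}/n$. Since $\mathbb{E}[\hat{\sigma}^{2}]=(n-1)\sigma^{2}/n$, the bootstrap estimator has expectation $(n-1)\sigma^{2}/n^{2}\neq\sigma^{2}/n$, which is the required bias.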

Let $\hat{\theta}_{i}$ be the value of the estimator $\hat{\theta}\left(X_{1}, \ldots, X_{i-1}, X_{i+1}, \ldots, X_{n}\right)$ computed from the sample of size $n-1$ obtained by deleting $X_{i}$, and let $\hat{\theta}_{\cdot}=n^{-1} \sum_{i=1}^{n} \hat{\theta}_{i}$. The jackknife estimator of the bias of $\hat{\theta}$ is

$$b_{J}=(n-1)\bigl(\hat{\theta}_{\cdot}-\hat{\theta}\bigr).$$

Derive the jackknife estimator $b_{J}$ for the case $\hat{\theta}=\bar{X}^{2}$, and show that, as an estimator of the true bias of $\hat{\theta}$, it is unbiased.
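The algebra can be checked numerically (a sketch; the sample is illustrative, and the identity verified is that for $\hat{\theta}=\bar{X}^{2}$ the jackknife bias estimator reduces to $s^{2}/n$, where $s^{2}$ is the unbiased sample variance, so that $\mathbb{E}[b_J]=\sigma^{2}/n$ equals the true bias):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=20)                 # illustrative sample
n = x.size

theta_hat = x.mean() ** 2

# Leave-one-out means: deleting X_i gives (n*xbar - X_i)/(n-1),
# computed for all i at once.
loo_means = (x.sum() - x) / (n - 1)
theta_dot = np.mean(loo_means ** 2)     # average of the theta-hat_i

b_J = (n - 1) * (theta_dot - theta_hat)

# b_J collapses algebraically to s^2/n with s^2 the unbiased variance,
# whose expectation sigma^2/n matches the true bias of xbar^2.
s2 = x.var(ddof=1)
print(np.isclose(b_J, s2 / n))
```

The exact agreement (up to floating-point error) reflects that the cancellation in $b_J$ is an algebraic identity, not an approximation.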