Paper 4, Section II, J

Principles of Statistics
Part II, 2020

Consider $X_{1}, \ldots, X_{n}$ drawn from a statistical model $\{f(\cdot, \theta): \theta \in \Theta\}$, $\Theta=\mathbb{R}^{p}$, with non-singular Fisher information matrix $I(\theta)$. For $\theta_{0} \in \Theta$, $h \in \mathbb{R}^{p}$, define the likelihood ratios

$$Z_{n}(h)=\log \frac{\prod_{i=1}^{n} f\left(X_{i}, \theta_{0}+h / \sqrt{n}\right)}{\prod_{i=1}^{n} f\left(X_{i}, \theta_{0}\right)}, \quad X_{i} \overset{\text{i.i.d.}}{\sim} f\left(\cdot, \theta_{0}\right).$$
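Since the $X_{i}$ are independent, the product in this definition factorizes, so $Z_{n}(h)$ is a sum of i.i.d. increments:

$$Z_{n}(h)=\sum_{i=1}^{n} \log \frac{f\left(X_{i}, \theta_{0}+h / \sqrt{n}\right)}{f\left(X_{i}, \theta_{0}\right)};$$

this sum structure is what the central limit theorem argument sketched at the end exploits.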

Next consider the probability density functions $\left(p_{h}: h \in \mathbb{R}^{p}\right)$ of the normal distributions $N\left(h, I\left(\theta_{0}\right)^{-1}\right)$, with corresponding likelihood ratios given by

$$Z(h)=\log \frac{p_{h}(X)}{p_{0}(X)}, \quad X \sim p_{0}.$$
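As a check on the definitions, the law of $Z(h)$ can be computed explicitly from the Gaussian densities (writing $I=I\left(\theta_{0}\right)$ for brevity): the normalizing constants cancel, and expanding the quadratic forms cancels the terms quadratic in $X$, leaving

$$Z(h)=-\frac{1}{2}(X-h)^{\top} I(X-h)+\frac{1}{2} X^{\top} I X = h^{\top} I X-\frac{1}{2} h^{\top} I h.$$

Since $X \sim N\left(0, I^{-1}\right)$ under $p_{0}$, the linear term is $N\left(0, h^{\top} I h\right)$, so

$$Z(h) \sim N\left(-\tfrac{1}{2} h^{\top} I\left(\theta_{0}\right) h,\ h^{\top} I\left(\theta_{0}\right) h\right).$$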

Show that for every fixed $h \in \mathbb{R}^{p}$, the random variables $Z_{n}(h)$ converge in distribution as $n \rightarrow \infty$ to $Z(h)$.

[You may assume suitable regularity conditions on the model $\{f(\cdot, \theta): \theta \in \Theta\}$ without specification, and results on uniform laws of large numbers from lectures may be used without proof.]
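One possible line of argument, sketched under the usual regularity assumptions (the map $\theta \mapsto \log f(x, \theta)$ twice continuously differentiable, the score $\nabla_{\theta} \log f\left(X, \theta_{0}\right)$ with mean zero and covariance $I\left(\theta_{0}\right)$ under $f\left(\cdot, \theta_{0}\right)$, and a uniform law of large numbers for the second derivatives): a second-order Taylor expansion of the log-likelihood about $\theta_{0}$ gives, for some $\tilde{\theta}_{n}$ on the segment between $\theta_{0}$ and $\theta_{0}+h / \sqrt{n}$,

$$Z_{n}(h)=\frac{1}{\sqrt{n}}\, h^{\top} \sum_{i=1}^{n} \nabla_{\theta} \log f\left(X_{i}, \theta_{0}\right)+\frac{1}{2 n}\, h^{\top}\left[\sum_{i=1}^{n} \nabla_{\theta}^{2} \log f\left(X_{i}, \tilde{\theta}_{n}\right)\right] h.$$

By the central limit theorem the first term converges in distribution to $N\left(0, h^{\top} I\left(\theta_{0}\right) h\right)$. Since $\tilde{\theta}_{n} \rightarrow \theta_{0}$, the uniform law of large numbers gives

$$\frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} \log f\left(X_{i}, \tilde{\theta}_{n}\right) \xrightarrow{P} \mathbb{E}_{\theta_{0}}\left[\nabla_{\theta}^{2} \log f\left(X, \theta_{0}\right)\right]=-I\left(\theta_{0}\right),$$

so by Slutsky's lemma $Z_{n}(h) \xrightarrow{d} N\left(-\tfrac{1}{2} h^{\top} I\left(\theta_{0}\right) h,\ h^{\top} I\left(\theta_{0}\right) h\right)$, which is exactly the law of $Z(h)$ computed above.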