Paper 2, Section II, J

Principles of Statistics
Part II, 2015

Consider a random variable $X$ arising from the binomial distribution $\operatorname{Bin}(n, \theta)$, $\theta \in \Theta = [0,1]$. Find the maximum likelihood estimator $\hat{\theta}_{MLE}$ and the Fisher information $I(\theta)$ for $\theta \in \Theta$.
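
For reference, a sketch of the standard derivation (routine, assuming the usual binomial likelihood): the log-likelihood is $\ell(\theta) = X \log \theta + (n - X) \log(1 - \theta) + \mathrm{const}$, so solving $\ell'(\theta) = 0$ and computing $-E_\theta[\ell''(\theta)]$ gives

$$\hat{\theta}_{MLE} = \frac{X}{n}, \qquad I(\theta) = \frac{n}{\theta(1-\theta)}.$$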

Now consider the following priors on $\Theta$:

(i) a uniform $U([0,1])$ prior on $[0,1]$,

(ii) a prior with density $\pi(\theta)$ proportional to $\sqrt{I(\theta)}$,

(iii) a $\operatorname{Beta}(\sqrt{n}/2, \sqrt{n}/2)$ prior.

Find the means $E[\theta \mid X]$ and modes $m_{\theta \mid X}$ of the posterior distributions corresponding to the prior distributions (i)-(iii). Which of these posterior decision rules coincide with $\hat{\theta}_{MLE}$? Which one is minimax for quadratic risk? Justify your answers.
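
A sketch of the conjugacy step common to all three cases (assuming the standard Beta-binomial update): a $\operatorname{Beta}(a, b)$ prior combined with the $\operatorname{Bin}(n, \theta)$ likelihood gives

$$\pi(\theta \mid X) \propto \theta^{a+X-1}(1-\theta)^{b+n-X-1}, \qquad \text{i.e.}\quad \theta \mid X \sim \operatorname{Beta}(a+X,\, b+n-X),$$

where prior (i) corresponds to $a = b = 1$, prior (ii) to $a = b = 1/2$ (since $\sqrt{I(\theta)} \propto \theta^{-1/2}(1-\theta)^{-1/2}$, the Jeffreys prior), and prior (iii) to $a = b = \sqrt{n}/2$.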

[You may use the following properties of the $\operatorname{Beta}(a, b)$ $(a > 0, b > 0)$ distribution. Its density $f(x; a, b)$, $x \in [0,1]$, is proportional to $x^{a-1}(1-x)^{b-1}$, its mean is equal to $a/(a+b)$, and its mode is equal to

$$\frac{\max(a-1, 0)}{\max(a, 1) + \max(b, 1) - 2}$$

provided either $a > 1$ or $b > 1$.

You may further use the fact that a unique Bayes rule of constant risk is a unique minimax rule for that risk.]
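
To illustrate how this last fact is typically applied (a sketch, assuming the conjugate posterior mean as the candidate rule): under prior (iii) the posterior mean is $\delta(X) = (X + \sqrt{n}/2)/(n + \sqrt{n})$, whose quadratic risk

$$E_\theta\big[(\delta(X) - \theta)^2\big] = \frac{n\theta(1-\theta) + n(1/2 - \theta)^2}{(n+\sqrt{n})^2} = \frac{1}{4(1+\sqrt{n})^2}$$

is constant in $\theta$.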