i) [8 marks] Assume one observation X ~ Bin(n, θ), where Bin(n, θ) stands for the binomial distribution with n trials and probability of success θ. Both n and θ are unknown. For θ assume a beta prior distribution π(θ) = Be(a, b), and for n assume a Poisson prior distribution π(n) = Pois(λ). Write a Gibbs sampler to simulate from the posterior distribution of θ and n. All the full conditionals are known in closed form. Fix a = 2, b = 4, x = 10 and λ = 1. Fix the number of simulations N = 1,500 and run three chains, with these three initialisations: (θ1 = 0.5, n1 = x), (θ1 = 0.99, n1 = 100), (θ1 = 0.01, n1 = 20). When comparing the results of the three chains, where would you fix the burn-in?

ii) [7 marks] Work on the dataset available on Moodle. It includes the response variable y, indicating the number of awards earned by students at one high school in a year; math, a continuous predictor representing students' scores on their math final exam; and prog2 and prog3, dummy variables indicating the type of programme in which the students were enrolled (prog2 = 1 if in an "Academic" programme, prog3 = 1 if in a "Vocational" programme, with the "General" programme as reference category). To read the dataset in R and Python respectively, use

    p <- read.csv("poisson_reg.csv", header=T)   # in R

    import pandas as pd                          # in Python
    dataframe = pd.read_csv("poisson_reg.csv")

Perform a Bayesian Poisson regression on the number of awards with respect to the other variables. Define the response variable y as the last column of the dataset and the design matrix X as the first four columns of the dataset (it already includes the column relative to the intercept term). In Poisson regression, the likelihood function is defined as

    L(y; β, x) = ∏_{i=1}^{n} e^{−λ_i} λ_i^{y_i} / y_i!

where E[Y_i] = λ_i = exp(x_i^T β) for i = 1, ..., n.

Write a Metropolis-Hastings algorithm to approximate the posterior distribution of the β coefficients. Use independent normal priors N(0, 100) (where 100 is the variance). Decide the scale parameter of the proposal in order to get the optimal acceptance rate of about 23%.
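For part i), the full conditionals follow directly from the model: combining the Bin(n, θ) likelihood with the Be(a, b) prior gives θ | n, x ~ Beta(a + x, b + n − x), and combining it with the Pois(λ) prior gives n − x | θ, x ~ Poisson(λ(1 − θ)). A minimal NumPy sketch along these lines (function name, seeds, and the use of `numpy.random.default_rng` are my own choices, not part of the question):

```python
import numpy as np

def gibbs_binomial(x=10, a=2, b=4, lam=1.0, N=1500,
                   theta0=0.5, n0=10, seed=0):
    """Gibbs sampler for (theta, n) given one observation x ~ Bin(n, theta),
    with priors theta ~ Be(a, b) and n ~ Pois(lam).

    Full conditionals (closed form):
      theta | n, x  ~ Beta(a + x, b + n - x)
      n - x | theta ~ Poisson(lam * (1 - theta))   (so n >= x always)
    """
    rng = np.random.default_rng(seed)
    theta, n = theta0, n0
    thetas = np.empty(N)
    ns = np.empty(N, dtype=int)
    for t in range(N):
        theta = rng.beta(a + x, b + n - x)       # update theta given n
        n = x + rng.poisson(lam * (1 - theta))   # update n given theta
        thetas[t], ns[t] = theta, n
    return thetas, ns

# Three chains with the initialisations from the question (n1 = x is 10).
chains = [gibbs_binomial(theta0=0.5,  n0=10,  seed=1),
          gibbs_binomial(theta0=0.99, n0=100, seed=2),
          gibbs_binomial(theta0=0.01, n0=20,  seed=3)]
```

Overlaying the trace plots of the three chains is then the natural way to judge where they have forgotten their (deliberately extreme) starting points and to fix the burn-in.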
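For part ii), a random-walk Metropolis-Hastings sketch is below. Since the log of the target is sum_i (y_i x_i^T β − exp(x_i^T β)) − β^T β / (2·100) up to constants (the log y_i! and prior normalising terms cancel in the acceptance ratio), only that expression needs to be coded. The Moodle CSV is not available here, so the demo at the bottom uses simulated data; the function name, the simulated design, and all seeds are illustrative assumptions, and `scale` would be tuned until the reported acceptance rate is near 23%:

```python
import numpy as np

def mh_poisson_reg(X, y, N=5000, scale=0.1, prior_var=100.0, seed=0):
    """Random-walk Metropolis-Hastings for Bayesian Poisson regression
    with independent N(0, prior_var) priors on the coefficients.

    Proposal: beta' = beta + scale * Normal(0, I).
    Returns the draws and the empirical acceptance rate.
    """
    rng = np.random.default_rng(seed)
    p = X.shape[1]

    def log_post(b):
        # log-likelihood (dropping log y_i!, constant in b) + log-prior
        eta = X @ b
        return np.sum(y * eta - np.exp(eta)) - b @ b / (2.0 * prior_var)

    beta = np.zeros(p)
    lp = log_post(beta)
    draws = np.empty((N, p))
    accepted = 0
    for t in range(N):
        prop = beta + scale * rng.standard_normal(p)
        lp_prop = log_post(prop)
        # symmetric proposal: accept with prob min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            beta, lp = prop, lp_prop
            accepted += 1
        draws[t] = beta
    return draws, accepted / N

# Demo on simulated data (the real design would be the first four
# columns of poisson_reg.csv, with y as the last column).
rng = np.random.default_rng(42)
X_sim = np.column_stack([np.ones(200), rng.standard_normal(200)])
y_sim = rng.poisson(np.exp(X_sim @ np.array([0.5, 0.3])))
draws, acc_rate = mh_poisson_reg(X_sim, y_sim, N=5000, scale=0.2, seed=1)
```

In practice one would rerun with a few values of `scale`, monitor `acc_rate`, and keep the value giving roughly the 23% target; too small a scale accepts almost everything but mixes slowly, too large a scale rejects almost everything.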