3. Maximum Likelihood Estimation (5 Points): This question is on the maximum likelihood estimation of a Gaussian distribution. In class, we derived the MLE estimator of a Gaussian distribution given a one-dimensional dataset $x_1, \dots, x_N$. In particular, we obtained $\mu_{MLE} = \frac{1}{N}\sum_{i=1}^{N} x_i$ and $\sigma^2_{MLE} = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu_{MLE})^2$. Next, assume that the prior distribution on the mean itself is a normal distribution with mean $\nu$ and variance $\beta^2$. Compute $\mu_{MAP}$, i.e. the MAP estimator for the mean. Also contrast $\mu_{MAP}$ with the MLE estimator ($\mu_{MLE}$) as $N \to \infty$.
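A minimal sketch of the MAP setup, assuming (as in the MLE derivation above) that the likelihood variance $\sigma^2$ is treated as known: the log-posterior to maximize is

$\log p(\mu \mid x_1, \dots, x_N) = \text{const} - \sum_{i=1}^{N} \frac{(x_i - \mu)^2}{2\sigma^2} - \frac{(\mu - \nu)^2}{2\beta^2},$

and setting its derivative with respect to $\mu$ to zero gives

$\mu_{MAP} = \frac{\beta^2 \sum_{i=1}^{N} x_i + \sigma^2 \nu}{N\beta^2 + \sigma^2}.$

As $N \to \infty$ the data terms dominate the single prior term, so $\mu_{MAP} \to \mu_{MLE}$.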
4. Implement a Decision Tree from Scratch (6 Points): Implement a Decision Tree from scratch. We will implement the classification variant. Implement the simple decision tree algorithm (using information gain for feature splitting in a greedy manner), and assume the features are categorical for simplicity. Compare your performance with scikit-learn for a comparable choice of hyper-parameters (depth of the tree, number of leaf nodes, etc.). You can use the same dataset we considered for the demo in class (https://drive.google.com/drive/u/0/fold ... ogxZiRI-pf).
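As a reference point, here is a minimal sketch of such a tree, assuming the features are categorical and supplied column-wise in a NumPy array; the class and function names (CategoricalDecisionTree, information_gain, etc.) are illustrative choices, not prescribed by the assignment.

```python
import numpy as np
from collections import Counter

def entropy(y):
    # Shannon entropy (in bits) of the class-label distribution
    _, counts = np.unique(y, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

def information_gain(x_col, y):
    # Parent entropy minus the weighted entropy of the children obtained
    # by splitting on each distinct category of this feature column
    child_entropy = 0.0
    for v in np.unique(x_col):
        mask = x_col == v
        child_entropy += mask.mean() * entropy(y[mask])
    return entropy(y) - child_entropy

class Node:
    def __init__(self, feature=None, label=None):
        self.feature = feature   # index of the split feature (None for a leaf)
        self.children = {}       # category value -> child Node
        self.label = label       # majority class (used for leaves and unseen categories)

class CategoricalDecisionTree:
    def __init__(self, max_depth=5):
        self.max_depth = max_depth

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.root = self._grow(X, y, depth=0, features=list(range(X.shape[1])))
        return self

    def _grow(self, X, y, depth, features):
        majority = Counter(y).most_common(1)[0][0]
        # Stop if the node is pure, the depth budget is spent, or no features remain
        if depth >= self.max_depth or len(np.unique(y)) == 1 or not features:
            return Node(label=majority)
        gains = [information_gain(X[:, f], y) for f in features]
        if max(gains) <= 0:
            return Node(label=majority)
        best = features[int(np.argmax(gains))]   # greedy choice: highest information gain
        node = Node(feature=best, label=majority)
        remaining = [f for f in features if f != best]
        for v in np.unique(X[:, best]):
            mask = X[:, best] == v
            node.children[v] = self._grow(X[mask], y[mask], depth + 1, remaining)
        return node

    def _predict_one(self, x, node):
        while node.feature is not None:
            child = node.children.get(x[node.feature])
            if child is None:        # category not seen at this node during training
                return node.label
            node = child
        return node.label

    def predict(self, X):
        return np.array([self._predict_one(x, self.root) for x in np.asarray(X)])
```

For the scikit-learn comparison, one reasonable setup is to encode the same categorical columns with sklearn.preprocessing.OrdinalEncoder (or one-hot encoding), fit sklearn.tree.DecisionTreeClassifier(criterion="entropy", max_depth=...) with matching hyper-parameters, and compare accuracy on the same train/test split. Note that scikit-learn grows binary trees over the encoded values rather than multiway categorical splits, so the two models will not match exactly.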