Given the following data for a binary classification problem (including a "ones column" pre-pended to the data) and initial weights for a logistic regression, answer the questions below.

Posted: Tue Jul 05, 2022 10:25 am

Data (one sample per row):

x0   x1   x2   y
 1    2    2   1
 1    1   -1   0
 1   -3   -2   1
 1   -3    0   1

Initial weights:

w0   w1   w2
 1   -5    0
Part 1: Compute the output of the logistic regression

Assuming a threshold of 0.5, compute z_i, P(y_i = 1 | x_i), and ŷ_i (0 or 1) for each sample, then indicate whether the output of the classifier is correct or not.

x0   x1   x2   y  |  z       P(y = 1 | x)   ŷ (0 or 1)   Correct?
 1    2    2   1  |  ____    ____           ____         Yes / No
 1    1   -1   0  |  ____    ____           ____         Yes / No
 1   -3   -2   1  |  ____    ____           ____         Yes / No
 1   -3    0   1  |  ____    ____           ____         Yes / No
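Part 1 can be checked mechanically: compute z_i as the dot product of the weight vector with each (ones-column-prepended) sample, push it through the logistic function, and threshold at 0.5. A minimal sketch follows; note that the data rows and the initial weight vector (1, -5, 0) are reconstructed from the garbled table in the post, so treat those values as assumptions.

```python
import math

# Reconstructed from the problem statement (treat these values as assumptions):
# each row is (x0, x1, x2) with the ones column prepended; y holds the labels.
X = [(1, 2, 2), (1, 1, -1), (1, -3, -2), (1, -3, 0)]
y = [1, 0, 1, 1]
w = (1, -5, 0)  # assumed initial weight vector (w0, w1, w2)

def sigmoid(z):
    """Logistic function: P(y = 1 | x) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

results = []
for xi, yi in zip(X, y):
    z = sum(wj * xj for wj, xj in zip(w, xi))  # z_i = <w, x_i>
    p = sigmoid(z)                             # P(y_i = 1 | x_i)
    y_hat = 1 if p >= 0.5 else 0               # apply the 0.5 threshold
    results.append((z, p, y_hat, y_hat == yi))
    print(f"z = {z:4d}  P(y=1|x) = {p:.3f}  y_hat = {y_hat}  correct = {y_hat == yi}")
```

Since the threshold 0.5 corresponds exactly to z = 0, you can also read ŷ_i directly off the sign of z_i without evaluating the sigmoid.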
Part 2: Update weights using gradient descent

The logistic regression learns the coefficient vector w that minimizes the binary cross-entropy loss function

L(w) = -(1/n) Σ_{i=1}^{n} [ y_i log( 1 / (1 + e^{-⟨w, x_i⟩}) ) + (1 - y_i) log( e^{-⟨w, x_i⟩} / (1 + e^{-⟨w, x_i⟩}) ) ]

Then, to minimize this loss function, the gradient descent update rule is

w_{k+1} = w_k + (α/n) Σ_{i=1}^{n} ( y_i - 1 / (1 + e^{-⟨w_k, x_i⟩}) ) x_i

For the data and initial weight vector given above, compute the binary cross-entropy loss:

L = ______ (3 digits after decimal)

Then compute the new weight vector if α = 0.3:

w0 = ______   w1 = ______   w2 = ______ (3 digits after decimal)

and the binary cross-entropy loss for this new weight vector:

L = ______ (3 digits after decimal)
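Part 2 is one evaluation of the loss, one gradient step, and one re-evaluation. The sketch below implements the mean binary cross-entropy and the update w ← w + (α/n) Σ (y_i - σ(⟨w, x_i⟩)) x_i; as in Part 1, the data rows and initial weights are a reconstruction of the garbled table, and the 1/n factor in the update is an assumption (some course versions sum without averaging, which only rescales α).

```python
import math

# Reconstructed data and initial weights (assumptions; the original table is
# garbled): rows are (x0, x1, x2), y holds the labels, w the initial weights.
X = [(1, 2, 2), (1, 1, -1), (1, -3, -2), (1, -3, 0)]
y = [1, 0, 1, 1]
w = [1.0, -5.0, 0.0]
alpha = 0.3   # learning rate given in the problem
n = len(X)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(weights):
    """Mean binary cross-entropy: -(1/n) Σ [y log p + (1 - y) log(1 - p)]."""
    total = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(weights, xi)))
        total += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return -total / n

loss_before = bce_loss(w)

# One gradient-descent step: w <- w + (alpha/n) * Σ_i (y_i - p_i) * x_i
p = [sigmoid(sum(wj * xj for wj, xj in zip(w, xi))) for xi in X]
w_new = [wj + (alpha / n) * sum((y[i] - p[i]) * X[i][j] for i in range(n))
         for j, wj in enumerate(w)]

loss_after = bce_loss(w_new)
print("loss before:", round(loss_before, 3))
print("new weights:", [round(v, 3) for v in w_new])
print("loss after: ", round(loss_after, 3))
```

A useful sanity check: a single step with a moderate learning rate should reduce the loss, so loss_after should come out strictly smaller than loss_before.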