Posted: Mon May 23, 2022 10:30 am
(1c) An artificial neuron with a linear activation function, two inputs and a bias is trained using the rules:

    w^(t+1) = w^t + x^t,  if x belongs to class-1 and (w^t)^T x^t <= 0
    w^(t+1) = w^t - x^t,  if x belongs to class-2 and (w^t)^T x^t >= 0
    w^(t+1) = w^t,        otherwise

© Birkbeck College 2021 COTY06517

In the above expressions, w is the weight vector, t denotes the iteration (presentation) number, and x is the feature vector as shown in the following table:

    x1   x2   classification
    -1    0   class-1
     0   -1   class-1
     1    3   class-2
     1    0   class-2

All weights are zero at the start, i.e. w^0 = [0, 0, 0]^T, where T indicates the transpose of the vector. The first zero element of this vector is the initial weight of the first input x1, the second zero is the initial weight of the second input x2, and the last zero element is the weight of the bias term, which always receives an input equal to 1. The neuron is trained by feeding in the input vectors in the order presented in the table, i.e. t = 0, 1, 2, 3. For example, starting from the top, at t = 0 the first input pattern is [-1, 0] and the correct classification for this input vector is "class-1".

(1c1) What would the weights be after the presentation of each one of the input patterns? Show and explain all your calculations. (5 marks)

(1c2) What would the weights be if you continued training for three more iterations, i.e. t = 4, 5, 6? What is the weight vector at the end of training (t = 1-6)? Show and explain all your calculations. (5 marks)

(1c3) Has the training algorithm reached convergence at t = 6? Explain your view. What is the equation of the line that defines the decision boundary of this neuron at t = 6? Show and explain all your calculations. (10 marks)
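For anyone who wants to check their hand calculations, here is a minimal sketch of the update rule in plain Python. Note that the training set below is my reconstruction of the garbled table (two class-1 patterns, two class-2 patterns), and the bias is handled by appending a constant input of 1 to each pattern, as the question describes:

```python
# Perceptron-style training with the rule from the question:
#   w <- w + x  if x is class-1 and w^T x <= 0
#   w <- w - x  if x is class-2 and w^T x >= 0
#   w unchanged otherwise
# ASSUMPTION: the pattern list is reconstructed from the garbled table.

patterns = [
    ((-1, 0), 1),  # class-1
    ((0, -1), 1),  # class-1
    ((1, 3), 2),   # class-2
    ((1, 0), 2),   # class-2
]

w = [0, 0, 0]  # weights for x1, x2 and the bias input (which is always 1)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

for t in range(7):  # presentations t = 0 .. 6, cycling through the table
    (x1, x2), cls = patterns[t % len(patterns)]
    x = (x1, x2, 1)  # append the constant bias input
    s = dot(w, x)
    if cls == 1 and s <= 0:
        w = [wi + xi for wi, xi in zip(w, x)]  # w <- w + x
    elif cls == 2 and s >= 0:
        w = [wi - xi for wi, xi in zip(w, x)]  # w <- w - x
    # otherwise w is left unchanged
    print(f"t={t}: x={x}, w^T x={s}, new w={w}")
```

Printing the weight vector after every presentation lets you compare each step against your worked answer for (1c1) and (1c2); the final `w` is the weight vector at the end of training, from which the decision boundary w1*x1 + w2*x2 + w3 = 0 in (1c3) can be read off.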