(1c) An artificial neuron with a linear activation function, two inputs and a bias is trained using the rules: w^(i+1) = w^i +



Post by answerhappygod »

(1c) An artificial neuron with a linear activation function, two inputs and a bias is trained using the rules:

w^(i+1) = w^i + x^i, if x^i belongs to class-1 and the transpose of w^i multiplied by x^i is less than or equal to zero;
w^(i+1) = w^i - x^i, if x^i belongs to class-2 and the transpose of w^i multiplied by x^i is greater than or equal to zero;
w^(i+1) = w^i, otherwise.

[EXAM CONTINUES ON NEXT PAGE] © Birkbeck College 2021 COTY06517 Page 2 of 5

In the above expressions, w is the weight vector, i denotes the iteration, and x is the feature vector as shown in the following table:

x1   x2   classification
-1    0   class-1
 0    0   class-1
 1    3   class-2
 1    0   class-2

All weights are zero at the start, i.e. w^0 = [0, 0, 0]^T, where T indicates the transpose of the vector. The first zero element in this vector is the initial weight of the first input x1, the second zero is the initial weight of the second input x2, and the last zero element is the weight of the bias term, which always receives an input equal to 1. The neuron is trained by feeding the input vectors in the order presented in the table, i.e. i = 0, 1, 2, 3. For example, starting from the top, at i = 0 the first input pattern is [-1, 0] and the correct classification for this input vector is "class-1".

(1c1) What would the weights be after the presentation of each one of the input patterns? Show and explain all your calculations. (5 marks)

(1c2) What would the weights be if you continue training for three more iterations, i.e. i = 4, 5, 6? What is the weight vector at the end of training (i = 0-6)? Show and explain all your calculations. (5 marks)

(1c3) Has the training algorithm reached convergence at i = 6? Explain your view. What is the equation of the line that defines the decision boundary of this neuron at i = 6? Show and explain all your calculations. (10 marks)
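Since the feature table above had to be reconstructed from a garbled scan, here is a minimal Python sketch of the stated update rule that you can rerun with corrected pattern values if they differ. It cycles through the patterns in table order, appends the constant bias input of 1, and applies the add/subtract/keep rule from the question:

```python
# Sketch of the perceptron-style update rule from the question.
# ASSUMPTION: the (x1, x2) patterns and class labels below follow the
# reconstructed table; substitute the true values if the source differs.

def train(patterns, w, iterations):
    """Present `iterations` input vectors, cycling through `patterns`
    in table order, and apply the stated update rule at each step."""
    history = [list(w)]
    for i in range(iterations):
        x, cls = patterns[i % len(patterns)]
        x = x + [1]  # the bias term always receives an input equal to 1
        dot = sum(wi * xi for wi, xi in zip(w, x))  # (w^i)^T x^i
        if cls == 1 and dot <= 0:
            w = [wi + xi for wi, xi in zip(w, x)]   # w^(i+1) = w^i + x^i
        elif cls == 2 and dot >= 0:
            w = [wi - xi for wi, xi in zip(w, x)]   # w^(i+1) = w^i - x^i
        # otherwise w^(i+1) = w^i (no update)
        history.append(list(w))
    return w, history

# Reconstructed table: ([x1, x2], class label)
patterns = [([-1, 0], 1), ([0, 0], 1), ([1, 3], 2), ([1, 0], 2)]

w, history = train(patterns, [0, 0, 0], 7)  # presentations i = 0..6
print(w)  # -> [-2, -3, 1] with the assumed patterns
```

With these assumed patterns the weights settle at [w1, w2, bias] = [-2, -3, 1], every pattern is then classified correctly, and the decision boundary is the line -2*x1 - 3*x2 + 1 = 0; the intermediate vectors in `history` give the per-presentation answers asked for in (1c1) and (1c2).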