What is a Multilayer Network? A network with more than one layer is called a multilayer network.



Post by answerhappygod »

Please consider using Python to solve this question.
Thank you!
What is a Multilayer Network?

A network with more than one layer is called a multilayer network.

[Figure: a three-layer network. The input p is an (R x 1) vector; the first layer has S^1 neurons, the second layer S^2 neurons, and the third layer S^3 neurons.]

a^1 = f^1(W^1 p + b^1)
a^2 = f^2(W^2 a^1 + b^2)
a^3 = f^3(W^3 a^2 + b^3)
a^3 = f^3(W^3 f^2(W^2 f^1(W^1 p + b^1) + b^2) + b^3)

In a multilayer network, the first layer has R inputs and S^1 neurons. Therefore, W^1 is an (S^1 x R) matrix, because we have R inputs for each neuron. The bias b^1 is an (S^1 x 1) vector, because we have one bias per neuron. n^1 and a^1 are also (S^1 x 1) vectors, because we have one output for each neuron.

The second layer has S^1 inputs and S^2 neurons. Therefore, W^2 is an (S^2 x S^1) matrix, because we have S^1 inputs for each neuron. The bias b^2 is an (S^2 x 1) vector, because we have one bias per neuron. n^2 and a^2 are also (S^2 x 1) vectors, because we have one output for each neuron.

The third layer has S^2 inputs and S^3 neurons. Therefore, W^3 is an (S^3 x S^2) matrix, because we have S^2 inputs for each neuron. The bias b^3 is an (S^3 x 1) vector, because we have one bias per neuron. n^3 and a^3 are also (S^3 x 1) vectors, because we have one output for each neuron.

In this lab, we will implement a two-layer network, depicted in the figure below. We will start by implementing each layer separately, then visualize the network output as we change the values of W and b.

[Figure: a two-layer network. The input p feeds a log-sigmoid layer with two neurons (weights w^1_{1,1}, w^1_{2,1} and biases b^1_1, b^1_2); its outputs a^1_1, a^1_2 feed a linear layer with one neuron (weights w^2_{1,1}, w^2_{1,2} and bias b^2).]

a^1 = logsig(W^1 p + b^1)
a^2 = purelin(W^2 a^1 + b^2)

Q1. Implement the first layer and name it "logsiglayer". The function takes the following parameters: W^1, p and b^1. W is the weight matrix, p is the input and b is the bias. The layer computes the output, given the input, using the following formula. [2 marks]

a^1 = logsig(W^1 p + b^1)

Q2. Implement the second layer and name it "linearlayer". The function takes the following parameters: W^2, a^1 and b^2. W is the weight matrix, a is the input and b is the bias. The layer computes the output, given the input, using the following formula. [1 mark]

a^2 = purelin(W^2 a^1 + b^2)
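A minimal sketch of the two layer functions for Q1 and Q2, assuming NumPy is available; the names logsiglayer and linearlayer come from the question, and treating p, b and a as column vectors is our own convention, chosen to match the (S x 1) shapes described above:

```python
import numpy as np

def logsiglayer(W1, p, b1):
    """First layer (Q1): a1 = logsig(W1 p + b1),
    where logsig(n) = 1 / (1 + exp(-n)) applied element-wise."""
    n1 = np.dot(W1, p) + b1          # net input of the first layer
    return 1.0 / (1.0 + np.exp(-n1))

def linearlayer(W2, a1, b2):
    """Second layer (Q2): a2 = purelin(W2 a1 + b2);
    purelin is the identity, so the output equals the net input."""
    return np.dot(W2, a1) + b2
```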

Q3. Let the network have the following values: w^1_{1,1} = 10, w^1_{2,1} = 10, b^1_1 = -10, b^1_2 = 10, w^2_{1,1} = 1, w^2_{1,2} = 1, b^2 = 0.

1. Plot the output of the network. To do so, follow the steps below:
   1. Set the weight and bias values as given above.
   2. Let p (the input) be between -2 and 2.
   3. Pass the input p to layer 1, to get a1 (the output of the first layer).
   4. Pass a1 to layer 2, to get a2 (the output of the second layer).
   5. Plot p vs a2.
   [2 marks]

2. Verify your code by computing the network output by hand. To do so, follow the steps below:
   1. Use the weight and bias values as given previously.
   2. For each value of p, compute by hand the following:
      a. a^1 = logsig(W^1 p + b^1)
      b. a^2 = purelin(W^2 a^1 + b^2)
   3. Report the values you got by hand and from your Python code in the table below.
   [2 marks]

   p      a2 (from code)   a2 (by hand)
   -2     ?                ?
   -1     ?                ?
   0      ?                ?
   0.5    ?                ?
   1      ?                ?
   1.5    ?                ?
   2      ?                ?

Q4. Test the effect of changing b^1_1, and plot the results. To do so, follow the steps below:
1. Set the weight and bias values as given previously.
2. Let b^1_1 vary over 0 <= b^1_1 <= 20.
3. For each value of b^1_1, repeat:
   a. Let p (the input) be between -2 and 2.
   b. Pass the input p to layer 1, to get a1 (the output of the first layer).
   c. Pass a1 to layer 2, to get a2 (the output of the second layer).
   d. Plot p vs a2.
4. What changes does b^1_1 have on the network output? [2 marks]

Q5. Repeat the same for b^1_2. What changes does b^1_2 have on the network output? [2 marks]
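A sketch for the Q3 plot and for filling the comparison table, assuming the logsiglayer and linearlayer sketches above plus Matplotlib; the 100-point grid for p is our own choice, since the question only requires p to lie between -2 and 2:

```python
import numpy as np
import matplotlib.pyplot as plt

# Weight and bias values given in Q3.
W1 = np.array([[10.0],    # w1_{1,1}
               [10.0]])   # w1_{2,1}
b1 = np.array([[-10.0],   # b1_1
               [10.0]])   # b1_2
W2 = np.array([[1.0, 1.0]])  # w2_{1,1}, w2_{1,2}
b2 = np.array([[0.0]])       # b2

# Sweep the input p over [-2, 2] and collect the network output a2.
p_values = np.linspace(-2, 2, 100)
a2_values = []
for p in p_values:
    a1 = logsiglayer(W1, np.array([[p]]), b1)  # first-layer output, shape (2, 1)
    a2 = linearlayer(W2, a1, b2)               # second-layer output, shape (1, 1)
    a2_values.append(a2.item())

plt.plot(p_values, a2_values)
plt.xlabel("p")
plt.ylabel("a2")
plt.title("Network output vs input (Q3)")
plt.show()

# Values for the comparison table; compute the same by hand to check.
for p in [-2, -1, 0, 0.5, 1, 1.5, 2]:
    a1 = logsiglayer(W1, np.array([[p]]), b1)
    a2 = linearlayer(W2, a1, b2)
    print(f"p = {p:4}:  a2 = {a2.item():.4f}")
```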

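For Q4 and Q5, one way to see the effect of a bias is to sweep it and overlay the resulting p-vs-a2 curves. A sketch for the b^1_1 sweep of Q4, again assuming the layer functions sketched above; the five sweep values are our own choice within the stated range 0 to 20:

```python
import numpy as np
import matplotlib.pyplot as plt

# Q4 sketch: vary b1_1 over [0, 20] while keeping the other Q3 values fixed.
# Assumes logsiglayer / linearlayer from the Q1/Q2 sketch above.
p_values = np.linspace(-2, 2, 100)
for b11 in np.linspace(0, 20, 5):
    W1 = np.array([[10.0], [10.0]])
    b1 = np.array([[b11], [10.0]])   # only b1_1 changes
    W2 = np.array([[1.0, 1.0]])
    b2 = np.array([[0.0]])
    a2_values = [linearlayer(W2, logsiglayer(W1, np.array([[p]]), b1), b2).item()
                 for p in p_values]
    plt.plot(p_values, a2_values, label=f"b1_1 = {b11:g}")

plt.xlabel("p")
plt.ylabel("a2")
plt.legend()
plt.title("Effect of b1_1 on the network output (Q4)")
plt.show()

# Q5 is the same loop with b1[1, 0] (that is, b1_2) varied instead of b1[0, 0].
```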
Q6. Test the effect of changing w^1_{1,1}, and plot the results. To do so, follow the steps below:
1. Set the weight and bias values as given previously.
2. Let w^1_{1,1} vary over -1 <= w^1_{1,1} <= 1.
3. For each value of w^1_{1,1}, repeat:
   a. Let p (the input) be between -2 and 2.
   b. Pass the input p to layer 1, to get a1 (the output of the first layer).
   c. Pass a1 to layer 2, to get a2 (the output of the second layer).
   d. Plot p vs a2.
4. What changes does w^1_{1,1} have on the network output? [2 marks]

Q7. Repeat the same for w^1_{2,1}. What changes does w^1_{2,1} have on the network output? [2 marks]
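The weight sweeps of Q6 and Q7 follow the same pattern as the bias sweep. A sketch for Q6, again assuming the layer functions sketched above; the five sweep values are our own choice within the stated range -1 to 1:

```python
import numpy as np
import matplotlib.pyplot as plt

# Q6 sketch: vary w1_{1,1} over [-1, 1] while keeping the other Q3 values fixed.
# Assumes logsiglayer / linearlayer from the Q1/Q2 sketch above.
p_values = np.linspace(-2, 2, 100)
for w11 in np.linspace(-1, 1, 5):
    W1 = np.array([[w11], [10.0]])   # only w1_{1,1} changes
    b1 = np.array([[-10.0], [10.0]])
    W2 = np.array([[1.0, 1.0]])
    b2 = np.array([[0.0]])
    a2_values = [linearlayer(W2, logsiglayer(W1, np.array([[p]]), b1), b2).item()
                 for p in p_values]
    plt.plot(p_values, a2_values, label=f"w1_11 = {w11:g}")

plt.xlabel("p")
plt.ylabel("a2")
plt.legend()
plt.title("Effect of w1_11 on the network output (Q6)")
plt.show()

# Q7 is the same loop with W1[1, 0] (that is, w1_{2,1}) varied instead of W1[0, 0].
```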