COMP3340 assignment help: custom Python/Java programming

時(shí)間:2025-03-15  來源:合肥網(wǎng)hfw.cc  作者:hfw.cc 我要糾錯(cuò)



COMP3340 Applied Deep Learning The University of Hong Kong
Assignment 1
Feb 2025
Question 1 - XOR Approximation
We consider the problem of designing a feedforward neural network to approximate the XOR
function. Specifically, for any input points (x1, x2), x1, x2 ∈ {0, 1}, the output of the network
should be approximately equal to x1 ⊕ x2. Suppose the network has two input neurons, one
hidden layer with two neurons, and an output layer with one neuron, as shown in Figure 1.
The activation function for all neurons is the Sigmoid function defined as σ(z) = 1 / (1 + e^(−z)).
(a) Please provide the specific values for the parameters in your designed network. Demonstrate how your network approximates the XOR function (Table 1) by performing forward propagation on the given inputs (x1, x2), x1, x2 ∈ {0, 1} (a sketch of one possible parameter choice follows Figure 1 below).
(b) If we need the neural network to approximate the XNOR function (Table 1), how should
we modify the output neuron without altering the neurons in the hidden layer?
x1   x2   x1 ⊕ x2 (XOR)   x1 ⊙ x2 (XNOR)
0    0    0                1
0    1    1                0
1    0    1                0
1    1    0                1
Table 1: XOR and XNOR Value Table
Figure 1: Network structure and the notation of parameters
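A minimal Python sketch for parts (a) and (b), under one possible (by no means unique) parameter choice: hidden unit h1 is tuned to act as a soft OR, h2 as a soft AND, and the output combines them as "OR and not AND", which is XOR. The specific magnitudes (±20 weights, biases −10/−30) are illustrative assumptions, not the required solution. For XNOR, only the output neuron's weights and bias are negated; the hidden layer is untouched.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One possible parameter choice (illustrative, not the only valid answer):
# h1 ~ OR(x1, x2), h2 ~ AND(x1, x2), output ~ (OR and not AND) = XOR.
def xor_net(x1, x2):
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # ~OR
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)    # ~AND
    return sigmoid(20 * h1 - 20 * h2 - 10)  # ~XOR

# Part (b): keep the hidden layer, negate the output neuron's weights and bias
# so the same hidden features are recombined into XNOR instead.
def xnor_net(x1, x2):
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)
    return sigmoid(-20 * h1 + 20 * h2 + 10)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, round(xor_net(x1, x2), 3), round(xnor_net(x1, x2), 3))

Running this check prints values close to 0 or 1 that match both columns of Table 1.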
Question 2 - Backpropagation
We consider the problem of the forward pass and backpropagation in a neural network whose
structure is shown in Figure 1. The network parameters are initialized as w1 = 1, w2 = −2, w3 = 2, w4 = −1, w5 = 1, w6 = 1, b1 = b2 = b3 = 0. The activation function for all neurons is the Sigmoid function defined as σ(z) = 1 / (1 + e^(−z)).
(a) Suppose the input sample is (1, 2) and the ground truth label is 0.1. Please compute
the output y of the network.
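A minimal forward-pass sketch for part (a), assuming the wiring w1, w2 → hidden neuron h1, w3, w4 → hidden neuron h2, and w5, w6 → the output neuron (the exact assignment of weights to edges depends on Figure 1, so treat this layout as an assumption):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed wiring (the exact mapping of w1..w6 to edges depends on Figure 1):
#   h1 = sigmoid(w1*x1 + w2*x2 + b1)
#   h2 = sigmoid(w3*x1 + w4*x2 + b2)
#   y  = sigmoid(w5*h1 + w6*h2 + b3)
w1, w2, w3, w4, w5, w6 = 1, -2, 2, -1, 1, 1
b1 = b2 = b3 = 0

x1, x2 = 1, 2
h1 = sigmoid(w1 * x1 + w2 * x2 + b1)   # sigmoid(-3)
h2 = sigmoid(w3 * x1 + w4 * x2 + b2)   # sigmoid(0) = 0.5
y = sigmoid(w5 * h1 + w6 * h2 + b3)
print("h1 =", h1, "h2 =", h2, "y =", y)

Under this assumed wiring the hidden pre-activations are −3 and 0, giving y ≈ 0.63; a different wiring in Figure 1 changes the numbers accordingly.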
(b) Suppose we use the Mean Squared Error (MSE) loss. Please compute the loss value for
the sample (1, 2) and its gradient with respect to the network parameters using the chain rule.
Report the final answers to 3 significant figures.
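A sketch of the part (b) chain-rule computation under the same assumed wiring as above, taking the single-sample MSE as L = (y − t)^2 (if the course convention includes a 1/2 factor, all gradients scale by 1/2):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed wiring as in the forward-pass sketch above (depends on Figure 1).
w1, w2, w3, w4, w5, w6 = 1, -2, 2, -1, 1, 1
b1 = b2 = b3 = 0
x1, x2, t = 1, 2, 0.1

# Forward pass (cache the activations needed for the backward pass).
h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
h2 = sigmoid(w3 * x1 + w4 * x2 + b2)
y = sigmoid(w5 * h1 + w6 * h2 + b3)
L = (y - t) ** 2

# Backward pass via the chain rule; sigmoid'(z) = s * (1 - s) with s = sigmoid(z).
dL_dy = 2 * (y - t)
dL_dz3 = dL_dy * y * (1 - y)
dL_dw5, dL_dw6, dL_db3 = dL_dz3 * h1, dL_dz3 * h2, dL_dz3

dL_dz1 = dL_dz3 * w5 * h1 * (1 - h1)
dL_dw1, dL_dw2, dL_db1 = dL_dz1 * x1, dL_dz1 * x2, dL_dz1

dL_dz2 = dL_dz3 * w6 * h2 * (1 - h2)
dL_dw3, dL_dw4, dL_db2 = dL_dz2 * x1, dL_dz2 * x2, dL_dz2

print(f"loss = {L:.3g}")
for name, g in [("w1", dL_dw1), ("w2", dL_dw2), ("w3", dL_dw3),
                ("w4", dL_dw4), ("w5", dL_dw5), ("w6", dL_dw6),
                ("b1", dL_db1), ("b2", dL_db2), ("b3", dL_db3)]:
    print(f"dL/d{name} = {g:.3g}")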
(c) Suppose we use stochastic gradient descent (SGD) with a learning rate of α = 0.1.
Please specify the parameters of the network after one step of gradient descent, using the
gradient computed in (b). Please also specify the prediction value and the corresponding loss
of the new network on the same input (1, 2).
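For part (c), a self-contained sketch of one SGD step with α = 0.1. Instead of re-deriving the analytic gradients from (b), it uses central-difference gradients as a stand-in and numerical check, again under the assumed wiring above; the hand-computed gradients should match these to several digits.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed wiring, as in the earlier sketches (depends on Figure 1).
def forward(p, x1, x2):
    h1 = sigmoid(p["w1"] * x1 + p["w2"] * x2 + p["b1"])
    h2 = sigmoid(p["w3"] * x1 + p["w4"] * x2 + p["b2"])
    return sigmoid(p["w5"] * h1 + p["w6"] * h2 + p["b3"])

def loss(p, x1, x2, t):
    return (forward(p, x1, x2) - t) ** 2   # single-sample MSE

params = {"w1": 1, "w2": -2, "w3": 2, "w4": -1, "w5": 1, "w6": 1,
          "b1": 0, "b2": 0, "b3": 0}
x1, x2, t, alpha, eps = 1, 2, 0.1, 0.1, 1e-6

# Central-difference gradients, then one SGD step: p <- p - alpha * dL/dp.
grads = {}
for k in params:
    hi, lo = dict(params), dict(params)
    hi[k] += eps
    lo[k] -= eps
    grads[k] = (loss(hi, x1, x2, t) - loss(lo, x1, x2, t)) / (2 * eps)

new_params = {k: params[k] - alpha * grads[k] for k in params}
print("updated parameters:", new_params)
print("new prediction:", forward(new_params, x1, x2))
print("new loss:", loss(new_params, x1, x2, t))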

Please add QQ: 99515681   Email: [email protected]   WeChat: codinghelp

掃一掃在手機(jī)打開當(dāng)前頁
  • 上一篇:被小豬應(yīng)急強(qiáng)制下款怎么辦?怎么聯(lián)系米來花客服?
  • 下一篇:CE860代做,、代寫C/C++編程設(shè)計(jì)
  • 無相關(guān)信息
    合肥生活資訊

    合肥圖文信息
    出評 開團(tuán)工具
    出評 開團(tuán)工具
    挖掘機(jī)濾芯提升發(fā)動機(jī)性能
    挖掘機(jī)濾芯提升發(fā)動機(jī)性能
    戴納斯帝壁掛爐全國售后服務(wù)電話24小時(shí)官網(wǎng)400(全國服務(wù)熱線)
    戴納斯帝壁掛爐全國售后服務(wù)電話24小時(shí)官網(wǎng)
    菲斯曼壁掛爐全國統(tǒng)一400售后維修服務(wù)電話24小時(shí)服務(wù)熱線
    菲斯曼壁掛爐全國統(tǒng)一400售后維修服務(wù)電話2
    美的熱水器售后服務(wù)技術(shù)咨詢電話全國24小時(shí)客服熱線
    美的熱水器售后服務(wù)技術(shù)咨詢電話全國24小時(shí)
    海信羅馬假日洗衣機(jī)亮相AWE  復(fù)古美學(xué)與現(xiàn)代科技完美結(jié)合
    海信羅馬假日洗衣機(jī)亮相AWE 復(fù)古美學(xué)與現(xiàn)代
    合肥機(jī)場巴士4號線
    合肥機(jī)場巴士4號線
    合肥機(jī)場巴士3號線
    合肥機(jī)場巴士3號線
  • 上海廠房出租 短信驗(yàn)證碼 酒店vi設(shè)計(jì)