Neural Networks: Learning (Part 1)

Cost Function

For neural network models we introduce some new notation:

  • L: the total number of layers in the network;
  • s_l: the number of activation units in layer l (not counting the bias unit);
  • s_L: the number of activation units in the output layer;
  • K: the number of output classes.

For regularized logistic regression, the cost function J(θ) is:
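$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

(where m is the number of training examples, n the number of features, and λ the regularization parameter)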

In logistic regression there is only a single output variable y, but in a neural network the output is a K-dimensional vector. The cost function J(Θ) is therefore generalized to:
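$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[y_k^{(i)}\log\big(h_\Theta(x^{(i)})\big)_k + \big(1-y_k^{(i)}\big)\log\Big(1-\big(h_\Theta(x^{(i)})\big)_k\Big)\Big] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\Big(\Theta_{j,i}^{(l)}\Big)^2$$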

Supplementary Notes
Cost Function

Let's first define a few variables that we will need to use:

  • L = total number of layers in the network
  • sl = number of units (not counting bias unit) in layer l
  • K = number of output units/classes

Recall that in neural networks, we may have many output nodes. We denote hΘ(x)k as being a hypothesis that results in the kth output. Our cost function for neural networks is going to be a generalization of the one we used for logistic regression. Recall that the cost function for regularized logistic regression was:
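$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log h_\theta(x^{(i)}) + \big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$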

For neural networks, it is going to be slightly more complicated:
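$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Big[y_k^{(i)}\log\big(h_\Theta(x^{(i)})\big)_k + \big(1-y_k^{(i)}\big)\log\Big(1-\big(h_\Theta(x^{(i)})\big)_k\Big)\Big] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\Big(\Theta_{j,i}^{(l)}\Big)^2$$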

We have added a few nested summations to account for our multiple output nodes. In the first part of the equation, before the square brackets, we have an additional nested summation that loops through the number of output nodes.

In the regularization part, after the square brackets, we must account for multiple theta matrices. The number of columns in our current theta matrix is equal to the number of nodes in our current layer (including the bias unit). The number of rows in our current theta matrix is equal to the number of nodes in the next layer (excluding the bias unit). As before with logistic regression, we square every term.

Note:

  • the double sum simply adds up the logistic regression costs calculated for each cell in the output layer
  • the triple sum simply adds up the squares of all the individual Θs in the entire network
  • the i in the triple sum does not refer to training example i
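
As a concrete illustration, the cost above can be written out in a few lines of NumPy. This is a minimal sketch assuming sigmoid activations, one-hot labels Y of shape (m, K), and a list of weight matrices Thetas whose first column holds the bias weights; the name nn_cost and the variable names are illustrative, not taken from the course materials.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_cost(Thetas, X, Y, lam):
    """Regularized neural-network cost J(Theta).

    Thetas : list of weight matrices, Thetas[l] has shape (s_{l+1}, s_l + 1)
    X      : (m, n) inputs, Y : (m, K) one-hot labels, lam : lambda
    """
    m = X.shape[0]
    # Forward propagation through every layer.
    a = X
    for Theta in Thetas:
        a = np.hstack([np.ones((m, 1)), a])   # add the bias unit
        a = sigmoid(a @ Theta.T)
    h = a                                      # (m, K) hypothesis h_Theta(x)

    # Double sum over training examples and output units.
    cost = -np.sum(Y * np.log(h) + (1 - Y) * np.log(1 - h)) / m

    # Triple sum over all non-bias weights (column 0 holds the bias terms).
    reg = (lam / (2 * m)) * sum(np.sum(Theta[:, 1:] ** 2) for Theta in Thetas)
    return cost + reg
```

The element-wise products with Y and (1 - Y) implement the double sum over examples and output units, and the per-layer sum of squared non-bias weights implements the triple sum in the regularization term.
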
Backpropagation Algorithm

To compute hΘ(x) we use forward propagation: starting from the input layer, we compute each layer in turn until we reach the output layer.

Now, in order to compute the partial derivatives:
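$$\frac{\partial}{\partial \Theta_{ij}^{(l)}}J(\Theta)$$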

we use the backpropagation algorithm: starting from the output layer, we compute the error of each layer (the error being the difference between an activation unit's prediction a_k^(l) and the actual value y_k, for k = 1, …, K), working backwards until we reach the second layer. The first layer is the input layer; its values come directly from the training set, so it has no error term.

Suppose the training set contains only a single example and the network has four layers (L = 4), so layer 4 is the output layer.

Following the backpropagation algorithm, we start by computing the error of the output layer. Writing the error as δ, we have:
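$$\delta^{(4)} = a^{(4)} - y$$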

We then use δ(4) to compute the error of the third layer:
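$$\delta^{(3)} = \big(\Theta^{(3)}\big)^T\delta^{(4)} \;.*\; g'\big(z^{(3)}\big)$$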

where g'(z(l)), following the sigmoid derivative worked out in the gradient-descent discussion of Logistic Regression (Part 2), evaluates to:
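$$g'\big(z^{(l)}\big) = a^{(l)} \;.*\; \big(1-a^{(l)}\big)$$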

Finally, we use δ(3) to compute the error of the second layer:
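$$\delta^{(2)} = \big(\Theta^{(2)}\big)^T\delta^{(3)} \;.*\; g'\big(z^{(2)}\big)$$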

From these errors we can derive the partial derivatives of the cost function J(Θ):
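$$\frac{\partial}{\partial \Theta_{ij}^{(l)}}J(\Theta) = a_j^{(l)}\delta_i^{(l+1)}$$

(ignoring regularization, i.e. taking λ = 0)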

Taking regularization and the whole training set into account, we use Δij(l) to denote the accumulated error matrices. The procedure is as follows:
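Set Δij(l) := 0 for all l, i, j. Then, for each training example t = 1 to m:

  1. Set a(1) := x(t) and forward-propagate to compute a(l) for l = 2, 3, …, L;
  2. Compute δ(L) = a(L) − y(t), then back-propagate δ(L−1), …, δ(2);
  3. Accumulate Δ(l) := Δ(l) + δ(l+1) (a(l))^T.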

Once these steps are complete, we have the error matrices Δi,j(l).

We can then compute the partial derivatives of the cost function as follows:
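$$D_{ij}^{(l)} := \frac{1}{m}\Big(\Delta_{ij}^{(l)} + \lambda\,\Theta_{ij}^{(l)}\Big) \quad \text{if } j \neq 0$$

$$D_{ij}^{(l)} := \frac{1}{m}\Delta_{ij}^{(l)} \quad \text{if } j = 0$$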

Finally, we obtain:
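$$\frac{\partial}{\partial \Theta_{ij}^{(l)}}J(\Theta) = D_{ij}^{(l)}$$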

With this expression we can apply gradient descent or any other advanced optimization algorithm.
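
To make these steps concrete, here is a minimal NumPy sketch of the full procedure: it loops over the training examples, forward-propagates, computes the layer errors δ, accumulates the Δ matrices, and returns the regularized derivatives D. It assumes sigmoid activations, one-hot labels Y of shape (m, K), and a list of weight matrices Thetas whose first column holds the bias weights; the helper name nn_gradients and the variable names are illustrative, not from the course materials.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nn_gradients(Thetas, X, Y, lam):
    """Backpropagation: returns D[l] = dJ/dTheta[l] for every layer.

    Thetas : list of weight matrices, Thetas[l] has shape (s_{l+1}, s_l + 1)
    X      : (m, n) inputs, Y : (m, K) one-hot labels, lam : lambda
    """
    m = X.shape[0]
    Deltas = [np.zeros_like(T) for T in Thetas]      # accumulators, all zeros

    for t in range(m):
        # Forward propagation, keeping every activation (with its bias unit).
        a = [np.append(1.0, X[t])]                   # a(1) = x(t) plus bias
        for Theta in Thetas[:-1]:
            a.append(np.append(1.0, sigmoid(Theta @ a[-1])))
        a.append(sigmoid(Thetas[-1] @ a[-1]))        # output layer, no bias

        # Output-layer error: delta(L) = a(L) - y(t), then accumulate.
        delta = a[-1] - Y[t]
        Deltas[-1] += np.outer(delta, a[-2])

        # Propagate the error backwards through the hidden layers.
        for l in range(len(Thetas) - 2, -1, -1):
            delta = (Thetas[l + 1].T @ delta)[1:] * a[l + 1][1:] * (1 - a[l + 1][1:])
            Deltas[l] += np.outer(delta, a[l])

    # Average and add the regularization term (skip the bias column j = 0).
    D = [Delta / m for Delta in Deltas]
    for l, Theta in enumerate(Thetas):
        D[l][:, 1:] += (lam / m) * Theta[:, 1:]
    return D
```

Each D[l] returned here can be fed directly to gradient descent or to a more advanced optimizer, as noted above.
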

Supplementary Notes
Backpropagation Algorithm

"Backpropagation" is neural-network terminology for minimizing our cost function, just like what we were doing with gradient descent in logistic and linear regression. Our goal is to compute:

That is, we want to minimize our cost function J using an optimal set of parameters in theta. In this section we'll look at the equations we use to compute the partial derivative of J(Θ):
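$$\frac{\partial}{\partial \Theta_{i,j}^{(l)}}J(\Theta)$$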

To do so, we use the following algorithm:

Backpropagation Algorithm

Given training set {(x(1),y(1)), …, (x(m),y(m))}

  • Set Δi,j(l) := 0 for all (l,i,j), (hence you end up having a matrix full of zeros)

For training example t = 1 to m:

  1. Set a(1) := x(t)
  2. Perform forward propagation to compute a(l) for l = 2, 3, …, L
  3. Using y(t), compute δ(L) = a(L) − y(t)

Where L is our total number of layers and a(L) is the vector of outputs of the activation units for the last layer. So our "error values" for the last layer are simply the differences of our actual results in the last layer and the correct outputs in y. To get the delta values of the layers before the last layer, we can use an equation that steps us back from right to left:

  4. Compute δ(L−1), δ(L−2), …, δ(2) using δ(l) = ((Θ(l))^T δ(l+1)) .* a(l) .* (1 − a(l))

The delta values of layer l are calculated by multiplying the delta values in the next layer with the theta matrix of layer l. We then element-wise multiply that with a function called g', or g-prime, which is the derivative of the activation function g evaluated with the input values given by z(l).

The g-prime derivative terms can also be written out as:
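$$g'\big(z^{(l)}\big) = a^{(l)} \;.*\; \big(1-a^{(l)}\big)$$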

  5. Δi,j(l) := Δi,j(l) + aj(l) δi(l+1), or with vectorization, Δ(l) := Δ(l) + δ(l+1) (a(l))^T

Hence we update our new Δ matrix.

The capital-delta matrix D is used as an "accumulator" to add up our values as we go along and eventually compute our partial derivatives. Thus we get:
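$$D_{i,j}^{(l)} := \frac{1}{m}\Big(\Delta_{i,j}^{(l)} + \lambda\,\Theta_{i,j}^{(l)}\Big) \quad \text{if } j \neq 0$$

$$D_{i,j}^{(l)} := \frac{1}{m}\Delta_{i,j}^{(l)} \quad \text{if } j = 0$$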

Backpropagation Intuition

Recall the regularized cost function for a neural network, J(Θ), given in the Cost Function section above.

If we consider simple non-multiclass classification (k = 1) and disregard regularization, the cost is computed with:
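$$\text{cost}(t) = y^{(t)}\log\big(h_\Theta(x^{(t)})\big) + \big(1-y^{(t)}\big)\log\big(1-h_\Theta(x^{(t)})\big)$$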

Intuitively, δj(l) is the "error" for aj(l) (unit j in layer l). More formally, the delta values are actually the derivative of the cost function:
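$$\delta_j^{(l)} = \frac{\partial}{\partial z_j^{(l)}}\,\text{cost}(t)$$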

Recall that our derivative is the slope of a line tangent to the cost function, so the steeper the slope the more incorrect we are. Let us consider a small four-layer network and see how we could calculate some δj(l).

In such a network, to calculate δ2(2), we multiply the weights Θ12(2) and Θ22(2) by their respective δ values found one layer to the right. So we get δ2(2) = Θ12(2) δ1(3) + Θ22(2) δ2(3). To calculate every single possible δj(l), we start from the output layer on the right. We can think of the edges as our Θij. Going from right to left, to calculate the value of δj(l), you just take the overall sum of each weight times the δ it is coming from. Hence, another example would be δ2(3) = Θ12(3) δ1(4).
