This article was migrated from CSDN:
http://blog.csdn.net/mounty_fsc/article/details/51379395
This part dissects Caffe's Net::Backward() function, i.e., the backpropagation computation. The analysis takes the LeNet network as its starting point, and the network being debugged is the training network, which has 9 layers in total. For details of the layers, see Part 2 of (Caffe,LeNet) Initializing the Training Network (III).
This part does not cover the theory behind the backpropagation algorithm; the discussion below assumes some familiarity with it.
1 Entry Point
Net::Backward() calls BackwardFromTo, which invokes each layer's Backward in reverse order, from the last layer of the network down to the first.
void Net<Dtype>::BackwardFromTo(int start, int end) {
  for (int i = start; i >= end; --i) {
    if (layer_need_backward_[i]) {
      layers_[i]->Backward(
          top_vecs_[i], bottom_need_backward_[i], bottom_vecs_[i]);
      if (debug_info_) { BackwardDebugInfo(i); }
    }
  }
}
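For context, Net::Backward() itself essentially just delegates to BackwardFromTo over the whole layer range. A minimal sketch (simplified from Caffe's net.cpp, with the debug-info output omitted):
template <typename Dtype>
void Net<Dtype>::Backward() {
  // walk from the last layer (layers_.size() - 1) down to layer 0
  BackwardFromTo(layers_.size() - 1, 0);
}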
2 Layer 9: SoftmaxWithLossLayer
2.1 Code Analysis
The implementation is as follows:
template <typename Dtype>
void SoftmaxWithLossLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  // bottom_diff shape: 64*10
  Dtype* bottom_diff = bottom[0]->mutable_gpu_diff();
  // prob_data shape: 64*10
  const Dtype* prob_data = prob_.gpu_data();
  // top_data shape: (1)
  const Dtype* top_data = top[0]->gpu_data();
  // copy the Softmax predictions prob into bottom_diff
  caffe_gpu_memcpy(prob_.count() * sizeof(Dtype), prob_data, bottom_diff);
  // label shape: 64*1
  const Dtype* label = bottom[1]->gpu_data();
  // dim = 640 / 64 = 10
  const int dim = prob_.count() / outer_num_;
  // nthreads = 64 * 1 = 64
  const int nthreads = outer_num_ * inner_num_;
  // Since this memory is never used for anything else,
  // we use it to avoid allocating new GPU memory.
  Dtype* counts = prob_.mutable_gpu_diff();
  // This kernel subtracts 1 from the entry of bottom_diff (which at this point
  // holds the predicted probability of each class) at the correct class (label);
  // all other entries are left unchanged. See the derivation below.
  SoftmaxLossBackwardGPU<Dtype><<<CAFFE_GET_BLOCKS(nthreads),
      CAFFE_CUDA_NUM_THREADS>>>(nthreads, top_data, label, bottom_diff,
      outer_num_, dim, inner_num_, has_ignore_label_, ignore_label_, counts);
  // --- SoftmaxLossBackwardGPU expanded inline below (modified/simplified) ---
  __global__ void SoftmaxLossBackwardGPU(...) {
    CUDA_KERNEL_LOOP(index, nthreads) {
      const int label_value = static_cast<int>(label[index]);
      bottom_diff[index * dim + label_value] -= 1;
      counts[index] = 1;
    }
  }
  // --- end of expanded kernel ---
  Dtype valid_count = -1;
  // loss_weight is the diff at the loss output (usually 1 or 0), normalized by
  // dividing by the normalizer (64 here)
  const Dtype loss_weight = top[0]->cpu_diff()[0] /
                            get_normalizer(normalization_, valid_count);
  caffe_gpu_scal(prob_.count(), loss_weight, bottom_diff);
}
Notes:
- SoftmaxWithLossLayer has no learnable parameters (see Forward Computation (V)), so no parameters need to be adjusted in this layer; only bottom_diff has to be computed (in terms of the chain rule of backpropagation, bottom_diff is the derivative of the loss with respect to the previous layer's output, which is needed in turn to compute the weight updates of the earlier layers).
- The core of the code above is SoftmaxLossBackwardGPU. For each sample, this kernel subtracts 1 from the entry of bottom_diff (which at this point holds the predicted probability of each class) that corresponds to the correct class (label); all other entries are left unchanged. The notation and figures of the previous posts are used below to explain this, and a small CPU sketch follows.
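To make the kernel's effect concrete, here is a minimal CPU sketch of the same computation (the function name and signature are illustrative, not part of Caffe), assuming batch_size = 64 samples and num_classes = 10:
#include <vector>

// bottom_diff = prob; subtract 1 at the label; then scale by loss_weight.
void softmax_loss_backward_sketch(const std::vector<float>& prob,   // 64*10 probabilities
                                  const std::vector<int>& label,    // 64 class indices
                                  int batch_size, int num_classes,
                                  float loss_weight,                // top diff / normalizer
                                  std::vector<float>* bottom_diff) {
  *bottom_diff = prob;  // start from f(z), as caffe_gpu_memcpy does
  for (int n = 0; n < batch_size; ++n) {
    // subtract 1 at the ground-truth class: f(z_y) - 1
    (*bottom_diff)[n * num_classes + label[n]] -= 1.0f;
  }
  for (float& d : *bottom_diff) {
    d *= loss_weight;  // scale, as caffe_gpu_scal does
  }
}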
2.2 Derivation
- Notation
Let the input of the SoftmaxWithLoss layer be the vector $\mathbf{z}$, i.e. bottom_blob_data, which is the output of the previous layer. The output of the Softmax computation is the vector $\mathbf{f(z)}$, given by (omitting the normalization constant $m$) $f(z_k)=\frac{e^{z_k}}{\sum_i^n e^{z_i}}$. The final output of the SoftmaxWithLoss layer is $loss=\sum^n -\log f(z_y)$, where $y$ is the label of the sample. See Forward Computation (V).
- Backward derivation
Expanding the loss gives
$$loss=\log\sum_i^n e^{z_i}-z_y$$
so $\frac{\partial loss}{\partial \mathbf{z}}$ is:
$$
\frac{\partial loss}{\partial z_i}=
\left\{
\begin{aligned}
& f(z_y)-1, && i = y \\
& f(z_i), && i \ne y
\end{aligned}
\right.
$$
- Illustration (see the figure in the original post)
3 Layer 8: InnerProduct
3.1 Code Analysis
template <typename Dtype>
void InnerProductLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  // Gradient w.r.t. the parameters: top_diff^T * bottom_data = blobs_diff
  if (this->param_propagate_down_[0]) {
    const Dtype* top_diff = top[0]->gpu_diff();
    const Dtype* bottom_data = bottom[0]->gpu_data();
    // Gradient with respect to weight
    caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans, N_, K_, M_, (Dtype)1.,
        top_diff, bottom_data, (Dtype)1., this->blobs_[0]->mutable_gpu_diff());
  }
  // Gradient w.r.t. the bias: top_diff^T * bias_multiplier = blobs_diff
  if (bias_term_ && this->param_propagate_down_[1]) {
    const Dtype* top_diff = top[0]->gpu_diff();
    // Gradient with respect to bias
    caffe_gpu_gemv<Dtype>(CblasTrans, M_, N_, (Dtype)1., top_diff,
        bias_multiplier_.gpu_data(), (Dtype)1.,
        this->blobs_[1]->mutable_gpu_diff());
  }
  // Gradient w.r.t. the previous layer's output: top_diff * weights = bottom_diff
  if (propagate_down[0]) {
    const Dtype* top_diff = top[0]->gpu_diff();
    // Gradient with respect to bottom data
    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, M_, K_, N_, (Dtype)1.,
        top_diff, this->blobs_[0]->gpu_data(), (Dtype)0.,
        bottom[0]->mutable_gpu_diff());
  }
}
3.2 Derivation
As in the figure of the original post, denote the output of the current layer ip2 by $\mathbf{z}$ and its input (the output of the previous layer) by $\mathbf{u}$.
1. Gradient with respect to the previous layer's output
$\frac{\partial loss}{\partial u_j}$ is stored in ip2's bottom_blob_diff ($64 \times 500$) and is computed as follows, where $\frac{\partial loss}{\partial z_k}$ is stored in top_blob_diff ($64 \times 10$):
$$
\frac{\partial z_k}{\partial u_j} = \frac{\partial \sum_{j'}^{500}{w_{kj'}u_{j'}}}{\partial u_j}=w_{kj}
$$
$$
\frac{\partial loss}{\partial u_j}=\sum_k^{n=10}{\frac{\partial loss}{\partial z_k}\frac{\partial z_k}{\partial u_j}}=\sum_k^{n=10}{\frac{\partial loss}{\partial z_k}w_{kj}}
$$
In vector form:
$$
\frac{\partial loss}{\partial u_j}=\frac{\partial loss}{\partial \mathbf{z^T}} \cdot \mathbf{w_{j}}
$$
Further, in matrix form, where $\mathbf{u}$ is 500-dimensional, $\mathbf{z}$ is 10-dimensional, and $\mathbf{W}$ is $10 \times 500$:
$$
\frac{\partial loss}{\partial \mathbf{u^T}}=\frac{\partial loss}{\partial \mathbf{z^T}} \cdot \mathbf{W}
$$
Going one step further, since a batch contains 64 samples, the expression can be written as follows, where $\mathbf{U}$ is $64 \times 500$, $\mathbf{Z}$ is $64 \times 10$, and $\mathbf{W}$ is $10 \times 500$:
$$
\frac{\partial loss}{\partial \mathbf{U}}=\frac{\partial loss}{\partial \mathbf{Z}} \cdot \mathbf{W}
$$
2. Gradient with respect to the parameters
$$
\frac{\partial loss}{\partial w_{kj}}=\frac{\partial loss}{\partial z_k}\frac{\partial z_k}{\partial w_{kj}}=\frac{\partial loss}{\partial z_k} u_{j}
$$
In vector form:
$$
\frac{\partial loss}{\partial \mathbf{w_{j}}}=\frac{\partial loss}{\partial \mathbf{z}} u_{j}
$$
Further, in matrix form, where $\mathbf{W}$ is $10 \times 500$, $\mathbf{z}$ is 10-dimensional, and $\mathbf{u}$ is 500-dimensional:
$$
\frac{\partial loss}{\partial \mathbf{W}}=\frac{\partial loss}{\partial \mathbf{z}} \mathbf{u^T}
$$
Going one step further, since a batch contains 64 samples, the expression can be written as follows, where $\mathbf{W}$ is $10 \times 500$, $\mathbf{Z}$ is $64 \times 10$, and $\mathbf{U}$ is $64 \times 500$:
$$
\frac{\partial loss}{\partial \mathbf{W}}=\left(\frac{\partial loss}{\partial \mathbf{Z}}\right)^{T} \cdot \mathbf{U}
$$
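These two matrix products are exactly what the two caffe_gpu_gemm calls in 3.1 compute, with M_ = 64 (batch size), K_ = 500 (input dimension), and N_ = 10 (number of outputs) for ip2. A plain-loop sketch under these assumed shapes (function and variable names are illustrative, not Caffe code):
#include <vector>

// top_diff is M*N, bottom_data is M*K, weight is N*K (row-major);
// *weight_diff (N*K) and *bottom_diff (M*K) must be pre-sized.
void inner_product_backward_sketch(const std::vector<float>& top_diff,
                                   const std::vector<float>& bottom_data,
                                   const std::vector<float>& weight,
                                   int M, int K, int N,
                                   std::vector<float>* weight_diff,
                                   std::vector<float>* bottom_diff) {
  // dL/dW += dL/dZ^T * U : accumulate over the batch
  for (int n = 0; n < N; ++n)
    for (int k = 0; k < K; ++k)
      for (int m = 0; m < M; ++m)
        (*weight_diff)[n * K + k] += top_diff[m * N + n] * bottom_data[m * K + k];
  // dL/dU = dL/dZ * W
  for (int m = 0; m < M; ++m)
    for (int k = 0; k < K; ++k) {
      float acc = 0.f;
      for (int n = 0; n < N; ++n)
        acc += top_diff[m * N + n] * weight[n * K + k];
      (*bottom_diff)[m * K + k] = acc;
    }
}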
4 Layer 7: ReLU
4.1 Code Analysis
The CPU code is analyzed below. Note that this layer has no parameters, so only the gradient with respect to the input needs to be computed.
template <typename Dtype>
void ReLULayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down,
    const vector<Blob<Dtype>*>& bottom) {
  if (propagate_down[0]) {
    const Dtype* bottom_data = bottom[0]->cpu_data();
    const Dtype* top_diff = top[0]->cpu_diff();
    Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
    const int count = bottom[0]->count();
    // see the derivation below
    Dtype negative_slope = this->layer_param_.relu_param().negative_slope();
    for (int i = 0; i < count; ++i) {
      bottom_diff[i] = top_diff[i] * ((bottom_data[i] > 0)
          + negative_slope * (bottom_data[i] <= 0));
    }
  }
}
4.2 Derivation
Let the input vector be bottom_data and the output vector be top_data. The ReLU layer computes
$$
\mathrm{top\_data}_i=
\left\{
\begin{aligned}
& \mathrm{bottom\_data}_i, && \mathrm{bottom\_data}_i > 0 \\
& \mathrm{bottom\_data}_i \cdot slope, && \mathrm{bottom\_data}_i \le 0
\end{aligned}
\right.
$$
so the derivative of the loss with respect to the input is:
$$
\frac{\partial loss}{\partial\, \mathrm{bottom\_data}_i}=\frac{\partial loss}{\partial\, \mathrm{top\_data}_i} \cdot \frac{\partial\, \mathrm{top\_data}_i}{\partial\, \mathrm{bottom\_data}_i}
=\left\{
\begin{aligned}
& \mathrm{top\_diff}_i, && \mathrm{bottom\_data}_i > 0 \\
& \mathrm{top\_diff}_i \cdot slope, && \mathrm{bottom\_data}_i \le 0
\end{aligned}
\right.
$$
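As a concrete example (illustrative numbers, not from the original post): with negative_slope = 0, bottom_data = (2, -3, 1) and top_diff = (0.4, 0.7, 0.5) yield bottom_diff = (0.4, 0, 0.5); the gradient passes through only where the input was positive.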
5 Layer 5: Pooling
5.1 Code Analysis
The CPU code for max pooling is analyzed below. Note that this layer has no parameters, so only the gradient with respect to the input needs to be computed.
template <typename Dtype>
void PoolingLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  const Dtype* top_diff = top[0]->cpu_diff();
  Dtype* bottom_diff = bottom[0]->mutable_cpu_diff();
  // initialize bottom_diff to 0
  caffe_set(bottom[0]->count(), Dtype(0), bottom_diff);
  const int* mask = NULL;  // suppress warnings about uninitialized variables
  ...
  // During the forward pass, max_idx_ recorded, for each point of top_data, the
  // coordinate (within its feature map) of the point in bottom_data it came from
  mask = max_idx_.cpu_data();
  // Main loop: traverse every point of top_data in (N,C,H,W) order
  for (int n = 0; n < top[0]->num(); ++n) {
    for (int c = 0; c < channels_; ++c) {
      for (int ph = 0; ph < pooled_height_; ++ph) {
        for (int pw = 0; pw < pooled_width_; ++pw) {
          const int index = ph * pooled_width_ + pw;
          const int bottom_index = mask[index];
          // see the derivation below
          bottom_diff[bottom_index] += top_diff[index];
        }
      }
      bottom_diff += bottom[0]->offset(0, 1);
      top_diff += top[0]->offset(0, 1);
      mask += top[0]->offset(0, 1);
    }
  }
}
5.2 Derivation
As illustrated in the original post's figure, the max pooling layer is a nonlinear transformation, but the relation between its input and output can be expressed linearly as $\mathrm{top\_data}_i=\mathrm{bottom\_data}_j$ (which is why the forward pass has to record the mapping from index $i$ to index $j$ in max_idx_). By the chain rule:
$$
\mathrm{bottom\_diff}_j = \frac{\partial loss}{\partial\, \mathrm{bottom\_data}_j}=\frac{\partial loss}{\partial\, \mathrm{top\_data}_i} \cdot \frac{\partial\, \mathrm{top\_data}_i}{\partial\, \mathrm{bottom\_data}_j} = \mathrm{top\_diff}_i \cdot 1
$$
(note the indices: only the bottom position that produced the maximum receives the gradient; all other positions in the window get 0).
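For instance (illustrative numbers): if a 2×2 pooling window contains (1, 5, 3, 2), the forward pass outputs 5 and stores its index in max_idx_; during the backward pass the top_diff value of that output is added at the position that held 5, while the other three positions receive 0.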
6 Layer 4: Convolution
template <typename Dtype>
void ConvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  const Dtype* weight = this->blobs_[0]->cpu_data();
  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();
  for (int i = 0; i < top.size(); ++i) {
    const Dtype* top_diff = top[i]->cpu_diff();
    const Dtype* bottom_data = bottom[i]->cpu_data();
    Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();
    // Bias gradient, if necessary.
    if (this->bias_term_ && this->param_propagate_down_[1]) {
      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();
      // For each sample in the batch, compute the gradient w.r.t. the bias
      for (int n = 0; n < this->num_; ++n) {
        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);
      }
    }
    if (this->param_propagate_down_[0] || propagate_down[i]) {
      // For each sample in the batch, compute the gradients w.r.t. the weights
      // and the input; the helper calls are expanded inline here (not runnable as-is)
      for (int n = 0; n < this->num_; ++n) {
        // gradient w.r.t. weight. Note that we will accumulate diffs.
        // top_diff (50*64) * bottom_data (500*64, transposed) = weight_diff (50*500)
        caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,
            kernel_dim_, conv_out_spatial_dim_,
            (Dtype)1., top_diff + n * this->top_dim_, bottom_data + n * this->bottom_dim_,
            (Dtype)1., weight_diff);
        // gradient w.r.t. bottom data, if necessary.
        // weight (50*500, transposed) * top_diff (50*64) = bottom_diff (500*64)
        caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,
            conv_out_spatial_dim_, conv_out_channels_,
            (Dtype)1., weight, top_diff + n * this->top_dim_,
            (Dtype)0., bottom_diff + n * this->bottom_dim_);
      }
    }
  }
}
Notes:
- The bottom of layer 4 has dimensions $(N,C,H,W)=(64,20,12,12)$ and its top has dimensions $(N,C,H,W)=(64,50,8,8)$. Since each sample is processed separately, only the $(C,H,W)$ dimensions matter: $(20,12,12)$ and $(50,8,8)$ respectively.
- According to the (Caffe) convolution implementation, this layer can be written as a matrix product: $\mathrm{Weight\_data} \times \mathrm{Bottom\_data}^T = \mathrm{Top\_data}$
- $\mathrm{Weight\_data}$ has dimensions $C_{out} \times (CKK)=50 \times 500$
- $\mathrm{Bottom\_data}$ (the im2col buffer) has dimensions $(HW) \times (CKK)=64 \times 500$, where $64 = 8 \times 8$ is the number of kernel positions and $500=CKK=20 \times 5 \times 5$
- $\mathrm{Top\_data}$ has dimensions $50 \times 64$
- Written in matrix form, this is in a sense the same as a fully connected layer (which is also expressed as a matrix product), so the derivation for the InnerProduct layer carries over; a shape sketch follows below.
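Under that view, the per-sample GEMMs in the code above have the following shapes. A plain-loop sketch (illustrative names, not Caffe code), assuming the column buffer produced by im2col is stored as CKK x HW, with Cout = 50, CKK = 500, HW = 64:
#include <vector>

// top_diff is Cout*HW, col_data is CKK*HW, weight is Cout*CKK (row-major);
// *weight_diff (Cout*CKK) and *col_diff (CKK*HW) must be pre-sized.
void conv_backward_shape_sketch(const std::vector<float>& top_diff,
                                const std::vector<float>& col_data,
                                const std::vector<float>& weight,
                                int Cout, int CKK, int HW,
                                std::vector<float>* weight_diff,
                                std::vector<float>* col_diff) {
  // weight_diff (50x500) += top_diff (50x64) * col_data^T (64x500)
  for (int o = 0; o < Cout; ++o)
    for (int k = 0; k < CKK; ++k)
      for (int s = 0; s < HW; ++s)
        (*weight_diff)[o * CKK + k] += top_diff[o * HW + s] * col_data[k * HW + s];
  // col_diff (500x64) = weight^T (500x50) * top_diff (50x64);
  // Caffe then scatters col_diff back to bottom_diff via col2im.
  for (int k = 0; k < CKK; ++k)
    for (int s = 0; s < HW; ++s) {
      float acc = 0.f;
      for (int o = 0; o < Cout; ++o)
        acc += weight[o * CKK + k] * top_diff[o * HW + s];
      (*col_diff)[k * HW + s] = acc;
    }
}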