[Repost] Notes: From Faster R-CNN to Mask R-CNN

Original link: http://www.yuthon.com/2017/04/27/Notes-From-Faster-R-CNN-to-Mask-R-CNN/


These are my notes for the talk “From Faster R-CNN to Mask R-CNN” given by Shaoqing Ren on April 26th, 2017.

Yesterday – background and prior work of Mask R-CNN

Key functions

Classification – what is in the image?

Localization – where are they?

Mask (per-pixel) classification – Where+?

More precise than a bounding box

Landmark localization – What+, Where+?

Not only a per-pixel mask, but also key points on the objects

Mask R-CNN Architecture

Classification

Please ignore the bounding boxes in the image.

class = Classifier(image)

Problems

High-level semantic concepts

High efficiency

Solutions

SIFT or HOG (about 5–10 years ago)

Based on edge features (low-level semantic information)

Sometimes mistakes two objects that people can distinguish easily

e.g. marks a telegraph pole as a man

CNN (nowadays)

Based on high-level semantic concepts

Rarely mistakes objects; when it does, people are likely to mix them up too.

Translation invariance

Scale invariance

Detection

location = Classifier(all patches of an image)

precise_location = Regressor(image, rough_location)

Problems

High efficiency

Solutions

Traverse all patches of an image and apply an image classifier to them; the patches with the highest scores are taken as object locations.

As long as the classifier is precise enough, and we are able to traverse millions of patches in an image, we can always get a satisfactory result.

But the amount of computation is too large (about 1–10 million patches).

Do regression repeatedly, starting from a rough location of an object, until we get the precise object location.

Low amount of computation (about 10–100 iterations)

Hard to locate many adjacent, similar objects

The state-of-the-art methods tend to use exhaustive search at a coarse scale, and refine the rough locations by regression at a fine scale.
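This two-stage idea (exhaustive coarse search, then regression refinement) can be sketched as a toy pipeline. The `classifier` and `regressor` below are hypothetical stand-ins for illustration, not real models:

```python
import numpy as np

def classifier(patch):
    # Hypothetical scorer: brighter patches score higher.
    return patch.mean()

def regressor(image, rough_box):
    # Hypothetical refiner: re-center the box on the object's center of mass.
    x, y, w, h = rough_box
    ys, xs = np.nonzero(image > 0.5)
    cy, cx = int(ys.mean()), int(xs.mean())
    return (cx - w // 2, cy - h // 2, w, h)

image = np.zeros((32, 32))
image[10:18, 12:20] = 1.0          # one bright "object"

# location = Classifier(all patches of an image): exhaustive coarse search
boxes = [(x, y, 8, 8) for y in range(25) for x in range(25)]
rough = max(boxes, key=lambda b: classifier(image[b[1]:b[1]+b[3], b[0]:b[0]+b[2]]))

# precise_location = Regressor(image, rough_location): refinement step
precise = regressor(image, rough)
```

Even this toy version shows the trade-off: the exhaustive pass scores 625 patches, while the refinement is a single cheap step.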

R-CNN

Use region proposals to reduce millions of patches to 2–10k.

Use a classifier to determine the class of each patch

Use BBox regression to refine the location
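The BBox regression step learns offsets rather than absolute coordinates. A minimal NumPy sketch of the standard (tx, ty, tw, th) parameterization used since R-CNN, assuming boxes in center-size (x, y, w, h) form:

```python
import numpy as np

def encode(proposal, gt):
    # Regression targets: relative center shift and log scale change.
    px, py, pw, ph = proposal
    gx, gy, gw, gh = gt
    return np.array([(gx - px) / pw, (gy - py) / ph,
                     np.log(gw / pw), np.log(gh / ph)])

def decode(proposal, t):
    # Apply predicted offsets to recover a refined box.
    px, py, pw, ph = proposal
    tx, ty, tw, th = t
    return np.array([px + tx * pw, py + ty * ph,
                     pw * np.exp(tw), ph * np.exp(th)])

proposal = (50.0, 50.0, 20.0, 40.0)
gt = (54.0, 46.0, 30.0, 36.0)
t = encode(proposal, gt)       # what the regressor is trained to predict
refined = decode(proposal, t)  # recovers the ground-truth box exactly
```

The log parameterization of width and height keeps the targets scale-invariant, which is why the same regressor works for proposals of very different sizes.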

SPP-net / Fast R-CNN

Use Pyramid Pooling / RoI Pooling to generate a fixed-length representation regardless of image size/scale
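A toy NumPy version of RoI Pooling shows how a fixed out×out grid makes the output length independent of the RoI size. It also includes the coordinate quantization (the `int(...)` casts) that RoI Align later removes:

```python
import numpy as np

def roi_pool(feat, roi, out=2, stride=16):
    # Quantize RoI from image coords to feature-map cells (the lossy step).
    x0, y0, x1, y1 = (int(c / stride) for c in roi)
    region = feat[y0:y1 + 1, x0:x1 + 1]
    h, w = region.shape
    # Max-pool each cell of a fixed out×out grid over the region.
    ys = np.linspace(0, h, out + 1).astype(int)
    xs = np.linspace(0, w, out + 1).astype(int)
    return np.array([[region[ys[i]:ys[i+1], xs[j]:xs[j+1]].max()
                      for j in range(out)] for i in range(out)])

feat = np.arange(64, dtype=float).reshape(8, 8)   # pretend feature map
pooled = roi_pool(feat, (16, 16, 95, 95))         # any RoI size → 2×2 output
```

Whatever the RoI's size, `pooled` always has the same shape, so it can feed fixed-size fully connected layers.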

Faster R-CNN

Use an RPN (Region Proposal Network) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals.

An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position.

The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection.

We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ‘attention’ mechanisms, the RPN component tells the unified network where to look.

Number of patches: width × height × scales × ratios

scale stands for the size of the image and objects

ratio stands for the aspect ratio of the filter
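As a concrete count, assuming a typical setup (an ~800×600 image at stride 16, with the 3 scales and 3 aspect ratios used in the Faster R-CNN paper):

```python
# width × height × scales × ratios, counted on the feature map, not the image
feat_w, feat_h = 50, 38       # e.g. an 800×600 image at stride 16
scales, ratios = 3, 3         # the defaults in the Faster R-CNN paper
num_anchors = feat_w * feat_h * scales * ratios
print(num_anchors)            # ~17k anchors vs. millions of raw patches
```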

Different schemes for addressing multiple scales and sizes.

Pyramids of images and feature maps are built, and the classifier is run at all scales.

Pyramids of filters with multiple scales/sizes are run on the feature map.

Faster R-CNN uses pyramids of reference boxes in the regression functions, which avoids enumerating images or filters of multiple scales or aspect ratios.

SSD / FPN

FPN (Feature Pyramid Network) exploits the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids at marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales.
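The top-down pathway with lateral connections can be sketched as: upsample the coarser map and add a projected lateral feature from the bottom-up pathway. A toy NumPy version — real FPN uses learned 1×1 lateral convolutions and a 3×3 smoothing convolution, which the `lateral` stand-in here only gestures at:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling, as in FPN's top-down pathway.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def lateral(x):
    # Stand-in for the learned 1x1 conv that matches channel widths.
    return 0.5 * x

# Bottom-up features at two scales (coarse c5, finer c4).
c5 = np.ones((4, 4))
c4 = np.ones((8, 8))

p5 = c5                                # top of the pyramid
p4 = upsample2x(p5) + lateral(c4)      # top-down + lateral connection
```

Each pyramid level thus mixes high-level semantics (from above) with higher-resolution detail (from the side), which is what makes every level usable for detection.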

Instance Segmentation

Use Mask Regression to predict instance segmentation based on the object bounding box.

Replace RoI Pooling with RoI Align.

Keypoint Detection

We make minor modifications to the segmentation system when adapting it for keypoints.

For each of the K keypoints of an instance, the training target is a one-hot m×m binary mask where only a single pixel is labeled as foreground.
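Building that training target is straightforward: one m×m mask per keypoint with a single foreground pixel. A sketch assuming m = 56, the keypoint mask resolution used in the paper:

```python
import numpy as np

def keypoint_target(kx, ky, m=56):
    # One-hot m×m mask: only the keypoint's pixel is foreground.
    mask = np.zeros((m, m), dtype=np.float32)
    mask[ky, kx] = 1.0
    return mask

target = keypoint_target(kx=30, ky=12)   # one mask per keypoint, K in total
```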

Today - details about Mask-RCNN and comparisons

RoI Align

RoI Pooling contains two steps of coordinate quantization: from the original image to the feature map (divide by stride) and from the feature map to the RoI feature (use a grid). Those quantizations cause a huge loss of location precision.

e.g. if we have two boxes whose coordinates are 1.1 and 2.2, and the stride of the feature map is 16, then they land in the same cell of the feature map.

RoI Align removes those two quantizations and manipulates coordinates on a continuous domain, which greatly increases location accuracy.
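The difference is easy to demonstrate: quantization collapses the coordinates 1.1 and 2.2 onto the same cell, while bilinear sampling at continuous coordinates (the core of RoI Align) keeps them distinct. A toy NumPy sketch:

```python
import numpy as np

def bilinear(feat, y, x):
    # Sample the feature map at a continuous (y, x) coordinate.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    return (feat[y0, x0] * (1 - wy) * (1 - wx) + feat[y0, x1] * (1 - wy) * wx
            + feat[y1, x0] * wy * (1 - wx) + feat[y1, x1] * wy * wx)

stride = 16.0
feat = np.arange(16, dtype=float).reshape(4, 4)

# RoI Pooling: both coordinates quantize to the same feature-map cell.
assert int(1.1 / stride) == int(2.2 / stride) == 0

# RoI Align: the continuous coordinates stay distinct.
a = bilinear(feat, 0.0, 1.1 / stride)
b = bilinear(feat, 0.0, 2.2 / stride)
```

Here `a` and `b` differ, so the sub-stride offset between the two boxes survives into the pooled feature.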

RoI Align really improves the result.

Moreover, note that with RoIAlign, using stride-32 C5 features (30.9 AP) is more accurate than using stride-16 C4 features (30.3 AP, Table 2c). RoIAlign largely resolves the long-standing challenge of using large-stride features for detection and segmentation.

Without RoIAlign, AP with ResNet-50-C4 is better than with C5 under RoIPooling, i.e., a larger stride is worse. Thus much previous work tried to find methods to get better results at smaller strides. Now, with RoIAlign, we can reconsider whether those tricks are still needed.

Multinomial vs. Independent Masks

Replace softmax with sigmoid.

Mask R-CNN decouples mask and class prediction: as the existing box branch predicts the class label, we generate a mask for each class without competition among classes (by a per-pixel sigmoid and a binary loss).

In Table 2b, we compare this to using a per-pixel softmax and a multinomial loss (as commonly used in FCN). This alternative couples the tasks of mask and class prediction, and results in a severe loss in mask AP (5.5 points).

The result suggests that once the instance has been classified as a whole (by the box branch), it is sufficient to predict a binary mask without concern for the categories, which makes the model easier to train.
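The decoupling can be sketched as follows: the mask head outputs one map per class, the box branch picks class c, and only map c receives a per-pixel sigmoid with binary cross-entropy. The logits and class choice below are made-up illustration data:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_loss(logits, target, cls):
    # Independent masks: per-pixel sigmoid + binary cross-entropy,
    # computed only on the map of the class chosen by the box branch.
    p = sigmoid(logits[cls])
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

K, m = 3, 4                           # 3 classes, 4×4 mask for illustration
rng = np.random.default_rng(0)
logits = rng.normal(size=(K, m, m))   # mask head output, one map per class
target = (rng.random((m, m)) > 0.5).astype(float)

loss = mask_loss(logits, target, cls=1)   # no competition among classes
```

Unlike a per-pixel softmax over K classes, the other K−1 maps contribute nothing to the loss, so the mask branch never has to arbitrate between categories.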

Multi-task Cascade vs. Joint Learning

Cascading and parallel computation are adopted alternately:

At training time, the three tasks of Mask R-CNN are trained in parallel.

But at test time, we do classification and bbox regression first, and then use those results to get masks.

BBox regression may change the location of the bbox, so we should wait for it to finish.

After bbox regression, we may apply NMS or other methods to reduce the number of boxes, which decreases the workload of mask segmentation.
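The box-reduction step mentioned here is typically non-maximum suppression (NMS); a minimal sketch with boxes as (x1, y1, x2, y2) and hypothetical scores:

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    # Greedily keep the highest-scoring box, drop boxes overlapping it.
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)    # the two near-duplicate boxes collapse to one
```

Only the surviving boxes reach the mask branch, which is exactly why running it after NMS saves work.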

Adding the mask branch to the box-only (i.e., Faster R-CNN) or keypoint-only versions consistently improves these tasks.

However, adding the keypoint branch reduces the box/mask AP slightly, suggesting that while keypoint detection benefits from multi-task training, it does not in turn help the other tasks.

Nevertheless, learning all three tasks jointly enables a unified system to efficiently predict all outputs simultaneously (Figure 6).

Comparison on Human Keypoints

Table 4 shows that our result (62.7 APkp) is 0.9 points higher than the COCO 2016 keypoint detection winner [4], which uses a multi-stage processing pipeline (see the caption of Table 4). Our method is considerably simpler and faster.

More importantly, we have a unified model that can simultaneously predict boxes, segments, and keypoints while running at 5 fps.

Results

Future - discussion

Order of key functions?

Order of classification, localization, mask classification and landmarks localization?

Top-down or bottom-up?

Mask R-CNN uses a top-down method.

The COCO 2016 keypoint detection winner, CMU-Pose+++, uses a bottom-up method:

Detect key points first (without knowing which keypoint belongs to which person)

Then gradually stitch them together

Precise & semantic labels

box-level labels -> instance segmentation & keypoint detection -> instance segmentation with body parts

Semantic 3D reconstruction

Future

The performance of these systems improves rapidly.

Join a team, keep going.

Always try, think, and discuss.

Understand and structure the world.

