[Feature Learning] [Unsupervised] [Tutorial] EECS 598 Unsupervised Feature Learning

EECS 598 Unsupervised Feature Learning

Instructor: Prof. Honglak Lee

Instructor webpage: http://www.eecs.umich.edu/~honglak/

Office hours: Th 5pm-6pm, 3773 CSE

Classroom: 1690 CSE

Time: M W 10:30am-12pm

Course Schedule

(Note: this schedule is subject to change.)

Each entry below lists the date, topic, presenter, and assigned papers.

9/8: Introduction (Presenter: Honglak)

9/13: Sparse coding (Presenter: Honglak)
Papers:
- B. Olshausen, D. Field. Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 1996.
- H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. NIPS, 2007.
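As a rough illustration of what this session covers: both papers above learn an overcomplete dictionary D and sparse codes b by (approximately) minimizing ||x - Db||^2 + lambda*||b||_1. The sketch below is not the algorithm from either paper (Lee et al. use feature-sign search for the codes and a Lagrange dual for the dictionary); it only solves for the codes of a fixed random dictionary with plain ISTA, assuming NumPy and toy data.

```python
# Minimal sketch (toy data, random dictionary): solve the sparse coding
# objective 0.5*||x - D b||^2 + lam*||b||_1 per input with plain ISTA.
import numpy as np

def ista_sparse_codes(X, D, lam=0.1, n_iters=200):
    """Sparse codes B minimizing 0.5*||X - D B||_F^2 + lam*||B||_1 (columns of X are inputs)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    B = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iters):
        grad = D.T @ (D @ B - X)           # gradient of the reconstruction term
        Z = B - grad / L                   # gradient step
        B = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft threshold
    return B

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))         # overcomplete dictionary: 64-dim inputs, 128 atoms
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
X = rng.standard_normal((64, 10))          # 10 toy "image patches"
B = ista_sparse_codes(X, D)
print("mean fraction of nonzero coefficients:", np.mean(B != 0))
```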

9/15: Self-taught learning; Application: computer vision (Presenter: Honglak)
Papers:
- R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML, 2007.
- H. Lee, R. Raina, A. Teichman, and A. Y. Ng. Exponential Family Sparse Coding with Application to Self-taught Learning. IJCAI, 2009.
- J. Yang, K. Yu, Y. Gong, and T. Huang. Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification. CVPR, 2009.

9/20: Neural networks and deep architectures I (Presenter: Deepak)
Papers:
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 4.
- Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007.

9/22: Restricted Boltzmann machines (Presenter: Byung-soo)
Papers:
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 5.
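For this reading, Chapter 5 works with the binary RBM energy E(v, h) = -b^T v - c^T h - v^T W h and the contrastive divergence update. Below is a minimal sketch of CD-1 on toy binary data, assuming NumPy; the variable names W, b, c and the tiny layer sizes are this sketch's choices, not the chapter's setup.

```python
# Minimal sketch (binary units, toy data): train an RBM with one step of
# contrastive divergence (CD-1).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 16, 8, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)                                   # visible biases
c = np.zeros(n_hidden)                                    # hidden biases
V = (rng.random((100, n_visible)) < 0.3).astype(float)    # toy binary data

for epoch in range(20):
    # positive phase: hidden probabilities and samples given the data
    ph = sigmoid(V @ W + c)
    h = (rng.random(ph.shape) < ph).astype(float)
    # negative phase: one Gibbs step back to the visibles and up again
    pv = sigmoid(h @ W.T + b)
    v_neg = (rng.random(pv.shape) < pv).astype(float)
    ph_neg = sigmoid(v_neg @ W + c)
    # CD-1 parameter updates, averaged over the batch
    W += lr * (V.T @ ph - v_neg.T @ ph_neg) / len(V)
    b += lr * (V - v_neg).mean(axis=0)
    c += lr * (ph - ph_neg).mean(axis=0)

print("final reconstruction error:", np.mean((V - pv) ** 2))
```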

9/27: Variants of RBMs and Autoencoders (Presenter: Chun-Yuen)
Papers:
- P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
- H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. NIPS, 2008.
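The denoising autoencoder in Vincent et al. corrupts each input and trains the network to reconstruct the uncorrupted version. A minimal sketch of that idea, assuming PyTorch, masking noise, and toy binary data rather than the paper's image benchmarks:

```python
# Minimal sketch (toy data, PyTorch): one-hidden-layer denoising autoencoder.
# Corrupt the input with masking noise, train to reconstruct the clean input.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = (torch.rand(256, 64) < 0.3).float()                   # toy binary "data"

model = nn.Sequential(nn.Linear(64, 32), nn.Sigmoid(),    # encoder
                      nn.Linear(32, 64))                   # decoder (outputs logits)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    noisy = X * (torch.rand_like(X) > 0.25).float()        # zero out ~25% of the inputs
    loss = loss_fn(model(noisy), X)                        # reconstruct the *clean* input
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final reconstruction loss:", loss.item())
```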

9/29: Deep belief networks (Presenter: Anna)
Papers:
- Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2009. Chapter 6.
- R. Salakhutdinov. PhD Thesis. Chapter 2.

10/4: Convolutional deep belief networks (Presenter: Min-Yian)
Papers:
- H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009.

10/6: Application: audio (Presenter: Yash)
Papers:
- H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. NIPS, 2009.
- A. R. Mohamed, G. Dahl, and G. E. Hinton. Deep belief networks for phone recognition. NIPS 2009 Workshop on Deep Learning for Speech Recognition.

10/11: Factorized models I (Presenter: Chun)
Papers:
- M. Ranzato, A. Krizhevsky, and G. E. Hinton. Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images. AISTATS, 2010.

10/13: Factorized models II (Presenter: Soonam)
Papers:
- M. Ranzato and G. E. Hinton. Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines. CVPR, 2010.

10/18: No class (study break)

10/20: Project proposal presentations

10/25: Temporal modeling I (Presenter: Jeshua)
Papers:
- G. Taylor, G. E. Hinton, and S. Roweis. Modeling Human Motion Using Binary Latent Variables. NIPS, 2007.
- G. Taylor and G. E. Hinton. Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style. ICML, 2009.

10/27: Temporal modeling II (Presenter: Robert)
Papers:
- G. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional Learning of Spatio-temporal Features. ECCV, 2010.

11/1: Energy-based models (Presenter: Ryan)
Papers:
- K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun. Learning Invariant Features through Topographic Filter Maps. CVPR, 2009.
- K. Kavukcuoglu, M. Ranzato, and Y. LeCun. Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. Technical Report CBLL-TR-2008-12-01, 2008.

11/3: Pooling and invariance (Presenter: Min-Yian)
Papers:
- K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the Best Multi-Stage Architecture for Object Recognition? ICCV, 2009.

11/8: Evaluating RBMs (Presenter: Jeshua)
Papers:
- R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
- R. Salakhutdinov. PhD Thesis. Chapter 4.

11/10: Deep Boltzmann machines (Presenter: Dae Yon)
Papers:
- R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. AISTATS, 2009.

11/15: Local coordinate coding (Presenter: Robert)
Papers:
- K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning using Local Coordinate Coding. NIPS, 2009.
- J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Learning Locality-constrained Linear Coding for Image Classification. CVPR, 2010.

11/17: Deep architectures II (Presenter: Soonam)
Papers:
- H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring Strategies for Training Deep Neural Networks. JMLR, 2009.

11/22: Deep architectures III (Presenter: Chun)
Papers:
- D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR, 2010.

11/24: Application: computer vision II (Presenter: Dae Yon)
Papers:
- J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. CVPR, 2010.
- Y. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning Mid-Level Features for Recognition. CVPR, 2010.

11/29: Pooling and invariance II (Presenter: Anna)
Papers:
- I. J. Goodfellow, Q. V. Le, A. M. Saxe, H. Lee, and A. Y. Ng. Measuring invariances in deep networks. NIPS, 2009.
- Y. Boureau, J. Ponce, and Y. LeCun. A theoretical analysis of feature pooling in vision algorithms. ICML, 2010.
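Boureau et al. analyze the two standard pooling operators, max and average, applied to local feature responses. A toy NumPy sketch of both over non-overlapping 3x3 regions (the 6x6 response map here is random and purely illustrative):

```python
# Minimal sketch (toy data): max pooling vs. average pooling over
# non-overlapping spatial regions of a feature-response map.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.random((6, 6))                 # toy 6x6 map of feature detector responses

def pool(feature_map, size, op):
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // size, size, w // size, size)
    return op(blocks, axis=(1, 3))             # pool within each size x size block

print("max pooled:\n", pool(responses, 3, np.max))
print("average pooled:\n", pool(responses, 3, np.mean))
```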

12/1: Application: natural language processing (Presenter: Guanyu)
Papers:
- R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. ICML, 2008.

12/13: Project presentations I

12/15: Project presentations II

12/19: Final project report due

