無(wú)標(biāo)題文章


Jason Yosinski

Machine Learning · Deep Learning

Hello there! I'm Jason, a Ph.D. candidate in Computer Science at Cornell. My research focuses on training and understanding neural networks for computer vision and robotics. I work with Hod Lipson and the Cornell Creative Machines Lab, and sometimes as a visiting student with Yoshua Bengio and the LISA Lab at U. Montreal. My work is supported by a NASA Space Technology Research Fellowship. This summer of 2015 I'm in London, working at Google DeepMind.

Jason Yosinski: http://yosinski.com/


Understanding Neural Networks Through Deep Visualization

ICML DL Workshop paper | video | code and more info

Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Here we introduce two tools for better visualizing and interpreting neural nets. The first is a set of new regularization methods for finding, via optimization, the inputs that units prefer, which leads to clearer and more interpretable images than had been found before. The second tool is an interactive toolbox that visualizes the activations produced on each layer of a trained convnet. You can input image files or read video from your webcam, which we've found fun and informative. Both tools are open source. Read more.
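For the curious, here is a minimal sketch of the regularized-optimization idea: start from a blank image and take gradient-ascent steps on the pixels to increase a chosen class's score, applying a simple regularizer (L2 decay) so the result stays interpretable. This is not the released toolbox code; PyTorch, a torchvision AlexNet, and the class index and hyperparameters are assumptions made purely for illustration.

```python
# Minimal sketch of regularized activation maximization (illustrative only).
import torch
import torchvision.models as models

model = models.alexnet(weights="IMAGENET1K_V1").eval()   # assumed stand-in network

target_class = 130                        # assumed: an arbitrary ImageNet class index
img = torch.zeros(1, 3, 224, 224, requires_grad=True)    # start from a blank image

lr, l2_decay, n_steps = 1.0, 0.01, 200    # assumed hyperparameters
for _ in range(n_steps):
    score = model(img)[0, target_class]   # pre-softmax class score
    score.backward()
    with torch.no_grad():
        img += lr * img.grad / (img.grad.norm() + 1e-8)  # gradient ascent on the pixels
        img *= (1 - l2_decay)                            # L2 decay regularizer
        img.grad.zero_()
# 'img' now roughly shows what excites the chosen class; the paper adds further
# regularizers (e.g. Gaussian blur and clipping of small values) for clearer images.
```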

Deep Neural Networks are Easily Fooled

CVPR paper | code | more

Deep neural networks (DNNs) have recently been doing very well at visual classification problems (e.g. recognizing that one image is of a lion and another image is of a school bus). A recent study by Szegedy et al. showed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a network to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). We show methods of producing fooling images both with and without the class gradient in pixel space. The results shed light on interesting differences between human vision and state-of-the-art DNNs. Read more.
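As a rough illustration of the "with the class gradient" case, the sketch below starts from random noise and ascends the gradient of one class's softmax probability until the network is nearly certain. It is not the paper's code: PyTorch and a torchvision AlexNet are stand-ins, the class index and step size are assumed, input normalization is omitted for brevity, and the paper's gradient-free evolutionary methods are not shown.

```python
# Minimal sketch of a gradient-based "fooling image" (illustrative only).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.alexnet(weights="IMAGENET1K_V1").eval()    # assumed stand-in network

target_class = 291                                        # assumed: ImageNet's "lion" class
img = torch.rand(1, 3, 224, 224, requires_grad=True)      # start from random noise

opt = torch.optim.SGD([img], lr=0.5)                      # assumed step size
for _ in range(500):
    opt.zero_grad()
    prob = F.softmax(model(img), dim=1)[0, target_class]
    if prob.item() > 0.9999:                              # stop at ~99.99% confidence
        break
    (-prob).backward()                                    # maximize the class probability
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)                                  # keep pixels in a valid range
# The result is typically unrecognizable to a human, yet the DNN assigns it
# the target label with near-certainty.
```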

How Transferable are Features in Deep Neural Networks?

NIPS paper | code | more

Many deep neural networks trained on natural images exhibit a curious phenomenon: they all learn roughly the same Gabor filters and color blobs on the first layer. These features seem to be generic (useful for many datasets and tasks) as opposed to specific (useful for only one dataset and task). By the last layer, features must be task specific, which prompts the question: how do features transition from general to specific throughout the network? In this paper, presented at NIPS 2014, we show the manner in which features transition from general to specific, and also uncover a few other interesting results along the way. Read more.
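A minimal sketch of the kind of transfer setup studied here: copy the first n layers from a network trained on a base task, randomly re-initialize the layers above them, and train on the target task with the copied layers either frozen or allowed to fine-tune. PyTorch/torchvision, AlexNet, and the specific numbers are illustrative assumptions, not the experimental code.

```python
# Minimal sketch of "copy the first n layers, retrain the rest" (illustrative only).
import torch.nn as nn
import torchvision.models as models

def make_transfer_net(n_copied_conv, num_target_classes, fine_tune=False):
    net = models.alexnet(weights="IMAGENET1K_V1")   # stands in for the base-task network

    # The first n conv layers are copied; freeze them unless fine-tuning is allowed.
    conv_layers = [m for m in net.features if isinstance(m, nn.Conv2d)]
    for conv in conv_layers[:n_copied_conv]:
        for p in conv.parameters():
            p.requires_grad = fine_tune
    # Layers above the copied ones start from scratch for the target task.
    for conv in conv_layers[n_copied_conv:]:
        conv.reset_parameters()
    for m in net.classifier:
        if isinstance(m, nn.Linear):
            m.reset_parameters()
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_target_classes)
    return net

# e.g. copy the first 3 conv layers, frozen, and train the rest on a 100-class task:
net = make_transfer_net(n_copied_conv=3, num_target_classes=100, fine_tune=False)
```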

Generative Stochastic Networks

First arXiv paper | ICML paper | Latest arXiv paper

Unsupervised learning of models for probability distributions can be difficult due to intractable partition functions. We introduce a general family of models called Generative Stochastic Networks (GSNs) as an alternative to maximum likelihood. Briefly, we show how to learn the transition operator of a Markov chain whose stationary distribution estimates the data distribution. Because this transition distribution is a conditional distribution, it's often much easier to learn than the data distribution itself. Intuitively, this works by pushing the complexity that normally lives in the partition function into the “function approximation” part of the transition operator, which can be learned via simple backprop. We validate the theory by showing several successful experiments on two image datasets and with a particular architecture that mimics the Deep Boltzmann Machine but without the need for layerwise pretraining.
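As a toy illustration of the idea (not the paper's models or code), the sketch below trains a network to reconstruct data from a corrupted copy, then samples by alternating corrupt and reconstruct steps; in this simplest form the learned transition operator is just a denoising autoencoder, whereas the paper's GSNs add latent state, walkback training, and a DBM-like architecture. The dimensions, noise level, and optimizer settings are assumed, and the data loader is not shown.

```python
# Toy sketch of learning and sampling a GSN-style transition operator (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

D, H, sigma = 784, 512, 0.5        # assumed: data dim, hidden size, corruption noise level

denoiser = nn.Sequential(          # the learned part of the transition operator
    nn.Linear(D, H), nn.ReLU(),
    nn.Linear(H, D), nn.Sigmoid(),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def train_step(x):                 # x: a minibatch of data in [0, 1]
    x_tilde = x + sigma * torch.randn_like(x)        # corruption step C(x_tilde | x)
    loss = F.binary_cross_entropy(denoiser(x_tilde), x)
    opt.zero_grad(); loss.backward(); opt.step()     # simple backprop, no partition function
    return loss.item()

@torch.no_grad()
def sample(n_steps=100):
    x = torch.rand(1, D)           # arbitrary start; the chain forgets it
    for _ in range(n_steps):
        x_tilde = x + sigma * torch.randn_like(x)    # corrupt
        x = denoiser(x_tilde)                        # reconstruct
    return x                       # approximate sample from the learned distribution
```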

EndlessForms.com

Watch the two minute intro video. Users on EndlessForms.com collaborate to produce interesting crowdsourced designs. Since launch, over 4,000,000 shapes have been seen and evaluated by human eyes. This volume of user input has produced some really cool shapes. EndlessForms has received some favorable press. Evolve your own shape.
