DeepMind’s work in 2016: a round-up

Tuesday, 3 January 2017

Authors

Demis Hassabis, Co-Founder & CEO, DeepMind

Mustafa Suleyman, Co-Founder & Head of Applied AI

Shane Legg, Co-Founder & Chief Scientist, DeepMind

In a world of fiercely complex, emergent, and hard-to-master systems - from our climate to the diseases we strive to conquer - we believe that intelligent programs will help unearth new scientific knowledge that we can use for social benefit. To achieve this, we believe we’ll need general-purpose learning systems that are capable of developing their own understanding of a problem from scratch, and of using this to identify patterns and breakthroughs that we might otherwise miss. This is the focus of our long-term research mission at DeepMind.

While we remain a long way from anything that approximates what you or we would term intelligence, 2016 was a big year in which we made exciting progress on a number of the core underlying challenges, and saw the first glimpses of the potential for positive real-world impact.

Our program AlphaGo, for which we were lucky enough to receive our second Nature front cover, took on and beat the world champion Lee Sedol at the ancient game of Go, a feat that many experts said came a decade ahead of its time. Most exciting for us - as well as for the worldwide Go community - were AlphaGo’s displays of game-winning creativity, in some cases finding moves that challenged millennia of Go wisdom. In its ability to identify and share new insights about one of the most contemplated games of all time, AlphaGo offers a promising sign of the value AI may one day provide, and we're looking forward to playing more games in 2017.

We also made meaningful progress in the field of generative models, building programs able to imagine new constructs and scenarios for themselves. Following our PixelCNN paper on image generation, our paper on WaveNet demonstrated the usefulness of generative audio, achieving the world’s most life-like speech synthesis by imaginatively creating raw waveforms rather than stitching together samples of recorded language. We’re planning to put this into production with Google and are excited about enabling improvements to products used by millions of people.
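For readers curious what generating raw waveforms means in practice, here is a deliberately simplified, hypothetical sketch of autoregressive sample-by-sample generation in Python. It is not WaveNet itself - the real model uses stacks of dilated causal convolutions trained on speech, as described in the paper - and the `predict_next_sample` stand-in below is invented purely for illustration.

```python
# Toy sketch of autoregressive raw-waveform generation (not the actual
# WaveNet model): each new audio sample is drawn from a distribution
# conditioned on the samples generated so far, rather than stitched
# together from recorded fragments.
import numpy as np

rng = np.random.default_rng(0)

N_LEVELS = 256   # 8-bit quantisation of the waveform amplitude
CONTEXT = 16     # how many past samples the toy "model" looks at


def predict_next_sample(context):
    """Stand-in for a trained network: returns a probability
    distribution over the possible values of the next sample.
    Here we simply favour values close to the most recent sample,
    which yields a smooth (if meaningless) waveform."""
    last = context[-1]
    logits = -0.05 * (np.arange(N_LEVELS) - last) ** 2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()


def generate(n_samples):
    """Generate a waveform one sample at a time, feeding each new
    sample back in as conditioning for the next prediction."""
    waveform = [N_LEVELS // 2] * CONTEXT  # start from silence
    for _ in range(n_samples):
        probs = predict_next_sample(waveform[-CONTEXT:])
        waveform.append(int(rng.choice(N_LEVELS, p=probs)))
    return np.array(waveform[CONTEXT:])


if __name__ == "__main__":
    audio = generate(1000)
    print(audio[:20])
```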

Another important area of research is memory, and specifically the challenge of combining the decision-making aptitude of neural networks with the ability to store and reason about complex, structured data. Our work on Differentiable Neural Computers, for which we received our third Nature paper in eighteen months, demonstrated models that learn like neural networks while also memorising data like computers. These models can already learn to answer questions about data structures ranging from family trees to tube maps, and bring us closer to the goal of using AI for scientific discovery in complex datasets.
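As an illustration of what differentiable memory access looks like, the sketch below implements content-based soft reads and writes over an external memory matrix in Python. It is a toy, not the published architecture: the real Differentiable Neural Computer also learns write allocation and temporal links, and its keys come from a trained controller network, whereas the helper functions here (`content_weights`, `read`, `write`) are hypothetical names chosen for this example.

```python
# Minimal sketch of content-based soft reads and writes to an external
# memory matrix. Because every slot is addressed with smooth attention
# weights rather than a hard index, the whole operation is
# differentiable and can be trained end to end with a neural network.
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def content_weights(memory, key, beta=5.0):
    """Attention weights over memory slots, from cosine similarity
    between the key and each row, sharpened by beta."""
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    return softmax(beta * sims)


def read(memory, key):
    """Soft read: a weighted average of all rows, so gradients can
    flow back through the addressing."""
    w = content_weights(memory, key)
    return w @ memory


def write(memory, key, erase, add):
    """Soft write: each row is partially erased and updated in
    proportion to its attention weight."""
    w = content_weights(memory, key)[:, None]
    return memory * (1 - w * erase) + w * add


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.normal(size=(8, 4))                 # 8 memory slots of width 4
    item = np.array([1.0, 0.0, -1.0, 0.5])
    M = write(M, key=item, erase=1.0, add=item)  # store the item
    print(read(M, key=item))                     # retrieves something close to it
```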

As well as pushing the boundaries of what these systems can do, we’ve also invested significant time in improving how they learn. A paper titled ‘Reinforcement Learning with Unsupervised Auxiliary Tasks’ described methods to improve the speed of learning for certain tasks by an order of magnitude. And given the importance of high-quality training environments for agents, we open sourced our flagship DeepMind Lab research environment for the community, and are working with Blizzard to develop AI-ready training environments for StarCraft II as well.

Of course, this is just the tip of the iceberg, and you can read much more about our work in the many papers we published this year in top-tier journals from Neuron to PNAS and at major machine learning conferences from ICLR to NIPS. It’s amazing to see how others in the community are already actively implementing and building on the work in these papers - just look at the remarkable renaissance of Go-playing computer programs in the latter part of 2016! - and to witness the broader fields of AI and machine learning go from strength to strength.

It’s equally amazing to see the first early signs of real-world impact from this work. Our partnership with Google’s data centre team used AlphaGo-like techniques to discover creative new methods of managing cooling, leading to a remarkable 15% improvement in the buildings’ energy efficiency. If it proves possible to scale these kinds of techniques up to other large-scale industrial systems, there's real potential for significant global environmental and cost benefits. This is just one example of the work we’re doing with various teams at Google to apply our cutting-edge research to products and infrastructure used across the world. We’re also actively engaged in machine learning research partnerships with two NHS hospital groups in the UK, our home, to explore how our techniques could enable more efficient diagnosis and treatment of conditions that affect millions worldwide, as well as working with two further hospital groups on mobile apps and foundational infrastructure to enable improved care on the clinical frontlines.

Of course, the positive social impact of technology isn’t only about the real-world problems we seek to solve, but also about the way in which algorithms and models are designed, trained and deployed in general. We’re proud to have been involved in founding the Partnership on AI, which will bring together leading research labs with non-profits, civil society groups and academics to develop best practices in areas such as algorithmic transparency and safety. By fostering a diversity of experience and insight, we hope that we can help address some of these challenges and find ways to put social purpose at the heart of the AI community across the world.

We’re still a young company early in our mission, but if in 2017 we can make further simultaneous progress on these three fronts - algorithmic breakthroughs, social impact, and ethical best practice - then we'll be in good shape to make a meaningful continued contribution to the scientific community and to the world beyond.
