Advances in Neural Information Processing Systems 29 (NIPS 2016) pre-proceedings by NIPS. NIPS 2016, the leading annual machine learning conference, takes place next week, December 5–10, in Barcelona. Let us know if you’ll be there. The list of presenting papers is available here ahead of time.
What Artificial Intelligence Can and Can’t Do Right Now by Andrew Ng of Baidu. Overview of what machine learning is actually capable of and where most progress has been made in AI so far: supervised learning. He also offers a helpful rule of thumb for its current reach: if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.
The Next Frontier in AI: Unsupervised Learning by Yann LeCun, Director of AI Research at Facebook. Video discussion of the power of unsupervised deep learning, current challenges, and approaches.
Keras Tutorial: The Ultimate Beginner’s Guide to Deep Learning in Python by EliteDataScience. Learn how to build a convolutional neural network in Python.
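The tutorial builds its network in Keras, but the core operation every convolutional layer performs can be shown without any framework. The sketch below is not the tutorial’s code: it is a minimal “valid” 2D convolution (strictly, a cross-correlation, which is what deep learning libraries actually compute) in plain NumPy, applied to a toy image with a hypothetical edge-detecting kernel.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image,
    taking an elementwise product and sum at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a hard vertical edge, and a vertical-edge-detecting kernel.
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
kernel = np.array([[1., -1.],
                   [1., -1.]])
print(conv2d(image, kernel))  # responds only where the edge is
```

A CNN layer does exactly this, but with many learned kernels at once and over multiple input channels; the tutorial’s Keras layers wrap this operation in an optimized, trainable form.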
Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. A new and much-anticipated book by experts in deep learning, providing a broad, technical overview of the field and where it’s headed. Available digitally or hardcover. Elon Musk stated: “Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.”
Announcing GPUs for Google Cloud Platform by John Barrus of Google Cloud Platform. Google announces that in early 2017, Google Cloud Platform will offer GPUs worldwide for Google Compute Engine and Google Cloud Machine Learning users. They will offer AMD FirePro S9300 x2 and NVIDIA Tesla P100 and K80 GPUs.
The Montreal Institute for Learning Algorithms (MILA) announces funding by Google and others, and leadership by professor Yoshua Bengio, of the University of Montreal and an expert in deep learning. The lab is dedicated to deep learning research and advancement.
Reinforcement learning with unsupervised auxiliary tasks by Google DeepMind. DeepMind augments standard deep reinforcement learning methods with two auxiliary tasks for agents to perform during training: learning to control the pixels on the screen, and predicting the onset of immediate rewards from a short history of observations. Both tasks improve the learning speed and final performance of agents. Original paper here.
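The structural idea is that auxiliary heads share a representation with the main agent and contribute extra loss terms, so the shared features get more training signal. The toy below is an assumption-laden sketch, not DeepMind’s code: a single linear layer stands in for the agent’s shared trunk, a value head stands in for the main RL objective, and a reward-prediction head (classify whether the upcoming reward is positive) illustrates one auxiliary task; all names, sizes, and the weighting are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared representation: one linear layer standing in for the agent's
# shared conv/LSTM trunk (sizes are arbitrary for this toy).
W_shared = rng.normal(scale=0.1, size=(8, 4))

def features(obs):
    return np.tanh(obs @ W_shared)

# Two heads on top of the shared features.
w_value = np.zeros(4)   # main head: state-value estimate
w_reward = np.zeros(4)  # auxiliary head: "is the next reward positive?"

def total_loss(obs, target_value, next_reward_positive, aux_weight=1.0):
    f = features(obs)
    # Main objective: squared error on the value target.
    value_loss = (f @ w_value - target_value) ** 2
    # Auxiliary objective: logistic loss on reward prediction.
    logit = f @ w_reward
    p = 1.0 / (1.0 + np.exp(-logit))
    label = 1.0 if next_reward_positive else 0.0
    reward_loss = -(label * np.log(p + 1e-8)
                    + (1 - label) * np.log(1 - p + 1e-8))
    # Both gradients flow back into W_shared, which is the point.
    return value_loss + aux_weight * reward_loss
```

In the paper the auxiliary losses are weighted and optimized jointly with the A3C objective; minimizing `total_loss` by gradient descent would update `W_shared` from both signals at once.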
Neural Architecture Search with Reinforcement Learning by Google researchers Barret Zoph and Quoc V. Le. Deep learning has shifted much of the engineering effort from feature design to architecture design, but neural network architectures remain hard to design by hand. This research uses reinforcement learning to train a controller that generates neural network architectures.
Lip Reading Sentences in the Wild by Google DeepMind and University of Oxford. Researchers develop an AI system that lip-reads better than professionals, transcribing 46.8% of words correctly compared to professionals’ 12.4%. The system was trained on 5000 hours of video.
Peeking into the neural network architecture used for Google’s Neural Machine Translation by Stephen Merity of Salesforce. A deeper explanation of Google’s architecture for its end-to-end learning approach to automated machine translation. Original paper here.