Up to Speed on Deep Learning: March Update, Part 2

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: March part 1, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.


Learning to Communicate by OpenAI. Pieter Abbeel and his team outline how they train agents to invent their own language, a step toward true language understanding. Original paper here.

Tacotron: A Fully End-to-End Text-To-Speech Synthesis Model by Google. An end-to-end generative text-to-speech model that synthesizes speech directly from characters. The system sounds more natural than traditional methods, and is faster. Audio samples here.

Deep Photo Style Transfer by Fujun Luan et al. Researchers at Cornell and Adobe present a deep learning approach for enhanced photographic style transfer, meaning transferring the style of a reference photo onto an input picture. Examples in the original paper.

Mask R-CNN by Facebook. A conceptually simple, flexible, and general framework for object instance segmentation. Instance segmentation is challenging because it requires the correct detection of all objects in an image while also precisely segmenting each instance.

Evolution Strategies as a Scalable Alternative to Reinforcement Learning by OpenAI. Findings that suggest evolution strategies (ES), an optimization technique that’s been known for decades, rivals the performance of standard reinforcement learning (RL) techniques on modern RL benchmarks, while overcoming many of RL’s inconveniences. Original paper here.
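The core ES update is simple enough to sketch in a few lines. This toy version (my own illustration, not the paper's code, which operates on neural-network policy weights) perturbs the parameters with Gaussian noise and moves along the fitness-weighted average of that noise, a black-box gradient estimate:

```python
import numpy as np

def evolution_strategies(objective, theta, npop=50, sigma=0.1, alpha=0.01, iters=300):
    """Maximize `objective` by sampling Gaussian perturbations of theta and
    stepping along the fitness-weighted average of the noise."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        noise = rng.standard_normal((npop, theta.size))   # candidate perturbations
        rewards = np.array([objective(theta + sigma * n) for n in noise])
        # Standardize rewards so the step size is scale-invariant.
        advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + alpha / (npop * sigma) * noise.T @ advantage
    return theta

# Toy problem: find the point closest to a target vector.
target = np.array([0.5, -0.3, 0.8])
solution = evolution_strategies(lambda w: -np.sum((w - target) ** 2),
                                theta=np.zeros(3))
```

Note that only scalar rewards flow back to the optimizer, which is why ES parallelizes so well: workers need to exchange rewards and random seeds, not gradients.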

Neural Episodic Control by DeepMind. Introduces a deep reinforcement learning agent that is able to rapidly assimilate new experiences and act upon them. The result is a system that learns faster than traditional deep learning agents.


Research Debt by Chris Olah and Shan Carter of Google Brain. The team behind Distill explains the Distill Ecosystem — a journal, prize, and set of tools/infrastructure that add greater clarity and explanation to the world of machine learning research.

Visdom by Facebook. Analyzing the results of experiments and experimental code is tricky, and this tool is designed to make it easier to build visualizations and dashboards out of the box. A flexible tool for creating, organizing, and sharing visualizations of live, rich data. Supports Torch and Numpy.

Opening a new chapter of my work in AI by Andrew Ng. Head of the AI Group at Baidu announces his departure and continued dedication to work in AI.

Deep Learning, complete online version by Ian Goodfellow, Yoshua Bengio, Aaron Courville. The online version of this comprehensive book on deep learning is now available in full for free.

The AI Misinformation Epidemic by Zachary Lipton. UCSD PhD candidate & incoming CMU professor articulates the pervasive misinformation around AI as a result of a confluence of interest and ignorance surrounding the topic.


Squeezing Deep Learning Into Mobile Phones by Anirudh Koul of Microsoft. How to get deep neural networks running on memory- and energy-constrained devices like smartphones. Highlights some frameworks and best practices.

Seeing Theory by Daniel Kunin of Brown University. A visual and interactive introduction to probability & statistics, covering topics from basic probability to linear regression. The goal of the project is to make statistics more accessible to a wider range of students through interactive visualizations.

Dissecting Reinforcement Learning by Massimiliano Patacchiola. The latest installment in a 5 part series on reinforcement learning — this one introduces evolutionary algorithms, specifically genetic algorithms.
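The genetic algorithm that post introduces can be sketched minimally. This toy version (an assumed example, not the post's code) evolves bit strings toward the classic "one-max" fitness of counting 1s, using tournament selection, single-point crossover, and bit-flip mutation:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)              # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with small per-bit probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)   # "one-max": maximize the count of 1 bits
```

The same select/crossover/mutate loop generalizes to any genome encoding; only the fitness function and the crossover/mutation operators change.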

Adapting ideas from neuroscience for AI by Nick Clark of O’Reilly. An interview with Geoffrey Hinton on the relevance of neuroscience and the brain on our understanding of artificial intelligence.

Fantastic GANs and where to find them by Guim Perarnau. A summary of relevant topics, links, and resources about generative adversarial networks.
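For readers new to GANs, the two competing losses at the heart of the framework can be illustrated numerically. This sketch uses made-up discriminator scores (not outputs of any trained model) to show the standard discriminator loss and the non-saturating generator loss:

```python
import numpy as np

# Assumed toy probabilities: the discriminator D outputs P(sample is real).
d_real = np.array([0.9, 0.8, 0.95])   # D's scores on real samples
d_fake = np.array([0.1, 0.2, 0.05])   # D's scores on generated samples

# Discriminator maximizes log D(x) + log(1 - D(G(z))),
# so its loss is the negation of that objective.
d_loss = -(np.log(d_real).mean() + np.log(1 - d_fake).mean())

# Non-saturating generator loss: maximize log D(G(z)).
g_loss = -np.log(d_fake).mean()
```

With these scores the discriminator is winning: its loss is small while the generator's is large, which is exactly the pressure that drives the generator to produce more convincing samples.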

Awesome Reinforcement Learning by Hyunsoo Kim & Jiwon Kim. A collection of excellent resources on reinforcement learning, spanning code, theory, applications, demos, etc.

Try Deep Learning in Python now with a fully pre-configured VM by Adam Geitgey. Answers the question: how in the world do I get all these open source libraries installed and working on my computer? A great resource for folks getting started with machine learning tutorials and hoping to streamline the setup process.

Classifying White Blood Cells With Deep Learning by Dhruv Parthasarathy of Athelas. Tutorial demonstrating how to leverage deep learning to classify white blood cells as mononuclear or polynuclear. GitHub repo here.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning, we’d love to hear from you.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.
