Up to Speed on Deep Learning: June 11–18 Update

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1, part 2), May, April (part 1, part 2), March part 1, February, November, September part 2 & October part 1, September part 1, August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Research & Announcements

Learning to Speak via Interaction by Baidu Research. Teaching an AI agent to speak by interacting with a virtual agent. This more closely replicates how humans learn and advances the goal of demonstrating general artificial intelligence: the agent learns to speak interactively, much as a baby does. In contrast, the conventional approach relies on supervised training over a large, pre-collected corpus, which is static and struggles to capture the interactive nature of language learning. Original paper here.

Deep Shimon: Robot that composes its own music by Mason Bretan of Georgia Tech. The robot Shimon composes and performs his first deep-learning-driven piece. A recurrent deep neural network is trained on a large database of classical and jazz music; based on learned semantic relationships between musical units in this dataset, Shimon generates and performs a new musical piece. Video here.

Curiosity-driven Exploration by Self-supervised Prediction by Pathak et al. UC Berkeley researchers demonstrate artificial curiosity: an intrinsic curiosity model drives a virtual agent in a video game to explore and understand its environment faster, which can accelerate problem solving. Original paper here and video here.
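
The core trick is easy to sketch. As a rough illustration (not the authors' code), the agent receives an intrinsic reward proportional to how badly a learned forward model predicts the features of the next state, so poorly understood states become rewarding to visit; forward_model below is a hypothetical stand-in for the paper's learned dynamics model:

```python
import numpy as np

def intrinsic_reward(forward_model, phi_s, action, phi_s_next, eta=0.5):
    """Curiosity bonus: the forward model's prediction error on the next
    state's features. High error means novelty, which earns more reward."""
    phi_pred = forward_model.predict(phi_s, action)  # hypothetical model API
    return eta * 0.5 * np.sum((phi_pred - phi_s_next) ** 2)
```

The agent then maximizes the sum of the usual extrinsic game reward and this bonus, which pushes it toward states its model cannot yet predict.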

Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour by Facebook Research. Deep learning benefits from massive data sets, but this means long training times that slow down development. Using commodity hardware, our implementation achieves ∼90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internet-scale data with high efficiency. Original paper here.
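
The recipe rests on two simple adjustments: scale the learning rate linearly with the minibatch size, and warm it up gradually over the first few epochs to avoid early instability. A minimal sketch of that schedule, with illustrative constants:

```python
def learning_rate(epoch, batch_size, base_lr=0.1, base_batch=256, warmup_epochs=5):
    """Linear scaling rule: if the minibatch grows by a factor of k,
    grow the learning rate by k. Ramp up to the target rate over the
    first few epochs rather than starting there. Constants are illustrative."""
    target_lr = base_lr * (batch_size / base_batch)  # linear scaling
    if epoch < warmup_epochs:
        # gradual warmup from base_lr toward target_lr
        return base_lr + (target_lr - base_lr) * epoch / warmup_epochs
    return target_lr
```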

Resources

A gentle introduction to deep learning with TensorFlow by Michelle Fullwood at PyCon 2017. This talk aims to gently bridge the divide by demonstrating how deep learning builds on core machine learning concepts, and gets attendees started coding deep neural networks using Google’s TensorFlow library. 41-minute video. Slides here and GitHub here.

Deep Reinforcement Learning Demystified (Episode 0) by Moustafa Alzantot. A basic description of what reinforcement learning is and examples of where it can be used, covering the essential terminology and a quick tutorial on the OpenAI Gym toolkit.
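
For flavor, here are the classic first steps with the Gym API of that era: a CartPole environment driven by a random policy, printing each episode's score:

```python
import gym

env = gym.make('CartPole-v0')
for episode in range(5):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print('episode %d: total reward %.0f' % (episode, total_reward))
```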

Neural Networks and Deep Learning by Michael Nielsen. Free online book that introduces neural networks and deep learning.

You can probably use deep learning even if your data isn’t that big by Andrew Beam. Argues that you can still use deep learning in (some) small-data settings, provided you train your model carefully. A response to Don’t use deep learning your data isn’t that big by Jeff Leek.

Posting on ArXiv is good, flag planting notwithstanding by Yann LeCun. A response to, and refutation of, An Adversarial Review of “Adversarial Generation of Natural Language” by Yoav Goldberg of Bar-Ilan University, which takes issue with deep learning researchers publishing aggressively on arXiv.

Tutorials & Data

Computational Neuroscience Coursera course by University of Washington. Starts July 3, enroll now. Learn how the brain processes information. This course provides an introduction to basic computational methods for understanding what nervous systems do and for determining how they function. We will explore the computational principles governing various aspects of vision, sensory-motor control, learning, and memory.

Core ML and Vision: Machine Learning in iOS 11 Tutorial by Audrey Tam. iOS 11 introduces two new frameworks related to machine learning, Core ML and Vision. This tutorial walks you through how to use these new APIs and build a scene classifier.
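
The tutorial itself is written in Swift, but the .mlmodel file Core ML consumes is typically produced in Python. As a hedged sketch of that step, Apple’s coremltools package can convert a trained Keras classifier; the file name and labels below are placeholders:

```python
import coremltools

# Convert a trained Keras scene classifier to Core ML format.
# 'scene_classifier.h5' and the label list are hypothetical placeholders.
coreml_model = coremltools.converters.keras.convert(
    'scene_classifier.h5',
    input_names='image',
    image_input_names='image',  # treat the input tensor as an image
    class_labels=['beach', 'city', 'forest'])
coreml_model.save('SceneClassifier.mlmodel')  # drag into the Xcode project
```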

Deep Learning CNNs in Tensorflow with GPUs by Cole Murray. In this tutorial, you’ll learn the architecture of a convolutional neural network (CNN), how to build one in TensorFlow, and how to generate predictions for image labels. Finally, you’ll learn how to run the model on a GPU so you can spend your time creating better models rather than waiting for them to converge.
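
To make the shape of such a model concrete, here is a small convolutional stack in the TensorFlow 1.x layers API. This is a generic sketch with illustrative filter counts, not the tutorial’s exact code, and it assumes 28x28 single-channel inputs:

```python
import tensorflow as tf

def cnn(images, num_classes):
    """conv -> pool, twice, then a dense classifier head (returns logits)."""
    net = tf.layers.conv2d(images, filters=32, kernel_size=5,
                           padding='same', activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
    net = tf.layers.conv2d(net, filters=64, kernel_size=5,
                           padding='same', activation=tf.nn.relu)
    net = tf.layers.max_pooling2d(net, pool_size=2, strides=2)
    net = tf.reshape(net, [-1, 7 * 7 * 64])  # flatten: 28x28 -> 7x7 after two pools
    net = tf.layers.dense(net, 1024, activation=tf.nn.relu)
    return tf.layers.dense(net, num_classes)
```

Placing the graph on a GPU is then a matter of building it inside a with tf.device('/gpu:0'): block.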

Open-sourced Kinetics data set by Google DeepMind. An annotated data set of human actions: things like playing instruments, shaking hands, and hugging. Kinetics is a large-scale, high-quality dataset of YouTube video URLs covering a diverse range of human-focused actions. The dataset consists of approximately 300,000 video clips and covers 400 human action classes, with at least 400 video clips per action class.

Let’s evolve a neural network with a genetic algorithm by Matt Harvey of Coastline Automation. Applies a genetic algorithm to evolve a network’s hyperparameters, finding strong settings in a fraction of the time a brute-force search would take.
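
The gist is framework-independent: maintain a population of hyperparameter settings, keep the fittest, and breed the rest. A minimal sketch, where fitness is assumed to train a network briefly and return its validation accuracy (the expensive step), and the search space is illustrative:

```python
import random

SEARCH_SPACE = {'layers': [1, 2, 3], 'units': [64, 128, 256],
                'optimizer': ['sgd', 'adam', 'rmsprop']}

def random_net():
    # seeds the initial population with random settings
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def breed(mom, dad, mutate_prob=0.1):
    child = {k: random.choice([mom[k], dad[k]]) for k in SEARCH_SPACE}
    if random.random() < mutate_prob:  # occasionally mutate one gene
        key = random.choice(list(SEARCH_SPACE))
        child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(population, fitness, keep=0.4):
    graded = sorted(population, key=fitness, reverse=True)
    parents = graded[:int(len(graded) * keep)]  # survival of the fittest
    while len(parents) < len(population):
        parents.append(breed(*random.sample(parents, 2)))
    return parents
```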


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning, or if there are resources I should share in a future newsletter, I’d love to hear from you.

Up to Speed on Deep Learning: June Update, Part 2

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: June (part 1), May, April (part 1, part 2), March part 1, February, November, September part 2 & October part 1, September part 1, August (part 1, part 2), July (part 1, part 2), June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Research & Announcements

Scalable and Sustainable Deep Learning via Randomized Hashing by Spring and Shrivastava of Rice University. The researchers have adapted a widely used technique for rapid data lookup to slash the amount of computation (and thus energy and time) required for deep learning. “This applies to any deep learning architecture, and the technique scales sublinearly, which means that the larger the deep neural network to which this is applied, the more the savings in computations there will be,” said Shrivastava. News article here.

A neural approach to relational reasoning by DeepMind. Relational reasoning is the process of drawing conclusions about how things are related to one another, and is central to human intelligence. A key challenge in developing artificial intelligence systems with the flexibility and efficiency of human cognition is giving them a similar ability — to reason about entities and their relations from unstructured data. These papers show promising approaches to understanding the challenge of relational reasoning. Original papers here and here.
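
The Relation Network module at the heart of the first paper is strikingly compact: apply a small network g to every pair of objects, sum the results, and pass the total through a second network f. A schematic sketch, with g and f standing in for the learned MLPs in the paper:

```python
import numpy as np

def relation_network(objects, g, f):
    """objects: array of shape (n, d), e.g. cells of a CNN feature map.
    g scores every ordered pair of objects; f maps the summed
    relations to the final output."""
    pair_sum = sum(g(np.concatenate([o_i, o_j]))
                   for o_i in objects for o_j in objects)
    return f(pair_sum)
```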

Resources

Applying deep learning to real-world problems by Rasmus Rothe of Merantix. A must-read on key learnings when using deep learning in the real world. Discussion of the value of pre-training, caveats of real-world label distributions, and understanding black box models.

CuPy by Preferred Networks. An open-source matrix library accelerated with NVIDIA CUDA, designed as a drop-in replacement for NumPy. GitHub repo here.
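
Because the interface mirrors NumPy, moving an array computation onto the GPU is often a few-line change:

```python
import numpy as np
import cupy as cp

x_cpu = np.random.rand(1000, 1000)
x_gpu = cp.asarray(x_cpu)       # copy the array to GPU memory

y_gpu = cp.dot(x_gpu, x_gpu.T)  # same signature as np.dot, runs on the GPU
y_cpu = cp.asnumpy(y_gpu)       # copy the result back when needed
```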

Speaker Resources 2017 by The AI Conference. Various news articles, academic papers, and datasets shared by folks involved in and enthusiastic about AI. (h/t Michelle Valentine)

Neural Network Architectures by Eugenio Culurciello. An in-depth overview and history of neural network architectures in the context of deep learning, spanning LeNet5, AlexNet, GoogLeNet, and Inception, with a discussion of where things are headed in the future. Original paper here.

Model Zoo by Sebastian Raschka. A collection of standalone TensorFlow models in Jupyter Notebooks, including classifiers, autoencoders, GANs, and more. The broader repo for Sebastian’s book is also useful, here.

Tutorials & Data

Sketch-RNN: A Generative Model for Vector Drawings by Google. A TensorFlow recurrent neural network model for teaching machines to draw. Overview of the model and how to use it. Described in greater depth by Google here and here.

Exploring LSTMs by Edwin Chen. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind a lot of the amazing achievements deep learning has made in the past few years. So I’ll try to present them as intuitively as possible — in such a way that you could have discovered them yourself. An overview of long short-term memory networks, and a tutorial on their use.
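
The “simple extension” framing is apt: one LSTM step is four gated linear maps. A bare-bones version in NumPy, where the weights would normally be learned and the shapes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (4*hidden, input+hidden), b (4*hidden,).
    The four blocks of W compute the input, forget, and output gates
    plus the candidate cell update."""
    z = W.dot(np.concatenate([x, h_prev])) + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # update cell memory
    h = sigmoid(o) * np.tanh(c)                        # gated output
    return h, c
```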

Vistas Dataset by Mapillary. Free for research, the MVD is the world’s largest manually annotated semantic segmentation training data set for street-level imagery. It is primarily being used to train deep neural nets for object detection, semantic segmentation, and scene understanding in ADAS and autonomous driving. (h/t Andrew Mahon)


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning, or if there are resources I should share in a future newsletter, I’d love to hear from you.

Up to Speed on Deep Learning: June Update

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: May, April part 2, April part 1, March part 1, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Continue reading “Up to Speed on Deep Learning: June Update”

Up to Speed on Deep Learning: May Update

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: April part 2, April part 1, March part 1, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Continue reading “Up to Speed on Deep Learning: May Update”

7 Insightful Quotes from Amazon’s Letter to Shareholders

Every year, Jeff Bezos publishes a letter to Amazon shareholders. These letters tend to be an interesting lens into the company and its progress, as well as a source of insightful snippets about building & leading an iconic business. We thought we’d share a few of our favorite nuggets from Bezos’ 2017 letter published earlier this month. We did the same for last year’s letter as well here.

Continue reading “7 Insightful Quotes from Amazon’s Letter to Shareholders”

Up to Speed on Deep Learning: April Update, Part 2

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: April part 1, March, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Continue reading “Up to Speed on Deep Learning: April Update, Part 2”

Up to Speed on Deep Learning: March Update, Part 2

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan (email)

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: March part 1, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Continue reading “Up to Speed on Deep Learning: March Update, Part 2”

Up to Speed on Deep Learning: March Update

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: March part 1, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Continue reading “Up to Speed on Deep Learning: March Update”

Channel Sales: The 3 Key Attributes You Need to Know to Scale Your Business

Here’s what to consider when approaching partners & resellers to help you sell more product.

By Jaimin Patel, Ascanio Guarini, and Isaac Madan

There are many factors that play a key role in picking the right indirect routes to market (aka partner strategy, channel selling) for a company looking to scale its revenue beyond direct sales.

Continue reading “Channel Sales: The 3 Key Attributes You Need to Know to Scale Your Business”

Up to Speed on Deep Learning: February Update

Sharing some of the latest research, announcements, and resources on deep learning.

By Isaac Madan

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Continue reading “Up to Speed on Deep Learning: February Update”