Sharing some of the latest research, announcements, and resources on deep learning.
Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post. In case you missed it, here are our past updates: April part 1, March, February, November, September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April 2016. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.
NSynth: Neural Audio Synthesis by Google Brain (Magenta). Researchers announce NSynth (Neural Synthesizer), a novel approach to music synthesis designed to aid the creative process. NSynth uses deep neural networks to generate sounds at the level of individual samples. They released a dataset of musical notes and a novel WaveNet-style autoencoder model. Original paper here.
Beyond Face Rotation by Huang et al. Researchers propose a human-like, global- and local-aware GAN architecture for frontal view synthesis from a single image, which can synthesize photorealistic and identity-preserving frontal view images. From an image of the side of someone’s face, they can produce a realistic frontal image. See examples in the paper.
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks by Zhu et al. UC Berkeley researchers introduce a method for image-to-image translation that requires no paired training examples. Examples include mapping Monet paintings to landscape photographs from Flickr, turning a horse into a zebra in a video here, and other cool ones. Original paper here, GitHub here.
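The key idea that makes unpaired translation work is a cycle-consistency loss: translating an image to the other domain and back should recover the original. Below is a toy sketch of that loss in numpy, using hypothetical linear stand-ins for the two generators (the real CycleGAN uses convolutional networks and adds adversarial losses from two discriminators; this only illustrates the cycle term).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in generators: G maps domain X -> Y, F maps Y -> X.
# Here F is constructed as the exact inverse of G, so the cycle loss is ~0.
A = rng.normal(size=(4, 4))
G = lambda x: x @ A                  # X -> Y
F = lambda y: y @ np.linalg.inv(A)   # Y -> X

def cycle_consistency_loss(x, y):
    """L1 penalty: translating forth and back should recover the input."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)) should be ~x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)) should be ~y
    return forward + backward

x = rng.normal(size=(8, 4))  # batch from domain X
y = rng.normal(size=(8, 4))  # batch from domain Y
print(cycle_consistency_loss(x, y))  # near zero, since F inverts G exactly
```

In training, this term is minimized alongside the GAN losses, which is what keeps the horse-to-zebra mapping from collapsing to an arbitrary zebra image.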
zi2zi: Master Chinese Calligraphy with Conditional Adversarial Networks by Yuchen Tian. An enhancement of prior work in neural style transfer for Chinese characters (called Rewrite), leveraging generative adversarial networks (GANs). GitHub repo here.
Unsupervised sentiment neuron by OpenAI. We’ve developed an unsupervised system which learns an excellent representation of sentiment, despite being trained only to predict the next character in the text of Amazon reviews. Our system beats other approaches on Stanford Sentiment Treebank while using dramatically less data. Original paper here.
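The surprising part is the training objective: nothing but next-character prediction, from which a sentiment representation emerges. A minimal sketch of that objective, using a hypothetical count-based bigram predictor in place of OpenAI's large multiplicative LSTM (this shows the objective only, not the emergent representation):

```python
from collections import defaultdict, Counter

def train_char_model(text):
    """Count, for each character, which character tends to follow it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev):
    """Most likely next character given the previous one."""
    return counts[prev].most_common(1)[0][0]

# Tiny stand-in corpus; the real system trained on Amazon reviews.
model = train_char_model("this product is great. this product is good.")
print(predict_next(model, "p"))  # 'r', since 'p' is always followed by 'r' here
```

Scaled up to a recurrent network over millions of reviews, the same objective produced a single hidden unit whose activation tracks sentiment.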
Announcements & Resources
Future of Go Summit by DeepMind. From May 23–27, we’ll collaborate with the China Go Association and Chinese Government to bring AlphaGo, China’s top Go players, and leading A.I. experts from Google and China together in Wuzhen, one of the country’s most beautiful water towns. An opportunity to study the game of Go more deeply and advance AI through what is learned.
Neural Complete by Pascal van Kooten. A neural network trained to help write neural network code via autocomplete. One could imagine that everyone will have a neural network to automagically complete their personal scripts based on their own neural model.
Which GPU(s) to Get for Deep Learning: My Experience and Advice for Using GPUs in Deep Learning by Tim Dettmers. An update on a previous post about GPU evaluation.
Open sourcing Sonnet — a new library for constructing neural networks by DeepMind. A higher-level framework for quickly building neural network modules with TensorFlow. GitHub repo here.
A Peek at Trends in Machine Learning by Andrej Karpathy. A fun look at Google Trends data to understand trending deep learning frameworks, models, researchers, and more.
Natural Language Processing with Deep Learning by Stanford. All the recorded lecture videos by Chris Manning & Richard Socher for the Stanford NLP class held in Winter 2017.
Deep Learning with Emojis (not Math) by Jeremy Stanley. An example of a real-world implementation of deep learning by Instacart to sort shopping lists, via Keras and TensorFlow.