Indirect Routes to Market

When enterprise-focused startups first get started, the sales focus is on selling directly to end customers. This is a valid approach early on, as the company is still refining its product, messaging, value proposition, etc.
Early on, direct sales are all a company needs.

It is important at this stage for the company to have direct contact with customers, but as these elements solidify, scaling through a direct sales model can become challenging. This is where startups start to think about how to scale through indirect models. The expectation is that a company can leverage partners to grow its business faster and expand its reach.

Later, partners allow a company to scale in ways that are likely impossible through direct-sales approaches.

However, during execution, many of these strategies do not live up to expectations. For example, if a software company’s product requires specialized selling and implementation services, and its reselling partners are not geared for those specialized services, the mismatch can be catastrophic. Another example is a company that decides to leverage service-based partners but has not built the sales pipeline to support that route.

In this post, we’ll give a brief overview of indirect routes to market, and a high-level framework for considering which route is best for a given company. In later posts, we’ll go into each route individually and examine some case studies of how these routes play out in practice, both in success and in failure.

When looking at an indirect route to market, we first need to categorize the types of routes that are available, and then decide which routes are the right fit for the company (a.k.a. the vendor) based on the maturity of the product, the state of evolution of the vendor, and the vendor’s expectations for growth.

The right partner strategy depends on many things, but the most important are company stage and product characteristics (mainly technical complexity).

With those two elements in mind, we begin by breaking out the following indirect routes: Alliance partners, Channel partners, Distribution partners, System Integration partners, Technology partners, OEM partners, and finally, Managed Service Providers.

  • Alliance Partners — These partners tend to be the smallest group but carry the largest influence when it comes to Fortune 1000 companies. They are the big consulting houses like Deloitte, Accenture, PwC, and KPMG, to name a few. These partners serve the Fortune 1000 as strategic advisors on key business initiatives and are often in a position to define the framework for key technology decisions, giving them a strong, influential role in the marketplace.
  • Channel Partners — These make up the largest community of partners and encompass resellers, value-added resellers, and distribution partners. Examples of partners that fall into this category are CDW, Insight, Carahsoft, and Optiv. The main focus of these partners is to resell a company’s products and offer key services on top of the products they resell. They often have technical expertise on staff to help their customers install the products and achieve the benefits the vendor claims for them.
  • Distribution Partners — These partners provide a powerful scaling effect for vendors: they build relationships with all of the channel partners and can take on the responsibility of enabling their channel partner ecosystem to drive a vendor’s product in the marketplace. Examples of partners that fall into this category are Arrow Electronics, Synnex, and Ingram Micro. Many of these partners also have subsidiaries based around the world. They can act as a powerful extension of a vendor’s sales organization, scaling the awareness and enablement of a vendor’s products. Distribution partners play a pivotal role in the channel, but it is critical that the product and the company are set up to support their ability to scale.
  • System Integrators — These partners are focused purely on services and are less interested in reselling products. Examples of partners that fall into this category are CGI Group, Booz Allen, and Redapt. They often position themselves as “trusted” advisors to their clients and do not want to be seen as receiving any benefit from the companies they advocate for. However, these companies do have a strong affinity for products that require services to implement and operate.
  • Technology Partners — These are partners that have an “integration” relationship between a product they sell in the marketplace and the vendor’s product. These companies are interested in partnerships that enable them to fill a gap in their product that is not a core competency for them.
  • OEM Partners — These partners embed a vendor’s technology into their own. These partnerships tend to be highly custom and can be very lucrative if the relationship has been crafted to enable growth for the OEM partner.
  • Managed Service Providers — These partners are a new and growing group that buy a vendor’s product, set it up in their own data centers, and offer services on top of it to end customers. This indirect model is becoming more prevalent with the growth of cloud-based services.

Now that we have defined a few routes, we should think about what makes the ideal route for a vendor. There is no simple formulaic method for making the decision. However, as the framework above implies, the internal process should start with a thorough evaluation of the maturity of the product and company, not to mention the market landscape. In the next set of posts we’ll dig into the impact of a product’s complexity and a company’s stage in choosing the right indirect route to market.

By Jaimin Patel and Ascanio Guarini of Requests for Startups. Curated by Isaac Madan of Venrock (email).

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: September Update, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 31st. In case you missed it, here are our past updates: August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

One Hundred Year Study on Artificial Intelligence (AI100) by the AI100 Standing Committee (full PDF here). The One Hundred Year Study on Artificial Intelligence, or AI100, is a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play. Researchers explain opportunities to apply artificial intelligence across many domains, such as transportation and healthcare, and the ways in which it has and will affect our lives. They also share a framework around AI policy. The group plans to assess the state of AI every five years.

Infrastructure for Deep Learning by the OpenAI team. In this post, the team shares how deep learning research usually proceeds, describes the infrastructure choices they’ve made to support it, and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for Kubernetes.

TF-Slim: A high level library to define complex models in TensorFlow by Nathan Silberman and Sergio Guadarrama of Google Research. TF-Slim is a lightweight package for defining, training and evaluating models in TensorFlow. The Google Research team announces the latest release of TF-Slim, which includes many types of layers, loss functions, evaluation metrics, and handy routines for training and evaluating models.

PaddlePaddle by Baidu (GitHub repo here). An open, easy-to-use deep learning platform for enterprise and research. PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.

Deep Learning in a Nutshell: Reinforcement Learning by Tim Dettmers of NVIDIA. A broad overview of reinforcement learning including the major concepts and areas in which the technique shines. The 4th installment in Tim’s excellent series of articles that explain deep learning core concepts.

WaveNet: A Generative Model for Raw Audio by DeepMind (original paper here). The team presents a model called WaveNet that generates speech mimicking any human voice, sounding more natural than the best existing text-to-speech systems and reducing the gap with human performance by over 50%. The same network can also synthesize other audio signals such as music, and the post presents some striking samples of automatically generated piano pieces.

Attention and Augmented Recurrent Neural Networks by Chris Olah and Shan Carter. Researchers at Google Brain explain four ways to augment Recurrent Neural Networks with new and more sophisticated properties, as well as the concept of attention that is an underlying principle of each.
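As a toy illustration of the attention idea discussed in the article (my own sketch in plain Python, not code from the authors): a query scores each memory slot, the scores are softmax-normalized into weights, and the output is the weighted average of the stored values.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    # Score each key by its dot product with the query, normalize
    # the scores, then blend the values by those weights.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# The query matches the first key more closely, so the output
# leans toward the first value.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attend(query, keys, values)
```

Real attention layers learn the scoring function and operate on dense vectors, but the weight-and-blend structure is the same.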

Why does deep and cheap learning work so well? by Henry Lin and Max Tegmark. Researchers from Harvard and MIT argue that the efficacy of neural networks is better explained via the laws of physics than by mathematics alone: “We show how the success of deep learning depends not only on mathematics but also on physics” (summary article here).

How a Japanese cucumber farmer is using deep learning and TensorFlow by Kaz Sato of Google Cloud Platform. The article explains how Makoto Koike, a Japanese embedded-systems designer, built an end-to-end cucumber sorter for his parents’ cucumber farm in Japan. Koike leverages TensorFlow to implement his deep learning based classifier.

Playing for Data: Ground Truth from Computer Games by Richter et al. Researchers at TU Darmstadt and Intel Labs present an approach to creating detailed label maps from the Grand Theft Auto video game, exploiting the game’s high degree of realism. This can increase the accuracy of self-driving algorithms that are traditionally trained on real-world data sets.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.


Up to Speed on Deep Learning: August Update, Part 2

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 16th. In case you missed it, here’s the August update (part 1), here’s the July update (part 2), here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Learning to Segment by Piotr Dollar of Facebook. Piotr explains Facebook’s efforts and progress in image segmentation, highlights use cases, and explains the importance of such advancements: “When humans look at an image, they can identify objects down to the last pixel. At Facebook AI Research (FAIR) we’re pushing machine vision to the next stage — our goal is to similarly understand images and objects at the pixel level.”

Google Brain begins accepting applications to its Residency program on September 1st. Google’s Jeff Dean will deliver a YouTube livestream describing the Google Brain team and the Residency program. The Google Brain Residency Program is a one-year intensive residency focused on deep learning. Residents will have the opportunity to conduct cutting-edge research and work alongside some of the most distinguished deep learning scientists within the Google Brain team. Consider applying here when applications open.

Text summarization with TensorFlow by Peter Liu of Google Brain. The Brain team open sources their TensorFlow model code for generating news headlines on a large dataset frequently used for summarization tasks. Peter explains two approaches — extractive and abstractive summarization, describes the model, and highlights areas of future interest.

Google Brain robot datasets by Sergey Levine, Chelsea Finn, and Laura Dows. The Google Brain team releases massive robotics datasets from two of their recent papers to further drive the field forward. Their grasping dataset contains roughly 650,000 examples of robot grasping attempts (original paper here). Their push dataset contains roughly 59,000 examples of robot pushing motions, including one training set (train) and two test sets of previously seen (testseen) and unseen (testnovel) objects (original paper here).

End-to-End Deep Learning for Self-Driving Cars by NVIDIA. The autonomous car team at NVIDIA describes their end-to-end approach to self-driving vehicles, using convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera to the steering commands for a self-driving car. Original paper here.

NIPS list of accepted papers. The 2016 Conference on Neural Information Processing Systems, cited as the top machine learning conference, takes place from December 5th through 10th in Barcelona, Spain. The list of accepted papers highlights some of the bleeding-edge machine learning & AI research that will be presented, as well as the researchers & practitioners driving the field forward who may be present. Consider attending this year — details here.

Combining satellite imagery and machine learning to predict poverty by Neal Jean et al of Stanford. Nighttime lighting is a rough proxy for economic wealth, and nighttime maps of the world show that many developing countries are sparsely illuminated. Jean et al. combined nighttime maps with high-resolution daytime satellite images. With a bit of machine-learning wizardry, the combined images can be converted into accurate estimates of household consumption and assets, both of which are hard to measure in poorer countries. Furthermore, the night- and day-time data are publicly available and nonproprietary.

Speech Is 3x Faster than Typing for English and Mandarin Text Entry on Mobile Devices by Sherry Ruan et al of Stanford & Baidu. Researchers evaluate Deep Speech 2, a deep learning-based speech recognition system, assessing that the system makes English text input 3.0X faster, and Mandarin Chinese input 2.8X faster than standard keyboard typing. The error rates were also dramatically reduced, and the results further highlight the potential & strength of speech interfaces.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.


Analyzing The Podcast Renaissance

By Niraj Pant

Before starting to analyze podcasting in its current form, it may be best to go back.

With the birth of podcasts in 2005, the popularity of the search term increased exponentially. Many things were happening at this time — Apple added support for podcasts on the iPod/iTunes, and many early podcasts were coming out with their first episodes. By 2007, there had been over 24 attempts to trademark the term “podcast”.¹

Why the significant drop in interest afterwards? In my eyes, podcasts went through a “false start” during this time period. Mobile wasn’t in full effect until a few years later, a low number of shows were available, and radio was still “good enough” for most consumers. However, since early 2015, the popularity of the search term has started clawing back. As all the right pieces are being put in place, podcasting is finally starting to go through its renaissance.

It’s no coincidence that the number of Americans who have listened to a podcast has doubled since 2008 (the graph’s scale doesn’t do it justice). We’re witnessing a few problems from 2005 being solved today:

  1. Content! A big reason listenership was so low early on was the lack of content. Not only was the number of podcasts low, but variety and selection were also very limited. Now that the number of shows being added is growing exponentially, demand for podcasts has only increased. This is similar to Netflix’s growth: as more content is added to the platform, demand increases in response. iTunes has over 325,000 podcasts and expects users to listen to 10 billion episodes in its apps by the end of 2016.² There are never enough podcasts.
  2. In-car audio. Traditionally, people have listened to AM/FM radio for dynamic content on-the-go. With better sound quality, less obtrusive ads, and a greater range of content, podcasts are becoming the go-to source for in-car listening. As cars become increasingly internet-connected, there’s an opportunity for many new people to become regular podcast listeners.
  3. Growth of mobile. Since podcasts first came out, data speeds have increased 100x. Mobile has become much more widespread, and in turn, there are many more ways to consume podcasts on the go (but there could be better apps to facilitate this).
  4. Apple. They’ve spearheaded the efforts in creating a lot of early listeners. By providing a clean, usable solution to download and listen to podcasts, they reduced many points of friction for listeners, driving higher engagement. Additionally, iTunes ships on every new Apple consumer device (iPhone, iPad, Mac, etc.), some of the most popular products of all time.
  5. Monetization is becoming increasingly profitable. Podcast is a lousy word, since many people equate it with the idea of being free. However, many new podcasts are actually profitable. New models for publishing and monetizing are starting to come out as the space matures (some good ideas here), making running a podcast more lucrative than ever before.

With all this growth, the tools used to publish and consume podcasts are still lacking. More tools need to go into place to give consumers the A+ experience they deserve, starting with a better podcast player. The podcast industry is ripe for some new companies — the tools people use to share, listen, and discover new podcasts just aren’t there yet.

Previous Attempts

The perfect podcast platform has been attempted before. In fact, Ev Williams, before going on to found Twitter and Medium, worked on Odeo, a podcast creation and consumption platform. Ev has helped define both blogging (Blogger, Medium) and social networking (Twitter), and he knew that this space had big potential. Unfortunately, the team behind Odeo later decided to sell the company, due to not understanding the space as well as they had hoped. Ultimately, Twitter came out of this, so we can’t be too mad 😉

More recently, Google added a podcast feature to Play Music, which is now standard on every Android device (similar to iTunes being packaged with every iOS device). Spotify recently added support for podcasts, and is looking into adding hosting and streaming services. Audible, known for being one of the largest audiobook providers, also recently started bringing new, original podcast content to its platform.

The 2 Sides to Podcasting — Consumption and Publication

There are opportunities on both sides of the podcast experience, publishing and listening, as both are still stuck in the ’80s. These days, to get a podcast published, you need to set up microphone software, audio editing tools, an RSS feed, and strategies to publish content to different podcast networks.

Getting a show on traditional radio networks is hard — creating a podcast, however, shouldn’t be. It brings a lower barrier to entry for getting your thoughts out there. I won’t go into the publishing side as much, since Ian Miles Ownbey wrote a pretty good post on it here.

Where’s the YouTube/GoodReads/Pocket/x for Podcasts?

Seriously, why hasn’t this been built yet??? The number of requests for podcast startups I’ve seen is at an all-time high.

Literally applying any X for Y model would be better than what we’re currently working with.

There’s a lot missing from current podcast players. Aside from the obvious cross-platform sharing issue, here are a few features I would LOVE to see:

  • Ability to share soundbites from podcasts along with transcriptions, similar to what Re/code has done.

The idea of sharing individual soundbites from podcasts is great for leveraging its content. It’s almost like sharing a Vine (small bits of audio), and it feeds back into the curation bit. There are so many quotes I’d love to share from a show onto Twitter, but I’m limited to sharing an episode and the time at which the quote occurs. If only I could share soundbites with transcriptions onto Twitter (similar to highlighting on Medium)!

Pair this with a Twitter app modal showing native content, like Soundcloud’s modal (Twitter integration is crucial — it really helped accelerate Medium’s growth at the beginning).

  • Uniform sharing. I’ve seen people share links to podcasts hosted on Soundcloud, Overcast, and iTunes, but there needs to be one uniform platform. How do you share the link to a video? YouTube. Game stream? Twitch. What about a podcast? iTunes? Not so fast. Sharing the link to a podcast on iTunes doesn’t cut it because of (obvious) cross-platform issues. The cross-platform solutions that do exist don’t have web players, social integration, or good recommendation systems.
  • Better discovery. Curation is a tough problem to crack. Most recommendations are done on a per-show basis, but it really should be on a per-episode basis. For example, I listen to Jalen and Jacoby and want to hear more sports episodes just like it (preferably from other shows). However, I’m still getting recommended entirely new shows, and not specific episodes within those shows. I shouldn’t have to resort to searches like this either. In iTunes’ case specifically, recommendations are currently hand-curated, and it’s often a mystery how shows are featured.

Additionally, the “Top Podcast” list within iTunes is also the subject of constant speculation among podcasters. An apparent change in the system this year resulted in small podcasts from Disney fans — and a show that hadn’t yet published its first full episode — being ranked alongside shows with hundreds of thousands of listeners (Podcasts Surge, but Producers Fear Apple Isn’t Listening)

The process could be improved greatly — Spotify’s Discover Weekly is a great model to follow, and is the best content recommendation system I’ve seen yet.

  • A newsfeed. Twitter is my central place for getting news and updates in areas I’m interested in. The ability to see what friends + influencers are listening to/commenting on would have a great impact on the way I consume podcasts.
  • A better looking player. There have been many players over the years, but I’d like to see one that’s really well designed. It needs to be simple, beautiful, and something I keep coming back to. iTunes is a mess, but it’s the best we have. The most promising alternative I’ve seen so far is Overcast, but I’d love to see more well-designed apps overall.
  • Decreased friction. In current apps, it takes a lot of time and effort to find a new episode and start listening. A similar conundrum happened when video streaming services first started popping up. “Netflix was really the first streaming service for movies/TV that made people want to stream in lieu of DVDs. The podcast corollary is that, I think, a lot of potential listeners don’t want to have to ‘figure it out’ (same as a lot of potential streaming customers didn’t want to buy something on iTunes and then have to figure out how to get it on their tv)” (via /r/podcasts).³

Podcasts are not traditional audio, and they shouldn’t be treated as such. In fact, podcasts have more in common with written material than music. They require much more attentiveness, and being longer-form in nature, need different ways to consume and share. This requires a shift in vision from podcast companies, one which is clearly defined by leaders in other media spaces. YouTube, for example, was started on the premise that all videos on the web should be in one central portal. Or better put:

YouTube owns video consumption on the web, and is slowly starting to own creation as well. I think this accurately represents the mission podcast startups should look towards.

I’m stoked for the future of podcasting.

¹ List of US Podcast Trademarks

² Podcasts Surge, but Producers Fear Apple Isn’t Listening

³ The Netflix of podcasting?

Written by Niraj Pant (email). If you’re interested in chatting more about podcasts, I’m @niraj on twitter!

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. Email Isaac if you’d like to contribute here.

Student Venture Capital Programs

By Niraj Pant

An increasing number of venture capital firms are going past their traditional avenues for finding companies and looking at a new source: students. Specifically, they’re hiring students to work part-time on sourcing companies that they find interesting.

Many tech investors are struck by FOMO, and are becoming more aware that young adults are among the first to adopt popular new apps and to create and identify new social media trends. No one wants to miss out on the next Facebook or Snapchat (both of which were started on college campuses!). Therefore, investors are putting more confidence in students as scouts for their VC programs. While venture capital scouts aren’t a new concept (Sequoia Capital is famous for hiring scouts to invest in companies on its behalf), many new opportunities are opening up for college students to get involved in venture capital.

Participants in these programs don’t usually get carry, but are often incentivized by compensation and/or networking opportunities with some of the smartest students in tech, as well as partners at their respective firms.

Rough Draft Ventures (RDV) and Dorm Room Fund (DRF) are some of the earliest venture capital firms focused on students investing in companies. Student partners are organized by chapters in Boston (RDV & DRF), Philadelphia, San Francisco, and New York (DRF), and have real input in funding decisions. These funds are usually backed by larger funds (First Round Capital for DRF and General Catalyst for RDV), and receive mentorship/resources from their parent VC firm, as well as deal flow that they find interesting. They often invest in companies that are founded on campus or have student/recent graduate founders.

The results from past investments have looked extremely strong, considering the limited time these firms have been around. Students seem to be picking the right companies to invest in, shown by the $100M+ in follow-on capital raised by portfolio companies from large venture firms such as Google Ventures, Union Square Ventures, and Spark Capital.

Smaller firms, such as Alsop-Louie Partners, have also been hiring student partners for the past few years. However, instead of having a separate student-focused fund, they hire campus associates to help them stay connected to new ideas on campus and bring in interesting companies. StrictlyVC had a great interview with Stewart Alsop about the program.

Larger VC firms, such as Sequoia and KPCB, have seen the value in bringing on students and recently created student ambassador programs. These programs run year to year: when the campus ambassador at a school graduates, a new ambassador replaces them.

Here’s a list of venture capital firms with student programs I’ve come across (shoot me a message on twitter if there are any missing!):

  • Sequoia: Campus Ambassadors Program
  • KPCB: Campus Catalysts
  • Greylock X
  • Binary Capital: External Council*
  • The Mochary Group
  • Pejman Mar Ventures
  • Chicago Ventures
  • Lightbank
  • Alsop-Louie Partners
  • Accel Partners
  • Lowercase Capital

Student-focused funds

The following funds are primarily student-focused, rather than a program at a larger firm.

  • Free Ventures: Although not a traditional VC firm, Free Ventures is an accelerator for student startups, and very similar to an early Y Combinator.
  • The House Fund: Run by a UC Berkeley alum, The House Fund is a $6m seed-stage fund focused on investing into companies with student founders. They will invest angel-sized checks, and syndicate the rest of the deal.
  • Contrary Capital: A university-focused fund, with student partners at every major school.
  • A-Level Capital: Student-run fund at Johns Hopkins University. Students work in a variety of roles, from associates all the way up to partners, and usually invest in companies created on campus.
  • Dorm Room Fund: Described above
  • Rough Draft Ventures: Described above

Venture capital role placement

These programs not only help students learn more about venture capital, but also often place them into exciting startups, large tech companies, and venture roles.

Dorm Room Fund recently published a list of their student partners from the last graduating class and where they were headed full-time. While most partners went to work at startups or large companies in a variety of roles, about 20% went into venture roles, a relatively high number considering most analyst/associate roles at VC firms are unadvertised.

Written by Niraj Pant (email). Thanks to Jay Bensal, Ali Afridi, and Wiley Jones for assisting in developing early iterations of this piece. Ali Afridi wrote a complementary post on how to get involved with VC while still in college. *Disclosure: I work on Binary Capital’s External Council.


Up to Speed on Deep Learning: August Update, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 2nd. In case you missed it, here’s the July update (part 2), here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

An Intuitive Explanation of Convolutional Neural Networks by Ujjwal Karn. A thorough overview of CNNs: what they do, why they’re important, how they work, some history, and their underlying concepts. Inspired by Denny Britz’s Understanding Convolutional Neural Networks for NLP — Denny’s blog, WildML, is also an excellent resource with many deep learning explanations and tutorials.
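The core operation behind the CNNs Karn explains can be sketched in a few lines of plain Python (an illustrative toy, not code from the article): a small kernel slides over the image, and each output value is the sum of elementwise products, so a kernel like [[-1, 1], [-1, 1]] responds to vertical edges.

```python
def conv2d_valid(image, kernel):
    # "Valid" 2D cross-correlation: slide the kernel over the image
    # and sum the elementwise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# An image that is dark on the left half and bright on the right;
# the vertical-edge kernel fires only at the boundary column.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
fmap = conv2d_valid(image, kernel)
```

In a real CNN the kernel values are learned rather than hand-picked, and many kernels run in parallel to produce a stack of feature maps.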

Image Completion with Deep Learning in TensorFlow by Brandon Amos. A deep learning tutorial that explains how to do image completion and inpainting via deep learning. Image completion and inpainting are closely related technologies used to fill in missing or corrupted parts of images — this is important to designers and photographers, who often need to fill unwanted or missing parts of images. The code is also available on GitHub. Based on Raymond Yeh and Chen Chen et al.’s paper Semantic Image Inpainting with Perceptual and Contextual Losses.
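For intuition about what “filling in missing parts of an image” means in the simplest classical setting, here is a toy diffusion-style baseline (my own illustrative sketch, not the deep learning method from the post, which uses a generative model with perceptual and contextual losses): each missing pixel is repeatedly replaced with the average of its neighbors.

```python
def inpaint_step(img, mask):
    # One diffusion step: replace each masked (missing) pixel with
    # the mean of its in-bounds 4-neighbors from the previous iterate.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                nbrs = [img[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out

# A 3x3 grayscale patch with a missing center pixel; the known
# pixels all have intensity 1.0, so diffusion fills the hole with 1.0.
img = [[1.0, 1.0, 1.0],
       [1.0, 0.0, 1.0],
       [1.0, 1.0, 1.0]]
mask = [[False, False, False],
        [False, True, False],
        [False, False, False]]
filled = inpaint_step(img, mask)
```

This baseline only smears surrounding colors into the hole; the appeal of the deep learning approach is that it can hallucinate plausible *structure* (edges, textures, objects) rather than just smooth gradients.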

Deep Learning Summer School lecture notes. Held the first week of August in Montreal and organized by Aaron Courville and Yoshua Bengio, professors at the University of Montreal, this conference provides a broad overview of current research in deep neural networks. Speakers include preeminent deep learning researchers from Google, Facebook, Twitter, NVIDIA, and many others. All the lecture slides are available for review.

Vincent AI Artist (GitHub repository) by Saikat Basak. Vincent is an attempt to implement “a neural algorithm of artistic style”. A convolutional neural network (CNN) separates ‘style’ and ‘content’ from artistic images, and combines that artistic style with another image to create a unique expression. Leverage this repo to build your own version of Prisma.

Robotics Science and Systems (RSS 2016) Workshop notes and videos. This workshop, held in Ann Arbor, MI on June 18, 2016, convened a broad set of experts to discuss the topic of deep learning in robotics, particularly around computer vision. Speakers such as Pieter Abbeel of UC Berkeley and Ashutosh Saxena of Brain of Things spoke about their research in the field. Their recorded talks and slides are available for review.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Deep Learning in Healthcare: Challenges and Opportunities

“Medicine is an art and a science, but the science dominates the art.”

Dr. Dave Channin received a Bachelor’s degree in computer science and molecular biology from Brandeis University. After graduation, he worked as a programmer for a couple of years, and then left the United States to study medicine at the Faculté de Medicine Lariboisière-St. Louis in Paris. Returning to the USA, Dr. Channin completed medical school and residency in radiology at the Penn State College of Medicine. At the completion of residency, Dr. Channin was recruited to Northwestern University as the principal architect of the Northwestern Memorial Hospital PACS. In 2010, Dr. Channin became Chair of the Guthrie Clinic medical imaging service line. There, he had shared administrative responsibility for imaging at 4 hospital and 7 outpatient locations, performing 240,000 procedures per year. In 2015, Dr. Channin left Guthrie to return to his roots in informatics and technology, founding Insightful Medical Informatics, Inc.

What makes deep learning in medical and imaging informatics different from applications that are more consumer-facing?

This is healthcare and healthcare, itself, is fundamentally different from every other industry. People assign the highest priority to their health (or lack thereof), expect the highest levels of care and service regardless of cost and are more emotional and ideological about this industry than any other. Because it consumes 17.5% of US GDP and still does not meet societal expectations, it is the most regulated aspect of American society.

You are both a physician and an entrepreneur. What are the difficulties in starting a medically-relevant company as a physician, and what advice do you have for those who are looking to do so?

Actually, I was a computer programmer who became a radiologist and through an interest in R&D became an entrepreneur. Radiology, in particular, is a great specialty in which to find a technology driven path and apply the tools of the programmer.

The challenge to starting a medically relevant company is identifying the niche upon which you are going to focus. Work backward from the patient and their pain and suffering. Do not underestimate the size, complexity and regulation of the American healthcare system and the scientific rigor to which you will be held. Consider the American healthcare system as an ugly shrub that only 200 years of carefully metered cuts will transform into the bonsai we all so desire. It is unrealistic to think you will uproot the entire shrub to plant something new. Even your branch may take decades to change.

Collaborate with people who are already in healthcare. You will be surprised by their insights and their desire to improve the system.

What are the most important factors teams must consider when building healthcare-minded products more generally?

In today’s environment, everything done in healthcare must address the pillars of the Triple Aim: improving the health of populations, lowering the cost of care, and improving the patient experience. Some add a fourth aim of improving the provider experience so as to recruit and retain the best people. If your product or service does not address one or more of these, don’t bother.

Medicine is an art and a science but the science dominates the art. Medicine, directly or indirectly, is evidence-based and sooner or later you are going to have to produce hard scientific data to back up your marketing claims. The road from Hippocrates to HIPAA is littered with snake oil and its promoters.

Assume it is a zero sum game. You are going to make money in this business by taking it away from someone else. They, their lobbyists, legal staff and everyone else they can muster are going to try and stop you and maintain their playing field advantages.

You are dealing with a large number of highly educated, highly trained, highly enculturated individuals. Respect the validated, accumulated knowledge and wisdom and the culture of altruism, empathy and compassion; challenge unvalidated beliefs, disrupt bad workflow and bureaucracy and help these people do what they do best, better.

What catalyzed the interest in deep learning applied to healthcare?

It is important to remember that ‘artificial intelligence’ (in the largest, traditional sense) and ‘algorithmic learning’ have been applied to medical data, including images, since the earliest days of computing. Computer-assisted diagnosis systems have been around since the 1970s. Automated processing and analysis of one-dimensional time signals (e.g., electrocardiograms) has been around for decades. Computer-aided detection and diagnosis of medical images (e.g., Papanicolaou smear cytology, detection of masses and microcalcifications in mammograms) have also been around for quite some time. Some of the latter already use deep learning techniques such as convolutional neural networks.

The current interest in deep learning in healthcare stems from two things. First, the flowering of machine learning techniques, in general, and especially unsupervised learning techniques, in the commercial space with the likes of Google, Facebook and IBM Watson. The second factor is the explosion of available healthcare data (lagging only slightly the explosion of internet data) that was triggered by the HITECH portion of the American Recovery and Reinvestment Act (ARRA). The latter effectively transformed medical records from carbon paper to silicon chips and made that data, structured and unstructured, available.

What hurdles do you see for these first-movers going forward?

Data in, data out and regulation.

Machine learning methods used in a vacuum have next to no utility — you need data to train your model. How significant of a data barrier is there when it comes to medical applications of machine learning concepts, given the significant privacy considerations?

This is the “data in” problem. The problem is not privacy. The use of medical subjects and data in research, including research to develop new technologies, is well established both within the context of Federal Policy for the Protection of Human Subjects (the so-called, “common rule”) and HIPAA. Even the transfer of technology and intellectual property developed with federal research dollars to the private sector has been facilitated for decades by the Bayh-Dole Act of 1980. Companies in this space “only” need to respect policy, paperwork and process.

The real “data in” problem, affecting deep learning applications, especially, but not exclusively, in medical imaging, is truth. Truth means knowing what is in the image. It is very easy to get a large number of images of hats and have people annotate the images that contain red hats or fedoras. Crowdsourcing to millions (billions?) of people, the annotation or validation of data (e.g., CAPTCHA) can also work to create/validate large datasets. Other small and large annotated datasets, for specific recognition tasks, have been created by government, academia and industry at no small cost in time and money.

Medical images are much more complex. There are dozens of kinds of medical imaging devices each producing images according to their respective physical principles. These machines are producing images of hundreds of different anatomic structures and normal variants and pathophysiologic processes resulting in thousands of observable imaging features.

In the case of supervised learning, and creating annotated datasets, it is important to remember that in the United States, there are only approx. 35,000 people trained and licensed to annotate all of those observable imaging features (though there are perhaps triple that number that could contribute annotations in their specialty areas).

Large numbers of patient imaging studies performed with digital technologies over the past 30 years have been annotated by this rolling population of 35,000 experts. The vast majority of those annotations, however, are in the form of unstructured free text and are absent links to the coordinates of the pixels containing the image features that engendered the annotation. The good news is that there is a new standard for Annotation and Image Markup (AIM) that was developed under a National Cancer Institute program and anyone developing annotated medical imaging data sets ignores the importance of standardized annotation at their peril.

But you can’t just take single annotations from one of the 35,000. Even though they are experts and very good at what they do, they are human and make mistakes. So you have to have consensus annotations by multiple expert observers.
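To make the consensus idea concrete, a majority vote over multiple expert reads of the same image can be sketched as below. This is a toy illustration (ours, not a protocol from the interview); real studies use more elaborate adjudication, and the labels here are hypothetical.

```python
from collections import Counter

def consensus(annotations):
    # Majority-vote consensus over multiple expert reads of one image.
    # Returns the modal label and whether a strict majority agreed.
    counts = Counter(annotations)
    label, n = counts.most_common(1)[0]
    return label, n > len(annotations) / 2

# Three hypothetical reads of the same study:
print(consensus(["nodule", "nodule", "normal"]))   # → ('nodule', True)
```

When no strict majority exists, the second element is False, signaling that the case should go to an adjudicating reader rather than into the training set.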

What about data for unsupervised learning? Can’t we find millions of, for example, chest X-rays and see what patterns are found?

Well, yes, you could but you might suffer from garbage in — garbage out. There are thousands of imaging procedures. The Current Procedural Terminology (CPT) and other code sets used to classify and bill for these procedures lack the granularity to characterize the exact nature of the imaging performed. It turns out, there are 11 or so ways to produce a radiograph of the chest. The billing code, 71020, can be used for any two of these 11 views. In computed tomography (CT) there are dozens of parameters that can be varied to produce images, including whether or not the patient was injected with contrast media. In magnetic resonance imaging, even more parameters. Which of those parameters are going to affect the output of the unsupervised system? There are no widespread, detailed standards for the acquisition of medical imaging studies. The good news is that there is a developing standard for the nomenclature of imaging studies (the Radiological Society of North America’s RadLex™ playbook now being harmonized with LOINC). Furthermore, medical imaging has one of the best standards, DICOM, that specifies, in infinite detail, the metadata of medical images, so you can use this information to assist an intelligent triage of the images. As the saying goes, “DICOM is always documented in brown, because it is clear as mud, but delivers like UPS.”

Standards for non-image structured data are less, ummm, standardized. Even then, much non-image medical data is still unstructured (e.g., notes or structured laboratory data transformed into unstructured document formats). Vocabularies, lexicons and ontologies are mature but schemata and usage still have large local variance.

Lastly, there is no central clearinghouse or national interoperability for medical record data, though some have been in development for a decade or more. Each institution, cluster of institutions, or other association of data stewards acts on its own within the limits of the law. So, obtaining high-quality annotated data sets for both supervised and unsupervised learning will remain a costly challenge for years to come.

What is the “data out” problem?

Let’s say that you’ve overcome the data-in hurdles, you’ve acquired a great, annotated data set and the results on the test set are great. Now you have to validate it; compare the performance of your system to humans for this task and, I would warn, humans are very good at these tasks. This is done by performing an observer performance study and calculating a receiver operating characteristic curve that relates the observer’s sensitivity and specificity. And since you are hoping the difference between your system and the human is small, the study must be large to have the statistical power to distinguish the two. These experiments take time and are costly to perform. Perhaps the system and the human used together are better than either alone? Does the system speed up the interpretation process or slow it down? I don’t want to throw any shade, but humans can determine gross normality of a chest radiograph in 200 milliseconds (Radiology. 1975 Sep;116(3):527–32).
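For concreteness, here is a minimal sketch of how an ROC curve and its area are computed from reader or model scores. The scores and labels below are synthetic (ours, not data from any study); in a real observer performance study these would come from expert ratings of actual cases.

```python
import numpy as np

def roc_curve(scores, labels):
    # Sweep a decision threshold over the scores (higher = more likely
    # diseased) and accumulate true/false positive rates.
    order = np.argsort(-scores)              # descending by score
    labels = labels[order]
    tps = np.cumsum(labels)                  # true positives at each cutoff
    fps = np.cumsum(1 - labels)              # false positives at each cutoff
    tpr = tps / labels.sum()                 # sensitivity
    fpr = fps / (1 - labels).sum()           # 1 - specificity
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(fpr, tpr):
    # Area under the ROC curve via the trapezoidal rule.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0))

# Synthetic example: 1 = diseased, 0 = normal.
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.35, 0.2])
fpr, tpr = roc_curve(scores, labels)
print(auc(fpr, tpr))   # → 0.8125
```

An AUC of 1.0 means perfect discrimination and 0.5 means chance; comparing two nearby AUCs (system vs. human) is exactly where the large sample sizes mentioned above become necessary.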

OK. You’ve got an AI and it’s good enough for clinical use. How are you going to deliver your result to the clinician, radiologist or other anticipated user of the system and incorporate it into the electronic medical record? Their eyes are not fixed to generic development platforms like iOS or Android. Rather, they are attached to large, expensive, proprietary, often regulated devices and systems. There are standards for integration and interoperability but they must be addressed.

Unlike many consumer technology applications of machine learning, healthcare has a dedicated regulatory body in the FDA. As a result, the FDA will play a significant role in determining the future of machine learning in healthcare. What challenges do developers face in working with the FDA?

The first challenge is not to ignore the 800-pound gorilla in the room. Start early. Find out if your device is a device. I would argue that if your deep learning system is going to do anything meaningful it is going to be a device but there is plenty of guidance available to help the developer make that determination. Once you determine that your device is a device, you can determine what class of device it is and whether any exemptions apply. The class of the device is “based on the level of control necessary to assure the safety and effectiveness of the device.” These determinations will define the path you will take to FDA approval of your device.

Again, policy, paperwork, process. One fundamental philosophy of the FDA is “Quality System (QS) Regulation/Medical Device Good Manufacturing Practices.” While we all love ‘garage code’ that gets us 7 million users in 7 days, the FDA will insist that the code was developed with current good manufacturing practice (CGMP). There are many software development methodologies that will meet CGMP, and you might as well start using one from day one. Similarly, the FDA will look for GMP and appropriate regulations to have been applied to any data you use and any experiments you perform to validate that data.

Identify who is going to shepherd your company and product through the FDA process. Do you have a lawyer, accountant and CFO to deal with the IRS? You will probably need similar for the FDA. Prepare as much as you can in advance and work in parallel as much as possible.

What challenges does the FDA face in its consideration of these technologies? How can regulatory bodies such as the FDA keep up with the speed of development? How should investors and entrepreneurs think about the FDA’s role in the process of development?

How smart is the gorilla and how good is he at his job? Pretty smart and fairly good. The FDA works by assigning devices for evaluation to one of 16 medical specialty “panels”. These panels rely on published and unpublished scientific studies. One power of the FDA is its ability to convoke panels of industry and academic experts to analyze the evidence. The radiology panel has, for example, already approved “Analyzer, Medical Image” (govspeak) systems based on deep learning techniques such as convolutional neural networks.

The system is, admittedly, slow. This is not, however, solely due to the nature of a large government bureaucracy. Following and documenting the CGMP process, even for software, is tedious and time consuming. Performing and documenting the scientific validation is meticulous and time consuming. Statistical analyses, publishing and analyzing the published and unpublished results all take time. Remember, we are talking about a medical device that could diagnose or steer the diagnosis in many directions. It seems like a demonstration of “safety and effectiveness” is only just that for which your mother would ask before she allowed it to be used on her.

What are the benefits that deep learning can provide in healthcare? What is its value proposition, and in what areas of the healthcare system is it most helpful? How does the development of AI fit within the conversation about the rising and unsustainable costs in healthcare?

The value of deep learning systems in healthcare comes only in improving accuracy and/or increasing efficiency. Healthcare, today, is a human-machine collaboration that may ultimately become a symbiosis or even cyborg relationship. We are still at the stage, however, where both humans and machines perform tasks at which they are suboptimal. As deep learning systems develop and evolve, they will more and more assist humans with those tasks at which humans are not good. So, for example, humans are very good at processing information from their senses, including vision. They are very good at perceiving human emotions. But humans are not so good at remembering things, searching for and organizing data, and not too good at correlating and reasoning about that data. So I foresee DL systems that will make physicians and other providers faster and smarter in their diagnoses and reduce uncertainty in their decisions, thereby avoiding costs and hazards and saving time.

A similar debate that is facing industrial automation with robotics could be made about deep learning in health informatics when it comes to job replacement. Do you see backlash from the medical community towards utilizing concepts such as deep learning with regard to its part in changing medical practice? Are there any similar historical analogies you could speak on where technology fundamentally changed the way medicine was practiced, but had significant risks to “traditional” medical practice?

Medicine, in general, and radiology, perhaps more so than any other specialty, has been very good at developing and adapting to new technology. The golden road to the annual meeting of the Radiological Society of North America (the largest medical meeting in the world) is paved with technological innovation. Many fundamental technology “sea changes” have occurred in radiology, in a relatively short time, many within our lifetimes. For example, the transition within a decade or two from film based imaging to digital imaging. Dark room staff (large numbers of whom were blind!)? Eliminated like buggy whip manufacturers. Film file storage (c.f., The Cleveland Clinic X-Ray Fire of 1929) “librarians”? Reduced or eliminated. Job loss? Some, but not as much as you would think. The transformation to digital and the (ongoing) explosion of new imaging modalities opened new opportunities as did work in the information systems and the changing healthcare environment itself. Industrial disruption? Sure (c.f., Kodak where the small, growing digital siamese twin slew the body of the mighty film producer). Job loss? Some, especially locally. But less than expected given the number of healthcare information technology companies that arose in parallel.

What about radiologists? Remarkably adaptable to technology perceived as positive to the patient or the institution. At one institution, in 1999, 25 radiologists went from reading images on film to reading images on computer workstations overnight without a significant degradation in accuracy or efficiency. Eventually, they were faster on the new workstations and with new, learned behaviors could never return to film. Fewer radiologists? Not really as new uses for imaging and new imaging technologies were developed. Look how well radiologists have adapted first to mammography (special techniques and technology) then digital mammography, then digital mammography with computer assisted detection/diagnosis and now digital breast tomosynthesis. Accuracy and efficiency have incrementally increased at each step to the benefit of women everywhere. Fewer mammographers and radiologists? Not really.

We, as a society, are going to have to face the accelerating pace of automation and its impact on the workforce and society. There is, however, nothing to suggest to me that these effects will occur faster or in different form in healthcare and in particular due to deep learning. Do I still recommend Radiology as a career to high school and college students? Absolutely.

Deep learning in healthcare has been thriving in recent years. What do you see for the field going forward? What are the important considerations deep learning researchers need to consider for deep learning to be most effective (both from a cost and computational perspective) and ethical going forward?

I see unlimited opportunity to improve the system. Despite current best efforts, there are innumerable inaccuracies and inefficiencies in the system that are ripe targets for DL and other technologies. The most important consideration is to choose your target wisely. Don’t lose sight of the link between the accuracy and efficiency you improve and the pain and suffering you reduce.

By Dave Channin. Curated by Ascanio Guarini and Isaac Madan.

If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. Email Isaac Madan to contribute here.

Up to Speed on Deep Learning: July Update, Part 2

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on July 18th. In case you missed it, here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

OpenAI announces and opens applications for their special projects. They list several problem areas likely to be important both for advancing AI and for its long-run impact on society: detecting breakthrough, malicious AI; an AI-building AI; cybersecurity; and multi-agent simulations. You can apply to lead one of these efforts.

Neural Networks for Machine Learning course by Geoffrey Hinton will be offered again in September 2016 on Coursera. Learn about artificial neural networks and how they’re being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc. We’ll emphasize both the basic algorithms and the practical tricks needed to get them to work well. You can enroll now.

Modern Face Recognition with Deep Learning tutorial by Adam Geitgey. Part 4 of Adam’s series Machine Learning is Fun. The series introduces machine learning in four detailed segments: spanning an introduction to machine learning to an in-depth convolutional neural network implementation for face recognition. Here are the three prior parts: part 1, part 2, and part 3.

A Beginner’s Guide To Understanding Convolutional Neural Networks by Adit Deshpande. Adit, a CS undergrad at UCLA, provides an explanation of convolutional neural networks by walking through an image processing problem, with mathematical descriptions. He also explains the training process and touches upon practical use of CNNs.

CVPR Days 1–4 by Sri Chandrasekar. Review and summaries of the 2016 Conference on Computer Vision and Pattern Recognition (CVPR) held in Las Vegas in late June, highlighting some of the best talks. All parts here: day 1, day 2, day 3, day 4, summary.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Up to Speed on Deep Learning: July Update

By Isaac Madan.

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on June 20th. In case you missed it, here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.


Google’s DeepMind partners with the National Health Service’s Moorfields Eye Hospital to apply machine learning to spot common eye diseases earlier. The five-year research project will draw on one million anonymous eye scans held in Moorfields’ patient database, with the aim of speeding up the complex and time-consuming process of analyzing eye scans (news article). The goal is that this leads to a better understanding of eye disease, earlier detection, and treatment. We previously wrote about the challenges inherent to deep learning in medical imaging here.

Andrew Ng announces the pre-launch of his book Machine Learning Yearning, to share practical advice & experience around building AI systems, to help practitioners get up to speed faster. Over 35,000 people signed up to receive a free draft copy as of June 21.

Google releases Wide & Deep Learning as part of the TensorFlow API. The project combines the power of both memorization and generalization, to better reflect the properties that make the human brain such an effective learning machine. They provide an in-depth example that illustrates the project’s purpose and potential via a fictional food delivery app.

The Harvard NLP and Visual Computing groups announce LSTMVis, a visual analysis tool for recurrent neural networks (RNNs). RNNs learn a black-box hidden state representation, and changes in these representations are challenging to study. The tool makes it easier to visually observe and isolate patterns in state changes. The Verge provides additional context around the black-box aspect of AI systems here.

Explanation, Review, and Cool Stuff

Experience and Advice for Using GPUs in Deep Learning. Tim Dettmers provides a comprehensive analysis of various GPUs and advice on how to best use them for deep learning. For example, he answers questions like “Should I get multiple GPUs?” and “What kind of accelerator should I get?” — along with discussion of convolutional neural networks, speed, and memory considerations.

ICML 2016 not by the day by Stephanie Hyland. Review of the 2016 International Conference on Machine Learning (ICML), highlighting the important trends and papers that emerged.

It’s ML, not magic. Stephen Merity addresses over-hype and mysticism around artificial intelligence. He articulates both the reason why we see this hype, as well as the types of questions we should ask to gut-check and better understand the potential of AI.

Chasing Cats. Robert Bond of NVIDIA develops an end-to-end cat surveillance system for his front yard, which is a nice articulation of the full pipeline from camera to processor to neural network (to sprinklers).

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.

Up to Speed on Deep Learning in Medical Imaging

By Isaac Madan and David Dindi

The notion of applying deep learning techniques to medical imaging data sets is a fascinating and fast-moving area. In fact, in a recent issue of IEEE’s Transactions on Medical Imaging journal, there’s a fantastic guest editorial on deep learning in medical imaging that provides an overview of current approaches, where the field is headed, and what sort of opportunities exist. As such, we pulled out some of our favorite nuggets from this article and summarize/expand upon them in Q&A form, so they’re more easily digestible.

Why is deep learning valuable in the field of medical imaging?

Most interpretations of medical images are performed by physicians; however, image interpretation by humans is limited due to its subjectivity, large variations across interpreters, and fatigue.

What are some challenges in applying Convolutional Neural Networks to medical imaging?

  • CNNs require a large amount of labeled data. Large medical data sets are not readily available because many data sets are proprietary and/or are difficult to obtain due to privacy concerns.
  • Most often, the data sets are not comprehensively annotated, owing to the costliness & scarcity of expert annotation in the medical domain.
  • Moreover, rare diseases, by virtue of being rare, are underrepresented in the data sets. If not accounted for properly, the class imbalance that ensues (i.e. the disease label is underrepresented while the healthy label is overrepresented) would bias a model to predict the healthy label.
  • Furthermore, in situations where the features are highly correlated and the normal-class is overrepresented, many of the training samples are redundant and uninformative.
  • In many applications, making an informed diagnosis requires more than just the medical image (e.g. lab values, demographics, prior medical history). Gaining access to and linking these data with the images presents yet another obstacle.
  • Non-standardized evaluation metrics, the use of disparate data sets, and differences in the way that learning tasks are framed each make it difficult to track and compare advancements in the field.
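One common mitigation for the class-imbalance point above (our illustration, not something prescribed by the editorial) is to weight each training sample's loss inversely to its class frequency, so the rare disease label is not drowned out by the majority healthy label. A minimal sketch with synthetic labels:

```python
import numpy as np

# Synthetic label distribution: 95% "healthy" (0), 5% "disease" (1),
# mimicking the class imbalance described above.
labels = np.array([0] * 95 + [1] * 5)

# Inverse-frequency class weights: rare classes get larger weights so the
# loss does not reward always predicting the majority ("healthy") label.
classes, counts = np.unique(labels, return_counts=True)
weights = len(labels) / (len(classes) * counts)
print(dict(zip(classes.tolist(), weights.round(2).tolist())))
# → {0: 0.53, 1: 10.0}
# Each sample's loss term would then be scaled by weights[label].
```

Oversampling the rare class or undersampling the common one are alternative remedies; all of them only rebalance the objective and cannot conjure up the missing rare-disease examples.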

How are the challenges being addressed?

One way is via transfer learning, which has been used to overcome the lack of large labeled data sets in medical imaging. In transfer learning, a separate CNN is first trained on a different task using a different data set. The features learned from this separate task are then reused to train a CNN for the medical imaging task of interest. By recycling features in this way, fewer examples are needed to achieve good performance. One main caveat to transfer learning is that the recycled features need to be generally useful across the two separate tasks.
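As a toy illustration of this idea (ours, not the editorial's), the sketch below stands in a fixed random projection for the pretrained, frozen feature extractor and trains only a small logistic-regression head on a synthetic target task. Real transfer learning would instead reuse convolutional features from a network trained on a large source data set such as natural images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's frozen feature extractor. In real
# transfer learning these weights come from training on a large source
# task; here they are just a fixed random projection, never updated.
W_frozen = rng.normal(size=(8, 32))

def features(x):
    # Frozen ReLU features: reused across tasks, kept fixed.
    return np.maximum(x @ W_frozen, 0.0)

# Tiny synthetic "target task" standing in for a small medical data set.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the small task-specific head (logistic regression) is trained.
head = np.zeros(32)
bias = 0.0
lr = 0.05
f = features(X)                # computed once: the extractor is frozen
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(f @ head + bias)))   # sigmoid
    head -= lr * (f.T @ (p - y)) / len(y)          # cross-entropy gradient
    bias -= lr * np.mean(p - y)

preds = 1.0 / (1.0 + np.exp(-(f @ head + bias))) > 0.5
print(f"training accuracy: {np.mean(preds == y):.2f}")
```

The point of the sketch is the division of labor: the expensive representation is learned elsewhere and frozen, and only the lightweight head needs the scarce labeled medical examples.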

Can I try this? Where can I find publicly available data?

Publicly available data sets:

  • Visual Concept Extraction Challenge in Radiology (VISCERAL). Manually annotated radiological data of several anatomical structures (e.g. kidney, lung, bladder, etc.) from several different imaging modalities (e.g. CT and MR). They also provide a cloud computing instance that anyone can use to develop and evaluate models against benchmarks.
  • The Cancer Imaging Archive. Cancer imaging data sets across various cancer types (e.g. carcinoma, lung cancer, myeloma) and various imaging modalities.
  • Grand Challenges in Biomedical Image Analysis. A collection of biomedical imaging challenges intended to facilitate better comparisons between new and existing solutions by standardizing evaluation criteria. You can create your own challenge as well. As of this writing, there are 92 challenges that provide downloadable data sets.
  • The Lung Image Database Consortium image collection (LIDC-IDRI). A collection of diagnostic and lung cancer screening thoracic CT scans with annotated lesions.
  • Kaggle diabetic retinopathy. High-resolution retinal images that are annotated on a 0–4 severity scale by clinicians, for the detection of diabetic retinopathy. This data set is part of a completed Kaggle competition, which is generally a great source for publicly available data sets.
  • International Symposium on Biomedical Imaging 2015. Eight Grand Challenges presented at ISBI.
  • Multiple sclerosis lesion segmentation challenge 2008. A collection of brain MRI scans to detect MS lesions.
  • Multimodal Brain Tumor Segmentation Challenge (BRATS). Large data set of brain tumor magnetic resonance scans. They’ve been extending this data set and challenge each year since 2012.
  • Coding4Cancer. A new initiative by the Foundation for the National Institutes of Health and Sage Bionetworks to host a series of challenges to improve cancer screening. The first is for digital mammography readings. The second is for lung cancer detection. The challenges are not yet launched.

Why are large publicly available medical image data sets challenging to construct?

As we know, deep learning benefits from massive amounts of training data. However, such publicly available medical data sets are hard to construct because, as the article states:

  • It is difficult to obtain funding for the construction of data sets.
  • Scarce and expensive medical expertise is needed for high quality annotation of medical imaging data.
  • Privacy issues make it more difficult to share medical data than natural images.
  • The breadth of applications in medical imaging requires that many different data sets be collected.

Data science challenges (like the aforementioned Grand Challenges) — which “provide a precise definition of a task to be solved and define one or more evaluation metrics that provide a fair and standardized comparison between proposed algorithms” — help to crowdsource massive annotated data sets, while also moving the field forward through standardization.

However, annotations of meaningful quality will not always be possible to obtain, especially in specialized, rare, or expert use cases. This points to one of many interesting future directions for the field: to leverage very large data sets for which hand annotations are unavailable or intractable to obtain, the field will likely need to move toward semi-supervised and unsupervised learning.

Where do we go from here? Open questions and future opportunities:

  • How important and meaningful will a transition from 2D to 3D analysis be in terms of performance gains?
  • “The majority of works are in fact using supervised learning.” How meaningful will advancements in unsupervised and semi-supervised approaches be in terms of performance gains?
  • How much data will be required to solve certain types of problems? What can the research community do to make bigger, higher-quality data sets, evaluation criteria, and approaches accessible to others in the field?

By Isaac Madan and David Dindi. Isaac is an investor at Venrock (email). David is a co-founder of a stealth AI company (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.