Deep Learning in Healthcare: Challenges and Opportunities

“Medicine is an art and a science, but the science dominates the art.”

Dr. Dave Channin received a Bachelor’s degree in computer science and molecular biology from Brandeis University. After graduation, he worked as a programmer for a couple of years, then left the United States to study medicine at the Faculté de Médecine Lariboisière-St. Louis in Paris. Returning to the USA, Dr. Channin completed medical school and a residency in radiology at the Penn State College of Medicine. At the completion of residency, he was recruited to Northwestern University as the principal architect of the Northwestern Memorial Hospital PACS. In 2010, Dr. Channin became Chair of the Guthrie Clinic medical imaging service line, where he shared administrative responsibility for imaging at four hospital and seven outpatient locations performing 240,000 procedures per year. In 2015, he left Guthrie to return to his roots in informatics and technology, founding Insightful Medical Informatics, Inc.

What makes deep learning in medical and imaging informatics different from applications that are more consumer-facing?

This is healthcare, and healthcare itself is fundamentally different from every other industry. People assign the highest priority to their health (or lack thereof), expect the highest levels of care and service regardless of cost, and are more emotional and ideological about this industry than any other. Because it consumes 17.5% of US GDP and still does not meet societal expectations, it is the most regulated aspect of American society.

You are both a physician and an entrepreneur. What are the difficulties in starting a medically-relevant company as a physician, and what advice do you have for those who are looking to do so?

Actually, I was a computer programmer who became a radiologist and, through an interest in R&D, became an entrepreneur. Radiology, in particular, is a great specialty in which to find a technology-driven path and apply the tools of the programmer.

The challenge in starting a medically relevant company is identifying the niche upon which you are going to focus. Work backward from the patient and their pain and suffering. Do not underestimate the size, complexity and regulation of the American healthcare system, or the scientific rigor to which you will be held. Consider the American healthcare system an ugly shrub that only 200 years of carefully metered cuts will transform into the bonsai we all so desire. It is unrealistic to think you will uproot the entire shrub to plant something new. Even your branch may take decades to change.

Collaborate with people who are already in healthcare. You will be surprised by their insights and their desire to improve the system.

What are the most important factors teams must consider when building healthcare-minded products more generally?

In today’s environment, everything done in healthcare must address the pillars of the Triple Aim: improving the health of populations, lowering the cost of care, and improving the patient experience. Some add a fourth aim of improving the provider experience, so as to recruit and retain the best people. If your product or service does not address one or more of these, don’t bother.

Medicine is an art and a science but the science dominates the art. Medicine, directly or indirectly, is evidence-based and sooner or later you are going to have to produce hard scientific data to back up your marketing claims. The road from Hippocrates to HIPAA is littered with snake oil and its promoters.

Assume it is a zero sum game. You are going to make money in this business by taking it away from someone else. They, their lobbyists, legal staff and everyone else they can muster are going to try and stop you and maintain their playing field advantages.

You are dealing with a large number of highly educated, highly trained, highly enculturated individuals. Respect the validated, accumulated knowledge and wisdom and the culture of altruism, empathy and compassion; challenge unvalidated beliefs, disrupt bad workflow and bureaucracy and help these people do what they do best, better.

What catalyzed the interest in deep learning applied to healthcare?

It is important to remember that ‘artificial intelligence’ (in the largest, traditional sense) and ‘algorithmic learning’ have been applied to medical data, including images, since the earliest days of computing. Computer-assisted diagnosis systems have been around since the 1970s. Automated processing and analysis of one-dimensional time signals (e.g., electrocardiograms) has been around for decades. Computer-aided detection and diagnosis of medical images (e.g., Papanicolaou smear cytology, detection of masses and microcalcifications in mammograms) have also been around for quite some time. Some of the latter already use deep learning techniques such as convolutional neural networks.

The current interest in deep learning in healthcare stems from two things. First, the flowering of machine learning techniques in general, and especially unsupervised learning techniques, in the commercial space with the likes of Google, Facebook and IBM Watson. The second factor is the explosion of available healthcare data (lagging only slightly behind the explosion of internet data) that was triggered by the HITECH portion of the American Recovery and Reinvestment Act (ARRA). The latter effectively transformed medical records from carbon paper to silicon chips and made that data, structured and unstructured, available.

What hurdles do you see for these first-movers going forward?

Data in, data out and regulation.

Machine learning methods used in a vacuum have next to no utility — you need data to train your model. How significant of a data barrier is there when it comes to medical applications of machine learning concepts, given the significant privacy considerations?

This is the “data in” problem. The problem is not privacy. The use of medical subjects and data in research, including research to develop new technologies, is well established, both within the context of the Federal Policy for the Protection of Human Subjects (the so-called “Common Rule”) and HIPAA. Even the transfer of technology and intellectual property developed with federal research dollars to the private sector has been facilitated for decades by the Bayh-Dole Act of 1980. Companies in this space “only” need to respect policy, paperwork and process.

The real “data in” problem, affecting deep learning applications especially, but not exclusively, in medical imaging, is truth. Truth means knowing what is in the image. It is very easy to get a large number of images of hats and have people annotate the ones that contain red hats or fedoras. Crowdsourcing the annotation or validation of data to millions (billions?) of people (e.g., CAPTCHA) can also work to create and validate large datasets. Other small and large annotated datasets, for specific recognition tasks, have been created by government, academia and industry at no small cost in time and money.

Medical images are much more complex. There are dozens of kinds of medical imaging devices each producing images according to their respective physical principles. These machines are producing images of hundreds of different anatomic structures and normal variants and pathophysiologic processes resulting in thousands of observable imaging features.

In the case of supervised learning and the creation of annotated datasets, it is important to remember that in the United States there are only approximately 35,000 people trained and licensed to annotate all of those observable imaging features (though there are perhaps triple that number who could contribute annotations in their specialty areas).

Large numbers of patient imaging studies performed with digital technologies over the past 30 years have been annotated by this rolling population of 35,000 experts. The vast majority of those annotations, however, are in the form of unstructured free text and lack links to the coordinates of the pixels containing the image features that engendered the annotation. The good news is that there is a new standard for Annotation and Image Markup (AIM), developed under a National Cancer Institute program, and anyone developing annotated medical imaging data sets ignores the importance of standardized annotation at their peril.
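To make the distinction concrete, here is a hypothetical structured annotation record in the spirit of AIM. Unlike a free-text report, it ties the finding label to the pixel region that engendered it. The field names are illustrative, not the actual AIM schema:

```python
import json

# Hypothetical structured annotation: the finding is linked to pixel
# coordinates, so a learning system can locate the feature in the image.
annotation = {
    "study_uid": "1.2.840.113619.2.1",  # illustrative identifier
    "image_index": 42,
    "finding": "pulmonary nodule",
    "annotator": "reader_007",          # illustrative reader ID
    "region": {"shape": "circle", "center_px": [256, 312], "radius_px": 9},
}

# Serialize for storage or exchange; round-trips losslessly as JSON.
payload = json.dumps(annotation)
print(json.loads(payload)["finding"])  # pulmonary nodule
```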

But you can’t just take single annotations from one of the 35,000. Even though they are experts and very good at what they do, they are human and make mistakes. So you have to have consensus annotations by multiple expert observers.
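A minimal sketch of that consensus step, using a simple majority vote among readers (real studies often use more elaborate schemes, such as STAPLE for segmentations); the labels and agreement threshold are illustrative:

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.5):
    """Return the label the expert readers agree on, or None when no
    label exceeds the required agreement fraction."""
    if not annotations:
        return None
    label, count = Counter(annotations).most_common(1)[0]
    return label if count / len(annotations) > min_agreement else None

# Three readers interpret the same lesion; two call it benign.
print(consensus_label(["benign", "benign", "malignant"]))  # benign
# A 1-1 split yields no consensus and flags the case for adjudication.
print(consensus_label(["benign", "malignant"]))  # None
```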

What about data for unsupervised learning? Can’t we find millions of, for example, chest X-rays and see what patterns are found?

Well, yes, you could, but you might suffer from garbage in, garbage out. There are thousands of imaging procedures. The Current Procedural Terminology (CPT) and other code sets used to classify and bill for these procedures lack the granularity to characterize the exact nature of the imaging performed. It turns out there are 11 or so ways to produce a radiograph of the chest. The billing code, 71020, can be used for any two of these 11 views. In computed tomography (CT) there are dozens of parameters that can be varied to produce images, including whether or not the patient was injected with contrast media. In magnetic resonance imaging, even more parameters. Which of those parameters are going to affect the output of the unsupervised system? There are no widespread, detailed standards for the acquisition of medical imaging studies. The good news is that there is a developing standard for the nomenclature of imaging studies (the Radiological Society of North America’s RadLex™ Playbook, now being harmonized with LOINC). Furthermore, medical imaging has one of the best standards, DICOM, which specifies, in infinite detail, the metadata of medical images, so you can use this information to assist an intelligent triage of the images. As the saying goes, “DICOM is always documented in brown, because it is clear as mud, but delivers like UPS.”
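As a toy illustration of metadata-driven triage, the sketch below filters studies on two real DICOM attributes, Modality (0008,0060) and Contrast/Bolus Agent (0018,0010), with plain dicts standing in for parsed headers; in practice you would read these from the files with a DICOM library:

```python
def triage_studies(studies, modality="CT", with_contrast=None):
    """Select studies matching a modality and, optionally, whether
    contrast media was administered (per the ContrastBolusAgent tag)."""
    selected = []
    for meta in studies:
        if meta.get("Modality") != modality:
            continue
        has_contrast = bool(meta.get("ContrastBolusAgent"))
        if with_contrast is not None and has_contrast != with_contrast:
            continue
        selected.append(meta)
    return selected

studies = [
    {"Modality": "CT", "ContrastBolusAgent": "IOHEXOL"},  # contrast CT
    {"Modality": "CT"},                                   # non-contrast CT
    {"Modality": "MR"},
]
print(len(triage_studies(studies, "CT", with_contrast=False)))  # 1
```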

Standards for non-image structured data are less, ummm, standardized. Even then, much non-image medical data is still unstructured (e.g., notes or structured laboratory data transformed into unstructured document formats). Vocabularies, lexicons and ontologies are mature but schemata and usage still have large local variance.

Lastly, there is no central clearinghouse or national interoperability for medical record data, though some have been in development for a decade or more. Each institution, cluster of institutions or other association of data stewards acts on its own within the limits of the law. So obtaining high quality annotated data sets for both supervised and unsupervised learning will remain a costly challenge for years to come.

What is the “data out” problem?

Let’s say that you’ve overcome the data-in hurdles: you’ve acquired a great, annotated data set and the results on the test set are great. Now you have to validate it; that is, compare the performance of your system to humans on this task, and, I would warn, humans are very good at these tasks. This is done by performing an observer performance study and calculating a receiver operating characteristic (ROC) curve that relates the observer’s sensitivity and specificity. And since you are hoping the difference between your system and the human is small, the study must be large to have the statistical power to distinguish the two. These experiments take time and are costly to perform. Perhaps the system and the human used together are better than either alone? Does the system speed up the interpretation process or slow it down? I don’t want to throw any shade, but humans can determine gross normality of a chest radiograph in 200 milliseconds (Radiology. 1975 Sep;116(3):527–32).
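For intuition, the area under an ROC curve can be computed directly from reader confidence scores via the rank-sum (Mann-Whitney) statistic: it equals the probability that a randomly chosen abnormal case outscores a randomly chosen normal one. A minimal sketch with made-up scores:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability a random positive case outscores a random negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Reader scores on four normal (0) and four abnormal (1) radiographs.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
scores = [0.1, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9]
print(roc_auc(labels, scores))  # 0.9375
```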

OK. You’ve got an AI and it’s good enough for clinical use. How are you going to deliver your result to the clinician, radiologist or other anticipated user of the system and incorporate it into the electronic medical record? Their eyes are not fixed to generic development platforms like iOS or Android. Rather, they are attached to large, expensive, proprietary, often regulated devices and systems. There are standards for integration and interoperability but they must be addressed.

Unlike many consumer technology applications of machine learning, healthcare has a dedicated regulatory body in the FDA. As a result, the FDA will play a significant role in determining the future of machine learning in healthcare. What challenges do developers face in working with the FDA?

The first challenge is not to ignore the 800-pound gorilla in the room. Start early. Find out if your device is a device. I would argue that if your deep learning system is going to do anything meaningful it is going to be a device but there is plenty of guidance available to help the developer make that determination. Once you determine that your device is a device, you can determine what class of device it is and whether any exemptions apply. The class of the device is “based on the level of control necessary to assure the safety and effectiveness of the device.” These determinations will define the path you will take to FDA approval of your device.

Again: policy, paperwork, process. One fundamental philosophy of the FDA is “Quality System (QS) Regulation/Medical Device Good Manufacturing Practices.” While we all love ‘garage code’ that gets us 7 million users in 7 days, the FDA will insist that the code was developed under current good manufacturing practice (CGMP). There are many software development methodologies that will meet CGMP, and you might as well start using one from day one. Similarly, the FDA will look for GMP and the appropriate regulations to have been applied to any data you use and any experiments you perform to validate that data.

Identify who is going to shepherd your company and product through the FDA process. Do you have a lawyer, accountant and CFO to deal with the IRS? You will probably need similar for the FDA. Prepare as much as you can in advance and work in parallel as much as possible.

What challenges does the FDA face in its consideration of these technologies? How can regulatory bodies such as the FDA keep up with the speed of development? How should investors and entrepreneurs think about the FDA’s role in the process of development?

How smart is the gorilla and how good is he at his job? Pretty smart and fairly good. The FDA works by assigning devices for evaluation to one of 16 medical specialty “panels”. These panels rely on published and unpublished scientific studies. One power of the FDA is its ability to convoke panels of industry and academic experts to analyze the evidence. The radiology panel has, for example, already approved “Analyzer, Medical Image” (govspeak) systems based on deep learning techniques such as convolutional neural networks.

The system is, admittedly, slow. This is not, however, solely due to the nature of a large government bureaucracy. Following and documenting the CGMP process, even for software, is tedious and time consuming. Performing and documenting the scientific validation is meticulous and time consuming. Statistical analyses, publishing and analyzing the published and unpublished results all take time. Remember, we are talking about a medical device that could diagnose or steer the diagnosis in many directions. A demonstration of “safety and effectiveness” is, after all, only what your mother would ask for before she allowed the device to be used on her.

What are the benefits that deep learning can provide in healthcare? What is its value proposition, and in what areas of the healthcare system is it most helpful? How does the development of AI fit within the conversation about the rising and unsustainable costs in healthcare?

The value of deep learning systems in healthcare comes only from improving accuracy and/or increasing efficiency. Healthcare today is a human-machine collaboration that may ultimately become a symbiosis, or even a cyborg relationship. We are still at the stage, however, where both humans and machines perform tasks at which they are suboptimal. As deep learning systems develop and evolve, they will more and more assist humans with those tasks at which humans are not good. So, for example, humans are very good at processing information from their senses, including vision. They are very good at perceiving human emotions. But humans are not so good at remembering things, searching for and organizing data, and not too good at correlating and reasoning about that data. So I foresee DL systems that will make physicians and other providers faster and smarter in their diagnoses and reduce uncertainty in their decisions, thereby avoiding costs and hazards and saving time.

A similar debate that is facing industrial automation with robotics could be made about deep learning in health informatics when it comes to job replacement. Do you see backlash from the medical community towards utilizing concepts such as deep learning with regard to its part in changing medical practice? Are there any similar historical analogies you could speak on where technology fundamentally changed the way medicine was practiced, but had significant risks to “traditional” medical practice?

Medicine, in general, and radiology, perhaps more so than any other specialty, has been very good at developing and adapting to new technology. The golden road to the annual meeting of the Radiological Society of North America (the largest medical meeting in the world) is paved with technological innovation. Many fundamental technology “sea changes” have occurred in radiology in a relatively short time, many within our lifetimes. For example, the transition within a decade or two from film-based imaging to digital imaging. Darkroom staff (large numbers of whom were blind!)? Eliminated like buggy whip manufacturers. Film file storage “librarians” (cf. the Cleveland Clinic X-ray fire of 1929)? Reduced or eliminated. Job loss? Some, but not as much as you would think. The transformation to digital and the (ongoing) explosion of new imaging modalities opened new opportunities, as did work in information systems and the changing healthcare environment itself. Industrial disruption? Sure (cf. Kodak, where the small, growing digital siamese twin slew the body of the mighty film producer). Job loss? Some, especially locally. But less than expected, given the number of healthcare information technology companies that arose in parallel.

What about radiologists? Remarkably adaptable to technology perceived as positive to the patient or the institution. At one institution, in 1999, 25 radiologists went from reading images on film to reading images on computer workstations overnight without a significant degradation in accuracy or efficiency. Eventually, they were faster on the new workstations and with new, learned behaviors could never return to film. Fewer radiologists? Not really as new uses for imaging and new imaging technologies were developed. Look how well radiologists have adapted first to mammography (special techniques and technology) then digital mammography, then digital mammography with computer assisted detection/diagnosis and now digital breast tomosynthesis. Accuracy and efficiency have incrementally increased at each step to the benefit of women everywhere. Fewer mammographers and radiologists? Not really.

We, as a society, are going to have to face the accelerating pace of automation and its impact on the workforce and society. There is, however, nothing to suggest to me that these effects will occur faster or in different form in healthcare and in particular due to deep learning. Do I still recommend Radiology as a career to high school and college students? Absolutely.

Deep learning in healthcare has been thriving in recent years. What do you see for the field going forward? What are the important considerations deep learning researchers need to consider for deep learning to be most effective (both from a cost and computational perspective) and ethical going forward?

I see unlimited opportunity to improve the system. Despite current best efforts, there are innumerable inaccuracies and inefficiencies in the system that are ripe targets for DL and other technologies. The most important consideration is to choose your target wisely. Don’t lose sight of the link between the accuracy and efficiency you improve and the pain and suffering you reduce.

By Dave Channin. Curated by Ascanio Guarini and Isaac Madan.

If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. Email Isaac Madan to contribute here.

Up to Speed on Deep Learning: July Update, Part 2

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on July 18th. In case you missed it, here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

OpenAI announces and opens applications for their special projects. They list several problem areas likely to be important both for advancing AI and for its long-run impact on society: detecting breakthrough, malicious AI; an AI-building AI; cybersecurity; and multi-agent simulations. You can apply to lead one of these efforts.

Neural Networks for Machine Learning course by Geoffrey Hinton will be offered again in September 2016 on Coursera. Learn about artificial neural networks and how they’re being used for machine learning, as applied to speech and object recognition, image segmentation, modeling language and human motion, etc. We’ll emphasize both the basic algorithms and the practical tricks needed to get them to work well. You can enroll now.

Modern Face Recognition with Deep Learning tutorial by Adam Geitgey. Part 4 of Adam’s series Machine Learning is Fun. The series introduces machine learning in four detailed segments: spanning an introduction to machine learning to an in-depth convolutional neural network implementation for face recognition. Here are the three prior parts: part 1, part 2, and part 3.

A Beginner’s Guide To Understanding Convolutional Neural Networks by Adit Deshpande. Adit, a CS undergrad at UCLA, provides an explanation of convolutional neural networks by walking through an image processing problem, with mathematical descriptions. He also explains the training process and touches upon practical use of CNNs.

CVPR Days 1–4 by Sri Chandrasekar. Review and summaries of the 2016 Conference on Computer Vision and Pattern Recognition (CVPR) held in Las Vegas in late June, highlighting some of the best talks. All parts here: day 1, day 2, day 3, day 4, summary.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.


Up to Speed on Deep Learning: July Update

By Isaac Madan.

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on June 20th. In case you missed it, here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.


Google’s DeepMind partners with the National Health Service’s Moorfields Eye Hospital to apply machine learning to spot common eye diseases earlier. The five-year research project will draw on one million anonymous eye scans held in Moorfields’ patient database, with the aim of speeding up the complex and time-consuming process of analyzing eye scans (news article). The goal is that this leads to a better understanding of eye disease, earlier detection, and treatment. We previously wrote about the challenges inherent to deep learning in medical imaging here.

Andrew Ng announces the pre-launch of his book Machine Learning Yearning, to share practical advice & experience around building AI systems, to help practitioners get up to speed faster. Over 35,000 people signed up to receive a free draft copy as of June 21.

Google releases Wide & Deep Learning as part of the TensorFlow API. The project combines the power of both memorization and generalization, to better reflect the properties that make the human brain such an effective learning machine. They provide an in-depth example that illustrates the project’s purpose and potential via a fictional food delivery app.

The Harvard NLP and Visual Computing groups announce LSTMVis, a visual analysis tool for recurrent neural networks (RNNs). RNNs learn a black-box hidden state representation, and changes in these representations are challenging to study. The tool makes it easier to visually observe and isolate patterns in state changes. The Verge provides additional context around the black-box aspect of AI systems here.

Explanation, Review, and Cool Stuff

Experience and Advice for Using GPUs in Deep Learning. Tim Dettmers provides a comprehensive analysis of various GPUs and advice on how to best use them for deep learning. For example, he answers questions like Should I get multiple GPUs? and What kind of accelerator should I get? — along with discussion of convolutional neural networks, speed, and memory considerations.

ICML 2016 not by the day by Stephanie Hyland. Review of the 2016 International Conference on Machine Learning (ICML), highlighting the important trends and papers that emerged.

It’s ML, not magic. Stephen Merity addresses over-hype and mysticism around artificial intelligence. He articulates both the reason why we see this hype, as well as the types of questions we should ask to gut-check and better understand the potential of AI.

Chasing Cats. Robert Bond of NVIDIA develops an end-to-end cat surveillance system for his front yard, which is a nice articulation of the full pipeline from camera to processor to neural network (to sprinklers).


If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.

Up to Speed on Deep Learning in Medical Imaging

By Isaac Madan and David Dindi

The notion of applying deep learning techniques to medical imaging data sets is a fascinating and fast-moving area. In fact, a recent issue of IEEE’s Transactions on Medical Imaging journal features a fantastic guest editorial on deep learning in medical imaging that provides an overview of current approaches, where the field is headed, and what sort of opportunities exist. As such, we pulled out some of our favorite nuggets from this article and summarize/extend upon them in Q&A form, so they’re more easily digestible.

Why is deep learning valuable in the field of medical imaging?

Most interpretations of medical images are performed by physicians; however, image interpretation by humans is limited due to its subjectivity, large variations across interpreters, and fatigue.

What are some challenges in applying Convolutional Neural Networks to medical imaging?

  • CNNs require a large amount of labeled data. Large medical data sets are not readily available because many data sets are proprietary and/or are difficult to obtain due to privacy concerns.
  • Most often, the data sets are not comprehensively annotated, owing to the costliness & scarcity of expert annotation in the medical domain.
  • Moreover, rare diseases, by virtue of being rare, are underrepresented in the data sets. If not accounted for properly, the class imbalance that ensues (i.e. the disease label is underrepresented while the healthy label is overrepresented) would bias a model to predict the healthy label.
  • Furthermore, in situations where the features are highly correlated and the normal-class is overrepresented, many of the training samples are redundant and uninformative.
  • In many applications, making an informed diagnosis requires more than just the medical image (e.g. lab values, demographics, prior medical history). Gaining access to and linking these data with the images presents yet another obstacle.
  • Non-standardized evaluation metrics, the use of disparate data sets, and differences in the way that learning tasks are framed each make it difficult to track and compare advancements in the field.
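One common mitigation for the class-imbalance problem noted above is to weight the loss by inverse class frequency, so the rare disease label contributes as much as the overrepresented healthy label. A minimal sketch (the 9:1 ratio is illustrative):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: weight(c) = n / (k * count(c)), so each
    class contributes equally to a reweighted loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Nine healthy cases for every diseased case.
labels = ["healthy"] * 9 + ["disease"]
print(class_weights(labels))  # rare 'disease' class gets weight 5.0
```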

How are the challenges being addressed?

One way is via transfer learning, which has been used to overcome the lack of large labeled data sets in medical imaging. In transfer learning, a separate CNN is first trained on a different task using a different data set. The features learned from this separate task are then reused to train a CNN for the medical imaging task of interest. By recycling features in this way, fewer examples are needed to achieve good performance. One main caveat to transfer learning is that the recycled features need to be generally useful across the two separate tasks.
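A toy sketch of the idea: a hand-crafted, frozen "extractor" stands in for a CNN pretrained on a separate task, and only a small logistic-regression head is trained on the (tiny, made-up) medical data set:

```python
import math

def frozen_extractor(image):
    """Stand-in for a CNN trained on a large source task; its weights are
    frozen, so it is never updated on the medical data."""
    mean = sum(image) / len(image)
    spread = max(image) - min(image)
    return [mean, spread]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(feats, labels, lr=0.5, epochs=2000):
    """Train only a logistic-regression head on top of frozen features."""
    w, b = [0.0] * len(feats[0]), 0.0
    for _ in range(epochs):
        for f, y in zip(feats, labels):
            g = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) - y
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(image, w, b):
    f = frozen_extractor(image)
    return sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) > 0.5

# Tiny labeled "medical" set: high-contrast images carry the 1 label.
images = [[0.1, 0.2, 0.1], [0.9, 0.1, 0.8], [0.2, 0.2, 0.3], [1.0, 0.0, 0.9]]
labels = [0, 1, 0, 1]
w, b = train_head([frozen_extractor(im) for im in images], labels)
print(all(predict(im, w, b) == y for im, y in zip(images, labels)))  # True
```

Because only the head's few weights are learned, the four labeled examples suffice; the caveat from the text applies here too: the sketch only works because the frozen features happen to be useful for the new task.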

Can I try this? Where can I find publicly available data?

Publicly available data sets:

  • Visual Concept Extraction Challenge in Radiology (VISCERAL). Manually annotated radiological data of several anatomical structures (e.g. kidney, lung, bladder, etc.) from several different imaging modalities (e.g. CT and MR). They also provide a cloud computing instance that anyone can use to develop and evaluate models against benchmarks.
  • The Cancer Imaging Archive. Cancer imaging data sets across various cancer types (e.g. carcinoma, lung cancer, myeloma) and various imaging modalities.
  • Grand Challenges in Biomedical Image Analysis. A collection of biomedical imaging challenges in order to facilitate better comparisons between new and existing solutions, by standardizing evaluation criteria. You can create your own challenge as well. As of this writing, there are 92 challenges that provide downloadable data sets.
  • The Lung Image Database Consortium image collection (LIDC-IDRI). A collection of diagnostic and lung cancer screening thoracic CT scans with annotated lesions.
  • Kaggle diabetic retinopathy. High-resolution retinal images that are annotated on a 0–4 severity scale by clinicians, for the detection of diabetic retinopathy. This data set is part of a completed Kaggle competition, which is generally a great source for publicly available data sets.
  • International Symposium on Biomedical Imaging 2015. Eight Grand Challenges presented at ISBI.
  • Multiple sclerosis lesion segmentation challenge 2008. A collection of brain MRI scans to detect MS lesions.
  • Multimodal Brain Tumor Segmentation Challenge (BRATS). Large data set of brain tumor magnetic resonance scans. They’ve been extending this data set and challenge each year since 2012.
  • Coding4Cancer. A new initiative by the Foundation for the National Institutes of Health and Sage Bionetworks to host a series of challenges to improve cancer screening. The first is for digital mammography readings. The second is for lung cancer detection. The challenges are not yet launched.

Why are large publicly available medical image data sets challenging to construct?

As we know, deep learning benefits from massive amounts of training data. However, publicly available medical data sets of this scale are hard to construct because, as the article states:

  • It is difficult to obtain funding for the construction of data sets.
  • Scarce and expensive medical expertise is needed for high quality annotation of medical imaging data.
  • Privacy issues make it more difficult to share medical data than natural images.
  • The breadth of applications in medical imaging requires that many different data sets be collected.

Data science challenges (like the Grand Challenges mentioned above) — which “provide a precise definition of a task to be solved and define one or more evaluation metrics that provide a fair and standardized comparison between proposed algorithms” — help to crowdsource massive annotated data sets, while also moving the field forward through standardization.

Still, annotations will not always be obtainable, or of meaningful quality, especially in specialized, rare, or expert use cases. This points to one of many interesting future directions for the field: to leverage truly big data for which hand annotations are unavailable or intractable to obtain, the field will likely need to move toward semi-supervised and unsupervised learning.
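As a toy illustration of that semi-supervised direction, here is a minimal pseudo-labeling sketch: train on a small labeled set, pseudo-label the points the model is confident about, and retrain on the enlarged set. The data and the nearest-centroid classifier are invented for illustration, not taken from any of the challenges above.

```python
import numpy as np

def centroids(X, y):
    """Per-class mean of the labeled examples (a nearest-centroid 'model')."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(cents, X):
    """Return predicted class and distance to the nearest centroid."""
    classes = sorted(cents)
    d = np.stack([np.linalg.norm(X - cents[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)], d.min(axis=0)

rng = np.random.default_rng(0)
# Tiny hand-annotated set plus a large unannotated pool (synthetic).
X_lab = np.vstack([rng.normal(0, 0.5, (5, 2)), rng.normal(3, 0.5, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])

# Pseudo-labeling: predict on the unlabeled pool, keep only confident
# predictions, and fold them back into the training set.
cents = centroids(X_lab, y_lab)
pseudo, dist = predict(cents, X_unlab)
confident = dist < 1.0
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.concatenate([y_lab, pseudo[confident]])
cents = centroids(X_aug, y_aug)  # retrain on the augmented set
```

The same loop shape appears in real semi-supervised pipelines, just with a deep network in place of the centroid model and a calibrated confidence threshold.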

Where do we go from here? Open questions and future opportunities:

  • How important and meaningful will a transition from 2D to 3D analysis be in terms of performance gains?
  • “The majority of works are in fact using supervised learning.” How meaningful will advancements in unsupervised and semi-supervised approaches be in terms of performance gains?
  • How much data will be required to solve certain types of problems? What sort of things can the research community do to make bigger, higher quality data sets, evaluation criteria, and approaches accessible to other people in the field?

By Isaac Madan and David Dindi. Isaac is an investor at Venrock (email). David is a co-founder of a stealth AI company (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.

“Be in love with your problem, not your solution.”

Requests for Startups: Matt Wallaert (Microsoft Ventures)

A behavioral psychologist and entrepreneur, Matt Wallaert’s focus is “on building products and programs that create behavior change to help people lead better, happier lives.” He is currently a Director at Microsoft Ventures, where he works with VCs and startups to help discover how Microsoft’s resources can be best leveraged to address the needs of growth stage companies. Before Microsoft Ventures, he was a Behavioral Scientist at Microsoft, where he headed product design focused on removing obstacles for users, so that they could “take action on their ideas, questions, beliefs, and desires.” He started his tech career at Thrive, where he was Lead Scientist (and Product Lead), stayed on at LendingTree when Thrive was acquired, cofounded Churnless, and serves as an advisor/angel for a number of other companies.

How does your academic background in behavioral psychology influence the way you think about building and growing companies?

As a social psychologist, my specialty is something called JDM: judgment and decision making. So when I look at startups, I’m usually evaluating them through the lens of how people will use them to make decisions (or how they will be used to automate so people don’t have to make a decision). I’m also always looking for the behavioral goal: if the startup wins and the world is the way they want it to be, what will people who use their product do? What is the behavior that rises to the top?

Take Uber. Their behavioral statement might look something like “When people want to go from Point A to Point B and they are in an urban area, they’ll take an Uber.” Startups with a clear focus like that are ones I like: Uber didn’t try to make people want to go from Point A to Point B. And they didn’t try to tackle absolutely everyone, everywhere. They knew what existing motivation they were going to capitalize on and they focused that in a place where they felt they could succeed. That’s the kind of sharp, pointy startup I like.

You focus on the idea of competing pressures quite often. Could you summarize this idea and comment on why founders might overlook these competing pressures? What are some examples of companies that confront these competing pressures effectively?

Competing pressures is a pretty simple notion, borrowed from psychology. All human behavior is the product of promoting pressures (reasons to do something) and inhibiting pressures (reasons not to do something). They compete against each other and the winner drives the behavior.

Now, that may not seem very profound, and it isn’t. But designing thoughtfully using competing pressures can bring you to some interesting places. Let’s go back to the Uber example: one of the strengths of the company is that they didn’t try to make people want to go from Point A to Point B. While every other black car company was out there trying to introduce new promoting pressures (“It’ll be a silver Audi, with an awesome driver, and it will play your favorite music and have wifi and a disco ball!”), Uber realized that the problem wasn’t promoting pressures: the promoting pressures were as strong as they needed to be. Instead, the problem was inhibiting pressures: it was hard to get a black car, they were expensive, tough to schedule on demand, etc. That laser focus on inhibiting pressures is a big part of their success.
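The model can be written down almost literally. Here is a deliberately naive sketch with made-up pressure weights for the black-car example (nothing here comes from Uber; the names and numbers are invented):

```python
def behavior_occurs(promoting, inhibiting):
    """The winner of the two competing sums drives the behavior."""
    return sum(promoting.values()) > sum(inhibiting.values())

# Hypothetical weights: the desire to get from A to B is already strong.
promoting = {"get_from_A_to_B": 5}
inhibiting = {"hard_to_hail": 2, "expensive": 2, "hard_to_schedule": 2}
before = behavior_occurs(promoting, inhibiting)  # inhibitors win: no ride

# Attack the inhibiting pressures instead of piling on more promoting ones.
inhibiting = {"hard_to_hail": 0, "expensive": 1, "hard_to_schedule": 0}
after = behavior_occurs(promoting, inhibiting)   # now the ride happens
```

Lowering the inhibiting side flips the outcome without touching the promoting side at all, which is the point of the Uber example.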

And they carry it over into their marketing. Look at Uber emails. Every once in a while they do a promoting pressure, puppies or fancy cars or something. But most of their emails are about three things: it is now cheaper than it was, there are now more drivers (and so Ubers are more available, quicker), or they can now go somewhere they couldn’t go before. All three are strong inhibiting pressures and concentrating on these has paid off.

You have said that companies should derive “structured, diverse thoughts about consumers’ behaviors.” What are the best methods for companies to do this? What does such information allow companies to do?

We’ve all heard it a million times and yet so few really do it: talk to your customers. I don’t mean email them generic surveys or run user tests with Camtasia. I mean actually talk to them. Get to know them as people, befriend them. When I was Head of Product at Thrive (sold to LendingTree, competed with Mint), our weekly emails had a little note at the bottom: “Live in New York City? Want a free lunch? Our scientist Matt Wallaert will take you out to talk about Thrive.” And I just met people, over and over and over. I didn’t talk about Thrive. I asked them about how they felt about money and why. I listened to their descriptions of how it affected their lives and relationships. I got to know them and honestly, I’m still friends with a few of those folks, all these many years later.

Some of our best product revelations came from those sessions and just listening. I remember being in a bar with Avi Karnani (Thrive CEO) and talking about finance and drawing other people into the conversation. And I ended up talking to this Italian woman who said something that changed the way I look at goal-based saving. She said Americans always saved for something specific, but in Europe, they save for the possibility of something unknown. Like that spur-of-the-moment trip that you need $100 for. Or the sidewalk artwork that just calls to you. It completely changed the way I did our savings goals at Thrive, for the better.

What is the major piece of advice you give to entrepreneurs in the Microsoft Ventures network?

I try to listen to where they are and where they want to be, rather than just giving generic advice. But if there is any lesson I see over and over again, it is that startups need to move away from the middle, toward the extremes. Either you’re for people who care a lot and want to go deep or you are for people who don’t care at all and want to automate. Everything in the middle falls away.

Maybe the most generic truth I can give is “be in love with your problem, not your solution.” I challenge founders all the time: if I could come up with a better solution to their problem, would they be willing to throw their existing solution away and start building it? If they say no, that’s a huge red flag.

What are the most frequent reasons you see startups fail?

Well, there are some startups that just shouldn’t have existed in the first place. They didn’t have a real problem to solve and they weren’t interested in finding one. But if you’re talking about the good startups, the ones that should have succeeded and didn’t? I think it is mostly just simple exhaustion. I don’t mean “let’s stay up until 2am every night” exhaustion but mental fatigue. You make a million decisions at startups and if you question and maximize and perfect each one, you die. Good enough has to be good enough or startups just…self-destruct.

What startup verticals interest you most right now?

Automation. Not industrial automation, but “I can pay you money so I don’t have to think about something” automation. As people’s discretionary income increases, they need access to services that allow them to focus their time on things they actually care about. For example, I don’t care about fashion. I need to look good enough but beyond that, I want to spend zero mental energy on clothes. So I built a little script that just auto-buys the clothes that I know fit off eBay. Same brand, same size, under $30? Presto, in my closet.
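That auto-buy script is easy to picture. A minimal sketch of the filter at its core, with hypothetical listing fields and brand names (a real version would sit on top of a marketplace API and actually place the order):

```python
from dataclasses import dataclass

@dataclass
class Listing:
    brand: str
    size: str
    price: float  # USD

def should_auto_buy(item, brand="AcmeWear", size="M", max_price=30.0):
    """Same brand, same size, under $30: buy without thinking about it."""
    return item.brand == brand and item.size == size and item.price < max_price

listings = [
    Listing("AcmeWear", "M", 24.99),
    Listing("AcmeWear", "L", 19.99),  # wrong size
    Listing("OtherCo", "M", 12.00),   # wrong brand
    Listing("AcmeWear", "M", 45.00),  # over budget
]
to_buy = [item for item in listings if should_auto_buy(item)]
```

The filter encodes the decision once, up front, which is exactly the “pay money so I don’t have to think about it” trade the answer describes.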

For double points, automation for women, families, and “unexotic underclass.” Too many startups are focused on people they have easy access to. I love unsexy solutions to unsexy problems that have huge, broad appeal.

What are your biggest predictions for the year ahead?

I know better than to play this game. My prediction for the year is that I’ll continue to be wrong about a lot of things, right about a few key things, and that will be enough. My son will grow up, I’ll grow out (woof, new dad weight is a killer)…it’ll be a good year.

Or maybe this: Lots of good companies won’t get funded, lots of bad companies will. Women and minorities will still get the short end of the stick. Don’t like those predictions? Do something about it. I’m sure trying.

Are there any specific company ideas that you really want someone to build and would potentially fund?

I have an idea around analytics and management of sponsorships as an enterprise SaaS that I’d really like to see built. I’ve pitched it to a million folks looking for good ideas, but in the end, most people gravitate toward something sexier.

You spearheaded the digital literacy program Bing in the Classroom (BitC) at Microsoft to foster digital literacy in K-12 students. How do you see these kinds of socially responsible product extensions within the context of large technology companies like Microsoft? Is this different for earlier-stage companies? Do you think that companies should have a double bottom line baked into their business plan?

Customizing your products to reach new audiences isn’t something you should have to do out of the goodness of your heart; it just makes good business sense. In schools that use BitC, we see 40% increases in search volume. That’s a huge, huge number. Teaching kids to be computer literate, even if it isn’t on our platforms, is good business, because all technology touches our ecosystem, whether we directly own it or not. When technology is doing well, Microsoft does well.

I have sort of mixed feelings about double bottom line companies, because I think profit is actually a fine motive; that isn’t where companies start to behave badly. Companies go awry when they get too focused on the short term and not enough on the long term. It is one of the reasons I think non-public companies are really, really interesting, because they don’t have to show quarter-over-quarter growth. Rather than destroy their customer base through churn-and-burn, they can be thoughtful, cultivate community, etc.

So if we need to institute a double bottom line in order to get people to think far enough ahead, that’s fine, but we should be clear the problem is time horizon. Take one of the more popular double bottom line motives: environment. Polluting the earth is bad for every single business on it if you just take a long enough time horizon, because all the people who buy your products end up dead when the oceans rise. I think we need CEOs to start making those sorts of arguments.

By Ascanio Guarini, recent Swarthmore grad and intern at Bridgewater (email). You can reach Matt Wallaert via email here.


Up to Speed on Deep Learning: June Update

By Isaac Madan

At the end of April, we published an article on getting up to speed on deep learning, which included 20+ resources to catch up on rapid advancements in the field. Much has happened since then, so we thought we’d pull together a few of the excellent resources that have emerged in June. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.


Generative models by OpenAI. The team at OpenAI shares five new projects that enhance or use generative models, a branch of unsupervised machine learning techniques:

  • Improved techniques for training generative adversarial networks (paper).
  • Improved techniques for training variational auto-encoders (paper).
  • Improved techniques for interpretable representation learning (paper).
  • Curiosity-driven exploration in deep reinforcement learning (paper).
  • New approach for imitation learning (paper).

Facebook introduces DeepText, its deep learning engine that understands textual content on Facebook with near-human accuracy and at a speed of several thousand posts per second, in more than 20 languages.

Google DeepMind learns to play Montezuma’s Revenge via intrinsic motivation techniques (video). The game requires forward planning. Read the paper, Unifying Count-Based Exploration and Intrinsic Motivation, here.

NVIDIA announces its GPU Ventures Program in which it “provides marketing, operational, financial and other support to young ambitious companies founding their businesses around NVIDIA technologies.” They plan to make $500K to $5M investments in these startups. Consider applying if you’re working on a deep learning startup.

DARPA announces its Data-Driven Discovery of Models program, which is intended to help non-experts build their own models using automated tools that facilitate data science. In effect, leveraging machine learning for machine learning.

Explanation & Review

Neural Network Architectures by Eugenio Culurciello. The history of neural network design over the past few years to help us better craft neural network architectures in the future.

Deep Learning Trends @ ICLR 2016 by Tomasz Malisiewicz. Review of the 2016 International Conference on Learning Representations (ICLR), highlighting the important trends and papers that emerged.

Most Cited Deep Learning Papers by Terry Um. A curated list of the most cited deep learning papers since 2010, interesting newly released papers, earlier classics from 1997–2009, and distinguished researchers.

Deep Reinforcement Learning by Andrej Karpathy. An overview of reinforcement learning, explaining the field in the context of the game of Pong, a simple reinforcement learning task.

Tips/Tricks in Deep Neural Networks by Xiu-Shen Wei. Must-know implementation details for building and training deep neural networks. Xiu-Shen explains the following important concepts: data augmentation; pre-processing of images; initializations of networks; some tips during training; selection of activation functions; diverse regularizations; some insights found from figures; and methods of ensembling multiple deep networks.

By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.


Getting Up to Speed on Deep Learning

By Isaac Madan and David Dindi

For good reason, deep learning is increasingly capturing mainstream attention. Just recently, on March 15th, Google DeepMind’s AlphaGo AI — technology based on deep neural networks — beat Lee Sedol, one of the world’s best Go players, in a professional Go match.

Behind the scenes, deep learning is an active, fast-paced research area that’s proliferating quickly among some of the world’s most innovative companies. We are asked frequently about our favorite resources to get up to speed on deep learning and follow its rapid developments. As such, we’ve outlined below some of our favorite resources. While certainly not comprehensive, there’s a lot here, and we’ll continue to update this list — if there’s something we should add, let us know.

Structured Resources

Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (2016). A comprehensive and in-depth book on machine learning and deep learning core concepts.

Course notes from Stanford CS 231N: Convolutional Neural Networks for Visual Recognition. This course is a deep dive into details of neural network architectures with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.

Course notes from Stanford CS 224D: Deep Learning for Natural Language Processing. In this class, students will learn to understand, implement, train, debug, visualize and potentially invent their own neural network models for a variety of language understanding tasks.

Blogs, Papers, and Articles

Deep Learning in a Nutshell by Tim Dettmers, via NVIDIA (2015). These articles are digestible and do not rely heavily on math.

  • Part 1: A gentle introduction to deep learning that covers core concepts and vocabulary.
  • Part 2: History of deep learning and methods of training deep learning architectures quickly and efficiently.
  • Part 3: Sequence learning with a focus on natural language processing.

Podcast with Yoshua Bengio: The Rise of Neural Networks and Deep Learning in Our Everyday Lives. An exciting overview of the power of neural networks as well as their current influence and future potential.

Deep learning reading list. A thorough list of academic survey papers on the subjects of reinforcement learning, computer vision, NLP & speech, disentangling factors, transfer learning, practical tricks, sparse coding, foundation theory, feedforward networks, large scale deep learning, recurrent networks, hyperparameters, optimization, and unsupervised feature learning.

Christopher Olah’s blog. Christopher has in-depth, well-explained articles with great visuals on neural networks, visualization, and convolutional neural networks.

Adrian Colyer’s blog. Adrian selects and reviews an interesting/influential/important paper from the world of CS every weekday morning.

Community

Deep learning Google Group. Where deep learning enthusiasts and researchers hang out and share the latest news.

Deep learning research groups. A list of many of the academic and industry labs focused on deep learning.

San Francisco AI meetup. A local meetup for AI enthusiasts and researchers that we’re involved in. Pieter Abbeel will be speaking on April 28, and Vinod Khosla on May 5.


  • International Conference on Learning Representations. May 2–4, 2016 in the Caribe Hilton, San Juan, Puerto Rico. Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio and NLP, there was no venue for researchers who share a common interest in this topic. The goal of ICLR has been to help fill this void. Yoshua Bengio & Yann LeCun are General Chairs.
  • International Conference on Machine Learning. June 19-24, 2016 in New York City, NY. ICML is the leading international machine learning conference and is supported by the International Machine Learning Society (IMLS).
  • Conference on Neural Information Processing Systems (NIPS). December 5–10, 2016 in Barcelona, Spain. A single-track machine learning and computational neuroscience conference that includes invited talks, demonstrations and oral and poster presentations of refereed papers.
  • GPU Technology Conference (GTC). April 4–7, 2016 in San Jose, CA; there are others later throughout the year in other countries. Presented by NVIDIA, GTC is comprised of the annual conference, year-long webinar series, and workshops that connect the global community of developers, researchers, and scientists through unique educational and networking opportunities.

Tools & Code


Deep Learning Frameworks in VentureBeat (2015). An overview of major deep learning libraries, as of December 2015.

TensorFlow neural network playground. Play with neural networks visually in your browser to get a feel for what they are and what they do.

TensorFlow tutorial. Google’s tutorial that explains TensorFlow and MNIST, as well as the basics of machine learning and deep learning networks. This is in Python.

OpenAI Gym. A toolkit for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Go.

Neon. Nervana Systems’ fast Python-based deep learning library. Tutorials here.

Debugging neural networks by Russell Stewart. Neural networks are hard to debug and this affects the learning curve involved in implementing deep learning. Russell offers some great insight.

Theano. Numerical computation library for Python (faster and more mature than TensorFlow).

Lasagne. Lightweight Python library for deep learning (built on Theano).

Caffe. Deep learning framework.

Model Zoo. Pretrained Caffe models for a variety of tasks.

By Isaac Madan and David Dindi. Isaac is an investor at Venrock (email). David is a grad student at Stanford and TA for CS 224D, mentioned above (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.


Stay focused, be curious, and deliver: Dave Weisbeck (CSO of Visier)

A seasoned software executive, Dave Weisbeck’s experience ranges from building development teams to growing multi-billion dollar businesses as a General Manager. Dave is currently Chief Strategy Officer at Visier. With twenty years in the information management and analytics industry, Dave’s prior roles include developing products and product strategy at Crystal Decisions — acquired by Business Objects for $820M — and Business Objects, acquired by SAP for $6.8B. Most recently Dave was the Senior Vice President and General Manager responsible for Business Intelligence, Enterprise Information Management, and Data Warehousing at SAP. Dave holds a position on the Advisory Board.

What advice do you have for entrepreneurs?

We are all looking for that one thing that can provide us with a litmus test that determines whether our business will be successful or not. I know I am still looking. My best candidates to date:

First, be ruthless in prioritization to create a clear focus. I often hear some form of the phrase that great leaders find a way to say “yes”. I believe that very good leaders turn too many “no” answers into “yes”, but truly great leaders know when to say “no”. They provide the business something incredibly important: focus. It is often easy in a corporate environment to say yes, as every idea has its champions and analysis that supports it, but everything has to be weighed against opportunity costs and compared to other ideas that compete for resources.

Second, be paranoid and curious. This idea, I admit, has its genesis with Andy Grove, but with an additional twist: you shouldn’t just look over your shoulder to check for the competition, but also look internally and pay attention to all of the details. Have you read the contracts your customers sign, and do you know what all of that language around indemnity means? How does the QA team go about checking the quality of your product? There are countless examples; regularly invest some time to dig into the details.

Lastly, when your customers are your greatest champions, everything else gets easier. Provide them a great product at a fair price and you will notice the pipeline will get easier to build and sales will close a little sooner.

I have found no single idea above sufficient, but applying all three has served me well.

Tell us about your experience as an executive at Business Objects. How did you feel after the company was acquired by SAP for $6.8 billion?

I have been on the acquired side of two great acquisitions. First, Business Objects acquired Crystal Decisions — most famous for Crystal Reports — and then Business Objects was acquired by SAP. When you are acquired the first feeling is a bittersweet one. You have poured your heart and soul into creating an organization that will take on the world and beat the competition into submission, and the acquisition represents an ending to that mission. What you quickly come to realize is that it isn’t an ending, but a chance to pivot to an expanded opportunity. New ideas, new experiences, new opportunities and resources become available that allow you to accelerate your mission. In the case of SAP acquiring Business Objects, it allowed us to establish a dominant market share in the analytics space that was nearly 2:1 on the competition.

Acquisitions also represent a great opportunity to grow personally. The world of start-ups is very different from the world of large Enterprise vendors, but they both have something to learn from each other. Experiencing both has been an invaluable education in what it takes to make a business successful.

Could you explain the impact of big data on HR organizations? Where does Visier fit in?

Our view is that there are three fundamental data elements for a business: customers, products (or services), and employees. Of these, employees have the biggest impact on the success of the business, but are the least measured and analyzed.

Sales and marketing are deep into analyzing the customer, while the operations team is optimizing the supply chain and finance is tracking the money. HR, however, is still struggling to turn insight about the workforce into improved business results. It is this connection to business results that is critical to Visier.

When it comes to data on the workforce, we have an unprecedented amount to mine. Beyond the employee record in the HRIS, or candidate information in an ATS, or performance and compensation in a talent management system, there is engagement, payroll, safety, and absenteeism. This can go further into emails, calendaring, messaging, badge swipes and seating plans as we look at how employees network and collaborate. These only scratch the surface of the larger data landscape, as we haven’t talked about data that exists outside the business in places such as social networks or patent filings. But there is one other category of data that is more important than all the rest — productivity.

The real opportunity here is to find out how to make a sales rep more productive, or to increase the customer satisfaction in your properties, or to lower the incidence of patient re-admissions while increasing your HCAHPS score. To achieve these results, you need to uncover the relationships between, for example, industry experience and quota attainment, or training time and patient satisfaction, and making these connections has been too difficult — until now.

Whether it is improving quality, revenue, innovation, customer satisfaction, units produced, or any other measure of the business, the opportunity is to improve your business results through decisions you make about your workforce. This is what Visier is all about.

What does the future of business intelligence look like? Where are things headed?

We believe that the future is a move away from IT-centric toolsets to business-leader-focused solutions — we are calling this Applied Big Data.

When you look at the evolution of business intelligence, there has been a cycle of innovation that has focused on ease of use. Crystal Reports was able to defeat Actuate because you could build a report in a couple of hours in Crystal, and the same report would take you days to program in Actuate.

Business Objects evolved this further with solutions like Web Intelligence, and now Tableau and Qlik are doing the same again. The innovation has been incredibly incremental rather than disruptive.

Meanwhile, on the backend the debate between the Inmon and Kimball camps was largely centered on a top-down centralized approach versus a bottom-up approach with a preference for star schemas. However, what was really holding the business back was that the ability to change or update a data warehouse was far slower than the pace of change of the business. New types of data were exacerbating this issue as the volume, speed and variety of data was rapidly increasing, and this led to the birth of Big Data approaches and in-memory data computing.

What all of this forgets is that all of this technology exists to help people make better decisions. All of this incremental innovation has been focused on making the toolsets either easier to use or more powerful. I liken it to making a hammer or saw simpler or more powerful. For most people, an easier tool will not make it easier to build that house or dining room table. The application of the tool is the real challenge, and this is part of why enterprise application spend is about 5 times that of middleware.

Business Intelligence has largely been treated as middleware by organizations. With its shift to the Cloud, that model is a mistake as dedicated solutions that focus on domain-specific questions and data management give customers deeper insight at lower costs, in less time.

Do you have any ideas you would like to see funded or built?

Experience teaches us to be shy about trying to predict what will happen too far into the future. With that caveat safely out of the way…

I did an undergraduate thesis in artificial intelligence, and have been a staunch supporter of the idea that true intelligence is a function of our ability to learn, not of the knowledge we have acquired (what Tim Urban referred to as the “Try to make evolution do what it did before but for us this time” approach). At the time of my thesis, I was particularly excited by the work of John Koza and the idea of genetic programming, where the algorithm — literally the code — could be evolved to solve new forms of problems. So if I could place a long bet on some technology to be funded, I would turn my focus to the application of AI to new business challenges.
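To make the “evolve the code” idea concrete, here is a toy genetic algorithm that evolves a bit string toward a target. Koza-style genetic programming evolves program trees rather than bit strings, but the loop of selection, crossover, and mutation is the same; everything here is invented for illustration.

```python
import random

random.seed(42)
TARGET = [1] * 20  # the "problem" the population evolves toward

def fitness(genome):
    """Number of positions matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Splice two parent genomes at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]  # truncation selection: keep the fittest third
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]

best = max(pop, key=fitness)
```

Swap the bit string for an expression tree and the fitness function for a score on a real task, and you have the shape of the genetic programming Koza described.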

When it comes to something to be built, I am impatient to see the Internet of Things become a reality, but I believe one of the key elements holding it back is infrastructure. A key component of the success of the Internet has come from its long list of standards, such as HTML and HTTP, but also SMTP and IMAP and, at lower levels, TCP/IP. Applying, and improving on, the lessons of these standards to create a secure and open means for things to communicate with people, and with other things, would accelerate innovation and allow us to build more valuable integrations of the digital and physical worlds.

If you’re interested in Visier, feel free to reach out to Dave directly via email. This issue was curated by Kaesha Freyaldenhoven (email).

Subscribe to our email newsletter here.

Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. If you think there’s someone we should feature in an upcoming issue, nominate them by sending Isaac an email.

7 Insightful Quotes from Amazon’s Letter to Shareholders

Every year, Jeff Bezos publishes a letter to Amazon shareholders. These letters tend to be an interesting lens into the company and its progress, as well as a source of insightful snippets about building & leading an iconic business. We thought we’d share a few of our favorite nuggets from Bezos’ 2016 letter published last week.

On organizational values:

[AWS and Amazon retail] share a distinctive organizational culture that cares deeply about and acts with conviction on a small number of principles. I’m talking about customer obsession rather than competitor obsession, eagerness to invent and pioneer, willingness to fail, the patience to think long-term, and the taking of professional pride in operational excellence.

On cultural stability:

The reason [corporate] cultures are so stable in time is because people self-select. Someone energized by competitive zeal may select and be happy in one culture, while someone who loves to pioneer and invent may choose another. The world, thankfully, is full of many high-performing, highly distinctive corporate cultures. We never claim that our approach is the right one — just that it’s ours — and over the last two decades, we’ve collected a large group of like-minded people.

On the value of experimentation:

Failure and invention are inseparable twins. To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there. Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten. We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold.

On innovation and customer understanding:

India is another example of how we globalize an offering like Marketplace through customer obsession and a passion for invention. Last year we ran a program called Amazon Chai Cart where we deployed three-wheeled mobile carts to navigate in a city’s business districts, serve tea, water and lemon juice to small business owners and teach them about selling online. In a period of four months, the team traveled 15,280 km across 31 cities, served 37,200 cups of tea and engaged with over 10,000 sellers. Through this program and other conversations with sellers, we found out there was a lot of interest in selling online, but that sellers struggled with the belief that the process was time-consuming, tedious and complex. So, we invented Amazon Tatkal, which enables small businesses to get online in less than 60 minutes. Amazon Tatkal is a specially designed studio-on-wheels offering a suite of launch services including registration, imaging and cataloguing services, as well as basic seller training mechanisms.

On substance versus form:

Many characterized AWS as a bold — and unusual — bet when we started. “What does this have to do with selling books?” We could have stuck to the knitting. I’m glad we didn’t. Or did we? Maybe the knitting has as much to do with our approach as the arena. AWS is customer obsessed, inventive and experimental, long-term oriented, and cares deeply about operational excellence.

On focus:

Many companies describe themselves as customer-focused, but few walk the walk. Most big technology companies are competitor focused. They see what others are doing, and then work to fast follow. In contrast, 90 to 95% of what we build in AWS is driven by what customers tell us they want.

On decision making:

One common pitfall for large organizations — one that hurts speed and inventiveness — is “one-size-fits-all” decision making.

Some decisions are consequential and irreversible or nearly irreversible — one-way doors — and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that — they are changeable, reversible — they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.

As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention. We’ll have to figure out how to fight that tendency.


Requests for Startups: Sumeet Shah (Brand Foundry Ventures)

Sumeet handles sourcing and managing new opportunities at Brand Foundry Ventures. He has 6 years of experience across the startup and private equity industries, formerly running new business strategies at Gist Digital and handling business development and project work at Gotham Consulting Partners. He also assisted in the creation of XRC Labs, a consumer and retail accelerator run by Kurt Salmon and Parsons, as the Program Manager for its inaugural class. Sumeet graduated from Columbia University with a Bachelor of Science in Biomedical Engineering, specializing in Biomechanics.

What startup verticals interest you most right now?

Brand Foundry’s focus is on low-tech and high-tech consumer products and devices. It’s all about things you can see, touch, feel, and interact with. I’ve recently been particularly interested in low-tech products challenging antiquated industry verticals: products that combine strong branding, a seamless supply chain, and a great go-to-market process.

Examples include current portfolio companies Caeden (headphones) and LOLA (tampons), and rising startups like AWAY and Raden (luggage). What makes these four so exciting to me is how each is finding a unique way to resonate with its customer: Caeden with its combination of form and function, LOLA with an unapologetic social media play, and AWAY and Raden tapping deep into product with strong outreach, AWAY through beautiful travel content and marketing, and Raden through form and function with just the right amount of tech.

That said, I’m also excited about high-tech consumer plays in both electronics and hardware, ESPECIALLY VR. HUGE fan of Oculus.

What are some key factors necessary to build a robust consumer brand?

PEOPLE/PRODUCT/PIPELINE. I would want to see a strong, cohesive team (filling out the roles of Marketer, Operator, and Technician, depending on how technical the startup is) with a focus on building a cyclical brand (a complete strategy on supply chain, production, distribution, and go-to-market). I’ve seen an unfortunate number of founders who are more focused on the “front end” (marketing, advertising, beautiful product) than the “back end” (supply chain, infrastructure), and they fail as a result because they have basically built a house of cards.

What are your thoughts on direct-to-consumer, fully integrated brands like Warby Parker, Casper, etc.?

Well, we invest heavily in the space so I’m definitely a fan. Moreover, my boss Andrew Mitchell is an investor in Warby Parker. We’ve also done heavy investments in the space from Cotopaxi and Lola to AllBirds and Caeden. You also see some great young ones like Tecovas that are fighting antiquated competitors who are retail and brick and mortar heavy and yet use the same components to keep quality high.

But why am I such a fan? Because brick and mortar is absolutely broken: young and/or fast-rising companies have to work within the models of the retailers that carry them, and as a result, they can’t control their brand. It’s so ass-backwards that sometimes you have to deal with the retailers’ stubbornness about how “things work” for them. Moreover, the retail markup that brands have to commit to (usually more than 2.5x) screws the customer over.

What are your biggest predictions for the year ahead?

We will see a return of focus toward building sustainable, strong businesses, but my biggest prediction is that 2016 will be the year that women in startups rule. Female founders will FINALLY get the funding they so rightfully deserve. We take a lot of pride in investing in a significant number of women-led companies (9 out of 15 at the moment), but we didn’t do it to fill a damn quota. We did it because they are absolute badass founders who are building real businesses and challenging the status quo.

Moreover, I wish we could see more women in VC top-brass positions, but you also see some phenomenal women in junior and mid-level VC roles who are slated to take over partner positions and/or start their own funds. Sometimes I get so irked that we STILL see women mistreated, underpaid, and misrepresented in our society. I still cannot understand why women aren’t paid the same as men. It’s not even a feminist opinion; it’s simple ethics. If we were all on the same level playing field (as investors, founders, as human beings), nobody would be complaining. It isn’t a hard problem to fix!

Are there any specific company ideas that you really want someone to build and would potentially fund?

As crazy as it sounds, it’s within the dating app space, specifically within apps like Tinder and Bumble. Those apps are visually focused on one side of the spectrum, and you have more content-heavy options like OKCupid and Match on the other. I would love to see someone build something in the middle, centered on push notifications or the like; these notifications would pose questions pertinent to popular culture, current events, or values, with Yes/No/Decline-to-Answer options. The data can then be built up for each person, and as you “match” with someone, those questions can pop up as a way to open the conversation and engage with your match further. Better and deeper conversations can lead to better success stories from these apps.

If you could have a magic wand and instantly have any imaginable solution to a problem you’re facing, what problem would you solve?

I think, like most people, I’d love for any repeatable or “brain-less” task to be automated so I can spend more time focusing on harder problems and more important tasks. I also think AI can, in the near future, arm people with tools that streamline a lot of these processes rather than fully automating them, so hopefully we’ll see that happen soon.

If you’re interested in Brand Foundry Ventures, feel free to reach out to Sumeet directly via email.

This issue was curated by Akshay Goradia (email, website). Akshay is a student at Carnegie Mellon studying Information Systems and Human-Computer Interaction. Having spent last summer at Entrepreneurs Roundtable Accelerator in NYC, Akshay is interested in venture capital and is looking to gain more experience in venture. Drop Akshay a line here if you’re a VC looking for an intern this summer.
