Going Public vs. Being Acquired

We occasionally hear from readers with interesting questions they would like us to answer and write about. This post and the few that follow address one such topic that has come up several times: taking a company public and better understanding an IPO. In this mini-series, we’ll discuss when and why a company may IPO, the advantages and disadvantages of an IPO versus being acquired, the factors that make a company ready for an IPO, and the IPO process itself.

Moreover, because the goal of Requests for Startups is to support entrepreneurs, we’ll begin to more formally solicit ideas for interesting topics we should write about in the future. Please submit your ideas here, and as always, we would love your feedback on this mini-series, and the newsletter more generally, via email.

Why/when to IPO

Going public via an IPO and being acquired are the two main mechanisms by which startups seek liquidity. The timing of an IPO varies by industry. Typically, a company chooses to go public when:

  1. The company has made significant progress, often in the form of sustainable profitability, solid revenue growth, or other material milestones. Substantial analysis of S-1 filings demonstrates what companies look like at the time of their IPO — here are many excellent SaaS benchmarks.
  2. Existing investors, founders, and employees are seeking liquidity and would prefer additional financings via the public markets, which may allow for higher valuations and thus less dilution for existing shareholders. As expected, public excitement and high demand for the stock can drive up its valuation. Various studies (like this one) seek to explain the valuation premium that IPOs engender over acquisitions — one such study suggests a 22% markup, while others (like this one) indicate there is no such premium in certain scenarios.

In some cases, a company may require significant growth capital, but if it already carries a high valuation, private market investors may not find the opportunity as appealing as earlier stage companies with greater potential upside, making an IPO an effective fundraising strategy. The IPO market, however, is volatile — IPO volume varies from year to year. This year has been slow for IPOs, even as companies like Uber and Airbnb have raised private capital at very high valuations. PitchBook reports that 2016 will see the fewest IPOs since 2009, just after the market crash. Companies exploring both an IPO and a potential acquisition engage in a “dual-track” process, typically led by investment bankers.

A potential acquirer may take interest in a company around the time of its IPO, since once the company is public the acquirer may need to pay a significant markup on its valuation. In this sense, the IPO market and the private M&A market are coupled: when many IPOs are happening, acquirers are more keen to make acquisitions, as they may lose the chance to pay favorable prices once the seller is no longer privately held.


Advantages of going public:

  1. Potential to raise money at a higher valuation than on the private market, as described above.
  2. A public company can effectively use its stock to make acquisitions. A public company’s stock is liquid, whereas a private company’s is not and may therefore be less palatable to a potential seller.
  3. Public visibility can allow for additional, favorable fundraising later on the public market, which is valuable should the company need to raise more growth capital. Being public also gives the company additional credibility with potential customers and employees.
  4. Liquidity for founders, investors, and employees — the ability to sell shares on the public market.

Disadvantages of going public:

  1. Significant legal and disclosure obligations, including the information that must be provided to shareholders. These obligations also extend to the company’s officers and directors, who can be heavily scrutinized.
  2. Required disclosure of specific types of transactions, including stock option practices and executive compensation.
  3. The process is both expensive and time consuming for the management team. We’ll discuss the process in a later post.
  4. Restrictions on stock sales. There is typically a 6-month lock-up after the IPO, during which shareholders can’t sell their stock; this protects the newly public stock against price volatility. There is also considerable regulatory scrutiny around stock sales for fear of insider trading. The SEC’s Rule 144 restricts the amount of stock that major shareholders can sell in any 3-month period (roughly, the greater of 1% of shares outstanding or the average weekly trading volume over the preceding four weeks) to prevent major price fluctuations or impact on liquidity — see the sketch below. Heavy selling by a significant stockholder may also look bad to the public market.
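To make the Rule 144 volume cap concrete, here is a minimal Python sketch (illustrative only, not legal advice; the function and figures are ours, and the rule carries further conditions omitted here):

```python
# Rough sketch of the Rule 144 volume cap for affiliates of an exchange-listed
# company: sales in any 3-month period may not exceed the greater of 1% of
# shares outstanding or the trailing 4-week average weekly trading volume.

def rule_144_volume_limit(shares_outstanding, weekly_volumes):
    """Approximate max shares an affiliate may sell in any 3-month period."""
    one_percent_cap = shares_outstanding // 100
    avg_weekly_volume = sum(weekly_volumes[-4:]) // 4  # trailing 4 calendar weeks
    return max(one_percent_cap, avg_weekly_volume)

# Hypothetical example: 50M shares outstanding, ~400K shares traded weekly.
print(rule_144_volume_limit(50_000_000, [380_000, 420_000, 410_000, 390_000]))
# -> 500000 (the 1% cap dominates here)
```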

Advantages of M&A:

  1. If the deal is all in cash, the company gets immediate liquidity instead of relying on the public markets. In a stock deal, however, SEC Rule 145 — similar to Rule 144 — applies to shareholders of the acquired company who receive stock in the public acquirer. In other words, the acquired company’s shareholders face restrictions on stock sales similar to those they would face had their own company gone public.
  2. Potentially less market risk. In a cash deal, the seller knows exactly what they’re getting. In a stock deal, the acquired company still bears risk via volatility in the public market price of the parent company’s stock, but an established acquirer’s stock is usually less volatile than that of a brand new post-IPO company.
  3. None of the regulatory and administrative burdens of running a public company: Disclosures, forecasts, analyst calls, shareholders, etc.

Disadvantages of M&A:

  1. The company could command a higher valuation in the public markets.
  2. The company’s upside is fixed by the purchase price or, if shareholders receive stock in the parent company, is no longer in their hands. A company that IPOs retains its future upside through continued control and management of the business.
  3. Liquidation preferences could mean less upside for the founders than for early investors. Preferred stockholders will take most of the proceeds if the liquidation preferences exceed the fair market value of the company (see the worked sketch after this list). At an IPO, preferred stock is usually converted to common stock, nullifying the liquidation preferences and transferring upside to the founders.
  4. Less control, affinity, and/or agency. Employees may be required to stick around after the acquisition, but they no longer run their own company. The parent company may choose to shut down or repurpose the acquired company’s product — Microsoft shut down Sunrise, Dropbox shut down Mailbox, and Twitter recently announced it is sunsetting Vine, for example.
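To see how a liquidation preference can swallow founder upside in an acquisition, here is a simplified sketch assuming a single 1x non-participating preference (real cap tables are messier; all numbers are hypothetical):

```python
# Preferred holders take the greater of their preference or their as-converted
# share, capped by the sale price; common holders split what remains.

def acquisition_payouts(sale_price, invested, pref_multiple, preferred_ownership):
    """Split sale proceeds between preferred investors and common holders."""
    preference = invested * pref_multiple
    as_converted = sale_price * preferred_ownership
    preferred = min(sale_price, max(preference, as_converted))
    return preferred, sale_price - preferred  # (investors, founders/employees)

# $30M invested at 1x for 40% of the company:
print(acquisition_payouts(25_000_000, 30_000_000, 1.0, 0.40))   # (25000000, 0): common gets nothing
print(acquisition_payouts(200_000_000, 30_000_000, 1.0, 0.40))  # (80000000.0, 120000000.0)
```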

We welcome any questions or feedback via email. In our next post, we’ll discuss factors that make a company ready for an IPO as well as the IPO process.

Source: The Entrepreneur’s Guide to Business Law, 4th Edition by Constance E. Bagley and Craig E. Dauchy. This book is an excellent resource on this topic.


Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: September, Part 2 and October, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on September 20th. In case you missed it, here are our past updates: September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.


Open Sourcing 223GB of Driving Data
by Oliver Cameron of Udacity. 223GB of image frames and log data from 70 minutes of driving in Mountain View on two separate days. Log data includes latitude, longitude, gear, brake, throttle, steering angles and speed. GitHub repo here.


ImageNet 2016: Large Scale Visual Recognition Challenge
results (ILSVRC 2016). A seminal yearly competition in which teams classify and detect objects and scenes in images. The teams operate at the bleeding edge of image recognition — learn about the teams here.


Generating Faces with Deconvolution Networks
by Michael Flynn. Neural networks that generate and interpolate between faces, based on a large publicly available dataset. Inspired by this paper on image generation. GitHub repo here.


Youtube-8M Dataset
by Google. 8 million video IDs and associated labels from over 4800 visual entities (e.g. vehicle, concert, music video, etc.), making it possible to advance research & applications of video understanding. Blog post here.


Deep3D: Automatic 2D-to-3D Video Conversion with CNNs
by Eric Junyuan Xie. 3D videos are typically produced in one of two ways: shooting with a special 3D camera, or shooting in 2D and manually converting to 3D — both are hard. This project demonstrates automatic 2D-to-3D conversion, so you could potentially take a 3D selfie with an ordinary smartphone.


Open Sourcing a Deep Learning Solution for Detecting NSFW Images
by Yahoo. An open-source classifier for identifying NSFW content, based on a CNN architecture and implemented with Caffe. GitHub repo here.
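For a feel of what using such a classifier looks like, here is a hedged sketch of scoring an image with a Caffe model; the file names and output blob name are placeholders, and the repo’s actual preprocessing steps (mean subtraction, channel order) are omitted:

```python
# Hypothetical scoring loop for a two-class Caffe image classifier; check the
# repo's deploy prototxt for the real blob names and preprocessing.
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))        # HWC -> CHW

img = caffe.io.load_image('photo.jpg')              # hypothetical input image
net.blobs['data'].data[...] = transformer.preprocess('data', img)
prob = net.forward()['prob'][0]                     # e.g. [sfw_score, nsfw_score]
print('NSFW score:', prob[1])
```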


Anticipating Visual Representations from Unlabeled Video
by MIT. Anticipating actions and objects via computer vision is hard (e.g. recognizing that someone gesturing forward is about to shake hands). Humans do this through extensive experiential knowledge and inference — it’s much harder for a machine. This implementation trains deep neural networks to predict the visual representation of images in the future. Forbes article here.


TensorFlow in a Nutshell
by Camron Godbout. A three-part series that explains Google’s deep learning framework TensorFlow. The guides cover the basics, hybrid learning, and an overview of supported models. Part 1, part 2, and now, part 3.


The Neural Network Zoo
by Fjodor Van Veen. A cheat sheet that covers many of the popular neural network architectures — a great way to keep track of the various architectures and their underlying structures and relations. The cheat sheet includes descriptions of each architecture and links to the original academic papers.


Torch Video Tutorials
by Alfredo Canziani. A video collection of intro tutorials on leveraging Torch, providing an overview of Lua, Torch, neural networks, CNNs, and relevant Torch packages. RNNs coming soon.


Show and Tell: image captioning open sourced in TensorFlow
by Google Brain. Chris Shallue and his team make available their image captioning system. It’s faster to train, more detailed, and more accurate than past iterations. GitHub repo here. Original paper here.


The Alexa Prize
by Amazon. A new annual competition for university students to advance the field of conversational AI. Participants develop a bot that converses coherently with humans for 20 minutes. The application process closes October 28, 2016 — apply here.


Bay Area Deep Learning School
held at Stanford in late September and organized by Pieter Abbeel, Samy Bengio, and Andrew Ng. Speakers included Yoshua Bengio, Hugo Larochelle, Russ Salakhutdinov, and many others. All slide decks here and live stream videos from day 1 and day 2 are available.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

The Road Ahead: Autonomous Vehicles Startup Ecosystem

A dynamic view of the current state of self-driving startups. See it here.

The autonomous vehicles startup landscape has been evolving incredibly quickly: this past year has seen a highlight reel of exits, financings, emergent stealth companies, and promising research advancements. The $1B exit of Cruise Automation to GM, the $680M exit of Otto to Uber, Comma.ai giving an Acura AV capability, an AI research team at Drive.ai helping AVs communicate, Udacity launching the Self-Driving Car Nanodegree — the list goes on.

The complexity of a startup ecosystem is always high in its early days. The technology is still being validated and few implementations have made their way to market. This is the case with the autonomous vehicles startup landscape. A number of researchers and analysts have done a lot to distill information on AV startups, but there isn’t currently an updated, comprehensive market map for the space.

This is why we’ve taken additional steps to build a dynamic market map view for AV startups. We delineate the different types of players in the AV ecosystem, provide relevant, publicly available info about them, and make it available in our Autonomous Vehicles Market Map web app that we will keep up to date with the most recent data. The list of startups is searchable, sortable, filterable, etc. for ease of use. The design and implementation of the site itself is also a work in progress, on which we’d appreciate any feedback (or help, if you’d like to contribute to the repo). Of course, this list is not comprehensive, so please do let us know of any additions/changes.

Startup Categories

  • Enabling Technologies: LIDAR, Mapping, Security, Parking
  • Vehicles: Cars, Trucks
  • Ecosystem: Fleet Management, Safety/Insurance

View the Autonomous Vehicles Ecosystem here on Requests for Startups.


By Isaac Madan (email), Wiley Jones (email), and Ascanio Guarini (email).

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Indirect Routes to Market

When startups focused on the enterprise market get started, the sales focus is on selling directly to end customers. This is a valid approach early on, as these companies are still working to improve their products, messaging, value proposition, etc.
Early on, direct sales are all a company needs.

It is important at this stage for the company to have direct contact with customers, but as these elements solidify, scaling through a direct sales model can become challenging. This is when startups start to think about how to scale through indirect models. The expectation is that a company can leverage partners to grow its business faster and expand its reach.

Later, partners allow a company to scale in ways that are likely impossible through direct-sales approaches.

However, in execution, many of these strategies do not live up to expectations. For example, if a software company’s product requires specialized selling and implementation services, and its reselling partners are not geared for these specialized services, the mismatch can be catastrophic. Another example is when a company decides to leverage service-based partners but has not built the sales pipeline to support that route.

In this post, we’ll give a brief overview of indirect routes to market, and a high-level framework for considering which route is best for a given company. In later posts, we’ll go into each route individually and examine some case studies of how these routes play out in practice, both in success and in failure.

When looking at an indirect route to market, we need to first categorize the types of routes that are available, and then decide which routes are the right fit for the company (a.k.a. the vendor) based on the maturity of the product, the state of evolution of the vendor, and the vendor’s expectations for growth.

The right partner strategy depends on many things, but the most important are company stage and product characteristics (mainly technical complexity).

The most important elements are the first two: product and company stage. We begin this topic by breaking out the following indirect routes: Alliance partners, Channel partners, Distribution partners, System Integration partners, Technology partners, OEM partners, and finally, Managed Service Providers.

  • Alliance Partners — These partners tend to be the smallest group but carry the largest influence with Fortune 1000 companies. They are the big consulting houses — Deloitte, Accenture, PricewaterhouseCoopers, and KPMG, to name a few. These partners serve the Fortune 1000 as strategic advisors on key business initiatives and are often in a position to define the framework for key technology decisions, giving them a strong, influential role in the marketplace.
  • Channel Partners — The largest community of partners, encompassing resellers, value-added resellers, and distribution partners. Examples include CDW, Insight, Carahsoft, and Optiv. The main focus of these partners is to resell a company’s products and offer key services on top of the products they resell. They often have technical expertise on staff to help their customers install the products and achieve the benefits the vendor claims.
  • Distribution Partners — These partners provide a powerful scaling effect for vendors: they build relationships with all of the channel partners and can take on the responsibility of enabling the channel partner ecosystem to drive a vendor’s product in the marketplace. Examples include Arrow Electronics, Synnex, and Ingram Micro. Many of these partners also have subsidiaries around the world. They can act as a powerful extension of a vendor’s sales organization, scaling awareness and enablement of a vendor’s products. Distribution partners play a pivotal role in the channel, but it is critical that the products and the company are set up to support their ability to scale.
  • System Integrators — Partners focused purely on services, with less interest in reselling products. Examples include CGI Group, Booz Allen, and Redapt. They often position themselves as “trusted” advisors to their clients and do not want to be seen as receiving any benefit from the companies they advocate for. However, they do have a strong affinity for products that require services to implement and operate.
  • Technology Partners — These are partners that have an “integration” relationship between a product they sell in the marketplace and the vendor’s product. These companies are interested in partnerships that enable them to fill a gap in their product that is not a core competency for them.
  • OEM Partners — These partners embed a vendor’s technology into their own. These partnerships tend to be highly custom and can be very lucrative if the relationship has been crafted to enable growth for the OEM partner.
  • Managed Service Providers — A new and growing group of partners that buy a vendor’s product, deploy it in their own data centers, and offer services to end customers. This indirect model is becoming more prevalent with the growth of cloud-based services.

Now that we have defined a few routes, we should think about what makes the ideal route for a vendor. There is no simple formulaic method for making the decision. However, as the framework above implies, the internal process should start with a thorough evaluation of the maturity of the product and company, not to mention the market landscape. In the next set of posts we’ll dig into the impact of a product’s complexity and a company’s stage in choosing the right indirect route to market.


By Jaimin Patel and Ascanio Guarini of Requests for Startups. Curated by Isaac Madan of Venrock (email).

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: September Update, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 31st. In case you missed it, here are our past updates: August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

One Hundred Year Study on Artificial Intelligence (AI100) by the AI100 Standing Committee (full PDF here). The One Hundred Year Study on Artificial Intelligence, or AI100, is a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play. Researchers explain opportunities to apply artificial intelligence across many domains, such as transportation and healthcare, and the ways in which it has and will affect our lives. They also share a framework around AI policy. The group plans to assess the state of AI every five years.

Infrastructure for Deep Learning by the OpenAI team. In this post, we’ll share how deep learning research usually proceeds, describe the infrastructure choices we’ve made to support it, and open-source kubernetes-ec2-autoscaler, a batch-optimized scaling manager for Kubernetes.

TF-Slim: A high level library to define complex models in TensorFlow by Nathan Silberman and Sergio Guadarrama of Google Research. TF-Slim is a lightweight package for defining, training and evaluating models in TensorFlow. The Google Research team announces the latest release of TF-Slim, which includes many types of layers, loss functions, evaluation metrics, and handy routines for training and evaluating models.
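As a flavor of the library, here is a minimal sketch of the kind of model definition TF-Slim enables (TF 1.x-era API; the layer sizes are arbitrary and ours, not from the announcement):

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

def simple_cnn(images, num_classes=10):
    # arg_scope sets shared defaults (e.g. the activation) for the listed ops,
    # which is much of what makes Slim model definitions so compact.
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu):
        net = slim.conv2d(images, 32, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.flatten(net)
        net = slim.fully_connected(net, 128, scope='fc1')
        return slim.fully_connected(net, num_classes,
                                    activation_fn=None, scope='logits')
```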

PaddlePaddle by Baidu (GitHub repo here). An open and easy-to-use deep learning platform for enterprise and research. PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.

Deep Learning in a Nutshell: Reinforcement Learning by Tim Dettmers of NVIDIA. A broad overview of reinforcement learning, including the major concepts and the areas in which the technique shines. The fourth installment in Tim’s excellent series of articles explaining core deep learning concepts.

WaveNet: A Generative Model for Raw Audio by DeepMind (original paper here). The team presents a model called WaveNet that generates speech mimicking any human voice, sounding more natural than the best existing text-to-speech systems and reducing the gap with human performance by over 50%. The same network can also synthesize other audio signals, such as music — the post presents some striking samples of automatically generated piano pieces.

Attention and Augmented Recurrent Neural Networks by Chris Olah and Shan Carter. Researchers at Google Brain explain four ways to augment Recurrent Neural Networks with new and more sophisticated properties, as well as the concept of attention that is an underlying principle of each.
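The core attention mechanism underlying these augmentations can be sketched in a few lines: the network scores every memory position against a query and takes a softmax-weighted average rather than making a hard selection. A minimal NumPy illustration (ours, not the authors’ code):

```python
import numpy as np

def soft_attention(query, memory):
    """query: (d,); memory: (n, d). Returns a weighted read over memory."""
    scores = memory @ query                      # dot-product relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax -> attention distribution
    return weights @ memory                      # (d,) blended read vector
```

Because the weighted average is differentiable, the network can learn where to attend by ordinary backpropagation — the key property each of the four augmentations relies on.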

Why does deep and cheap learning work so well? by Henry Lin and Max Tegmark. Researchers from Harvard and MIT argue that the efficacy of neural networks can be explained by the laws of physics, not mathematics alone: they show how the success of deep learning depends not only on mathematics but also on physics (summary article here).

How a Japanese cucumber farmer is using deep learning and TensorFlow by Kaz Sato of Google Cloud Platform. The article explains how Makoto Koike, a Japanese embedded systems designer, built an end-to-end cucumber sorter for his parents’ cucumber farm in Japan, leveraging TensorFlow to implement a deep learning based classifier.

Playing for Data: Ground Truth from Computer Games by Richter et al. Researchers at TU Darmstadt and Intel Labs present an approach to creating detailed label maps from the Grand Theft Auto video game, exploiting the game’s high degree of realism. Such synthetic data can increase the accuracy of self-driving algorithms traditionally trained only on real-world datasets.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: August Update, Part 2

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 16th. In case you missed it, here’s the August update (part 1), here’s the July update (part 2), here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Learning to Segment by Piotr Dollar of Facebook. Piotr explains Facebook’s efforts and progress in image segmentation, as well as highlighting use cases and explaining the importance of such advancements. When humans look at an image, they can identify objects down to the last pixel. At Facebook AI Research (FAIR) we’re pushing machine vision to the next stage — our goal is to similarly understand images and objects at the pixel level.

Google Brain begins accepting applications to its Residency program on September 1st. Google’s Jeff Dean will deliver a YouTube live stream describing the Google Brain team and the Residency program. The Google Brain Residency Program is a one-year intensive residency program focused on deep learning. Residents will have the opportunity to conduct cutting-edge research and work alongside some of the most distinguished deep learning scientists within the Google Brain team. To learn more about the team, visit g.co/brain. Consider applying here when applications open.

Text summarization with TensorFlow by Peter Liu of Google Brain. The Brain team open sources their TensorFlow model code for generating news headlines on a large dataset frequently used for summarization tasks. Peter explains two approaches — extractive and abstractive summarization, describes the model, and highlights areas of future interest.

Google Brain robot datasets by Sergey Levine, Chelsea Finn, and Laura Dows. The Google Brain team releases massive robotics datasets from two of their recent papers to further drive the field forward. Their grasping dataset contains roughly 650,000 examples of robot grasping attempts (original paper here). Their push dataset contains roughly 59,000 examples of robot pushing motions, including one training set (train) and two test sets of previously seen (testseen) and unseen (testnovel) objects (original paper here).

End-to-End Deep Learning for Self-Driving Cars by NVIDIA. The autonomous car team at NVIDIA describes their end-to-end approach to self-driving vehicles, using convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera to the steering commands for a self-driving car. Original paper here.
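As a rough illustration of the architecture described in the paper (often called PilotNet), here is a Keras sketch of a pixels-to-steering regression network; layer sizes follow the paper, but the training pipeline and data augmentation are omitted:

```python
from keras.models import Sequential
from keras.layers import Conv2D, Dense, Flatten, Lambda

model = Sequential([
    Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),  # normalize pixels
    Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
    Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
    Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    Flatten(),
    Dense(100, activation='relu'),
    Dense(50, activation='relu'),
    Dense(10, activation='relu'),
    Dense(1),  # single steering command regressed from raw pixels
])
model.compile(optimizer='adam', loss='mse')  # train against recorded human steering
```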

NIPS list of accepted papers. The 2016 Conference on Neural Information Processing Systems, cited as the top machine learning conference, takes place from December 5th through 10th in Barcelona, Spain. The list of accepted papers highlights some of the bleeding-edge machine learning & AI research that will be presented, as well as the researchers & practitioners driving the field forward who may be present. Consider attending this year — details here.

Combining satellite imagery and machine learning to predict poverty by Neal Jean et al of Stanford. Nighttime lighting is a rough proxy for economic wealth, and nighttime maps of the world show that many developing countries are sparsely illuminated. Jean et al. combined nighttime maps with high-resolution daytime satellite images. With a bit of machine-learning wizardry, the combined images can be converted into accurate estimates of household consumption and assets, both of which are hard to measure in poorer countries. Furthermore, the night- and day-time data are publicly available and nonproprietary.

Speech Is 3x Faster than Typing for English and Mandarin Text Entry on Mobile Devices by Sherry Ruan et al of Stanford & Baidu. Researchers evaluate Deep Speech 2, a deep learning based speech recognition system, finding that it makes English text input 3.0x faster and Mandarin Chinese input 2.8x faster than standard keyboard typing. Error rates were also dramatically reduced, and the results further highlight the potential and strength of speech interfaces.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Analyzing The Podcast Renaissance

By Niraj Pant

Before starting to analyze podcasting in its current form, it may be best to go back.

With the birth of podcasts in 2005, the popularity of the search term increased exponentially. Many things were happening at this time — Apple added support for podcasts on the iPod/iTunes, and many early podcasts were coming out with their first episodes. By 2007, there had been over 24 attempts to trademark the term “podcast”.¹

Why the significant drop in interest afterwards? In my eyes, podcasts went through a “false start” during this time period. Mobile wasn’t in full effect until a few years later, a low number of shows were available, and radio was still “good enough” for most consumers. However, since early 2015, the popularity of the search term has started clawing back. As all the right pieces are being put in place, podcasting is finally starting to go through its renaissance.

It’s no coincidence that the number of Americans who have listened to a podcast has doubled since 2008 (the graph’s scale doesn’t do it justice). We’re witnessing a few problems from 2005 being solved today:

  1. Content! A big reason listenership was so low early on was the lack of content. Not only was the number of podcasts low, but variety and selection were also very limited. Now that the number of shows being added is growing exponentially, demand for podcasts has only increased. This is similar to Netflix’s growth: as more content is added to the platform, demand increases in response. iTunes has over 325,000 podcasts and expects users to listen to 10 billion episodes in its apps by the end of 2016.² There are never enough podcasts.
  2. In-car audio. Traditionally, people have listened to AM/FM radio for dynamic content on the go. With better sound quality, less obtrusive ads, and a greater range of content, podcasts are becoming the go-to source for in-car listening. As cars become increasingly internet-connected, there’s an opportunity for many new people to become regular podcast listeners.
  3. Growth of mobile. Since podcasts first came out, data speeds have increased 100x. Mobile has become much more widespread, and in turn, there are many more ways to consume podcasts on the go (but there could be better apps to facilitate this).
  4. Apple. They’ve spearheaded the efforts in creating a lot of early listeners. By providing a clean, usable solution to download and listen to podcasts, Apple reduced many points of friction for listeners, driving higher engagement. Additionally, iTunes ships on every new Apple consumer device (iPhone, iPad, Mac, etc.) — some of the most popular products of all time.
  5. Monetization is becoming increasingly viable. Podcast is a lousy word, since many people equate it with the idea of being free. However, many new podcasts are actually profitable. New models for publishing and monetizing are starting to come out as the space matures (some good ideas here), making running a podcast more lucrative than ever before.

With all this growth, the tools used to publish and consume podcasts are still lacking. More tools need to be put in place to give consumers the A+ experience they deserve, starting with a better podcast player. The podcast industry is ripe for some new companies — the tools people use to share, listen to, and discover new podcasts just aren’t there yet.

Previous Attempts

The perfect podcast platform has been attempted before. In fact, Ev Williams, before going on to found Twitter and Medium, worked on Odeo, a podcast creation and consumption platform. Ev has helped define both blogging (Blogger, Medium) and social networking (Twitter), and he knew this space had big potential. Unfortunately, the team behind Odeo later decided to sell the company, as they didn’t understand the space as well as they had hoped. Ultimately, Twitter came out of this, so we can’t be too mad 😉

More recently, Google added a podcast feature to Play Music, which is now standard on every Android device (similar to iTunes being packaged with every iOS device). Spotify recently added support for podcasts and is looking into adding hosting and streaming services. Audible, known for being one of the largest audiobook providers, also recently started bringing new, original podcast content to its platform.

The 2 Sides to Podcasting — Consumption and Publication

There are opportunities on both sides of the podcast experience, publishing and listening, as they are both still stuck in the ’80s. These days, to get a podcast published, you need to set up microphone software, audio editing tools, an RSS feed, and strategies for publishing content to different podcast networks (a sketch of that feed follows below).
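For context on that friction: a podcast feed is just an RSS XML document with an audio enclosure per episode. Here is a minimal Python sketch using the standard library (hypothetical URLs; real feeds also need iTunes-specific tags, artwork, and more):

```python
import xml.etree.ElementTree as ET

rss = ET.Element('rss', version='2.0')
channel = ET.SubElement(rss, 'channel')
ET.SubElement(channel, 'title').text = 'My Example Podcast'   # hypothetical show
ET.SubElement(channel, 'link').text = 'https://example.com'

item = ET.SubElement(channel, 'item')                          # one episode
ET.SubElement(item, 'title').text = 'Episode 1'
ET.SubElement(item, 'enclosure',
              url='https://example.com/ep1.mp3',               # hosted audio file
              length='12345678', type='audio/mpeg')

print(ET.tostring(rss, encoding='unicode'))                    # the feed you host
```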

Getting a show on traditional radio networks is hard — creating a podcast, however, shouldn’t be. Podcasting brings a lower barrier to entry for getting your thoughts out there. I won’t go into the publishing side as much, since Ian Miles Ownbey wrote a pretty good post on it here.

Where’s the YouTube/GoodReads/Pocket/x for Podcasts?

Seriously, why hasn’t this been built yet??? The number of requests for podcast startups I’ve seen is at an all-time high:

Literally applying any X for Y model would be better than what we’re currently working with.

There’s a lot missing from current podcast players. Aside from the obvious cross-platform sharing issue, here are a few features I would LOVE to see:

  • Ability to share soundbites from podcasts along with transcription. Similar to what Re/code has done here:

The idea of sharing individual soundbites from podcasts is great for leveraging their content. It’s almost like sharing a Vine (small bits of audio), and it feeds back into the curation bit. There are so many quotes I’d love to share from a show on Twitter, but I’m limited to sharing an episode and the time at which the quote occurs. If only I could just share soundbites with transcriptions on Twitter! (similar to highlighting on Medium):

Pair this with a Twitter app modal showing native content, like SoundCloud’s modal (Twitter integration is crucial — it really helped accelerate Medium’s growth at the beginning):

Ooooh, look how beautiful this looks
  • Uniform sharing. I’ve seen people share links to podcasts hosted on SoundCloud, Overcast, and iTunes, but there needs to be one uniform platform. How do you share a link to a video? YouTube. A game stream? Twitch. What about a podcast? iTunes? Not so fast. Sharing the link to a podcast on iTunes doesn’t cut it because of (obvious) cross-platform issues. The cross-platform solutions that do exist don’t have web players, social integration, or good recommendation systems.
  • Better discovery. Curation is a tough problem to crack. Most recommendations are done on a per-show basis, but they really should be on a per-episode basis. For example, I listen to Jalen and Jacoby and want to hear more sports episodes just like it (preferably from other shows). However, I’m still getting recommended entirely new shows, not specific episodes within a show. I shouldn’t have to resort to searches like this either. In iTunes’ case specifically, recommendations are currently hand-curated, and it’s often a mystery how shows are featured:

Additionally, the “Top Podcasts” list within iTunes is also the subject of constant speculation among podcasters. An apparent change in the system this year resulted in small podcasts from Disney fans — and a show that hadn’t yet published its first full episode — being ranked alongside shows with hundreds of thousands of listeners (Podcasts Surge, but Producers Fear Apple Isn’t Listening).

The process could be improved greatly — Spotify’s Discover Weekly is a great model to follow, and is the best content recommendation system I’ve seen yet. To put it frankly:

  • A newsfeed. Twitter is my central place for getting news and updates in areas I’m interested in. The ability to see what friends + influencers are listening to/commenting on would have a great impact on the way I consume podcasts.
  • A better-looking player. There have been many players over the years, but I’d like to see one that’s really well designed. It needs to be simple, beautiful, and something I keep coming back to. iTunes is a mess, but it’s the best we have. The most promising alternative I’ve seen so far is Overcast, but I’d love to see more well-designed apps overall.
  • Decreased friction. Listening in current apps takes a lot of time and effort — finding a new episode and starting to listen is laborious. A similar conundrum occurred when video streaming services first started popping up. “Netflix was really the first streaming service for movies/TV that made people want to stream in lieu of DVDs. The podcast corollary is that, I think, a lot of potential listeners don’t want to have to ‘figure it out’ (same as a lot of potential streaming customers didn’t want to buy something on iTunes and then have to figure out how to get it on their tv)” (via /r/podcasts).³

Podcasts are not traditional audio, and they shouldn’t be treated as such. In fact, podcasts have more in common with written material than with music. They require much more attentiveness and, being longer-form in nature, need different ways to be consumed and shared. This requires a shift in vision from podcast companies, one already clearly defined by leaders in other media spaces. For example, YouTube was started on the premise that all videos on the web should live in one central portal. Or better put:

YouTube owns video consumption on the web, and is slowly starting to own creation as well. I think this accurately represents the mission podcast startups should look towards.

I’m stoked for the future of podcasting.


¹ List of US Podcast Trademarks

² Podcasts Surge, but Producers Fear Apple Isn’t Listening

³ The Netflix of podcasting?


Written by Niraj Pant (email). If you’re interested in chatting more about podcasts, I’m @niraj on twitter!

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. Email Isaac if you’d like to contribute here.

Student Venture Capital Programs

By Niraj Pant

An increasing number of venture capital firms are going past their traditional avenues for finding companies and looking at a new source: students. Specifically, they’re hiring students to work part-time on sourcing companies that they find interesting.

Many tech investors are struck by FOMO and are becoming more aware that young adults are among the first to adopt popular new apps and to create or identify new social media trends. No one wants to miss out on the next Facebook or Snapchat (both of which were started on college campuses!). Therefore, investors are putting more confidence in students as scouts for their VC programs. While venture capital scouts aren’t a new concept (Sequoia Capital is famous for hiring scouts to invest in companies on its behalf), many new opportunities are opening up for college students to get involved in venture capital.

Participants in these programs don’t usually get carry, but are often incentivized by compensation and/or networking opportunities with some of the smartest students in tech, as well as partners at their respective firms.

Rough Draft Ventures (RDV) and Dorm Room Fund (DRF) are two of the earliest venture capital firms focused on students investing in companies. Student partners are organized in chapters in Boston (RDV & DRF), Philadelphia, San Francisco, and New York (DRF), and have real input in funding decisions. These funds are usually backed by larger funds (First Round Capital for DRF and General Catalyst for RDV) and receive mentorship, resources, and interesting deal flow from their parent VC firm. They often invest in companies that are founded on campus or have student or recent-graduate founders.

The results from past investments have looked extremely strong, considering the limited time these firms have been around. Students seem to be picking the right companies to invest in, as shown by the $100M+ in follow-on capital raised by portfolio companies from large venture firms such as Google Ventures, Union Square Ventures, and Spark Capital.

Smaller firms, such as Alsop-Louie Partners, have also been hiring student partners for the past few years. However, instead of having a separate student-focused fund, they hire campus associates to help them stay connected to new ideas on campus and bring in interesting companies. StrictlyVC had a great interview with Stewart Alsop about the program.

Larger VC firms, such as Sequoia and KPCB, have seen the value in bringing on students and recently created student ambassador programs. These programs run year to year: when a campus ambassador graduates, a new ambassador replaces them at that school.

Here’s a list of venture capital firms with student programs I’ve come across (shoot me a message on twitter if there are any missing!):

  • Sequoia: Campus Ambassadors Program
  • KPCB: Campus Catalysts
  • Greylock X
  • Binary Capital: External Council*
  • The Mochary Group
  • Pejman Mar Ventures
  • Chicago Ventures
  • Lightbank
  • Alsop-Louie Partners
  • Accel Partners
  • Lowercase Capital

Student-focused funds

The following funds are primarily student-focused, rather than a program at a larger firm.

  • Free Ventures: Although not a traditional VC firm, Free Ventures is an accelerator for student startups, and very similar to an early Y Combinator.
  • The House Fund: Run by a UC Berkeley alum, The House Fund is a $6m seed-stage fund focused on investing into companies with student founders. They will invest angel-sized checks, and syndicate the rest of the deal.
  • Contrary Capital: A university-focused fund, with student partners at every major school.
  • A-Level Capital: Student-run fund at Johns Hopkins University. Students work in a variety of roles, from associates all the way up to partners, and usually invest in companies created on campus.
  • Dorm Room Fund: Described above
  • Rough Draft Ventures: Described above

Venture capital role placement

These types of programs not only help with learning more about venture capital, but often help place students into exciting startups, large tech companies, and venture roles.

Dorm Room Fund recently published a list of their student partners from the last graduating class and where they were headed full-time. While most went to work at startups or large companies in a variety of roles, about 20% of partners went into venture roles — a relatively high number considering most analyst/associate roles at VC firms are unadvertised.


Written by Niraj Pant (email). Thanks to Jay Bensal, Ali Afridi, and Wiley Jones for assisting in developing early iterations of this piece. Ali Afridi wrote a complementary post on how to get involved with VC while still in college. *Disclosure: I work on Binary Capital’s External Council.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers. Email Isaac if you’d like to contribute here.

Up to Speed on Deep Learning: August Update, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 2nd. In case you missed it, here’s the July update (part 2), here’s the July update (part 1), here’s the June update, and here’s the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

An Intuitive Explanation of Convolutional Neural Networks by Ujjwal Karn. A thorough overview of CNNs: what they do, why they’re important, how they work, some history, and their underlying concepts. Inspired by Denny Britz’s Understanding Convolutional Neural Networks for NLP — Denny’s blog, WildML, is also an excellent resource with many deep learning explanations and tutorials.

Image Completion with Deep Learning in TensorFlow by Brandon Amos. A deep learning tutorial that explains how to do image completion and inpainting via deep learning. Image completion and inpainting are closely related technologies used to fill in missing or corrupted parts of images — this is important to designers and photographers, who often need to fill unwanted or missing parts of images. The code is also available on GitHub. Based on Raymond Yeh and Chen Chen et al.’s paper Semantic Image Inpainting with Perceptual and Contextual Losses.

Deep Learning Summer School lecture notes. Held the first week of August in Montreal and organized by Aaron Courville and Yoshua Bengio, professors at the University of Montreal, this conference provides a broad overview of current research in deep neural networks. Speakers include preeminent deep learning researchers from Google, Facebook, Twitter, NVIDIA, and many others. All the lecture slides are available for review.

Vincent AI Artist (GitHub repository) by Saikat Basak. Vincent is an attempt to implement “a neural algorithm of artistic style”. A convolutional neural network (CNN) separates ‘style’ and ‘content’ from artistic images, and combines that artistic style with another image to create a unique expression. Leverage this repo to build your own version of Prisma.
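The ‘style’ separation in this family of algorithms rests on Gram matrices: correlations between a CNN layer’s feature maps capture texture and color statistics independent of spatial layout. A minimal NumPy sketch of the idea (shapes illustrative; this is our simplification, not the repo’s code):

```python
import numpy as np

def gram_matrix(features):
    """features: (H, W, C) activations from one CNN layer."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)   # (C, C) channel-correlation matrix

def style_loss(gen_feats, art_feats):
    """Squared difference of Gram matrices at one layer, summed."""
    return np.sum((gram_matrix(gen_feats) - gram_matrix(art_feats)) ** 2)
```

Minimizing this loss over the generated image (alongside a content loss on raw activations) is what blends one image’s layout with another’s style.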

Robotics Science and Systems (RSS 2016) Workshop notes and videos. This workshop, held in Ann Arbor, MI on June 18, 2016, convened a broad set of experts to discuss deep learning in robotics, particularly around computer vision. Speakers such as Pieter Abbeel of UC Berkeley and Ashutosh Saxena of Brain of Things spoke about their research in the field. Their recorded talks and slides are available for review.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.