Effective Recruiting Metrics for Fast-Growing Startups


We spent the better part of 2015 working with fast-growing tech companies on their internal HR & recruiting analytics. Effective recruiting can be incredibly challenging: First Round Capital’s State of Startups 2016 Report found that talent acquisition is founders’ biggest concern for the second year in a row. Recruiting metrics help you identify problems and optimize the talent acquisition process: if you can’t measure it, you can’t improve it. Below we share some of the most common metrics we observed across startups; they serve as a basis for understanding and improving your recruiting process. The list isn’t comprehensive, nor will every metric apply to every company, but it’s a good place to start.
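As a quick, hypothetical illustration of what measuring this looks like in practice, here is a minimal Python sketch of two commonly used funnel metrics, time-to-hire and offer acceptance rate. The field names and sample data are made up for illustration, not drawn from any particular company.

```python
from datetime import date

# Hypothetical candidate records; field names are illustrative only.
candidates = [
    {"opened": date(2016, 9, 1), "offer_extended": True, "offer_accepted": date(2016, 10, 3)},
    {"opened": date(2016, 9, 10), "offer_extended": True, "offer_accepted": None},
    {"opened": date(2016, 9, 15), "offer_extended": False, "offer_accepted": None},
]

hires = [c for c in candidates if c["offer_accepted"]]
offers = [c for c in candidates if c["offer_extended"]]

# Time-to-hire: days from opening the req to an accepted offer, averaged over hires.
time_to_hire = sum((c["offer_accepted"] - c["opened"]).days for c in hires) / len(hires)

# Offer acceptance rate: accepted offers divided by offers extended.
offer_acceptance_rate = len(hires) / len(offers)

print(time_to_hire, offer_acceptance_rate)
```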

Continue reading “Effective Recruiting Metrics for Fast-Growing Startups”

Startup DB: Search for relevant, curated posts by entrepreneurs & investors

The Mattermark Daily is an excellent daily newsletter that curates first-hand perspectives on entrepreneurship, investing, sales, hiring, and more as they emerge every day. As a weekend hack, we deep-indexed every article featured in the Daily over the past few years and built StartupDB: a simple search interface so you can find the best, most relevant startup content from the Daily when you need it. We’ve been using it internally to field questions from readers and pull together some of the best startup resources — for example, this post on 10 excellent resources for enterprise sales.
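The post doesn’t go into how StartupDB works under the hood, but the core “index, then search” idea can be sketched in a few lines of Python. The article titles and the simple AND-matching below are placeholders for illustration, not the actual implementation.

```python
from collections import defaultdict

# Hypothetical corpus of Mattermark Daily articles (titles only, for brevity).
articles = {
    1: "10 excellent resources for enterprise sales",
    2: "How to hire your first sales rep",
    3: "Fundraising lessons from seed to Series A",
}

# Build a simple inverted index: token -> set of article ids containing it.
index = defaultdict(set)
for article_id, title in articles.items():
    for token in title.lower().split():
        index[token].add(article_id)

def search(query):
    """Return ids of articles containing every query token (simple AND search)."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results

print([articles[i] for i in search("enterprise sales")])
```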

Give StartupDB a try here.


We’d love to hear your feedback & thoughts via email. Special thanks to Nick Frost, Editor of the Mattermark Daily, for his support.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: November Update

 Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on October 17th. In case you missed it, here are our past updates: September part 2 & October part 1, September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

Going Public vs. Being Acquired

We occasionally hear from readers with interesting questions they’d like us to answer and write about. This post and the few that follow cover one such topic that has come up several times: taking a company public and better understanding an IPO. In this mini-series, we’ll discuss when and why a company may IPO, the advantages and disadvantages of an IPO versus being acquired, the factors that make a company ready for an IPO, and the IPO process itself.

Moreover, because the goal of Requests for Startups is to support entrepreneurs, we’ll begin to more formally solicit ideas for interesting topics we should write about in the future. Please submit your ideas here, and as always, we would love your feedback on this mini-series, and the newsletter more generally, via email.

Why/when to IPO

Going public via an IPO and being acquired are the two primary mechanisms by which startups seek liquidity. The timing of an IPO also depends on the industry. Typically, a company chooses to go public when:

  1. The company has made significant progress, often in the form of sustainable profitability, solid revenue growth, or other material milestones. Substantial analysis of S-1 filings shows what companies look like at the time of their IPO — here are many excellent SaaS benchmarks.
  2. Existing investors, founders, and employees are seeking liquidity and would prefer additional financings via the public markets, which may allow for higher valuations and thus less dilution for existing shareholders. As expected, public excitement and high demand for the stock can drive up its valuation. Various studies (like this one) seek to explain the valuation premium that IPOs command over acquisitions — one such study suggests a 22% markup, while others (like this one) indicate there is no such premium in certain scenarios.

In some cases, a company may require significant growth capital, but if it already carries a high valuation, private market investors may find the opportunity less appealing than earlier stage companies with greater potential upside, making an IPO an effective fundraising strategy. The IPO market, however, is volatile — IPO volume varies from year to year. 2016 has been a slow year for IPOs, while companies like Uber and Airbnb have raised private capital at very high valuations. PitchBook reports that 2016 will see the fewest IPOs since 2009, the year after the market crash. Companies exploring both an IPO and a potential acquisition engage in a “dual-track” process, typically led by investment bankers.

A potential acquirer may take interest in a company around the time of its IPO, since once the company is public the acquirer will likely have to pay a significant markup over the pre-IPO valuation. In this sense, the IPO market and the private M&A market are coupled: when many IPOs are happening, acquirers are more keen to make acquisitions, as they may lose the chance to pay favorable prices once the seller is no longer privately held.


Advantages of going public:

  1. Potential to raise money at a higher valuation than on the private market, as described above.
  2. A public company can effectively use its stock to make acquisitions. Public stock is liquid, whereas a private company’s stock is not and may therefore be less palatable to a potential seller.
  3. Public visibility allows for additional, favorable fundraising on the public markets later, should the company need to raise more growth capital. It also gives the company credibility with potential customers and employees.
  4. Liquidity for founders, investors, and employees — the ability to sell shares on the public market.

Disadvantages of going public:

  1. Significant legal and disclosure obligations, including information that must be provided to shareholders. These obligations extend to the company’s officers and directors, who can be heavily scrutinized.
  2. Required disclosure of specific types of transactions, including stock option practices and executive compensation.
  3. The process is both expensive and time consuming for the management team. We’ll discuss the process in a later post.
  4. Restrictions on stock sales. There is typically a 6-month lock-up after the IPO to protect the stock against price volatility while it’s new to the public market, meaning shareholders can’t sell their stock in this window. There is also much regulatory scrutiny around stock sales for fear of insider trading: the SEC’s Rule 144 restricts the amount of stock major shareholders can sell in any 3-month period to prevent major price fluctuations or impacts on liquidity. It also may look bad to the public market if a significant stockholder is dumping a lot of stock.

Advantages of M&A:

  1. If the deal is all cash, the company gets immediate liquidity instead of relying on the public markets. In a stock deal, however, SEC Rule 145 (similar to Rule 144) applies to the acquired company’s shareholders if they receive stock in the acquiring public company. In other words, they face restrictions on stock sales similar to those they would have faced had their company gone public.
  2. Potentially less market risk. In a cash deal, the seller knows what they’re getting. In a stock deal, the acquired company still bears risk via volatility in the public market price of its parent company’s stock, but this is usually less volatile than the stock of a brand-new post-IPO company.
  3. None of the regulatory and administrative burdens of running a public company: disclosures, forecasts, analyst calls, shareholders, etc.

Disadvantages of M&A:

  1. The company could command a higher valuation in the public markets.
  2. The upside is fixed by the purchase price, or, if shareholders receive stock in the parent company, it is no longer in their hands. A company that IPOs retains its future upside through continued control and management of the business.
  3. Liquidation preferences could mean less upside for the founders than for early investors: preferred stockholders will take most of the proceeds if the liquidation preferences exceed the sale price of the company (see the sketch after this list). At an IPO, preferred stock is usually converted to common stock, so the liquidation preferences are nullified, which transfers upside to the founders.
  4. Less control, affinity, and/or agency. Employees may be required to stick around after the acquisition, but they no longer run their own company. The parent company may choose to shut down or repurpose the acquired company’s product — Microsoft shut down Sunrise, Dropbox shut down Mailbox, and Twitter recently announced it is sunsetting Vine, for example.
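To make the liquidation preference point above concrete, here is a small Python sketch of a hypothetical 1x non-participating preference. The cap table numbers are purely illustrative.

```python
def preferred_payout(sale_price, preference, ownership_as_converted):
    """Payout to a holder of 1x non-participating preferred stock.

    The holder takes the greater of (a) its liquidation preference or
    (b) its as-converted share of the sale price.
    """
    return max(preference, ownership_as_converted * sale_price)

# Hypothetical cap table: investors put in $50M for 40% ownership (as-converted).
preference, ownership = 50e6, 0.40

# In a $60M acquisition, the $50M preference exceeds 40% x $60M = $24M,
# so preferred takes $50M and common (founders, employees) splits only $10M.
print(preferred_payout(60e6, preference, ownership))   # 50,000,000.0

# In a $200M acquisition, converting is better: preferred takes $80M,
# and common keeps the remaining $120M.
print(preferred_payout(200e6, preference, ownership))  # 80,000,000.0

# At an IPO, preferred typically converts to common, so the preference goes away
# and founders keep their full as-converted share of the upside.
```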

We welcome any questions or feedback via email. In our next post, we’ll discuss factors that make a company ready for an IPO as well as the IPO process.

Source: The Entrepreneur’s Guide to Business Law, 4th Edition by Constance E. Bagley and Craig E. Dauchy. This book is an excellent resource on this topic.


Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: September, Part 2 and October, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on September 20th. In case you missed it, here are our past updates: September part 1, August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.


Open Sourcing 223GB of Driving Data
by Oliver Cameron of Udacity. 223GB of image frames and log data from 70 minutes of driving in Mountain View on two separate days. Log data includes latitude, longitude, gear, brake, throttle, steering angles and speed. GitHub repo here.
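As a rough sketch of how one might start exploring this kind of log data, here is a minimal pandas snippet; the file name and column names below are assumptions for illustration, and the actual schema is documented in the GitHub repo.

```python
import pandas as pd

# Hypothetical file and column names; check the repo's README for the real schema.
log = pd.read_csv("driving_log.csv")

# Sanity-check the steering angle and speed distributions.
print(log[["steering_angle", "speed"]].describe())

# Pair each camera frame with its steering angle for a supervised learning setup.
pairs = list(zip(log["frame_path"], log["steering_angle"]))
print(pairs[:3])
```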


ImageNet 2016: Large Scale Visual Recognition Challenge
results (ILSVRC 2016). A seminal yearly competition in which teams compete to correctly classify and detect objects and scenes in images. The teams operate at the bleeding edge of image recognition — learn about the teams here.


Generating Faces with Deconvolution Networks
by Michael Flynn. Neural networks that generate and interpolate between faces, based on a large publicly available dataset. Inspired by this paper on image generation. GitHub repo here.
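The “deconvolution” layers behind this kind of face generation are transposed convolutions that upsample a small feature map into a larger image. Below is a minimal TensorFlow sketch of one such layer; this is our own illustration with arbitrary shapes, not Flynn’s code.

```python
import tensorflow as tf

batch_size = 16

# A batch of 8x8 feature maps with 64 channels, to be upsampled to 16x16 with 32 channels.
x = tf.placeholder(tf.float32, [batch_size, 8, 8, 64])

# For conv2d_transpose, the filter shape is [height, width, out_channels, in_channels].
filters = tf.Variable(tf.random_normal([5, 5, 32, 64], stddev=0.02))

# Stride 2 doubles spatial resolution; stacking such layers grows a small code into a full image.
upsampled = tf.nn.conv2d_transpose(
    x, filters,
    output_shape=[batch_size, 16, 16, 32],
    strides=[1, 2, 2, 1],
    padding="SAME")
```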


Youtube-8M Dataset
by Google. 8 million video IDs and associated labels from over 4800 visual entities (e.g. vehicle, concert, music video, etc.), making it possible to advance research & applications of video understanding. Blog post here.


Deep3D: Automatic 2D-to-3D Video Conversion with CNNs
by Eric Junyuan Xie. 3D videos are typically produced in one of two ways: shooting with a special 3D camera, or shooting in 2D and manually converting to 3D — both are hard. This project demonstrates automatic 2D-to-3D conversion, so you could potentially take a 3D selfie with an ordinary smartphone.


Open Sourcing a Deep Learning Solution for Detecting NSFW Images
by Yahoo. An open-source classifier for identifying NSFW content, based on a CNN architecture and implemented with Caffe. GitHub repo here.
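Running inference with an open-source Caffe classifier like this one generally follows the standard Caffe Python pattern sketched below. The file names here are hypothetical, and the exact preprocessing and output blob for Yahoo’s model are described in their repo.

```python
import numpy as np
import caffe

# Hypothetical file names; see the GitHub repo for the actual prototxt and caffemodel.
net = caffe.Net("deploy.prototxt", "nsfw_model.caffemodel", caffe.TEST)

# Standard Caffe preprocessing: HWC -> CHW, rescale to 0-255, mean subtraction, RGB -> BGR.
transformer = caffe.io.Transformer({"data": net.blobs["data"].data.shape})
transformer.set_transpose("data", (2, 0, 1))
transformer.set_raw_scale("data", 255)
transformer.set_mean("data", np.array([104.0, 117.0, 123.0]))
transformer.set_channel_swap("data", (2, 1, 0))

image = caffe.io.load_image("photo.jpg")
net.blobs["data"].data[...] = transformer.preprocess("data", image)

# Forward pass; the name and meaning of the output blob depend on the model definition.
probabilities = net.forward()
```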


Anticipating Visual Representations from Unlabeled Video
by MIT. Anticipating actions and objects via computer vision is hard (e.g. recognizing that someone gesturing forward is about to shake hands). Humans do this through extensive experiential knowledge and inference — it’s much harder for a machine. This implementation trains deep neural networks to predict the visual representation of images in the future. Forbes article here.


TensorFlow in a Nutshell
by Camron Godbout. A three part series that explains Google’s deep learning framework TensorFlow. The guides cover the basics, hybrid learning, and an overview of supported models. Part 1, part 2, and now, part 3.
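If you’re brand new to TensorFlow, the basics covered in part 1 boil down to building a computation graph and then running it inside a session. Here is a minimal sketch of that pattern in current TensorFlow style; it’s our own example, not taken from the series.

```python
import tensorflow as tf

# Build the graph: placeholders are fed at run time, variables hold learnable state.
x = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(2.0)
b = tf.Variable(0.5)
y = w * x + b

# Nothing is computed until the graph is run inside a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2.5, 4.5, 6.5]
```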


The Neural Network Zoo
by Fjodor Van Veen. A cheat sheet that covers many of the popular neural network architectures. It’s a great way to keep track of the various architectures and their underlying structures and relations. The cheat sheet has descriptions of each architecture and links to the original academic papers.


Torch Video Tutorials
by Alfredo Canziani. A video collection of intro tutorials on leveraging Torch, providing an overview of Lua, Torch, neural networks, CNNs, and relevant Torch packages. RNNs coming soon.


Show and Tell: image captioning open sourced in TensorFlow
by Google Brain. Chris Shallue and his team make available their image captioning system. It’s faster to train, more detailed, and more accurate than past iterations. GitHub repo here. Original paper here.


The Alexa Prize
by Amazon. A new annual competition for university students to advance the field of conversational AI. Participants develop a bot that converses coherently with humans for 20 minutes. The application process closes October 28, 2016 — apply here.


Bay Area Deep Learning School
held at Stanford in late September and organized by Pieter Abbeel, Samy Bengio, and Andrew Ng. Speakers included Yoshua Bengio, Hugo Larochelle, Russ Salakhutdinov, and many others. All slide decks here and live stream videos from day 1 and day 2 are available.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

The Road Ahead: Autonomous Vehicles Startup Ecosystem

A dynamic view of the current state of self-driving startups. See it here.

The autonomous vehicles startup landscape has been evolving incredibly quickly: this past year has seen a highlight reel of exits, financings, emergent stealth companies, and promising research advancements. The $1B exit of Cruise Automation to GM, the $680M exit of Otto to Uber, Comma.ai giving an Acura AV capability, an AI research team at Drive.ai helping AVs communicate, Udacity launching the Self-Driving Nanodegree — the list goes on.

The complexity of a startup ecosystem is always high in its early days. The technology is still being validated and few implementations have made their way to market. This is the case with the autonomous vehicles startup landscape. A number of researchers and analysts have done a lot to distill information on AV startups, but there isn’t currently an updated, comprehensive market map for the space.

This is why we’ve taken additional steps to build a dynamic market map view for AV startups. We delineate the different types of players in the AV ecosystem, provide relevant, publicly available info about them, and make it available in our Autonomous Vehicles Market Map web app that we will keep up to date with the most recent data. The list of startups is searchable, sortable, filterable, etc. for ease of use. The design and implementation of the site itself is also a work in progress, on which we’d appreciate any feedback (or help, if you’d like to contribute to the repo). Of course, this list is not comprehensive, so please do let us know of any additions/changes.

Startup Categories

  • Enabling Technologies: LIDAR, Mapping, Security, Parking
  • Vehicles: Cars, Trucks
  • Ecosystem: Fleet Management, Safety/Insurance

View the Autonomous Vehicles Ecosystem here on Requests for Startups.


By Isaac Madan (email), Wiley Jones (email), and Ascanio Guarini (email).

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Indirect Routes to Market

When enterprise-focused startups first get started, the sales focus is on selling directly to end customers. This is a valid approach early on, as the companies are still working to improve their products, messaging, value proposition, etc.
Early on, direct sales are all a company needs.

It is important at this stage for the company to have direct contact with customers, but as these elements solidify, scaling through a direct sales model can become challenging. This is where startups start to think about how to scale through indirect models. The expectation is that a company can leverage partners to grow their business faster and expand their reach.

Later, partners allow a company to scale in ways that are likely impossible through direct-sales approaches.

However, during execution, many of these strategies do not live up to expectations. For example, if a software company’s product requires specialized selling and implementation services, and its reselling partners are not geared for those specialized services, the mismatch can be catastrophic. Another example is a company that decides to leverage service-based partners but has not built the sales pipeline to support that route.

In this post, we’ll give a brief overview of indirect routes to market, and a high-level framework for considering which route is best for a given company. In later posts, we’ll go into each route individually and examine some case studies of how these routes play out in practice, both in success and in failure.

When looking at an indirect route to market, we first need to categorize the types of routes that are available and then decide which routes are the right fit for the company (a.k.a. the vendor) based on the maturity of the product, the state of evolution of the vendor, and the vendor’s expectations for growth.

The right partner strategy depends on many things, but the most important are company stage and product characteristics (mainly technical complexity).

The most important elements are the first two: product and company stage. We begin this topic by breaking out the following partner types: Alliance partners, Channel partners, Distribution partners, System Integration partners, Technology partners, OEM partners, and finally, Managed Service providers.

  • Alliance Partners — These partners tend to be the smallest group but carry the largest influence when it comes to Fortune 1000 companies. They are the big consulting houses like Deloitte, Accenture, PricewaterhouseCoopers, and KPMG, to name a few. These partners serve the Fortune 1000 as strategic advisors on key business initiatives and are often in a position to define the framework for key technology decisions, so they have a strong, influential role in the marketplace.
  • Channel Partners — These partners make up the largest community of partners and encompass resellers, value-added resellers, and distribution partners. Examples include CDW, Insight, Carahsoft, and Optiv. Their main focus is to resell a company’s products and offer key services on top of the products they resell. These partners often have technical expertise on staff to help their customers install the products and achieve the benefits the vendor claims for them.
  • Distribution Partners — These partners provide a powerful scaling effect for vendors: they build relationships with all of the channel partners and can take on the responsibility of enabling their channel partner ecosystem to drive a vendor’s product in the marketplace. Examples include Arrow Electronics, Synnex, and Ingram Micro. Many of these partners also have subsidiaries around the world. They can act as a powerful extension of a vendor’s sales organization to scale awareness and enablement of the vendor’s products. Distribution partners play a pivotal role in the channel, but it is critical that the products and the company are set up to support their ability to scale.
  • System Integrators — These partners are focused purely on services and are less interested in reselling products. Examples include CGI Group, Booz Allen, and Redapt. They often position themselves as “trusted” advisors to their clients and do not want to be seen as receiving any benefit from the companies they may be advocating for. However, these firms do have a strong affinity for products that require services to implement and operate.
  • Technology Partners — These are partners that have an “integration” relationship between a product they sell in the marketplace and the vendor’s product. These companies are interested in partnerships that enable them to fill a gap in their product that is not a core competency for them.
  • OEM Partners — These partners embed a vendor’s technology into their own. These partnerships tend to be highly custom and can be very lucrative if the relationship has been crafted to enable growth for the OEM partner.
  • Managed Service Providers — A new and growing group of partners that buy a vendor’s product, set it up in their own data centers, and offer services to end customers. This indirect model is becoming more prevalent with the growth of cloud-based services.

Now that we have defined a few routes, we should think about what makes the ideal route for a vendor. There is no simple formulaic method for making the decision. However, as the framework above implies, the internal process should start with a thorough evaluation of the maturity of the product and company, not to mention the market landscape. In the next set of posts we’ll dig into the impact of a product’s complexity and a company’s stage in choosing the right indirect route to market.


By Jaimin Patel and Ascanio Guarini of Requests for Startups. Curated by Isaac Madan of Venrock (email).

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.

Up to Speed on Deep Learning: September Update, Part 1

Continuing our series of deep learning updates, we pulled together some of the awesome resources that have emerged since our last post on August 31st. In case you missed it, here are our past updates: August part 2, August part 1, July part 2, July part 1, June, and the original set of 20+ resources we outlined in April. As always, this list is not comprehensive, so let us know if there’s something we should add, or if you’re interested in discussing this area further.

One Hundred Year Study on Artificial Intelligence (AI100) by the AI100 Standing Committee (full PDF here). The One Hundred Year Study on Artificial Intelligence, or AI100, is a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play. Researchers explain opportunities to apply artificial intelligence across many domains, such as transportation and healthcare, and the ways in which it has and will affect our lives. They also share a framework around AI policy. The group plans to assess the state of AI every five years.

Infrastructure for Deep Learning by the OpenAI team. The team shares how deep learning research usually proceeds, describes the infrastructure choices they’ve made to support it, and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for Kubernetes.

TF-Slim: A high level library to define complex models in TensorFlow by Nathan Silberman and Sergio Guadarrama of Google Research. TF-Slim is a lightweight package for defining, training and evaluating models in TensorFlow. The Google Research team announces the latest release of TF-Slim, which includes many types of layers, loss functions, evaluation metrics, and handy routines for training and evaluating models.
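To give a flavor of why TF-Slim is handy, the layers of a small convnet can be declared far more compactly than with raw TensorFlow ops, with shared defaults set once via arg_scope. This is our own sketch with arbitrary layer sizes, not code from the announcement.

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

images = tf.placeholder(tf.float32, [None, 32, 32, 3])

# arg_scope sets shared defaults (activation, weight regularization) for every layer inside it.
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(images, 64, [3, 3], scope="conv1")
    net = slim.max_pool2d(net, [2, 2], scope="pool1")
    net = slim.flatten(net)
    logits = slim.fully_connected(net, 10, activation_fn=None, scope="logits")
```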

PaddlePaddle by Baidu (GitHub repo here). An open and easy-to-use deep learning platform for enterprise and research. PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform, originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu.

Deep Learning in a Nutshell: Reinforcement Learning by Tim Dettmers of NVIDIA. A broad overview of reinforcement learning including the major concepts and areas in which the technique shines. The 4th installment in Tim’s excellent series of articles that explain deep learning core concepts.
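For a taste of the core concept, here is the standard tabular Q-learning update rule applied to a made-up toy problem; this is textbook material and our own sketch, not code from the article.

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))   # action-value table
alpha, gamma = 0.1, 0.99              # learning rate and discount factor

def q_update(state, action, reward, next_state):
    """One step of the standard Q-learning update rule."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# Example transition: taking action 1 in state 0 yields reward 1.0 and lands in state 2.
q_update(state=0, action=1, reward=1.0, next_state=2)
print(Q)
```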

WaveNet: A Generative Model for Raw Audio by DeepMind (original paper here). The team presents a model called WaveNet that generates speech mimicking any human voice and sounding more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50%. The same network can also synthesize other audio signals such as music, and the team presents some striking samples of automatically generated piano pieces.

Attention and Augmented Recurrent Neural Networks by Chris Olah and Shan Carter. Researchers at Google Brain explain four ways to augment Recurrent Neural Networks with new and more sophisticated properties, as well as the concept of attention that is an underlying principle of each.
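The attention mechanism they describe can be summarized in a few lines: score each encoder state against the current decoder state, normalize the scores with a softmax, and take the weighted average as a context vector. Below is a minimal NumPy sketch with dot-product scoring and made-up dimensions; it is our own illustration, not code from the article.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical encoder states (6 timesteps, 8-dim) and one decoder query vector.
encoder_states = np.random.randn(6, 8)
query = np.random.randn(8)

# Score each encoder state against the query, normalize, then average.
scores = encoder_states @ query          # dot-product attention scores
weights = softmax(scores)                # attention distribution over timesteps
context = weights @ encoder_states       # weighted sum: the context vector

print(weights, context.shape)
```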

Why does deep and cheap learning work so well? by Henry Lin and Max Tegmark. Researchers from Harvard and MIT argue that the efficacy of neural networks is better explained via the laws of physics than by mathematics alone: the success of deep learning depends not only on mathematics but also on physics (summary article here).

How a Japanese cucumber farmer is using deep learning and TensorFlow by Kaz Sato of Google Cloud Platform. The article explains how Makoto Koike, a Japanese embedded-systems designer, built an end-to-end cucumber sorter for his parents’ cucumber farm in Japan, leveraging TensorFlow to implement a deep learning based classifier.

Playing for Data: Ground Truth from Computer Games by Richter et al. Researchers at TU Darmstadt and Intel Labs present an approach to creating detailed label maps from the Grand Theft Auto video game, taking advantage of the game’s high degree of realism. This can increase the accuracy of self-driving algorithms, which are traditionally trained on real-world data sets.


By Isaac Madan. Isaac is an investor at Venrock (email). If you’re interested in deep learning or working on something in this area, we’d love to hear from you.

Subscribe to our email newsletter here. Requests for Startups is a newsletter of entrepreneurial ideas & perspectives by investors, operators, and influencers.