
How to invest better!

Rohit
5 min read · Dec 10, 2020


A decade and a half ago, when I first got interested in public markets investing, I started by building a small portfolio that I could play with, just to see if I actually knew what I thought I knew, and to test my beliefs about the market. No surprise: I didn’t know what I thought I knew. It also turns out I was only the millionth person to have the epiphany that testing against reality is the best way to see if you understand anything.

But weirdly, when I came to venture capital almost a decade later, investing was treated as an art. Every company was unique, every investment thesis one of a kind. And the list of factors that mattered was either an extremely obvious list of 20 or a constantly shifting list of 3. I couldn’t make any sense of it. Why was traction the most important metric at times, while at other times it was easily set aside in favour of team? If someone just as smart as you turned down a company, what could make you confident that you knew (or at least believed) something unique?

While there is a clear argument for the benefit of pattern recognition built up over time, there is no doubt that to improve your pattern recognition you need to test it against reality constantly. 10,000 hours of deliberate practice, after all, requires deliberation.

So what I started to do was create a set of categories, put my (metaphorical) money where my mouth was, and see what it taught me. Now that a few years have passed and I have enough data points, I thought it was time for a post-mortem, and to attempt a Bayesian update of my framework.
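
For concreteness, here is a minimal sketch of what one of those point-in-time records could look like. The fields and factor names are my illustration, not the actual template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PaperInvestment:
    """A point-in-time record of a hypothetical investment decision."""
    company: str
    decision_date: date
    invested: float                 # metaphorical dollars committed
    entry_valuation: float          # valuation at the time of the decision
    factor_scores: dict = field(default_factory=dict)  # e.g. {"team": 4, "traction": 2}
    decision: str = "pass"          # "invest" or "pass", plus a noted reason

# Example: a scored opportunity, frozen at decision time
record = PaperInvestment(
    company="ExampleCo",
    decision_date=date(2017, 3, 1),
    invested=10_000,
    entry_valuation=8_000_000,
    factor_scores={"team": 4, "traction": 2, "go_to_market": 5},
    decision="invest",
)
```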

Dashboard of performance

Before we jump in, three immediate points:

  1. It was important to have a point-in-time analysis of each opportunity as it came along; the dollars invested and the valuation are naturally not set in stone, but moving targets
  2. The MOIC is calculated from exit valuations, which also include any later rounds that I knew about (see the sketch after this list). It’s not a perfect proxy, but since a perfect proxy doesn’t exist, it’s as good as I can get without waiting a decade
  3. Most importantly, the data is constantly in flux; this is a snapshot, and one with limited information, since a lot of the relevant data is yet to come in
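
To make point 2 concrete, here is a minimal sketch of the MOIC proxy, under the simplifying assumption that a paper stake is marked up by the ratio of the latest known valuation to the entry valuation (ignoring dilution and round structure):

```python
def moic(invested: float, entry_valuation: float, exit_valuation: float) -> float:
    """Multiple on invested capital, proxied by valuation change.

    Assumes a simple pro-rata mark: the paper stake is marked up or down
    by the ratio of the latest known valuation (exit, or most recent
    round I knew about) to the entry valuation.
    """
    implied_value = invested * (exit_valuation / entry_valuation)
    return implied_value / invested  # simplifies to the valuation ratio

# Example: entered at an $8M valuation, latest round marked the company at $40M
print(moic(10_000, 8_000_000, 40_000_000))  # 5.0x
```
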
Assessment of ranking factors

So some takeaways, starting with my biggest misapprehensions:

  • Misapprehension #1: I overemphasised Go-to-market in my ranking of companies, which turned out to be less relevant to the MOIC (a sketch of how one might check this follows the list). This has definitely been eye-opening, since it clearly shows that an early analysis of how a company might take its product to market isn’t particularly predictive of its later success. Looking deeper, I can see several cases where what I thought was a strong GTM tactic turned out to be incompatible with a wider cross-section of the market, or where competitors copied it quickly and successfully, necessitating a rapid change!
  • Misapprehension #2: This one I feel particularly guilty about, since it’s a very “VC” mistake: overemphasising Unicorn potential. While I used the category as a catch-all to give some qualitative love to the model, and as a way of identifying who could potentially become a unicorn, it turns out my assessment of this doesn’t really matter. The companies with high potential had other factors that were more relevant, and the ones without it could often either grow into it, or had weaknesses that were revealed in other variables.
  • Misapprehension #3: I vastly underestimated Traction, as I’d scored it, as a predictor of growth. It brings to the fore the idea that speed of growth really is a key differentiator, and even when you know this (I helped write a paper on it before!), it remains something I underestimate.
  • Misapprehension #4: This is more a sin of omission, but amid a flurry of methodical and quantitative analysis, it seems I still have a blind spot for product, and feature parity specifically. I didn’t include it as a key factor to consider, but it turns out to be number one!
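
One simple way to put numbers on “relevant to the MOIC”, assuming you have per-company factor scores and realised multiples, is a rank correlation per factor. A hedged sketch with made-up data:

```python
from scipy.stats import spearmanr

# Hypothetical data: per-company factor scores (1-5) and realised MOICs
scores = {
    "go_to_market":      [5, 4, 5, 2, 3, 1],
    "traction":          [2, 3, 4, 1, 5, 2],
    "unicorn_potential": [5, 5, 3, 2, 4, 1],
    "feature_parity":    [3, 4, 5, 1, 5, 2],
}
moics = [1.2, 3.5, 8.0, 0.5, 6.0, 1.0]

# Rank-correlate each factor's scores with the realised multiples;
# factors whose scores track MOIC are the ones worth weighting.
for factor, xs in scores.items():
    rho, p = spearmanr(xs, moics)
    print(f"{factor:>17}: rho={rho:+.2f} (p={p:.2f})")
```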

What’s especially interesting is the assessment of where I went most wrong in my reasons to say no. The biggest issue is speed. As VCs, our default mode of engagement with most companies is simply to say no, and overcoming that inertia for certain opportunities seems to be too hard a hurdle for most people.

Overall, what else is there to learn from this? One, it is vitally important to make the components of one’s pattern recognition explicit, so we can learn which sub-factors are accurate and which are not. Two, the process itself is part of the point: forcing yourself to think methodically through twenty factors for every company builds a muscle memory for evaluating companies. Three, I have a renewed appreciation for both product and customer traction as early and late indicators of what could eventually constitute success. It’s a testament to the fact that the key factors one assumes are important are sometimes genuinely important, and it’s easy to let that slip away when surrounded by so many other juicy things to analyse. And four, perhaps most importantly, it points to the value of going back to basics and doing something most of us are terrible at: making decisions faster!
