Friday, August 08, 2014

Does your SaaS startup have product/market fit?

Product/market fit is a topic that I've touched on a few times on this blog. It's that extremely crucial but somewhat hard-to-define (and even harder to measure) milestone which every startup needs to reach as it goes from an idea to a product to a real, scalable business. It's also a very important concept for us at Point Nine Capital since we tend to look for some level of proof of product/market fit when we evaluate potential investments.

Sean Jacobsohn of Emergence Capital has just published an excellent post titled "Here’s how to find out if your cloud startup has product-market fit". It's easy to fool yourself into thinking that you've found product/market fit, and Sean's post mentions some of the most important pitfalls. "All my customers are fellow startups in my incubator class" might be an obvious one, but there are less obvious ones, too. :-)

I like Sean's article so much that I've turned it into a Typeform.

So, if you're curious how your SaaS startup is doing in terms of product/market fit on a scale of 5-25, answer these five questions!


Sunday, July 27, 2014

A/B testing is like sex at high school

A few days ago I went on record saying that A/B testing is like sex at high school: everyone talks about it, but not very many do it in earnest. I want to follow up on the topic with some additional thoughts (don't worry, I won't stretch the high school analogy any further).

When talking to people about A/B testing I've noticed that there are four (stereo) types of mindsets which prevent companies from successfully using split tests as a tool to improve their conversion funnel.

1) Procrastinative

The favorite answer from people in this camp to suggestions for website or product improvements is "we'll have to A/B test that" – as in "we should A/B test that, some time, when we've added A/B testing capability". It's often used as an excuse for brushing off ideas for improvement. The fallacy here is that just because an A/B test is the best way to test an assumption doesn't mean that all assumptions are equally good or equally likely to be true.

Yes, A/B tests are the best way to test product improvements. But if you're not ready for A/B testing yet, that shouldn't stop you from improving your product based on your opinions and instincts.

2) Naive 

People from this group draw conclusions based on data which isn't conclusive. I've seen this several times: Results are not statistically significant, A and B didn't get the same type of traffic, A and B were tested sequentially as opposed to simultaneously, only a small part of the conversion funnel was taken into account – these and all kinds of other methodological errors can lead to erroneous conclusions.

Making decisions based on gut feelings as opposed to data isn't great, but in this case at least you know what you don't know. Making decisions based on wrong data – thinking that you understand something which you actually don't – is much worse.
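To make the "naive" pitfall concrete: statistical significance can be checked with a standard two-proportion z-test. This is a minimal sketch with hypothetical numbers, not a full testing framework (a real setup would also account for how long the test runs and how the sample size was chosen):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / conv_b: number of conversions in variant A / B
    n_a / n_b: number of visitors in variant A / B
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF (built via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# hypothetical test: A converted 200/10,000 visitors, B converted 260/10,000
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```

If the p-value is above your threshold, the honest conclusion is "we don't know yet" – not "B is slightly better".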

3) Opinionated

There's a school of thought among designers which says that A/B testing lets you find local maxima only. While I completely agree with my friend Nikos Moraitakis that iterative improvement is no substitute for creativity, I don't see a reason why A/B testing can't be used to test radically different designs, too. 

Designers have to be opinionated. Chances are that out of the 1000s of ideas that you'd like to test, you can only test a handful because the number of statistically significant tests that you can run is limited by your visitor and signup volume. You need talented and convinced designers to tell you which five ideas out of the 1000s are worth a shot. But then do A/B test these five ideas.

4) Disillusioned

The more you learn about topics like A/B testing and marketing attribution analysis, the more you realize how complicated things are and how hard it is to get conclusive, actionable data. 

If you want to test different signup pages for a SaaS product, for example, it's not enough to look at the visitor-to-signup conversion rate. What matters is the conversion rate of the entire funnel, from visitor all the way to paying customer. It's quite possible that the signup page which performs best in terms of visitor-to-signup rate (maybe one which asks the user for minimal data input) leads to a lower signup-to-paying conversion rate (because signups are less pre-qualified), and that another version of your signup page has a better overall visitor-to-paying conversion. And it doesn't even stop at the signup-to-paying conversion step, since you'll also want to track the churn rate of the "A" cohort vs. the "B" cohort over time.
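Here's that signup-page scenario with made-up numbers – a short sketch showing how a variant can win the top of the funnel and still lose overall (all counts are hypothetical):

```python
def funnel_rates(visitors, signups, paying):
    """Conversion rates for each funnel step and for the full funnel."""
    return {
        "visitor_to_signup": signups / visitors,
        "signup_to_paying": paying / signups,
        "visitor_to_paying": paying / visitors,
    }

# Hypothetical counts for two signup-page variants with equal traffic.
a = funnel_rates(visitors=10_000, signups=400, paying=60)   # long signup form
b = funnel_rates(visitors=10_000, signups=700, paying=49)   # short signup form

# B "wins" visitor-to-signup but loses where it counts:
print(f"A: {a['visitor_to_signup']:.1%} signup, {a['visitor_to_paying']:.2%} paying")
print(f"B: {b['visitor_to_signup']:.1%} signup, {b['visitor_to_paying']:.2%} paying")
```

Judged on signups alone, B looks like a clear winner; judged on paying customers, A is the better page.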

If you think about complexities like this, it's easy to give up and conclude that it's not worth the effort. I can relate to that because as mentioned above, nothing is worse than making decisions which you think are data-driven but which actually are not. Nonetheless I recommend that you do use split testing to test potential improvements of your conversion funnel – just know the limitations and be very diligent when you draw conclusions.

What do you think? Did you already fall prey to (or see other people fall prey to) one of the fallacies above? Let me know!



Friday, June 13, 2014

Uber's Wonderlamp

Uber's uber-large funding round has been the talk of the tech community over the last week. And it should be, since it doesn't happen very often that a four-year-old company raises $1.2B at a $17B valuation. In fact, according to this Bloomberg story, Uber's new valuation sets a record for investments into privately-held tech startups.

When I first heard about Uber a few years ago, I didn't quite get it. The traditional taxi system works quite well in Germany, and I thought that the advantage of using an app to order a cab, as opposed to making a quick call, wasn't such a big deal. Also, the expensive "private limo" service which Uber started with didn't appeal to me.

After using mytaxi in Germany, I started to like the idea, but it was the launch of UberX and my recent two-month stay in San Francisco which turned me into a huge Uber fan. What is it that makes Uber so compelling? It's a number of smaller and bigger factors, which, combined with a slick mobile app, make Uber a highly habit-forming service:

  • Speed: In San Francisco, Uber has such a large number of drivers that no matter where you are in the city, it rarely takes more than 5-10 minutes until your car arrives. It happened to me several times that "my" Uber arrived in less than a minute because a driver was just around the corner, which gives you an Aladdin's wonderlamp feeling: You hit the order button on your phone, and almost instantly a car shows up to pick you up. 
  • Transparency: You get an ETA and you can watch your car on the map as it's getting closer to you, so you know pretty exactly when your car will arrive.
  • Price: The company's budget option, UberX, is cheaper than normal taxis.
  • Convenience: The fact that you only have to enter your credit card once makes the payment process extremely convenient and saves you a lot of time every time you arrive at your destination. Related to that, Uber has constructed its business model in such a way that the drivers aren't allowed to take tips, so you don't have to think about how much tip to give. That leads to another almost magical experience – you arrive at your destination and off you go. No waiting for your credit card to be processed or for the driver to look for change. You don't have to worry about getting a receipt either, since a receipt is emailed to you after the ride. The driver stops and 5 seconds later you're out of the car. Brilliant.

Last but not least, virtually all of the drivers I rode with were very friendly and courteous. Maybe that was just professional friendliness in some cases, but my feeling was that almost all of them were very happy working for Uber and were genuinely trying to provide a great service (besides making sure that they maintain a great rating).

So Uber is great for riders, and based on what I know, it's good for the drivers, too. But is it also a great business? I think so. If a company delivers so much value to both sides of a marketplace, it can take a significant cut and acquire buyers and sellers profitably. I also think that although driver and rider loyalty might not be huge in principle (as this WSJ piece suggests), Uber will be able to create a significant moat around its business through network effects and the building of its brand.

If Uber manages to sign up more and more drivers in an area (something which I don't doubt they'll be able to do), those magical moments which I described above – where your car arrives almost instantly – will occur more and more frequently. Competitors with less driver density won't be able to deliver the same level of uber user experience. In theory, an extremely well-funded competitor might be able to attack one of Uber's markets by offering both drivers and riders a much better deal. In practice that will be very, very difficult given Uber's lead and the quality of its execution. And the fact that Uber now has more than a billion dollars in its war chest won't make it any easier.

Is Uber worth $17B? I don't know enough about the company to judge that, but what's clear is that Uber has a very realistic chance to revolutionize the worldwide taxi industry. What's more, Uber's long-term vision is much bigger. As Travis Kalanick puts it, they want to make "car ownership a thing of the past", and my guess is they'll try to disrupt a few other industries (such as last-mile delivery) along the way. Huge congrats to Bill Gurley and his partners at Benchmark for betting on Uber early!



Thursday, June 05, 2014

Learning More About That Other Half: The Case for Cohort Analysis and Multi-Touch Attribution Analysis (Part 2 of 2)

Note: This is the second part of a post which first appeared on KISSmetrics' blog. The first part is here, and here is the original guest post on the KISSmetrics blog. Thanks go to Bill Macaitis, CMO at Zendesk, for providing extremely valuable input on multi-touch attribution analysis.

Multi-touch Attribution Analysis – Giving Some Credit to the “Assist”

Multi-touch attribution, as defined in this good and detailed post, is “the process of understanding and assigning credit to marketing channels that eventually lead to conversions. An attribution model is a set of rules that determine how credit for conversions should be attributed to various touch points in conversion paths.”

It’s easier than it sounds, and, since this is the year of the World Cup, let me explain it using a soccer analogy. Multi-touch attribution gives the credit for a goal to not only the scorer but also gives some credit to the players who prepared the goal. Soccer player statistics often calculate scores based on the goals and the assists of the players. That means the statistics are based on what could be called a double-touch analysis that takes into account the last touch and the touch before the last one.

Since the default model in marketing still seems to be “last touch” only, it looks like soccer has overtaken marketing in terms of analytical sophistication. :-)

Time for Marketing to Strike Back!

If you are evaluating the performance of a marketing campaign solely based on the number of conversions, you are missing a large piece of the picture. Like a great midfielder who doesn’t score many goals himself but prepares goals for the strikers, a marketing channel might not be delivering many conversions but could be playing an important role in initiating the conversion process or assisting in the eventual conversion.

This is especially true for B2B SaaS where sales cycles are much longer than in, say, consumer e-commerce. When you’re selling a SaaS solution to a business customer, it’s not unusual for there to be several touch points before a company becomes a qualified lead, and then many more before the lead becomes a paying customer. The process could easily look like this:

  • A piece of content that you produced comes up as an organic search result and the searcher clicks on it
  • A few days later, the person who looked at the content piece sees a retargeting ad
  • A few days later, she sees another retargeting ad, visits your website, and signs up for your newsletter
  • A week after that, she clicks on a link in your newsletter
  • A few days later, she receives an invitation to a webinar, signs up for it, and attends the webinar
  • After the webinar, she signs up for a trial
  • The next day, one of your customer advocates gives her a call
  • Close to the end of her trial, your lead does some more research, happens to click on one of your AdWords ads, and signs up for a paid subscription

If you look at this conversion path, it becomes clear that if you attribute the customer only to the first touch point (SEO) or to the last one (PPC), you’ll draw incorrect conclusions. And keep in mind that the example above is still quite simple. In reality, the number of marketing channels and touch points that contribute to a conversion can be much higher.

Data Integration in a Multi-device World

Maybe you use Google Analytics or KISSmetrics for Web analytics, Salesforce.com for CRM, and Zendesk for customer service. If you want to get a (more or less) complete picture of your user’s journey, you need to get and integrate the data from all of the major tools you’re using and track user interactions.

A big complicating factor here is that we now live in a “multi-device world”. It’s very possible that the person in the example conversion path above used a tablet device, a smartphone, and two different computers to access your content and visit your website. Since tracking cookies are tied to one device, there’s no simple way to know that all of these touch points belong to the same person, at least not until the person registers.

Going deeper into the data integration and multi-device attribution problem would go beyond the scope of this post, but there’s a lot of valuable information available on the Web. And, please feel free to ask questions or share experiences in the comments section.

Toward a Better Attribution Model

The next question to tackle is how credit should be distributed to touch points in a conversion path. A simple approach is to use one of these rules:

  • Linear attribution – Each interaction gets equal credit
  • Time decay – More recent interactions get more credit than older ones
  • Position based – For example, 40% credit goes to the first interaction, 40% to the last one, and 20% to the ones in the middle

While using one of these rules is a big improvement over a “first touch only” or “last touch only” model, the problem is that all of the rules are based on assumptions as opposed to real data. If you’re using “linear attribution,” you’re saying “I don’t know how much credit each touch point should get, so let’s give each one equal credit.” If you’re using “time decay” or “position based,” you’re making an assumption that some touch points are more valuable than others, but whether that assumption is true is not certain.
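The three rule-based models above can be sketched in a few lines. This is a simplified illustration with hypothetical channel names – the exact weightings (e.g. the 40/20/40 split, or the decay factor) are conventions you'd tune to your own business:

```python
def attribute(touchpoints, model="linear", decay=0.5):
    """Distribute one conversion's credit across an ordered list of touchpoints.

    Returns a list of (touchpoint, credit) pairs whose credits sum to 1.
    """
    n = len(touchpoints)
    if model == "linear":
        weights = [1 / n] * n
    elif model == "time_decay":
        # each older touch gets `decay` times the weight of the next one
        raw = [decay ** (n - 1 - i) for i in range(n)]
        weights = [w / sum(raw) for w in raw]
    elif model == "position_based":
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            middle = 0.2 / (n - 2)   # 40% first, 40% last, 20% shared in between
            weights = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")
    return list(zip(touchpoints, weights))

# hypothetical conversion path, similar to the example above
path = ["SEO", "retargeting", "newsletter", "webinar", "PPC"]
print(attribute(path, "position_based"))
# SEO and PPC each get 40% of the credit; the three middle touches share 20%
```

Note how "last touch only" would have credited this entire customer to PPC, even though paid search only closed a process that content marketing started.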

A more sophisticated approach is to use a tool like Convertro, which takes a look at all touch points of all users (including those who didn’t convert!) and then uses a statistical algorithm to distribute attribution credit. The advantage of this approach is that the model gets continuously adjusted based on new incoming data. Explaining exactly how it works, again, would go beyond the scope of this post, but there’s more information available on Convertro’s website, and I assume there are additional tools like this on the market.

Is It Worth It?

Implementing a sophisticated multi-touch attribution model is obviously a large project, and so the next question is whether it’s worth it. The answer depends mainly on these variables:

  • Product complexity and sales cycle – The more complex your product and the longer the sales cycle, the more likely you are to have several touch points before a conversion happens
  • Number of simultaneous campaigns and size of marketing budget – The more campaigns you’re running in parallel and the more you’re spending on marketing, the more important it is to account for multi-touch attribution

While cohort analysis is something you should do as soon as you launch your product, I think multi-touch attribution analysis can usually wait until you're spending larger amounts of money on advertising. Until then, spending too much money or time getting your attribution model right is probably not the best use of your resources. So, as an early-stage SaaS startup, don't worry too much about it just yet. Just remember to take your single-touch attribution CACs with a grain of salt.


Wednesday, June 04, 2014

Learning More About That Other Half: The Case for Cohort Analysis and Multi-Touch Attribution Analysis (Part 1 of 2)

Note: This article first appeared as a guest post on the popular KISSmetrics blog. Thanks to Hiten Shah and Sean Work at KISSmetrics for publishing it. I'm republishing the post here as a series of two shorter posts, with a few small edits.

Anyone who has ever worked in marketing or advertising has heard the quote, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” It is from John Wanamaker and dates back to the 19th century.

Fortunately, the industry has come a long way since then, and especially in the last 10 to 20 years, new technologies have made advertising more measurable than ever. However, there’s still a considerable gap between what people could measure and what they actually are measuring, and that leads to significant under-optimization of advertising and marketing dollars.

In B2B SaaS, which we at Point Nine Capital focus a lot of our efforts on, there are two techniques that I feel are particularly important but not used widely enough – cohort analysis and multi-touch attribution analysis. In this series of posts, I’ll try to provide a brief introduction to both methodologies and explain why I think they are so important.

A Quick Primer about Cohort Analysis

If you're a reader of this blog or know me a bit, you know that I'm a huge fan of cohort analysis and have written about the topic before. If you’re new to the topic, a cohort analysis can be broadly defined as a dissection of the activities of a group of people (such as users or customers), who share a common characteristic, over time. In SaaS, the most frequently used common characteristic for grouping customers is “join date”; that is, people who signed up or became paying customers in the same period of time (such as a month).

Let’s look at an example, and it will become much clearer:


In this cohort analysis, each row represents all signups that converted to become paying customers in a given month. Each column represents a month in your customer’s life. The cells show the percentage of retained customers of the respective cohort in the respective “lifetime month.”
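A table like this is straightforward to build from raw customer data. Here's a minimal sketch in plain Python with hypothetical records – in practice you'd derive the (cohort, lifetime) pairs from your billing or analytics database:

```python
from collections import defaultdict

def retention_table(customers):
    """customers: iterable of (cohort_month, lifetime_in_months) tuples.

    Returns {cohort: [share of cohort active in lifetime month 1, 2, ...]}.
    """
    cohorts = defaultdict(list)
    for cohort, lifetime in customers:
        cohorts[cohort].append(lifetime)
    table = {}
    for cohort, lifetimes in cohorts.items():
        # a customer with lifetime l is still active in every month m <= l
        table[cohort] = [sum(l >= m for l in lifetimes) / len(lifetimes)
                         for m in range(1, max(lifetimes) + 1)]
    return table

# hypothetical records: (month the customer converted, months active so far)
data = [("2014-01", 6), ("2014-01", 2), ("2014-01", 6),
        ("2014-02", 5), ("2014-02", 1),
        ("2014-03", 4), ("2014-03", 4), ("2014-03", 2)]

for cohort, row in sorted(retention_table(data).items()):
    print(cohort, [f"{r:.0%}" for r in row])
```

Each printed row corresponds to one row of the cohort table: the cohort's retention in lifetime months 1, 2, 3, and so on.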

So What?

Why is it so important to do a cohort analysis when looking at usage metrics or retention and churn? The answer is that if you look at only the overall numbers, such as your overall churn in a calendar month, the number will be a blend of the churn rate of older and newer customers, which can lead to erroneous conclusions.

For example, let’s consider a SaaS business with very high churn in the first few lifetime months and much lower churn from older customers – not unusual in SaaS. If the company starts to grow faster, the blended churn rate will go up, simply because the percentage of newer customers out of all customers will grow. So, if they look at only the blended churn rate, they might start to panic. They would have to do a cohort analysis to see what’s really going on.
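This effect is easy to simulate. In the sketch below, every cohort behaves identically (a hypothetical 15% churn in the first lifetime month, 3% per month afterwards), yet the blended churn rate climbs as soon as growth accelerates – purely because young, high-churn cohorts make up a bigger share of the customer base:

```python
def blended_churn(new_customers, first_month_rate=0.15, later_rate=0.03):
    """Simulate blended monthly churn for a list of monthly signup counts.

    Cohort-level behavior is constant: every cohort loses first_month_rate
    in its first lifetime month and later_rate per month afterwards.
    """
    cohort_sizes, results = [], []
    for month, new in enumerate(new_customers):
        churned = 0.0
        for i, size in enumerate(cohort_sizes):
            rate = first_month_rate if month - i == 1 else later_rate
            lost = size * rate
            churned += lost
            cohort_sizes[i] = size - lost
        base = sum(cohort_sizes) + churned   # customers at start of month
        if base:
            results.append(churned / base)
        cohort_sizes.append(new)             # this month's new cohort
    return results

# steady growth for three months, then acceleration
rates = blended_churn([100, 100, 100, 200, 400, 800])
print([f"{r:.1%}" for r in rates])
```

The blended rate falls while growth is flat, then rises again once signups take off – with no change at all in how individual cohorts churn. Only the cohort view reveals that nothing is actually wrong.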

What else can you see in a cohort analysis? Whatever the key metrics are in your particular business, a cohort analysis lets you see how those metrics develop over the customer lifetime as well as over what might be called product lifetime:



If you read the chart above (which I've borrowed from my colleague Nicolas) horizontally, you can see how your retention develops over the customer lifetime, presumably something that you can link to the quality of your product, operations, and customer support. Reading it vertically shows you the retention at a given lifetime month for different customer cohorts. This might be called product lifetime, and, especially if you look at early lifetime months, it can be linked to the quality of your onboarding experience and the performance of your customer success team.

The Holy Grail of SaaS!

Maybe most importantly, a cohort analysis is the best way to estimate CLT (customer lifetime) and CLTV (customer lifetime value), which informs your decision on how much you can spend to acquire a new customer. As mentioned above, churn usually isn’t distributed linearly over the customer lifetime, so calculating it based on the blended churn rate of the last month doesn’t give you the best estimate. A better way is shown in the second tab of this spreadsheet, where I calculated/estimated the CLT of different cohorts.
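I'm not reproducing the spreadsheet's exact method here, but one common way to estimate CLT from a cohort's retention curve is to sum the observed survival rates and extrapolate the tail with the last observed month-over-month churn rate. A small sketch with a hypothetical retention curve:

```python
# hypothetical retention curve of one cohort: share of the cohort still
# active in lifetime months 1, 2, 3, ... (month 1 is 100% by definition)
retention = [1.00, 0.85, 0.80, 0.77, 0.75, 0.73]

observed = sum(retention)                        # lifetime "banked" so far
tail_churn = 1 - retention[-1] / retention[-2]   # last observed m/m churn
# geometric extrapolation: assume that churn rate persists forever,
# so the remaining lifetime is retention[-1] * (1 - c) / c
tail = retention[-1] * (1 - tail_churn) / tail_churn
clt_estimate = observed + tail
print(f"estimated CLT ≈ {clt_estimate:.1f} months")
```

Note how sensitive the result is to the tail assumption: because churn flattens out over the customer lifetime, extrapolating from the early, high-churn months would massively underestimate CLT – which is exactly why the blended churn rate makes for a poor estimate.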

A cohort analysis is even more essential when it comes to CLTV. Looking at how revenues of customer cohorts develop over time lets you see the impact of churn, downgrades/contractions, and upgrades/expansions:



This chart shows a cohort analysis of MRR (monthly recurring revenue) of a fictional SaaS business. As you can see in the green cells, it’s a happy fictional SaaS business as it has recently started to enjoy negative churn, which many regard as the holy grail in SaaS.

Still not convinced that you need cohort analyses to understand your SaaS business? :-) Let me know in the comments.




Thursday, May 15, 2014

It's a ZEN day!

Today is a very special day for me as an entrepreneur and investor. About an hour ago, Zendesk went public on the New York Stock Exchange. The last time I watched an IPO so carefully was when Shopping.com, the company that had bought my price comparison startup, went public – almost ten years ago.

Here are a few visual impressions of my love affair with Zendesk, which began six years ago:



Huge congrats and thanks to the entire Zendesk team – I couldn't be more proud of you guys!

Wednesday, May 07, 2014

Three more ways to look at cohort data

I've just added three new charts to my Excel template for cohort analysis.

The first one shows the MRR development of several customer cohorts over the cohorts' lifetime:



Each of the green lines represents a customer cohort. The x-axis shows the "lifetime month", so the dot at the end of the line at the bottom right, for example, represents the MRR of the January 2013 customer cohort (all customers who converted in January 2013) in their 9th month after converting.
Here are some of the things that you can see in this chart:




The second chart is based on exactly the same data but shows MRR for calendar months as opposed to cohort lifetime months, and it uses a slightly different visualization:


One of the things you can see here is the contribution of older cohorts to your current MRR (something to keep in mind if you're considering a price increase and are thinking about the impact of grandfathering):




The third chart shows cumulative revenues minus CACs for different customer cohorts, i.e. it shows how much revenue a customer cohort has generated, less the costs it took to acquire the cohort:


The purpose of this one is to show whether you're getting better or worse with respect to one of the most important SaaS metrics: the CAC payback time, i.e. the time it takes until a customer becomes profitable. Note that for simplicity the chart is based on revenues. If you use it in real life, it should be based on gross profits, i.e. revenues minus CoGS.



What you can see here is that the first cohorts cross the x-axis (a.k.a. become profitable) around the 6th lifetime month, whereas newer cohorts are crossing or can be expected to cross the x-axis further to the left, i.e. become profitable faster.
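The "crossing the x-axis" moment is simple to compute per cohort. A minimal sketch with hypothetical numbers (using revenues for simplicity, like the chart; in real life you'd feed in gross profits):

```python
def payback_month(cac_per_customer, monthly_revenue_per_customer):
    """First lifetime month in which cumulative revenue covers CAC,
    or None if the cohort hasn't paid back within the observed window."""
    cumulative = 0.0
    for month, rev in enumerate(monthly_revenue_per_customer, start=1):
        cumulative += rev
        if cumulative >= cac_per_customer:
            return month
    return None

# hypothetical cohort: $600 CAC per customer, $100 starting MRR with
# some churn-driven decline in average revenue per account
revenues = [100, 95, 92, 90, 88, 87, 86, 85]
print(payback_month(600, revenues))  # → 7
```

Running this per cohort and comparing the results over time tells you whether newer cohorts really are "crossing the x-axis further to the left" than older ones.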

If you want to take a closer look, here's the latest version of the Excel template, which includes the new charts. Or even better, download it and pay with a tweet! :)



