A/B Testing

We launched SeatGeek in 2009 at TechCrunch 50 (now renamed “Disrupt”). The elevator pitch was “Farecast for sports and concert tickets.” The site forecasted whether ticket prices for an event would rise or fall so that a buyer could time her purchase optimally.

I diligently practiced the pitch. It would be the first time I’d be presenting to so many people. When it came time to present, we saw that Paul Graham would be one of our judges. Great news! We loved Paul Graham’s essays. I got on stage and delivered my spiel. It went pretty well–no major mess-ups. Then came the Q&A with the judges. We locked up. We’d practiced the pitch, but we hadn’t prepared for the judges’ questions. Graham dismantled us. It was painful. It still is. I just forced myself to watch the video again and my chest now aches.

Afterwards, trying to ease my embarrassment, I wrote a comment on Hacker News and emailed Graham:

[image: my email to Graham]

He responded three days later:

[image: Graham’s reply]

I focused on that first paragraph (he doesn’t think I’m an idiot!) and ignored the second (this is what you should do instead). That was stupid, because his gut feel was right.

After TechCrunch 50 we built more features that made our ticket forecasting even more powerful. We A/B tested them; the data indicated that giving users more forecasting power increased conversion. But after a few months of this, we stopped caring about more powerful ticket forecasts. Despite the A/B test results, I was getting the feeling that we might be building the biggest clothing store in a nudist colony–we were optimizing the experience for those who cared, but that was a small group indeed.
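(A quick aside for the curious: “the data indicated that giving users more forecasting power increased conversion” boils down to comparing conversion rates between a control group and a variant and checking that the difference isn’t just noise. Below is a minimal sketch of that kind of check, using a two-proportion z-test. The traffic and purchase numbers are made up for illustration, and this isn’t necessarily the exact test we ran.)

```python
import math

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    """Compare two conversion rates with a two-proportion z-test.

    Returns (z, two-sided p-value). Assumes samples are large enough
    that the normal approximation holds.
    """
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers, not our actual data: control saw 10,000
# visitors with 300 purchases; the variant with richer forecasts
# saw 10,000 visitors with 360 purchases.
z, p = two_proportion_z_test(300, 10_000, 360, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so the lift would
                                    # pass a conventional significance test
```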

So we disregarded the A/B data and began to de-emphasize price forecasts. Most people, we decided, don’t care about forecasts. As Graham had said, they want things to be simpler, not more complicated.

That was ultimately a good decision. We focused on ticket search instead of forecasting. We turned SeatGeek into a product that is useful for most event-goers, not just a few data enthusiasts [1]. A little over a year ago we removed forecasts altogether. Late, but better late than never.

Since then, we’ve used A/B testing as an important consideration, but not a be-all and end-all. It’s an input in any conversation about making an interface change, but not the only input. A few months ago we tested a new design for our map UI. The new UI converted a bit worse, but we preferred it, so we kept it anyway. There are all sorts of potential rationalizations for why that might be the right choice [2], but the real reason was that it felt right to us in our guts–we have a vision of what we think the experience of searching for tickets online should be like, and this new UI was part of it. We want to build SeatGeek into something we’re hugely proud of, and I don’t think you can always do that by blindly following A/B test results.


[1] We now spend even more time on data work at SeatGeek than we did back then. But the goal now is to use data to simplify the ticket-buying experience (e.g. via a feature we call Deal Score, which lets people quickly find the best deals on tickets to a show) rather than complicate it.

[2] For example, “right now users are more comfortable with the old UI, but over time they will grow accustomed to this new version and convert better.”

 
