Experimentation is an essential part of growth for a startup, and it doesn’t end with finding product-market fit. The ingredients and tools for experimentation at a startup’s disposal only grow from there, as data is collected, talented product and growth professionals are hired, and processes for discovery are operationalized.
With cloud storage, APIs, and product design software widely available, it has become cheaper to build and launch but harder to find the right market segment that boosts distribution and, later on, monetization. Experimentation allows startups to play around with new and potentially game-changing ideas while zeroing in on the ones that click with users. When designed creatively and run correctly, experiments serve as a low-cost, rapid-fire way of reducing execution risk and maximizing market discovery.
And that means more than soft-launching MVPs and running A/B tests. To get a better sense of the nuances of experimentation, we talked to product leaders from our portfolio and asked them one question: what is the top advice you would give on product development and experimentation? Their answers have been assembled into this makeshift guide and can be classified into three stages: finding problems, designing experiments, and implementing experiments.
(1) Speak to users often.
Having one-on-one conversations with users can be both a source of feedback on existing features and ideas on future ones. These conversations can supplement more quantitative feedback mechanisms like UI/UX usability testing.
“It’s important to hear from users directly. If everyone [on the product team] does it, then we end up with a sizable sample too. You get tons of ideas from these calls,” says Yada Piyajomkwan, COO and co-founder of retail investment platform Ajaib.
Their product team tries to reach out to users every week, but they don’t take a blanket approach to calling them up. They separate users into two segments.
1. People who finish the flow and/or are loyal users. They ask these users what they like about Ajaib. They try to find good moments or features to engage these users.
Questions to ask: What did you like? Why do you continue using us? When you tell your friends about us, what do you say?
2. People who drop off the product. Ajaib’s product team delves deeper into what they can do better to keep them.
Questions to ask: Why did you drop off? What do you want to see more in our app? What can we do to make you continue to use us?
This segmentation helps them cover more ground and achieve more with this practice of calling users. Depending on the capacity of the team and the nature of the user journey (e.g. going from free user to paying customer), further segmentation could be done to drill into what drives conversion, which leads us to our next piece of advice.
(2) Understand what drives conversion.
Startups find product-market fit by solving a key pain point with an app, but the pain points don’t end there. It is like playing a game of whack-a-mole: you slam one problem with the hammer, then new problems appear somewhere else, often as side effects of the previous solution. The nature of product development requires a smart approach to problem-finding and narrowing down which issues need to be prioritized.
One method used by Indonesian interbank transfer platform Flip is looking at the user activation funnel. This narrows down the issues to what is slowing down or preventing users from benefiting from the intended value of the application. When conversion from one step to another is lower than expected, their product team knows to examine that transition.
For example, Flip’s product team found that it took too many steps before users actually realized the benefits of its free interbank transfer service. The long KYC process led to drop-offs before users even got a sense of the service’s value. With this in mind, the product team designed a trial experience for new users, allowing them to see how Flip works even before verification. This secures a level of buy-in that motivates users to get through the KYC process, which is non-negotiable.
Since then, according to Flip’s lead product manager Riza Herzego, their conversion rate has increased. There will always be tradeoffs on user experience, hence the whack-a-mole game, but what Flip demonstrated is the importance of being smart about addressing pain points. And behind this thinking is looking at the user activation funnel and understanding what drives conversion.
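Flip’s funnel approach above boils down to simple arithmetic: divide the count at each step by the count at the step before it, and investigate the weakest transition first. The sketch below illustrates this with entirely hypothetical step names and numbers (not Flip’s actual data), where a KYC-like step is the bottleneck:

```python
# Hypothetical activation funnel: (step name, number of users who reached it).
# These names and counts are illustrative, not real data.
FUNNEL = [
    ("signup", 10_000),
    ("kyc_started", 6_200),
    ("kyc_completed", 2_500),
    ("first_transfer", 2_100),
]

def step_conversions(funnel):
    """Return (from_step, to_step, rate) for each adjacent pair of steps."""
    return [
        (a, b, count_b / count_a)
        for (a, count_a), (b, count_b) in zip(funnel, funnel[1:])
    ]

def weakest_step(funnel):
    """The transition with the lowest conversion rate."""
    return min(step_conversions(funnel), key=lambda t: t[2])

# Flags kyc_started -> kyc_completed (about 40%) as the step to examine first.
print(weakest_step(FUNNEL))
```

With numbers like these, the signup-to-KYC and KYC-to-transfer steps convert well, so the team would know to focus on the KYC completion step rather than redesigning the whole journey.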
(3) Don’t jump straight to solution-building with an experiment.
Experiments are not run for solutions; they are run for validation, be it of a problem or of a solution. “So many people want to ‘run’ experiments as a solution without thinking of the ‘why’,” says Indonesian fintech AwanTunai COO and co-founder Windy Natriavi. “Say you’re facing an issue in merchant acquisition. Then you decide you want to run an experiment giving them discounts and promotions, without finding out why exactly they’re not converting.” It might not be an issue of price, and this is where problem-finding comes in.
“If you deep-dive, you might find out that your UI/UX makes it hard for people to convert,” says Windy. “That means you should be doing an experiment on UI/UX instead of doing experiments on price.”
An effective approach runs experiments both to find problems and to test what kind of solution will work. In this regard, it’s important to know the difference between a prediction and a hypothesis. “Subsidizing customer costs will drive more conversion” is a prediction. “Conversion is going down because of high costs” is a hypothesis. Both could be wrong, and both can be tested, but the latter doesn’t close off other possibilities. The former already assumes that high costs are the definitive cause of the problem and offers subsidies (discounts or promotions) as a solution. The latter opens up examination of other factors that may have contributed to lower conversion.
(4) Be fast, scrappy, and clear with experiment parameters.
There’s the adage, “You miss 100% of the shots you don’t take.” In the same way, effective experimentation doesn’t leave any idea behind. But with limited engineering resources (time, talent, infrastructure), execution risk makes it impractical to take these shots on an actual court. Instead, startups have to set up makeshift courts and bounce balls off wooden backboards, all the while aiming to land as many three-pointers as possible before launching in the big leagues.
It is with this creativity that Flip first got off the ground. When Rafi Putra Arriyan and his co-founders were mulling the idea of a service that does interbank transfers cheaper than the status quo bank rates, they didn’t use their technical skills to build out a solution right away. Though this was, for them, a clear problem, they wanted to find out if it was a problem that others were willing to risk money on. So instead they made a Google Form, set up a few bank accounts, and rolled out a makeshift bank transfer service to their network. They soon discovered strangers who were willing to risk millions of rupiah to avoid the 6,500 rupiah transfer fee. To this day, the product team at Flip employs this speed and scrappiness in validating new ideas.
And this operationalization of experimentation allows startups to be nimble in addressing user pain points and discovering new ways to engage the market. At Ajaib, the team defines a minimum viable product for every idea and does a lot of testing with existing online tools and software before committing to developing these into features. For example, they’d use Typeform to test an in-app pop-up. Once the MVP is ready, they track specific metrics over a two-week to one-month period to guide how they will roll out the feature.
This methodology was used to great effect for their asset price notification. As Yada shares, “We experimented with different notifications through a third-party tool, which didn’t require any engineering time. Then we manually sent 30 different notifications for a week, measuring click rates and invest rates. After we were confident it worked and figured out which type of notification worked best, we developed an automatic engine to send it.”
Even though the MVP was not scalable, because Ajaib’s product team had pinned down the parameters of the experiment, they could transition to automation without wasting resources. When scrappiness and speed meet clear experiment parameters, the non-scalable (like sending 30 different notifications manually) can yield definitive insights. It’s far more efficient to cast a makeshift net in waters you know have fish than to dip an expensive fishing rod into waters where there are likely none.
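The variant-ranking step in an experiment like Ajaib’s is mechanically simple: tally clicks against sends per variant and pick the winner. The sketch below uses hypothetical variant names and counts (not Ajaib’s actual data) to show the kind of comparison a manual MVP already supports:

```python
# Hypothetical notification variants: variant -> (clicks, sends).
# Names and numbers are illustrative only.
results = {
    "price_alert_up":   (42, 300),
    "price_alert_down": (61, 300),
    "daily_digest":     (18, 300),
}

def click_rate(variant):
    clicks, sends = results[variant]
    return clicks / sends

def best_variant(results):
    """Pick the variant with the highest click rate."""
    return max(results, key=click_rate)

print(best_variant(results))  # -> price_alert_down
```

Because the metric (click rate) and the decision rule (highest rate wins) were fixed up front, the same comparison carries over unchanged when the manual sends are replaced by an automated engine.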
(5) Always have a control group to really determine whether your experiment is a success.
Scientific experiments always include a control group (or a setup without any change/treatment) to balance out the impact of factors that are not being tested. The same practice is critical for product development to reduce the likelihood of making a false positive or false negative conclusion.
“I often see products launched to improve a metric within a very short period of time, and if it fails to do so, it is deemed a failed experiment,” says Windy. “But that’s not exactly true.”
AwanTunai has run experiments on merchant adoption that took time to show results. Accounting only for conversion, without other factors like the length of the experiment, would have led them to believe that the feature would not work. “Because we had a control group, we were able to see the gradual improvements and keep tweaking the experiment until we were able to consistently prove it worked,” says Windy.
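A control group also gives you something concrete to test against. One common way to judge whether a treatment arm really outperformed the control (rather than fluctuated by chance) is a two-proportion z-test on the conversion rates. The sketch below uses hypothetical arm sizes and conversion counts, not AwanTunai’s data:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates.

    conv_a/n_a: conversions and users in the control arm,
    conv_b/n_b: conversions and users in the treatment arm.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 500 users per arm, control converts 10%,
# treatment converts 15%.
z = two_proportion_z(conv_a=50, n_a=500, conv_b=75, n_b=500)
print(abs(z) > 1.96)  # True -> significant at the 5% level (two-sided)
```

The 1.96 threshold corresponds to a two-sided 5% significance level; for slow-burn experiments like AwanTunai’s, the same comparison can be rerun as data accumulates to watch the gap between arms widen over time.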
(6) Don’t wait to launch to see if the product works.
Launching a product inherently has risks, but these risks can be reduced by testing critical assumptions as early as the research and discovery process. Windy explains, “Let’s say you have an issue where you think you will be able to get more suppliers or merchants using your app if you launch a particular feature. Then you lead your team to basically build the product and only then will you have an inkling of how it turns out. That’s not exactly the right approach.” While product launch can and should ideally be designed to gain insights, launching should not be the first experiment.
“We have a process that goes from product discovery to product pitch then product development. Starting from the product discovery, we already run experiments on assumptions that are critical for the eventual launch to succeed. What indicators or experiments can you run even before the product is launched?” Windy points out.
These indicators or experiments don’t even have to involve the actual user experience itself, and can be done even before the team has a concrete idea of how to design the product. “For example, we share a fact sheet of our potential future products, spread it to our suppliers, and see how many call back. From that alone we can see the initial feedback and use that to revise the product.”
This incremental testing of assumptions reduces the risk of each individual experiment while maximizing the overall gains of the process from idea to launch.
(7) Work in small, independent teams.
So far we’ve been referring to product teams as an unbroken whole, but in reality, most teams are composed of smaller units that move independently. Yada subscribes to this squad model where a product manager works with a handful of engineers — “It’s super flexible and everyone can contribute to ideas as well.”
Now the challenge here is defining the scope of each squad (app, web, metric-linked) and ensuring that communication among squads is useful and efficient.
“We encourage close discussions between teams both formally and informally. We have inter-squad catch up regularly but the teams also talk to each other often to maintain information flow,” says Yada.
Data and Experiments
While Flip started by reducing the cost of interbank transfers by acting as a “middleman” platform, the tradeoff was slower transaction processing. Last year, they tested the hypothesis that users would be willing to take on a higher cost per transaction if it meant faster transfers.
Back then, it was a huge decision, according to Riza. “Many in the team were questioning it since we didn’t have enough data to prove that this idea would work. This idea would not only make the cost higher, which could bring down conversion, but it would also create a bigger burden for the operations team to execute.”
By conducting experiments that increased gradually in scope, they validated this hypothesis and eventually rolled the feature out as Flip Instant. After six months, this new feature contributed to doubling their monthly gross transaction volume (GTV).
“Figuring out what to build next is not easy,” says Riza. “You will come up with a lot of hypotheses, but they may not always be supported by existing data. It doesn’t mean that they are all wrong. You just need to be brave enough to validate them, especially if the potential impact is huge.”
Data is indeed becoming increasingly valuable for the growth of a tech company, as we’ve discussed in a previous article, but experiments, when designed and implemented well, can fill in the gaps where data is not available.
Get an essay and podcast straight in your inbox every other week here: http://eepurl.com/gy98D9