GrowthDesigners.co
November 2019

Dear Readers, 

When my team first started running growth experiments, I often felt like we were just throwing spaghetti at the wall, hoping something would stick. We had so many ideas about how to achieve our business goals, but we didn’t have a strong process to prioritize them. We wondered how to become more strategic in building our experimentation pipeline, instead of saying “Let’s test that!” to every stakeholder who presented an idea.

In this issue, I’ll show you how to get more out of your growth experiments. Remember, your time and effort are scarce resources, so use them only for experiments that’ll get you where you need to go!

Shoot for the moon 
- Janey
 

Articulate your goal
What’s the current versus ideal status?

Just as you need a clear goal for any design challenge, you’ll also need one for any growth challenge. Do you have a metric you’d like to increase or decrease by a certain amount? Articulate the current status of that metric, and then articulate where you’d like it to be.

For instance, let’s say you work on an app that helps people discover, browse, and ultimately buy books. Your goal is to increase purchase conversion to 50%. 

Current status: 30% of users who add books to their “book bag” on our mobile app make purchases.

Ideal status: 50% of users who add books to their “book bag” on our mobile app make purchases. 
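A quick way to sanity-check a goal like this is to look at the lift it implies. A minimal sketch, using the numbers from the example above:

```python
# Sketch: quantifying the gap between current and ideal status.
# The 30% and 50% figures come from the book-app example above.
current = 0.30
ideal = 0.50

absolute_lift = ideal - current               # 20 percentage points
relative_lift = (ideal - current) / current   # ~67% relative increase

print(f"{absolute_lift:.0%} absolute, {relative_lift:.0%} relative")
# prints: 20% absolute, 67% relative
```

Seeing that the goal is a 67% relative increase, not just "20 points," is a useful gut check on how ambitious your experiment pipeline needs to be.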
List potential barriers preventing you from reaching your ideal status

What barriers are in the way of moving the metric you care about? Think about what might be preventing users from completing the action you’d like them to take. Draw from insights you’ve gathered in qualitative research or quantitative data. 

In our example, barriers could include users thinking:
  • “I can buy, rent, or borrow books for cheaper elsewhere”
  • “It’s hard to find any books worth buying”
  • “I don’t know if I’ll like this book so I don’t want to spend money on it”
     
Brainstorm high-level strategies that will address those barriers 

Here are some generally applicable ones:
  1. Reduce friction
  2. Make difficult decisions easier
  3. Increase user motivation through delight
  4. Adjust our pricing or business model
Keep these high-level, because you can brainstorm specific tactical solutions later. 

Also, be open to the fact that some strategies may have stakeholders beyond your team. Those may be more difficult to execute, but are certainly conversations worth having! 
 
Under each strategy, brainstorm more specific tactics you can run as experiments

This is a great one for a group brainstorm. I’d recommend doing some Crazy 8’s with a cross-functional team to get your creative juices flowing. 

Within each strategy you’ve laid out, think of tactics you can use to address the barriers. For instance, tactics underneath your “reduce friction” strategy could include:
  • Allow payments from Venmo
  • Build 1-click purchase functionality like Amazon’s
  • Allow users to sample a book before buying
  • Explore a subscription model instead of pay-per-purchase

Thinking through both strategies and tactics helps you explore a vast range of solutions before picking just one. 

Evaluate and make hypotheses about the most impactful strategies and tactics

Now that you’ve expanded the range of potential solutions, it’s time to narrow these ideas down. Use your product intuition and customer knowledge to pick the most impactful ideas. Then put them up for evaluation.

Your evaluation criteria will depend on the constraints of your team. I have found it most effective to evaluate ideas on two dimensions: evidence for potential impact, and effort (see the Resources below for other approaches).

Evidence for potential impact
Ask yourself: What evidence do we have that this solution will work? 

If you dedicate scarce resources to building an experiment, you want to be confident that it will produce high impact.

Keep in mind that evidence isn’t limited to results from previous A/B tests that your product team has run. It can come from other functional teams’ learnings as well. 

Let’s return to our book app example. Say your marketing team runs an email campaign testing several messages to encourage people to come back to the app and buy books. If one outperforms the others, that’s evidence in favor of an experiment that tests similar messaging within the product. 

You can also consider:
  • Qualitative evidence from user interviews, customer care calls, or sales conversations 
  • Historical funnel conversion metrics 
  • Results from Facebook ad tests gauging interest in certain features
  • Results from tools like Pendo that help you create experiments without engineering effort
  • Research from behavioral scientists and psychologists (but take these with a grain of salt - every business and market is different!)

This exercise is best done with a smaller group, including your PM and lead engineer. Bring in marketing as well, if you’re working on activation or top-of-funnel experiments. 

When my team did this exercise, we found that many of our ideas actually had zero evidence for potential impact!

Effort 
For a team with limited engineering resources, it’s important to consider how much effort an experiment will take to build.

Our job is to get 80% of the learning with 20% of the effort. Therefore, we prioritize high-impact, low-effort experiments that can generate strong learning results. In the best case scenario, we get the same amount of learning from an email campaign that takes 10 minutes rather than 2 sprints to set up.

Some experiments will inevitably require an engineer. In these cases, work with them to better understand an experiment’s build complexity before moving forward.
Decide on a next step for learning

Once you’ve evaluated an idea, decide whether you want to move forward with it. This is much more of an art than a science. But if you remain focused on your learning goals, you’ll always have results, whether your hypothesis turns out to be true or dismally false.  

Start with the ideas that rank positively on both evidence and effort, and build (or abandon!) cases for the other ones by digging into more data or having conversations with engineers. 
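If your team likes to make this evaluation explicit, the evidence/effort ranking can be sketched as a simple scoring pass. The tactics and scores below are hypothetical, and `evidence - effort` is just one possible scoring choice, not a prescribed formula:

```python
# Hypothetical sketch: ranking experiment ideas on evidence and effort.
# Scores (1-5) are illustrative; your team would assign its own.

ideas = [
    # (tactic, evidence-for-impact score, build-effort score)
    ("Allow payments from Venmo", 2, 2),
    ("1-click purchase flow", 4, 5),
    ("Sample a book before buying", 4, 3),
    ("Subscription model", 3, 5),
]

def priority(idea):
    _, evidence, effort = idea
    # Favor high evidence and low effort: 80% of the learning, 20% of the effort.
    return evidence - effort

ranked = sorted(ideas, key=priority, reverse=True)
for tactic, evidence, effort in ranked:
    print(f"{tactic}: evidence={evidence}, effort={effort}")
```

Here "Sample a book before buying" ranks first (strong evidence, moderate effort), while the subscription model falls to the bottom until someone builds a stronger case for it.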

 
Run your experiments

This is the fun part! Run your experiments and share your results. Remember: other teams can benefit from what you learn about your customers. 

I’ve found it helpful to check in on each experiment every two weeks to decide whether to:

  1. Continue the experiment if we need more data

  2. Pause the experiment if there’s a clear outcome

  3. Iterate on the experiment if you’ve reached an inconclusive result
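That check-in is essentially a small decision rule. As an illustration only (the function name and its inputs are my own, not from the newsletter):

```python
# Illustrative decision rule for the biweekly experiment check-in.
# The two boolean inputs are hypothetical flags a team might track.

def checkin_decision(has_enough_data: bool, outcome_is_clear: bool) -> str:
    """Map an experiment's state to the next action."""
    if not has_enough_data:
        return "continue"   # keep the experiment running to collect more data
    if outcome_is_clear:
        return "pause"      # clear outcome: pause and act on the result
    return "iterate"        # enough data but inconclusive: tweak and rerun
```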

Go forth, rally the troops, and learn!

Growth is a team sport. It’s never easy to manage multiple stakeholders, but including others in the growth design process is crucial to your success. Set expectations by letting people know that any new ideas will be evaluated and prioritized before going into development. Having a strong process for this will make sure you stay focused and achieve the results you’re looking for. Good luck!

Janey Lee
 

Janey Lee is a product designer currently based in Stockholm, Sweden. She loves thinking about experimentation, design for social impact, and ice cream. 

 
Events

The Growth Mentor Summit
November 18-22 (Online)

CXL Live
April 5-7 (Austin)

Growth Hackers Conference
June 4 (San Francisco)
 
Jobs

Design Manager, Growth
Atlassian (Sydney)

Product Designer, Growth
Front (San Francisco)

Design Director, Growth
Nike (Beaverton, OR)


This is the Growth Design Newsletter. We produce monthly issues on topics relevant to growth design. Want to write or edit an issue? Hit reply!


In the December issue

Finding Growth Opportunities
Copyright © 2019 Growth Designers LLC, All rights reserved.


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

Email Marketing Powered by Mailchimp