Holly Hester-Reilly on Product Science and Experimentation

Holly is a product development expert. Our 10-question interview with her is truly a masterclass in product thinking.

Perspectives
November 12, 2018
John Cutler
Former Product Evangelist, Amplitude

Holly Hester-Reilly has a fascinating perspective on product management, where experiment-driven thinking meets systems thinking. Prior to jumping into product roles at Shutterstock and MediaMath, Holly studied chemical engineering and practiced research science for several years. It is safe to say that “experiment” is not a word she throws around lightly.

In 2017, Holly was a guest on the This is Product Management podcast. Her focused approach to experimentation and learning—what she calls “product science”—has gained a lot of popularity internally here at Amplitude. So…we were eager to ask her some follow-up questions! This interview is truly a masterclass in product thinking, and we hope you enjoy it. Thanks, Holly!

John Cutler (Amplitude): We’re excited about your upcoming podcast, Product Science. Why science? Where is the intersection of art and science when it comes to product management?

Holly: I think the best product managers are like scientists. They have ideas, theories, visions, but they question everything and are always open to changing their minds upon seeing new evidence.

Back when I decided to start a training and consulting company, I thought to myself, what is my perspective on product management? And I realized that it was summed up well in the idea of product management as a science. We so often hear people talk about intuition and the icons with great “product sense”—but I don’t believe that’s really true. I think the product managers and designers who have a good intuition, who are able to tell what users will really do or what will drive a business forward, are applying principles that everyone can learn. Whether they are the principles of behavior science, experiment design, or team function, I believe what’s going on can be observed, understood, tested, and taught.

“I think the product managers and designers who have a good intuition are applying principles that everyone can learn.”

“I think the best product managers are like scientists.”

And that’s what I want to do, to pull out those principles and help more people learn and apply them. And the best way I know to do that is to use the scientific method to test and learn.

You have a background in chemical engineering. How has it influenced your approach to product management?

In studying chemical engineering, I learned a lot about how systems operate at scale, and how the forces within those systems were observed, tested, and described by great scientists like Isaac Newton and Albert Einstein. We used equations to describe in great detail what was going on in things like turbulent fluids, chemical reactions, and complex molecules like polymers.

When I moved into tech and product management, I always visualized the systems around me kind of like the molecules and fluids that I had studied. The information flow within a complex, growing organization like the ones I was working in could be imagined similarly, and each person’s actions and decisions could be understood better that way.

Also, I practiced research science for several years—working in a chemistry lab, designing and conducting experiments, collaborating with others who conducted original research, and ultimately publishing a small paper of my own. So I think that influenced my perspective: just because no one has written a textbook on a topic yet doesn’t mean there aren’t laws to be discovered and shared.

Related Reading: Why Experiment?

How do you balance experimentation rigor with “real world” pressures? When you are advising teams, how do you make sure they don’t go too far down the rabbit hole?

Ah, the perennial question. When I was working at a pre-market startup for the first time back in 2008, we had some team members, trained in academia, who would seek the beautiful, perfect solution over the pragmatic, good-enough one. And you know what? That startup didn’t survive. One of the biggest things I’ve learned since then, and would do differently, is to ship more often and test with customers more often. So I learned that lesson the hard way, and that was the last time I was on a team that didn’t ship or test with users on a regular basis.

Related Reading: How to Thrive in the Product-Led Era

But I do sometimes encounter people who’ve gotten stuck in the research quagmire. It’s much more common in enterprises, where teams have the runway and resources to pursue “the perfect decision.” To keep teams from getting stuck there, I ask them to go through a pre-mortem exercise: we brainstorm the ways our product initiative might fail and then identify which risks are the biggest. From there we can ask, what’s the smallest experiment that we can run to help us de-risk this?

“What’s the smallest experiment that we can run to help us de-risk this?”
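
To make that prioritization step concrete, here is a minimal sketch of scoring and ranking pre-mortem risks. The likelihood-times-impact scoring and the example risks are illustrative assumptions, not a method Holly prescribes in the interview:

```python
# Minimal sketch: ranking pre-mortem risks so the team knows which one
# to de-risk first. The risks and 1-5 scores are hypothetical examples.

risks = [
    # (risk, likelihood 1-5, impact 1-5) -- hypothetical brainstorm output
    ("Users don't understand the onboarding flow", 4, 3),
    ("The integration partner changes their API", 2, 5),
    ("The target segment won't pay at this price", 3, 5),
]

# Biggest risks first: these are the candidates for the smallest
# experiment that can de-risk them.
for risk, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"score={likelihood * impact:>2}  {risk}")
```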

At the end of the day, whether it’s for shipping code, sharing designs, or making decisions from research, I consider myself a recovering perfectionist and try to remember and espouse the mantra, “Don’t let the perfect be the enemy of the good.”

There’s a common model (“double diamond”) for discovery and development that describes a divergent-then-convergent “explore” (problem) phase, followed by a divergent-then-convergent “exploit” (solution) phase. Does experiment design differ depending on which phase you’re in? And how do you help teams keep moving when there’s typically so much to learn?

Good question. In the explore phase, teams are usually looking to get wider samples. They’re trying to describe an unknown area—typically a problem space, where they need to understand: who are the people, how do they describe this problem, how bad is the problem, and what makes it the same or different for one person versus another in the space? So the experiments tend to be more about generating understanding and laying foundations. We’ll often be doing lots of customer interviews, market sizing, surveys, competitive research, and observation. Depending on a person’s background, they might call this generative, ethnographic, or discovery research. It’s research to help us define our strategy. The biggest difference in this phase is that we don’t need to have designs to react to. (I’ve written about doing this kind of research in an article for the User Interviews blog, Never Ask Users If They Like Your Idea, Do This Instead.)

In the later phase, once we’re into that “exploit” step, we’ve settled on the area we want to be in and the problem we want to solve. Now we’re testing things that are more tactical. Which designs are resonating? Which directions are making an impact? The best experiment-driven teams might create several prototypes or design directions and put them in front of users. They also wouldn’t just put what they want to test in front of users; they’d first ask themselves, what do we want to know? For example, is there an element of this design that would be expensive to build? If so, is it really necessary? Can we test whether users can get a similar result with something easier to build?

As to keeping teams moving, I ask them how reversible a decision is. If we’re talking about something that is easy to change, let’s try it and keep moving. If we’re talking about something that will be hard to change, like elements of the product vision and strategy that will influence the tech stack or architecture decisions, I tell them to take the time to inform decisions like that. But for anything where building and testing is cheaper than researching, go forward and do it. (The best-known example of this approach is the “one-way vs. two-way door” framing from Jeff Bezos’s annual shareholder letters.)

“To [keep] teams moving, I ask them how reversible a decision is.”

I’ve observed teams run ONLY successful experiments. There’s never a failure. And it seems to grind down the team and trust begins to fray. How do you make the mindset shift to get to a point where unearthing disconfirming information is seen as a “win”?

That’s a tough one. I’ve been exploring different approaches. What I’ve gotten some results on is telling stories of how finding disconfirming information has helped teams avoid disaster, or telling stories of how ignoring it has led to disaster. Then to help them get over the “but not here” syndrome, I tend to go with “lead by example”—what can I find, even the smallest thing, where I can craft a story around something unexpected that we learned and what value that brought us?

I also push teams to test qualitative things, which are harder to define as success or failure. They should be hunting for stories, nuggets of information. That can help them open their minds to learning something new in an area where they don’t have as much anxiety around failure.

“I push teams to test qualitative things, which are harder to define as success or failure.”

Related Reading: 3 Mental Models Every PM Needs to Make Decisions

We love stories. Do you have a story of data myopia, where someone missed the forest for the trees?

I worked with one client that had an abundance of data, but they had historically looked at it in pretty generalized ways. They saw things like “customers who visit our brick-and-mortar stores spend more” and came away wanting to push people to the store, figuring that someone who visited their app or website would spend more once they got to the store. This led to lots of internal debates and tension between teams, and it was headed towards a classic case of HiPPO (going with the Highest Paid Person’s Opinion). Luckily, we were able to show them that even their highest-value customers wanted the app and mobile experiences and valued that they could shop in whatever way they wanted, but it took real data to tell the story.
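
Holly’s story is also a good illustration of why segmenting matters before drawing conclusions. Here’s a minimal sketch, with a hypothetical schema and toy numbers rather than the client’s actual data, of how slicing spend by customer-value segment and channel can tell a different story than a single per-channel aggregate:

```python
import pandas as pd

# Toy transactions table; column names and values are hypothetical.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "channel": ["store", "app", "store", "store", "app", "app", "store", "app"],
    "spend": [120.0, 80.0, 60.0, 40.0, 150.0, 170.0, 30.0, 25.0],
})

# The myopic view: a single average per channel.
print(df.groupby("channel")["spend"].mean())

# A less myopic view: segment customers by total value first, then look
# at how each segment actually shops across channels.
totals = df.groupby("customer_id")["spend"].sum()
df["segment"] = df["customer_id"].map(
    lambda c: "high-value" if totals[c] >= totals.median() else "lower-value"
)
print(df.groupby(["segment", "channel"])["spend"].sum())
```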

The growth movement is a “thing.” What is it about the word “growth” that seems to capture so much attention? When you’re out there working with customers, how do you weave growth-centric concepts into what might be termed “traditional” product management?

It’s interesting: as I’ve been working with customers from companies of many different sizes and stages, I’m seeing that in some circles, “growth” seems to be even more of a thing than product management! I think it captures attention because it’s so easy to understand. If you asked someone from outside the tech industry what a growth manager does, they would probably guess more or less correctly. Product management as a discipline and a role has so much nuance. There are nuances to practicing growth well, too, but the name is just easier to understand.

For me, it’s like when you ask someone what their business goal is and they say, “To make money.” And well, sure, that’s one of the reasons they run a business, but it’s not a strategy. It doesn’t help you make decisions, understand nuances, or balance short- and long-term factors…

When I’m working with clients and coachees, for me the growth-centric concepts come from looking at real business numbers and talking about metrics. I believe it’s really important to identify what metrics you want to track and to prioritize no more than one metric per team. Once you have identified your one metric that matters, you can evaluate your experiments and efforts on their impact on that metric. That’s a growth concept as well as a product management concept.

“I believe it’s really important to identify what metrics you want to track and to prioritize no more than one metric per team.”
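
As an illustration of that evaluation step, here is a minimal sketch of judging an experiment by its lift on a single prioritized metric. The metric, numbers, and ship threshold are hypothetical assumptions, not figures from the interview:

```python
# Minimal sketch: judging an experiment by its lift on the one metric
# that matters. All numbers here are hypothetical.

def relative_lift(control_value: float, variant_value: float) -> float:
    """Relative change of the variant vs. the control on the chosen metric."""
    return (variant_value - control_value) / control_value

# Suppose the team's one metric that matters is weekly active creators
# per 1,000 new signups, measured in each experiment arm.
control_metric = 212.0
variant_metric = 239.0

lift = relative_lift(control_metric, variant_metric)
print(f"Lift on the one metric that matters: {lift:+.1%}")

# A simple decision rule: ship if the lift clears a pre-agreed threshold.
# (Real teams would also check statistical significance and guardrail
# metrics before shipping.)
SHIP_THRESHOLD = 0.05
print("Ship" if lift >= SHIP_THRESHOLD else "Keep iterating")
```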

What is the easiest-to-fix mistake you see out there with your clients? How do you fix it?

Hm. The easiest one to fix is not communicating enough. Most of the other mistakes take learning and patience, but we can usually address communication and teamwork pretty quickly with some new practices around standups, demos, retrospectives, frameworks for interaction, etc.

Song that captures the ups and downs of product development?

That’s a tough one. I think Dolly Parton’s 9 to 5 captures some of the feelings and fears…

Finally, do you have any new things cooking for late 2018 into 2019? Care to share?

I’ve started recording interviews to launch a podcast, which I’ve been thinking about and planning for what feels like forever. I’ve done so many customer interviews in my life that I thought, I should be using my interviewing skills to help teach others about product science. So, The Product Science Podcast is coming sometime around the new year!

Learn more about Holly’s workshops, connect with her on Twitter, and keep your eyes and ears open for the Product Science podcast.

About the Author
John Cutler
Former Product Evangelist, Amplitude
John Cutler is a former product evangelist and coach at Amplitude. Follow him on Twitter: @johncutlefish