S3 Ep. 3: Do You Nail Before You Scale Your Products?

 Subscribe: Apple Podcasts | Google Podcasts | Spotify | RSS

Hope Gurion: Odds are you’ve heard an entrepreneur or investor proclaim “Nail it, then scale it!” but what does that really look like within a product organization?   What constitutes “nailing it” and what are the practical risks if you scale too early? On the other hand, if you wait too long until you “nail” something, are you putting your company at risk of losing ground to more nimble competitors? In this episode of Fearless Product Leadership, we’re going to hear from 6 experienced product leaders as they answer the question “Do you nail before you scale your products?”


Welcome to the Fearless Product Leadership podcast. This is the show for new product leaders seeking to increase their confidence and competence.  In every episode I ask experienced and thoughtful product leaders to share their strategies and tactics that have helped them tackle a tough responsibility of the product leader role. I love helping emerging product leaders shorten their learning curves to expedite their professional success with great products, teams and stakeholder relationships. I’m your host and CEO of Fearless Product, Hope Gurion.  


The principle behind the popular product concept and catchphrase “Nail it, then scale it” is really grounded in risk mitigation.  But this phrase is easily misunderstood by leaders inexperienced in modern product development.  It most certainly does not mean you have to check off every item in your spec and then release it in a big bang to all of your customers.  If you listen to this podcast, you already know better than that. So what does it mean, and what’s the best way to apply this concept at your organization?  We have to acknowledge that there are risks and costs that product teams incur with every release.  The bigger the launch, the higher the risks, which manifest in a variety of potential costs to an organization.  In this episode you’ll hear about several of the cost and risk considerations experienced product leaders need to balance in their role.  You’ll discover how experienced product leaders build confidence that their products are sufficiently “nailed” before they scale the release of these products to customers and customer-facing teams in their organizations.  

Fearlessly tackling the question “Do you nail before you scale your products?” are:

First, Polly Howden shares why context such as the size of your company and customer base plays a key role in determining how much certainty you need before scaling a new product or feature.

Polly: So, I really love this question around what's the level of certainty you need before you actually go ahead and make a sale or ship something. I've thought about this quite a lot, and I've been influenced, I think, by having worked in two very different types of contexts. I've worked in highly strategic, sales-driven businesses where there's a lot riding on new product launches, with teams going out physically on the road to sell these things. I've also then switched to the startup space where, actually, you've got a smaller number of customers and the risk is high in that you've got fewer resources, but at the same time you can be more nimble in some ways. So I would say the context makes a massive difference. I had to unlearn being super detailed in some cases as I moved into the startup world, and recognize that you don't always need a massive rollout plan; something that seemed quite big as a new piece of functionality can just go out, and it's fine.

 

Then at the same time, looking back at a company like Gumtree, which has established brands: if you're doing a migration of a website, for example, the risk is much higher because you've got a high volume of users. That's the point at which you can employ more detail in terms of A/B testing and staged rollouts, and you can be more planned, more organized. And then, in terms of certainty: it's human nature to want to feel like there's a certain outcome, and a certain degree of planning is important and useful, but if I were talking to people in my team, I would actually want them to remain confidently uncertain, if that makes sense. You can only improve the quality of diligence on the product shaping, and the data and insights behind it, so far; you have to remain humble, because once it goes out, you don't know. You've got to be quick to respond and have the tools in place to do that. So, I would say you want to invest as much effort in being ready to react and keeping your finger on the pulse as you do in planning.

 

 

Hope: Next, Zabrina Hossain of Shopify discusses how anticipating and checking for downside risk influences how quickly her teams scale their product releases.

 

Zabrina: So, how certain I want to be before scaling the release of, or investment in, a product really depends on what the product is and the level of investment. There are two planks that a product team should have a good balance of, in my opinion: experiments and scaled initiatives. I think you need a balance of both, because experiments are those leaps of faith that result in learnings, which could lead to bigger innovations that set you apart and create immense value for your users in the future. And then there are scaled initiatives, for more established product groups, where you use more data and your user research to develop longer-term product plans. These are generally investigated very thoroughly, and there's a good idea of the total addressable market and the user impact. The bigger your user base gets, the more data and research you will have to get to know them better. As you get to know them better, you'll be able to anticipate their needs more accurately in order to build the right product for them.

So, when I'm thinking about releasing an experiment, it depends on the goals of that experiment. We look at the impacts of emergence, so if there are little tunnel, negative impacts that could possibly happen, and really only learnings or potential upsides like say, a partner has a daily deals offering that they want to offer to a set of merchants to see if buyers will would take on that, right, there's really no downside to participating, we want to make sure it's still an easy process and it's not taking away their time from other really important things; but if there's no downside, then I would just say release it quickly and release it to as many as you can. If it's something that could affect your users, and I really think deeply like if it's going to affect our merchants like actual revenue or their customers or their outward facing brand. Then, we really think deeply we would recruit alpha and beta merchants that maybe have a higher tolerance for some risk and understand but want to try those more innovative things, and we will trial with them to mitigate any risks before deciding to roll it out further or deciding to pull back an experiment, which often happens. And then, when thinking about releasing a scaled initiative. I would always start with a beta so now Shopify has over a million users so depending on the product, a rough estimate would be that I would start with maybe 1% to 5% of users and just roll something out make sure it's working, really closely analyze the effects and the metrics; and if that goes slowly ramp up fairly quickly. And at that point it's like okay we can go exponentially, and we don't have to go super slow or incrementally but we just want to make sure again no negative or adverse effects. And if the positive feedbacks coming in, then we go fairly quickly.

 

Hope: That's great. And do you go from that 5% to 100%, or do you step it up in a stair step?

 

Zabrina: It's happened both ways. Again, it varies with the type of product. We've gone to 5% and then just thought, this looks great, it's going to be fine, and ramped it up; and in some cases we'll go 5%, that looks good, then 20%, 40%, 80%, and then all the way up to 100%. We'll go faster, but still in steps. So, it really depends.
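A staged rollout like the one Zabrina describes is typically implemented with deterministic percentage bucketing, so that ramping from 5% to 20% to 100% only ever adds users and never turns the feature off for someone who already has it. A minimal sketch, with the caveat that the function names and hashing scheme here are illustrative, not Shopify's actual implementation:

```python
import hashlib

def rollout_bucket(user_id: str, feature: str) -> float:
    """Deterministically map a user to a bucket in [0, 100) for a feature."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF * 100

def is_enabled(user_id: str, feature: str, rollout_percent: float) -> bool:
    """Feature is on for a user when their bucket falls under the rollout %."""
    return rollout_bucket(user_id, feature) < rollout_percent

# Because each user's bucket is fixed, raising rollout_percent from
# 5 -> 20 -> 40 -> 80 -> 100 only ever adds users to the enabled group.
```

Seeding the hash with the feature name means each feature gets an independent 1%-to-5% slice of users, so the same early adopters aren't hit by every experiment at once.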

 

Hope: Next up, Dave Wascha shares how the amount of time, effort and cost involved, and whether the product investment is a one-way or two-way door decision, impact how he expects his product teams to release and scale products.

 

Dave: This one is a perennial challenge faced by all of us. There are a few bits of sage advice that I always turn to when we're having this debate. The first comes from Paul Graham; pretty much anything ever written about product is derivative of some blog post Paul Graham wrote 20 years ago. So if you're a product person who's not read up on Paul Graham, go read him; pretty much everything I'm saying is derivative of something he said. I like his whole concept of doing things that don't scale. Don't try to perfect the product or the process or the execution before you get it in front of customers. If you have to do something in a really inefficient way, do it manually, because it helps you really understand whether you've got product-market fit. Do it in a really unscalable way that's inexpensive and fast, even if it's inelegant and ugly and there's no way you could possibly productize it, because what you're looking for is evidence, in a feedback loop, as to whether or not you should continue to invest. What you want to do is minimize the amount of work you have to do in order to start getting those signals. So I'm a big fan: if you haven't read Paul Graham's blog post "Do Things That Don't Scale", go read it, because I think it helps.

 

The second thing is that teams often come to me with a plan where getting the evidence to justify moving forward is more work than actually just putting something out in front of customers, and I'm always surprised by how frequently that occurs. Teams feel the burden of proof, and the amount of work they need to do, probably in a bigger corporate environment, is often weeks or months more than it would take to just mock something up and put it in front of people. That's an irresponsible use of resources, frankly, and I think people lose perspective. So, if it's going to take longer to prove it than it would to just do it, I would say optimize for doing it, and that will give you what you're looking for: a signal, a feedback loop, as to whether or not you're moving in the right direction.

 

The third bit of wisdom that I look to is our friend Mr. Bezos, and the concept Amazon has popularized around one-way doors and two-way doors. The quick definition: if you make a decision that you can easily undo, they would call that a two-way door; you can walk through the door and then walk back through it. If it's no-regrets, if it's low risk, there's not a lot to lose in just doing it, because you can roll it back. A one-way door means you cannot easily come back from making this decision and executing on it. So the massive investments, say a £50 million or $50 million acquisition, are harder to undo, and therefore require more thought, and the burden of proof on the decision is higher.

So, I frequently encourage teams, and pretty much give anyone license at any level: if it's a two-way door, just go and do it, and then find out. Again, in terms of reducing the amount of work you need to do in order to make a decision, sometimes it's just faster and more accurate to make the decision and move forward. So if it's a two-way door, do it. If it's a one-way door, the burden of proof is proportional to the size of the risk and the size of the investment, and there's a quick conversation to have: if it's a $50,000 investment in a piece of software, it's probably worth spending an hour's meeting on it and making sure all the stakeholders in the business are aligned on that decision. If it's a $500 purchase, or a $50 purchase, or a single feature that might impact a couple of customers, then why not just go do it and see what happens? Worst case, you've lost an hour or $50 or something like that, and you can undo that decision quite easily.

 

Hope: Audrey Cheng shares how the size of the investment and its opportunity cost play a role in determining how much the team should focus on certainty before scaling or committing to that product decision.  And we compare products to babies.

 

Audrey: When I think about how certain my team needs to be in order to make an investment in product, quite often what I'm asking them for is to really understand what the problem is that they're trying to solve for our customer, and what kind of impact solving it would make on our customers' lives. That's really at the heart of what we're trying to achieve. On top of that, we need to focus on which problems we can solve that also realize some value for our business. We could solve any number of problems; our customers have a lot of challenges facing them. But it's those key problems, the ones that would really help them and that they're willing to make an investment in themselves, that will help us not only serve our customers into the future, but also grow a sustainable business that will be here in the long term to support them.

So, when we think about what that means in practice, what I'm asking them for is to understand the problem, how we might go about solving it, and what it would take to actually invest in solving it. Take a more complex problem: if we were to, say, introduce or swap out our search engine, that's quite a big change, and we want to really understand what we would realize if we actually went forward and did it, so we want more certainty in the impact it would make for our customers and users. Compare that with something like, "We're experiencing some drop-off here, we think we can make this small change and fix this problem." For that smaller change we'll accept less certainty that the exact solution will make the impact we expect, because if we get it wrong we can make some small tweaks and fix it again. But if we get something fundamental wrong, like one of those big features such as a search engine, it's far more detrimental: you've made an investment in something that perhaps doesn't get delivered to market because it became too big and wasn't scoped properly, or that doesn't actually have any impact on your customer base at all, and you've lost the opportunity to work on something that could have realized real benefits and impacts for your customers. Also, even with the smaller example I mentioned, we're actually trying to measure. Measurement is a really key part of what product managers should be focusing on, because quite often we hear stories of "build it and it's done." The thing with product is, it's like a baby, right?
The baby arrives and you have to keep nurturing it, monitoring it, and seeing what's the right decision to make for this child to grow. Likewise with your product: as you build and release something to help solve the problem, it should be additive as you go, and you should be constantly monitoring it and asking, "Hey, are we doing the best thing we can for our customers, and at what point should we start investing further?"

 

Hope: Great example, I love the baby analogy. I've had some awkward conversations with product managers on my team over the years where I'm using the baby analogy: you're the parent, you're trying to make the best decisions for this child. But at some point, if you're endangering the child, you know, you don't always get to keep that child. That's where I tell them the analogy starts to break down.

 

Audrey: Yes, step back from the baby!

 

Hope: Seth Roe shares how scaling at his company is more about scaling resource investment and what he and his leadership team want to see to unlock additional investment in staffing the product and engineering teams.

 

Seth: The way I look at it is that as you go through the phases of discovery and experimenting at scale, to really identify the winners and bring confidence back to the leadership team that we're moving in the right direction, it's all done through what I call staged investment. As we go through each of those stages, you're continuing to bring more and more confidence back to the executive team: yep, we're moving in the right direction. As we go through that process, I don't have any issue with teams bringing back failures that we've learned from along the way. The motto that you and I have talked about previously, and you actually coined the term for me after I said it, which I greatly appreciate because I use it all the time now, is "teach me something new." We're all about embracing failure, as long as we're learning about what works for consumers and advertisers, and as long as it's not something we already know, research we've done previously, or stuff we've tested previously. I'm always excited for what the teams can teach me, and I never feel bad about hearing about things that didn't work, because more often than not, honestly, all of these teams eventually find success, so you know they're obviously doing something right. It doesn't matter if it takes 10 wrongs to find the one thing that achieves our outcome. That's totally cool with me.

So, in that same breath, it's just little steps that ultimately lead to a place where we feel like we want to double down, and then we accelerate the winners.

 

Hope: And for that, are you reviewing the teams like this: you have an outcome goal, and the team is learning what might get you closer to that goal or what's pulling you away from it? You call it staged investment, yet there's a team. So what's the lever of investment that can go up or down based on what the team is learning, and what are the intervals at which they're teaching you what they've learned?

 

Seth: When I say staged investment, it's usually the number of engineers. Some things really are a $10 to $20 million idea and may take six people, versus other areas where folks have identified a bunch of great optimization plays that will generate $5 million, but they can get through it with a couple of people on their team. So the stages of investment for me are usually about how big the opportunity is and how much of it needs to be broken into smaller pieces, but it's still, when you look at a single opportunity and the number of people on it: wow, now we have eight people working on this one big thing. The point I'm trying to make is that I don't like putting eight people on something with no proven success, and I think we were doing that at one point, where from a bit of a top-down perspective we were just making bets that, in the grand scheme of things, didn't have any evidence they were going to work. I think that bit us a couple of times, honestly, so we've been trying to avoid that as we move forward and make sure we're always getting that little bit of learning and confidence to keep moving in the same direction.

 

Hope: Yeah, I think that's a healthy tension, because I've seen at other companies that it goes like this: we have a great idea, we've even decided it's worth investing in, we know we'll need some number of engineers to do it, and they jump to staffing a team. And yet the team is still actually figuring out what it should be, or how big the opportunity really is. Then there are morale issues, and it's not progressing as quickly. So it seems like you've found the right balance: keep the teams focused on learning, and when it's obvious there's a much bigger opportunity and more resources are needed to actually realize it, you have a mechanism by which that can be enabled for the team.

 

Seth: Yeah, no, that's exactly right. One thing that was happening was that we would usually restructure on an annual basis, and in that first quarter, specifically the first month, the engineers would never be working on what their new team had signed up for, right? We'd still be using them on other leftover projects or tech debt or whatever it was. You absolutely nailed it. So we're really trying to give these triads the breathing room they need to give us the confidence that they're ready for engineering, and then, as long as they continue to demonstrate they're moving in the right direction based on whatever opportunity they identified, we're going to keep resourcing them until something more valuable comes along.

 

Hope: It's also beneficial for the engineers because it feels like that so often, and it's like even calling it resources like I hate those terminology is so clunky right. If you've got evidence of this opportunity, it's grounded in what you've learned from your customers and the opportunity in the market, you actually can get more, excitement and participation, and engineers, opting in for, “Yeah I want that opportunity” as opposed to you know we have this like tweet of an idea that we put dollars against and, like, now go do this thing where it's not, they don't get to make as much of an informed choice about where they're, how they're going to contribute to the success of that initiative.

 

Seth: Yeah. Very well said.

 

Hope: Finally, Ben Newell shares the advice that stuck with him that impacts how he thinks about the relationship between cost and confidence when building, scaling and maintaining new products and features.

 

Ben: I think this is a really interesting question and one of the hardest things for product managers to do. I once had a boss who asked a group of product managers, "What's the worst thing that can happen to a product?" Someone raised their hand and said, "Nobody uses it," and he said, "Well, actually, it's that one person uses it, because now you've got to continue to maintain it." So, when you think about scaling things out and rolling them out more broadly, you've got to make sure you understand what the maintenance of that capability is, as well as what the feedback would be from the clients who did use it if you told them they could no longer have it. Those are a couple of important considerations when you're trying to understand how certain you need to be. If you put confidence and cost on an x-y axis, the more costly it is to continue to maintain something, the more confident you're going to want to be. But one of the things that I have pressed very hard on over the years is that our job as product managers is to generate that confidence. We need to make sure that we're doing the product discovery, doing the research, talking to clients, and working with sales, so that we can be more confident as we roll into production. Leveraging our engineering team members to build things is one of the most costly things we can do as a company, and as stewards of our money it's important as product managers to understand that. So, obviously, A/B testing frameworks and capabilities are key. These days, if you don't have that, you're not making good decisions. You really need it, and it's quite simple to implement great frameworks.
We used a framework called Apptimize for mobile apps; Optimizely is another great framework, which covers both web and mobile apps. It's pretty straightforward to put those in place, and it gives you a clear understanding of what kind of lift you can expect to see from particular features. So, when we think about how confident we need to be, obviously we're setting goals, and a lot of times that's tricky; in some cases we're just making them up: choosing a goal, moving forward, and then measuring against it. And when it comes time to scale out, you want to make sure you understand the cost, both in maintenance and in customer interaction, of scaling those kinds of things, and then plot that out relative to confidence.
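The "lift" Ben describes, which frameworks like Apptimize and Optimizely report, boils down to comparing conversion rates between a control group and a variant group and checking whether the difference is statistically meaningful. A framework-agnostic sketch using a pooled two-proportion z-test (the function name and example numbers are illustrative, not from either product):

```python
from math import sqrt, erfc

def ab_lift(control_conv: int, control_n: int,
            variant_conv: int, variant_n: int):
    """Relative lift and two-sided p-value via a pooled two-proportion z-test."""
    p1 = control_conv / control_n          # control conversion rate
    p2 = variant_conv / variant_n          # variant conversion rate
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))       # two-sided tail probability
    return (p2 - p1) / p1, p_value

# e.g. 100/1000 control conversions vs 130/1000 variant:
# a 30% relative lift, significant at the usual 5% level.
```

This is the same arithmetic behind the dashboards; the frameworks add the hard operational parts (random assignment, event collection, sequential-testing corrections).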

 

Hope: I'm glad you brought up the cost of maintenance, because it's not just how much it will take to build it at scale on the engineering side. I've also found there's an organizational cost, especially in a B2B environment: the cost of getting the customer success team to understand what it is, all the knowledge-base training, educating the sales team, and then, if it changes, the overhead of helping the whole organization understand the capability at scale. So there's a lot; if you feel really confident about its impact on your goals, that helps justify the engineering and organizational cost of absorbing this great new thing.

 

Ben: It's a really important point: not just the cost of building it, but of rolling it out, training everyone, and making sure it continues to get investment. We had a particular capability inside Like To Know It, which is a consumer app from Reward Style. I really liked the capability a lot, but it was very much like a different app, one that had been in market for six or seven years, and we were dipping our toe in that direction. You could totally see a path where we would become a lot like that other application, and that really scared me, because as soon as we started exposing it to customers, they were going to want all the capabilities that the other product has, and we would either have to build them or take it away. And once it's out there, it's hard to sunset features, particularly those used by a heavily loyal base. So that was a really difficult decision. We actually did roll it out, and we had lots of clients who pushed it down that next path. Eventually we were able to morph it into something we were really proud of, something different enough that we were planting our own flags. But it was a really tricky process to navigate.

 

 

Hope: Every product investment is a risk-reward decision.  As we heard in this episode, different organizations have different risk tolerances.  Some are more concerned with managing internal costs and don't want to overinvest in building out too large a product team until they're confident in the market potential and product-market fit of their product.  Others are more concerned with not creating harmful experiences for their customers, and want to see both the positive and potential negative impacts on customers before they scale up exposure.  And of course, product leaders want their teams to learn from prospective customers and users as quickly as possible, and to help them understand and navigate both the risks and rewards in their product decision-making.

 

For any new product leader at an organization well past the start-up phase, it's important to know which costs your organization is most concerned about.  I suggest new product leaders gauge this with their peers on the leadership team to mitigate the downside risks of scaling too quickly for the customers' or organization's comfort level.  You can surface these perspectives by asking questions such as:

  • Have you ever released a product to customers and it crashed and burned?  Why do you think it did?  What practices did you implement as a result?

  • How do you determine when you’re underinvesting in a product vs its potential?

  • Have you rolled back a release? If so, what led to its rollout and rollback?

  • Have you ever run an alpha or beta program for a new product?  How did you know you were ready to scale?

 

I want to say thank you to Polly, Zabrina, Dave, Audrey, Seth and Ben for sharing their expertise on this episode.

 If you’re a product leader seeking to nail and scale products and product teams in your organization, I’d love to be of help.  Contact me on Linkedin or Twitter or schedule an initial consultation with me using the Contact Me page at www.fearless-product.com/contact.
