(I hate when people try to make some facet of their little niche industry into a professional maxim, but I’m going to do it anyways, because it’s a really easy way to write on the internet. So just know that I work at small-medium sized software companies and take my myopic view of the universe for what it’s worth.)
One of the most frustrating things for me at nearly every company I’ve ever worked for is inconsistency in decision making. I’m not talking about the actual decisions being made — I’m talking about the way they are made, and what’s expected/required to justify them.
I often mention my favorite quote — “if we have data let’s use data, if all we have are opinions, let’s use mine” — but this isn’t really about that, either. Or at least, it’s a more nuanced version of that. Because now, as I rapidly approach middle age (ed. You’re already there, champ), I’m completely at peace with the idea that people are going to pull rank in any “tie goes to the runner” decision scenario. That’s cool, I get it, “that’s what the money is for”, etc.
Instead, what I struggle with is one level up — what IS the role of data and objective evidence for any given decision, and more importantly, can we decide that before we start debating potential solutions?
More On Data (see what I did there?)
Now, I swear, I’m not here to beat my usual drum about people being wildly delusional about their ability to collect, normalize, or analyze data. In fact, let me give you an example of a theoretical discussion where the problem I’m talking about shows up.
QUESTION: Should we build Product A, or Product B?
PERSON 1: 65% of customers said they would consider buying Product A, if we built it.
PERSON 2: Our competitor does not have Product A.
PERSON 3: It would be especially difficult for our company, of all companies, to build Product A.
PERSON 4: Companies that offer a version of Product A are able to charge an average of 25% more than our current price.
PERSON 5: Product A is basically just a simpler version of Product X, which we want to build anyways.
PERSON 6: Product A is not something our normal customer would buy — we would need someone with greater/different purchasing authority.
PERSON 7: Everyone is talking about Product A like things. It’s on the cover of Wired.
PERSON 8: We will need to hire 10 experts to build Product A.
PERSON 9: Some rich guy will give us 10 million dollars if we promise to build Product A.
PERSON 10: Our current Product will be illegal by 2024, so we should definitely learn how to build something else.
Now, I very intentionally didn’t put anything worthless in these examples — all of this stuff is important, and while you can argue about the relative weight each should receive, I would never blame a company or management team for (a) bringing one of these things up, or (b) considering it in the decision. But… OVERALL, this is a horrendous way to make decisions, and also how basically 95% of the important meetings in my life have gone (just make everything vaguer and extend this to an hour or two).
Obviously, part of the problem is just how complicated a decision my theoretical situation appears to be. There are (apparently) a lot of highly consequential things to consider! So it’s not surprising that one of the hardest parts of group decision making is breaking complex discussions down into smaller ones without losing the overall context of the larger decision. For example, you could have some giant series of meetings about the costs & challenges of hiring the necessary experts to build Product A, only to realize later that you have plenty of money and your current product is going to become illegal. But of course, if you try to solve all of this stuff in one mega conversation, it’s going to train-wreck and you’re going to both over-cover and under-cover various factors.
As usual (for me), I blame this entire nightmare on typical business thinking that values precision over coherence. And as I listened to my son explain to me the many small differences between his three to five (the exact number is unclear) imaginary dogs the other day, I thought about how to solve for this.
Performance Decisions & Non-Performance Decisions
“Performance” may already be a real business-school term (the hell if I know), but it’s come up a lot for me lately as sort of a clarifier for “quantitative” things. To me, it represents the idea of getting a very specific, very visible return on something that you do. Now, obviously, it’d be nice if everything we did was like this, because then there’d be no risk to anything that we do (or at least no risk greater than what we put in to find out the performance of something). In fact, that’s SO compelling to people (especially rich and powerful people with a lot to lose), that the business world likes to pretend the secret to building an entirely performance-driven organization is simply to demand that everything be a performance decision all of the time. And you can do that! That is, as long as you are an enormous monopoly, a bank, or some other form of established value-extraction.
If you’re building something new, or trying to grow on the non-margins, you are kidding yourself if you think you can make exclusively performance-driven decisions. You won’t have the data, and just as importantly, relying solely on reasonably short-term data as the arbiter for everything is going to restrict what you choose to do to a suffocatingly narrow range of things. Companies that make purely performance-driven decisions are often sitting ducks to be — yes, I’m bringing this word back, just for today — DISRUPTED!!! That’s because you aren’t going to be able to A/B test the impact of sitting still while someone else changes the fundamentals of your market, customer experience, pricing structure, etc., so when it happens you’ll just be standing there scrolling through Excel trying to figure out where it all went wrong.
That’s why to either build something new, or keep your company from getting knifed to death in an alley by Apple or some other non-bank, you simply HAVE to make some non-performance decisions. These are decisions that DON’T necessarily have a measurable impact, or at least not one that you’ll want to use to validate whether you made the right decision or not.
Wait, this already exists, doesn’t it?
Ok, yes, you could write this off as “strategy vs. tactics”, or “qualitative vs. quantitative”, but I actually don’t think that’s quite right for a couple reasons.
- Non-performance decisions can and probably SHOULD consult some quantitative data. For instance, you might move into a market that — via actual numbers — you decide is potentially very lucrative. Or you might see customer behavior that indicates an easy path to up-sell or cross-sell. But in a performance decision, those things wouldn’t be enough to get you to pull the trigger. You’d need conversion rates and all the other forward-looking stuff you won’t have in order to say “THIS WILL HAPPEN IF WE DO X, SO LET’S DO X”.
- Performance decisions are often tactical, but I don’t necessarily think they have to be, and I actually think I’m starting to see more and more growth-stage organizations try to set strategy via performance. Theoretically, with good enough, broad enough data (and a really wise hand guiding the whole decision making process), I think you could use performance style data to generate strategy. Think of something like Apple’s decision to get rid of the big HomePod. Yeah, sales probably weren’t great, but I bet they weren’t HORRIBLE, either. Given their history and willingness to stick things out (like the watch) I feel like there was something more specific about that sales data, usage data, or something else that made Apple think “let’s do this with small, cheaper speakers instead” besides “that’s what everyone else is doing”. Unfortunately, YOU are not Apple, and you do not have anything even remotely approaching their customer base, operational skill, or data literacy, so you can’t really do this unless you want to delude yourself.
What I’m asking for
So, let’s get back to my original terrible meeting point (my egg sandwich is gone and I’m going to try to bring this home). If all that stuff is important, how are we supposed to make this decision?
Well, again, what drives me up the wall isn’t that we make the wrong decision. You can’t control for that. But we can get rid of the apples-and-oranges thinking that lumps all of this stuff together and causes people to pretend they are making performance decisions when they are not, by doing some of the following:
Identify risks and rewards.
Most people won’t want to quantify the big, broad, existential ones, but back-of-the-envelope math can help you at least decide whether something is small, medium, or large. Saying “we will need to hire people to integrate these platforms” is important, but taking a minute to roughly outline that it’ll cost about five million bucks or whatever is worth it, because now you can compare that cost to other costs and potential benefits. Too often, everyone has a different idea in their head about what a proposal is supposed to cost & do, and then it just turns into whether people like it or not.
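To make that concrete, here’s the kind of napkin math I mean, sketched out as a few lines of Python. Every number below is a made-up assumption (the headcount, the fully loaded salary, the customer base, the price), with the 65% and 25% figures borrowed from the theoretical meeting above; the point isn’t the answer, it’s that once a proposal is in rough dollars you can actually compare it to the other costs and benefits in the room.

```python
# A made-up, back-of-the-envelope sizing of "hire 10 experts to build Product A".
# Every number here is an assumption you'd argue about in the room; the point is
# just to get the proposal into the same units as everything else on the table.

experts_needed = 10
fully_loaded_cost_per_expert = 250_000    # salary + benefits + overhead, per year (assumed)
years_to_ship = 2                         # assumed

build_cost = experts_needed * fully_loaded_cost_per_expert * years_to_ship

# Translate the other "facts" from the meeting into dollars, on an invented customer base.
customer_base = 10_000                    # invented
current_price = 1_000                     # per customer, per year (invented)
price_premium = 0.25                      # "companies with Product A charge ~25% more"
interested_share = 0.65                   # "65% said they'd consider buying it"

annual_upside = customer_base * interested_share * current_price * price_premium

print(f"Rough build cost:    ${build_cost:,.0f}")
print(f"Rough annual upside: ${annual_upside:,.0f}")
print(f"Very rough payback:  {build_cost / annual_upside:.1f} years")
```

Even if every input here is off by 2x, you’ve at least established that this is a multi-year, multi-million-dollar bet rather than a “small” thing, and that’s often all the precision the decision actually needs.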
“Why?”
I had to adjudicate a lot of GDPR concerns at a relatively small company with a relatively junior staff that was way too eager to drop everything in the name of GDPR compliance. While I’m all for following the law, it was a useful exercise to unpack the actual, operational impact of some of their many doomsday scenarios, especially when the law was brand new and enforcement and penalties were entirely unclear. The same thing applies to other broadly “good” or “bad” things — make sure you aren’t comparing a 65% improvement in X against “Y is better for customers”. If this is a non-performance decision and we want to do Y because Y makes logical sense, just do it, and throw the 65% away. But be prepared not to get that decision validated by data in three months, because it’s not going to happen.
Don’t kid yourself
Way, way, WAY too many people (especially executives) just want to do something for reasons they can’t (or are scared to) articulate, and desperately want numbers to cover their butt for them. I get it, that’s fine, everyone does it — the problem is that you’re basically exchanging your expertise and the value of your intuition for numbers that can be generated by anybody, and if you DO have good intuition, you’re probably not all that good with the data anyway. So now, because we did your idea, we have to do this other, much dumber idea, because it TOO will supposedly increase retention by 5% using the terrible math you don’t even believe. And what are you going to say then?
I think a lot of more senior leaders are concerned with becoming “because I said so” types of people. But here’s the thing — the problem with that isn’t your lack of data, it’s your CRAPPY EXPLANATION of why you want to do things a certain way. If you’re bad at explaining your intuition, relying on numbers you are probably also bad at explaining isn’t solving anything, especially since (as in the aforementioned scenario) when someone else brings those same numbers to undermine you, you’re going to tell them to go back to their desk, or pat them on the head and never actually prioritize their idea.