The last two companies I’ve worked for have used OKRs, a system for company, team, and employee goal setting popularized (and possibly invented, for all I know) by Google. Personally, I’ve been very frustrated by OKRs, and if anything, they’ve made me less productive. I don’t think I was alone, and that’s one of several reasons Contactually scrapped them last spring.
It’s not a bad system, but like a lot of the business theories coming out of the startup world these days, it suffers from a desire to constantly evaluate everything quantitatively that isn’t matched by an ability to effectively measure things that way. In fact, OKRs are kind of the embodiment of this philosophy, where for some reason startups still attempting to do basic things like define what their product is for and how it works decide that they’re going to operate like Google, a billion-dollar public company with one of the most (if not the most) advanced data infrastructures in the world.
The motivation behind being data-driven is fantastic, because it’s a motivation to not be driven by bullshit or blind ideology. This is a truly wonderful characteristic of stereotypical venture-backed startups, and probably the main reason why I enjoy being a part of them so much. These companies want to do things that make a difference, not just things that make them feel like important companies, and that’s a big part of why startups have successfully shaken up or even dismantled long-established markets.
But that’s the emotional drive behind OKRs; the practical application is another matter altogether, and one that I think a lot of organizations have really struggled with. Recently, I’ve heard marketing people start referring to being “data-informed” instead of “data-driven”, which I think attempts to fix the wrong part of the term. What you really want is to be reality-driven, and data is often only an idealized proxy for reality. If I’ve learned anything from the last ten years of work, it’s that getting reliable data and making sense of it is actually very difficult, and very expensive in both time and dollars. It’s easy to say “let the data decide”, until two people are waving contradictory Salesforce reports at each other that they both spent hours on — hours that could have been spent doing unquestionably useful things like talking to customers or fixing obvious problems in your product.
Basically, despite all of our wishful thinking, even rudimentary data science is still really hard for most businesses. But we’ve allowed the still-young analytics industry to convince us that it doesn’t have to be, as long as we buy the right tools. Well, by now I’ve used many of the best analytics tools, and while lots of them are really cool, none of them make the full business data experience — gathering, analyzing, interpreting, and auditing — easy, or even predictable, for a company without significant dedicated resources. Not one.
Back To OKRs
That brings us back to the OKR, an acronym that stands for “objectives and key results”. The “objective” part is fine — OKRs are designed to nest goals so that everything everyone is trying to do ultimately bubbles up to a company goal. If you do OKRs right, they ensure that everyone’s individualized little tasks are all generally pointing in the same direction, which is important if you want to make any significant progress as an organization. Strategically, it’s a really, really good way to operate, even if it’s essentially just a framework for continually using common sense and proper priorities in the workplace.
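To make the nesting concrete, here’s a minimal sketch in Python (the names and goals are purely hypothetical, and no particular OKR tool is implied) of how individual goals can roll up into a single company objective:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Objective:
        """One node in the goal tree; child objectives exist to support their parent."""
        owner: str
        description: str
        children: List["Objective"] = field(default_factory=list)

        def add_child(self, child: "Objective") -> "Objective":
            self.children.append(child)
            return child

        def print_tree(self, indent: int = 0) -> None:
            """Show how every goal ultimately points at the top-level one."""
            print("  " * indent + f"{self.owner}: {self.description}")
            for child in self.children:
                child.print_tree(indent + 1)

    # Hypothetical example: everything bubbles up to a single company goal.
    company = Objective("Company", "Grow recurring revenue")
    marketing = company.add_child(Objective("Marketing", "Generate more qualified leads"))
    marketing.add_child(Objective("Content lead", "Publish case studies prospects actually read"))
    company.print_tree()

The tree is the easy part; the trouble, as we’ll see, starts when you have to attach an honest, measurable key result to every node.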
The problem is the second part — “key results”. See, with OKRs, your objectives aren’t really objectives unless they cause measurable change, which can be a problem for several reasons. If you’re trying to measure something qualitative, for instance, the reflexive OKR-style approach is to bolt some kind of metric onto it. Unfortunately, this has a tendency to cause people to radically oversimplify extremely complex, qualitative parts of their business with iron-clad-sounding but actually very dubious logic.
“If our messaging is bad, and we improve it, our conversion rate should go up. Ergo, if our conversion rate doesn’t go up, the messaging isn’t any better, or messaging doesn’t matter. We will let the data decide.”
But wait — why are we assuming that effective communication is directly related to people doing what we want transactionally, like clicking a button? Where’s the statistically significant data that demonstrates how people interact with our messaging, evaluate our product, and eventually make purchasing decisions? In other words, where’s the data that validates our approach to data?
The fact is, for growth-based companies doing anything remotely new, that data is often completely non-existent. So ironically, many companies using OKRs end up building highly quantified evaluation methods that are powered almost entirely by subjective, qualitative assessments of data. You’re not eliminating subjective decision making — you’re just obfuscating it behind a layer of numbers that makes everything feel less random.
Worse Than Nothing
People who desperately want to be data-driven (for good reasons) often respond to these observations with the argument that some data is better than no data at all. I have always vehemently disagreed with this, probably because I’ve justified a lot of really dumb things in both my personal and professional life with an incomplete or logically flawed subset of technically accurate data. A simple example — I make pretty good judgments about personal spending. I’m inherently pretty conservative about cash flow and what I “need” versus what I want, and that’s served me well through a series of wildly different personal finance scenarios, from having literally no money and no job to buying a house and starting a family.
But you know what I do when I want to justify a bad financial decision of mine, or push back against something my wife wants to spend money on? I go online and start pulling data. How much cash we have. What we’ve spent recently. What we can cut back on. Anticipated future costs. We’ve never built a financial model strong enough to respond to a given purchase idea with a simple, 100% metric-based “good idea/bad idea” verdict, so it’s up to me to equip myself with whatever amount of data I find useful and make a decision from there. And unsurprisingly, whatever I thought before I had any data always turns out to be the conclusion “the numbers” point to as well. What an amazing coincidence! To cite one of my favorite quotes, “If we have data, let’s look at data. If all we have are opinions, let’s go with mine.”
And that’s the Achilles’ heel of the OKR system as well — outside of the most transactional jobs and tasks, tying qualitative objectives to quantitative key results requires huge logical leaps of faith and radical oversimplifications that, in the end, rarely help you build a stronger business. “Building a product people love” becomes “increasing logins per month”, which probably makes sense in a thousand ways, and makes no sense at all in a thousand other ways. “Increasing brand awareness” becomes “unique blog visitors per week”, because hey, we need a key result, and that seems like something we can measure that would be good. But when the end of the quarter comes, you’re no longer thinking about brand awareness. You’re thinking about unique blog visitors, and in that way, OKRs are often responsible for shackling your team’s very powerful, entrepreneurial brains and turning them into stressed-out, metrics-hunting robots.
Accept That You Can’t Know Why Everything Happens, All The Time.
The bottom line is that everyone is at work for a reason, which means they already have a goal. And what’s ultimately important is that they get as close to reaching that goal as possible, and don’t do things that make reaching that goal more difficult. Marketing teams are probably going to be responsible for generating leads. Product teams are usually supposed to build products that work well and provide value. Sales teams have to close. None of this is rocket science — the real numbers that matter to those teams are obvious, and should be religiously tracked (although it helps to have the patience and perspective to not fool yourself into thinking you can immediately, effectively respond to daily, weekly, or even monthly tracking).
But micro-tracking at more granular layers is often little more than a vanity exercise, and the kind of pointless, administrative navel-gazing that startups should be avoiding, not installing. What’s more important is for team leads to look at that one number that really does matter — be it sales, close rate, inbound leads, or whatever — and use their powerful little human brains to assess whether the things their team is doing are aligned with that goal. “Why am I doing this?” is a great question to ask yourself at work every day, and at home, too. When I talk to frustrated co-workers (especially younger ones) and ask them why they’re doing something they think is a distraction or a waste of time, the answer is still too frequently “because (senior person X) wants this by the end of the week”, or even worse, “because it’s one of my OKRs”. To me, for all of its theoretical, well-intentioned benefits, that’s the sign of a system that’s not working.