Incentives & Rules

One thing I’ve noticed over the last year or so is that I spend most of my time trying to get people to do things. I’ve been in marketing for a while now, and while I’d like my efforts to be focused on getting people to do things that make sense for them, at the end of the day I work for a business and the business is happy when our market does what I want them to do, and sad when they do not. So there’s that.

The other thing I’m increasingly involved in is management and organization- or process-building, where I’m trying to get people I work with, or manage (or whose manager I work with), to do things. This sounds super manipulative, but it’s really not — I’m very upfront with everyone I work with about what I’m trying to get them to do, partially because I’m a terrible liar, and partially because every once in a while people will just do what you ask them to because you asked them to.

Most of the time, however, they don’t, because this is work, and what I’m asking people to do at work is usually annoying, counter-intuitive, or effort intensive for them — otherwise I wouldn’t have to ask for it. So here I am, with a thing I want people to do that isn’t interesting or valuable enough for them to do automatically, and two basic ways to solve it.


The first thing I can do is adjust people’s incentives. Messing with incentives is very in-fashion these days, possibly because it’s the information age and it’s easier to test things, or possibly because everyone I met in college had or wished they could have an economics degree, and all of those people are in their thirties now and writing books and getting into management. I suppose there could be a third reason, but I’m pretty sure it’s one of those two.

Traditionally, I’ve been very pro solving problems via incentives, because like many (older) millennials, I am irritatingly conflict-averse and also slightly lazy, at least to the extent that I hate repetitive tasks with no obvious ending point. Unfortunately, my love of incentives has spent the last few years crashing into the brutal reality of actual implementation, and I’ve had trouble with a few things.

  • Money is a great way to create or adjust incentives, but people become irrational economic actors in a much larger number of scenarios than I ever imagined they would. Basically, people get really excited about the idea of controlling how much money they get out of doing something, until they end up having to actually do it, at which point they often choose to find an equilibrium between effort (or emotional satisfaction or whatever) and return that does not align with the outcome I’m trying to incentivize. I’m not saying this is unsolvable, just that it’s a lot harder to execute in practice than you might think.
  • Measuring behavior and outcomes accurately and with proper context is usually much, much harder than it first appears. Nothing is more frustrating than building out a logically air-tight incentive program, and then being told that you need to rebuild it without a key piece of information on which your program relies. Sometimes (many times), you can’t, or the resulting half-measure isn’t compelling enough to generate the behavior you want.

In short, I still love incentives and they are theoretically superior to any other way I’ve come up with to get people to do what I want, especially at any kind of scale. However, trying to solve everything with incentives in an actual business with resource limitations and plenty of other important things to do is often a great way to spend a lot of time in front of a white-board without, in the end, actually changing anything.

Of course, there’s another way…


Yes, that’s right — it’s the old “because I said so” school of management/parenting/Little League coaching, which was a staple of my younger days. In fact, I think one of the reasons young-ish professionals are so enamored with incentives-as-management is that they’re simply excited about the prospect of any incentive other than “not getting in trouble”. However, the simplicity of implementing compliance gets more and more appealing as you deal with the complexity of building and executing elaborate, often contradictory incentive schemes for different groups of people, and before long, it’s easy to find yourself fantasizing about simply crushing every obstacle in your way with the merciless hammer of autocracy.

To quote my favorite management consulting resource, Green Day’s 1994 album “Dookie”:

“Do you ever want to lead a long trail of destruction and mow down any bullshit that confronts you?”

If your answer is “yes, yes I do”, you’re going to love compliance, at least in theory. Unfortunately, the fact that implementing compliance is “simple” has zero connection to it being “workable”, or “effective”, and even in my limited deployment of this strategy, I’ve run into a couple problems.

  • Compliance isn’t actually all that different from incentivizing — you’re often just sort of threatening negative incentives, or maybe just implying them if you prefer not communicating clearly. So in a lot of scenarios, compliance is just as complicated, because you have to build negative incentive schemes that are just as elaborate, but much less exciting or motivating. For instance, if I just tell you to do something, what happens if you don’t? Is that clear? Is it established? Is it… anything? The idea of not being yelled at, or not disappointing someone, can be extremely compelling or not compelling at all based on lots of factors — it’s just that with most compliance plans we don’t think about any of it until later, so it seems simple enough.
  • Leaders, managers, and even governments all constantly overestimate their ability to enforce compliance. Keeping people from smoking pot by putting them in jail or harassing them (i.e., compliance) has never worked, and still doesn’t work, but there’s still a vast army of politicians, police administrators, and others who are so disgusted by the idea of soft, subtle positive incentives for good behavior (people who smoke a reasonable amount will perform better at work and generally be happier than people who are constantly stoned, etc.) that they’ll just keep banging the drum forever. When I worked at Efficiency Exchange, we spent a lot of time engineering the right set of positive incentives for manufacturers in developing economies to behave the way their customers wanted them to, and people were constantly baffled by the idea. They all just wanted to make requirements and throw them over the wall, assuming they had sufficient economic weight to enforce their will — which they absolutely do not.

oh no, not a mix…

So no, neither of these is a one-size-fits-all solution in the real world. I can’t imagine you’re surprised. I battle with this every day, and here’s what I’ve learned so far.

  • Start by removing perverse, positive incentives for actively undesirable behavior. There are probably more of these than you realize, especially if you have data problems. (hint: you totally do)
  • Save compliance for things where people have recently suffered from their own refusal to comply. Nobody wants to carefully fill out a report every Friday, but good people will do it if they remember terrible things happening due to a lack of good reporting. Same thing with project management behavior, etc.
  • Don’t resort to compliance just because you don’t feel like figuring out the incentives, or because you are annoyed by the fact that people who work for you aren’t just happy to have a job. It’s understandable, but I’ve never seen it work in the private sector.
  • Don’t create incentives you aren’t equipped to measure accurately — your flawless whiteboard theory is worth less than nothing if you can’t build an effective bureaucracy to execute it. And don’t be arrogant or dismissive about the way that stuff is calculated. If you can’t trace back a couple edge cases and audit the results yourself, you’re playing with fire.
  • Create a very simple, rock-solid incentive that is bigger than anything else, and more important than anything else, and then experiment on the margins with more granular, less important stuff. If your experiments are stupid, the bigger thing that works will prevent people from doing anything too damaging until you can fix the smaller stuff or kill it.

I certainly haven’t done all of this stuff yet, or done it well enough to get the outcomes I want, but I can at least say that this kind of thinking has helped me make a number of stupid things significantly less stupid. And I’ll take that any day.