About Me

I am just some guy with a cool wife and funny kids who likes making things that probably don’t need to exist, like this website, a bunch of albums, and all these words.

Here’s some of my work.

I’m also the lunatic behind a what-if scenario planning & goal-setting application called Resolution. You can use it for free here, or check out our fairly large set of examples.

Look at This Hat

I recently finished an acoustic album, and it came out pretty good! If you like stripped down, half-earnest half-winking-at-the-camera punk rock songs recorded by some Dad in his living room, you should listen to it.

Listen Now:

Spotify | Apple Music

“Am I so out of touch?”

Sometime around my freshman year of college, my Dad’s company gave him a phone with Windows CE on it. It quickly became a running joke in my house, because the phone was both kind of incredible, and the dumbest thing in the world. Dad hated it, surely in part because he didn’t like the idea of work trying to bother him at home, but also because he was an engineer and the phone was a truly bizarre collection of choices in resource allocation. It had a bunch of Windows-style bells and whistles that made it sort of feel like Windows, but was laughably bad at basically any task you’d ever actually want to do on the phone. It took forever to turn on. It was impossibly slow. Even getting and responding to an email was a huge chore with a fiddly little plastic stylus. If you had asked us in 2002 whether smartphones would take over the world in less than a decade, I think we would have had a good laugh.

The two major shifts that changed that trajectory were the BlackBerry and the iPhone. I never had a BlackBerry, but I remember my wife’s office giving her one temporarily for some reason, and the difference in philosophy was obvious. I have no idea how many different things the BlackBerry did or presumed to do, but it was really, really good at reading and sending short emails. I know this because my wife, who at the time had zero experience doing anything productive on a smartphone (I’m not even sure we knew how to text each other yet), looked within a couple hours like a legislative aide trying to push a bill through committee. She was hooked on that thing, to the point that I made her promise not to keep it if they let her. But as a pure messaging product, the BlackBerry was a home run because (at least at first) it made vastly better resource prioritization decisions and focused entirely on improving the one genuinely useful experience: triaging email on the go.

The iPhone was an even more impressive achievement because it made BlackBerry-level allocation and optimization decisions in service of Microsoft’s much more ambitious goal, which was mobile, general purpose computing. And while, without a doubt, there were (and continue to be) amazing hardware innovations that made the iPhone idea work, they were all in service of software that was extraordinarily well-designed and built for the task. Multitouch hardware is cool, but multitouch hardware attached to an entire platform built around pinching and pulling and tapping actually solves many of the problems Windows CE simply stared at blankly and hoped people would accept.

Today, it’s easy enough to look at my Dad’s old phone (or “Pocket PC”, or whatever it was called back then) and conclude that Microsoft basically had this right and just dropped the ball. But they never actually had it, because their “it” was just “small computers”. They could have banged away at that for another twenty years with incrementally better hardware and made the horrendous, Start-menu-clicking Windows CE experience marginally better, but they never would have figured out what we have today. They were never going to solve a problem that required transcendently great software, because they make shitty software and they don’t think that’s a problem. Instead, their solution was to push their laughably bad devices (sorry, “partner devices”) to corporate customers and try to pressure the world into adopting them. In fact, it’s a credit to Microsoft’s incredible sales and distribution skills that people like my Dad even had these devices in their hands at all, as they provided almost zero utility, were pretty expensive, and generated absolutely no mainstream, organic demand. They were stupid, fundamentally broken, almost nobody wanted them, and yet through sheer force of dollars and executive desire, they were a real product category and frequently discussed in business media as “the future”.

Sound familiar?

“No… it’s the children who are wrong.”

Ultimately, I think mobile “happened” because the software that powered it and defined the experience for the people using it made those massive leaps. The BlackBerry made people realize the “tiny Windows PC” concept was stupid and insufficiently useful, and the iPhone gave people a fundamentally different way to interact with small devices that made them not just usable, but so usable that people became comfortable using them instead of desktops and laptops in many (most?) circumstances. There was no “adoption” problem because the products made obvious sense. In fact, at the enterprise level, the problem was backwards — companies didn’t know how to deal with the fact that everyone wanted to use their mobile device for work, and they actually dragged their feet and tried to slow the whole thing down.

One of the massive red flags about this generation of “AI” technology is the increasingly frequent (to the point of being almost constant now) reference to those pesky “adoption” problems. I get a little salty about “adoption” because it’s the kind of thing people complain to Product Marketing about and feel like my team should be able to fix, or should have prevented in the first place. And sure, every so often, there’s an awareness or education problem, and that’s something we can tackle. But much more frequently, people are aware of what we want them to do, and they do know how to do it. They just don’t want to do it, dislike the process of doing it, or don’t think it’s worth the effort. As a last-ditch effort, I can always argue with them about it, but the honest truth is that if you’re arguing, you’re losing. We probably just made something no one wants or needs, or maybe even something that is actively bad.

Windows CE had adoption problems, too. I guess we could have written op-eds about how the world of 2002 needed to “transform itself” into a “mobile-first” world (I assume people wrote these, honestly), but none of that actually happened, because the world wasn’t going to reorganize itself to mitigate the problems that shitty, uncreative software created for new, unproven products. In reality, the world only grudgingly adapted once transcendently great software made these products shockingly, undeniably useful for interacting with the world. Adaptation is a reaction, after all. Businesses don’t preemptively adapt to anything unless they really enjoy wasting time and money.

If generative AI actually matters, its utility or ease of use will force the changes today’s investors and consultants are insistent that we make ahead of time — a trail to be blazed, so to speak, versus a red carpet to be rolled out. To the extent those changes exist today, they have very little to do with legitimate business and a lot more to do with society, as generative AI’s most dramatic efficiency improvement so far has been the creation of spam, slop, and disinformation. The world really is scrambling to adapt to massive adoption of the technology by bad actors, because their preferred workflows really are better with software that uses this technology. It’s disgusting and gross, but it’s not illogical, and it doesn’t require think pieces on adoption.

But these use cases, in addition to being socially dubious, are also pretty niche in the grand scheme of things. For returns that justify the investments being made, it’s going to take great, innovative software that turns whatever value exists in probability-based asset generation into something useful and accessible to people. Hilariously though, the one legitimate area this technology seems to be impacting is… creative software development! That’s right — creating derivative, marginally customized versions of software that have existed forever is now incredibly cheap, and incredibly compelling to companies. So while what the industry desperately needs is a completely new set of original software ideas, that same industry is aggressively discouraging craft and originality in software. In short, I don’t believe generative AI will really matter to the broader economy (outside of unsustainable data center expenditures) by remixing old ideas and conventions. It’s just not that kind of innovation. We got away with — and I personally profited from — twenty years of building cloud-based versions of things that we knew people wanted and needed. And while it didn’t work every time, there were tons and tons AND TONS of cases where that made sense, and simply bolting “internet” onto something useful made it better for a variety of reasons. That same approach is clearly being taken by today’s AI founders (stick AI onto things), but the benefits of AI are way, way more nebulous, the downsides more obvious, and the costs higher than anything in the cloud era, and you’re seeing that manifest in the “adoption problem” that simply isn’t going away.

There’s some amount of time you can just charge ahead and hope some version of Moore’s Law becomes apparent and everything just becomes easier on its own. That certainly seems to be the plan, albeit one crafted and promoted by a bunch of executives my age and younger who have been drafting off of the actual Moore’s Law their entire professional lives. I’m not sure they know what else to try, which is yet another reason why I think this is going to end very, very badly.

Push, push, push

Football has always had problems, but from a fan experience perspective, it was basically the perfect setup for a long, long time. The sport simply owned Sunday afternoons, and in many ways, Monday night, not just because it was popular, but because it was so well engineered. No baseball/basketball/hockey-style fatigue or regular season grind, everything happening at once, everything important all the time. The league printed money, dominated TV and culture, and set whatever terms it wanted in everything it did.

But… it wasn’t enough. And it never will be. Now, football is on too much. Thursday night games suck. There are random bad games in Europe at odd hours. The 16-game schedule is now 17 games, with a corresponding increase in injuries that makes the results a little less about skill, and a little more about depth and attrition. Instead of just being “on”, games are gated behind stupid services like Peacock and Amazon Prime. Things are basically the same, just a little bit worse every year.

People can feel it. Football is still a dominant money machine, but almost no one is a fan of the direction the league is moving. And even that’s not enough, because nothing is enough. Now the owners want 18 games, consequences be damned.

“I want to tell you guys that we’re going to push like the dickens now to make international [games] more important with us,” Kraft told the “Zolak & Bertrand” show. “Every team will go 18 [regular-season games] and two [preseason games] and eliminate one of the preseason games, and every team every year will play one game overseas.”

I gotta tell you, I hate that everything is like this. We sometimes reprimand my intense, ambitious daughter for constantly pushing the envelope in an attempt to get the most out of things, or move them in the direction she wants. “Push, push, push”, my wife will say, to remind her how exhausting it is to have even wonderful things stretched and exploited and distorted until they become frustrating facsimiles of themselves.

For our sake, we hope she reins it in as she grows up, but if I’m being honest, I don’t think it’s a quality that will keep her from being successful. Maybe she’ll own an NFL team.

Abolishing Things

I get why people are hesitant to abolish large parts of the government. I also get why, despite this, more and more people want to abolish Immigration and Customs Enforcement (ICE).

While ICE’s recent psychopathic-clown-show tactics have put it under a particularly bright spotlight, ultimately I think people want to “abolish” it for the same reasons some people wanted to “abolish” various local police organizations in 2020. At some point, it starts to feel like the safeguards on positions of state power are entirely voluntary, and that the people with that state-granted power are the ones who get to decide whether to obey them or not. Those aren’t really safeguards! For all the talk of rounding up people who “broke the law”, regular-ass Dads like me have been watching videos of morons in tactical gear blatantly violating a wide variety of rules, laws, and even Constitutional rights with absolutely no consequences whatsoever, let alone appropriate ones.

For instance, I’m sure I have seen over a hundred videos in the last six months of ICE and CBP employees doing something that made me think “that guy should go to prison for that”, or maybe even “I think you’re supposed to go to prison for that”, and none of those guys are going to go to prison for those things, or even lose their jobs. Personally, I don’t think it’s weird for someone, at a certain point, to just say “the hell with all of this” and demand a fundamental rewrite of this part of the social contract.

Immunity as a Given

Many policy debates are essentially an exercise in trying to make people accept certain “givens”. One given that’s been pushed forever is the idea that law enforcement, for the most part, needs to be in charge of regulating itself. If someone from the government with a gun does something that looks or feels illegal, it should basically be up to a bunch of people who have the same job to decide whether there should be any repercussions, and usually the answer is “no”.

Now, you can make an argument for this. You can say a world where law enforcement is held accountable by some other function, and not given vast immunity to a host of otherwise criminal actions, would be a world where laws could not be enforced, and everything would go to shit. You can say that if police, or ICE agents, or whoever is zip-tying you today, could not operate with absolute confidence that it’s okay to shoot whatever scares them, there would functionally be no policing at all. You can argue that! However, I do not find it especially compelling.

What I think is much, much more likely is that most people have done some math in their head that goes something like this. If a policy makes the police 5% more effective at protecting you from things you don’t like, but 5000% more likely to hurt, rob, or kill innocent people while they do so, that seems like a bad trade. BUT, if you think the 5000% increase in abuse will happen exclusively to people you are not concerned about, that trade seems fine. I mean, it shouldn’t really seem fine, morally, but I would guess that it probably does, in fact, seem pretty fine to a lot of people. But this is just run-of-the-mill “leopards eating faces” stuff. The point isn’t “no real-world accountability breeds bad outcomes for state power”, even though that’s also true. The point is “if you’ve removed all real-world accountability, you can’t fix bad outcomes by adjusting the particulars of accountability”.

In this case, that’s the whole “do we fix ICE or get rid of it?” argument, which is a good argument to have. From my perspective, though, it’s also not a very complicated one. ICE — and unfortunately, most 21st century law enforcement — have insisted on building policy around the idea that law enforcement cannot be held accountable the way regular people are, and in fact need to be fundamentally immune to almost all of the consequences a normal person would face for doing various violent or grossly illegal things to another normal person. That’s non-negotiable, apparently. And now, with that principle in place, let’s make some rules!

See why this is a waste of time? There are already lots of rules! They’re simply being ignored because they can be ignored, and the only answer we can come up with is more rules that will also be ignored, because underneath it all too many of us still think law enforcement should get to decide if it did something bad or not.

In my heart of hearts, I think this is what animates people who want to abolish various law enforcement functions. I don’t actually think — in most cases — it’s some hippy-dippy belief that crime doesn’t exist (although it really would help if the police solved more crimes) or that there is no need for law enforcement of any sort. Instead, it’s a rejection of the idea that these organizations should exist as so many of them do — self-regulated, armed, well-funded, unionized, highly political groups that receive massive judicial and legislative deference in almost every scenario.

I think a world where, for instance, turning off your legally required body camera and then arresting someone was potentially a form of criminal negligence is absolutely unthinkable to a lot of people. But I think that position understates the seriousness of the task given to armed agents of the state. I didn’t invent it, but I think the idea of “higher standards + real accountability + much higher salaries” isn’t crazy, even though it’s not a panacea and the devil’s in the details. Law enforcement is hard, but so is nuclear physics — and we don’t let nuclear power plants self-regulate or operate with the knowledge that everyone working there will be immune from almost any criminal charge if they decide to kill a bunch of people.

And those are real cops! ICE makes the disconnect here even more obvious. What utility are we even getting out of these goons? I’m not going to stand around and listen to a theoretical argument about utility ICE might have if the people who work for it did their job in a completely different manner than the way — given carte blanche to do whatever they want and total deference from local authorities — they choose to actually do it. I’m going to take it as it is. They don’t follow rules. They lie constantly. They willfully commit sociopathic violence with no consequences. Logically, how would you even fix that? Find some magic politician who will be our benevolent dictator and use his unaccountable army thoughtfully?

We had versions of that for a long time. No, not an actually benevolent ICE (imagine that, lol), but one that put at least some limits on itself because the idea of just telling the public, in every… single… interaction, to eat shit and die seemed… unseemly. But it was only a matter of time before thugs and goons would be drawn to the warm flame of unaccountable, masked violence and bullying, and now ICE simply is what it is — a relationship defined by its lack of boundaries, which means we can’t fix it with boundaries. It just has to go, and then we try again with something completely different.

Actually, Hacks are Bad

If I may — a “hack” is not a good thing. At their best, hacks are clever, temporary exploits that get you where you need to be for the time being, or, if you’re a criminal, a way to bypass security that (by definition) you aren’t supposed to be bypassing. Most hacks are just crappy, lazy, unrealistic implementations that obscure cost, sometimes deferring it but usually increasing it in the process. The worst ones are literally scams.

Don’t aspire to come up with hacks. There are way too many hacks already. Aspire to come up with real, thoughtful solutions, and advocate for the time and resources to execute them.

Rage as a Service

Trolling people is easier than being funny or clever. This is not a new challenge, but here we are, once again pretending that failing at it is a tactic and not a weakness.

“Some advertising and marketing agencies are now intentionally leaning into the volatility of the current political climate, not to take a stand but to manufacture outrage,” said Mikah Sellers, an advertising consultant who has worked with major brands including Booz Allen Hamilton and Carfax. Companies are tapping into cultural rifts and driving wedges in hopes of capitalizing on the strife, he said.

How will they capitalize on pissing people off? Unclear! In fact, here are the companies (sometimes a generous classification) referenced in this article that are allegedly getting exactly what they want from this tactic:

  1. Friend
  2. Nucleus Genomics
  3. Skims (Kim Kardashian’s clothing brand)
  4. Sabrina Carpenter 
  5. Pippa
  6. Cluely
  7. Clad Labs
  8. Artisan

Leave the pop musician and the reality star aside, because I understand the commercial viability of trolling in those industries. But the rest? These companies are not successful. They can scream all they’d like about how they’re getting what they want as they slowly (or quickly) go out of business, but this largely startup-fueled delusion that life is either TikTok or the Trump administration, and that you basically need to attract eyeballs at any cost and then go from there, is not actually backed up by the facts or financials.

And the thing is, the smart counter to bland, play-it-safe branding isn’t stupid, insecure edge-lording. It’s staring you right in the face — it’s Costco, dammit! If you want to be edgy, get your name in a story like this, which has spread so widely around the internet on its own that it has its own Snopes article confirming its authenticity:

“I came to (Jim Sinegal) once and I said, ‘Jim, we can’t sell this hot dog for a buck fifty. We are losing our rear ends.’ And he said, ‘If you raise the effing hot dog, I will kill you. Figure it out.’”

Just an incredible quote. No notes. This is the kind of shit people want. That they feel. It’s not generation or culture-coded, other than the fact that people are constantly being screwed by companies acting in their rational, best financial interests at our expense, and here’s a guy running a company who is just MAD about it, stomping on everyone’s least favorite title (“CEO”) and even threatening (tongue-in-cheek) violence over it!

None of this is new. How about this Dollar Shave Club launch ad, from over a decade ago?

It’s the same thing. It works because (1) it’s violently pro-customer, and (2) it’s about what the company actually does to/for those customers. The edgy tone is fine, but the cause of that edginess feels more like genuine excitement about what they’re able to do for customers, and anger at the worst, most anti-customer aspects of their competitors.

When everyone was watching this ad and talking about how they needed one just like it, we didn’t have to figure out what percentage of them were hate-watching it, because it was zero. In fact, when everyone tried to do this, the problem they ran into was (a) they weren’t actually this passionate about their business or, more specifically, their differentiator, and (b) they’re not fun, creative, or funny!

Conversely, this latest run of pathetically click-baiting people into looking at you for a second, rolling their eyes, and then, as they walk away, shouting “Ha! You’re in the funnel now!” completely misses the point. It feels like it comes from a generation of attention-grabbers who have spent too much of their lives on phones, in feeds, and quantifying every aspect of their social experiences. And while generational preferences adjust (I’m sure much of their audience is afflicted by this same problem), we’re not talking about music and language choices here. We’re talking about whether people want to feel happy or angry, like a great new day is dawning or like the dumbest, most obviously wrong person in the world just slid into their DMs.

Nobody wants that. My kids don’t want to feel that way, and neither do their friends. When they do feel that way, sure, it’s hard to escape. But they hate it, and banking on people who hate you and what you represent to become customers (let alone repeat customers) is insane.

Accountabili-Buddy

My Theory of Productivity

Allow me a theory. I probably didn’t invent this, but if I did, bully for me.

Work output, when increased without a matching increase in available accountability, will never generate a lasting increase in productivity.

Here’s what I mean by all these terms. Work output is the amount of stuff you make or things you get done. Maybe you make toys, maybe you insure businesses against floods, maybe you send emails. Work output is necessary but not sufficient to generate productivity, which is often why doing actual, productive work is challenging. If all you needed was raw work output, you could just jam out whatever all day as quickly and easily as possible and solve lots of problems. Nothing works this way.

This is why we have accountability. Accountability, in my parlance, is sort of a backstop against work output. It doesn’t necessarily guarantee that all work output is good (that would be “micromanagement”, which is a great way to drive work output to zero), but it ensures that for every kind of bad output, there is time, energy, and brainpower available to figure out why that’s happening and take on the responsibility of preventing it. In other words, mistakes are okay, but just making the same ones over and over again at similar (or increasing) rates is not.

Productivity is when these two things align. If work output goes up, and a person, system, or process is available to make sure that work output is positive (or at least not negative), you will almost always see a commensurate improvement in productivity. I define that as whatever positive outcome your organization is trying to accomplish (usually whatever gets us paid: the insuring, the toys, the emails, etc.).
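If you’ll indulge a toy formalization (entirely mine — the theory above isn’t this precise, and `min()` is a deliberate oversimplification): productivity is capped by whichever of the two runs out first.

```python
# Toy model of the theory: work output only counts toward productivity
# up to the accountability bandwidth available to backstop it.
# This is an illustrative sketch, not a rigorous claim.

def productivity(work_output: float, accountability_bandwidth: float) -> float:
    """Positive outcomes you can actually bank, given how much you produce
    and how much capacity exists to catch and correct bad output."""
    return min(work_output, accountability_bandwidth)

# 10x-ing output with the same accountability produces no lasting gain...
assert productivity(1000, 100) == productivity(100, 100)
# ...but scaling both together actually moves the needle.
assert productivity(1000, 1000) == 1000
```

The point the `min()` makes: past the cap, extra output isn’t productivity, it’s just unreviewed stuff.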

Accountability as a Limiting Factor

As we are frequently reminded by people trying to get us to invest in insane exciting things, the world has gone through many productivity revolutions thanks to various technological innovations. While some of these improved the quality of the things being made, or the safety of the people making them, the most obvious impact of the biggest ones is a large, comparatively inexpensive increase in work output. My new factory can make a thousand pairs of pants in the time it used to take to make a hundred, resulting in both cheaper pants and higher margins for me. Everyone wins. My new corn can grow on land that used to be barren, using water from far away brought in by pumps that didn’t exist before. More corn, less useless land, everyone wins, at least until I start growing fake, inedible corn to take advantage of massive agricultural subsidies.

I am a huge fan of these developments in general, and always have been, even though I am not an industrialist but merely a simple 21st century office drone. I am constantly seeking ways for technology to improve my work output, because when I succeed at this, I usually receive praise (and occasionally promotions!) for actually doing less work. It’s fantastic, and if you’ll allow me, just very American in the best way.

(music swells)

But there’s a catch. I mean, it’s not really a catch so much as a limiting factor. My technological innovation can’t increase my work output beyond my ability to make sure my work output isn’t actively harmful. Have you ever sent out an automated email to a large customer base that said “Hi, {FIRST NAME}”? Obviously I haven’t, I am totally asking for a friend. But that’s an example of the risk of increasing work output beyond accountability bandwidth.

Now, in the case of the embarrassing auto-email, we make that tradeoff 10 times out of 10, but there’s still accountability. Just ask… my friend! When you send that stupid email, you’re going to hear about it. Recipients will respond and yell at you, co-workers will forward angry customer replies, and eventually you’re going to hear about it from your boss. If they are a bad boss, they will tell you this is unacceptable and maybe call you an idiot and then walk away and start working on your Performance Improvement Plan so they can fire you. If they are a good boss, they will help you figure out some way to audit the “Name” information in the CRM, or use the “no data” fallback feature in your email designer. But either way, these emails will not just blast out with FIRST NAME on them forever. And that, my friends… is accountability.
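The good-boss fix is trivial to sketch, for what it’s worth. Here’s a minimal, hypothetical version in Python (the field and function names are mine, not any particular CRM’s or email tool’s): never let a raw placeholder reach a customer.

```python
# Minimal sketch of a "no data" fallback for a mail-merge greeting.
# Everything here is illustrative; real email designers have their own syntax.

def render_greeting(contact: dict, fallback: str = "there") -> str:
    """Build the 'Hi, <name>' line, falling back when the CRM field
    is missing, empty, or just whitespace."""
    name = (contact.get("first_name") or "").strip()
    return f"Hi, {name or fallback}"

assert render_greeting({"first_name": "Pat"}) == "Hi, Pat"
assert render_greeting({"first_name": "   "}) == "Hi, there"  # no {FIRST NAME} blast
assert render_greeting({}) == "Hi, there"
```

The accountability lesson is in the last two lines: the fallback exists precisely because somebody got yelled at once.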

Innovations that increase work output must, by the rules of my theory, be paired with innovations in accountability, or else that automation will not generate productivity.

By and large, this is exactly what has happened with most work output-increasing technological innovations, otherwise they would never have stuck around. Various technologies have improved our ability to review things, reduce random errors, and more quickly and easily work with various problem solving experts in the event that something goes wrong. These are good things, even though they are less sexy than, you know, the cotton gin or the assembly line or whatever. Hell, unions are an accountability innovation in a lot of ways; an organizational check on the staggering increase in work output (and some of the resulting bad effects) resulting from the Industrial Revolution.

First Salads, Then Robots

Okay, so as legally required by the Internet Rules of 2025, let’s now apply this to artificial intelligence, or more accurately, large language models, image generators, and maybe (if you squint) “agents”. There is a funny little dance going on when we talk about these things, because we can’t quite decide if they are supposed to replace us, or be used by us. In general, I find that the conversation — whether about society in general or in a sales pitch for some AI tool — tends to start with replacing, only to then inevitably downshift to “it’s just a tool/massive force multiplier/etc.” as details and reality begin to weigh down the conversation. The tech industry has been able to dodge a lot of difficult logical questions by flipping this bit back and forth as it suits them.

“Layoffs in the streets, force multiplier in the sheets”, if you will.

But here’s the thing — it actually doesn’t matter because accountability is the limiting factor in either case. Think of it this way. You’d never hire someone with zero accountability. Maybe their accountability is generous, or forgiving (maybe they’re a cousin, I don’t know, I’m from Rhode Island), but unless you’re talking about a legitimate charity case, there’s gotta be some. Even something as simple as saying “if you screw up badly enough, you’ll get fired and then the paychecks will stop” (again, optional in Rhode Island) is a form of accountability, however crude.

So there’s at least an element of accountability in every function, or it’s simply not a productive job, because if there’s no accountability, you’re effectively saying the output doesn’t matter. And that means the accountability bandwidth thing comes into play with every function as well. Somebody was responsible for every Caesar salad I made in the summer of 2001 at 22 Bowen’s Bar & Grill. Mostly it was me, but it was also my boss. That doesn’t mean I couldn’t make a mistake on any given salad (oh Lord did I make those), just that you could take any one of those mistakes and say “who is dealing with this by either fixing the cause of this or leaving the organization?” and there’d be an answer every single time. But the system we had in place was predicated on three college kids making every salad to order, by hand. There were only so many we could make, and thus there was a hard cap on both our possible work output and the accountability needed to ensure the value/utility of our output. In short, three college kids who didn’t want to get yelled at or fired in the middle of the summer, plus a full-time assistant chef who really didn’t want to get fired, together, could handle the task.

If for some reason the restaurant exploded in popularity and we needed ten guys making salads at once like a Chopt in Midtown, or if the three of us got magic AI salad machines that let us spit out hundreds or thousands of salads a minute, this system would need to evolve for us to use it. Either those machines would need iron-clad safeguards in them (making them effectively error-proof), or you’d need a massive investment in quality control to hold yourself accountable for making good, edible salads. You wouldn’t just 10x the process and say “look how productive we’re being” without anything in place to make sure people weren’t just getting dirty bowls hastily filled with handfuls of chickpeas.

Okay, Now Robots

The fundamental business promise of generative AI has a giant, accountability-sized hole in it. So far, many vendors have dodged this by making accountability sound like ethics, which many companies will discard for a big enough ROI, but this is a major misread of how business works. Accountability can be about ethics, but it’s really just about outcomes, and there is no reason to think that businesses are willing to throw away their control over outcomes just because generating a lot of arbitrary, uncontrolled outcomes now requires little to no human effort.

Bluntly, if you’re selling a massive increase in work output without some way to also massively increase accountability bandwidth, you’re essentially proposing one (or maybe more) of four possibilities:

1. Workers have plenty of unused accountability bandwidth now, so a huge increase in work output is actually great

This doesn’t sound like 2025 to me. Companies have been finding new and more creative ways to avoid being held accountable for things for decades now, with “innovations” ranging from binding arbitration clauses, to liability shield laws, to shutting down support lines and dumping people into user forums. The accountability that remains is almost entirely related to raw financial performance. Companies have been talking about “doing more with less” for years now; I find it hard to believe people are sitting around checking their work twice for lack of anything else to do.

2. It’s worth it for businesses to pay to increase accountability bandwidth to meet the huge increase in cheap work output

Is it, though? Every marketing department has to figure out things like “the right number of campaigns to run” precisely because it’s often not worth the cost of installing sufficient accountability. It wouldn’t be worth the cost of doing thousands of additional hours in post-mortems and data analysis just because you could now afford to spin up those campaigns for zero dollars. Then again if we’re not 100x-ing our output here, what’s the value of this innovation? What I could really use is one automated campaign manager who was just as accountable as my human ones…

3. Accountability bandwidth can be increased with the same types of technologies that increase work output

… but that is not a thing, because computers are not accountable. They don’t care. They don’t need to eat. They don’t need to make their parents proud. They don’t need anything, because they’re not alive, and being alive is a non-negotiable part of giving a shit about anything. You can program rules into software, but rules are not the same thing as giving a shit, which you will immediately realize if you ever have to work with someone who doesn’t give a shit about work, but is sufficiently capable of following rules. In some ways that kind of person is just a slow computer, but more importantly, every computer is just a really, really fast version of that annoying employee. There are lots of ways to get different kinds of people to give a shit about different kinds of things (this process is called “middle management”, ask ChatGPT about it), but there are zero ways to make computers give a shit about anything. In fact, one of the reasons software can work/write/execute a decision tree so quickly is that it doesn’t give a shit about literally anything, so nothing slows it down.

4. Accountability is irrelevant if you scale work output enough, or it’s just irrelevant in general

Lastly, despite it being a running joke for — I dunno — a hundred years or so, there’s an increasingly weird, inexplicable interest in the “a thousand monkeys on a thousand typewriters will eventually write the greatest novel ever” approach to work. Maybe it’s because some of the world’s most successful high margin businesses are automated, algorithmic trash dispensers with seemingly no limit to how big they can scale. Maybe it’s because we’ve over-financialized basically everything at this point, and we can’t conceive of any system where “N” has some sort of fundamental limit. Maybe it’s something else! But I’m here to tell you that you can’t scale any work output to the point where nothing matters. Facebook is certainly going to try, but for the rest of us, there are only so many places to show your infinite, dynamic ad variations. So even if the numbers are large, they are still numbers, and they still require some form of boring old accountability for results.

Good Automation is Still Good, Bad Automation is Still Bad

Automation is an incredibly valuable thing, but one of the many downsides of the blind rush to automate anything we can find is that some of the most important skills in making automation decisions seem to be atrophying as we race to lower the marginal cost of arbitrary work output. For instance, my little event management project automates the gathering and tracking of events and their related deadlines. You just give it a website, and it goes and finds all the events, all the deadlines, and organizes everything. I hate doing this, it takes me forever, and I make a lot of mistakes, so in general I like this painless increase in work output.

But there’s obviously an accountability issue here as well that I don’t want to ignore. No one is going to want my application to automatically grab tons of irrelevant events and deadlines and shove them into the system. They won’t even want me to automatically grab events and deadlines that are sort of relevant. They want the right ones and only the right ones, and since events are not labeled on the internet as “ones you should care about”, there is no ironclad, computer-powered way to do this with 100% certainty, which pushes accountability to my user. So I actually throttle the process by (a) separating the process of finding events from finding deadlines, and (b) making you confirm which ones you want in each process. I could increase the number of events and deadlines you find, and reduce the time and effort it takes to do so (wheeee, work output!), but that would be dumb and counter-productive because it would ignore how difficult that makes the unavoidable challenge of making sure those things are correct and useful.
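A confirmation-gated pipeline like the one I just described could be sketched roughly like this. To be clear, this is a minimal illustration of the idea, not the actual application’s code — every name here (`confirm`, `ingest`, `find_events`, `find_deadlines`) is hypothetical:

```python
def confirm(candidates, approve):
    """Keep only the candidates a human (or explicit policy) approves.

    The automation proposes; nothing enters the system without a yes.
    """
    return [item for item in candidates if approve(item)]


def ingest(site, find_events, find_deadlines, approve, save):
    """Two-stage, throttled ingestion.

    Stage 1: discover candidate events and confirm which ones matter.
    Stage 2: only for confirmed events, discover and confirm deadlines.
    Splitting the stages keeps each confirmation batch small enough
    for a person to actually be accountable for it.
    """
    for event in confirm(find_events(site), approve):
        for deadline in confirm(find_deadlines(event), approve):
            save(event, deadline)
```

The point of the structure is that work output is deliberately capped by the human in the loop: the `approve` step is the accountability bandwidth, and making discovery faster without widening that step just piles up unconfirmed junk.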

This is a really simple, small example, and it’s not something I’m doing because I’m some product genius or especially sensitive to user needs (history would indicate that I am quite bad at this, actually). I’m doing it because I actually care about the true productivity of using this application, because when I originally needed it, I was held accountable for effective, intelligent event management and having good, accurate answers to logistical questions. No one cared how long my little tracker spreadsheet was, or how many columns it had, or whether I knew the load-in dates for events we shouldn’t be going to.

The problem 99% of generative AI use cases I see have is that they don’t really care about true, accountability-supported productivity and outcomes, primarily because they don’t have an answer to the accountability bandwidth problem they themselves are creating, and their business models and cap tables dictate they make a 1000% impact, not a 50% impact, so being held back by the bandwidth is simply unacceptable.

But that’s a vendor/product/business model problem, not the market’s. If you’re trying to sell these things as solutions to actual, functioning organizations who care about outcomes, or ideally, thinking through how to build them before you do that, I’d keep that top of mind.

Sure, Jan

The story of 2025 continues to be — never concede anything. Just stare ahead as you plow into the merge and the other guy will yield.

In that spirit, here’s Sam Altman, turning up the music and accelerating as he rapidly approaches the oncoming produce truck of reality.

As if this wasn’t hand-wavy enough, keep in mind I’m reading this the day after OpenAI launched, with great fanfare, Sora 2 — “deepfake TikTok“, if you will — which is literally just a slop machine for making fake AI videos of “your friends”, which actually means “people you want to see in fake videos”.

This company is the mullet of technology. Business in the front, party in the back. Scientific discovery in the streets, ShitTok in the sheets. Whichever one of those things gives them enough money to survive (the Department of Defense, or advertisers) first will be what they claim is what they wanted to do all along.

And if, as more than a few people are starting to suspect, neither one happens? Well, it’s… it’s gonna get real ugly.

Incredulity

Every day, there are many things I don’t do because I think they are fundamentally wrong. Some of those things might get me something I want, or closer to something I want, or away from something I don’t want to do, but I find them wrong so I don’t do them. This does not make me special, this just makes me a functional, civilized adult.

However, I don’t see myself as some inherently perfect paragon of virtue. I really do try to operate outside of simple, transactional self-interest, but simple, transactional self-interest does affect me. For example, my very self-righteous, Protestant-work-ethic desire to be employed and financially productive has wavered in direct response to how badly I need money. I measure my words online and at the office in large part because I (reasonably) fear the short and long term consequences of saying whatever I think whenever I think it.

In other words, I do a lot of things the right way because I have morals, but there’s a gray area where even I don’t know if I do some things because they are right, or because they are safe. That’s why, as a society, we try to make the right things safe, and the wrong things dangerous. It’s moral hazard!

Steve Ballmer and Moral Hazard

Steve Ballmer bought the Los Angeles Clippers a few years ago because he has effectively an unlimited amount of money. Despite this, he can’t just do whatever he wants with his team (like spend his infinite money to just sign all the best players), because he’s part of an association that has rules and restrictions on that sort of thing. Ballmer is in trouble because through some excellent journalism, it’s come to light that his star player Kawhi Leonard has been getting huge amounts of endorsement money from a weird, unprofitable startup owned by Ballmer and his Clippers ownership partner, for doing nothing. Effectively, this looks like money laundering to avoid the rules on how much teams can spend on their salaries, which is a pretty massive deal in the NBA.

Ballmer has denied everything and pleaded ignorance on everything from his investment in this company to the particulars of how Leonard was compensated, but as more evidence is revealed each day, things are looking worse and worse for him and the team. Like, a lot worse.

When this story initially broke, one of the loudest forms of skepticism that I heard can basically be paraphrased as such — “Steve Ballmer is really smart, and this whole scheme seems incredibly stupid. While it’s nominally money laundering, it’s lazy, ham-fisted, and the idea that a kajillionaire business magnate who clearly loves owning and operating his basketball team would risk so much strains credulity.”

On the one hand, this is a pretty reasonable take. But it’s actually just a great reminder that culturally and morally, there is a huge disconnect between our expectations for how the world should/will work, and what’s required to ensure that it does that. While there’s this sort of baked-in assumption that these activities are risky for people like Ballmer, is that assumption actually valid? One of the first social science things I learned in college that really raised eyebrows was that while people spent a lot of time arguing about criminal punishments, the real driver for crime was whether people got caught at all. We focused on sentencing because it’s easy — those people have been found and convicted, so putting them in prison for any amount of time is trivially easy. One year, five years, a hundred years, whatever! However, actually catching people who commit crimes is much, much harder than you would think. It requires more than power and authority; it requires skill, resources, and lots of hard, boring work.

It’s hard to catch — really catch — people doing bad things, which is why so many bad things have been discouraged with penalties like shame, embarrassment, and reputational damage that don’t necessarily require a famous or powerful person to get truly, legally, undeniably caught. Historically, simply being associated with something like a corrupt, self-dealing cryptocurrency or shady real-estate scam has been harmful to people with big ambitions, regardless of whether it’s been specifically proven that they did whatever bad thing that association implies. It’s not “go to jail” harmful, and it never should be — the power of the state to take someone’s freedom away is enormous, and dangerous. But one of the reasons people lost their minds over “cancel culture” is because the non-legal, indirect consequences of people believing you are bad are really potent, and people don’t like when they have to pay them.

(As an aside, as with many social punishments, I don’t have a fundamental problem with “cancelling” famous people from fame and media exposure. I just sometimes have a problem — as many/most people do — with when we do it and who we do it to. That’s a boring answer, but I’m quite confident that it’s the correct one. The idea that as a culture, we should continue to focus our attention on the same people even if they say or do awful things is obviously stupid. It has nothing to do with free speech, and everything to do with bad, unprincipled editorial choices.)

But here’s the thing — just like we lack the physical resources to bring every criminal accusation to trial (and thus depend on a system of plea bargains just to keep the lights on), we lack the same resources to adjudicate every terrible thing that powerful people are capable of doing, so we depend on shame and yes, the threat of being “canceled” (it’s just… such an eye-rolling term, my God) to keep the world working the way we expect it to. In a shameless world, though, the system starts to break, and it makes sense for people with a ton of resources to simply fold their arms, deny everything, and demand the system adjudicate everything to the full extent of whatever the law is, because the law alone is actually pretty easy to survive if you are rich and powerful.

So I don’t know what Steve Ballmer did or didn’t do in this case, but I do agree he’s not stupid. However, given what a ruthless, smart, powerful person might see as his best option to move forward in 2025, I think Ballmer’s savvy might be as much of a potential explanation for his guilt as his innocence, which is a bad sign for everyone.

All done.