About Me



I am just some guy with a cool wife and funny kids who likes making things that probably don’t need to exist, like this website, a bunch of albums, and all these words.
Here’s some of my work.
I’m also the lunatic behind a what-if scenario-planning & goal-setting application called Resolution. You can use it for free here, or check out our fairly large set of examples.
Look at This Hat
I recently finished an acoustic album, and it came out pretty good! If you like stripped down, half-earnest half-winking-at-the-camera punk rock songs recorded by some Dad in his living room, you should listen to it.
Listen Now:
The Journey
Struggling to make software work is a really specific type of experience, and you either get that mental place or you don’t. As an objectively poor developer, I avoid writing code whenever possible, which is why I spend a lot of time using stuff like Bubble. But I’m not vibe-coding — every time my application doesn’t do what it was supposed to, it’s because of me. And when I actually figure out what’s wrong, and change it, it stays that way and I move on to something else.
I think one of the reasons I’ve found LLM-based vibe-coding so unsatisfying has been the fact that it fundamentally alters this experience. If you’re purely looking to make a computer do something, and you’re not very good at that (I qualify for that designation), I get that just asking for it to happen a couple times may be a faster and ultimately better way to make the computer do it.
(It’s also entirely possible that it’s not faster and/or not better — this is a different discussion, but an important one for another time.)
But… it’s absolutely not flexing the same muscles as building with a deterministic system. It’s not using them less, it’s actually not using them at all. If you’ve never flexed them, you might not notice. You might have a long back and forth with a chatbot, iterating on some idea, and think you’ve entered the sort of “flow state” your developer friends have talked about. You have not. Seriously, I am telling you, you have totally not done this. You may be in some other sort of state, and maybe that state has value, or maybe it’s the future of work, or maybe you’re just a guy hitting “refresh” and hoping for a different outcome, but you are not doing the thing.
Again, it’s not about code. It’s about systems, learning them, altering them, and ultimately mastering them. You interact with a lot of systems with code, and I suck at that, and I know that I suck at it because I’ve done it more than I probably should have, and I’ve been evaluated and coached and critiqued (heavily) by people who are really good at it. For me, failure was common, but success did happen sometimes and it’s an unforgettable feeling. When I vibe-code decent stuff, it doesn’t feel anything like that.

The Agency Model
The best thing I can compare this sort of thing to is working with an agency. If you’ve never been able to hire one, suddenly having Claude or whatever at your beck and call is probably thrilling. And sure, you can tell a chatbot to interact with you a certain way, to “partner” with you, or pretend you’re pair programming even if you don’t know how to program, and it will indulge you. But guess what? So will a lot of agencies. You’re the customer, dummy! They’ll tell you whatever you want to hear if you let them.
I’ve had more or less involved roles with different agencies. I’ve sat there and barked orders, I’ve been hands off, and I’ve tried to save money by doing pieces of the work myself. It doesn’t matter — asking for stuff is different than figuring out stuff. Really building software is not asking for things; it’s doing things and creating iron-clad decision trees for things and tweaking things so that they must do what you want. No one is coming to help you. No one is going to just make it work, especially not your computer.
It doesn’t have to be about building apps, either. People configure complex IT or DevOps systems, CRMs, automations, and other stuff this way, and a lot of those people get the same runner’s high when they master these systems and bend them to their will.
I can’t predict the future. But I can’t shake the feeling that regardless of how anyone feels about any given tool, moving these experiences, these journeys, to non-deterministic finger crossing is going to create more problems than it solves. I think there’s a reason many of the people most excited about eliminating the journey are people who aren’t on it very often.
Narratives
I’ve had a lot of fun and done some cool things working at companies that lose more money than they make, but I’m not going to sit here and tell you that it’s the healthiest environment. It is — by definition — an inherently untenable situation, which creates both urgency and a nagging sense of existential dread. Maybe you find that motivating (I do, sometimes!), but more often than not I’ve found it eventually leads to paranoia and panic.
Even weirder, success in this world is often more distorting than failure, because it’s so tied to fundraising and valuations. That might seem reasonable enough, until you consider that the first thing is just “someone giving you money with no oversight or rules”, and the second thing is using the first thing as proof that you’re making progress.
I’ve been doing this for a long time now, but I’ve never quite seen an environment like this, where two companies operating entirely inside this sort of business mirror dimension are just throwing billion-dollar-losing haymakers at each other in the hopes of being… declared the winner, I guess?

As of this post in March of 2026, it seems clear to me that Anthropic is winning the perception war. Maybe that’s because they’re inherently less shady (although possibly more fundamentally deluded) than OpenAI, or maybe it’s because they were smart enough to focus on selling to non-users (business people) instead of today’s broke-ass consumers. Or maybe it’s that OpenAI has been around longer, burning money longer, and is simply closer to reaching the logical, obvious end state of a company like this, where it starts flailing around and trying to monetize things people only like because they’re free.
Either way, Anthropic is “winning”, and that means OpenAI is “losing”. Are they both losing unsustainable amounts of money? Yes! Are they both relying on fundamental changes to society that don’t appear to be happening yet (mass AI adoption and huge productivity improvements across white-collar work), while ignoring fundamental changes to society that do appear to be happening right in front of us (the internet it needs to function/improve turning into a cesspool of AI spam)? Most certainly, yes!
But… Anthropic is winning, you see. They’re WINNING! Claude is COOL, man! It’s got momentum! Have you made your crappy single page custom notes app, yet?
(Oh, I have.)
Look, a lot of this is fine. Startups need to spend money, take big swings, and figure it out later. I myself have advocated for this many times, in fact. But this whole era is taking the concept beyond even my wild desires for the ambitions of venture-backed companies, largely because I never envisioned the entire economy placing one large combined bet on, essentially, one technology spread out among maybe five companies, two of which have never been in the black for one second, have no clear path to ever getting there, and seemingly feel no obligation to even consider the issue.
The “winning the narrative” thing is like “winning free agency” in sports. I get where it comes from, and I get why people get excited, and I get that it has some bearing on the actual results of the real games that occur several months later. But we give free agency the level of primacy it gets because we’re bored and there are no games at the time. We don’t just replace the season with it. There are actual games to be played that matter way more!
There’s a lot of hand-waving going on with the AI industry, and tech in general right now, for sure. But even when Amazon was losing money for years and years, they could very clearly and cogently explain that choice to you. The upsides, the downsides, and how, if push came to shove, they could pivot to making money in a matter of weeks or months.
That’s just really different than where we are now. I kinda think the people in charge know that, but the longer this goes on, the less sure I am about it, and the less confident I am in any kind of graceful landing here.
Major Scale
One thing I think a LOT of smart people misunderstand is that the power of computers is always SCALE. Computers are very fast, don’t get tired, and they have no shame. If you are smart, you can point those characteristics in useful directions — and people have been doing exactly that for decades.
Large language models — especially the really, really large language models that dominate discourse and investment in 2026 — are definitely an example of this. Computers can’t understand things, but they can identify extremely subtle patterns in data sets that are too massive for a person to even comprehend, and then simply respond to inputs as those patterns would. How they use those patterns to formulate output can be tweaked, or adjusted, or even randomized because all of that is just math and computers will never get tired of doing math, no matter what boring or weird or disgusting thing you make them turn that math into.
People respond to patterns too, although people are both much better and much worse at pattern recognition. They’re worse because they’ll never operate at the scale of a computer network, but at the same time they are vastly better at finding patterns outside of prescribed data sets, or even working with completely unstructured data they’ve never worked with before.
Basically:
- People are slow, but they can be “prompted” by literally anything, including things like feelings, instincts, and other elements of existence that we can’t even fully articulate.
- Computers are very fast, but you have to structure things for them (you can try to get a computer to structure things, and sometimes with enough patterns to consult it’ll figure it out, but it’ll miss tons and tons of relationships that are obvious to any person).
There’s a huge mistake being made (and frequently published) by even very smart, often technically competent people. That mistake is believing that human thinking is simply this kind of pattern matching done at almost incomprehensible scale, and that if we’re able to build software that can do it, we’ll have computers that work like people. I’m not a neurologist (or whatever the appropriate discipline is) so I can’t prove this point academically, but I have spent most of my time working with software engineers, data scientists, and venture capitalists, so I can tell you that this is exactly the kind of mistake these people have been making their entire careers.
But I’m not here to judge. Instead, I’m here to warn everyone — again — that the world-shaking impact of generative, model-based software (“AI”) is not going to be its ability to replace what people do, because despite what everyone is extrapolating, we don’t have a ton of evidence that a lot of important work can be done effectively by a pattern matching robot, no matter how much training data it has. I guess anything could happen, but the reason I’m not that interested in it is because “high competence” is not the most likely effect of scale. We didn’t see it in cloud computing, social networking, cryptocurrency, or anything else we threw money at.
Instead, the most likely effect of scale is (1) more supply, and (2) more trash. It will probably become very, very, very cheap to get an image of basically anything you can describe generated by a computer using pattern matching. If you’re willing to accept VC/hyperscaler-subsidized pricing, we’re already there. An entirely separate question is “how good of an image can you get from pattern matching”, which depends on everything from what the image is, to how you define “good”, to whether you believe in the existence of any sort of intellectual property rights. That question is extremely important if you are looking to replace the actual image creation skills that exist today with cheap, high speed pattern matching, but it’s not important at all if the goal involves making tons and tons of pattern-derived middling trash.
And that’s the thing! There are many business models that involve leveraging the ability to make middling trash much more quickly and easily than before. The most obvious one is spam, which is both a huge business and something predicated entirely on scale, low-cost, and finding the minimum level of plausibility necessary to work. Sure, spam is also a huge social and economic problem because it exhausts people and erodes trust, but if you make money by making spam that’s someone else’s problem.
Music & Spam
Music now has a spam problem. More accurately, music’s existing spam problem is now exponentially worse, to the point where it’s challenging the viability of the entire economic enterprise. Now, I happen to think this will get slowed or resolved simply because there’s so much money in music, and it’s an industry that isn’t new to the idea of protecting its intellectual property. Not all of that money is smart, but I think Spotify + Labels will be better at teaming up to protect their shared interests than an army of small-time grifters.
But there’s a lesson here for other industries that might not be as ready to fight spam. This is the challenge that AI will create, and it’s much scarier because unlike a lot of AI doom scenarios, it’s not speculation or based on extrapolating massive capability improvements that don’t exist yet but maybe seem possible if you squint or just raised a Series C. AI tools can generate better spam, more spam, and forms of spam that weren’t viable five years ago, and they can do it right now. The spam is stupid, of course, but almost all spam is stupid, so it doesn’t matter. In fact, while AI spam is stupid, it’s often less stupid than non-AI spam, not because AI is so great but because spam is so bad. It’s a legitimate revolution.
Honestly, I think the term “slop” for the AI detritus we’re all seeing everywhere is pretty good, so I don’t have a lot of pushback on it. But I do think we should remember that in most cases, slop has the same objectives as spam. It’s not to replace the work we do, but instead to overwhelm society with scale so it can collect the economic value that ordinarily would go to the superior, “real” product. When spam succeeds, it’s because it is so overwhelming that the distinctions between “good” and “merely plausible” go away, and whoever gets more at-bats wins by default. Spam always gets more at-bats than real work, and AI will always get more at-bats than real content.
For all of the futurism and noodling about guaranteed income or whatever, there’s no real reason to think this technology won’t plug into actual, current economic behaviors and systems, and simply find the shortest path to money. That’s how capitalism works, especially when unchecked, and the American capitalism of 2026 is more unchecked than any in my lifetime. So maybe AI will iterate for however long it takes to generate music as good as what people make, or to become a useful workflow tool that actually improves the work musicians do (please, please, learn to mix and EQ my tracks for me). But there’s no maybe about it that AI will be used to extract value from existing markets at the expense of that market, its buyers, and its sellers by third parties, because it’s an incredible tool for that.
It brings me no joy to predict this, but if I’m being honest, I expect this exact type of problem to be the defining legacy of everything we’re calling “AI” today.
“Am I so out of touch?”
Sometime around my freshman year of college, my Dad’s company gave him a phone with Windows CE on it. It quickly became a running joke in my house, because the phone was both kind of incredible and the dumbest thing in the world. Dad hated it, surely in part because he didn’t like the idea of work trying to bother him at home, but also because he was an engineer and the phone was a truly bizarre collection of choices in resource allocation. It had a bunch of Windows-style bells and whistles that made it sort of feel like Windows, but was laughably bad at basically any task you’d ever actually want to do on a phone. It took forever to turn on. It was impossibly slow. Even getting and responding to an email was a huge chore with a fiddly little plastic stylus. If you had asked us in 2002 whether smartphones would take over the world in less than a decade, I think we would have had a good laugh.
The two major shifts that changed that trajectory were the Blackberry and the iPhone. I never had a Blackberry, but I remember my wife’s office giving her one temporarily for some reason, and the difference in philosophy was obvious. I have no idea how many different things the Blackberry did or presumed to do, but it was really, really good at reading and sending short emails. I know this because at the time my wife had zero experience doing anything productive on a smartphone — I’m not even sure we knew how to text each other yet — and within a couple of hours she looked like a legislative aide trying to push a bill through committee. She was hooked on that thing, to the point that I made her promise not to keep it if they let her. But as a pure messaging product, the Blackberry was a home run because (at least at first) it made vastly better resource prioritization decisions and focused entirely on making the useful experience of triaging emails on the go better.
The iPhone was an even more impressive achievement because it made Blackberry-level allocation and optimization decisions in service of Microsoft’s much more ambitious goal, which was mobile, general-purpose computing. And while, without a doubt, there were (and continue to be) amazing hardware innovations that made the iPhone idea work, they were all in service of software that was extraordinarily well-designed and built for the task. Multitouch hardware is cool, but multitouch hardware attached to an entire platform built around pinching and pulling and tapping actually solves many of the problems Windows CE simply stared at blankly and hoped people would accept.
Today, it’s easy enough to think about my Dad’s old phone (or “Pocket PC”, or whatever it was called back then) and think that Microsoft basically had this right and just dropped the ball. But they never actually had it, because their “it” was just “small computers”. They could have banged away at that for another twenty years with incrementally better hardware and made the horrendous, Start-menu-clicking Windows CE experience marginally better, but they never actually would have figured out what we have today. They were never going to solve a problem that required transcendently great software because they make shitty software and they don’t think that’s a problem. Instead, their solution was to push their laughably bad devices (sorry, “partner devices”) to corporate customers and try to pressure the world into adopting them. In fact, it’s a credit to Microsoft’s incredible sales and distribution skills that people like my Dad even had these devices in their hands at all, as they provided almost zero utility, were pretty expensive, and generated absolutely no mainstream, organic demand. They were stupid, fundamentally broken, almost nobody wanted them, and yet through sheer force of dollars and executive desire, they were a real product category and frequently discussed in business media as “the future”.
Sound familiar?
“No… it’s the children who are wrong.”
Ultimately, I think mobile “happened” because the software that powered it and defined the experience for the people using it made those massive leaps. The Blackberry made people realize the “tiny Windows PC” concept was stupid and insufficiently useful, and the iPhone gave people a fundamentally different way to interact with small devices that made them not just usable, but so usable that people became comfortable using them instead of desktops and laptops in many (most?) circumstances. There was no “adoption” problem because the products made obvious sense. In fact, at the enterprise level, the problem was backwards — companies didn’t know how to deal with the fact that everyone wanted to use their mobile device for work, and they actually dragged their feet and tried to slow the whole thing down.
One of the massive red flags about this generation of “AI” technology is the increasingly frequent (to the point of being almost constant now) reference to those pesky “adoption” problems. I get a little salty about “adoption” because it’s the kind of thing people complain to Product Marketing about, and feel like my team should be able to fix, or have prevented in the first place. And sure, every so often, there’s an awareness or education problem, and that’s something we can tackle. But much more frequently, people are aware of what we want them to do, and they do know how to do it. They just don’t want to do it, dislike the process of doing it, or don’t think it’s worth the effort. As a last ditch effort, I can always argue with them about it, but the honest truth is that if you’re arguing, you’re losing. We probably just made something no one wants or needs, or maybe even something that is actively bad.

Windows CE had adoption problems, too. I guess we could have written op-eds about how the world of 2002 needed to “transform itself” into a “mobile-first” world — I assume people wrote these, honestly — but none of that actually happened, because the world wasn’t going to reorganize itself to mitigate the problems caused by new, unproven products running shitty, uncreative software. In reality, the world only grudgingly adapted once transcendently great software made these products shockingly, undeniably useful for interacting with the world around them. Adaptation is a reaction, after all. Businesses don’t preemptively adapt to anything unless they really enjoy wasting time and money.
If generative AI actually matters, its utility or ease of use will force the changes today’s investors and consultants are insistent that we make ahead of time — a trail to be blazed, so to speak, versus a red carpet to be rolled out. To the extent those changes exist today, they have very little to do with legitimate business and a lot more to do with society, as generative AI’s most dramatic efficiency improvement so far has been the creation of spam, slop, and disinformation. The world really is scrambling to adapt to massive adoption of the technology by bad actors, because their preferred workflows really are better with software that uses this technology. It’s disgusting and gross, but it’s not illogical, and it doesn’t require think pieces on adoption.
But these use cases, in addition to being socially dubious, are also pretty niche in the grand scheme of things. For returns that justify the investments being made, it’s going to take great, innovative software that turns whatever value exists in probability-based asset generation into something useful and accessible to people. Hilariously though, the one legitimate area this technology seems to be impacting is… creative software development! That’s right — creating derivative, marginally customized versions of software that have existed forever is now incredibly cheap, and incredibly compelling to companies. So while what the industry desperately needs is a completely new set of original software ideas, that same industry is aggressively discouraging craft and originality in software.

In short, I don’t believe generative AI will really matter to the broader economy (outside of unsustainable data center expenditures) by remixing old ideas and conventions. It’s just not that kind of innovation. We got away with — and I personally profited from — twenty years of building cloud-based versions of things that we knew people wanted and needed. And while it didn’t work every time, there were tons and tons AND TONS of cases where that made sense, and simply bolting “internet” onto something useful made it better for a variety of reasons. That same approach is clearly being taken by today’s AI founders (stick AI onto things), but the benefits of AI are way, way more nebulous, the downsides more obvious, and the costs higher than anything in the cloud era, and you’re seeing that manifest in the “adoption problem” that simply isn’t going away.
There’s some amount of time you can just charge ahead and hope some version of Moore’s Law becomes apparent and everything just becomes easier on its own. That certainly seems to be the plan, albeit one crafted and promoted by a bunch of executives my age and younger who have been drafting off of the actual Moore’s Law their entire professional lives. I’m not sure they know what else to try, which is yet another reason why I think this is going to end very, very badly.
Push, push, push
Football has always had problems, but from a fan experience perspective, it was basically the perfect setup for a long, long time. The sport simply owned Sunday afternoons, and in many ways, Monday night, not just because it was popular, but because it was so well engineered. No baseball/basketball/hockey-style fatigue or regular season grind, everything happening at once, everything important all the time. The league printed money, dominated TV and culture, and set whatever terms it wanted in everything it did.
But… it wasn’t enough. And it never will be. Now, football is on too much. Thursday night games suck. There are random bad games in Europe at odd hours. The 16-game schedule is now 17 games, with a corresponding increase in injuries that makes the results a little less about skill, and a little more about depth and attrition. Instead of just being “on”, games are gated behind stupid services like Peacock and Amazon Prime. Things are basically the same, just a little bit worse every year.
People can feel it. Football is still a dominant money machine, but almost no one is a fan of the direction the league is moving. And even that’s not enough, because nothing is enough. Now the owners want 18 games, consequences be damned.
“I want to tell you guys that we’re going to push like the dickens now to make international [games] more important with us,” Kraft told the “Zolak & Bertrand” show. “Every team will go 18 [regular-season games] and two [preseason games] and eliminate one of the preseason games, and every team every year will play one game overseas.”
I gotta tell you, I hate that everything is like this. We sometimes reprimand my intense, ambitious daughter for constantly pushing the envelope in an attempt to get the most out of things, or move them in the direction she wants. “Push, push, push”, my wife will say, to remind her how exhausting it is to have even wonderful things stretched and exploited and distorted until they become frustrating facsimiles of themselves.
For our sake, we hope she reins it in as she grows up, but if I’m being honest, I don’t think it’s a quality that will keep her from being successful. Maybe she’ll own an NFL team.
Abolishing Things
I get why people are hesitant to abolish large parts of the government. I also get why, despite this, more and more people want to abolish Immigration and Customs Enforcement (ICE).
While ICE’s recent psychopathic-clown-show tactics have put it under a particularly bright spotlight, ultimately I think people want to “abolish” it for the same reasons some people wanted to “abolish” various local police organizations in 2020. At some point, it starts to feel like the safeguards on positions of state power are entirely voluntary, and that the people with that state-granted power are the ones who get to decide whether to obey them or not. Those aren’t really safeguards! For all the talk of rounding up people who “broke the law”, regular-ass Dads like me have been watching videos of morons in tactical gear blatantly violating a wide variety of rules, laws, and even Constitutional rights with absolutely no consequences whatsoever, let alone appropriate ones.
For instance, I’m sure I have seen over a hundred videos in the last six months of ICE and CBP employees doing something that made me think “that guy should go to prison for that”, or maybe even “I think you’re supposed to go to prison for that”, and none of those guys are going to go to prison for those things, or even lose their jobs. Personally, I don’t think it’s weird for someone, at a certain point, to just say “the hell with all of this” and demand a fundamental rewrite of this part of the social contract.
Immunity as a Given
Many policy debates are essentially an exercise in trying to make people accept certain “givens”. One given that’s been pushed forever is the idea that law enforcement, for the most part, needs to be in charge of regulating itself. If someone from the government with a gun does something that looks or feels illegal, it should basically be up to a bunch of people who have the same job to decide whether there should be any repercussions, and usually the answer is “no”.
Now, you can make an argument for this. You can say a world where law enforcement is held accountable by some other function, and not allowed to be given vast immunity to a host of otherwise criminal actions, would be a world where laws could not be enforced, and everything would go to shit. You can say that if police, or ICE agents, or whomever is zip-tying you today could not operate with absolute confidence that if they get scared, it’s okay for them to shoot whatever they are scared of, that there would functionally be no policing at all. You can argue that! However, I do not find it especially compelling.
What I think is much, much more likely is that most people have done some math in their head that goes something like this. If a policy makes the police 5% more effective at protecting you from things you don’t like, but 5000% more likely to hurt, rob, or kill innocent people while they do so, that seems like a bad trade. BUT, if you think the 5000% increase in abuse will happen exclusively to people you are not concerned about, that trade seems fine. I mean, it shouldn’t really seem fine, morally, but I would guess that it probably does, in fact, seem pretty fine to a lot of people. But this is just run-of-the-mill “leopards eating faces” stuff. The point isn’t “no real-world accountability breeds bad outcomes for state power”, even though that’s also true. The point is “if you’ve removed all real-world accountability, you can’t fix bad outcomes by adjusting the particulars of accountability”.
In this case, that’s the whole “do we fix ICE or get rid of it?” argument, which is a good argument to have. From my perspective, though, it’s also not a very complicated one. ICE — and unfortunately, most 21st century law enforcement — have insisted on building policy around the idea that law enforcement cannot be held accountable the way regular people are, and in fact need to be fundamentally immune to almost all of the consequences a normal person would face for doing various violent or grossly illegal things to another normal person. That’s non-negotiable, apparently. And now, with that principle in place, let’s make some rules!
See why this is a waste of time? There are already lots of rules! They’re simply being ignored because they can be ignored, and the only answer we can come up with is more rules that will also be ignored, because underneath it all too many of us still think law enforcement should get to decide if it did something bad or not.
In my heart of hearts, I think this is what animates people who want to abolish various law enforcement functions. I don’t actually think — in most cases — it’s some hippy-dippy belief that crime doesn’t exist (although it really would help if the police solved more crimes) or that there is no need for law enforcement of any sort. Instead, it’s a rejection of the idea that these organizations should exist as so many of them do — self-regulated, armed, well-funded, unionized, highly political groups that receive massive judicial and legislative deference in almost every scenario.
I think the idea that turning off your legally required body camera and then arresting someone could, for instance, be a form of criminal negligence is absolutely unthinkable to a lot of people. But that position understates the seriousness of the task given to armed agents of the state. I didn’t invent it, but I think the idea of “higher standards + real accountability + much higher salaries” isn’t crazy, even though it’s not a panacea and the devil’s in the details. Law enforcement is hard, but so is nuclear physics — and we don’t let nuclear power plants self-regulate or operate with the knowledge that everyone working there will be immune from almost any criminal charge if they decide to kill a bunch of people.
And those are real cops! ICE makes the disconnect here even more obvious. What utility are we even getting out of these goons? I’m not going to stand around and listen to a theoretical argument about the utility ICE might have if the people who work for it did their job in a completely different manner than the way — given carte blanche to do whatever they want and total deference from local authorities — they choose to actually do it. I’m going to take it as it is. They don’t follow rules. They lie constantly. They willfully commit sociopathic violence with no consequences. Logically, how would you even fix that? Find some magic politician who will be our benevolent dictator and use his unaccountable army thoughtfully?
We had versions of that for a long time. No, not an actually benevolent ICE (imagine that, lol), but one that put at least some limits on itself because the idea of just telling the public, in every… single… interaction, to eat shit and die seemed… unseemly. But it was only a matter of time before thugs and goons would be drawn to the warm flame of unaccountable, masked violence and bullying, and now ICE simply is what it is — a relationship defined by its lack of boundaries, which means we can’t fix it with boundaries. It just has to go, and then we try again with something completely different.
Actually, Hacks are Bad

If I may — a “hack” is not a good thing. At their best, hacks are clever, temporary exploits that get you where you need to be for the time being, or, if you’re a criminal, a way to bypass security that (by definition) you aren’t supposed to be bypassing. Most hacks are just crappy, lazy, unrealistic implementations that obscure cost, sometimes deferring it but usually increasing it in the process. The worst ones are literally scams.
Don’t aspire to come up with hacks. There are way too many hacks already. Aspire to come up with real, thoughtful solutions, and advocate for the time and resources to execute them.
Rage as a Service
Trolling people is easier than being funny or clever. This is not a new challenge, but here we are, once again pretending that failing at it is a tactic and not a weakness.
“Some advertising and marketing agencies are now intentionally leaning into the volatility of the current political climate, not to take a stand but to manufacture outrage,” said Mikah Sellers, an advertising consultant who has worked with major brands including Booz Allen Hamilton and Carfax. Companies are tapping into cultural rifts and driving wedges in hopes of capitalizing on the strife, he said.
How will they capitalize on pissing people off? Unclear! In fact, here are the companies (sometimes a generous classification) referenced in this article that are allegedly getting exactly what they want from this tactic:
- Friend
- Nucleus Genomics
- Skims (Kim Kardashian’s clothing brand)
- Sabrina Carpenter
- Pippa
- Cluely
- Clad Labs
- Artisan
Leave the pop musician and the reality star aside, because I understand the commercial viability of trolling in those industries. But the rest? These companies are not successful. They can scream all they’d like about how they’re getting what they want as they slowly (or quickly) go out of business, but this largely startup-fueled delusion that life is either TikTok or the Trump administration, and that you basically need to attract eyeballs at any cost and then go from there, is not actually backed up by the facts or financials.
And the thing is, the smart counter to bland, play-it-safe branding isn’t stupid, insecure edge-lording. It’s staring you right in the face — it’s Costco, dammit! If you want to be edgy, get your name in a story like this, which has spread around the internet so organically that it has its own Snopes article confirming its authenticity:
“I came to (Jim Sinegal) once and I said, ‘Jim, we can’t sell this hot dog for a buck fifty. We are losing our rear ends.’ And he said, ‘If you raise the effing hot dog, I will kill you. Figure it out.’”
Just an incredible quote. No notes. This is the kind of shit people want. That they feel. It’s not generation or culture-coded, other than the fact that people are constantly being screwed by companies acting in their rational, best financial interests at our expense, and here’s a guy running a company who is just MAD about it, stomping on everyone’s least favorite title (“CEO”) and even threatening (tongue-in-cheek) violence over it!
None of this is new. How about this Dollar Shave Club launch ad, from over a decade ago?
It’s the same thing. It works because (1) it’s violently pro-customer, and (2) it’s about what the company actually does to/for those customers. The edgy tone is fine, but the cause of that edginess feels more like genuine excitement about what they’re able to do for customers, and anger at the worst, most anti-customer aspects of their competitors.
When everyone was watching this ad and talking about how they needed one just like it, we didn’t have to figure out what percentage of them were hate-watching it, because it was zero. In fact, when everyone tried to do this, the problem they ran into was (a) they weren’t actually this passionate about their business or, more specifically, their differentiator, and (b) they’re not fun, creative, or funny!
Conversely, this latest run of pathetically clickbaiting people into looking at you for a second, rolling their eyes, and then, as they walk away, shouting “Ha! You’re in the funnel now!” completely misses the point. It feels like it comes from a generation of attention-grabbers who have spent too much of their lives on phones, in feeds, and quantifying every aspect of their social experiences. And while generational preferences adjust (I’m sure much of their audience is afflicted by this same problem), we’re not talking about music and language choices here. We’re talking about whether people want to feel happy or angry, like a great new day is dawning or like the dumbest, most obviously wrong person in the world just slid into their DMs.
Nobody wants that. My kids don’t want to feel that way, and neither do their friends. When they do feel that way, sure, it’s hard to escape. But they hate it, and banking on people who hate you and what you represent to become customers (let alone repeat customers) is insane.