When I’m not pretending to be a tech visionary, I work in product marketing. This means that everyone in my personal life (most of whom do not work in marketing for software startups) thinks my job is basically to come up with cool advertisements, and most of those people have suggestions. Meanwhile, every job listing I’m qualified for is essentially an elaborate demand to prove with 100% certainty that hiring me will make everyone want to buy their product.
This divide has dominated my career, mostly because I came of professional age in the “let the data decide” era. If you squint really hard, every once in a while it feels like we just might be able to track everything important, put it all in a big pivot table, and then just… do what it says. But most of the time, things haven’t worked that way at all. Instead, non-experts with massive confirmation bias now just have access to an infinite number of allegedly unbiased justifications for whatever it is they wanted to do.
In the words of my AI-generated co-worker Brian:
Anyways, this is already veering into more of an anti-data rant than I want it to, mostly because I’ve gone on my own versions of those before, and the full posts have the necessary disclaimers and squishy “not-all-data!” qualifiers, so just read those if you’re looking for that.
Instead, today I’m here to talk about limitations.
The Limitations of Art
I don’t know if any amount of squinting is sufficient to view me as an “artist”, but go ahead and try. It’ll probably go better than trying to view me as anything else. Still, I think a lot of people assume my data skepticism is rooted in my own love of, for lack of a less soulless term, “qualitative creativity”. I like to write, I like narratives that make people excited and my kids laugh and my wife remind me that there are other people in the restaurant. I’m guilty of all of that, and it’s not exactly a major leap to assume I’m always going to pick a good story over whatever the data says.
But you’d be wrong. I’ve heard a lot of big, exciting narratives in my time about what was going on with the business and what we needed to do. Eventually, though, I also had Salesforce. Was our Sales data bad and wildly deceptive? Yes, it was. But that just made it take a little more work to figure out what was really going on, and once I did, my interest in exciting stories that didn’t align with what the system was logging dropped to approximately zero. Because those stories were bullshit.
That’s the downside of relying entirely on the art side of things. Everybody can have an idea, and no matter how stupid or delusional it is, it’s probably going to seem great to someone. Even worse, if presented properly and cynically enough, that idea can prey on people’s emotions and insecurities and drive you in the absolute wrong direction until you fly off a cliff, hand-in-hand like two fugitives in a convertible fleeing local law enforcement.
My Dad’s an electrical engineer who was drafted into the executive ranks when I was in high school, so he rarely had to settle people down with business data. He was able to lean on physics, and explain to his fellow leaders that something was an incorrect assumption because it wasn’t how, you know, thermodynamics worked or whatever.
Salesforce data is not thermodynamics. In fact, if thermodynamics were as reliable as your GA4 data, there’d be no way to determine how long it would take to make toast. But if you take proper precautions and analyze things very critically, these data sources can be powerful tools for tethering yourself to reality.
And obviously, science doesn’t have to come in the form of numbers. A customer can say they wanted a feature, and that’s obviously a more concrete observation than just a feeling that a customer would want it. Is it better to pull out more than a single random anecdote? Yes, it is. Do people who go too far down the art road with their narrative often fail to even do that? Yes, they do, because often that’s how far away they are from reality.
In other words, the worst case scenario for leaning solely on the art of marketing is… bullshit. And while non-marketers and casual observers of marketing might think “hey, that’s what you guys do!”, those of us trying to grow businesses know that drifting into bullshit is usually the beginning of the end.
The Limitations of Science
Weirdly enough, my criticism of people’s data usage is often that they’re using data incompetently, in quasi-bad-faith, or even outright unethically to advance… well, bullshit. And that is one huge challenge that comes with mass access to data. It’s actually why I’m so worried about mass access to generative AI at work — it’s not the “replacing jobs” part I’m worried about, it’s the idea of a bunch of people cranking out spam with the same lack of discipline or literacy they showed for twenty years when given easy access to things like user and buyer data.
But, to be fair, “you can do it wrong” is not really a unique limitation of data or science — it’s just easier to dismiss bad art than bad science. Most people feel comfortable dismissing art that they think is worthless, but the language of data and science carries with it an authority you have to overcome with your own knowledge of the material. Hipsters and scenesters who didn’t think my band was punk rock enough aside, it’s easier to bully with science than art.
The bigger, more impactful problem with data as your source of inspiration is that it rarely leads to truly new, outside-the-box ideas. Maybe you don’t collect the relevant data, which makes sense, since you don’t actually do the thing yet. Maybe there’s a benefit or impact that isn’t consistently shown in the data, or it’s only shown in broad measures like customer satisfaction that don’t confirm specific assumptions about your product. Either way, there’s a fundamental limitation on how creative you can get if you’re 100% tethered to basing your ideas on observable things that already exist.
I’ve certainly triggered the art part of my brain with some realization provided to me by data, but I’ve never had a transformative, exciting idea that was simply telegraphed by the data in front of me. And while that might not seem like an existential blocker to innovation, it’s a lot harder than you’d think to get a 2020s-era organization to act on a great idea that was only inspired by data. There will always be a less radical, less artistic idea with more support from data, and in my experience, that’s the idea most companies will be willing to commit to.
In fact, this process has happened so frequently and so consistently in my professional journey that I’m honestly kind of shocked when the opposite takes place, as it has with virtual/augmented reality platforms and generative AI. While you can now dig up some data to justify basically anything at this point (and I assume it’s been done with these ad nauseam in various meetings at the companies pushing these concepts), it seems pretty clear that this is coming from somewhere else. The only concrete data points we seem to have about generative AI, for instance, are that ChatGPT got a ton of usage right out of the gate, and that businesses are convinced they want to use AI as long as you don’t actually define what its capabilities, costs, or limitations are. Cool, let’s ship it, I guess. In some ways, “the computer is a person!” is one of those data-inspired, art-led approaches to product innovation I mentioned earlier. If it’s right, you end up with what many in the media and investment world seem to expect — a revolutionary new platform that, when we look back, will seem too obvious to have missed but required a fundamental, visionary leap away from the past.
Or… you get the opposite. A derivative, copy-cat idea that’s then poorly applied to nonsensical use cases by a bunch of people with their fingers in their ears who watched too many of the same movies growing up. Either way, the art vs. science debate won’t get resolved, because either approach could plausibly take a good chunk of the credit (or blame, depending on how this plays out). If everyone migrates to the metaverse (or we hilariously and self-servingly redefine the term to encompass something different that people actually want), is that because of a qualitative adherence to Mark Zuckerberg’s vision against all indicators? Or will some team of analysts tell us the whole thing was obvious based on a rigorous review of such and such indicators? If we end up in my personal hell arguing with our computers through command lines, does that mean the tech bros obsessed with building the computer from Star Trek saw the future, or was this an obvious evolution from engagement with Siri & Alexa over the years?
I have my takes. I don’t think they matter, though, because the entire thing will look different depending on who’s looking at it. And my friends will still think I make ads for a living.