Why I Remember Accuracy & Precision

I don’t actually remember when we learned this, but for some reason one of the random school things I distinctly remember learning was the difference between accuracy and precision. Now, as it turns out, I don’t remember it 100% correctly (yes, I see the irony there given the subject), but I looked it up again and for the most part the difference stuck with me pretty well over the years.

Here are the basics — essentially, accuracy is how close a measurement is to a reference sample (a.k.a., the objective truth), and precision is how consistent your repeated measurements are with each other (i.e., how little they vary). So let’s say you throw three darts, and you’re within an inch of the target, but in three different places. Then, you throw three more, and hit the exact same spot three times in a row, six inches from the target. Your first “sample” of darts would be more accurate, and less precise, than the second sample.
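
If it helps to see the difference in code, here’s a minimal sketch (in Python, with made-up dart coordinates that roughly match the story above) that scores each set of throws two ways: “accuracy error” as the average distance from the bullseye, and “spread” as how tightly the throws cluster around their own center.

```python
import math
import statistics

# Hypothetical dart coordinates in inches, with the bullseye at (0, 0).
# sample_1 mirrors the "close but scattered" throws; sample_2 the
# "tight cluster, six inches off" throws from the example above.
sample_1 = [(0.9, 0.2), (-0.6, 0.7), (0.1, -0.8)]
sample_2 = [(6.0, 0.1), (6.1, 0.0), (6.0, 0.2)]

def accuracy_error(sample):
    """Average distance from the bullseye (lower = more accurate)."""
    return statistics.mean(math.hypot(x, y) for x, y in sample)

def spread(sample):
    """Average distance from the sample's own centroid (lower = more precise)."""
    cx = statistics.mean(x for x, _ in sample)
    cy = statistics.mean(y for _, y in sample)
    return statistics.mean(math.hypot(x - cx, y - cy) for x, y in sample)

for name, darts in [("sample 1", sample_1), ("sample 2", sample_2)]:
    print(f"{name}: accuracy error = {accuracy_error(darts):.2f} in, spread = {spread(darts):.2f} in")
```

Run it and the first sample should come out with a small accuracy error and a bigger spread, while the second is the reverse. Neither score is “better” on its own; they just measure different things, and you can be great at one while being terrible at the other.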

That’s pretty much the gist of it. Now, the more philosophical takeaway from this kind of thing is that it’s possible to conflate hyper-specific knowledge of a situation with an accurate assessment of that same situation — and I think this is an increasingly dangerous problem in the information age.

Anecdotal Precision in Current Events

One of the obvious places I’ve noticed this issue is in the ridiculous, never-ending intellectual tire fire that we call the 2016 U.S. presidential election, and around politics in general lately. When I was growing up, and first started paying attention to political arguments, they mostly revolved around people’s different preferred best practices for problem solving, as opposed to especially damning quantitative facts. This wasn’t always great — for instance, certain candidates could get away with measurably irresponsible or dishonest policies because people felt like that person was inherently responsible and honest — but it also left a lot of room for discussion. When you’re having an ideological argument about the logic behind something like gun control, or free trade, there’s really no card that anyone can just pull out and say “you are empirically wrong”. You’re forced to argue about why you think certain things happen, and that can be very instructive if you’re open to it.

Like anything else, accurate (there’s that word again) data is an incredibly helpful (and important) tool for grounding these discussions in reality, and the collection and understanding of data has exploded since my first election in 2000. Unfortunately, a lot of people have taken that the wrong way, and started looking for data that justified their largely unexamined, qualitative views — and of course, that’s easier to find than ever through perfectly legitimate online research, or lazier, vulnerable-to-confirmation-bias channels like Facebook.

What I find jarring is how comfortable people are falling back on data, but how bad they often are at validating any of it. I got the “there’s way more violent crime in the U.K. than in the U.S. because the police are unarmed” argument thrown at me the other day, and I was actually kind of frozen by it. “Really?” I thought. It seemed weird for a couple of reasons that my brain immediately flagged:

  • It had never come up before in any of the discussions I’ve had with gun rights advocates (and I live in VA, so those discussions are pretty common).
  • It didn’t make a ton of logical sense — were police being overpowered by armed criminals? Was the idea of being shot at some kind of deterrent for people who probably didn’t expect to be caught anyways?
  • If this was the case, why wouldn’t the U.K. react? Giving patrol cops guns (or taking them away) isn’t an insanely radical policy anyways, and you’d think it’d be something you’d experiment with if you had an empirical policing problem.
  • It seemed weirdly specific, even though in some way that gave it an odd feeling of credibility.

But of course, this is me, so even though all these red flags went up, I was curious about what could have been a huge personal blind spot. So, as I often do, I started looking for information on the issue, and I immediately found two things that explained the conversation.

[Image: the meme in question. Caption: “I can’t believe people get their news this way.”]

  1. a low-resolution JPEG meme (pictured here) stating this same “fact” with a pithy rejoinder at the end about how gun control doesn’t work
  2. a long, boring Politifact examination of the argument, which convincingly argued that the data indicated no such trend, and that the comparison was apples-to-oranges anyway and would require better information to make conclusively

So basically… this statement is bullshit, and once I thought about it a bit more, that actually seemed pretty obvious. And yet, I didn’t dismiss it the way I dismiss vague conspiracy theories. Why? I think it’s because of that last thing — the claim was so specific, and I feel like human beings (myself included) have a tendency to assume specific arguments are inherently more sound than broad ones.

And sure enough, when I think back over my interactions with different people, and I assess all the different sources of truth I’ve dealt with in the past — from outright liars to the well-intentioned but incompetent — this really does feel like a pattern. It really does feel like I’m constantly (and increasingly) inundated with hyper-specific, objectively wrong arguments.

Data Illiteracy

I’ve read lots of things about the problem of financial literacy in a world where making good, often complex financial decisions is a bigger and bigger part of getting and/or staying out of poverty. I think data literacy is the professional corollary to this challenge. Sure, real data people are fine — actual data scientists, certified accountants, professional economists, etc., because they’ve always needed to be data-literate. But almost everyone else has seen data injected into their various professions and areas of expertise seemingly overnight, and while we understand many of the things all this data is supposed to represent, we’re often fundamentally unprepared to assess the data itself. Where did it come from? How are different labels and thresholds defined, and what happens to your data if you change them? What is a sufficient sample size for various measurements? What’s the difference between causation and correlation? Do you see things like daily tracking polls and Salesforce reports as objective facts, or is your initial response to ask about their methodology?
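
To make a couple of those questions concrete, here’s a small, entirely hypothetical sketch in Python (the “ticket resolution time” data and SLA thresholds are invented purely for illustration) showing how the same raw numbers can produce very different headline figures depending on where you draw a threshold and how big your sample is.

```python
import random

random.seed(1)

# Entirely made-up "ticket resolution time" data, in hours.
resolution_times = [random.lognormvariate(1.5, 0.6) for _ in range(200)]

def pct_within_sla(times, sla_hours):
    """Share of tickets resolved within the chosen SLA threshold."""
    return 100 * sum(1 for t in times if t <= sla_hours) / len(times)

# Same raw data, very different headline number, depending only on
# where you decide to draw the "acceptable" threshold.
for sla in (4, 6, 8):
    print(f"SLA = {sla}h: {pct_within_sla(resolution_times, sla):.0f}% on time")

# And a small slice of the same data can tell yet another story,
# which is why sample size matters before you trust the number.
print(f"First 10 tickets, SLA = 6h: {pct_within_sla(resolution_times[:10], 6):.0f}% on time")
```

Nothing here is sophisticated analysis; it’s just the kind of basic poking around that makes you ask where a number came from before you repeat it.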

To those real data people I mentioned (i.e., not me), these are obvious, basic questions. To regular people in business functions, they may not seem important at all. I’m fortunate enough to have grown up as the son of an engineer who literally designed test & measurement equipment, so while I’m personally unqualified to do any kind of academically rigorous data analysis, concepts like accuracy and precision have an emotional weight to me that I don’t think a lot of people necessarily share. That, combined with just enough education to get by (thanks, Social Science Statistics 101!), has given me a decent toolset for challenging the kind of flawed data conclusions I encounter every day. Even when those conclusions are incredibly specific.

Tips for Non-Experts

Just like we don’t all need to become developers to benefit from a broad, conceptual understanding of how software works, we don’t all need to become data scientists to get better results from our increasingly data-infused careers. We just need data literacy, which to me is mostly the ability to understand what we know, and what we don’t. In my experience, that has a lot more to do with the willingness to ask intelligent, often humbling questions (and to get past our own insecurities) than with anything else, which is probably why it’s such a challenge for so many of us. There are just too many people walking around, armed with “data”, who lack the humility, experience, or skills to resist using it to clumsily justify incorrect assumptions.

For me, I’ve spent the last couple of years trying to figure out how I can get the most out of everything from business data to product usage data, and I’ve had the most success doing the following:

  • Simplifying what I’m trying to track, and what I’m trying to conclude.
  • Using the extra time and effort I save from that to focus on consistency of collection and larger sample sizes.
  • Sharing what I find with people outside my functional area, and encouraging them to ask about my methods and logic.
  • Iterating on my methods based on holes those people find, including cutting my losses on things I’ve been tracking that aren’t useful.
  • Being generally skeptical of the big trends I discover, and eager to poke holes in the logic behind them.

In other words, don’t start looking at data when you need to prove to someone else that you’re right about something really specific. Start looking when you can afford to prove yourself wrong about nearly anything — and use your findings to improve your process. Then, when you really do need to make a data-driven decision, you’ll hopefully have more relevant data, consistent collection methods, and some amount of fluency in thinking about it. That doesn’t necessarily mean you’ll draw the right conclusion (remember, you still probably don’t really know what you’re doing), but it’s a lot less likely you’ll draw the wrong one.