Accepting Your Error Rate


No matter how intelligent, rational, or knowledgeable you may be, you are going to be wrong pretty regularly. And you’ll be wrong far more often than that when dealing with complex topics like politics, people, or philosophy. Even if you’ve freed yourself from thinking in terms of true and false dichotomies, and made the effort to convert your beliefs to probabilities or degrees of belief, you’ll still be wrong by way of assigning high probabilities to false propositions.

Most people underestimate how often they are wrong. Not only is there a common human tendency to overestimate one’s own abilities, but beliefs have the property that they feel right to us when we focus on them. So even if we admit that we likely have a number of false beliefs, it’s easy to go on acting as though each of our individual beliefs is beyond serious doubt. Worse still, we assume that if a belief of ours hasn’t yet been proven wrong, then it’s right (it feels that way, after all), so it seems to us that we have made far fewer errors than we really have.

It’s disturbing to discover we’ve been mistaken about something important – especially when we’ve wasted time or effort because of the belief, or expressed the belief in front of others. So we’re incentivized to come up with justifications for why we weren’t actually wrong. We try to avoid psychological discomfort, and we try to save face in front of others. But there is a healthier way to think about wrongness: recognizing that we have an error rate.

Since we have to assume that we will be wrong sometimes, we can think of ourselves as having a frequency with which things we claim are actually false (or, if we’re thinking probabilistically, a rate at which we assign high probabilities to false propositions). As was pointed out in the comments below, it may be helpful to think of your error rate as being context specific: we make errors more frequently when discussing philosophy than when remarking on the weather. But if you wanted a single overall rate, you could define it, for example, as the fraction of the last 1000 claims you made that actually were not true (or were not even very nearly true). This rate will be different from, but generally quite predictive of, the fraction of your next 1000 claims that will be wrong.
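To make that bookkeeping concrete, here is a minimal sketch (in Python, with made-up claims) of how such an overall rate and a few context-specific rates could be tallied from a record of past claims. The data and context labels are purely illustrative, not anything from the article:

```python
from collections import defaultdict

# Hypothetical record of past claims: (context, whether the claim turned out to be true)
claims = [
    ("weather", True),
    ("philosophy", False),
    ("politics", True),
    ("philosophy", True),
    # ... imagine the last 1000 claims you made
]

# Overall error rate: the fraction of claims that turned out not to be true.
overall_error_rate = sum(1 for _, was_true in claims if not was_true) / len(claims)

# Context-specific error rates, since errors cluster by topic.
by_context = defaultdict(lambda: [0, 0])  # context -> [errors, total]
for context, was_true in claims:
    by_context[context][1] += 1
    if not was_true:
        by_context[context][0] += 1

for context, (errors, total) in by_context.items():
    print(f"{context}: {errors / total:.0%} error rate over {total} claims")
print(f"overall: {overall_error_rate:.0%}")
```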

Our error rate is connected to the chance that any one of our individual beliefs will be wrong, though we obviously should be much more confident in some of our beliefs than others. When evaluating the probability of a particular belief being right, there are a variety of indicators to look at. For example, we should be more skeptical of one of our beliefs if a large percentage of smart people with relevant knowledge dispute it, or if we have a strong incentive (financial or otherwise) to believe it, or if we can’t discuss the belief without feeling emotional.

Once we fully accept the fact that we have an error rate, we can think about wrongness in a new light: we can expect to be wrong with regularity, especially when reasoning about complex subjects. Once we start expecting to be wrong, it is no longer as disturbing to find that we are wrong in a particular case. This is merely a confirmation of our own predictions: we were right that our being wrong is a common occurrence. That way, being wrong doesn’t have to be so frightening. When it happens, it indicates our error rate may be slightly higher than we previously believed, but it is not abnormal.

Estimating our actual error rate is hard, in part because we’re wrong much more often than we notice. So even in theory it doesn’t work to simply count up the times we’ve discovered we were wrong as a fraction of the number of things we’ve claimed to be true. But nonetheless, we can benefit psychologically from remembering that we have an error rate, even if we don’t know what that rate is.

If in your experience you’re almost never wrong, that is indicative of a serious problem: it is far more likely that you are wrong fairly regularly (and are simply bad at processing the counterevidence that should make you aware of your wrongness) than it is that you really are wrong so infrequently. Put another way: failure to detect your own wrongness doesn’t imply you’re right; it indicates you’re very likely deceived about your rate of wrongness. Presumably, you’ve noticed that those around you are wrong quite regularly. Do you really think you’re the incredibly rare exception who is pretty much always right?

When you deeply accept the fact that you’re wrong with a certain error rate, it becomes easier to convert fear of being wrong into curiosity about when your wrongness is occurring. Whereas seeking out your thinking failures may have scared you before, it may now seem dangerous not to seek them out: you already know that you’re going to be repeatedly wrong, so the responsible thing is to figure out when that wrongness is occurring.

Yet another advantage of thinking about your error rate is that it naturally leads to thinking about how to reduce this rate. This can be done by learning to use more reliable procedures for forming beliefs (something I’ll say much more about later), and by using these procedures to check what you previously believed to be true.

Remember: you too have an error rate. You don’t need to fear being wrong. Instead, you should expect it.



Comments



  1. A notion that’s been stewing in my mind is the idea that if you know someone who is even slightly better on average at being right than you are, you should abandon all your beliefs and use that person’s, as that will make you less wrong, on average. In practice, no one does this, perhaps because it makes them seem mindless, and perhaps because everyone thinks they are the best in the world at being right.

    More relevantly, if I have a thousand beliefs, each at 99% credence, and ten of them turn out to be wrong, I have not made any errors: a thousand beliefs held at 99% credence should produce about ten false ones, so my probability assignments were perfectly calibrated.

    Even more relevantly, it might not be so fruitful to think of person-based error rates, as this is a case of the fundamental attribution error. It may be more useful to think of situation-based error rates, e.g. “I am currently thinking about the origins of the universe. I am wrong with probability so close to 1 that I might as well stop,” or “I am currently evaluating my real-world efficacy. There is a very high chance I’m being biased, so I’ll ask someone else.”

    1. I work in a community with some amount of idol worship, which does lead to people setting aside their findings to agree with someone smarter. These people don’t improve their thinking as quickly. Reasoning is a skill that needs to be practiced, and setting aside your beliefs for those of another doesn’t exercise what’s important for developing complex thought. You don’t get good playing on easy.

      If you find a friend who is more correct than you, and you follow his beliefs, then what will you do when said friend gets hit by a bus? How will you teach your kids to think? Or will they also be parrot-folk, always mimicking greatness but never adding to it, content to be good but never the best?

      If you’re betting on the other horse, why are you entering the race?

  2. Very (and typically) smart, Spencer! I’m now more willing to just flat-out admit that I’m wrong about analytical things on which I’ve staked out a position. What’s changed is that ‘being almost invariably right about analytical things’ is no longer my stock-in-trade (as it was when I was a policy consultant who had to compensate for haphazard organizational skills with some other talent). This change just supports your point about cases in which we have a vested interest in being considered right in some sense even when, on the face of things, we look wrong.

  3. If self-analysis can reduce the error rate, as the end of your article implies, then it is possible for some to have a much lower error rate than their peers’.

    So if a person finds themselves to be “almost never wrong,” how should they tell if it’s a problem with their self-analysis or if they simply have a lower error rate than those around them? For example, someone with a 10% error rate in basic reasoning will assume he’s “almost never wrong” if his peers have an average of 30%. After all, most of the time he gets in a disagreement, he’ll be correct. Yet, your article implies that he’s misleading himself, and that it’s arrogant of him to assume he’s right so often. Taking this logic to heart, he might overly credit those around him and end up regressing.

    I’m not arguing against realizing one’s own fallibility, but the “what’s more likely” argument doesn’t hold much water. For example, what’s more likely, that a man who says “I am the president of the United States” is a crazy person, or the actual president? Odds are he’s crazy, but there’s always an exception, and the government wouldn’t work very well if, every inauguration period, the actual president went with the odds and voluntarily committed himself to an insane asylum.

    Maybe people need to think that they’re right, or the one person who is wouldn’t get anything done.