By definition, we believe that each of our beliefs is true. And yet, simultaneously, we must admit that some of our beliefs must be wrong. We can’t possibly have gotten absolutely everything right. This becomes especially obvious when we consider the huge number of beliefs we have, the complexity of the world we live in, and the number of people who disagree with us. The trouble, though, is that we don’t know which of our many beliefs are wrong. If we knew that, we would have stopped believing them already.
But not all hope is lost. We can reason effectively about which of our beliefs are more likely to be correct, and which are more likely to be in error. Even if two ideas inspire equally strong feelings of belief, further considerations can reveal that we are more likely to be correct in one case than in the other. In other words, there are traits beyond our strength of belief that can help us identify where we are likely to have made errors.
Consider the following properties that beliefs can have. Each of these is an indicator that a belief is less likely to be true.
- Many smart, knowledgeable people disagree with you (e.g. you think that evolution didn’t happen). If many such people think you are wrong, it is not obvious why your belief is more likely to be correct than the beliefs of those who disagree.
- You have a financial (or other) incentive to believe it (e.g. you think that the product you created really does regrow hair, and you value providing a product that helps people). When we have an incentive to think a certain way, we are less likely to seek out or listen to evidence that contradicts this way of thinking.
- If the belief were not true you would find it psychologically disturbing (e.g. you believe that your wife does not fantasize about any other men). Our minds tend to veer away from thoughts that disturb us, making it less likely that we believe them, even when they are true.
- You originally came to believe it for reasons that don’t have much to do with logic, evidence, or reason (e.g. growing up, your mom wouldn’t let you pet dogs on the street, so you believe that doing so is dangerous). If evidence played no role in forming a belief, there is little reason to expect it to track the truth.
- Your argument for why your belief is true is long and complex (e.g. you believe that a convicted criminal is innocent because, when you evaluate the twelve pieces of evidence given against her, you find that each fails to hold up). When our arguments are long and complex, it is more likely that we have made an error at some point in our thinking (see the probability sketch after this list).
- There are many possible outcomes, and your belief is that just one of them will occur (e.g. you think Hillary will beat out the other seven candidates in this primary). Typically, the more possible outcomes there are, the less likely it is that any particular one of them will occur.
- A large number of factors influence whether your belief will end up being true (e.g. you’re convinced that GDP growth will decline over the next year). When many factors influence an occurrence, it is really hard to be sure that you have properly taken into account all of the important ones.
- You don’t understand the arguments of those who disagree with you, or can’t see how they could believe what they believe (e.g. you know that a fetus is obviously a person). When you don’t understand contrary opinions, it is an indicator that you have mainly researched one side of an issue, and so are less likely to have really weighed the strength of the arguments on all sides.
- You become emotional when people disagree with you about the belief (e.g. you think that insurance companies should not cap health expenditures for illnesses that are usually terminal, and you become upset when challenged on this issue). The problem here is that strong emotions can interfere with our ability to evaluate arguments objectively, and prevent us from engaging in open-minded discourse about a subject.
- You can’t clearly explain what your belief means (e.g. you’re convinced that you have free will). When we find it hard to explain what we mean by one of our beliefs, it may be that we have merely become attached to an idea or intuition, rather than having considered the evidence and formed a conclusion based on it.
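To make the fifth point concrete, here is a minimal sketch of how errors compound in a long chain of reasoning. The per-step reliability figure is an assumption invented for illustration, not anything claimed above:

```python
# Sketch: error accumulation in a serial chain of reasoning.
# Assumption (mine, not the author's): each step is independently
# correct with probability q, and the conclusion holds only if
# every single step holds.

def chain_reliability(q: float, n_steps: int) -> float:
    """Probability that an n-step serial argument contains no error."""
    return q ** n_steps

if __name__ == "__main__":
    q = 0.95  # hypothetical per-step reliability
    for n in (1, 3, 12):
        print(f"{n:2d} steps -> P(all correct) = {chain_reliability(q, n):.2f}")
    # Twelve steps at 95% each leave only ~54% confidence in the whole chain.
```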
To be good at identifying and stamping out our false beliefs, we need to go beyond just considering how strongly we feel that they are true. We need to consider the properties of our beliefs, and decide whether each is the sort of belief we should be skeptical of.
Influences: Kathryn Schulz
5.a – If you have a complicated proof but you’ve individually checked all the bits, that is evidence that it’s likely to be true. The world is complicated, and your understanding of the world should be similarly complicated. There’s a particular kind of simple explanation that is wrong precisely to the extent that it is simple.
I can also have fun with 10. I’m not sure I’ve exactly defined free will, but I do have the property of not-determinism. Not-determinism is the fact that my actions are not predictable in advance, even in principle, even probabilistically. It’s trivial to prove I have not-determinism in practice relative to predicting agents. (And also why that’s biologically adaptive.) It’s not even impossible that I have not-determinism relative to physics.
Though actually, I want to dramatically extend 10. If you stopped believing your belief, how would your behaviour change? If you don’t know, there’s no point in defending that belief. It’s just dead weight. Beliefs about GDP, evolution, politicians, and criminals in the news (but I repeat myself) usually fall into this category. Free will should fall into this category but doesn’t because voters are illogical and/or badly educated. (They checked.)
For instance, I can only justify my interest in evolution by my self-professed role as an epistemologist. Is my belief justified? Are the skeptic’s beliefs justified? My interest in the Singularity is in exactly the same position.
Regarding 5: if you have a complicated proof with many bits, there is some chance of making an error while checking each bit. The more bits there are, the more likely it is that you made an error in at least one of them. Lots of bits means lots of places where your reasoning could have gone wrong, and so, generally speaking, less overall reliability in the conclusion.
You’re repeating yourself.
Though as a result, I realized it depends on the structure of the proof. If it’s a tower and your only epistemic tool is logic, then that applies, yes. However, in the crime example, the various proofs are empirical and independent. It’s more like an amoeba. Each piece can be checked just as reliably as a standalone proof. Combined, they become more powerful, not less.
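A minimal sketch of that contrast, assuming the independent pieces of evidence can be treated as likelihood ratios under a naive-independence model (all the numbers are invented for illustration):

```python
from math import prod

# Sketch of the "amoeba" structure: independent pieces of empirical
# evidence combined via Bayes' rule in odds form. The independence
# assumption and the figures below are mine, not from the discussion.

def posterior_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Posterior odds = prior odds times the product of independent likelihood ratios."""
    return prior_odds * prod(likelihood_ratios)

if __name__ == "__main__":
    prior = 1.0            # 1:1 odds, i.e. 50/50 before seeing any evidence
    evidence = [2.0] * 5   # five weak, independent pieces, each favoring guilt 2:1
    print(f"posterior odds = {posterior_odds(prior, evidence):.0f}:1")  # 32:1
    # Unlike the serial chain sketched earlier, losing one piece here
    # weakens the conclusion (down to 16:1) but does not collapse it.
```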