Image by S. Widua on Unsplash

Five rules for good science (and how they can help you spot bad science)

I have a few rules that I aim to follow when I run studies. Considering what it looks like when these rules are inverted may also help you think about which studies are unreliable.

(1) Don’t use a net with big holes to catch a small fish

That means you should use a large enough sample size (e.g., number of study participants) to reliably detect whatever effects you’re looking for!
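As a rough illustration of why sample size matters (this simulation is my own sketch, not from the article, and the numbers in it — an effect of 0.5 standard deviations, samples of 20 vs. 200 — are assumptions chosen for demonstration):

```python
import random

def detects_effect(n, effect=0.5, z_crit=1.96):
    """Simulate one study: draw a control group ~ N(0, 1) and a treated
    group ~ N(effect, 1), each of size n, and return True if a z-test
    on the mean difference comes out 'significant' at p < 0.05."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = (2 / n) ** 0.5  # standard error; sigma = 1 by construction here
    return abs(diff / se) > z_crit

def power(n, trials=2000):
    """Fraction of simulated studies that detect the (genuinely real) effect."""
    return sum(detects_effect(n) for _ in range(trials)) / trials

random.seed(0)
small = power(n=20)    # the net with big holes
large = power(n=200)
print(f"power with n = 20:  {small:.2f}")   # roughly a third of studies succeed
print(f"power with n = 200: {large:.2f}")   # nearly all studies succeed
```

With n = 20 per group, most simulated studies miss an effect that is really there; with n = 200, almost none do. The small fish swims straight through the big holes.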


(2) Don’t use calculus to help you assemble IKEA furniture 

That means using and reporting the simplest analysis that is a valid test of your hypothesis (even if you also decide to do fancier analyses). I call this the “Simplest Valid Analysis.” It’s easy to deceive yourself (and others) with overly fancy math!


(3) Don’t claim you saw a bear if all that happened is you heard a growl in the distance

Papers often claim more than they actually show. It’s best either to avoid such claims or to explicitly point out the gaps between what was shown and what was claimed (i.e., acknowledge other interpretations of the data).


(4) Finding out you’ve backed the wrong horse is better than being a horse’s ass

It feels bad when a theory we’re fond of turns out to be wrong, and even more so when we’ve endorsed it in public. But it’s FAR worse to spend years defending falsehoods because we won’t update on evidence.


(5) When you win at poker, remember that you’re in a casino

Results are sometimes just an artifact of chance. If you tested lots of hypotheses, you should be more skeptical of your own p < 0.05 findings. Don’t forget that all of the averages that you estimate come with confidence intervals.
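To see how often chance alone produces "findings" (again, a simulation sketch of my own, not from the article), we can run many tests where the null hypothesis is true by construction and count how many still clear p < 0.05:

```python
import random

def null_test(n=50, z_crit=1.96):
    """Compare two groups drawn from the SAME distribution, N(0, 1).
    Any 'significant' z-test here is a false positive by construction."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = (2 / n) ** 0.5  # standard error; sigma = 1 by construction
    return abs(diff / se) > z_crit

random.seed(0)
num_tests = 1000
false_positives = sum(null_test() for _ in range(num_tests))
print(f"{false_positives} of {num_tests} null tests were 'significant'")
# Expect roughly 5% — winning hands dealt by the casino, not real effects.
```

Around one in twenty purely-null tests will look like a discovery. If a study reports only its handful of significant results out of many tests run, this is exactly the trap.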


You can also think about applying these ideas when you’re reading research rather than conducting it. Be more wary of a study when you notice that it has any of these characteristics:

(1) Is small

(2) Overcomplicates things

(3) Overclaims

(4) Is run by people whose incentives don’t align with truth-finding

(5) Runs many tests that fail, and just focuses on a few that don’t, without acknowledging this 


This piece was written on September 22, 2023, and first appeared on this site on October 18, 2023.


  
