Always Conduct the “Simplest Valid Analysis”

Getty Images
This piece was cross-posted on the Transparent Replications blog. A significant and pretty common problem I see when reading papers in social science (and psychology in particular) is that they present a fancy analysis but don’t show the results of what we have named the “Simplest Valid Analysis” – which is the simplest possible way of analyzing the data that is still a valid test of the hypothesis in question. This creates two potentially serious problems that make me less confident in th...

How can big problems get solved?

Image by Matt Ridley on Unsplash
I think that big problems in the world (like chronic homelessness, loneliness, depression, poverty, underrepresentation of groups, risks from A.I., global warming, etc.) are ridiculously complex - way more complex than the narratives about them suggest. The only approach I know of that I think has a meaningful shot to help solve such huge problems, which you might call “Scientific Entrepreneurship,” combines two methods into one: (1) Rigorous science to deeply understand the causal struct...

Does money buy happiness, according to science?

By Spencer Greenberg and Amber Dawn Ace. This piece first appeared on ClearerThinking.org on February 28, 2024, was edited on February 29, 2024, and appeared here with minor edits on March 27, 2024. Does money buy happiness? Intuitively, the answer is yes: common sense tells us that poverty and hardship make people unhappy. We can use money to buy a lot of things that might make us happier – things like a nicer home, fancier vacations, education for our children, or just the oppor...

I’m an extreme non-credentialist – what about you?

Photo by Good Free Photos on Unsplash
I'm an extreme (>99th percentile) non-credentialist. Does that mean if I find out someone has a nutrition Ph.D., then I don't think they know more about nutrition than most random people? Of course not. Credentials are evidence of what someone knows (e.g., having a nutrition Ph.D. is evidence that you have nutrition knowledge). But part of what makes me an extreme non-credentialist is that if I spend an hour watching someone with a nutrition Ph.D. debate a completely self-taught person, a...

How great is the U.S., really?

This piece was coauthored with Travis Manuel. This is a cross-post from the Clearer Thinking blog. According to YouGov polling, 41% of people in the United States think that it is the greatest country in the world. Others see the U.S. as a place full of arrogance, violence, and inequality. So, what's the truth?  The truth is that there isn't a single notion of what makes something the "best." To explore how great (or not) America is, we'll start by looking at the question from mu...

Five rules for good science (and how they can help you spot bad science)

Image by S. Widua on Unsplash
I have a few rules that I aim to use when I run studies. By considering what it looks like when these rules are inverted, they also may help guide you in thinking about which studies are not reliable. (1) Don't use a net with big holes to catch a small fish. That means you should use a large enough sample size (e.g., number of study participants) to reliably detect whatever effects you're looking for! (2) Don't use calculus to help you assemble IKEA furniture. That means...

Three reasons to be cautious when reading data-driven “explanations”

Photo by Sunder Muthukumaran on Unsplash
Did you know that fairly often, there will be multiple extremely different stories you can tell about identical data, none of which are false? In other words, the mapping from statistical results to true stories about those results is not unique. This leads to a lot of confusion, and it also implies that claims about "the reason" behind a complex social phenomenon should be interpreted with caution. Here are 3 common situations of this happening, each illustrated with realistic political ...

How to avoid feeding anti-science sentiments

Photo by Nila Racigan on Pexels
A major mistake scientists sometimes make in public communication: they state things science isn't sure about as confidently as things it is sure about. This confuses the public and undermines trust in science and scientists. Some interesting examples: 1) As COVID-19 spread early in the pandemic, epidemiologists confidently stated many true things about it that were scientifically measured (e.g., rate of spread). Some of them were also equally confidently stating things that were just spec...

Importance Hacking: a major (yet rarely-discussed) problem in science

Image created using the A.I. DALL·E
I first published this post on the Clearer Thinking blog on December 19, 2022, and first cross-posted it to this site on January 21, 2023. You have probably heard the phrase "replication crisis." It refers to the grim fact that, in a number of fields of science, when researchers attempt to replicate previously published studies, they fairly often don't get the same results. The magnitude of the problem depends on the field, but in psychology, it seems that something like 40% of studies i...

How can we look at the same dataset and come to wildly different conclusions?

Image by Ludomił Sawicki on Unsplash
Recently, a study came out in which 73 research teams independently analyzed the same data, all trying to test the same hypothesis. Seventy-one of the teams came up with numerical results across a total of 1,253 models. Across these 1,253 different ways of looking at the data, about 58% showed no effect, 17% showed a positive effect, and 25% showed a negative effect. But that's not even the oddest part. The oddest part is that despite a heroic attempt to do so, the study authors failed to...