The “Lizardman’s Constant” causes the “Nonsense Factor”: a hidden source of false conclusions in research

The “Lizardman’s Constant” is a fascinating idea from Scott Alexander (Astral Codex Ten). Scott argues that approximately 4% of responses on polls and surveys are not sincere – hence why 4% of respondents report believing that lizardmen run the Earth, 5% of atheists say they believe in God, and so on.

I’d like to introduce a related idea that can ruin studies: the “Nonsense Factor.” As Scott rightly points out, in any study, some respondents do not answer based on their sincere beliefs – whether because they want to mess with the researcher, they are spammers or bots, or they simply don’t care enough to read the questions properly. Some people’s answers are essentially nonsense. In the spirit of Scott’s idea, let’s call respondents who answer insincerely for any reason “lizardman” respondents.

On questions where sincere respondents give a wide range of answers, this is usually not a big deal – the lizardman respondents distort the numbers slightly, but not enough to matter for most purposes.

But consider what happens when there is a question that sincere respondents almost all give the same answer to. For instance, it could be a true/false question like “Have you ever been charged with XYZ crime?” or “Do you have XYZ rare disease?” or “Have you ever been to XYZ rarely visited country?” If only 2% of sincere respondents answered these questions in the affirmative, but 50% of the lizardman respondents answered in the affirmative, then slightly more than 50% of the yes responses would be from lizardmen!

A very strange thing happens as a consequence: the yes responses to all of these questions become correlated with each other! Even when there is truly NO correlation between responses to these questions among sincere respondents, with the numbers above (a 2% sincere base rate and 4% lizardmen answering yes half the time) we will erroneously measure a correlation of 0.23 due to the lizardmen! In other words, our statistics will show that being charged with this sort of crime, having this rare disease, and having traveled to this rarely visited country are all linked when there’s actually no relationship at all between the three!
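A minimal simulation makes this concrete. The sketch below assumes 4% lizardmen who answer each yes/no question at random (yes with probability 0.5) and a 2% yes rate among sincere respondents, with every respondent answering the two questions independently:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1_000_000          # respondents
LIZARD_RATE = 0.04     # share of insincere ("lizardman") respondents
SINCERE_YES = 0.02     # yes rate among sincere respondents
LIZARD_YES = 0.50      # lizardmen answer yes at random

# Mark who is a lizardman, then assign each person's yes-probability
lizard = rng.random(N) < LIZARD_RATE
p_yes = np.where(lizard, LIZARD_YES, SINCERE_YES)

# Two unrelated yes/no questions, answered independently by everyone
q1 = rng.random(N) < p_yes
q2 = rng.random(N) < p_yes

# Spurious correlation appears even though sincere answers are independent
r = np.corrcoef(q1, q2)[0, 1]
print(r)
```

With these parameters the measured correlation comes out near 0.23, even though no sincere respondent’s answers to the two questions are related at all.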

Or, take the case of a rare demographic characteristic (such as belonging to a small ethnic minority). Belonging to this minority will falsely be found to correlate with all these other rare traits, even in situations where it has no relationship. So, even though only ~4% of respondents are lizardmen, these lizardmen cause a non-negligibly-sized false correlation between traits!

This is what I call the “Nonsense Factor” – the “factor” that makes all rare traits seem correlated to each other due to lizardmen responders.

The Nonsense Factor can seriously screw up research because, in practice, even if you try hard to detect and throw away data from spammers, bots, trollish respondents, etc., it’s quite challenging to get the rate of such responses below 4%. And higher rates of lizardmen make the problem even worse. For instance, with 8% lizardmen, the false correlation for rare yes/no traits (that are truly uncorrelated) rises to 0.31! I would certainly not have predicted (before simulating this) that such a small percentage of bad responders would cause such large fake correlations. The intuition here is that although the lizardman responders make up a small percentage of all respondents, they end up being a substantial percentage of those who answer yes for the rare traits, and hence have a large influence.

If, on the other hand, the trait is less rare among sincere respondents, the correlation shrinks considerably. With a 4% lizardman constant and a 12% rate of the trait in the sincere population, the erroneous correlation falls to just 0.05 – small enough that it would usually be considered not meaningfully different from zero. So this really is a phenomenon that only applies to studies involving rare traits.
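These figures don’t actually require simulation. Under the same assumptions (lizardmen answer yes with probability 0.5, independently on each question), the spurious Pearson correlation between two truly independent yes/no traits has a simple closed form – a sketch:

```python
def nonsense_corr(lizard_rate, sincere_yes, lizard_yes=0.5):
    """Pearson correlation induced between two truly independent yes/no
    questions when a lizard_rate fraction of respondents answers yes
    with probability lizard_yes regardless of the question."""
    # Overall probability of a yes answer (mixture of the two groups)
    p = (1 - lizard_rate) * sincere_yes + lizard_rate * lizard_yes
    # Probability of yes to both questions (independent within each group)
    p_both = (1 - lizard_rate) * sincere_yes**2 + lizard_rate * lizard_yes**2
    cov = p_both - p * p
    return cov / (p * (1 - p))  # variance of a Bernoulli(p) is p*(1-p)

print(round(nonsense_corr(0.04, 0.02), 2))  # 0.23
print(round(nonsense_corr(0.08, 0.02), 2))  # 0.31
print(round(nonsense_corr(0.04, 0.12), 2))  # 0.05
```

Plugging in the three scenarios discussed above reproduces 0.23, 0.31, and 0.05 exactly, which also makes it easy to check how bad the effect would be for any other combination of lizardman rate and trait rarity.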

So this means that, in many kinds of research, when multiple rare traits are measured, they may be found to erroneously correlate with each other when they have no true correlation!

To my knowledge (please correct me if I’m wrong), this effect is not widely known among the academic research community (though I’m certain I’m not the first to observe it). My own thinking on this was heavily influenced by Aella, both by a comment I saw her make that relates to strange correlations and because it was through analyzing a data set that she gave me access to that I honed my thinking on this topic.

So, how can this be solved? Well, for many sorts of questions (where the full range of answers is pretty likely for sincere respondents), it doesn’t need to be solved. However, for questions in studies that are asked about rare characteristics, the only way I know of to solve this is to somehow achieve an extremely clean data set where the substantial majority of lizardmen respondents have been identified and removed. And that’s really hard to do.

It is a standard best practice to include “attention check” questions to try to catch bad respondents (and that’s essential). When the bad responders are answering at random throughout a study, they are quite easy to find. But non-random bad responders, which are pretty common, are much harder to catch. I think that researchers often fail to get the rate of bad responses below 4%.


This piece was first written on July 5, 2024, and first appeared on my website on December 26 2024.


