Zombie theories: why so many false ideas stick around

When I talk to friends or family members who do not work in academia, they have beliefs about how science works — beliefs that appear entirely sensible.

  1. Most published results are correct or at least plausible, because scientific journals are the most thorough outlets.
  2. Errors occur very rarely, and if they do, they are corrected swiftly.
  3. If people really cheat, which happens very rarely, they are kicked out, especially if they are repeat offenders.
  4. When prominent news outlets cover science, they do so accurately: New York Times, Guardian and other sources can (largely) be trusted.

As I’ve shown in a recent blog post, none of this is the case.

Antidotes to cynicism creep in academia

I do not want to revisit this debate, but instead take a look at why this is: why do problematic studies get published in the first place? Why do ideas stick around even when they’re shown to be false? And what can we do about that?

The main reasons are vicious cycles and self-sustaining feedback loops inherent in how academia and science operate.

1. Examples of zombie theories

The motivation for thinking about this topic comes from recent discussions about psychiatric genetics I’ve seen online. About 25 years ago, the psychiatric genetics literature started trying to identify single genes responsible for mental health problems. Many careers were built on this idea: many Science and Nature papers were published, tons of grants were awarded to study these ideas, and many newspaper articles appeared. All of this happened despite lucid warnings from researchers, not only about methodological issues, but also about the basic plausibility of single genes having an important impact on mental illness.

All of this continued for about 20 years, until a 2019 publication by Border et al. finally put the nail in the coffin of this idea, which I discussed in detail in another blog post.

The replication crisis hits psychiatry: No candidate genes for depression

There are many other examples of zombie theories, that is, theories that stick around and simply won’t die. Correcting the scientific record is extremely difficult, and Retraction Watch documents countless examples.

2. Why zombie theories stick around

A 2020 paper showed that retractions after misconduct have essentially no impact on citation counts [1]. That is, retracted papers keep gathering citations much as they would have had they never been retracted.

The reason the scientific record is so hard to correct is that nearly all relevant players — relevant meaning that they have power in the system — benefit, which is incredibly harmful to the idea of science as a cumulative, self-correcting enterprise. It is also harmful to society, of course, because false ideas stick around that limit progress.

Let’s look at the individual players.

2.1 Researchers

Researchers have greatly benefited from really questionable work, for two reasons.

First, there are immediate benefits from high-impact publications that get tons of press. It’s great for your CV, it’s great for applying for grant funding, it’s great for promotion, and so on. You give TED talks, you make money from invited keynotes, you write books that sell well, and so on.

Second, there is no science police. Even when people are caught cheating, consequences are usually rather minor. There are situations where researchers lose their jobs, of course. At my own university, for instance, there was an egregious case of scientific misconduct, with at least 15 papers identified as fraudulent. But in my view, the researcher was let go mostly because there was a criminal offense (the person carried out a medical study without having obtained ethics approval). Even in such a severe case, within a very short time, the researcher was hired at another university (hello, Dresden).

In sum, there are benefits to false positive findings, and if findings do not replicate, this is of little consequence. My colleague and friend Anna van’t Veer has made a nice figure visualizing this problem: scientists often work to advance their careers, which, unfortunately, does not automatically mean that they advance science.

This may explain, in part, why researchers are often so hesitant to share data. Transparency allows others to interrogate research findings carefully and vet them for reproducibility and replicability. This is a big plus for the goals of science, but not necessarily for the goals of an individual researcher, unfortunately.

2.2 Journals and scientific publishers

Scientific journals are often for-profit, and scientific publishers have ridiculously high profit margins (details). Journals benefit financially when papers are discussed in the literature, because this brings them citations. Citations boost journals’ impact factors, and higher impact factors are used by journals as evidence that they are ‘good’ journals, which in turn justifies steeper and steeper prices (currently, publishing a single open access paper in Nature costs researchers €10,290). Notably, even when a paper is cited purely negatively (“see Myers et al. (2013) for an example of how not to do science”), this still increases the citation count and impact factor.
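To make the mechanism concrete, here is a back-of-the-envelope sketch of the standard two-year impact factor calculation. The journal name and all numbers below are made up for illustration; the point is only that the formula counts citations, not whether those citations are approving or damning.

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year journal impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable items
    from those two years. Citation sentiment plays no role, so a purely
    negative citation ("how not to do science") counts just the same."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 800 citations in 2024 to its 2022-2023 papers,
# which numbered 200 citable items.
print(impact_factor(800, 200))  # 4.0
```

Since every citation, including critical ones, raises the numerator, a journal’s headline metric can grow even while its papers are being publicly dismantled.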



Many scientific journals profit directly when more people read a given paper. For one, higher readership lets them raise the prices they charge universities and other institutes for journal access; prices have become so steep that even universities like Harvard say they can no longer afford subscriptions. In addition, journals sell access to individual papers to readers without institutional access (here is an example from a Nature journal).

Some journals have focused on selling in-credible research findings for decades. Nature, for example, often considered one of the best journals, until recently asked researchers to primarily submit findings that are “novel, arresting, illuminating, unexpected or surprising”, which is the perfect recipe for a study that will not replicate (this is a screenshot I took directly from Nature’s “author instructions” website).

For these reasons, journals are also hesitant to correct the scientific record. There are hundreds of prominent examples. Just a few weeks ago I wrote about a 2022 paper that contains massive problems and still has not been corrected or retracted. Or take this example of an economics professor who tried to retract his own paper: two years later, it was still online. Corrections are rare, and as shown for problematic research during the COVID-19 pandemic, the current system “is unsustainable, unfair, and dangerous, and the pandemic has acted to magnify its unsuitability and numerous limitations”.

But there is a second way in which journals contribute to this situation: they often do not publish null-findings, that is, studies showing that there is no association between two things, or that a previous study does not replicate. The reason is the same as described above: such findings gather few citations, they are considered boring rather than novel or unexpected, and universities and the Guardian are not going to write press releases and news articles about “no relationship between A and B”. That null-findings are unpopular can be seen in the fact that nearly 100% of published psychological studies support their initial hypotheses, which simply isn’t plausible. See e.g. “Psychology needs to get tired of winning” by Haeffel.

James Heathers published a really good overview article on publication laundering, which fits well here, showing that there is no reputational cost to publishing nonsense. And if you haven’t had enough yet, check out this blog post by Joe Hilgard [2], who tried to correct the scientific record in a situation that could not be clearer, without much success: if an editor in chief of a journal does not act, there is not much you can do. Dozens of such stories are online, and I myself have contacted many journals over the last decade. For example, in January 2024 I reached out to an editor in chief about a paper asking whether mindfulness training is related to brain anatomy (the authors conclude that it is, but only because they simply excluded from their analyses all studies in which it isn’t).

2.3 Media

So researchers and scientific publishers stand to gain when big bold claims or unexpected findings are published. Media do, too. I’ve been deeply disappointed by prominent news outlets like the New York Times or the Guardian, which have uncritically covered some really shoddy research I have expertise on. Over the last decade, this has perhaps been most obvious for the topic of psychedelics as treatments for mental health problems, and I am happy to see more critical pieces these days. In fact, just yesterday, Science Friday published a critical 20-minute segment on MDMA-assisted psychotherapy for PTSD, for which they interviewed Sarah McNamee (and also yours truly, yay).

But big headlines generate clicks, and clicks generate money. And if a newspaper has to issue a correction a month or year later, who cares?

2.4 Universities

Let’s not forget the role universities play in this. I have seen so many press releases that dramatically overstated the evidence contained in scientific papers. For example, the press release for a recent paper on a biomarker for a mental health problem read like an advertisement for a specific product:

“The test is anticipated to be available later this year from the IU spin-out company [company]. For more information about precision psychiatry and blood testing, visit the [company] website.”

For one, this happens because the people writing university press releases are press folks: they are hired to work on the university’s reputation, so they try to make studies look good. Please don’t misinterpret this: there are fantastic people responsible for press releases at universities, and many of them try to write nuanced and careful pieces. But generally, universities write press releases not about small null-findings but about the most exciting papers, and this alone means false positive studies are more likely to be covered.

Another reason press releases such as the example above contain concrete product information is that universities are often involved in patents their scientists develop, and stand to gain money directly from advertising related products.

3. Ways forward

Short-run benefits for multiple actors in the academic and scientific space have sustained a system in which many false positive studies are published; in which consequences for any of the participating parties are exceptionally rare; and in which the scientific record is rarely corrected. The costs are massive. Take the example of psychiatric genetics again: so much research money was spent on a scientific endeavor that did not produce insights and therefore could not work towards solutions for people affected by mental health problems. And scientific funding is a zero-sum game: if we spend money on X, that money is not available for Y. In this case, the money could and should have been spent on other, more fruitful initiatives.

This is the main reason I wanted to write this blog post: some folks on Twitter have sold this as a success. Oh, these non-replicable studies were super important, and we learned a lot. Yes, well, we could have learned the same thing with a lot less financial investment, had we done things properly. The self-sustaining system of universities, individual researchers, publishers, and media stood in the way of doing so, unfortunately.

There have been efforts to compute these costs more directly. For example, a 2015 paper by Freedman, Cockburn and Simcoe concludes that “the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone”. This is why open science initiatives often make points about research waste, tying transparency initiatives directly to a reduction in research waste (e.g., a talk by the amazing Ioana Cristea on research waste in clinical sciences; two papers on the topic).

How do we move forward? I don’t have a panacea and will only tackle this briefly, but one way to think about it is to address each of the different players.

For individual researchers, we need to encourage robust, careful, thorough work in our hiring and promotion policies. Going back to the figure I posted above, one perspective is to try to better align incentives for individual researchers with the goals of science. A welcome counter-perspective is Tal Yarkoni’s “No, it’s not the incentives—it’s you”, which puts the responsibility on individual researchers. As Tal writes:

“There’s a narrative I find kind of troubling, but that unfortunately seems to be growing more common in science. The core idea is that the mere existence of perverse incentives is a valid and sufficient reason to knowingly behave in an antisocial way, just as long as one first acknowledges the existence of those perverse incentives […]. A good reason why you should avoid hanging bad behavior on ‘The Incentives’ is that you’re a scientist, and trying to get closer to the truth, and not just to tenure, is in your fucking job description.”

Tal is right, of course. Teaching a number of workshops on Responsible Research with Anna (we are currently writing this up as a paper, please be patient) has shown me how many resources exist that are very clear about how not to behave as a scientist. Dutch researchers, for example, are bound by a specific Code of Conduct, which clearly lists many of the problems I discuss above.

Still, I am not sure that “please behave more ethically” will work as a solution by itself. Current situations at universities really can suck, with “a publish or perish mentality, long work hours, mental health problems and burnout, and a lack of sustainable and permanent jobs” (details). So I think we should work on both fronts: this one, and the one Tal discusses.

For scientific publishers, many good solutions have been put forward. Basically, we need to transition away from for-profit publishing. The current system is ridiculous and broken, as I explained in a blog post a few years back (also discussed here).

Academia: trapped in the upside down of publishing

Just imagine that it can take a decade (!) for a journal to retract a paper despite clear evidence of problems. That’s just wild.

For media, this isn’t my area of expertise, and I don’t have solutions (other than that good journalism costs money, and supporting good journalists financially may help). I’d be curious what experts in this area think.

For universities: please talk carefully with the folks writing your press releases. Keep them in check, and make sure they understand the strengths but also the shortcomings of your work. I don’t do the sort of work that gets covered a lot in press releases, but when it was, I was quite nitpicky and made sure a press release only stated what was, in my opinion, supported by the evidence.

Perhaps most importantly, we mustn’t give up hope ;). Things look pretty bleak sometimes, but there are tons of amazing initiatives and people improving things as we speak.


PS1: Another reason why zombie theories stick around is that theories in psychology are usually vague and narrative descriptions, which makes them essentially unfalsifiable. I will not cover this issue here, but I wrote about it in detail previously. As Paul Meehl famously said: psychological theories “tend neither to be refuted nor corroborated, but instead merely fade away as people lose interest”.

PS2: I’d like to thank Ian Hussey for providing numerous relevant references.

  1. There is more work on this, e.g. this preprint.
  2. Apologies for the typo, Joe — thanks Malte for the help XD

2 thoughts on “Zombie theories: why so many false ideas stick around”

  1. Ed Minchau

    This is precisely why the internet and hyperlinks were invented. Journals made sense when publishing meant owning a printing press and distribution meant mailing out physical copies.

    There really is no need for journals at all anymore. All of the “gatekeeper of information” careers are obsolete. A paper can be published in pdf form as part of a blog post. Citation is by hyperlink and pingback. The whole world can referee in the comments.

  2. Michael

    This is exactly why Artificial Intelligence will fail. It will be trained on the very acme of human folly.

