Failure of the week

In our Clinical Psychology group in Leiden, we have a tradition of sharing new papers, accepted grant proposals, and other achievements in emails called “Paper of the Week”, “Grant of the Week”, and so forth. I just started here, so I don’t know where the tradition comes from, but it’s great to receive regular updates, especially since the group is so large that it would otherwise be impossible to keep up with everyone’s work.

A few days ago, I proposed a new rubric, “Failure of the Week”, and wrote the following email:

We all grew up in a research culture where successes are communicated, and we pimp our CVs, but we don’t really talk about failures. This is a pretty horrible climate for young researchers to grow up in: everything always shines and is perfect, things go well for everybody, and if you make a mistake, you think you’re the only one who ever did, and that you’re a failure. So I want to suggest that we also share things that didn’t work out, and I’ll start today with my failure of the week: someone found a small coding error I made in a paper published in 2017, and notified me about it yesterday. I will issue a correction; it doesn’t affect the results much, so it’s not the end of the world, but of course it’s not a pleasant experience.

But I think about it this way: When coding 50 or 100 papers, most people will end up making errors. And the fact that I shared my code online enabled another scientist to find the mistake, and alert me to it, so it can be rectified. Yay science! But yeah, it’s a bummer, and if anybody wants to buy me hot chocolate and provide some hugs, I’m not going to say no ;).

How can I avoid making the same mistake in the future? For complex projects, some people have started bringing a person who is not involved in the project on board as a co-author for an independent code review. I think that’s a great idea, and I will consider doing this from now on!

Of course, I’m not the first to have this idea or to talk about failure: several people have published their “shadow CVs”[1], and just last week I blogged about Kaitlyn Werner’s open science story, in which she had a paper rejected because something was wrong with her data, and the reviewer only found out because she had submitted her data along with the paper.

In any case, we don’t talk enough about failure, so here’s mine. It’s nothing to be proud of; in fact, I made a really stupid coding mistake when averaging columns in a matrix.
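
To give a sense of how easily such a slip can creep in, here is a made-up illustration in R, not the actual error from the paper: with apply(), the MARGIN argument decides whether you average over rows or over columns, and swapping it silently produces the wrong summary.

    # Hypothetical example, not the actual error: a symptom-by-scale matrix
    # with 52 rows (symptoms) and 7 columns (scales), filled with random data.
    overlap <- matrix(runif(52 * 7), nrow = 52, ncol = 7)

    wrong <- apply(overlap, 1, mean)  # MARGIN = 1 averages the 52 rows
    right <- apply(overlap, 2, mean)  # MARGIN = 2 averages the 7 columns

    # colMeans() says what it does and is harder to get wrong:
    stopifnot(isTRUE(all.equal(right, colMeans(overlap))))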

Correction

What happened? On August 28, 2018, I received an email from Rachel Visontay, Research Assistant at the National Drug and Alcohol Research Centre, who had looked at the online supplementary code I had shared for the paper entitled “The 52 symptoms of major depression: lack of content overlap among seven common depression scales”, published in the Journal of Affective Disorders. She alerted me to a mistake in the syntax, and I am incredibly glad that I shared my code online, so that the mistake could be identified by others and can now be rectified[2].

I have uploaded a detailed correction note to the Open Science Framework, along with the updated R syntax, and sent a note to the Editorial Board of JAD. The main figure, which some of you might know already, stays the same, but some of the overlap values change slightly. In sum, the Results section of the Abstract should correctly read (changes shown as original → corrected):

“Results: The 7 instruments encompass 52 disparate symptoms. Mean overlap among all scales is low (0.36) → moderate (0.41), mean overlap of each scale with all others ranges from 0.27–0.40 → 0.31–0.47, overlap among individual scales from 0.26–0.61. Symptoms feature across a mean of 3 scales, 40% of the symptoms appear in only a single scale, 12% across all instruments. Scales differ regarding their rates of idiosyncratic symptoms (0%–33%) and compound symptoms (22%–90%).”

Since journals don’t update their final papers, I was wondering whether it would also make sense to upload a corrected postprint to the Open Science Framework. Suggestions?

PS: Jeff Rouder actually scooped me with his blog post entitled “making mistakes”: after I had written the first draft of this post and left it lying around for a few days for corrections, he published his own piece about failure. It’s definitely worth reading. Damnit, Jeff.

Footnotes:

  1. E.g., Bradley Voytek in 2013, or Devoney Looser in 2015.
  2. I asked Rachel whether I could thank her here, and she said that was OK.

7 thoughts on “Failure of the week”

  1. Pingback: Antidotes to cynicism creep in academia » Mental health & data science

  2. Pingback: Summary of my academic 2018 - Eiko Fried

  3. Pingback: Looking back at 2018 - Eiko Fried

  4. Steve Lindsay

    Some journals (e.g., APS journals) correct the online version of an article. The online version is flagged to indicate that a correction was made, with a link to a Corrigendum (if the errors were the author’s fault) or an Erratum (if the editor/publisher caused the errors) explaining the nature of the errors, and the article is amended to correct the error.

    For a recent example, see http://journals.sagepub.com/doi/full/10.1177/0956797614535810. This is a model of authors stepping forward to alert the journal to an error and then following through to correct it. In this case, the correction had no material effect on the results, but even if it had, it would be better to correct the error than to let it stand. Correcting errors is cool!

    1. Eiko (post author)

      Thanks Steve! I wonder if it would be possible to have a small box in the PDF files of final papers that can ‘phone home’. The box is green if everything is in order; if the paper is corrected in three years, the box turns red and alerts the reader of the PDF to check for updates or corrections. There are probably lots of issues with that I haven’t thought through, though.

  5. Michael Seto

    Journals will publish errata if you realize an error after final publication.

    The erratum would summarize the error and the specific corrections. Unfortunately, errata aren’t always directly and clearly linked to the original article, even though that would be trivial in the digital era (it was not so easy in the days of paper).
