September and October news from clinical psychology, methods, and open science, with a heavy focus on open science this time around: a lot happened over the summer.
- Guardian piece by Chris Chambers & Pete Etchells: “Open science is now the only way forward for psychology”.
- Interesting Twitter discussion on paying peer reviewers, centered on a 2014 economics paper in which paying referees led to shorter review times and higher acceptance rates, with review quality unaffected.
- Nature initiative led by Jessica Polka, calling on journals to make reviewers’ anonymous comments part of the official scientific record.
- Twitter description of two research transparency initiatives implemented by the journal Cortex; these practices can serve as a model for other journals.
- Most of you have seen the recent social science replicability paper, featuring 21 experiments published in Science and Nature. Here’s a very interesting blog post by Gideon Nave on the paper. Interestingly, researchers did very well at predicting a priori which studies would replicate and which would not. Can you guess which studies replicated just from reading a brief description of their results? Here’s a quiz!
- Google has launched Google Dataset Search “so that scientists, data journalists, data geeks, or anyone else can find the data required for their work and their stories, or simply to satisfy their intellectual curiosity”. I haven’t had time to look at it yet, and am very curious how it works.
- Jon Tennant summarized Publons’ Global State of Peer Review report, based on a survey of 11,000 researchers. For instance: “Each year […] researchers spend roughly 68.5 million hours reviewing. That’s 2.9 million researcher days or 7,800 researcher years spent reviewing annually.” (A quick check of these conversions appears after this list.) Nature also discussed the report in detail.
- New preprint by Torrin Liddell & John Kruschke entitled “Analyzing ordinal data with metric models: What could possibly go wrong?” Spoiler: a lot. (A minimal simulation illustrating one of the failure modes appears after this list.)
- The project Our Data Helps seems worth looking into: for the last two years they have been collecting donated social media data, with the specific goal of obtaining data from people who have attempted suicide, and donors have identified the point in time at which they made the attempt. They have shared this dataset with researchers to help accelerate research on suicide. An explanation of the project can be found here.
- Interesting preprint showing that many task measures of cognitive control have low test-retest reliability.
- New paper in press at AMPPS (Advances in Methods and Practices in Psychological Science) by William Davis et al., entitled “Peer Review Guidelines Promoting Replicability and Transparency in Psychological Science”. The paper provides guidance for peer reviewers in the open science era.
- Adam Chekroud summarized a recent Lancet study by Jaime Delgadillo and colleagues, showing that giving therapists feedback about patient outcomes helps get patients back on track.
- Filip Raes summarized a number of recent papers on questions relevant to teaching, such as “should students receive slides before the lecture” and “should students take notes with pen and paper or on a laptop”.
- Crash Course Statistics released a new video that offers a decent intro to the replicability crisis, covering many recent examples and initiatives in psychology (Cuddy, Nosek, p < 0.005, etc.). It may be useful when teaching undergraduate students. I'm a bit disappointed that, after defining replicability and reproducibility clearly, the video keeps mixing up the two terms; that may be worth discussing if you show it in class.
- Great Twitter discussion initiated by Kevin King about replicability/open science questions a number of clinicians asked him recently, such as “What are red flags to look out for in a results section?” and “When can you really trust the literature on a treatment, and when should you worry about publication bias?”.
- Very insightful blog post by Tal Yarkoni, pushing back against the common narrative/excuse that engaging in questionable research practices is ok “because the system is broken”. Tal concludes: “If you find yourself unable to do your job without regularly engaging in practices that clearly devalue the very science you claim to care about, and this doesn’t bother you deeply, then maybe the problem is not actually The Incentives—or at least, not The Incentives alone. Maybe the problem is You.”
- Great regression to the mean example: a study without a control group in which researchers selected participants for being high on specific biomarkers. Participants improved on these markers over time, and the researchers concluded that their intervention “proved that personlized nutrition is able to improve a significant amount of blood biomarkers” [sic] … (A small simulation of the underlying statistical artifact appears after this list.)
- Laura de Ruiter shared her experiences of navigating the US job market and academic system as a European.
- Nature news discussed ‘Plan S’, a radical new open-access plan that might spell the end of journal subscriptions; so far, 11 European funders have announced that the research they fund must be free to read as soon as it is published.
- Preprint on preregistration “as an acquired skill” by Coosje Veldkamp and colleagues, entitled “Ensuring the quality and specificity of preregistrations”.
- JAMA journals retracted six more studies by Brian Wansink (media coverage). Wansink responded on Twitter with an apology for his mistakes; Chris Chambers explained in detail why he considers the problems that led to the retractions to be much more than just “mistakes”.
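A quick check of the Publons figures quoted above (my own back-of-the-envelope sketch, not taken from the report): the conversion from hours to “researcher days” and “researcher years” only works out if you assume 24-hour days and 365-day years, so these are calendar-time equivalents rather than working time.

```python
# Back-of-the-envelope check of the Publons reviewing-time figures.
# Assumption (mine, not stated in the quote): 24-hour days and 365-day years.
hours_per_year = 68.5e6                 # total annual reviewing hours reported

days = hours_per_year / 24              # ~2.9 million "researcher days"
years = days / 365                      # ~7,800 "researcher years"

print(f"{days / 1e6:.1f} million researcher days, {years:,.0f} researcher years")
# -> 2.9 million researcher days, 7,820 researcher years
```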
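To make the Liddell & Kruschke point a bit more concrete, here is a minimal simulation sketch (my own toy example, not taken from the preprint; all numbers are illustrative): two groups have identical latent means but different latent spread, responses are mapped onto a 5-point scale via unequal thresholds, and a t-test on the raw scores then flags a “significant” group difference that does not exist on the latent scale. An ordinal model (e.g., ordered probit with group-specific variances), as the authors recommend, avoids this false alarm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000
cutpoints = np.array([-0.5, 0.5, 1.5, 2.5])    # unequal thresholds -> 5-point scale

# Latent "attitudes": identical means, different spread
latent_a = rng.normal(loc=0.0, scale=1.0, size=n)
latent_b = rng.normal(loc=0.0, scale=2.0, size=n)

# Map latent values to ordinal 1-5 responses via the shared cutpoints
likert_a = np.searchsorted(cutpoints, latent_a) + 1
likert_b = np.searchsorted(cutpoints, latent_b) + 1

# Metric analysis of the ordinal scores: a spurious "group effect" on the means
t, p = stats.ttest_ind(likert_a, likert_b, equal_var=False)
print(f"mean A = {likert_a.mean():.2f}, mean B = {likert_b.mean():.2f}, p = {p:.2g}")
# The latent means are identical, yet the t-test on the 1-5 scores will
# almost certainly come out "significant" at this sample size.
```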
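Finally, a minimal simulation sketch of the regression-to-the-mean artifact in the nutrition example above (mine, with purely illustrative numbers): select people because their measured biomarker is high, and, because measurements are noisy, they will look “improved” at follow-up even without any intervention.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

true_level = rng.normal(loc=100, scale=10, size=n)      # stable true biomarker level
baseline   = true_level + rng.normal(scale=10, size=n)  # noisy measurement at time 1
follow_up  = true_level + rng.normal(scale=10, size=n)  # noisy measurement at time 2
                                                        # (note: no intervention at all)

# Select the "at risk" group: the highest 20% of *measured* baseline values
selected = baseline > np.quantile(baseline, 0.80)

print(f"baseline mean (selected):  {baseline[selected].mean():.1f}")
print(f"follow-up mean (selected): {follow_up[selected].mean():.1f}")
# The selected group drops by roughly 10 points without any treatment,
# simply because extreme measurements are extreme partly by chance.
```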