Brief psychology news 01/2019


January 2019 news from Clinical Psychology, Quantitative Psychology, Meta Psychology, and Open Science. For prior news, see the rubric Psychology News on this blog.

The subheaders below are only rough approximations, since many items fit multiple categories; I list each item only once, though. If you have suggestions on how to better organize this, please let me know. I share these items because I find them interesting, not necessarily because I endorse them.

Clinical

  1. Null-finding machine learning paper by Richard Dinga and colleagues entitled “Predicting the naturalistic course of depression from a wide range of clinical, psychological, and biological data: a machine learning approach”, showing that out of 81 variables, only baseline severity predicted depression 2 years later.
  2. New preprint by Bruno Verschuere et al.: “A plea for preregistration in personality disorders research: The case of psychopathy”.
  3. New paper by Joel Thomas & Paul Sharp in Clinical Psych Science: “A New Approach to Comprehensive Psychopathology Research That Relates Psychological and Biological Phenomena”.
  4. New JAMA Psychiatry paper on comorbidity in 6 million Danes over 84 million person-years. Includes a fantastic companion website with interactive visualizations. As Sacha Epskamp correctly pointed out: it’s sort of funny that they include inferential statistics and confidence intervals in their paper. THEY HAVE THE ENTIRE POPULATION ;)! I found it a bit disappointing that the authors only report pairwise relations between disorders: a beautiful example where network psychometrics could help to estimate and visualize dependencies in the data.
  5. The new paper by Amy Orben & Andrew Przybylski made a huge splash on (social) media, and rightfully so: it’s a massive multiverse analysis of the association between adolescent well-being and digital technology use (n = 355,358). Conclusion: “The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change.”
  6. Critical new piece by Jim van Os and colleagues in World Psychiatry: “The evidence‐based group‐level symptom‐reduction model as the organizing principle for mental health care: time for change?”

Methods

  1. New paper by Joost van Ginkel et al.: “Rebutting Existing Misconceptions About Multiple Imputation as a Method for Handling Missing Data”.
  2. Richard Morey argues that the claim “this study is underpowered”, offered without further information, is a poor critique.
  3. New preprint by a student of ours, Jill de Ron, on Berkson’s bias, a considerable issue for the psychological network literature in clinical psychology. Jill won the master’s thesis award for the paper!
  4. 2 interesting new papers on situations where at least 1 variable in a causal system is non-manipulable (e.g., biological sex); it has long been argued that such variables should not be included in causal systems, but that view is now changing. Paper 1: “Structural Causal Bandits with Non-manipulable Variables” by Lee & Bareinboim. Paper 2: “On the Interpretation of do(x)” by Pearl.
  5. New measurement paper by Kemper et al. entitled “Short Versus Long Scales in Clinical Assessment: Exploring the Trade-Off Between Resources Saved and Psychometric Quality Lost Using Two Measures of Obsessive–Compulsive Symptoms”.
  6. New preprint by Ian Hussey & Sean Hughes, entitled “Hidden invalidity among fifteen commonly used measures in social and personality psychology”; Ian and Sean provide a comprehensive assessment of structural validity across ~150,000 experimental sessions to investigate the psychometric properties of some of the most widely used self-report measures (k = 15) in social and personality psychology.
  7. Call for papers by the new APS journal AMPPS with the goal to provide a special section of tutorial manuscripts focused on the use of simulations in developing statistical intuitions.
  8. Workshop on EMA data and time-series network models by my old group in Leuven, Belgium, in March 2019. The Begijnhof is an unforgettable location, and Leuven is worth a visit.
  9. Jessica Flake and I have a new preprint online: “Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them”.

Meta

  1. New paper by Kevin Gross & Carl Bergstrom, “Contest models highlight inherent inefficiencies of scientific funding competitions”, is yet another source of evidence that a (partial) lottery may be a better way to award grants than the current system.
  2. The Generalizer, an online resource for researchers doing empirical work with student data, tests whether the sample under study is representative of the population.

Open science

  1. Blog post by Veronika Cheplygina: “How I Fail in Open Science”.
  2. The whole editorial board of the Elsevier Journal of Informetrics resigned over a dispute regarding open access policies and started a new open access journal. Relatedly, 10 editors of the journal Nutrients resigned after pressure to publish mediocre papers.
  3. The first CREP paper was published in Collabra; CREP, the Collaborative Replications and Education Project, facilitates large-scale replication studies conducted by students as part of their courses.
  4. Neuroscience replication crisis preprint by Hong, Wager & Woo, entitled “False-positive neuroimaging: Undisclosed flexibility in testing spatial hypotheses allows presenting anything as a replicated finding”; of 135 papers that claimed to replicate effects, more than 50% should probably not be considered replications because the peak coordinates were more than 15 mm apart.
  5. The paper introducing the collaborative effort Psychological Science Accelerator (“Advancing Psychology Through a Distributed Collaborative Network”) was published recently.

(Totally random news, but hey, this is a list of my personal interests: a neural-network-based artificial intelligence won 10:0 against professional players in the computer game StarCraft II. This is a breakthrough: high-paced computer games that involve several hundred decisions per minute, base building, and the handling of up to 80 units simultaneously have long been considered the next frontier for artificial intelligence challenges.)
