Brief psychology news 03/2019

March 2019 news from Clinical Psychology, Quantitative Psychology, Meta Psychology, and Open Science. For prior news, see the rubric Psychology News on this blog.

Clinical

  1. Cristea et al. performed a systematic review and meta-analysis of biological markers evaluated in randomized trials of psychological treatments for depression. They find that the beneficial effects of psychotherapy for depression symptoms do not translate to biomarkers. From the first author: “We believe our results provide a much-needed reality check to the enthusiasm & hype surrounding studies on biological consequences of #psychotherapy, by showing that by the highest standards of evidence- #RCTs, this research is still inconclusive at least for #depression.”
  2. Tackett et al. provide some examples of work in Jennifer’s lab on how to leverage the Open Science Framework in clinical psychological assessment research.
  3. Tackett et al. on “Psychology’s Replication Crisis and Clinical Psychological Science”.
  4. Border et al. conducted a multiverse analysis and find no consistent evidence for relations between depression and 1) candidate genes or 2) candidate gene x environment interactions. I summarize the paper and its implications in a blog post.
  5. Related, Reimers et al. discuss whether or not the genes implicated by GWAS form a coherent story on their own and thus could in principle lead to insight into the biological mechanisms underlying the trait or disorder.
  6. An editorial in JAMA Psychiatry calls out the underpowered clinical trials in psychiatry.
  7. Samea et al. show that while many individual brain imaging studies have reported local abnormalities in ADHD, a meta-analysis of almost 100 studies finds no significant convergence among findings.
  8. Horowitz & Taylor summarize evidence that SSRIs can lead to severe withdrawal symptoms, and show that they should be tapered hyperbolically, not linearly.
  9. Gobbi et al. with a systematic review linking adolescent cannabis use to depression and suicidality in early adulthood. ORs were 1.4 for depression, 1.2 for anxiety (not significant), 1.5 for suicidal ideation, and 3.5 for suicide attempts. I haven’t checked thoroughly, but I imagine it would be difficult to control for all relevant covariates here.
  10. Dinga et al. tried to replicate a prior study by Drysdale et al. (2017) who found so-called ‘biotypes’ for depression; the replication was unsuccessful: “We argue that the methods used in Drysdale et al. (2017) do not provide convincing evidence for biotypes of depression”
  11. Bastiaansen et al. conducted a many-analysts project in which they sent the time-series data of one patient to 12 teams, and asked them to analyze the data and provide clinical advice. The teams dealt with the data very differently; fit very different models; and came to very different conclusions regarding the treatment of the patient.
  12. Preprint: Brandes et al. estimated the P-factor (psychopathology) and the N-factor (neuroticism) in a sample of 695 children. They estimate a dual bifactor model with 8 latent variables (including method factors) and 22 indicators, and find the correlation between N and P to be 0.81. I wonder if these latent variables are meaningfully different psychological phenomena. I personally also find bifactor models really difficult to interpret (would they not increase the relation between N and P, because all other factors are estimated to be orthogonal?), and struggle to relate them to underlying theories, but that’s on me, not the authors.
  13. Preprint: Collison et al. with a non-replication of the 3-factor model of psychopathy (the “Triarchic Psychopathy Measure”); they found that 5 to 7 factors should be extracted to properly describe the data. Figure 1 is cool: a pyramid visualization of the 1- through 7-factor solutions, showing how the factors split.
  14. Preprint: Sakaluk et al. wrote up a meta-scientific review, evaluating the evidential value of empirically supported psychological treatments. From browsing the paper briefly, the results look very concerning. Money quote: “This review suggests that although the underlying evidence for a small number of empirically supported therapies is consistently strong across a range of metrics, the evidence is mixed or consistently weak for many, including some classified by Division 12 of the APA as ‘Strong’.”
  15. Ketamine was approved by the FDA as an antidepressant. Just a few weeks back, I reread the literature and summarized my skepticism together with Lucy Robinson in this blog post: the literature is full of very small studies that often lack control groups; there is publication bias and outcome switching; and the dependent variables are usually measured 2 to 7 days after giving patients ketamine. It’s irresponsible to talk about “treatment” when offering a one-week biological intervention to patients, most of whom are depressed because they are stuck in an abusive marriage, or cannot find a job, or hate their job, or struggle with social isolation, and so forth. A nasal spray will not fix that.

Methods

  1. Preprint: Anvari & Lakens introduce 2 methods from clinical/health research and show how they can be used in basic psychology research to determine the smallest noticeable change or difference, and thus the smallest effect size of interest for research questions about noticeable change (see the sketch after this list).
  2. Preprint: Christensen et al. offer a new psychometric network perspective on the measurement and assessment of personality traits.
  3. Preprint: Piccirillo & Rodebaugh on “Foundations of Idiographic Methods in Psychology and Applications for Psychotherapy”; related, a preprint by Piccirillo et al. on “A Clinician’s Primer for Idiographic Research: Considerations and Recommendations”.
  4. Latent variable models have been around a while, but their power is still not well understood; this is obvious from how sample sizes for latent variable models are justified (everybody uses different guidelines, and there are at least a dozen of them), and a priori power analyses are rare. In a new paper, Kretzschmar & Gignac show that the sample sizes required to reliably estimate correlations between latent variables are larger than previously assumed. From what I can tell, the authors only simulated a scenario with 2 latent variables and 3 indicators each; I assume power would be considerably lower in more typical settings with more latent variables (not sure about more indicators: they provide more information about the latent variable, but also increase the number of parameters that need to be estimated). See the simulation sketch after this list.
  5. Simms et al. investigate whether there is a “psychometrically optimal” number of response options for psychological scales, by querying 1,358 undergraduates who were assigned to answer the same scale with between 2 and 11 response options, or in a visual analog condition.
  6. Clark & Watson published a follow-up paper to their highly cited 1995 paper, entitled “Constructing validity: New developments in creating objective measuring instruments”.
  7. Amrhein et al. published a paper calling to retire significance testing. There have been some interesting discussions about the paper online, although I found the paper wasn’t always fairly presented (e.g. in PsychMAD, the paper was harshly attacked for its proposed ban on p-values, which contrasts with “We are not calling for a ban on P values” from the paper itself)
  8. Insightful cheat-sheet, showing that most commonly used statistical tests are linear models (see the short example after this list).
  9. List of RStudio cheat-sheets.
  10. VanderWeele wrote up a short primer on “Principles of confounder selection”; would love to hear your thoughts on this, actually.
  11. Aydin et al. looked at 275 papers using factor models published in 4 Turkish Education/Psychology journals. They found that in >50% of the studies, the sampling method was either unclear or convenience sampling; that information on how missing data were dealt with was absent from 76% of the articles; that half of the models did not describe the data well; and that nearly no studies reported taking into account the item types (I assume this means e.g. ordinal vs. continuous items, e.g. via MLR or WLSMV estimators).
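
To make item 1 concrete, here is a minimal sketch of an anchor-based approach to determining a smallest effect size of interest, in the spirit of the Anvari & Lakens preprint. The data, variable names, and group labels are made up for illustration; this is not the authors’ own code or their exact procedure.

    # Minimal sketch (R): anchor-based smallest effect size of interest.
    # Hypothetical data: 'change' is follow-up minus baseline score on some
    # scale; 'transition' is a global rating of subjectively perceived change.
    set.seed(1)
    d <- data.frame(
      change     = rnorm(300, mean = rep(c(0, 2, 5), each = 100), sd = 3),
      transition = rep(c("no change", "slightly better", "much better"),
                       each = 100)
    )
    # SESOI: mean change among people who just barely noticed an improvement,
    # relative to those who noticed none
    sesoi <- with(d, mean(change[transition == "slightly better"]) -
                     mean(change[transition == "no change"]))
    sesoi  # smallest change on the scale that people subjectively notice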
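
Regarding item 4: a quick way to get an intuition for why latent variable correlations need large samples is a Monte Carlo simulation. The sketch below, in R with the lavaan package, mimics the 2-factor, 3-indicators-each scenario; the population values (loadings of .7, latent correlation of .3) are my own assumptions, not the values used by Kretzschmar & Gignac.

    # Monte Carlo power sketch (R/lavaan) for the correlation between two
    # latent variables; all population values below are assumptions.
    library(lavaan)
    pop_model <- '
      f1 =~ 0.7*x1 + 0.7*x2 + 0.7*x3
      f2 =~ 0.7*y1 + 0.7*y2 + 0.7*y3
      f1 ~~ 0.3*f2
    '
    fit_model <- '
      f1 =~ x1 + x2 + x3
      f2 =~ y1 + y2 + y3
    '
    power_sim <- function(n, n_reps = 500) {
      sig <- replicate(n_reps, {
        d   <- simulateData(pop_model, sample.nobs = n)
        fit <- cfa(fit_model, data = d, std.lv = TRUE)
        est <- subset(parameterEstimates(fit),
                      lhs == "f1" & op == "~~" & rhs == "f2")
        est$pvalue < .05
      })
      mean(sig, na.rm = TRUE)  # proportion of significant latent correlations
    }
    power_sim(200)  # estimated power at N = 200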
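
And to illustrate the point of item 8: the independent-samples t-test, for example, is just a linear model with one binary predictor. A short R demonstration with toy data:

    # A t-test is a linear model with a binary predictor: the group
    # coefficient in lm() is the mean difference tested by t.test().
    set.seed(1)
    g <- rep(c("a", "b"), each = 50)
    y <- rnorm(100) + (g == "b") * 0.5
    t.test(y ~ g, var.equal = TRUE)  # classic two-sample t-test
    summary(lm(y ~ g))               # same t statistic and p-value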

Meta

  1. Sick of bad posters? Here is some good poster design for academic conferences.
  2. Editor who handled >1000 Nature submissions gives some insightful tips on how to write and format papers (for climate papers, but most of it translates well to other disciplines).
  3. A new paper format is out: an interactive paper where you can look at multiverse outcomes in the paper itself. Scroll down to page 2; on the right side you will see blue text (indicating whether the analyses are corrected or uncorrected for multiple testing). You can change this in the paper, interactively. Later on, you can look at different visualizations, depending on the CI you set. SO. FREAKING. COOL.
  4. Mengel et al. exploited a quasi-experimental design in which 19,952 students were randomly allocated to female or male instructors; women received systematically lower teaching evaluations.
  5. Metascience meeting at Stanford, September 5-8, 2019.

Open science

  1. In March, the Interacting Minds Center at Aarhus University hosted an Open Science workshop. All 10 talks were recorded, and I put together a thread on Twitter with all talks, brief summaries, and links to presentations & slides. You can also find all talks & materials on the website of the Center.
  2. An editorial in Nature Human Behaviour highlights the importance of publishing null-findings.
  3. Hunt on “The life-changing magic of sharing your data”; sadly paywalled.
  4. Cassie Brandes put together a list on Twitter of folks interested in open science & clinical psychology. Let her know if you want to be added, and follow each other to make sure we get an open science network in clinical psychology going. You can also subscribe to the list here.
  5. Related, we initiated a mailing list for open science topics in clinical psychology and related disciplines; you can sign up here! We hope this is a good way to get discussions going, in addition to Twitter.
  6. Interesting discussion whether strictly adhering to open science practices can also lead to disadvantages on the job market. Money quote: “How is it fair that in a grant/job application I will only be defined by how many studies I have already published? I wouldn’t change anything about my work – I am very proud of it. I am only anxious that I cannot provide the required ‘evidence’ for my abilities at this stage.”

Favorite paper of the month

“Here, we show that robots socially integrated into animal groups of honeybees and zebrafish, each one located in a different city, allowing these two species to interact. This interspecific information transfer is demonstrated by collective decisions that emerge between the two autonomous robotic systems and the two animal groups. The robots enable this biohybrid system to function at any distance and operates in water and air with multiple sensorimotor properties across species barriers and ecosystems.” (Source)

Someone please turn this into an apocalyptic horror story.

“And you see kids, the 3 species rose together — bees, zebrafish, & minirobots. Using computational power and combined swarm intelligence, they were able to forecast the stock market, crashing the economic system within a few hours after the experiment had started. And you all know what followed … ”


Cannot find the full text of a paper I’m linking to? I describe a way around the problem here; keyword ‘sci-hub’.

