Summary of my academic 2017


Below you’ll find a summary of my 2017 academic year, inspired by a recent blog post by Alexander Etz, who concluded:

I don’t see these kinds of period recap posts from my blogging colleagues very often. I’m not sure why. Maybe posts like this could feel like bragging about all the good stuff that’s happened to us, so it feels a little awkward. But even so, so what! Presumably someone reads and follows this log because they want to know what I’m thinking and what I’m doing, and this kind of post is a good way to keep them updated.

This motivated me to do the same. Additionally, academia can be extremely stressful at times, so it’s nice to occasionally sit down and take the time to celebrate the things that do go well.

This post contains information about our papers that were published or accepted in 2017 [1], preprints, workshops and summer schools, lectures and talks, reviews I conducted, my collaborator network, the publication of several datasets and some other open science practices, and many blog posts on eiko-fried.com and psych-networks.com. It also contains a few things that didn’t go well, and concludes with some goals for 2018.

1. Publications

2017 resulted in 19 accepted or published peer-reviewed papers/editorials/letters in journals across various disciplines (general psych, clinical psych, methodology, psychiatry), such as Perspectives on Psych Science, Psych Methods, World Psychiatry, Schizophrenia Bulletin, Clinical Psych Science, and Psych Medicine. And while hard work and little sleep certainly contributed to making this a successful year, the success is largely due to the brilliant academic environment I have the honor of working in: fantastic colleagues, an incredibly supportive boss and department, and tons of luck.

I’m first author on 8 of the 19 papers [2], last author on 5, and middle author on 6 — this is also the order I will present the papers in.

Published papers

  1. Fried, E. I., Eidhof, M. B., Palic, S., Costantini, G., Huisman-van Diujk, H. M., Bockting, C. L. M., Engelhard, I., Armour, C., Nielsen, A. B. S., Karstoft, K.-I. (accepted). Replicability and generalizability of PTSD networks: A cross-cultural multisite study of PTSD symptoms in four trauma patient samples. Clinical Psychological Science. Preprint.
    Introduction. The network approach to psychopathology understands disorders like Posttraumatic Stress Disorder (PTSD) as networks of mutually interacting symptoms. The prior literature is limited in three aspects. First, studies have estimated networks in one sample only, leaving open the crucial question of replicability and generalizability across populations. Second, many prior studies estimated networks in small samples that may not be sufficiently powered for reliable estimation. Third, prior PTSD network papers examined community or subclinical samples, rendering the PTSD network structure in clinical samples unknown. In this cross-cultural multisite study, we estimate and compare networks of PTSD symptoms in four heterogeneous populations of trauma patients with different trauma-types, including civilian-, refugee-, combat-, post-war off-spring-, and professional duty-related trauma.
    Methods. We jointly estimated state-of-the-art regularized partial correlation networks across four datasets (total N=2,782), and compared the resulting networks on various metrics such as network structure, centrality, and predictability.
    Results. Networks were not exactly identical, but considerable similarities among the four networks emerged, with moderate to high correlations between network structures (0.62 to 0.74) and centrality estimates (0.63 to 0.75); only few edges differed significantly across networks.
    Conclusion. Despite differences in culture, trauma-type and severity of the four samples, the networks showed substantial similarities, suggesting that PTSD symptoms may be associated in similar ways. We discuss implications for generalizability and replicability. A step-by-step tutorial is available in the supplementary materials, including all analytic syntax and all relevant data to make the paper fully reproducible.

  2. Fried, E. I. & Cramer, A. O. J. (2017). Moving forward: challenges and directions for psychopathological network theory and methodology. Perspectives on Psychological Science. PDF.
    Abstract: Since the introduction of mental disorders as networks of causally interacting symptoms, this novel framework has received considerable attention. The past years have resulted in over 40 scientific publications and numerous conference symposia and workshops. Now is an excellent moment to take stock of the network approach: what are its most fundamental challenges, and what are potential ways forward in addressing them? After a brief conceptual introduction, we first discuss challenges to network theory: (1) What is the validity of the network approach beyond some commonly investigated disorders such as major depression? (2) How do we best define psychopathological networks and their constituent elements? (3) And how can we gain a better understanding of the causal nature and real-life underpinnings of associations among symptoms? Next, after a short technical introduction to network modeling, we discuss challenges to network methodology: (4) Heterogeneity of samples studied with network analytic models; and (5) a lurking replicability crisis in this strongly data-driven and exploratory field. Addressing these challenges may propel the network approach from its adolescence into adulthood, and promises advances in understanding psychopathology both at the nomothetic and idiographic level.

  3. Fried, E. I.*, van Borkulo, C. D.*, Cramer, A. O. J., Boschloo, L., Schoevers, R. A., Borsboom, D. (2017). Mental disorders as networks of problems: a review of recent insights. Social Psychiatry and Psychiatric Epidemiology. PDF.
    Abstract: The network perspective on psychopathology understands mental disorders as complex networks of interacting symptoms. Despite its recent debut, with conceptual foundations in 2008 and empirical foundations in 2010, the framework has received considerable attention and recognition in the last years. This paper provides a review of all empirical network studies published between 2010 and 2016 and discusses them according to three main themes: comorbidity, prediction, and clinical intervention. Pertaining to comorbidity, the network approach provides a powerful new framework to explain why certain disorders may co-occur more often than others. For prediction, studies have consistently found that symptom networks of people with mental disorders show different characteristics than that of healthy individuals, and preliminary evidence suggests that networks of healthy people show early warning signals before shifting into disordered states. For intervention, centrality—a metric that measures how connected and clinically relevant a symptom is in a network—is the most commonly studied topic; and numerous studies have suggested that therapeutic interventions aimed at the most central symptoms may offer novel powerful clinical interventions. We conclude by sketching future directions for the network approach pertaining to both clinical and methodological research, and conclude that network analysis has yielded important insights and may provide an important inroad towards personalized medicine by investigating the network structures of individual patients.

  4. Fried, E. I. (2017). What are psychological constructs? On the nature and statistical modelling of emotions, intelligence, personality traits and mental disorders. Health Psychology Review. PDF.
    Abstract: Many scholars have raised two related questions: what are psychological constructs (PCs) such as cognitions, emotions, attitudes, personality characteristics and intelligence? And how are they best modelled statistically? This commentary provides (1) an overview of common theories and statistical models, (2) connects these two domains and (3) discusses how the recently proposed framework pragmatic nihilism (Peters & Crutzen, 2017) fits in. For this overview, I use an inclusive definition of the term ‘psychological construct’ that also encompasses mental disorders, similar to Cronbach and Meehl (1955). This is consistent with recent efforts such as the research domain criteria (RDoC) that aim to refine such constructs (Cuthbert & Kozak, 2013), and is relevant given many recent discussions on the nature of psychopathology.

  5. Fried, E. I. (2017). The 52 symptoms of major depression: lack of content overlap among seven common depression scales. Journal of Affective Disorders. PDF.
    Background: Depression severity is assessed in numerous research disciplines, ranging from the social sciences to genetics, and used as a dependent variable, predictor, covariate, or to enroll participants. The routine practice is to assess depression severity with one particular depression scale, and draw conclusions about depression in general, relying on the assumption that scales are interchangeable measures of depression. The present paper investigates to which degree 7 common depression scales differ in their item content and generalizability.
    Methods: A content analysis is carried out to determine symptom overlap among the 7 scales via the Jaccard index (0=no overlap, 1=full overlap). Per scale, rates of idiosyncratic symptoms, and rates of specific vs. compound symptoms, are computed.
    Results: The 7 instruments encompass 52 disparate symptoms. Mean overlap among all scales is low (0.36), mean overlap of each scale with all others ranges from 0.27–0.40, overlap among individual scales from 0.26–0.61. Symptoms feature across a mean of 3 scales, 40% of the symptoms appear in only a single scale, 12% across all instruments. Scales differ regarding their rates of idiosyncratic symptoms (0%–33%) and compound symptoms (22%–90%).
    Limitations: Future studies analyzing more and different scales will be required to obtain a better estimate of the number of depression symptoms; the present content analysis was carried out conservatively and likely underestimates heterogeneity across the 7 scales.
    Conclusion: The substantial heterogeneity of the depressive syndrome and low overlap among scales may lead to research results idiosyncratic to particular scales used, posing a threat to the replicability and generalizability of depression research. Implications and future research opportunities are discussed.

  6. Fried, E. I. (2017). Moving forward: How depression heterogeneity hinders progress in treatment and research. Expert Review of Neurotherapeutics. PDF.
    Abstract: Has depression research in the last two decades led to major improvements of clinical care for patients? Pharmacological innovations have not resulted in considerable progress, and although the biology of Major Depression (MD) is among the most-studied and best-funded topics, answers remain elusive. Why is that? The current editorial argues that depression heterogeneity is at the heart of the slow progress, and suggests that a novel research framework may provide an inroad to new insights. Most researchers sum disparate depression symptoms such as sadness, insomnia, and concentration problems to one sumscore representing depression severity, and use thresholds to diagnose MD. This leads to highly heterogeneous depressed samples in which patients often have few symptoms in common. Recent work suggests that this unaddressed heterogeneity explains the lack of research progress. A novel research framework—Depression Symptomics—may offer a way forward by focusing on the study of individual depression symptoms, their causal interactions, and by paying attention to differences among patients diagnosed with MD.

  7. Armour, C.*, Fried, E. I.*, Deserno, M. K., Tsai, J., & Pietrzak, R. H. (2017). A network analysis of DSM-5 posttraumatic stress disorder symptoms and correlates in U.S. military veterans. Journal of Anxiety Disorders. PDF.
    Objective: Recent developments in psychometrics enable the application of network models to analyze psychological disorders, such as PTSD. Instead of understanding symptoms as indicators of an underlying common cause, this approach suggests symptoms co-occur in syndromes due to causal interactions. The current study has two goals: (1) examine the network structure among the 20 DSM-5 PTSD symptoms, and (2) incorporate clinically relevant variables to the network to investigate whether PTSD symptoms exhibit differential relationships with suicidal ideation, depression, anxiety, physical quality of life (QoL), mental QoL, age, and sex. Method: We utilized a nationally representative U.S. military veteran’s sample; and analyzed the data from a subsample of 221 veterans who reported clinically significant DSM-5 PTSD symptoms. Networks were estimated using Gaussian graphical models and the lasso regularization. Results: The 20-item DSM-5 PTSD network revealed that symptoms were positively connected within the network. The most central symptoms were negative trauma-related emotions, flashbacks, detachment, and physiological cue reactivity. Especially strong connections emerged between nightmares and flashbacks; blame of self or others and negative trauma-related emotions, detachment and restricted affect; and hypervigilance and exaggerated startle response. Incorporation of clinically relevant covariates into the network revealed paths between self-destructive behavior and suicidal ideation; concentration difficulties and anxiety, depression, and mental QoL; and depression and restricted affect. Conclusion: These results demonstrate the utility of a network approach in modeling the structure of DSM-5 PTSD symptoms, and suggest differential associations between specific DSM-5 PTSD symptoms and clinical outcomes in trauma survivors. Implications of these results for informing the assessment and treatment of this disorder are discussed.

  8. Santos, H.*, Fried, E. I.*, Asafu-Adjei, J., & Ruiz, J. (2017). Network Structure of Perinatal Depressive Symptoms in Latinas: Relationship to Stress-Related and Reproductive Biomarkers. Research in Nursing & Health. PDF.
    Abstract: Emerging evidence shows that mood disorders can be plausibly conceptualized as networks of causally interacting symptoms rather than as latent variables where symptoms are passive indicators. Here we introduce an empirical application of network analysis to nursing research by estimating the network structure of 20 perinatal depressive (PND) symptoms. Further, we conduct two proof-of-principle analyses: incorporating stress-related and reproductive hormone variables into the network, and comparing the network structure of PND symptoms between healthy and depressed women. We analyzed a cross-sectional sample of 515 Latina women at the second trimester of pregnancy; networks were estimated using the Gaussian Graphical Model via lasso regularization. The main analysis yielded five strong symptom-to-symptom associations among PND symptoms (e.g., lack of happiness—lack of joy), and five symptoms of considerable clinical importance (i.e. high centrality) in the network. In exploring the relationship of PND symptoms with stress-related and reproductive biomarkers (proof-of-principle 1), biomarkers had few and weak relationships with PND symptoms. A comparison of healthy and depressed networks (proof-of-principle 2) showed that depressed participants have an overall more connected network, but that there is no difference regarding the type of relationships in the network. Our study provides preliminary evidence that PND symptoms can be conceived of as a network of interacting symptoms, and hope they will encourage future network studies in the realm of PND research, including investigations of symptom-to-biomarker mechanisms and interactions related to perinatal depression. Future directions and challenges are discussed.

  9. Epskamp, S. & Fried, E. I. (accepted). A Tutorial on Regularized Partial Correlation Networks. Psychological Methods. Preprint.
    Abstract: Recent years have seen an emergence of network modeling for psychological behaviors, moods and attitudes. In this framework, psychological variables are understood to directly interact with each other rather than being caused by an unobserved latent entity. Here we introduce the reader to the most popularly used network model for estimating such psychological networks: the partial correlation network. We describe how regularization techniques can be used to efficiently estimate a parsimonious and interpretable network structure on cross-sectional psychological data. We demonstrate the method in an empirical example on post-traumatic stress disorder data, showing the effect of the hyperparameter that needs to be manually set by the researcher. In addition, we list several common problems and questions arising in the estimation of regularized partial correlation networks.

  10. Miloyan, B. & Fried, E. I. (2017). A reassessment of the relationship between depression and all-cause mortality in 3,604,005 participants from 293 studies. World Psychiatry. PDF.
    Abstract: The objective of this study is to explain inconsistencies in the relationship between depression and all-cause mortality by performing a reassessment of the included studies of previous systematic reviews. We assessed study-level methodological variables with a focus on sample size and follow-up period, measurement and classification of depression, and model adjustment. We included the constituent studies of fifteen systematic reviews on depression and mortality, yielding 488 articles after the removal of duplicates. 333 studies were extracted, 40 of which used data that overlapped with other included studies. We included 313 estimates from 293 articles in the meta-analysis with a total sample of 3,604,005 participants and over 417,901 deaths. We identified a pronounced publication bias favoring large, positive associations in imprecise studies. Several factors moderated the relationship between depression and mortality. Most importantly, the 20 estimates adjusting for at least one comorbid mental condition (Pooled Effect: 1.08; 95% CI: 0.98-1.18), and the fraction of 8 of those estimates also adjusting for health variables (e.g., smoking, alcohol use, or physical inactivity; Pooled Effect: 1.04; 95% CI: 0.87-1.21), reported considerably smaller associations than the 204 unadjusted estimates (Pooled Effect: 1.32; 95% CI: 1.28-1.36). The sizable relationship of depression and mortality reported in previous systematic reviews is largely based on low-quality studies; controlling for important covariates attenuates the association considerably. Higher quality studies are needed based on large community samples, extensive follow-up, adjustment for health behaviors and mental disorders, and time-to-event outcomes based on survival analysis methodology.

  11. Haslbeck, J. M. B. & Fried, E. I. (2017). How Predictable are Symptoms in Psychopathological Networks? A Reanalysis of 18 Published Datasets. Psychological Medicine. PDF.
    Background. Network analyses on psychopathological data focus on the network structure and its derivatives such as node centrality. One conclusion one can draw from centrality measures is that the node with the highest centrality is likely to be the node that is determined most by its neighboring nodes. However, centrality is a relative measure: knowing that a node is highly central gives no information about the extent to which it is determined by its neighbors. Here we provide an absolute measure of determination (or controllability) of a node – its predictability. We introduce predictability, estimate the predictability of all nodes in 18 prior empirical network papers on psychopathology, and statistically relate it to centrality.
    Methods. We carried out a literature review and collected 25 datasets from 18 published papers in the field (several mood and anxiety disorders, substance abuse, psychosis, autism, and transdiagnostic data). We fit state-of-the-art network models to all datasets, and computed the predictability of all nodes.
    Results. Predictability was unrelated to sample size, moderately high in most symptom networks, and differed considerably both within and between datasets. Predictability was higher in community than clinical samples, highest for mood and anxiety disorders, and lowest for psychosis.
    Conclusions. Predictability is an important additional characterization of symptom networks because it gives an absolute measure of the controllability of each node. It allows conclusions about how self-determined a symptom network is, and may help to inform intervention strategies. Limitations of predictability along with future directions are discussed.

  12. Epskamp, S., Borsboom, D., Fried, E. I. (2017). Estimating psychological networks and their accuracy: a tutorial paper. Behavior Research Methods. PDF.
    Abstract: The usage of psychological networks that conceptualize psychological behavior as a complex interplay of psychological and other components has gained increasing popularity in various fields of psychology. While prior publications have tackled the topics of estimating and interpreting such networks, little work has been conducted to check how accurate (i.e., prone to sampling variation) networks are estimated, and how stable (i.e., interpretation remains similar with less observations) inferences from the network structure (such as centrality indices) are. In this tutorial paper, we aim to introduce the reader to this field and tackle the problem of accuracy under sampling variation. We first introduce the current state-of-the-art of network estimation. Second, we provide a rationale why researchers should investigate the accuracy of psychological networks. Third, we describe how bootstrap routines can be used to (A) assess the accuracy of estimated network connections, (B) investigate the stability of centrality indices, and (C) test whether network connections and centrality estimates for different variables differ from each other. We introduce two novel statistical methods: for (B) the correlation stability coefficient, and for (C) the bootstrapped difference test for edge-weights and centrality indices. We conducted and present simulation studies to assess the performance of both methods. Finally, we developed the free R-package bootnet that allows for estimating psychological networks in a generalized framework in addition to the proposed bootstrap methods. We showcase bootnet in a tutorial, accompanied by R syntax, in which we analyze a dataset of 359 women with posttraumatic stress disorder available online.

  13. Kendler, K. S., Aggen, S. H., Flint, J., Borsboom, D., & Fried, E. I. (2017). The Centrality of DSM and non-DSM Depressive Symptoms in Han Chinese Women with Major Depression. Journal of Affective Disorders. PDF.
    Introduction: We compared DSM-IV criteria for major depression (MD) with clinically selected non-DSM criteria in their ability to represent clinical features of depression.
    Method: We conducted network analyses of 19 DSM and non-DSM symptoms of MD assessed at personal interview in 5,952 Han Chinese women meeting DSM-IV criteria for recurrent MD. We estimated an Ising model (the state-of-the-art network model for binary data), compared the centrality (interconnectedness) of DSM-IV and non-DSM symptoms, and investigated the community structure (symptoms strongly clustered together).
    Results: The DSM and non-DSM criteria were intermingled within the same symptom network. In both the DSM-IV and non-DSM criteria sets, some symptoms were central (highly interconnected) while others were more peripheral. The mean centrality of the DSM and non-DSM criteria sets did not significantly differ. In at least two cases, non-DSM criteria were more central than symptomatically related DSM criteria: lowered libido vs. sleep and appetite changes, and hopelessness versus worthlessness. The overall network had three sub-clusters reflecting neurovegetative/mood symptoms, cognitive changes and anxiety/irritability.
    Limitations: The sample were severely ill Han Chinese females limiting generalizability.
    Conclusions: Consistent with prior historical reviews, our results suggest that the DSM-IV criteria for MD reflect one possible sub-set of a larger pool of plausible depressive symptoms and signs. While the DSM criteria on average perform well, they are not unique and may not be optimal in their ability to describe the depressive syndrome.
  14. Murphy, J., McBride, O., Fried, E. I., & Shevlin, M. (2017). Distress, impairment and the extended psychosis phenotype: A network analysis of psychotic experiences in a US general population sample. Schizophrenia Bulletin. PDF. Preprint.
    Abstract: It has been proposed that subclinical psychotic experiences (PEs) may causally impact on each other over time and engage with one another in patterns of mutual reinforcement and feedback. This subclinical network of experiences in turn may facilitate the onset of psychotic disorder. PEs however are not inherently distressing, nor do they inevitably lead to impairment. The question arises therefore, whether non-distressing PEs, distressing PEs, or both, meaningfully inform an extended psychosis phenotype. The current study first aimed to exploit valuable ordinal data that captured the absence, occurrence and associated impairment of PEs in the general population to construct a general population based severity network of PEs. The study then aimed to partition the available ordinal data into two sets of binary data to test whether an occurrence network comprised of PE data denoting absence (coded 0) and occurrence/impairment (coded 1) was comparable to an impairment network comprised of binary PE data denoting absence/occurrence (coded 0) and impairment (coded 1). Networks were constructed using state-of-the-art regularized pairwise Markov Random Fields (PMRF). The severity network revealed strong interconnectivity between PEs and nodes denoting paranoia were among the most central in the network. The binary PMRF impairment network structure was similar to the occurrence network, however the impairment network was characterised by significantly stronger PE interconnectivity. The findings may help researchers and clinicians to consider and determine how, when and why an individual might transition from experiences that are non-distressing to experiences that are more commonly characteristic of psychosis symptomology in clinical settings.
  15. Dejonckheere, E., Bastian, B., Fried, E. I., Murphy, S., & Kuppens, P. (2017). Perceiving social pressure not to feel negative predicts depressive symptoms in daily life. Depression and Anxiety. PDF.
    Background: Western societies often overemphasize the pursuit of happiness, and regard negative feelings such as sadness or anxiety as maladaptive and unwanted. Despite this emphasis on happiness, the amount of people suffering from depressive complaints is remarkably high. To explain this apparent paradox, we examined whether experiencing social pressure not to feel sad or anxious could in fact contribute to depressive symptoms.
    Methods: A sample of individuals (n = 112) with elevated depression scores (Patient Health Questionnaire [PHQ-9] ≥ 10) took part in an online daily diary study in which they rated their depressive symptoms and perceived social pressure not to feel depressed or anxious for 30 consecutive days. Using multilevel VAR models, we investigated the temporal relation between this perceived social pressure and depressive symptoms to determine directionality.
    Results: Primary analyses consistently indicated that experiencing social pressure predicts increases in both overall severity scores and most individual symptoms of depression, but not vice versa. A set of secondary analyses, in which we adopted a network perspective on depression, confirmed these findings. Using this approach, centrality analysis revealed that perceived social pressure not to feel negative plays an instigating role in depression, reflected by the high out- and low instrength centrality of this pressure in the various depression networks.
    Conclusions: Together, these findings indicate how perceived societal norms may contribute to depression, hinting at a possible malignant consequence of society’s denouncement of negative emotions. Clinical implications are discussed.

  16. Borsboom, D., Fried, E. I., Epskamp, S., Waldorp, L. J., van Borkulo, C. D., van der Maas, H. L. J., & Cramer, A. O. J. (2017). False alarm? A comprehensive reanalysis of “evidence that psychopathology symptom networks have limited replicability” by Forbes, Wright, Markon, and Krueger (2017). Journal of Abnormal Psychology, 126(7). PDF
  17. Heino, M. T. J., Fried, E. I., & LeBel, E. P. (2017). Commentary: Reproducibility in Psychological Science: When Do Psychological Phenomena Exist? Frontiers in Psychology, 8(1004). PDF.
  18. Schweren, L., van Borkulo, C. D., Fried, E. I., & Goodyer, I. M. (accepted). Assessment of Symptom Network Density as a Prognostic Marker of Treatment Response in Adolescent Depression. JAMA Psychiatry. PDF.
  19. van Loo, H.M., van Borkulo, C. D., Peterson, R.E., Fried, E.I., Aggen, S.H., Borsboom, D., Kendler, K.S. (2017). Robust symptom networks in recurrent major depression across different levels of genetic and environmental risk. Journal of Affective Disorders. PDF.

Some people asked me how I got so much work done this year. As I said above, it’s largely the supportive environment here, the fact that I don’t sleep much, and that I have very little teaching or administrative duties. And then, of course, the fact that we are legion ;) …

2. Papers submitted or under revision

We have a few papers under revision, but this is the one I’m most excited about, because I got to fit network models to a dataset of 21,000 people [3].

  1. Fonseca-Pedrero, E., Ortuño-Sierra, J., Debbané, M., Chan, R. C. K., Cicero, D. C., Zhang, L. C., … Fried, E. I. (submitted). The network structure of schizotypal personality traits. Schizophrenia Bulletin. [Schizophrenia Bulletin did not allow us to post the preprint online, unfortunately]
    Abstract: The main goal was to examine, for the first time, the empirical network structure of schizotypal personality. Moreover, as a secondary analysis, we explored whether network structures differed across gender and cultures (North America vs. China). The study was conducted in a sample of 27,001 participants from 12 countries and 21 sites. The mean age of participants was 22.12 years (SD=6.28) and 37.5% (n=10,126) were males. The Schizotypal Personality Questionnaire (SPQ) was used to assess 74 self-report schizotypal items. We used the Ising model, a state-of-the-art network model for binary data, to estimate the conditional dependence relations among items. The resulting network showed item relations not only within, but also across the nine schizotypal domains. Men scored slightly higher on the sum-score of the 74 SPQ items than women (p=.02), although the symptom profiles were highly similar (r=0.91). The networks of men and women were similar (r=.74), node centrality was similar across networks (r=.90), as was connectivity (195.6 and 199.7 for men and women, respectively). For North America vs. China, no differences emerged on the SPQ sum-score (p=.25), but networks showed lower similarity than the gender comparison both in terms of structure (r=.44), node centrality (r=.56), and connectivity (North American network 180.35, China network 153.97). The present paper points to the value of conceptualizing schizotypal personality traits as a complex system of interacting cognitive, emotional, and affective characteristics. We discuss how network models may open new avenues for research into psychiatric and psychological nosology, aetiology, assessment strategies, prevention, and treatment.


3. My “52 symptoms” paper is out

The paper that took by far the longest time to publish is finally out; it turns out assessment is a very thorny issue that applied journals would rather avoid [4]. In any case, it has the prettiest figure I ever made [5], and I’ll use the opportunity to present it here again in all its glory. In summary, the paper shows that 7 commonly used rating scales that all aim to measure the same disorder — major depression — differ dramatically in symptom content. This means that depression studies that use only 1 scale and relate it to an outcome might not replicate if another scale had been used instead.
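
To make the overlap metric concrete, here is a toy illustration of the Jaccard index (defined in the abstract above) in R; the two symptom lists are invented for this example and are not the actual scale contents analyzed in the paper.

```r
# Toy illustration of the Jaccard index used in the paper:
# overlap = |intersection| / |union|; 0 = no overlap, 1 = full overlap.
# The two symptom lists below are made up for illustration only.
scale_a <- c("sad mood", "insomnia", "fatigue", "concentration problems")
scale_b <- c("sad mood", "hypersomnia", "fatigue", "worthlessness")

jaccard <- function(x, y) length(intersect(x, y)) / length(union(x, y))

jaccard(scale_a, scale_b)  # 2 shared symptoms / 6 unique symptoms = 0.33
```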

And because it makes for a fun story: here is the sad screenshot from the paper’s folder, in which I list the journals I submitted this paper to, in order. It didn’t make it past the editors until it reached JAD, where both editor and reviewers were very positive about it …

4. Publication metrics

I find it hard to make sense of publication metrics, and comparisons across fields seem difficult. What I can say is that citations are coming in steadily. I am first author on my 6 most cited papers, so I don’t have any of those big Science or Nature papers with 150 authors that bring in a couple of thousand citations. But given that I published my first paper in 2014, and that most of my citations come from papers I first-authored, I am told I should be happy about breaking 1k citations in the near future.

5. Collaborations

I’ve had the honor and pleasure to work with a ton of new collaborators this year; thanks for approaching me, and for teaching me about schizophrenia, psychosis, PTSD, perinatal depression, CBT, epidemiology, meta-analysis, reproducibility, and philosophy of science! When it comes to collaborators in general, I specifically want to mention Denny Borsboom here, who has been an inspiring, enthusiastic, and very supportive mentor/supervisor/boss, and Sacha Epskamp, who has been involved in many of my projects. Thanks, and thanks also to everybody else who inspired me this year, helped me to improve the quality of my work, listened when I had questions, taught me things, and pointed out my mistakes. A big shoutout to you all, especially the students I got to work with from whom I learned a ton (Sophia, Marlene, Ria, Julian, and George: you rock, and I’m proud of you).

The collaborations inspired me to update last year’s collaborator graph, and Sacha Epskamp and I wrote a blog post enabling you to estimate your own collaborator networks in R. Edge strength is the number of author co-occurrences on my papers [6].
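
If you want to try this before reading that post, here is a minimal sketch of how such a co-authorship graph can be built and drawn in R. This is not the syntax from our blog post; the tiny author lists and the choice of the qgraph package are purely illustrative.

```r
library(qgraph)

# Made-up author lists, one vector per paper; substitute your own publication list.
papers <- list(
  c("Fried", "Epskamp", "Borsboom"),
  c("Fried", "Cramer"),
  c("Fried", "Epskamp")
)

# Weighted adjacency matrix: edge weight = number of papers an author pair shares.
authors <- sort(unique(unlist(papers)))
adj <- matrix(0, length(authors), length(authors),
              dimnames = list(authors, authors))
for (p in papers) {
  for (pair in combn(p, 2, simplify = FALSE)) {
    adj[pair[1], pair[2]] <- adj[pair[1], pair[2]] + 1
    adj[pair[2], pair[1]] <- adj[pair[2], pair[1]] + 1
  }
}

# Draw the co-authorship network; thicker edges mean more shared papers.
qgraph(adj, labels = authors, layout = "spring")
```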

6. Lectures, workshops, conferences

In 2017, I got to teach and give talks in Munich, Madrid, Amsterdam, Leiden, Amersfoort, Eindhoven, Heeze, Ghent, Vienna, Odense, Belfast, Sussex, San Diego, and Boston (bonus points if you know in which countries all these cities are). I had the opportunity to teach 5 international workshops on network analysis, give several lectures in the winter and summer schools organized by the Psychosystems Group of the University of Amsterdam, and also give numerous presentations and invited talks. I also gave my first two invited keynotes (slides 1; slides 2)! Pew-pew-pew ;). The photograph below is from a workshop I taught in Madrid.

Madrid

6.1 Workshops

All materials from my most recent workshops are available here. The materials contain slides, syntax, and data, making both the between-subjects and the within-subjects analyses fully reproducible. A much shorter version of this workshop — a 2-hour webinar for the American Statistical Association — is available here (for folks more savvy with R).
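
For readers who want a sense of what the workshop syntax does without downloading the materials, here is a minimal sketch using the bootnet package; the simulated data below are a stand-in for the real symptom datasets used in the workshops, and the specific settings are illustrative rather than the workshop defaults.

```r
library(bootnet)

# Simulated placeholder data: 300 "participants" answering 8 correlated "symptom" items.
# In the workshops this would be a real cross-sectional symptom dataset.
set.seed(1)
f <- rnorm(300)
mydata <- as.data.frame(replicate(8, 0.6 * f + rnorm(300)))

# Regularized partial correlation network (EBICglasso), as in the tutorial papers above.
network <- estimateNetwork(mydata, default = "EBICglasso")
plot(network, layout = "spring")

# Bootstrap the edge weights to gauge their accuracy (cf. the bootnet tutorial paper).
boot_edges <- bootnet(network, nBoots = 1000)
plot(boot_edges, order = "sample")
```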

  • 2017/01: 3-day workshop, University of Ghent (with Maarten De Schryver & Sacha Epskamp)
  • 2017/02: 2-day workshop, Experimental Psychopathology meeting (with Jonas Dalege)
  • 2017/02: 5-day Psychological Networks Amsterdam Winter School, University of Amsterdam (big collaborative project, I taught 1 course)
  • 2017/08: 5-day Psychological Networks Amsterdam Summer School, University of Amsterdam (big collaborative project, I taught 2 of the courses; all materials online for free here)
  • 2017/09: 2-day workshop, University of Southern Denmark
  • 2017/09: 2-day workshop, Madrid University
6.2 Lectures & talks

For the first time, I got to chair symposia at international conferences this year, and was lucky to be invited to several national and international conferences (most importantly: ICPS, APS, and ABCT). You can find all talks and slides below.

    1. “Studying dynamical processes: vector autoregressive models for psychopathological time-series data”; “The network approach to psychopathology: where do we come from, and where do we go?”; and “Testing the limits: a network analysis of 74 schizotypal items in 27,000 participants” at Association for Behavioral and Cognitive Therapies, San Diego, USA (slides)
    2. “Introduction to psychological networks” and “Markov Random Fields II: stability, replicability and challenges”. Invited talks at Psychological Networks Amsterdam Summer School, Amsterdam, The Netherlands (slides)
    3. “Symptomics: studying symptoms and their network configurations”. Invited keynote at Symptom Research in Primary Care 2017, Amersfoort, The Netherlands (slides)
    4. “What are psychological constructs such as personality facets, intelligence, emotions, or mental disorders? Different theories require different statistical models.” Invited keynote, Arbeitstagung der Fachgruppe Differentielle Psychologie, Persönlichkeitspsychologie und Psychologische Diagnostik, Munich, Germany (slides)
    5. Chair for the two symposia “Investigating the heterogeneity of Major Depression with novel symptom-, factor-, and network-based models” and “Symptomics: analyzing the differential relationships of individual depression symptoms with risk factors, biological markers, and treatment response”. International Convention of Psychological Science, Vienna, Austria (slides)
    6. “The 52 symptoms of depression: a content analysis of 7 depression rating scales”; “Depression as a complex system of causally interacting symptoms”; “An introduction to Depression Symptomics”. Three presentations at International Convention of Psychological Science, Vienna, Austria (slides)
    7. Chair for the symposium “From the Brain to Emotions: Modeling Dynamics for a Better Understanding of Psychopathology”. Association for Psychological Science, Boston, USA.
    8. “Cross-sectional and time-series network models for psychopathology: an overview of challenges and future directions”; “Replicability and generalizability of PTSD networks: a cross-cultural multisite study of PTSD symptoms in four samples of trauma patients”. Two presentations at Association for Psychological Science, Boston, USA (slides)
    9. “Introduction to network analysis”. Invited lecture in the Innovations in statistical methods series at the Research Expert Meeting, Leiden University, The Netherlands
    10. “Introduction to psychological networks.” Invited talk at Psychological Networks Amsterdam Winter School, Amsterdam, The Netherlands

7. Open science

Here are my 2017 open science highlights:

  • Jennifer Tackett and Dan Morgan invited me to become associate editor for the Open Access Journal Collabra: Clinical Psychology. Collabra is the official partner journal of the Society for the Improvement of Psychological Science (SIPS) and a partner of the Open Science Framework (OSF)
  • We shared 5 datasets (4 datasets; 1 dataset) [7]
  • Plenty of preprints on the Open Science Framework and PsyArXiv
  • Several open access papers; however, not all publications this year were published open access, something I can and should improve upon next year
  • And my first open materials badge, for this paper forthcoming in Clinical Psych Science

And then, while not directly open science, I see these topics as being closely related to the open science movement:

  • Work on robustness of statistical methodology (PDF)
  • Work on replicability (PDF1, PDF2)
  • Work on reproducibility (PDF)

I’m sad that I wasn’t able to attend SIPS, but will definitely join SIPS 2018 in Grand Rapids, Michigan (and visit all my friends in Ann Arbor, where I spent 2 years during my PhD as a visiting scholar).

8. Reviewing

Publons ranked me 17th in psychology in terms of reviews written in the last 12 months, and gave me a cute medal for it. The large majority of these reviews were quite detailed and recommended major revision [8], and I signed all except 2 [9]. Here is my verified reviewer record from Publons.

In 2017, I reviewed for the following journals (in chronological order; a journal appearing multiple times means I reviewed multiple papers for it): International Journal of Methods in Psychiatric Research, PLOS ONE, Advances in Methods and Practices in Psychological Science, Journal of Affective Disorders, Journal of Clinical Psychology, Psychological Medicine, The Lancet Psychiatry, Journal of Clinical Child & Adolescent Psychology, Psychological Medicine, BioSystems, Clinical Psychological Science, Psychotherapy and Psychosomatics, The Lancet Psychiatry, Behaviour Research and Therapy, Journal of Affective Disorders, Psychopharmacology, European Journal of Psychotraumatology, Journal of Abnormal Psychology, Journal of Affective Disorders, Journal of Abnormal Psychology, Journal of Abnormal Psychology, International Journal of Environmental Research and Public Health, Health and Quality of Life Outcomes, Clinical Psychological Science, Psychological Medicine, Behaviour Research and Therapy, Journal of Clinical Psychology, Psychological Assessment, and BMJ Open.

9. Personal blog

I didn’t find the time to blog much about clinical trials for depression this year, which I did much more of in 2015 and 2016. Instead, I started to broaden the scope of my blog, with posts about tone in scientific discussions among psychologists [10], about small samples in clinical psychology, the numerous skills we expect psychological researchers to have today, and an R tutorial on estimating collaborator networks. I also put up a few shorter pieces about becoming a journal editor in 15 minutes, clickbait emails academia.edu sent out, and the time a journal uploaded my R syntax supplementary materials after printing them, scanning them, and pasting them as jpg files into a Word document.

10. Psych-networks.com

In addition, I started psych-networks.com, and it makes me very happy to see that the blog is so well read. There were 16 posts this year, covering the following topics:

  1. Free materials to learn about network models and their application (1, 2)
  2. Free datasets that can be used for time-series network modeling (1, 2)
  3. Tutorials on network stability, community detection, and tips for reviewing network papers
  4. Summaries of papers on: challenges to the network approach, predictability and controllability of networks, network replicability, how differential item variability can drive network structures, a tutorial paper on estimating networks, and a review of 5 papers that appeared at roughly the same time
  5. Two blogs on the equivalence of network models and latent variable models (1, 2)
  6. A blog arguing that we should move beyond symptoms in network models
  7. And a 15-minute podcast I recorded with Laura Bringmann, in which we talked about ESM data, potential advantages of temporal over cross-sectional data, recent methodological advances in network modeling, time-varying network models, and how to choose the best model for your data from the large number of models now available.

Some of the most-read posts were contributions from the guest bloggers Denny Borsboom, Aaron Fisher, Payton Jones and Haley Elliott, Jonas Haslbeck, Joost Kruis, Sacha Epskamp, and Giulio Costantini. And I want to highlight that without the guest blogs, psych-networks.com would be much less successful, and I hope to feature many more guest blogs (and podcasts!) in 2018. Seeing this list, however, made me realize that nearly all guest bloggers were guys, so when I post my 2018 summary, my goal is to have 50% female contributors to psych-networks (and do call me out if I don’t reach that goal).

11. Things that did not go well

We’ve read about shadow CVs in the past, and I wanted to highlight the academic thing that sucked most in 2017: I did not get the very prestigious and highly competitive Dutch VENI grant, which might have fairly large consequences for my career (there aren’t many other funding options available in the Netherlands). I initially got kicked out in the first stage, where they only look at your CV. This was a bit surprising: it’s an ECR grant that you can only apply for up to 3 years after finishing your PhD, so my CV is fairly competitive for this stage. We appealed successfully, but the appeal process took so long that they didn’t have sufficient time to find 2 reviewers for my proposal. So I only had 1 reviewer, and that happened to be reviewer 2 …

In any case: I wanted to use the opportunity to congratulate my colleagues who did get the grant (I am talking to you Lynn, Maarten, and Lotte!): you deserve it, I’m very happy for you, BUT I HOPE YOU FEEL A BIT GUILTY AND WILL BUY ME ICE-CREAM ;).

12. Personal growth

I wanted to pick up more advanced simulation techniques, seeing that my colleagues here in Amsterdam are so good at them, and improve my knowledge of machine learning / neural networks. This hasn’t happened, because I got bogged down in the many collaborations I agreed to in 2016 — and all the other things we do in academia ;). So the goal for 2018 is to get back to that, and to decrease the number of collaborations. Not that they weren’t fun — they were, and I learned a ton — but I’ve barely found any time for my own work in the last months of 2017.

13. Best of 2017

While I wanted to limit this blog to academic topics, the best thing that happened in 2017 is so much better than anything that happened in academia: doctors saved the life of a person I love dearly with the help of stem cells I had donated. It’s incredible what progress medicine has made in terms of cancer treatment, and inspiring for those of us working in mental health research (we’re so incredibly far behind).

Here are my sister and me visiting the recovering patient in the hospital!

14. Worst of 2017

My worst mistake this year? Shaving my beard in the summer … I promise to grow it back by the next international conference so you don’t have to deal with this face of mine.

See you in 2018!
Signed: The Eikobot


Footnotes:

  1. Some of these were accepted in 2016 and published in 2017, but I can’t be arsed to start reading up on what was published exactly when, and try to figure out the difference between published online and published in print.
  2. Disclaimer: 3 are shared first-authorships.
  3. Actually, the n is 21,001 but that triggers me so I usually lie about it.
  4. Which motivated me to submit a symposium to APS 2018, together with the amazing Jessica Flake, entitled “Measurement Schmeasurement: How Poor Measurement Practices Threaten Cumulative Psychological Science”.
  5. With ggplot2 help by the brilliant Jana Jarecki.
  6. This is from October 2017 and a bit outdated, but I don’t have the time to run it again now.
  7. We collected none of these, and full credit goes to the original authors that allowed us to re-use their data and provide them in the supplementary materials of the respective papers.
  8. Yes, I’m one of those people.
  9. Because it’s complicated sometimes.
  10. 700% more visitors that month, and long and lively discussions on the Psych Methods Facebook Group, so I conclude people still think this is an important issue to discuss.
