This is a brief commentary by Orestis Zavlis (Psychology and Language Sciences, University College London, United Kingdom) and me on the recent PNAS paper entitled “Impulsivity is a stable, measurable, and predictive psychological trait”. The paper follows the typical structure of *alphabet papers* — papers that have established factors of C(ognitive dysfunctioning) or D(ark factor of personality) or D(isease) or D(istractibility) or G(eneral intelligence) or L(anguage) or N(euroticism) or N(eurodiversity) or S(tress) or Personality (“the big one”) or P(sychopathology)^{1}. In this case, the authors establish the I(mpulsivity) factor.

Unfortunately, given that our commentary “does not contribute substantially to the discussion of the original article”, it was declined by PNAS, so we post it here. You can also find it on the OSF as a PDF.

Since its early psychometric days, impulsivity has been one of the most prominent and well-researched personality traits (1). In recent years, however, a number of researchers have questioned the trait of impulsivity by showing that it is simply a conglomerate of other traits (1). Huang and colleagues (2) attempted to address these controversies by applying bifactor analyses to 48 measures of impulsivity, yielding a statistical factor of impulsivity that they interpreted to be a ‘valid and useful psychological construct’ (p. 1). Here, we question this conclusion for three reasons.

First and foremost, bifactor models provide poor tests of general factor hypotheses because of their high fitting propensity, a widely acknowledged issue in the statistical literature (3,4). Specifically, bifactor models tend to fit various datasets reasonably well, and can outperform alternative factor models even when those alternatives represent the true data-generating process (see (5) for a simulation). In that sense, well-fitting bifactor models cannot, on their own, ‘significantly advance the conceptualization and measurement of construct impulsivity’ (or any other psychological construct) (2, p. 1).

Second, in this particular case, the bifactor models do not even fit well. Although the authors suggest that RMSEA and SRMR values of 0.06–0.09 represent ‘good fit’ (p. 5), this is not in keeping with established guidelines (6), according to which such values indicate moderate-to-poor fit.
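To make concrete where the reported values fall, here is a minimal sketch that maps RMSEA values onto the cutoff bands commonly attributed to Schermelleh-Engel and colleagues (6); the function name and band labels are ours, and the exact band boundaries vary somewhat across guideline papers:

```python
def classify_rmsea(rmsea: float) -> str:
    """Rough RMSEA bands following guidelines such as
    Schermelleh-Engel et al. (2003)."""
    if rmsea <= 0.05:
        return "good"
    if rmsea <= 0.08:
        return "acceptable"
    if rmsea <= 0.10:
        return "mediocre"
    return "unacceptable"

# The values the authors label 'good fit' fall in the
# acceptable-to-mediocre range under these bands:
print([classify_rmsea(v) for v in (0.06, 0.09)])
# → ['acceptable', 'mediocre']
```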

Finally, the authors conflate their statistical impulsivity factor with a psychological impulsivity factor (i.e., a psychological construct), an interpretation that does not follow, for several reasons. One of those reasons is statistical equivalence: the tendency for theoretically different statistical models to fit a set of (particularly cross-sectional) data equally well (7,8). To illustrate this problem, we examine whether a network model might also fit the data well. Network models are statistical tests of the network hypothesis, which contrasts with the bifactor hypothesis by implying that psychological constructs (like intelligence, psychopathology, and personality) are not unitary latent variables but rather networks of mutually reinforcing variables (9).
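For readers unfamiliar with network psychometrics: edges in such networks are typically estimated as partial correlations between observed variables, i.e., a Gaussian graphical model (9). The following is a purely illustrative, unregularized sketch of that idea in Python; it is not the estimation routine used in our analysis (which is available in the OSF repository):

```python
import numpy as np

def partial_correlation_network(data: np.ndarray) -> np.ndarray:
    """Edge weights of a Gaussian graphical model: partial correlations
    obtained by standardizing the inverse covariance (precision) matrix.

    data: array of shape (n_observations, n_variables).
    """
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    partials = -precision / np.outer(d, d)  # r_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(partials, 0.0)         # no self-loops
    return partials
```

In practice, network estimation adds regularization (e.g., the graphical lasso) to shrink small edges to zero, which is why we call this sketch unregularized.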

To test this model, we randomly halve the authors’ Wave 1 data (N = 1676), applying exploratory network analysis to the first half and confirmatory network analysis (for which we obtain fit indices) to the second half (see https://osf.io/rdmk5/ for open data and code). Our results show that the confirmatory network model exhibits better fit than the confirmatory bifactor model on the Wave 1 (N = 838) but not the Wave 2 (N = 196) data, most likely due to reduced statistical power (see Table 1 and Figure 1).
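The split-half step of that procedure can be sketched as a simple random permutation split (the seed below is arbitrary; the analysis code we actually ran is in the OSF repository):

```python
import numpy as np

def split_half(n_rows: int, seed: int = 2024):
    """Randomly partition row indices into an exploratory half and a
    confirmatory half (the second half gets the extra row if n is odd)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_rows)
    half = n_rows // 2
    return idx[:half], idx[half:]

# Wave 1 (N = 1676) yields two halves of N = 838 each:
exploratory_idx, confirmatory_idx = split_half(1676)
```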

These results show that a network model is, at the very least, an equally good summary of the current data, illustrating the unfortunate fact that fit indices alone are inadequate for adjudicating between alternative theoretical viewpoints (7,8). Instead, fit indices should always be complemented with existing multidisciplinary evidence (10). Currently, comprehensive reviews of such evidence suggest that ‘impulsivity fails to satisfy even the basic requirements of a psychological construct and should be rejected as such’ (1, p. 2). Based on this work and the arguments above, we conclude that there is currently no evidence for impulsivity as a ‘psychological trait’ (2, p. 1).

**References**

1. Strickland JC, Johnson MW. Rejecting impulsivity as a psychological construct: A theoretical, empirical, and sociocultural argument. Psychological Review. 2021;128(2):336.
2. Huang Y, Luan S, Wu B, Li Y, Wu J, Chen W, et al. Impulsivity is a stable, measurable, and predictive psychological trait. Proc Natl Acad Sci USA. 2024 Jun 11;121(24):e2321758121.
3. Bader M, Moshagen M. Assessing the fitting propensity of factor models. Psychological Methods. 2022 Oct 6 [cited 2024 Jun 17]. Available from: https://doi.apa.org/doi/10.1037/met0000529
4. Preacher KJ. Quantifying parsimony in structural equation modeling. Multivariate Behavioral Research. 2006 Sep;41(3):227–59.
5. Greene AL, Eaton NR, Li K, Forbes MK, Krueger RF, Markon KE, et al. Are fit indices used to test psychopathology structure biased? A simulation study. Journal of Abnormal Psychology. 2019 Oct;128(7):740–64.
6. Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological Research. 2003;8:23–74.
7. Fried EI. Lack of theory building and testing impedes progress in the factor and network literature. Psychological Inquiry. 2020 Oct 1;31(4):271–88.
8. Watts AL, Greene AL, Bonifay W, Fried EI. A critical evaluation of the p-factor literature. Nature Reviews Psychology. 2023;1–15.
9. Borsboom D, Deserno MK, Rhemtulla M, Epskamp S, Fried EI, McNally RJ, et al. Network analysis of multivariate data in psychological science. Nat Rev Methods Primers. 2021 Dec;1(1):58.
10. Lewandowsky S, Farrell S. Computational modeling in cognition: Principles and practice. Thousand Oaks: Sage Publications; 2011. 359 p.
