Antidotes to cynicism creep in academia


This is one of those blog posts that doesn’t read well if you stop halfway. First, I provide evidence that academia can look pretty broken: there is low-quality work everywhere you look, the peer-review system has long outlived its utility, and academic publishing is a dumpster fire. Add considerable work pressure, the publish-or-perish culture, and precarious employment situations, and things can look gloomy and disheartening. Second, and this is where the blog post becomes a bit personal, I stress how important it is for me not to become a science cynic, because of my responsibility towards my mental health and work, my team, my colleagues, and my students. Third, I highlight antidotes to cynicism creep, and the many things that have greatly helped me with motivation and staying positive.


1. So. Many. Problems.

My most central claim in the blog post is that there are so many reasons to be disheartened and disillusioned. This pertains to many areas of the system we work in, including published papers, peer-review, the publishing industry, severe issues with workload and job insecurity, and so on. I will talk about some of them, and later come back to discuss how I maintain motivation and energy, and why.

2. Problems with scientific papers

Without exaggeration, I believe that the majority of published works in my field (broadly defined as psychology) do not add value. Many papers draw conclusions that are not supported by evidence, which cascades through the literature, because these papers are cited for the conclusions, not the evidence. The majority of published works are not reproducible, in the sense that authors conduct science behind closed doors without sharing data or code. Many published works are not replicable, i.e., will not hold up to scrutiny over time. Theories are verbal and vague, which means they can never get properly rejected. Instead, as Paul Meehl famously wrote, they sort of just slowly fade away as people lose interest. Let me try to convince you that this is an entirely reasonable position, based on the evidence we have.

(1) There are scientific disciplines dedicated to interrogating the scientific literature for potential flaws. Resulting work includes flashy papers such as “Why Most Published Research Findings Are False”, but also nuanced work highlighting considerable problems in several areas, e.g., issues of replicability in psychology and cancer biology, large surveys of researchers admitting to engaging in questionable practices (including making up data), an ever-growing number of high-profile fraud cases in which the accused have now started suing the researchers who pointed out the problems, and so on.

(2) Like many others, I have also formed my own impressions by reading papers and engaging with the literature, and I believe that a substantial proportion draw invalid inferences. I have written many commentaries to point out the most egregious flaws, but there aren’t enough hours in the week to engage with and rebut even 10% of the bizarre claims in the literature.

(3) Each year, a dramatic number of papers are retracted, surpassing 10,000 in 2023 alone. These retractions are likely only the tip of the iceberg. In fact, “the number of articles produced by businesses that sell bogus work and authorships to scientists [..] is estimated to be in the hundreds of thousands“. This does not speak to a system working as intended, in which we can rely on the inferences we read.

(4) These problems are hard to demonstrate abstractly, so I will list a few concrete examples below. I have also written many dozens of blog posts on problematic papers that you can find on this website — for example, just in the last few months, on depression & temperature, the default mode network, and the disease factor. Below are four papers from our psychedelics overview paper where the authors wrote things that were not correct based on their evidence:

“Abbar et al. found in a randomized controlled trial (RCT) comparing ketamine against placebo that there was no persistent benefit of ketamine over placebo at the exit timepoint of the trial in week 6, but concluded in the abstract that ‘ketamine [..] has persistent benefits for acute care in suicidal patients’. Ionescu et al. found in an open-label ketamine study that only 2 of 14 patients show sustained improvement at 3-month follow-up (which may well be due to the placebo effect or other factors), but the title of the paper reads ‘Rapid and Sustained Reductions in Current Suicidal Ideation’. Palhano-Fontes et al. concluded in their ayahuasca study (n = 14 treatment, n = 15 placebo) that ‘blindness was adequately preserved’, when all participants in the treatment group said they believed they had received ayahuasca, but less than half of participants in the placebo group said so. And Daws et al. compared two treatment arms, including one using psilocybin-assisted psychotherapy, against each other, concluding that one treatment outperformed the other despite the lack of a statistically significant interaction term between the treatments.”
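The last example rests on a common statistical fallacy: “significant in one arm, not significant in the other” is not evidence that the arms differ; that claim requires a direct test of the difference. Here is a small numerical illustration with made-up summary statistics (hypothetical numbers, not the data from Daws et al.):

```python
# Hypothetical summary statistics for two treatment arms (illustration only,
# not the data from Daws et al.): mean symptom change, SD, and sample size.
from scipy import stats

m_a, sd_a, n_a = 0.50, 1.0, 40   # arm A
m_b, sd_b, n_b = 0.25, 1.0, 40   # arm B

# Within-arm t-tests of change against zero:
print(m_a / (sd_a / n_a**0.5))   # t = 3.16, p < .05  -> "arm A works!"
print(m_b / (sd_b / n_b**0.5))   # t = 1.58, p > .05  -> "arm B doesn't!"

# The direct between-arm comparison, the test the claim actually requires:
print(stats.ttest_ind_from_stats(m_a, sd_a, n_a, m_b, sd_b, n_b))
# t = 1.12, p = .27 -> no evidence that one arm outperformed the other.
```

Two separate within-arm tests can point in different directions purely by chance; only the direct comparison (or, in a trial with repeated measures, the interaction term) licenses the claim that one treatment outperformed the other.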



2.1 Small detour: Mindfulness and brain morphology

Let me provide you with a specific example, and then I’ll zoom out and embed this in a broader context. A few days ago, I complained (apologies, it happens when I become cynical) that the neuroimaging literature in psychology especially is riddled with false-positive findings that do not replicate, and posted about a recent paper that found no relation between mindfulness training and changes in brain morphology, in contrast to prior work. I embedded this point in a broader criticism of the literature at large1, problems that lead to a massive amount of research waste (i.e., money that could be spent better).

As a response to my complaint, someone posted a recent meta-analysis that supposedly shows convincingly that there are relations between mindfulness training and brain morphology. Here is the relevant part of the paper’s abstract:

“[..] This study aimed to investigate the structural brain changes in mindfulness-based interventions through a meta-analysis. [..] 11 studies (n=581) assessing whole-brain voxel-based grey matter or cortical thickness changes after a mindfulness CT were included. Anatomical likelihood estimation was used to carry out voxel-based meta-analysis with leave-one-out sensitivity analysis and behavioural analysis as follow-ups. One significant cluster (p<0.001, Z=4.76, cluster size=632 mm3) emerged in the right insula and precentral gyrus region (MNI=48, 10, 4) for structural volume increases in intervention group compared to controls. Behavioural analysis revealed that the cluster was associated with mental processes of attention and somesthesis (pain). Mindfulness interventions have the ability to affect neural plasticity in areas associated with better pain modulation and increased sustained attention. This further cements the long-term benefits and neuropsychological basis of mindfulness-based interventions.”

This all looked fine, and only because I was supposed to be grading assignments and was really looking for something else to do did I scroll down a little, where I found that the authors had actually removed 4 of the 15 eligible studies from their meta-analysis: the ones that contained null-findings. That is, they removed the 4 studies that did not find a relationship between brain morphology and mindfulness, and then concluded that the results “cements the [..] neuropsychological basis of mindfulness-based interventions”. This is not a valid conclusion.

That the authors excluded null-findings is not apparent from the title or abstract. Digging a little deeper, the sample sizes of the removed studies were nearly twice as large as those of the included studies. And of the 11 analyzed studies, only 2 provided any significant effects, with fewer than 70 combined participants — one of which is a study on yoga with female participants at risk for developing Alzheimer’s disease. Even without the issue of dropping null-findings, this does not qualify as robust evidence according to meta-analytic guidelines (1, 2).
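To make concrete why dropping null-findings invalidates the conclusion, here is a toy simulation with entirely made-up numbers (not the paper’s data): even when the true effect is exactly zero, pooling only the studies that happen to reach significance produces a sizeable meta-analytic “effect”.

```python
# Toy simulation (illustration only): selective inclusion of "significant"
# studies inflates a meta-analytic estimate even when the true effect is zero.
import numpy as np

rng = np.random.default_rng(42)

def pooled_estimate(effects, ses):
    # Fixed-effect (inverse-variance weighted) meta-analytic mean.
    w = 1.0 / ses**2
    return np.sum(w * effects) / np.sum(w)

all_studies, sig_only = [], []
for _ in range(10_000):
    n = rng.integers(20, 120, size=15)   # 15 hypothetical study sample sizes
    se = 1.0 / np.sqrt(n)                # standard error of each study's effect
    d = rng.normal(0.0, se)              # observed effects; TRUE effect is zero
    sig = d / se > 1.96                  # studies that happen to reach p < .05
    all_studies.append(pooled_estimate(d, se))
    if sig.any():
        sig_only.append(pooled_estimate(d[sig], se[sig]))

print(f"pooling all 15 studies:     {np.mean(all_studies):+.3f}")  # close to zero
print(f"pooling 'significant' only: {np.mean(sig_only):+.3f}")     # clearly positive
```

Removing the null-finding studies by hand before pooling is exactly this conditioning step, just performed manually.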

3. Problems with peer-review

As a result of the above, when someone sends me a paper reporting a particular finding, or when a colleague publishes a paper, I have little a priori confidence that what is communicated follows from the presented data. But wait, aren’t all scientific papers vetted through a system of peer-review? Yes, most journals have systems in place where reviewers critically evaluate papers. In practice, this system does not work very well, which often comes as a surprise to the journalists and friends outside academia I talk to. Here are some reasons why peer-review does not work well.

(1) Authors are often asked to recommend reviewers; you can immediately see how this can lead to problems. “You recommend me and I recommend you” is common, or just recommending academic friends who are not impartial. There are no structural mechanisms in place at scientific journals to interrogate or prevent these issues other than perhaps checking if researchers have published together.

(2) Peer-reviewers often disagree with each other when they rate the quality of a manuscript. When we work as editors and read several peer-reviews, we often face situations where one reviewer is very unhappy with a paper and recommends rejection, but the second reviewer is very enthusiastic. We see this as authors when we often (not always) get conflicting feedback. And we see this in scientific studies on the topic.

(3) There are no real scientific standards for who becomes a reviewer, and some journals like Frontiers use software that automatically selects and invites reviewers. I am regularly invited to review geological work on depression — “a landform sunken or depressed below the surrounding area” — because I work on major depressive disorder. PhD students I supervise are often invited in the second year of their PhD by an automated system. This isn’t to say they aren’t qualified, but I don’t think the public, when they see ‘peer-review’, would think PhD students vet papers.

(4) Most papers are reviewed by 2 or 3 reviewers. In most cases, reviewers are not compensated, and there is little motivation to take the work seriously other than scientific integrity and the importance of service to the field. Given how stressed researchers can be, I have seen many, many, many low-quality reviews. This is supported by pretty devastating experimental work showing that reviewers usually miss fatal flaws when asked to review flawed papers (1, 2, 3).2

(5) There is no accountability for a bad review, because reviews are almost never public. So even if you just wrote “fuck you” in a review and nothing else (please don’t), the worst that can happen is that the editor or the journal no longer invites you to review. Journals stress that peer-reviews are confidential, which means that, following their own rules, they wouldn’t even be allowed to tell others (e.g., other journals) that you are a terrible reviewer.

In the previous section, I tried to explain why I don’t have a lot of confidence in the conclusions of any random paper before vetting it. Now you know why the fact that a paper was peer-reviewed does not, in any fundamental sense, increase my trust in the veracity of the presented findings.

4. Problems with the publishing industry

But Eiko, these papers are published by journals like Science and Nature. Are those not cornerstones of truth in the world? Are they not beyond reproach? Well… I believe that the scientific publishing industry is inherently tied to some of these problems. I’ve written about the industry in some detail previously, and recommend that blog post to catch up if you don’t work in academia.

(1) In sum, most scientific publishers are for-profit companies. They sell scientific papers the way Apple sells smartphones or computers, and they have no inherent interest in scientific integrity or cumulative knowledge building, because these are not goods that increase profits. I don’t even think this is morally bankrupt: it is just a business, and it is our fault that we have allowed a system to develop in which scientific publishing is a business, rather than something organized by states or governments or non-profits or universities, with the goal of making scientific findings available to everyone.

(2) In a nutshell, a researcher does research and submits a paper; other researchers serve as editors for a journal and help select appropriate papers for publication; yet other researchers peer-review the paper; and a journal eventually publishes the paper, for a profit. None of the researchers (authors, editors, reviewers) are usually compensated. Essentially, taxpayers pay researchers, who then do work that publishers sell back to taxpayers at a ridiculously high profit margin. The procedure varies a bit, but in clinical psychology, psychiatry, and methodology, the above is standard practice and representative of publishing in these fields. How is this not disheartening?

(3) The publisher MDPI owns many journals, and in recent years most of these journals published, on average, one special issue per week. Several journals even published more than 2 special issues per day (Wikipedia). There is no way that all of this is robust, thorough, carefully vetted science, especially considering the very short times from submission to publication. This may be one of the reasons why some MDPI journals were recently delisted from a list of legitimate journals, and why some researchers have long considered MDPI a predatory publisher. MDPI is not alone here; Frontiers journals and other publishers have also received serious criticism.

(4) I already mentioned that last year, over 10,000 papers were retracted. Related headlines such as ‘Scammers impersonate guest editors to get sham papers published‘ don’t exactly inspire confidence either.

5. Problems with work pressure and incentives

Let’s move to us: researchers and educators.

(1) I’ll start by showing you how bad things are, describing my own situation. It’s never nice to hear a privileged German white guy complaining, but the point I’m making is that even with my level of privilege, things were barely manageable. During my PhD in Germany, I was in a precarious employment situation, earning less than €1250 per month, from which I needed to pay for my own healthcare. This is because we were employed as ‘freelancers’, a trick universities use to save on social welfare and other contributions (they also did not pay into my pension fund). Many of my friends and colleagues were, during their PhDs, employed and paid for half-time positions but expected to work full time. I then had 2 temporary postdoc positions of 2 years each. And then I worked as an Assistant Professor, in a temporary position. Only recently, at the age of 39, did I obtain tenure, i.e., my very first permanent job.

(2) This is related to situations surrounding “publish or perish”, long work hours, mental health problems and burnout, and a lack of sustainable and permanent jobs (1, 2, 3). A recent OECD study summarizes the situation:

“Academic careers have become increasingly precarious, endangering rights, subjecting workers to difficult working conditions and stress. [..] Most were on short-term contracts or did not have any employment relationship [..] The problem has long been severe and has gotten worse over time. [..] Many countries are experiencing the emergence of a dual labour market, with the coexistence of a shrinking protected research elite and a large precarious research class that now represents the majority in most academic systems.”

(3) I want to get back to scientific evidence and quality: these issues dramatically exacerbate the problems I discussed above surrounding the validity of findings. Especially when we are employed in precarious situations, there is little overlap between the goals and motivations of scientists and the core goal of science itself. My colleague Anna van’t Veer created a really nice figure on this topic as part of our workshop on Responsible Scholarship, showing that pressures in the system (e.g., publish or perish, competitiveness, a hectic research pace) lead scientists to do work that benefits their careers in the moment (such as flashy publications) but does not contribute to a rigorous, robust pyramid of cumulative science (cf. “Slow Science“). Scientists are largely still evaluated based on traditional metrics such as the number of publications, journal impact factors, citation rates, and so on, and optimizing those has little to do with optimizing scientific quality — especially if you optimize them under duress (i.e., worrying about being unemployed next year).

6. Antidotes to cynicism creep

Let me reiterate what I said before: when someone sends me a paper or a newspaper article about a paper, my view today is that the conclusions may or may not be valid. I don’t expect things to hold up just because this is a scientific paper (compared to a blog post), or because it is peer-reviewed (compared to a preprint), or because it is published in Nature or Science, or because it is published by a famous scientist. I think my view is reasonable and supported by evidence, at least in the fields I work in.

This view can lead to cynicism, which is what I want to avoid for myself — cynicism drains my motivation and energy. It’s not enjoyable to do this sort of work when you doubt the validity of much of the literature. Cynicism can also be contagious, potentially affecting junior colleagues and students. And then you’re disheartened, your team is discouraged, your students are bummed out … not a good place to be in, for you or anyone else. I believe my team does the best work when we’re both critical and motivated. Motivation can of course come from a sense of urgency, but it becomes dangerous (for me anyway) when it tips over into cynicism.

But this view that many scientific conclusions are invalid also implies a call to action, because the status quo threatens the idea of building a robust, cumulative pyramid of foundational blocks that stand the test of time.

And this is where the energy lies. The motivation and hope and amazing opportunities to make things better, together with smart and kind people all around us. There is a good chance we can make a dent in the problems I’ve summarized above, in my lifetime, and we have already seen a lot of improvements over the last years. We need to do so not only to prevent research waste, but also to prevent a further erosion of society’s trust in science, which can be deadly as we have seen during the COVID-19 pandemic in relation to vaccines.

Here’s how I get my energy. This is necessarily idiosyncratic, but I hope some of these will work for you, too.

6.1 Watch 👀

I listed so many problems above, but none of them have to be permanent, and there is progress for all of them.

(1) I’m privileged to get to see a lot of amazing people in action. People like Jessica Schleider, Jennifer Tackett, Don Robinaugh, Marc Molendijk, Praveetha Patalay, Laura Bringmann, Anna van’t Veer, Anna van Duijvenvoorde and so many others who not only do amazing work following modern open science principles: they are also fantastic peers, mentors, and teachers. I find it inspiring and motivating to watch them work. Generally, seeing the massive grassroots movements that have popped up in the last half-decade, such as open science communities and ReproducibiliTea journal clubs, gives me a lot of hope for the future. There are too many people and initiatives to list here, but recently, folks have started putting out bounties to find errors in their own work, offering either payments or donations to charity if mistakes are identified (1, 2, 3). And a few weeks ago, ERROR went online, “a bug bounty program to systematically detect and report errors in scientific publications, modelled after bug bounty programs in the technology industry”, where investigators are paid for discovering errors.

(2) In terms of precarious contracts, there is progress in the Netherlands both on the national level, and the level of specific universities and faculties. For instance, the Faculty of Science at the Free University Amsterdam decided to discontinue the tenure track system (which is quite unique: other jobs don’t require you to perform very well for 5 years before you are perhaps offered a permanent job). They will replace the tenure track system with a career track policy that assumes a permanent contract after 18 months.

(3) The Netherlands is also among the leading countries for rewards and recognition initiatives, i.e., initiatives that try to change the criteria we are evaluated on: away from traditional metrics such as impact factors and fancy journals, towards how open and transparent our work is, how effectively we engage with the public, and how much help and how many collaborative opportunities we provide to colleagues. If you primarily teach, you ought to be evaluated based on .. surprise .. TEACHING, rather than on your publications. Utrecht University has taken bold steps here, for instance, moving towards a much more collaborative and inclusive environment.

6.2 Do 🤝

(1) I get a lot of energy from activism to improve things in academia, for example as co-founder of the Open Science Community Leiden, member of the Young Academy Leiden, and through all the activities that come with these groups and initiatives. Being in the privileged position to talk about the issues that plague researchers, research, and public trust is a huge responsibility, but it can also be very motivating when folks in charge respond positively. I highly recommend activism. This is why I find it so utterly alienating when people on Twitter shit on the ‘open science movement’, 99% of which consists of super enthusiastic early career researchers who have learned their lessons from a flawed literature and try to make things better in their own work.

(2) More broadly, for me personally, the way forward is to incentivize, champion, and promote better and more robust scientific work. I find this motivating and encouraging, and an efficient antidote against cynicism creep. I find it intellectually rewarding because it is an effort that spans many areas including teaching science, doing science, and communicating science. And I find it socially rewarding because it is a teamwork effort embedded in a large group of (largely early career) scientists trying to improve our fields and build a more robust, cumulative science. In the best case, these efforts not only safeguard the quality of science and its application, but also enable trust, foster equal opportunities and outcomes, and prevent research waste.

(3) Teach! I’ve been teaching a research methods course to clinical master’s students in Leiden for a few years, and I love how quickly and clearly they understand problems in the literature, and what can be done to address these issues. I’ve been able to teach workshops for international audiences, e.g., on the importance of proper theory building and testing (rather than vague, narrative ideas that can’t ever be rejected) together with Don Robinaugh; on the importance of improving our measurement practices with Jessica Flake; and on network analysis and other statistical methods with a host of other amazing researchers. Students are skeptical, and they are ready to identify and tackle challenges.

(4) Practice and celebrate, in your own work and that of others, what Merton referred to as a crucial norm in science: organized skepticism, i.e., that “scientific claims should be exposed to critical scrutiny”. And while we do so, let’s try to make sure to call out problematic work, not people (at least for a while, unless we have called out work by the same people too many times). Tough on the issue, soft on the person. Science is a social enterprise as well, and it’s no fun to be criticized. Let’s practice together, and start teaching folks earlier in their careers, that they are not their science, and that rejected hypotheses and theories say nothing about them as people. Let’s practice criticism, for instance following Rapoport’s rules, and let’s practice being criticized. Quote:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  3. You should mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

(5) Write about the problems you see. When I joined Denny Borsboom’s lab in 2014 and learned about network models and estimated coefficients, one of my first thoughts was that current estimation routines don’t provide information on model stability, such as confidence regions. So I teamed up with Sacha Epskamp and we wrote a tutorial paper on the topic, urging researchers to estimate and communicate uncertainty about the estimated model results. The community embraced the paper and practice quickly, which was very rewarding to see. Similar things happened when I teamed up with Michiel van Elk to write about problems in psychedelic science, or Jessica Flake to write about problems in psychological measurement. This gives me energy because scientists and teachers and journalists and policy-makers are willing to engage with critical material: it just needs to be out there, in accessible ways.
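For readers curious what “communicating uncertainty” looks like in practice: the tutorial and its tooling are R-based, but the core idea is a nonparametric bootstrap of edge weights. Below is a minimal Python sketch of that idea under simulated data; every name and number here is illustrative, not the tutorial’s actual code.

```python
# Minimal sketch (illustration only): bootstrap confidence intervals for
# network edge weights, here partial correlations between simulated variables.
import numpy as np

def partial_corr(data):
    """Partial correlations from the inverse covariance (precision) matrix."""
    prec = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 0.0)
    return pc

rng = np.random.default_rng(0)
data = rng.normal(size=(300, 5))   # stand-in for n=300 people, 5 symptoms

# Resample rows with replacement and re-estimate the network each time:
boots = np.array([
    partial_corr(data[rng.integers(0, len(data), size=len(data))])
    for _ in range(1000)
])
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)   # 95% CI per edge

est = partial_corr(data)
print(f"edge 0-1: {est[0, 1]:+.3f}, 95% CI [{lo[0, 1]:+.3f}, {hi[0, 1]:+.3f}]")
# For pure-noise data like this, the intervals should cover zero: reporting
# them prevents over-interpreting small, unstable edges.
```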

(6) Publish reviews. If we really keep ye good olde peer-review system, the least we must do is put the reviews online. I would very much like to see the reviews for the mindfulness paper I discussed above, what the handling editor wrote, and how the authors responded. Were these issues not caught? Did the reviewers catch them, but the editor did not ask the authors to fix the problems? Perhaps most importantly, did the authors have good counter-arguments, potentially making my criticism invalid? Publishing reviews resolves these questions, and while the practice is rare, some publishers have implemented it for a long time (e.g., BMC did so for several of its journals as early as 2001). Modern journals such as Meta Psychology publish all communication. How cool is that? (Sidenote: I sign my reviews, because I think I should be held accountable, but I am also part of one of the most privileged groups in academia and have a permanent job — there is little objective danger here. But I would never ask early career folks or folks in precarious employment conditions to sign.)

(7) Publish code and data, whenever possible. Researchers should be asked to share their data and code (i.e., their exact statistical analyses, which usually take the form of programming code). Publishing such information can be incredibly useful because it allows the scientific community at large to practice organized skepticism — mistakes happen, in every job, and we must make sure to catch them when they happen. I have shared code for all my papers since 2015 (because that is free, I don’t think there is ever a reason not to), and data when possible, and doing so enabled the scientific community to identify a mistake in one of my statistical analyses, resulting in a correction of my work.

Of course, some of the proposed solutions and initiatives above may not work out, and some may even set us back a step — I hear you. But there is actual progress, coming from motivated folks in different areas of academia who are thinking about these issues and how to solve them in good faith. This gives me a lot of energy. It’s a bit like a first therapy session for PTSD, where one of the most important things to do is to normalize: “You may think your symptoms are bizarre or abnormal, but actually, these are quite common symptoms many people experience, and they are quite normal given what you have been through!” I experience the current initiatives as very normalizing (it’s not you, Eiko, it’s the system, stupid).

And everybody is different; my answer to the question of what gives energy may not be the right one for you. I still hope it may inspire.

Which makes me wonder: what inspires you?

7. Conclusion

There are plenty of reasons to be disillusioned about the system as a whole. But maybe that’s the first necessary step: dis-illusionment, seeing things for what they are. This provides energy and motivation for change. So many people have been dis-illusioned, and we stand on the shoulders of giants who have helped to dispel false beliefs, including that peer-review guarantees scientific quality, that journals with high impact factors publish higher quality work, that Nature charging €10,000 for open access fees is fair, or that precarious employment is normal and fine.

The last decade has shown that we can really make a dent in some of these issues, just looking at the reforms we have seen at the national level (e.g., the Dutch Taverne amendment allowing researchers to share their work publicly), the international level (e.g., UNESCO’s recommendations on Open Science), the journal level (e.g., TOP guidelines and Registered Reports), and the funder level (e.g., the Dutch funder NWO mandates much more transparency than it did a decade ago). While we share the values of open science, work is in progress to figure out how best to implement these values, and this is a genuinely difficult process that will take time. But let’s not mistake important debates about which practices are best suited to implement our values for a general lack of progress.

  1. In case you’re interested in evidence for this claim, see e.g. here, here, and here for the depression literature.
  2. Adam Mastroianni wrote a great piece on problems of peer-review.

30 thoughts on “Antidotes to cynicism creep in academia”

  1. Pingback: Scientific publishers *not* adding value » Eiko Fried

  2. Pingback: The Risks We Take For Others – Living With Evidence

  3. gabriel

    the merits of open everything are undeniable. data and code should be open. but one of the problems today is plagiarism. this will weigh heavily on new academics, who face a risk-it-all bet when opening their data, vs established peers who will just get better visibility anyway. I don’t think anyone has proposed any solution, practically speaking.

    then removing tenure is throwing the baby out with the bathwater. tenure is an awesome mechanism to keep fields moving. the problem is that most academics closed themselves in their little circles of close peers (like you are already doing in this very text) and now the system is being abused. the problem is the abuse, not the system. it doesn’t have safeguards against abuse by the academics abusing it, because all the safeguards are for the academics against the institutions! and that’s a great thing that should be preserved. likewise, praising the new neoliberal, overly capitalistic system the Netherlands finds itself in now is detrimental to decades of labour conflict I won’t even go into, as it is all too obvious and not even veiled.

  4. Pingback: Thursday assorted links | Factuel News | News that is Facts

  5. Noémi Schuurman

    Thanks for writing these findings, thoughts, experiences and feelings down, Eiko. I think this nicely summarizes a lot of real problems we encounter daily in our work as academics.

    Dropping a line to emphasize one of your points “Teach”. I very much believe (no, I have no concrete evidence to back this) that the most impact any academic will have is through teaching and supervision – that is, it is a way to have A LOT of impact, through other people.

    Unfortunately, the value of teaching is in practice often underestimated and undervalued, perhaps in part because it is indirect impact and hard to quantify. But teaching is increasingly being valued more in academia in various places (for example here at Utrecht University). In any case, valuing teaching and role modeling more in academia could also be one way forward to remove some of the publish-or-perish pressure, next to efforts that focus on improving specific aspects of research practice.

  6. Bernard

    Thank you for your article.
    Maybe 2 effective solutions should also be highlighted.
    The PCI initiative (https://peercommunityin.org/): we review preprints and share our reviews online; preprints are then recommended (or not) to PCI-friendly journals or PCI journals (https://peercommunityin.org/pci-and-journals/). A PCI Psychology is in preparation.
    Collective resignation of an editorial board to create a new academic journal with a diamond/gold OA model (https://www.statnews.com/2024/02/01/scientific-publishing-neuroimage-editorial-board-resignation-imaging-neuroscience-open-access/)

  7. Anon_ECR

    I have to say I’m in two minds about this post. I like the first half because it nicely collates key problems in science and academia in one place. The second part moves to an antidote to the cynicism built up. But it reads mostly like a list of virtuous actions that you are personally doing and which we should be doing too. I don’t find that helpful at all. It’s like writing an article about climate change and how we’re pretty doomed, and to list personal acts of activism as an antidote to pessimism, and that we should watch our own CO2 output. Calls to action are great, but I think they should be contextualized as such to not provide false hope to the disillusioned. Ways to make systems better are not an antidote to cynicism about that system… Furthermore, I suspect many would lose and not gain energy from doing the things you suggest, because they require (exhausted) academics to confront the problems we were supposed to get an antidote for directly instead of keeping them at an arm’s length.

    I think if the goal was to give an antidote, there are much better routes:
    (i) A line of argument about how academia may still be the best game in town for curious and serious people. The thing about academia is that it purports to understand the world and make things better, and there are obvious efforts in this direction. So even if one is reasonably disheartened by academia, in the corporate world, for example, people usually aren’t interested in figuring out how things work at all. In many cases, the implicit goal outside of academia is rather to convince people internal and external to companies that the world works in ways that it expressly doesn’t (to sell things).
    (ii) You could have provided a critical evaluation of the first half of your article. For example, you point out retraction numbers, sharing a graph that shows a stark increase. But the numbers you show are absolute and don’t factor in the overall number of articles published annually. What are the trends in retraction proportions? If you look at PubMed or other sources, it seems that the absolute number of published research articles is outpacing the number of retractions (especially if you exclude 2023, which as per your graph is an anomalous year). So instead of using incomplete information to potentially make folks more rather than less worried, there’s a missed opportunity for a more substantive antidote to pessimism.

    1. Eiko (post author)

      Thanks for the feedback. It sounds like you’re saying you could have written a better blog post than this, with better arguments (e.g., there is no alternative to science, which is a great argument), fewer personal highlights, more criticism of some cited sources in part I, etc.

      I absolutely believe you — much can be improved. I would love to read a better version of this blog post, and I am sure others would too, so please leave a comment here when you’re done; I’d be happy to link to it in the piece!

      For me personally, a blog like this takes around 8-10 hours to write / check / clean-up / tag / thumbnail / share, and I just don’t have more resources to devote to a single blog, unfortunately.

    2. Anon_MCR

      Thanks Anon_ECR. The original blog post was fine but from the title, I was looking for exactly those two last points, which really do give me heart.

    3. Noémi

      I personally feel that the progress and change people are making in academia does provide an antidote to cynicism for me, to some extent. I feel this shows that academia/science is to some extent self-correcting, and that to some extent things are working as they should. There is room in academia for criticism, and to learn from it, even if sometimes that room seems small and change goes slowly. That is not something to take for granted; it is something to protect, and something that gives me hope.

      Your point (i) is also something I feel needs to be pointed out more, but it does not personally make me feel less cynical :p (that is, it seems close to “some places are even worse/have it much worse than you, don’t you feel better now?”).

  8. Boris Barbour

    Amen to nearly all of that.

    I think, in the end, we need not just to say nice things about the good science, but to make it pay, and also to make bad science not pay. It may sound a bit aggressive, but I think we should all make use of whatever influence we have – grant, hiring and promotion committees being obvious possibilities – to direct funds towards good science and scientists and away from those who produce bad science.

    It would be terrific if your analyses of specific papers were linked from PubPeer. It shouldn’t take more than a minute to post a link…

    1. Eiko (post author)

      Good call re PubPeer. Goes on the very long to-do list of “things to do that would only take a few minutes” ;).

  9. Matt Patton

    Thanks for writing about this Eiko. I’ve been mulling over how to address my own cynicism creep for a couple weeks.

    I can’t shake the feeling that the open science movement has a pretty narrow view of activism. Most open science discussion is just scientists talking to other scientists, as if no one else has a stake in science working well. But it’s safe to say non-scientists would prefer that health research be trustworthy, that their taxes not go to support fraud and waste, that scientists not mislead them with hype in the media, that scientists should adhere to agreed-upon standards of transparency, etc.

    I’m still not sure why the open science movement hasn’t done a better job mobilizing public support. Perhaps it is fear of politicization, or perhaps it is just that scientists are so busy under the current system. Maybe it’s elitism. Maybe it’s cynicism about public opinion. Whatever the reason, this situation has left non-scientists who are concerned about these problems with little we can do to help. I don’t know of any major non-profits educating the general public, connecting the dots between the problems in science and normal people’s lives.

    To me it seems especially frustrating because there’s so much agreement around key reforms. Reforms aren’t implemented because of collective action problems we know how to overcome in other labor sectors. I don’t know what the solution is, but I think it starts with broadening our imaginations about what effective open science activism can look like. We don’t really lack for solutions anymore. We lack the coordination to get them implemented.

    1. Eiko (post author)

      Hey Matt.

      Three thoughts. First, I think at least here in the NL things are happening. I don’t think the public is involved much, which isn’t great, but there are definitely many committees also outside of universities where OS debates are happening, with OS activists being involved.

      Second, of course not enough is happening; it is a somewhat insular discussion, but I doubt this is because folks love their ivory towers. It’s genuinely difficult. As a teacher and administrator and leader of a team and researcher and writer and blogger and programmer and science communicator and and and and, you know, all the stuff academics do in their daily lives, it’s difficult to also be an implementation scientist, a political organizer, someone who can easily mobilize politicians or the public to stand behind topics, etc. So I fully agree that some paths forward are clear, but I honestly just don’t know exactly what the next steps ought to be. We wrote the Elsevier piece. It’s outrageous what we describe. But I just don’t really know any Dutch politicians, and I really have no time left in my week to start some sort of extra initiative now trying to take down Elsevier. I think it’s super fair to counter that this sounds a little defensive: it honestly is, because I would LOVE to see shit getting done rather than just being talked about.

      And that leads to point three: perhaps good to step back, talk to implementation scientists, and figure out how time could be spent most effectively on actually getting reforms on the way, rather than creating a bunch of literature just narrating the need for reforms!

      1. Matt Patton

        Oh I completely understand. I should have been more specific that I think it’s a problem with the conversation in the open science movement in general. You more than do your part educating the public (including educating me). You’re the last person who should feel defensive about not doing enough.

        But I feel like that also speaks to the larger issue. The labor needs to be more equitably distributed. People like me need to be networking to do things like set up nonprofit advocacy orgs or researchers’ unions. I just think the conversation in the movement doesn’t pay enough attention to the practical business of organizing to achieve goals. I’ll give some more thought to what could facilitate that.

        Just my take from the outside. Speaking of public education, thanks for doing your three (!) talks today. I hope you’re taking some proper holiday time this summer.

        1. Eiko (post author)

          I definitely don’t do enough, looking at awesome colleagues like Anna van’t Veer who is SO INVOLVED in these issues. Big role model who devotes a lot more time than me to this. So good reasons to be defensive for sure ;) !! Thanks Matt!

    2. One thought

      One reason might be that there has been (and continues to be) a ‘holier than thou’ clique attitude within the open science movement, unfortunately. The ‘bropen-science’ term still applies.

  10. Mahmut Ruzi

    Very well thought out and written, on a vital issue. I am a chemist, and the issues are the same in that field. I think most researchers are aware of the problem but have very little to no incentive to do something about it, so most of us take the path of least resistance and keep publishing and reviewing papers of little significance.
    I have written a blog post about the issue and suggested some solutions similar to yours: publishing reviews, paying reviewers, and a somewhat radical one: getting rid of commercial as well as society-based publishing altogether and adopting an arXiv-like system.

    I would love to hear your thoughts.

    https://mahmutruzi.medium.com/addressing-and-resolving-critical-issues-in-academic-publishing-ad13f5f85c43

      1. Mahmut Ruzi

        Thanks for reading.

        Yes, the evidence you provided is much broader and more detailed.
        I will add your post to the reference list and revise my writing.

  11. Reihaneh

    You just stated my biggest fear in the first sentence, Eiko!
    I was lucky to have the chance to express that in the PSYNETS workshop, and you were really kind to give me confidence. However, I keep asking myself: what if my work does not add value? I suppose some people may think that is too much to worry about for a master’s-level project/assignment, but I disagree. Unfortunately, I think this fear has taken me to a stage where I just can’t make progress or move forward with my assignments/PhD projects in the way that I would like!

    1. Eiko (post author)

      One way to think of science is as the effort of building a cumulative system of knowledge together — a pyramid of sorts. All science relies on prior science, in many cases thousands of tiny, solid, reliable rocks of knowledge. So all we can do is try to make our units of science — papers, master’s theses, blog posts — as solid as we can. I don’t think solid means perfect, or having a sample size of 1 million, or having assessed all variables with perfect measurement. Something is solid if the INFERENCE follows from the presented EVIDENCE.

      I think n=1 studies can be solid, as long as authors report flaws and limitations honestly. So please don’t let concerns about perfection stop your work, no matter how small. But you should carefully spell out the limitations of your work for readers who may not know as much as you do. Does that make sense?

      1. Reihaneh

        Thanks a lot for your reply, Eiko. That makes perfect sense.
        “Something is solid if the INFERENCE follows from the presented EVIDENCE” -> I will put this on my desk to remind myself every day!
        I had a conversation with a lecturer a couple of weeks ago about an assignment. She asked, “How will you know whether your chosen statistical analysis has produced a productive analysis?” I said I would discuss why I want to use a specific method based on its relevance to my RQ and its underlying assumptions, and, once conducted, based on the robustness of the analysis. I think my answer was close to what she was looking for. However, it made me think: what is actually meant by a productive analysis? Can we ever know with certainty whether a chosen statistical analysis produced a PRODUCTIVE analysis?

        1. Eiko (post author)

          I don’t think like that, so I find the question hard to answer. We select statistical models because we have questions about the world, but we cannot ask the world, so we ask data. Statistical models help us do that.

          My thoughts on models and data are spelled out here in some detail.

          https://eiko-fried.com/on-theory/

  12. Simon Disillusioned

    Thanks for a great article, especially the data on retractions. I reported a paper for fraud, and the Head of Anthropology at my London uni then attempted to get me to withdraw the complaint. The UK Freedom of Speech in Higher Education 2023 is a useful tool imo.
    Ultimately it’s sad to say, but academia is neither well paid, nor high status, nor stress-free to work in. Pay peanuts, get monkeys. And the other issue in the humanities is silos: academics defending their own sub-discipline for fear of it being found irrelevant or meaningless on its own. They then employ as research assistants and PhD students people who parrot their academic bosses, with predictable consequences. Quality is low, low, low everywhere I look. The more you look at any humanities paper, the more problems you see. It’s a serious mess that the current wokerati are only worsening. At least in the private sector research has a measurable goal. In academia, that goal is to defend the current orthodoxy and maintain jobs for those who have floated to the top, like sc…m

    1. Eiko (post author)

      Sorry the cynicism creep got to you! I feel like that too sometimes, but being involved in a ton of initiatives, and seeing so many young people full of energy to change things, I really don’t think it’s an option to give up just yet.

  13. Adam

    “Things are terrible but you should still have blind optimism” sounds unwise. Optimism is stupidity if there is little chance things will change.

  14. a tired PhD student

    This article hits too close to home. Very well written! I struggled to overcome cynicism about the rigid structures within my department, which often stifled “innovation” (proper science) and collaboration. While I’ve decided to leave academia this summer, I’m hopeful that future changes can address these issues.

