Reflections on APS 18: open science, transparency, and inclusion

APS 2018, the conference of the Association for Psychological Science, ended a few days ago. Although this was my fifth APS in a row, it was a very different experience than in previous years. So I wanted to write down a few thoughts, with a focus on how we can keep improving scientific quality, transparency, open science practices, and inclusiveness in the years to come.

Introduction

In previous years, APS always felt like flying out to a conference where I knew neither the organization nor most of the researchers. I paid the membership fee for purely economic reasons: the reduction in the conference fee was larger than the membership fee itself. I never quite understood how talks or posters were selected, didn’t know the “who is who” of APS, and was unaware of most science-political topics that were being discussed [1]. This is partly because I wasn’t that active on social media, partly because I was at a different career stage, but also, let’s face it, because the A in APS kind of stands for American, and folks from outside the US can have a harder time grasping the internal workings of APS.

This time, things felt different, because I had a much closer connection to both the organization and the researchers at APS, for several reasons: I had the honor of being awarded the APS Rising Star award a few months back; I have published in several APS journals recently, such as Perspectives and Clinical Psychological Science; and Jessica Flake and I published our recent piece “Measurement Matters” in the APS Observer. In addition, I was much more up to date on science-political developments, largely due to being more active on social media than I used to be, and due to a recent blog post I wrote on the self-citation practices of an APS editor. I also met numerous APS editors and reviewers with whom I had been in contact over the last year, and talked to folks working for the APS Observer.

All of a sudden, and in contrast to prior years, APS has turned into a real organization with familiar faces. People I know and value contribute to APS, and strive to make it better every year.

Another difference this year was that I wasn’t there primarily to talk about my own work, as I had been in previous years. Instead, I organized a symposium together with Jessica Flake in which we highlighted general and specific measurement problems in psychology (slides); put together and chaired a symposium for four early career researchers who presented their work on using ecological momentary assessment data to help improve the lives of patients with mental health problems (slides); and was part of an invited panel organized by Jennifer Tackett that discussed open science issues in Clinical Psychology, together with the other panelists Bethany Teachman, Ann Kring, and Aidan Wright, and the discussant Richard Lucas.

It was awesome, but it also made me think about APS as an actual organization, one in which people I know play an important role. Which means it is time to share some critical thoughts, with the goal of making APS better next year.

Reward structure and open science

The reward structure in science is broken [2], and this is responsible for many of the problems we are facing in psychology. We focus on sexy over rigorous science, on quantity over quality of publications, and on awarding honors to specific individuals even though research is always a collaborative effort. Such a system incentivizes questionable research practices such as p-hacking, rewards cheating, and fosters a competitive rather than a collaborative environment.

Regarding the award structure, I was incredibly humbled to have received a recent award at APS, but it felt odd to accept an individual achievement award for work I would never have been able to pull off alone. Tim van der Zee summarized related issues last week.

Let’s step back for a second and look at the larger picture: psychology is moving forward, and much more has happened in the last 5 years than I had ever anticipated. We have registered reports, the Psychological Science Accelerator, new journals such as Collabra Psychology focusing on rigorous and open science over sexy findings [3], the Loss of Confidence project [4], and many other recent open science developments. If we want to make further progress, we need to reward open science practices and initiatives, and change the incentive system.

APS also featured important open science topics much more prominently than in previous years. For instance, Katie Corker gave a workshop on reproducibility.

And Simine Vazire, Penny Pexman, Stephen Lindsay, and Leif Nelson talked about publishing trends, journal policies, and other efforts to promote open science.

But there is much more to be done. So, dear APS: can we modify the reward system more than we have so far? With poster awards next year that reward open science practices, or with Rising Stars of open science? Researchers who conduct more rigorous work are at a disadvantage in a climate where only the number of papers and sexy results count: symposia on trustworthy null findings, and talks on preregistered studies and large-scale collaborations, can offer alternative ways forward. In addition, we could implement visual cues in the conference program for certain open science practices such as preregistration. This would help attendees navigate the program and select the talks they want to see.

Transparency

APS is a large organization, and I have been paying membership fees for several years now. In return, I would like to learn more about the organization and its processes. Specifically related to the conference, I would like to understand how posters, talks, and award recipients are selected; what percentage of researchers is successful in each of these categories; and what the demographics of these successful researchers are.

To put it differently: what checks and balances are in place to ensure that women, minorities, and other folks who had it much harder than me, and face(d) many more obstacles than I did, have a fair chance of having their symposia accepted and of receiving awards? How can we improve these checks and balances to be more inclusive? What is being done currently?

Again, I don’t mean to say nothing is happening: I know folks on important APS committees who care about inclusion and transparency, and I am sure this is being talked about and worked on. Thanks for putting in the hard work! But it would be great if we could learn more about what is happening, what has improved and what remains to be improved, and how APS members can help with this transition.

US centrism

The US is a crucial country for advancing psychological science, and APS arose in the US, so it’s not surprising that there is a US focus at APS. But this focus was very visible this time around (most likely because I started paying more attention to it), and I wonder what we can do to be more inclusive.

Take, for example, this year’s class of Rising Star awardees. As Hans IJzerman correctly pointed out, the list is heavily skewed towards researchers at US institutions.

Not all awardees are North American, but most work at US universities: about 80% of the awards went to researchers in the US and Canada. The remaining awards went to the UK (6), the Netherlands (5), Australia (3), Belgium (2), Germany (2), Hong Kong (2), Korea (1), Israel (1), and Brazil (1) [5]. This means that roughly 90% of the awards went to countries in which English is the native language.
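For the curious, here is a quick back-of-the-envelope check of these percentages, as a minimal sketch in Python. It takes the ~80% US/Canada figure and the per-country counts in footnote [5] at face value, and infers the total number of awardees from them; that inferred total (~115) is an assumption, not a number from the official list:

    # Back-of-the-envelope check of the award tallies above.
    # Assumption: the ~80% US/Canada figure is accurate, so the total
    # number of awardees is inferred, not taken from the official list.
    non_us_canada = {
        "UK": 6, "Netherlands": 5, "Australia": 3, "Belgium": 2,
        "Germany": 2, "Hong Kong": 2, "Korea": 1, "Israel": 1, "Brazil": 1,
    }
    rest = sum(non_us_canada.values())  # 23 awards outside the US and Canada
    total = round(rest / 0.20)          # ~115 awardees, if those 23 are ~20%
    english_native = (total - rest) + non_us_canada["UK"] + non_us_canada["Australia"]
    print(f"total = {total}, English-native share = {english_native / total:.0%}")
    # -> total = 115, English-native share = 88%, i.e. "about 90%"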

This has structural reasons: one needs to be a fellow to be nominated, needs to know how the nomination process works, and needs a nomination through a long-term APS member. But these structural reasons are good news in a sense, because we can change the structure to make the system more inclusive; that is, we have levers to change this and move forward. And note that this is not just about inclusiveness: APS, for selfish reasons, should make sure to feature the best research and researchers, to help improve psychological science. Many of these are North American, of course, but many aren’t.

The US focus is also obvious in many other areas, for instance in APS paper and poster abstracts: samples are usually described as “n=100 participants”, unless they are non-US samples, in which case a country is mentioned (“n=100 French participants”). And discussions about grants usually focus on US grants, which makes sense if we only talk about the grants of one specific country (there would be much less benefit in discussing solely Austrian grants at APS), but maybe we can broaden the debate across all areas.

If you add these points together, you see the pattern, and I wanted to use this blog post to raise a bit more awareness of inclusion. Researchers from outside the US can feel quite lost at APS, and if privileged white male Eiko, who lived in the US for two years during grad school, feels lost, I’m sure there are others who feel lost, too.

I’m sure there are folks at APS working hard on inclusion, but it seems fair to call this out and ask for a bit more transparency about the process. Please let us know how we can support you, and what we can do to foster inclusion.

  [1] i.e., science drama
  [2] Along with scientific publishing
  [3] COI: I am on the editorial board
  [4] Shoutout to Tal Yarkoni, Christopher Chabris, and Julia Rohrer
  [5] I counted once, after 10 hours on a plane, so if someone wants to double-check / cross-validate, I’d be happy for a recount

8 thoughts on “Reflections on APS 18: open science, transparency, and inclusion”

  1. Pingback: Summary of my academic 2018 - Eiko Fried

  2. Pingback: Looking back at 2018 - Eiko Fried

  3. Pingback: APS 2018: Collection of all network presentations | Psych Networks

  4. Anonymous

    “Psychology is moving forward, and much more has happened in the last 5 years than I had ever anticipated. We have registered reports, the Psychological Science Accelerator, new journals such as Collabra Psychology focusing on rigorous and open science over sexy findings, the Loss of Confidence project, and many other recent open science developments.”

    Moving forward, or sticking a different label on some stuff and pretending it’s an improvement, and/or making the same mistakes a second time?

    Registered Reports: if I understood things correctly, journals can keep the pre-registration information hidden from the reader, like it’s no big deal and like this sort of stuff has not caused any problems in the past (cf. https://www.psychologicalscience.org/observer/preregistration-becoming-the-norm-in-psychological-science#comment-8352965)…

    Psychological Science Accelerator: if I understood things correctly, a few people will get way too much unnecessary power in deciding which (replication) research deserves massive attention and resources, like it’s no big deal and like that has not caused any problems in the past (cf. http://andrewgelman.com/2018/05/08/what-killed-alchemy/#comment-728154)…

    Loss of Confidence project: if I understood things correctly, researchers’ “opinion” and “word” about (their own) findings and/or conclusions are somehow being valued (instead of presenting actual evidence like replications or file-drawered studies?), like it’s no big deal and like that sort of stuff has not caused any problems in the past…

    I worry that a lot of the recent “improvements” may not really be improvements and/or are possibly way too uncritically endorsed by a lot of (open science) folks without really thinking things through. It all reminds me of that quote: “The definition of insanity is doing the same thing over and over again, but expecting different results.”

    Just because something has been given the label “open science”, and just because certain “open science” folks back it, doesn’t mean these “improvements”, or the people who suggest them, are exempt from critical thought. I am getting increasingly worried that “the old boys’ network” is being replaced by some “open science network” with the same sort of (problematic) group dynamics at play.

    1. Eiko (post author)

      To make sure I understand your argument correctly: None of the very recently implemented ideas and best practices are perfect, hence we should do nothing instead? If that’s the argument, I disagree. If that’s not the argument, clarification would help.

      Further, you seem to have problems with the old system, but also with recent implementations of best practices. What’s your way forward? Doing nothing is unacceptable for me.

      1. Anonymous

        1) “None of the very recently implemented ideas and best practices are perfect, hence we should do nothing instead?”

        I did not say to do nothing. I just said that I think lots of recently proposed “improvements” are perhaps not really improving anything and/or are making the same mistakes again. Perhaps I could even add that I think they could be making things worse. I am just puzzled by the fact that these “improvements” have been thought out the way they have, and even more puzzled by the fact that I seem to hear very little discussion about these possibly problematic issues…

        Concerning “improvements” and them possibly “not being perfect”: you can’t know for sure how things develop over time, but I reason it should not be(come) an excuse to implement sub-optimal stuff under the guise of “well, it’s not perfect but…” and “well, we’ll just have to monitor how this improvement will influence things”. That’s a brilliant way to bypass any criticism before implementation, and would be silly and non-scientific.

        If I am not mistaken, there is a nice statement that I feel should be thought about way more in psychological science: “Primum non nocere”, which translates as “first, do no harm” (https://en.wikipedia.org/wiki/Primum_non_nocere).

        2) “What’s your way forward? Doing nothing is unacceptable for me.”

        Ehm, concerning:

        “Registered Reports”: I reason they should, of course, make the pre-registration available to the reader via a simple link in the paper, and make that a mandatory part of it being a “Registered Report” (that this wasn’t, and possibly still isn’t, the case is incomprehensible to me).

        I reason it would be sub-optimal to have the pre-registration available in some online thingy without an explicit link in the paper. Also, the attention and praise for, and role of, stage 1 peer review should be minimized. Both of these points will help make a possible transition to something other than the current peer-review/editor/journal system (which is probably the cause of many, if not all, of science’s problems) much more likely.

        “Psychological Science Accelerator”: I love collaboration, but I don’t like group processes and creating a bunch of labs that function as some sort of research assistants. For a different possible collaborative format, I described my best idea here: http://andrewgelman.com/2017/12/17/stranger-than-fiction/#comment-628652

        “Loss of Confidence project”: post your non-published replications and/or “failed” studies in some journal of your choosing, and/or on PsyArXiv. There, they can be cited and will be indexed by Google Scholar, so they “count” for your h-index and all that good stuff :) If you want to disclose methodological details not presented in the original work, use “PsychDisclosure.org” (https://psychdisclosure.org/) or “CurateScience” (http://curatescience.org/).

        1. Eiko (post author)

          I disagree with some of your criticism, and agree with some of the ideas you raise. Frankly, I also like pushback for the sake of pushback: we need criticism to vet new ideas.

          Overall, many points you raise are consistent with moving forward, and it’s just about the details. Make sure to raise your voice in the debate, and let people know which details you would like to see handled differently.

          Goals are important in this context, and for this reason your criticism of the Loss of Confidence project doesn’t really apply. The researchers’ main goal is “to destigmatize declaring a loss of confidence in one’s own research finding within the field of psychology” (from their website). The Loss of Confidence project is complementary to your suggestion to publish non-replications, not an alternative. And I am sure that Tal, Christopher, and Julia would agree with me/you. So some of the points you raise are not obstacles at all, and there is more agreement than you anticipated.

    2. Anonymous

      “Moving forward, or sticking a different label on some stuff and pretending it’s an improvement, and/or making the same mistakes a second time?”

      In light of (the gist of) this quote, it could be interesting, and/or amusing, to mention a few sources I recently came across:

      1) The (gist of the) idea of “Registered Reports” may have been thought of, written down, and perhaps even executed prior to the recent “replication crisis”.

      See this tweet by Bishop that mentions a book by M. J. Mahoney called “Scientist as Subject” (1976) that basically describes what is now being called “Registered Reports”: https://twitter.com/deevybee/status/1061651017228603393

      The (gist of the) idea may actually also have been executed (starting in 1997), and even evaluated, prior to the “replication crisis”. See: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(15)01131-9/fulltext

      2) The (gist of the) idea of “Open Practices Badges” may have been thought of, and even executed, in 2009, prior to the recent “replication crisis”.

      See R. D. Peng, “Reproducible research and Biostatistics” (2009): https://www.ncbi.nlm.nih.gov/pubmed/19535325/

      And see Rowhani-Farid & Barnett (2018) for an evaluation of these “reproducibility badges”: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5843843/

