A blog on statistics, methods, and open science. Understanding 20% of statistics will improve 80% of your inferences.

Friday, August 4, 2017

Towards a more collaborative science with StudySwap

The replication crisis is over. Sure, not everyone has gotten the memo (either about it having started, or about it having ended) but the majority of scientists agree that there were (slightly) too many findings from the past that cannot easily be replicated. The underlying reasons are clear: publication bias, flexibility in the data analysis, low power, and not enough rewards for replication studies. The solutions are also clear: registered reports, sample size justification, better statistics training, and publishing and funding replication research.

So, researchers optimistic about other things we can improve in addition to reproducibility are already looking forward. Since the beginning of 2017 we have entered the theory crisis, which makes it, among other things, very difficult to falsify theories. Young scholars are already getting enthusiastic about the upcoming measurement crisis, where we finally come to grips with a largely ignored issue concerning our measurement tools.

But here, I want to focus on one of the greatest challenges I think our science will face: The need to collaborate. Because collaboration is such a tricky issue, I project it will take us the most time of all crises to solve – but I also expect we will be rewarded by a Golden Age when researchers figure out how to most effectively coordinate our collective resources.

Figure 1. List of crises in psychology, taken from a slide from an introduction to psychology lecture in 2076. Yes, we are still using PowerPoint in 2076.

However, some precocious individuals are trying to prove me wrong by showing collaboration is not just possible, but easy. Randy McCarthy and Chris Chartier have started StudySwap: a website where you can advertise ‘haves’ and ‘needs’ to indicate that you can collect data for others, or that you are looking for others to collect data for you.

At my department, we sometimes ‘StudySwap’ among colleagues. It’s difficult to get people to the lab for a short 15-minute study for which participants earn 3 euro, so where possible we combine studies into longer sets, which makes a trip to the lab financially more attractive for participants. StudySwap broadens the scope of this swapping. If you have a small participant pool, you can get more participants at another university. If you are looking for special populations (e.g., people from different cultures), you can post a need. But as a teacher, I can also imagine posting several ‘haves’ for our research practicum next year, where 100 students need to collect data in small groups, and we could use a replication study from another lab as the topic for some groups. Or, it may be beneficial to find studies that are “ready to go” if you have a student who needs to complete a study during a fixed period of time (e.g., a semester, an academic year, etc.).

Now, Randy and Chris are taking StudySwap in exciting new directions. They are coordinating a Nexus (similar to a special issue, but open indefinitely) in the journal Collabra about Collections2 – crowd-sourced research projects where groups of researchers, or collections of researchers, collect data that will be analyzed by grouping all data together (such as RRRs, the ManyLabs projects, or the Pipeline Projects). This approach of designing (sets of) studies that will be aggregated and synthesized is known as a prospective meta-analysis. When pre-registered, it is the absolute state-of-the-art of doing science. The Nexus in Collabra will highlight some exciting ways in which such prospective meta-analyses can be designed, such as collecting conceptual replications, or examining different outcome measures or populations. Another example I could see happening is Collections2 that focus explicitly on sampling both individuals and stimuli from a larger population.

The nice thing about Collections2 in the Nexus special issue in Collabra is that submissions can be Registered Reports, so any accepted project that is successfully executed will lead to a publication. Registered Reports help to emphasize the proposed hypothesis and methods (as opposed to the observed results) and will likely provide an important incentive for recruiting contributing labs. Follow StudySwap (@Study_Swap) and Collabra (@CollabraOA) on Twitter for official announcements about how you can get involved with the upcoming Nexus. Although the Nexus is not accepting proposals quite yet, it is not too soon to start planning a potential crowd-sourced project.

In my personal experience, joining in on a collaborative research project (in my case, the RP:P) was perhaps one of the most educational experiences I had as a young scholar. It is worth the time just for how much you can learn, but obviously, it is very nice that your time and effort are also rewarded with a publication.

I couldn’t be more excited about what Randy and Chris are working on with StudySwap. This is what having a vision looks like. They have identified one of the major limitations of psychological science – funding individuals to perform research lines in relative isolation – and are trying to make psychological science better. I will be joining them by posting haves and responding to needs, if only to try to prove my own prediction wrong that we will enter a Collaboration Crisis in 2036. If we look at fields around us that face similar difficulties in collecting high quality data (e.g., medicine, physics) then we know collaboration on a larger scale will need to happen. Recent successful collaborative projects such as RP:P and ManyLabs show it is feasible to work together on replications. If we figure out how to collaborate on novel lines of research, I’m confident psychology will enter a golden age where important insights are generated with a reliability and speed that will impress the general public, greatly enhancing the reputation of psychological science.


  1. 2016-2025: The pre-registration crisis, where some researchers decide that providing a link to pre-registrations in the paper, so the reader can actually check this information, is just messing with the narrative and flow of the paper. "Pre-registration" (hiding the pre-registration information from the reader) is born. The following paper by Bem is cited for reasons why:


    2030-2035: The "pre-registration" crisis, where some researchers decide to investigate whether "pre-registered" studies (pre-registrations that have not been made available to the reader but only to the reviewers) actually adhere to the protocol.

    They can then cite the following paper, which already made clear that reviewer-only "pre-registration" is probably not a very good idea for science:


    "This survey revealed that only one-third of the peer reviewers surveyed examined registered trial information and reported any discrepancies to journal editors."

  2. "Now, Randy and Chris are taking StudySwap in exciting new directions. They are coordinating a Nexus (similar to a special issue, but open indefinitely) in the journal Collabra about Collections2 – crowd-sourced research projects where groups of researchers, or collections of researchers, collect data that will be analyzed by grouping all data together (such as RRRs, the ManyLabs projects, or the Pipeline Projects)."

    I like that you use the term "collaborative" research, rather than the term "crowdsourced" research used on McCarthy's blog.

    One definition of "crowdsourcing" (https://www.merriam-webster.com/dictionary/crowdsourcing) reads as follows:

    "the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers"

    In my understanding of crowdsourcing, the latter part is most important: it does not rely on traditional employees or suppliers, and it involves the internet community.

    Since "crowdsourced" research, as I understand it and as it is discussed on this blog and McCarthy's, simply involves collaboration among researchers in "official" and "traditional" positions and does not involve the internet community or the general public at all, I would argue the term "crowdsourced" research is suboptimal at best.