What is best for science does not always correspond to what is best for scientists. In academia, researchers are evaluated almost exclusively by the number of papers they publish and the journals in which they publish them. One token of success that helps researchers climb the academic career ladder is publishing in prestigious journals with a high impact factor (how meaningful this metric is has itself been questioned). However, publishing in these journals usually requires novel and exciting results that will significantly impact the field. Scientists are therefore incentivised to produce a steady stream of interesting results and to frame scientific discoveries as compelling stories. This leads to a problem, nicely framed by Chris Chambers, Professor of Cognitive Neuroscience at Cardiff University, in one of his presentations on reproducibility:
He asks: 'Which part of a research study should be beyond your control?'
Then he asks: 'Which part of a research study is most important for publishing in "top journals" and advancing your career?'
The answer to both questions is the same: the results. In other words, researchers are primarily rewarded for the one part of a study they should have no control over. One problem with putting the emphasis on results is publication bias: the outcome of a study determines whether it gets published. Of course, novel and exciting results very often stem from a well-developed research question and a sound methodology. The question is whether the study would have been published (and, if so, in the same journal) had that same methodology led to a negative result. Given the unpredictability of fundamental research, you could be an excellent scientist and still struggle to climb the academic career ladder if you happen to get negative results, or if your findings are not considered sufficiently novel or exciting.
Ultimately, the current incentive structure in academia is not compatible with what is best for science. It promotes the flawed idea that the quality of science is reflected in the novelty of its findings. Rewarding researchers based on results does not incentivise the reproducibility studies or control experiments that are at the core of good science. Researchers may also be discouraged from working on important preliminary studies that inform more exciting research questions, fearing that such work will not advance their careers. Last but not least, the pressure to produce exciting results may tempt researchers into poor research practices that make results seem more significant than they actually are, such as selective reporting, exaggerated conclusions, and HARKing (hypothesising after the results are known).
The Registered Reports format, co-founded by Chris Chambers, was developed to tackle publication bias and poor research practices by placing the emphasis on what researchers can control: the quality of a study's hypothesis and methodology. Registered Reports are a type of article in which the hypothesis, methods and analysis plan are pre-registered and peer-reviewed before the research is conducted. If the outcome of this first peer-review stage is positive, the journal offers an in-principle acceptance: the study is guaranteed to be published, provided the experiments conform to the proposed plan. Once the data have been collected, the manuscript is reviewed a second time and then published. While publication does hinge on the study being conducted according to the initial proposal, there is still room for flexibility and creativity. For example, the proposed analysis should take the form of a decision tree that covers different possible outcomes in the data, and researchers can also submit 'exploratory analyses' that were not included in the pre-registration. Since its launch in 2013, the Registered Reports format has been adopted by 107 journals (alongside traditional publishing formats). You can find a list of participating journals, updates and a comprehensive FAQ on the Center for Open Science's website.
Registered Reports go a long way towards fixing these incentive problems. Pre-registration forces researchers to think carefully about the experimental design and prevents biases such as re-formulating hypotheses after the data have been acquired. Peer review at the first stage gives researchers feedback early on, when the design can still be changed, and ensures that resources are not wasted on a flawed methodology. The guarantee of publication virtually abolishes publication bias and removes the pressure to publish exciting results; the model even incentivises reproducibility studies, which are by definition not novel. Given the emphasis on developing a sound methodology with good statistical power, writing a Registered Report might be a particularly valuable experience for students who have just started their PhD. The format is also a good way for early-career researchers to show that they care about transparency and reproducibility.