At first I was planning to write two separate posts, one on each of these issues. Reproducibility and Open Access were extensively discussed during the #AAASmtg, and even if I wanted to, it would be impossible to share everything with you. And although they were discussed in entirely separate sessions, and they appear to be two distinct issues, I can’t help seeing them as two big problems caused mainly by our system.
Yes, reproducibility is a real problem. According to a Nature survey, more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own. An alarming number of published articles have been retracted due to reproducibility problems. Not only can experiments give different results when repeated, but sometimes the same data are interpreted differently by different researchers. The attempt to replicate key cancer studies in the “Reproducibility Project: Cancer Biology” is raising more questions than answers.
Yes, science should be open and accessible to all. Most scientific research is funded by federal grants, which are supported largely by our taxes. It is only logical that the results be open to the public that is actually paying for them. Open Access brings more transparency to research and can also stimulate curiosity and interest – bringing more attention to science! And openness applies not only to published articles but also to Open Data, which other researchers can access, analyze, and build upon collaboratively.
But both reproducibility and open access fall under the problem of how our system works.
Scientists are judged mainly by the number of publications they produce and the impact factor of the journals where they publish. One of the main requirements for publication of a scientific article is NOVELTY. Neither replication studies nor negative results are encouraged for publication. Even though I completely agree with Jessica Polka, who said at the #AAASmtg that “We need a culture where people read papers and not the name of the journal”, unfortunately both funding and hiring committees still care about journal titles and metrics when judging scientists’ achievements.
The “publish or perish” culture not only incentivizes publishing unreliable data but also decreases the quality of science. Tight funding and increasing competition may encourage falsehood and misconduct in academia. Probably not deliberately – but who has time (and money) to repeat that experiment with those two outliers? A lot of cherry-picking and p-hacking can easily be performed by the analyst. If you don’t believe me, you should try FiveThirtyEight’s interactive example of how to Hack Your Way To Scientific Glory.
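To see why p-hacking works so reliably, here is a minimal simulation sketch (my own illustration, not from the meeting): if an analyst measures many unrelated outcomes and reports whichever one crosses p &lt; 0.05, pure noise will produce a “significant” finding most of the time. The function names and parameters below are hypothetical; the test is a simple two-sample z-test on Gaussian noise with known variance.

```python
import math
import random

random.seed(42)

def two_sample_z_p(n: int) -> float:
    """Two-sided p-value comparing two pure-noise samples (mean 0, std 1)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / math.sqrt(2 / n)          # std error of the difference in means
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

def hacked_experiment(n_outcomes: int, n: int = 30) -> bool:
    """Test many unrelated outcomes; declare 'success' if ANY p < 0.05."""
    return any(two_sample_z_p(n) < 0.05 for _ in range(n_outcomes))

trials = 2000
false_positive_rate = sum(hacked_experiment(20) for _ in range(trials)) / trials
print(f"Chance of a 'significant' finding from pure noise: {false_positive_rate:.0%}")
# Theory: with 20 independent outcomes, roughly 1 - 0.95**20 ≈ 64% of
# experiments yield at least one p < 0.05 even though no real effect exists.
```

This is exactly the trap the FiveThirtyEight interactive demonstrates: the more analytic choices you let yourself make after seeing the data, the more the nominal 5% false-positive rate inflates.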
Landing an academic job, getting funding, and publishing science is a tough business! As professor Sydney Brenner points out in this interview:
Even God wouldn’t get a grant today because somebody on the committee would say, oh those were very interesting experiments (creating the universe), but they’ve never been repeated. And then someone else would say, yes and he did it a long time ago, what’s he done recently? And a third would say, to top it all, he published it all in an un-refereed journal (The Bible).
Who’s to blame? Open Access defenders blame publishers. Publishers blame academics. Academics blame funding agencies. Potential solutions? Besides a cultural change in the system, where science would matter more than metrics, reproducibility and negative results should be supported and encouraged. Some publishers are contributing to this: Nature published a manifesto for reproducible science, and Elsevier is developing a new article type especially for replication studies. A new kind of paper combining the flexibility of basic research with the rigor of clinical trials was recently proposed by some researchers. Data sharing and the recent push to use preprints in the biological sciences can also help, as open data and research drafts invite feedback from other researchers and let replication problems surface early.
The scientific world is paying more attention to the quality of research, replication, and data analysis. Change seems to be coming. Hopefully funders, publishers, and hiring committees will change along with it.