A project that aimed to replicate the findings of influential cancer biology studies has finally been completed, eight years after it began. Its findings show that, like social science research, cancer research has a replication problem.
The goal of the Reproducibility Project: Cancer Biology was to duplicate experiments from 53 high-profile cancer studies published between 2010 and 2012. However, in two papers published Dec. 7 in eLife, the team reports that only a quarter of the experiments could be replicated.
The original plan, according to Tim Errington, director of research at the Center for Open Science in Virginia, which undertook the investigation, was to replicate 193 experiments from 53 papers. However, as detailed in one of the team's two articles published today, this was pared down to 50 experiments from 23 papers.
"Just trying to understand what was done and reported in the papers in order to do it again was really hard. We couldn't get access to the information," Errington said.
In all, the 50 experiments encompassed 112 binary "success or failure" outcomes that could potentially be replicated. However, according to the second study published today, Errington and his colleagues were able to duplicate only 51 of these effects, or 46%.
The investigations were all preclinical cancer biology studies conducted in vitro or in animals; no genomic or proteomic tests were included. They were chosen because they were all "high-impact" studies, published between 2010 and 2012, that had been widely read and heavily cited by other researchers.
Errington described the findings as "a bit eye-opening."
"The report tells us a lot about the culture and realities of the way cancer biology works, and it's not a flattering picture at all," said Jonathan Kimmelman, a bioethicist at McGill University in Montreal, who coauthored a commentary on the project exploring the ethical aspects of the findings.
It's disturbing if experiments that cannot be replicated are used to launch clinical trials or drug development initiatives, Kimmelman said. If it turns out that the science on which a treatment is based is not reliable, "it means that patients are needlessly exposed to drugs that are unsafe and that really don't have a shot at making an impact on cancer," he added.
At the same time, Kimmelman warns against misinterpreting the data as evidence that the present cancer research system is fundamentally broken.
Scientists must still assess whether a study's techniques are impartial and rigorous. And when the outcomes of the original experiments and their replications differ, it's an opportunity to learn why, and what the consequences are.