The Reproducible Irreproducibility of the Scientific Process: NIH Plans for Increased Rigour


In a previous blog post, the issue of data replication was discussed using bexarotene as a potential treatment for Alzheimer’s disease as an example (see here). This issue is gaining increased attention, as evidenced by recent articles in The Economist entitled “Trouble in the Lab” and “How Science Goes Wrong” (here), and now Francis Collins and Lawrence Tabak, respectively the Director and Principal Deputy Director of the National Institutes of Health, offer their perspective on the issue (see here). In their article they emphasize that irreproducibility is only rarely due to deliberate fabrication or fiddling of the data. Rather, they criticise poor experimental design, publications that limit the space given to technical details (thereby making experiments harder to reproduce), and the pressure to publish, particularly in high-profile journals, which does not encourage scientists to try to replicate or disprove their own data. In addition, methodological descriptions are sometimes kept deliberately vague, or omit a key step or “secret sauce”, in order to maintain a competitive advantage for the originating lab.
As regards drug discovery, it is remarkable that critical, decision-making preclinical efficacy studies often pay scant regard to such basic principles of experimental design as blinding, randomization and power calculations (a simple example of the latter is sketched below); these elements are essential components of the more regulated – and therefore more rigorous – arena of clinical trial design. This issue was highlighted by researchers at Bayer (see here), who reported that only 20-25% of the published data associated with 67 projects, mainly in the oncology area, could be reproduced, whereas researchers at Amgen were able to reproduce the data in only 6 out of 55 (11%) key publications relating to hematology and cancer targets (see here). Moreover, although much is made of the so-called “Pharma bias” in publication, relating to perceived potential conflicts of interest of academic scientists or the preferential publication of positive clinical trials, there is also a considerable academic bias towards publishing positive rather than negative findings, which “creates a huge conflict of interest for academics, and a strong bias to write papers that support the hypotheses included in grant applications and prior publications” (see here). Furthermore, as C. Glenn Begley (who led the Amgen study and is now the Chief Scientific Officer of TetraLogic) said in an interview with Reuters: “The real problem is that scientists are reluctant to speak up about studies that won’t replicate because there is so much to lose. If I criticize you, and you review my next grant application, you might [take revenge]. That’s why people are afraid to say the reason they couldn’t replicate a study is that it was just plain wrong” (see here).
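To make the power-calculation point concrete, here is a minimal sketch of the kind of a priori sample-size estimate that such preclinical studies often omit. It is not taken from any of the studies discussed above; it assumes Python with the statsmodels library, and the effect size, significance level and power values are illustrative placeholders rather than recommendations.

```python
# Minimal sketch: a priori sample-size (power) calculation for a
# two-group preclinical efficacy study. Assumes Python with statsmodels;
# the effect size, alpha and power below are illustrative placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Cohen's d: the expected standardized difference between treated and
# control groups (a hypothetical value chosen for illustration).
effect_size = 0.8

# Solve for the number of animals needed per group to detect that effect
# with 80% power at a two-sided significance level of 0.05.
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,               # equal group sizes
    alternative="two-sided",
)
print(f"Animals required per group: {n_per_group:.0f}")
```

With these inputs the calculation returns roughly 26 animals per group; a study run with substantially fewer per group is, by construction, underpowered to detect an effect of that size, which is one route to the irreproducible findings described above.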
Figure: Consequences of data irreproducibility (from the Global Biological Standards Institute report The Case for Standards in Life Science Research: Seizing Opportunities at a Time of Critical Need).
Clearly, the lack of reproducibility not only damages scientific, institutional and journal reputations but also wastes a huge amount of time, effort and resources, and harms the public perception of scientific research in the life sciences (see Figure). So, what’s to be done about it? Well, the NIH recognize that part of the problem may well be a lack of training of scientists, resulting in poor experimental design, and accordingly they are introducing formal training for intramural scientists that could serve as a template for wider dissemination. In addition, the NIH will encourage more rigorous examination of grant applications and may require additional preclinical studies to support clinical trials that are based on only limited preclinical efficacy data. Efforts are also being made to encourage journals to devote more space to methodological details as well as to publish negative findings. However, perhaps the toughest nut to crack is the academic incentive system, which prizes publications in high-profile journals; a system that may well encourage rapid submission before systematic replication is carried out. Moreover, the number of publications in high-impact-factor journals is a convenient metric for university promotion committees but is not necessarily the best means of judging a scientist’s contribution. Still, the fact that these multiple, interrelated factors are being discussed openly, and that steps are being taken to reinforce the self-correcting mechanisms that underpin scientific progress, is in itself a major step forward.
