Editage Insights is attending the 2017 Peer Review Congress, and this is a report of some of the sessions from the second day of the conference. The studies presented by various groups during the first half of the day revolved around the quality of reporting in scientific publications. Read on to know more.
Hridey Manghwani, Manager, Corporate Communications, Editage, is attending the 2017 Peer Review Congress. This is Hridey's personal report of the second day of the conference.
After the keynote address by David Moher from Ottawa Hospital Research Institute, Day 2 of the Eighth International Congress on Peer Review and Scientific Publishing changed gears with sessions on the quality of reporting.
Robert Frank from the University of Ottawa presented his group’s study that analyzed the variables associated with false results in imaging research. Meta-analyses (MAs) are regarded as the closest proxy to the truth in imaging studies. Study-level variables (citation rate, publication order, and publication timing relative to STARD, the Standards for Reporting of Diagnostic Accuracy Studies) and journal-level variables (impact factor, STARD endorsement, and cited half-life) were examined for their association with the distance between primary results and summary estimates from MAs. After analyzing 98 MAs containing 1,458 primary studies, the group concluded that this distance was not smaller for studies published in high impact factor (IF) journals than for those published in lower IF journals. Their conclusions re-emphasized that higher IFs and citation rates are not indicative of better study quality.
Next up was Sarah Daisy Kosa from McMaster University and Toronto General Hospital Health Network. Her study examined discrepancies between trial data reported in publications and the corresponding entries in clinical trial registries for studies published in high-IF journals. Her research group searched PubMed for all randomized clinical trials (RCTs) published between 2012 and 2015 in the top 5 medicine journals (as per the 2014 IF published by Thomson Reuters). Across the 200 RCTs they studied, discrepancies were observed between the publications and the clinical trial registries. Their study clearly highlighted the need for uniform reporting of clinical data in publications and registries in order to preserve the quality and integrity of scientific research.
Cole Wayant from Oklahoma State University Center for Health Sciences then took the stage to discuss the quality of methods and reporting of systematic reviews (SRs) underpinning clinical practice guidelines. His study analyzed MAs and SRs against the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and AMSTAR (A MeaSurement Tool to Assess systematic Reviews) checklists. It noted that many items on these checklists were missing from the MAs and SRs, re-emphasizing the need for higher quality SRs: because SRs are considered level 1A evidence, flawed ones can misinform clinical decisions and thereby hinder evidence-based medicine.
The final and perhaps the most engaging presentation of the session was delivered by Kaveh Zakeri, a medical resident from the University of California, San Diego. His study analyzed the existence of optimism bias in contemporary National Clinical Trials Network phase III trials. Zakeri’s group identified in PubMed 185 published phase III randomized cooperative group clinical trials conducted between January 2007 and January 2017. By comparing the proposed and observed effect sizes, they calculated ratios of proposed to observed hazard ratios (HRs), which reflect how the anticipated therapy benefit compared with the benefit actually observed. Their results led them to conclude that most NCI (National Cancer Institute)-sponsored phase III trials could not demonstrate statistically significant benefits of new therapies. They suggested that although the magnitude of optimism bias had decreased compared with studies conducted between 1955 and 2006, clinical trial protocols still need better rationales for their proposed effect sizes. Many trials in highly specialized fields cannot be repeated, and although sample sizes can be small owing to the inability to recruit enough patients, most phase II and phase III trials provided no rationale at all for the proposed effect size.
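To illustrate with hypothetical numbers: if a protocol proposed an HR of 0.70 (a 30% reduction in the hazard of the event) but the trial observed an HR of 0.88 (a 12% reduction), the proposed-to-observed ratio of 0.70/0.88 ≈ 0.80 would indicate that the anticipated benefit was noticeably more optimistic than the result.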
We'll cover the second half of the day in the next part of this report, including the post-lunch sessions on scientific literature, trial registration, and funding/grant review.