By Kate Johnson – October 14, 2009
In just a few days Dr. Supachai Rerks-Ngarm, from Thailand’s Ministry of Public Health, and members of the U.S. military will present their HIV vaccine study to their peers at the AIDS Vaccine conference in Paris.
It won’t be their first presentation of these findings, but they probably wish it were.
Their first presentation, to the world’s media, was a bit of a circus that left many wishing the substance had matched the hype.
As a medical journalist I’ve seen my share of circus acts and collected a whole folder of abandoned stories to show for it.
They are half-finished articles about published or yet-to-be-published medical studies that I started writing and then discarded because things just didn’t seem to add up. These are not stories that I’ve set aside easily. I have labored over the data, interviewed the authors, queried the discrepancies, and torn my hair out. In the case of one study, which was already published in a medical journal, I even tried to take my doubts to the journal’s editor in chief, who had issued a glowing press release about the study. In the end, I abandoned all of these stories because I doubted the integrity of the science.
For me, such abandoned articles are time and money down the drain. More troubling, though, is the knowledge that other people, more scientifically savvy than I am, have turned a blind eye to this dishonesty. Not only is shoddy science making it to the podiums of scientific meetings and the pages of medical journals; it is also influencing future research and patient care.
I’ve called this phenomenon dishonest – a word I have chosen carefully and with some hesitation. Certainly not all shoddy science is dishonest. Some is just sloppy, clumsy, and inaccurate. But according to an unpublished analysis presented at the Sixth International Congress of Peer Review and Biomedical Publication, and reported in Nature Medicine, only 28% of medical paper retractions were for “honest error” and a whopping 43% were for unethical reasons.
In my last blog I talked about the pharmaceutical industry and how shoddy science from this arena could be considered smart marketing. Whether it’s ethical is another question.
But Big Pharma is not the only bad guy. As we’ve seen from the latest ghostwriting scandal, which I’ve blogged about here, and expanded upon here, independent researchers also play a role in dishonest scientific publishing.
In fact, they may even play a bigger role than the pharmaceutical industry, according to another unpublished analysis presented at the Peer Review conference.
“We absolutely should not let up on our scrutiny of industry,” Karen Woolley, one of the investigators, told Nature. “But why are we always pointing our finger over there? There’s an elephant in the room, and that’s the nonfinancial conflicts of interest in academia.”
“Academic data doesn’t get scrutinized at all,” commented Liz Wager, another co-investigator.
A recent study in the Journal of the American Medical Association uncovered bias, or the potential for bias, in almost 70% of more than 300 published drug or device trials. As one of the co-authors, David Moher, Ph.D., told me, most of the bias came from selective reporting of outcomes; in other words, telling only part of the story.
Dr. Moher and I discussed how the general public may not fully understand the independent researcher’s motive for this kind of dishonesty. Certainly the academic pressure to “publish or perish” is easy enough to understand. But the barriers to medical publishing may be less obvious.
I see these barriers firsthand in my capacity as a writing and editing consultant for researchers. Medical journals reject many submissions. Fearing rejection, researchers may be tempted to “improve” their papers by putting a spin on their findings. Another analysis presented at the Peer Review congress detected inappropriate spin in more than 40% of papers with statistically nonsignificant findings, Nature Medicine reported.
“There’s some very, very questionable behavior that goes on by people in research,” says Dr. Moher. Examples include “trying to hide certain parts of studies, trying to make things look good, or being so fuzzy that it’s impossible to figure out what’s going on.”
This type of dishonesty is hard to pinpoint. As my folder of unfinished articles shows, it takes time and a lot of hair-pulling to realize that things don’t add up. For the mainstream press, and even for hurried medical professionals, that realization may take even longer.
The recent news about the HIV vaccine is a good example. The U.S. military and Thailand’s Ministry of Public Health conducted the study and released their results to the lay press before running them through the peer-review process of a scientific conference or medical journal. What started as headline news about a 31% reduction in HIV transmission soon fizzled into a suggestion that the results may have been a statistical fluke.
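To see why a 31% reduction can still be shaky, it helps to run the numbers. The sketch below is a rough back-of-the-envelope calculation in Python: the infection counts are the widely reported modified intention-to-treat figures from press coverage (treat them as illustrative, not definitive), and the confidence interval uses a textbook log-relative-risk approximation, not the trial’s own analysis.

```python
# Back-of-the-envelope look at the Thai trial's "31%" headline.
# ASSUMPTION: these are the widely reported modified intention-to-treat
# counts from press coverage, used here purely for illustration.
import math
from scipy.stats import fisher_exact

vacc_inf, vacc_n = 51, 8197   # infections / participants, vaccine arm
plac_inf, plac_n = 74, 8198   # infections / participants, placebo arm

# Vaccine efficacy = 1 - relative risk of infection
rr = (vacc_inf / vacc_n) / (plac_inf / plac_n)
efficacy = 1 - rr

# Textbook 95% CI for the relative risk on the log scale
# (a standard approximation, not the trial's actual method)
se = math.sqrt(1/vacc_inf - 1/vacc_n + 1/plac_inf - 1/plac_n)
ci_low = 1 - math.exp(math.log(rr) + 1.96 * se)
ci_high = 1 - math.exp(math.log(rr) - 1.96 * se)

# Two-sided Fisher's exact test on the infected/uninfected 2x2 table
_, p = fisher_exact([[vacc_inf, vacc_n - vacc_inf],
                     [plac_inf, plac_n - plac_inf]])

print(f"Efficacy: {efficacy:.1%} (95% CI ~{ci_low:.1%} to {ci_high:.1%})")
print(f"Fisher's exact p-value: {p:.3f}")
```

Run as written, this gives an efficacy of about 31%, but the lower confidence bound sits barely above zero and the p-value hovers near 0.05: exactly the kind of borderline result the critics were describing.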
Medical writer and blogger Adam Jacobs was among the first to sound the alarm, along with health journalism professor and blogger Gary Schwitzer.
Soon after, Jon Cohen’s blog in Science voiced complaints from an anonymous HIV/AIDS expert: “The press conference was not a scholarly, rigorously honest presentation…it doesn’t meet the standards that have been set for other trials, and it doesn’t fully present the borderline results. It’s wrong.”
The investigators will have a second chance in Paris and we will all be watching.
It will be interesting to see if they change their spin.
Another interesting blog, Kate.
Some interesting issues here, which I would summarise as follows:
1. How much published research is misleading or even downright wrong?
2. How much of that is due to dishonesty and how much is due to incompetence?
3. Are there any differences between industry and academia in the answers to any of the above?
We don’t really know the answers to any of those questions with particularly good precision, but as far as I can tell, the answer to question 1 is “a worryingly large amount”, which seems to be backed up by some of the research you mention.
I suspect the answer to question 2 is that incompetence is the main reason. Even in the 43% of retractions (and let’s face it, retraction is a pretty extreme response, so you would expect the prevalence of flagrant dishonesty among retracted articles to be much higher than average) that were due to unethical reasons, I suspect a lot of that was a result of researchers simply being unaware of current ethical standards, rather than a deliberate attempt to subvert them, although I don’t have evidence to back that up.
As for question 3, I really have no idea, and I’m not aware of any good quality research that answers the question. The research by Woolley et al. shows that if a pharma industry paper is retracted, it is less likely to be for misconduct than if an academic paper is retracted, which suggests that pharma may be more honest than academia, although in the absence of denominator data it’s hard to be sure. The paper you describe that was recently published in JAMA, which you discussed with Dr Moher, was a good opportunity to look at this, as they collected data on industry sponsorship and on the quality of the publications, but sadly they didn’t report the association between the two. I’ve emailed one of the authors of that paper to see if I can find out more, but haven’t yet had a reply.
I’m working on a little research project myself at the moment that may, in some small way, shed some light on one aspect of the difference in quality between industry and academic papers. Watch this space.
But I think a conclusion that is perfectly safe to draw is that it is not OK to assume a paper is perfectly trustworthy simply because it wasn’t sponsored by industry.
Great points, Adam. As for question #2 and your suspicion that most bad research results from incompetence rather than dishonesty, I ponder this point in my work almost daily. Although dishonesty is a disturbing possibility, incompetence or ignorance may be equally disturbing. Half-truths and shades of dishonesty have become so standard in society – from breast implants to food additives to athletic enhancement – that people may not always realize when they have crossed the line into a lie.
Kate Johnson
Hi Kate,
Great blog. Our results do challenge perceptions. Editors and readers should not assume that a paper with declared medical writing and industry support is automatically ‘dodgy’. Similarly, editors and readers should not assume that a paper without any declared commercial associations is ‘ridgy didge’ (please excuse me for adding some Aussie vernacular to your blog). Industry is not all bad and academia is not all good. Our assumptions and our Manichean view of the world are keeping us from putting resources where they need to go. We seem to have an imbalance in our checks and balances. Why are most of our efforts to prevent and detect misconduct directed against 4% of the problem? Why isn’t non-commercially sponsored research subject to the same safeguards as commercially sponsored research, such as independent statistical analysis requirements, announced and unannounced audits, and thorough disclosure requirements? Even the recently released ICMJE Uniform Disclosure Form for Potential Conflicts of Interest barely probes into non-commercial conflicts of interest: just one question, and it comes last on the form. We realise that commercial interests are important, but when will we realise that non-commercial interests are just as important, if not more so?
P.S. If professional medical writers want to assure journal editors that they have nothing to hide, I recommend they ask their authors to complete and submit the medical writing checklist recently published in PLoS Medicine and also available on the EQUATOR website. Question 5 has real teeth, but if you have nothing to hide, it won’t present you with any problems.