
Vioxx ghost writers

You can't necessarily trust the conclusions provided by the author, no matter how qualified (or in this case 'involved') they appear to be. The design and results should stand on their own.

Linda

But the design and the data are submitted with the conclusion and are (supposed to be) checked by the FDA. Any thoughts on that? Any new drug research is submitted with all this info and all the data. The FDA need not depend on the conclusion of an author. So, unless data are altered or omitted, the FDA should be catching stuff like this. Am I missing something?
 
But the design and the data are submitted with the conclusion and are (supposed to be) checked by the FDA. Any thoughts on that? Any new drug research is submitted with all this info and all the data. The FDA need not depend on the conclusion of an author. So, unless data are altered or omitted, the FDA should be catching stuff like this. Am I missing something?

I'm not sure I understand what you are getting at. The FDA reviews the data and forms their own conclusions. While this is an example of more overt bias, it is generally recognized that all conclusions have the potential to reflect the biases of their authors.

Linda
 
I'm not sure I understand what you are getting at. The FDA reviews the data and forms their own conclusions. While this is an example of more overt bias, it is generally recognized that all conclusions have the potential to reflect the biases of their authors.

OK, I see your point. Without seeing the data myself I cannot really say, but from the link posted above, it appears there was a statistically significant difference in the number of deaths in the Vioxx-treated group vs. the placebo group. How does that just fly by the FDA auditor (regardless of the conclusion in the report)?

I apologize if I am misunderstanding here, but I work in the industry, formerly as a QA auditor, and a statistically significant increase in deaths is something that's hard to miss or hide. Even when the Sponsor explains away the deaths as not drug-related, this has to cause some concern, perhaps warranting a repeat study. Especially a tripling of the number of deaths. (I'd also like to know if this increase in deaths was reflected in the animal data as well.)
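
Just to make that concrete, here is a rough sketch of the kind of check I mean, with made-up counts since I don't have the actual trial numbers in front of me:

```python
# Rough illustration only: testing whether a roughly threefold difference in
# deaths between a treated group and a placebo group is statistically
# significant, using Fisher's exact test on a 2x2 table.
# The counts below are hypothetical, not the actual Vioxx trial data.
from scipy.stats import fisher_exact

deaths_drug, n_drug = 21, 1000        # hypothetical: 21 deaths among 1000 treated patients
deaths_placebo, n_placebo = 7, 1000   # hypothetical: 7 deaths among 1000 placebo patients

table = [
    [deaths_drug, n_drug - deaths_drug],
    [deaths_placebo, n_placebo - deaths_placebo],
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio ~ {odds_ratio:.2f}, p = {p_value:.4f}")
# With these made-up counts the threefold difference comes out significant at
# the usual 0.05 level, which is exactly the kind of signal an auditor expects
# to see flagged rather than explained away.
```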

Is this a case of the FDA being biased in favor of the report because someone really really wanted this drug on the market? Some sort of noble bias, such as a deep belief and desire that this drug would improve lives, or something more sinister (follow the money)?

Sorry for being less than perfectly coherent. It's been a long day.
 
I'd also add this is nothing new and many of us in the medical profession are well aware of it. Though I cannot say everyone is. Quite often drug makers do the research then hire respected experts in the field to present the research at conferences as if it was their own. Been going on a very long time.
 
I'd also add this is nothing new and many of us in the medical profession are well aware of it. Though I cannot say everyone is. Quite often drug makers do the research then hire respected experts in the field to present the research at conferences as if it was their own. Been going on a very long time.

Do you view this as ethical? The link provided by fls led me to believe that, even though it is commonplace, JAMA does not view it as honest.

"Context Authorship in biomedical publication provides recognition and establishes accountability and responsibility. Recent litigation related to rofecoxib provided a unique opportunity to examine guest authorship and ghostwriting, practices that have been suspected in biomedical publication but for which there is little documentation."
and this
"Documents were found demonstrating that medical publishing companies provided near complete drafts of review manuscripts to authors for editing, in addition to managing submissions and revisions. For instance, in preparing one manuscript, representatives from Scientific Therapeutics Information indicate in a publications status report that the first draft was sent to Merck and the company was awaiting comments, but an author needed to be invited."

Living in Winston-Salem, NC, I am familiar with doctors getting paid to say what is necessary to "pay the bills."
 
OK, I see your point. Without seeing the data myself I cannot really say, but from the link posted above, it appears there was a statistically significant difference in the number of deaths in the Vioxx-treated group vs. the placebo group. How does that just fly by the FDA auditor (regardless of the conclusion in the report)?

I apologize if I am misunderstanding here, but I work in the industry, formerly as a QA auditor, and a statistically significant increase in deaths is something that's hard to miss or hide. Even when the Sponsor explains away the deaths as not drug-related, this has to cause some concern, perhaps warranting a repeat study. Especially a tripling of the number of deaths. (I'd also like to know if this increase in deaths was reflected in the animal data as well.)

Is this a case of the FDA being biased in favor of the report because someone really really wanted this drug on the market? Some sort of noble bias, such as a deep belief and desire that this drug would improve lives, or something more sinister (follow the money)?

Sorry for being less than perfectly coherent. It's been a long day.

I'm not sure why you are saying that the FDA allowed this data to fly by. This information, as well as other information, eventually led to the withdrawal of Vioxx.

Linda
 
I'd also add this is nothing new and many of us in the medical profession are well aware of it. Though I cannot say everyone is. Quite often drug makers do the research then hire respected experts in the field to present the research at conferences as if it was their own. Been going on a very long time.

I wasn't aware of it. Not like this. Although it explains why I would notice that someone who should know better was talking bollocks when presenting research.

Frankly, I'm appalled. However, it does support my point that we're better off making sure that people have the tools to recognize crap, rather than relying on making sure crap isn't produced.

Linda
 
I wasn't aware of it. ...

Linda
Interesting. It came to my attention a number of years ago, like I said, with drug companies hiring experts in the field to present lectures on research they did not do but implied at the conferences that they had done. I believe I was at a flu conference when someone asked the speaker if he was a ghost presenter (not in those words, of course). I can't recall exactly; I'll see if I can find what I went home and looked into after hearing about it, though. I know I read more after it came to my attention.
 
Do you view this as ethical? ...
Of course not! It's big bucks for some of these guys, though, and doctors are human like everyone else.

I did think it was common knowledge among providers, though, so I think that deserves more looking into now.
 
I'm not sure why you are saying that the FDA allowed this data to fly by. This information, as well as other information, eventually led to the withdrawal of Vioxx.

Linda

I'm not sure why we're not communicating well here, but this drug was approved and put on the market, yes? Then the FDA let it pass. Am I not being clear? Is there something I am missing?
 
http://www.bloomberg.com/apps/news?pid=20601109&sid=a5z.VogSbbXo&refer=home

On the way home from work yesterday they talked about this on NPR. How can we have any faith in big pharmaceutical companies when they behave like this? I knew they were strictly profit-motivated, but this takes the cake.

Interesting article. As a physician, I for one can tell you that most Docs take studies done with the support of drug companies with a grain of salt. When the rep walks in and says,

"Doc, our latest study shows that our product is superior for this reason," I smile politely, listen with one ear, then go about my day.

For starters, you have to distinguish trials that prove a product works from trials that try to compare it to (and differentiate it from) another drug in its class, or against standard drug treatments for condition X.

Example:

ACE inhibitors are BP pills. When they first came to market, the studies presented concerned how well they worked, usually against placebo or standard therapy for hypertension.

Then as the market for ACEs got crowded, you started to see the different ACEs come up with new data, saying they were better for cerebrovascular protection, cardioprotection, or renal protection.

This second group of studies, I find, is often more questionable, and it carries much less weight in my decision making.

There are so many factors that go into deciding how valid a study is, INCLUDING who sponsored the study.

That is why we often use META-ANALYSIS of dozens of studies with similar endpoints to determine the validity of a particular outcome.

If 100 studies, 94 of them RDBCTs (randomized double-blind controlled trials), all show a similar statistically significant difference in outcome, then you can be much more assured the outcome is valid than if you simply go by the outcome from study Y presented by pharma company Z.
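
For anyone curious what the pooling step actually looks like mechanically, here is a toy fixed-effect (inverse-variance) sketch with invented numbers; a real meta-analysis would also check heterogeneity, publication bias, and so on:

```python
# Toy fixed-effect meta-analysis: each study's effect (a log odds ratio here)
# is weighted by the inverse of its variance, so larger/tighter studies count
# for more. The three studies below are invented purely to show the arithmetic.
import math

studies = [  # (log odds ratio, standard error) -- hypothetical values
    (0.40, 0.15),
    (0.35, 0.20),
    (0.55, 0.25),
]

weights = [1 / se ** 2 for _, se in studies]
pooled_log_or = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lo = math.exp(pooled_log_or - 1.96 * pooled_se)
hi = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR ~ {math.exp(pooled_log_or):.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# If the pooled interval stays clear of 1.0 across many independent trials,
# that is far more convincing than a single sponsored study's headline result.
```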

-----

With respect to the Vioxx issue, I would be curious to see if these outcomes wrt deaths, Alzheimer's-related or otherwise, were significantly greater with Vioxx than with any standard COX-1 NSAID.

TAM:)
 
I'm not sure why we're not communicating well here, but this drug was approved and put on the market, yes? Then the FDA let it pass. Am I not being clear? Is there something I am missing?

How is that relevant when the information you are referring to came from studies performed several years after the drug was approved? There were no differences in mortality in the studies the FDA reviewed for approval.

Linda
 
Catfish, all new drugs on the market are generally considered by prescribers to be incompletely studied. You cannot test something on 100,000 people before releasing it to market. And you cannot test it over a couple of decades either. So new drugs are cautiously prescribed by the majority of us until more experience accumulates. You start by using the drugs on people for whom the alternatives fail. Gradually a new drug might replace an old one if it seems superior.

It isn't a perfect system. Marketing isn't always helpful. Drug companies have done a bad thing now and then. The FDA gets more lax under administrations like the current one. And not all providers are perfect either. But overall, the system works and we make a continual effort to improve it.

Another thing I recall about the ghost presenters: it's pretty obvious when a drug is presented that the drug company was involved in the presentation. The specific flu conference I mentioned, where the presenter's involvement in the research was questioned, was focused on Relenza, if I recall. Regardless of the presenter, everyone there was aware the information was coming from the drug company.

That doesn't make the information completely suspect. Drug companies know they are not advertising to the lay public. They know that providers are savvy about evidence-based medicine. We are responsible for giving patients good care; it isn't like we are swayed by pretty purple pills or slick packaging.
 
How is that relevant when the information you are referring to came from studies performed several years after the drug was approved? There were no differences in mortality in the studies the FDA reviewed for approval.

Linda


OK, I missed that. Thanks for the clarification and sorry for the confusion.
 
Catfish, all new drugs on the market are generally considered by prescribers to be incompletely studied. You cannot test something on 100,000 people before releasing it to market. And you cannot test it over a couple of decades either. So new drugs are cautiously prescribed by the majority of us until more experience accumulates. You start by using the drugs on people for whom the alternatives fail. Gradually a new drug might replace an old one if it seems superior.

It isn't a perfect system. Marketing isn't always helpful. Drug companies have done a bad thing now and then. The FDA gets more lax under administrations like the current one. And not all providers are perfect either. But overall, the system works and we make a continual effort to improve it.

Another thing I recall about the ghost presenters: it's pretty obvious when a drug is presented that the drug company was involved in the presentation. The specific flu conference I mentioned, where the presenter's involvement in the research was questioned, was focused on Relenza, if I recall. Regardless of the presenter, everyone there was aware the information was coming from the drug company.

That doesn't make the information completely suspect. Drug companies know they are not advertising to the lay public. They know that providers are savvy about evidence-based medicine. We are responsible for giving patients good care; it isn't like we are swayed by pretty purple pills or slick packaging.


Thanks for the overview. I appreciate it. However, I am aware of all of this; as I mentioned, I work in the industry. The part I missed was that the studies that found the problems were done after the drug was released. Thus my confusion. I was under the mistaken impression that the data that showed a threefold increase in deaths in the drug-treated group were in the original dataset before submission to the FDA for new drug approval.

I apologize for the confusion.
 
When I worked in pharma, I ghostwrote all the time. I was a researcher, and the lead investigators were involved in the study but clearly didn't have time to write the study report, etc. I don't see anything ethically wrong with this. They were given the manuscript, commented and suggested changes, and then we'd have a meeting before the final study report was produced. After the study report was done, it would be submitted for publication, including peer review and additional questioning/clarification of the salient points of the study. It would be wrong if the investigators were not involved in the process at all.

In one particular study I was involved in (an early Phase IV trial of a newly released drug), I actually added a study parameter at the last minute before the study went "live". It was a simple global assessment scale to be completed at the end of the treatment phase of the trial, and I had a thought that it might be interesting to see the clinician's perspective on a simplistic "better vs. worse" impression of the patient's treatment. Interestingly, this showed a very strong positive result and formed the basis of a subsequent ad campaign for the drug. That drug now makes about $400 million per year for the company, and is the #2 add-on drug for ongoing therapy in that disease. In some cases, it's even used as monotherapy.

My one little brainstorm at the end of a long day, added into the study at the last minute, ended up providing the foundation of a marketing campaign that resulted in the rapid uptake of this drug on the market in this niche. All I got was a little bonus at the end of the year. Not griping. I'm a doctor now. This is just how the biz works. And, don't kid yourself. It's a business.

-Dr. Imago
 
