We are frequently told that randomized, double-blind, placebo-controlled clinical trials are the gold standard for high quality drug research. Heck, we’ve even said it ourselves on numerous occasions. We believed this mantra until a recent meta-analysis of statin studies forced us to reconsider the value and validity of RCTs.

The study in question (European Journal of Preventive Cardiology, March 12, 2014) had an impressive title:

“What proportion of symptomatic side effects in patients taking statins are genuinely caused by the drug? Systematic review of randomized placebo-controlled trials to aid individual patient choice.”

According to the authors the answer is:

“Only a small minority of symptoms reported on statins are genuinely due to the statins: almost all would occur just as frequently on placebo.”

The conclusion of the meta-analysis of randomized controlled trials (RCTs) was that statins do not cause muscle aches or other side effects, except possibly a modest increase in new cases of type 2 diabetes. Presumably the muscle aches, fatigue, nerve pain, arthritis symptoms, mental fogginess, sexual dysfunction, etc. are all imaginary, since they were just as likely to occur in patients taking placebos. If you would like to read how patients reacted to these conclusions, here is a link with some powerful stories (and yes…they are anecdotal and not scientific, but they are powerful just the same).

The Reasoning Behind RCTs

The “gold-standard” concept of clinical research, RCTs, was created because of a recognition that patients could be easily influenced by the study organizers. In an unblinded trial, both the patients and the doctors know who is getting the “real” drug and who is not.

In a “single-blinded” trial the patients are in the dark but the doctors know who is getting actual medicine and who is getting placebo. In both cases, expectations can easily influence outcomes. Patients are more likely to get benefit from something if they are told it is the real deal. And if doctors know who is swallowing the medicine instead of the sugar pill, they can influence the results in subtle, sometimes subconscious, ways.

In theory, if neither the doctors, nurses nor subjects know what is real and what is fake, there will be no influence and the outcome will be “pure.” That is the foundation upon which the double-blind, placebo-controlled trial system is built.

It sounds almost foolproof and for decades health professionals have held up the RCT as the highest standard of research. It is the epitome of “evidence-based medicine,” a mantra that means scientifically valid. The FDA requires at least two randomized controlled trials demonstrating statistically significant benefit before approving a drug for market.

What’s Wrong With Randomized Controlled Trials?

What very few health professionals have realized is that there are serious flaws with the randomized controlled trial system of drug testing.

Although RCTs are pretty good at establishing statistically significant benefit, they have traditionally not been good at predicting how well a particular treatment will work for any given individual. Many drugs can be proven to be 10-15% better than nothing (placebo). That may be enough to get FDA approval, but it may mean that only one person out of 60 (the number needed to treat, or NNT) will actually get any benefit after five years of therapy. That happens to be the best-case scenario for otherwise healthy people taking a statin to lower cholesterol. To learn more about NNTs for statins in preventing heart attacks, here is a link to a helpful website.
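The arithmetic behind an NNT is simple: it is the reciprocal of the absolute risk reduction (the difference in event rates between the placebo and drug groups). Here is a minimal sketch; the event rates are purely illustrative and not taken from any specific trial:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treated_event_rate
    return round(1 / arr)

# Illustrative numbers only: a 5-year heart attack rate of 5.00% on
# placebo vs. 3.33% on the drug is an ARR of about 1.67 percentage
# points, which works out to treating roughly 60 people for 5 years
# so that one of them avoids a heart attack.
print(nnt(0.0500, 0.0333))  # -> 60
```

Notice that a drug can be "significantly better than placebo" while still leaving 59 of 60 patients with no benefit; the NNT makes that trade-off visible.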

What randomized controlled trials are not good at is detecting adverse drug reactions. Here is a link to “the invisible gorilla” video experiment. In this study, “half of the people who watched the video and counted the passes missed the gorilla. It was as though the gorilla was invisible.”

If you watch this video you will say that it’s impossible to miss the gorilla. That’s in part because of the title and because you are prepared. If you were unaware of the nature of the experiment and were totally focused on the white-shirted basketball passers, you too might have missed the gorilla the way the Harvard students did.

The point of the study in the words of the researchers:

“This experiment reveals two things: that we are missing a lot of what goes on around us, and that we have no idea that we are missing so much.”

The same thing could be said of randomized, double-blind, placebo-controlled drug studies. Investigators cannot see what they are not looking for. Unanticipated side effects often go unnoticed.

One of the best examples involves Prozac-like antidepressants. Randomized clinical trials conducted before the drug was marketed revealed that sexual side effects were relatively rare (in the 2-16% range). For people with depression, reduced libido was reported at a rate of 3% and impotence at a rate of 2% while taking Prozac, and not reported for people on placebos. In a collection of RCTs for a variety of ailments including depression, OCD, bulimia and panic disorders, reduced libido was reported at a rate of 4% on Prozac vs. 1% on placebo. You will find these data in the official prescribing information for Prozac at DailyMed.

Researchers now know that sexual problems with Prozac-like drugs actually range from a low of 30% to a high of 80% of patients (depending upon the study). Bob Temple, one of the FDA experts on clinical trials, admitted to us that SSRI-type antidepressants have a rate of sexual dysfunction above 50%.

People report that drugs like Celexa, Effexor, Lexapro, Paxil, Prozac and Zoloft can reduce libido, interfere with sexual arousal, contribute to erectile dysfunction (ED) and delay or block orgasm. Some people describe a numbness or lack of sensation as “genital anesthesia” and it may persist long after such drugs are discontinued (Open Psychology Journal, Vol. 1, pp 42-50, 2008). The authors concluded:

“Post-market prevalence studies have found that Selective Serotonin Reuptake Inhibitor (SSRI) and Serotonin-Norepinephrine Reuptake Inhibitor (SNRI) sexual side effects occur at dramatically higher rates than initially reported in pre-market trials.”

The bottom line is that double-blind clinical trials of antidepressants were incapable of detecting side effects that they were not looking for. (By the way, most of the impressive placebo-controlled statin studies did not detect type 2 diabetes as a side effect, largely because the investigators did not know it existed and did not look for it.)

The reverse also happens in double-blind, placebo-controlled clinical trials. When researchers know about a specific side effect in advance of a clinical trial, they may ask everyone who participates (both those getting the active drug as well as those on placebo) whether they have experienced that symptom. This completely undermines the validity of the side effect data.

Here is an analogy. We no longer allow police detectives to point out potential suspects to witnesses, because research has demonstrated that such prompting can influence a victim’s choice. Instead, witnesses must look at a lineup of similar-looking individuals and, with no prompting from the detective, are asked to identify the suspect. Even with this improved methodology, eyewitnesses frequently pick the wrong person. DNA evidence has repeatedly demonstrated that such subjective identification is flawed.

Here is a clinical example: Topamax (topiramate) is an anti-seizure drug that is also prescribed for migraines. In clinical trials the drug caused fatigue in 15% of those taking a dose of 200-400 mg, while people taking placebo “experienced” fatigue 13% of the time. Nausea occurred in 10% of patients on Topamax and 8% of those on placebo. The likely conclusion of clinicians and FDA executives is that the drug actually causes fatigue in only 2% of patients, i.e., the difference between active drug and placebo. That is an easy conclusion to draw, and doctors, pharmacologists and FDA officials have said exactly that to us on repeated occasions.
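One way to see how little a 15% vs. 13% comparison proves on its own is to put a confidence interval around the risk difference. Here is a rough sketch using the standard normal approximation for two proportions; the 500-patients-per-arm sample sizes are hypothetical, chosen only for illustration:

```python
import math

def risk_diff_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% confidence interval for the difference
    between two proportions (normal approximation)."""
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Topamax fatigue rates from the text (15% drug, 13% placebo);
# the arm sizes of 500 each are hypothetical.
rd, lo, hi = risk_diff_ci(0.15, 500, 0.13, 500)
print(f"risk difference = {rd:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
# The interval spans zero, so a 2-point gap of this size could
# easily be chance -- or, as the article argues, the placebo rate
# itself may have been inflated by prompting.
```

The point is not that the subtraction is wrong arithmetic, but that when the placebo baseline has been pushed up by asking everyone about the symptom, the difference understates the drug's true effect.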

Here’s another example. The stimulant drug Adderall XR (mixed amphetamines) has official prescribing information that notes that adults taking the drug in a clinical trial experienced “nervousness” 13% of the time. This is a known side effect of amphetamines just as it is with high doses of caffeine. Guess what? The placebo in the Adderall XR study “caused” nervousness 13% of the time too.

Many people, including many FDA executives, might conclude that Adderall XR does not cause nervousness, since the incidence of this symptom was identical in both the placebo arm as well as the active drug arm of the trial.

The reality is likely to be that by asking patients whether they experienced fatigue and nausea during the Topamax clinical trial or nervousness during the Adderall XR trial, those on placebo responded affirmatively. This way the investigators, intentionally or unintentionally, skewed the placebo results in a specific direction, thereby creating a false assumption that the actual drugs did not cause such side effects.

In one clinical trial of the statin-type drug Crestor, 12.7% of those taking 40 mg reported myalgia, compared to 12.1% of those on placebo. Myalgia can be defined as muscle pain, though it has been defined in odd ways in some statin studies. In another Crestor trial (the JUPITER trial), 7.6% of the patients on 20 mg of Crestor experienced myalgia vs. 6.6% of those on placebo. Arthralgia (joint pain) occurred in 3.8% of those on Crestor compared to 3.2% of those on placebo. FDA officials would likely say that Crestor did not cause either muscle pain or joint pain, since the placebo rates of myalgia and arthralgia were roughly comparable.

We disagree. A landmark study by renowned Harvard researcher Jerry Avorn, MD, revealed the Achilles heel of double-blind, randomized controlled trials:

“Adverse effect patterns of the drug group are closely related to adverse effects of the placebo group…Symptom expectations of patients were likely to have been influenced by the consent forms used in the specific trials. Adverse effects mentioned in informed consents might not only increase expectation effects but might also facilitate the perception and reporting of these symptoms…Our results question the basic assumption of clinical trials, namely that all unspecific effects are reflected in the placebo group, while the drug group shows the additive effect of the chemical drug action. Clearly, the adverse effect patterns of placebos reflect, in part, the adverse effects expected for the drug, which complicates the detection of drug-induced adverse effects.”

What Does This Mean For You?

We started this essay with a quote from some very distinguished researchers: “Only a small minority of symptoms reported on statins are genuinely due to the statins: almost all would occur just as frequently on placebo.”

You now know that these very smart scientists likely drew a faulty conclusion from the data. If double-blind, placebo-controlled trials are flawed in the way in which they collect side effect information, then physicians, nurses, pharmacists and other health professionals must reevaluate adverse drug reactions reported in the official prescribing information.

The FDA needs to reconsider the way in which it requires drug companies to collect symptom information in drug trials. To reduce bias, a universal side effect questionnaire (one that could be modified under special circumstances) would provide a better technique for gathering such information.

Weigh in below. How do you know whether a particular medicine may cause a side effect? Do you trust the official prescribing information? Share your own drug experience in the comment section below.


  1. Andrew

    Hi, wouldn’t the nocebo effect also affect the group taking the active drug? Why would telling patients beforehand about certain side effects increase the reporting in the placebo group but not the drug group? Wouldn’t the drug group have some who experience the nocebo effect, as well as additional patients who actually experience the side effects?

  2. DS

    I saw an ad in the paper for women over 60 to participate in a drug study. I looked online and found the company that does the drug trials. It stated that sixty percent of the people in their studies had done others. They explained which days you would be staying at the facility, that you would have blood drawn at very specific times, be fed diets that might differ from that of others staying there, etc. The study would involve, I think, several stays and several outpatient visits, and ECGs as well as blood tests.
    The statement about many of the participants being in several trials made me wonder. Who are these people that subject themselves to trials of drugs? No one is going to experiment with my health, and repeatedly! There was no mention of long term (years) side effects nor of studies of the brain. I found it all rather creepy.

  3. alan b.

    Actual side effects, as one poster noted, are clearly able to influence the placebo effect component of the drug’s overall impact and be taken as a sign the drug is working. This is why poisons were once widely believed to be curative. This also skews the perceived benefits, particularly with drugs whose action may appear enhanced by subjective evaluation or which are intentionally psychotropic, like antidepressants. Psychology and belief can influence immune and biological functions as well.
    A strong faith in drugs is likely to result in massive skewing of positive effects. Prior use of such drugs also will skew nocebo side effects if the expectation was previously learned from real effects by a similar drug.

  4. Marcia T.

    I am a 62 y/o female. I am 5’6″ and weigh about 142 lbs. I have no history of diabetes. I was prescribed a statin medication for high cholesterol, which runs in my family, about 2 years ago. I believe that was Lipitor. I felt like I had a bad flu and could not stay upright. I missed an entire week of work before it occurred to me that it might be the medication. I stopped it and although I did not feel great, I was able to return to work. I reported this to my MD who prescribed another statin. I took that for quite a few months. During that time, I was able to work, but it seemed very much more difficult.
    I found that I was struggling mentally to keep up at the workplace. In addition, I developed terrible pain in my arms and hands. Finally, I noticed that my skin seemed more yellow than usual. After 25 years as a Radiation Therapist, I decided to quit my job to consider retirement because working had become so difficult. To help with the pain, I began acupuncture. The practitioner I saw had the premier reputation here in Chico, CA. He shared with me his distrust of statins. I decided to quit them. After a time, the yellowness went away, the pain went away, and I am again working as a Radiation Therapist.
    Recently, I went to my annual physical. My cholesterol was 296, after a long period of time, admittedly, when I was not managing my diet very well and had not resumed exercise. My MD suggested I try Zetia and Welchole. I needed pre-authorization for both. The Zetia came through first, so I filled that and started it. Within a week into that, I had a strange occurrence at work, which I do not know whether it was related or not. I was walking back from the time clock after punching in that morning. I had been up for 2 hours and had been feeling just fine. Suddenly I had a very strange feeling in my mid-section. Initially it felt like what you might feel with strong emotional change. That quickly changed to the feeling that I was going to pass out, including the nausea that often precedes that feeling.
    Within half a minute, I went from feeling fine to having to drop to one knee and put my head down. In a minute or two I was able to get up and go sit down. After sitting out the first 20-30 minutes, I went back to work, but never felt good the rest of the week. I felt tired and most of all I felt terribly irritable: not just a mental state, but a physically emotional one which I felt in the pit of my stomach. Although I did not act out these feelings, both my family and my co-worker said they had noticed a difference in me. I also had tremendous gas and bloating. After 5 days of feeling this way, it occurred to me that perhaps it was the Zetia, which I had believed to be much more benign, as my doctor mentioned that it was not as effective as the statins. I had not expected to have side effects this time. I quit the drug and am starting to feel better, although not as good as I had been.
    At this point, I am not taking anything, waiting to feel better. After 5 days, I still feel tired and seem to have some bloating, but the irritability and the physical sensations that went with it are gone. When I feel certain that I am symptom free, I will give the Welchole a trial. Now, admittedly, I am a bit apprehensive about that. I see my MD in May with blood results for lipid panel and liver enzymes in hand, as well as a report on my health behavior.

  5. oldetimer

    I wonder if the following would be an acceptable/neutral statement in a consent form:
    “The pills/capsules you are taking as part of this study, may or may not cause side-effects. These side-effects, if they do occur, are not dangerous or threatening in any way, although you might become aware of them. Just tell the investigator what, if anything, you might be experiencing”.

  6. Cindy M. B.

    So true, Mary, so true…. You know the mind-body continuum? It works! Thinking creates “reality” in so many ways.

  7. Mary

    I sincerely wish Western Medicine would actually use the placebo effect more effectively than they do.
    Placebo shows that our bodies and minds CAN heal or make ill more effectively than so-called medicines at times.
    Big pharma could then sell sugar pills at an extravagant price and call it medicine.

  8. Donnie

    What is in the placebos? Only the drug and chemical companies know. It is generally thought that a placebo is a sugar pill, but it can be something entirely different and can affect the outcome of the clinical trials. Do a google search for info about what is in a placebo and you will be amazed. Beware, the Trojan Horse.

  9. snh

    Very interesting and well articulated.

  10. Dr. Judi

    Many decades ago, when I was a second year med student, we were guinea pigs for a so-called double-blind study. It was reverse single-blind! Every student guinea pig could figure out, by the side effects, who got the active drug and who got the placebo. We kept it secret from the researchers, so as not to spoil their enthusiasm for their research. We the guinea pigs knew which was which. The researchers did not. That’s why I call it reverse single-blind.
    When there are side effects which are psychological and not pharmacological, it is called nocebo. That’s the opposite of placebo.

  11. Chad Nye

    The above summary is consistent with the usual treatment of RCT criticisms, which is to say only half of the story is often told. Take for example this quote from the above text:
    “It sounds almost foolproof and for decades health professionals have held up the RCT as the highest standard of research. It is the epitome of “evidence-based medicine,” a mantra that means scientifically valid.”
    I know of no self-respecting systematic review scholar or researcher who would disagree with the first sentence, and none who completely agree with the second sentence. That is, ‘sounds almost foolproof’ is exactly that, ‘sounds’; RCTs are not foolproof, and no study is foolproof, including RCTs. To say it is the “epitome” and connect that to ‘scientifically valid’ is to suggest that all other options are ‘inferior’. That is simply not true either.
    The nature of the issues, patients, conditions, interventions, outcomes, and dependent and independent variables all contribute to determining a scientifically valid method of research, and the RCT is only one of many methodological approaches. RCTs, or any other research design for that matter, can NEVER be perfect and can NEVER account for all potential influences on an outcome; they simply can’t all be measured or observed. At least an RCT can account for the observable characteristics (e.g., age, gender, pre-existing symptoms/conditions, dosage, length of treatment, etc.), and under randomization theory the assumption that a comparable distribution of the unobservable characteristics is potentially present in all participants does at least provide the possibility of a level of unbiased accountability.
    Does this equal a ‘foolproof’ study? Absolutely not. But is this not better than a study which has not at least attempted to account for the possibility of unmeasured influences that might inadvertently bias a treatment outcome in a negative manner?

  12. JimR.

    One of the authors, Ben Goldacre, writes: “… the headline “Statins ‘have no side effects’”. That’s not what our paper found. But it was an interesting piece of work, with an odd result, looking at side effects in randomised trials of statins: specifically, and unusually, it compares the reports of side effects among people on statins in trials, against the reports of side effects from trial participants who were only getting a dummy placebo sugar pill.”

  13. john h abeles md

    Another flaw I believe exists in every double blind study on the efficacy side. That is, there is no knowledge of how many placebo-responders or non-responders exist in either the active drug group or the placebo group prior to the study commencing.
    Thus the supposed non-difference (a failed efficacy study) between drug and placebo could be because there is a preponderance of placebo-responders in the placebo group versus the drug group. Similarly, a supposed successful separation of the drug from placebo in its effects, could be due to the placebo group having fewer placebo-responders in it compared to the drug group.
    It is known that there are, broadly, those that respond to placebo and those that do not. Entrants to a double-blind study are not randomized to have equal distribution of each in each group. To do so they would have to be individually tested, and such is difficult, expensive and not very accurate by any standard.
    When challenged, researchers have said it is ‘likely’ that the differences between groups are ironed out by the number of patients, but that is a guess. They really don’t know how many of each are actually in the study, nor how many people should be in the study to statistically obtain a near-equal weighting by chance…
    No-one has satisfactorily answered this challenge I have posed to regulators, drug study designers and researchers in my long career in medical venture capital investing in and developing early medical companies.
    Another thing to remember — before the worship of the double blind study, many of our most useful drugs emerged from good observational studies — aspirin, barbiturates, atropine, penicillin, steroids, antihistamines etc.

  14. Rick

    Pharma cash has polluted the scientific basis of modern medicine. Pharma’s track record of manipulating the outcome of drug studies and downplaying adverse effects of drugs should make everyone wonder whether much of the clinical data generated by the drug companies—and submitted to the FDA—can be trusted. As Melody Petersen writes in Our Daily Meds (New York: Farrar, Straus & Giroux, 2008, p. 206):
    “Dr. John P. A. Ioannidis, an epidemiologist who holds positions at Tufts University and the University of Ioannina in Greece, said in 2005 that the conclusions of most published scientific studies are just plain wrong. In an essay, Dr. Ioannidis blamed the industrial quest for profit, the growing number of conflicts of interest among scientists, the small size of many clinical trials, as well as the manipulation of their design, for creating an era in medicine when most studies turn out to be fiction.
    “There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims,” he wrote in the journal PLoS Medicine. “However, this should not be surprising. It can be proven that most claimed research findings are false.”
