The Randomized, Double-Blind, Placebo-Controlled Trial: Gold Standard or Golden Idol?


Benton Bramwell, ND, and Matt Warnock

Historical context of the randomized, double-blind, placebo-controlled trial: How did we get here?

The history of the randomized, double-blind, placebo-controlled trial (RDBPCT) is fascinating. In 1747, James Lind described a controlled trial in which 12 sailors aboard a British naval vessel suffering from scurvy were divided into groups of two and given various dietary interventions. Those receiving oranges and lemons made a rapid recovery while the other groups did not.[1] In 1800, John Haygarth reported what may be the first placebo-controlled study, in which wooden rods were substituted for the popular metallic ones used for rheumatism relief.[2] A trial in 1835, with a design well ahead of its time, appears to have been the first to integrate all the elements of randomization, double-blinding, and placebo control: a homeopathic remedy in water was compared to a placebo of plain water.[3]

In later, more widely visible efforts of the 1940s, the Medical Research Council (MRC) of the UK led a study implementing double-blinding and the use of control groups to see if the compound patulin would help treat the common cold.[4] Building on the application of these principles, a truly groundbreaking and widely recognized randomized, double-blind trial was conducted in 1948 by the MRC to compare streptomycin with bed rest in the treatment of tuberculosis.[5] This latter study became a well-recognized model for future randomized, controlled trials.

The Drug Amendments of 1962 heightened the pressure to implement these techniques in clinical studies in the United States by requiring that FDA-approved drugs be supported by “substantial evidence” of efficacy.[6] Specifically, the amendments state: “the term ‘substantial evidence’ means evidence consisting of adequate and well-controlled investigations, including clinical investigations, by experts qualified by scientific training and experience to evaluate the effectiveness of the drug involved, on the basis of which it could fairly and responsibly be concluded by such experts that the drug will have the effect it purports….” Interestingly, this “definition” doesn’t actually define “substantial evidence,” but rather describes a process of investigation in which “the experts are to decide what kind of evidence they would like to see and then go get it.”[7]

In response to the amendments, and in order to assess the efficacy of drugs already on the market, FDA first relied upon the opinions of 180 conventional medical specialists of the day (the Drug Efficacy Study, 1966-1969).[8] In 1969, however, the Agency published a proposed rule in the Federal Register outlining the scientific principles that characterized an “adequate and well-controlled clinical investigation.”[9] In addition to making clear that “uncontrolled studies or partially controlled studies are not acceptable evidence to support claims of effectiveness,” the regulation also highlighted defects that would lead to a study being deemed inadequately controlled, including failure to adequately define criteria for patient selection and failure to minimize investigator bias. The regulation allowed for three possible types of control groups: a placebo control, with or without double-blinding depending on the measurement system used; an active drug control; and a historical control. While the actual regulation (and more recent FDA guidance[10]) allows flexibility in the type of control used and in whether double-blinding is implemented, it marked the beginning of controlled trials as the basis for demonstrating efficacy for drug approval.

In its current form, found in 21 CFR 314.126(a), the regulation serves to guide researchers in distinguishing the effects of drugs from other influences, including “spontaneous change in the course of the disease, placebo effect, or biased observation.”[11] Over time, the inclusion of randomization to facilitate comparability, together with double-blinding and placebo controls to minimize bias, has become the construct for clinical trials that, by convention, is recognized as the “gold standard” for clinical studies.[12] In practical terms, an analysis of studies used as the basis for drug approval between 2005 and 2012 found that about 89% were randomized, around 79% were double-blind, and about 55% were placebo-controlled (with close to 32% using active comparator groups).[13]

Weaknesses of the RDBPCT

The weaknesses of the RDBPCT can be appreciated by assessing the inherent limitations of each of its elements: randomization, double-blinding, and use of placebo control.

The transformative idea of randomization was clearly articulated and championed by the renowned statistician R.A. Fisher in his work on agricultural field experiments. The hope of randomization is that confounding variables will be distributed equally between groups, allowing for greater comparability across groups. In the words of Fisher, randomization was the process “…by which the validity of the test of significance may be guaranteed against corruption by the causes of disturbance which have not been eliminated.”[14] In the complex human body, with so many hidden variables, can we trust that simple randomization equally distributes confounding variables?

In recent years, some have argued that in small to moderate studies, which are the mainstay of clinical research, it really doesn’t. In fact, one team, after re-analyzing published data from two large, multicenter trials using randomly replicated samples of varying size, found that randomization reliably removed random differences in baseline characteristics between groups only when at least 1,000 subjects were included.[15] In other words, randomization is only likely to protect against bias if 1,000 or more subjects are enrolled. Below this number of subjects, “baseline characteristics were imbalanced, and substantial mean squared error in effects measurement were observed.” Large numbers are needed for randomization to reliably have its desired effect.
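
To make this concrete, here is a minimal simulation in the same spirit as, but not reproducing, the cited re-analysis: a single hypothetical baseline characteristic is drawn for N subjects, who are then split 1:1 at random. The function name and all parameters are ours for illustration only; the qualitative result, that chance imbalance shrinks only as N grows large, is the point.

```python
# Illustrative sketch (not the method of Nguyen et al.): how much does a
# single baseline characteristic, drawn from a standard normal distribution,
# differ between two arms under simple 1:1 randomization at various trial sizes?
import random
import statistics

def mean_baseline_imbalance(n_subjects: int, n_replicates: int = 1000) -> float:
    """Average absolute difference in a confounder's mean between two arms."""
    imbalances = []
    for _ in range(n_replicates):
        scores = [random.gauss(0.0, 1.0) for _ in range(n_subjects)]
        random.shuffle(scores)              # simple randomization to arms
        half = n_subjects // 2
        arm_a, arm_b = scores[:half], scores[half:]
        imbalances.append(abs(statistics.mean(arm_a) - statistics.mean(arm_b)))
    return statistics.mean(imbalances)

if __name__ == "__main__":
    random.seed(42)
    for n in (20, 100, 1000, 10000):
        # Smaller trials show substantially larger chance imbalances.
        print(f"N = {n:>6}: mean imbalance = {mean_baseline_imbalance(n):.3f}")
```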

At first blush, blinding seems an excellent way to reduce bias in an experiment. However, relatively few studies clearly describe their methods of blinding. In some cases, due to the obvious effects of a drug in a trial, or its look, taste, or smell, doctors and participants can accurately guess whether they’re getting a placebo or the actual drug. A review of 1,599 studies described as blinded found that only 31 evaluated the effectiveness of their blinding, and that blinding was judged successful only about 45% of the time.[16] But even where blinding accomplishes what it sets out to do, it reduces bias only within the experiment. Beyond the question of whether blinding really took place, there remains the problem of translating successfully blinded experimental results to real-world situations.

Clinical practice does not occur within the confines of a blinded experiment, and how physicians interact with patients, either positively or negatively, impacts outcomes. Using the term “iatrotherapy” to describe the effect of the physician on the healing of the patient, a thoughtful physician from some years ago stated: “A dose of enthusiastic iatrotherapy given in conjunction with an ineffectual drug will usually make patients feel better than a moderately effective drug delivered with little or no iatrotherapy.”[17]

In the sections that follow, we delve into the significant challenge posed by the placebo component of randomized, double-blind, placebo-controlled trials, as evidenced by historical examples.

The Field Trial of the Salk Polio Vaccine

A large-scale societal reckoning with the randomized, double-blind, placebo-controlled trial came with the field trial of Jonas Salk’s vaccine for poliomyelitis in 1954. After Salk reported initial promising results in several hundred subjects, the push to field trials began. The ultimate structure of the investigation included both a randomized, double-blind, placebo-controlled trial (hereafter referred to simply as the placebo-controlled trial) and a unique observation study. This design was shaped by the interests and opinions of research scientists, the needs of the March of Dimes Foundation (The National Foundation for Infantile Paralysis) that funded the work through private donations, and a citizenry motivated and engaged in a large-scale effort to end the scourge of polio.

As persuasively argued by Meldrum,[18] this conglomeration of competing needs and groups is one example demonstrating that the randomized, controlled trial “is not a methodical black box but a social exercise in problem solving.” At the center of this particular cauldron of interests and people was a lay foundation that needed to show its donors that real progress was being made; that is, that contributions were moving the work forward to a successful conclusion. The foundation also needed the involvement of the public to obtain data, and validation from the scientific community so that the results would be seen as legitimate.

It is important to note that while the structure of the placebo-controlled trial would give the results greater validity in the eyes of many researchers, the approach prompted concern on several levels for those involved, and those same concerns about the randomized, double-blind, placebo-controlled structure remain relevant today. For example, the logistics and expense of such a complicated protocol greatly concerned the investigators of the field trial, and these challenges are still inherent in trials of complex design, albeit on a lesser scale. The most painful concern, however, was the moral issue that still plagues placebo-controlled trials: many families who wanted immediate access to a possibly effective intervention would initially receive an injection of placebo only, and some in the placebo group might be stricken, unnecessarily, with the disease. In contrast, Salk had originally envisioned an observation trial in multiple countries in which 388,800 children would be inoculated and the incidence of ensuing polio compared mathematically to that of about 3 million unvaccinated children in those countries.[19]

In the end, the field trial included both a large placebo-controlled trial and an observation study in which 2nd graders at participating sites received the vaccine while 1st and 3rd graders received nothing and were simply observed. That the public generally favored the latter approach at the time is evidenced by the fact that over 30 states opted for inclusion in the observation study while 11 committed to the placebo control.[20] The urgency of the ethical concern, combined with doubts about effectively implementing the placebo-controlled trial, temporarily heightened the emotions of the usually reserved Salk as he wrote an eloquent and poignant letter to Basil O’Connor, then president of the March of Dimes Foundation:[21]

“…if we are aware of the fact that the presence of antibody is effective in preventing the experimental disease in animals and in man, then what moral justification can there be for intentionally injecting children with salt solution or some other placebo for the purpose of determining whether or not a procedure that produces antibody formation is effective…”

This remains a powerful argument against the use of placebos.  In the context of clear understanding about the pathology of disease and mechanisms by which suffering may be alleviated, what business do we really have administering placebos that presumably don’t have any effect on the mechanisms that need to be employed in order to relieve suffering?

Salk continued:

“The use of a placebo control, I am afraid, is a fetish of orthodoxy and would serve to create a ‘beautiful epidemiologic’ experiment over which the epidemiologist could become quite ecstatic but would make the humanitarian shudder and Hippocrates turn over in his grave…”

Perhaps few scientists before or after have enjoyed Jonas Salk’s aptitude for both scientific excellence and prose as he continued by laying bare the position of those who would only accept as valid results coming from a randomized, double-blind, placebo-controlled trial:

“…it is not a question of science, or of ethics, or of morality, upon which those who maintain the contrary position make their stand, but rather because of false pride based on values in which the worship of science involves the sacrifice of humanitarian principles on the altar of rigid methodology…in my opinion, there is no choice but to follow a course based upon the principle: ‘Do unto others as you would have others do unto you.’”

Simply put, if you were facing the prospect of a terrible disease, would you rather get an intervention that might help you, or a placebo?  If we refuse to acknowledge the obvious answer, or put placebos before patients in our approach to research, then haven’t we sacrificed our humanity for a false pride, for the alluring glow of acceptance from others with the glitter of self-ordained scientific authority?

Though more weight was certainly given to data from the placebo-controlled trial, we would argue that the results of both the observation study and the placebo-controlled trial capably demonstrated a robust effect of the vaccine used.  Data to this point, taken from Table 3 of the published evaluation of the field trial, is shown below:[22]

Moreover, as represented in Table 7 from the same publication (Francis 1955), the vaccine produced a significant reduction in the rate of paralytic polio in both studies, whether compared to placebo or to observed controls, with a p-value < 0.001 in both comparisons. Thus, the results of both the placebo-controlled trial and the observation trial show that the vaccine reduces the rate of paralytic polio. While there could be discussion as to which study better represented the true rate of vaccine effectiveness, a large effect is obvious in both. This prompts an important and interesting question: if placebo had not been given, and vaccine had instead been injected in its place in an even larger observation study, how would the world have reacted to the results?
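
As a rough illustration of the scale of separation behind a p-value < 0.001, the sketch below runs a standard two-proportion z-test on invented counts of roughly the published order of magnitude (tens of paralytic cases per ~200,000 children per arm). These counts are placeholders of our own, not the actual figures from the Francis tables.

```python
# A sketch of the kind of comparison behind the reported p < 0.001, using a
# standard two-proportion z-test. The counts are hypothetical placeholders,
# NOT the actual field-trial figures.
from math import sqrt, erfc

def two_proportion_z_test(cases_a, n_a, cases_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

if __name__ == "__main__":
    # Hypothetical: 30 paralytic cases among 200,000 vaccinated children vs.
    # 110 among 200,000 placebo recipients.
    z, p = two_proportion_z_test(30, 200_000, 110, 200_000)
    print(f"z = {z:.2f}, two-sided p = {p:.2e}")  # p is far below 0.001
```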

Based on the observation data that was collected, it is hard to argue that if this hinge of scientific history had swung solely on observational data, rather than on data from a placebo-controlled trial, the benefit would have been anything but obvious. Most of the public would have demanded access to the vaccine, and the rate of paralytic polio would still have plummeted. This leads to the next important question: was it really necessary to subject children and their families to the risk (and in some cases, the horrific reality) of preventable disease by using a placebo instead of vaccine? How much false hope, deception, and unnecessary suffering is enough to appease the demands of the scientific idol worshiped as the gold standard?

When the Gold Standard Doesn’t Fit: The National Cancer Institute’s Approach to Drug Development

When a gold standard of research structure exists there is, of course, a strong desire to conform research methods to that standard. This is understandable on several levels: researchers naturally want to do the best work possible using the best tools available, and work that conforms to a gold standard is more likely to be accepted. But we can learn a lot by asking where use of the gold standard becomes most problematic, and why.

One of the domains in which the gold standard of randomized, double-blind, placebo-controlled trials faces significant challenges is in the development of new cancer drugs.  This is one of the areas of clinical care where physicians face the reality of working with extremely ill, vulnerable, and dying patients desperate to try new treatments that might either cure their cancer or at least prolong their lives.  In this context, the moral issue of placebo use is magnified because someone may be denied access to a drug that ends up not only reducing morbidity but being the difference between life and death.

In response to this patient need, the National Cancer Institute (NCI) developed a system by which experimental drugs that showed promise in animal models could be tried in relatively small clinical settings and then moved through progressively larger stages of study if they showed potential. For this approach, NCI was roundly criticized in a series of articles in the Washington Post in the early 1980s.[23] In the articles, a former NCI official, Vincent Bono, stated that the program was like “donating someone’s body to science while they are still alive,” and Robert Young of FDA compared NCI’s approach to that of generals in Vietnam with the attitude that “we’ve got to burn the village to save it.”

During a congressional hearing sparked by the controversy,[24] the NCI’s director at the time, Vincent DeVita, firmly defended NCI’s method of providing and testing experimental drugs to try to meet patient needs. He pointed out that the Washington Post articles mistakenly attributed many deaths to the experimental drugs when, in reality, the patients’ underlying burden of disease was the actual cause of death. He also highlighted the fact that the research was contributing to progress in the treatment of cancer, with 45% of cancers curable by that time. Importantly, Dr. DeVita explained that NCI’s approach was based on the need to do something for the patient, to innovate for the patient’s benefit in spite of practical market obstacles, such as new drug development at that time taking 10-15 years and $30-$40 million. In emphatic defense of NCI’s approach, he stated:

“There are risks associated with early drug testing but the most serious toxicity of all is the unnecessary death from cancer.”

“Cancer drugs must be tested for ethical reasons in patients with the most advanced forms of the disease who have exhausted all available treatment.”

“Any system of drug distribution we develop that denies any cancer patient access to these resources is wrong.”

The approach fostered by NCI had a different emphasis than typical FDA-driven efforts. Part of this was evident in the fact that whereas early phase I trials for drugs used to treat other diseases typically focus on toxicity and on understanding a drug’s pharmacology in the human body, early trials under the NCI approach included efficacy as an objective because of the pressing need to provide patients with treatment options. Understanding toxicity was also a concern, but these substances were already known to be toxic before testing in humans. The approach framed by NCI, while not reliant on the randomized, double-blind, placebo-controlled trial, was responsive to pressing patient demand and had already helped to improve patient outcomes. Today, NCI’s website explains that while placebos can be a way of reducing bias, they are rarely used in trials of cancer treatments; when they are used, it is typically because no standard treatment exists, or because the placebo is added on top of a standard treatment.[25]

Some would argue that the NCI approach clearly makes sense for very sick cancer patients, for whom the issue of placebo is so clearly problematic, but that the RDBPCT should remain the gold standard in less dire clinical circumstances. However, given the limitations reviewed above that are inherent in randomization in small and medium-sized studies, and in blinding, we are not convinced. It also seems to us that those willing to accept the moral drawbacks of placebo in one condition but not another are on slippery ground: when does the level of suffering of one patient permit the deception of a placebo, while the suffering of another does not? Our concerns increase as we consider how well-designed cohort studies are generally a viable option for estimating therapeutic effect and understanding safety, as reviewed below.

The RDBPCT vs Observation Studies

One of the realizations shared above is that the Salk vaccine field trials showed essentially the same estimate of efficacy in both the placebo-controlled trial and the observation study. Work in more recent decades informs the scientific community that, on average, meta-analyses comparing randomized controlled trials with well-done observation studies find very similar estimates of efficacy,[26],[27] despite assertions to the contrary. Additionally, real-world observation studies that include larger groups and run over longer periods provide greater opportunity to understand the toxicity of an intervention than randomized, controlled trials, which are more often constructed to show efficacy in the short to medium term. While drugs are often approved based on studies lasting a few years, it may take five or more years for the full picture of a drug’s toxicity to be appreciated.[28] A good example is the increased risk of myocardial infarction in patients using protease inhibitor drugs, an effect that became apparent only over six years and was found in the course of an observational trial after the drugs were already on the market.[29]

Toward constructing observation trials with the most helpful design possible, others have pointed out that observational studies using a cohort design, which collect data longitudinally and thus preserve the relationship between exposure and outcome (temporality), provide a higher level of evidence of causality than case-control studies, which identify outcomes and then search for previous exposure.[30] Data from observation studies with a cohort design may therefore be more valuable, and when such data are collected within large, even national, registries or databases, they can be of high quality and highly informative for decision making. While such observational data currently seem to fit the category of Real World Data (RWD) that FDA suggests should be used as confirmatory evidence in addition to that obtained from controlled trials,[31] we would suggest, based on the literature reviewed above, that high-quality observational data should be viewed as more than just confirmatory.
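
To make the design distinction concrete, the sketch below, using invented counts of our own, shows the effect measure each design supports: a cohort followed forward yields incidence and thus relative risk directly, while a case-control sample, selected on outcome, yields only an odds ratio.

```python
# Minimal sketch, with invented counts, of why cohort data preserve the
# exposure -> outcome relationship: a cohort design follows exposed and
# unexposed groups forward, so incidence (and thus relative risk) can be
# computed directly. A case-control design samples on the outcome, so only
# the odds ratio is directly estimable.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk from a prospective cohort 2x2 table."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(case_exposed, case_unexposed, control_exposed, control_unexposed):
    """Odds ratio, the measure available from a case-control design."""
    return (case_exposed * control_unexposed) / (case_unexposed * control_exposed)

if __name__ == "__main__":
    # Hypothetical registry cohort: 5,000 exposed and 5,000 unexposed subjects.
    print(f"Cohort RR       = {relative_risk(60, 5_000, 30, 5_000):.2f}")
    # Hypothetical case-control sample assembled after the fact.
    print(f"Case-control OR = {odds_ratio(60, 30, 4_940, 4_970):.2f}")
```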

Conclusion

It is important to acknowledge that no study design is perfect and free of risk. Every design can face issues such as confounding from unknown variables, bias, or difficulty in separating the signal that represents a treatment’s effect from the noise of spontaneous changes seen during the course of disease.

However, as we have pointed out above, simple randomization in small and medium-size trials may well not offer the level of protection against confounding previously believed. Blinding methods certainly have value in removing bias during a study but are often not employed effectively. Moreover, blinded conditions are simply not consistent with the circumstances of actual clinical care, which limits how far today’s trial results can be applied to the use of an intervention in the wider world. And placebo controls are morally problematic, especially, but not exclusively, in the case of the most severe diseases, when a patient deserves every chance of real or potential help.

In short, the principal problem with the randomized, double-blind, placebo-controlled trial as a gold standard is that, in practical use, the beautiful scientific idea has real-world limitations in how randomization and blinding are carried out, as well as inherent moral liability. Moreover, the perceived need for these study elements lessens as the effect of an intervention becomes more apparent. A group of only two subjects was needed to show that eating citrus fruit cured scurvy, and the obvious effect of Salk’s polio vaccine was evident in observation study and placebo-controlled study alike.

Further, there are other alternatives in study design to consider. Well-done observation studies can generally be used to estimate the effect of an intervention. They represent understanding obtained within the flow of clinical care, and thus have wide applicability. They are free of the moral drawbacks of administering placebos that provide false hope to sick patients who deserve the best shot at therapeutic intervention. In addition, such studies can provide a more complete view of an intervention’s safety profile, which manifests fully only over time. Finally, an important question to consider: if randomization in small to medium studies doesn’t work as effectively as believed, and if blinding often fails, is there really much effectual difference between many RDBPCTs and observation studies? Based on our review, we conclude that well-done observation studies, especially those that make use of prospective cohorts and clinical registries/databases with large amounts of data, should be embraced much more enthusiastically for the purposes of clinical evaluation.

If the RDBPCT is viewed as an infallible tool for problem solving without inherent weaknesses in practical usage, then data collected with it will often be viewed without important context and without needed caution.  If we persistently uphold the RDBPCT as the ultimate gold standard, beyond reproach despite practical and moral limitations, while legitimate alternatives are labeled as lesser, then we are not merely employing a tool: we have unwittingly fallen into the worship of an idol of our own making, and we will find ourselves blindly sacrificing to it.

References

[1]. Collier R. Legumes, lemons and streptomycin: a short history of the clinical trial. CMAJ. 2009 Jan 6;180(1):23-4. doi: 10.1503/cmaj.081879. PMID: 19124783; PMCID: PMC2612069.

[2]. Haygarth J. Of the Imagination as a Cause and Cure of Disorders of the Body, Exemplified by Fictitious Tractors. Ann Med (Edinb). 1800;5:133–45. PMCID: PMC5111928.

[3]. Stolberg M. Inventing the randomized double-blind trial: the Nuremberg salt test of 1835. J R Soc Med. 2006 Dec;99(12):642-3. doi: 10.1177/014107680609901216. PMID: 17139070; PMCID: PMC1676327.

[4]. Stuart-Harris CH, Francis AE, Stansfeld JM. Patulin in the Common Cold. 1943.

[5]. MRC Streptomycin in Tuberculosis Trials Committee. Streptomycin treatment of pulmonary tuberculosis. BMJ. 1948;ii:769–783.  See also, Crofton J. The MRC randomized trial of streptomycin and its legacy: a view from the clinical front line. J R Soc Med. 2006 Oct;99(10):531-4. doi: 10.1177/014107680609901017. PMID: 17021304; PMCID: PMC1592068.

[6]. Public Law 87-781-October 10, 1962. Drug Amendments of 1962. https://www.govinfo.gov/content/pkg/STATUTE-76/pdf/STATUTE-76-Pg780.pdf  Accessed 22 September 2023.

[7]. Temin P. Taking Your Medicine: Drug Regulation in the United States. Cambridge, Mass.: Harvard University Press; 1980. pp. 126-127.

[8]. National Research Council. 1969. Drug Efficacy Study: Final Report to the Commissioner of Food and Drugs – Food and Drug Administration. Washington, DC: The National Academies Press. https://doi.org/10.17226/24615.

[9]. 34 FR 14596-14597.  September 19, 1969.  https://archives.federalregister.gov/issue_slice/1969/9/19/14595-14600.pdf#page=5  Accessed 22 September 2023.

[10]. Demonstrating substantial evidence of effectiveness for human drug and biological products. Guidance for Industry. U.S. Department of Health and Human Services Food and Drug Administration Center for Biologics Evaluation and Research (CBER) Center for Drug Evaluation and Research (CDER) December 2019

[11]. https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm?fr=314.126  Accessed 27 September 2023.

[12]. Junod SW. FDA and clinical drug trials: a short history. U.S. Food & Drug Administration. https://www.fda.gov/media/110437/download  Accessed 22 September 2023. Originally published in: Davies M, Kerimani F, eds. A Quick Guide to Clinical Trials. Washington: Bioplan, Inc.; 2008: 25-55.

[13]. Downing NS, Aminawung JA, Shah ND, Krumholz HM, Ross JS. Clinical trial evidence supporting FDA approval of novel therapeutic agents, 2005-2012. JAMA. 2014 Jan 22-29;311(4):368-77. doi: 10.1001/jama.2013.282034. PMID: 24449315; PMCID: PMC4144867.

[14]. Fisher RA. The Design of Experiments. Reprint of the 8th ed. New York: Hafner Publishing Company; 1971. p. 19.

[15]. Tri-Long Nguyen, Gary S. Collins, André Lamy, Philip J. Devereaux, Jean-Pierre Daurès, Paul Landais, Yannick Le Manach.  Simple randomization did not protect against bias in smaller trials. Journal of Clinical Epidemiology. 2017; 84: 105-113.

[16]. Hróbjartsson A, Forfang E, Haahr MT, Als-Nielsen B, Brorson S. Blinded trials taken to the test: an analysis of randomized clinical trials that report tests for the success of blinding. Int J Epidemiol. 2007 Jun;36(3):654-63. doi: 10.1093/ije/dym020. Epub 2007 Apr 17. PMID: 17440024.

[17]. Feinstein AR. Clinical biostatistics. IX. How do we measure “safety and efficacy”? Clin Pharmacol Ther. 1971 May-Jun;12(3):544-58. doi: 10.1002/cpt1971123544. PMID: 5567805.

[18]. Meldrum, Marcia Lynn.  “Departures from the Design”: The Randomized Clinical Trial in Historical Context, 1946-1970.  Dissertation for Doctor of Philosophy in History. State University of New York at Stony Brook. December 1994.

[19]. Carter R. Breakthrough: The Saga of Jonas Salk. New York: Trident Press; 1966. pp. 170-171.

[20]. Meldrum M. “A calculated risk”: the Salk polio vaccine field trials of 1954. BMJ. 1998 Oct 31;317(7167):1233-6. doi: 10.1136/bmj.317.7167.1233. PMID: 9794869; PMCID: PMC1114166.

[21]. Carter R. Breakthrough: The Saga of Jonas Salk. New York: Trident Press; 1966. pp. 190-193.

[22]. Francis T Jr. Evaluation of the 1954 poliomyelitis vaccine field trial; further studies of results determining the effectiveness of poliomyelitis vaccine (Salk) in preventing paralytic poliomyelitis. J Am Med Assoc. 1955 Aug 6;158(14):1266-70. doi: 10.1001/jama.1955.02960140028004. PMID: 14392076.

[23]. Ted Gup and Jonathan Neumann. Experimental drugs: death in the search for cures.  Washington Post. Sunday October 18, 1981.

[24]. National Cancer Institute’s Therapy Program, Joint Hearing before the Subcommittee on Health and the Environment of the Committee on Energy and Commerce (House of Representatives) and the Subcommittee on Investigations and Oversight of the Committee on Science and Technology, 97th Congress, 1st Session, October 27, 1981 (Washington, D.C.: U.S. Government Printing Office, 1981), p. 154. See also Rothman DJ, Edgar H. Scientific rigor and medical realities: placebo trials in medical research. In: Fee E, Fox DM, eds. AIDS: The Making of a Chronic Disease. Berkeley, Calif: University of California Press; 1992: 194-206.

[25]. https://www.cancer.gov/about-cancer/treatment/clinical-trials/what-are-trials/placebo  Accessed 26 Sept 2023.

[26]. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014 Apr 29;2014(4):MR000034. doi: 10.1002/14651858.MR000034.pub2. PMID: 24782322; PMCID: PMC8191367.

[27]. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000 Jun 22;342(25):1887-92. doi: 10.1056/NEJM200006223422507. PMID: 10861325; PMCID: PMC1557642.

[28]. Resnik DB. Beyond post-marketing research and MedWatch: Long-term studies of drug risks. Drug Des Devel Ther. 2007;1:1-5. doi:10.2147/dddt.s2352

[29]. DAD Study Group, Friis-Møller N, Reiss P, Sabin CA, Weber R, Monforte Ad, El-Sadr W, Thiébaut R, De Wit S, Kirk O, Fontas E, Law MG, Phillips A, Lundgren JD. Class of antiretroviral drugs and the risk of myocardial infarction. N Engl J Med. 2007 Apr 26;356(17):1723-35. doi: 10.1056/NEJMoa062744. PMID: 17460226.

[30]. Gilmartin-Thomas JF, Liew D, Hopper I. Observational studies and their utility for practice. Aust Prescr. 2018 Jun;41(3):82-85. doi: 10.18773/austprescr.2018.017. Epub 2018 Jun 1. PMID: 29922003; PMCID: PMC6003013.

[31]. Demonstrating Substantial Evidence of Effectiveness With One Adequate and Well-Controlled Clinical Investigation and Confirmatory Evidence Guidance for Industry.  U.S. Department of Health and Human Services Food and Drug Administration Oncology Center of Excellence (OCE) Center for Biologics Evaluation and Research (CBER) Center for Drug Evaluation and Research (CDER)  September 2023.

Published November 4, 2023

About the Authors

Benton Bramwell, ND, is a 2002 graduate of National College of Naturopathic Medicine who practiced primarily in Utah while helping to expand the prescriptive rights of naturopathic physicians in that state.  Currently, he owns and operates Bramwell Partners, LLC, providing scientific and regulatory consulting services to both dietary supplement and conventional food companies.  He and his wife, Nanette, have six children and two grandchildren; they live in Manti, Utah.

Matt Warnock is an accidental herbalist who received his MBA and Juris Doctor from BYU, then worked as an attorney, litigator, and business consultant until 2000. He then joined RidgeCrest Herbals, a family business started by his father, and began learning about herbal medicine, focusing especially on complex herbal formulas. He has two U.S. patents for herbal formulations and methods. He lives near Salt Lake City with his wife, Carol; they are the parents of three children and four grandchildren.