The making of an urban medical myth

Issue: BCMJ, Vol. 55, No. 10, December 2013, Page 467, Council on Health Promotion

Urban myths come in varying shapes and sizes. Some, like JFK conspiracy theories, seem innocuous. Others, like the myth that vaccines cause autism, are of concern because of their effect on the likelihood that parents will vaccinate their children. 

Other myths may affect the views of policymakers and the public in terms of the trust and credibility given to the medical profession.

The marriage of science and medicine over the course of the last century has led to substantial advances in care and longevity. There has been an explicit commitment since Flexner to link medical education and practice to the best available evidence so that better treatments can be brought on board and less effective ones discarded. But just how far has this process been implemented?

We can all point to recent examples of medical modalities that have been found to be lacking in merit (e.g., routine anti-arrhythmics for palpitations), but for the most part the trajectory has been toward evidence-based practice.

Thus it is troubling that the claim that only 10% to 15% of conventional medicine is evidence based[1] has taken hold not only in public but also in medical discourse.

The 10% figure first gained respectability in 1979 (and again in 1983), in reports from the US Office of Technology Assessment (OTA). The reports contain the statement, “it has been estimated that only 10% to 20% of all procedures currently used in medical practice have been shown to be efficacious by controlled trial.”[2] This statement is attributed to the informal comments of OTA epidemiologist Kerr White.[1]

Dr White was referring to a 1963 insurance study that drew data from two surveys of 19 family practitioners in northern England. The surveys were intended to assess drug costs and compared the “specificity” of prescriptions for brand name versus generic drugs. Prescriptions were deemed to be “specific” if they were appropriately targeted to the condition being treated.[1]

It was in this study that prescriptions were found to be correctly targeted only about 10% of the time. While the study did not address whether the treatments themselves were efficacious (and Dr White apparently did not intend his comments to be generalized),[1] the figure has become immortalized.

There have been other dismal assessments of the basis of medical practice. Dr David Eddy, cited in a 1991 BMJ editorial, stated that only 15% of medical interventions were supported by solid scientific evidence.[3] Dr Eddy’s figure, drawn from studies of treatments for glaucoma and claudication,[1] has been widely cited as a criticism of mainstream medicine.[3]

Part of the problem is what is meant by the term “evidence based.” Demanding that every treatment be supported by iron-clad randomized controlled trials will yield a lower percentage of evidence-based therapies than a standard that also accepts treatments grounded in basic biology and clinical plausibility. It would be unfair, for example, to claim that the use of ASA for pain in the left fourth toe is baseless simply because there are no RCTs for that specific indication.

But lacklustre assessments of the scientific basis of modern medicine must be taken into account along with the bulk of studies that have attempted to address the question. Numerous reviews and analyses have generated figures far more generous than those of Eddy and White.[1,4]

For example, the Cochrane Collaboration website lists three much more recent (and much more rigorous) estimates showing that the vast majority of modalities in fields ranging from pediatric surgery to inpatient general medicine were based on evidence ranging from observational studies to RCTs.[5]

Retrospective chart reviews conducted in the United Kingdom have found that over 80% of general practice treatments enjoyed compelling scientific support.[6]

Some observers might feel that the 80% figure is still far too low. But it is unlikely that the figure will ever approach 100%. The inability to cover every conceivable situation with RCT support, the evolving nature of medical treatments, and the simple fact that patients do not necessarily present as discrete diagnoses all combine to leave clinicians doing what they do best: acting in the face of uncertainty to deliver care that is consistent with the best available evidence, in a professional and compassionate context.
—Lloyd Oppel, MD
Chair, Allied Health Practices Committee


This article is the opinion of the Council on Health Promotion and has not been peer reviewed by the BCMJ Editorial Board.


1.    Imrie RH, Ramey DW. The evidence for evidence-based medicine. Complement Ther Med 2000;8:123-126.
2.    Congress of the United States. Office of Technology Assessment. Assessing the Efficacy and Safety of Medical Technologies. Washington, DC: US Government Printing Office; 1978. Accessed 30 October 2013.
3.    Smith R. Where is the wisdom...? BMJ 1991;303:798-799. Accessed 30 October 2013. 
4.    Ernst E. How much of general practice is based on evidence? Br J Gen Pract 2004;54:316.
5.    The Cochrane Collaboration. It is estimated that only “10% to 35% of medical care is based on RCTs.” On what information is this based? Accessed 30 October 2013.
6.    Gill P, Dowell AC, Neal RD, et al. Evidence based general practice: A retrospective study of interventions in one training practice. BMJ 1996;312:819-821.

Lloyd Oppel, MD, MHSc, FCFP(Em). The making of an urban medical myth. BCMJ, Vol. 55, No. 10, December, 2013, Page(s) 467 - Council on Health Promotion.
