What Exactly Is Evidence?

. . . and what do we do with it?

By Sasha Chaitow, PhD
[Somatic Research]

Takeaway:

One of the most contentious issues in the manual therapy professions is whether specific applications must be evidence-based or whether clinical experience is sufficient. Promoting evidence-informed practice within the manual therapy professions may be a key way to walk in step with the medical community.

One of the most contentious issues in discussions of manual therapy is the question of how far its many applications are and are not evidence-based. Fraught debates and outright controversies have led some practitioners to state that certain manual therapies should not be used at all, since the evidence supporting them is poor or absent, while others insist that clinical experience and what evidence does exist are sufficient to support their ongoing application. Some practitioners use the description “evidence-informed” to refer to approaches that take evidence into account, balancing it with clinical experience and patient preference, but the term is interpreted differently depending on the practitioner’s background and perspective. Unfortunately, both terms continue to be misunderstood, misused, and misapplied, with severe consequences for interprofessional relationships, the reputations of whole professions, and patient outcomes.

This debate has been ongoing since the 1970s, yet rather than moving toward resolution, practitioners across the manual therapy professions seem to have become more divided than ever. This may partly be due to the polarizing nature of social media debates and the damage they have done to our collective attention spans, and partly due to market pressures. Yet, these conflicts are more damaging to patients, professions, and their communities than any modality with thin research evidence (but considerable experiential success), because the debate is being led by neither science nor common sense. Whatever the reasons, the shrillness of the debate and the noise it is generating breach a fundamental rule of health care: Do no harm. Whatever one’s position in the debate, that alone should be a good enough reason to pause for thought.

The Definitions

In a 1996 article, “Evidence Based Medicine: What It Is and What It Isn’t,” David Sackett and his co-authors reiterated the definition of evidence-based medicine: “The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” The authors further qualified this by stating, “It’s about integrating individual clinical expertise and the best external evidence.”1

Key terms here are conscientious, explicit, judicious, and individual, all of which are easily overlooked in social media shorthand.

Conscientious means the practitioner needs to be honest with themselves regarding how much evidence they have, how well they have interpreted it, and whether it applies to the patient in front of them.

Explicit means they have openly followed a specific evidence-based path (and, if necessary, explained it to the patient): not vaguely or from memory, but by actually scanning the literature and deciding on the most appropriate evidence for the case before them.

Judicious means they have exercised critical judgment as to the quality of the evidence and its appropriateness for their patient.

All too often, those who enthusiastically embrace evidence-based practice will stress the need to “follow the science” and only ever practice according to the evidence. In a study clearly laying out some of these issues,2 this approach is called “evidence nihilism.”

Evidence nihilism is expressed in statements such as: “There is no high-quality evidence investigating the efficacy of Treatment A, so we shouldn’t use it”; or: “The evidence does not demonstrate the effectiveness of Treatment A.” There are several errors in this way of thinking. First, a lack of evidence does not mean a treatment does not work. It means the treatment has not been sufficiently researched, so our knowledge about it remains limited. This is not the same as using the excuse that “the science hasn’t caught up” to defend truly outlandish claims; it is a reasonable statement about the limits of current research and should be treated as such.

Secondly, if there has been some research, but it does not show positive results, before dismissing the technique one needs to critically appraise the quality of the research. Is it sufficiently robust to definitively demonstrate that a given technique is in fact ineffective? What are the conditions and limitations of the study design? The fine detail of what to look for when appraising such a study is accessibly explained in Trisha Greenhalgh’s invaluable handbook for clinicians,3 and such an evaluation is critical before dismissing a well-established technique, or one with hefty—if hard to read—science behind it.

What Works and How

Sharp-eyed readers may have noticed that in the example statements of “evidence nihilism” above, I used “efficacy” in the first but “effectiveness” in the second. These two words do not mean the same thing. Efficacy refers to the results found under ideal, controlled circumstances, as in a clinical trial, whereas effectiveness refers to how something works in real life. It is well established that the randomized controlled trial (RCT), developed as it was for pharmacological experimentation, is not appropriate for all types of treatments. Researchers and clinicians have long debated whether efficacy or effectiveness is more useful in the context of clinical application, and one of the most strident critics of integrative medicine suggests that positive effectiveness results can be generated by the context of treatment rather than its true efficacy. In other words, a treatment may seem to work in the clinic even though it shows no efficacy in the lab. One example given is homeopathy, for which stringent trials have shown no efficacy but considerable effectiveness, which critics put down to broader psychosocial effects.4

A third, long-standing area of disagreement is that of mechanisms of action: how a treatment works and how we explain it. This is significant for two main reasons.

The Post-Hoc Fallacy

This is an error in thinking that can lead us to attribute a positive result (the patient felt better) to a specific cause (our treatment), when their improvement may have been the result of any number of variables, or simply the natural history of their condition (the natural progression of a disease process if not treated).

Patient Perceptions

Overpromising or overselling the effectiveness of a treatment for which we have little or no concrete evidence to support the direct cause-effect relationship can be unethical and misleading.

Understanding mechanisms of action allows us to be slightly more certain that a given technique does have a beneficial effect. It also allows us to ensure that we do not continue to use techniques with little true effectiveness, and that we do not mislead patients as to their effectiveness. Patients may recover due to a range of factors, including placebo, allowing themselves to be cared for, the effect of gentle touch, the sense that they have found an explanation for their problem, or the sense of empowerment that comes with addressing a problem, regardless of the techniques used.

It is around these points that debates continue to rage regarding whether it is unethical and downright dishonest to continue to offer therapies that are known to be inefficacious (even if they appear effective and do no immediate harm), since there is a danger of misleading a patient or indeed, delaying referral for more urgent cases. In the case of missing evidence (where we do not yet know the degree of efficacy, but clinical experience suggests effectiveness and the safety profile is good), things are even more complicated. According to Sackett, this is where conscientiousness and good judgment need to come into play, tempered and balanced with clinical experience. This is where many practitioners will cite “evidence-informed practice” as a more honest and accurate description of what they do—but definitions sometimes vary quite wildly.

Evidence-Informed Practice

The evidence-informed approach5 is described as “fundamental to practice,” aiming “to address the large gap between what is known and what is consistently done.” More specifically, “Evidence-informed decision-making models advocate for research evidence to be considered in conjunction with clinical expertise, patient preferences and values, and available resources” (emphasis mine). This perspective addresses real-world scenarios and “interactions between evidence and action . . . and complex relationships between health-care interventions and outcomes.”6 These statements come not from some niche profession struggling to achieve validation, but from a well-cited article addressing the down-to-earth questions of how far the uptake of evidence can be improved and what this means for real-world health care, both in policy-making and in patient outcomes. Its aim is specifically to promote evidence-informed health care, and since its publication almost a decade ago, it has been repeatedly cited in studies focused on realistic approaches to improving health-care standards across specialties.

When we look more closely at this literature, it quickly becomes clear that the evidence pyramid is not as solid as we often think. It has been considered outdated for nearly a decade, yet it remains received wisdom among many health-care professionals. Unfortunately, the main reason the pyramid has lost its standing is the sheer poor quality of supposedly “high-level” evidence in the form of systematic reviews. This is partly due to poor reporting and study design in the reviews themselves, and partly due to poor input material in the form of badly designed or badly performed primary research.7 The example illustrated in this source, unfortunately, reflects poorly on manual therapy research design, since it includes several poor-quality reviews of manual techniques for addressing adolescent idiopathic scoliosis.

Do we conclude then, that those techniques do not work? That the researchers did a poor job? Or that the techniques are not worth investigating further? Presumably, more and better research is needed, as is the ability to critically appraise the literature. What is clearly not needed, however, is for such a report to generate ideological sparring when there is work to be done.8 We cannot, on the basis of such a review, conclude anything about these techniques apart from the fact that improvements are needed in research design and literacy.

A CDC report commissioned in 2007 spells out the significance of evidence-informed practice even more clearly. The failings of systematic reviews and meta-analyses—the foundations of policy-making—are well acknowledged by major stakeholders, and calls for significant improvements to their usefulness have been made at the highest levels for well over a decade. Yet, the same report notes that misinformation regarding the relative value of such reviews continues to circulate in the media and among researchers in diverse fields, as a result of poor understanding of the speed of advances being made and a general lack of research literacy. Nevertheless, it concludes, “Advocating an unquestioning or inappropriate overreliance on systematic reviews might discourage innovation or promising practices.”9 After establishing that “evidence-informed” practice and policy is the desired ideal, the report proposes to promote and improve it by “identifying, mentoring, and supporting the champions of evidence-informed policy.” Interestingly, the points listed for improvement (higher-quality basic research and better communication between sectors) are the same ones that concern the integrative and allied health professions.

These findings come from the main health authority in the US, acknowledging that the quest for knowledge and evidence and the need to reap their benefits must also reflect the “real-world interaction between evidence and action.” This is not controversial in biomedicine or in public health; it is a pragmatic evaluation of the limitations of our scientific tools, and a realistic acknowledgment that clinical experience is of equal value. Yet, in the struggle to validate and professionalize manual therapy approaches and practices, it is often forgotten that many critical biomedical procedures rely a great deal more on experience and lower-quality evidence than is often realized. A telling example comes from the world of surgery. Though surgical techniques can be, and often are, tested on animals and 3D models before they are used on humans, there are also cases in which that is not possible due to anatomical and physiological differences.

One example I recently encountered is that of parotid dissection in the presence of tumors. The parotid glands are the largest of the salivary glands, sitting in front of and slightly below the ears. Anatomically they are closely interconnected with several major facial muscles, major cranial nerves, and significant arteries and veins. When tumors are encountered, biopsy is challenging due to the anatomical complexity of the parotid, particularly if the tumors are located below the facial nerve (deep lobe) rather than above it (superficial lobe). In the past, the approach of choice was total parotidectomy (removal of the whole gland), an invasive surgery that involves a lengthy incision from the ear into the neck, and noteworthy risk to the innervation of the face. More recent studies have proposed that less invasive resection (removal of a smaller section) is equally effective—and much safer—when lesions are present only in the superficial lobe.

However, this conclusion has been reached through recent (2021) reviews of real cases and surveys of the literature, looking back at surgeries undertaken based largely on clinical experience and surgeon opinion.10 Known as “convenience samples,” these are retrospective surveys of real-life patients encountered in the hospital setting, because this is the reality of the surgical profession and RCTs are impossible to implement in such cases. This is neither unusual nor controversial; it is accepted as a feature of the nature of surgery, and similar situations apply in many other medical fields, including pediatrics, orthopedics, and even cancer rehabilitation. This is why there are multiple types of study design, making it possible to investigate a plausible technique and report on its effectiveness, provided that all the necessary parameters are observed.

As recently as 2017, a study proposing a classification of parotid lesions, intended to make it easier to judge which approach to take, states: “Discussion on optimal treatment continues despite several meta-analyses,” explaining that “different schools favour one option or the other based on their experience, skills, and tradition.”11 The study and its findings do not waste energy berating one or the other school (nor, presumably, engaging in social media flame wars). Instead, they offer a way for future research to be more useful and thus allow true comparisons of effectiveness and better future guidelines, which may one day provide a basis for systematic reviews that actually serve a purpose, rather than clogging the airwaves with poorly evaluated, or indeed worthless, studies written to fill up lines on a CV rather than provide any real basis for practice.12

When it comes to bodywork and manual therapies, it has also been well established that pragmatic trial designs, qualitative studies, or whole-systems research13 may be more meaningful and effective than any number of limited RCTs, precisely because of the particularities of their applications. Such studies, looking at osteopathic, chiropractic, and massage techniques, are already in circulation.14 They do not provide definitive proof of anything except that it is possible to improve on the RCT and that study designs must be fit for purpose, as highlighted in the CDC report regarding evidence quality. Yet, rather than recognizing this as the basis for understanding the difference between evidence-based and evidence-informed practice, many within the manual therapy professions continue to hold to the “evidence nihilism” model as if it were the only path open to them, while others misinterpret “evidence-informed” altogether. As noted in the CDC report summary, this is probably because they are unaware of how fast things are moving.

If the purpose of promoting evidence-based medicine within the manual therapy professions is to raise the quality bar and walk in step with biomedical practice and science, then it may be time to catch up, not by shutting down any conversation that mentions clinical experience or plausible-but-unproven approaches, but by following biomedicine’s actual example.

Notes

1. David L. Sackett et al., “Evidence Based Medicine: What It Is and What It Isn’t,” British Medical Journal 312, no. 7023 (January 1996): 71–72, https://doi.org/10.1136/bmj.312.7023.71.

2. Robert D. Mootz, “When Evidence and Practice Collide,” Journal of Manipulative and Physiological Therapeutics 28, no. 8 (October 2005): 551–53, https://doi.org/10.1016/j.jmpt.2005.08.004.

3. Trisha Greenhalgh, How to Read a Paper: The Basics of Evidence-Based Medicine and Healthcare, 6th ed. (London: Wiley-Blackwell, 2019).

4. Edzard Ernst and M. H. Pittler, “Letter to the Editor: Efficacy or Effectiveness?,” Journal of Internal Medicine 260 (2006): 488–90, https://doi.org/10.1111/j.1365-2796.2006.01707.x.

5. Irwin Epstein, “Promoting Harmony Where There is Commonly Conflict: Evidence-Informed Practice as an Integrative Strategy,” Social Work in Health Care 48, no. 3 (April 2009): 216–31, https://doi.org/10.1080/00981380802589845; Paul Glasziou, “Evidence Based Medicine: Does It Make a Difference?,” British Medical Journal 330, no. 7482 (January 2005): 92, https://doi.org/10.1136/bmj.330.7482.92-a; R. McSherry, “Developing, Exploring and Refining a Modified Whole Systems-Based Model of Evidence-Informed Nursing” (unpublished PhD thesis, School of Health and Social Care, Teesside University, Middlesbrough, UK, n.d.).

6. Brendan McCormack et al., “A Realist Review of Interventions and Strategies to Promote Evidence-Informed Healthcare: A Focus on Change Agency,” Implementation Science 8, no. 107 (September 2013): 1–12, https://doi.org/10.1186/1748-5908-8-107.

7. Bob Phillips, “The Crumbling of the Pyramid of Evidence,” Archives of Disease in Childhood (blog), British Medical Journal, November 3, 2014, https://blogs.bmj.com/adc/2014/11/03/the-crumbling-of-the-pyramid-of-evidence.

8. Maciej Płaszewski and Josette Bettany-Saltikov, “Non-Surgical Interventions for Adolescents with Idiopathic Scoliosis: An Overview of Systematic Reviews,” PLoS One 9, no. 10 (October 2014): e110254, https://doi.org/10.1371/journal.pone.0110254.

9. Melissa Sweet and Ray Moynihan, Improving Population Health: The Uses of Systematic Reviews (New York: Milbank Memorial Fund and Centers for Disease Control and Prevention, 2007), https://stacks.cdc.gov/view/cdc/6921.

10. Samuel R. Auger et al., “Functional Outcomes After Extracapsular Dissection With Partial Facial Nerve Dissection for Small and Large Parotid Neoplasms,” American Journal of Otolaryngology 42, no. 1 (January–February 2021): 102770, https://doi.org/10.1016/j.amjoto.2020.102770; Georgios Psychogios et al., “Review of Surgical Techniques and Guide for Decision Making in the Treatment of Benign Parotid Tumors,” European Archives of Oto-rhino-laryngology 278, no. 1 (January 2021): 15–29, https://doi.org/10.1007/s00405-020-06250-x.

11. Miquel Quer et al., “Surgical Options in Benign Parotid Tumors: A Proposal for Classification,” European Archives of Oto-rhino-laryngology 274, no. 11 (June 2017): 3825–36, https://doi.org/10.1007/s00405-017-4650-4.

12. Michelle Ghert, “Should Residents and Fellows Be Encouraged to Publish Systematic Reviews and Meta-Analyses?,” Retraction Watch (blog), January 18, 2022, www.retractionwatch.com/2022/01/18/should-residents-and-fellows-be-encouraged-to-publish-systematic-reviews-and-meta-analyses.

13. Cheryl Ritenbaugh et al., “Whole Systems Research: A Discipline for Studying Complementary and Alternative Medicine,” Alternative Therapies in Health and Medicine 9, no. 4 (July–August 2003): 32–36.

14. Florian Schwerla et al., “Osteopathic Manipulative Therapy in Women With Postpartum Low Back Pain and Disability: A Pragmatic Randomized Controlled Trial,” Journal of Osteopathic Medicine 115, no. 7 (July 2015): 416–25, https://doi.org/10.7556/jaoa.2015.087; Howard Vernon et al., “A Randomized Pragmatic Clinical Trial of Chiropractic Care for Headaches With and Without a Self-Acupressure Pillow,” Journal of Manipulative and Physiological Therapeutics 38, no. 9 (November 2015): 637–43, https://doi.org/10.1016/j.jmpt.2015.10.002; Paul Finch and Susan Bessonnette, “A Pragmatic Investigation Into the Effects of Massage Therapy on the Self Efficacy of Multiple Sclerosis Clients,” Journal of Bodywork and Movement Therapies 18, no. 1 (January 2014): 11–16, https://doi.org/10.1016/j.jbmt.2013.04.001.

With 20 years in teaching and more than a decade in journalism and academic publishing, Sasha Chaitow, PhD, is series editor for Elsevier’s Leon Chaitow Library of Bodywork and Movement Therapies and former managing editor of The Journal of Bodywork & Movement Therapies. Based between the UK and Greece, she teaches research literacy and science reporting at the University of Patras, Greece. She is also a professional artist, gallerist, and educator who exhibits and teaches internationally.