Reconsidering and Revising Evidence-Based Practice in Pain Medicine: Steps Toward Sustaining the Profession?
The funding scheduled for the National Institutes of Health (NIH) and the National Science Foundation (NSF) in the upcoming fiscal year appears to reflect—and support—the call for enhanced scientific research and education advocated by President Obama in his recent State of the Union address.1,2 Surely, some portion of these funds will be allotted to the basic, translational, and clinical sciences in general. We hope, somewhat optimistically, that funding will also be provided to support and sustain the studies relevant to pain medicine that were fostered during the Decade of Pain Control and Research.
We ground this hope in the reality that to remain viable—if not “survive”—as both a profession and a practice, pain medicine must incorporate new information and capabilities and be empowered in these pursuits. Research—at both the bench and bedside—is the vehicle through which knowledge is obtained,3 and given the call for evidence to be the basis of healthcare reforms,4 the underlying mission of any such research endeavors is the realistic assessment of what works (and what doesn’t), in whom, and for what reason(s).
As we have noted, this may be especially problematic, given “…an ever-present gap between objectifiable aspects of pain and the pain patient that can be quantified and qualified via current techniques of assessment and diagnosis, and the subjectivity of pain…that remains inaccessible to these means.”5 This gap poses challenges to obtaining reliably objective measures and representations of pain with which to make clinically relevant determinations of if, when, whom, and how much to treat. It also raises the question of which metric(s), or system of metrics, can or should be used to evaluate the genuine clinical utility of current and emerging techniques and technologies.
The Genuine Clinical Utility of Evidence
At this point, it is crucial to ask what “genuine clinical utility” actually means in pain care. We have repeatedly argued against “cookie-cutter” approaches to pain diagnosis and treatment in light of the individual patterns of neurological network development, structure, and function (which reflect the interaction of biological and environmental variables) and the effect of such variation on the experience and expression of pain. This reinforces the maxim that any clinical intervention represents “an experiment with an n of 1.”6-8 But even an agnostic approach to experimentation begins from a point of prior evidence from which to guide hypotheses and methods.9 Therefore, given that contemporary neuroscience reveals the uniqueness of the brain-mind and that multidisciplinary pain research has shown the individuality of the phenomenal experience of pain,10 we suggest that any consideration of evidence must be grounded in the n-of-1 particularities of each pain patient.
However, we posit that the current paradigm of evidence assessment and evaluation—evidence-based practice (EBP)—may not be ideally configured to enable such personalized applications. As presented in Table 1, the limitations of EBP include 1) the lack of a consistent base of scientific evidence, 2) difficulties in applying techniques studied in controlled settings to the actual care of individual patients, and 3) the limited time and resources available for physicians to effectively review the evidence and determine its relevance to the case(s) at hand.11
This is not to say that EBP lacks validity; to be sure, it can serve as a valuable method for discerning the relative benefits of various treatments. But to do so, EBP must remain current and contextual; thus, we opine that its strengths must be fortified and its limitations and weaknesses elucidated and corrected. This will require reassessment and some revision of both the evidence at hand and the concept of EBP at large. Such efforts will be important, if not necessary, in order to 1) align EBP with the current body of epistemic capital (about neuroscience and pain),12 2) validly engage EBP as a system of evaluating clinical information, and 3) regard and use differing types and levels of evidence in appropriate ways to guide clinical care.13
Issues and Problems of Evidence-Based Practice
The present state of EBP overemphasizes and overvalues certain types of studies or data and undervalues others. As a result, biases for or against systems and categories of treatment can occur that affect any realistic consideration of clinical appropriateness, resolution of equipoise, and the rendering of right and good care.11
As Manchikanti recently reported, the values of various types of treatment are qualified by the “level of evidence” used to assess their utility.14 Very often, the multicenter randomized controlled trial (RCT) is judged to be the sine qua non of any such evaluation, as it can demonstrate the salience of certain treatment effects. But the RCT assumes a ceteris paribus orientation to pain and to patients with pain, which we feel is naïve at best and wholly unrealistic at worst, given the current state of knowledge regarding genetics, phenotype-environment interactions, etc. Moreover, the two key elements of the RCT work against its applicability: randomization limits its utility for evaluating effectiveness within a personalized approach to care, and control restricts the ecological validity of such studies.
Of course, such effects can be countered, at least to some degree, by methods such as preselecting subjects with specific characteristics that may render them susceptible to various treatment interactions. The modified randomized trial could then be employed to assess the influence of environmental/situational (ie, ecological) factors in the therapeutic milieu.15,16 In such cases, patient-focal “meaning effects” that can elicit placebo responses cannot and should not be disregarded: these variables are important to the clinical environment and encounter, are influential to therapeutic outcomes, and provide evidence to guide what treatment scenarios should entail.17-19 Thus, although the RCT has generally been viewed as the “gold standard” of evidence, a corpus of literature warns that “…all that glitters is not gold” and that modifications to the RCT, as well as other clinical research protocols, may be of equal, if not superior, value.20