What Is Going Wrong With Research?

Part 1 of a Two-Part Commentary

In a recent issue of Practical Pain Management, Mark Cooper and I summarized a workshop on the role of activated glia in the onset and course of maldynia.1 Expanding science suggests that neuroinflammation causes common features of a variety of diseases and conditions that are characterized by maldynia. How will practitioners from disparate specialties cooperate to identify these common features and their underlying pathophysiology, biochemistry, and biophysics? How can we identify therapeutic interventions that target these common features, crossing, as they must, disease differences and diverse specialty interests? This is the role of clinical research.

In part 1 of my commentary, I propose that there is an opportunity, if not a responsibility, for all practitioners to search for new knowledge in pain management. It’s not just a job for a select coterie of academic clinicians and scientists. I call this change in attitude “the democratization of clinical research.”

Four Arguments for the Democratization of Research
We are not required to like it, but evidence-based medicine is here to stay. This is, of course, as it should be. Optimal outcomes, practitioner accountability, and cost containment all require rational choices for the best diagnostic and therapeutic alternatives. Advancing technologies bombard us with an expanding array of choices. Our only hope of doing what is right and good for our patients is to apply the best evidence available with the virtues of respect, beneficence, and justice.2 Doing right and good according to the best evidence naturally depends on the quality of the evidence.

This editorial is about clinical research. In this first part, I present four arguments for its democratization. In the second part of the commentary, I relate the experience of creating and implementing a disease registry and database for a 20-year longitudinal study of the health effects of complex regional pain syndrome (CRPS). I do not intend this to be a report of the results of that study, which is far from complete; rather, it is a narrative of the opportunities and challenges inherent in undertaking clinical research. Finally, I offer some suggestions for ways in which pain practitioners can create information systems that will not only facilitate research in clinical practice but will probably also help each practitioner organize the objective impairments and subjective experience of each patient. The effort thus enhances the clinical practice of pain care.

Argument 1:
Most diagnostic and therapeutic methods go untested.

If a diagnostic test is noninvasive and appears to do no harm, then it often finds widespread use without reproducible evidence of efficacy and safety. An example is the history of thermography in the diagnosis of lumbar disc herniation.3 An unfortunate result of the nonspecificity of thermography for that diagnostic purpose is that it is now difficult in many communities to find a medical thermographer to study and monitor the progress of a patient who has CRPS, for example, and for whom small changes in skin temperature represent an important biomarker of disease.

New and off-label uses for previously approved medications and devices fall into the same trap. I confess to some anxiety when Mark Cooper and I recently reported the promising research on the effects of minocycline on the attenuation of activated glia that are associated with maldynia.4 The researchers reported improvement in pain behavior in the animal model. Minocycline is an antibacterial agent and is readily available in generic form. I hope it is only unattractive cynicism that makes me imagine that someone would be foolish enough to administer minocycline to anyone with maldynia without a prospective protocol that is approved by an accredited institutional review board.5 Nonetheless, when science bumps up against human nature, human nature sometimes comes out ahead, and we behave inappropriately.

Argument 2:
Most clinical testing of diagnostic and therapeutic methods is conducted by a small sample of investigators at academic centers.

An example comes from my other area of subspecialty interest, apart from CRPS: spine care. The annual meeting of the North American Spine Society has the traditional format of short paper presentations interspersed with symposia of invited lecturers. Each year a predictable group of investigators reappears to present its work. There are lots of new faces, but the “population” of presenters has a large sample of the “usual suspects.” This is not necessarily a bad thing. These are highly intelligent, experienced, and well-organized clinicians and scientists. I’ve known and admired many of them for almost 40 years. Their virtues, however, also can be limitations.

They work in institutions with advanced medical informatics. They have offices of research administration at their disposal to help with grant applications, financial management, and publication. These benefits come at the high price of “indirect costs” and bureaucratic inefficiency. The academic practitioners have residents and fellows whose “term of indenture” not only facilitates, but also commonly requires, the conduct of research. Because their terms are short, however, students, residents, and fellows seldom have the time or the access to large samples of homogeneous study subjects needed to conduct long-term, prospective outcome studies. Until recently, retrospective research was the norm.

Those in the group of “usual suspects” are generally well acquainted with each other. Some are competitors; most are friends. Close relationships foster cooperation and the advent of organizations, such as the National Spine Network (nationalspinenetwork.org). Shared protocols decrease the probability that different institutions will study the same problem using slightly different diagnostic criteria or slightly different outcome variables. But the protocols usually do vary among institutions in small but deceptively important ways, which makes it difficult to reliably compare or join the results in meta-analysis.

Large-scale, longitudinal research is expensive, and the science is difficult. Controversy still rages about the clinical implications of the results of the Spine Patient Outcomes Research Trial (SPORT), which compared surgical and nonsurgical treatment methods for disc herniation.6 Broad-based clinical research using shared protocols for outcome measurement improves the probability of acquiring reliable knowledge about narrow risk-benefit margins for uncommon conditions.

Argument 3:
When independent practitioners perform well-planned clinical research, there is less of the bias that derives from the ways in which research is presently funded and published.

It is well established that the general use of new diagnostic and therapeutic methods often fails to duplicate the efficacy and safety demonstrated by developmental research conducted by the inventor. There are three explanations for the limited reproducibility of well-studied outcomes of a diagnostic or therapeutic method. First, the initial finding may have been false. The second is the mysterious “decline effect,” which, as I argue below, probably does not exist as a distinct phenomenon. And the third is bias. Occasionally, potential or actual bias in research is obvious. More commonly, however, bias has subtle effects. Some of those effects derive from how research is funded and published, even when no one profits financially from the results of the research.

Sources of Bias
What is studied and how it is studied often depend on the research agenda of the funding agent. Sometimes clinical research is sponsored with an eye to decreasing the cost of care. Sometimes advocacy groups have an interest in how the condition they represent is viewed by the public and among professionals. Such interests influence how research is funded and whether or not the results are published. Journals have a bias that favors positive results. Naturally, an inventor has a personal interest in the success of his invention, be it in fame, fortune, or academic advancement, and there is no bright line separating fortune from fame or from academic advancement.

When a clinician thinks he or she has a good idea, he or she gives it a try, observing, one hopes, the principles of respect, beneficence, and justice. If the idea works, the clinician will try to develop some commercial enterprise by which to profit from his or her invention and then publish the results in a respected venue. If the idea doesn’t work, the venture capitalists will disappear, and, until recently, there would be no mention at meetings or in journals of the negative results.

But even when a trial is successful and the idea is widely adopted, there may be unforeseen adverse events that militate against the use of the invention, or secondary users may fail to achieve the same success as the inventor. In my primary specialty of orthopedic surgery, a good example of this phenomenon is the two-incision, minimally invasive total hip replacement.7 Only the inventor observed superior results without complications. The technique is not widely used.

How can we understand the outpatient use of ketamine for the treatment of maldynia in CRPS? There is little incentive for commercial exploitation of ketamine research. Based on a few reports and considerable “chatter” at meetings, it is increasingly used around the world.8 Clinical research is mostly limited to retrospective observations. Shared prospective protocols for the use of ketamine would provide much-needed information about its safety and efficacy. If more pain practitioners reported their prospective and controlled observations of their experience with a broad range of diagnostic and therapeutic methods, not just with the use of ketamine, then the latency between the introduction of new technology and its acceptance or, more particularly, its rejection would be shorter.

The principle that new diagnostic and therapeutic methods should be continually tested by disinterested observers also applies to old and accepted methods. Speculation about what might have been long ago is only useful when it leads to altered outcomes in the future. For too many, the understanding of the relationship between Reye’s syndrome and hypersensitivity to pertussis vaccine,9 or the relationship between phocomelia and thalidomide administration during pregnancy,10,11 came too late. If practitioners organize their practices for the controlled observation of outcome events, might our understanding of such relationships have a shorter latency? That is speculative, indeed, but a positive answer would tilt the cost-benefit calculus of clinical research further in its favor.

Eliminating financial conflicts of interest from clinical research removes an important source of bias. Inventors and their institutions often have a financial interest in the success of their new diagnostic or therapeutic method.12 I don’t want to overstate this argument. It would be seriously unfair to assume that such financial bias always prevents the reliable reporting of events. It would also be seriously naive to think that all effects of financial bias enter the awareness of the investigator, who can then exert behavioral control and act appropriately. Be that as it may, there can be no doubt that when research to validate the efficacy and safety of a new diagnostic or therapeutic method is conducted broadly by disinterested parties using prospective protocols of shared outcome measures, the results are more trustworthy than information whose biases are unknowable. The margin of trust may be small, but the margin of treatment effect between two pharmaceuticals, for example, or between two surgical methods may be equally small.

Argument 4:
“…Most published research findings are false.”

This quotation is from the title of a mathematically based treatise by John Ioannidis.13 He demonstrates that even under the best conditions (a well-defined problem studied with little bias in an adequately powered randomized controlled trial whose result has a probability of “chance” occurrence of less than 0.05, the usual standard for a “significant” difference), we can expect the result to have a “positive predictive value” (PPV), that is, a probability of actually being true, of not more than 0.85. The less rigorous the design of the trial, the smaller the sample, and the greater the number of tests or outcome measures, the lower the PPV will be. Most study designs yield PPVs of less than 0.25. Is it any wonder that replication of research commonly fails to reproduce initial, positive results?
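
To make the arithmetic concrete, here is a minimal sketch of the PPV calculation, following the formula Ioannidis gives for research conducted in the presence of bias; the parameter values below (pre-study odds, power, and bias) are illustrative assumptions, not figures reported for any particular study.

```python
# Sketch of the positive predictive value (PPV) model from Ioannidis (2005).
# R     = pre-study odds that the tested relationship is true
# alpha = Type I error rate (the usual 0.05 threshold for "significance")
# beta  = Type II error rate (1 - statistical power)
# u     = bias: the proportion of analyses that would not otherwise have been
#         "significant" but are reported as such
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Probability that a statistically 'significant' finding is in fact true."""
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Adequately powered RCT, 1:1 pre-study odds, little bias: PPV is about 0.85
print(round(ppv(R=1.0, beta=0.20, u=0.10), 2))   # 0.85

# Underpowered, exploratory design with long odds and more bias: PPV well below 0.25
print(round(ppv(R=0.1, beta=0.80, u=0.30), 2))   # 0.12
```

The point of the sketch is that the PPV depends at least as much on the pre-study odds and on bias as on the p-value threshold itself.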

I propose that what has been described as the “decline effect” (the common observation that replication of “significant” research findings produces less-impressive results or that the observed effect simply disappears) is nothing more than the counterintuitive reality that we do science poorly, and even when we honestly think we are doing it well, we are not. Jonah Lehrer said as much when he declared that the decline effect may be no more than the “decline of illusion”—that the “truth” of research findings is illusory.14

One might argue that if trained and dedicated career scientists don’t get it right most of the time, one can’t expect clinical practitioners (“downtown street docs”) to do better. I argue the opposite, and skepticism is not my purpose here, nor was it Ioannidis’s. Certainly, a clinician’s subjective impressions of the trends in the outcomes of his treatments are woefully inaccurate. And just as certainly, if every clinician studies the outcomes of patients with different measures using different protocols, then the combined results are not likely to be better than reports of the practitioners’ “impressions.” But if clinicians use shared protocols for the use of a shared set of outcome instruments, then their shared observations might achieve importance.

What Ioannidis does, and what Lehrer fails to do, is propose a set of precautions and procedures that help to minimize the “intrinsic error” (my expression, not his) of clinical research. His first suggestion is for “better powered evidence, [larger studies]…” Ioannidis’s second recommendation is for “some kind of registration or networking of data collections or investigators.”13
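
As a rough illustration of the first suggestion, the sketch below compares the statistical power of a single practice’s small case series with that of the same outcome measure pooled across a hypothetical network of practices using a shared protocol. The effect size and sample sizes are assumptions chosen only for illustration, and the calculation ignores between-site heterogeneity.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    (normal approximation), for a standardized effect size and equal group sizes."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * sqrt(n_per_group / 2)
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# Hypothetical numbers: a modest treatment effect (d = 0.4) studied in one
# practice (20 patients per arm) vs. pooled across 10 practices that use the
# same protocol and outcome instrument (200 patients per arm).
print(round(power_two_sample(0.4, 20), 2))    # ~0.24: a single practice is underpowered
print(round(power_two_sample(0.4, 200), 2))   # ~0.98: the pooled network is adequately powered
```

Pooling does not remove the need for careful design, but it addresses the power problem that no single practice can solve alone.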

Long before I knew of Ioannidis, I had the good fortune to be given the opportunity to create the centralized disease registry for CRPS. I call CRPS a disease, not just a syndrome, because I’m convinced that advancing science will identify its neuroinflammatory pathophysiology. Be that as it may, my experience in designing and implementing this clinical research project is a cautionary tale of the challenges and problems we confront.

In part 2 of this commentary, I describe that experience and offer some suggestions on how pain practitioners might band together to improve the knowledge we can draw from observations of our everyday experience in evaluating and managing our patients’ distress and disability caused by the experience of pain.

 

Last updated on: October 3, 2011