What Is Going Wrong With Research?

Part 1 of a Two-Part Commentary

In a recent issue of Practical Pain Management, Mark Cooper and I summarized a workshop on the role of activated glia in the onset and course of maldynia.1 Expanding science suggests that neuroinflammation causes common features of a variety of diseases and conditions that are characterized by maldynia. How will practitioners from disparate specialties cooperate to identify these common features and their underlying pathophysiology, biochemistry, and biophysics? How can we identify therapeutic interventions that target these common features, crossing, as they must, disease differences and diverse specialty interests? This is the role of clinical research.

In part 1 of my commentary, I propose that there is an opportunity, if not a responsibility, for all practitioners to search for new knowledge in pain management. It’s not just a job for a select coterie of academic clinicians and scientists. I call the change in attitude “the democratization of clinical research.”

Four Arguments for the Democratization of Research
We are not required to like it, but evidence-based medicine is here to stay. This is, of course, as it should be. Optimal outcomes, practitioner accountability, and cost containment all require rational choices for the best diagnostic and therapeutic alternatives. Advancing technologies bombard us with an expanding array of choices. Our only hope of doing what is right and good for our patients is to apply the best evidence available with the virtues of respect, beneficence, and justice.2 Doing right and good according to the best evidence naturally depends on the quality of the evidence.

This editorial is about clinical research. First, taking the importance of clinical research as given, I present four arguments for its democratization. In the second part of this commentary, I relate my experience of creating and implementing a disease registry and database for a 20-year longitudinal study of the health effects of complex regional pain syndrome (CRPS). I do not intend this to be a report of the results of that study, which is far from complete; rather, it is a narrative of the opportunities and challenges inherent in undertaking clinical research. Finally, I offer some suggestions for ways in which pain practitioners can create information systems that will not only facilitate research in clinical practice but will probably also help each practitioner organize the objective impairments and subjective experience of each patient. Thus, the effort enhances the clinical practice of pain care.

Argument 1:
Most diagnostic and therapeutic methods go untested.

If a diagnostic test is noninvasive and appears to do no harm, then it often finds widespread use without reproducible evidence of efficacy and safety. An example is the history of thermography in the diagnosis of lumbar disc herniation.3 An unfortunate result of the nonspecificity of thermography for that diagnostic purpose is that it is now difficult in many communities to find a medical thermographer to study and monitor the progress of a patient who has CRPS, for example, and for whom small changes in skin temperature represent an important biomarker of disease.

New and off-label uses for previously approved medications and devices fall into the same trap. I confess to some anxiety when Mark Cooper and I recently reported the promising research on the effects of minocycline on the attenuation of activated glia that are associated with maldynia.4 The researchers reported improvement in pain behavior in the animal model. Minocycline is an antibacterial agent and is readily available in generic form. I hope it is only unattractive cynicism that makes me imagine that someone would be foolish enough to administer minocycline to anyone with maldynia without a prospective protocol that is approved by an accredited institutional review board.5 Nonetheless, when science bumps up against human nature, human nature sometimes comes out ahead, and we behave inappropriately.

Argument 2:
Most clinical testing of diagnostic and therapeutic methods is conducted by a small sample of investigators at academic centers.

An example comes from my other area of subspecialty interest, apart from CRPS: spine care. The annual meeting of the North American Spine Society has the traditional format of short paper presentations interspersed with symposia of invited lecturers. Each year a predictable group of investigators reappears to present its work. There are lots of new faces, but the “population” of presenters has a large sample of the “usual suspects.” This is not necessarily a bad thing. These are highly intelligent, experienced, and well-organized clinicians and scientists. I’ve known and admired many of them for almost 40 years. Their virtues, however, also can be limitations.

They work in institutions with advanced medical informatics. They have offices of research administration at their disposal to help with grant applications, financial management, and publication. These benefits come at the high price of “indirect costs” and bureaucratic inefficiency. The academic practitioners have residents and fellows whose “term of indenture” not only facilitates, but also commonly requires, the conduct of research. Students, residents, and fellows seldom have the time to access large samples of homogeneous study subjects to conduct long-term, prospective outcome studies. Until recently, retrospective research was the norm.

Those in the group of “usual suspects” are generally well acquainted with each other. Some are competitors; most are friends. Close relationships foster cooperation and the advent of organizations such as the National Spine Network. Shared protocols decrease the probability that different institutions will study the same problem using slightly different diagnostic criteria or slightly different outcome variables. But protocols usually do vary among institutions in small but deceptively important ways, which makes it difficult to reliably compare the results or pool them in meta-analysis.

Large-scale, longitudinal research is expensive, and the science is difficult. Controversy still rages about the clinical implications of the results of the Spine Patient Outcomes Research Trial (SPORT), which compared surgical and nonsurgical treatment methods for disc herniation.6 Broad-based clinical research using shared protocols for outcome measurement improves the probability of acquiring reliable knowledge about narrow risk-benefit margins for uncommon conditions.

Argument 3:
When independent practitioners perform well-planned clinical research, there will be less of the bias that derives from the ways in which research is presently funded and published.
