What Is Going Wrong With Research? Finding the Right Answer

Part 2 of a Two-Part Commentary

In a recent issue of Practical Pain Management, Mark Cooper and I summarized a workshop on the role of activated glia in the onset and course of maldynia.1 [Editor's Note: You can read Part 1 of this article: Giving Severe and Chronic Pain a Name: Maldynia.] Expanding science suggests that neuroinflammation causes common features of a variety of diseases and conditions that are characterized by maldynia. How will practitioners from disparate specialties cooperate to identify these common features and their underlying pathophysiology, biochemistry, and biophysics? How can we identify therapeutic interventions that target these common features, crossing, as they must, disease differences and diverse specialty interests? This is the role of clinical research.

In this editorial, I propose that there is an opportunity, if not a responsibility, for all practitioners to search for new knowledge in pain management. It’s not just a job for a select coterie of academic clinicians and scientists. The work begins by making the tools of clinical research accessible to everyone. It’s a worthy effort.

Enhancing the Practice of Pain Care
Optimal outcomes, practitioner accountability, and cost containment all require rational choices for the best diagnostic and therapeutic alternatives. Advancing technologies bombard us with an expanding array of choices. Our only hope of doing what is right and good for our patients is to apply the best evidence available with the virtues of respect, beneficence, and justice.2 Doing right and good according to the best evidence, naturally, depends on the quality of the evidence. In the first part of this commentary, I presented four arguments for the democratization of clinical research.

In this editorial, I relate an experience of creating and implementing a disease registry and database for a 20-year longitudinal study of the health effects of complex regional pain syndrome (CRPS). I do not intend this to be a report of the results of that study, which is far from complete. Rather, it is a narrative of the opportunities and challenges inherent in undertaking clinical research. Finally, I offer some suggestions for ways in which pain practitioners can create information systems that not only will facilitate research in clinical practice, but probably will also help practitioners organize the objective impairments and subjective experience of each patient. Thus, the effort enhances the clinical practice of pain care.

A Cautionary Tale
Toward the end of 2006, a representative of a family foundation approached the Reflex Sympathetic Dystrophy Syndrome Association (RSDSA)3 with an offer to fund research on the long-term health effects of CRPS, previously known as reflex sympathetic dystrophy. RSDSA is a national, not-for-profit organization whose mission is to promote greater awareness and earlier recognition of CRPS, to fund innovative research, and to provide access to resources and support to people with CRPS, their friends, and families. The foundation’s offer amounted to an informal “request for proposal,” under the unusual circumstance that the meeting of the foundation’s board, which would decide on the research proposal and on the allocation of funds, was to take place 3 weeks hence.

The board acted on a “research proposal abstract.” Abstracts or summaries of original research reports and review articles are common in academic journals and books. A common example of abstract formatting is the one still used by The New England Journal of Medicine: background, methods, results, and conclusions.4 Formatting styles range from nearly a dozen headings to none at all.5 I mention this, along with the present example, to illustrate how any basic science or clinical problem can be cast into a familiar, recognizable format. One can write a research proposal abstract in 30 minutes. Refining a proposal abstract to be worthy of pursuit takes longer, of course.

Thus it was that RSDSA was able to meet the deadline of the foundation’s board and obtain funding as the sponsor of a 20-year study of the health effects of CRPS. The proposal abstract turned into a full research proposal with protocols for each of the abstract headings. The proposal turned into an application for approval by an accredited review board. In the present case, I avoided academic entanglements by applying to a private review board. Institutional review board (IRB) is a term of art for such boards because they are commonly maintained by academic institutions, medical centers, and scientific institutions. IRBs ensure the proper conduct of research and protect the rights and safety of study subjects.6 For research using animal models, the analogous agency is the Institutional Animal Care and Use Committee.7 To satisfy the needs of nonacademic or nonaffiliated investigators and sponsors, notably pharmaceutical and device manufacturers, private IRBs exist.

The proposed survey—a collection of enrollment and outcome instruments for RSDSA’s 20-year research—is very large: nine instruments, each of which is a lengthy set of questions, some with branching and cascading responses. The size of the questionnaires and the acquisition of responses to them over the Internet required expertise and skills beyond those of the average office computer user—certainly beyond those of this office computer user.
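To make the branching concrete, here is a minimal sketch, in Python, of how such an instrument might be modeled. The question identifiers, text, and structure are invented for illustration; they are not the actual RSDSA survey.

```python
# A minimal sketch, assuming hypothetical question identifiers and text,
# of how a branching, cascading instrument might be modeled in code.
# This is an illustration, not the actual structure of the RSDSA survey.
from dataclasses import dataclass, field

@dataclass
class Question:
    qid: str
    text: str
    # Maps a response value to the follow-up questions it triggers.
    branches: dict = field(default_factory=dict)

def administer(question, answers):
    """Collect a question and, depth-first, any branches its answer opens."""
    response = answers.get(question.qid)
    asked = [(question.qid, response)]
    for follow_up in question.branches.get(response, []):
        asked.extend(administer(follow_up, answers))
    return asked

# A pain question that cascades into a severity item only on a "yes."
severity = Question("q1a", "Rate your worst pain this week (0-10).")
pain = Question("q1", "Have you had pain this week?",
                branches={"yes": [severity]})

print(administer(pain, {"q1": "yes", "q1a": "8"}))
# -> [('q1', 'yes'), ('q1a', '8')]
```

Even this toy version shows why a nine-instrument survey with cascading responses quickly exceeds what an office computer user can build and maintain alone.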

RSDSA awarded a competitive contract for database management to Emerge.MD.8 The company was known to me as the database manager of the North American Research Committee on Multiple Sclerosis,9 which conducts an ongoing survey of people with multiple sclerosis. The requirements that militated for contracting a database management company were: a) integrity of the data, b) confidentiality of the data, and c) long-term security of the data. This is a 20-year project, but there’s no reason why RSDSA couldn’t maintain this registry of people with CRPS in perpetuity. Most academic journals expect clinical research protocols to follow their outcome measures for only 2 years. I’ve always wondered whether that’s really because most conditions reach a steady state 2 years after diagnosis or treatment, or because the labor for most academic research comes from students, interns, residents, and fellows whose terms of service are limited. Maintaining clinical research protocols takes an order of magnitude more work once the “indentured servant” who started the project leaves the training program. Cynicism is not my purpose here. The observation supports the arguments for the democratization of clinical research.

RSDSA also hired a part-time project manager to serve the needs of study subjects in remote locations. Enrollees, now numbering more than 500, come from every country of the English-speaking world, and, of course, the study is open to anyone sufficiently fluent in English to register, give consent, and complete the survey instruments. Every research project of this sort must have someone who is readily available to study subjects to answer questions and help them solve technical problems. When the study subjects are in remote locations on the Internet, a project manager is indispensable. There is regular communication among the study respondents, the project manager, the principal investigator, the administrative manager (RSDSA’s executive director), and the database managers of Emerge.MD. It’s a curious consequence of our interconnected world that this Internet-based research is conducted by a principal investigator in Washington, DC; an administrative manager in Milford, Connecticut; a project manager in Los Angeles, California; and database managers and programmers in Phoenix, Arizona, with the approval of an IRB in Olympia, Washington.

Office-based Research
There are some obvious similarities and differences between Internet- and office-based research. Digital data are acquired, encoded, and stored by the same methods whether the respondents sit at home on the Internet or in a practitioner’s office, and whether they sit at a computer terminal or fill out pencil-and-paper forms.

Requirements for data integrity, security, and confidentiality are the same on the Internet and in the office. Increasingly popular electronic medical record (EMR) systems permit office or clinic patients to register for outcomes research as they register for care. Protocols that simultaneously register patients for both are ideal. Systems that link medical records and outcome instruments require the same confidentiality and HIPAA10 compliance that are required by statute and by IRBs for the protection of research subjects.

Once data are properly formatted and stored, practitioners can share and analyze them from any source, whether the Internet, a terminal in the practitioner’s office, or pencil-and-paper questionnaires transcribed into digital form.
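As a hedged illustration of that point, the sketch below shows one way responses from any capture channel might be normalized into a single record format before storage. The field names are assumptions for illustration, not a published standard.

```python
# A minimal sketch of normalizing responses from different capture
# channels (Internet, office terminal, transcribed paper) into one
# record format so they can be pooled for analysis. Field names are
# illustrative assumptions.
import json
from datetime import date

def make_record(subject_id, instrument, item_scores, source):
    """Normalize one completed instrument into a channel-neutral record."""
    assert source in {"internet", "office_terminal", "paper_transcribed"}
    return {
        "subject_id": subject_id,     # de-identified study code
        "instrument": instrument,     # e.g., "VAS-P", "PDI", "SF-36"
        "items": item_scores,         # item id -> response value
        "source": source,             # capture channel, kept for audit
        "recorded_on": date.today().isoformat(),
    }

# A transcribed paper form and a web response now analyze identically.
rec = make_record("CRPS-0042", "PDI", {"pdi_1": 7, "pdi_2": 5},
                  "paper_transcribed")
print(json.dumps(rec, indent=2))
```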

There are some unique advantages to office-based research:

  • Respondents are more “comfortable” when giving consent and completing a survey if there is direct, face-to-face help and explanation of the protocol. Experience with the CRPS 20-year study demonstrates that Internet and phone communication serves well to help respondents work their way through the questionnaire. Nonetheless, when the helper is sitting next to the respondent, the process is easier and faster.
  • Clinical research can pay for itself (well, not really, but in part).
  1. “Pay-for-performance” (P4P) programs, sometimes associated with “pay-for-reporting” programs, provide a small premium in reimbursement for evaluation and management of specified conditions (eg, spinal stenosis).11 P4P is “process research,” not unlike a clinical “checklist.” Checklists are now routine in most operating rooms. Whether or not P4P programs will last is unclear.
  2. Commercial sponsorship may provide funds to support clinical research. Participation in a focused trial of a device or pharmaceutical facilitates the establishment of registration and data management routines that can far outlive the initial research protocol. You want your staff to be comfortable with such routines. You want your office assistant to ask what protocol the patient is on, rather than to shudder and try to hide when encountering a protocol sheet and consent form on the patient’s chart.
  3. Seed money or complete funding for research is available from a variety of sources when a unique problem seeks a clinical research solution. The 20-year longitudinal study of the health effects of CRPS is an example. That study has the support of a family foundation. RSDSA has a program of small research grants (up to $10,000) intended to provide seed money that enables investigators to demonstrate the potential of a unique, innovative research proposal in order to obtain larger, long-term funding. Such programs are not intended to support “generic” outcomes research; but, as with commercial sponsorship, the data management systems developed with the funding for a unique protocol can outlive their initial purpose.
  • Enrollment in office-based research is the gateway to enrollment in an appropriate “disease registry.” The relationship between an outcomes research protocol and a disease registry is tricky. There are layers of privacy and confidentiality that might be breached if the two are not clearly separated. When patients consent to inclusion in clinical research, they consent to a specific protocol that defines how each respondent’s identity and personal health information are protected. A disease registry protects the enrollee’s identity and medical information until a qualified investigator requests informed consent to obtain that information.

Organizations that maintain disease registries understand their obligation to protect the interests of enrollees, and investigators understand that the registries do not contain an open database that they can mine at will. Hospitals and academic medical centers have learned the difficult lesson that their records are not available for any student, house officer, or attending, let alone any outside agency, to study without IRB approval, no matter how interesting or important the research question might be. Younger readers might be incredulous to learn that anyone ever thought a doctor should have open access to any and all hospital records pertaining to any particular research interest; but within this writer’s memory, the Belmont principles (respect, beneficence, and justice) were but twinkles in ethicists’ eyes.12
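To illustrate the separation just described, here is a minimal sketch of a consent-gated registry lookup. The class, identifiers, and workflow are hypothetical simplifications of what a real registry must implement.

```python
# A minimal sketch of the consent gate described above: a registry
# releases an enrollee's record only after the enrollee has given
# informed consent to a specific investigator's protocol. All names
# and workflow details are illustrative assumptions.
class Registry:
    def __init__(self):
        self._enrollees = {}      # enrollee_id -> protected record
        self._consents = set()    # (enrollee_id, protocol_id) pairs

    def enroll(self, enrollee_id, record):
        self._enrollees[enrollee_id] = record

    def record_consent(self, enrollee_id, protocol_id):
        # Called only after the enrollee gives informed consent
        # to the specific investigator's protocol.
        self._consents.add((enrollee_id, protocol_id))

    def request_record(self, enrollee_id, protocol_id):
        if (enrollee_id, protocol_id) not in self._consents:
            raise PermissionError("No informed consent on file for this protocol.")
        return self._enrollees[enrollee_id]

registry = Registry()
registry.enroll("CRPS-0042", {"dx": "CRPS type I"})
registry.record_consent("CRPS-0042", "IRB-2007-001")
print(registry.request_record("CRPS-0042", "IRB-2007-001"))
```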

What Pain Practitioners Want to Know
Outcomes research in pain management has a huge problem: The primary outcome “measure” is subjective, and, therefore, it can’t be measured. One can measure mechanical or thermal “pain threshold.”13 But the experience of pain is subjective and cannot be measured on any scale that is subject to external validation.14 The visual analog scale for pain (VAS-P) is a longitudinal measure, tracking the pain experience of one person over time. The VAS-P cannot be used to compare the experience of pain of more than one person at one time. It is so sensitive to testing conditions that the comparison of even very large groups of subjects is suspect; comparison of samples from different studies conducted by different pain management clinics is out of the question. Add “quality of life” (QoL) as an outcome measure and we have subjectivity squared.
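A small worked example may sharpen the point that the VAS-P is longitudinal only. The scores below are invented; the computation simply expresses each subject’s change against his or her own baseline.

```python
# Invented scores illustrating the within-subject use of the VAS-P.
vas_series = {
    "subject_A": [(0, 80), (6, 65), (12, 40)],  # (week, VAS-P on 0-100)
    "subject_B": [(0, 55), (6, 50), (12, 45)],
}

for subject, series in vas_series.items():
    baseline, final = series[0][1], series[-1][1]
    # Each subject's change is expressed against his or her own baseline.
    print(f"{subject}: change = {final - baseline} points "
          f"({100 * (final - baseline) / baseline:+.0f}% of baseline)")

# Comparing subject_A's final 40 with subject_B's final 45 across subjects
# is exactly the between-person comparison the scale cannot support.
```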

We can measure markers of pain and QoL (behaviors and events that are reasonably associated with the presence or absence of pain, or with a good or ill QoL), but not “the thing itself.” The subjective experiences of pain and QoL are immeasurable. This is a problem for other fields, too, even if investigators in fields such as diabetes care don’t think so. Glucose levels and hemoglobin A1c, along with glomerular filtration rates and urinary protein, are objective measures. There are well-validated scores for retinal integrity and peripheral vascular competence. Increasingly, and appropriately, diabetes and other disease specialists understand that what is important about the improvement or decline of such outcome measures is their effect on each patient’s or study participant’s QoL, a person’s experience of his or her condition, what Edmund Pellegrino called “the predicament of illness.”15

For the 20-year CRPS study, we elected to use a set of VAS-P scales and the short form of the McGill Pain Questionnaire,16 along with the Pain Disability Index (PDI)17 and the 36-item Short Form Health Survey (SF-36).18 There are dozens of outcome instruments for pain and QoL (or, considered as measures of the negative end of the spectrum, instruments to measure disability). Investigators criticize them all, and rightly so. There can be no perfect instrument when the quantity to be measured is only a representation, and often a very vague representation, of “the thing itself.” Diversity of opinion about which are the best outcome instruments leads to a diversity of research protocols that makes it very difficult, if not impossible, to compare and combine research findings.

If two surveys use two different questions or different instruments to measure the same event, such as remission of CRPS, work disability, or depression, can one reliably compare them? Will the results of the 20-year longitudinal study be worthy of comparison to Srinivasa Raja’s and Robert Schwartzman’s results?19,20 One must reconcile such questions, and more, before one can combine the data in a meta-analysis. I suspect that a “compare-and-contrast” exercise often is worthwhile, but a meta-analysis is not.

I’ve made the argument that clinical research is a good thing and that it should be part of every pain management practitioner’s routine. The value of such research, in part, is to study rare events and small margins of efficacy and safety in various diagnostic and therapeutic methods. Value requires numbers—large samples that are beyond the scope of any one pain management practice. The data from clinic A must be comparable with the data from clinic B down the street. The comparison will be less reliable if clinic A uses the Beck Depression Inventory, clinic B uses the Zung Questionnaire,21 and clinic C uses the Profile of Mood States22 to measure the same event. Is the SF-12 v2 comparable with the individual subsections of the SF-36? Is the PDI the same as the Pain Disability Questionnaire (PDQ)? Who is to make the Solomonic choice among them?
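As a toy illustration of the instrument-mixing problem, consider the sketch below. It uses the score ranges of the Beck and Zung instruments as usually published (Beck: 21 items scored 0-3, total 0-63; Zung: 20 items scored 1-4, raw total 20-80); the patient scores themselves are invented.

```python
# A toy illustration: two depression instruments place the same kind of
# patient on different numeric ranges, so raw scores from clinic A and
# clinic B cannot simply be pooled. Patient scores are invented.
clinic_a = {"instrument": "Beck", "min": 0, "max": 63, "scores": [12, 25, 31]}
clinic_b = {"instrument": "Zung", "min": 20, "max": 80, "scores": [38, 52, 61]}

def rescale(clinic):
    """Map raw scores onto a common 0-1 range; a crude harmonization that
    still cannot guarantee the instruments measure the same construct."""
    span = clinic["max"] - clinic["min"]
    return [round((s - clinic["min"]) / span, 3) for s in clinic["scores"]]

print(rescale(clinic_a))  # [0.19, 0.397, 0.492]
print(rescale(clinic_b))  # [0.3, 0.533, 0.683]
```

Even after rescaling, nothing guarantees that a 0.5 on the Beck scale means what a 0.5 means on the Zung scale; the arithmetic is easy, the comparability is not.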

An Immodest Proposal
My proposal is not immodest because I would presume to dictate to anyone else a particular way of choosing among the various opportunities to conduct clinical research. It is immodest because I think it is actually possible for the professions of pain care to succeed in the way that I propose they should. There should be a fundamental collection of questions and instruments that everyone uses as the baseline data set. The various specialty and subspecialty institutions that represent the interests of pain practitioners of every sort should get together and choose that data set.

Naturally, different subspecialists will want different outcome measures for their particular practice. Interventional pain managers require different information than do physical therapists. But there is a subset of outcome measures that is required of all practitioners who are concerned with the outcomes of their treatment for painful conditions. The question of comparing and combining information across specialties other than pain management can be even more troublesome. For example, some specialists prefer the Oswestry Disability Index and others the Roland-Morris Disability Questionnaire23 as a measure of functional capacity, whereas pain practitioners use the PDQ24 or PDI. Is there a single instrument that will measure the spectrum of capacity and disability across musculoskeletal medicine and pain management?

As difficult as it is to choose the proper instrument to measure pain and QoL, it is even harder to select an instrument, or a combination of them, to measure impairments of mood and affect. But I propose that we must choose. Once all the stakeholders agree that consensus is worthwhile, reaching consensus on which instruments form the basic data set is possible. It won’t happen quickly, but it is possible.

Stakeholders in Outcomes Instruments
If there were a set of outcomes instruments that represents the best approximation of the measurement we seek, then a well-formatted survey could be included as a module of any and all practice management and EMR packages that are marketed throughout the healthcare industry. Such an “outcomes evaluation module,” like enrollment demographics modules and opioid management programs, would be uniform across the industry. I say “would” because no such uniformity exists, and I am counting on the advent of consensus across the broad range of academic, clinical, and corporate interests.
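If such a module ever existed, its interface might look something like the sketch below. The class and method names are purely hypothetical, since, as noted, no industry standard exists.

```python
# A minimal sketch of the interface a uniform "outcomes evaluation
# module" might expose to an EMR or practice management package.
# All names are hypothetical; there is no such industry standard.
from abc import ABC, abstractmethod

class OutcomesModule(ABC):
    @abstractmethod
    def enroll(self, patient_id: str, protocol_id: str) -> None:
        """Register a consented patient under a specific study protocol."""

    @abstractmethod
    def capture(self, patient_id: str, instrument: str, responses: dict) -> None:
        """Store one completed instrument (e.g., VAS-P, PDI, SF-36)."""

    @abstractmethod
    def export_deidentified(self, protocol_id: str) -> list:
        """Return records in the shared basic data set format."""

class InMemoryOutcomesModule(OutcomesModule):
    """A toy implementation a vendor would replace with real storage."""
    def __init__(self):
        self._records = []
        self._enrollment = {}

    def enroll(self, patient_id, protocol_id):
        self._enrollment[patient_id] = protocol_id

    def capture(self, patient_id, instrument, responses):
        self._records.append({
            "protocol": self._enrollment[patient_id],
            "instrument": instrument,
            "responses": responses,
        })  # patient identity deliberately dropped from the research record

    def export_deidentified(self, protocol_id):
        return [r for r in self._records if r["protocol"] == protocol_id]

module = InMemoryOutcomesModule()
module.enroll("pt-001", "CRPS-20yr")
module.capture("pt-001", "VAS-P", {"score": 62})
print(module.export_deidentified("CRPS-20yr"))
```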

Success in selecting a basic data set for outcomes assessment will be hard won, even if we’re only talking about the practice of pain management. Everyone would have to buy into the decision of a representative group of clinical and basic scientists and practitioners, industry representatives, biometricians, and biostatisticians. The group would probably have to include the computer programmers and technicians who transcribe the instruments into applications suitable for incorporation into commercial products of the EMR industry. Having worked with the staff of Emerge.MD to set up the “questions document,” the application, and the database for the 20-year CRPS study, I can declare that these are not easy tasks. But it can be done. Consider the energy and effort that goes into any of the Cochrane Reviews25 or the cooperation that went into the multicenter, multispecialty collaboration to complete the SPORT (Spine Patient Outcomes Research Trial) research.26

Conclusion
In conclusion, I return to John Ioannidis for a note of caution. He admonishes investigators to select the questions they wish to study very carefully. One must anticipate the probability of finding a true relationship, and use that anticipation to define the protocol and the method of analysis with care. Shared data sets and protocols must not become a “treasure trove” for data-mining. Nonetheless, office-based clinical research can provide the power to give research findings credibility, if not “teeth.”

Last updated on: December 15, 2011