While anecdotal reports of CES treatment for anxiety disorders are invariably positive, a rigorous, scientific approach is required for analyzing, collating, and reporting results from the vast body of research done on CES. Because of varying methodologies and outcome measures, the myriad studies do not lend themselves to a simple consolidation of results. Therefore, a statistical method called meta-analysis is used to combine results in a meaningful way and allow an objective measure of the efficacy of CES.
Daniel L. Kirsch, PhD; Marshall F. Gilula, MD
Part 2 continues from the March 2007 issue of Practical Pain Management. Meta-analysis is a statistical method of combining the results of several studies that address a set of related research hypotheses. Because the results from different studies investigating different independent variables are measured on different scales, the dependent variable in a meta-analysis is some standardized measure of effect size. The usual effect size indicator is either the standardized mean difference or, in experiments with dichotomous outcomes (success versus failure), an odds ratio.
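The two effect size indicators mentioned above can be illustrated with a short sketch. The study numbers below are hypothetical and chosen only to show the arithmetic; they do not come from the CES literature.

```python
def cohens_d(mean_t, mean_c, sd_pooled):
    """Standardized mean difference between treatment and control groups."""
    return (mean_t - mean_c) / sd_pooled

def odds_ratio(success_t, fail_t, success_c, fail_c):
    """Odds ratio for a dichotomous (success versus failure) outcome."""
    return (success_t / fail_t) / (success_c / fail_c)

# Hypothetical continuous outcome: anxiety scores fall further with treatment.
d = cohens_d(mean_t=12.0, mean_c=18.0, sd_pooled=8.0)  # → -0.75

# Hypothetical dichotomous outcome: 30/40 treated vs. 15/40 controls improve.
orx = odds_ratio(success_t=30, fail_t=10, success_c=15, fail_c=25)  # → 5.0

print(d, orx)
```

Either statistic puts studies with different scales onto a common footing, which is what allows their results to be pooled.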
In this case, a meta-analysis of CES calculates the percent of patients improving versus the percent not improving to yield the treatment effect size r, which corresponds to the amount of patient improvement expressed as a percentage.32 In the previous issue, it was reported that results from 500 patients produced an effect size of r = .62. When the smaller groups of patients with specific types of anxiety-related disorders were broken out, the effect sizes were: panic disorder (r = .45), OCD (r = .68), bipolar disorder (r = .71), PTSD (r = .55), ADHD (r = .62), and phobias (r = .49). The overall mean effect size for the combined smaller groups was r = .64. These results can be compared with the accepted standardized ratings of r = .10 for a small effect, r = .30 for a medium effect, and r = .50 for a large effect.33 Thus it can be seen that the overall effect of CES for anxiety disorders is large, and that there is a notable effect of duration of use that enhances such outcomes.
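The reading of r as a percentage of patients improving can be sketched with Rosenthal's binomial effect size display (BESD), which maps an effect size r onto equivalent success rates of .50 + r/2 for the treated group and .50 − r/2 for the untreated group. Assuming this is the convention behind the figures cited above (reference 32 is not reproduced here), the overall r = .62 works out as follows:

```python
def besd(r):
    """Binomial effect size display (Rosenthal): convert effect size r
    into equivalent 'percent improved' rates for treatment vs. control."""
    return 0.5 + r / 2, 0.5 - r / 2

# Overall CES result reported in the text: r = .62
treated, control = besd(0.62)
print(f"improved with treatment: {treated:.0%}, without: {control:.0%}")
# → improved with treatment: 81%, without: 19%
```

Note that the simple (unweighted) mean of the six subgroup effect sizes is about .58; the reported combined figure of r = .64 presumably reflects weighting by group size in the original analysis.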
When any given study is published, the authors analyze the data and report whether or not the treatment utilized in their study had a discernible effect. They may report that the treatment had a significant effect at the .05, .01, or .001 level of probability. In the first instance, .05 indicates that if the study were repeated 100 times, the changes found would be expected to occur by chance alone only 5 times out of 100. In the case of the .01 or .001 level of probability, the result would be expected to occur by chance alone only one time out of 100 or one time out of 1,000, respectively.
Please refer to the April 2007 issue for the complete text.