The difference between statistical and clinical significance is subtle but important. A result that we identify as statistically significant (usually defined at the 5% level, with P ≤ 0.05) may not be clinically significant (usually defined as an effect size, or difference between treatments, of ≥ 10%). The P-value, though readily obtained from any standard statistical software package, may not be very meaningful in real life. What we really want to know is the probability that an observed outcome is of clinical or practical significance. To this end, we have to use our experience and understanding of the clinical situation to define a threshold of clinical significance, such as the ≥ 10% difference noted above. An alternative to the P-value is the confidence interval, which defines a range, expressed in the same units in which the data were measured, within which the true value is expected to lie with a stated level of confidence (usually 95%). The confidence interval also conveys statistical significance and the strength and direction of the effect, and it enables us to consider the clinical relevance of the outcome. For these reasons, authors and editors should make an effort to report confidence intervals around their point estimates. In the following editorial, Dr. Joseph explains the importance of reporting confidence intervals in the context of health care reform.
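To illustrate how a confidence interval complements a P-value, here is a minimal sketch in Python. The trial counts, the 10% clinical threshold, and the choice of a Wald interval with a pooled two-proportion z-test are assumptions made for illustration; they are not drawn from the editorial or from any particular study.

```python
import math

def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% confidence interval for the difference in two
    independent success proportions (treatment 1 minus treatment 2)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided P-value from a pooled two-proportion z-test."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical trial: 78/120 successes on treatment A, 60/120 on treatment B.
diff, (lower, upper) = diff_proportions_ci(78, 120, 60, 120)
p = two_proportion_p_value(78, 120, 60, 120)

print(f"difference = {diff:.3f}, 95% CI = ({lower:.3f}, {upper:.3f}), P = {p:.4f}")
# Read the CI against the clinical threshold (here, a 10% difference):
# an interval excluding 0 indicates statistical significance, while an
# interval lying mostly above 0.10 also suggests clinical significance.
```

With these hypothetical counts the output is a difference of 0.150, a 95% CI of roughly (0.026, 0.274), and P ≈ 0.019: the result is statistically significant, yet the interval straddles the 10% threshold, so the clinical significance of the effect remains uncertain, which is exactly the distinction the P-value alone cannot convey.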
© 2010 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.