Compare statistical and clinical significance

Measures of statistical significance quantify the probability that a study's results are due to chance. Clinical significance, on the other hand, refers to the magnitude of the actual treatment effect (i.e., the difference between the intervention and control groups, also known as the "treatment effect size"), which determines whether the results of the trial are likely to change current medical practice (Ranganathan et al., 2015). The P value, frequently used to measure statistical significance, is the probability that the observed results are due to chance rather than to a real treatment effect. The conventional cut-off for a P value to be considered statistically significant is 0.05 (or 5%); P < 0.05 implies that the probability of the study's results being due to chance is less than 5% (Ranganathan et al., 2015).

In clinical practice, the "clinical significance" of a finding depends on its implications for existing practice, and the treatment effect size is one of the most important factors driving treatment decisions. Sharma (2021) suggests that clinical significance should reflect "the extent of change, whether the change makes a real difference to subject lives, how long the effects last, consumer acceptability, cost-effectiveness, and ease of implementation." While there are established, conventionally accepted thresholds for statistical significance testing, no such thresholds exist for evaluating clinical significance (Sharma, 2021). Often it is the judgment of the clinician (and the patient) that decides whether a result is clinically significant.
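
To make the distinction concrete, the short Python sketch below (an illustration only, using simulated numbers and assuming NumPy and SciPy are available; nothing here comes from the cited studies) computes a P value for a hypothetical two-arm comparison and reports the treatment effect size separately, since the P value by itself says nothing about whether the effect is large enough to matter clinically.

# Illustrative sketch: the P value answers "could this difference be chance?",
# while the effect size answers "how big is the difference?". Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical outcomes (e.g., survival in months) for two large arms whose
# true means differ only slightly.
control = rng.normal(loc=5.9, scale=2.0, size=4000)
treatment = rng.normal(loc=6.2, scale=2.0, size=4000)

t_stat, p_value = stats.ttest_ind(treatment, control)
effect = treatment.mean() - control.mean()  # the "treatment effect size"

print(f"P value = {p_value:.4f} (statistically significant if < 0.05)")
print(f"Treatment effect = {effect:.2f} months (clinical relevance is a separate judgment)")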

According to Van Cutsem et al. (2018), statistical significance depends heavily on the study's sample size: with large sample sizes, even minor, clinically inconsequential treatment effects can appear statistically significant, so the reader must interpret such "significance" carefully. A study published in the Journal of Clinical Oncology compared overall survival in 569 patients with advanced pancreatic cancer who were randomized to receive erlotinib plus gemcitabine versus gemcitabine alone. Median survival was "significantly" prolonged in the erlotinib/gemcitabine arm (6.24 months vs. 5.91 months, P = 0.038). P = 0.038 means that there is only a 3.8% chance that the observed difference between the groups occurred by chance, which is below the traditional cut-off of 5% and therefore statistically significant. The clinical relevance of this "positive" study, however, lies in the "treatment effect", the difference in median survival between 6.24 and 5.91 months: a mere ten days, which most oncologists would agree is a clinically irrelevant "improvement" in outcomes, especially when the added toxicity and costs of the combination are considered. In clinical research, therefore, statistically significant study results are not necessarily clinically meaningful. While statistical significance indicates the reliability of the study results, clinical significance reflects their impact on clinical practice.
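
The sample-size point can be demonstrated directly. The sketch below (again purely illustrative, with simulated data loosely patterned on the 6.24 vs. 5.91 month example; it assumes NumPy and SciPy and is not a reanalysis of the trial) applies the same small treatment effect at increasing sample sizes and shows how the P value typically crosses the 0.05 threshold only once the trial is large, even though the effect itself never changes.

# Same small effect, different sample sizes: statistical significance tends to
# grow with n even though the clinical relevance of a ~0.33-month gain does not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.33  # months, roughly the 6.24 - 5.91 difference

for n_per_arm in (50, 500, 5000):
    control = rng.normal(loc=5.91, scale=3.0, size=n_per_arm)
    treatment = rng.normal(loc=5.91 + true_effect, scale=3.0, size=n_per_arm)
    _, p = stats.ttest_ind(treatment, control)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n per arm = {n_per_arm:5d} -> P = {p:.3f} ({verdict})")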

Ranganathan, P., Pramesh, C. S., & Buyse, M. (2015). Common pitfalls in statistical analysis: Clinical versus statistical significance. Perspectives in Clinical Research, 6(3), 169–170. https://doi.org/10.4103/2229-3485.159943

Sharma, H. (2021). Statistical significance or clinical significance? A researcher's dilemma for appropriate interpretation of research results. Saudi Journal of Anaesthesia, 15(4), 431–434. https://doi.org/10.4103/sja.sja_158_21

Van Cutsem, E., Hidalgo, M., Canon, J. L., Macarulla, T., Bazin, I., Poddubskaya, E., … & Hammel, P. (2018). Phase I/II trial of pimasertib plus gemcitabine in patients with metastatic pancreatic cancer. International Journal of Cancer, 143(8), 2053–2064.

