TY - JOUR
T1 - Power, precision, and sample size estimation in sport and exercise science research
AU - Abt, Grant
AU - Boreham, Colin
AU - Davison, Gareth
AU - Jackson, Robin
AU - Nevill, Alan
AU - Wallace, Eric
AU - Williams, Mark
PY - 2020/6/19
Y1 - 2020/6/19
N2 - The majority of papers submitted to the Journal of Sports Sciences are experimental. The data are collected from a sample of the population and then used to test hypotheses and/or make inferences about that population. A common question in experimental research is therefore “how large should my sample be?”. Broadly, there are two approaches to estimating sample size – using power and using precision. If a study uses frequentist hypothesis testing, it is common to conduct a power calculation to determine how many participants would be required to reject the null hypothesis assuming an effect of a given size is present. That is, if there’s an effect of the treatment (of given size x), a power calculation will determine approximately how many participants would be required to detect that effect (of size x or larger) a given percentage of the time (often 80%). Power calculations as conducted in popular software programmes such as G*Power (Faul et al., 2009) typically require inputs for the estimated effect size, alpha, power (1 – β), and the statistical tests to be conducted. All of these inputs are subjective (or informed by previous studies), and it is up to the researcher to decide on the most appropriate balance between type 1 error rate (false positive), type 2 error rate (false negative), cost, and time. In contrast, estimating sample size via precision involves estimating how many participants would be required for the frequentist confidence interval or Bayesian credible interval resulting from a statistical analysis to be of a certain width. The implication is that a narrower confidence interval or credible interval allows a more precise estimation of where the “true” population parameter (e.g., mean difference) might be.
AB - The majority of papers submitted to the Journal of Sports Sciences are experimental. The data are collected from a sample of the population and then used to test hypotheses and/or make inferences about that population. A common question in experimental research is therefore “how large should my sample be?”. Broadly, there are two approaches to estimating sample size – using power and using precision. If a study uses frequentist hypothesis testing, it is common to conduct a power calculation to determine how many participants would be required to reject the null hypothesis assuming an effect of a given size is present. That is, if there’s an effect of the treatment (of given size x), a power calculation will determine approximately how many participants would be required to detect that effect (of size x or larger) a given percentage of the time (often 80%). Power calculations as conducted in popular software programmes such as G*Power (Faul et al., 2009) typically require inputs for the estimated effect size, alpha, power (1 – β), and the statistical tests to be conducted. All of these inputs are subjective (or informed by previous studies), and it is up to the researcher to decide on the most appropriate balance between type 1 error rate (false positive), type 2 error rate (false negative), cost, and time. In contrast, estimating sample size via precision involves estimating how many participants would be required for the frequentist confidence interval or Bayesian credible interval resulting from a statistical analysis to be of a certain width. The implication is that a narrower confidence interval or credible interval allows a more precise estimation of where the “true” population parameter (e.g., mean difference) might be.
UR - http://www.scopus.com/inward/record.url?scp=85089982546&partnerID=8YFLogxK
U2 - 10.1080/02640414.2020.1776002
DO - 10.1080/02640414.2020.1776002
M3 - Editorial
C2 - 32558628
SN - 0264-0414
VL - 38
SP - 1933
EP - 1935
JO - Journal of Sports Sciences
JF - Journal of Sports Sciences
IS - 17
ER -