Published on 21/12/2025
A Practical Guide to Sample Size Calculation in Clinical Trials
Calculating the correct sample size is one of the most important aspects of designing a clinical trial. An underpowered study may miss a true treatment effect, while an overpowered one could waste resources and expose more participants to risk unnecessarily. A well-justified sample size not only supports statistical validity but also satisfies regulatory and ethical standards.
This tutorial walks you through how to calculate sample size in clinical trials using core statistical parameters: power, significance level, and effect size. The guide includes practical examples, best practices, and regulatory expectations from the US FDA and EMA.
Why Sample Size Calculation Is Crucial
- Ensures high probability of detecting a clinically meaningful effect (power)
- Maintains ethical responsibility by minimizing participant exposure
- Optimizes budget and trial resources
- Meets regulatory expectations for trial justification
Improper calculations may result in non-approvable trials, additional studies, and costly delays.
Key Concepts in Sample Size Calculation
1. Significance Level (α)
The probability of a Type I error — falsely rejecting the null hypothesis. Typically set at 0.05.
2. Power (1−β)
The probability of correctly rejecting the null hypothesis when the alternative is true. Commonly set at 80% or 90%.
3. Effect Size
The minimum clinically meaningful difference between treatment groups. Smaller effects require larger samples.
4. Variability (σ)
The standard deviation of the primary outcome. Larger variability increases required sample size.
5. Allocation Ratio
The ratio of subjects in control versus treatment arms, often 1:1 but may vary (e.g., 2:1 in oncology).
6. Dropout Rate
The estimated percentage of participants who may withdraw or be lost to follow-up. A buffer of 10–20% is usually added to account for this.
Step-by-Step Sample Size Calculation
Step 1: Define the Trial Objective and Endpoint
- Objective: Demonstrate superiority, non-inferiority, or equivalence
- Endpoint: Choose the primary variable (e.g., blood pressure, survival rate)
Step 2: Choose the Statistical Test
- Continuous variables: t-test or ANCOVA
- Binary outcomes: Chi-square or logistic regression
- Time-to-event: Log-rank test or Cox regression
Step 3: Define Assumptions
Based on prior studies or pilot data, define:
- Expected mean and SD in each group (for continuous)
- Event rates (for binary or survival data)
- Alpha and power levels
- Dropout rate
Step 4: Use a Sample Size Formula or Software
Example for comparing two means (equal group sizes); n is the required number per group:
n = ( (Zα/2 + Zβ)² × 2 × σ² ) / δ²
- σ²: Estimated variance
- δ: Clinically significant difference
- Zα/2 and Zβ: Standard normal values for desired alpha and power
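The formula above can be implemented directly. Here is a minimal Python sketch using only the standard library; the parameter values in the example call are illustrative, not taken from any specific trial:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(alpha, power, sigma, delta):
    """Per-group n for comparing two means (two-sided test, equal groups)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # Z value for alpha/2
    z_beta = NormalDist().inv_cdf(power)           # Z value for desired power
    n = ((z_alpha + z_beta) ** 2 * 2 * sigma ** 2) / delta ** 2
    return ceil(n)  # round up to a whole subject

# Illustrative assumptions: SD = 12, clinically meaningful difference = 5
print(sample_size_two_means(alpha=0.05, power=0.80, sigma=12, delta=5))
```

Note that this normal-approximation formula slightly underestimates the exact t-test-based value; dedicated software such as PASS or SAS PROC POWER applies that refinement automatically.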
Or use software tools like:
- PASS
- G*Power
- SAS PROC POWER
- R (pwr package)
Step 5: Adjust for Dropouts
Example: If calculated sample size is 100 and 10% dropout is expected:
Adjusted n = 100 / (1 − 0.10) ≈ 111.1, rounded up to 112
Example Scenario: Superiority Trial
You are testing a new antihypertensive drug expected to reduce systolic BP by 8 mmHg more than placebo. Assume:
- Standard deviation (SD): 15 mmHg
- Alpha: 0.05 (two-sided)
- Power: 90%
- Allocation: 1:1
- Dropout: 15%
Using the formula above or software, you calculate approximately 74 subjects per group (an exact t-test-based calculation gives a slightly larger figure). After adjusting for 15% dropout (74 / 0.85 ≈ 87.1, rounded up), the final sample size is 88 per group, totaling 176 subjects.
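The scenario can be recomputed end to end with the normal-approximation formula; a short Python sketch using only the standard library:

```python
from math import ceil
from statistics import NormalDist

# Trial assumptions from the scenario above
delta, sigma = 8.0, 15.0   # expected BP difference and SD, in mmHg
alpha, power = 0.05, 0.90  # two-sided alpha and desired power
dropout = 0.15             # expected dropout rate

z = NormalDist().inv_cdf
n_per_group = ceil((z(1 - alpha / 2) + z(power)) ** 2 * 2 * sigma ** 2 / delta ** 2)
n_adjusted = ceil(n_per_group / (1 - dropout))

print(n_per_group, n_adjusted, 2 * n_adjusted)
```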
Common Mistakes in Sample Size Estimation
- ❌ Using unrealistic effect sizes to reduce sample size
- ❌ Ignoring dropouts or loss to follow-up
- ❌ Misusing statistical tests (e.g., using a t-test for skewed data)
- ❌ Using outdated pilot data without validation
- ❌ Not documenting assumptions in the SAP
Regulatory Expectations for Sample Size
Regulatory bodies such as the US FDA, EMA, and CDSCO require:
- Clear documentation of sample size assumptions in the protocol and SAP
- Use of clinically relevant effect sizes
- Inclusion of dropout adjustments
- Transparency on how estimates were derived
- Justification for deviation from planned size
Trial inspections may focus on these justifications, especially when the study fails to meet endpoints.
Best Practices for Reliable Sample Size Estimation
- Base estimates on robust data from earlier trials or meta-analyses
- Engage biostatisticians early in protocol development
- Document all assumptions clearly in the SAP
- Use sensitivity analyses to explore different scenarios
- Validate calculations through independent QA review or established pharma SOPs
Adaptive Designs and Sample Size Re-estimation
In complex trials, adaptive designs allow mid-trial re-estimation of sample size based on interim data. Strict blinding of interim results and regulatory acceptance of the design are required to preserve validity. Use such designs in consultation with a Data Monitoring Committee (DMC) and follow applicable regulatory guidance.
Conclusion: Thoughtful Sample Size Planning Leads to Robust Trials
Sample size determination is more than just a statistical exercise—it’s a foundational component of clinical trial integrity. Proper calculations minimize risk, meet ethical standards, and satisfy regulators. With a methodical approach and clear documentation, your study can be designed for success from the outset.
