Published on 21/12/2025
Using Causality Assessment Tools for Adverse Events in Clinical Trials
Introduction: The Importance of Causality Assessment
When an adverse event (AE) occurs in a clinical trial, one of the most important steps is assessing whether the event is related to the investigational product or to other factors such as underlying disease, concomitant medication, or procedures. Regulatory agencies such as the FDA, EMA, MHRA, and CDSCO require that sponsors and investigators use causality assessment tools or structured methods to evaluate the relationship between AEs and study drugs. This assessment influences not only regulatory reporting (e.g., expedited reports of SAEs and SUSARs) but also overall drug safety profiles and labeling decisions.
Several standardized tools exist to support causality judgments, the most widely used being the WHO-UMC causality scale and the Naranjo algorithm. These tools aim to reduce subjectivity and ensure consistency across investigators and sponsors. This article provides a step-by-step guide on causality assessment tools, how they are applied in clinical trials, regulatory expectations, and best practices for accurate attribution of AEs.
The WHO-UMC Causality Assessment Scale
The World Health Organization – Uppsala Monitoring Centre (WHO-UMC) scale is one of the most widely used systems for assessing causality. It classifies the relationship between a drug and an adverse event into six categories:
- Certain: A clinical event with a plausible time relationship to drug administration, not explained by other factors, with clear response to withdrawal (dechallenge).
- Probable / Likely: A reasonable temporal relationship to drug intake, unlikely to be explained by disease or other drugs, with response to dechallenge.
- Possible: A reasonable time relationship but could also be explained by other drugs or conditions.
- Unlikely: The temporal relationship to drug intake makes a causal relationship improbable, and alternative explanations are more likely.
- Conditional / Unclassified: More data required for assessment.
- Unassessable / Unclassifiable: Insufficient or contradictory information prevents judgment.
This structured approach ensures regulators and sponsors can see a transparent, reproducible rationale for causality assignments. For instance, if a patient develops elevated liver enzymes after starting the study drug, and the values normalize after discontinuation, the event may be classified as “Probable” or “Certain” depending on supporting data.
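The decision logic behind these categories can be sketched in code. The following is a deliberately simplified, illustrative mapping (the evidence fields and their handling are assumptions for illustration; real WHO-UMC assessment involves additional criteria, such as rechallenge data for "Certain", and always requires clinical judgment):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """Simplified evidence profile for one adverse event (illustrative only)."""
    plausible_time_relationship: bool
    other_explanation: bool        # disease or concomitant drugs could explain the event
    positive_dechallenge: bool     # event resolved after drug withdrawal
    sufficient_information: bool   # enough data exists to make a judgment

def who_umc_category(e: Evidence) -> str:
    """Return a WHO-UMC-style category for a simplified evidence profile."""
    if not e.sufficient_information:
        return "Unassessable / Unclassifiable"
    if not e.plausible_time_relationship:
        return "Unlikely"
    if e.other_explanation:
        return "Possible"
    if e.positive_dechallenge:
        # "Certain" would additionally require e.g. positive rechallenge or a
        # pharmacologically definitive event, which this sketch does not model.
        return "Probable / Likely"
    return "Conditional / Unclassified"

# The liver-enzyme example above: plausible timing, no alternative cause,
# values normalized after discontinuation (positive dechallenge).
print(who_umc_category(Evidence(True, False, True, True)))  # Probable / Likely
```

Encoding the criteria this way, for example in an eCRF edit check, makes the rationale behind each assignment explicit and auditable, even though the final call remains a medical one.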
The Naranjo Algorithm
Another commonly used tool is the Naranjo algorithm, a questionnaire-based method that scores causality based on 10 questions, such as whether the AE appeared after drug administration, whether the AE improved upon withdrawal, and whether rechallenge produced the event again. The total score categorizes causality as "Definite" (9 or higher), "Probable" (5–8), "Possible" (1–4), or "Doubtful" (0 or below).
While widely used in post-marketing settings, the Naranjo algorithm is sometimes considered too simplistic for complex trial data. Nevertheless, it remains valuable in providing a structured framework for causality decisions.
Other Causality Assessment Tools
In addition to WHO-UMC and Naranjo, several other tools are applied in specific therapeutic areas:
- RUCAM (Roussel Uclaf Causality Assessment Method): Designed for drug-induced liver injury (DILI).
- Bayesian and probabilistic models: Emerging approaches that integrate large datasets and prior knowledge.
- Algorithmic causality scales: Adapted for oncology and immunotherapy-related AEs.
Selection of the tool depends on the therapeutic area, regulatory requirements, and availability of objective data. For example, oncology trials often integrate CTCAE severity grading with causality assessments to build a more comprehensive safety profile.
Regulatory Expectations and Inspection Findings
Regulators expect consistency, documentation, and rationale in causality assessments. Key expectations include:
- FDA: Requires causality fields in IND safety reports and reconciliation with narratives.
- EMA: Mandates causality assignment in EudraVigilance submissions for SUSAR reporting.
- MHRA: Frequently cites missing or inconsistent causality documentation in inspections.
- ICH E2A/E2B: Identifies causality as a required data element for safety reporting.
For example, during an EMA inspection of an oncology trial, auditors cited a sponsor for failing to justify why multiple cases of hepatotoxicity were classified as “Unlikely.” The lack of documented rationale highlighted the importance of using structured causality tools.
Public trial registries such as the WHO International Clinical Trials Registry Platform emphasize the role of standardized AE documentation, reinforcing the need for reliable causality assessment methods across studies.
Challenges in Causality Assessment
Despite structured tools, causality assessment faces several challenges:
- Subjectivity: Different investigators may interpret scales differently without proper training.
- Incomplete data: Missing lab results or diagnostic confirmation complicates judgments.
- Multiple drugs: Patients on concomitant medications pose attribution challenges.
- Rechallenge limitations: Ethical considerations often prevent rechallenge, reducing certainty.
To mitigate these issues, sponsors should develop SOPs, train investigators, and require documentation of rationale for each causality judgment.
Best Practices for Applying Causality Tools
Sponsors and CROs can improve causality assessments by implementing best practices such as:
- Train investigators on WHO-UMC and other tools before trial initiation.
- Require narrative justification for each causality classification.
- Use drop-down menus in eCRFs with WHO-UMC categories to reduce variability.
- Perform data manager and medical monitor review of causality consistency.
- Reconcile causality across eCRFs, narratives, and pharmacovigilance databases.
For example, in a Phase III diabetes trial, causality assessments were cross-checked against concomitant medication records, ensuring consistency and reducing misclassification.
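The reconciliation step in particular is mechanical enough to automate. A minimal sketch, assuming simple record layouts keyed by subject and event term (the field names and structures here are hypothetical, not a specific EDC or safety-database schema):

```python
def find_causality_mismatches(ecrf_rows, pv_rows):
    """Return (subject_id, event_term, ecrf_causality, pv_causality) tuples
    where the eCRF and pharmacovigilance (PV) database disagree."""
    # Index PV records by (subject, event) for quick lookup.
    pv_index = {(r["subject_id"], r["event_term"]): r["causality"] for r in pv_rows}
    mismatches = []
    for r in ecrf_rows:
        key = (r["subject_id"], r["event_term"])
        pv_causality = pv_index.get(key)
        if pv_causality is not None and pv_causality != r["causality"]:
            mismatches.append((*key, r["causality"], pv_causality))
    return mismatches

# Hypothetical example: the two systems disagree on one hepatic event.
ecrf = [{"subject_id": "1001", "event_term": "ALT increased",
         "causality": "Probable / Likely"}]
pv = [{"subject_id": "1001", "event_term": "ALT increased",
       "causality": "Possible"}]
print(find_causality_mismatches(ecrf, pv))
```

Running such a check before each submission cycle surfaces discrepancies for medical monitor review, rather than leaving them for inspectors to find.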
Key Takeaways
Causality assessment tools are critical for ensuring accurate, consistent, and regulatory-compliant AE documentation. The WHO-UMC scale, Naranjo algorithm, and specialized methods provide structured frameworks to reduce subjectivity and support regulatory reporting. Sponsors and investigators must:
- Apply causality assessment tools consistently across trials.
- Document rationale for each judgment.
- Train site staff to ensure uniform understanding and application.
- Reconcile causality across systems for regulatory submissions.
By adopting these practices, clinical teams can strengthen pharmacovigilance, meet regulatory expectations, and safeguard patient safety in clinical trials.
