Clinical Research Made Simple — Trusted Resource for Clinical Trials, Protocols & Progress (https://www.clinicalstudies.in)
Published Tue, 05 Aug 2025 — https://www.clinicalstudies.in/using-seroconversion-as-an-endpoint-in-vaccine-trials/

Using Seroconversion as an Endpoint in Vaccine Trials

Seroconversion as a Vaccine Trial Endpoint: A Practical, Regulatory-Ready Guide

What “Seroconversion” Means in Practice—and When It’s the Right Endpoint

“Seroconversion” (SCR) translates immunology into a binary decision: did a participant mount a meaningful antibody response or not? In vaccine trials, it’s typically defined as a ≥4-fold rise in titer from baseline (for seronegatives often from below LLOQ) to a specified post-vaccination timepoint (e.g., Day 28 or Day 35), or meeting a threshold titer such as neutralization ID50 ≥1:40. Unlike geometric mean titers (GMTs), which summarize central tendency, SCR focuses on responders and is easy to interpret for dose selection, schedule comparisons, and immunobridging. It is especially powerful when baselines vary widely, when there are “ceiling effects” near the ULOQ, or when non-normal titer distributions complicate parametric tests.

When should SCR be primary? Consider it for: (1) early to mid-phase studies comparing dose/schedule arms where a clinically meaningful proportion of responders is the key decision; (2) bridging across populations (e.g., adolescents vs adults) when ethical or feasibility constraints limit classic efficacy endpoints; and (3) outbreak contexts where rapid, binary readouts accelerate go/no-go decisions. When should it be secondary? If your primary goal is to detect magnitude differences (breadth and peak titers) or to model correlates of protection, GMT or continuous neutralization/binding endpoints may be preferred, with SCR supporting the narrative. Either way, define SCR in the protocol, lock analysis rules in the SAP, and ensure the lab manual guarantees consistency of baselines, timepoints, and cut-points across sites.

Defining Seroconversion Correctly: Assay Limits, Baselines, and Data Rules

SCR is only as credible as the lab methods behind it. Your lab manual and SAP must predefine analytical parameters and handling rules so the binary “responder” label reflects biology, not analytics. Typical ELISA IgG parameters include LLOQ 0.50 IU/mL, ULOQ 200 IU/mL, and LOD 0.20 IU/mL. Pseudovirus neutralization might span 1:10–1:5120, with < 1:10 imputed as 1:5 for calculations. Baseline values below LLOQ are commonly set to LLOQ/2 (e.g., 0.25 IU/mL or 1:5), and the post-vaccination value is compared against this standardized baseline. Values above ULOQ must be either repeated at higher dilution or handled per SAP (e.g., set to ULOQ if repeat is infeasible). These decisions influence the fold-rise, and thus SCR classification.

Illustrative Seroconversion Definitions (Declare in Protocol/SAP)
| Endpoint | Assay Specs | Baseline Rule | Responder Definition |
|---|---|---|---|
| ELISA IgG SCR | LLOQ 0.50; ULOQ 200; LOD 0.20 IU/mL | Baseline <LLOQ set to 0.25 | ≥4× rise from baseline, or ≥10 IU/mL |
| Neutralization SCR | Range 1:10–1:5120; LOD 1:8 | <1:10 set to 1:5 | ID50 ≥1:40, or ≥4× rise |
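As a minimal sketch, the ELISA responder rule from the table above can be expressed directly in code. All constants are the document's illustrative limits and must be replaced by the SAP's locked values; note that the case study later conditions the 10 IU/mL threshold on baseline <LLOQ, a variant not implemented here.

```python
# Illustrative assay constants from the table above — replace with the SAP's
# locked parameters in any real analysis
LLOQ, ULOQ = 0.50, 200.0      # ELISA IgG, IU/mL
BASELINE_IMPUTE = LLOQ / 2    # baseline <LLOQ set to 0.25 IU/mL

def elisa_scr(baseline, post, fold=4.0, abs_threshold=10.0):
    """Responder per the illustrative table: >=4x rise from the (imputed)
    baseline, or an absolute post-vaccination titer >=10 IU/mL."""
    b = BASELINE_IMPUTE if baseline < LLOQ else baseline
    p = min(post, ULOQ)       # >ULOQ capped per SAP when repeat dilution is infeasible
    return p / b >= fold or p >= abs_threshold
```

For example, a baseline of 0.3 IU/mL is imputed to 0.25, so a Day 35 value of 2.0 IU/mL is an 8× rise and classifies as a responder, while a rise from 5.0 to 8.0 IU/mL (1.6×, below 10 IU/mL) does not.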

Consistency across time and geography matters. If you change cell lines, antigens, or detection reagents mid-study, run a bridging panel and file a comparability memo. Pre-analytical controls—blood draw timing, centrifugation, storage at −80 °C, ≤2 freeze–thaw cycles—should be harmonized in the central lab network to avoid spurious changes in SCR. While SCR is a clinical endpoint, reviewers often ask whether clinical supplies and labs were in a state of control. Citing representative PDE (e.g., 3 mg/day residual solvent) and MACO cleaning limits (e.g., 1.0–1.2 µg/25 cm²) in your quality narrative shows end-to-end control from manufacturing to measurement, which helps ethics committees and DSMBs trust the readout.

Positioning SCR in Objectives, Estimands, and Decision Rules

Turn SCR into a disciplined decision tool by anchoring it to clear objectives and estimands. For dose/schedule selection, a common co-primary framework pairs GMT and SCR: first test non-inferiority on GMT (lower-bound ratio ≥0.67), then compare SCR using a margin (e.g., difference ≥−10%). In pediatric/adolescent immunobridging, you may declare co-primary SCR NI and GMT NI versus adult reference. Estimands should address intercurrent events: a treatment policy estimand counts responders regardless of non-study vaccine receipt, while a hypothetical estimand imputes what SCR would have been without breakthrough infection. Choose one up front and align your missing-data plan (e.g., multiple imputation vs. complete-case).

Operationalize decisions in the SAP. Example: “Select 30 µg over 10 µg if SCR difference is ≥+7% with non-inferior GMT; if SCR gain is <7% but Grade 3 systemic AEs are ≥2% lower, choose the safer dose.” Multiplicity control matters if SCR is co-primary with GMT or tested in multiple age strata—use gatekeeping (hierarchical) or Hochberg procedures. For protocol and SOP exemplars aligning endpoints to analysis shells, see pharmaValidation.in. For high-level regulatory expectations on endpoints and analysis principles, consult public resources at FDA.gov.

Statistics for Seroconversion: Power, Sample Size, and Non-Inferiority Margins

On the statistics side, SCR is a binomial endpoint analyzed with risk differences or odds ratios and exact or Miettinen–Nurminen confidence intervals. Power depends on the expected control SCR, the effect (superiority) or margin (non-inferiority), and allocation ratio. For non-inferiority in immunobridging, margins of −5% to −10% are common, justified by assay precision, clinical judgment, and historical platform data. Assume, for example, adult SCR 90% and pediatric SCR 90% with an NI margin of −10%: to show pediatric−adult ≥−10% with 85–90% power at α=0.05, you might need ~200–250 pediatric participants versus a concurrent or historical adult reference, accounting for ~5–10% attrition and stratification (e.g., age bands).

Illustrative Sample Size Scenarios for SCR
| Comparison | Assumptions | Objective | Power | N per Group |
|---|---|---|---|---|
| Dose A vs Dose B | SCR 85% vs 92%; α=0.05 | Superiority (Δ≥7%) | 85% | 220 |
| Ped vs Adult | 90% vs 90%; NI margin −10% | Non-inferiority (Δ≥−10%) | 90% | 240 (ped); 240 (adult or well-matched ref) |
| Schedule 0/28 vs 0/56 | 88% vs 92%; α=0.05 | Superiority (Δ≥4%) | 80% | 300 |
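The pediatric non-inferiority scenario can be approximated with a standard normal-approximation sample-size formula for a risk difference. This is a sketch under an assumed one-sided α=0.025, not the exact or Miettinen–Nurminen calculation a SAP would typically specify:

```python
from math import ceil
from statistics import NormalDist

def n_per_group_ni(p_test, p_ref, margin, alpha=0.025, power=0.90):
    """Normal-approximation N per group for non-inferiority of a risk
    difference (H0: p_test - p_ref <= -margin), one-sided alpha."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    var = p_test * (1 - p_test) + p_ref * (1 - p_ref)
    delta = (p_test - p_ref) + margin   # distance from the NI margin
    return ceil((za + zb) ** 2 * var / delta ** 2)

# Pediatric vs adult: SCR 90% in both arms, NI margin -10%, 90% power
n = n_per_group_ni(0.90, 0.90, 0.10)
```

This yields roughly 190 per group before attrition; inflating for ~5–10% dropout lands in the ~200–250 range quoted in the text.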

Predefine population sets: per-protocol for immunogenicity (met visit windows, valid specimens) and modified ITT to reflect real-world deviations. The SAP should specify sensitivity analyses excluding out-of-window draws or samples with pre-analytical flags (e.g., a third freeze–thaw cycle). Multiplicity: if SCR is co-primary with GMT, use hierarchical testing (e.g., GMT NI first, then SCR NI) to control familywise error. When event rates shift (e.g., baseline seropositivity in outbreaks), blinded sample size re-estimation based on observed variance and proportion is acceptable if pre-specified and firewall-protected.
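The hierarchical (fixed-sequence) testing idea can be sketched as a simple gate: each hypothesis is tested at full alpha only if every earlier one in the pre-specified order succeeded. The test names and p-values below are illustrative, not trial results:

```python
def hierarchical_gatekeeper(tests, alpha=0.05):
    """Fixed-sequence gatekeeping: evaluate ordered (name, p_value) pairs at
    full alpha, stopping at the first failure so familywise error is
    controlled at alpha; later hypotheses cannot be declared significant."""
    results, passed = {}, True
    for name, p in tests:
        passed = passed and (p < alpha)
        results[name] = passed
    return results

# If GMT NI fails, SCR NI is not declared even with a very small p-value
outcome = hierarchical_gatekeeper([("GMT_NI", 0.20), ("SCR_NI", 0.001)])
```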

Case Study (Hypothetical): Selecting a Dose by SCR Without Sacrificing Tolerability

Design: Adults are randomized 1:1:1 to 10 µg, 30 µg, or 100 µg on Day 0/28. Co-primary endpoints are ELISA IgG GMT at Day 35 and SCR (≥4× rise or ≥10 IU/mL if baseline <LLOQ). Safety focuses on Grade 3 systemic AEs within 7 days. Assay parameters: ELISA LLOQ 0.50; ULOQ 200; LOD 0.20 IU/mL; neutralization assay 1:10–1:5120 with <1:10 set to 1:5. Results (dummy): SCR: 10 µg=86% (95% CI 80–91), 30 µg=93% (88–96), 100 µg=95% (91–98). GMT is highest at 100 µg but Grade 3 systemic AEs rise from 3.0% (10 µg) → 4.8% (30 µg) → 8.5% (100 µg). The SAP’s decision rule selects the higher dose only if it adds ≥5% SCR; otherwise, the lower dose is chosen when its GMT is non-inferior and its Grade 3 AE rate is at least 2% lower in absolute terms. Here, 30 µg vs 100 µg shows only +2% SCR with ~3.7% fewer Grade 3 AEs; 30 µg is selected as RP2D. Sensitivity analyses (per-protocol only, excluding out-of-window samples) confirm the choice.

Illustrative SCR and Safety Snapshot (Day 35)
| Arm | SCR (%) | 95% CI | Grade 3 Systemic AEs (%) |
|---|---|---|---|
| 10 µg | 86 | 80–91 | 3.0 |
| 30 µg | 93 | 88–96 | 4.8 |
| 100 µg | 95 | 91–98 | 8.5 |
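Arm-level CIs like those above are typically Wilson score intervals. The case study does not state arm sizes, so the n≈200 per arm used here is an assumption for illustration:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(successes, n, conf=0.95):
    """Wilson score interval for a binomial proportion such as SCR."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

# 86% SCR with a hypothetical n=200 gives an interval near the quoted 80-91%
lo, hi = wilson_ci(172, 200)
```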

Interpretation: SCR sharpened the risk–benefit judgment: the marginal SCR gain from 30→100 µg did not justify higher reactogenicity. The DSMB endorsed 30 µg and recommended stratified analyses by age (≥50 years) to confirm consistency; in older adults SCR remained ≥90% with acceptable tolerability, supporting a uniform adult dose.

Documentation, Inspection Readiness, and Reporting SCR in CSRs

Auditors and reviewers will follow your SCR from raw data to narrative. Keep the Trial Master File (TMF) contemporaneous: lab manual (assay limits; cut-points), specimen handling SOPs (centrifugation, storage, shipments), versioned SAP shells for SCR tables/figures, and change-control records for any mid-study assay updates with bridging panels. In the CSR, present both absolute SCR and ΔSCR between arms with 95% CIs, stratified by age, sex, region, and baseline serostatus; pair with GMT ratios and safety. For multi-country programs, harmonize translations for ePRO fever diaries and ensure background serostatus definitions match across central labs.

Finally, align your endpoint strategy with recognized quality and regulatory frameworks so decisions travel smoothly from protocol to label. While seroconversion is a “clinical” readout, end-to-end quality still matters—manufacturing remains in a state of control (representative PDE 3 mg/day; cleaning MACO 1.0–1.2 µg/25 cm² as examples), and clinical data meet ALCOA principles (attributable, legible, contemporaneous, original, accurate). With clear definitions, fit-for-purpose assays, and disciplined statistics, SCR becomes a robust, inspection-ready endpoint that accelerates development without compromising scientific integrity.

Published Fri, 01 Aug 2025 — https://www.clinicalstudies.in/phase-iii-vaccine-efficacy-trial-design-and-execution/

Phase III Vaccine Efficacy Trial Design and Execution

How to Plan and Run Phase III Vaccine Efficacy Trials

Purpose of Phase III: Confirming Efficacy, Safety, and Consistency at Scale

Phase III vaccine trials provide the pivotal evidence needed for licensure: they confirm clinical efficacy, characterize safety across thousands of participants, and may assess consistency across manufacturing lots. The typical design is multicenter, randomized, double-blind, and placebo- or active-controlled, recruiting from regions with sufficient background incidence to accumulate events efficiently. Primary endpoints are clinically meaningful and pre-specified—most commonly laboratory-confirmed, symptomatic disease according to a stringent case definition. Secondary endpoints expand this to severe disease, hospitalization, or virologically confirmed infection regardless of symptoms, while exploratory endpoints may include immunobridging substudies to characterize immune markers that might later serve as correlates of protection.

Because these studies are large, operational discipline is paramount: rigorous endpoint adjudication, independent Data and Safety Monitoring Board (DSMB) oversight, risk-based monitoring, and robust randomization processes all contribute to high-quality evidence. While the clinical team focuses on endpoints and safety, CMC readiness remains critical: clinical supplies must meet GMP specifications, and quality documentation should be inspection-ready throughout the trial. For background reading on licensing expectations, the EMA’s vaccine guidance provides aligned regulatory considerations. For practical perspectives on GMP controls and case studies that interface with clinical execution, see PharmaGMP.

Endpoint Strategy and Case Definitions: From Attack Rates to Vaccine Efficacy (VE)

Endpoint clarity is the backbone of Phase III. A typical primary endpoint is “first occurrence of virologically confirmed, symptomatic disease with onset ≥14 days after the final dose in participants seronegative at baseline.” The case definition specifies symptom clusters (e.g., fever ≥38.0 °C plus cough or shortness of breath) and requires laboratory confirmation (PCR or validated antigen assay). An independent, blinded Clinical Endpoint Committee (CEC) adjudicates cases using standardized dossiers to prevent site-to-site variability. Vaccine Efficacy (VE) is calculated as 1−RR, where RR is the risk ratio (cumulative incidence) or hazard ratio (time-to-event). Confidence intervals and multiplicity adjustments are pre-specified; for two primary endpoints (overall and severe disease), alpha may be split or protected with a gatekeeping hierarchy.

Illustrative Endpoint Framework (Define in Protocol/SAP)
| Endpoint | Population | Ascertainment Window | Key Definition Elements |
|---|---|---|---|
| Primary: Symptomatic, PCR-confirmed disease | Per-protocol, seronegative at baseline | ≥14 days post-final dose | Symptom criteria + PCR within 4 days of onset; CEC-adjudicated |
| Key Secondary: Severe disease | Per-protocol | Same as primary | Hypoxia, ICU admission, or death; verified with medical records |
| Exploratory: Any infection | ITT | From Dose 1 | Asymptomatic PCR surveillance; central lab algorithm |

Immunogenicity substudies collect serum at baseline, pre-dose 2, and post-vaccination (e.g., Day 35, Day 180). Even when not primary, analytics must be fit-for-purpose. For example, an ELISA may define LLOQ 0.50 IU/mL, ULOQ 200 IU/mL, and LOD 0.20 IU/mL; neutralization readouts might span 1:10–1:5120, with values <1:10 imputed as 1:5. These parameters and out-of-range handling rules are locked in the SAP to protect interpretability and support any later correlates work.

Design Choices: Individual vs Cluster Randomization, Event-Driven Plans, and Adaptive Elements

Most Phase III vaccine trials use individually randomized, double-blind designs with 1:1 or 2:1 allocation. Cluster randomization (e.g., by community or workplace) can be considered when contamination between participants is unavoidable or when logistics favor site-level allocation; however, it requires larger sample sizes to account for intracluster correlation and more complex analyses. Event-driven designs are common: the study continues until a target number of primary endpoint cases accrue (e.g., 150), which stabilizes VE precision regardless of fluctuating attack rates. Group-sequential boundaries (O’Brien–Fleming or Lan–DeMets) govern interim analyses for efficacy and/or futility, and the DSMB reviews unblinded data under a charter that details decision thresholds.

Sample Event-Driven Scenarios (Illustrative)
| Assumptions | Target VE | Events Needed | Nominal Power |
|---|---|---|---|
| Attack rate 1.5%/month; 1:1 randomization | 60% | 150 | 90% |
| Attack rate 1.0%/month; 2:1 randomization | 50% | 200 | 90% |
| Cluster ICC=0.01; 40 clusters/arm | 60% | 220 | 85% |
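A Schoenfeld-type approximation shows how event targets like these arise. The sketch below tests true VE against an assumed null bound (e.g., VE=30%, a common success criterion that is not stated in the table); the table's figures likely carry additional conservatism for interim looks and variable attack rates:

```python
from math import log, ceil
from statistics import NormalDist

def events_needed(ve_true, ve_null, alpha=0.025, power=0.90, alloc=0.5):
    """Schoenfeld-style approximation of primary endpoint cases needed to
    reject VE <= ve_null when true VE = ve_true; alloc=0.5 for 1:1."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    effect = log(1 - ve_true) - log(1 - ve_null)   # log hazard-ratio distance
    return ceil((za + zb) ** 2 / (alloc * (1 - alloc) * effect ** 2))

# True VE 60% against a 30% null bound, 1:1 allocation, 90% power
d = events_needed(0.60, 0.30)
```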

Blinded crossover after primary efficacy may be preplanned for ethical reasons, but it requires careful estimands to preserve interpretability. Schedules (e.g., Day 0/28) and windows (±2–4 days) should be operationally feasible. Rescue analyses for variable incidence (e.g., regional re-allocation) belong in the Master Statistical Analysis Plan and risk registry, ensuring changes remain auditable and GxP-compliant.

Safety Strategy at Scale: AESIs, Background Rates, and DSMB Oversight

Phase III safety aims to detect uncommon risks and to quantify reactogenicity in real-world–like populations. Solicited local/systemic reactions are captured via ePRO for 7 days after each dose; unsolicited AEs through Day 28; SAEs and adverse events of special interest (AESIs) throughout. AESIs are tailored to platform and pathogen (e.g., anaphylaxis, myocarditis, Guillain–Barré syndrome), and analyses incorporate background incidence benchmarks so observed rates can be contextualized. The DSMB reviews accumulating safety and efficacy data, unblinded as needed under its charter, against pre-agreed boundaries. Stopping/pausing rules are encoded in the protocol and DSMB charter—for example, anaphylaxis (immediate hold), clustering of related Grade 3 systemic events in any site (temporary pause and targeted audit), or unexpected lab signals prompting intensified monitoring.

Illustrative DSMB Safety Triggers (Define in Charter)
| Safety Signal | Threshold | Action |
|---|---|---|
| Anaphylaxis | Any related case | Immediate hold; case-level unblinding as needed |
| Systemic Grade 3 AE | ≥5% within 72 h in any arm | Pause dosing; urgent DSMB review |
| Myocarditis (AESI) | SIR >2.0 vs background | Enhanced cardiac workup; adjudication panel |
| Liver enzymes | ALT/AST ≥5×ULN for >48 h | Cohort pause; expanded labs and causality review |
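The SIR trigger in the charter reduces to simple arithmetic against a background rate. The background rate and person-time below are assumptions chosen purely for illustration:

```python
def signal_ratio(observed, person_years, background_rate_per_100k):
    """Standardized incidence ratio for an AESI versus background incidence."""
    expected = background_rate_per_100k / 1e5 * person_years
    return observed / expected

# Hypothetical: 4 observed myocarditis cases over 12,000 person-years against
# an assumed background of 20 cases per 100,000 person-years
sir = signal_ratio(observed=4, person_years=12_000, background_rate_per_100k=20)
escalate = sir > 2.0   # charter trigger: escalate when SIR exceeds 2.0
```

A formal assessment would add an exact Poisson confidence interval around the SIR before acting on the trigger.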

Safety narratives, MedDRA coding, and reconciliation with source documents are critical for inspection readiness. Signal detection extends beyond rates: temporal clustering, site-specific patterns, and demographic differentials should be explored in blinded fashion first, then unblinded only under DSMB governance. Aligning safety data structures with the SAP and eCRF design reduces queries and shortens CSR timelines.

Operational Excellence: Data Quality, Cold Chain, and Deviation Control

Large vaccine trials succeed or fail on operational discipline. Randomization must be tamper-proof with real-time emergency unblinding capability; IMP accountability needs traceable cold chain logs (continuous temperature monitoring, alarms, and documented excursions). Central labs require validated methods and clear chain of custody. Although clinical teams do not compute cleaning validation limits, it is helpful to cite representative PDE and MACO examples from the CMC file to reassure ethics committees—e.g., PDE 3 mg/day for a residual solvent and MACO surface limit 1.0 µg/25 cm² for a process impurity. Risk-based monitoring (central + targeted on-site) prioritizes high-risk processes (drug accountability, endpoint ascertainment, consent) and uses KRIs (e.g., out-of-window visits, missing PCR samples) to trigger focused actions.

Example Deviation & Corrective Action Log (Dummy)
| Deviation Type | Example | Impact | Immediate Action | CAPA Owner |
|---|---|---|---|---|
| Visit Window | Day 28 +6 days | Per-protocol population risk | Document; sensitivity analysis | Site PI |
| Specimen Handling | PCR swab mislabeled | Endpoint jeopardized | Re-collect if feasible; retrain | Lab Lead |
| Cold Chain | 2–8 °C excursion, 90 min | Potential potency loss | Quarantine lot; QA decision | IMP Pharmacist |

Maintain an audit-ready Trial Master File (TMF) with contemporaneous filing of monitoring reports, DSMB minutes, and CEC adjudication outputs. Predefine estimands for protocol deviations and intercurrent events (e.g., receipt of non-study vaccine), and ensure the SAP describes per-protocol and ITT analyses alongside mitigation for missingness.

Case Study: Event-Driven Phase III for Pathogen Y and the Path to Licensure

Consider a two-dose (Day 0/28) protein-subunit vaccine tested in an event-driven, 1:1 randomized trial across three regions. The primary endpoint is first episode of symptomatic, PCR-confirmed disease ≥14 days after Dose 2. The design targets 160 primary endpoint cases to provide ~90% power to show VE ≥60% when true VE is 65%, using an O’Brien–Fleming boundary for two interim looks at 60 and 110 events. Over 8 months, 172 cases accrue (vaccine=48, control=124), yielding VE=1−(48/124)=61.3% (95% CI 51.0–69.6). Severe disease reduction is 84% (95% CI 65–93). Solicited systemic Grade 3 events occur in 4.8% of vaccinees vs 2.1% of controls; myocarditis AESI is observed at 3 vs 2 cases, with a DSMB-judged SIR consistent with background.
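The VE point estimate above follows directly from the case split under 1:1 randomization with roughly equal person-time. The CI sketch below uses a normal approximation on log(RR) — a simplification that will not exactly reproduce the interval quoted in the text, which likely comes from an exact or conditional method:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def ve_with_ci(cases_vax, cases_ctrl, conf=0.95):
    """VE = 1 - RR with a normal-approximation CI on log(RR); assumes equal
    randomization and person-time across arms."""
    rr = cases_vax / cases_ctrl
    se = sqrt(1 / cases_vax + 1 / cases_ctrl)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    rr_lo, rr_hi = exp(log(rr) - z * se), exp(log(rr) + z * se)
    return 1 - rr, 1 - rr_hi, 1 - rr_lo   # VE point, lower bound, upper bound

# Trial split: 48 cases in the vaccine arm vs 124 in control -> VE ~61.3%
ve, ve_lo, ve_hi = ve_with_ci(48, 124)
```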

Immunobridging substudy (n=1,200) shows ELISA IgG GMT 1,850 (LLOQ 0.50 IU/mL, ULOQ 200 IU/mL, LOD 0.20 IU/mL) and neutralization ID50 responder rate 92% (values <1:10 set to 1:5 per SAP). A Cox model suggests a 45% reduction in hazard per 2× increase in ID50, supporting a potential correlate. With efficacy met and safety acceptable, the dossier proceeds to regulatory review with complete CSR, validated datasets, and lot-to-lot consistency results. For quality and statistical principles relevant to filings, consult ICH guidance in the ICH Quality Guidelines. A robust post-authorization plan (Phase IV) and risk management strategy close the loop from Phase III success to sustainable public health impact.

Published Fri, 25 Jul 2025 — https://www.clinicalstudies.in/handling-dropouts-and-protocol-deviations-in-clinical-trial-analysis/

Handling Dropouts and Protocol Deviations in Clinical Trial Analysis

How to Handle Dropouts and Protocol Deviations in Clinical Trial Analysis

Dropouts and protocol deviations are almost inevitable in clinical trials. Whether due to patient withdrawal, non-adherence, or procedural inconsistencies, these events can distort the trial results if not properly handled. Regulators like the USFDA and EMA expect clear definitions and pre-specified methods for managing these issues in both the protocol and Statistical Analysis Plan (SAP).

This tutorial explains how to classify, analyze, and report dropouts and protocol deviations in a way that preserves data integrity, ensures regulatory compliance, and supports valid conclusions from your clinical trial.

What Are Dropouts and Protocol Deviations?

Dropouts:

Subjects who discontinue participation before completing the study, often due to adverse events, lack of efficacy, consent withdrawal, or personal reasons.

Protocol Deviations:

Any departure from the approved trial protocol, whether intentional or unintentional, including incorrect dosing, visit window violations, or missing assessments.

Proper classification and documentation of both are required in GCP-compliant studies.

Types of Protocol Deviations

  • Major Deviations: Affect the primary endpoint or trial integrity (e.g., incorrect randomization)
  • Minor Deviations: Do not impact key trial outcomes (e.g., visit outside window)
  • Eligibility Deviations: Inclusion of ineligible subjects
  • Treatment Deviations: Non-adherence to investigational product protocol

Major deviations usually exclude subjects from the Per Protocol (PP) analysis set but may remain in the Intent-to-Treat (ITT) set.

Statistical Approaches for Dropouts

1. Intent-to-Treat (ITT) Analysis:

Includes all randomized subjects, regardless of adherence or dropout. This approach preserves randomization benefits and is the gold standard for efficacy trials.

However, missing data due to dropouts must be addressed using methods such as:

  • Mixed Models for Repeated Measures (MMRM)
  • Multiple Imputation (MI)
  • Pattern-Mixture Models
  • Last Observation Carried Forward (LOCF) – discouraged for primary analysis
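To illustrate why LOCF is discouraged, here is a toy, stdlib-only single-imputation comparison on simulated data. It is not a substitute for MMRM or proper multiple imputation (which also propagates between-imputation uncertainty via Rubn's rules); all numbers are simulated:

```python
import random
random.seed(7)

# Toy longitudinal data: (Week 12, Week 24) per subject; outcomes keep
# improving on treatment, so Week 24 sits ~2 units above Week 12 on average
complete = []
for _ in range(500):
    w12 = random.gauss(10, 2)
    complete.append((w12, w12 + random.gauss(2, 1)))

# Impose ~20% dropout at Week 24 (completely at random, for simplicity)
observed = [(w12, w24 if random.random() > 0.20 else None)
            for w12, w24 in complete]

# LOCF: carry Week 12 forward — systematically misses the later improvement
locf = [w24 if w24 is not None else w12 for w12, w24 in observed]

# Single regression imputation: predict Week 24 from Week 12 via completers
pairs = [(x, y) for x, y in observed if y is not None]
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
slope = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
imputed = [y if y is not None else my + slope * (x - mx) for x, y in observed]

true_mean = sum(y for _, y in complete) / len(complete)
locf_mean = sum(locf) / len(locf)
imp_mean = sum(imputed) / len(imputed)
```

With these settings the LOCF mean is biased low by roughly the dropout fraction times the on-treatment improvement, while the regression-based imputation tracks the true Week 24 mean closely.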

2. Per Protocol (PP) Analysis:

Includes only subjects who adhered strictly to the protocol. This provides a clearer picture of treatment efficacy under ideal conditions.

It is often used as a supportive analysis to ITT and must be predefined in the SAP and CSR.

Handling Protocol Deviations in Analysis

Deviations should be categorized and analyzed for their impact. Best practices include:

  • Pre-specify major vs minor deviations in the SAP
  • Perform sensitivity analysis excluding subjects with major deviations
  • Justify inclusion/exclusion of deviators in each analysis set
  • Report all deviations in the CSR by type and frequency

Major deviations that affect endpoints (e.g., missing primary assessments) should typically exclude those subjects from PP analysis.
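Deriving the analysis sets from a deviation log can be as simple as boolean filters over subject-level flags. The records below are hypothetical:

```python
# Hypothetical subject-level deviation log (illustrative flags only)
subjects = [
    {"id": 101, "randomized": True, "major_deviation": False, "missing_primary": False},
    {"id": 102, "randomized": True, "major_deviation": True,  "missing_primary": False},
    {"id": 103, "randomized": True, "major_deviation": False, "missing_primary": True},
    {"id": 104, "randomized": True, "major_deviation": False, "missing_primary": False},
]

# ITT: every randomized subject, regardless of deviations
itt = [s for s in subjects if s["randomized"]]

# PP: exclude major deviations and subjects missing the primary assessment
pp = [s for s in itt if not s["major_deviation"] and not s["missing_primary"]]
```

Here subjects 102 and 103 stay in the ITT set but are excluded from PP, mirroring the rule stated above.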

Estimand Framework and Intercurrent Events

The ICH E9(R1) guideline encourages defining “intercurrent events,” which include dropouts and deviations. These are addressed through different strategies like:

  • Treatment Policy: Analyze all randomized subjects regardless of intercurrent events
  • Hypothetical: Model the outcome as if the event had not occurred
  • Composite: Combine event with outcome into a single endpoint
  • Principal Stratum: Restrict analysis to subgroup unaffected by the event

Choosing the right estimand and handling approach is a regulatory expectation and should be pre-specified consistently across the protocol, SAP, and trial registration.

Regulatory Expectations for Dropouts and Deviations

USFDA: Emphasizes transparency in dropout handling and discourages LOCF as a primary method. Requires dropout reasons to be detailed in submissions.

EMA: Requires analysis of protocol adherence and impact on efficacy interpretation. Supports multiple sensitivity analyses.

CDSCO: Encourages sponsor accountability in tracking and preventing protocol violations. Dropout management is critical during audits.

Best Practices for Managing Dropouts and Deviations

  • Include dropout prevention strategies in the protocol
  • Use eCRFs to track deviation type, reason, and impact
  • Train sites on protocol adherence and data quality
  • Implement real-time deviation monitoring dashboards
  • Review deviation reports during interim data reviews

Example Scenario

In a Phase III diabetes trial, 10% of patients dropped out before the Week 24 endpoint. ITT analysis used MMRM to handle missing data, assuming MAR. A per-protocol analysis excluded 6% with major protocol deviations. Sensitivity analyses using pattern-mixture models supported the robustness of findings, as treatment effect remained statistically significant under all assumptions. The FDA approved the submission based on the transparent and well-planned analysis of dropouts and deviations.

Conclusion

Handling dropouts and protocol deviations effectively is essential for the credibility and regulatory acceptance of your clinical trial. Start with proper planning and classification, follow with appropriate statistical handling, and ensure transparent documentation. Using robust ITT and PP analyses, backed by sensitivity analyses and regulatory guidance, helps ensure that your results are reliable, unbiased, and ready for global submission.
