
Passive vs Active Surveillance Strategies for Post-Marketing Vaccine Safety

Choosing Between Passive and Active Surveillance in Post-Marketing Vaccine Safety

Passive vs Active Surveillance—What They Are and When to Use Each

Passive surveillance collects Individual Case Safety Reports (ICSRs) from clinicians, patients, and manufacturers via national systems (e.g., VAERS/EudraVigilance analogs). It excels at early pattern recognition because it listens broadly: new Preferred Terms, atypical narratives, or demographic clustering can flag emerging issues quickly. Strengths include speed of intake, rich free-text, and relatively low cost. Limitations are well known: no direct denominators, susceptibility to under- or stimulated reporting, duplicate submissions during media spikes, and variable case quality. In passive streams, you will rely on disproportionality statistics (PRR, ROR, EBGM) to identify unusual vaccine–event reporting patterns that merit clinical review.
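As a minimal sketch of how these disproportionality screens operate (standard 2×2 formulas; the counts and variable names are illustrative rather than drawn from any particular PV system, and EBGM is omitted because it requires the full empirical-Bayes MGPS shrinkage model):

```python
import math

def disproportionality(a, b, c, d):
    """Screen one vaccine-event pair from a 2x2 reporting table.

    a: reports with the vaccine of interest AND the event
    b: reports with the vaccine, any other event
    c: reports with all other vaccines AND the event
    d: reports with all other vaccines, any other event
    """
    # Proportional Reporting Ratio: the event's share of the vaccine's
    # reports relative to its share of everyone else's reports.
    prr = (a / (a + b)) / (c / (c + d))
    # Reporting Odds Ratio with a 95% CI built on the log scale.
    ror = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = (ror * math.exp(-1.96 * se), ror * math.exp(1.96 * se))
    # Chi-square (1 df, no continuity correction) for the 2x2 table.
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return prr, ror, ci, chi2

# Dummy counts for one vaccine-event pair.
a = 20
prr, ror, ci, chi2 = disproportionality(a, b=9_980, c=40, d=59_960)
hit = prr >= 2 and chi2 >= 4 and a >= 3   # a common screening rule
print(f"PRR={prr:.2f} ROR={ror:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}) "
      f"chi2={chi2:.1f} screen_hit={hit}")
```

The same screen would run for every vaccine–event pair in the coded ICSR database, with hits queued for clinical triage rather than treated as confirmed signals.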

Active surveillance uses linked healthcare data (EHR/claims/registries, sometimes laboratory feeds) to construct cohorts with person-time denominators. It supports observed-versus-expected (O/E) checks, rapid cycle analysis (RCA) with MaxSPRT boundaries, and confirmatory designs such as self-controlled case series (SCCS) or matched cohorts. Strengths include stable denominators, control of confounding, and ability to estimate incidence rates and relative risks over calendar time. Limitations include access/agreements, data harmonization, lag, and the need for robust governance and validation packs (Part 11/Annex 11 controls, audit trails, and change control). In practice, sponsors rarely choose one or the other: passive detects, active quantifies, and targeted follow-up adjudicates. To align terminology and SOP structure with regulators, many teams adapt practical PV templates from PharmaRegulatory.in, and mirror public expectations summarized by the U.S. FDA.

Comparative Design Considerations: Data, Methods, and Compliance

Surveillance strategy is as much about design and documentation as it is about databases. Passive streams must prove clean inputs: MedDRA version control, explicit Preferred Term selection rules, ICSR de-duplication criteria (e.g., age/sex/onset/lot match), and translation QA for non-English narratives. Active streams must show traceable ETL pipelines, linkage logic, and privacy safeguards. Both must demonstrate ALCOA (attributable, legible, contemporaneous, original, accurate) and computerized system controls: role-based access, validated audit trails, and time synchronization. Pre-declare decision thresholds in your signal management SOP: what PRR/ROR/EBGM constitutes a “screen hit,” what O/E ratio prompts escalation, which risk windows apply by AESI, and when SCCS/cohort studies begin. Link these rules to your Risk Management Plan (RMP) and Statistical Analysis Plan (SAP) so clinical, safety, and biostatistics use the same vocabulary when evidence evolves.
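One way to keep such pre-declared rules unambiguous is to version-control them as a machine-readable configuration next to the SOP. A hypothetical sketch (keys and values are illustrative, mirroring the dummy cut-offs used in this article):

```python
# Hypothetical signal-management thresholds, version-controlled with the SOP.
SIGNAL_THRESHOLDS = {
    "passive_screen": {
        "prr_min": 2.0,          # PRR >= 2 ...
        "chi2_min": 4.0,         # ... with chi-square >= 4 ...
        "case_count_min": 3,     # ... and at least 3 cases
        "eb05_min": 2.0,         # EBGM 5th-percentile lower bound > 2
    },
    "active_escalation": {
        "oe_ratio_min": 3.0,     # O/E above 3 in a key stratum -> escalate
    },
    "risk_windows_days": {       # risk windows by AESI (illustrative)
        "myocarditis": [(0, 7), (8, 21)],
        "tts": [(0, 28)],
    },
    "confirmatory_design": "SCCS",  # triggered when the RCA boundary is crossed
}
```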

Passive vs Active Surveillance—Illustrative Comparison (Dummy)
| Topic | Passive (ICSRs) | Active (EHR/Claims/Registries) |
|---|---|---|
| Primary purpose | Early detection & narrative patterns | Rate estimation & confirmation |
| Key statistics | PRR / ROR / EBGM screens | O/E, RCA (MaxSPRT), SCCS/cohort |
| Data strengths | Broad intake, low latency | Denominators, covariates, follow-up |
| Weaknesses | No denominators, duplicates, bias | Access, harmonization, lag |
| Compliance focus | MedDRA rules, E2B(R3), audit trail | ETL validation, linkage, Annex 11 |

Operationally, success comes from hand-offs. Write a responsibility matrix: safety scientists review screen hits weekly; epidemiology runs O/E; biostatistics maintains RCA/SCCS code; clinical adjudicates with Brighton criteria; QA reviews audit trails; regulatory owns labels and communications. Keep this map in the PSMF and TMF, with links to datasets and code hashes, so an inspector can trace the path from intake to decision without guesswork.

Analytics That Bridge Both: From PRR to O/E, SCCS, and RCA (with Numbers)

Pre-declare screens and thresholds to avoid hindsight bias. In passive data, a common rule is PRR ≥2 with χ² ≥4 and n≥3; ROR with 95% CI excluding 1; EBGM lower bound (e.g., EB05) >2. Combine these with clinical triage: age/sex clustering, time-to-onset after dose, and mechanistic plausibility. In active data, compute O/E using stratified background rates and biologically plausible windows. Example (dummy): Week W, 1,200,000 second doses to males 12–29; background myocarditis 2.1/100,000 person-years → expected in 7 days ≈ 1,200,000 × (7/365) × (2.1/100,000) ≈ 0.48. Observed 6 adjudicated cases → O/E ≈ 12.5 → escalate. Run RCA weekly with MaxSPRT; if the boundary is crossed, initiate SCCS. A typical SCCS result might show IRR 4.6 (95% CI 2.9–7.1) for Days 0–7, IRR 1.8 (1.1–3.0) for Days 8–21.
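A short worked sketch of the O/E arithmetic above, together with the Poisson log-likelihood ratio that MaxSPRT monitors (the critical value below is a placeholder; in practice it is taken from Kulldorff's published tables for the chosen alpha and surveillance length):

```python
import math

# Dummy inputs from the example above.
doses = 1_200_000                   # second doses to males 12-29 in week W
background = 2.1 / 100_000          # myocarditis cases per person-year
window_days = 7

expected = doses * (window_days / 365) * background
observed = 6                        # adjudicated cases
print(f"expected={expected:.2f} O/E={observed / expected:.1f}")  # ~0.48, ~12.5

# Poisson MaxSPRT statistic: LLR = O*ln(O/E) - (O - E) when O > E, else 0.
llr = observed * math.log(observed / expected) - (observed - expected)
CRITICAL_VALUE = 3.0                # placeholder; look up from MaxSPRT tables
if llr > CRITICAL_VALUE:
    print(f"LLR={llr:.2f} crosses the boundary -> initiate SCCS")
```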

Where laboratory markers define cases, declare method capability so inclusion criteria are transparent: a high-sensitivity troponin I LOD of 1.2 ng/L and LOQ of 3.8 ng/L (illustrative) for myocarditis adjudication; platelet factor 4 (PF4) ELISA performance for thrombotic syndromes. Keep quality context close to safety: a representative PDE of 3 mg/day for a residual solvent and a cleaning MACO of 1.0–1.2 µg/25 cm² reassure reviewers that non-biological explanations (contamination, carryover) are unlikely. For a plain-language overview of signal expectations and pharmacovigilance vocabulary, the WHO library provides accessible references at who.int/publications.

Designing a Hybrid Surveillance Program: A Step-by-Step Playbook

Step 1 — Define AESIs and windows. Pre-register adverse events of special interest (AESIs) by platform (e.g., myocarditis for mRNA, TTS for vector vaccines) with Brighton definitions and risk windows (0–7, 8–21 days, etc.).
Step 2 — Map data flows. Draw a single diagram linking ICSRs → coding/deduplication → screen queue; and registries/EHR/labs → ETL → O/E/RCA/SCCS pipelines.
Step 3 — Write thresholds. Document PRR/ROR/EBGM cut-offs, O/E escalation rules, RCA boundary settings, and SCCS triggers.
Step 4 — Validate systems. For passive, validate ICSR intake (E2B(R3)), MedDRA versioning, translation QA, and audit trails. For active, validate linkage logic, ETL checkpoints, time sync, and back-ups under Part 11/Annex 11; containerize analytics and lock code hashes.
Step 5 — Staff governance. Run a weekly multi-disciplinary signal review (safety, clinical, epidemiology, biostatistics, quality, regulatory) with minutes, owners, and due dates.
Step 6 — Pre-write communications. Draft label/FAQ templates so confirmed signals can be communicated quickly, with denominators and in plain language.

Roles and Handoffs (Dummy)
| Owner | Primary Tasks | Outputs |
|---|---|---|
| Safety Scientist | Screen PRR/ROR/EBGM; triage | Screen log; clinical packets |
| Epidemiologist | O/E, background rates | O/E worksheets; sensitivity analyses |
| Biostatistics | RCA, SCCS/cohort | Boundaries; IRR/HR tables |
| Clinical Panel | Adjudication (Brighton) | Levels 1–3 decisions |
| Quality (QA/CSV) | Audit trails; validation | Reports; CAPA |
| Regulatory | Label/RMP updates | eCTD docs; DHPC drafts |

Keep a one-page crosswalk in the TMF: SOP → dataset → code → output → decision → label. If a screen hit escalates, an inspector should be able to start at the decision memo and walk back to the raw ICSR and the database cut that produced the O/E.

Case Study (Hypothetical): Turning Noisy Signals into Decisions

Week 1–2 (Passive): 20 myocarditis ICSRs in males 12–29 after dose 2; PRR 3.0 (χ² 9.2), EB05 2.2. Narratives cite chest pain and elevated troponin (above assay LOQ 3.8 ng/L).
Week 3 (Active O/E): 1.2 M doses administered; background 2.1/100,000 person-years; expected 0.48; observed 6 adjudicated Brighton Level 1–2 cases → O/E 12.5.
Week 4 (RCA): MaxSPRT boundary crossed in Days 0–7; geographies consistent.
Week 5–6 (SCCS): IRR 4.6 (2.9–7.1) for Days 0–7; IRR 1.8 (1.1–3.0) for Days 8–21.
Decision: add myocarditis to important identified risks; update label/HCP guidance with absolute risks (“~12 per million second doses in young males within 7 days”).
Quality check: lots within shelf life; cold chain in range; representative PDE 3 mg/day and MACO 1.0–1.2 µg/25 cm² unchanged—reducing concern for non-biological drivers.

Decision Snapshot (Dummy)
| Criterion | Threshold | Result | Action |
|---|---|---|---|
| PRR/χ² | ≥2 / ≥4; n≥3 | 3.0 / 9.2; n=20 | Escalate to O/E |
| O/E ratio | >3 in key strata | 12.5 | Initiate RCA |
| RCA boundary | Crossed | Yes (wk 4) | Run SCCS |
| SCCS IRR | LB >1.5 | 2.9 | Confirm signal |

The full package—ICSRs, coding rules, O/E worksheets, RCA configs, SCCS code/outputs, adjudication minutes, and quality context—goes into the TMF and supports rapid, defensible labeling.

KPIs, Governance, and Inspection Readiness: Keeping the System Alive

Measure both surveillance performance and decision speed. Surveillance KPIs: % valid ICSRs triaged ≤24 h, screen hits reviewed per SOP cadence, median days from screen to O/E, RCA boundary checks on schedule, % adjudications completed within SLA. Quality KPIs: audit-trail review completion, ETL error rate, linkage success, reproducibility checks (code hash matches), and completeness scores for ICSRs. Decision KPIs: time to label update, time to DHPC release, and % of decisions backed by confirmatory analytics.

Illustrative Monthly Dashboard (Dummy)
| KPI | Target | Current | Status |
|---|---|---|---|
| Valid ICSR triage ≤24 h | ≥95% | 96.8% | On track |
| Screen hits reviewed weekly | 100% | 100% | Met |
| Median days Screen→O/E | ≤7 | 5 | On track |
| Audit-trail review completed | Monthly | Yes | Met |
| Reproducibility hash match | 100% | 100% | Met |

Inspection readiness is narrative clarity plus evidence. Keep a “read me first” note in the TMF that maps SOPs → data cuts → code → outputs → decisions. Store all public communications (FAQs, HCP letters) with the analytics that support them. For method calibration, run periodic negative-control screens so your system demonstrates specificity, not just sensitivity.


Designing Long-Term Vaccine Effectiveness Monitoring Programs

Long-Term Vaccine Effectiveness Monitoring Programs: A Step-by-Step, Inspection-Ready Guide

What “Long-Term Effectiveness” Means—and Why It Matters for Regulators and Patients

Long-term vaccine effectiveness (VE) is the real-world reduction in disease risk among vaccinated people compared with comparable unvaccinated (or differently vaccinated) people over extended periods of months to years. It differs from efficacy in randomized trials because exposure, variants, behaviors, and booster uptake all evolve. Sponsors and public-health programs rely on VE monitoring to answer questions randomized trials cannot: How quickly does protection wane? Which subgroups (e.g., ≥65 years, immunocompromised) lose protection first? Do boosters restore protection to prior levels, and for how long? These answers inform labeling, booster recommendations, Health Care Provider (HCP) guidance, and risk–benefit summaries in periodic safety and risk-management reports.

Regulators expect VE programs that are methodologically sound, documented, and auditable. That means: (1) clear protocols and SAPs describing designs (cohort, case–control, test-negative), endpoints (laboratory-confirmed disease, hospitalization), and planned time-since-vaccination analyses; (2) robust data linkage across immunization registries, electronic health records (EHR), labs, and vital statistics; (3) bias controls (propensity scores, calendar-time adjustment, negative controls); and (4) transparent data integrity with ALCOA principles, audit trails, and reproducible code. When outcomes are lab-confirmed, document analytical performance so adjudicators and inspectors trust that “cases” are truly cases. For example, an RT-PCR assay may operate at an LOD of ~25 copies/mL with a reporting LOQ of ~50 copies/mL, or an anti-antigen IgG ELISA might have an LOD of 3 BAU/mL and an LOQ of 10 BAU/mL—dummy values shown for illustration. Quality context also matters: cite a representative PDE (e.g., 3 mg/day for a residual solvent) and cleaning-validation MACO (e.g., 1.0–1.2 µg/25 cm²) to demonstrate that manufacturing hygiene is stable, so changes in VE are not confounded by product quality drift. For practical validation patterns (URS → IQ/OQ/PQ → live monitoring) that often support these programs, see pharmaValidation.in; for high-level public expectations on post-authorization evidence and surveillance, consult the European Medicines Agency.

Core Designs for Long-Term VE: Cohort, Case–Control, and Test-Negative (When to Use Which)

Cohort designs follow vaccinated and comparison groups over time, estimating hazard ratios (HR) or incidence rate ratios (IRR) via Cox or Poisson models. They are intuitive and flexible for time-varying covariates (ageing, comorbidities) and for modeling time since vaccination with splines or grouped intervals (e.g., 0–3, 3–6, 6–9, 9–12 months). VE is typically computed as (1−HR)×100% or (1−IRR)×100%. Example (dummy): adjusted HR for hospitalization 0.35 at 0–3 months → VE 65%; HR 0.58 at 6–9 months → VE 42% (waning).
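The HR-to-VE conversion is mechanical and worth encoding once (a minimal sketch; the hazard ratios are the dummy values above and would come from a fitted Cox or Poisson model in practice):

```python
def ve_from_hr(hr, hr_lo=None, hr_hi=None):
    """Vaccine effectiveness as (1 - HR) x 100%. The CI flips:
    the upper HR limit gives the lower VE limit and vice versa."""
    ve = (1 - hr) * 100
    if hr_lo is not None and hr_hi is not None:
        return ve, (1 - hr_hi) * 100, (1 - hr_lo) * 100
    return ve

print(ve_from_hr(0.35))              # 65.0 -> VE 65% at 0-3 months
print(ve_from_hr(0.58))              # 42.0 -> VE 42% at 6-9 months (waning)
print(ve_from_hr(0.32, 0.29, 0.36))  # VE 68% with 95% CI 64-71%
```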

Case–control designs are efficient when outcomes are rare (e.g., ICU admissions). Controls are sampled from the source population; vaccination odds are compared via conditional logistic regression. Careful density sampling and matching on calendar time help align variant waves and public-health measures.

Test-negative designs (TND) restrict to people seeking testing for compatible symptoms; cases test positive and controls test negative. TND helps control healthcare-seeking bias, especially for respiratory pathogens. However, it assumes testing behavior and exposure risk are similar among cases and test-negative controls; violations (e.g., targeted testing of high-risk groups) can bias VE estimates. Always present sensitivity analyses: alternate symptom criteria, excluding occupational screens, and calendar-time strata.
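In analytic terms, a basic TND reduces to a logistic regression of test result on vaccination status among symptomatic testers. A sketch using statsmodels (the column names and input file are hypothetical; VE is 1 minus the adjusted odds ratio):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis-ready dataset of symptomatic people who were tested:
# test_positive (0/1), vaccinated (0/1), age_band, calendar_week.
df = pd.read_csv("tnd_analysis_set.csv")   # hypothetical file

# Adjust for calendar time and age so variant waves and age mix
# do not masquerade as vaccine effects.
model = smf.logit(
    "test_positive ~ vaccinated + C(age_band) + C(calendar_week)", data=df
).fit()

or_vax = np.exp(model.params["vaccinated"])
print(f"adjusted OR={or_vax:.2f} -> TND VE={(1 - or_vax) * 100:.1f}%")
```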

Across all designs, specify variant periods (by sequencing or proxy), previous infection status, and booster exposure as time-varying. Pre-declare subgroup analyses (age bands, comorbidity, immunocompromise) and outcome severity tiers (any symptomatic disease, ED visit, hospitalization, ICU, death). If laboratory confirmation defines outcomes, list analytical sensitivity (e.g., PCR LOD/LOQ) and any antibody thresholds used for case adjudication. Keep clinical relevance central: 10-point VE swings at high baseline risk (hospitalization in ≥65 years) may drive labeling changes; smaller swings in low-risk groups might not.

Data Sources, Linkage, and Governance: From Registries to Analysis-Ready Datasets

Long-term VE depends on clean, linked data: immunization registries for exposure dates and product lots; EHR/claims for comorbidities, encounters, and outcomes; labs for PCR/antigen/serology; vital statistics for deaths. Establish privacy-preserving linkage (hashed keys or trusted third-party) and write a Data Management Plan that describes extract–transform–load (ETL), quality checks (duplicate vaccinations, impossible intervals), and audit trails. Use common data models where possible; version-lock code (Git) and containerize analyses to ensure reproducibility. Calendar-time and region must be explicit so variant waves and policy changes (masking mandates, testing access) can be adjusted for.
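For the privacy-preserving linkage step, one common pattern is a keyed hash over normalized identifiers (a minimal sketch assuming an HMAC-SHA-256 scheme; in production the salt would be held by the trusted third party and the normalization rules documented in the Data Management Plan):

```python
import hashlib
import hmac

LINKAGE_SALT = b"held-by-trusted-third-party"   # never stored with the data

def linkage_key(national_id: str, dob: str) -> str:
    """Deterministic pseudonymous key from normalized identifiers."""
    normalized = f"{national_id.strip().upper()}|{dob.strip()}"
    return hmac.new(LINKAGE_SALT, normalized.encode(), hashlib.sha256).hexdigest()

# The registry and the EHR extract each compute the key locally, then share
# only the hash, so records join without exposing identity.
registry_key = linkage_key(" ab-123456 ", "1958-04-02")
ehr_key = linkage_key("AB-123456", "1958-04-02")
assert registry_key == ehr_key
```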

Governance makes the system credible. Set a cadence—monthly Safety/Effectiveness Board reviewing VE dashboards, bias diagnostics, and planned SAP updates. Keep ALCOA visible: (1) attributable—who ran what code, (2) legible—clear variable dictionaries, (3) contemporaneous—timestamped extracts, (4) original—immutable raw snapshots with checksums, and (5) accurate—validation logs for joins and de-duplication. File everything in the Trial Master File (TMF) and cross-reference your Risk Management Plan (RMP) so that safety signals and effectiveness waning are interpreted together.

Illustrative VE Monitoring Plan (Dummy)
| Component | Source | Frequency | Key Checks |
|---|---|---|---|
| Exposure (vax/booster) | Registry | Weekly | Duplicate doses; lot validity |
| Outcomes | EHR/Claims | Weekly | Case definition; admit/discharge coherence |
| Labs | PCR/Antigen | Daily | Specimen date vs onset; LOD/LOQ flags |
| Mortality | Vital statistics | Monthly | Linkage success; excess-deaths scan |

Finally, include a short “quality context” appendix: representative PDE and MACO examples and a pointer to manufacturing/handling change control. If product quality remained in-spec, reviewers can focus on biological waning, variant escape, or behavior, not contamination or degradation.

Modeling Waning and Booster Effects: Time-Since-Vaccination Done Right

Waning is a time-varying phenomenon, so treat time since vaccination (TSV) as a primary exposure. In Cox models, implement TSV with restricted cubic splines or pre-specified intervals (e.g., 0–3, 3–6, 6–9, 9–12 months). Include an interaction between TSV and age/comorbidity to allow different waning patterns across subgroups. For boosters, use a grace period (e.g., 7–14 days post-dose) before counting booster protection, and model boosting as a new time-varying exposure layered atop primary series. Adjust for calendar-time via strata or splines to absorb variant waves and public-health changes. Present absolute risks, not just relative VE: a 10-point VE drop against hospitalization could translate into thousands of additional admissions when incidence is high.
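A sketch of how pre-specified TSV intervals become a time-varying exposure, using lifelines' CoxTimeVaryingFitter on a start-stop dataset (column names and the input file are hypothetical; rows are assumed to be pre-split at interval boundaries so each dummy is constant within a row):

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical counting-process dataset: one row per person per TSV interval,
# with calendar strata and covariates already attached.
df = pd.read_parquet("ve_start_stop.parquet")   # hypothetical file

# Pre-specified TSV intervals coded as dummies (0-3 months is the reference).
for lo, hi in [(3, 6), (6, 9), (9, 12)]:
    df[f"tsv_{lo}_{hi}m"] = (
        (df["months_since_vax"] >= lo) & (df["months_since_vax"] < hi)
    ).astype(int)

ctv = CoxTimeVaryingFitter()
ctv.fit(
    df[["id", "start", "stop", "event",
        "tsv_3_6m", "tsv_6_9m", "tsv_9_12m", "age", "comorbidity_score"]],
    id_col="id", start_col="start", stop_col="stop", event_col="event",
)
ctv.print_summary()   # exp(coef) per interval -> VE = (1 - HR) x 100%
```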

Example (dummy): A national cohort of 2.5 M adults shows adjusted hazard ratios for hospitalization of 0.32 (VE 68%) at 0–3 months, 0.48 (VE 52%) at 3–6 months, and 0.64 (VE 36%) at 6–9 months. A booster lowers HR to 0.28 (VE 72%) in the first 3 months post-booster, then stabilizes at 0.40 (VE 60%) by months 3–6. Stratification by ≥65 years shows faster waning (VE 30% at 6–9 months). Sensitivity analyses excluding prior infection or redefining outcomes as ICU/death confirm patterns. Communicate clearly which outcomes are modeled (symptomatic disease vs hospitalization vs ICU/death) and ensure estimates are accompanied by CIs and absolute risks per 100,000 person-months.

Dummy VE by Time Since Vaccination and Booster
| Interval | Adjusted HR | VE (1−HR) | 95% CI |
|---|---|---|---|
| 0–3 mo (primary) | 0.32 | 68% | 64–71% |
| 3–6 mo (primary) | 0.48 | 52% | 47–56% |
| 6–9 mo (primary) | 0.64 | 36% | 30–42% |
| 0–3 mo (booster) | 0.28 | 72% | 68–75% |
| 3–6 mo (booster) | 0.40 | 60% | 55–64% |

Bias and Sensitivity Analyses: Proving Robustness When Assumptions Are Fragile

Effectiveness estimates are only as good as their assumptions. Common threats include confounding by indication (early adopters differ from late adopters), differential outcome ascertainment (vaccinated may test more or less), prior infection (partial immunity), and immortal time bias (misclassifying pre-vaccination time). Pre-specify controls: propensity-score weighting/matching; negative control outcomes (conditions unrelated to vaccination) to detect residual bias; negative control exposures (e.g., future vaccination date) to guard against immortal-time artifacts; falsification tests (e.g., VE in pre-rollout period should be ~0%). In test-negative designs, vary symptom definitions, exclude occupational screens, and ensure similar testing access across groups. Report missing-data handling, discordance between administrative and clinical dates, and how you treated partial primary series.
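These checks can be automated as assertions that run after every data cut (a sketch; the outcome names and the 10-point tolerance are illustrative, not prescriptive):

```python
# Hypothetical falsification checks run after each data cut. Each input is an
# estimate with a 95% CI produced by the same modeling pipeline as the main VE.

def check_negative_controls(estimates: dict) -> list:
    """Flag residual bias when 'null' analyses drift away from no effect."""
    flags = []
    # Negative control outcome: e.g., ankle sprain should show OR ~ 1.
    lo, hi = estimates["nco_ankle_sprain_ci"]
    if not (lo <= 1.0 <= hi):
        flags.append("NCO excludes 1.0 -> suspect residual confounding")
    # Negative control exposure: a future vaccination date should be null.
    lo, hi = estimates["nce_future_vax_ci"]
    if not (lo <= 1.0 <= hi):
        flags.append("NCE excludes 1.0 -> suspect immortal-time artifact")
    # Falsification test: VE estimated in the pre-rollout period should be ~0%.
    if abs(estimates["ve_pre_rollout_pct"]) > 10:
        flags.append("Pre-rollout VE far from 0% -> review ascertainment")
    return flags

print(check_negative_controls({
    "nco_ankle_sprain_ci": (0.93, 1.08),
    "nce_future_vax_ci": (0.95, 1.10),
    "ve_pre_rollout_pct": 2.0,
}))   # [] -> no bias flags this cycle
```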

Link bias diagnostics to decisions. For example, if negative controls show residual confounding in young adults, prioritize PS matching over weighting in that stratum; if hospitalization VE is robust but ED-visit VE is sensitive to testing policies, emphasize the former in labeling or HCP materials. Keep a reproducibility package—scripts, parameter files, data-dictionary extracts—with checksums in the TMF. Wherever labs define outcomes, reiterate analytical transparency (e.g., PCR LOD / LOQ) and maintain chain-of-custody logs. Maintain a one-page “quality context” memo with representative PDE and MACO examples so reviewers discount non-biological confounders.

Operations, KPIs, and Inspection Readiness: Turning Methods into a Living Program

Build dashboards that update monthly with clear denominators and confidence bands. Core KPIs include: cohort coverage (% of population linked to registry + EHR), median lag from data cut to dashboard, proportion with prior-infection data captured, VE by TSV (primary and booster), subgroup VE (≥65 years, immunocompromised), and sensitivity-analysis completion rate. Pair these with quality KPIs: ETL error rate, linkage success, audit-trail review completion, and reproducibility checks (hashes match across re-runs). Governance minutes should record decisions (e.g., “Shift public-facing emphasis to hospitalization VE during variant X; plan booster study in ≥65 years”).

Case study (hypothetical). A country-wide VE program shows hospitalization VE falling from 68% at 0–3 months to 36% at 6–9 months in adults ≥65 years during a Delta-variant wave. A booster restores VE to 70% for 0–3 months post-booster. Bias checks: a negative control outcome (ankle sprain) gives OR ≈ 1.00; a negative control exposure (future vaccination date) shows no effect; TND sensitivity analysis with stricter symptom criteria yields VE within 3 points. Labs confirm case definitions with PCR LOD ~25 copies/mL and LOQ ~50 copies/mL (illustrative). Manufacturing and cleaning controls are documented (representative PDE 3 mg/day, MACO 1.0–1.2 µg/25 cm²), ruling out quality confounders. The program recommends boosters for ≥65 years, updates HCP materials, and files an eCTD supplement with methods, outputs, and code hashes.

Templates and Deliverables: What to File, Share, and Automate

For each cycle, produce: (1) a protocol/SAP addendum if methods change; (2) a data-cut memo (date, sources, versions); (3) an analysis report with TSV curves, subgroup tables, and sensitivity results; (4) a reproducibility package (container/Docker hash, code, parameter files); (5) an executive summary with plain-language statements for policy makers and HCPs. Automate ETL quality checks, PS balance diagnostics, and table shells to reduce manual error. Keep a crosswalk that maps SOPs → datasets → code → outputs → decisions so inspectors can follow the thread end-to-end.
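The reproducibility checks can be as simple as a manifest of SHA-256 digests recorded at sign-off and re-verified on every re-run (a minimal sketch; the paths are hypothetical):

```python
import hashlib
import json
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large outputs hash cheaply."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Record digests at sign-off, then verify on every re-run or inspection.
manifest = {p.name: sha256_file(p) for p in Path("outputs").glob("*.csv")}
Path("outputs/manifest.json").write_text(json.dumps(manifest, indent=2))
```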
