
Country Mix Optimization: How to Add Sites That Deliver Predictable Gains (Not Just More Complexity)

Outcome-first site expansion: when adding countries lifts velocity—and when it only adds noise

The real question: will a new country raise weekly randomizations with confidence?

“Add more sites” is a reflex; “add the right country” is a strategy. Country mix optimization means selecting additional geographies that increase predictable weekly randomizations without blowing up governance, cost, or data quality. The proof is simple: does the expansion shrink time-to-interim, stabilize variance, and survive inspection drills? If not, it’s just operational theater. This article gives a defensible pathway—grounded in regulatory expectations and inspection habits—to identify countries that reliably convert cohort access into randomizations, and to de-risk the first 90 days after activation.

Declare one compliance backbone, reuse it across all geographies

Publish a single, portable control statement: US/EU/UK electronic records and signatures conform to 21 CFR Part 11 and map cleanly to Annex 11; oversight uses ICH E6(R3) terms; safety interfaces acknowledge ICH E2B(R3); US transparency aligns to ClinicalTrials.gov, while EU postings flow via EU-CTR in CTIS; privacy follows HIPAA and GDPR/UK GDPR; all systems preserve a searchable audit trail; operational anomalies route through CAPA; program risk is tracked with QTLs and governed using RBM. Document that activation artifacts and country decisions are filed to the TMF/eTMF; decentralized and patient-tech elements (e.g., eCOA, DCT) are readiness-checked; operational timepoints are compatible with CDISC nomenclature and downstream SDTM/ADaM derivations; statistical timing respects non-inferiority or superiority assumptions. Anchor once with compact in-line links to FDA, EMA, MHRA, ICH, WHO, PMDA, and TGA; then stop explaining—start executing.

Define the outcome targets before you pick countries

Set three outcomes: (1) portfolio randomization velocity (weekly band with 80% confidence); (2) variance control—country/site contribution volatility and its effect on milestone credibility; (3) startup-to-first-patient-in latency. Candidate countries must improve at least two of the three and not degrade the third. Put this scoring in your governance deck so decisions are transparent and reproducible.

Regulatory mapping: US-first framing with EU/UK portability and quick global wrappers

US (FDA) angle—line-of-sight from claim to artifact

In US inspections, assessors test whether your claims (e.g., “Country X will add 8/month”) resolve to retrievable evidence: epidemiology and EHR cohort pulls, feasibility answers with named stewards, diagnostics and pharmacy capacity, startup timelines, and prior trial conversions. They sample a country’s first activation and walk backward through ethics approvals, training, greenlight communications, and the first randomizations, timing each step. Have drill-through from portfolio tiles to site listings to TMF artifacts, and keep definitions consistent across countries to reduce cognitive load during review.

EU/UK (EMA/MHRA) angle—same truth, different wrappers

EU/UK reviews focus on capacity & capability, governance cadence, data minimization, and alignment to EU-CTR/CTIS or UK registry narratives. The underlying evidence is the same: approvals → capacity → trained people → pharmacy/diagnostics readiness → greenlight → predictable enrollment. If your US-first definitions are ICH-consistent and your privacy notes are explicit, you’ll port with minor localization.

| Dimension | US (FDA) | EU/UK (EMA/MHRA) |
|---|---|---|
| Electronic records | Part 11 validation summaries | Annex 11 alignment; supplier qualification |
| Transparency | ClinicalTrials.gov consistency | EU-CTR status via CTIS; UK registry |
| Privacy | HIPAA “minimum necessary” | GDPR/UK GDPR minimization & residency |
| Inspection lens | Event→evidence trace and retrieval speed | Capacity, capability, governance tempo |
| Selection narrative | Claim mapped to artifacts | Capacity & governance mapped to artifacts |

Process & evidence: the Country Mix Scorecard that survives inspection

Build a light, transparent scoring model

Score each candidate country on five domains with weights you can explain in two minutes: (A) Patient Access & Epidemiology (30%); (B) Startup Latency & Governance (20%); (C) Diagnostics & Pharmacy Capacity (15%); (D) Cost, Contracts & Incentives (15%); (E) Data Quality & Prior Performance (20%). Each domain is composed of 3–5 questions with explicit rules (e.g., “median ethics-to-greenlight ≤ 30 business days = 90+ points”). Require an artifact for any answer that moves a domain >10 points. Publish 80% confidence bounds for the expected monthly randomizations and a “credibility” modifier that down-weights countries with stale or weak evidence.
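As a minimal sketch, the composite could be computed as below. The domain weights follow the text; the example inputs and the credibility discount value are illustrative assumptions, not calibrated figures.

```python
# Minimal country-scorecard sketch. Domain weights follow the text above;
# the sample inputs and the credibility rule are illustrative assumptions.

WEIGHTS = {
    "patient_access": 0.30,  # (A) Patient Access & Epidemiology
    "startup": 0.20,         # (B) Startup Latency & Governance
    "capacity": 0.15,        # (C) Diagnostics & Pharmacy Capacity
    "cost": 0.15,            # (D) Cost, Contracts & Incentives
    "data_quality": 0.20,    # (E) Data Quality & Prior Performance
}

def composite_score(domain_scores: dict, credibility: float = 1.0) -> float:
    """Weighted 0-100 composite, down-weighted by a credibility modifier
    (e.g., 0.9 for stale or weak evidence)."""
    base = sum(WEIGHTS[d] * domain_scores[d] for d in WEIGHTS)
    return base * credibility

# Hypothetical candidate country: strong access, stale evidence.
scores = {"patient_access": 88, "startup": 72, "capacity": 80,
          "cost": 65, "data_quality": 70}
print(round(composite_score(scores, credibility=0.9), 1))  # 68.9
```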

Instrument startup and velocity the same way everywhere

Define clocks once: approval → greenlight; greenlight → first-patient-in; consent → eligibility decision; eligibility → randomization. Use the same SLA thresholds and trending displays across countries. If a country needs a special rule (e.g., centralized pharmacy), describe it in a two-line footnote on the dashboard to prevent definitional drift.
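A small illustration of computing the four clocks from shared event timestamps, so every country’s numbers come from the same definitions; the event names and SLA thresholds here are assumptions made for the sketch.

```python
from datetime import datetime

# The four clocks defined above, computed from shared event timestamps.
# Event names and SLA thresholds (in days) are illustrative assumptions.
CLOCKS = [
    ("approval", "greenlight"),
    ("greenlight", "first_patient_in"),
    ("consent", "eligibility_decision"),
    ("eligibility_decision", "randomization"),
]
SLA_DAYS = {"approval->greenlight": 10,
            "greenlight->first_patient_in": 45,
            "consent->eligibility_decision": 14,
            "eligibility_decision->randomization": 7}

def clock_report(events: dict) -> dict:
    """Elapsed days per clock, flagged red when over SLA."""
    out = {}
    for start, end in CLOCKS:
        days = (events[end] - events[start]).days
        key = f"{start}->{end}"
        out[key] = {"days": days, "red": days > SLA_DAYS[key]}
    return out

events = {"approval": datetime(2025, 1, 6),
          "greenlight": datetime(2025, 1, 20),
          "first_patient_in": datetime(2025, 3, 3),
          "consent": datetime(2025, 3, 3),
          "eligibility_decision": datetime(2025, 3, 12),
          "randomization": datetime(2025, 3, 17)}
print(clock_report(events))  # approval->greenlight runs red (14 > 10)
```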

  1. Publish weighted scoring rules with domain questions and artifacts required.
  2. Produce 12-month cohort counts filtered by inclusion/exclusion; name the data steward and date the pull.
  3. Collect startup medians (ethics, contracts, pharmacy mapping) and variance (IQR, 90th percentile).
  4. Show diagnostics capacity (blocks/week), utilization, and read turnaround medians.
  5. Document prior trial conversions (pre-screen→consent→randomization) for similar burden studies.
  6. Quantify cost per randomized subject (budget + operational overhead) with sensitivity ranges.
  7. Publish an 80% confidence band for monthly randomizations and expected contribution to milestones.
  8. Route red thresholds and model misses through governance and file the action/effectiveness loop.
  9. Drill from portfolio tiles → listings → TMF artifact locations in one click; save run parameters.
  10. Rehearse “10 artifacts in 10 minutes” for each newly added country and file stopwatch evidence.

Decision Matrix: which countries to add, defer, or replace—under uncertainty

| Scenario | Option | When to choose | Proof required | Risk if wrong |
|---|---|---|---|---|
| High cohort access, slow startup | Add with “startup sprint” & phased targets | Ethics/contract medians improving; strong diagnostics | Recent medians, IQR, pharmacy readiness plan | Spend before velocity; variance spikes |
| Moderate cohort, excellent governance | Use as stabilizer, not volume engine | Predictable clocks; low variance history | 3-trial conversion history; governance cadence | Underwhelming volume; over-index on stability |
| Great answers, weak evidence | Conditional add; credibility discount | Artifacts promised within 2 weeks | Named stewards; artifact list with dates | Optimism bias; milestone slip |
| High cost per randomization | Defer; invest in diagnostics at existing sites | When capacity buys more velocity per $ elsewhere | Cost curve vs velocity; intervention model | Overpay for low lift; budget burn |
| Country underperforms for 2 cycles | Replace or backfill; keep 1 “anchor” site | When variance threatens milestones | Miss analysis; before/after evidence plan | Churn; onboarding tax with minimal gain |

File decisions so reviewers can follow the thread

Maintain a “Country Mix Decision Log”: question → option → rationale → evidence anchors (dashboards, listings, epidemiology, contracts, diagnostics capacity) → owner → due date → effectiveness result. Cross-link from the portfolio view and file to Sponsor Quality in the TMF so auditors can walk the logic without meetings.

QC / Evidence Pack: exactly what to file where (so the expansion is inspection-ready)

  • Scoring model with weights, rules, artifact requirements, and example calculations.
  • Country epidemiology & cohort counts (12 months), with data steward sign-off and query parameters.
  • Startup medians and variance (ethics, contracting, pharmacy mapping, system onboarding) with sources.
  • Diagnostics/pharmacy capacity: blocks/week, read turnaround, accountability templates, readiness memos.
  • Prior performance: conversion ladders and variance from comparable trials (burden/benefit matched).
  • Cost per randomized subject and sensitivity ranges; budget approvals and assumptions.
  • Governance minutes showing red thresholds, decisions, actions, and effectiveness checks.
  • Portfolio drill-through: tiles → listings → artifact locations; run logs with parameter files.

Vendor oversight & privacy: align contracts to data minimization and export rules

Qualify recruiters, diagnostics partners, couriers, and translation vendors. Limit access via least privilege, define residency constraints where applicable, and keep data-flow diagrams current. For the US, include business associate agreements (BAAs) consistent with HIPAA principles; for EU/UK, emphasize minimization and transfer safeguards. Store interface descriptions and SLAs alongside country packets so the audit trail is complete.

Templates that reviewers appreciate: paste-ready language, KPIs, and footnotes

Paste-ready tokens for your decision deck

Outcome token: “Country X expected to add 6–8 randomizations/month (80% band 5–9) with startup median 30 business days; variance stabilizer for Milestone M2.”
Evidence token: “EHR cohort 1,240 in 12 months under I/E filters; diagnostics blocks 10/week; read median 72 hours; pharmacy readiness in 10 days; three trials with pre-screen→randomization conversion 21% (IQR 18–24%).”
Risk token: “Primary risk is contracting latency due to public procurement; plan: template framework + early legal intake; confidence unaffected.”

Footnotes that preempt most audit debates

Under each chart or listing, state: timekeeper system (CTMS/eSource), timestamp granularity (UTC + site local), exclusions (anonymous inquiries, duplicates), and the change-control ID when a definition evolves. These notes keep the conversation on risk and action, not semantics.

Modeling predictable gains: simple math that tells you where to invest next

Convert country attributes into velocity and variance

Use a compact model: randomizations per week = capacity × conversion probability, where capacity is bounded by coordinator hours, clinic sessions, and diagnostic blocks. Overlay variance from historical conversion ladders and startup latency to produce an 80% band. Countries that shrink the band and shift it upward are high priority—even if their average volume is only moderate—because they stabilize milestone credibility.
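A minimal sketch of that band calculation, treating conversion uncertainty as a Beta posterior fitted to a historical conversion ladder; the capacity figure and conversion counts are illustrative inputs, not benchmarks.

```python
import random

def weekly_band(capacity_per_week: int, conversions: int, attempts: int,
                n_sims: int = 10_000, band: float = 0.80) -> tuple:
    """80% band for randomizations/week = capacity x conversion probability,
    with conversion uncertainty drawn from historical counts (Beta posterior)."""
    draws = sorted(
        capacity_per_week * random.betavariate(1 + conversions,
                                               1 + attempts - conversions)
        for _ in range(n_sims)
    )
    lo, hi = (1 - band) / 2, 1 - (1 - band) / 2
    return draws[int(lo * n_sims)], draws[int(hi * n_sims)]

# Hypothetical country: 12 screening slots/week; 21 of 100 historical
# pre-screens converted to randomization.
print(weekly_band(12, conversions=21, attempts=100))
```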

Buy down the biggest constraint first

For many programs, diagnostics is the binding constraint; for others, it’s consent behavior or scheduling. Test “what if” levers: add CRN blocks, pre-authorize diagnostics, or expand evening clinics. Compare lift (randomizations/week) per $1,000 and per calendar week. Add the country whose lever buys the largest lift with the smallest variance shock and whose evidence package is inspection-ready.
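A toy comparison of such levers, with every cost and lift figure a hypothetical input rather than a benchmark:

```python
# Compare "what if" levers by lift per $1,000 and by time to go live.
# All figures below are hypothetical inputs.
levers = [
    {"name": "add CRN blocks",            "lift_per_week": 1.5, "cost": 8_000,  "weeks_to_live": 3},
    {"name": "pre-authorize diagnostics", "lift_per_week": 2.0, "cost": 15_000, "weeks_to_live": 6},
    {"name": "expand evening clinics",    "lift_per_week": 1.2, "cost": 5_000,  "weeks_to_live": 4},
]
for lv in levers:
    lv["lift_per_1k"] = lv["lift_per_week"] / (lv["cost"] / 1_000)

# Rank by efficiency; variance shock and evidence readiness still gate the pick.
for lv in sorted(levers, key=lambda x: x["lift_per_1k"], reverse=True):
    print(f'{lv["name"]}: {lv["lift_per_1k"]:.2f} rand/wk per $1k, '
          f'live in {lv["weeks_to_live"]} wk')
```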

Guardrails for stats and operations

Mirror operational targets to statistical needs. If the design assumes tight visit windows or non-inferiority margins, favor countries with shorter eligibility lead times and reliable scheduling. Ensure naming tokens for visits align to analysis windows so downstream derivations remain clean—thus avoiding rework during data cuts.

Cadence & governance: keep the country mix honest every week

A 30-minute loop that scales

Run three boards weekly: (1) Velocity board—weekly randomizations with 80% bands by country; (2) Startup board—greenlight and latency medians with 90th percentiles; (3) Risk board—KRIs/QTLs with actions. Red tiles trigger named interventions (sprint legal, open diagnostics blocks, coordinator surge). By Friday, file a one-page effectiveness note with before/after mini-charts and close the loop.

Reproducibility & retrieval drills prove control

Enable drill-through from portfolio tiles to listings to TMF artifacts; save run parameters and environment hashes so reruns match. Rehearse “10 artifacts in 10 minutes” for each newly added country within the first month. When you can perform the drill on demand, your country mix isn’t just smart—it’s auditable.
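One lightweight way to make reruns matchable is to fingerprint each run’s parameters and environment at execution time, as in this sketch; the field names are illustrative.

```python
import hashlib, json, platform, sys

def run_fingerprint(params: dict) -> dict:
    """Save run parameters plus an environment hash so a rerun can be
    matched to the original. Fields shown are illustrative assumptions."""
    env = {"python": sys.version.split()[0], "platform": platform.platform()}
    blob = json.dumps({"params": params, "env": env}, sort_keys=True)
    return {"params": params, "env": env,
            "hash": hashlib.sha256(blob.encode()).hexdigest()}

fp = run_fingerprint({"country": "X", "cutoff": "2025-10-31", "band": 0.80})
print(fp["hash"][:12])  # file alongside the dashboard export
```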

FAQs

What matters more: average volume or variance?

Both, but variance often decides milestone credibility. A country delivering moderate but stable volume can be more valuable than a high-mean/high-variance one that causes commitment misses. Use an 80% band to compare countries fairly—then choose the one that lifts velocity while shrinking uncertainty.

How many countries should a mid-size program carry?

Enough to hedge variance and regulatory risk without multiplying startup tax. Many programs succeed with 4–6 well-profiled countries: two volume engines, one or two stabilizers, and one or two specialty contributors (e.g., rare diagnostic capabilities). Add more only if the model shows net gains after overhead.

What if a country’s evidence looks great but artifacts are missing?

Apply a credibility discount. Add conditionally with a two-week artifact deadline and publish the discount in the scorecard. If artifacts arrive on time, restore weight; if not, downgrade or replace. This prevents optimism bias from creeping into milestone promises.

How do contract and privacy rules affect selection?

Materially. Long public procurement cycles or complex data residency can erase cohort advantages. Capture realistic contracting medians, include privacy guardrails, and model their impact on latency and cost per randomized subject before you commit.

How quickly should we see lift after adding a country?

Expect measurable impact within two cycles of activation if diagnostics and pharmacy were prepared in parallel. If lift doesn’t appear, revisit assumptions: is capacity real, are referrals flowing, are scheduling blocks protected, and are there unmodeled payer or governance frictions?

What’s the cleanest way to keep global definitions aligned?

Publish a one-page definitions sheet and pin it to every dashboard: event names, clocks, exclusions, timekeeper systems, and change-control IDs. When definitions evolve, version the sheet and file it with run logs so inspectors can reconcile numbers across months and countries.
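As a minimal illustration, the definitions sheet could also be kept as versioned structured data pinned next to the dashboards; every value below is hypothetical.

```python
# Versioned definitions sheet as structured data (all values hypothetical).
DEFINITIONS = {
    "version": "2.1",
    "change_control_id": "CC-0042",
    "timekeeper": {"events": "CTMS", "visits": "eSource"},
    "timestamps": "UTC + site local",
    "clocks": ["approval->greenlight", "greenlight->first_patient_in",
               "consent->eligibility_decision",
               "eligibility_decision->randomization"],
    "exclusions": ["anonymous inquiries", "duplicates"],
}
```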


Feasibility Questions That Actually Predict Enrollment: A Defensible Scoring Sheet for US/UK/EU Programs

Why feasibility must predict enrollment—not just describe the site—and how to make it inspection-ready

From “profile of a site” to “probability of randomization”

Traditional questionnaires catalog capabilities—beds, scanners, prior trials—but rarely answer the business-critical question: how many randomized participants by when? A predictive feasibility framework flips the script. You ask targeted questions tied to patient flow, pre-screen attrition, scheduling capacity, and local bottlenecks; you score those answers with transparent rules; and you output an enrollment forecast with a confidence range and a contingency plan. This approach builds credibility with study leadership and withstands sponsor and regulator scrutiny because each number is traceable to verifiable artifacts in the TMF/eTMF.

Declare the compliance backbone once—then reuse it everywhere

Ensure your instrument is born audit-ready. Electronic processes align to 21 CFR Part 11 and port neatly to Annex 11; oversight uses ICH E6(R3) terminology; safety signaling references ICH E2B(R3); registry narratives remain consistent with ClinicalTrials.gov in the US and map to EU-CTR postings through CTIS; privacy counts and EHR-based feasibility respect HIPAA and GDPR. All workflows emit a searchable audit trail and route anomalies through CAPA. Anchor your stance with compact in-line links to the FDA, the EMA, the UK’s MHRA, the ICH, the WHO, Japan’s PMDA, and Australia’s TGA so reviewers don’t need a separate references list.

Outcome metrics everyone buys

Define three outcome targets up front: (1) a 13-week enrollment forecast with 80% confidence bounds; (2) a Site Conversion Ratio (pre-screen → consent → randomization) with expected screen failure rate by key inclusion/exclusion; and (3) a startup latency estimate from greenlight to first-patient-in. These become the backbone of your decision meetings, weekly operations, and inspection narrative.

Regulatory mapping—US first, with EU/UK portability baked in

US (FDA) angle—how assessors actually probe feasibility

US reviewers sampling under FDA BIMO look for line-of-sight from a claim (“we can recruit 4/month”) to evidence (EHR cohort counts, referral agreements, past trial conversion, coordinator capacity). They test contemporaneity (when was the data pulled?), attribution (who ran the query?), and retrievability (how quickly can you open the listing and relevant approvals). Your questionnaire and scoring notes should therefore reference data sources explicitly (EHR cube, tumor board logs, screening calendars) and point to where those artifacts live in TMF.

EU/UK (EMA/MHRA) angle—same science, different wrappers

EU/UK review emphasizes transparency, site capacity and capability, and data minimization. If your instrument uses ICH language, locks down personal data, and provides jurisdiction-appropriate wording, it ports with minor wrapper changes. Include quick-switch text for NHS/NIHR contexts (site governance timing, clinic templates) and emphasize public register alignment.

| Dimension | US (FDA) | EU/UK (EMA/MHRA) |
|---|---|---|
| Electronic records | Part 11 validation summary attached | Annex 11 alignment; supplier qualification |
| Transparency | Consistency with ClinicalTrials.gov | EU-CTR/CTIS statuses; UK registry notes |
| Privacy | HIPAA “minimum necessary” in counts | GDPR/UK GDPR data minimization |
| Sampling focus | Event→evidence trace on claims | Capacity, capability, governance proof |
| Operational lens | Pre-screen → consent → randomization | As left, plus governance timelines |

The question domains that truly predict enrollment (and what bad answers look like)

Patient flow & local epidemiology

Ask for counts with filters, not vague “we see many patients.” Example prompts: eligible patients seen last 12 months; new-patient inflow/month; proportion with stable contact details; proportion likely insured for required procedures; competing trials in the same indication; typical time from referral to consent. Red flags: counts reported without time window or filters; “data not available”; copy-pasted figures identical to other sites.

Pre-screen & screening operations

Who runs pre-screens? What tools? What hours? What’s the coordinator:PI ratio on screening days? Ask for scheduling constraints (MRI, infusion chair, endoscopy) and average lead times. Red flags: “PI will screen” (capacity bottleneck); single coordinator across multiple trials; no protected clinic time.

Consent behavior and screen failures

Request historical conversion for similar burden/benefit profiles and ask for top three consent barriers (travel, placebo fears, work conflicts). Ask for mitigation levers the site actually controls (transport vouchers, evening clinics). Red flags: “We do not track” or blanket “80% will consent.”

Startup latency signals

Contracting/IRB turnaround medians, pharmacy mapping lead time, device/software onboarding speed, and past first-patient-in latencies. Red flags: “varies” without numbers; pharmacy “as soon as possible.”

Data and systems readiness

Probe whether the site has exportable screening logs, audit-ready calendars, and role-based access to study systems. Ask whether its CTMS can exchange site-level forecasts and actuals programmatically. Red flags: manual spreadsheets only; no controlled screening log schema.

  1. Require 12-month EHR cohort counts filtered by key criteria (with data steward sign-off).
  2. Collect conversion history for similar trials (pre-screen → consent → randomization).
  3. Capture coordinator capacity (hours/week) and protected clinic slots for screening.
  4. Quantify diagnostic/procedure wait times that gate eligibility timelines.
  5. Document startup latencies (contracts, IRB/REC, pharmacy mapping) with medians/IQR.
  6. Identify top 3 local consent barriers and site-controlled mitigations.
  7. Confirm availability of exportable screening logs with unique IDs.
  8. Request formal competing-trial list within 30 miles and site strategy to differentiate.
  9. Obtain written referral pathways (internal, network, community partners).
  10. Record who owns forecasting (role) and the weekly cadence for updates.

The scoring sheet: weights, confidence, and a defendable math story

Build a weighted model you can explain in two minutes

Keep it simple and transparent. Assign weights to five domains: Patient Flow (30%), Screening Capacity (20%), Startup Latency (15%), Competing Trials (15%), Consent Behavior (20%). Convert site answers to normalized sub-scores (0–100) with clearly published rules. Example: if coordinator hours/week ≥16 and there are two protected screening half-days, Screening Capacity earns ≥85.
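A minimal sketch of such a model, using the weights above and one example rule; the exact thresholds are illustrative instances of “clearly published rules,” not prescriptions.

```python
# Feasibility scoring sketch. Weights follow the text; the rule thresholds
# below are illustrative examples of published, explainable rules.
WEIGHTS = {"patient_flow": 0.30, "screening_capacity": 0.20,
           "startup_latency": 0.15, "competing_trials": 0.15,
           "consent_behavior": 0.20}

def screening_capacity_score(coord_hours: float,
                             protected_half_days: int) -> int:
    """Example rule from the text: >=16 h/wk plus two protected
    screening half-days earns >=85. The fallback formula is illustrative."""
    if coord_hours >= 16 and protected_half_days >= 2:
        return 85 + min(15, int(coord_hours - 16))
    return int(min(84, coord_hours * 4 + protected_half_days * 10))

def composite(sub_scores: dict) -> float:
    """Weighted 0-100 composite across the five domains."""
    return sum(WEIGHTS[d] * sub_scores[d] for d in WEIGHTS)
```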

From score to forecast with confidence

Translate composite score to an initial monthly forecast using historical analogs, then apply a confidence factor based on data quality (stale EHR pulls, missing logs, unverified referrals). Publish 80% bounds, not a point fantasy. Low data quality widens the interval and downgrades site priority even if the mean looks attractive.

Prevent “gaming” and enforce evidence

For any claim that materially affects the score, require an artifact (EHR cohort screenshot, scheduling report). Add a “credibility” modifier that can subtract up to 10 points for poor evidence. Publish these rules so sites know the bar and the study team can defend down-selection.
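A compact sketch combining the score-to-forecast translation with the credibility modifier; the analog rate and interval widths are assumptions chosen for illustration, not fitted values.

```python
def site_forecast(composite: float, credibility_penalty: float,
                  analog_rate: float = 0.05) -> dict:
    """Monthly forecast sketch: composite (0-100) scaled by a historical
    analog rate (randomizations/month per score point). Poor evidence
    subtracts up to 10 points and widens the 80% interval. The analog
    rate and width formula are illustrative assumptions."""
    score = max(0.0, composite - credibility_penalty)
    mean = score * analog_rate
    width = 0.15 + 0.02 * credibility_penalty  # weaker evidence, wider band
    return {"mean": round(mean, 1),
            "band80": (round(mean * (1 - width), 1),
                       round(mean * (1 + width), 1))}

print(site_forecast(composite=78, credibility_penalty=6))
# {'mean': 3.6, 'band80': (2.6, 4.6)}
```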

| Scenario | Option | When to choose | Proof required | Risk if wrong |
|---|---|---|---|---|
| High patient flow, low coordinator capacity | Conditional selection + surge staffing | Coordinator hours can double in <4 weeks | Staffing plan; clinic slot proof | Leads pile up; poor subject experience |
| Strong consent rates, long diagnostics wait | Add mobile/partner diagnostics | External scheduling MSA feasible | Vendor quotes; governance approval | Attrition before eligibility confirmed |
| Great answers, poor evidence | Downgrade score; revisit in 2 weeks | Artifacts promised but not filed | | Over-commitment; missed FPI |
| Moderate score, critical geography | Keep as back-up; open later | Contingency value outweighs cost | | Unused site cost; spread thin |

Process & evidence: make it rerunnable, traceable, and inspection-proof

Wire data sources into operations

Automate EHR cohort pulls where possible and capture steward attestations with time windows. Store screening logs in a controlled schema with unique IDs and role-based access; route changes through change control. Tie forecasting into CTMS so weekly updates flow without spreadsheets, and enable drill-through from portfolio dashboards to the underlying site listings.

Define oversight hooks (KRIs & actions)

Track KRIs such as consent drop-off, screen failure drivers, and visit lead-time. Use a small set of thresholds with unambiguous actions: if forecast accuracy misses by >30% two cycles in a row, shift budget to better-performing sites or escalate mitigations. Escalation outcomes should feed program risk governance and your QTLs view.
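The escalation trigger above reduces to a small check; the history figures below are illustrative.

```python
def forecast_miss_flag(history: list, threshold: float = 0.30,
                       cycles: int = 2) -> bool:
    """True when forecast accuracy misses by more than the threshold for
    two consecutive cycles, the escalation trigger named above.
    History entries are (forecast, actual) tuples; data are illustrative."""
    misses = [abs(f - a) / f > threshold for f, a in history]
    return len(misses) >= cycles and all(misses[-cycles:])

print(forecast_miss_flag([(8, 7.5), (8, 5), (8, 4)]))  # True: escalate
```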

QC / Evidence Pack: what to file where

  • RACI, risk register, KRI/QTLs dashboard for feasibility and enrollment.
  • System validation (Part 11 / Annex 11), audit trail samples, SOP references.
  • Safety interfaces (EHR alerts, adverse event routing) noted per ICH E2B(R3).
  • Forecast lineage and traceability (source listings → composite score → portfolio view) using CDISC-aligned terms and example SDTM visit naming where relevant.
  • CAPA records for systemic data quality or forecasting issues with effectiveness checks.

The inspection-ready feasibility questionnaire: paste-ready, high-signal items

Patient flow & eligibility filters (quantitative)

Provide 12-month counts of patients meeting inclusion A/B/C and exclusion X/Y/Z; new-patient inflow per month; percent with confirmed contact info; payer mix relevant to required procedures; typical time from diagnosis to specialist appointment; competing trials list and overlap rate.

Screening engine (operational)

Coordinator hours/week; protected clinic half-days for screening; coordinator:PI ratio; diagnostic wait times for eligibility; availability of evening/weekend clinics; access to mobile diagnostics.

Consent behavior (behavioral)

Historical conversion rates by similar burden trials; top 3 consent barriers; mitigations site controls (transport, parking, tele-consent); languages supported; community outreach partnerships.

Startup latency (timeline)

Medians (IQR) for contracts, IRB/REC, pharmacy mapping, system onboarding; last three trials’ first-patient-in latencies; typical bottlenecks and fixes that worked.

Data & systems (traceability)

Screening log system; export capability; role-based access; evidence storage; reconciliation cadence to CTMS; ability to provide weekly forecast deltas with reasons.

Modern realities: decentralized, digital, and human—baked into the score

Decentralized and patient-tech readiness

If your design includes remote activities (DCT) or patient-reported outcomes (eCOA), weight a readiness sub-score: identity assurance, device logistics, broadband coverage, staff training for remote support, and cultural/linguistic suitability of materials. Ask sites how many remote visits/week they can support and what their help-desk coverage looks like.

Equity and community factors

Include indicators that proxy for reaching under-represented populations: local partnerships, clinic hours outside 9–5, availability of interpreters, and transportation solutions. These questions both improve accrual and strengthen your public-facing commitments.

Budget and incentive realism

Ask whether proposed per-patient budgets cover coordinator time, diagnostics, and retention touchpoints. Undercooked budgets lead to quiet disengagement; your scoring sheet should penalize this risk unless the sponsor is willing to adjust.

Turn answers into forecasts—and manage reality every week

The weekly loop

Require sites to submit forecast/actuals deltas with reasons and next-week plan. Consolidate at program level and use simple visuals: funnel (pre-screen→consent→randomization), capacity bar (coordinator hours), and a risk list keyed to KRIs. Keep narrative short; actions matter more than prose.

Re-weight quickly when the field changes

When a competing trial opens or a diagnostic line clears, adjust weights for that domain and publish the new composite quickly. The math is simple; the discipline is keeping a single source of truth and filing the rationale.

Close the loop to quality & safety

Feasibility that ignores safety or data quality is self-defeating. If rapid growth at a site correlates with consent deviations or adverse event under-reporting, throttle back and invest in training. Your governance minutes should show these cause-and-effect checks.

FAQs

How many questions should a predictive feasibility form include?

Keep it to 25–35 high-signal items across five domains (Patient Flow, Screening Capacity, Startup Latency, Competing Trials, Consent Behavior). Each question should either drive the score or populate a decision table—if it does neither, cut it.

How do we validate the scoring sheet?

Back-fit the model to 2–3 completed studies in a similar indication and compare predicted vs actual monthly randomizations. Adjust weights where residuals are persistent. Re-validate after protocol or process changes that affect conversion or capacity.
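A back-fit check can be as simple as inspecting residuals on completed studies, as in this sketch with illustrative figures.

```python
# Back-fit sketch: predicted vs actual monthly randomizations on
# completed studies. All figures below are illustrative.
predicted = [6.0, 4.5, 7.2, 5.0]
actual    = [5.2, 4.8, 5.9, 4.6]

residuals = [a - p for p, a in zip(predicted, actual)]
bias = sum(residuals) / len(residuals)
mae = sum(abs(r) for r in residuals) / len(residuals)
print(f"mean bias {bias:+.2f}/month, MAE {mae:.2f}")
# A persistent negative bias suggests over-weighted domains:
# adjust weights where residuals persist, then re-validate.
```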

What evidence must accompany high-impact claims?

Any claim that moves a sub-score >10 points should have a supporting artifact: EHR cohort screenshots with steward signature and date window, scheduling reports, signed referral MOUs, or historical screening logs. File these where your team and inspectors can drill through from the scorecard.

How do we include remote elements fairly?

Create a DCT/ePRO readiness sub-score that tests identity, logistics, staff support, and connectivity. Sites that can support remote visits reliably should score higher because they can convert more interested candidates and maintain retention.

What’s a defensible way to present the forecast?

Provide an 80% confidence interval around the monthly point estimate and clearly state assumptions (referral volume, consent rate, diagnostic capacity). Publish weekly deltas with short reasons and show actions taken when reality diverges from plan.

How do we prevent optimistic responses without proof?

Use a credibility modifier and publish it. If evidence is missing or stale, subtract up to 10 points, widen the confidence interval, and decrease priority in site selection. Re-score promptly when evidence arrives.
