Clinical Research Made Simple – https://www.clinicalstudies.in – Trusted Resource for Clinical Trials, Protocols & Progress
Published Sat, 01 Nov 2025
Feasibility Questions That Predict Enrollment (Scoring Sheet)

Feasibility Questions That Actually Predict Enrollment: A Defensible Scoring Sheet for US/UK/EU Programs

Why feasibility must predict enrollment—not just describe the site—and how to make it inspection-ready

From “profile of a site” to “probability of randomization”

Traditional questionnaires catalog capabilities—beds, scanners, prior trials—but rarely answer the business-critical question: how many randomized participants by when? A predictive feasibility framework flips the script. You ask targeted questions tied to patient flow, pre-screen attrition, scheduling capacity, and local bottlenecks; you score those answers with transparent rules; and you output an enrollment forecast with a confidence range and a contingency plan. This approach builds credibility with study leadership and withstands sponsor and regulator scrutiny because each number is traceable to verifiable artifacts in the TMF/eTMF.

Declare the compliance backbone once—then reuse it everywhere

Ensure your instrument is born audit-ready. Electronic processes align to 21 CFR Part 11 and port neatly to Annex 11; oversight uses ICH E6(R3) terminology; safety signaling references ICH E2B(R3); registry narratives remain consistent with ClinicalTrials.gov in the US and map to EU-CTR postings through CTIS; privacy counts and EHR-based feasibility respect HIPAA and GDPR. All workflows emit a searchable audit trail and route anomalies through CAPA. Anchor your stance with compact in-line links to the FDA, the EMA, the UK’s MHRA, the ICH, the WHO, Japan’s PMDA, and Australia’s TGA so reviewers don’t need a separate references list.

Outcome metrics everyone can buy into

Define three outcome targets up front: (1) a 13-week enrollment forecast with 80% confidence bounds; (2) a Site Conversion Ratio (pre-screen → consent → randomization) with expected screen failure rate by key inclusion/exclusion; and (3) a startup latency estimate from greenlight to first-patient-in. These become the backbone of your decision meetings, weekly operations, and inspection narrative.
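The Site Conversion Ratio above can be computed directly from funnel counts. A minimal sketch, assuming illustrative field names (pre_screened, consented, randomized) that a real system would map to its screening-log schema:

```python
# Sketch of the Site Conversion Ratio described above; field names are
# illustrative, not from a specific CTMS.

def conversion_funnel(pre_screened: int, consented: int, randomized: int) -> dict:
    """Return stage-to-stage conversion rates and the overall Site Conversion Ratio."""
    return {
        "prescreen_to_consent": consented / pre_screened,
        "consent_to_randomization": randomized / consented,
        "site_conversion_ratio": randomized / pre_screened,
        # one common convention: failures between consent and randomization
        "screen_failure_rate": 1 - randomized / consented,
    }

print(conversion_funnel(pre_screened=200, consented=80, randomized=48))
```

Tracking the two intermediate rates separately matters: a weak pre-screen-to-consent rate and a weak consent-to-randomization rate call for different mitigations.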

Regulatory mapping—US first, with EU/UK portability baked in

US (FDA) angle—how assessors actually probe feasibility

US reviewers sampling under FDA BIMO look for line-of-sight from a claim (“we can recruit 4/month”) to evidence (EHR cohort counts, referral agreements, past trial conversion, coordinator capacity). They test contemporaneity (when was the data pulled?), attribution (who ran the query?), and retrievability (how quickly can you open the listing and relevant approvals). Your questionnaire and scoring notes should therefore reference data sources explicitly (EHR cube, tumor board logs, screening calendars) and point to where those artifacts live in TMF.

EU/UK (EMA/MHRA) angle—same science, different wrappers

EU/UK review emphasizes transparency, site capacity and capability, and data minimization. If your instrument uses ICH language, locks down personal data, and provides jurisdiction-appropriate wording, it ports with minor wrapper changes. Include quick-switch text for NHS/NIHR contexts (site governance timing, clinic templates) and emphasize public register alignment.

Dimension | US (FDA) | EU/UK (EMA/MHRA)
Electronic records | Part 11 validation summary attached | Annex 11 alignment; supplier qualification
Transparency | Consistency with ClinicalTrials.gov | EU-CTR/CTIS statuses; UK registry notes
Privacy | HIPAA "minimum necessary" in counts | GDPR/UK GDPR data minimization
Sampling focus | Event→evidence trace on claims | Capacity, capability, governance proof
Operational lens | Pre-screen → consent → randomization | Same, plus governance timelines

The question domains that truly predict enrollment (and what bad answers look like)

Patient flow & local epidemiology

Ask for counts with filters, not vague “we see many patients.” Example prompts: eligible patients seen last 12 months; new-patient inflow/month; proportion with stable contact details; proportion likely insured for required procedures; competing trials in the same indication; typical time from referral to consent. Red flags: counts reported without time window or filters; “data not available”; copy-pasted figures identical to other sites.

Pre-screen & screening operations

Who runs pre-screens? What tools? What hours? What’s the coordinator:PI ratio on screening days? Ask for scheduling constraints (MRI, infusion chair, endoscopy) and average lead times. Red flags: “PI will screen” (capacity bottleneck); single coordinator across multiple trials; no protected clinic time.

Consent behavior and screen failures

Request historical conversion for similar burden/benefit profiles and ask for top three consent barriers (travel, placebo fears, work conflicts). Ask for mitigation levers the site actually controls (transport vouchers, evening clinics). Red flags: “We do not track” or blanket “80% will consent.”

Startup latency signals

Contracting/IRB turnaround medians, pharmacy mapping lead time, device/software onboarding speed, and past first-patient-in latencies. Red flags: “varies” without numbers; pharmacy “as soon as possible.”

Data and systems readiness

Probe whether the site has exportable screening logs, audit-ready calendars, and role-based access to study systems. Ask whether their CTMS can exchange site-level forecasts and actuals programmatically. Red flags: manual spreadsheets only; no controlled screening-log schema.

  1. Require 12-month EHR cohort counts filtered by key criteria (with data steward sign-off).
  2. Collect conversion history for similar trials (pre-screen → consent → randomization).
  3. Capture coordinator capacity (hours/week) and protected clinic slots for screening.
  4. Quantify diagnostic/procedure wait times that gate eligibility timelines.
  5. Document startup latencies (contracts, IRB/REC, pharmacy mapping) with medians/IQR.
  6. Identify top 3 local consent barriers and site-controlled mitigations.
  7. Confirm availability of exportable screening logs with unique IDs.
  8. Request formal competing-trial list within 30 miles and site strategy to differentiate.
  9. Obtain written referral pathways (internal, network, community partners).
  10. Record who owns forecasting (role) and the weekly cadence for updates.

The scoring sheet: weights, confidence, and a defendable math story

Build a weighted model you can explain in two minutes

Keep it simple and transparent. Assign weights to five domains: Patient Flow (30%), Screening Capacity (20%), Startup Latency (15%), Competing Trials (15%), Consent Behavior (20%). Convert site answers to normalized sub-scores (0–100) with clearly published rules. Example: if coordinator hours/week ≥16 and there are two protected screening half-days, Screening Capacity earns ≥85.
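The weighted model can be stated in a few lines. A minimal sketch using the five domain weights from the text; the sub-score values are invented for illustration:

```python
# Five-domain weighted composite from the text (weights sum to 1.0).
WEIGHTS = {
    "patient_flow": 0.30,
    "screening_capacity": 0.20,
    "startup_latency": 0.15,
    "competing_trials": 0.15,
    "consent_behavior": 0.20,
}

def composite_score(sub_scores: dict) -> float:
    """Weighted sum of normalized (0-100) domain sub-scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[d] * sub_scores[d] for d in WEIGHTS)

# Illustrative site: strong screening capacity, crowded competitive landscape.
site = {"patient_flow": 70, "screening_capacity": 85,
        "startup_latency": 60, "competing_trials": 50, "consent_behavior": 75}
print(round(composite_score(site), 1))  # → 69.5
```

Because the sub-scores are already normalized to 0–100, the composite stays on the same scale and can be explained domain by domain in a selection meeting.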

From score to forecast with confidence

Translate composite score to an initial monthly forecast using historical analogs, then apply a confidence factor based on data quality (stale EHR pulls, missing logs, unverified referrals). Publish 80% bounds, not a point fantasy. Low data quality widens the interval and downgrades site priority even if the mean looks attractive.
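The score-to-forecast step above might be sketched as follows. The analog rate, base half-width, and data-quality multipliers are illustrative assumptions, not values from the text; in practice they come from your historical analogs:

```python
# Hedged sketch: composite score -> monthly forecast with ~80% bounds,
# widened when data quality is poor (stale EHR pulls, missing logs).

def forecast_with_bounds(composite: float, analog_rate_per_point: float = 0.05,
                         base_halfwidth: float = 0.25, data_quality: str = "high"):
    """Return (low, mean, high) randomizations/month; poor data widens the interval."""
    widen = {"high": 1.0, "medium": 1.5, "low": 2.0}[data_quality]
    mean = composite * analog_rate_per_point   # e.g. score 70 -> 3.5/month
    half = mean * base_halfwidth * widen
    return (max(mean - half, 0.0), mean, mean + half)

print(forecast_with_bounds(70, data_quality="low"))  # → (1.75, 3.5, 5.25)
```

Note how the same mean forecast carries a much wider interval at low data quality, which is exactly the signal that should downgrade site priority.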

Prevent “gaming” and enforce evidence

For any claim that materially affects the score, require an artifact (EHR cohort screenshot, scheduling report). Add a “credibility” modifier that can subtract up to 10 points for poor evidence. Publish these rules so sites know the bar and the study team can defend down-selection.
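The credibility modifier can be published as an explicit rule. A sketch assuming an illustrative per-claim penalty (the 10-point cap is from the text; the 5-point increment and artifact names are not):

```python
# Sketch of the credibility modifier: missing artifacts behind high-impact
# claims subtract points, capped at 10 per the published rule.

def apply_credibility(score: float, evidence: dict) -> float:
    """Subtract up to 10 points for claims without a filed artifact."""
    penalty = sum(5.0 for filed in evidence.values() if not filed)
    return score - min(penalty, 10.0)

print(apply_credibility(82.0, {"ehr_cohort_count": True,
                               "referral_mou": False,
                               "scheduling_report": False}))  # → 72.0
```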

Scenario | Option | When to choose | Proof required | Risk if wrong
High patient flow, low coordinator capacity | Conditional selection + surge staffing | Coordinator hours can double in <4 weeks | Staffing plan; clinic slot proof | Leads pile up; poor subject experience
Strong consent rates, long diagnostics wait | Add mobile/partner diagnostics | External scheduling MSA feasible | Vendor quotes; governance approval | Attrition before eligibility confirmed
Great answers, poor evidence | Downgrade score; revisit in 2 weeks | Artifacts promised but not filed | - | Over-commitment; missed FPI
Moderate score, critical geography | Keep as back-up; open later | Contingency value outweighs cost | - | Unused site cost; spread thin

Process & evidence: make it rerunnable, traceable, and inspection-proof

Wire data sources into operations

Automate EHR cohort pulls where possible and capture steward attestations with time windows. Store screening logs in a controlled schema with unique IDs and role-based access; route changes through change control. Tie forecasting into CTMS so weekly updates flow without spreadsheets, and enable drill-through from portfolio dashboards to the underlying site listings.

Define oversight hooks (KRIs & actions)

Track KRIs such as consent drop-off, screen failure drivers, and visit lead-time. Use a small set of thresholds with unambiguous actions: if forecast accuracy misses by >30% two cycles in a row, shift budget to better-performing sites or escalate mitigations. Escalation outcomes should feed program risk governance and your QTLs view.
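The ">30% miss, two cycles in a row" rule above is deliberately mechanical, which makes it easy to encode. A sketch with an illustrative structure:

```python
# Sketch of the unambiguous KRI threshold: forecast accuracy missing by
# >30% for two consecutive cycles triggers escalation.

def check_forecast_kri(forecast: list, actual: list, threshold: float = 0.30) -> str:
    """Escalate when the relative miss exceeds the threshold two cycles running."""
    misses = [abs(f - a) / f > threshold for f, a in zip(forecast, actual)]
    if len(misses) >= 2 and misses[-1] and misses[-2]:
        return "escalate: shift budget or apply mitigations"
    return "ok"

# Two consecutive >30% misses on a forecast of 4/month:
print(check_forecast_kri(forecast=[4.0, 4.0, 4.0], actual=[3.8, 2.5, 2.6]))
```

Keeping the rule this simple is the point: anyone in governance can recompute it from the weekly listing.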

QC / Evidence Pack: what to file where

  • RACI, risk register, KRI/QTLs dashboard for feasibility and enrollment.
  • System validation (Part 11 / Annex 11), audit trail samples, SOP references.
  • Safety interfaces (EHR alerts, adverse event routing) noted per ICH E2B(R3).
  • Forecast lineage and traceability (source listings → composite score → portfolio view) using CDISC-aligned terms and example SDTM visit naming where relevant.
  • CAPA records for systemic data quality or forecasting issues with effectiveness checks.

The inspection-ready feasibility questionnaire: paste-ready, high-signal items

Patient flow & eligibility filters (quantitative)

Provide 12-month counts of patients meeting inclusion A/B/C and exclusion X/Y/Z; new-patient inflow per month; percent with confirmed contact info; payer mix relevant to required procedures; typical time from diagnosis to specialist appointment; competing trials list and overlap rate.

Screening engine (operational)

Coordinator hours/week; protected clinic half-days for screening; coordinator:PI ratio; diagnostic wait times for eligibility; availability of evening/weekend clinics; access to mobile diagnostics.

Consent behavior (behavioral)

Historical conversion rates by similar burden trials; top 3 consent barriers; mitigations site controls (transport, parking, tele-consent); languages supported; community outreach partnerships.

Startup latency (timeline)

Medians (IQR) for contracts, IRB/REC, pharmacy mapping, system onboarding; last three trials’ first-patient-in latencies; typical bottlenecks and fixes that worked.

Data & systems (traceability)

Screening log system; export capability; role-based access; evidence storage; reconciliation cadence to CTMS; ability to provide weekly forecast deltas with reasons.

Modern realities: decentralized, digital, and human—baked into the score

Decentralized and patient-tech readiness

If your design includes remote activities (DCT) or patient-reported outcomes (eCOA), weight a readiness sub-score: identity assurance, device logistics, broadband coverage, staff training for remote support, and cultural/linguistic suitability of materials. Ask sites how many remote visits/week they can support and what their help-desk coverage looks like.

Equity and community factors

Include indicators that proxy for reaching under-represented populations: local partnerships, clinic hours outside 9–5, availability of interpreters, and transportation solutions. These questions both improve accrual and strengthen your public-facing commitments.

Budget and incentive realism

Ask whether proposed per-patient budgets cover coordinator time, diagnostics, and retention touchpoints. Undercooked budgets lead to quiet disengagement; your scoring sheet should penalize this risk unless the sponsor is willing to adjust.

Turn answers into forecasts—and manage reality every week

The weekly loop

Require sites to submit forecast/actuals deltas with reasons and next-week plan. Consolidate at program level and use simple visuals: funnel (pre-screen→consent→randomization), capacity bar (coordinator hours), and a risk list keyed to KRIs. Keep narrative short; actions matter more than prose.
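The weekly roll-up can stay equally lightweight. A minimal sketch of consolidating site deltas at program level; record fields and site IDs are illustrative:

```python
# Sketch of the weekly forecast/actuals delta roll-up described above.
weekly = [
    {"site": "US-01", "forecast": 4, "actual": 3, "reason": "MRI slot backlog"},
    {"site": "UK-02", "forecast": 3, "actual": 4, "reason": "extra referral clinic"},
]

program_delta = sum(w["actual"] - w["forecast"] for w in weekly)
flagged = [w["site"] for w in weekly if w["actual"] < w["forecast"]]
print(program_delta, flagged)  # → 0 ['US-01']
```

A program-level delta of zero can still hide site-level problems, which is why the per-site flag list matters more than the aggregate.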

Re-weight quickly when the field changes

When a competing trial opens or a diagnostic line clears, adjust weights for that domain and publish the new composite quickly. The math is simple; the discipline is keeping a single source of truth and filing the rationale.

Close the loop to quality & safety

Feasibility that ignores safety or data quality is self-defeating. If rapid growth at a site correlates with consent deviations or adverse event under-reporting, throttle back and invest in training. Your governance minutes should show these cause-and-effect checks.

FAQs

How many questions should a predictive feasibility form include?

Keep it to 25–35 high-signal items across five domains (Patient Flow, Screening Capacity, Startup Latency, Competing Trials, Consent Behavior). Each question should either drive the score or populate a decision table—if it does neither, cut it.

How do we validate the scoring sheet?

Back-fit the model to 2–3 completed studies in a similar indication and compare predicted vs actual monthly randomizations. Adjust weights where residuals are persistent. Re-validate after protocol or process changes that affect conversion or capacity.
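The back-fit check reduces to comparing predicted and actual monthly randomizations and inspecting the residuals. A sketch with invented data:

```python
# Sketch of the back-fit validation: persistent residuals of one sign mean
# the model is biased and the offending domain weights need adjustment.
predicted = [4.0, 4.0, 3.5, 3.5, 3.0, 3.0]
actual    = [3.0, 2.8, 2.9, 2.6, 2.4, 2.2]

residuals = [a - p for a, p in zip(actual, predicted)]
mean_residual = sum(residuals) / len(residuals)
# A persistently negative mean residual says the model is optimistic:
# down-weight the domains driving the over-prediction, then re-validate.
print(round(mean_residual, 2))
```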

What evidence must accompany high-impact claims?

Any claim that moves a sub-score >10 points should have a supporting artifact: EHR cohort screenshots with steward signature and date window, scheduling reports, signed referral MOUs, or historical screening logs. File these where your team and inspectors can drill through from the scorecard.

How do we include remote elements fairly?

Create a DCT/ePRO readiness sub-score that tests identity, logistics, staff support, and connectivity. Sites that can support remote visits reliably should score higher because they can convert more interested candidates and maintain retention.

What’s a defensible way to present the forecast?

Provide an 80% confidence interval around the monthly point estimate and clearly state assumptions (referral volume, consent rate, diagnostic capacity). Publish weekly deltas with short reasons and show actions taken when reality diverges from plan.

How do we prevent optimistic responses without proof?

Use a credibility modifier and publish it. If evidence is missing or stale, subtract up to 10 points, widen the confidence interval, and decrease priority in site selection. Re-score promptly when evidence arrives.

Key Questions to Include in a Feasibility Questionnaire
Published Mon, 25 Aug 2025

Essential Questions for Designing an Effective Feasibility Questionnaire

Understanding the Role of Feasibility Questionnaires

Before selecting sites and investigators, sponsors and CROs must carefully evaluate a site’s ability to successfully execute a clinical trial. A feasibility questionnaire is one of the most important tools for this assessment. These documents collect structured information about a site’s resources, patient pool, regulatory experience, and infrastructure readiness. Regulatory agencies such as the FDA, EMA, and national authorities expect sponsors to document feasibility efforts as part of Good Clinical Practice (GCP) compliance. Without a robust feasibility process, sponsors risk delays, under-enrollment, and inspection findings during trial audits.

Feasibility questionnaires typically cover domains such as:

  • Patient recruitment and retention potential
  • Principal Investigator (PI) and sub-investigator experience
  • Site infrastructure, including equipment and labs
  • Previous performance in similar therapeutic areas
  • Local regulatory and ethics committee processes

For example, in oncology studies, questionnaires often probe whether the site has access to pathology labs capable of immunohistochemistry testing, or whether genetic testing partnerships exist. In infectious disease studies, questions may focus on availability of biosafety level facilities. Thus, while core domains remain consistent, therapeutic area–specific tailoring is essential.

Critical Patient-Related Questions

Patient recruitment is one of the most common barriers to timely trial completion. Regulators, including the European Medicines Agency (EMA), emphasize that feasibility assessments should be realistic and data-driven. A questionnaire must therefore ask targeted questions about patient populations. Examples include:

Sample Question | Purpose
How many patients with the target condition were treated at your site in the past 12 months? | Estimate available patient pool using real-world data
What percentage of patients at your site are willing to participate in clinical trials? | Gauge cultural and demographic acceptance of trials
Do you have access to patient registries or referral networks? | Assess additional recruitment sources

Incorporating epidemiological data strengthens these questions. For example, if a site estimates 300 eligible patients annually but national disease burden data suggests fewer than 50 cases in the region, this discrepancy raises concerns about overestimation. Sponsors should cross-check questionnaire responses with external databases such as ClinicalTrials.gov to validate feasibility claims against trial recruitment histories.
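The epidemiology cross-check above is a simple comparison that is easy to automate across a site list. A sketch, with the tolerance factor as an illustrative assumption:

```python
# Sketch of the cross-check described above: flag sites whose claimed pool
# exceeds what regional disease-burden data can plausibly support.

def flag_overestimate(site_claim: int, regional_burden: int, tolerance: float = 1.0) -> bool:
    """True when a site claims more eligible patients than the region has."""
    return site_claim > regional_burden * tolerance

# The example from the text: 300 claimed vs <50 regional cases.
print(flag_overestimate(site_claim=300, regional_burden=50))  # → True
```

A tolerance above 1.0 would allow for referral catchment beyond the immediate region; where that applies, ask the site to evidence the referral pathways.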

Questions on Investigator and Staff Experience

A site’s human resources are equally critical. Regulators often highlight inadequate investigator oversight as a frequent finding in inspections. Questionnaires should evaluate whether the PI and supporting staff have the necessary experience. Key questions include:

  • How many clinical trials has the PI conducted in the past five years, and in which therapeutic areas?
  • Has the PI received any regulatory inspection findings related to GCP?
  • What is the average turnover rate of study coordinators and research nurses?
  • What GCP training and certification do staff currently hold?

For example, a PI with ten oncology trials completed but with multiple FDA Form 483 citations may be a higher risk compared to a less experienced PI with a clean regulatory record. Feasibility questionnaires should capture such nuances.

Infrastructure and Technology Questions

Infrastructure capability directly influences trial quality. For complex trials requiring bioanalytical testing, imaging, or cold-chain management, questionnaires must go beyond basic facilities inquiries. Sample questions include:

  • Does the site have validated -80°C freezers with continuous temperature monitoring?
  • Are backup power systems in place to safeguard sample integrity?
  • Is the site equipped with validated software for electronic data capture (EDC)?
  • Are laboratory instruments calibrated according to international standards (e.g., ISO 15189)?

Some questionnaires include sample validation parameters such as:

Parameter | Example Value
Limit of Detection (LOD) | 0.05 ng/mL for biomarker assay
Limit of Quantitation (LOQ) | 0.10 ng/mL for biomarker assay
Power backup duration | Minimum 8 hours for critical equipment

These details help sponsors differentiate between sites that claim readiness and those that are genuinely prepared for trial operations.

Regulatory and Ethics Questions

Finally, feasibility questionnaires must assess local regulatory and ethics environments. Delays in IRB/EC approvals are a common reason for missed trial timelines. Essential questions include:

  • What is the average IRB/EC review timeline for clinical trials at your institution?
  • Do you have prior experience submitting to regulatory authorities such as FDA, EMA, CDSCO, or PMDA?
  • Are there institutional policies restricting enrollment of vulnerable populations?

For example, if a site reports an average of 45 days for ethics approvals, sponsors can plan activation timelines accordingly. Sites with extended timelines (e.g., >90 days) may not be suitable for fast-track studies.

Transition to Next Considerations

The above domains—patient recruitment, investigator experience, infrastructure, and regulatory landscape—form the backbone of feasibility questionnaires. However, sponsors must also evaluate validation of responses, data reliability, and strategies to prevent overpromising. These aspects are explored below, with a focus on case studies, pitfalls, and best practices for robust feasibility planning.

Validating Feasibility Questionnaire Responses

Feasibility questionnaires are only useful if responses are accurate. Regulators and sponsors increasingly emphasize data verification as part of trial oversight. Sponsors must apply validation strategies to ensure that sites are not inflating capabilities or patient pools to secure trial participation.

One approach is to cross-verify patient pool estimates with hospital records, referral databases, or national disease registries. For example, if a site reports 500 annual cases of Type 2 diabetes, but regional public health data suggests only 300 cases, the sponsor should investigate. Similarly, sponsors should request anonymized patient counts or ICD-10 code reports to substantiate claims.

Case Study: Inflated Patient Recruitment Claims

A multinational sponsor faced delays in an oncology trial when three sites overestimated recruitment potential. While questionnaires projected 50 patients per site annually, actual enrollment was less than 10. Upon review, it was found that sites included patients outside inclusion criteria. This case underscores the importance of rigorous validation, including review of electronic health records (EHRs) and prior recruitment histories from registries such as ISRCTN Registry.

Common Pitfalls in Questionnaire Design

Despite best intentions, poorly designed questionnaires often result in incomplete or misleading data. Common pitfalls include:

  • Overly generic questions that do not capture therapeutic-specific nuances
  • Yes/No questions without quantitative context (e.g., “Do you have lab facilities?” instead of “How many calibrated centrifuges are available?”)
  • Failure to include data validation fields or request supporting documentation
  • Excessive questionnaire length leading to incomplete responses

To avoid these issues, sponsors should pilot-test questionnaires with selected sites and adjust based on feedback. Regulatory authorities also recommend focusing on essential questions that directly impact trial feasibility, rather than exhaustive lists that burden sites unnecessarily.

Best Practices for Effective Questionnaires

Effective feasibility questionnaires balance comprehensiveness with clarity. Best practices include:

  • Tailoring questionnaires by therapeutic area (oncology, cardiology, infectious disease)
  • Using a mix of quantitative and qualitative questions
  • Integrating electronic platforms to streamline completion and analysis
  • Embedding mandatory data validation checks (e.g., requiring supporting documentation uploads)

Some sponsors now deploy digital feasibility tools integrated with Clinical Trial Management Systems (CTMS). These allow automated scoring, comparison across sites, and identification of red flags such as inconsistent patient data. For example, an AI-enabled feasibility tool might score sites based on patient pool adequacy, infrastructure readiness, and regulatory history, generating a composite feasibility index for decision-making.

Sample Feasibility Scoring Framework

Domain | Weight | Example Metric
Patient Recruitment | 40% | Number of eligible patients per year
Investigator Experience | 25% | Number of prior GCP-compliant trials
Infrastructure Readiness | 20% | Validated equipment and facilities
Regulatory/EC Environment | 15% | Average ethics review timeline

This weighted approach ensures objective decision-making while allowing customization for specific trial needs. For instance, in rare disease studies with small populations, patient recruitment weight might increase to 60%.
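The rare-disease customization above raises a practical question: when Patient Recruitment goes from 40% to 60%, the other weights must shrink so the total stays at 100%. A sketch of proportional rescaling (the renormalization scheme is an illustrative choice, not mandated by the text):

```python
# Raise one domain's weight and scale the remaining weights proportionally
# so the composite feasibility index stays on the same 100% scale.
base = {"patient_recruitment": 0.40, "investigator_experience": 0.25,
        "infrastructure": 0.20, "regulatory_ec": 0.15}

def reweight(weights: dict, domain: str, new_weight: float) -> dict:
    rest = 1.0 - new_weight            # budget left for the other domains
    old_rest = 1.0 - weights[domain]
    return {d: (new_weight if d == domain else w * rest / old_rest)
            for d, w in weights.items()}

rare = reweight(base, "patient_recruitment", 0.60)
print({d: round(w, 3) for d, w in rare.items()})
```

Proportional rescaling preserves the relative ordering of the remaining domains; a sponsor could equally redistribute by explicit decision, as long as the choice is documented.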

Conclusion

Feasibility questionnaires are a cornerstone of site selection and clinical trial planning. By including targeted questions on patients, investigators, infrastructure, and regulatory environment—and by validating responses through data cross-checks—sponsors can mitigate risks of underperformance and regulatory non-compliance. Effective design not only accelerates trial start-up but also strengthens inspection readiness by demonstrating a structured feasibility process.

Site Feasibility Assessments in Ultra-Rare Conditions
Published Tue, 19 Aug 2025

Optimizing Site Feasibility in Clinical Trials for Ultra-Rare Diseases

Why Site Feasibility is Especially Crucial for Ultra-Rare Trials

In ultra-rare disease clinical trials, where eligible patient populations may be limited to only a few individuals per country—or even globally—site feasibility takes on an elevated level of importance. A misstep in site selection can lead to zero enrollment, delays, protocol amendments, or even trial failure. Sponsors cannot afford traditional high-volume approaches or selection based on historical metrics alone.

Feasibility assessments in these studies must focus on disease-specific patient availability, diagnostic capacity, investigator expertise in rare pathologies, and local regulatory familiarity with orphan drug protocols. Effective feasibility processes enable targeted recruitment, reduced site burden, and streamlined regulatory navigation. Agencies like the EMA and FDA expect robust documentation showing rationale behind site selection for such sensitive research populations.

Challenges in Identifying Feasible Sites for Ultra-Rare Conditions

Key challenges in site feasibility include:

  • Scattered patient populations: Patients may be spread across countries or continents
  • Limited diagnostic infrastructure: Especially for genotypically defined subgroups
  • Low investigator experience: Physicians may have managed only 1–2 cases ever
  • Ethical and regulatory complexity: Local authorities may lack rare disease trial precedents

For example, in a lysosomal storage disorder trial targeting 12 global patients, one high-profile academic site failed to enroll due to lack of genetic testing facilities, despite clinical interest. Early feasibility vetting could have flagged this mismatch.

Steps in Conducting Rare Disease Feasibility Assessments

A structured feasibility process for ultra-rare studies involves:

  1. Feasibility Questionnaire: Tailored to assess site’s access to target population, diagnostic tools, and previous rare disease experience
  2. Patient Funnel Analysis: Estimating the number of patients diagnosable, consentable, and willing to participate within study timelines
  3. Protocol Complexity Assessment: Determining alignment between study demands and site capabilities (e.g., need for sedation MRI, long-term follow-up)
  4. Regulatory Landscape Review: Understanding IRB timelines, import/export rules, and pediatric approval pathways
  5. Site Qualification Visits (SQVs): Virtual or on-site walkthroughs for infrastructure and PI engagement evaluation

These steps, executed sequentially, provide a risk-profiled site readiness score and inform go/no-go decisions with clarity.
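Step 2, the Patient Funnel Analysis, can be sketched as a simple product of stage rates. The rates below are illustrative assumptions a site would be asked to evidence, not figures from the text:

```python
# Sketch of the Patient Funnel Analysis: from diagnosable patients down to
# likely enrollers within the study window.

def patient_funnel(diagnosable: int, consentable_rate: float,
                   willing_rate: float) -> int:
    """Expected participants = diagnosable × consentable × willing."""
    return round(diagnosable * consentable_rate * willing_rate)

# e.g. 12 genetically confirmed patients, 75% reachable and consentable,
# 60% willing within the study timeline:
print(patient_funnel(12, 0.75, 0.60))  # → 5
```

In ultra-rare settings the output is often a single-digit number per site, which is exactly why per-stage evidence (registry counts, consent history) matters more than the headline estimate.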

Patient Mapping and Registry Utilization

Feasibility should include proactive engagement with national rare disease registries, patient advocacy groups, and reference centers. Mapping where patients are diagnosed, managed, and treated—not just where hospitals exist—is critical.

For instance, the Clinical Trials Registry – India (CTRI) and national disease registries can help sponsors assess where most of the genetically confirmed cases are clustered. Such data may suggest partnerships with local genetic labs or patient support NGOs to ensure effective outreach during recruitment.

Case Study: Multi-National Feasibility for a Pediatric Enzyme Replacement Trial

A sponsor planning a global trial for a pediatric metabolic disorder with 18 patients worldwide began by distributing a standard feasibility questionnaire. Despite 30 responses, only 8 sites could confirm access to more than 1 patient, and only 4 had proven enzyme replacement therapy (ERT) experience. Post-screening, 5 were qualified through remote SQVs. This focused approach led to 95% of planned enrollment in under 8 months.

Such precision feasibility ensured optimal site-to-patient ratio, regulatory readiness, and engagement from experienced clinicians—drastically reducing trial risk.

Feasibility in Decentralized or Hybrid Trial Models

Decentralized trial (DCT) elements are gaining traction in rare disease research. Feasibility must now include assessment of:

  • Telemedicine infrastructure for follow-ups
  • Home health visit availability for sample collection or infusions
  • Local lab capabilities for urgent assessments
  • eConsent and remote monitoring readiness

Ultra-rare disease trials may enroll just one or two patients per site—making hybrid or DCT components not just helpful but essential for trial execution.

Regulatory Expectations and Documentation

Agencies such as EMA, FDA, and PMDA expect site selection to be justified in the Clinical Trial Application (CTA) dossier. Key documents include:

  • Site feasibility reports and questionnaires
  • Rationale for geographic distribution of sites
  • Documentation of site capabilities for protocol-specific procedures
  • Backup site lists and criteria for substitution

During GCP inspections, regulators may question why non-performing sites were selected or why local approvals were delayed. A clear feasibility traceability matrix helps defend site selection rationale.

Conclusion: Precision Feasibility is a Cornerstone of Rare Disease Trial Success

In ultra-rare clinical trials, each patient is precious—and each site is strategic. A well-executed feasibility process minimizes trial risk, optimizes resource use, and accelerates timelines. Sponsors should invest in tailored feasibility assessments that go beyond numbers and focus on true site readiness for complex, high-stakes research.

From infrastructure and personnel to patient access and regulatory history, every data point matters. Precision in feasibility leads to precision in outcomes—both scientific and operational.
