MedDRA/WHODrug & Footnotes: Version Control That’s Traceable

Make MedDRA/WHODrug Version Control Traceable: Footnotes, Change Logs, and Evidence That Survives Review

Why dictionary version control is a regulatory deliverable (not just a data-management task)

What “traceable” means for coded data

When reviewers challenge an adverse event count or a concomitant medication pattern, they are really testing whether your coded terms can be traced back to the raw descriptions and forward to the analysis without ambiguity. That requires: naming the dictionary and its version in outputs, proving how re-codes were handled, and showing that every change left a trail the team can open in seconds. If your pipeline cannot demonstrate this, re-cuts will drift, and seemingly small recoding decisions will become submission risks.

Start by declaring your dictionaries, once

State plainly which dictionaries govern safety and medication coding and show them to reviewers where they expect to see them—titles, footnotes, metadata, reviewer guides, and the change log. This is where you anchor your process to MedDRA for adverse events and WHODrug for concomitant medications and therapies; the rest of the system (shells, listings, datasets, and CSR text) should echo those declarations, word for word.

The compliance backbone (one paragraph you can reuse everywhere)

Your coded-data controls align to CDISC conventions, with lineage from SDTM into ADaM and machine-readable definitions in Define.xml supported by ADRG and SDRG. Oversight follows ICH E6(R3), estimand language follows ICH E9(R1), and safety exchange is consistent with ICH E2B(R3). Operational expectations consider FDA BIMO; electronic records/signatures meet 21 CFR Part 11 and map to Annex 11. Public transparency stays consistent with ClinicalTrials.gov and EU postings under EU-CTR via CTIS, and privacy respects HIPAA. Every decision leaves an audit trail, systemic issues route through CAPA, risk is tracked with QTLs and governed by RBM, and artifacts are filed to the TMF/eTMF. Cite authorities inline once—FDA, EMA, MHRA, ICH, WHO, PMDA, TGA—and keep the rest operational.

Regulatory mapping: US-first clarity with EU/UK portability

US (FDA) angle—event → evidence in minutes

For US assessors, the most efficient path begins at an AE/CM listing, continues to the coding policy and dictionary version, and ends in the derivation notes that produce counts in safety tables. Titles and footnotes should declare the dictionary (e.g., “MedDRA 26.1” or “WHODrug Global B3 April-YYYY”), and reviewer guides should narrate any mid-study re-codes, including the reason, scope, and before/after impacts. Inspectors expect re-runs to be deterministic for the same cut and parameters; if counts changed due to a dictionary update, you must show the change record and reconciliation listing that explains why.

EU/UK (EMA/MHRA) angle—same truth, localized wrappers

EU/UK reviewers ask the same traceability questions, but they also probe alignment with public narratives (e.g., AESIs, ECIs), dictionary governance, and accessibility (grayscale legibility, clear abbreviations). Keep one truth—dictionary, version, and change control—then adapt only labels and narrative wrappers. If coded terms feed estimand-sensitive endpoints (e.g., NI analyses of safety outcomes), call out the version in the footnote and cross-reference the SAP clause to avoid interpretive drift across submissions.

| Dimension | US (FDA) | EU/UK (EMA/MHRA) |
| --- | --- | --- |
| Electronic records | Part 11 validation; role attribution | Annex 11 alignment; supplier qualification |
| Transparency | Consistency with ClinicalTrials.gov wording | EU-CTR status via CTIS; UK registry alignment |
| Privacy | HIPAA “minimum necessary” | GDPR/UK GDPR minimization & residency |
| Dictionary declarations | Version in titles/footnotes and reviewer guides | Same, plus emphasis on governance narrative |
| Mid-study updates | Change log + reconciliation listings | Same, with explicit impact analysis exhibit |
| Inspection lens | Event→evidence drill-through speed | Completeness & portability of rationale |

Process & evidence: a version-control system for coded data that reduces rework by 50%+

Freeze names, state versions, and make updates predictable

Publish a one-page coding convention: which dictionary applies to which domains, how synonyms and misspellings are handled, and how multi-ingredient products are mapped. Freeze the notation for versions (“MedDRA 26.1” / “WHODrug Global B3 April-YYYY”) and require the same token to appear in shells, listings, reviewer guides, and specs. Put all dictionary files, mapping tables, and synonym lists under version control; commits should be atomic and tied to change requests.
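
As one illustration, here is a minimal Python sketch of an atomic change-log entry tied to a single change request, assuming a hypothetical repository layout (dictionaries/meddra_26_1/llt.asc, mappings/synonyms.csv) and a CR ticket ID; adapt the fields and paths to your own change-control system:

```python
import hashlib
import json
import pathlib
from datetime import date

def sha256_of(path: pathlib.Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_commit(files: list[str], change_request: str, version_token: str) -> dict:
    """Build one atomic change-log entry: every file hashed, tied to one CR."""
    return {
        "change_request": change_request,   # ticket ID from your CR system
        "version_token": version_token,     # frozen notation, e.g. "MedDRA 26.1"
        "date": date.today().isoformat(),
        "files": {f: sha256_of(pathlib.Path(f)) for f in files},
    }

# Example usage (hypothetical paths): one commit covering the dictionary
# load and the synonym table together, so neither can change alone.
entry = register_commit(
    files=["dictionaries/meddra_26_1/llt.asc", "mappings/synonyms.csv"],
    change_request="CR-2041",
    version_token="MedDRA 26.1",
)
print(json.dumps(entry, indent=2))
```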

Run reconciliation listings at each cut

At every database snapshot, run standard listings that show top deltas: new preferred terms, counts that shifted after a dictionary update, and records that failed or changed mapping. File before/after exhibits for material changes with a short narrative of impact on safety tables. This practice prevents “mystery count” escalations near submission.
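
A minimal sketch of such a delta listing in Python/pandas, assuming each snapshot loads as a DataFrame with one row per coded record and a "pt" (preferred term) column; the column names are illustrative:

```python
import pandas as pd

def pt_deltas(before: pd.DataFrame, after: pd.DataFrame) -> pd.DataFrame:
    """Compare preferred-term counts across two coded snapshots."""
    b = before["pt"].value_counts().rename("n_before")
    a = after["pt"].value_counts().rename("n_after")
    delta = pd.concat([b, a], axis=1).fillna(0).astype(int)
    delta["shift"] = delta["n_after"] - delta["n_before"]
    # Keep only terms whose counts moved (new PTs appear with n_before = 0),
    # sorted by the magnitude of the shift for the top-deltas listing.
    return delta[delta["shift"] != 0].sort_values("shift", key=abs, ascending=False)
```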

Make footnotes carry the story reviewers need

Titles and footnotes should name the dictionary and version, declare how partial dates and multiple records per visit are handled, and specify any special mappings (e.g., custom AESI lists). When versions change, the footnote must note the effective date and cross-reference the change log entry, so the story is visible everywhere the numbers appear.

  1. Publish a coding convention and freeze dictionary naming and version tokens.
  2. Place dictionary source files and synonym tables under version control.
  3. Require titles/footnotes to cite dictionary and version across all outputs.
  4. Run reconciliation listings at each cut; file before/after exhibits for shifts.
  5. Cross-link reviewer guides (ADRG/SDRG) to change logs and specs.
  6. Parameterize re-code windows and rules; no hard-coded dates in macros.
  7. Capture environment hashes and parameters to ensure reproducible re-runs (see the sketch after this list).
  8. Escalate recurring deltas to governance; create CAPA with effectiveness checks.
  9. Prove drill-through: output → footnote → change log → listing → source text.
  10. File all artifacts to TMF with two-click retrieval from CTMS tiles.
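
Items 6 and 7 lend themselves to a small sketch: a run manifest that captures the parameters and an environment fingerprint for each execution, assuming a hypothetical JSON parameter file; any run framework with equivalent fields works:

```python
import hashlib
import json
import platform
import sys

def run_manifest(param_file: str, data_cut: str) -> dict:
    """Capture parameters and an environment fingerprint for one run."""
    with open(param_file, encoding="utf-8") as fh:
        params = json.load(fh)
    env = f"{platform.platform()}|python-{sys.version.split()[0]}"
    return {
        "data_cut": data_cut,      # snapshot label, e.g. a database-lock tag
        "param_file": param_file,
        # Hash the canonicalized parameters so any silent edit changes the hash.
        "param_hash": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest(),
        # Coarse environment fingerprint; substitute a lockfile hash if you have one.
        "environment_hash": hashlib.sha256(env.encode()).hexdigest(),
    }
```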

Decision Matrix: choose the right option when dictionaries, synonyms, or products change

| Scenario | Option | When to choose | Proof required | Risk if wrong |
| --- | --- | --- | --- | --- |
| MedDRA version update mid-study | Versioned re-code with impact exhibit | Routine release; broad PT/SOC shifts | Change log; before/after counts; listing deltas | Unexplained safety count changes |
| WHODrug formulation change (multi-ingredient) | Controlled split-map to components | Therapy analysis requires components | Spec note; mapping table; unit tests | Over/under-count exposure signals |
| Company synonym list grows | Governed additions + audit trail | Recurring free-text variants | CR/approval; versioned synonyms | Shadow mapping; repeat queries |
| Local-language term spike | Targeted lexicon expansion + QC | New region/site onboarding | Lexicon diff; sample recodes | Misclassification; site friction |
| Safety signal under code review | Lock version; defer re-code to post-cut | Near-lock timelines; high scrutiny | Governance minutes; risk note | Count drift; avoidable delay |

Document decisions where inspectors will look first

Maintain a “Dictionary Decision Log”: question → option → rationale → artifacts (change log ID, listing diff, spec snippet) → owner → effective date → effectiveness metric (e.g., query reduction). File to Sponsor Quality and cross-link from ADRG/SDRG so the path from a number to a decision is obvious.

QC / Evidence Pack: the minimum, complete set reviewers expect for coded data

  • Coding convention and dictionary governance SOP with version history.
  • Dictionary source files and synonym tables under version control (hashes).
  • Change log entries with scope, rationale, owner, and impact summaries.
  • Reconciliation listings (before/after) for material updates with narrative.
  • ADRG/SDRG sections that cite dictionary versions and special handling.
  • Shells/listings with versioned titles/footnotes and provenance footers.
  • Program headers with lineage tokens and parameter file references.
  • Unit tests that cover edge cases (multi-ingredient, local language, duplicates).
  • Environment locks and rerun instructions producing byte-identical results.
  • TMF filing map with two-click retrieval from CTMS portfolio tiles.

Vendor oversight & privacy

Qualify coding vendors to your convention, enforce least-privilege access, and retain interface logs. For EU/UK subject-level listings, document minimization and residency controls; keep sample redactions and privacy review minutes with the evidence pack.

Footnotes that carry the hard truths: version, exceptions, and special lists

Footnote tokens (copy/paste)

Dictionary version: “Adverse events coded to MedDRA [version]; concomitant medications coded to WHODrug Global [release/format].”
Re-code notice: “Counts reflect re-coding from MedDRA [old]→[new] effective [date]; before/after listing in Appendix [id].”
Special lists: “AESIs reviewed per sponsor list v[xx]; ECIs flagged in listing [id].”

Where to put the tokens

Put the version token in every safety table title and in the AE/CM listing titles; put re-code tokens in footnotes at the first output impacted by the change; repeat only where numbers could be misread without the context. Use the same token strings in metadata (Define.xml) and reviewer guides.
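
A minimal sketch of such a shared token source in Python, mirroring the copy/paste tokens above; the token values shown are placeholders, not real releases:

```python
# One shared definition of the version tokens; every shell, program, and
# guide template reads from here so the strings cannot drift.
TOKENS = {
    "meddra_version": "26.1",                  # placeholder version
    "whodrug_release": "Global B3 April-YYYY", # placeholder release per your convention
}

def dictionary_footnote(tokens: dict = TOKENS) -> str:
    """Render the standard dictionary-version footnote from shared tokens."""
    return (
        f"Adverse events coded to MedDRA {tokens['meddra_version']}; "
        f"concomitant medications coded to WHODrug {tokens['whodrug_release']}."
    )

def recode_footnote(old: str, new: str, effective: str, appendix: str) -> str:
    """Render the re-code notice footnote with an explicit appendix pointer."""
    return (
        f"Counts reflect re-coding from MedDRA {old}→{new} "
        f"effective {effective}; before/after listing in Appendix {appendix}."
    )
```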

Common pitfalls & quick fixes

  • Pitfall: Version changes without visible notice → Fix: footnote token + change-log ID + reconciliation listing.
  • Pitfall: Shadow synonym lists → Fix: govern additions with approvals and hashes; publish diffs.
  • Pitfall: Multi-ingredient mapping drift → Fix: controlled split-map with tests and a visible policy.

Operational cadence: keep dictionaries, programs, and narratives synchronized

Parameterize what humans forget

Externalize dictionary versions, effective dates, and AESI/ECI lists in parameter files—not in macros. Run logs must echo parameters verbatim, and outputs must include a provenance footer (program path, timestamp, data cut, parameter file) so reviewers can re-run without archaeology.
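
A minimal sketch of such a footer, with hypothetical program and parameter-file names:

```python
from datetime import datetime, timezone

def provenance_footer(program: str, data_cut: str, param_file: str) -> str:
    """One footer line per output: enough for a reviewer to re-run it."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%MZ")
    return f"Source: {program} | Run: {stamp} | Cut: {data_cut} | Params: {param_file}"

# Example usage (hypothetical names), appended as the last line of a table:
print(provenance_footer("prog/ae_summary.py", "CUT-03", "params/run_params.json"))
```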

Dry runs and “coding days”

Schedule cross-functional readouts where clinicians, safety physicians, programmers, and QA review the latest deltas, re-coded terms, and their impact on tables. File minutes and before/after exhibits; convert recurring issues into CAPA with effectiveness checks.

Measure what matters

Track time-to-reconcile after a dictionary update, count of material shifts per cut, percentage of outputs with correct version tokens, and drill-through time (output → change log → listing → source). Set thresholds in portfolio QTLs and escalate exceptions.

FAQs

How prominently should dictionary versions appear?

Prominently enough that a reviewer cannot miss them: in safety table titles, AE/CM listing titles, footnotes where the context is critical, and in reviewer guides. The same token must also appear in Define.xml/metadata so machine and human readers see the same truth.

What’s the fastest way to prove a count changed because of a dictionary update?

Open the output footer (program path/parameters), show the footnote with the version token and change-log ID, and then open the reconciliation listing that shows the before/after pairs. Close with the governance minute that approved the update. That three-step path resolves most queries.

How should we handle multi-ingredient products in WHODrug?

Adopt a controlled split-map policy, document it in the convention, and test with synthetic fixtures. Footnote any departures from the default (e.g., product-level mapping when exposure analysis requires aggregates) and file the mapping table with the evidence pack.
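
A minimal sketch of a split-map applied to coded records, with a hypothetical mapping entry; in practice the mapping table would live under version control alongside the synonym lists:

```python
# Hypothetical governed mapping table: multi-ingredient product -> components.
SPLIT_MAP = {
    "co-amoxiclav": ["amoxicillin", "clavulanic acid"],
}

def split_map(records: list[dict]) -> list[dict]:
    """Apply the controlled split-map; unmapped products pass through unchanged."""
    out = []
    for rec in records:
        components = SPLIT_MAP.get(rec["product"].lower())
        if components is None:
            out.append(rec)
        else:
            # Keep the parent product on each component row for traceability.
            for comp in components:
                out.append({**rec, "product": comp, "parent_product": rec["product"]})
    return out
```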

Do mid-study MedDRA updates always require re-coding?

No. If timelines are tight and the impact is modest, lock the version for the current cut and schedule re-coding for the next one. Document the decision, the risk, and the plan in governance minutes, and carry a footnote that explains the lock to avoid confusion.

Where should synonym lists live, and how are they governed?

Under version control next to dictionary source files. Additions require change requests, approvals, and hashes. Publish diffs and run a targeted reconciliation listing to show the impact of new synonyms on counts or mappings.
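
A minimal sketch of such a diff, assuming a two-column CSV (verbatim, coded); adjust the key columns to your layout:

```python
import csv

def synonym_diff(old_path: str, new_path: str) -> dict:
    """Diff two versions of a synonym table keyed on the verbatim term."""
    def load(path: str) -> dict:
        with open(path, newline="", encoding="utf-8") as fh:
            return {row["verbatim"]: row["coded"] for row in csv.DictReader(fh)}

    old, new = load(old_path), load(new_path)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        # Verbatim terms present in both versions whose coded target changed.
        "remapped": sorted(v for v in set(old) & set(new) if old[v] != new[v]),
    }
```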

How do we prevent version drift between shells, listings, and reviewer guides?

Centralize tokens in a shared library referenced by shells, programs, and guide templates. When the version changes, update the token once, regenerate outputs, and re-run automated checks that ensure the token appears where required.
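
A minimal sketch of that automated check, assuming plain-text outputs in a single directory and a token value read, in practice, from the shared token library:

```python
import pathlib
import sys

REQUIRED_TOKEN = "MedDRA 26.1"  # placeholder; read from the shared token library

def outputs_missing_token(output_dir: str, token: str = REQUIRED_TOKEN) -> list[str]:
    """List text outputs whose titles/footnotes never mention the version token."""
    missing = []
    for path in sorted(pathlib.Path(output_dir).glob("*.txt")):
        if token not in path.read_text(encoding="utf-8", errors="ignore"):
            missing.append(path.name)
    return missing

if __name__ == "__main__":
    gaps = outputs_missing_token(sys.argv[1] if len(sys.argv) > 1 else "outputs")
    if gaps:
        sys.exit(f"Version token missing from: {', '.join(gaps)}")
```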
