Why Audit Trails Can't Prove AI Decisions—How Michelangelo Can
For: Quality systems professionals, compliance teams, regulatory affairs
The Problem with Traditional Audit Trails
In pharmaceutical manufacturing and other regulated industries, audit trails are the foundation of compliance. They record who did what and when, and they provide the evidence chain that regulators inspect. Quality Management Systems (QMS), Laboratory Information Management Systems (LIMS), and validation platforms all generate audit trails.
But audit trails have a fundamental limitation: they log what happened after it happened. They don't enforce what's allowed before it happens.
This worked fine when systems were deterministic and manual processes were tightly controlled. But AI-assisted workflows break this model.
Where Audit Trails Fail for AI Decisions
1. Cross-system gaps: AI tools often pull data from multiple systems (LIMS, QMS, ERP, external databases). Each system has its own audit trail, but there's no unified evidence chain showing the AI used only approved data sources.
2. Ungoverned transformations: When a quality engineer copies data from one system, feeds it to an AI model, manually curates the output, and pastes it into a deviation report, none of those transformations is captured in a traditional audit trail.
3. Post-hoc logging: The audit trail records that "User X approved deviation Y at timestamp Z." But it doesn't prove that the AI analysis behind that approval followed validated constraints. An auditor has to trust the narrative, not verify the constraints (a generic example of such an entry follows this list).
4. No replay capability: If a regulator questions a decision, you can show them logs, but you can't independently replay the decision process to prove the same constraints were enforced.
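To make the gap concrete, here is what a conventional post-hoc audit trail entry typically contains. The record below is a generic illustration written for this piece; the field names and the deviation identifier are invented, not drawn from any specific QMS.

```python
# A typical post-hoc audit trail entry: it records that an action occurred,
# but nothing in it shows which data sources the AI consulted, which
# transformations were applied, or whether any constraint was enforced.
audit_entry = {
    "event": "DEVIATION_APPROVED",
    "record_id": "DEV-2024-0137",   # invented example identifier
    "user": "user_x",
    "timestamp": "2024-06-12T14:03:22Z",
    "comment": "Approved based on AI-assisted impact assessment",
}

# What an inspector would need but cannot get from this record alone:
# the approved-source list, the constraint set in force, the enforcement
# outcome, and enough captured input to replay the decision independently.
```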
Why This Creates Compliance Risk
AI introduces non-deterministic behavior. The same input might produce slightly different outputs on different runs. AI models can hallucinate plausible-sounding facts. Prompt injection attacks can manipulate model behavior. Data leakage can expose sensitive information.
None of these risks are addressed by traditional audit trails. You can log that an AI tool was used, but you can't prove it stayed within approved boundaries.
This creates three types of inspection risk:
- Evidence insufficiency: The audit trail doesn't contain enough information to prove compliance. The inspector has to trust your explanation.
- Constraint verification failure: Even if you have documentation saying "AI tool X must only use data source Y," the audit trail doesn't prove that constraint was actually enforced during the decision.
- Retrospective reconstruction problems: If an issue surfaces months later, you can't reliably reconstruct what the AI actually did because the evidence chain has gaps.
How Michelangelo Closes the Gap
Michelangelo doesn't replace audit trails. It provides a different kind of evidence: deterministic constraint enforcement.
Instead of logging what happened, Michelangelo enforces what's allowed before the AI output can be used.
The Michelangelo Approach
1. Define admissibility constraints upfront: Before any AI tool is deployed, quality defines machine-readable constraints: approved data sources, allowed transformations, output format requirements, decision thresholds, and human approval gates. (A minimal code sketch of these constraints, the enforcement gate, the evidence pack, and replay follows this list.)
2. Enforce constraints at runtime: When an AI tool generates an output, Michelangelo intercepts it before the user can act on it. The enforcement gate checks: Did it use only approved inputs? Did it follow approved methods? Is the output within approved parameters? If any constraint is violated, the output is blocked.
3. Generate evidence artifacts: Every enforcement decision produces a structured evidence pack: inputs received, constraints applied, enforcement outcome, timestamps, identities. This artifact is machine-readable and tamper-evident.
4. Enable independent verification: An auditor can take the evidence pack and independently replay the enforcement process. They don't have to trust your narrative—they can verify the constraints were actually enforced.
5. Integrate with existing systems: Evidence packs flow into your QMS alongside traditional audit trail entries. The QMS records who approved the decision; Michelangelo proves the decision followed approved constraints.
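To make steps 1 through 4 concrete, here is a minimal sketch of the pattern in Python. Everything in it is hypothetical: the class and function names (Constraints, enforce, build_evidence_pack, replay), the constraint fields, and the evidence pack layout are invented for illustration and are not Michelangelo's actual API or schema.

```python
"""Illustrative sketch only. Names and fields below are hypothetical and are
not Michelangelo's actual API; they show the general shape of deterministic
constraint enforcement, evidence generation, and independent replay."""
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Constraints:
    """Machine-readable admissibility constraints, defined by quality upfront."""
    approved_sources: frozenset   # e.g. frozenset({"LIMS", "QMS"})
    max_result_value: float       # example decision threshold
    require_human_approval: bool


def enforce(ai_output: dict, constraints: Constraints) -> tuple[bool, list]:
    """Deterministic gate: check an AI output against constraints before use."""
    violations = []
    for source in ai_output["data_sources"]:
        if source not in constraints.approved_sources:
            violations.append(f"unapproved data source: {source}")
    if ai_output["result_value"] > constraints.max_result_value:
        violations.append("result exceeds approved threshold")
    if constraints.require_human_approval and not ai_output.get("approver_id"):
        violations.append("missing human approval")
    return len(violations) == 0, violations


def build_evidence_pack(ai_output: dict, constraints: Constraints) -> dict:
    """Run the gate and record inputs, constraints, and outcome as one
    structured, tamper-evident artifact."""
    allowed, violations = enforce(ai_output, constraints)
    pack = {
        "inputs": ai_output,
        "constraints": {
            "approved_sources": sorted(constraints.approved_sources),
            "max_result_value": constraints.max_result_value,
            "require_human_approval": constraints.require_human_approval,
        },
        "outcome": "ALLOWED" if allowed else "BLOCKED",
        "violations": violations,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Hashing the canonical JSON makes later tampering detectable.
    canonical = json.dumps(pack, sort_keys=True).encode()
    pack["sha256"] = hashlib.sha256(canonical).hexdigest()
    return pack


def replay(pack: dict) -> bool:
    """Independent verification: confirm the pack is unaltered, then re-run
    enforcement from the recorded inputs and compare outcomes."""
    body = {k: v for k, v in pack.items() if k != "sha256"}
    canonical = json.dumps(body, sort_keys=True).encode()
    if hashlib.sha256(canonical).hexdigest() != pack["sha256"]:
        return False  # evidence pack was altered after the fact
    constraints = Constraints(
        approved_sources=frozenset(body["constraints"]["approved_sources"]),
        max_result_value=body["constraints"]["max_result_value"],
        require_human_approval=body["constraints"]["require_human_approval"],
    )
    allowed, _ = enforce(body["inputs"], constraints)
    return ("ALLOWED" if allowed else "BLOCKED") == body["outcome"]
```

The property that matters is in the last function: replay needs only the evidence pack and the published constraint logic. An auditor can re-derive the outcome and check the hash without access to the original runtime and without taking the vendor's word for it.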
What This Means for Quality Teams
Risk-based audit trail review becomes manageable: Instead of reviewing thousands of log entries, you review decision-level evidence packs that prove constraint enforcement.
Inspection readiness improves: Evidence packs are structured, verifiable, and inspection-ready by construction. You're not explaining what you think happened—you're showing proof of governance.
Computer system validation simplifies: Michelangelo provides deterministic controls that traditional validation approaches can verify. You're not validating "the AI"—you're validating the governance layer.
AI adoption becomes defensible: Quality teams can confidently deploy AI-assisted workflows knowing the evidence will survive inspection scrutiny.
Competitive Landscape
Many vendors sell tools with audit trails. Some provide AI governance platforms with model monitoring and factsheets. But none provide deterministic runtime enforcement of admissibility constraints.
- QMS/eDMS platforms (Veeva, MasterControl, TrackWise): Provide audit trails within their systems but don't govern AI tools or cross-system workflows.
- Validation tools (ValGenesis, Kneat): Manage validation documentation but assume deterministic systems, not AI.
- AI governance platforms (IBM watsonx.governance, Arize): Provide model monitoring and audit logs but don't enforce constraints at runtime or produce inspection-ready evidence packs.
Michelangelo closes the gap: a deterministic gate that prevents non-compliant AI outputs from being used, produces decision-level evidence that survives toolchain handoffs, and enables replayable verification without vendor trust dependencies.
Contact
For technical discussions, validation planning, or integration scoping:
Phil Cheevers
Pink House Technology
905-321-2291 • 242-809-1832
info@pinkhouse.tech