Beyond Audit Trails
Why traditional compliance tools miss the evidence gap — and how runtime governance fixes it
What this document is
This is a pre-NDA briefing for consulting partners, integrators, and legal counsel. It explains:
- What FDA and EU regulations require for audit trails
- Where evidence breaks in real-world workflows (especially with AI)
- How Michelangelo's approach differs from traditional audit trail strategies
- Commercial models for integration and licensing
Part 1: What Regulations Actually Require
FDA 21 CFR Part 11 (Electronic Records)
Key requirement: Electronic records must be accurate, reliable, and capable of being reviewed. Systems must generate complete audit trails that record who did what, when, and why.
What this means: Every change to a record must be logged with timestamp, user identity, and reason. The audit trail itself must be tamper-evident and retained for the life of the record.
EU GMP Annex 11 (Computerised Systems)
Key requirement: Critical data must be identifiable, secure, and covered by a full audit trail. Changes to data or software must be documented and authorized.
What this means: Similar to Part 11 but with explicit emphasis on data integrity throughout the entire lifecycle. The audit trail must prove that data hasn't been manipulated.
What auditors look for
When regulators inspect audit trails, they're checking:
- Completeness: Are all changes recorded? Are there gaps?
- Authenticity: Can records be trusted? Are they tamper-evident?
- Traceability: Can you follow the chain of custody for critical decisions?
- Review capability: Can the trail be searched, filtered, and meaningfully reviewed?
Part 2: Where Evidence Breaks in Real Workflows
The fundamental problem
Audit trails log what happened inside a single system. But real work crosses multiple systems and tools, and the evidence breaks at the seams between them.
Example: A quality engineer investigating a deviation pulls data from LIMS, reviews batch records in the ERP, consults historical deviations in the QMS, uses an AI tool to analyze trends, manually curates the AI output, and copies the conclusion into the deviation report.
Each system has its own audit trail. But nothing proves:
- The AI tool used only approved data sources
- The engineer didn't manually edit the AI output in unapproved ways
- The copy/paste step preserved data integrity
- The final conclusion followed validated decision criteria
Five common evidence gaps
1. Multi-system handoffs
Data moves between QMS, LIMS, ERP, eDMS, and AI tools. Each system keeps its own audit trail, but no unified chain proves end-to-end integrity.
2. Ungoverned transformations
Copy/paste, manual curation, spreadsheet analysis, LLM prompts. These steps are invisible to traditional audit trails.
3. Post-hoc logging
Audit trails record that something happened, but they don't enforce constraints before it happens. You can log a violation, but you can't prevent it.
4. AI non-determinism
The same input may produce different outputs. Models hallucinate. Prompts can be manipulated. Traditional validation assumes deterministic systems.
5. No replay capability
If a decision is questioned months later, you can show logs, but you can't independently replay the decision process to prove constraints were enforced.
Part 3: How Michelangelo Closes the Gap
The fundamental difference
Traditional approach: Log what happened after it happens
Michelangelo approach: Enforce what's allowed before an output can be used
How it works
Step 1: Define admissibility constraints upfront
Before any AI tool is deployed, quality defines machine-readable constraints: approved data sources, allowed transformations, output requirements, decision thresholds, human approval gates.
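To make this concrete, here is a minimal sketch of what a machine-readable constraint set might look like. All field names and values are hypothetical illustrations; Michelangelo's actual schema is disclosed only under NDA.

```python
# Hypothetical constraint set for an AI-assisted deviation trend analysis.
# Every field name and value below is illustrative, not Michelangelo's format.
DEVIATION_TREND_CONSTRAINTS = {
    "approved_sources": ["LIMS", "QMS"],          # where the AI tool may read data
    "allowed_transforms": ["trend_analysis"],     # approved analysis methods
    "output_requirements": ["source_citations"],  # what a compliant output must carry
    "decision_threshold": 0.95,                   # minimum confidence to proceed
    "human_approval_gate": True,                  # a person signs off before release
}
```

Quality defines a set like this once, before deployment; the runtime gate then evaluates every output against it.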
Step 2: Runtime enforcement gate
When an AI tool generates an output, Michelangelo intercepts the output before the user can act on it. The gate checks: Did it use approved inputs? Did it follow approved methods? Is the output compliant? If any constraint is violated, the output is blocked.
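The gate logic can be sketched as a simple admissibility check. This is an illustrative sketch under assumed names (`APPROVED_SOURCES`, `enforce`, and so on), not Michelangelo's actual API.

```python
# Hypothetical enforcement gate -- names and logic are illustrative only.
APPROVED_SOURCES = {"LIMS", "QMS"}        # data sources quality has approved
ALLOWED_TRANSFORMS = {"trend_analysis"}   # analysis methods quality has approved

def enforce(sources_used, transforms_used):
    """Gate an AI output before the user can act on it.

    Returns (allowed, violations); the output is blocked if any constraint fails.
    """
    violations = []
    for source in sources_used:
        if source not in APPROVED_SOURCES:
            violations.append(f"unapproved data source: {source}")
    for transform in transforms_used:
        if transform not in ALLOWED_TRANSFORMS:
            violations.append(f"unapproved transformation: {transform}")
    return (not violations, violations)
```

An output drawn from LIMS via trend analysis passes; one drawn from an unapproved spreadsheet is blocked with a named violation the evidence pack can record.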
Step 3: Evidence artifact generation
Every enforcement decision produces a structured evidence pack: inputs received, constraints applied, enforcement outcome, timestamps, identities. This artifact is machine-readable and tamper-evident.
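One way to make such an artifact tamper-evident is to seal it with a digest over a canonical serialization. The sketch below shows the idea; the field names and the use of SHA-256 over sorted JSON are assumptions for illustration, not Michelangelo's actual mechanism.

```python
import hashlib
import json

# Hypothetical evidence pack builder -- schema and sealing method are
# illustrative, not Michelangelo's actual implementation.
def make_evidence_pack(inputs, constraint_set_id, outcome, actor, timestamp):
    record = {
        "inputs": inputs,                      # inputs the gate received
        "constraints_applied": constraint_set_id,
        "enforcement_outcome": outcome,        # e.g. "allowed" or "blocked"
        "actor": actor,
        "timestamp": timestamp,
    }
    # Tamper evidence: a digest over the canonical (sorted-key) JSON encoding.
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "digest": hashlib.sha256(payload).hexdigest()}
```

Because the digest covers the canonical encoding, any later change to any field invalidates it.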
Step 4: Independent verification
An auditor can take the evidence pack and independently replay the enforcement process. They don't have to trust narratives—they can verify constraints were actually enforced.
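If the pack is sealed with a digest, as in the assumed scheme above, verification is mechanical: recompute the digest from the pack's contents and compare. Again, this is a hedged sketch of the principle, not Michelangelo's actual verification routine.

```python
import hashlib
import json

# Hypothetical verifier for a digest-sealed evidence pack (illustrative only).
def verify_evidence_pack(pack):
    """Recompute the digest from the pack's contents and compare to the stored one."""
    record = {k: v for k, v in pack.items() if k != "digest"}
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == pack["digest"]
```

The auditor needs no trust in the producing system: a matching digest proves the pack is exactly what was sealed at enforcement time, and any edited field makes verification fail.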
Step 5: Integration with existing systems
Evidence packs flow into your QMS alongside traditional audit trail entries. The QMS records approval decisions; Michelangelo proves decisions followed approved constraints.
What this means for compliance teams
- Risk-based review becomes manageable: Review decision-level evidence packs instead of system-level log files
- Inspection readiness improves: Evidence packs are structured, verifiable, inspection-ready by construction
- Computer system validation simplifies: Validate the governance layer, not "the AI"
- AI adoption becomes defensible: Deploy AI-assisted workflows knowing evidence will survive scrutiny
Part 4: Commercial Models
For system integrators and consulting firms
Lab evaluation license: Paid pilot scope (1-2 workflows, defined constraints, 90-day evaluation) with NDA-protected mechanics disclosure. Typical engagement: $50K-$150K depending on workflow complexity.
Production license: Annual site license with variable participation fees based on governed decision volume. Structured as governance infrastructure, not per-seat software.
OEM/integration license: For consulting firms or technology vendors to embed Michelangelo in their solutions. Repeatable consulting motion for AI evidence governance.
For pharmaceutical manufacturers
Direct licensing available for internal deployment. Typical path:
- Pre-NDA evaluation (this document)
- NDA execution and mechanics review
- Lab evaluation (1-2 workflows, 90 days)
- Production license negotiation
- Site deployment and validation support
Contact
For engagement discussions, NDA execution, or technical questions:
Phil Cheevers
Pink House Technology
905-321-2291 • 242-809-1832
info@pinkhouse.tech