AI Audit Trail
Track and review all AI decisions and actions in the platform.
The AI Audit Trail provides complete visibility into AI operations within PartnerAlly. It logs what the AI did, when, and why, which is essential for governance, compliance, and debugging.
What the Audit Trail Captures
Every AI action is logged:
| Action Type | What's Logged |
|---|---|
| Document Analysis | Document analyzed, gaps found, confidence scores |
| Gap Detection | Gap created, severity assigned, reasoning |
| Workflow Generation | Workflow created, tasks generated, source |
| Risk Prioritization | Priority calculated, factors used, result |
| Chat Responses | Question asked, response given, sources used |
Accessing the Audit Trail
1. Navigate to the AI Governance section.
2. Click "Audit Trail" to open the audit log viewer.
3. Browse or search to find specific entries.
Audit Log Fields
Each log entry contains:
| Field | Description |
|---|---|
| Timestamp | When the action occurred |
| Action Type | What type of AI operation |
| Input | What was provided to AI |
| Output | What AI produced |
| Model | Which AI model was used |
| Confidence | AI's confidence level |
| User | Who triggered the action |
| Context | Related items (document, gap, etc.) |
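For illustration, a single entry can be thought of as a record like the sketch below. The field names, values, and structure are assumptions made for this example, not the platform's actual schema.

```python
# Illustrative shape of one audit entry. All field names and values below are
# assumptions for this sketch; the platform's actual schema may differ.
example_entry = {
    "timestamp": "2024-05-14T09:32:11Z",                  # when the action occurred
    "action_type": "gap_detection",                       # type of AI operation
    "input": {"document_id": "doc-123"},                  # what was provided to the AI
    "output": {"gap_id": "gap-456", "severity": "high"},  # what the AI produced
    "model": "model-registry-entry-7",                    # which AI model was used
    "confidence": 0.82,                                   # AI's confidence level
    "user": "jane.doe@example.com",                       # who triggered the action
    "context": {"related_items": ["doc-123", "gap-456"]}, # related document, gap, etc.
}
```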
Filtering the Audit Trail
By Action Type
Filter to specific actions:
- Document Analysis
- Gap Detection
- Workflow Generation
- Risk Scoring
- Chat Assistant
By Date Range
Specify time period:
- Today
- This week
- This month
- Custom range
By User
See actions triggered by:
- Specific user
- System (automated)
- API
By Confidence Level
Filter by AI confidence:
- High (80% and above)
- Medium (50-79%)
- Low (below 50%)
Filtering by low confidence can help identify AI outputs that may need human review.
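If you export the audit trail (see Exporting Audit Data below), a short script can surface those low-confidence entries for review. The sketch below assumes a CSV export with `timestamp`, `action_type`, and `confidence` columns and uses a placeholder file name; adjust both to match your actual export.

```python
import csv

# Minimal sketch: list exported entries below a confidence threshold so they
# can be queued for human review. The file name and the "timestamp",
# "action_type", and "confidence" column names are assumptions.
LOW_CONFIDENCE_THRESHOLD = 0.5

with open("audit_trail_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        confidence = float(row["confidence"])
        if confidence < LOW_CONFIDENCE_THRESHOLD:
            print(f'{row["timestamp"]}  {row["action_type"]}  confidence={confidence:.2f}')
```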
Reading Audit Entries
Entry Details
Click any entry to see:
Input Section:
- What was provided to AI
- Context available
- Configuration at the time
Processing Section:
- Model used
- Processing time
- Resources consumed
Output Section:
- What AI returned
- Confidence score
- Reasoning (if available)
Outcome Section:
- What happened next
- Human actions taken
- Whether output was modified
Understanding AI Reasoning
Explainability
For key decisions, AI provides reasoning:
- Why a gap was identified
- How severity was determined
- What factors drove prioritization
- Why certain tasks were generated
Confidence Breakdown
Confidence scores may include:
- Contributing factors
- Uncertainty sources
- Supporting evidence
Tracking Human Overrides
When Humans Disagree
The audit trail captures:
- Original AI output
- Human modification
- Reason for override (if provided)
- Final outcome
Override Analysis
Review overrides to:
- Identify systematic AI issues
- Improve future recommendations
- Document human judgment
- Support audit discussions
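One way to support this review is to compute override rates per action type from an audit trail export. The sketch below assumes a CSV export with `action_type` and `was_overridden` columns; both names, like the file name, are placeholders rather than a documented schema.

```python
import csv
from collections import Counter

# Sketch: override rate per action type from a CSV export. The file name and
# the "action_type" / "was_overridden" columns are assumptions for this example.
totals, overrides = Counter(), Counter()

with open("audit_trail_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["action_type"]] += 1
        if row["was_overridden"].strip().lower() == "true":
            overrides[row["action_type"]] += 1

for action, count in totals.items():
    print(f"{action}: {overrides[action]}/{count} overridden ({overrides[action] / count:.0%})")
```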
Exporting Audit Data
Export Options
| Format | Use Case |
|---|---|
| CSV | Data analysis |
| JSON | Technical integration |
| PDF | Audit documentation |
Exporting
1. Apply filters to set the scope of the data to export.
2. Click "Export" to open the export dialog.
3. Choose a format: CSV, JSON, or PDF.
4. Download the generated file.
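As a usage sketch, a JSON export could be summarized to see how activity breaks down by action type and model. The example below assumes the export is a flat list of entry objects with `action_type` and `model` keys; if the actual structure differs, adjust the keys accordingly.

```python
import json
from collections import Counter

# Sketch: summarize a JSON export by action type and by model. The file name
# and the assumed structure (a list of objects with "action_type" and "model"
# keys) are placeholders for this example.
with open("audit_trail_export.json") as f:
    entries = json.load(f)

print("By action type:", dict(Counter(e["action_type"] for e in entries)))
print("By model:", dict(Counter(e["model"] for e in entries)))
```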
Audit Trail for Compliance
Demonstrating AI Oversight
The audit trail helps demonstrate:
- All AI actions are logged
- Human review is maintained
- Overrides are documented
- Decisions are traceable
Supporting Audits
Provide auditors with:
- Complete AI activity logs
- Decision rationale
- Override history
- Confidence distributions
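A confidence distribution can be produced directly from an export using the same High/Medium/Low bands described under Filtering the Audit Trail. As before, the `confidence` column name and the file name in this sketch are assumptions.

```python
import csv
from collections import Counter

# Sketch: bucket exported confidence scores into the High (>= 0.8),
# Medium (0.5-0.79), and Low (< 0.5) bands used by the confidence filter.
# The file name and "confidence" column name are assumptions.
def band(score: float) -> str:
    if score >= 0.8:
        return "High"
    if score >= 0.5:
        return "Medium"
    return "Low"

distribution = Counter()
with open("audit_trail_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        distribution[band(float(row["confidence"]))] += 1

print(dict(distribution))
```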
Regulatory Requirements
The audit trail may help satisfy:
- EU AI Act transparency requirements
- SOC 2 logging requirements
- Industry-specific AI governance
Retention and Storage
How Long Data Is Kept
Audit trail retention:
- Default: 3 years
- Configurable by organization
- Follows data retention policies
Data Security
Audit data is:
- Encrypted at rest
- Access controlled
- Immutable (cannot be modified)
- Backed up regularly
Audit trail entries cannot be deleted or modified. This ensures integrity for compliance purposes.
Monitoring and Alerts
Setting Up Alerts
Create alerts for:
- Low confidence decisions
- Unusual AI activity
- High volume of overrides
- Error conditions
Monitoring Dashboard
Track AI health via:
- Activity trends
- Error rates
- Confidence distributions
- Override patterns
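Alongside the built-in dashboard, you can run your own checks against an export. The sketch below flags a week in which low-confidence decisions exceed a chosen share of all activity; the thresholds, file name, and `timestamp`/`confidence` columns are all assumptions to adapt to your setup.

```python
import csv
from datetime import datetime, timedelta, timezone

# Sketch: offline check over a CSV export that flags a high share of
# low-confidence decisions in the last 7 days. The file name, the "timestamp"
# and "confidence" columns, ISO-8601 timestamps, and the thresholds are all
# assumptions for this example.
WINDOW = timedelta(days=7)
LOW_CONFIDENCE = 0.5
ALERT_SHARE = 0.2  # flag if more than 20% of recent decisions are low confidence

now = datetime.now(timezone.utc)
recent = low = 0

with open("audit_trail_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"].replace("Z", "+00:00"))
        if now - ts <= WINDOW:
            recent += 1
            if float(row["confidence"]) < LOW_CONFIDENCE:
                low += 1

if recent and low / recent > ALERT_SHARE:
    print(f"Alert: {low}/{recent} recent AI decisions were low confidence")
```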
Best Practices
Regular Review
- Review the audit trail weekly
- Look for patterns
- Identify improvement areas
- Document findings
Override Documentation
When overriding AI:
- Provide reason when prompted
- Be specific about disagreement
- Document your reasoning
- Enable continuous improvement
Audit Preparation
Before audits:
- Familiarize yourself with the audit trail
- Generate relevant reports
- Prepare explanations
- Identify human oversight examples
Common Questions
How far back does the trail go?
Depends on retention settings:
- Minimum 1 year required
- Default 3 years
- Maximum based on storage
Can I delete audit entries?
No. Audit entries are immutable to ensure compliance integrity.
Who can access the audit trail?
Access is role-based:
- Admins: Full access
- Compliance: View access
- Users: Own actions only
Is the audit trail included in data exports?
Yes. Organization-level data exports include the full audit trail; a user's personal data export includes only their own actions.
Next Steps
- Bias Assessments - Review fairness testing
- Model Registry - Understand AI models
- Oversight Settings - Configure controls