AI Analysis & Confidence
Understand how AI identifies compliance gaps, what confidence scores mean, and how to interpret AI reasoning.
PartnerAlly uses AI to automatically identify compliance gaps when you upload documents. This page explains how AI analysis works, what confidence scores mean, and how to interpret AI reasoning.
How AI Gap Detection Works
When you upload a document, the AI works through five steps (a simplified sketch follows the list):
1. Document Processing: The AI reads and understands the document content, structure, and context.
2. Framework Mapping: It compares the document against your enabled compliance framework requirements.
3. Gap Identification: It identifies where the document doesn't fully address required controls.
4. Confidence Scoring: Each identified gap receives a confidence score based on evidence quality.
5. Severity Assignment: Gaps are rated by severity based on control criticality and impact.
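To make the flow concrete, here is a minimal Python sketch of such a pipeline. Everything in it (the Gap dataclass, the keyword matching, the scoring formula) is an illustrative assumption, not PartnerAlly's actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    control_id: str     # framework control, e.g. "CC6.1"
    missing: list[str]  # required topics the document never mentions
    confidence: float   # 0.0-1.0, AI certainty that the gap is real
    severity: str       # "Low", "Medium", "High", or "Critical"

def analyze_document(text: str, controls: list[dict]) -> list[Gap]:
    """Toy pipeline: read the document, map each control's required
    topics onto it, and record a gap for whatever is not covered."""
    body = text.lower()                                    # 1. Document Processing
    gaps = []
    for control in controls:                               # 2. Framework Mapping
        missing = [t for t in control["topics"]            # 3. Gap Identification
                   if t.lower() not in body]
        if not missing:
            continue
        # 4. Confidence Scoring: more missing topics means clearer evidence
        confidence = min(1.0, 0.5 + 0.25 * len(missing))
        gaps.append(Gap(control["id"], missing,
                        confidence, control["severity"]))  # 5. Severity Assignment
    return gaps

# Example: a policy that covers provisioning but never mentions reviews
policy = "Access is provisioned by the IT team upon manager approval."
controls = [{"id": "CC6.1", "severity": "High",
             "topics": ["access review", "access revocation"]}]
print(analyze_document(policy, controls))
```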
Understanding Confidence Scores
The confidence score (0-100%) reflects how certain the AI is about a gap.
Score Ranges
| Score | Meaning | Recommended Action |
|---|---|---|
| 90-100% | Very high confidence | Likely valid, proceed with remediation |
| 75-89% | High confidence | Review briefly, usually accurate |
| 50-74% | Moderate confidence | Review carefully before acting |
| 25-49% | Low confidence | Manual verification recommended |
| Below 25% | Very low confidence | Significant review needed |
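As a rough illustration, the thresholds above can be written as a small helper. The function name and return strings are just examples mirroring the table, not part of any PartnerAlly API.

```python
def recommended_action(score: int) -> str:
    """Map a 0-100 confidence score to the guidance in the table above."""
    if score >= 90:
        return "Likely valid, proceed with remediation"
    if score >= 75:
        return "Review briefly, usually accurate"
    if score >= 50:
        return "Review carefully before acting"
    if score >= 25:
        return "Manual verification recommended"
    return "Significant review needed"

print(recommended_action(87))  # Review briefly, usually accurate
```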
What Affects Confidence
Higher confidence when:
- Document explicitly mentions the control topic
- Clear absence of required content
- Control requirements are well-defined
- Document quality is good (not scanned, properly formatted)
Lower confidence when:
- Document language is ambiguous
- Control requirements are complex
- Topics are discussed but coverage is unclear
- Document quality is poor
Confidence scores are about AI certainty, not gap severity. A low-confidence gap might still be critical if confirmed, and a high-confidence gap might be low severity.
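One way to make the distinction concrete is to treat severity and confidence as separate inputs when deciding what to review first. The weights and function below are purely illustrative assumptions, not how PartnerAlly ranks gaps.

```python
SEVERITY_WEIGHT = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def review_priority(severity: str, confidence: int) -> float:
    """Illustrative only: a Critical gap at 40% confidence can still
    outrank a Low-severity gap at 95% confidence."""
    return SEVERITY_WEIGHT[severity] * confidence / 100

print(review_priority("Critical", 40))  # 1.6
print(review_priority("Low", 95))       # 0.95
```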
Reading AI Reasoning
Each AI-identified gap includes reasoning that explains:
What the AI Looked For
The specific requirements the AI was checking:
- Control language from the framework
- Common implementation patterns
- Required documentation elements
What It Found
What the document actually contains:
- Relevant sections identified
- Topics covered
- Language analyzed
Why It's a Gap
The explanation for the identified gap:
- What's missing or incomplete
- How the document falls short
- Comparison to requirements
Example Reasoning
Control: SOC 2 CC6.1 - Access Control
Looking For: Documented access review procedures, periodic access reviews, access revocation process
Found: Document discusses access provisioning but does not mention periodic access reviews or review frequency
Gap Identified: No documented process for periodic access reviews. The policy addresses initial access grants but lacks procedures for ongoing review and verification.
Confidence: 87%
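The reasoning above can be pictured as a small structured record. The field names below simply mirror the labels in the example; they are not a documented PartnerAlly schema.

```python
from dataclasses import dataclass

@dataclass
class GapReasoning:
    control: str         # the control being checked
    looking_for: str     # requirements the AI searched for
    found: str           # what the document actually contains
    gap_identified: str  # why the document falls short
    confidence: int      # 0-100

example = GapReasoning(
    control="SOC 2 CC6.1 - Access Control",
    looking_for="Access review procedures, periodic reviews, revocation process",
    found="Access provisioning is covered; periodic reviews are not mentioned",
    gap_identified="No documented process for periodic access reviews",
    confidence=87,
)
print(example.gap_identified, f"({example.confidence}%)")
```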
Reviewing AI-Identified Gaps
Quick Review Process
1. Sort by Confidence: Start with the highest-confidence gaps, since they are the most likely to be valid (a scripted equivalent follows this list).
2. Read the Reasoning: Understand why the AI flagged this as a gap.
3. Check Source Excerpts: Review the document sections the AI analyzed.
4. Validate or Adjust: Confirm the gap, adjust severity, or mark it as resolved if invalid.
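If you ever handle a list of gaps in a script, step 1 boils down to a descending sort on confidence. The gap IDs, control IDs, and field names below are hypothetical examples, not PartnerAlly data.

```python
gaps = [
    {"id": "GAP-09", "control": "CC6.3", "confidence": 93},
    {"id": "GAP-12", "control": "CC6.1", "confidence": 87},
    {"id": "GAP-14", "control": "CC7.2", "confidence": 54},
]

# Review the highest-confidence gaps first: they are the most likely to be valid.
for gap in sorted(gaps, key=lambda g: g["confidence"], reverse=True):
    print(gap["id"], gap["control"], f"{gap['confidence']}%")
```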
When to Trust AI
Generally trust the AI when:
- Confidence is 75%+
- Reasoning clearly explains the issue
- Source excerpts support the conclusion
- You can verify the gap exists
When to Question AI
Review more carefully when:
- Confidence below 60%
- Reasoning seems generic
- Control is complex or nuanced
- Document is difficult to parse
Adjusting AI Results
If the Gap Is Valid
- Keep the gap as identified
- Optionally adjust severity if needed
- Proceed with remediation planning
- Add any additional context in comments
If the Gap Is Invalid
- Review the AI reasoning carefully
- Mark the gap as "Resolved"
- Add notes explaining why it's not a gap
- Example: "Control is addressed in separate document DOC-145"
If Severity Is Wrong
- Edit the gap
- Adjust severity to appropriate level
- Add notes explaining the adjustment
- Example: "Reduced to Medium - compensating controls exist"
Don't dismiss AI gaps without careful review. If you find yourself frequently overriding AI findings, contact support—it may indicate a configuration or document quality issue.
Source Document Excerpts
What Excerpts Show
AI highlights specific passages from your document:
- Sections analyzed for the control
- Relevant text that influenced the finding
- Context around identified issues
Using Excerpts
Click excerpt links to:
- Jump to the document location
- See the full surrounding context
- Verify the AI's interpretation
Excerpt Limitations
Excerpts may not show:
- Every relevant section (when the document is very large)
- Context from other documents
- Information in images or charts
Improving AI Accuracy
Better Documents
AI performs best with:
- Clear, well-structured documents
- Proper headings and sections
- Text-based formats (not scanned PDFs)
- Consistent terminology
Framework Configuration
Ensure your framework configuration is sound:
- Frameworks are enabled for analysis
- Controls are mapped correctly
- Custom controls are well-defined
Feedback Loop
Help improve AI over time:
- Provide feedback on incorrect gaps
- Document valid overrides
- Report patterns of errors to support
Multi-Document Analysis
AI considers relationships between documents:
| Pattern | Example |
|---|---|
| Cross-reference | Policy references a separate procedure |
| Layered coverage | Multiple policies address one control |
| Evidence types | Policy + implementation evidence |
Gaps from Combined Analysis
Sometimes a gap appears because:
- Individual documents don't cover the full requirement
- Documents conflict or are inconsistent
- Coverage is fragmented across sources
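A simple way to picture combined analysis: each document covers part of a control's requirements, and the remaining gap is whatever no document addresses. The topics and document names below are made up purely for illustration.

```python
# Topics the control requires, and what each uploaded document actually covers.
required = {"access review", "access revocation", "review frequency"}
coverage_by_document = {
    "Access Policy": {"access revocation"},
    "IT Procedures": {"access review"},
}

# Combine coverage across all documents, then see what is still missing.
covered = set().union(*coverage_by_document.values())
print("Gap from combined analysis:", required - covered)  # {'review frequency'}
```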
AI Limitations
Understand what AI can and cannot do:
What AI Does Well
- Find missing required content
- Identify incomplete coverage
- Compare against control requirements
- Process large document volumes
What AI May Miss
- Context only humans would know
- Implicit coverage through custom terms
- Verbal procedures not documented
- Industry-specific nuances
When Human Review Is Essential
- Low-confidence gaps
- Critical severity gaps
- Gaps in complex areas
- Audit-critical controls
Confidence Trends
Track confidence patterns:
High Average Confidence
If most gaps are 80%+:
- AI is performing well on your documents
- Document quality is likely good
- Framework mapping is accurate
Low Average Confidence
If many gaps are below 60%:
- Document quality may need improvement
- Consider re-uploading cleaner versions
- Review framework configuration
- Contact support if persistent
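If you want to track this yourself, the check is just an average over recent confidence scores. The thresholds in the sketch mirror the 80% and 60% figures above; the function itself is only an illustration, not a platform feature.

```python
def confidence_trend(scores: list[int]) -> str:
    """Classify the average confidence across recently identified gaps."""
    average = sum(scores) / len(scores)
    if average >= 80:
        return f"High average ({average:.0f}%): AI is performing well on your documents"
    if average < 60:
        return f"Low average ({average:.0f}%): check document quality and framework setup"
    return f"Moderate average ({average:.0f}%): review lower-confidence gaps manually"

print(confidence_trend([87, 92, 78, 84]))  # High average (85%): ...
```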
Common Questions
Can I turn off AI gap detection?
No—it's core to the platform. But you can:
- Review and override findings
- Adjust confidence thresholds for alerts
- Configure which frameworks are analyzed
Why do some documents produce many gaps?
Common reasons:
- Document is general and doesn't address specific controls
- Framework has many controls not relevant to the document
- Document quality affects parsing
- Legitimate gaps exist
How do I report an AI error?
- Note the gap ID and details
- Document what the AI got wrong
- Contact support with this information
- We use feedback to improve the model
Next Steps
- Uploading Documents - Add documents for analysis
- Gap Details - Review identified gaps
- Workflows - Create remediation plans