Confidence Scores
Understanding AI certainty levels and how to interpret confidence scores.
Every AI-generated finding in PartnerAlly includes a confidence score, which indicates how certain the AI is about its analysis. Understanding confidence helps you know when to trust a finding and when to dig deeper.
What Confidence Means
The Basics
Confidence scores answer: "How sure is the AI?"
- 100% would mean absolute certainty (never claimed)
- 0% would mean complete uncertainty (rarely occurs)
- Most scores fall between 50% and 95%
Score Ranges
| Range | Level | Meaning |
|---|---|---|
| 85-100% | High | Strong evidence, clear match |
| 70-84% | Medium-High | Good evidence, reasonable inference |
| 50-69% | Medium | Some evidence, interpretation needed |
| 30-49% | Low-Medium | Limited evidence, uncertain |
| Below 30% | Low | Weak evidence, needs review |
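These bands are simple thresholds. As a purely illustrative sketch (the function name and cutoffs mirror the table above; this is not a PartnerAlly API), a score could be bucketed like this:

```python
def confidence_level(score: float) -> str:
    """Map a 0-100 confidence score to the bands in the table above.

    Illustrative only -- not a PartnerAlly API.
    """
    if score >= 85:
        return "High"
    if score >= 70:
        return "Medium-High"
    if score >= 50:
        return "Medium"
    if score >= 30:
        return "Low-Medium"
    return "Low"

print(confidence_level(92))  # High
print(confidence_level(45))  # Low-Medium
```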
How Confidence is Calculated
Factors That Increase Confidence
AI is more confident when:
- The text explicitly states the control
- Multiple documents support the finding
- Standard terminology is used
- The language is clear and unambiguous
- The finding aligns directly with the framework
Factors That Decrease Confidence
AI is less confident when:
- Controls are implicit or merely implied
- The language is vague or unclear
- Only a single, weak source supports the finding
- Non-standard terminology is used
- Requirements are ambiguous
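PartnerAlly doesn't publish its scoring formula, so the sketch below is a toy model only: the weights are invented to show how factors like those listed above could push a score up or down.

```python
# Toy model only -- not PartnerAlly's actual formula. The weights are
# made up to illustrate how the factors above could move a score.
FACTOR_WEIGHTS = {
    "explicit_statement": +20,      # text explicitly states the control
    "multiple_sources": +10,        # several documents support the finding
    "standard_terminology": +7,
    "vague_language": -15,
    "single_weak_source": -10,
    "nonstandard_terminology": -7,
}

def toy_confidence(factors: list[str], baseline: float = 60.0) -> float:
    """Start from a neutral baseline and apply each observed factor."""
    score = baseline + sum(FACTOR_WEIGHTS.get(f, 0) for f in factors)
    return max(0.0, min(100.0, score))  # clamp to the 0-100 range

print(toy_confidence(["explicit_statement", "multiple_sources"]))  # 90.0
print(toy_confidence(["vague_language", "single_weak_source"]))    # 35.0
```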
Example Confidence Factors
High Confidence (92%):
- Document explicitly states "All users must use MFA"
- Multiple references to multi-factor authentication
- Clear implementation details provided
Low Confidence (45%):
- Document mentions "strong authentication"
- No specific MFA requirement
- Implementation unclear
Where You See Confidence
Gap Analysis
Each identified gap shows:
- Confidence percentage
- Visual indicator (high/medium/low)
- Factors affecting confidence
Document Analysis
After a document is analyzed, you see:
- Overall confidence for the document
- Per-control confidence scores
- Aggregate confidence across frameworks
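The exact roll-up method isn't documented here. As a hypothetical example (the control IDs and the plain average are both made up for illustration), per-control scores could be aggregated like this:

```python
# Hypothetical roll-up: a plain average of per-control scores.
# Control IDs are illustrative; PartnerAlly's actual aggregation may differ
# (e.g. weighting by control criticality).
per_control = {"AC-2": 92, "IA-2": 88, "SC-7": 61}

overall = sum(per_control.values()) / len(per_control)
print(f"Overall document confidence: {overall:.1f}%")  # 80.3%
```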
Chat Responses
AI chat indicates certainty:
- "I'm confident that..." (high)
- "Based on available documents..." (medium)
- "I'm uncertain, but..." (low)
Visual Indicators
Confidence Colors
| Color | Meaning |
|---|---|
| Green | High confidence (85%+) |
| Yellow/Amber | Medium confidence (50-84%) |
| Red/Orange | Low confidence (below 50%) |
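As a sketch of the color thresholds above (illustrative, not the product's rendering code):

```python
def confidence_color(score: float) -> str:
    """Map a score to the indicator colors above (illustrative only)."""
    if score >= 85:
        return "green"
    if score >= 50:
        return "amber"
    return "red"

print(confidence_color(72))  # amber
```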
Confidence Badges
Badges appear on:
- Gap cards
- Document analysis results
- Risk assessments
- AI-generated suggestions
Interpreting Confidence
High Confidence (85%+)
What it means:
- Strong textual evidence exists
- The AI is fairly certain of the finding
- Lower risk of false positive
What to do:
- Generally trust the finding
- Still worth a quick review
- Good basis for action
Medium Confidence (50-84%)
What it means:
- Evidence exists but isn't conclusive
- Interpretation was required
- Some uncertainty remains
What to do:
- Review the source material
- Verify AI interpretation
- Use your judgment
- Consider additional context
Low Confidence (below 50%)
What it means:
- Limited evidence found
- Significant inference required
- AI is uncertain
What to do:
- Definitely review manually
- Check if documents are complete
- Look for additional sources
- It may be a false positive
Low confidence findings are not necessarily wrong. They indicate uncertainty and need for human review.
Confidence in Context
Document Quality Impact
Document quality affects all confidence scores:
| Document Quality | Typical Confidence |
|---|---|
| Clear, complete policy | 80-95% |
| General documentation | 60-80% |
| Partial or draft | 40-65% |
| Vague or unclear | 25-50% |
Framework Complexity
Some frameworks are harder to assess:
- Simple, clear requirements → Higher confidence
- Complex, interpretive requirements → Lower confidence
Acting on Confidence
Workflow by Confidence
High Confidence First
Address high-confidence findings first when resources are limited.
Review Medium Confidence
Verify before acting on medium-confidence items.
Investigate Low Confidence
Research low-confidence findings to determine validity.
Document Decisions
Record why you accepted or rejected findings.
Filtering by Confidence
Use confidence filters:
- View only high-confidence gaps
- Focus review effort on low-confidence items
- Track confidence distribution
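A sketch of these filters over hypothetical finding records (the field names are illustrative, not the product's schema):

```python
from collections import Counter

# Hypothetical finding records -- field names are illustrative.
findings = [
    {"id": "G-101", "confidence": 91},
    {"id": "G-102", "confidence": 58},
    {"id": "G-103", "confidence": 37},
]

# View only high-confidence gaps (85%+).
high = [f for f in findings if f["confidence"] >= 85]

# Focus review on low-confidence items (below 50%).
needs_review = [f for f in findings if f["confidence"] < 50]

# Track the confidence distribution across bands.
distribution = Counter(
    "high" if f["confidence"] >= 85
    else "medium" if f["confidence"] >= 50
    else "low"
    for f in findings
)
print(high, needs_review, dict(distribution))
```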
Improving Confidence
Better Documents
Upload better documents to improve confidence:
- Complete policy content
- Clear language
- Standard terminology
- Good formatting
More Context
Provide additional context:
- Related policies
- Implementation evidence
- Supporting documents
Re-Analysis
After improving documents:
- Re-analyze the document
- See updated confidence
- Address remaining uncertainty
Confidence and Compliance
Audit Considerations
For audits, consider:
- High-confidence findings provide stronger evidence
- Low-confidence findings may need corroboration
- Document your validation process
- Show human review of AI findings
Regulatory Context
When reporting to regulators:
- Note that AI assisted the analysis
- Highlight human validation
- Be transparent about uncertainty
- Show review processes
False Positives and Negatives
False Positives
AI may incorrectly identify gaps. Even high-confidence findings can be wrong, typically because of:
- Context the AI doesn't have
- Unusual document structure
- Non-standard approaches
False Negatives
AI may miss real gaps when:
- The control isn't covered in the analyzed documents
- The control is implied but never stated explicitly
- Control relationships are complex
- Framework interpretations differ
Handling Errors
When AI is wrong:
- Mark the finding as a false positive or negative
- Add notes explaining why
- Provide feedback; it improves future analysis
- Adjust your processes accordingly
Confidence Over Time
Score Stability
Confidence scores may change over time due to:
- Document updates
- Additional context
- Model improvements
- Framework changes
Tracking Changes
Monitor confidence trends over time:
- Is overall confidence improving?
- Are specific areas consistently uncertain?
- Is document quality affecting scores?
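One way to watch these trends, sketched with made-up snapshot data:

```python
# Hypothetical snapshots of average confidence after successive re-analyses.
snapshots = [("2024-01", 62.0), ("2024-02", 68.5), ("2024-03", 74.0)]

# Compare each snapshot to the previous one to see the direction of travel.
for (prev_label, prev), (label, score) in zip(snapshots, snapshots[1:]):
    trend = "up" if score > prev else "down" if score < prev else "flat"
    print(f"{label}: {score:.1f}% ({trend} from {prev_label})")
```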
Best Practices
Don't Ignore Low Confidence
Low-confidence items may be:
- Real issues AI can't confirm
- Missing documentation
- Complex situations
- Worth investigating
Don't Over-Trust High Confidence
Even with high confidence:
- The AI could be wrong
- Context may be missing
- Human review is still valuable
- Critical findings should be verified
Use Confidence Strategically
Prioritize review effort:
- Spot-check high confidence
- Thoroughly review low confidence
- Track confidence trends
- Improve document quality
Common Questions
Can confidence reach 100%?
Rarely. AI maintains some uncertainty. Very high confidence (95%+) indicates strong evidence.
Why did confidence change?
Scores may change due to document updates, re-analysis, additional documents, or model improvements.
Is low confidence always a problem?
No. It indicates uncertainty, not necessarily error. It's a signal to investigate further.
Should I act on low-confidence findings?
Investigate first. They may be valid but need human verification.
How do I improve overall confidence?
Upload complete, clear documents with standard terminology and comprehensive coverage.
Next Steps
- Document Analysis - Improve document quality
- Human Review - Validating AI findings
- AI in PartnerAlly - Overview of AI features