
Confidence Scores

Understanding AI certainty levels and how to interpret confidence scores.

Every AI-generated finding in PartnerAlly includes a confidence score that indicates how certain the AI is about its analysis. Understanding confidence helps you know when to trust a finding as-is and when to dig deeper.

What Confidence Means

The Basics

Confidence scores answer: "How sure is the AI?"

  • 100% would mean absolute certainty (never claimed)
  • 0% would mean complete uncertainty (rarely occurs)
  • Most scores fall between 50% and 95%

Score Ranges

Range      | Level        | Meaning
85-100%    | High         | Strong evidence, clear match
70-84%     | Medium-High  | Good evidence, reasonable inference
50-69%     | Medium       | Some evidence, interpretation needed
30-49%     | Low-Medium   | Limited evidence, uncertain
Below 30%  | Low          | Weak evidence, needs review
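
These bands are a simple threshold lookup. The minimal Python sketch below mirrors the table's cutoffs; the function name is ours, and this is an illustration, not PartnerAlly's internal code.

```python
def confidence_level(score: float) -> str:
    """Map a confidence percentage (0-100) to the bands in the table above."""
    if score >= 85:
        return "High"
    if score >= 70:
        return "Medium-High"
    if score >= 50:
        return "Medium"
    if score >= 30:
        return "Low-Medium"
    return "Low"

print(confidence_level(92))  # High
print(confidence_level(45))  # Low-Medium
```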

How Confidence is Calculated

Factors That Increase Confidence

AI is more confident when:

  • The text explicitly states the control
  • Multiple documents support the finding
  • Standard terminology is used
  • The language is clear and unambiguous
  • The finding aligns directly with the framework

Factors That Decrease Confidence

AI is less confident when:

  • Controls are implicit or only implied
  • The language is vague or unclear
  • Only a single, weak source supports the finding
  • Terminology is non-standard
  • Requirements are ambiguous

Example Confidence Factors

High Confidence (92%):
- Document explicitly states "All users must use MFA"
- Multiple references to multi-factor authentication
- Clear implementation details provided

Low Confidence (45%):
- Document mentions "strong authentication"
- No specific MFA requirement
- Implementation unclear
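
One way to picture how these factors combine is an additive heuristic: start from a base score and add weight for each piece of supporting evidence found. The sketch below is purely illustrative; the factor names, weights, and base value are assumptions, not PartnerAlly's actual scoring model.

```python
# Illustrative only: these factors and weights are assumptions,
# not PartnerAlly's actual scoring model.
FACTOR_WEIGHTS = {
    "explicit_statement": 25,    # text explicitly states the control
    "multiple_sources": 15,      # several documents support the finding
    "standard_terminology": 10,  # recognized framework language
    "clear_language": 10,        # unambiguous wording
    "framework_alignment": 10,   # direct mapping to the requirement
}

def estimate_confidence(factors: set[str], base: int = 30) -> int:
    """Toy heuristic: start from a base score, add weight per factor present."""
    score = base + sum(FACTOR_WEIGHTS.get(f, 0) for f in factors)
    return min(score, 99)  # confidence never reaches 100%

# The 92% MFA example above would correspond to most factors being present:
print(estimate_confidence({"explicit_statement", "multiple_sources",
                           "standard_terminology", "clear_language"}))  # 90
```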

Where You See Confidence

Gap Analysis

Each identified gap shows:

  • Confidence percentage
  • Visual indicator (high/medium/low)
  • Factors affecting confidence

Document Analysis

After document analysis:

  • Overall confidence for the document
  • Per-control confidence scores
  • Aggregate across frameworks

Chat Responses

AI chat indicates certainty:

  • "I'm confident that..." (high)
  • "Based on available documents..." (medium)
  • "I'm uncertain, but..." (low)

Visual Indicators

Confidence Colors

Color         | Meaning
Green         | High confidence (85%+)
Yellow/Amber  | Medium confidence (50-84%)
Red/Orange    | Low confidence (below 50%)
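
The colors follow the same thresholds as the score bands described earlier. As a sketch (the function name is ours, not a PartnerAlly API):

```python
def badge_color(score: float) -> str:
    """Map a confidence percentage to the badge colors in the table above."""
    if score >= 85:
        return "green"   # rendered as green in the UI
    if score >= 50:
        return "yellow"  # rendered as yellow/amber in the UI
    return "red"         # rendered as red/orange in the UI
```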

Confidence Badges

Badges appear on:

  • Gap cards
  • Document analysis results
  • Risk assessments
  • AI-generated suggestions

Interpreting Confidence

High Confidence (85%+)

What it means:

  • Strong textual evidence exists
  • The AI is fairly certain of the finding
  • Lower risk of false positive

What to do:

  • Generally trust the finding
  • Still worth a quick review
  • Good basis for action

Medium Confidence (50-84%)

What it means:

  • Evidence exists but isn't conclusive
  • Interpretation was required
  • Some uncertainty remains

What to do:

  • Review the source material
  • Verify AI interpretation
  • Use your judgment
  • Consider additional context

Low Confidence (below 50%)

What it means:

  • Limited evidence found
  • Significant inference required
  • AI is uncertain

What to do:

  • Definitely review manually
  • Check if documents are complete
  • Look for additional sources
  • Treat it as a possible false positive

Low-confidence findings are not necessarily wrong. They indicate uncertainty and a need for human review.

Confidence in Context

Document Quality Impact

Document quality affects all confidence scores:

Document Quality        | Typical Confidence
Clear, complete policy  | 80-95%
General documentation   | 60-80%
Partial or draft        | 40-65%
Vague or unclear        | 25-50%

Framework Complexity

Some frameworks are harder to assess:

  • Simple, clear requirements → Higher confidence
  • Complex, interpretive requirements → Lower confidence

Acting on Confidence

Workflow by Confidence

High Confidence First

Address high-confidence findings first if resources are limited.

Review Medium Confidence

Verify before acting on medium-confidence items.

Investigate Low Confidence

Research low-confidence findings to determine validity.

Document Decisions

Record why you accepted or rejected findings.

Filtering by Confidence

Use confidence filters to:

  1. View only high-confidence gaps
  2. Focus manual review on low-confidence items
  3. Track the confidence distribution across findings
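
If you export findings, the same triage can be reproduced with a simple filter. Everything below (the field names and the sample data) is hypothetical, shown only to illustrate the thresholds.

```python
from dataclasses import dataclass

@dataclass
class GapFinding:
    control_id: str
    description: str
    confidence: float  # percentage, 0-100

# Hypothetical sample findings; in the product you would filter in the UI.
findings = [
    GapFinding("AC-2", "No MFA requirement documented", 92.0),
    GapFinding("IR-4", "Incident response plan incomplete", 61.0),
    GapFinding("CM-8", "Asset inventory not referenced", 38.0),
]

high = [f for f in findings if f.confidence >= 85]         # act on these first
needs_review = [f for f in findings if f.confidence < 50]  # investigate manually

for f in sorted(high, key=lambda f: f.confidence, reverse=True):
    print(f"{f.control_id}: {f.confidence:.0f}% - {f.description}")
```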

Improving Confidence

Better Documents

Upload better documents to improve confidence:

  • Complete policy content
  • Clear language
  • Standard terminology
  • Good formatting

More Context

Provide additional context:

  • Related policies
  • Implementation evidence
  • Supporting documents

Re-Analysis

After improving documents:

  1. Re-analyze the document
  2. See updated confidence
  3. Address remaining uncertainty

Confidence and Compliance

Audit Considerations

For audits, consider:

  • High-confidence findings are stronger evidence
  • Low-confidence findings may need corroboration
  • Document your validation process
  • Show human review of AI findings

Regulatory Context

When reporting to regulators:

  • Note that AI assisted the analysis
  • Highlight human validation
  • Be transparent about uncertainty
  • Show review processes

False Positives and Negatives

False Positives

AI may incorrectly identify gaps, and even high-confidence findings can be wrong. Common causes:

  • Context the AI doesn't have
  • Unusual document structure
  • Non-standard approaches

False Negatives

AI may miss real gaps when:

  • The requirement isn't covered in the analyzed documents
  • A control is implied but never stated explicitly
  • Controls relate to each other in complex ways
  • Frameworks can be interpreted differently

Handling Errors

When AI is wrong:

  1. Mark the finding as a false positive or false negative
  2. Add notes explaining why; this feedback improves future analysis
  3. Adjust your processes accordingly

Confidence Over Time

Score Stability

Confidence scores may change over time due to:

  • Document updates
  • Additional context
  • Model improvements
  • Framework changes

Tracking Changes

Monitor confidence trends:

  • Is overall confidence improving?
  • Which specific areas remain uncertain?
  • How does document quality affect scores?
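
If you keep exported scores from successive analyses, the trend is easy to compute. The snapshot format and numbers below are hypothetical, shown only to illustrate the idea.

```python
from statistics import mean

# Hypothetical per-analysis snapshots: {control_id: confidence}
snapshots = [
    {"AC-2": 45, "IR-4": 52, "CM-8": 38},  # first upload (draft policies)
    {"AC-2": 78, "IR-4": 60, "CM-8": 41},  # after adding the MFA policy
    {"AC-2": 91, "IR-4": 74, "CM-8": 66},  # after re-analysis with full docs
]

for i, snap in enumerate(snapshots, 1):
    print(f"analysis {i}: mean confidence {mean(snap.values()):.0f}%")
# analysis 1: mean confidence 45%
# analysis 2: mean confidence 60%
# analysis 3: mean confidence 77%
```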

Best Practices

Don't Ignore Low Confidence

Low-confidence items may be:

  • Real issues the AI can't confirm
  • A sign of missing documentation
  • Complex situations that need expert judgment

All of these are worth investigating.

Don't Over-Trust High Confidence

Even with high confidence:

  • The AI could be wrong
  • Context may be missing
  • Human review is still valuable
  • Critical findings should be verified

Use Confidence Strategically

Prioritize review effort:

  • Spot-check high confidence
  • Thoroughly review low confidence
  • Track confidence trends
  • Improve document quality

Common Questions

Can confidence reach 100%?

Rarely. The AI always retains some uncertainty. Very high confidence (95%+) indicates exceptionally strong evidence.

Why did confidence change?

Confidence may change due to document updates, re-analysis, additional documents, or model improvements.

Is low confidence always a problem?

No. It indicates uncertainty, not necessarily error. It's a signal to investigate further.

Should I act on low-confidence findings?

Investigate first. They may be valid but need human verification.

How do I improve overall confidence?

Upload complete, clear documents with standard terminology and comprehensive coverage.
