
Model Registry

Understand the AI models used in PartnerAlly.


The Model Registry documents the AI models used throughout PartnerAlly. This transparency helps you understand what powers AI features and supports governance requirements.

Why a Model Registry?

The registry helps you:

  • Understand AI capabilities and limitations
  • Meet regulatory disclosure requirements
  • Support audit and compliance discussions
  • Make informed decisions about AI reliance

The Model Registry is updated when models change. Significant changes are communicated in advance.

Registered Models

Document Analysis Model

Purpose: Analyzes policy documents against compliance frameworks

  • Name: PartnerAlly Document Analyzer
  • Type: Large Language Model (LLM)
  • Provider: Google Gemini / Groq (fallback)
  • Capabilities: Document parsing, control mapping, gap identification
  • Limitations: Image-based PDFs, very long documents
  • Last Updated: See platform release notes

What It Does:

  • Reads and understands document content
  • Compares against framework control requirements
  • Identifies coverage gaps
  • Assigns confidence scores
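To make the steps above concrete, here is a purely illustrative sketch of what one per-control result with a confidence score might look like. The field names, the covered/partial/gap statuses, and the review threshold are assumptions, not the product's actual schema.

```python
# Hypothetical shape of a single control-mapping result; all field
# names and values are illustrative, not PartnerAlly's real schema.
analysis_result = {
    "control": "AC-2 Account Management",
    "status": "partial",                     # covered | partial | gap
    "evidence": ["Section 4.2 of the Access Control Policy"],
    "confidence": 0.78,                      # model-assigned confidence
}

def needs_human_review(result, threshold=0.85):
    """Flag results that are low-confidence or not fully covered."""
    return result["confidence"] < threshold or result["status"] != "covered"
```

A simple threshold like this is one way to route low-confidence mappings to human review, consistent with the guidance elsewhere on this page to apply human judgment to AI outputs.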

Risk Prioritization Model

Purpose: Ranks risks by urgency and importance

  • Name: PartnerAlly Risk Prioritizer
  • Type: Scoring algorithm with ML components
  • Capabilities: Multi-factor risk scoring, priority ranking
  • Limitations: Based on available data; may miss external factors

What It Does:

  • Weighs multiple risk factors
  • Calculates priority scores
  • Ranks risks for attention
  • Considers organizational context
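A weighted sum is one common way to implement the "weigh multiple factors, then rank" steps above. The factor names and weights below are illustrative assumptions; PartnerAlly's actual scoring model is not documented here.

```python
# Illustrative multi-factor risk scoring; the factors and weights are
# assumptions, not the actual PartnerAlly algorithm.
RISK_WEIGHTS = {
    "likelihood": 0.35,
    "impact": 0.40,
    "exposure": 0.15,
    "detectability": 0.10,
}

def priority_score(risk):
    """Combine 0-10 factor ratings into one weighted priority score."""
    return sum(w * risk.get(factor, 0.0) for factor, w in RISK_WEIGHTS.items())

risks = [
    {"id": "R-1", "likelihood": 8, "impact": 9, "exposure": 4, "detectability": 2},
    {"id": "R-2", "likelihood": 3, "impact": 5, "exposure": 7, "detectability": 6},
]

# Rank risks from highest to lowest priority for attention
ranked = sorted(risks, key=priority_score, reverse=True)
```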

Workflow Generation Model

Purpose: Creates remediation workflows from gaps and risks

  • Name: PartnerAlly Workflow Generator
  • Type: LLM with structured output
  • Capabilities: Task sequencing, dependency identification
  • Limitations: Limited awareness of organizational context and custom processes

What It Does:

  • Analyzes gap or risk details
  • Generates appropriate task sequences
  • Suggests dependencies
  • Applies best practice patterns
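As a sketch of what "LLM with structured output" can mean in practice, a generated workflow might be returned as a task list with explicit dependencies. The schema, IDs, and task titles below are hypothetical, not PartnerAlly's actual output format.

```python
# Hypothetical structured output for a generated remediation workflow;
# the schema, IDs, and task titles are illustrative only.
workflow = {
    "source": {"type": "gap", "id": "GAP-42"},
    "tasks": [
        {"id": "T1", "title": "Draft updated access-control policy", "depends_on": []},
        {"id": "T2", "title": "Review policy with security team", "depends_on": ["T1"]},
        {"id": "T3", "title": "Publish policy and train staff", "depends_on": ["T2"]},
    ],
}

def execution_order(tasks):
    """Order task IDs so each task's dependencies come first.

    Assumes the dependency graph is acyclic, as a well-formed
    generated workflow should be.
    """
    ordered, done = [], set()
    while len(ordered) < len(tasks):
        for task in tasks:
            if task["id"] not in done and all(d in done for d in task["depends_on"]):
                ordered.append(task["id"])
                done.add(task["id"])
    return ordered
```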

Chat Assistant Model

Purpose: Answers compliance questions conversationally

  • Name: PartnerAlly Assistant
  • Type: Conversational LLM
  • Capabilities: Question answering, guidance, explanations
  • Limitations: Not a substitute for legal advice; subject to knowledge cutoffs

What It Does:

  • Understands natural language questions
  • Provides compliance guidance
  • Explains platform features
  • References relevant documentation

Model Information

For Each Model

The registry provides:

  • Model ID: Unique identifier
  • Version: Current version in use
  • Purpose: What the model does
  • Inputs: What data it receives
  • Outputs: What it produces
  • Limitations: Known constraints
  • Last Updated: When last modified
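For illustration, a registry entry with these fields could be modeled as a small record type. The shape follows the fields listed above, but the data format and the placeholder values are assumptions, not real registry data.

```python
# Sketch of a registry entry shaped after the fields listed above;
# the values are placeholders, not actual registry data.
from dataclasses import dataclass

@dataclass
class ModelRegistryEntry:
    model_id: str            # Unique identifier
    version: str             # Current version in use
    purpose: str             # What the model does
    inputs: list[str]        # What data it receives
    outputs: list[str]       # What it produces
    limitations: list[str]   # Known constraints
    last_updated: str        # When last modified

entry = ModelRegistryEntry(
    model_id="document-analyzer",
    version="see release notes",
    purpose="Analyzes policy documents against compliance frameworks",
    inputs=["policy documents", "framework control requirements"],
    outputs=["control mappings", "identified gaps", "confidence scores"],
    limitations=["image-based PDFs", "very long documents"],
    last_updated="see release notes",
)
```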

Technical Details

For technical users:

  • Model architecture type
  • Provider information
  • Performance characteristics
  • Integration approach

Model Lifecycle

How Models Are Updated

  • Development: New model versions are created
  • Testing: Rigorous internal testing
  • Validation: Bias and quality assessment
  • Staged Rollout: Gradual deployment
  • Monitoring: Ongoing performance tracking

Update Notification

You're notified when:

  • Significant model changes occur
  • Capabilities are added or removed
  • Known limitations change
  • Performance characteristics shift

Model Governance

Oversight

All models undergo:

  • Regular performance review
  • Bias assessment
  • Security evaluation
  • Compliance verification

Documentation

Each model has:

  • Technical specification
  • Risk assessment
  • Testing results
  • Deployment procedures

Change Control

Model updates follow:

  • Formal change process
  • Approval requirements
  • Rollback procedures
  • Communication protocols

AI models have inherent limitations. Always apply human judgment to AI outputs, especially for critical compliance decisions.

Using Model Information

For Compliance

Use registry information to:

  • Document AI use in your program
  • Support audit discussions
  • Meet disclosure requirements
  • Demonstrate oversight

For Decision-Making

Understand limitations to:

  • Know when to rely on AI
  • Identify when to add human review
  • Set appropriate expectations
  • Explain AI outputs to stakeholders

Model Limitations

Common Limitations

All AI models share some limitations:

  • Knowledge Cutoffs: May not know about recent events
  • Context Constraints: Limited context window
  • Confidence Calibration: May be over- or under-confident
  • Edge Cases: May struggle with unusual scenarios

Specific Limitations

Each model has specific limitations documented in the registry. Review before relying on AI for critical decisions.

Requesting Information

Additional Details

For more model information:

  1. Review the in-app registry
  2. Contact your account team
  3. Request technical documentation
  4. Schedule a technical discussion

For Audits

Auditors can request:

  • Model documentation packages
  • Testing results
  • Governance documentation
  • Technical specifications

Common Questions

Can I choose which model to use?

Model selection is managed by PartnerAlly. You can configure oversight and review settings for AI outputs.

How do I know when models change?

Model updates are communicated via:

  • Release notes
  • Email notifications (for significant changes)
  • In-app announcements

Are models trained on my data?

Your data is not used to train models without explicit consent. See our privacy policy for details.

What if a model doesn't work well for my use case?

Report issues:

  • Document the problem with concrete examples
  • Contact support with the details
  • PartnerAlly investigates and improves the model
  • Consider adjusting oversight and review settings in the meantime
