Model Registry
Understand the AI models used in PartnerAlly.
The Model Registry documents the AI models used throughout PartnerAlly. This transparency helps you understand what powers AI features and supports governance requirements.
Why a Model Registry?
The registry helps you:
- Understand AI capabilities and limitations
- Meet regulatory disclosure requirements
- Support audit and compliance discussions
- Make informed decisions about AI reliance
The Model Registry is updated when models change. Significant changes are communicated in advance.
Registered Models
Document Analysis Model
Purpose: Analyzes policy documents against compliance frameworks
| Attribute | Details |
|---|---|
| Name | PartnerAlly Document Analyzer |
| Type | Large Language Model (LLM) |
| Provider | Google Gemini / Groq (fallback) |
| Capabilities | Document parsing, control mapping, gap identification |
| Limitations | Cannot read image-based (scanned) PDFs; very long documents may exceed the context window |
| Last Updated | See platform release notes |
What It Does:
- Reads and understands document content
- Compares against framework control requirements
- Identifies coverage gaps
- Assigns confidence scores
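As an illustration of what this output looks like conceptually (this is a hypothetical sketch, not the platform's actual API — `ControlFinding`, the control IDs, and the confidence threshold are all invented for the example):

```python
from dataclasses import dataclass

@dataclass
class ControlFinding:
    """One framework control, as assessed by the analyzer."""
    control_id: str    # e.g. a control identifier such as "AC-2"
    covered: bool      # does the document address this control?
    confidence: float  # model confidence, 0.0 to 1.0

def identify_gaps(findings, min_confidence=0.7):
    """Return controls flagged as uncovered with at least the given confidence."""
    return [f.control_id
            for f in findings
            if not f.covered and f.confidence >= min_confidence]

findings = [
    ControlFinding("AC-2",  covered=True,  confidence=0.92),
    ControlFinding("AC-17", covered=False, confidence=0.81),
    ControlFinding("IR-4",  covered=False, confidence=0.55),  # low confidence: human review
]
print(identify_gaps(findings))  # ['AC-17']
```

Note how the confidence threshold acts as a guardrail: low-confidence findings are held back for human review rather than reported as gaps.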
Risk Prioritization Model
Purpose: Ranks risks by urgency and importance
| Attribute | Details |
|---|---|
| Name | PartnerAlly Risk Prioritizer |
| Type | Scoring algorithm with ML components |
| Capabilities | Multi-factor risk scoring, priority ranking |
| Limitations | Scores only data available to the platform; may miss external factors |
What It Does:
- Weighs multiple risk factors
- Calculates priority scores
- Ranks risks for attention
- Considers organizational context
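Multi-factor scoring of this kind can be sketched as a weighted sum over normalized factors. The factor names, weights, and risk entries below are purely illustrative, not the prioritizer's actual formula:

```python
def priority_score(factors, weights):
    """Weighted sum of normalized risk factors (each factor in [0, 1])."""
    return sum(weights[name] * value for name, value in factors.items())

weights = {"likelihood": 0.4, "impact": 0.4, "exposure": 0.2}  # illustrative weights

risks = {
    "unpatched-vpn": {"likelihood": 0.9, "impact": 0.8, "exposure": 0.7},
    "stale-policy":  {"likelihood": 0.3, "impact": 0.4, "exposure": 0.2},
}

# Rank risks by descending priority score
ranked = sorted(risks, key=lambda r: priority_score(risks[r], weights), reverse=True)
print(ranked)  # ['unpatched-vpn', 'stale-policy']
```

The ranking, not the raw score, is what surfaces in the UI: the highest-scoring risks appear first in the queue for attention.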
Workflow Generation Model
Purpose: Creates remediation workflows from gaps and risks
| Attribute | Details |
|---|---|
| Name | PartnerAlly Workflow Generator |
| Type | LLM with structured output |
| Capabilities | Task sequencing, dependency identification |
| Limitations | Limited awareness of organizational context; cannot model custom internal processes |
What It Does:
- Analyzes gap or risk details
- Generates appropriate task sequences
- Suggests dependencies
- Applies best practice patterns
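Task sequencing with dependencies amounts to a topological ordering of the generated tasks. A minimal sketch using Python's standard library (the workflow below is a made-up example, not generator output):

```python
from graphlib import TopologicalSorter

# Hypothetical generated workflow: task -> set of prerequisite tasks
workflow = {
    "draft-policy":   set(),
    "review-policy":  {"draft-policy"},
    "train-staff":    {"review-policy"},
    "verify-control": {"review-policy"},
}

# static_order() guarantees prerequisites always precede dependents
order = list(TopologicalSorter(workflow).static_order())
print(order)
```

A valid ordering here always starts with `draft-policy`, then `review-policy`; the two remaining tasks have no dependency on each other and can run in either order (or in parallel).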
Chat Assistant Model
Purpose: Answers compliance questions conversationally
| Attribute | Details |
|---|---|
| Name | PartnerAlly Assistant |
| Type | Conversational LLM |
| Capabilities | Question answering, guidance, explanations |
| Limitations | Not legal advice, knowledge cutoffs |
What It Does:
- Understands natural language questions
- Provides compliance guidance
- Explains platform features
- References relevant documentation
Model Information
For Each Model
The registry provides:
| Information | Description |
|---|---|
| Model ID | Unique identifier |
| Version | Current version in use |
| Purpose | What the model does |
| Inputs | What data it receives |
| Outputs | What it produces |
| Limitations | Known constraints |
| Last Updated | When last modified |
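The fields above map naturally onto a simple record type. This is a hypothetical sketch of such a record (field names mirror the table; the example values are invented):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModelRegistryEntry:
    """One registry entry; fields mirror the table above."""
    model_id: str           # unique identifier
    version: str            # current version in use
    purpose: str            # what the model does
    inputs: List[str]       # what data it receives
    outputs: List[str]      # what it produces
    limitations: List[str]  # known constraints
    last_updated: str       # ISO date of last modification

entry = ModelRegistryEntry(
    model_id="document-analyzer",
    version="2.1.0",
    purpose="Analyzes policy documents against compliance frameworks",
    inputs=["policy documents", "framework control requirements"],
    outputs=["control mappings", "gap list", "confidence scores"],
    limitations=["image-based PDFs", "very long documents"],
    last_updated="2024-01-15",
)
print(entry.model_id, entry.version)
```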
Technical Details
For technical users:
- Model architecture type
- Provider information
- Performance characteristics
- Integration approach
Model Lifecycle
How Models Are Updated
| Stage | Description |
|---|---|
| Development | New model versions created |
| Testing | Rigorous internal testing |
| Validation | Bias and quality assessment |
| Staged Rollout | Gradual deployment |
| Monitoring | Ongoing performance tracking |
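The stages above form a strictly ordered pipeline that each model version moves through in sequence. A minimal sketch (stage names come from the table; the promotion function is hypothetical):

```python
LIFECYCLE = ["development", "testing", "validation", "staged-rollout", "monitoring"]

def next_stage(current: str) -> str:
    """Advance a model version one step; monitoring is the ongoing terminal stage."""
    i = LIFECYCLE.index(current)  # raises ValueError for an unknown stage
    return LIFECYCLE[min(i + 1, len(LIFECYCLE) - 1)]

print(next_stage("validation"))  # 'staged-rollout'
```

The key property is that no stage can be skipped: a version cannot reach staged rollout without passing validation first.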
Update Notification
You're notified when:
- Significant model changes occur
- Capabilities are added or removed
- Known limitations change
- Performance characteristics shift
Model Governance
Oversight
All models undergo:
- Regular performance review
- Bias assessment
- Security evaluation
- Compliance verification
Documentation
Each model has:
- Technical specification
- Risk assessment
- Testing results
- Deployment procedures
Change Control
Model updates follow:
- Formal change process
- Approval requirements
- Rollback procedures
- Communication protocols
AI models have inherent limitations. Always apply human judgment to AI outputs, especially for critical compliance decisions.
Using Model Information
For Compliance
Use registry information to:
- Document AI use in your program
- Support audit discussions
- Meet disclosure requirements
- Demonstrate oversight
For Decision-Making
Understand limitations to:
- Know when to rely on AI
- Identify when to add human review
- Set appropriate expectations
- Explain AI outputs to stakeholders
Model Limitations
Common Limitations
All AI models share some limitations:
| Limitation | Description |
|---|---|
| Knowledge Cutoffs | May not know recent events |
| Context Constraints | Limited context window |
| Confidence Calibration | Reported confidence may be over- or under-stated |
| Edge Cases | May struggle with unusual scenarios |
Specific Limitations
Each model also has specific limitations documented in the registry. Review them before relying on AI outputs for critical decisions.
Requesting Information
Additional Details
For more model information:
- Review the in-app registry
- Contact your account team
- Request technical documentation
- Schedule a technical discussion
For Audits
Auditors can request:
- Model documentation packages
- Testing results
- Governance documentation
- Technical specifications
Common Questions
Can I choose which model to use?
Model selection is managed by PartnerAlly. You can configure oversight and review settings for AI outputs.
How do I know when models change?
Model updates are communicated via:
- Release notes
- Email notifications (for significant changes)
- In-app announcements
Are models trained on my data?
Your data is not used to train models without explicit consent. See our privacy policy for details.
What if a model doesn't work well for my use case?
Report the issue:
- Document the problem with specific examples
- Contact support with the details
- The PartnerAlly team investigates and improves the model
- In the meantime, consider adjusting your oversight settings
Next Steps
- Oversight Settings - Configure AI controls
- Audit Trail - View AI activity
- Bias Assessments - Review fairness testing