Model inventory
Use Model Inventory to catalog all AI and LLM models within your organization. Track model metadata, approval status, capabilities, security assessments, and associated risks throughout the complete model lifecycle.
Prerequisites
Permissions
Admin or Editor role: Create, edit, and delete model records
Viewer role: View model records only
Access
Navigate to Sidebar → Discovery → Model Inventory
Overview
Model Inventory manages AI/LLM models deployed or evaluated within your organization. Each record captures provider information, model capabilities, approval status, security assessments, and links to use cases and frameworks.
The feature organizes information across three tabs:
Models: Main inventory with approval tracking
Model risks: Risk assessment and mitigation tracking
MLFlow data: MLFlow tracking server integration
Key capabilities:
Register models with metadata: provider, version, capabilities, and security assessment status
Track approval through four states: Approved, Restricted, Pending, Blocked
Link models to organizational use cases and compliance frameworks
Filter by approval status and search across provider, model name, and version
Associate risks with models, track mitigation plans and target dates
Accessing Model Inventory
To access the Model Inventory page:
Open the sidebar navigation
Navigate to the Discovery section
Select Model Inventory
The page displays summary cards at the top showing model counts by status (Approved, Restricted, Pending, Blocked), followed by tab navigation, filter controls, and the models table.
Creating a new model record
To register a model:
Click Add new model in the upper right corner
Complete the required fields in the model form
Click Save to create the record
The form includes required and optional fields organized in sections.
Provider (required)
Enter the organization or service that provides the model.
Examples
"OpenAI"
"Anthropic"
"Google"
"Meta"
"Hugging Face"
Model (required)
Enter the model name. The field provides autocomplete suggestions from common models. Type any value if your model doesn't appear in the suggestions list.
Examples
"GPT-4"
"Claude 3.5 Sonnet"
"Gemini Pro"
"Llama 3"
"Custom Model Name"
Version (required)
Specify the model version number or identifier.
Examples
"4.0"
"1.5"
"2024-03-01"
"v3.5"
Approver (required)
Select the person responsible for approving this model's use from the user dropdown. The list populates from your organization's user directory.
Status (required)
Select the current approval status from four options.
Approved The model is cleared for use within your organization. Displays with a green badge in the table.
Restricted The model has usage restrictions or conditional approval. Displays with an orange badge in the table.
Pending The model awaits approval decision. Displays with a yellow badge in the table.
Blocked The model is not approved for organizational use. Displays with a red badge in the table.
Status date (required)
Enter the date when the current status was set. Defaults to today's date. Use the date picker to select a different date.
Capabilities
Select one or more capabilities that describe the model's functionality. Choose from predefined options in this multi-select field.
Available capabilities
Vision: Image understanding and analysis
Caching: Response caching support
Tools: Function calling and tool use
Code: Code generation and analysis
Multimodal: Multiple input/output types
Audio: Audio processing
Video: Video analysis
Text Generation: Generate text content
Translation: Language translation
Summarization: Text summarization
Question Answering: Answer questions from context
Sentiment Analysis: Analyze text sentiment
Named Entity Recognition: Identify entities in text
Image Classification: Classify images
Object Detection: Detect objects in images
Speech Recognition: Transcribe speech
Text-to-Speech: Generate speech from text
Recommendation: Provide recommendations
Anomaly Detection: Detect anomalies in data
Forecasting: Predict future values
Used in use cases
Select AI use cases or projects where this model is deployed. The list populates from your organization's project registry. Linking models to projects helps track dependencies and assess change impact.
Used in frameworks
Select compliance frameworks this model supports or relates to. Available options: ISO 42001 and ISO 27001. These associations connect models to your organization's compliance program.
Reference link
Enter a URL to external documentation, model cards, or reference information about the model.
Examples
"https://platform.openai.com/docs/models/gpt-4"
"https://www.anthropic.com/claude"
"https://huggingface.co/models"
Biases
Document known biases in the model's training data or outputs. Include demographic, cultural, linguistic, or domain-specific biases that affect model behavior.
Examples
"Training data predominantly English language, limited multilingual performance"
"Underrepresents non-Western cultural perspectives and contexts"
"May exhibit gender bias in occupation and role associations"
"Skewed toward North American geographic references"
Hosting provider
Enter where the model is hosted or deployed.
Examples
"OpenAI"
"AWS SageMaker"
"Azure ML"
"Internal infrastructure"
Limitations
Document technical and functional constraints including context length, language support, accuracy boundaries, or capability gaps.
Examples
"Maximum context window: 8,192 tokens"
"Limited mathematical reasoning capabilities"
"Training data cutoff: January 2023, no real-time information"
"English only, no multilingual support"
"Reduced accuracy on domain-specific medical terminology"
Security assessment
Toggle this switch on to indicate that a security assessment has been completed for this model. The toggle defaults to off.
After completing required fields, click Save. The system displays a success message and adds the new model to the table.
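For teams cataloging many models at once, the form fields above map naturally onto a simple record shape. The sketch below is illustrative only; the field names and types are assumptions for this documentation, not the product's actual schema or API.

```typescript
// Illustrative sketch only: field names and types are assumptions,
// not the product's actual schema or API.
type ApprovalStatus = "Approved" | "Restricted" | "Pending" | "Blocked";

interface ModelRecord {
  provider: string;              // e.g. "OpenAI" (required)
  model: string;                 // e.g. "GPT-4" (required)
  version: string;               // e.g. "4.0" (required)
  approver: string;              // person from the organization's user directory (required)
  status: ApprovalStatus;        // required
  statusDate: string;            // ISO date, defaults to today (required)
  capabilities?: string[];       // e.g. ["Vision", "Tools", "Code"]
  usedInUseCases?: string[];     // linked projects from the project registry
  usedInFrameworks?: ("ISO 42001" | "ISO 27001")[];
  referenceLink?: string;        // URL to model cards or documentation
  biases?: string;               // known biases in training data or outputs
  hostingProvider?: string;      // e.g. "AWS SageMaker"
  limitations?: string;          // e.g. context window, language support
  securityAssessment: boolean;   // defaults to false (toggle off)
}

// Example record awaiting approval
const example: ModelRecord = {
  provider: "Anthropic",
  model: "Claude 3.5 Sonnet",
  version: "2024-06-20",
  approver: "Jane Doe",
  status: "Pending",
  statusDate: "2024-07-01",
  capabilities: ["Text Generation", "Tools", "Vision"],
  securityAssessment: false,
};
```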
Viewing and editing model records
Two methods provide access to edit model records:
Quick edit: Click anywhere on a model row in the table
Action menu: Click the Edit icon (pencil) in the Actions column
The edit modal opens displaying all current model data in form fields. The modal header shows "Edit Model" with the model details. Modify fields as needed, then click Update model to save changes. The system displays a success confirmation and refreshes the table with updated information.
Permission requirement: Only Admin and Editor roles can edit model records. Viewers have read-only access. If you cannot see the edit icon or click table rows, verify your role permissions with your system administrator.
Deleting a model record
Deletion permanently removes model records. If a model has associated risks, you are prompted to choose between deleting the model only or deleting both the model and its risks.
To delete a model:
Locate the model in the table
Click the Delete icon (trash) in the Actions column
If associated risks exist, select a deletion scope:
Delete model only: Removes the model, preserves risk records
Delete model and risks: Removes the model and all associated risk records
Confirm deletion in the dialog
The system immediately removes the selected records.
Warning: Deletion is permanent and irreversible. Deleted records cannot be recovered; if a record is deleted by mistake, you must recreate it manually.
Permission requirement: The delete icon appears only for users with deletion permissions (Admin and Editor roles). If the icon is not visible, your role lacks deletion rights.
Filtering model records
The status filter narrows the table to show only models in a specific approval state.
To filter by status:
Click the status dropdown above the table
Select a status option:
All statuses: Shows all records (default)
Approved: Shows approved models
Restricted: Shows restricted models
Pending: Shows models awaiting approval
Blocked: Shows blocked models
The table updates immediately. The dropdown displays either "All statuses" or "Status: [selected status]" based on your selection.
Searching model records
Search helps you locate specific models across provider, model name, and version fields.
To search:
Click the search field above the table (placeholder text: "Search models...")
Type your search term
The search matches partial text in provider, model, and version fields. Results filter as you type. Search is case-insensitive.
Example: Typing "GPT" matches:
Provider: "OpenAI" with Model: "GPT-4"
Model: "GPT-3.5-turbo"
Any model or provider containing "GPT"
Combining search and filters: Use search together with the status filter. For example, search for "Claude" while filtering to "Approved" to find approved Claude models.
Clear search: Delete text from the search field to view all records again.
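As a mental model of the behavior described above, the sketch below combines case-insensitive partial matching across provider, model, and version with an optional status filter. The function name and row shape are hypothetical; this is not the product's implementation.

```typescript
// Hypothetical sketch of the documented filter behavior, not the product's code.
type ApprovalStatus = "Approved" | "Restricted" | "Pending" | "Blocked";

interface ModelRow {
  provider: string;
  model: string;
  version: string;
  status: ApprovalStatus;
}

function filterModels(rows: ModelRow[], search: string, status?: ApprovalStatus): ModelRow[] {
  const term = search.trim().toLowerCase();
  return rows.filter((row) => {
    // Status filter: "All statuses" is represented here by leaving `status` undefined.
    if (status && row.status !== status) return false;
    // Search: case-insensitive partial match on provider, model, and version only.
    if (!term) return true;
    return [row.provider, row.model, row.version]
      .some((field) => field.toLowerCase().includes(term));
  });
}

// Example: find approved Claude models.
// filterModels(rows, "claude", "Approved");
```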
Understanding summary cards
Four summary cards at the top of the Models tab display approval status distribution across your model inventory.
Approved (green card) Count of models cleared for organizational use.
Restricted (orange card) Count of models with usage restrictions or conditional approval.
Pending (yellow card) Count of models awaiting approval decision.
Blocked (red card) Count of models not approved for use.
The sum across all cards equals your total model inventory count.
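The cards behave like a simple count of rows per status. The sketch below is only an illustration of that relationship, using the same hypothetical status values as the earlier examples.

```typescript
// Hypothetical sketch: counting models per approval status for the summary cards.
type ApprovalStatus = "Approved" | "Restricted" | "Pending" | "Blocked";

function statusCounts(statuses: ApprovalStatus[]): Record<ApprovalStatus, number> {
  const counts: Record<ApprovalStatus, number> = {
    Approved: 0, Restricted: 0, Pending: 0, Blocked: 0,
  };
  for (const s of statuses) counts[s] += 1;
  return counts; // The four counts always sum to the total inventory size.
}
```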
Understanding the models table
The table displays model records with one row per model and eight columns of information.
Table columns
Provider The organization or service providing the model.
Model The model name or identifier.
Version The model version number or release identifier.
Approver The full name of the person who approved or is responsible for approving the model.
Security Assessment Whether a security assessment has been completed. Displays as "Yes" (green badge) or "No" (gray badge).
Status Current approval status displayed as a color-coded badge:
Approved: Green background
Restricted: Orange background
Pending: Yellow background
Blocked: Red background
Status Date The date when the current status was set, shown in YYYY-MM-DD format.
Actions Edit and delete buttons (pencil and trash icons). Visible only if you have appropriate permissions for these operations.
Text truncation
Column values longer than 24 characters are truncated for display. Hover over truncated text to see the full value in a tooltip.
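The display rule is simple: values longer than 24 characters are shortened in the cell, and the full value remains available in the hover tooltip. A minimal sketch of that rule follows; the exact ellipsis style is an assumption.

```typescript
// Hypothetical sketch of the 24-character display truncation;
// the real UI may render the shortened value differently.
const MAX_CELL_LENGTH = 24;

function truncateCell(value: string): string {
  return value.length > MAX_CELL_LENGTH
    ? value.slice(0, MAX_CELL_LENGTH) + "…" // full value shown in a tooltip on hover
    : value;
}
```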
Row interaction
Click anywhere on a table row to open that model record in edit mode. This provides quick access without clicking the edit icon.
Working with model risks
The Model risks tab tracks potential issues, impacts, and mitigation plans for models in your inventory.
Accessing model risks
Click the Model risks tab at the top of the page. The tab badge shows the count of tracked risks.
Creating a model risk
Prerequisites: At least one model must exist in your inventory. Risks require association with a model.
To create a risk:
Click the Model risks tab
Click Add model risk in the upper right corner
Complete the risk form fields
Click Save to create the risk record
Risk form fields
Risk name (required) Enter a descriptive name for the risk.
Risk category (required) Classify the risk type:
Performance: Accuracy, reliability, or output quality issues
Bias & Fairness: Discriminatory or unfair outcomes affecting specific groups
Security: Data breaches, unauthorized access, or privacy violations
Data Quality: Training data issues, input validation problems, or data integrity concerns
Compliance: Regulatory violations or policy non-compliance
Risk level (required) Assess severity based on potential impact:
Low: Minor impact with easy mitigation
Medium: Moderate impact requiring attention and resources
High: Significant impact needing prompt action and escalation
Critical: Severe impact demanding immediate executive-level action
Status (required) Track the risk state throughout its lifecycle (Open, In Progress, Mitigated, Closed).
Owner (required) Select the person responsible for managing this risk.
Target date (required) Set the date by which the risk should be addressed or mitigated.
Model (required) Select which model this risk applies to from your model inventory.
Description Provide details about the risk including what could go wrong and potential impacts.
Mitigation plan Document steps planned or taken to reduce or eliminate the risk.
Impact Describe the potential consequences if the risk materializes.
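Taken together, the risk form fields describe a record along the following lines. As with the model record sketch earlier, the names and types are assumptions for illustration, not the product's schema.

```typescript
// Illustrative sketch only; not the product's actual schema.
type RiskCategory = "Performance" | "Bias & Fairness" | "Security" | "Data Quality" | "Compliance";
type RiskLevel = "Low" | "Medium" | "High" | "Critical";
type RiskStatus = "Open" | "In Progress" | "Mitigated" | "Closed";

interface ModelRisk {
  riskName: string;        // required
  category: RiskCategory;  // required
  level: RiskLevel;        // required
  status: RiskStatus;      // required
  owner: string;           // required, person responsible for managing the risk
  targetDate: string;      // required, ISO date by which the risk should be addressed
  modelId: string;         // required, the inventory model this risk applies to
  description?: string;    // what could go wrong and potential impacts
  mitigationPlan?: string; // steps planned or taken to reduce the risk
  impact?: string;         // consequences if the risk materializes
}
```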
Filtering model risks
Three filters narrow the risk list by different criteria:
Category filter Select a risk type: Performance, Bias & Fairness, Security, Data Quality, or Compliance. Default shows all categories.
Risk level filter Select a severity level: Low, Medium, High, or Critical. Default shows all levels.
Status filter Control visibility of deleted risks:
Active only: Shows non-deleted risks (default)
Active + deleted: Shows all risks including deleted records
Deleted only: Shows only deleted risks
Filters work together. Apply multiple filters simultaneously to narrow results.
Editing and deleting risks
Click the edit icon (pencil) to modify risk details. Click the delete icon (trash) to remove risks. Deletion marks risks as deleted but preserves them in the database. View deleted risks by selecting "Active + deleted" or "Deleted only" in the status filter.
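Because deletion only marks a risk as deleted, the status filter effectively toggles visibility of that flag. A hypothetical sketch of the three filter modes (field names are assumptions):

```typescript
// Hypothetical sketch of the risk visibility filter; not the product's code.
type DeletionFilter = "active" | "active+deleted" | "deleted";

interface RiskRow { riskName: string; isDeleted: boolean; }

function visibleRisks(rows: RiskRow[], mode: DeletionFilter): RiskRow[] {
  switch (mode) {
    case "active":         return rows.filter((r) => !r.isDeleted); // default view
    case "deleted":        return rows.filter((r) => r.isDeleted);
    case "active+deleted": return rows;                             // soft-deleted records preserved
  }
}
```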
Working with MLFlow data
The MLFlow data tab displays model information from an integrated MLFlow tracking server if configured.
Accessing MLFlow data
Click the MLFlow data tab at the top of the page. The tab shows the count of MLFlow records in parentheses.
This tab displays read-only data from your MLFlow tracking server. The information updates automatically when you navigate to the tab.
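If you want to preview what the tracking server exposes independently of this tab, MLflow's public REST API can list registered models directly. The sketch below uses MLflow's registered-models search endpoint; the base URL is a placeholder for your own server, and this is not necessarily how the integration itself reads the data.

```typescript
// Hypothetical sketch: querying an MLflow tracking server's registered models
// via its public REST API. The base URL is a placeholder for your own server.
const MLFLOW_BASE_URL = "http://localhost:5000";

async function listRegisteredModels(maxResults = 100): Promise<unknown[]> {
  const url = `${MLFLOW_BASE_URL}/api/2.0/mlflow/registered-models/search?max_results=${maxResults}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`MLflow request failed: ${response.status}`);
  }
  const body = await response.json();
  return body.registered_models ?? [];
}
```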
Best practices
Follow these practices to maintain accurate and useful model records.
Model documentation
Use official names Record models with official provider names and versions. Avoid informal names, abbreviations, or internal nicknames that create confusion.
Document all capabilities Select all relevant capabilities, including obvious ones. Comprehensive capability lists help team members evaluate model fit for specific use cases.
Maintain reference links Include links to official documentation, model cards, or technical specifications. Direct links accelerate research and reduce support requests.
Update status promptly Change approval status immediately when decisions are made. Outdated status creates risk of inappropriate model use.
Approval management
Start with Pending Create new model records with Pending status until approval is confirmed. This prevents premature use of unapproved models.
Document status dates Update the status date whenever status changes. Accurate dates help track approval timelines and support audit requirements.
Assign appropriate approvers Select approvers with authority to make model usage decisions. Consistent approver assignment helps establish accountability.
Security assessment tracking
Complete assessments before approval Run security assessments during the approval process, not after. Enable the security assessment toggle only after completion.
Document findings Use the Biases and Limitations fields to record security assessment findings. This information helps users understand model risks.
Risk management
Link risks to models promptly Create risk records immediately when issues are identified. Early documentation enables proactive mitigation and prevents issues from escalating.
Assign risk owners Assign owners with authority and resources to address the risk. Ownership without authority leads to unresolved risks and accountability gaps.
Set realistic target dates Balance proper mitigation time with urgency. Target dates should be achievable but maintain pressure for action. Unrealistic dates damage credibility and reduce accountability.
Update mitigation plans Document progress as actions are taken. Current mitigation plans demonstrate active risk management and support status reporting.
Use case and framework linking
Link models to active use cases Connect models only to projects currently using them. Accurate links enable dependency tracking and impact analysis when models change or are deprecated.
Update links when projects change Remove links immediately when projects are decommissioned or switch models. Outdated links create false dependencies and mislead impact assessments.
Track framework compliance Link models to compliance frameworks they support. Framework associations provide evidence for regulatory audits and compliance reviews.
Troubleshooting
Common issues and their solutions.
Cannot create or edit model records
Cause: Insufficient permissions for the operation.
Solution: Verify your user role. Only Admin and Editor roles can create, edit, or delete model records. Viewer role has read-only access. Contact your system administrator if you need elevated permissions.
Cannot create model risk without models
Cause: Model risks require at least one model in the inventory to associate with.
Solution: Navigate to the Models tab and create at least one model record first. Then return to the Model risks tab to create risks.
Search not finding expected model
Cause: Search only matches provider, model name, and version fields, not other columns.
Solution: If you are looking for an approver name or other details that search does not cover, apply the status filter to narrow the table and scan the results manually.
Duplicate model entries
Cause: Creating new records for version updates instead of editing existing records.
Solution: Edit existing records and update the version field for minor version changes. Create separate records only for major model versions that represent different capabilities or architectures. Example: GPT-4 and GPT-3.5 are separate records, but a GPT-4 update from version 4.0 to 4.1 should be made by editing the existing GPT-4 record.
Autocomplete not showing model
Cause: Model list comes from a predefined set of common models. Custom or newer models may not appear.
Solution: The model field supports free text entry. Simply type the model name even if it doesn't appear in the autocomplete list. The field will accept any value you enter.
Cannot see summary cards
Cause: Summary cards only appear on the Models tab, not on the Model risks or MLFlow data tabs.
Solution: Click the "Models" tab to view the summary cards showing approval status distribution.
Tab navigation
Model Inventory uses three tabs to organize different types of information.
Switching between tabs
Click any tab name to switch views:
Models: Main model inventory
Model risks: Risk tracking and mitigation
MLFlow data: MLFlow integration data
The browser URL updates as you switch tabs, so you can bookmark or share links to specific tabs.
Tab badges
Each tab displays a badge showing the count of records in that view. The count updates automatically as you add, edit, or delete records.