
Conversation

@smccarthy-ie (Contributor) commented Nov 5, 2025

  • Add doc updates for model catalog performance benchmark data, search, and filters.
  • Restructure overview to put catalog before registry to reflect workflow better.

HTML rendered preview:

Summary by CodeRabbit

  • Documentation
    • Updated Model Catalog guides with clearer UI descriptions (categories, model names, descriptions, labels) and reordered content to prioritize catalog-first discovery.
    • Enhanced search and filtering for models by task, provider, license, and name/description/provider.
    • Added Performance Insights and benchmarking details for validated models, with filters (workload, latency, percentiles, RPS, hardware).
    • Clarified deployment flow, resource naming rules, and updated related resource links.

coderabbitai bot commented Nov 5, 2025

Walkthrough

Reordered and expanded four AsciiDoc modules to make the Model Catalog primary: added UI descriptions (categories, search, labeled filters), Performance Insights for validated models, updated discovery/registration/deployment procedures, clarified deployment naming constraints, and adjusted cross-references and conditional blocks.

Changes

  • Overview & conceptual doc (modules/overview-of-model-registries.adoc): Title and section order changed to present the Model Catalog before the Model Registry; new abstract; expanded Model Catalog content (providers, benchmarking, hardware evaluation, deployment readiness); recontextualized the Model Registry as a metadata/lifecycle store; updated cross-references and conditionals.
  • Deployment UI & flow (modules/deploying-a-model-from-the-model-catalog.adoc): Replaced dropdown-focused instructions with a Model Catalog UI overview (categories, model details, search, filters); expanded the deployment flow (view details → select project → choose deployment options); added a NOTE on Model deployment name vs. Resource name with naming rules; updated Additional resources links.
  • Registering from catalog (modules/registering-a-model-from-the-model-catalog.adoc): Removed explicit catalog-source selection instructions and the default-catalog note; added a Model Catalog overview, search by name/description/provider, and a filter menu description (Task, Provider, License).
  • Discovering & evaluating models (modules/viewing-models-in-the-catalog.adoc): Title and abstract updated to emphasize discovery and evaluation of gen AI models; prerequisites adjusted (conditional blocks); catalog navigation moved to a category menu (All / Org / Validated / Community); added labeled search and filters (Task, Provider, License); introduced a Performance Insights tab and filterable metrics for validated models; clarified load/verification behaviors.
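
These modules all describe the same search-and-filter behavior: free-text search over a model's name, description, and provider, plus labeled Task, Provider, and License filters. A rough sketch of that behavior follows; the field names and filter semantics here are illustrative assumptions based on the summaries above, not the catalog service's actual API:

```python
from dataclasses import dataclass

@dataclass
class CatalogModel:
    # Hypothetical catalog record; field names are illustrative only.
    name: str
    description: str
    provider: str
    task: str
    license: str

def search_and_filter(models, query="", task=None, provider=None, license=None):
    """Free-text search over name/description/provider, then apply labeled filters."""
    q = query.lower()
    results = []
    for m in models:
        # Free-text search: a match in any of the three searchable fields keeps the model.
        if q and q not in m.name.lower() and q not in m.description.lower() \
                and q not in m.provider.lower():
            continue
        # Labeled filters narrow the result set further.
        if task and m.task != task:
            continue
        if provider and m.provider != provider:
            continue
        if license and m.license != license:
            continue
        results.append(m)
    return results

# Hypothetical sample catalog entries.
catalog = [
    CatalogModel("granite-7b", "IBM Granite language model", "IBM", "text-generation", "apache-2.0"),
    CatalogModel("llama-3-8b", "Meta Llama chat model", "Meta", "text-generation", "llama-3"),
    CatalogModel("whisper-small", "Speech-to-text model", "OpenAI", "speech-recognition", "mit"),
]

hits = search_and_filter(catalog, query="model", task="text-generation", provider="IBM")
```

Combining a query with filters narrows results cumulatively, which matches the catalog-first discovery flow the modules describe.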

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant User
    participant UI as Model Catalog UI
    participant Catalog as Model Catalog Service
    participant Registry as Model Registry
    participant Deployer as Deployment Service

    Note over UI: Discover & evaluate models
    User->>UI: Open catalog, choose category (All / Org / Validated / Community)
    UI->>Catalog: Query models (search, filters, labels)
    Catalog-->>UI: Return model list (metadata, badges)

    alt View model details
      User->>UI: Open model details
      UI->>Catalog: Fetch metadata & benchmarks
      Catalog-->>UI: Return details + Performance Insights
      UI->>User: Display details & Performance Insights tab
    end

    alt Deploy model
      User->>UI: Click Deploy
      UI->>Registry: (optional) register or query model resource info
      Registry-->>UI: Return resource name constraints
      UI->>Deployer: Start deployment (project, deployment name, options)
      Deployer-->>UI: Return deployment status
      UI->>User: Show deployment result
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Check consistency of "Model Catalog first" terminology across modules.
  • Verify updated cross-reference targets and ifdef/ifndef conditional blocks.
  • Validate accuracy of Performance Insights metrics and filter descriptions.
  • Review the deployment naming NOTE and resource naming constraints for precision and alignment with platform rules.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Title check: Passed. The title 'odh-34800: doc updates for model catalog performance data' accurately reflects the main changes in the PR, which include documentation updates for model catalog performance benchmarking, search, and filter capabilities.
  • Docstring Coverage: Passed. No functions were found in the changed files, so the docstring coverage check was skipped.
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.

@smccarthy-ie smccarthy-ie marked this pull request as draft November 5, 2025 16:47
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 2

🧹 Nitpick comments (1)
modules/viewing-models-in-the-catalog.adoc (1)

38-56: Performance Insights section is comprehensive but complex.

The Performance Insights section (lines 38–56) provides detailed performance benchmark guidance for Red Hat AI validated models, covering workload types, latency metrics, percentiles, sliders, and hardware types. While thorough, the heavily nested bullet structure with four separate filter options and their sub-steps may be challenging to follow. Consider simplifying the organization or adding a brief introductory note to help users navigate the multiple filtering options.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e43d2f7 and 68650a5.

📒 Files selected for processing (4)
  • modules/deploying-a-model-from-the-model-catalog.adoc (2 hunks)
  • modules/overview-of-model-registries.adoc (1 hunks)
  • modules/registering-a-model-from-the-model-catalog.adoc (1 hunks)
  • modules/viewing-models-in-the-catalog.adoc (2 hunks)
🔇 Additional comments (5)
modules/deploying-a-model-from-the-model-catalog.adoc (2)

35-37: Aligned with catalog-first workflow.

The added Model Catalog overview and search/filter descriptions are consistent with the parallel guidance in other catalog modules and properly emphasize the UI-driven discovery approach.


46-57: Helpful clarification on deployment naming.

The NOTE block clearly distinguishes between the Model deployment name (inference service name) and Resource name (OpenShift label), with detailed constraints on resource naming conventions. This guidance should help users avoid configuration errors.
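
For context on why such a NOTE matters: OpenShift resource names commonly follow the Kubernetes RFC 1123 DNS label convention (lowercase alphanumerics and hyphens, starting and ending with an alphanumeric, at most 63 characters). A minimal validator sketch, assuming the module's documented constraints match that convention; the exact rules in the NOTE may differ:

```python
import re

# RFC 1123 DNS label: lowercase alphanumerics and '-', must start and end
# with an alphanumeric. Length is checked separately (max 63 characters).
RFC1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

def is_valid_resource_name(name: str) -> bool:
    """Return True if `name` is a valid RFC 1123 label of at most 63 characters."""
    return len(name) <= 63 and bool(RFC1123_LABEL.match(name))
```

A display name like "My Granite Deployment" would fail this check, which is exactly the mismatch the Model deployment name vs. Resource name NOTE helps users avoid.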

modules/overview-of-model-registries.adoc (2)

4-4: Well-executed restructuring for catalog-first narrative.

The reordering of sections to place the Model Catalog before the Model Registry, combined with the expanded abstract and updated cross-references, successfully establishes the catalog as the primary entry point for data scientists. The Model Registry is appropriately positioned as a supporting governance and metadata management component. This structure aligns well with the overall PR objective.

Also applies to: 13-13, 29-29


15-19: Catalog description effectively communicates breadth and quality.

Line 17 clearly explains the benchmarking value (third-party models benchmarked for performance/quality) and hardware configuration comparison capability. Line 19's list of provider examples (Red Hat, IBM, Meta, Nvidia, Mistral AI, Google) helps users understand the diversity of available models.

modules/registering-a-model-from-the-model-catalog.adoc (1)

15-17: Consistent with parallel guidance in other modules.

The Model Catalog overview and search/filter descriptions properly align with the catalog-first approach established across the documentation suite.

@smccarthy-ie smccarthy-ie marked this pull request as ready for review November 6, 2025 11:14
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (1)
modules/viewing-models-in-the-catalog.adoc (1)

52-52: Clarify available percentile options.

Line 52 references "a percentile value, for example, P90" but doesn't list which percentile values are available (e.g., Mean, P90, P95, P99). Without explicit options, users may be uncertain what values they can select.

Consider replacing this with a clearer list similar to the metrics options above:

-You can also select a percentile value, for example, `P90`.
+You can select a percentile value from the list: `Mean`, `P90`, `P95`, or `P99`.
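
As a worked illustration of those percentile options, a nearest-rank computation over sample latencies (the helper and sample data are hypothetical, not part of the product):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered) / 100)  # 1-based nearest rank
    return ordered[max(rank, 1) - 1]

# Hypothetical per-request latencies in milliseconds.
latencies_ms = [12, 15, 14, 40, 13, 16, 90, 14, 15, 13]

summary = {
    "Mean": sum(latencies_ms) / len(latencies_ms),
    "P90": percentile(latencies_ms, 90),
    "P95": percentile(latencies_ms, 95),
    "P99": percentile(latencies_ms, 99),
}
```

Note how the mean (24.2 ms) sits well below P95 and P99 (both 90 ms here), which is why exposing the percentile choices explicitly, as the suggested edit does, helps users pick the right tail metric.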
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fcdd183 and 80e6700.

📒 Files selected for processing (1)
  • modules/viewing-models-in-the-catalog.adoc (2 hunks)

rareddy commented Nov 7, 2025

Except for a couple of places that call out "data scientists" explicitly, since this also applies to "AI engineers" and platform engineers, the rest is LGTM.
