
feat(sdk): onboard nemo evaluator sdk #922

Closed
wprazuch wants to merge 1 commit into dev/0.3.0 from wprazuch/sdk-onboarding-approach1

Conversation

@wprazuch
Contributor

Copies the public nemo_evaluator_sdk tree from NVIDIA-NeMo/Platform into NEL as src/nemo_evaluator/sdk/, minus the execution/ orchestrator submodule (NEL owns the evaluation loop — see AGENTS.md design principle).

What's included:

  • metrics/ — BLEU, ROUGE, F1, ExactMatch, ToolCalling, LLMJudge, Remote, RAGAS (11 variants) + aggregation, base types, template rendering
  • values/ — metric config types, result types, score types, secrets
  • resilience/ — scheduler with retry/rate-limiting, policies, classifier
  • datasets/ — multi-format loader (JSON/JSONL/CSV/Parquet/Feather/Arrow)
  • inference.py, agent_inference.py — OpenAI-compatible async client with preprocessing/postprocessing hooks
  • templates.py — Jinja2 prompt rendering
  • structured_output.py — structured output mode detection/validation
  • enums.py, constants.py — type definitions
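The multi-format loader in datasets/ lends itself to simple extension-based dispatch. A minimal sketch of the idea, assuming pandas (the function and table names here are hypothetical, not the SDK's actual API):

```python
from pathlib import Path

import pandas as pd

# Hypothetical dispatch table; the real SDK loader's API may differ.
_READERS = {
    ".json": pd.read_json,
    ".jsonl": lambda p: pd.read_json(p, lines=True),
    ".csv": pd.read_csv,
    ".parquet": pd.read_parquet,
    ".feather": pd.read_feather,
    ".arrow": pd.read_feather,  # Arrow IPC files are readable via feather
}


def load_dataset(path: str) -> pd.DataFrame:
    """Load a dataset into a DataFrame based on its file extension."""
    suffix = Path(path).suffix.lower()
    try:
        reader = _READERS[suffix]
    except KeyError:
        raise ValueError(f"Unsupported dataset format: {suffix}")
    return reader(path)
```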

Changes from upstream:

  • Import rewrites: nemo_evaluator_sdk.* → nemo_evaluator.sdk.*
  • __init__.py: drops the execution.evaluator.Evaluator export
  • metrics/remote.py: inlines the _run_sync helper (previously lived in execution.metric_execution) to avoid pulling in the full orchestrator

Dependencies: added via new [sdk] optional extra (jsonschema, jsonpath-ng, pyarrow, pandas, openai, sacrebleu, rouge_score). Python floor bumped from 3.10 to 3.11 to match SDK constraint.
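In pyproject.toml terms, the new extra and the Python floor bump would look roughly like this (exact version pins in the PR may differ):

```toml
[project]
requires-python = ">=3.11"

[project.optional-dependencies]
sdk = [
    "jsonschema",
    "jsonpath-ng",
    "pyarrow",
    "pandas",
    "openai",
    "sacrebleu",
    "rouge_score",
]
```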

Tests: ported 10 test files under tests/test_sdk/ covering inference, agent_inference, llm_judge, exact_match, remote, values. Skipped tests that depend on the execution orchestrator (bleu/rouge/f1/number_check/string_check/tool_calling/api) and test_params.py (depends on the nmp internal module). All 148 ported tests pass.

Verified: pip install -e .[sdk] succeeds, imports resolve, tests pass. This is Approach 1 (verbatim copy). A follow-up PR will distribute the SDK into NEL's existing scoring/, metrics/, adapters/, engine/, environments/, and config/ modules per the RFC Phase 1.5 design.


Signed-off-by: Wojciech Prazuch <wprazuch@nvidia.com>

copy-pr-bot Bot commented Apr 22, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.



coderabbitai Bot commented Apr 22, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


@github-actions bot added the tests label Apr 22, 2026
@wprazuch changed the title from "feat(sdk): onboard nemo_evaluator_sdk from NVIDIA-NeMo/Platform" to "feat(sdk): onboard nemo evaluator sdk" Apr 22, 2026
@wprazuch wprazuch closed this Apr 22, 2026