
Conversation

@peri044 peri044 (Contributor) commented Oct 22, 2025

What does this PR do?

I was trying to implement full entropy by understanding ChunkedDistributedEntropy through a test case, and later noticed PR #1200 :) So this PR is just a test case; feel free to close it if it isn't needed. cc: @parthchadha

Issues

List issues that this PR closes (syntax):

Usage

  • A usage sketch of the quantity under test is shown below.
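A minimal sketch of the quantity the test validates, using only standard PyTorch (shapes here are hypothetical; the actual ChunkedDistributedEntropy invocation lives in nemo_rl/distributed/model_utils.py and is not reproduced here):

    import torch

    def baseline_entropy(full_logits: torch.Tensor) -> torch.Tensor:
        """Single-GPU reference: sum_v p_v * log p_v (non-positive), the
        convention the distributed implementation is compared against."""
        log_probs = torch.nn.functional.log_softmax(full_logits, dim=-1)
        probs = torch.exp(log_probs)
        return (probs * log_probs).sum(dim=-1)  # [B, S]

    # Illustrative shapes: batch=2, seq_len=8, vocab=32
    logits = torch.randn(2, 8, 32, requires_grad=True)
    entropy = baseline_entropy(logits)  # [2, 8], all values <= 0
    entropy.sum().backward()            # gradients flow back to the logits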

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Tests
    • Added comprehensive testing for distributed entropy computation with validation across multiple configurations, including forward/backward parity checks and numerical stability scenarios.

@peri044 peri044 requested review from a team as code owners October 22, 2025 04:42
@coderabbitai coderabbitai bot (Contributor) commented Oct 22, 2025

📝 Walkthrough

A new Ray-based test actor ChunkedDistributedEntropyTestActor is introduced to validate ChunkedDistributedEntropy end-to-end, with comprehensive tests for forward/backward passes against PyTorch baselines, fixture registration, and edge-case validation.

Changes

Cohort / File(s): Ray-based test actor for chunked distributed entropy (tests/unit/distributed/test_model_utils.py)
Summary: Introduces ChunkedDistributedEntropyTestActor with _torch_baseline_entropy() for PyTorch baseline computation, test_chunked_distributed_entropy_forward_and_backward() for forward/backward parity validation, and test_edge_cases() for numerical stability checks. Adds a test harness supporting multiple TP and chunk sizes with GPU guards. Registers the new actor via the register_chunked_distributed_entropy_test_actor() fixture and exposes test_chunked_distributed_entropy_all_tests() as the entry point. Imports ChunkedDistributedEntropy for testing. Includes result tracking (forward/gradient/entropy diffs).
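For orientation, the parity check the actor performs reduces to the following pattern (a minimal single-process sketch; distributed_entropy and reference_entropy are stand-in callables, and shapes/tolerances are illustrative, not copied from the PR):

    import torch

    def parity_check(distributed_entropy, reference_entropy, logits):
        # Both callables are assumed to map [B, S, V] logits to [B, S] values;
        # logits must have requires_grad=True.
        ref_logits = logits.detach().clone().requires_grad_(True)
        out_dist = distributed_entropy(logits)
        out_ref = reference_entropy(ref_logits)
        forward_max_diff = (out_dist - out_ref).abs().max().item()
        # Backward parity: compare gradients with respect to the logits.
        out_dist.sum().backward()
        out_ref.sum().backward()
        grad_max_diff = (logits.grad - ref_logits.grad).abs().max().item()
        return {"forward_max_diff": forward_max_diff, "grad_max_diff": grad_max_diff}

The returned keys mirror the diffs the harness prints in the review discussion below.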

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

✅ Passed checks (4 passed)
  • Title Check: Passed. The pull request title "Add unit test for full entropy calculation" is related to the actual changes in the changeset. The PR does add comprehensive unit tests that exercise entropy calculation through the new ChunkedDistributedEntropyTestActor, including forward/backward passes and numerical stability checks. While the title is somewhat generic and doesn't specifically mention ChunkedDistributedEntropy (the primary focus), it accurately describes the core action being taken: adding tests for entropy calculation. The title is not misleading or off-topic; it captures a real and valid aspect of the changes, even if it's not the most specific description possible.
  • Docstring Coverage: Passed. Docstring coverage is 87.50%, which meets the required threshold of 80.00%.
  • Test Results For Major Changes: Passed. This PR adds comprehensive unit tests for ChunkedDistributedEntropy, including a new test actor, test fixtures, and end-to-end validation. While these are primarily test additions (generally good practice), the PR description lacks documentation of actual test results or validation outcomes. The summary mentions that the test harness reports comparison results (forward_max_diff, grad_max_diff, entropy_max_diff) and includes numerical stability checks, but the description does not include evidence that these tests pass, quantified validation results, or a demonstration of no regression. Since this is a test-only change without production code modifications, it is not a "major" change in the traditional sense, but the test infrastructure being added is non-trivial, and given that entropy calculations affect numerics, posted test results would strengthen confidence in the submission.
  • Description Check: Passed. Check skipped; CodeRabbit's high-level summary is enabled.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
tests/unit/distributed/test_model_utils.py (4)

613-616: Avoid negative-index gather for padded targets.

Using -1 in gather relies on Python-style negative indexing and can surprise future readers; the later multiply-by-mask zeroes the value out, but the gather still indexes the last vocab entry. Prefer clamping/branching to avoid out-of-range indexing and to keep gradients strictly zero for pads.

Apply:

-        target_mask = target >= 0  # Valid targets (assuming -1 or similar for padding) 
-        log_probs = torch.gather(log_softmax, -1, target.unsqueeze(-1)).squeeze(-1)
-        log_probs = log_probs * target_mask.float()
+        # Treat negative targets as padding; avoid negative-index gather
+        target_valid = target >= 0
+        safe_target = torch.where(target_valid, target, torch.zeros_like(target))
+        log_probs = torch.gather(log_softmax, -1, safe_target.unsqueeze(-1)).squeeze(-1)
+        log_probs = log_probs.masked_fill(~target_valid, 0.0)
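For reference, the suggested clamp-then-mask pattern in isolation (shapes and the -1 padding convention here are hypothetical):

    import torch

    logits = torch.randn(2, 5, 10)
    target = torch.tensor([[1, 2, -1, 3, -1], [0, -1, 4, 5, 6]])  # -1 marks padding
    log_softmax = torch.log_softmax(logits, dim=-1)
    target_valid = target >= 0
    # Clamp pads to index 0 so gather never sees a negative index...
    safe_target = torch.where(target_valid, target, torch.zeros_like(target))
    log_probs = torch.gather(log_softmax, -1, safe_target.unsqueeze(-1)).squeeze(-1)
    # ...then zero them out; masked_fill with a constant also zeroes their gradients.
    log_probs = log_probs.masked_fill(~target_valid, 0.0)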

1135-1144: Uniform‑case comment and minor cleanup.

For uniform distributions, sum p log p = -log(V). Also, expand_as is redundant since shapes already match.

-        # For uniform distribution over V items: H = -sum(1/V * log(1/V)) = log(V)
+        # For uniform over V items: sum p log p = log(1/V) = -log(V)
@@
-            expected_uniform_entropy.expand_as(entropy_uniform),
+            expected_uniform_entropy,
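Spelled out: with p_v = 1/V for every v, sum_v p_v log p_v = V * (1/V) * log(1/V) = -log(V), so under this code's sign convention the expected per-position value is -log(V) (the conventional positive Shannon entropy would be +log(V)).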

1211-1237: Enforce numerical thresholds in the harness (not just inside the actor).

Actor asserts already gate failures, but adding harness-level asserts keeps parity with the gather-logprob tests and surfaces diffs clearly if tolerances change.

         results = ray.get(futures)
 
-        for i, result in enumerate(results):
-            if "forward_max_diff" in result:
-                print(f"Worker {i} forward max diff: {result['forward_max_diff']:.2e}")
-            if "grad_max_diff" in result and "entropy_max_diff" in result:
-                print(
-                    f"Worker {i} gradient max diff: {result['grad_max_diff']:.2e}, "
-                    f"entropy max diff: {result['entropy_max_diff']:.2e}"
-                )
+        for i, result in enumerate(results):
+            print(f"Worker {i} forward max diff: {result['forward_max_diff']:.2e}")
+            print(
+                f"Worker {i} gradient max diff: {result['grad_max_diff']:.2e}, "
+                f"entropy max diff: {result['entropy_max_diff']:.2e}"
+            )
+            assert result["forward_max_diff"] < 1e-4, (
+                f"Worker {i} forward diff too large: {result['forward_max_diff']}"
+            )
+            assert result["grad_max_diff"] < 1e-4, (
+                f"Worker {i} grad diff too large: {result['grad_max_diff']}"
+            )
+            assert result["entropy_max_diff"] < 1e-4, (
+                f"Worker {i} entropy diff too large: {result['entropy_max_diff']}"
+            )

1176-1184: Optional: add a “single‑chunk” path to the param grid.

Include a chunk_size larger than seq_len to exercise the single-chunk path in ChunkedDistributedEntropy.

 @pytest.mark.parametrize(
     "tp_size, chunk_size",
     [
         (1, 4),
         (2, 4),
         (1, 1),
         (2, 1),
+        (1, 16),  # chunk_size > seq_len (single-chunk path)
+        (2, 16),
     ],
 )
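Assuming the test fixture's seq_len is smaller than 16, a chunk_size of 16 puts the entire sequence into one chunk, exercising the no-split code path.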
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f2de476 and 4ce3230.

📒 Files selected for processing (1)
  • tests/unit/distributed/test_model_utils.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • tests/unit/distributed/test_model_utils.py
🧬 Code graph analysis (1)
tests/unit/distributed/test_model_utils.py (4)
nemo_rl/distributed/model_utils.py (6)
  • ChunkedDistributedEntropy (985-1056)
  • backward (101-140)
  • backward (210-255)
  • backward (317-381)
  • backward (746-775)
  • backward (1026-1056)
nemo_rl/distributed/virtual_cluster.py (2)
  • PY_EXECUTABLES (42-58)
  • RayVirtualCluster (177-435)
nemo_rl/distributed/named_sharding.py (3)
  • NamedSharding (19-222)
  • layout (99-101)
  • names (84-86)
nemo_rl/distributed/worker_groups.py (3)
  • RayWorkerBuilder (130-300)
  • RayWorkerGroup (303-1004)
  • run_all_workers_single_data (728-772)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Post submodule check comment / Comment on PR
  • GitHub Check: Post automodel integration comment / Comment on PR

Comment on lines +971 to +980
    def _torch_baseline_entropy(self, full_logits):
        """Single-GPU PyTorch baseline implementation for entropy computation."""
        # Compute log softmax and softmax using standard PyTorch
        log_probs = torch.nn.functional.log_softmax(full_logits, dim=-1)
        probs = torch.exp(log_probs)

        # Compute entropy: H = -sum(p * log(p)) = -sum(p * log_p)
        entropy = (probs * log_probs).sum(dim=-1)  # [B, S]

        return entropy

⚠️ Potential issue | 🟡 Minor

Entropy docstring/sign confusion.

The implementation returns sum_v p_v log p_v (non-positive), not the conventional positive Shannon entropy -sum p log p. Adjust the docstring and comments to match the code and avoid confusion.

-    def _torch_baseline_entropy(self, full_logits):
-        """Single-GPU PyTorch baseline implementation for entropy computation."""
+    def _torch_baseline_entropy(self, full_logits):
+        """Single-GPU baseline for sum_v p_v log p_v (non-positive 'entropy' used by ChunkedDistributedEntropy)."""
@@
-        # Compute entropy: H = -sum(p * log(p)) = -sum(p * log_p)
-        entropy = (probs * log_probs).sum(dim=-1)  # [B, S]
+        # Compute H_all = sum_v p_v log p_v (<= 0)
+        entropy = (probs * log_probs).sum(dim=-1)  # [B, S]
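As a concrete sanity check of the convention: for a uniform distribution over V = 4 outcomes, sum_v p_v log p_v = log(1/4) ≈ -1.386, whereas the conventional Shannon entropy is +1.386.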
🤖 Prompt for AI Agents
In tests/unit/distributed/test_model_utils.py around lines 971 to 980, the
docstring and inline comment claim the function computes "entropy" but the code
returns sum_v p_v * log p_v (a non-positive value), i.e. the negative of
conventional Shannon entropy; update the function docstring and the inline
comment to state explicitly that the function returns the negative Shannon
entropy (sum_v p_v * log p_v, ≤ 0) or "negative entropy" (or change the sign in
the computation if you prefer positive Shannon entropy), and make the comment
above the entropy computation consistent with that description.

@peri044 peri044 changed the base branch from main to pchadha-add-full-entropy-log October 23, 2025 18:05
@peri044 peri044 requested review from a team as code owners October 23, 2025 18:05
@peri044 peri044 changed the base branch from pchadha-add-full-entropy-log to main October 23, 2025 18:05
