Conversation

@ytl0623 ytl0623 commented Nov 5, 2025

Fixes #8600.

Description

This np.linspace approach generates a descending array that starts exactly at num_train_timesteps - 1 (999 for T = 1000) and ends exactly at 0 (after rounding), ensuring the scheduler samples the entire intended trajectory.
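As a minimal sketch of the difference (the exact pre-fix formula is reconstructed from the issue report, not copied from MONAI's source), compare the old ratio-based schedule with the linspace schedule for T = 1000 and 10 inference steps:

```python
import numpy as np

num_train_timesteps, num_inference_steps = 1000, 10

# Ratio-based approach (reconstructed from the issue): the first
# timestep stops at 900 and never reaches T - 1 = 999.
step_ratio = num_train_timesteps // num_inference_steps
old = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].astype(np.int64)

# np.linspace approach: endpoints land exactly on T - 1 and 0.
new = np.linspace(0, num_train_timesteps - 1, num_inference_steps).round()[::-1].astype(np.int64)

print(old)  # [900 800 700 600 500 400 300 200 100   0]
print(new)  # [999 888 777 666 555 444 333 222 111   0]
```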

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.


coderabbitai bot commented Nov 5, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

This PR refactors three loss classes in unified_focal_loss.py by replacing the to_onehot_y parameter with include_background, removing the legacy num_classes and weight parameters, and restructuring the forward logic to handle background exclusion and per-class asymmetric computations. Additionally, timestep generation in the ddpm.py and ddim.py schedulers switches from a ratio-based approach to np.linspace, fixing endpoint coverage so that timesteps reach num_train_timesteps - 1.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Loss classes refactoring: Three interdependent classes (AsymmetricFocalLoss, AsymmetricFocalTverskyLoss, AsymmetricUnifiedFocalLoss) undergo substantial signature and logic changes; AsymmetricUnifiedFocalLoss now instantiates and delegates to sub-loss components with new activation and parameter handling.
  • Parameter changes: include_background replaces to_onehot_y; new parameters (sigmoid, softmax, lambda_focal, focal_loss_gamma/delta, tversky_loss_gamma/delta) expand configurability; requires verification of backward compatibility impact.
  • Scheduler fixes: Straightforward algorithm substitution in the set_timesteps methods; verify that np.linspace rounding behavior and device placement match the original intent and that the endpoint issue is resolved.
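The include_background semantics described above can be sketched as follows. This is an illustration only, assuming channel-first tensors with the background class in channel 0; the helper name is hypothetical and not MONAI's actual API (the real losses operate on torch tensors, where the slicing is identical):

```python
import numpy as np

def drop_background(y_pred, y_true, include_background: bool):
    # Hypothetical helper: slice off channel 0 (background) when
    # include_background=False, leaving only foreground classes.
    if not include_background:
        y_pred = y_pred[:, 1:]  # (B, C, ...) -> (B, C-1, ...)
        y_true = y_true[:, 1:]
    return y_pred, y_true

pred = np.random.rand(2, 3, 4, 4)  # batch of 2, 3 classes, 4x4 spatial
true = np.random.rand(2, 3, 4, 4)
p, t = drop_background(pred, true, include_background=False)
print(p.shape)  # (2, 2, 4, 4)
```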

Areas requiring extra attention:

  • Verify the asymmetric loss formulation correctness in refactored logic paths
  • Confirm per-class reduction handling (mean, sum, none) works as intended across all three loss classes
  • Check AsymmetricUnifiedFocalLoss weighted combination of sub-losses produces expected results
  • Validate timestep endpoint now reaches num_train_timesteps - 1 as intended in both schedulers
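The endpoint check in the last bullet can be sketched as a small test over several step counts (assumed values; not the project's actual test code):

```python
import numpy as np

def linspace_timesteps(num_train_timesteps: int, num_inference_steps: int) -> np.ndarray:
    # Same construction as the fix: descending, both endpoints included.
    return np.linspace(0, num_train_timesteps - 1, num_inference_steps).round()[::-1].astype(np.int64)

# Every schedule should start at T - 1 and end at 0, regardless of step count.
for n in (2, 5, 10, 50, 1000):
    ts = linspace_timesteps(1000, n)
    assert ts[0] == 999 and ts[-1] == 0 and len(ts) == n
print("endpoint checks passed")
```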

📜 Recent review details

  • Configuration used: .coderabbit.yaml
  • Review profile: CHILL
  • Plan: Pro
  • Cache: Disabled due to data retention organization setting
  • Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between e72145c and ddf6898.

📒 Files selected for processing (3)
  • monai/losses/unified_focal_loss.py (1 hunks)
  • monai/networks/schedulers/ddim.py (1 hunks)
  • monai/networks/schedulers/ddpm.py (1 hunks)


@ytl0623 ytl0623 closed this Nov 5, 2025
@ytl0623 ytl0623 deleted the fix-issue-8600 branch November 5, 2025 08:44


Development

Successfully merging this pull request may close these issues.

timesteps in set_timesteps of DDPM and DDIM schedulers doesn't reach end point T (e.g. T= 1000)
