modify unified_focal_loss.py #8626
base: dev
Conversation
Signed-off-by: ytl0623 <david89062388@gmail.com>
Walkthrough

Added `include_background` to `AsymmetricFocalTverskyLoss`, `AsymmetricFocalLoss`, and `AsymmetricUnifiedFocalLoss`; `AsymmetricUnifiedFocalLoss` also gained `use_softmax` and propagates both flags to internal losses initialized with `reduction=None`. Forward paths now handle multi-class and single-channel inputs (augmenting single-channel with a background channel), convert logits to probabilities with softmax or sigmoid per `use_softmax`, validate `y_true` as either one-hot or label formats, clamp dice values to [epsilon, 1-epsilon], optionally exclude/zero background, and concatenate per-class components before aggregation. Docstrings, parameter lists, examples, and reduction handling (SUM/NONE/MEAN) were updated.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
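For orientation, a minimal usage sketch of the interface the walkthrough describes (parameter names as they appear in this PR's diff; illustrative only, not the merged API):

```python
import torch
from monai.losses import AsymmetricUnifiedFocalLoss

# Multi-class logits (B, N, H, W) with integer labels (B, 1, H, W).
pred = torch.randn(2, 3, 32, 32)
grnd = torch.randint(0, 3, (2, 1, 32, 32))

loss_fn = AsymmetricUnifiedFocalLoss(
    to_onehot_y=True,          # convert labels to one-hot internally
    num_classes=3,
    use_softmax=True,          # softmax over channels; sigmoid when False
    include_background=False,  # zero the background contribution
)
print(loss_fn(pred, grnd))
```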
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/losses/unified_focal_loss.py (1)
199-246: Forward method doesn't apply sigmoid/softmax or filter by `include_background`.

Parameters are defined in `__init__` (lines 165-167) but never used:

- No sigmoid/softmax activation is applied to `y_pred` before calling the helper losses.
- `include_background` filtering (removing the first channel when False) is not implemented.
- The docstring still claims binary-only support despite the PR objectives.

Implement activation and filtering following the FocalLoss/TverskyLoss/DiceLoss pattern: check `include_background`, filter channels, apply activation, then compute the loss. A sketch of that pattern follows.
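A minimal sketch of the pattern this comment proposes, assuming the class stores `include_background` and `use_softmax` flags (the helper name is hypothetical, and a later review in this thread revises the channel-slicing part):

```python
import warnings

import torch


def prepare_inputs(y_pred, y_true, include_background=True, use_softmax=False):
    """Hypothetical helper mirroring the DiceLoss/FocalLoss preprocessing pattern."""
    if not include_background:
        if y_pred.shape[1] == 1:
            warnings.warn("single channel prediction, `include_background=False` ignored.", stacklevel=2)
        else:
            y_pred = y_pred[:, 1:]  # drop the background channel
            y_true = y_true[:, 1:]
    # convert logits to probabilities
    y_pred = torch.softmax(y_pred.float(), dim=1) if use_softmax else torch.sigmoid(y_pred.float())
    return y_pred, y_true
```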
Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
monai/losses/unified_focal_loss.py (1)
198-199: Stored parameters remain unused in `forward()`.

`include_background` and `use_softmax` (plus the missing `use_sigmoid`) are stored but never referenced in the `forward()` method. The forward logic must apply the requested activation and handle background filtering before delegating to the asymmetric loss functions. See the past review comment for detailed implementation guidance.
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
189-189: Undefined name use_sigmoid
(F821)
190-190: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (1)
monai/losses/unified_focal_loss.py (1)
189-190: Mutual exclusivity check is correct once `use_sigmoid` is added.

The logic correctly enforces that sigmoid and softmax cannot both be True. This resolves the past review comment requesting this validation. The undefined name error will be fixed once `use_sigmoid` is added to the signature; a minimal sketch of the check follows.
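A self-contained sketch of the validation being referenced, wrapped in a stand-in class (the surrounding signature is assumed, not quoted from the PR):

```python
class ActivationFlags:
    """Stand-in showing the mutual exclusivity validation discussed above."""

    def __init__(self, use_sigmoid: bool = False, use_softmax: bool = False) -> None:
        if use_sigmoid and use_softmax:
            raise ValueError("use_sigmoid and use_softmax are mutually exclusive.")
        self.use_sigmoid = use_sigmoid
        self.use_softmax = use_softmax


flags = ActivationFlags(use_softmax=True)            # ok
# ActivationFlags(use_sigmoid=True, use_softmax=True)  # raises ValueError
```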
Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
monai/losses/unified_focal_loss.py (1)
251-254: Multiclass path still binary-only.
The helpers invoked here still read only channel 0 and channel 1 (Line 79 and Line 135). Any tensor with more than two classes silently ignores channel ≥ 2, so the advertised multiclass support never materialises. Please refactor the helpers (or wrap them here) to aggregate every foreground class before exposing a multiclass API.
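To make the requested aggregation concrete, a sketch of a per-class Tversky-style score computed over every channel rather than just channels 0 and 1 (the function name and `delta=0.7` asymmetry are illustrative assumptions matching the class defaults):

```python
import torch


def per_class_tversky(y_pred: torch.Tensor, y_true: torch.Tensor,
                      delta: float = 0.7, epsilon: float = 1e-7) -> torch.Tensor:
    """Per-class Tversky index over all channels; returns shape (B, N), channel 0 = background."""
    spatial = tuple(range(2, y_pred.dim()))
    tp = torch.sum(y_true * y_pred, dim=spatial)
    fn = torch.sum(y_true * (1 - y_pred), dim=spatial)
    fp = torch.sum((1 - y_true) * y_pred, dim=spatial)
    return (tp + epsilon) / (tp + delta * fn + (1 - delta) * fp + epsilon)
```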
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
240-240: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 3
♻️ Duplicate comments (1)
monai/losses/unified_focal_loss.py (1)
250-256: Remove background channel slicing; it conflicts with sub-loss zeroing.

Lines 255-256 drop the background channel, but the sub-losses (after being fixed to receive `include_background`) are designed to zero background contributions internally (lines 86, 151). Slicing here creates a mismatch: after dropping channel 0, the sub-losses still hardcode the `[:, 0]` and `[:, 1]` indices, which now refer to the wrong classes. Remove the slicing and rely on the sub-losses' internal zeroing:

```diff
-if not self.include_background:
-    if n_pred_ch == 1:
-        warnings.warn("single channel prediction, `include_background=False` ignored.")
-    else:
-        # if skipping background, removing first channel
-        y_pred = y_pred[:, 1:]
-        y_true = y_true[:, 1:]
-
```
🧹 Nitpick comments (1)
monai/losses/unified_focal_loss.py (1)
252: Add `stacklevel` to warning.

Per static analysis and Python best practices, the warning should specify `stacklevel=2` to report the caller's location.

```diff
-warnings.warn("single channel prediction, `include_background=False` ignored.")
+warnings.warn("single channel prediction, `include_background=False` ignored.", stacklevel=2)
```
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
252-252: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
🔇 Additional comments (5)
monai/losses/unified_focal_loss.py (5)
42-42: LGTM: `include_background` parameter added correctly. Parameter properly added to the signature, documented, and stored.

Also applies to: 50-50, 57-57

112-112: LGTM: `include_background` parameter added correctly. Parameter properly added to the signature, documented, and stored.

Also applies to: 120-120, 127-127

177-178: LGTM: Parameters added and documented. The single `use_softmax` boolean cleanly controls activation (softmax when True, sigmoid when False). The docstring clearly describes the behavior.

Also applies to: 188-190

220-221: LGTM: Docstring updated for new activation options. Describes the sigmoid/softmax transformation and shape expectations clearly.

258-261: LGTM: Activation applied correctly. Softmax is applied across the channel dimension when `use_softmax=True`, otherwise sigmoid. Proper use of `.float()` ensures dtype compatibility.
Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 1
♻️ Duplicate comments (3)
monai/losses/unified_focal_loss.py (3)
181-182: Add a `use_sigmoid` parameter for explicit activation control.

Only `use_softmax` exists. Per the docstring (line 194), sigmoid is the fallback when `use_softmax=False`, but there's no way to skip activation entirely if inputs are already probabilities. Add `use_sigmoid: bool = False` and enforce mutual exclusivity with `use_softmax`. Apply this diff:

```diff
 def __init__(
     self,
     to_onehot_y: bool = False,
     num_classes: int = 2,
     weight: float = 0.5,
     gamma: float = 0.5,
     delta: float = 0.7,
     reduction: LossReduction | str = LossReduction.MEAN,
     include_background: bool = True,
+    use_sigmoid: bool = False,
     use_softmax: bool = False,
 ):
```

Then add a mutual exclusivity check after line 204:

```diff
 super().__init__(reduction=LossReduction(reduction).value)
+if use_sigmoid and use_softmax:
+    raise ValueError("use_sigmoid and use_softmax are mutually exclusive.")
 self.to_onehot_y = to_onehot_y
```

Update line 215:

```diff
 self.include_background = include_background
+self.use_sigmoid = use_sigmoid
 self.use_softmax = use_softmax
```

Update the docstring around line 193:

```diff
 include_background : whether to include the background class in loss calculation. Defaults to True.
+use_sigmoid: whether to apply sigmoid activation. Mutually exclusive with use_softmax. Defaults to False.
 use_softmax: whether to use softmax to transform the original logits into probabilities.
-    If True, softmax is used. If False, sigmoid is used. Defaults to False.
+    Mutually exclusive with use_sigmoid. Defaults to False.
```

Update the forward logic around lines 262-265:

```diff
 if self.use_softmax:
     y_pred = torch.softmax(y_pred.float(), dim=1)
-else:
+elif self.use_sigmoid:
     y_pred = torch.sigmoid(y_pred.float())
```
210-213: Pass `include_background` to sub-losses.

Sub-losses are instantiated without `include_background`, so they default to `True`. But the forward method slices away the background channel (lines 259-260) when `include_background=False`, causing the sub-losses to index out of range when accessing `[:, 0:1]` and `[:, 1:]`. Apply this diff:

```diff
-self.asy_focal_loss = AsymmetricFocalLoss(to_onehot_y=self.to_onehot_y, gamma=self.gamma, delta=self.delta)
+self.asy_focal_loss = AsymmetricFocalLoss(
+    to_onehot_y=self.to_onehot_y, gamma=self.gamma, delta=self.delta, include_background=self.include_background
+)
 self.asy_focal_tversky_loss = AsymmetricFocalTverskyLoss(
-    to_onehot_y=self.to_onehot_y, gamma=self.gamma, delta=self.delta
+    to_onehot_y=self.to_onehot_y, gamma=self.gamma, delta=self.delta, include_background=self.include_background
 )
```
254-260: Don't slice channels at the wrapper level; let sub-losses handle `include_background`.

Slicing away the background channel (lines 259-260) breaks the sub-losses, which expect channel 0 to be background and channels 1+ to be foreground. After slicing, indices like `[:, 0:1]` and `[:, 1:]` in the sub-losses access the wrong channels. Remove the slicing here; pass full tensors to the sub-losses and let them handle `include_background` internally via their zeroing logic (lines 85-86, 152-153). Apply this diff:

```diff
-if not self.include_background:
-    if n_pred_ch == 1:
-        warnings.warn("single channel prediction, `include_background=False` ignored.")
-    else:
-        # if skipping background, removing first channel
-        y_pred = y_pred[:, 1:]
-        y_true = y_true[:, 1:]
-
 if self.use_softmax:
     y_pred = torch.softmax(y_pred.float(), dim=1)
 else:
     y_pred = torch.sigmoid(y_pred.float())
```
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
256-256: No explicit stacklevel keyword argument found
Set stacklevel=2
(B028)
🔇 Additional comments (2)
monai/losses/unified_focal_loss.py (2)
82-91: Multiclass support improved, but the parent class must pass full tensors.

The slicing at lines 82-83 (`[:, 0:1]` and `[:, 1:]`) correctly handles multiple foreground classes. Zeroing `back_dice` (line 86) instead of removing it preserves the tensor structure. However, this logic assumes channel 0 is background and channels 1+ are foreground. When `AsymmetricUnifiedFocalLoss` slices away the background channel before calling this (lines 259-260), the indices become invalid. The sub-loss must receive the full tensor with all channels, not pre-sliced inputs.

146-158: Multiclass support improved, but the parent class must pass full tensors.

The slicing at lines 146 and 149 (`[:, 0:1]` and `[:, 1:]`) correctly handles multiple foreground classes, and zeroing `back_ce` (line 153) preserves the tensor structure. However, this logic assumes channel 0 is background. When `AsymmetricUnifiedFocalLoss` slices away the background channel before calling this (lines 259-260), the indices become invalid. The sub-loss must receive the full tensor with all channels, not pre-sliced inputs. A sketch of the zeroing approach follows.
… to sub-losses. 3. let sub-losses handle include_background. Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
monai/losses/unified_focal_loss.py (2)
185-194: Docstring outdated and incomplete.
- Line 187: "it only supports 2 now" contradicts the multiclass-capable slicing at lines 82-83, 149.
- Line 189: gamma default is 0.5 in the constructor (line 178), not 0.75.
- Line 190: `epsilon` is not a constructor parameter.
- Missing: the `reduction` parameter (line 180) is undocumented.

Apply this diff:

```diff
 Args:
     to_onehot_y : whether to convert `y` into the one-hot format. Defaults to False.
-    num_classes : number of classes, it only supports 2 now. Defaults to 2.
+    num_classes : number of classes. Defaults to 2.
+    weight : weight for combining focal loss and focal tversky loss. Defaults to 0.5.
+    gamma : value of the exponent gamma in the definition of the Focal loss. Defaults to 0.5.
     delta : weight of the background. Defaults to 0.7.
-    gamma : value of the exponent gamma in the definition of the Focal loss. Defaults to 0.75.
-    epsilon : it defines a very small number each time. simmily smooth value. Defaults to 1e-7.
-    weight : weight for each loss function, if it's none it's 0.5. Defaults to None.
+    reduction : reduction mode for the loss. Defaults to LossReduction.MEAN.
     include_background : whether to include the background class in loss calculation. Defaults to True.
     use_softmax: whether to use softmax to transform the original logits into probabilities.
         If True, softmax is used. If False, sigmoid is used. Defaults to False.
```
242-247: Calling one_hot on logits is incorrect; validation order is wrong.

Line 225 says y_pred should be logits, but lines 242-244 call `one_hot()`, which expects integer indices. If y_pred contains logits, this produces garbage. After line 244, y_true is one-hot (max=1), so line 246's check `torch.max(y_true) != self.num_classes - 1` fails for num_classes > 2. If single-channel binary logits are supported, expand to two channels instead:

```diff
 if y_pred.shape[1] == 1:
-    y_pred = one_hot(y_pred, num_classes=self.num_classes)
-    y_true = one_hot(y_true, num_classes=self.num_classes)
+    # Expand single-channel binary logits to two-channel
+    if self.num_classes != 2:
+        raise ValueError(f"Single-channel input only supported for binary (num_classes=2), got {self.num_classes}")
+    y_pred = torch.cat([torch.zeros_like(y_pred), y_pred], dim=1)
+    if y_true.shape[1] == 1:
+        y_true = one_hot(y_true, num_classes=self.num_classes)

-if torch.max(y_true) != self.num_classes - 1:
+# Validate before any one-hot conversion
+if y_true.shape[1] != self.num_classes and torch.max(y_true) > self.num_classes - 1:
     raise ValueError(f"Please make sure the number of classes is {self.num_classes-1}")
```
🔇 Additional comments (1)
monai/losses/unified_focal_loss.py (1)
256-259: LGTM! Activation logic correct. Softmax for multiclass (dim=1), sigmoid otherwise. Sub-losses receive probabilities as expected.
Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 2
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
253-255: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (5)
monai/losses/unified_focal_loss.py (5)
42-42: LGTM! `include_background` parameter correctly added, documented, and implemented. Clamping (line 80) addresses numerical stability. Multiclass support via `[:, 1:]` slicing (line 84) and aggregation via `torch.mean` (line 92) are both correct.

Also applies to: 50-50, 57-57, 80-80, 83-92

115-115: LGTM! `include_background` parameter correctly added and implemented. Multiclass support via slicing and mean aggregation is consistent with `AsymmetricFocalTverskyLoss`.

Also applies to: 123-123, 130-130, 147-159

211-224: LGTM! Sub-losses correctly receive `include_background` and `reduction=LossReduction.NONE`, allowing the parent to apply the final reduction. Addresses past review feedback.

270-273: LGTM! Activation transform correctly implemented: softmax when `use_softmax=True`, sigmoid otherwise. Addresses the PR objective for a sigmoid/softmax interface.

280-286: LGTM! Parent reduction correctly applied after combining sub-losses. Handles SUM, NONE, and MEAN modes properly; a sketch of this step follows.
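For reference, a sketch of that final reduction step as a plain function (the PR implements it inline in `forward`):

```python
import torch
from monai.utils import LossReduction


def reduce_loss(loss: torch.Tensor, reduction: str) -> torch.Tensor:
    """Apply MEAN/SUM/NONE reduction after the weighted sub-losses are combined."""
    if reduction == LossReduction.MEAN.value:
        return torch.mean(loss)
    if reduction == LossReduction.SUM.value:
        return torch.sum(loss)
    if reduction == LossReduction.NONE.value:
        return loss
    raise ValueError(f"Unsupported reduction: {reduction}")
```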
Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
monai/losses/unified_focal_loss.py (2)
166: Remove outdated binary-only statement.

Line 166 states "only supported for binary image segmentation now" but the implementation now supports multiclass (per PR objective #8603). Apply this diff:

```diff
-    Actually, it's only supported for binary image segmentation now
+    Supports both binary and multiclass image segmentation.
```
228-237: Remove outdated TODO and binary-only statements.

Line 228's TODO and lines 233-234 of the docstring still claim binary-only support, but multiclass is now implemented (per PR objective #8603). Apply this diff:

```diff
-    # TODO: Implement this function to support multiple classes segmentation
     def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
         """
         Args:
             y_pred : the shape should be BNH[WD], where N is the number of classes.
-                It only supports binary segmentation.
                 The input should be the original logits since it will be transformed by
                 a sigmoid or softmax in the forward function.
             y_true : the shape should be BNH[WD] or B1H[WD], where N is the number of classes.
-                It only supports binary segmentation.
```
♻️ Duplicate comments (1)
monai/losses/unified_focal_loss.py (1)
121: Fix docstring default value.

The docstring states gamma defaults to 0.75, but the actual default at line 112 is 2. Apply this diff:

```diff
-    gamma : value of the exponent gamma in the definition of the Focal loss . Defaults to 0.75.
+    gamma : value of the exponent gamma in the definition of the Focal loss . Defaults to 2.
```
🧹 Nitpick comments (1)
monai/losses/unified_focal_loss.py (1)
253-255: Consider extracting long exception messages.

Static analysis flags long inline exception messages. Per project convention, consider extracting them to constants or using shorter messages. Example:

```diff
-f"Single-channel input only supported for binary (num_classes=2), got {self.num_classes}"
+f"Single-channel input requires num_classes=2, got {self.num_classes}"
```

Also applies to: 261-264
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
253-255: Avoid specifying long messages outside the exception class
(TRY003)
261-264: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (5)
monai/losses/unified_focal_loss.py (5)
80-92: LGTM: Numerical stability and background handling. Clamping prevents instability when gamma > 1, and the background/foreground split with optional zeroing correctly implements `include_background` for multiclass.

147-158: LGTM: Background handling and aggregation. The cross-entropy background/foreground split with optional zeroing correctly implements `include_background`. Aggregation now matches `AsymmetricFocalTverskyLoss`.

211-224: LGTM: Sub-loss initialization. Sub-losses correctly receive `include_background` and `reduction=NONE`, allowing the parent to apply the final reduction after weighting.

251-264: LGTM: Input validation and single-channel handling. The logic correctly handles single-channel binary input by adding a background channel. Validation uses OR to catch either a channel mismatch or invalid label values, with clear error messaging; a standalone sketch of this check follows.

273-277: LGTM: Activation application. Softmax/sigmoid is correctly applied based on the `use_softmax` flag before passing to the sub-losses.
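A self-contained sketch of the dual-format validation described in the 251-264 comment (the function form and messages are illustrative, not quoted from the PR):

```python
import torch


def validate_y_true(y_true: torch.Tensor, num_classes: int) -> None:
    """Accept one-hot targets (num_classes channels) or integer labels (1 channel)."""
    if y_true.shape[1] == num_classes:
        # one-hot format: every value must be exactly 0 or 1
        if not torch.all((y_true == 0) | (y_true == 1)):
            raise ValueError("y_true appears to be one-hot but contains values other than 0 and 1")
    elif y_true.shape[1] == 1:
        # label format: values must stay within the class range
        if torch.max(y_true) > num_classes - 1:
            raise ValueError(f"label values must be in [0, {num_classes - 1}]")
    else:
        raise ValueError(f"y_true must have 1 or {num_classes} channels, got {y_true.shape[1]}")
```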
Signed-off-by: ytl0623 <david89062388@gmail.com>
…ute 'include_background'. Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 0
♻️ Duplicate comments (1)
monai/losses/unified_focal_loss.py (1)
121: Fix docstring default value.

The docstring states gamma defaults to 0.75, but the actual default at line 112 is 2. Apply this diff:

```diff
-    gamma : value of the exponent gamma in the definition of the Focal loss . Defaults to 0.75.
+    gamma : value of the exponent gamma in the definition of the Focal loss . Defaults to 2.
```
🧹 Nitpick comments (1)
monai/losses/unified_focal_loss.py (1)
253-255: Optional: Extract error messages to constants.

Static analysis flags long inline error messages. For consistency with Python best practices, consider extracting them to named constants or using shorter messages. Example refactor:

```python
# At module level
_ERR_SINGLE_CHANNEL_MULTICLASS = "Single-channel input only supported for binary (num_classes=2), got {}"
_ERR_INVALID_YTRUE_SHAPE = (
    "y_true must have {} channels (one-hot) or label values in [0, {}], but got shape {} with max value {}"
)

# In the forward method
if self.num_classes != 2:
    raise ValueError(_ERR_SINGLE_CHANNEL_MULTICLASS.format(self.num_classes))
...
raise ValueError(
    _ERR_INVALID_YTRUE_SHAPE.format(self.num_classes, self.num_classes - 1, y_true.shape, torch.max(y_true))
)
```

Also applies to: 261-264
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
253-255: Avoid specifying long messages outside the exception class
(TRY003)
261-264: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (4)
monai/losses/unified_focal_loss.py (4)
80-92: LGTM: Tversky loss multiclass support implemented correctly. Clamping at line 80 prevents numerical issues when gamma > 1. Background/foreground slicing and optional zeroing work as intended. Aggregation is correct.

147-158: LGTM: Focal loss multiclass support implemented correctly. Background/foreground slicing and optional zeroing match the Tversky implementation. Aggregation is correct.

251-258: Single-channel augmentation logic is correct. For binary segmentation with single-channel input, augmenting with a zero background channel before applying the loss functions is appropriate. The one-hot conversion for `y_true` ensures shape consistency (see the sketch below).

273-289: LGTM: Activation and reduction logic correct. Softmax/sigmoid selection is appropriate for multi-class/binary tasks. Sub-loss combination with configurable weight is correct. The final reduction properly handles all modes (MEAN, SUM, NONE).
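A minimal sketch of the single-channel augmentation in the complementary-sigmoid form that later commits in this thread adopt (the snapshot reviewed here prepended a zero background channel instead; the function name is illustrative):

```python
import torch


def augment_single_channel(logits: torch.Tensor) -> torch.Tensor:
    """Expand (B, 1, ...) binary logits into (B, 2, ...) [background, foreground] probabilities."""
    prob_fg = torch.sigmoid(logits.float())
    return torch.cat([1 - prob_fg, prob_fg], dim=1)
```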
…ber of channels. Signed-off-by: ytl0623 <david89062388@gmail.com>
Signed-off-by: ytl0623 <david89062388@gmail.com>
…ion) Signed-off-by: ytl0623 <david89062388@gmail.com>
Actionable comments posted: 1
🧹 Nitpick comments (3)
monai/losses/unified_focal_loss.py (3)
182-183: Missing mutual exclusivity validation.

Only `use_softmax` exists (no `use_sigmoid`). This design treats sigmoid as the default when `use_softmax=False`. However, add validation to prevent misuse if both sigmoid and softmax logic paths could be triggered simultaneously (though the current code structure makes this unlikely). Consider adding a brief comment in the docstring clarifying that sigmoid is applied when `use_softmax=False` for multi-channel inputs.
264-274: Single-channel augmentation may surprise users.

Lines 264-266 silently augment single-channel predictions with a background channel. Users expecting single-channel output will receive 2-channel loss gradients. Consider adding a warning (with `stacklevel=2`) when this conversion occurs, similar to the `to_onehot_y` warning pattern. Apply this diff:

```diff
 if y_pred.shape[1] == 1:
+    warnings.warn("single channel prediction, augmenting with background channel.", stacklevel=2)
     y_pred_sigmoid = torch.sigmoid(y_pred.float())
     y_pred = torch.cat([1 - y_pred_sigmoid, y_pred_sigmoid], dim=1)
```
197-203: Example doesn't demonstrate multiclass capabilities.

The example still shows binary usage with shape `(1,1,32,32)`. Add a multiclass example to demonstrate the new capabilities from issue #8603. Add after the existing example:

```python
>>> # Multiclass example with 3 classes
>>> pred_mc = torch.randn((1,3,32,32), dtype=torch.float32)
>>> grnd_mc = torch.randint(0, 3, (1,1,32,32), dtype=torch.int64)
>>> fl_mc = AsymmetricUnifiedFocalLoss(to_onehot_y=True, num_classes=3, use_softmax=True)
>>> fl_mc(pred_mc, grnd_mc)
```
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
252-255: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (2)
monai/losses/unified_focal_loss.py (2)
80-92: Multiclass support implemented correctly. The slicing `[:, 1:]` captures all foreground classes, clamping prevents numerical instability, and zeroing the background when `include_background=False` preserves gradient flow. Aggregation via `torch.mean(all_dice)` is consistent with the focal loss sibling.

147-159: Multiclass support implemented correctly. The changes mirror `AsymmetricFocalTverskyLoss` correctly: `[:, 1:]` for all foreground classes, zeroing for background exclusion, and consistent aggregation.
Signed-off-by: ytl0623 <david89062388@gmail.com>
for more information, see https://pre-commit.ci
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
monai/losses/unified_focal_loss.py (6)
27-28: Update outdated docstring. Claims binary-only but the code now supports multiclass via `dice_class[:, 1:]` slicing.

```diff
-    AsymmetricFocalTverskyLoss is a variant of FocalTverskyLoss, which attentions to the foreground class.
-
-    Actually, it's only supported for binary image segmentation now.
+    AsymmetricFocalTverskyLoss is a variant of FocalTverskyLoss that focuses on foreground classes.
+    Supports multi-class segmentation with optional background inclusion.
```

64: Add `stacklevel=2` to warning. Helps users identify the actual call site in tracebacks.

```diff
-    warnings.warn("single channel prediction, `to_onehot_y=True` ignored.")
+    warnings.warn("single channel prediction, `to_onehot_y=True` ignored.", stacklevel=2)
```

100-101: Update outdated docstring. Claims binary-only but the code supports multiclass via `cross_entropy[:, 1:]` slicing.

```diff
-    AsymmetricFocalLoss is a variant of FocalTverskyLoss, which attentions to the foreground class.
-
-    Actually, it's only supported for binary image segmentation now.
+    AsymmetricFocalLoss is a variant of Focal Loss that focuses on foreground classes.
+    Supports multi-class segmentation with optional background inclusion.
```

137: Add `stacklevel=2` to warning. Helps users identify the actual call site in tracebacks.

```diff
-    warnings.warn("single channel prediction, `to_onehot_y=True` ignored.")
+    warnings.warn("single channel prediction, `to_onehot_y=True` ignored.", stacklevel=2)
```

166-167: Update outdated docstring. Claims binary-only but the code supports multiclass (validates `num_classes`, handles N > 2).

```diff
-    AsymmetricUnifiedFocalLoss is a variant of Focal Loss.
-
-    Actually, it's only supported for binary image segmentation now
+    AsymmetricUnifiedFocalLoss combines Asymmetric Focal Loss and Asymmetric Focal Tversky Loss.
+    Supports multi-class segmentation with configurable activation (sigmoid/softmax) and optional background inclusion.
```

267: Add `stacklevel=2` to warning. Helps users identify the actual call site in tracebacks.

```diff
-    warnings.warn("single channel prediction, `to_onehot_y=True` ignored.")
+    warnings.warn("single channel prediction, `to_onehot_y=True` ignored.", stacklevel=2)
```
🧹 Nitpick comments (2)
monai/losses/unified_focal_loss.py (2)
253: Remove redundant f-string prefix. The string has no placeholders.

```diff
-    raise ValueError(f"y_true appears to be one-hot but contains values other than 0 and 1")
+    raise ValueError("y_true appears to be one-hot but contains values other than 0 and 1")
```

197-203: Consider expanding the example. The current example doesn't demonstrate the new `include_background` or `use_softmax` parameters introduced by this PR.
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
253-253: Avoid specifying long messages outside the exception class
(TRY003)
253-253: f-string without any placeholders
Remove extraneous f prefix
(F541)
256-258: Avoid specifying long messages outside the exception class
(TRY003)
260-262: Avoid specifying long messages outside the exception class
(TRY003)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
monai/losses/unified_focal_loss.py (3)
27: Update outdated docstring. The statement "only supported for binary image segmentation" is incorrect; the implementation now supports multiclass segmentation via the `include_background` parameter and per-channel computations. Apply this diff:

```diff
-    Actually, it's only supported for binary image segmentation now.
+    Supports both binary and multiclass segmentation.
```

100: Update outdated docstring. Same issue: the implementation now supports multiclass segmentation. Apply this diff:

```diff
-    Actually, it's only supported for binary image segmentation now.
+    Supports both binary and multiclass segmentation.
```

166: Update outdated docstring. Same issue again. Apply this diff:

```diff
-    Actually, it's only supported for binary image segmentation now
+    Supports both binary and multiclass segmentation.
```
♻️ Duplicate comments (1)
monai/losses/unified_focal_loss.py (1)
228: Remove outdated TODO. Multiclass support is implemented in the forward method, so the TODO is obsolete. Apply this diff:

```diff
-    # TODO: Implement this function to support multiple classes segmentation
     def forward(self, y_pred: torch.Tensor, y_true: torch.Tensor) -> torch.Tensor:
```
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
253-253: Avoid specifying long messages outside the exception class
(TRY003)
256-258: Avoid specifying long messages outside the exception class
(TRY003)
260-262: Avoid specifying long messages outside the exception class
(TRY003)
🔇 Additional comments (9)
monai/losses/unified_focal_loss.py (9)
42-42: LGTM! Parameter `include_background` correctly added, documented, and stored.

Also applies to: 50-50, 57-57

80-80: LGTM! Clamping prevents numerical instability, and the background gating correctly preserves all channels while zeroing the background contribution when needed. Aggregation is now consistent.

Also applies to: 83-92

115-115: LGTM! Parameter correctly added, documented with an accurate default value, and stored.

Also applies to: 121-121, 123-123, 130-130

147-158: LGTM! Background gating correctly preserves channels while zeroing contributions. Aggregation now matches the Tversky loss.

182-183: LGTM! New parameters properly added and documented. The `use_softmax` design is clear.

Also applies to: 188-195

211-226: LGTM! Sub-losses correctly receive `include_background` and `reduction=NONE`, allowing proper composition in the parent loss.

251-262: LGTM! Validation logic correctly handles both one-hot and label formats with clear error messages. Static analysis warns about long error messages (TRY003), but this is a minor style preference.

271-281: LGTM! Activation logic correctly handles both single-channel (binary with sigmoid expansion) and multi-channel (softmax/sigmoid based on `use_softmax`) cases. The implementation properly transforms logits to probabilities before loss computation.

283-294: LGTM! Loss composition and reduction logic is correct. Sub-losses return unreduced values, allowing proper weighting and final reduction.
Signed-off-by: ytl0623 <david89062388@gmail.com>
…o fix-issue-8603
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
monai/losses/unified_focal_loss.py (3)
59-93: Forward ignores the reduction parameter.

The method always returns `torch.mean(all_dice)` (line 92) regardless of `self.reduction`. When `AsymmetricUnifiedFocalLoss` passes `reduction=LossReduction.NONE` (line 223), it expects per-element or per-batch losses, not a scalar. This breaks the parent's reduction logic (lines 291-297). Apply this diff:

```diff
-        # Average class scores
-        loss = torch.mean(all_dice)
-        return loss
+        # Apply reduction
+        if self.reduction == LossReduction.MEAN.value:
+            return torch.mean(all_dice)
+        elif self.reduction == LossReduction.SUM.value:
+            return torch.sum(all_dice)
+        elif self.reduction == LossReduction.NONE.value:
+            return all_dice
+        else:
+            raise ValueError(f'Unsupported reduction: {self.reduction}')
```
132-159: Forward ignores the reduction parameter.

Same issue as `AsymmetricFocalTverskyLoss`: the method always returns `torch.mean(all_ce)` (line 158) regardless of `self.reduction`. When the parent passes `reduction=LossReduction.NONE` (line 230), it needs unreduced losses. Apply this diff:

```diff
-        loss = torch.mean(all_ce)
-        return loss
+        if self.reduction == LossReduction.MEAN.value:
+            return torch.mean(all_ce)
+        elif self.reduction == LossReduction.SUM.value:
+            return torch.sum(all_ce)
+        elif self.reduction == LossReduction.NONE.value:
+            return all_ce
+        else:
+            raise ValueError(f'Unsupported reduction: {self.reduction}')
```
247-248: Shape check occurs before conversion.

Line 247 checks `y_pred.shape != y_true.shape` before the `to_onehot_y` conversion (lines 267-271). Valid label inputs (B1H[WD]) paired with multi-class predictions (BNH[WD]) will fail here, even though `to_onehot_y=True` would convert them correctly. Move the shape check after the `to_onehot_y` conversion block (after line 271).
🪛 Ruff (0.14.3)
monai/losses/unified_focal_loss.py
255-255: Avoid specifying long messages outside the exception class
(TRY003)
258-260: Avoid specifying long messages outside the exception class
(TRY003)
262-264: Avoid specifying long messages outside the exception class
(TRY003)
```python
else:
    if self.use_softmax:
        y_pred = torch.softmax(y_pred.float(), dim=1)
    else:
        y_pred = y_pred.float()
```
Missing sigmoid activation.
Docstring (line 195) states "If False, sigmoid is used," but line 284 only casts to float without applying sigmoid. For multi-channel binary segmentation with use_softmax=False, raw logits are passed to sub-losses, which expect probabilities (see clamping at lines 72, 144).
Apply this diff:
```diff
 else:
     if self.use_softmax:
         y_pred = torch.softmax(y_pred.float(), dim=1)
     else:
-        y_pred = y_pred.float()
+        y_pred = torch.sigmoid(y_pred.float())
```

🤖 Prompt for AI Agents
In monai/losses/unified_focal_loss.py around lines 280 to 284, when
self.use_softmax is False the code only casts y_pred to float but does not apply
sigmoid as the docstring promises; update the branch to apply torch.sigmoid to
the float-cast logits (e.g., y_pred = torch.sigmoid(y_pred.float())) so
downstream sub-losses receive probabilities for multi-channel binary
segmentation, preserving device/dtype semantics as needed.
Fixes #8603
Description
Asymmetric Unified Focal Loss with sigmoid/softmax interface and multiclass support.
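A short usage sketch of the resulting interface (assuming the branch in this PR; values depend on the random inputs):

```python
import torch
from monai.losses import AsymmetricUnifiedFocalLoss

# Binary: single-channel logits are augmented with a background channel internally.
pred = torch.randn(1, 1, 32, 32)
grnd = torch.randint(0, 2, (1, 1, 32, 32))
print(AsymmetricUnifiedFocalLoss(to_onehot_y=True, num_classes=2)(pred, grnd))

# Multiclass: softmax over channels.
pred_mc = torch.randn(1, 3, 32, 32)
grnd_mc = torch.randint(0, 3, (1, 1, 32, 32))
print(AsymmetricUnifiedFocalLoss(to_onehot_y=True, num_classes=3, use_softmax=True)(pred_mc, grnd_mc))
```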
Types of changes
- Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- Documentation updated, tested with the `make html` command in the `docs/` folder.