Conversation

@xadupre (Member) commented Oct 17, 2025:

From #2606.

codecov bot commented Oct 17, 2025:

Codecov Report

❌ Patch coverage is 87.03704% with 7 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.12%. Comparing base (5be9d3b) to head (db53ee1).
⚠️ Report is 9 commits behind head on main.
✅ All tests successful. No failed tests found.

Files with missing lines                       | Patch % | Lines
onnxscript/function_libs/torch_lib/ops/core.py | 87.03%  | 4 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2641      +/-   ##
==========================================
- Coverage   70.43%   70.12%   -0.32%     
==========================================
  Files         224      225       +1     
  Lines       26778    27168     +390     
  Branches     2680     2738      +58     
==========================================
+ Hits        18862    19052     +190     
- Misses       6985     7166     +181     
- Partials      931      950      +19     

☔ View full report in Codecov by Sentry.

@xadupre xadupre marked this pull request as ready for review October 24, 2025 15:40
@xadupre xadupre enabled auto-merge (squash) October 24, 2025 15:40
@gramalingam (Collaborator) commented:

Adding some pointers/info for my own clarification:

```python
expand_value_shape = []
for i, ind in enumerate(indices):
    if isinstance(ind, torch.onnx._internal.exporter._tensors.SymbolicTensor):  # pylint: disable=protected-access
        ind.dtype = ir.DataType.INT64
```

Why do we need the above line? Just wondering ... shouldn't it already have dtype set?

@xadupre (Member, Author) replied:

It may not be useful anymore, but it was needed when I wrote this PR.
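
For context on why an explicit INT64 annotation comes up at all (a hedged guess at the surrounding lowering, not a claim about this PR's exact code): boolean indices are conventionally lowered to integer coordinates via ONNX NonZero, whose output is specified as INT64, and a freshly created symbolic tensor for that output may not carry dtype metadata yet. A minimal numpy sketch of the equivalent semantics, with `x` and `mask` as placeholder names:

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
mask = x % 2 == 0                    # boolean index with the same shape as x

# ONNX NonZero mirrors np.nonzero: it yields the coordinates of True
# entries as a [rank, count] tensor; ONNX mandates INT64 for the output.
coords = np.stack(np.nonzero(mask))  # shape (2, 6), integer dtype

# GatherND consumes [count, rank] coordinates, hence a transpose in ONNX.
gathered = x[tuple(coords)]          # numpy equivalent of GatherND(x, coords.T)
print(gathered)                      # [ 0  2  4  6  8 10]
```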

```python
perm[not_none[0]], perm[0] = perm[0], perm[not_none[0]]
return op.Transpose(
    op.ScatterND(
        op.Transpose(x, perm=perm),
```
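
The truncated snippet above follows a common trick: swap the first indexed axis to position 0 with a permutation, scatter along the leading axis, then apply the same permutation again to transpose back (a permutation that swaps two axes is its own inverse). A standalone numpy sketch of that trick, where `scatter_along_dim`, `dim`, `indices`, and `updates` are hypothetical names for illustration:

```python
import numpy as np

def scatter_along_dim(x, dim, indices, updates):
    # Build a permutation that swaps axis `dim` with axis 0.
    perm = list(range(x.ndim))
    perm[dim], perm[0] = perm[0], perm[dim]

    y = np.transpose(x, perm).copy()  # move the target axis to the front
    y[indices] = updates              # ScatterND-like write along axis 0
    return np.transpose(y, perm)      # the same swap permutation undoes itself

x = np.zeros((2, 3, 4))
out = scatter_along_dim(x, dim=1, indices=np.array([0, 2]), updates=1.0)
print(out[:, [0, 2], :].all())        # True: slices 0 and 2 along axis 1 were written
```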

Check notice: Code scanning / CodeQL

Explicit returns mixed with implicit (fall through) returns (Note)

Mixing implicit and explicit returns may indicate an error, as implicit returns always return None.
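
To illustrate the class of issue CodeQL is flagging (a generic sketch, not the function under review):

```python
def first_positive(values: list[int]) -> int:
    for v in values:
        if v > 0:
            return v  # explicit return on the success path
    # Implicit fall-through: if no element is positive, the function
    # silently returns None, contradicting the declared `int` return
    # type. The fix is to make the failure path explicit, e.g.
    # `raise ValueError("no positive value found")` or `return None`
    # with a widened annotation.
```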

Copilot Autofix (AI) commented about 13 hours ago:

To fix the mixed explicit/implicit returns, every code path in aten_index_bool should end with an explicit return. All explicit returns have type TensorType, as the function signature declares, so the implicit fall-through at the end should either return a value of that type, raise an exception for unreachable or invalid cases (e.g., all indices are None and the loop never yields a result), or return None explicitly if downstream code handles that safely. Raising an informative error is the most robust choice, but an explicit return None also satisfies CodeQL while preserving the existing behaviour.

Thus, add return None or raise ValueError("No valid indices provided to aten_index_bool") at the end of the function; choose return None to preserve the existing behaviour unless a stricter contract is required.

Only lines inside aten_index_bool (lines 4366–4404) in onnxscript/function_libs/torch_lib/ops/core.py need fixing.


Suggested changeset 1: onnxscript/function_libs/torch_lib/ops/core.py

Autofix patch. Run the following command in your local git repository to apply it:
cat << 'EOF' | git apply
diff --git a/onnxscript/function_libs/torch_lib/ops/core.py b/onnxscript/function_libs/torch_lib/ops/core.py
--- a/onnxscript/function_libs/torch_lib/ops/core.py
+++ b/onnxscript/function_libs/torch_lib/ops/core.py
@@ -4401,8 +4401,8 @@
                 for _ in range(count_of_none):
                     result = op.Transpose(result, perm=trans_perm)
                 return result
+        return None
 
-
 def aten_index_add(
     self: TensorType, dim: int, index: TensorType, source: TensorType, alpha: float = 1
 ) -> TensorType:
EOF
Unable to commit as this autofix suggestion is now outdated
