Improves implementation of aten_index_put #2641
base: main
Conversation
… xadupre/input_put
Codecov Report: ❌ Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #2641      +/-   ##
==========================================
- Coverage   70.43%   70.12%    -0.32%
==========================================
  Files         224      225        +1
  Lines       26778    27168      +390
  Branches     2680     2738       +58
==========================================
+ Hits        18862    19052      +190
- Misses       6985     7166      +181
- Partials      931      950       +19

☔ View full report in Codecov by Sentry.
Adding some pointers/info for my own clarification:
expand_value_shape = []
for i, ind in enumerate(indices):
    if isinstance(ind, torch.onnx._internal.exporter._tensors.SymbolicTensor):  # pylint: disable=protected-access
        ind.dtype = ir.DataType.INT64
Why do we need the above line? Just wondering ... shouldn't it already have dtype set?
Maybe it is not needed anymore, but it was when I wrote this PR.
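For context, a minimal self-contained sketch of the check under discussion (the helper name _ensure_int64_indices is hypothetical): ONNX Gather/Scatter-style ops expect INT64 indices, so a symbolic tensor from the exporter that is missing dtype metadata gets pinned to INT64.

import torch
from onnxscript import ir

def _ensure_int64_indices(indices):
    # Hypothetical helper: pin missing dtype metadata on symbolic index
    # tensors, since ONNX ScatterND/GatherND expect INT64 indices.
    for ind in indices:
        if isinstance(
            ind, torch.onnx._internal.exporter._tensors.SymbolicTensor
        ):  # pylint: disable=protected-access
            ind.dtype = ir.DataType.INT64
    return indices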
…script into xadupre/input_put
perm[not_none[0]], perm[0] = perm[0], perm[not_none[0]]
return op.Transpose(
    op.ScatterND(
        op.Transpose(x, perm=perm),
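For intuition, a minimal NumPy sketch (illustrative only, not the PR's code) of the pattern above: when the advanced index sits on a non-leading axis, swap that axis to the front, scatter along axis 0, then swap back. A single-swap permutation is its own inverse, so transposing twice restores the original layout.

import numpy as np

def index_put_along_axis(x, axis, index, values):
    # Swap the indexed axis to the front (analogue of op.Transpose(x, perm=perm)).
    perm = list(range(x.ndim))
    perm[axis], perm[0] = perm[0], perm[axis]
    xt = x.transpose(perm).copy()
    # Scatter along the leading axis (analogue of op.ScatterND);
    # `values` must be supplied in the transposed layout.
    xt[index] = values
    # Applying the same swap again restores the original axis order.
    return xt.transpose(perm)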
Check notice (Code scanning / CodeQL): Explicit returns mixed with implicit (fall through) returns.
Copilot Autofix (AI):
To fix the mixed explicit/implicit returns, every code path in aten_index_bool should end with an explicit return. The explicit returns are all of type TensorType, as the function signature declares, so the implicit fall-through at the end should produce a value explicitly as well. For an unreachable or invalid case (e.g., all indices are None and the loop never produces a result), raising an informative exception is the most robust choice; an explicit return None also satisfies the CodeQL rule, provided downstream code handles None safely.
Thus, add return None or raise ValueError("No valid indices provided to aten_index_bool") at the end of the function; choose return None to preserve existing behaviour unless a stricter contract is required.
Only lines inside aten_index_bool (lines 4366–4404) of onnxscript/function_libs/torch_lib/ops/core.py need fixing.
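As a generic illustration of the pattern CodeQL flags (a hypothetical function, unrelated to the PR): one path returns explicitly while the fall-through returns None implicitly, and the fix is to make the fall-through explicit.

def first_positive(xs):
    for x in xs:
        if x > 0:
            return x  # explicit return on success
    return None  # previously an implicit fall-through; now explicit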
@@ -4401,8 +4401,8 @@
     for _ in range(count_of_none):
         result = op.Transpose(result, perm=trans_perm)
     return result
+    return None
def aten_index_add(
    self: TensorType, dim: int, index: TensorType, source: TensorType, alpha: float = 1
) -> TensorType:
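For reference, torch.index_add gives the semantics this ONNX function must reproduce: self[index[i]] += alpha * source[i] along dim, with duplicate indices accumulating. A small illustrative example (values chosen arbitrarily):

import torch

x = torch.zeros(3, 4)
index = torch.tensor([0, 2, 0])  # a duplicate index accumulates
source = torch.ones(3, 4)
out = x.index_add(0, index, source, alpha=2.0)
# Row 0 receives 2*source[0] + 2*source[2] (all entries 4.0);
# row 2 receives 2*source[1] (all entries 2.0); row 1 stays 0.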
From #2606.