[TOSA]: Fix float to integer cast for torch.ops.aten.to lowering (#3946)
Conversation
Hi @sahas3, for the Torch float to integer cast behavior, did you observe it to be the same for all Torch casting, or just …
I think …
If Torch mandates round towards zero, can that not be synthetically expressed here, @sahas3?
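For illustration, a minimal sketch of how round-towards-zero can be synthesized from rounding ops TOSA does provide (floor, ceil, and a select), written here in Python/PyTorch to model the op-level decomposition. `trunc_via_floor_ceil` is a hypothetical helper for this sketch, not the code this PR lands:

```python
import torch

def trunc_via_floor_ceil(x: torch.Tensor, dtype=torch.int32) -> torch.Tensor:
    # Round towards zero built from floor/ceil plus a select:
    # floor for non-negative values, ceil for negative values.
    # This mirrors a decomposition expressible with tosa.floor,
    # tosa.ceil, and tosa.select before the final integer cast.
    truncated = torch.where(x >= 0, torch.floor(x), torch.ceil(x))
    return truncated.to(dtype)

# Matches PyTorch's own float -> int conversion semantics:
x = torch.tensor([2.7, -2.7, 1.5, -1.5])
assert torch.equal(trunc_via_floor_ceil(x), x.to(torch.int32))
```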
Looks like we had a misunderstanding earlier, @sahas3. Approving.
The behavior of float -> integer cast in PyTorch (though I haven't found the actual code implementing the cast) appears to be, based on the results PyTorch produces, round towards zero (i.e. the same as `arith.fptosi`/`arith.fptoui`). Currently we only emit `tosa.cast` for this operation, but as per the spec https://www.mlplatform.org/tosa/tosa_spec.html#_cast the rounding performed for float -> integer is round to nearest integer (not towards zero). Hence, the current TOSA lowering for `torch.ops.aten.to` produces incorrect answers.
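For reference, a small standalone repro of the divergence described above (assuming a standard PyTorch install). The printed values are what PyTorch produces; the comment notes what round-to-nearest `tosa.cast` semantics would yield instead:

```python
import torch

# PyTorch float -> integer casts truncate toward zero.
x = torch.tensor([2.7, -2.7, 1.5, -1.5])
print(x.to(torch.int32))
# tensor([ 2, -2,  1, -1], dtype=torch.int32)
#
# A plain tosa.cast rounds to nearest instead, which would give
# 3, -3, 2, -2 for these inputs. That is the mismatch this PR fixes.
```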