Describe the bug
When operating on tensors inside a for loop using the tensor dialect, such as inserting values into slices, the module cannot be lowered to LLVM correctly. The tests report the error "unknown: operand #0 does not dominate this use" along with the offending operation.
To Reproduce
In tests/test_linalg.py, def test_math_scalar():
def kernel(A: float32[M, K], B: float32[K, N]) -> float32[M, N]:
    C: float32[M, N] = 0.0
    D: float32[M, N] = 0.0
    for i, j in allo.grid(M, N):
        for k in allo.reduction(K):
            C[i, j] += A[i, k] * B[k, j]
    for i, j in allo.grid(M, N):
        D[i, j] = (allo.exp(C[i, j]) + allo.log(C[i, j])) / C[i, j]
    return D
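The report also notes that the module produced by customize() is already the buggy IR (the collapsed "Buggy output" in the original report). Below is a minimal driver sketch for the kernel above, not the exact test body: the shape values (10 and 20 taken from the tensor<10x20xf32> in the error, K guessed), the enable_tensor flag, and the NumPy invocation are assumptions.

import numpy as np
import allo
from allo.ir.types import float32

# Assumed shapes; M and N match the tensor<10x20xf32> in the error, K is a guess.
# These must be defined before the kernel above is defined.
M, N, K = 10, 20, 15

s = allo.customize(kernel, enable_tensor=True)  # tensor-dialect path; flag name is an assumption
print(s.module)                                 # the IR right after customize() is the buggy module
mod = s.build()                                 # lowering to LLVM raises the dominance error here

np_A = np.random.rand(M, K).astype(np.float32)
np_B = np.random.rand(K, N).astype(np.float32)
np_D = mod(np_A, np_B)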
File "/home/md2249/allo/allo/passes.py", line 25, in _mlir_lower_pipeline
mlir_pass_manager.parse(pipeline).run(module.operation)
hcl_mlir._mlir_libs._site_initialize.<locals>.MLIRError: Failure while executing pass pipeline:
error: unknown: operand #0 does not dominate this use
note: unknown: see current operation: "func.return"(%9) : (tensor<10x20xf32>) -> ()
note: unknown: operand defined here (op in a child region)
Expected behavior
The following pass pipeline is already used in allo/backend/llvm.py, class LLVMModule:
pm = PassManager.parse(
    "builtin.module("
    # used for lowering tensor.empty
    "empty-tensor-to-alloc-tensor,"
    # translate tensor dialect (virtual) to memref dialect (physical)
    "one-shot-bufferize{allow-return-allocs bufferize-function-boundaries},"
    # used for lowering memref.subview
    "expand-strided-metadata,"
    # common lowering passes
    "func.func(convert-linalg-to-affine-loops),lower-affine"
    ")"
)
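For reference, the same pipeline can be re-run standalone on a dump of the IR, which reproduces the dominance error outside the test suite. This sketch assumes the standard MLIR Python binding layout shipped as hcl_mlir and a hypothetical dump file name; if the dump still contains custom (non-upstream) ops, their dialect would have to be registered first.

from hcl_mlir.ir import Context, Module
from hcl_mlir.passmanager import PassManager

# "dumped_module.mlir" is a hypothetical file holding the IR printed after customize().
with Context():
    module = Module.parse(open("dumped_module.mlir").read())
    pm = PassManager.parse(
        "builtin.module("
        "empty-tensor-to-alloc-tensor,"
        "one-shot-bufferize{allow-return-allocs bufferize-function-boundaries},"
        "expand-strided-metadata,"
        "func.func(convert-linalg-to-affine-loops),lower-affine"
        ")"
    )
    pm.run(module.operation)  # raises MLIRError: operand #0 does not dominate this use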
The loop cases should be lowered to LLVM without reporting this error.