We currently allow negative padding for TTIR.PadOp and TTNN.PadOp.
This leads to a nasty runtime bug: negative paddings are implicitly converted to large unsigned integers (e.g., -16 as an i32 reinterpreted as unsigned becomes 4294967280), which usually produces the following error, along with inconsistent behaviour:
Output size cannot fit input with offset
Since the TTNN op does not currently support negative padding, we should decompose TTIR.PadOp with negative padding using TTIR_SliceStaticOp.
Repro IR:
#loc1 = loc("p0.1")
module @SyncTensorsGraph.6 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false, ttcore.meshes = #ttcore.meshes<[<"mesh" = 1x1>]>} {
  ttcore.device_module {
    builtin.module @SyncTensorsGraph.6 attributes {mhlo.cross_program_prefetches = [], mhlo.input_output_alias = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false, ttcore.meshes = #ttcore.meshes<[<"mesh" = 1x1>]>} {
      func.func @main(%arg0: tensor<1x1024x64x64xbf16> {ttcore.argument_type = #ttcore.argument_type<input>, ttcore.runtime_tensor_sharding = #ttcore<runtime_tensor_sharding shard_status = <unsharded>, local_shape = tensor<1x1024x64x64xbf16>>, ttir.name = "args_0"} loc("p0.1")) -> (tensor<1x1024x32x32xbf16> {ttcore.runtime_tensor_sharding = #ttcore<runtime_tensor_sharding shard_status = <unsharded>, local_shape = tensor<1x1024x32x32xbf16>>}) {
        %0 = "ttir.pad"(%arg0) <{padding = array<i32: 0, 0, 0, 0, -16, -16, -16, -16>, value = 0.000000e+00 : f32}> : (tensor<1x1024x64x64xbf16>) -> tensor<1x1024x32x32xbf16> loc(#loc2)
        return %0 : tensor<1x1024x32x32xbf16> loc(#loc)
      } loc(#loc)
    } loc(#loc)
  } loc(#loc)
} loc(#loc)
#loc = loc(unknown)
#loc2 = loc("pad.4")

Also, we should file a feature request with the Metal team to support this op configuration.
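For reference, here is a minimal sketch of what the decomposed IR could look like for the repro above; the ttir.slice_static mnemonic and the begins/ends/step attribute names are assumptions about TTIR_SliceStaticOp's printed form, so the exact syntax may differ. A pad of -16 on both sides of dims 2 and 3 of a 1x1024x64x64 tensor is equivalent to taking indices [16, 48) along those dims, since 64 - 16 = 48 and 48 - 16 = 32.

// Hypothetical decomposition of the negative-padding ttir.pad above:
// per dim, begin = -pad_low (when pad_low < 0), end = dim_size + pad_high (when pad_high < 0).
%0 = "ttir.slice_static"(%arg0) <{begins = [0 : i32, 0 : i32, 16 : i32, 16 : i32], ends = [1 : i32, 1024 : i32, 48 : i32, 48 : i32], step = [1 : i32, 1 : i32, 1 : i32, 1 : i32]}> : (tensor<1x1024x64x64xbf16>) -> tensor<1x1024x32x32xbf16>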