When converting from the torch-onnx dialect to the torch dialect, the float16 LayerNormalization op fails to convert. The root cause is a dtype mismatch: the operand type is f16, but stash_type is absent from the op's attribute list, and the conversion code defaults stash_type to 1 (i.e., float32), which causes a verification error.
%3 = torch.operator "onnx.LayerNormalization"(%0, %1, %2) {torch.onnx.axis = -3 : si64, torch.onnx.epsilon = 9.99999997E-7 : f32} : (!torch.vtensor<[144,32,32,16],f16>, !torch.vtensor<[32,32,16],f16>, !torch.vtensor<[32,32,16],f16>) -> !torch.vtensor<[144,32,32,16],f16>
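The mismatch can be illustrated with a small stand-alone sketch. The helper `resolve_stash_type` and the dict-based attribute lookup are illustrative stand-ins, not torch-mlir's actual code; the dtype codes (FLOAT = 1, FLOAT16 = 10) are the real values from `onnx.TensorProto`:

```python
# onnx.TensorProto dtype codes (from the ONNX spec)
FLOAT = 1     # f32
FLOAT16 = 10  # f16

def resolve_stash_type(attrs):
    """Hypothetical stand-in for the importer's attribute lookup:
    a missing stash_type falls back to 1 (f32), per the ONNX spec."""
    return attrs.get("stash_type", FLOAT)

# Attributes exactly as in the MLIR op above: no stash_type present.
attrs = {"axis": -3, "epsilon": 9.99999997e-7}
stash = resolve_stash_type(attrs)
operand_dtype = FLOAT16  # operands are !torch.vtensor<...,f16>

# The mismatch that trips verification: f16 operands vs. an implied f32 stash.
print(stash, operand_dtype, stash == operand_dtype)  # 1 10 False
```

One way to sidestep the default would be to emit stash_type explicitly (e.g., `stash_type = 10` for f16) when the model is exported, so the importer never has to fall back to the f32 default.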
The torch-onnx MLIR file was produced from the ONNX model with the torch_mlir.tools.export_onnx module.