I get the expected segmentation when using the MediaPipe APIs (selfie multi-class), but I can't reproduce the same result when running the model standalone (without the MediaPipe APIs) in TensorFlow or ONNX (TFLite model converted to ONNX). I suspect this is because I'm missing some preprocessing steps before feeding the image for inference, or some postprocessing steps on the model outputs. Are there any required pre/post-processing steps? For example, I know I need to apply softmax to the model outputs.
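
For reference, this is roughly what I'm doing standalone. It's a minimal sketch with my assumptions spelled out: the ONNX file name, the 256x256 input size, the [0, 1] float normalization, and the 1x256x256x6 output layout are all guesses on my part based on the selfie multi-class model, not something I've confirmed from MediaPipe's source:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Assumed: model converted from selfie_multiclass_256x256.tflite.
session = ort.InferenceSession("selfie_multiclass_256x256.onnx")
input_name = session.get_inputs()[0].name

# Preprocessing (assumed): BGR -> RGB, resize to 256x256,
# scale to float32 in [0, 1], add a batch dimension (NHWC).
image = cv2.imread("selfie.jpg")
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
resized = cv2.resize(rgb, (256, 256), interpolation=cv2.INTER_LINEAR)
tensor = resized.astype(np.float32)[np.newaxis, ...] / 255.0

# Inference: output assumed to be 1x256x256x6 per-class scores.
logits = session.run(None, {input_name: tensor})[0]

# Postprocessing: softmax over the class channel, then argmax
# to get a per-pixel class index map.
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)
mask = probs[0].argmax(axis=-1).astype(np.uint8)
```

The resulting mask differs from what the MediaPipe API produces, so I assume at least one of the assumed steps above is wrong or incomplete.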