
Inference operator for "Algo bundles" trained with Auto3DSeg #418

@nvahmadi

Description

Is your feature request related to a problem? Please describe.
I would like to create a MAP (MONAI Application Package) from a MONAI Algo "bundle" that was trained with Auto3DSeg. This does not seem to be supported yet, because Auto3DSeg does not produce regular bundles. For example, the monai_bundle_inference_operator first checks whether there is a JSON/YAML config file, but Auto3DSeg Algos are structured differently, with Python files for training/inference under the ./scripts subfolder and an execution model based on Python Fire.
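
For illustration, this is roughly what such a trained Algo directory looks like in my work_dir (segresnet, fold 0; exact contents vary by template and MONAI version), clearly not the flat config-driven layout the bundle operator expects:

/path/to/workdir/segresnet_0/
├── algo_object.pkl   # pickled Algo object (used in the snippet further below)
├── configs/          # YAML hyper-parameter/config files
├── scripts/          # Python training/inference entry points (driven by Python Fire)
└── model_fold0/      # trained weights/checkpoints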

Describe the solution you'd like
An inference operator for MONAI "bundles" that are Algo directories trained with Auto3DSeg, e.g. monai_algo_inference_operator.py. It would be great to be able to take trained Algos and deploy them directly. 🙂

Describe alternatives you've considered
I've spent a few hours trying to understand all the steps performed in monai_bundle_inference_operator.py and to implement my own operator class by inheriting from it, i.e. class MonaiAlgoInferenceOperator(MonaiBundleInferenceOperator). But it turned out to be too complex (or would have taken me too long) because of some subtle differences: for example, MonaiBundleInferenceOperator assumes the inference is separated into three parts (pre-processing, compute, and post-processing), whereas a trained Auto3DSeg Algo object encapsulates all of these steps in a single inference class, see below. Maybe there is a way to re-use that and create a very simple InferenceOperator for the Deploy App SDK. Hopefully the code snippets below illustrate what I mean, and maybe even help towards a solution.
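
To make the mismatch concrete, here is how I understand the two call patterns (method names paraphrased from my reading of the two code bases, so take this with a grain of salt):

# MonaiBundleInferenceOperator: three explicitly separated stages
data = self.pre_process(...)    # transforms from the bundle config
pred = self.predict(data)       # model forward pass
out = self.post_process(pred)   # inverse/post transforms

# Auto3DSeg Algo: one encapsulated call that does all three internally
pred = algo.get_inferer().infer(image_filepath)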

Additional context
In principle, inference with an Algo object is easy, as the following code snippet shows. Let's assume I want to run inference with the trained fold 0 of the Algo template segresnet, on a NIfTI file located at image_filepath on disk:

import os
import pickle
import sys

# add the algorithm_templates folder to the system path
algo_templates_dir = '/path/to/algorithm_templates'
sys.path.append(os.path.abspath(algo_templates_dir))

# load the algorithm template class from the templates folder
import segresnet

# read the pickle file of the trained algo object, here from fold 0
pkl_filename = '/path/to/workdir/segresnet_0/algo_object.pkl'
with open(pkl_filename, "rb") as f_pi:
    data_bytes = f_pi.read()
data = pickle.loads(data_bytes)
algo_bytes = data.pop("algo_bytes")
algo = pickle.loads(algo_bytes)

# get the inferer part of the algo and run inference
# (note: this already includes pre- and post-processing!)
inferer = algo.get_inferer()
pred = inferer.infer(image_filepath)
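
Since inferer.infer() takes a file path directly, batch inference over a folder is then trivial. A minimal sketch reusing the inferer from above (the folder path is made up for illustration):

from glob import glob

# 'inferer' is the object obtained via algo.get_inferer() in the snippet above
for path in sorted(glob('/path/to/images/*.nii.gz')):
    # pre-processing, model forward pass, and post-processing in one call
    pred = inferer.infer(path)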

I still haven't fully understood the requirements for creating an inference operator for the Deploy App SDK. But maybe it's possible to inherit from InferenceOperator directly and implement a very simple operator like this:

import pickle

# assumption on my side: this is where InferenceOperator is exported in the SDK
from monai.deploy.operators import InferenceOperator


class MonaiAlgoInferenceOperator(InferenceOperator):
    def __init__(
        self,
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        # read pickle file of trained algo object, let's assume from fold 0
        pkl_filename = '/path/to/workdir/segresnet_0/algo_object.pkl'
        with open(pkl_filename, "rb") as f_pi:
            data_bytes = f_pi.read()
        data = pickle.loads(data_bytes)
        algo_bytes = data.pop("algo_bytes")
        self.algo = pickle.loads(algo_bytes)

        # choose the inference part of the algo (this also loads the model weights)
        self.inferer = self.algo.get_inferer()

    # imho, the abstract pre-process and post-process methods can stay undefined

    # we only need to define the compute function
    def compute(self, op_input, op_output, context):
        # not sure whether/how the inferer should be pulled from context?
        # (I already have it in memory as self.inferer)
        # ...

        # not sure what needs to be done to op_input before model ingestion
        # ...

        # run inference, incl. pre- and post-processing
        pred = self.inferer.infer(op_input)

        # not sure what needs to be done to pred to become op_output
        # op_output = do_something(pred)

        # send output, analogous to MonaiBundleInferenceOperator
        # (output_dict, name, first_input_v are copied from there and undefined here)
        self._send_output(output_dict[name], name, first_input_v.meta, op_output, context)
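
To make the open questions in compute() a bit more concrete, here is one way I could imagine bridging the SDK types and the Algo inferer. Everything here is assumption on my part: that the upstream operator delivers an in-memory Image under the label "image", that inferer.infer() wants a path on disk (so I round-trip through a temporary NIfTI file via nibabel), and that "pred" is an acceptable output label:

import tempfile
from pathlib import Path

import nibabel as nib
import numpy as np

def compute(self, op_input, op_output, context):
    # pull the in-memory image from the upstream operator (label "image" is my guess)
    image = op_input.get("image")
    arr = image.asnumpy()
    # fall back to an identity affine if the metadata doesn't carry one
    affine = np.asarray(image.metadata().get("affine", np.eye(4)))

    # the Algo inferer expects a file path, so write a temporary NIfTI file
    with tempfile.TemporaryDirectory() as tmp_dir:
        tmp_path = Path(tmp_dir) / "input.nii.gz"
        nib.save(nib.Nifti1Image(arr, affine), str(tmp_path))

        # run inference, incl. pre- and post-processing
        pred = self.inferer.infer(str(tmp_path))

    # hand the prediction downstream under a guessed output label
    op_output.set(pred, "pred")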

I know this is very amateurish, apologies 😅
Looking forward to more expert suggestions!

Labels: enhancement (New feature or request)