
Use SlidingWindowInferer on image already loaded in Python? #69

Open
@gdurif

Description

Hi,

To integrate the HoVer-Net model into a segmentation pipeline that processes images as NumPy arrays already loaded in memory, is it possible to use SlidingWindowInferer() on an image that is already loaded into Python?

I want to use the pre-trained model for segmentation only (I am not interested in cell type classification), because I do not have annotated data for fine-tuning: we have real experimental images that need segmentation, and thus no ground truth.

So far, following the README and the tutorial notebooks, I have been able to either:

  1. Feed a cropped image to the model (cf. code below), however:
  • I do not know how to post-process the output to get the actual segmentation, as I would with a SlidingWindowInferer();
  • this would require writing the sliding window algorithm by hand to process the whole image (see the sketch after the code below);
  • I am not sure which type of normalization I should apply.
import os

import numpy as np
import torch

from cellpose.io import imread        # pip install cellpose
import cellseg_models_pytorch as csmp # pip install cellseg-models-pytorch

# image
image_dir = "."       # edit if necessary
image_name = "TCGA-E2-A14V-01Z-00-DX1"
ext = "png"

# image_path = os.path.join(crop_image_dir, f"{image_name}.{ext}")
image_path = os.path.join(image_dir, f"{image_name}.{ext}")

# load image
image = imread(image_path)

# hover-net model
model = csmp.models.hovernet_base(type_classes=10)

# image conversion: HWC uint8 -> NCHW float tensor, cropped to 256x256 and only scaled to [0, 1]
image_tensor = torch.from_numpy(
    (image[np.newaxis, :, :].transpose([0, 3, 1, 2])[:, :, :256, :256] /
     255).astype(np.float32)
)

# forward
res = model(image_tensor)
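
For what it is worth, here is a rough sketch of what I imagine the manual route would look like: a channel-wise percentile normalization (assuming that this is what normalization="percentile" below means and what the pre-trained weights expect) plus a naive sliding window that averages the overlapping raw outputs. The dict-style model output and the "inst" key are assumptions on my side, and this does not reproduce the hovernet instance post-processing done by SlidingWindowInferer:

def percentile_normalize(img, low=1, high=99):
    """Channel-wise min-max scaling between two percentiles (my assumption
    about what "percentile" normalization means, not taken from the library)."""
    img = img.astype(np.float32)
    lo = np.percentile(img, low, axis=(0, 1), keepdims=True)
    hi = np.percentile(img, high, axis=(0, 1), keepdims=True)
    return np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0).astype(np.float32)


def sliding_window_forward(model, image, patch=256, stride=128):
    """Run `model` on overlapping patches of an HxWxC uint8 image and average
    the overlapping raw outputs (edge/remainder patches are not padded here)."""
    h, w, _ = image.shape
    norm = percentile_normalize(image)
    out_sum, weight = None, np.zeros((h, w), dtype=np.float32)
    model.eval()
    with torch.no_grad():
        for y in range(0, max(h - patch, 0) + 1, stride):
            for x in range(0, max(w - patch, 0) + 1, stride):
                tile = norm[y:y + patch, x:x + patch]
                inp = torch.from_numpy(tile.transpose(2, 0, 1)[None])
                pred = model(inp)  # assumed: dict of raw outputs per branch
                inst = torch.softmax(pred["inst"], dim=1)[0].numpy()  # "inst" key assumed
                if out_sum is None:
                    out_sum = np.zeros((inst.shape[0], h, w), dtype=np.float32)
                out_sum[:, y:y + patch, x:x + patch] += inst
                weight[y:y + patch, x:x + patch] += 1.0
    return out_sum / np.maximum(weight, 1.0)


# inst_prob = sliding_window_forward(model, image)  # per-class probability map, HxW

This still leaves the instance post-processing (the hovernet watershed on the HV maps) to be done by hand, which is exactly what I would like to avoid.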
  2. Use the SlidingWindowInferer() to process an image stored on disk (cf. code below), however:
  • the segmentation result is rubbish (maybe related to the following point);
  • since I am using the pre-trained model, which checkpoint_path am I supposed to provide?
# define the final activations for each model output
out_activations = {"hovernet": "tanh", "type": "softmax", "inst": "softmax"}

# define whether to weight down the predictions at the image boundaries;
# models typically perform worst at patch boundaries, and with overlapping
# patches this causes artifacts that can be reduced by down-weighting the
# prediction boundaries
out_boundary_weights = {"hovernet": True, "type": False, "inst": False}

# define the inferer
inferer = csmp.inference.SlidingWindowInferer(
    model=model,
    input_path=image_dir,
    checkpoint_path=None,
    out_activations=out_activations,
    out_boundary_weights=out_boundary_weights,
    instance_postproc="hovernet",               # THE POST-PROCESSING METHOD
    normalization="percentile",                 # same normalization as in training
    patch_size=(256, 256),
    stride=128,
    padding=80,
    batch_size=1,
    device="cpu"  # or "cuda"
)

# inference
inferer.infer()

# result
inferer.out_masks
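
For the record, the only workaround I currently see for combining SlidingWindowInferer with an image that is already in memory is to write the array to a temporary directory and let the inferer read it back from disk (the temporary-directory round-trip and the Pillow dependency below are my own workaround, not something provided by the library; model, out_activations and out_boundary_weights are as defined above):

import os
import tempfile

from PIL import Image  # pip install pillow

with tempfile.TemporaryDirectory() as tmp_dir:
    # `image` is the HxWxC uint8 NumPy array already loaded in memory
    Image.fromarray(image).save(os.path.join(tmp_dir, "in_memory_image.png"))

    tmp_inferer = csmp.inference.SlidingWindowInferer(
        model=model,
        input_path=tmp_dir,
        checkpoint_path=None,
        out_activations=out_activations,
        out_boundary_weights=out_boundary_weights,
        instance_postproc="hovernet",
        normalization="percentile",
        patch_size=(256, 256),
        stride=128,
        padding=80,
        batch_size=1,
        device="cpu",
    )
    tmp_inferer.infer()
    results = tmp_inferer.out_masks  # read before the temporary directory is removed

Obviously this costs an unnecessary round-trip to disk for every image, which is what I would like to avoid in the pipeline.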

In this example, I use the TCGA-E2-A14V-01Z-00-DX1.png image from the MoNuSeg dataset (available under the CC BY-NC-SA 4.0 license):

[image: TCGA-E2-A14V-01Z-00-DX1]

Thanks in advance.

Labels: enhancement (New feature or request)
