
Commit f01b40f

btbest and k-dominik authored
Object classification docs: Update and improve export section (#256)
Co-authored-by: Dominik Kutra <[email protected]>
1 parent ca4f3e5 commit f01b40f

File tree

2 files changed: +47 -30 lines changed

documentation/objects/objects.md

+47 -30
@@ -17,49 +17,46 @@ weight: 1

As the name suggests, the object classification workflow aims to classify full *objects*, based on object-level features and user annotations.
An *object* in this context is a set of pixels that belong to the same instance.
-In order to do so, the workflow needs *segmentation* images besides the usual raw image data, that can e.g. be generated with the [Pixel Classification Workflow].
-Depending on the availability of these segmentation images, the user can choose between three flavors of object classification workflow, which differ by their input data:
+Object classification requires a second input besides the usual raw image data: an image that indicates for each pixel whether it belongs to an object or not, i.e. pixel predictions, a segmentation, or a label image.
+This can be obtained e.g. using the [Pixel Classification Workflow].
+The workflow exists in two variants to handle different types of second input:

-* Pixel Classification + Object Classification
* Object Classification [Inputs: Raw Data, Pixel Prediction Map]
* Object Classification [Inputs: Raw Data, Segmentation]

-**Size Limitations:**
+The combined "Pixel Classification + Object Classification" workflow (found under "Other Workflows") is primarily intended for demonstration purposes and its use in real projects is discouraged.
+We instead recommend using the two workflows separately, exporting probability maps from the Pixel Classification workflow and using them as input for the Object Classification workflow.

-In the current version of ilastik, computations on the **training** images are not performed lazily -- the entire image is processed at once.
-This means you can't use enormous images for training the object classifier.
-However, once you have created a satisfactory classifier using one or more small images, you can use the "Blockwise Object Classification"
-feature to run object classification on much larger images (prediction only -- no training.)
+**Size Limitation:**

-<a href="figs/ilastik_start_screen.png" data-toggle="lightbox"><img src="figs/ilastik_start_screen.png" class="img-responsive" /></a>
-
-### Pixel Classification + Object Classification
-This is a combined workflow, which lets you start from the raw data, perform pixel classification as described
-in the
-[Pixel Classification workflow docs]({{site.baseurl}}/documentation/pixelclassification/pixelclassification.html)
-and then thresholding the probability maps to obtain a segmentation that you then use in Object Classification.
-This workflow is primarily meant for demo purposes.
-For serious projects, we recommend to use the two workflows, [Pixel Classification]({{site.baseurl}}/documentation/pixelclassification/pixelclassification.html) and Object Classification separately using the generated output form the former as an additional input in the latter one.
+For object classification, images used in _training_ have to be small enough to fit entirely into your machine's RAM.
+However, once you have created a satisfactory classifier using one or more small images (or cutouts from your complete dataset), you can use the "Blockwise Object Classification"
+feature to run object classification on much larger images (prediction only - no training).

-<a href="figs/input_pixel_class.png" data-toggle="lightbox"><img src="figs/input_pixel_class.png" class="img-responsive" /></a>
+<a href="figs/ilastik_start_screen.png" data-toggle="lightbox"><img src="figs/ilastik_start_screen.png" class="img-responsive" /></a>

### Object Classification [Inputs: Raw Data, Pixel Prediction Map]
-You should choose this workflow if you have pre-computed probability
-maps.
-The data input applet of this workflow expects you to load the probability maps in addition to the raw data:
+You should choose this workflow if you have a probability map, i.e. each pixel value is the pixel's probability to belong to an object.
+To obtain a segmentation from this input, the workflow includes a step for thresholding the probabilities.
+If you use the Pixel Classification workflow to identify objects, we recommend you export the probability maps there and then continue with this object classification workflow.
+
+Load the probability maps in addition to the raw data in the Input Data step:

<a href="figs/input_prediction_image.png" data-toggle="lightbox"><img src="figs/input_prediction_image.png" class="img-responsive" /></a>

### Object Classification [Inputs: Raw Data, Segmentation]
-This workflow should be used if you already have a binary segmentation image.
+This workflow should be used if you already have a binary segmentation or a label image.
+Note that background pixels must have the value 0.
+
The image should be loaded in the data input applet:

<a href="figs/input_segmentation_image.png" data-toggle="lightbox"><img src="figs/input_segmentation_image.png" class="img-responsive" /></a>

## From probabilities to a segmentation - "Threshold and Size Filter" applet
-If you already have binary segmentation images, skip this section.
+This section only applies if you are in the "Object Classification [Inputs: Raw Data, Pixel Prediction Map]" workflow.

Suppose we have a probability map for a 2-class classification, which looks like this:
+
<a href="figs/pixel_results.png" data-toggle="lightbox"><img src="figs/pixel_results.png" class="img-responsive" /></a>

The basic idea of thresholding is to answer a question for _every pixel_ in an image:
@@ -70,6 +67,7 @@ In this applet this continuous range is transferred into a binary one, containin
**Note:** To see the results of changing the parameter settings in this applet, press the "Apply" button.

There are two algorithms you can choose from to threshold your data: _Simple_ and _Hysteresis_, which can be selected using the "Method" drop down.
+The most important difference between the two is that hysteresis thresholding makes it possible to separate connected objects.

Both methods share the following parameters:
* _Input_ Channel(s): Select which channel of the probability map contains the objects
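
The "Threshold and Size Filter" step described in this hunk can be illustrated outside ilastik. Below is a minimal sketch, not ilastik's implementation, of turning a single-channel probability map into a label image with a simple threshold, a hysteresis threshold, and a size filter; the probability array, threshold values, and minimum size are hypothetical, and NumPy, SciPy, and scikit-image are assumed to be installed.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import remove_small_objects

# Hypothetical single-channel probability map in [0, 1] (the "object" channel).
prob = np.random.rand(256, 256)

# Simple method: one global threshold turns probabilities into a binary mask.
binary_simple = prob > 0.5

# Hysteresis method: pixels above the high ("core") threshold seed objects,
# which then grow into connected pixels above the low threshold.
# This can keep touching objects apart if only their cores are distinct.
binary_hyst = apply_hysteresis_threshold(prob, low=0.5, high=0.8)

# Size filter: discard detections smaller than a minimum number of pixels.
binary_hyst = remove_small_objects(binary_hyst, min_size=50)

# Label connected components: background stays 0, objects get ids 1, 2, 3, ...
labels, num_objects = ndimage.label(binary_hyst)
print(f"Found {num_objects} objects")
```
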
@@ -140,7 +138,7 @@ Those features are computed by the [vigra library](https://ukoethe.github.io/vig
An overview of available features can be found in [here]({{site.baseurl}}/documentation/objects/objectfeatures.html).
The features are subdivided into three groups: "Location", "Shape", and "Intensity Distribution".
Location-based features take into account _absolute coordinate positions_ in the image.
-These are only useful in special cases when the position of the object in the image can be used to infer the object type.
+These are only useful in special cases when the position of the object in the image can be used to infer the object type.
Shape-based features extract shape descriptors from the object masks.
Lastly, "Intensity Distribution" features operate on image value statistics.
You will also notice features, which can be computed "in the neighborhood".
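
ilastik computes these features with vigra, but the grouping into location, shape, and intensity features can be illustrated with a conceptually similar (not identical) computation using scikit-image; the toy label and raw images below are made up.

```python
import numpy as np
from skimage.measure import regionprops

# Hypothetical inputs: a label image (background 0) and the matching raw image.
labels = np.zeros((128, 128), dtype=np.int32)
labels[20:40, 20:40] = 1
labels[60:90, 50:70] = 2
raw = np.random.rand(128, 128)

for region in regionprops(labels, intensity_image=raw):
    print(
        region.label,
        region.centroid,        # location-style feature (absolute position)
        region.area,            # shape-style feature
        region.mean_intensity,  # intensity-distribution-style feature
    )
```
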
@@ -196,11 +194,14 @@ After a slight change in the segmentation (lower) threshold the objects indeed b
<a href="figs/oc_prediction3.png" data-toggle="lightbox"><img src="figs/oc_prediction3.png" class="img-responsive" /></a>
-->
## Uncertainty Layer
-Uncertainty Layer displays how uncertain prediction for an object is. Applying the minimum number of labels for classifying objects containing up to three cells we have a very uncertain classification:
+The Uncertainty layer displays how uncertain the prediction for each object is.
+As in other workflows, this can be used to identify where the classifier needs more labels to improve the quality of the classification results.
+
+Applying the minimum number of labels for classifying objects containing up to three cells we have a very uncertain classification:

<a href="figs/uncertainty_01.png" data-toggle="lightbox"><img src="figs/uncertainty_01.png" class="img-responsive" /></a>

-Adding a few more labels we get a much better uncertainty estimate:
+Adding a few more labels reduces the uncertainty for most objects:

<a href="figs/uncertainty_02.png" data-toggle="lightbox"><img src="figs/uncertainty_02.png" class="img-responsive" /></a>

@@ -209,10 +210,25 @@ Assuming our labels were correct this will lead to a good object classification:

## Export

-In the [Export Applet][] you can export the following images: "Object Predictions", "Object Probabilities" "Blockwise Object Predictions".
-In addition to the image export, it is also possible to generate a table that encompasses all information about the objects used during classification.
-Table configuration can be accessed with the _Configure Feature Table Export_ button.
-In this new window there are three vertical tabs:
+### Regular image exports
+
+In the [Export Applet][] you can export:
+* Object Predictions: An image where all pixels belonging to an object have the value of the object's most probable category (1 for the first label category, 2 for the second, etc.)
+* Object Probabilities: A multichannel image with one channel per object (label) class. In each channel, pixels belonging to an object hold the probability value for the respective class.
+* Blockwise Object Predictions/Probabilities: Same as above, but the computation is performed on the input images in blocks of the size specified in the "Blockwise" applet ([see below][Blockwise Applet]). This makes it possible to process images that are larger than the machine's RAM.
+* Object Identities: An image where all pixels belonging to an object have the value of the object's id (1 for the first object, 2 for the second, etc.)
+
+### Object feature table export
+
+In addition to the image export, it is also possible to generate a table that encompasses all information about the objects used during classification. To activate this export:
+1. Access the table configuration with the _Configure Feature Table Export_ button. See below for more details on the configuration options.
+2. Change the file name and choose your preferred format.
+3. Choose the features you want to include in the table.
+4. Confirm with OK.
+
+Once the table export has been configured, any regular image export (predictions, probabilities) will also generate the table file according to the chosen settings.
+
+The table export can be configured in three sections:

* _General_: Choose Filename and Format.
Note on formats: `csv` will export a table that can be read with common tools like LibreOffice, or Microsoft Excel.
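
For downstream analysis, the exported `csv` feature table can also be read programmatically. A minimal sketch with pandas, assuming a hypothetical file name; the columns you get depend on the features you selected for export.

```python
import pandas as pd

# Hypothetical file name; use whatever you configured in the table export dialog.
table = pd.read_csv("objects_table.csv")

# Inspect which object features were exported and their basic statistics.
print(table.columns.tolist())
print(table.describe())
```
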
@@ -314,5 +330,6 @@ The only difference is that you started the object classification workflow from

[Pixel Classification Workflow]: {{site.baseurl}}/documentation/pixelclassification/pixelclassification.html
[Export Applet]: {{site.baseurl}}/documentation/basics/export.html
+[Blockwise Applet]: #preparing-for-large-scale-prediction---blockwise-object-classification-applet

Alternatively you call ilastik without the graphical user interface in [headless mode]({{site.baseurl}}/documentation/basics/headless.html#headless-mode-for-object-classification) in order to process large numbers of files.
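
To process many files without the GUI, the headless mode linked above can be scripted. A rough sketch in Python, assuming a trained project file and that your ilastik version accepts the `--headless`, `--project`, `--raw_data`, and `--segmentation_image` arguments described in the headless documentation; the install path, file names, and pairing scheme below are made up and should be checked against your setup.

```python
import glob
import subprocess

ILASTIK = "/opt/ilastik/run_ilastik.sh"   # hypothetical install path
PROJECT = "my_object_classification.ilp"  # trained object classification project

# Pair each raw image with its segmentation image (naming scheme is made up).
for raw_path in sorted(glob.glob("data/raw_*.h5")):
    seg_path = raw_path.replace("raw_", "seg_")
    subprocess.run(
        [
            ILASTIK,
            "--headless",
            f"--project={PROJECT}",
            f"--raw_data={raw_path}",
            f"--segmentation_image={seg_path}",
        ],
        check=True,
    )
```
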

0 commit comments
