documentation/objects/objects.md (+47 -30)
As the name suggests, the object classification workflow aims to classify full *objects*, based on object-level features and user annotations.
An *object* in this context is a set of pixels that belong to the same instance.
Object classification requires a second input besides the usual raw image data: an image that indicates for each pixel whether it belongs to an object or not, i.e. pixel predictions, a segmentation, or a label image.
This can be obtained e.g. using the [Pixel Classification Workflow].
The workflow comes in two variants, which differ in the type of second input they accept:
* Object Classification [Inputs: Raw Data, Pixel Prediction Map]
* Object Classification [Inputs: Raw Data, Segmentation]
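
To make the possible forms of the second input concrete, here is a toy sketch (not ilastik code) of how the same two objects could be encoded as pixel predictions, a binary segmentation, or a label image:

```python
import numpy as np

# Toy 1D "image" containing two objects, encoded in three possible ways:
probabilities = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.9])  # per-pixel object probability
segmentation  = np.array([0,   1,   1,   0,   1,   1])    # binary: object vs. background
label_image   = np.array([0,   1,   1,   0,   2,   2])    # each object carries its own id
```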
The combined "Pixel Classification + Object Classification" workflow (found under "Other Workflows") is primarily intended for demonstration purposes and its use in real projects is discouraged.
We recommend instead to run the two workflows separately: export probability maps from the Pixel Classification workflow and use them as input for the Object Classification workflow.
For object classification, images used in _training_ have to be small enough to fit entirely into your machine's RAM.
However, once you have created a satisfactory classifier using one or more small images (or cutouts from your complete dataset), you can use the "Blockwise Object Classification"
feature to run object classification on much larger images (prediction only - no training).
### Object Classification [Inputs: Raw Data, Pixel Prediction Map]
You should choose this workflow if you have a probability map, i.e. an image in which each pixel value is the pixel's probability of belonging to an object.
To obtain a segmentation from this input, the workflow includes a step for thresholding the probabilities.
If you use the Pixel Classification workflow to identify objects, we recommend you export the probability maps there and then continue with this object classification workflow.
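
To illustrate what the thresholding step does conceptually, here is a minimal sketch (not ilastik's internal implementation), assuming a single-channel probability map available as a NumPy array:

```python
import numpy as np
from scipy import ndimage

# Per-pixel probabilities of belonging to an object, values in [0, 1]
# (file name is a placeholder; in practice this comes from the exported probability map)
prob = np.load("probability_map.npy")

# Binarize: pixels above the threshold are treated as object pixels
mask = prob > 0.5

# Optional cleanup of tiny spurious detections
mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))

# Connected-component labeling separates the mask into individual objects:
# every object gets a unique integer id, background stays 0
labels, num_objects = ndimage.label(mask)
print(f"Found {num_objects} objects")
```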
Load the probability maps in addition to the raw data in the Input Data step:
The Uncertainty layer displays how uncertain the prediction for each object is.
As in other workflows, this can be used to identify where the classifier needs more labels to improve the quality of the classification results.
Applying the minimum number of labels for classifying objects containing up to three cells, we get a very uncertain classification:
Assuming our labels were correct, this will lead to a good object classification:
## Export
### Regular image exports
In the [Export Applet][] you can export the following images; a short sketch for reading them back follows the list:
* Object Predictions: An image where all pixels belonging to an object have the value of the object's most probable category (1 for the first label category, 2 for the second, etc.)
* Object Probabilities: A multichannel image with one channel per object (label) class. In each channel, pixels belonging to an object hold the probability value for the respective class.
* Blockwise Object Predictions/Probabilities: Same as above, but the computation is performed on the input images in blocks of the size specified in the "Blockwise" applet ([see below][Blockwise Applet]). This makes it possible to process images that are larger than the machine's RAM.
* Object Identities: An image where all pixels belonging to an object have the value of the object's id (1 for the first object, 2 for the second, etc.)
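
The exported images can be read back for downstream analysis. A sketch assuming HDF5 export and ilastik's default internal dataset name `exported_data` (file names are placeholders; adjust both to your export settings):

```python
import h5py
import numpy as np

# Placeholder names -- the actual names depend on your export settings
with h5py.File("my_image_Object_Predictions.h5", "r") as f:
    predictions = f["exported_data"][...]  # per-pixel class of the object (0 = background)

with h5py.File("my_image_Object_Identities.h5", "r") as f:
    identities = f["exported_data"][...]   # per-pixel object id (0 = background)

# Example: count the objects assigned to each predicted class
for class_value in np.unique(predictions):
    if class_value == 0:
        continue
    n_objects = len(np.unique(identities[predictions == class_value]))
    print(f"class {int(class_value)}: {n_objects} objects")
```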
### Object feature table export
In addition to the image export, it is also possible to generate a table that encompasses all information about the objects used during classification. To activate this export:
1. Access the table configuration with the _Configure Feature Table Export_ button. See below for more details on the configuration options.
2. Change the file name and choose your preferred format.
3. Choose the features you want to include in the table.
4. Confirm with OK.
Once the table export has been configured, any regular image export (predictions, probabilities) will also generate the table file according to the chosen settings.
The table export can be configured in three sections:
* _General_: Choose Filename and Format.
Note on formats: `csv` will export a table that can be read with common tools like LibreOffice or Microsoft Excel.
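
If you continue the analysis in Python rather than a spreadsheet, the exported `csv` table can be loaded directly; a small sketch with a placeholder file name:

```python
import pandas as pd

# Placeholder name -- the actual file name is set in the table export configuration
table = pd.read_csv("my_objects_table.csv")

# Each row corresponds to one object; the columns hold the selected
# features together with the prediction results
print(table.shape)
print(table.columns.tolist())
```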
Alternatively, you can call ilastik without the graphical user interface in [headless mode]({{site.baseurl}}/documentation/basics/headless.html#headless-mode-for-object-classification) in order to process large numbers of files.
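
As a sketch of such a batch setup, the headless invocation can be driven from a small Python script. The flag names follow the linked headless documentation for object classification, and all paths are placeholders; verify both against your ilastik version:

```python
import subprocess
from pathlib import Path

ILASTIK = "/opt/ilastik/run_ilastik.sh"   # placeholder path to the ilastik launcher
PROJECT = "MyObjectClassification.ilp"    # project trained in the GUI (placeholder)

# Classify every raw image, pairing it with its exported probability map.
# Flag names (--headless, --project, --raw_data, --prediction_maps) follow the
# headless documentation; "exported_data" is ilastik's default export dataset name.
for raw in sorted(Path("data").glob("*.tiff")):
    prob = Path("probabilities") / (raw.stem + "_Probabilities.h5")
    subprocess.run(
        [
            ILASTIK,
            "--headless",
            f"--project={PROJECT}",
            "--raw_data", str(raw),
            "--prediction_maps", f"{prob}/exported_data",
        ],
        check=True,
    )
```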