# QuPath extension WSInfer

This repo contains an extension for working with WSInfer models in QuPath.

It helps make deep learning-based patch classification in pathology images easy and interactive.

See https://wsinfer.readthedocs.io for details.

> **If you use this extension, please cite both the WSInfer preprint and the [QuPath paper](https://qupath.readthedocs.io/en/0.4/docs/intro/citing.html)!**

## Installation

Download the latest version of the extension from the [releases page](https://github.com/qupath/qupath-extension-wsinfer/releases).

Then drag & drop the downloaded `.jar` file onto the main QuPath window to install it.

## Usage

The WSInfer extension adds a new item to QuPath's **Extensions** menu, which opens the WSInfer dialog.

The dialog guides you through the main steps, from top to bottom.

Briefly: after selecting a WSInfer model, you'll need to select one or more tiles to use for inference.
The easiest way to do this is generally to draw an annotation, and leave it up to QuPath to create the tiles.

Pressing **Run** will download the model and PyTorch (if necessary), then run the model across the tiles.

You can see the results in the form of measurement maps, as a results table, or as colored tiles in the QuPath viewer.

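If you prefer working with the raw numbers, the results table can also be written out from QuPath's script editor. A minimal Groovy sketch using QuPath's built-in `saveDetectionMeasurements` scripting command (the output path is a placeholder, not a real location):

```groovy
// Export the measurements of all detection objects in the current image
// (including WSInfer tiles and their predictions) to a tab-delimited file.
// The path below is a placeholder - change it for your own system.
saveDetectionMeasurements('/path/to/wsinfer-results.tsv')
```
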
> **Tip:** To see the tiles properly, you'll need to ensure that they are both displayed and filled in the viewer (i.e. ensure the two buttons showing three green objects are selected).

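The steps above can also be run from QuPath's script editor rather than the dialog. A minimal Groovy sketch, assuming the `kather100k` model name and the extension's `WSInfer.runInference` scripting method described in the WSInfer documentation (check the docs for the models and methods available in your version):

```groovy
// Select the annotations that define where tiles will be created,
// then run a WSInfer model across them.
// The model name is an example - use any model shown in the WSInfer dialog.
import qupath.ext.wsinfer.WSInfer

selectAnnotations()
WSInfer.runInference("kather100k")
```
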
## Additional options

It's worth checking out the *Additional options* to see where models will be stored.

You can also use this to select whether inference should run on the CPU or GPU, if a compatible GPU is available.

> On an Apple Silicon Mac, GPU acceleration is selected by choosing *MPS*, for *Metal Performance Shaders*.