
Commit c5adc24: Update documentation
Parent commit: ff83d72

File tree: 8 files changed (+4600, -4452 lines)

micro_sam.html

Lines changed: 13 additions & 4 deletions
@@ -170,7 +170,7 @@ <h2 id="citation">Citation</h2>
 <p>If you are using <code><a href="">micro_sam</a></code> in your research please cite</p>

 <ul>
-<li>our <a href="https://doi.org/10.1101/2023.08.21.554208">preprint</a></li>
+<li>our <a href="https://www.nature.com/articles/s41592-024-02580-4">paper</a> (now published in Nature Methods!)</li>
 <li>and the original <a href="https://arxiv.org/abs/2304.02643">Segment Anything publication</a>.</li>
 <li>If you use a <code>vit-tiny</code> model, please also cite <a href="https://arxiv.org/abs/2306.14289">Mobile SAM</a>.</li>
 </ul>
@@ -221,7 +221,7 @@ <h2 id="from-conda">From conda</h2>
 </div>

 <p>This will also install <code>pytorch</code> from the <code>conda-forge</code> channel. If you have a recent enough operating system, it will automatically install the best suitable <code>pytorch</code> version on your system.
-This means it will install the CPU version if you don't have a nVidia GPU, and will install a GPU version if you have.
+This means it will install the CPU version if you don't have a nvidia GPU, and will install a GPU version if you have.
 However, if you have an older operating system, or a CUDA version older than 12, then it may not install the correct version. In this case you will have to specify your CUDA version, for example for CUDA 11, like this:</p>

 <div class="pdoc-code codehilite">
@@ -533,6 +533,13 @@ <h2 id="finetuning-ui">Finetuning UI</h2>
 
 <p>The <code>Configuration</code> option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found <a href="#training-your-own-model">here</a>.</p>
 
+<p>NOTE: We recommend to fine-tune Segment Anything models on your data by</p>
+
+<ul>
+<li>running <code>$ micro_sam.train</code> in the command line.</li>
+<li>calling <code>micro_sam.training.train_sam</code> in a python script. Check out <a href="https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py">examples/finetuning/finetune_hela.py</a> OR <a href="https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb">notebooks/sam_finetuning.ipynb</a> for details.</li>
+</ul>
+
 <h1 id="using-the-command-line-interface-cli">Using the Command Line Interface (CLI)</h1>
 
 <p><code>micro-sam</code> extends access to a bunch of functionalities using the command line interface (CLI) scripts via terminal.</p>
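NOTE: as a rough sketch of the second option above (calling micro_sam.training.train_sam from a Python script), a minimal finetuning script could look roughly like the following. The file paths, patch shape, and several keyword arguments are illustrative assumptions and may differ between micro_sam versions; examples/finetuning/finetune_hela.py and notebooks/sam_finetuning.ipynb linked above are the authoritative references.

    # Hedged sketch of finetuning Segment Anything with micro_sam.training.
    # Paths and some keyword arguments are assumptions for illustration only;
    # see examples/finetuning/finetune_hela.py for the maintained example.
    import micro_sam.training as sam_training

    patch_shape = (1, 512, 512)  # one (z-)slice of 512 x 512 pixels per sample

    # Data loaders for images with corresponding instance label masks.
    train_loader = sam_training.default_sam_loader(
        raw_paths="data/train_images.tif", raw_key=None,
        label_paths="data/train_labels.tif", label_key=None,
        patch_shape=patch_shape, batch_size=1,
        with_segmentation_decoder=True,
    )
    val_loader = sam_training.default_sam_loader(
        raw_paths="data/val_images.tif", raw_key=None,
        label_paths="data/val_labels.tif", label_key=None,
        patch_shape=patch_shape, batch_size=1,
        with_segmentation_decoder=True,
    )

    # Finetune a ViT-B Segment Anything model and also train the extra decoder
    # that enables automatic instance segmentation (AIS).
    sam_training.train_sam(
        name="sam_finetuned",
        model_type="vit_b",
        train_loader=train_loader,
        val_loader=val_loader,
        n_epochs=25,
        n_objects_per_batch=25,
        with_segmentation_decoder=True,
    )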
@@ -545,6 +552,7 @@ <h1 id="using-the-command-line-interface-cli">Using the Command Line Interface (
 <li>Running <code>$ micro_sam.annotator_3d</code> for starting the 3d annotator.</li>
 <li>Running <code>$ micro_sam.annotator_tracking</code> for starting the tracking annotator.</li>
 <li>Running <code>$ micro_sam.image_series_annotator</code> for starting the image series annotator.</li>
+<li>Running <code>$ micro_sam.train</code> for finetuning Segment Anything models on your data.</li>
 <li>Running <code>$ <a href="micro_sam/automatic_segmentation.html">micro_sam.automatic_segmentation</a></code> for automatic instance segmentation.
 <ul>
 <li>We support all post-processing parameters for automatic instance segmentation (for both AMG and AIS).
@@ -564,6 +572,7 @@ <h1 id="using-the-command-line-interface-cli">Using the Command Line Interface (
 
 <pre><code> - Remember to specify the automatic segmentation mode using `--mode &lt;MODE_NAME&gt;` when using additional post-processing parameters.
 - You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py` under the `generate` method for `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` class.
+- A good practice is to set `--ndim &lt;NDIM&gt;`, where `&lt;NDIM&gt;` corresponds to the number of dimensions of input images.
 </code></pre>
 
 <p>NOTE: For all CLIs above, you can find more details by adding the argument <code>-h</code> to the CLI script (eg. <code>$ micro_sam.annotator_2d -h</code>).</p>
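NOTE: the same automatic instance segmentation is also available from Python via micro_sam.automatic_segmentation (linked above). The sketch below is illustrative only: the model type and file paths are placeholders, and the exact function signatures may differ between micro_sam versions, so check the module documentation before relying on them. Here `ndim` plays the same role as the `--ndim` CLI flag.

    # Hedged sketch of automatic instance segmentation from Python.
    # Model type and paths are placeholders; function names follow the
    # micro_sam.automatic_segmentation module referenced above.
    from micro_sam.automatic_segmentation import (
        automatic_instance_segmentation, get_predictor_and_segmenter,
    )

    # Load a model and the matching segmenter. With a model that provides the
    # extra decoder this runs AIS, otherwise AMG is used.
    predictor, segmenter = get_predictor_and_segmenter(model_type="vit_b_lm")

    # Segment a 2d image and write the result to disk.
    segmentation = automatic_instance_segmentation(
        predictor=predictor,
        segmenter=segmenter,
        input_path="cells.tif",
        output_path="cells_segmentation.tif",
        ndim=2,
    )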
@@ -604,7 +613,7 @@ <h2 id="training-your-own-model">Training your Own Model</h2>
 We use this functionality to provide the <a href="#finetuned-models">finetuned microscopy models</a> and it can also be used to train models on your own data.
 In fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.
 So a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.
-We recommend checking out our latest <a href="https://doi.org/10.1101/2023.08.21.554208">preprint</a> for details on the results on how much data is required for finetuning Segment Anything.</p>
+We recommend checking out our <a href="https://www.nature.com/articles/s41592-024-02580-4">paper</a> for details on the results on how much data is required for finetuning Segment Anything.</p>
 
 <p>The training logic is implemented in <code><a href="micro_sam/training.html">micro_sam.training</a></code> and is based on <a href="https://github.com/constantinpape/torch-em">torch-em</a>. Check out <a href="https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb">the finetuning notebook</a> to see how to use it.
 We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.
@@ -857,7 +866,7 @@ <h3 id="2-i-cannot-install-micro_sam-using-the-installer-i-am-getting-some-error
 <h3 id="3-what-is-the-minimum-system-requirement-for-micro_sam">3. What is the minimum system requirement for <code><a href="">micro_sam</a></code>?</h3>
 
 <p>From our experience, the <code><a href="">micro_sam</a></code> annotation tools work seamlessly on most laptop or workstation CPUs and with &gt; 8GB RAM.
-You might encounter some slowness for $\leq$ 8GB RAM. The resources <code><a href="">micro_sam</a></code>'s annotation tools have been tested on are:</p>
+You might encounter some slowness for &gt;= 8GB RAM. The resources <code><a href="">micro_sam</a></code>'s annotation tools have been tested on are:</p>
 
 <ul>
 <li>Windows:

micro_sam/__version__.html

Lines changed: 1 addition & 1 deletion
@@ -52,7 +52,7 @@ <h1 class="modulename">
 
 <label class="view-source-button" for="mod-__version__-view-source"><span>View Source</span></label>
 
-<div class="pdoc-code codehilite"><pre><span></span><span id="L-1"><a href="#L-1"><span class="linenos">1</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="s2">&quot;1.3.0&quot;</span>
+<div class="pdoc-code codehilite"><pre><span></span><span id="L-1"><a href="#L-1"><span class="linenos">1</span></a><span class="n">__version__</span> <span class="o">=</span> <span class="s2">&quot;1.3.1&quot;</span>
 </span></pre></div>