<p>This will also install <code>pytorch</code> from the <code>conda-forge</code> channel. If you have a recent enough operating system, it will automatically install the most suitable <code>pytorch</code> version for your system.
This means it will install the CPU version if you don't have an NVIDIA GPU, and a GPU version if you do.
However, if you have an older operating system, or a CUDA version older than 12, then it may not install the correct version. In this case you will have to specify your CUDA version, for example for CUDA 11, like this:</p>
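<p>For example, a pinned install could look like the following (a sketch: the exact build-string glob for CUDA-enabled <code>pytorch</code> builds on conda-forge is an assumption, so please check the builds available for your platform):</p>
<pre><code>$ conda install -c conda-forge micro_sam "pytorch=*=cuda11*"
</code></pre>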
<p>The <code>Configuration</code> option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found <a href="#training-your-own-model">here</a>.</p>
<p>NOTE: We recommend fine-tuning Segment Anything models on your data by either:</p>
<ul>
<li>running <code>$ micro_sam.train</code> from the command line (see the example below this list), or</li>
<li>calling <code>micro_sam.training.train_sam</code> in a Python script. Check out <a href="https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py">examples/finetuning/finetune_hela.py</a> or <a href="https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb">notebooks/sam_finetuning.ipynb</a> for details.</li>
</ul>
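<p>For example, you can list all available training options from the terminal via the help text:</p>
<pre><code>$ micro_sam.train -h
</code></pre>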
<h1 id="using-the-command-line-interface-cli">Using the Command Line Interface (CLI)</h1>
<p><code>micro-sam</code> provides a number of functionalities through command line interface (CLI) scripts that can be run from the terminal.</p>
<li>Running <code>$ micro_sam.annotator_3d</code> for starting the 3d annotator.</li>
<li>Running <code>$ micro_sam.annotator_tracking</code> for starting the tracking annotator.</li>
<li>Running <code>$ micro_sam.image_series_annotator</code> for starting the image series annotator.</li>
<li>Running <code>$ micro_sam.train</code> for finetuning Segment Anything models on your data.</li>
<li>Running <code>$ <a href="micro_sam/automatic_segmentation.html">micro_sam.automatic_segmentation</a></code> for automatic instance segmentation.
<ul>
<li>We support all post-processing parameters for automatic instance segmentation (for both AMG and AIS).
<pre><code> - Remember to specify the automatic segmentation mode using `--mode <MODE_NAME>` when using additional post-processing parameters.
 - You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py`, under the `generate` method of the `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` classes.
 - A good practice is to set `--ndim <NDIM>`, where `<NDIM>` corresponds to the number of dimensions of the input images.
</code></pre>
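<p>For example, a typical invocation could look like this (a sketch: the input/output flags and the mode names <code>amg</code>/<code>ais</code> are assumptions here, so please check <code>$ micro_sam.automatic_segmentation -h</code> for the exact argument names):</p>
<pre><code>$ micro_sam.automatic_segmentation -i my_image.tif -o my_segmentation.tif --mode ais --ndim 2
</code></pre>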
<p>NOTE: For all CLIs above, you can find more details by adding the argument <code>-h</code> to the CLI script (e.g. <code>$ micro_sam.annotator_2d -h</code>).</p>
<h2 id="training-your-own-model">Training your Own Model</h2>
<p>We use this functionality to provide the <a href="#finetuned-models">finetuned microscopy models</a> and it can also be used to train models on your own data.
In fact, the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.
So a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.
We recommend checking out our <a href="https://www.nature.com/articles/s41592-024-02580-4">paper</a> for details on how much data is required for finetuning Segment Anything.</p>
<p>The training logic is implemented in <code><a href="micro_sam/training.html">micro_sam.training</a></code> and is based on <a href="https://github.com/constantinpape/torch-em">torch-em</a>. Check out <a href="https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb">the finetuning notebook</a> to see how to use it.
We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster.</p>
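<p>A minimal sketch of what such a finetuning script could look like, assuming a <a href="https://github.com/constantinpape/torch-em">torch-em</a> data loader and tif images with corresponding instance label masks; the exact keyword arguments of <code>micro_sam.training.train_sam</code> are assumptions here, so please refer to the finetuning notebook for the canonical usage:</p>
<pre><code>import torch_em

from micro_sam.training import train_sam

# Build training and validation loaders from images and instance label masks.
# The paths and glob keys are placeholders for your own data layout.
train_loader = torch_em.default_segmentation_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
)
val_loader = torch_em.default_segmentation_loader(
    raw_paths="data/val/images", raw_key="*.tif",
    label_paths="data/val/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
)

# Finetune a Segment Anything model ("vit_b" is the base image encoder).
# with_segmentation_decoder=True (an assumed keyword) additionally trains
# the decoder for automatic instance segmentation described above.
train_sam(
    name="sam_finetuned_on_my_data",
    model_type="vit_b",
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=10,
    with_segmentation_decoder=True,
)
</code></pre>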
<h3 id="3-what-is-the-minimum-system-requirement-for-micro_sam">3. What is the minimum system requirement for <code><a href="">micro_sam</a></code>?</h3>
<p>From our experience, the <code><a href="">micro_sam</a></code> annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.
You might encounter some slowness with 8GB RAM or less. The resources <code><a href="">micro_sam</a></code>'s annotation tools have been tested on are:</p>