<li><code>vit_l</code>: Default Segment Anything model with ViT Large backbone.</li>
<li><code>vit_b</code>: Default Segment Anything model with ViT Base backbone.</li>
<li><code>vit_t</code>: Segment Anything model with ViT Tiny backbone. From the <a href="https://arxiv.org/abs/2306.14289">Mobile SAM publication</a>.</li>
<li><code>vit_l_lm</code>: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Large backbone. (<a href="https://doi.org/10.5281/zenodo.11111176">Zenodo</a>) (<a href="https://bioimage.io/#/?id=idealistic-rat">idealistic-rat on BioImage.IO</a>)</li>
<li><code>vit_b_lm</code>: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Base backbone. (<a href="https://zenodo.org/doi/10.5281/zenodo.11103797">Zenodo</a>) (<a href="https://bioimage.io/#/?id=diplomatic-bug">diplomatic-bug on BioImage.IO</a>)</li>
<li><code>vit_t_lm</code>: Finetuned Segment Anything model for cells and nuclei in light microscopy data with ViT Tiny backbone. (<a href="https://doi.org/10.5281/zenodo.11111328">Zenodo</a>) (<a href="https://bioimage.io/#/?id=faithful-chicken">faithful-chicken on BioImage.IO</a>)</li>
<li><code>vit_l_em_organelles</code>: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Large backbone. (<a href="https://doi.org/10.5281/zenodo.11111054">Zenodo</a>) (<a href="https://bioimage.io/#/?id=humorous-crab">humorous-crab on BioImage.IO</a>)</li>
<li><code>vit_b_em_organelles</code>: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Base backbone. (<a href="https://doi.org/10.5281/zenodo.11111293">Zenodo</a>) (<a href="https://bioimage.io/#/?id=noisy-ox">noisy-ox on BioImage.IO</a>)</li>
<li><code>vit_t_em_organelles</code>: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with ViT Tiny backbone. (<a href="https://doi.org/10.5281/zenodo.11110950">Zenodo</a>) (<a href="https://bioimage.io/#/?id=greedy-whale">greedy-whale on BioImage.IO</a>)</li>
</ul>
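<p>All of the model names above can be used as the <code>model_type</code> in the <code>micro_sam</code> Python library. As a minimal sketch (assuming the <code>micro_sam.util.get_sam_model</code> helper and its <code>model_type</code> argument behave as in current releases; check your installed version for the exact API), selecting the ViT-Base light microscopy model could look like this:</p>

<pre><code class="language-python"># Minimal sketch: load one of the models listed above by its name.
# Assumes micro_sam.util.get_sam_model downloads and caches the weights
# for the requested model_type; check your installed version for the exact API.
from micro_sam.util import get_sam_model

# Pick a generalist light microscopy model; swap in e.g. "vit_b_em_organelles"
# when working with electron microscopy data.
predictor = get_sam_model(model_type="vit_b_lm")
print(type(predictor))  # a SAM predictor object ready for interactive prompting
</code></pre>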
<p>See the two figures below for the improvements achieved by the finetuned models on LM and EM data.</p>
<p>Previous versions of our models are available on Zenodo:</p>
<h3 id="v2-models">v2 Models</h3>
<ul>
<li>vit_t_lm (<a href="https://zenodo.org/records/11111329">Zenodo</a>): the ViT-Tiny model for segmenting cells and nuclei in LM.</li>
<li>vit_b_lm (<a href="https://zenodo.org/records/11103798">Zenodo</a>): the ViT-Base model for segmenting cells and nuclei in LM.</li>
<li>vit_l_lm (<a href="https://zenodo.org/records/11111177">Zenodo</a>): the ViT-Large model for segmenting cells and nuclei in LM.</li>
</ul>
<h3 id="v1-models">v1 Models</h3>
<ul>
<li><a href="https://zenodo.org/records/10524894">vit_b_em_boundaries</a>: for segmenting compartments delineated by boundaries such as cells or neurites in EM.</li>
<li><a href="https://zenodo.org/records/10524828">vit_b_em_organelles</a>: for segmenting mitochondria, nuclei or other organelles in EM.</li>
<li><a href="https://doi.org/10.5281/zenodo.11117615">Finetuned models for the user studies</a></li>
</ul>
<h1 id="community-data-submissions">Community Data Submissions</h1>
<p>We are looking to further improve the <code><a href="">micro_sam</a></code> models by training on more diverse microscopy data.
For this, we want to collect data for which the current models don't work well yet, and we need your help!</p>
<p>If you are using <code><a href="">micro_sam</a></code> for a task where the current models don't do a good job, but you have annotated data and successfully fine-tuned a model, then you can submit this data to us so that we can use it to train the next version of our improved microscopy models.
To do this, please either create an <a href="https://github.com/computational-cell-analytics/micro-sam/issues">issue on GitHub</a> or a post on <a href="https://forum.image.sc/">image.sc</a> and:</p>
<ul>
<li>Use a title "Data submission for micro_sam: ..." (where "..." is a short title for your data, e.g. "cells in brightfield microscopy").
<ul>
<li>On image.sc use the tag <code>micro-sam</code>.</li>
</ul></li>
<li>Briefly describe your data and add an image that shows the microscopy data and the segmentation masks you have.</li>
<li>Make sure to describe:
<ul>
<li>The imaging modality and the structure(s) that you have segmented.</li>
<li>The <code><a href="">micro_sam</a></code> model you have used for finetuning and segmenting the data.
<ul>
<li>You can also submit data that was not segmented with <code><a href="">micro_sam</a></code>; as long as you have sufficient annotations, we are happy to include it!</li>
</ul></li>
<li>How many images and annotations you have / can submit and how you have created the annotations.
<ul>
<li>You should submit at least 5 images / 100 annotated objects for the submission to have a meaningful impact. If you are unsure whether you have enough data, please go ahead and create the issue / post and we can discuss the details.</li>
</ul></li>
<li>Which data format your images and annotations are stored in. We recommend using either <code>tif</code> images or <code>ome.zarr</code> files; see the example sketch after this list.</li>
</ul></li>
<li>Please indicate that you are willing to share the data for training purposes (see also the next paragraph).</li>
</ul>
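<p>To illustrate the recommended data formats, here is a small, hypothetical sketch of how paired images and annotation masks could be packaged as <code>tif</code> files before submission (the folder layout and file names are only an example, not a required convention):</p>

<pre><code class="language-python"># Hypothetical example: store paired images and annotation masks as tif files.
# The folder layout and file names below are only a suggestion, not a requirement.
import os
import numpy as np
import tifffile

os.makedirs("submission/images", exist_ok=True)
os.makedirs("submission/annotations", exist_ok=True)

# Replace these placeholder arrays with your real image data and label masks.
image = np.random.randint(0, 255, size=(512, 512), dtype="uint8")
labels = np.zeros((512, 512), dtype="uint16")  # instance ids, 0 = background

tifffile.imwrite("submission/images/sample_000.tif", image)
tifffile.imwrite("submission/annotations/sample_000.tif", labels)
</code></pre>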
<p>Once you have created the post / issue, we will check whether your data is suitable for submission, or discuss with you how it could be extended to make it suitable. Then:</p>
<ul>
<li>We will share an agreement for data sharing. You can find <strong>a draft</strong> <a href="https://docs.google.com/document/d/1X3VOf1qtJ5WtwDGcpGYZ-kfr3E2paIEquyuCtJnF_I0/edit?usp=sharing">here</a>.</li>
<li>You will be able to choose how you want to submit / publish your data.
<ul>
<li>Share it under a CC0 license. In this case, we will use the data for re-training and also make it publicly available as soon as the next model versions become available.</li>
<li>Share it for training with the option to publish it later. For example, if your data is unpublished and you want it to be published only once the respective publication is available. In this case, we will use the data for re-training, but not make it freely available yet. We will check with you periodically to see if your data can be published.</li>
<li>Share it for training only. In this case, we will re-train the model on it, but not make it publicly available.</li>
</ul></li>
<li>We encourage you to choose the first option (making the data available under CC0).</li>
<li>Once you have agreed to these terms, we will send you a link to upload your data.</li>
</ul>
<h1 id="faq">FAQ</h1>
<p>Here we provide frequently asked questions and common issues.</p>
<p><code>micro-sam</code> has tooltips for the menu options across all widgets (i.e. an information window appears if you hover over the name of a menu option), which briefly describe the utility of the specific option.</p>
<h3 id="17-i-want-to-use-an-older-version-of-the-pretrained-models">17. I want to use an older version of the pretrained models.</h3>
<p>The older model versions are still available on Zenodo. You can find the download links for all of them <a href="https://computational-cell-analytics.github.io/micro-sam/micro_sam.html#other-models">here</a>.
You can then use those models with the custom checkpoint option; see answer 15 for details.</p>
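<p>As a rough sketch (assuming <code>micro_sam.util.get_sam_model</code> accepts a <code>checkpoint_path</code> argument, and using a hypothetical local file path), loading such a downloaded checkpoint could look like this:</p>

<pre><code class="language-python"># Sketch: use an older model version downloaded from Zenodo as a custom checkpoint.
# The file path below is hypothetical; point it to wherever you saved the download.
from micro_sam.util import get_sam_model

checkpoint = "/path/to/downloads/vit_b_em_boundaries.pt"  # hypothetical path
# model_type must match the backbone the checkpoint was trained with.
predictor = get_sam_model(model_type="vit_b", checkpoint_path=checkpoint)
</code></pre>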
<h3 id="1-i-have-a-microscopy-dataset-i-would-like-to-fine-tune-segment-anything-for-is-it-possible-using-micro_sam">1. I have a microscopy dataset I would like to fine-tune Segment Anything for. Is it possible using <code><a href="">micro_sam</a></code>?</h3>
<h2 id="transfering-data-to-band">Transferring data to BAND</h2>