SAM3 semantic segmentation tool #221
Conversation
test use huggingface loc
Update remote repository URL for SAM3 tool.
Added SAM3 tool for text-prompted semantic segmentation on images and videos, including requirements, command, inputs, outputs, and help documentation.
Enhance text prompt help for segmentation parameter
Hello @bgruening, the pull request looks good overall. The only thing I'm not sure about is the handling of the SAM3 model.
What do you mean regarding the SAM3 model? The admins need to install it manually into the correct path and modify the location file accordingly.
Does it have to be available on Hugging Face, or is that just a preferred practice?
Best practice, I would say; the model can live elsewhere. We just need to communicate this to the admin who has to configure it.
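To illustrate what the admin-side configuration might look like: Galaxy tools typically resolve such paths through a tab-separated `.loc` file referenced by a tool-data table. The file name and column layout below are hypothetical, a sketch only; the actual columns are defined by this tool's tool-data table entry.

```
# sam3_models.loc (hypothetical layout: value<TAB>name<TAB>path)
# Lines starting with '#' are comments; columns are TAB-separated.
sam3	SAM3	/data/galaxy/tool-data/sam3/sam3_model
```

The admin downloads the model (from Hugging Face or elsewhere), places it at the listed path, and adds one line like this per installed model.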
yvanlebras left a comment
THANK YOU Arthur! Looks OK to me. Just two comments: add funding information (it seems we can now do that in the tool XML), and check the spaces after commas in the example prompt (and maybe the related Python script, to make sure removing spaces, or similar variations, is handled without issues).
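The space-after-commas concern can be handled defensively in the wrapper script by normalizing the prompt before passing it to the model. A minimal sketch; the function name `normalize_prompt` is hypothetical, not part of the actual tool:

```python
def normalize_prompt(prompt: str) -> str:
    """Normalize a comma-separated text prompt: strip whitespace
    around each term and drop empty entries, so 'cat, dog ,,bird'
    and 'cat,dog,bird' behave identically."""
    terms = [term.strip() for term in prompt.split(",")]
    return ",".join(term for term in terms if term)

print(normalize_prompt("cat, dog ,,bird"))  # cat,dog,bird
```

With this in place, user-entered variations in spacing after commas no longer affect the segmentation prompt.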
Co-authored-by: Björn Grüning <bjoern@gruenings.eu>
THANK YOU Arthur, Björn, Pauline! This looks OK to me for a first version of the tool in production in Galaxy Ecology!
This pull request adds a new Galaxy tool called SAM3 Semantic Segmentation.
The tool performs text-prompted semantic segmentation on images or videos using the SAM3 model.
Users can provide one or more images or a single video, along with a text prompt describing the object to segment.
The tool supports multiple output formats, including COCO annotations and YOLO bounding boxes or segmentation masks.
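For readers unfamiliar with the YOLO bounding-box format mentioned above: each object is stored as one text line, `class x_center y_center width height`, with all four coordinates normalized to [0, 1]. A minimal sketch of deriving such a line from a binary segmentation mask (pure Python; the function name is hypothetical and the actual tool's conversion may differ):

```python
def mask_to_yolo_bbox(mask, class_id=0):
    """Convert a binary mask (list of rows of 0/1) to a YOLO bbox line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    h, w = len(mask), len(mask[0])
    xs = [x for y in range(h) for x in range(w) if mask[y][x]]
    ys = [y for y in range(h) for x in range(w) if mask[y][x]]
    if not xs:
        return None  # empty mask: nothing was segmented
    x0, x1 = min(xs), max(xs) + 1  # half-open pixel bounds
    y0, y1 = min(ys), max(ys) + 1
    xc, yc = (x0 + x1) / 2 / w, (y0 + y1) / 2 / h
    bw, bh = (x1 - x0) / w, (y1 - y0) / h
    return f"{class_id} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}"

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(mask_to_yolo_bbox(mask))  # 0 0.500000 0.500000 0.500000 0.500000
```

COCO annotations, by contrast, keep absolute pixel coordinates and polygon or RLE masks in a single JSON file, which is why the tool offers both output formats.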