
@sijie-han

Link to the updated perception data: https://drive.google.com/file/d/1cj_z8gSng43MhVM8wfE19QSXQb-u0akO/view?usp=sharing

To run the code, simply run python run_organa.py in the foundationpose environment (Python 3.9) and modify the perception_data path inside the code.

To run SAM2 at the same time, you need to create a new conda environment with Python 3.10+ following the instructions provided by SAM2 (https://github.com/facebookresearch/segment-anything-2), clone the repo, and download the checkpoints. To only generate the segmented mask using SAM2 by clicking, running python segmentation2.py is enough.
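For the click-based path, the SAM2 predictor call looks roughly like the sketch below. The commented module, config, and checkpoint names follow the SAM2 repo's README and may differ from what segmentation2.py actually does; the clicks_to_prompts helper is my own illustration, not part of this PR.

```python
import numpy as np

def clicks_to_prompts(fg_clicks, bg_clicks=()):
    """Convert (x, y) pixel clicks into SAM2-style point prompts.

    SAM2 expects point_coords as an (N, 2) float array and point_labels
    as an (N,) int array where 1 = foreground, 0 = background.
    """
    pts = list(fg_clicks) + list(bg_clicks)
    coords = np.array(pts, dtype=np.float32)
    labels = np.array([1] * len(fg_clicks) + [0] * len(bg_clicks), dtype=np.int32)
    return coords, labels

# Feeding the prompts to the predictor (sketch; names from the SAM2 README):
# from sam2.build_sam import build_sam2
# from sam2.sam2_image_predictor import SAM2ImagePredictor
# predictor = SAM2ImagePredictor(
#     build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt"))
# predictor.set_image(image)                      # RGB HxWx3 array
# coords, labels = clicks_to_prompts([(120, 80)])
# masks, scores, _ = predictor.predict(point_coords=coords, point_labels=labels)
```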

There is still a GPU memory overflow problem if we generate too many masks, i.e. the GPU caches are not cleared between generations; this is yet to be solved.
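Until that is fixed properly, one possible workaround (assuming the pipeline runs on PyTorch, which this comment does not state explicitly) is to free cached GPU memory between mask generations:

```python
import gc

def free_gpu_cache():
    """Best-effort release of cached GPU memory between mask generations.

    Assumes PyTorch; silently does nothing if torch is unavailable.
    """
    gc.collect()  # drop dangling Python references so tensors become collectable
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached allocator blocks to the driver
    except ImportError:
        pass  # torch not installed in this environment
```

Calling this after each generated mask should slow the cache growth, though tensors still referenced elsewhere (e.g. masks kept in a list) will not be freed by it.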

To switch to drawing OpenCV bounding boxes, just change the import in datareader.py from segmentation2.py to segmentation1.py.
Thanks!
