
Add dev container for dev env of computer vision stuff #19

Open
LiyouZhou wants to merge 4 commits into main from 11-20-liyou/dev_container

Conversation

@LiyouZhou
Contributor

LiyouZhou commented Nov 24, 2023

Create a dev container that can be used with VS Code. That way we can consistently reproduce the running environment of any software we develop.

Jupyter notebooks are great for experimentation, but before committing to git it is important to strip any run output:

$ jupyter nbconvert --clear-output --inplace calibrator/visual_calibrator/src/visual_calibration.ipynb
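For reference, a minimal `.devcontainer/devcontainer.json` along these lines might look something like the following. This is a hedged sketch only: the base image, extension list, and package list are placeholders, not necessarily what this PR's container actually contains.

```jsonc
{
  // Illustrative config, not the PR's actual file.
  "name": "cv-dev",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "ms-toolsai.jupyter"]
    }
  },
  // Install the CV/ML deps when the container is first created.
  "postCreateCommand": "pip install opencv-python torch jupyter"
}
```

VS Code's Dev Containers extension picks this file up automatically and rebuilds the same environment on any host with Docker, which is what gives the "consistent across machines" property.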


LiyouZhou requested a review from jwansek on November 24, 2023 at 23:56
@jwansek
Contributor

jwansek commented Nov 25, 2023

Not sure how I feel about this one. I'm not sure Jupyter will be especially useful when we're dealing with camera streams and the like, and passing camera devices through into Docker containers might be a pain.

I get the idea behind containerization, but I honestly think the script will be so trivial that there isn't much of an advantage to it.

@LiyouZhou
Contributor Author

I am thinking the image acquisition will be done on a Pi or a mobile phone. That part will generate images or videos saved to disk. Then we can run some sort of calibration pipeline: estimating the camera intrinsics and extrinsics, detecting the shape of the whisker, matching with the magnetometer signal, etc. This will involve things like OpenCV and PyTorch, and will run offline, separately from the image acquisition. I think this part will benefit from containerisation.

The VS Code dev container is a convenient way to set up a consistent runtime across my MacBook and Linux box and switch between them seamlessly. We don't have to use it to deploy; it's just convenient for dev. Likewise the notebook: we should definitely convert it to a script with a CLI once we have a good idea of what we are doing. The notebook is just convenient for dev and experimentation.

WDYT @jwansek
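As a rough sketch of what that eventual CLI wrapper could look like (the script name, arguments, and defaults here are placeholders I made up, not anything from this repo):

```python
import argparse

def build_parser():
    """CLI skeleton for an offline calibration run (hypothetical interface)."""
    p = argparse.ArgumentParser(description="Run visual calibration offline.")
    p.add_argument("images", help="directory of captured frames")
    p.add_argument("--out", default="calibration.json",
                   help="where to write the estimated intrinsics/extrinsics")
    return p

# Example invocation with explicit arguments:
args = build_parser().parse_args(["./frames", "--out", "cal.json"])
print(args.images, args.out)  # → ./frames cal.json
```

The point is just that the notebook's cells become a function behind a parser, so the same pipeline runs headless in a container or CI.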

@jwansek
Contributor

jwansek commented Nov 26, 2023

Last week I experimented with doing image acquisition on a Raspberry Pi Camera and a mobile phone app and sending it over as an IP camera, and found that it added a lot of latency (presumably due to the compression required to move the images through a network). What I'm currently working on (on the eden/visual_calibrator branch) is using a regular USB camera connected directly to ROS to capture video. I'm also experimenting with using an Xbox Kinect I have (since it is a pre-calibrated sensor with a decent resolution) to capture video. What I was thinking of doing is recording video (and also whisker sensor data) in ROS bags, which we could then play back at our discretion and use to experiment with making PyTorch models.

Perhaps it would be prudent to have a container that plays back from ROS bags and manages the post-processing part.
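One possible shape for such a playback container, as a hedged sketch (the base image tag, bag path, and bag name are all hypothetical):

```dockerfile
# Hypothetical sketch: a container that replays a recorded bag so
# downstream post-processing nodes can subscribe to its topics.
FROM ros:humble
COPY ./bags /bags
# Loop the recording so experiments can attach at any time.
CMD ["bash", "-lc", "source /opt/ros/humble/setup.bash && ros2 bag play --loop /bags/session_1"]
```

The post-processing side would then be a sibling container on the same ROS network, which keeps the capture hardware entirely out of Docker.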

@LiyouZhou
Contributor Author

@jwansek That totally makes sense.

The default rosbag format recently changed to MCAP, which makes it great for working with outside of ROS.

LiyouZhou force-pushed the 11-20-liyou/dev_container branch from 47b408a to 88b2c61 on November 27, 2023 at 17:10