Add dev container for dev env of computer vision stuff#19
Conversation
Not sure how I feel about this one. I'm not sure Jupyter will be especially useful when we're dealing with camera streams and the like, and passing camera devices through into Docker containers might be a pain. I get the idea behind containerization, but I honestly think the script will be so trivial that there isn't much of an advantage to it.
I am thinking the image acquisition will be done on a Pi or a mobile phone. That part will generate images or videos saved to disk. Then we can run some sort of calibration pipeline: estimating the camera intrinsics and extrinsics, detecting the shape of the whisker, matching it with the magnetometer signal, etc. This will involve things like OpenCV and PyTorch, and it will run offline, separately from the image acquisition. I think this part will benefit from containerisation. The VS Code dev container is a convenient way to set up a consistent runtime across my MacBook and Linux box and switch between them seamlessly. We don't have to use it to deploy; it's just convenient for dev. As for the notebook, we should definitely convert it to a script with a CLI once we have a good idea of what we are doing; the notebook is just convenient for dev and experimentation. WDYT @jwansek
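To make the "intrinsics" part of the pipeline concrete, here is a minimal pinhole-camera sketch in NumPy. The focal lengths and principal point below are made-up illustrative values, not measurements from any of our cameras; the calibration pipeline's job would be to estimate these from the captured images.

```python
import numpy as np

# Hypothetical intrinsics -- the quantities a calibration run would estimate.
fx, fy = 800.0, 800.0   # focal lengths in pixels (placeholder values)
cx, cy = 320.0, 240.0   # principal point in pixels (placeholder values)
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_3d):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]

print(project(np.array([0.1, 0.2, 1.0])))  # -> [400. 400.]
```

In the real pipeline something like OpenCV's `cv2.calibrateCamera` would recover `K` (plus distortion coefficients and per-view extrinsics) from checkerboard detections, but the projection model it fits is the one above.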
Last week I experimented with doing image acquisition on a Raspberry Pi camera and a mobile phone app, sending the feed over as an IP camera, and found that it added a lot of latency (presumably due to the compression required to move images through a network). What I'm currently working on (on the eden/visual_calibrator branch) is using a regular USB camera connected directly to ROS to capture video. I'm also experimenting with an Xbox Kinect I have (since it is a pre-calibrated sensor with a decent resolution). What I was thinking of doing is recording video (and also whisker sensor data) in ROS bags, which we could then play back at our discretion and use to experiment with building PyTorch models. Perhaps it would be prudent to have a container that plays back ROS bags and manages the post-processing part.
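The record/playback workflow described above would look roughly like this with the standard `rosbag` CLI. The topic and file names are hypothetical placeholders; use whatever the camera and whisker nodes actually publish.

```shell
# Record the camera stream and whisker sensor data into one bag
# (topic names below are assumptions, not the real ones).
rosbag record -O session1.bag /camera/image_raw /whisker/magnetometer

# Later, replay the recording at our discretion for the offline
# post-processing / PyTorch experiments:
rosbag play session1.bag
```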
Create a dev container that can be used with VS Code. That way we can consistently reproduce the running environment of any software developed.
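For reference, a minimal `.devcontainer/devcontainer.json` along these lines would do it. The base image, installed packages, and extensions here are placeholder choices for illustration, not the actual contents of this PR:

```json
{
  "name": "cv-dev",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install opencv-python torch jupyter",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "ms-toolsai.jupyter"]
    }
  }
}
```

VS Code picks this up via the Dev Containers extension and builds the same environment on both macOS and Linux hosts.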
Jupyter notebooks are great for experimentation, but before committing to git it is important to strip any execution output:
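The usual CLI for this is `jupyter nbconvert --clear-output --inplace notebook.ipynb`. As a sketch of what that does (an `.ipynb` file is just JSON, with `outputs` and `execution_count` fields on each code cell):

```python
import json

def strip_outputs(path):
    """Clear execution results from every code cell of a notebook file,
    so the .ipynb diffs cleanly in git."""
    with open(path) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
```

A pre-commit hook running either the `nbconvert` command or something like this would keep stray outputs from ever landing in the repo.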