- Left Eye: AI-Thinker ESP32-CAM
- Right Eye: M5Stack Wide ESP32-CAM
- Camera Sensors: OV2640 with fisheye lenses (using 7.5cm cables)
- Define in this line whether the robot and computer will share the phone hotspot (True) or the home WiFi (False) as the common network for communication:
USE_HOTSPOT = True
- Define in these lines the IPs that the phone hotspot or home WiFi will assign to the ESP32-CAMs (you request these addresses in the esp32/cam/sketches you flash to the ESP32-CAMs):
IP_LEFT = "172.20.10.11" if USE_HOTSPOT else "192.168.1.181" # Left eye (AI-Thinker ESP32-CAM)
IP_RIGHT = "172.20.10.10" if USE_HOTSPOT else "192.168.1.180" # Right eye (M5Stack Wide ESP32-CAM)
- Ensure both ESP32-CAMs are properly flashed with firmware that exposes the image endpoints
- Verify cameras are accessible at their respective IP addresses (e.g., open a couple of browser tabs and go to http://192.168.1.180:80/image.jpg and http://192.168.1.181:80/image.jpg) before running the script
- Chessboard pattern (10x7 squares = 9x6 inner corners)
- Flat surface to mount the pattern
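Besides the browser check, the two image endpoints can be probed from Python before a capture run. This is a minimal sketch assuming the `USE_HOTSPOT`/IP settings defined above and the `/image.jpg` route exposed by the flashed firmware; `image_url` and `check_camera` are illustrative helper names, not part of the project's scripts.

```python
import urllib.request

USE_HOTSPOT = True
IP_LEFT = "172.20.10.11" if USE_HOTSPOT else "192.168.1.181"   # Left eye (AI-Thinker)
IP_RIGHT = "172.20.10.10" if USE_HOTSPOT else "192.168.1.180"  # Right eye (M5Stack Wide)

def image_url(ip: str) -> str:
    """Build the snapshot URL exposed by the ESP32-CAM firmware."""
    return f"http://{ip}:80/image.jpg"

def check_camera(ip: str, timeout: float = 2.0) -> bool:
    """Return True if the camera answers with a JPEG (magic bytes FF D8)."""
    try:
        with urllib.request.urlopen(image_url(ip), timeout=timeout) as resp:
            return resp.read(2) == b"\xff\xd8"
    except OSError:
        return False

if __name__ == "__main__":
    for name, ip in (("left", IP_LEFT), ("right", IP_RIGHT)):
        print(f"{name} eye at {ip}: {'OK' if check_camera(ip) else 'unreachable'}")
```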
Before capturing:
- Open both ESP32-CAM IPs in browser to verify functionality
- Use a fixed reference object (e.g., book spine) to check:
  - Cameras have similar rotation (spine angle matches in both views)
  - Cameras have similar height (spine position matches vertically)
Configurable parameters:
JPEG_QUALITY = [Set your value (I used 6)]
FRAME_SIZE = [Set your value (I used "FRAMESIZE_VGA")]
- Run the capture script
- Hold the chessboard in various positions and angles
- Ensure pattern is fully visible in both cameras
- Press 's' to save the current image pair
- Capture 14-40 good image pairs from different angles
Before running calibration:
- Measure the physical chessboard square size
- Update `SQUARE_SIZE` in the calibration script
- Intrinsic Calibration (per camera):
  - Camera matrix
  - Distortion coefficients
  - Rotation vectors
  - Translation vectors
- Extrinsic Calibration (stereo pair):
  - Rotation matrix
  - Translation vector
  - Essential matrix
  - Fundamental matrix
- Run calibration after every few captures to check quality
- Keep the chessboard relatively close to the cameras for better results
- Don't over-tilt the chessboard
- Remove image pairs if corner detection fails
- 14-40 good image pairs typically sufficient
Good calibration should achieve low RMS error values. Mine came out around the following (if you can improve on them, even better):
- Left Camera: ~0.29
- Right Camera: ~0.28
- Stereo Calibration: ~0.31
After calibration:
- Check reported RMS error values
- Verify corner detection visually in saved side-by-side images
- Test undistortion results using `undistortion_and_rectification/undistort_and_rectify.py`
- Higher `JPEG_QUALITY` may help if you plan to stitch images into a panoramic view
- Image size affects calibration quality and may need scaling if used with a different resolution
- If corner detection fails or lines are incorrect, remove the problematic image pairs
- If insufficient good pairs remain, rerun `store_images_to_calibrate.py` to capture more
- Image synchronization between the cameras matters for good calibration: if one camera lags behind the other (i.e., takes significantly longer to capture once instructed) and you move the chessboard between captures, the two views will record the pattern at different positions and calibration will work poorly or not at all