Description
Hi, thank you for sharing your code! I have a question about the experiment setup.
I experimented with MADNet and DispNetC, and the performance (EPE) I got on KITTI RAW was different from yours.
Is there anything different between my setup or code and yours? The only code I added converts depth to disparity for the KITTI dataset.
This is my Python environment:
cuda 10.2
python 3.6.13
tensorflow-gpu 1.12.0
numpy 1.16.0
opencv-python 4.1.1.26
matplotlib 3.3.4
My result (on the city sequences from KITTI RAW) is:
And here are my run command and code:
python3 Stereo_Online_Adaptation.py -l ./path_list/kitti_city.csv -o ./outputs/madnet_test --weights pretrained_nets/MADNet/synthetic/weights.ckpt --blockConfig block_config/MadNet_full.json --modelName MADNet --mode NONE --logDispStep -1
import cv2
import numpy as np
from PIL import Image

def read_depth(path):
    # KITTI depth maps are stored as 16-bit PNGs scaled by 256
    depth = np.array(Image.open(path)).astype(np.float32) / 256.0
    return depth
def depth2disp(depth):
    # Convert a KITTI depth map to disparity: disp = baseline * focal_length / depth
    baseline = 0.54
    width_to_focal = dict()
    width_to_focal[1242] = 721.5377
    width_to_focal[1241] = 718.856
    width_to_focal[1224] = 707.0493
    width_to_focal[1226] = 708.2046  # NOTE: [possibly wrong] assumed by scaling linearly from width 1224
    width_to_focal[1238] = 718.3351
    focal_length = width_to_focal[depth.shape[1]]
    invalid_mask = depth <= 0
    disp = baseline * focal_length / (depth + 1e-8)
    disp[invalid_mask] = 0  # keep zero-depth pixels marked as invalid
    return disp
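# Example usage of the two helpers above; the depth path is only a placeholder
# for my local KITTI depth-annotation layout, not the exact file I used.
gt_depth = read_depth('proj_depth/groundtruth/image_02/0000000005.png')
gt_disp = depth2disp(gt_depth)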
def save_disp(path, left_disp):
    # Save a disparity map as a 16-bit PNG scaled by 256
    # (MAX_DISP is a constant defined elsewhere in my script)
    disp_to_save = np.clip(left_disp, 0, MAX_DISP)
    disp_to_save = (disp_to_save * 256.0).astype(np.uint16)
    cv2.imwrite(path, disp_to_save)
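In case it matters for comparing numbers, this is a minimal sketch of the EPE I have in mind; reading the prediction back from the 16-bit PNG written by save_disp and evaluating only pixels with a valid ground-truth disparity are my own assumptions, not something taken from your evaluation code.

def compute_epe(pred_path, gt_disp):
    # Undo the x256 scaling used in save_disp
    pred_disp = cv2.imread(pred_path, cv2.IMREAD_ANYDEPTH).astype(np.float32) / 256.0
    valid = gt_disp > 0  # KITTI ground truth is sparse; skip invalid pixels
    return np.mean(np.abs(pred_disp[valid] - gt_disp[valid]))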
Thank you for reading.