Description
I've been reading the FoundationStereo paper and noticed that several metrics are reported for comparisons on standard benchmarks such as Middlebury, ETH3D, KITTI, and Scene Flow (e.g., BP-1, BP-2, D1, EPE). However, I couldn't find the corresponding evaluation code or scripts in the GitHub repo to reproduce these results, or to compute the same metrics on custom data or on the benchmarks. Could you please clarify whether these evaluation scripts or tools are available anywhere?
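In case it helps others in the meantime, below is a minimal sketch of how I understand these metrics are conventionally defined (EPE as mean absolute disparity error, D1 as the KITTI outlier rate, BP-X as the bad-pixel rate at an X-pixel threshold). The function name, the 0-as-invalid masking convention, and the exact thresholds are my assumptions; the paper may additionally apply benchmark-specific masks (e.g., non-occluded regions), which is part of what I'd like clarified.

```python
import numpy as np

def stereo_metrics(pred_disp, gt_disp, valid_mask=None):
    """Sketch of standard stereo metrics between predicted and
    ground-truth disparity maps (H x W float arrays)."""
    if valid_mask is None:
        # Assumption: ground-truth disparity of 0 marks invalid pixels.
        valid_mask = gt_disp > 0
    err = np.abs(pred_disp - gt_disp)[valid_mask]
    gt = gt_disp[valid_mask]
    return {
        # Mean end-point error in pixels.
        "EPE": err.mean(),
        # KITTI D1: % of pixels with error > 3 px AND > 5% of GT disparity.
        "D1": ((err > 3.0) & (err > 0.05 * gt)).mean() * 100,
        # Bad-pixel rates: % of pixels with error above the threshold.
        "BP-1": (err > 1.0).mean() * 100,
        "BP-2": (err > 2.0).mean() * 100,
    }
```

If the repo computes these differently (e.g., different validity masks or thresholds per benchmark), an official script would be very useful for apples-to-apples comparisons.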