Missing code for benchmarking metrics reported in the paper #86

Open
@neha013

I've been reading the FoundationStereo paper and noticed that several metrics are reported for comparisons on standard benchmarks such as Middlebury, ETH3D, KITTI, and Scene Flow (e.g., BP-1, BP-2, D1, EPE). However, I couldn't find the corresponding evaluation code or scripts in the GitHub repo to reproduce these results or to compute the same metrics on custom data or on the benchmarks themselves. Could you please clarify whether these evaluation scripts or tools are available anywhere?
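For context, this is a minimal sketch of how I have been computing these metrics myself, based on the usual benchmark definitions (mean end-point error, bad-pixel ratios at fixed thresholds, and the KITTI D1 rule of >3 px and >5% of ground truth). The function name, validity-mask convention, and thresholds are my own assumptions, not code from this repo, so I'd like to confirm whether they match what was used in the paper:

```python
import numpy as np

def stereo_metrics(disp_pred, disp_gt, valid_mask=None):
    """Common stereo disparity metrics, using standard benchmark conventions."""
    disp_pred = np.asarray(disp_pred, dtype=np.float64)
    disp_gt = np.asarray(disp_gt, dtype=np.float64)
    if valid_mask is None:
        # Assume pixels with positive ground-truth disparity are valid.
        valid_mask = disp_gt > 0

    err = np.abs(disp_pred[valid_mask] - disp_gt[valid_mask])
    gt = disp_gt[valid_mask]

    return {
        "EPE": err.mean(),                    # mean end-point error (px)
        "BP-1": 100.0 * (err > 1.0).mean(),   # % of pixels with error > 1 px
        "BP-2": 100.0 * (err > 2.0).mean(),   # % of pixels with error > 2 px
        # KITTI D1: error > 3 px AND > 5% of the ground-truth disparity.
        "D1": 100.0 * ((err > 3.0) & (err > 0.05 * gt)).mean(),
    }
```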
