Hey,
I ran inference on the 29 Huang-annotated sequences from DAVIS 2017 with:
srun python video_completion.py \
--mode object_removal \
--seamless \
--path ../data/Davis/Huang_annotations/rgb_png \
--path_mask ../data/Davis/Huang_annotations/mask_png
The results visibly match the videos on your project page. However, I cannot come up with an evaluation method that reproduces your numbers. For the object removal task I paired color sequences with mask sequences from other videos in the set (e.g. hiking_frames <-> flamingo_masks, cropped to matching length). Running inference on all sequences yields neither the SSIM nor the PSNR reported in Table 1 of the paper. From the visible results on the Huang annotations I would expect an SSIM around 0.99, but since no ground-truth-based metrics can be computed on that set, I need your advice. For reference, the sketch below shows roughly how I compute the per-frame metrics.
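This is a minimal sketch using scikit-image, assuming the completed frames and the original RGB frames share sorted, matching filenames; `result_dir` and `gt_dir` are placeholder paths of mine, not part of the repo:

```python
import os
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sequence(result_dir, gt_dir):
    """Average PSNR/SSIM over a sequence, comparing output frames to originals."""
    psnrs, ssims = [], []
    for name in sorted(os.listdir(gt_dir)):
        gt = np.array(Image.open(os.path.join(gt_dir, name)).convert("RGB"))
        out = np.array(Image.open(os.path.join(result_dir, name)).convert("RGB"))
        psnrs.append(peak_signal_noise_ratio(gt, out, data_range=255))
        ssims.append(structural_similarity(gt, out, channel_axis=-1, data_range=255))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```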
What are the evaluation pairs for Table 1?