Issue with Validation depth data processing #244

@KoushikSamudrala

Description

Hello everyone,
I am trying to fine-tune the packnet-sfm checkpoint on my custom image data. I use the depth maps from an RGB-D camera as ground truth and feed the RGB images for training. To plot a validation loss curve, I set aside part of the RGB data for evaluation in the config file, with the depth path pointing to the folder of depth images. The model recognizes the training data and reports the training loss, but during validation it fails while interpolating batch['depth'] to the size of depth_pp and throws the following error:
[screenshot: size-mismatch error traceback from the depth interpolation step]
Has anyone faced this error before, and what can be done to avoid it?
I'm running the scripts on Colab, and here is what my config .yaml file looks like:
[screenshot: config .yaml file]
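For reference, the validation section of my config follows the usual packnet-sfm layout, roughly like the sketch below. The field names follow packnet-sfm's default config convention as I understand it, and every path and value here is a placeholder, not my actual setup:

```yaml
datasets:
    validation:
        dataset: ['Image']              # custom image dataset class (Image_dataset.py)
        path: ['/content/data/val']     # placeholder: folder with validation RGB images
        split: ['']                     # placeholder: split file / filename pattern
        depth_type: ['groundtruth']     # placeholder: tells the loader to fetch depth maps
```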
I modified modelwrapper.py and Image_dataset.py so that the depth images are included in the batch dictionary.
I also tried reducing batch_size to avoid the error, but it persists even with a batch of a single image. Any help in this regard is appreciated.
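In case it helps to reproduce the situation: the mismatch seems to come from the ground-truth depth (at the RGB-D sensor resolution) not matching the network's output resolution. A minimal sketch of the resize I expected to happen, using torch.nn.functional.interpolate; the shapes here are hypothetical examples, not my actual resolutions:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: depth_pp is the post-processed predicted depth at the
# network's output resolution, gt_depth is the RGB-D ground truth from the batch.
depth_pp = torch.rand(1, 1, 192, 640)   # [B, C, H, W]
gt_depth = torch.rand(1, 480, 640)      # [B, H, W] -- channel dimension missing

# F.interpolate expects a 4-D [B, C, H, W] tensor, so the channel dimension
# has to be added before resizing to the predicted depth's spatial size.
gt_depth = gt_depth.unsqueeze(1)                        # -> [1, 1, 480, 640]
gt_depth = F.interpolate(gt_depth,
                         size=depth_pp.shape[-2:],
                         mode='nearest')                # -> [1, 1, 192, 640]

assert gt_depth.shape == depth_pp.shape
```

If batch['depth'] is loaded as [B, H, W] (or [B, H, W, 1]) in a modified Image_dataset.py, a shape like this at the interpolation call would explain why reducing batch_size does not help, since the spatial/channel layout, not the batch dimension, is what mismatches.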
