I have a question about dataloader(lmdb) #27

@gkalstn000

Description

Hello,

I am writing to ask a question about the dataloader batch size. I tried increasing the batch size from the default value to fully utilize my GPU memory during training. However, I noticed that the average time per iteration increased, even after I also increased num_workers. For example, when training at a resolution of 256 with a batch size of 4, the average iteration time was much shorter than with a batch size of 16.

I would like to know why using a larger batch size seems to slow training down instead of speeding it up.
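One thing worth checking is the metric itself: a batch of 16 will naturally take longer per step than a batch of 4, so the fair comparison is seconds per sample rather than seconds per iteration. Below is a minimal timing sketch I used to reason about this; `FakeLMDBDataset` is a hypothetical stand-in for the repository's actual LMDB dataset, not its real code.

```python
import time
import torch
from torch.utils.data import Dataset, DataLoader

class FakeLMDBDataset(Dataset):
    """Hypothetical stand-in for the real LMDB-backed dataset."""
    def __init__(self, length=256, resolution=64):
        self.length = length
        self.resolution = resolution

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # In the real loader this would be an LMDB read + image decode.
        return torch.randn(3, self.resolution, self.resolution)

def seconds_per_sample(batch_size, num_workers=0, iters=8):
    """Average data-loading time per sample (not per iteration)."""
    loader = DataLoader(FakeLMDBDataset(), batch_size=batch_size,
                        num_workers=num_workers)
    it = iter(loader)
    n_samples = 0
    start = time.perf_counter()
    for _ in range(iters):
        batch = next(it)
        n_samples += batch.shape[0]
    elapsed = time.perf_counter() - start
    return elapsed / n_samples

for bs in (4, 16):
    print(f"batch_size={bs}: {seconds_per_sample(bs):.6f} s/sample")
```

If seconds per sample is roughly equal across batch sizes, the larger batch is not actually slower overall; if it is genuinely higher at batch size 16, the workers may be failing to keep the GPU fed (e.g. LMDB read contention across worker processes).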

Thank you.
