
question about peak memory in TinyTL #12

@Tsingularity

Description


Hi, thanks for the great work!

We implemented the update-bias-only transfer learning in our own codebase, but we did not observe the large decrease in peak memory / memory usage during fine-tuning reported in your paper (we saw a <10% decrease vs. the >90% claimed in your paper). The command we used to check GPU memory usage is torch.cuda.max_memory_allocated().
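For reference, here is a minimal sketch of the kind of setup we used (the model and input sizes are toy placeholders, not our actual backbone): freeze every parameter except the bias terms, run one forward/backward pass, and read the peak with torch.cuda.max_memory_allocated().

```python
import torch
import torch.nn as nn

# Toy conv net standing in for the backbone (placeholder; any model works).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)

# Bias-only fine-tuning: freeze everything except bias terms.
for name, p in model.named_parameters():
    p.requires_grad = name.endswith("bias")

trainable = [n for n, p in model.named_parameters() if p.requires_grad]

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

if device == "cuda":
    # Reset the counter so the peak reflects only this iteration.
    torch.cuda.reset_peak_memory_stats()

x = torch.randn(8, 3, 32, 32, device=device)
loss = model(x).sum()
loss.backward()

if device == "cuda":
    # Peak memory held by tensors during forward + backward.
    print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1e6:.1f} MB")
```

With this setup only the bias parameters receive gradients, but we still see nearly the same peak as full fine-tuning, which is what prompted the question.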

We also checked your released codebase, and the only relevant part is this, which is the same as our implementation.

So I am just wondering how we could observe this training-memory decrease empirically. Or are we using the wrong command to check memory usage?

Thanks!
