
fix: use torch device class instead of string for compatibility with comfy memory management #200


Merged: 1 commit merged into mit-han-lab:dev on May 25, 2025

Conversation

josephrocca

The comfy.model_management.free_memory function expects the device param to be a <class 'torch.device'>. Currently that equality check fails because it compares a <class 'torch.device'> with a str.
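The failure mode can be sketched with a minimal stand-in class (a hypothetical `Device`, standing in for torch.device; the real class's comparison semantics may vary by PyTorch version): an equality check between a device object and a plain str is False, so the device never matches.

```python
class Device:
    """Hypothetical stand-in for torch.device (illustration only)."""

    def __init__(self, name: str):
        self.name = name

    def __eq__(self, other):
        # Comparing against anything that is not a Device yields False,
        # which is why passing a plain string defeats the check.
        return isinstance(other, Device) and self.name == other.name


# Before the fix: a str never compares equal to a device object.
print(Device("cuda:0") == "cuda:0")          # False

# After the fix: wrap the string in the device class first.
print(Device("cuda:0") == Device("cuda:0"))  # True
```

The PR's change is roughly the equivalent of passing torch.device("cuda") instead of the string "cuda" into comfy.model_management.free_memory.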

This proposed change fixes one class of OOM on my laptop's 8GB RTX 2070 GPU.

I think there is at least one other change needed to play nicely with Comfy's memory management on low-VRAM GPUs. I think the main one is related to the model.detach call, which doesn't seem to work. If I can work that out, I'll propose that change in a separate pull request.

@lmxyy lmxyy changed the base branch from main to dev on May 24, 2025 at 01:44
Collaborator

@lmxyy lmxyy left a comment


Passing tests. Approved.

@lmxyy lmxyy merged commit 29a8b51 into mit-han-lab:dev May 25, 2025
1 check passed