
How to run inference on multiple GPUs #57

@GallonDeng

Description


How can I run inference across multiple GPUs, such as RTX 4090s? The model needs much more than the 24 GB of VRAM a single card provides.
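One common approach (a sketch, not this repo's documented method) is to shard the model's layers across all visible GPUs with Hugging Face Accelerate via `transformers`' `device_map="auto"`. The model ID and the per-GPU memory caps below are placeholders; requires `pip install transformers accelerate`.

```python
# Minimal multi-GPU inference sketch, assuming the weights load as a
# standard Hugging Face transformers checkpoint. "your-org/your-model"
# is a hypothetical placeholder -- substitute the actual model ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" lets Accelerate place layers across every visible GPU,
# so a model larger than one card's 24 GB can still be loaded.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,              # fp16 halves memory vs. fp32
    device_map="auto",                      # shard layers across GPUs
    max_memory={0: "22GiB", 1: "22GiB"},    # leave headroom on each 24 GB card
)

# Inputs go to the device holding the first layers; Accelerate's hooks
# move intermediate activations between GPUs during the forward pass.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

This is pipeline-style sharding (each GPU holds a slice of the layers), so it trades some latency for fitting the model; if latency matters, tensor-parallel runtimes such as vLLM are another option.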
