
Can't install vllm, llama.cpp #1369

Open

Description

@lullabies777

🐛 Bug

Recently, my CUDA installation was updated to version 12. Since then, I've been getting errors when trying to install vLLM and llama.cpp with pip. Could there be an issue with my CUDA installation? For reference, the error messages are:
[screenshots of the pip build error messages]
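
One quick sanity check for the "is my CUDA installation broken?" question is whether the environment pip builds in can see the CUDA 12 toolkit at all. A minimal sketch (probing via `nvcc` is an assumption on my part, not something taken from the report):

```python
# Sanity check: can the build environment find the CUDA toolkit?
# Source builds of CUDA-enabled packages generally require nvcc on PATH.
import shutil
import subprocess

nvcc = shutil.which("nvcc")
if nvcc is None:
    print("nvcc not found on PATH - source builds will fail to locate CUDA")
else:
    # Prints the toolkit release, e.g. "Cuda compilation tools, release 12.x"
    print(subprocess.run([nvcc, "--version"], capture_output=True, text=True).stdout)
```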

To Reproduce
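
The exact commands aren't stated in the report; presumably the standard pip installs, sketched here as a script (package names assumed; `llama-cpp-python` is the usual PyPI name for the llama.cpp bindings):

```python
# Presumed reproduction (commands assumed, not quoted from the report):
# plain pip installs on a machine recently upgraded to CUDA 12.
import subprocess
import sys

for pkg in ("vllm", "llama-cpp-python"):
    # check=True surfaces the build failure as a CalledProcessError
    subprocess.run([sys.executable, "-m", "pip", "install", pkg], check=True)
```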

Expected behavior

Both `pip install` commands complete successfully instead of failing during the build.

Additional context


Labels

bug · bug & failures with existing packages · help wanted
