Unable to run llama server from the jetson container for llama_cpp #1424

@varunmathurv

Description

Search before asking

  • I have searched the jetson-containers issues and found no similar feature requests.

Question

Why are all of the llama commands missing from the latest jetson container, llama_cpp:r36.4.tegra-aarch64-cu126-22.04-cuda-python?
I'm just trying to start a llama-server on an 8GB Orin Nano. Sorry if this is obvious.
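
For context, this is roughly what I'm trying to do. A minimal sketch, assuming the -cuda-python variant ships the llama-cpp-python bindings (the llama_cpp Python package) rather than the standalone llama.cpp binaries; the model path below is a placeholder:

```python
# Sketch using the llama-cpp-python bindings, which the -cuda-python
# image presumably provides; the GGUF path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="/data/models/model.gguf",  # placeholder, not a real path
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=2048,       # modest context to fit the Orin Nano's 8GB
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If the package was built with its server extra, I'd expect something like `python3 -m llama_cpp.server --model /data/models/model.gguf --host 0.0.0.0 --port 8000` to give an OpenAI-compatible endpoint in place of llama-server, but I couldn't confirm that module is available inside this image either.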

Additional

No response

Labels: question (further information is requested)
