# BentoLlamaCpp

BentoML + llama.cpp

## Installation

```bash
uv venv -p 3.11

# For Apple Silicon (M-series) Macs
CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DGGML_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
# For other Macs
CMAKE_ARGS="-DGGML_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python

uv pip install -r pyproject.toml
```

## Serving

Start the service locally:

```bash
bentoml serve
```
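With the server running, you can call it over HTTP (BentoML services listen on port 3000 by default). A minimal stdlib sketch, assuming the service exposes a JSON `generate` endpoint that accepts a `prompt` field; the actual route and parameters are defined by this repo's service code, so adjust accordingly:

```python
import json
import urllib.request

# Assumed endpoint path and payload shape; check the service definition
# in this repo for the real route and parameter names.
def build_request(prompt: str, url: str = "http://localhost:3000/generate") -> urllib.request.Request:
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

# With `bentoml serve` running, send the request and print the response:
#   with urllib.request.urlopen(build_request("Explain llama.cpp briefly.")) as resp:
#       print(resp.read().decode("utf-8"))
```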

Gemma 3 is served by default. To serve a different model, pass its config file:

```bash
bentoml serve -f qwq.yaml
```

## Deploy

Deploy the service to BentoCloud (log in first with `bentoml cloud login` if you haven't):

```bash
bentoml deploy
```

