
LlamaStack Agent

An agent built solely on the LlamaStack API (llama-stack-client), without LlamaIndex. It uses AIAgent with chat, tools, and an Action/Observation loop. Requires Python 3.12+.
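The Action/Observation loop can be sketched as follows. This is an illustrative stub, not the kit's actual AIAgent: the model is faked and the single calculator tool is invented for the example, but the control flow is the same idea — ask the model, run the tool it requests, feed the observation back, and stop at a final answer.

```python
# Minimal Action/Observation loop sketch (illustrative; the real AIAgent
# in this kit talks to a Llama Stack server instead of the stub below).

def calculator(expression: str) -> str:
    """Example tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(messages):
    """Stand-in for the LLM: requests the calculator once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return "Action: calculator[2+2]"
    return "Final Answer: 4"

def run_agent(question: str, model=stub_model) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(5):  # cap the number of Action/Observation turns
        reply = model(messages)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse "Action: tool[args]" and run the named tool
        name, args = reply.removeprefix("Action:").strip().rstrip("]").split("[", 1)
        observation = TOOLS[name](args)
        messages.append({"role": "tool", "content": observation})
    return "No answer within step limit"

print(run_agent("What is 2+2?"))  # → 4
```

In the real agent the stubbed model call is replaced by a chat request to the Llama Stack server, but the loop shape is unchanged.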

Use Agent Locally

Installation

git clone <repository-url>
cd Agentic-Starter-Kits
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

If you need to install Ollama, use the Ollama site or Brew.

Install Llama Stack:

pip install llama-stack llama-stack-client

Setup Instructions

Step 1: Pull Required Models

ollama pull llama3.2:3b

Step 2: Start Ollama Service

ollama serve

Keep this terminal open – Ollama needs to keep running.

Step 3: Start Llama Stack Server

From the repository root directory:

llama stack run run_llama_server.yaml

Keep this terminal open – the server runs at http://localhost:8321.
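Before moving on, you can check that the server port is accepting connections. This polling helper is an illustrative convenience, not part of the kit:

```python
import socket
import time

def wait_for_server(host="localhost", port=8321, timeout=30.0):
    """Poll until the Llama Stack port accepts TCP connections, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

Call `wait_for_server()` after `llama stack run`; it returns `True` as soon as port 8321 is reachable.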

Step 4: Install Agent Dependencies

cd agents/base/llamastack_agent
pip install -r requirements.txt

Step 5: Configure Environment Variables

Copy the template (template.env in the repository root) or create a .env file in the agent directory:

cp ../../../template.env .env

Edit .env:

BASE_URL=http://localhost:8321
MODEL_ID=ollama/llama3.2:3b
API_KEY=not-needed
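The agent reads these values at startup; a minimal sketch of loading them, assuming simple KEY=VALUE lines (the parse_env helper is illustrative — the kit may rely on a library such as python-dotenv instead):

```python
def parse_env(lines):
    """Tiny .env parser: KEY=VALUE lines; blank lines and '#' comments skipped."""
    values = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

# Same contents as the .env above
sample = """\
BASE_URL=http://localhost:8321
MODEL_ID=ollama/llama3.2:3b
API_KEY=not-needed
"""

cfg = parse_env(sample.splitlines())
print(cfg["BASE_URL"])  # → http://localhost:8321
```

In practice you would read the lines from the .env file (e.g. `parse_env(open(".env"))`) rather than an inline string.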

Step 6: Run the Interactive Chat

cd examples
python execute_ai_service_locally.py

⚡ Or with uv (from repo root):

  1. Create venv and activate:
uv venv --python 3.12
source .venv/bin/activate
  2. Copy shared utils into the agent package:
cp utils.py agents/base/llamastack_agent/src/llamastack_agent_base/
  3. Install agent (editable) and its requirements:
uv pip install -e agents/base/llamastack_agent/. -r agents/base/llamastack_agent/requirements.txt
  4. Run the example:
uv run agents/base/llamastack_agent/examples/execute_ai_service_locally.py

Deployment on Red Hat OpenShift Cluster

Step 1: Initialize the Agent

cd agents/base/llamastack_agent
chmod +x init.sh deploy.sh
./init.sh

This loads .env, validates variables, and copies utils.py into the agent package.

Step 2: Build Image and Deploy

./deploy.sh

This creates the API key secret, builds and pushes the image, and deploys the agent (Deployment, Service, Route).

Step 3: Test the Agent

Get the route host:

oc get route llamastack-agent -o jsonpath='{.spec.host}'

Send a test request:

curl -X POST https://<YOUR_ROUTE_URL>/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What is 2+2? Answer briefly."}'
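The same test request can be sent from Python using only the standard library; the /chat path and JSON body mirror the curl call above, and <YOUR_ROUTE_URL> remains a placeholder for your route host:

```python
import json
import urllib.request

# Placeholder: fill in from `oc get route llamastack-agent -o jsonpath='{.spec.host}'`
ROUTE_HOST = "<YOUR_ROUTE_URL>"

# Build the same POST /chat request as the curl example
payload = {"message": "What is 2+2? Answer briefly."}
req = urllib.request.Request(
    f"https://{ROUTE_HOST}/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once ROUTE_HOST is set:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The response shape depends on the agent's /chat handler; inspect it with `json.load(resp)` as sketched above.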

References