Commit 58aeeeb

Authored by williamwclaude and Claude
Migrate ai-weather-agent recipe from Magic to Pixi (#69)
- Updated .gitignore to use Pixi environments and include pixi.lock
- Replaced Magic CLI installation instructions with Pixi in README.md
- Changed "MAX Serve" references to "MAX" throughout documentation
- Updated system requirements link to point to FAQ page
- Modified metadata.yaml to use pixi run commands instead of magic run
- Updated root pixi.toml to prioritize max-nightly channel and add modular dependency
- Updated backend/pyproject.toml with modular dependency and reordered channels
- Modified Procfile, Procfile.clean, and Procfile.demo to use pixi run commands
- Replaced global max-pipelines installation with modular package dependency

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
Parent: 17c5a2e
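Much of this commit is a mechanical rename of `magic run` to `pixi run` across the Procfiles and task lists. As a rough sketch of that rewrite (`migrate_line` is a hypothetical helper; the actual commit edited each file directly):

```shell
# Hypothetical helper: rewrite one Procfile-style task line from the
# Magic CLI to Pixi by substituting "magic run" with "pixi run".
migrate_line() {
  printf '%s\n' "$1" | sed 's/magic run/pixi run/g'
}

migrate_line 'backend: cd backend && magic run backend'
# → backend: cd backend && pixi run backend
```

The same substitution covers the `tasks:` entries in metadata.yaml and the process-kill patterns in Procfile.clean.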

File tree

8 files changed (+32 −41 lines changed)


ai-weather-agent/.gitignore

Lines changed: 2 additions & 3 deletions
```diff
@@ -1,6 +1,5 @@
-# pixi environments
+# Pixi environments
 .pixi
 *.egg-info
-# magic environments
-.magic
+pixi.lock
 max
```

ai-weather-agent/Procfile

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,4 +1,4 @@
 max-serve-llm: MAX_SERVE_PORT=8010 HUGGING_FACE_HUB_TOKEN=$(cat backend/.env | grep HUGGING_FACE_HUB_TOKEN | cut -d '=' -f2) max serve --model-path modularai/Llama-3.1-8B-Instruct-GGUF --max-length 2048
 max-serve-embedding: MAX_SERVE_PORT=7999 HUGGING_FACE_HUB_TOKEN=$(cat backend/.env | grep HUGGING_FACE_HUB_TOKEN | cut -d '=' -f2) max serve --model-path sentence-transformers/all-mpnet-base-v2
-backend: cd backend && magic run backend
-frontend: cd frontend && magic run frontend
+backend: cd backend && pixi run backend
+frontend: cd frontend && pixi run frontend
```

ai-weather-agent/Procfile.clean

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,3 +1,3 @@
-cleanup: pkill -f "max serve" || true && pkill -f "magic run backend" || true && pkill -f "magic run frontend" || true
+cleanup: pkill -f "max serve" || true && pkill -f "pixi run backend" || true && pkill -f "pixi run frontend" || true
 gpu-cleanup: command -v nvidia-smi >/dev/null && nvidia-smi pmon -c 1 | grep python | awk '{print $2}' | xargs -r kill -9 2>/dev/null || true
 port-cleanup: lsof -ti:7999,8001,8010,3000 | xargs -r kill -9 2>/dev/null || true
```
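The cleanup tasks above lean on the `cmd || true` idiom: `pkill` exits non-zero when no matching process exists, and in POSIX shells `&&` and `||` have equal precedence and associate left to right, so each `pkill … || true` segment succeeds and the chain continues. A minimal illustration:

```shell
# A failing command (simulated here with `false`) is masked by `|| true`,
# so the next command in the && chain still runs.
false || true && echo "chain continued"
# → chain continued
```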

ai-weather-agent/Procfile.demo

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,2 +1,2 @@
-backend: cd backend && magic run backend
-frontend: cd frontend && magic run frontend
+backend: cd backend && pixi run backend
+frontend: cd frontend && pixi run frontend
```

ai-weather-agent/README.md

Lines changed: 16 additions & 28 deletions
````diff
@@ -1,4 +1,4 @@
-# Agentic Workflows: Build your own Weather Agent with MAX Serve, FastAPI and NextJS
+# Agentic Workflows: Build your own Weather Agent with MAX, FastAPI and NextJS
 
 This recipe demonstrates how to build an intelligent weather assistant that combines:
 
@@ -39,24 +39,12 @@ You'll learn how to:
 
 ## Requirements
 
-Please make sure your system meets our [system requirements](https://docs.modular.com/max/get-started).
+Please make sure your system meets our [system requirements](https://docs.modular.com/max/faq/#system-requirements).
 
-To proceed, ensure you have the `magic` CLI installed with the `magic --version` to be **0.7.2** or newer:
+To proceed, ensure you have the `pixi` CLI installed. You can install it via:
 
 ```bash
-curl -ssL https://magic.modular.com/ | bash
-```
-
-or update it via:
-
-```bash
-magic self-update
-```
-
-Then install `max-pipelines` via:
-
-```bash
-magic global install -u max-pipelines
+curl -fsSL https://pixi.sh/install.sh | bash
 ```
 
 For this recipe, you will also need:
@@ -74,27 +62,27 @@ echo "WEATHERAPI_API_KEY=your_api_key" >> backend/.env
 
 ## Quick start
 
-1. Download the code for this recipe using `magic` CLI:
+1. Download the code for this recipe:
 
 ```bash
-magic init ai-weather-agent --from modular/max-recipes/ai-weather-agent
-cd ai-weather-agent
+git clone https://github.com/modular/max-recipes.git
+cd max-recipes/ai-weather-agent
 ```
 
 2. Run the application:
 
 **Make sure the ports `7999, 8001` and `8010` are available. You can adjust the port settings in [Procfile](./Procfile).**
 
 ```bash
-magic run app
+pixi run app
 ```
 
 Note that it may take a few minutes for models to be downloaded and compiled.
 
 3. Open [http://localhost:3000](http://localhost:3000) in your browser to see the UI when **all services** below are ready:
 
-* MAX Serve embedding on port `7999`
-* MAX Serve Llama 3 on port `8000` and
+* MAX embedding on port `7999`
+* MAX Llama 3 on port `8000` and
 * Backend FastAPI on port `8001`
 
 <img src="ui.png" alt="Chat interface" width="100%" style="max-width: 800px;">
@@ -112,7 +100,7 @@ echo "WEATHERAPI_API_KEY=your_api_key" >> backend/.env
 4. And once done with the app, to clean up the resources run:
 
 ```bash
-magic run clean
+pixi run clean
 ```
 
 ## System architecture
@@ -125,12 +113,12 @@ The architecture consists of several key components:
 
 * **Frontend (Next.js)**: A modern React application providing real-time chat interface and weather visualization
 * **Backend (FastAPI)**: Orchestrates the entire flow, handling request routing and response generation
-* **MAX Serve**: Runs the Llama 3 model for intent detection, function calling, and response generation
+* **MAX**: Runs the Llama 3 model for intent detection, function calling, and response generation
 * **WeatherAPI**: External service providing current weather conditions and forecasts
 * **Sentence Transformers**: Used `sentence-transformers/all-mpnet-base-v2` for generating embeddings for semantic caching
 * **Semantic Cache**: Stores recent query results to improve response times
 
-Each component is designed to be independently scalable and maintainable. The backend uses FastAPI's async capabilities to handle concurrent requests efficiently, while MAX Serve provides high-performance inference for the LLM components.
+Each component is designed to be independently scalable and maintainable. The backend uses FastAPI's async capabilities to handle concurrent requests efficiently, while MAX provides high-performance inference for the LLM components.
 
 ## Request flow
 
@@ -404,7 +392,7 @@ class SemanticCache:
         self._lock = Lock()
 
     async def _compute_embedding(self, text: str) -> np.ndarray:
-        """Get embeddings from MAX Serve embedding endpoint"""
+        """Get embeddings from MAX embedding endpoint"""
         response = await embedding_client.embeddings.create(
             model=EMBEDDING_MODEL,
             input=text
@@ -597,7 +585,7 @@ const operationLabels: Record<string, string> = {
 Common issues and solutions:
 
 1. **LLM server connection issues**
-   * Ensure MAX Serve is running (`magic run app`)
+   * Ensure MAX is running (`pixi run app`)
    * Check logs for any GPU-related errors
    * Verify Hugging Face token is set correctly
 
@@ -670,7 +658,7 @@ The patterns and components shown here provide a solid foundation for building y
 Now that you've built a foundation for AI-powered applications, you can explore more advanced deployments and features:
 
 * Explore [MAX documentation](https://docs.modular.com/max/) for more features
-* Deploy MAX Serve on [AWS, GCP or Azure](https://docs.modular.com/max/tutorials/max-serve-local-to-cloud/)
+* Deploy MAX on [AWS, GCP or Azure](https://docs.modular.com/max/tutorials/max-serve-local-to-cloud/)
 * Join our [Modular Forum](https://forum.modular.com/) and [Discord community](https://discord.gg/modular)
 
 We're excited to see what you'll build with MAX! Share your projects with us using `#ModularAI` on social media.
````

ai-weather-agent/backend/pyproject.toml

Lines changed: 4 additions & 1 deletion
```diff
@@ -27,11 +27,14 @@ packages = ["src"]
 
 [tool.pixi.project]
 channels = [
-    "conda-forge",
     "https://conda.modular.com/max-nightly",
+    "conda-forge",
 ]
 platforms = ["linux-64", "osx-arm64", "linux-aarch64"]
 
+[tool.pixi.dependencies]
+modular = ">=25.5.0.dev2025070905,<26"
+
 [tool.pixi.pypi-dependencies]
 backend = { path = ".", editable = true }
 
```

ai-weather-agent/metadata.yaml

Lines changed: 3 additions & 3 deletions
```diff
@@ -1,5 +1,5 @@
 version: 1.0
-long_title: "Agentic Workflows: Build your own Weather Agent with MAX Serve, FastAPI and NextJS"
+long_title: "Agentic Workflows: Build your own Weather Agent with MAX, FastAPI and NextJS"
 short_title: "Build your own AI Weather Agent"
 author: "Ehsan M. Kermani"
 author_image: "author/ehsan.jpg"
@@ -15,5 +15,5 @@ tags:
   - observability
 
 tasks:
-  - magic run app
-  - magic run clean
+  - pixi run app
+  - pixi run clean
```

ai-weather-agent/pixi.toml

Lines changed: 2 additions & 1 deletion
```diff
@@ -1,8 +1,8 @@
 [project]
 authors = ["Modular <hello@modular.com>"]
 channels = [
-    "conda-forge",
     "https://conda.modular.com/max-nightly",
+    "conda-forge",
 ]
 description = "Add a short description here"
 name = "frontend"
@@ -19,3 +19,4 @@ demo = "honcho -f Procfile.demo start"
 docker-compose = ">=2.29"
 bash = ">=5.2.21,<6"
 honcho = ">=2.0.0,<3"
+modular = ">=25.5.0.dev2025070905,<26"
```
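Both manifests make the same move: Pixi resolves channels in the order they are listed, so putting the `max-nightly` channel ahead of `conda-forge` lets the nightly `modular` package take priority. An abridged sketch of the resulting configuration (values taken from the diff; the real files contain more keys):

```toml
[project]
channels = [
    "https://conda.modular.com/max-nightly",  # searched first
    "conda-forge",
]

[dependencies]
modular = ">=25.5.0.dev2025070905,<26"
```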
