Migrate ai-weather-agent recipe from Magic to Pixi (#69)
- Updated .gitignore to use Pixi environments and include pixi.lock
- Replaced Magic CLI installation instructions with Pixi in README.md
- Changed "MAX Serve" references to "MAX" throughout documentation
- Updated system requirements link to point to FAQ page
- Modified metadata.yaml to use pixi run commands instead of magic run
- Updated root pixi.toml to prioritize max-nightly channel and add modular dependency
- Updated backend/pyproject.toml with modular dependency and reordered channels
- Modified Procfile, Procfile.clean, and Procfile.demo to use pixi run commands
- Replaced global max-pipelines installation with modular package dependency
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
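The channel and dependency changes listed above might look roughly like this in the root `pixi.toml` (a hypothetical sketch: the project name, platforms, and version pins are assumptions; the channel URL is Modular's documented nightly conda channel):

```toml
[project]
name = "ai-weather-agent"
# max-nightly listed first so it takes priority during resolution
channels = ["https://conda.modular.com/max-nightly", "conda-forge"]
platforms = ["linux-64", "osx-arm64"]

[dependencies]
# replaces the previous global max-pipelines install
modular = "*"
```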
4. Once you're done with the app, clean up the resources by running:

   ```bash
-  magic run clean
+  pixi run clean
   ```
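`pixi run clean` invokes a task declared in `pixi.toml`. A minimal sketch of such a declaration (the task's actual command is an assumption, not the recipe's real definition):

```toml
[tasks]
# hypothetical command; the recipe defines its own clean-up steps
clean = "bash scripts/clean.sh"
```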
## System architecture
@@ -125,12 +113,12 @@ The architecture consists of several key components:
* **Frontend (Next.js)**: A modern React application providing a real-time chat interface and weather visualization
* **Backend (FastAPI)**: Orchestrates the entire flow, handling request routing and response generation
- * **MAX Serve**: Runs the Llama 3 model for intent detection, function calling, and response generation
+ * **MAX**: Runs the Llama 3 model for intent detection, function calling, and response generation
* **WeatherAPI**: External service providing current weather conditions and forecasts
* **Sentence Transformers**: Uses `sentence-transformers/all-mpnet-base-v2` to generate embeddings for semantic caching
* **Semantic Cache**: Stores recent query results to improve response times

- Each component is designed to be independently scalable and maintainable. The backend uses FastAPI's async capabilities to handle concurrent requests efficiently, while MAX Serve provides high-performance inference for the LLM components.
+ Each component is designed to be independently scalable and maintainable. The backend uses FastAPI's async capabilities to handle concurrent requests efficiently, while MAX provides high-performance inference for the LLM components.
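The semantic-cache component described above can be sketched in a few lines: store recent (embedding, result) pairs and return a cached result when a new query's embedding is close enough by cosine similarity. This is an illustrative sketch, not the recipe's actual implementation; the fixed vectors and the 0.9 threshold stand in for the real `sentence-transformers/all-mpnet-base-v2` embeddings and whatever threshold the recipe uses.

```python
import numpy as np

class SemanticCache:
    """Toy semantic cache: linear scan over stored embeddings."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, result) pairs

    def get(self, emb):
        # Return the first cached result whose embedding is similar enough.
        for cached_emb, result in self.entries:
            sim = float(np.dot(emb, cached_emb)
                        / (np.linalg.norm(emb) * np.linalg.norm(cached_emb)))
            if sim >= self.threshold:
                return result
        return None

    def put(self, emb, result):
        self.entries.append((emb, result))

cache = SemanticCache()
cache.put(np.array([1.0, 0.0]), "sunny, 22°C")
print(cache.get(np.array([0.99, 0.05])))  # near-identical query: cache hit
print(cache.get(np.array([0.0, 1.0])))    # unrelated query: None
```

A production version would bound the cache size and evict stale entries; the linear scan here is only for clarity.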