This PR adds one elite-tier project to the Training & Fine-tuning Ecosystem section:
| Project | Stars | License | Description |
|---|---|---|---|
| Oumi | 9,185+ | Apache-2.0 | Fully open-source platform for the complete foundation model lifecycle - from data preparation and training to evaluation and deployment. Supports 100+ models with 200+ recipes for fine-tuning gpt-oss, Qwen3, DeepSeek-R1, and more. |
**Inclusion criteria:**

- ✅ Stars: 9,185+ (well above the 1,000-star threshold)
- ✅ Activity: Last pushed 2026-04-17 (active development)
- ✅ License: Apache-2.0 (OSI-approved)
- ✅ Quality: Production-ready with comprehensive documentation, 200+ training recipes, and support for 100+ models
Oumi is a significant addition because it:
- Provides an end-to-end open-source platform (not just training, but the full lifecycle)
- Has rapidly gained traction with 9,000+ stars
- Supports the latest models including gpt-oss, Qwen3, and DeepSeek-R1
- Offers 200+ pre-built recipes for common fine-tuning tasks
- Is fully open source under the Apache-2.0 license
**Checklist:**

- [x] Verified project has 1,000+ stars
- [x] Verified OSI-approved open-source license
- [x] Verified active development (commits within the last 3 months)
- [x] Added to the appropriate section (Training & Fine-tuning Ecosystem > Full Training Frameworks)
- [x] Follows existing formatting and style