[Research] Add 2 elite-tier projects to Inference Engines & Serving#131
Merged
Conversation
## Projects Added

### One-API (songquanpeng/one-api)
- ⭐ Stars: 31,512 (threshold: 1000+)
- 🔄 Active: 2026-01-09 (within 6 months)
- 🏭 Production: LLM API gateway with rate limiting and quota management
- 📚 Quality: MIT license, full documentation

### OpenLLM (bentoml/OpenLLM)
- ⭐ Stars: 12,273 (threshold: 1000+)
- 🔄 Active: 2026-04-06 (within 6 months)
- 🏭 Production: Enterprise-grade LLM serving platform
- 📚 Quality: Apache 2.0 license, OpenAI-compatible API

Category: Inference Engines & Serving (§3)
Research Date: 2026-04-07
## Project: One-API

### Elite Criteria Checklist (ALL Required)

- [x] Elite Criteria: ALL criteria met
  - ⭐ Stars: 31,512 (threshold: 1000+)
  - 🔄 Active: 2026-01-09 (within 6 months)
  - 🏭 Production: LLM API gateway with rate limiting, quota management, and cost tracking. Production deployments across multiple providers.
  - 📚 Quality: MIT license, comprehensive docs, stable releases

### Evidence of Production Usage
- https://github.com/songquanpeng/one-api - Used as a unified API gateway for managing multiple LLM providers

### Why This Belongs in Elite Tier
One-API solves a critical production need: unifying disparate LLM provider APIs under a single OpenAI-compatible interface. It includes rate limiting, quota management, and cost tracking - all essential for production deployments.

### Category
Inference Engines & Serving - High-performance Serving & API Servers

---

## Project: OpenLLM (BentoML)

### Elite Criteria Checklist (ALL Required)

- [x] Elite Criteria: ALL criteria met
  - ⭐ Stars: 12,273 (threshold: 1000+)
  - 🔄 Active: 2026-04-06 (within 6 months)
  - 🏭 Production: Enterprise-grade LLM serving platform used in production environments
  - 📚 Quality: Apache 2.0 license, full documentation, regular releases

### Evidence of Production Usage
- https://github.com/bentoml/OpenLLM - Deploy and serve LLMs in cloud environments with an OpenAI-compatible API
- https://bentoml.com - Commercial platform offering managed OpenLLM deployments

### Why This Belongs in Elite Tier
OpenLLM from BentoML provides a complete production-grade solution for running open-source LLMs. It supports 50+ models with built-in streaming, batching, and auto-acceleration. The project is backed by a commercial entity (BentoML), ensuring ongoing maintenance and support.

### Category
Inference Engines & Serving - High-performance Serving & API Servers

---

## Summary

This PR adds 2 elite-tier inference/serving projects to the Inference Engines & Serving category:

1. One-API (31,512 ⭐) - LLM API management and key redistribution system
2. OpenLLM (12,273 ⭐) - Production-grade platform for running open-source LLMs

Both projects meet all elite-tier criteria and complement the existing inference ecosystem coverage.
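Because both projects expose an OpenAI-compatible interface, a client written against the standard chat-completions schema can target either one by changing only the base URL. A minimal sketch of that shared request shape (the base URL and model name below are placeholders, not values documented by either project):

```python
import json

# Placeholder endpoint: a One-API gateway or an OpenLLM server would both
# accept this same OpenAI-style chat-completions payload at their /v1 route.
BASE_URL = "http://localhost:3000/v1"  # hypothetical local deployment


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


# The same body works against either server; only BASE_URL changes.
body = build_chat_request("my-model", "Hello!")
print(json.dumps(body, indent=2))
```

The portability of this payload is the practical meaning of "OpenAI-compatible" claimed by both projects: existing OpenAI client code can be pointed at a self-hosted gateway or serving endpoint without rewriting request logic.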