1 parent 92ccffa, commit 392d084
README.md
```diff
@@ -208,8 +208,6 @@ This server works as a frontend that connects to an external LLM inference serve
 - Token and audio caching
 - Optimised batch sizes
 
-For best performance, adjust the API_URL in `tts_engine/inference.py` to point to your LLM inference server endpoint.
-
 ### Hardware Detection and Optimization
 
 The system features intelligent hardware detection that automatically optimizes performance based on your hardware capabilities:
```
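The line removed by this commit told users to hard-edit `API_URL` inside `tts_engine/inference.py`. A common alternative to hand-editing source is to let an environment variable override the constant. The sketch below is a hypothetical illustration of that pattern, assuming a module-level URL constant; the function name, default URL, and environment variable are assumptions, not the repository's actual code.

```python
import os

# Hypothetical default; the real endpoint in tts_engine/inference.py may differ.
DEFAULT_API_URL = "http://127.0.0.1:8080/v1/completions"


def resolve_api_url() -> str:
    """Return the LLM inference endpoint.

    Prefers the API_URL environment variable so deployments can point the
    frontend at a different inference server without editing source files.
    """
    return os.environ.get("API_URL", DEFAULT_API_URL)
```

With this pattern, `API_URL=http://my-llm-host:9000/v1/completions python server.py` would redirect the frontend without touching `inference.py`.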