Similar to rag_api, except that it first lets the LLM attempt to answer the query on its own. If the LLM is not confident enough in its answer, the query is re-run using data retrieved from Nyx subscriptions.
- See rag_api.
- Uses the `Utils.with_confidence` prompt modification to require the LLM to judge its own answer.
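The confidence-gated flow described above can be sketched as follows. This is a minimal illustration, not the SDK implementation: the helper names (`ask_llm`, `retrieve_from_nyx`, `CONFIDENCE_THRESHOLD`) and the stubbed responses are hypothetical stand-ins for the real LLM call, the with_confidence-style prompt, and the Nyx retrieval step.

```python
# Hypothetical sketch of the two-pass flow: answer unaided first, and only
# fall back to retrieval when the LLM's self-judged confidence is too low.

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off, not an SDK constant


def ask_llm(query: str, context: str = "") -> tuple[str, float]:
    """Stand-in for an LLM call that returns (answer, confidence in 0..1).

    In the real example, a with_confidence-style instruction is appended to
    the prompt and the confidence score is parsed from the completion.
    """
    if not context:
        return "I am not sure.", 0.2  # stubbed low-confidence first attempt
    return f"Answer grounded in: {context}", 0.9  # stubbed grounded answer


def retrieve_from_nyx(query: str) -> str:
    """Stand-in for retrieving relevant data from Nyx subscriptions."""
    return "data retrieved from subscribed Nyx products"


def answer(query: str) -> str:
    # First pass: let the LLM answer unaided and judge its own answer.
    reply, confidence = ask_llm(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply
    # Second pass: re-run the query with retrieved Nyx data as context.
    reply, _ = ask_llm(query, context=retrieve_from_nyx(query))
    return reply


print(answer("What is the average rainfall?"))
```

With the stubbed confidences above, the first attempt scores below the threshold, so the query falls through to the retrieval-backed second pass.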
See the parent README.

ℹ️ Make sure you have set the `OPENAI_API_KEY` environment variable.