Does TruLens have built-in eval functions that can be used with Gemini AI models, or can it only be used with OpenAI? #2203
Replies: 2 comments 3 replies
Hi @Farahala! I'm Dosu and I'm helping the trulens team. TruLens has built-in evaluation (feedback) functions for several providers, including OpenAI, Huggingface, AWS Bedrock (Amazon, Anthropic, Cohere, AI21, Meta, Mistral), and LangChain (which can wrap many LLMs depending on the integration). However, there is no built-in support for Google's Gemini models: no provider, feedback function, or code reference for Gemini exists in the current codebase or documentation. The evaluation infrastructure is model-agnostic and can work with any provider that implements the required interface, but Gemini is not supported out of the box. If you want to use Gemini, you would need to implement a custom provider, similar to how OpenAI and the others are integrated. To reply, just mention @dosu.
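To make the "custom provider" suggestion concrete: TruLens feedback functions are, at bottom, callables returning a score in [0.0, 1.0], so a Gemini-backed one can be a plain function wrapping a Gemini call. The sketch below is an assumption, not code from the thread; `call_gemini` is a hypothetical stand-in for a real `google-generativeai` call, stubbed so the example runs offline.

```python
def call_gemini(prompt: str) -> str:
    # Hypothetical placeholder for a real call such as
    # google.generativeai.GenerativeModel(...).generate_content(prompt).
    # Returns a canned rating so this sketch runs without network access.
    return "8"


def gemini_relevance(question: str, answer: str) -> float:
    """Ask Gemini to rate answer relevance 0-10, normalized to [0, 1]."""
    prompt = (
        "On a scale of 0 to 10, how relevant is this answer to the question?\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply with a single integer."
    )
    raw = call_gemini(prompt).strip()
    try:
        # Clamp to the 0-10 range, then normalize.
        return min(max(int(raw), 0), 10) / 10.0
    except ValueError:
        return 0.0  # unparseable model output -> lowest score


# With trulens installed, such a callable could then be registered as a
# feedback function (import path may vary by trulens version):
# from trulens.core import Feedback
# f_relevance = Feedback(gemini_relevance).on_input_output()
print(gemini_relevance("What is TruLens?", "An LLM evaluation library."))
```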
Hello, another question: can the dashboard be self-hosted? @sfc-gh-jreini
Hi @Farahala - sorry about this.
Last week we turned on OpenTelemetry by default, but this example does not yet use OpenTelemetry.
To try this example as is, you can simply turn off OTEL tracing by setting the environment variable:
os.environ["TRULENS_OTEL_TRACING"] = "0"
Alternatively, the Google provider could be used in the standard quickstart by just changing the provider line to:
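The reply is cut off before the provider line itself appears. Below is a minimal runnable sketch of the OTEL toggle described above; the commented-out provider import is an assumption (package and class names are not confirmed anywhere in this thread) and is shown only as a plausible shape.

```python
import os

# TruLens reads this setting when its session is created, so set it
# before importing or initializing trulens components.
os.environ["TRULENS_OTEL_TRACING"] = "0"

# Hypothetical provider line (assumed names, not from this thread):
# from trulens.providers.google import Google
# provider = Google(model_engine="gemini-1.5-flash")
```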