Commit 8da898c
[feature] ConfidentAI logging enabled for proxy and sdk (BerriAI#10649)
* async success implemented
* fail async event
* sync events added
* docs added
* test added
* style
* lock file generated due to tenacity change
* mypy errors
* resolved comments
1 parent e9b7059 commit 8da898c

File tree

13 files changed: +622 −4933 lines
Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
import Image from '@theme/IdealImage';

# 🔭 DeepEval - Open-Source Evals with Tracing

### What is DeepEval?

[DeepEval](https://deepeval.com) is an open-source evaluation framework for LLMs ([GitHub](https://github.com/confident-ai/deepeval)).

### What is Confident AI?

[Confident AI](https://documentation.confident-ai.com) (the ***deepeval*** platform) offers an Observatory for teams to trace and monitor LLM applications. Think Datadog for LLM apps. The Observatory allows you to:

- Detect and debug issues in your LLM applications in real time
- Search and analyze historical generation data with powerful filters
- Collect human feedback on model responses
- Run evaluations to measure and improve performance
- Track costs and latency to optimize resource usage

<Image img={require('../../img/deepeval_dashboard.png')} />

### Quickstart

```python
import os

import litellm

os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"
os.environ["CONFIDENT_API_KEY"] = "<your-confident-api-key>"

# log successful and failed completions to Confident AI (deepeval)
litellm.success_callback = ["deepeval"]
litellm.failure_callback = ["deepeval"]

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "What's the weather like in San Francisco?"}
        ],
    )
    print(response)
except Exception as e:
    print(e)
```

:::info
You can obtain your `CONFIDENT_API_KEY` by logging into the [Confident AI](https://app.confident-ai.com/project) platform.
:::

## Support & Talk with the DeepEval team

- [Confident AI Docs 📝](https://documentation.confident-ai.com)
- [Platform 🚀](https://confident-ai.com)
- [Community Discord 💭](https://discord.gg/wuPM9dRgDw)
- Support ✉️ support@confident-ai.com
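The `success_callback` / `failure_callback` lists used in the quickstart can be understood as name-based dispatch: each registered name maps to a logger, and the matching list fires after a call succeeds or fails. Below is a minimal, hypothetical sketch of that pattern only; it is not LiteLLM's actual implementation, and all names in it (`CALLBACK_REGISTRY`, `completion`, `log_event`) are made up for illustration.

```python
# Hypothetical sketch of the success/failure callback pattern (NOT LiteLLM code).
CALLBACK_REGISTRY = {}  # callback name -> logger function

def register(name):
    def deco(fn):
        CALLBACK_REGISTRY[name] = fn
        return fn
    return deco

events = []  # stand-in for "events sent to the platform"

@register("deepeval")
def log_event(kind, payload):
    # a real integration would POST this to the observability platform
    events.append((kind, payload))

success_callback = ["deepeval"]
failure_callback = ["deepeval"]

def completion(prompt, fail=False):
    try:
        if fail:
            raise RuntimeError("upstream error")
        response = {"choices": [{"message": {"content": "ok"}}]}
        for name in success_callback:
            CALLBACK_REGISTRY[name]("success", response)
        return response
    except Exception as e:
        for name in failure_callback:
            CALLBACK_REGISTRY[name]("failure", {"error": str(e)})
        raise

completion("hi")
try:
    completion("hi", fail=True)
except RuntimeError:
    pass

print(len(events))  # 2: one success event, one failure event
```

The point of the two separate lists is that success and failure can be routed to different sinks; here both point at the same logger, mirroring the quickstart configuration.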

docs/my-website/docs/proxy/logging.md

Lines changed: 52 additions & 0 deletions
@@ -11,6 +11,7 @@ Log Proxy input, output, and exceptions using:
 - GCS, s3, Azure (Blob) Buckets
 - Lunary
 - MLflow
+- Deepeval
 - Custom Callbacks - Custom code and API endpoints
 - Langsmith
 - DataDog
@@ -1182,7 +1183,58 @@ curl --location 'http://0.0.0.0:4000/chat/completions' \

## Deepeval

LiteLLM supports logging on [Confident AI](https://documentation.confident-ai.com/) (the DeepEval platform):

### Usage:
1. Add `deepeval` in the LiteLLM `config.yaml`

```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: gpt-4o
litellm_settings:
  success_callback: ["deepeval"]
  failure_callback: ["deepeval"]
```
2. Set your environment variables in a `.env` file:
```shell
CONFIDENT_API_KEY=<your-api-key>
```
:::info
You can obtain your `CONFIDENT_API_KEY` by logging into the [Confident AI](https://app.confident-ai.com/project) platform.
:::

3. Start your proxy server:
```shell
litellm --config config.yaml --debug
```
4. Make a request:
```shell
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful math tutor. Guide the user through the solution step by step."
      },
      {
        "role": "user",
        "content": "how can I solve 8x + 7 = -23"
      }
    ]
}'
```
1235+
5. Check trace on platform:
1236+
1237+
<Image img={require('../../img/deepeval_visible_trace.png')} />
11861238

11871239
## s3 Buckets


docs/my-website/sidebars.js

Lines changed: 1 addition & 0 deletions
@@ -460,6 +460,7 @@ const sidebars = {
 "observability/agentops_integration",
 "observability/langfuse_integration",
 "observability/lunary_integration",
+"observability/deepeval_integration",
 "observability/mlflow",
 "observability/gcs_bucket_integration",
 "observability/langsmith_integration",

litellm/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -118,6 +118,7 @@
 "generic_api",
 "resend_email",
 "smtp_email",
+"deepeval"
 ]
 logged_real_time_event_types: Optional[Union[List[str], Literal["*"]]] = None
 _known_custom_logger_compatible_callbacks: List = list(
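The hunk above registers `"deepeval"` in litellm's list of known callback names, which is what lets `success_callback: ["deepeval"]` resolve to the integration. A hypothetical sketch of the kind of membership check such a list enables (the names `KNOWN_CALLBACKS` and `validate_callbacks` are invented here, not LiteLLM's actual validation code):

```python
# Hypothetical sketch: a configured callback name must be a known logger name.
KNOWN_CALLBACKS = ["generic_api", "resend_email", "smtp_email", "deepeval"]

def validate_callbacks(names):
    unknown = [n for n in names if n not in KNOWN_CALLBACKS]
    if unknown:
        raise ValueError(f"Unknown logging callbacks: {unknown}")
    return names

validate_callbacks(["deepeval"])  # accepted now that "deepeval" is registered
```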
Lines changed: 120 additions & 0 deletions
@@ -0,0 +1,120 @@

# duplicate -> https://github.com/confident-ai/deepeval/blob/main/deepeval/confident/api.py
import logging
from enum import Enum

import httpx

from litellm._logging import verbose_logger
from litellm.llms.custom_httpx.http_handler import (
    HTTPHandler,
    get_async_httpx_client,
    httpxSpecialProvider,
)

DEEPEVAL_BASE_URL = "https://deepeval.confident-ai.com"
DEEPEVAL_BASE_URL_EU = "https://eu.deepeval.confident-ai.com"
API_BASE_URL = "https://api.confident-ai.com"
API_BASE_URL_EU = "https://eu.api.confident-ai.com"
retryable_exceptions = httpx.HTTPError


def log_retry_error(details):
    # `details` is a retry-callback dict (e.g. from a backoff/retry decorator)
    exception = details.get("exception")
    tries = details.get("tries")
    if exception:
        logging.error(f"Confident AI Error: {exception}. Retrying: {tries} time(s)...")
    else:
        logging.error(f"Retrying: {tries} time(s)...")


class HttpMethods(Enum):
    GET = "GET"
    POST = "POST"
    DELETE = "DELETE"
    PUT = "PUT"


class Endpoints(Enum):
    DATASET_ENDPOINT = "/v1/dataset"
    TEST_RUN_ENDPOINT = "/v1/test-run"
    TRACING_ENDPOINT = "/v1/tracing"
    EVENT_ENDPOINT = "/v1/event"
    FEEDBACK_ENDPOINT = "/v1/feedback"
    PROMPT_ENDPOINT = "/v1/prompt"
    RECOMMEND_ENDPOINT = "/v1/recommend-metrics"
    EVALUATE_ENDPOINT = "/evaluate"
    GUARD_ENDPOINT = "/guard"
    GUARDRAILS_ENDPOINT = "/guardrails"
    BASELINE_ATTACKS_ENDPOINT = "/generate-baseline-attacks"


class Api:
    def __init__(self, api_key: str, base_url=None):
        self.api_key = api_key
        self._headers = {
            "Content-Type": "application/json",
            # "User-Agent": "Python/Requests",
            "CONFIDENT_API_KEY": api_key,
        }
        # default to the global (non-EU) base url
        self.base_api_url = base_url or API_BASE_URL
        self.sync_http_handler = HTTPHandler()
        self.async_http_handler = get_async_httpx_client(
            llm_provider=httpxSpecialProvider.LoggingCallback
        )

    def _http_request(
        self, method: str, url: str, headers=None, json=None, params=None
    ):
        if method != "POST":
            raise Exception("Only POST requests are supported")
        try:
            # return the response so send_request can inspect status and body
            return self.sync_http_handler.post(
                url=url,
                headers=headers,
                json=json,
                params=params,
            )
        except httpx.HTTPStatusError as e:
            raise Exception(f"DeepEval logging error: {e.response.text}")

    def send_request(
        self, method: HttpMethods, endpoint: Endpoints, body=None, params=None
    ):
        url = f"{self.base_api_url}{endpoint.value}"
        res = self._http_request(
            method=method.value,
            url=url,
            headers=self._headers,
            json=body,
            params=params,
        )

        if res.status_code == 200:
            try:
                return res.json()
            except ValueError:
                return res.text
        else:
            verbose_logger.debug(res.json())
            raise Exception(res.json().get("error", res.text))

    async def a_send_request(
        self, method: HttpMethods, endpoint: Endpoints, body=None, params=None
    ):
        if method != HttpMethods.POST:
            raise Exception("Only POST requests are supported")

        url = f"{self.base_api_url}{endpoint.value}"
        try:
            await self.async_http_handler.post(
                url=url,
                headers=self._headers,
                json=body,
                params=params,
            )
        except httpx.HTTPStatusError as e:
            raise Exception(f"DeepEval logging error: {e.response.text}")
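In the `Api` class above, request URLs are composed as `f"{self.base_api_url}{endpoint.value}"`. A small standalone check of that composition, duplicating a subset of the constants and the `Endpoints` enum rather than importing litellm:

```python
from enum import Enum

API_BASE_URL = "https://api.confident-ai.com"

class Endpoints(Enum):
    # subset of the Endpoints enum from the file above
    EVENT_ENDPOINT = "/v1/event"
    TRACING_ENDPOINT = "/v1/tracing"

def build_url(base_url: str, endpoint: Endpoints) -> str:
    # mirrors Api.send_request: url = f"{self.base_api_url}{endpoint.value}"
    return f"{base_url}{endpoint.value}"

print(build_url(API_BASE_URL, Endpoints.EVENT_ENDPOINT))
# https://api.confident-ai.com/v1/event
```

Keeping the endpoints in an `Enum` means callers can only request paths the API actually exposes, and `endpoint.value` keeps the string in one place.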
