Commit b3c6abe (parent ac8078d)

fixup agentgatewaysyncer README.md

Signed-off-by: David L. Chandler <[email protected]>

1 file changed: 83 additions, 125 deletions

pkg/kgateway/agentgatewaysyncer/README.md
````diff
@@ -18,7 +18,7 @@ metadata:
   namespace: default
 spec:
   logging:
-    format: Text
+    format: text
   image:
     tag: bc92714
 ---
````
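The hunk above lowercases `format: Text` to `format: text`. Going only by the examples in this README, the accepted logging formats appear to be lowercase `text` and `json` (an inference from this diff, not a documented enum). A tiny sketch of the kind of case-sensitive enum check that would reject the old spelling:

```python
# Assumed set of valid agentgateway logging formats, inferred purely from
# the examples in this README -- not from the actual agentgateway schema.
VALID_FORMATS = {"text", "json"}

def check_format(value: str) -> bool:
    # Case-sensitive membership test, like a typical strict enum:
    # "Text" and "Json" are rejected, "text" and "json" pass.
    return value in VALID_FORMATS

print(check_format("Text"))  # False
print(check_format("text"))  # True
```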
````diff
@@ -1125,30 +1125,24 @@ Port-forward, and send a request through the gateway:
 
 ### Tracing and Observability
 
-The agentgateway data plane supports comprehensive observability through OpenTelemetry (OTEL) tracing. You can configure tracing using custom ConfigMaps to integrate with various observability platforms and add custom trace fields for enhanced monitoring of your AI/LLM traffic.
+The agentgateway data plane supports comprehensive observability through OpenTelemetry (OTEL) tracing. You can configure tracing using the `rawConfig` field in AgentgatewayParameters to integrate with various observability platforms and add custom trace fields for enhanced monitoring of your AI/LLM traffic.
 
 For detailed information about tracing configuration and observability features, see the [agentgateway observability documentation](https://agentgateway.dev/docs/reference/observability/traces/).
 
-#### Configuring Tracing with ConfigMaps
+#### Configuring Tracing with RawConfig
 
-> **Note**: This approach statically configures tracing at startup. Changes to the ConfigMap are only picked up during agentgateway pod initialization, so you must restart the pod to apply configuration updates.
-
-To enable tracing, you need to:
-
-1. **Create a custom ConfigMap** with your tracing configuration
-2. **Reference the ConfigMap** in your AgentgatewayParameters
-3. **Deploy your Gateway** with the agentgateway class
-
-**Step 1: Create a ConfigMap**
+To enable tracing, configure the `rawConfig` field in AgentgatewayParameters with your tracing settings. Changes to `rawConfig` will automatically trigger an agentgateway pod rollout.
 
 ```yaml
-apiVersion: v1
-kind: ConfigMap
+apiVersion: agentgateway.dev/v1alpha1
+kind: AgentgatewayParameters
 metadata:
-  name: agent-gateway-config
+  name: agentgateway-params
   namespace: default
-data:
-  config.yaml: |-
+spec:
+  logging:
+    format: json
+  rawConfig:
     config:
       tracing:
         otlpEndpoint: http://jaeger-collector.observability.svc.cluster.local:4317
````
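Before running `kubectl apply` on a manifest like the updated example above, a quick structural check can flag leftovers of the old ConfigMap-based layout. A minimal sketch using plain string checks (a stand-in for real schema validation; the inlined manifest mirrors the README example):

```python
# The updated AgentgatewayParameters example from this README, inlined.
manifest = """\
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayParameters
metadata:
  name: agentgateway-params
  namespace: default
spec:
  logging:
    format: json
  rawConfig:
    config:
      tracing:
        otlpEndpoint: http://jaeger-collector.observability.svc.cluster.local:4317
"""

# Tracing now lives under spec.rawConfig; a leftover ConfigMap-style
# `config.yaml: |-` block would indicate the old, removed approach.
assert "kind: AgentgatewayParameters" in manifest
assert "rawConfig:" in manifest
assert "config.yaml:" not in manifest
print("manifest uses rawConfig")  # then: kubectl apply -f <file>
```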
````diff
@@ -1162,24 +1156,7 @@ data:
             gen_ai.response.model: "llm.response_model"
             gen_ai.usage.completion_tokens: "llm.output_tokens"
             gen_ai.usage.prompt_tokens: "llm.input_tokens"
-```
-
-**Step 2: Configure AgentgatewayParameters**
-
-```yaml
-apiVersion: agentgateway.dev/v1alpha1
-kind: AgentgatewayParameters
-metadata:
-  name: agentgateway-params
-  namespace: default
-spec:
-  logging:
-    format: Json
-```
-
-**Step 3: Create Gateway with agentgateway class**
-
-```yaml
+---
 apiVersion: gateway.networking.k8s.io/v1
 kind: GatewayClass
 metadata:
````
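The `fields.add` entries in the tracing configuration above are CEL expressions that map agentgateway's LLM call metadata into OTEL `gen_ai.*` span attributes. As a rough illustration of the resulting attributes, with plain Python standing in for CEL and a made-up `llm` record (field names taken from the README's expressions, not from agentgateway internals):

```python
# Hypothetical LLM call metadata, mirroring the `llm.*` fields the CEL
# expressions reference (llm.provider, llm.request_model, ...).
llm = {
    "provider": "openai",
    "request_model": "gpt-4o-mini",
    "response_model": "gpt-4o-mini-2024-07-18",
    "input_tokens": 42,
    "output_tokens": 7,
}

# Each fields.add entry becomes one span attribute. A double-quoted CEL
# literal like '"chat"' is a constant; bare names read from the llm record.
span_attributes = {
    "gen_ai.operation.name": "chat",                        # '"chat"' literal
    "gen_ai.system": llm["provider"],                       # llm.provider
    "gen_ai.request.model": llm["request_model"],           # llm.request_model
    "gen_ai.response.model": llm["response_model"],         # llm.response_model
    "gen_ai.usage.completion_tokens": llm["output_tokens"], # llm.output_tokens
    "gen_ai.usage.prompt_tokens": llm["input_tokens"],      # llm.input_tokens
}

print(span_attributes["gen_ai.system"])  # openai
```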
````diff
@@ -1241,112 +1218,93 @@ config:
 
 #### Integration Examples
 
+These examples show the `rawConfig` configuration for different observability platforms.
+
 **Jaeger Integration:**
 ```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: jaeger-tracing-config
-data:
-  config.yaml: |-
-    config:
-      tracing:
-        otlpEndpoint: http://jaeger-collector.jaeger.svc.cluster.local:4317
-        otlpProtocol: grpc
-        randomSampling: true
-        fields:
-          add:
-            gen_ai.operation.name: '"chat"'
-            gen_ai.system: "llm.provider"
-            gen_ai.request.model: "llm.request_model"
-            gen_ai.response.model: "llm.response_model"
-            gen_ai.usage.completion_tokens: "llm.output_tokens"
-            gen_ai.usage.prompt_tokens: "llm.input_tokens"
+rawConfig:
+  config:
+    tracing:
+      otlpEndpoint: http://jaeger-collector.jaeger.svc.cluster.local:4317
+      otlpProtocol: grpc
+      randomSampling: true
+      fields:
+        add:
+          gen_ai.operation.name: '"chat"'
+          gen_ai.system: "llm.provider"
+          gen_ai.request.model: "llm.request_model"
+          gen_ai.response.model: "llm.response_model"
+          gen_ai.usage.completion_tokens: "llm.output_tokens"
+          gen_ai.usage.prompt_tokens: "llm.input_tokens"
 ```
 
 **Langfuse Integration:**
 ```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: langfuse-tracing-config
-data:
-  config.yaml: |-
-    config:
-      tracing:
-        otlpEndpoint: https://us.cloud.langfuse.com/api/public/otel
-        otlpProtocol: http
-        headers:
-          Authorization: "Basic <base64-encoded-credentials>"
-        randomSampling: true
-        fields:
-          add:
-            gen_ai.operation.name: '"chat"'
-            gen_ai.system: "llm.provider"
-            gen_ai.prompt: "llm.prompt"
-            gen_ai.completion: 'llm.completion.map(c, {"role":"assistant", "content": c})'
-            gen_ai.usage.completion_tokens: "llm.output_tokens"
-            gen_ai.usage.prompt_tokens: "llm.input_tokens"
-            gen_ai.request.model: "llm.request_model"
-            gen_ai.response.model: "llm.response_model"
-            gen_ai.request: "flatten(llm.params)"
+rawConfig:
+  config:
+    tracing:
+      otlpEndpoint: https://us.cloud.langfuse.com/api/public/otel
+      otlpProtocol: http
+      headers:
+        Authorization: "Basic <base64-encoded-credentials>"
+      randomSampling: true
+      fields:
+        add:
+          gen_ai.operation.name: '"chat"'
+          gen_ai.system: "llm.provider"
+          gen_ai.prompt: "llm.prompt"
+          gen_ai.completion: 'llm.completion.map(c, {"role":"assistant", "content": c})'
+          gen_ai.usage.completion_tokens: "llm.output_tokens"
+          gen_ai.usage.prompt_tokens: "llm.input_tokens"
+          gen_ai.request.model: "llm.request_model"
+          gen_ai.response.model: "llm.response_model"
+          gen_ai.request: "flatten(llm.params)"
 ```
 
 **Phoenix (Arize) Integration:**
 ```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: phoenix-tracing-config
-data:
-  config.yaml: |-
-    config:
-      tracing:
-        otlpEndpoint: http://localhost:4317
-        randomSampling: true
-        fields:
-          add:
-            span.name: '"openai.chat"'
-            openinference.span.kind: '"LLM"'
-            llm.system: "llm.provider"
-            llm.input_messages: 'flatten_recursive(llm.prompt.map(c, {"message": c}))'
-            llm.output_messages: 'flatten_recursive(llm.completion.map(c, {"role":"assistant", "content": c}))'
-            llm.token_count.completion: "llm.output_tokens"
-            llm.token_count.prompt: "llm.input_tokens"
-            llm.token_count.total: "llm.total_tokens"
+rawConfig:
+  config:
+    tracing:
+      otlpEndpoint: http://localhost:4317
+      randomSampling: true
+      fields:
+        add:
+          span.name: '"openai.chat"'
+          openinference.span.kind: '"LLM"'
+          llm.system: "llm.provider"
+          llm.input_messages: 'flatten_recursive(llm.prompt.map(c, {"message": c}))'
+          llm.output_messages: 'flatten_recursive(llm.completion.map(c, {"role":"assistant", "content": c}))'
+          llm.token_count.completion: "llm.output_tokens"
+          llm.token_count.prompt: "llm.input_tokens"
+          llm.token_count.total: "llm.total_tokens"
 ```
 
 **OpenLLMetry Integration:**
 ```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: openllmetry-tracing-config
-data:
-  config.yaml: |-
-    config:
-      tracing:
-        otlpEndpoint: http://localhost:4317
-        randomSampling: true
-        fields:
-          add:
-            span.name: '"openai.chat"'
-            gen_ai.operation.name: '"chat"'
-            gen_ai.system: "llm.provider"
-            gen_ai.prompt: "flatten_recursive(llm.prompt)"
-            gen_ai.completion: 'flatten_recursive(llm.completion.map(c, {"role":"assistant", "content": c}))'
-            gen_ai.usage.completion_tokens: "llm.output_tokens"
-            gen_ai.usage.prompt_tokens: "llm.input_tokens"
-            gen_ai.request.model: "llm.request_model"
-            gen_ai.response.model: "llm.response_model"
-            gen_ai.request: "flatten(llm.params)"
-            llm.is_streaming: "llm.streaming"
+rawConfig:
+  config:
+    tracing:
+      otlpEndpoint: http://localhost:4317
+      randomSampling: true
+      fields:
+        add:
+          span.name: '"openai.chat"'
+          gen_ai.operation.name: '"chat"'
+          gen_ai.system: "llm.provider"
+          gen_ai.prompt: "flatten_recursive(llm.prompt)"
+          gen_ai.completion: 'flatten_recursive(llm.completion.map(c, {"role":"assistant", "content": c}))'
+          gen_ai.usage.completion_tokens: "llm.output_tokens"
+          gen_ai.usage.prompt_tokens: "llm.input_tokens"
+          gen_ai.request.model: "llm.request_model"
+          gen_ai.response.model: "llm.response_model"
+          gen_ai.request: "flatten(llm.params)"
+          llm.is_streaming: "llm.streaming"
 ```
 
 #### Important Notes
 
-- **ConfigMap Updates**: The ConfigMap is only read during agentgateway pod startup. To apply ConfigMap changes, restart the agentgateway pod (TODO(chandler): DLC: update this README)
-- **Namespace**: The ConfigMap must be in the same namespace as the AgentgatewayParameters resource
+- **RawConfig Updates**: Changes to `rawConfig` in AgentgatewayParameters will trigger an agentgateway pod rollout automatically
 - **Validation**: Invalid CEL expressions in trace fields will be logged but won't prevent the gateway from starting
 - **Performance**: Be mindful of the number and complexity of custom trace fields, as they impact performance
 - **Sampling**: Use `randomSampling` to control trace volume in production environments
````
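The Langfuse example above uses a `<base64-encoded-credentials>` placeholder. That value is standard HTTP basic auth: the base64 of `user:password`, which for Langfuse is typically the project's public and secret key pair (an assumption to verify against the Langfuse docs). A sketch with obviously fake keys:

```python
import base64

# Fake Langfuse key pair for illustration only; substitute the real
# pk-lf-... / sk-lf-... values from your Langfuse project settings.
public_key = "pk-lf-example"
secret_key = "sk-lf-example"

# Basic auth value: base64("publicKey:secretKey"), no trailing newline.
credentials = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
print(f"Authorization: Basic {credentials}")
```

The resulting string is what replaces `<base64-encoded-credentials>` in the `headers.Authorization` field.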
````diff
@@ -1363,7 +1321,7 @@ metadata:
   namespace: default
 spec:
   logging:
-    format: Text
+    format: text
   rawConfig:
     config:
       tracing:
````
````diff
@@ -1449,7 +1407,7 @@ metadata:
   namespace: default
 spec:
   logging:
-    format: Text
+    format: text
   rawConfig:
     config:
       tracing:
````
