feat: google genai instrumentation #2515
base: main
Conversation
Packages affected by this change:

- @arizeai/openinference-core
- @arizeai/openinference-genai
- @arizeai/openinference-instrumentation-anthropic
- @arizeai/openinference-instrumentation-bedrock
- @arizeai/openinference-instrumentation-bedrock-agent-runtime
- @arizeai/openinference-instrumentation-beeai
- @arizeai/openinference-instrumentation-google-genai
- @arizeai/openinference-instrumentation-langchain
- @arizeai/openinference-instrumentation-langchain-v0
- @arizeai/openinference-instrumentation-mcp
- @arizeai/openinference-instrumentation-openai
- @arizeai/openinference-mastra
- @arizeai/openinference-semantic-conventions
- @arizeai/openinference-vercel
```ts
      yield chunk;
    }
  })();
});
```
**Bug: Streaming methods leave spans open on error**

The `generateContentStream` and `sendMessageStream` methods lack error handling for failures during stream consumption. The `for await` loop that iterates over stream chunks has no try/catch or `.catch()` handler, so if an error occurs mid-stream (network failure, API error), the span is never ended, causing a resource leak. Unlike `generateContent`, which has a `.catch()` handler, these streaming methods only use `.then()`, so any error thrown during chunk iteration leaves the span open indefinitely. One possible fix is sketched below.
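A minimal sketch of such a fix, wrapping the iteration in try/catch/finally so the span always ends; the names `wrapStreamWithSpan`, `stream`, and `span` are illustrative, not the PR's actual identifiers:

```ts
import { Span, SpanStatusCode } from "@opentelemetry/api";

// Illustrative wrapper (not the PR's code): ends the span whether the
// stream completes, throws mid-iteration, or is abandoned by the consumer.
async function* wrapStreamWithSpan<T>(
  stream: AsyncIterable<T>,
  span: Span,
): AsyncGenerator<T> {
  try {
    for await (const chunk of stream) {
      yield chunk;
    }
    span.setStatus({ code: SpanStatusCode.OK });
  } catch (error) {
    // Record mid-stream failures instead of leaking the span.
    span.recordException(error as Error);
    span.setStatus({ code: SpanStatusCode.ERROR });
    throw error;
  } finally {
    span.end(); // also runs if the consumer breaks out of the loop early
  }
}
```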
Additional Locations (1)
```ts
      yield chunk;
    }
  })();
});
```
**Bug: Streaming methods buffer all chunks before yielding**

The `generateContentStream` and `sendMessageStream` methods fully consume the original stream into a buffer before the promise resolves and any chunks are yielded back to the caller. A user awaiting these methods therefore blocks until every chunk has been received from the API; the returned generator then yields the buffered chunks instantly. This defeats the purpose of a streaming API, where users expect progressive output for real-time display: they see nothing until the entire response is complete, then get all chunks at once. A pass-through approach is sketched below.
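One way to avoid the buffering, sketched under the assumption that chunk handling can be done incrementally (`passthrough`, `onChunk`, and `onComplete` are hypothetical names): yield each chunk as it arrives and finalize the span only after the last one.

```ts
// Illustrative pass-through: the caller sees each chunk immediately,
// while only lightweight per-chunk state is kept for span attributes.
async function* passthrough<T>(
  stream: AsyncIterable<T>,
  onChunk: (chunk: T) => void,
  onComplete: () => void,
): AsyncGenerator<T> {
  for await (const chunk of stream) {
    onChunk(chunk); // e.g. accumulate output text for span attributes
    yield chunk;    // forward immediately; nothing is buffered
  }
  onComplete(); // set final attributes and end the span here
}
```

Combined with the try/finally pattern sketched above, the instrumented method can resolve as soon as the underlying stream is available, leaving both progressive yielding and span cleanup to the wrapper.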
Note
Introduces a new package that instruments the `@google/genai` SDK with OpenTelemetry/OpenInference spans, plus a helper API, examples, and tests.
- New package `js/packages/openinference-instrumentation-google-genai` instrumenting `@google/genai`.
- `createInstrumentedGoogleGenAI` to create and instrument instances (see the usage sketch after this summary).
- Instruments `ai.models.generateContent`, `generateContentStream`, and `generateImages`.
- Instruments `ai.chats.create` and the chat methods `sendMessage` and `sendMessageStream`.
- Instruments `ai.batches.createEmbeddings`.
- Supports Vertex AI (`vertexai`; provider `google`).
- Supports `traceConfig`, context propagation, and tracing suppression.
- Adds `README.md`, `CHANGELOG.md`, and example apps (`chat.ts`, `streaming.ts`, `chat-session.ts`, `tools.ts`, `embeddings.ts`, `instrumentation.ts`).

Written by Cursor Bugbot for commit a8697a4. This will update automatically on new commits.
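Based on the helper named in this summary, a hedged usage sketch: `createInstrumentedGoogleGenAI`'s exact options shape is an assumption here, and the model name is illustrative.

```ts
import { createInstrumentedGoogleGenAI } from "@arizeai/openinference-instrumentation-google-genai";

// Assumed to accept the same options as the @google/genai client constructor.
const ai = createInstrumentedGoogleGenAI({ apiKey: process.env.GOOGLE_API_KEY });

// Calls go through the instrumented client, so each request produces a span.
const response = await ai.models.generateContent({
  model: "gemini-2.0-flash", // illustrative model name
  contents: "Say hello in one word",
});
console.log(response.text);
```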