Added Vertex AI spans for request parameters #3192
base: main
Conversation
...i/opentelemetry-instrumentation-vertexai/src/opentelemetry/instrumentation/vertexai/patch.py
Also, this is what LangChain uses.
"gen_ai.request.model": "gemini-1.5-flash-002", | ||
"gen_ai.request.presence_penalty": -1.5, | ||
"gen_ai.request.stop_sequences": ("\n\n\n",), | ||
"gen_ai.request.temperature": 0.20000000298023224, |
These weird floating point differences are because the proto-plus library truncates to 32 bits. I don't think it's technically wrong, but it's kind of distracting.
Asked for a workaround here: googleapis/proto-plus-python#515
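For context: 0.2 has no exact binary representation, and the value above is exactly what you get when a Python double is rounded to a 32-bit float and widened back. A minimal illustration (not from the PR) using the standard struct module:

```python
import struct

def as_float32(x: float) -> float:
    """Round-trip a Python float through a 32-bit float."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(as_float32(0.2))  # 0.20000000298023224
```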
):
    model = _get_model_name(params.model)
    generation_config = params.generation_config
    attributes: dict[str, AttributeValue] = {
You are not using Attributes because of the Optional, right? We should probably drop it.
Yeah, that's right. What do you mean by drop it, though?
Drop Optional from the Attributes definition.
I think that would be a breaking change in our typing, though. Most of the API allows you to pass None anywhere attributes are accepted: https://github.com/open-telemetry/opentelemetry-python/blob/58f2d161d4b772ce6e62d9f40ba9de16445f4193/opentelemetry-api/src/opentelemetry/trace/__init__.py#L291
We could add another alias for this.
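For reference, the API's Attributes alias is Optional, which is why it can't annotate a dict that is being built up. A sketch of the current alias (paraphrased from opentelemetry.util.types) next to a possible non-Optional companion; the new alias name here is hypothetical:

```python
from typing import Mapping, Optional, Sequence, Union

# Paraphrased from opentelemetry.util.types:
AttributeValue = Union[
    str, bool, int, float,
    Sequence[str], Sequence[bool], Sequence[int], Sequence[float],
]
Attributes = Optional[Mapping[str, AttributeValue]]

# A possible additional, non-Optional alias (hypothetical name):
AttributesMap = Mapping[str, AttributeValue]
```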
Description
Part of #3041, follow-up to #3123.
Adds basic tracing instrumentation for Vertex AI. For now it supports only request span attributes; I will add response attributes and content logging in later PR(s). I also added VCR tests with proper sanitization, sketched below.
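As a rough illustration of the sanitization approach, here is a minimal vcrpy fixture (a sketch; the PR's actual fixture, cassette names, and filtered fields may differ):

```python
import pytest
import vcr

@pytest.fixture
def vcr_cassette():
    # Record HTTP interactions once, scrubbing credentials so they
    # never end up in the committed cassette files.
    my_vcr = vcr.VCR(
        cassette_library_dir="tests/cassettes",
        filter_headers=["authorization"],
        record_mode="once",
    )
    with my_vcr.use_cassette("generate_content.yaml") as cassette:
        yield cassette
```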
In the end, I didn't copy any code from OpenLLMetry; I mostly adapted from the openai-v2 instrumentation. The code is also fully typed 😃
This instrumentation wraps the underlying prediction service GAPIC clients. It works for both the v1 and v1beta1 APIs, which is statically checked.
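A condensed sketch of the wrapping approach (names and attribute extraction here are simplified assumptions, not the PR's actual code; see patch.py in this PR for the real implementation):

```python
from opentelemetry import trace
from wrapt import wrap_function_wrapper

tracer = trace.get_tracer(__name__)

def _traced_generate_content(wrapped, instance, args, kwargs):
    # Attribute extraction is simplified here; the real code also maps
    # generation_config fields onto gen_ai.request.* attributes.
    request = args[0] if args else kwargs.get("request")
    attributes = {"gen_ai.system": "vertex_ai"}
    if request is not None and getattr(request, "model", ""):
        attributes["gen_ai.request.model"] = request.model
    with tracer.start_as_current_span(
        "generate_content", attributes=attributes
    ):
        return wrapped(*args, **kwargs)

# Patch the v1 GAPIC client; the v1beta1 client is wrapped the same way.
wrap_function_wrapper(
    "google.cloud.aiplatform_v1.services.prediction_service.client",
    "PredictionServiceClient.generate_content",
    _traced_generate_content,
)
```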
How Has This Been Tested?
Via the VCR tests described above.
Does This PR Require a Core Repo Change?
CORE_REPO_SHA=f8df9051ca883b63a9047533e6d0b26f24e53b71 tox -re typecheck
Checklist:
See contributing.md for the style guide, changelog guidelines, and more.