
🚀 Feature: Langchain emit error/exception metrics #3440

@smishra-rbx

Description


Which component is this feature for?

Langchain Instrumentation

🔖 Feature description

Emit error/exception metrics in the Langchain instrumentation, similar to the OpenAI, Anthropic, and Bedrock instrumentations.

🎤 Why is this feature needed ?

The Langchain instrumentation does not appear to emit error/exception metrics as part of its _handle_error method.

Is there a plan to implement this? If so, would the recommendation be to emit separate metrics for each component type (tool, agent, task, workflow, retriever, etc.)? It would be nice to have a standardized way of emitting these kinds of metrics.

✌️ How do you aim to achieve this?

Metrics are already emitted by instrumentations like Anthropic, OpenAI, and Bedrock. However, Langchain needs a more elaborate implementation, since errors can occur in various components such as tool calls, agents, tasks, and workflows. It would be nice to differentiate the emitted metrics by component. A simple Counter like Meters.LLM_ANTHROPIC_COMPLETION_EXCEPTIONS might not suffice.
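One possible shape for this, sketched below with only the standard library: a single shared exception counter keyed by component type and error type, instead of one metric per component. In the real instrumentation this would presumably be an OpenTelemetry Counter with these keys as metric attributes; the names here (`handle_error`, `"component.type"`-style keys) are illustrative assumptions, not existing conventions.

```python
from collections import Counter

# Hypothetical sketch: one counter for all Langchain components, with the
# component type and exception class carried as the key (in OpenTelemetry,
# these would be metric attributes on a single Counter instrument).
exception_counts: Counter = Counter()

def handle_error(error: Exception, component_type: str) -> None:
    """Record an exception, tagged with the component that raised it."""
    # component_type would be e.g. "tool", "agent", "task", "workflow",
    # "retriever" - the differentiation asked for above.
    exception_counts[(component_type, type(error).__name__)] += 1

# Usage: a tool call fails twice, an agent step fails once.
handle_error(ValueError("bad input"), "tool")
handle_error(TimeoutError("slow model"), "agent")
handle_error(ValueError("bad input"), "tool")
```

The advantage over per-component counters is that backends can aggregate or filter on the component dimension without a new metric name per component.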

🔄️ Additional Information

No response

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

Yes, I am willing to submit a PR!
