
Enhanced Langfuse Integration: Trace Text Extraction & User Feedback#19

Closed
MrOrz wants to merge 2 commits into master from feat/langfuse-integration-enhancements-15597546857061770299

Conversation

@MrOrz
Member

@MrOrz MrOrz commented Mar 15, 2026

This submission enhances the existing Langfuse integration to improve observability and user engagement.

Key changes:

  1. Trace Readability (Backend): Added a custom TextExtractionSpanProcessor in the Python ADK. It intercepts OpenTelemetry spans and extracts plain text from the complex JSON structures typically sent as input/output, making the Langfuse session view much easier to browse.
  2. Trace ID Synchronization: The backend now captures the active trace_id and includes it in the custom_metadata of its responses. The frontend extracts this ID and stores it within each chat message, creating a reliable link between the UI and backend traces.
  3. User Feedback (Frontend): Integrated the Langfuse Web SDK and updated the AgentMessage component. Users can now upvote or downvote agent responses, and these ratings are sent to Langfuse as user_feedback scores (1 or -1) associated with the specific trace ID of that message.
  4. Dependency Management: Added the langfuse JS SDK to the frontend dependencies.

These changes provide a comprehensive feedback loop and significantly better debugging capabilities for the Cofacts AI system.
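
The text extraction described in change 1 can be sketched as a small helper that parses the span attribute's JSON and recursively collects `"text"` fields. This is a minimal sketch, not the actual TextExtractionSpanProcessor: the helper names and the assumed payload shape (`{"content": {"parts": [{"text": ...}]}}`, common in GenAI traces) are illustrative assumptions.

```python
import json


def extract_text_from_data(data):
    """Recursively collect plain-text fragments from a span payload.

    Sketch only: assumes text lives under "text" keys, as in
    {"content": {"parts": [{"text": "..."}]}}.
    """
    texts = []
    if isinstance(data, dict):
        for key, value in data.items():
            if key == "text" and isinstance(value, str):
                texts.append(value)
            else:
                texts.extend(extract_text_from_data(value))
    elif isinstance(data, list):
        for item in data:
            texts.extend(extract_text_from_data(item))
    return texts


def clean_attribute(raw):
    """Return extracted plain text for a JSON attribute value, else None."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # leave non-JSON attributes untouched
    texts = extract_text_from_data(data)
    return "\n".join(texts) if texts else None


payload = json.dumps({"content": {"parts": [{"text": "Hello"}, {"text": "world"}]}})
print(clean_attribute(payload))  # prints "Hello" then "world" on two lines
```

A processor's on_end hook could run `clean_attribute` over `input.value` and `output.value` and overwrite the attribute only when extraction succeeds, so malformed or non-JSON payloads pass through unchanged.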

Fixes #14


PR created automatically by Jules for task 15597546857061770299 started by @MrOrz

…r feedback

- Implement TextExtractionSpanProcessor in Python ADK for cleaner trace readability
- Synchronize Trace ID between backend and frontend via custom metadata
- Integrate Langfuse Web SDK in frontend
- Add upvote/downvote scoring associated with trace ID

Co-authored-by: MrOrz <108608+MrOrz@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@github-actions github-actions Bot temporarily deployed to pr-19 March 15, 2026 07:01 Destroyed
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the observability and user feedback mechanisms for the Cofacts AI system by deepening its integration with Langfuse. It introduces a backend component to simplify trace data by extracting human-readable text from complex JSON structures and ensures trace IDs are consistently propagated to the frontend. On the frontend, users can now directly rate agent responses, with this feedback being captured and associated with specific traces in Langfuse, providing a comprehensive loop for monitoring and improving AI performance.

Highlights

  • Backend Trace Readability: Implemented a TextExtractionSpanProcessor in the Python ADK to automatically extract plain text from complex JSON inputs/outputs within OpenTelemetry spans, enhancing readability in Langfuse session views.
  • Trace ID Propagation: Modified the backend to capture the active OpenTelemetry trace_id and include it in the custom_metadata of LLM responses, enabling the frontend to associate chat messages with specific traces.
  • Frontend User Feedback: Integrated the Langfuse Web SDK into the frontend, allowing users to provide upvote/downvote feedback on agent responses. This feedback is sent to Langfuse as numeric scores (1 or -1) linked to the corresponding trace IDs.
  • Dependency Update: Added the langfuse JavaScript SDK as a new dependency for the frontend.
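
The trace-ID propagation in the second highlight reduces to formatting OpenTelemetry's 128-bit integer trace id as the 32-character lowercase hex string Langfuse expects, then merging it into the response metadata. A minimal sketch under stated assumptions: the helper name and the `"trace_id"` metadata key are illustrative, and in the real code the integer would come from `opentelemetry.trace.get_current_span().get_span_context().trace_id` rather than being passed in.

```python
def attach_trace_id(custom_metadata, trace_id_int):
    """Merge a W3C-formatted trace id into response metadata.

    `trace_id_int` stands in for the 128-bit integer OpenTelemetry exposes
    on the span context; it is rendered as 32 lowercase hex characters,
    the form Langfuse uses to identify traces.
    """
    if custom_metadata is None:  # initialize only when not yet set
        custom_metadata = {}
    custom_metadata["trace_id"] = format(trace_id_int, "032x")
    return custom_metadata


meta = attach_trace_id(None, 0x0123456789ABCDEF0123456789ABCDEF)
print(meta["trace_id"])  # -> 0123456789abcdef0123456789abcdef
```

The frontend can then read this key from the event's custom metadata and store it on the chat message, giving the feedback buttons a stable id to score against.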


Changelog
  • adk/cofacts_ai/agent.py
    • Imported opentelemetry.trace.
    • Added logic to capture the current OpenTelemetry trace_id and store it in llm_response.custom_metadata.
  • adk/cofacts_ai/instrumentation.py
    • Added a new TextExtractionSpanProcessor class to extract text from JSON input.value and output.value attributes of OpenTelemetry spans.
    • Registered the TextExtractionSpanProcessor with the OpenTelemetry tracer provider during instrumentation setup.
  • package.json
    • Added langfuse (version ^3.38.6) to frontend dependencies.
  • pnpm-lock.yaml
    • Updated the lockfile to include langfuse and langfuse-core dependencies.
    • Added libc specifications to various @oxc-minify, @oxc-transform, @rollup, @tailwindcss/oxide, and @unrs/resolver-binding packages.
  • src/components/AgentMessage.tsx
    • Imported the langfuse client.
    • Implemented a handleFeedback function to manage user upvote/downvote state and send scores to Langfuse using the associated traceId.
    • Updated the upvote and downvote buttons to call the new handleFeedback function.
  • src/lib/adk.ts
    • Extended the ChatMessage interface to include an optional traceId property.
  • src/lib/chatCache.ts
    • Extracted trace_id from event.customMetadata.
    • Updated ChatMessage objects in the cache to store the traceId for tool calls and text responses.
  • src/lib/langfuse.ts
    • Created a new file to initialize and export the LangfuseWeb client using environment variables for public key and base URL.
Activity
  • No human activity has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request effectively enhances the Langfuse integration by adding trace text extraction on the backend and user feedback capabilities on the frontend. The backend now includes the trace_id in responses, which the frontend correctly uses to link user feedback to specific traces. The code is generally well-structured, but I've identified a couple of opportunities for improvement in the Python backend to enhance code clarity, efficiency, and maintainability.

Comment thread adk/cofacts_ai/agent.py
Comment on lines +53 to +54
if not llm_response.custom_metadata:
    llm_response.custom_metadata = {}

medium

The condition if not llm_response.custom_metadata: evaluates to True for both None and an empty dictionary. This means that if custom_metadata is already an empty dictionary, it will be needlessly reassigned. Using is None provides a more precise check for your intent to initialize the dictionary only when it's not yet set.

Suggested change
-if not llm_response.custom_metadata:
+if llm_response.custom_metadata is None:
     llm_response.custom_metadata = {}

Comment on lines +27 to +51
modified = False

# Extract input text
input_value = attributes.get("input.value")
if input_value and isinstance(input_value, str):
    try:
        data = json.loads(input_value)
        text = self._extract_text_from_data(data)
        if text:
            span.set_attribute("input.value", text)
            modified = True
    except json.JSONDecodeError:
        pass

# Extract output text
output_value = attributes.get("output.value")
if output_value and isinstance(output_value, str):
    try:
        data = json.loads(output_value)
        text = self._extract_text_from_data(data)
        if text:
            span.set_attribute("output.value", text)
            modified = True
    except json.JSONDecodeError:
        pass

medium

The logic for processing input.value and output.value is duplicated. Additionally, the modified variable is assigned but never used. You can improve this by refactoring the logic into a local helper function inside on_end. This will remove the code duplication and the unused variable, making the code cleaner and more maintainable.

Suggested change
-modified = False
-
-# Extract input text
-input_value = attributes.get("input.value")
-if input_value and isinstance(input_value, str):
-    try:
-        data = json.loads(input_value)
-        text = self._extract_text_from_data(data)
-        if text:
-            span.set_attribute("input.value", text)
-            modified = True
-    except json.JSONDecodeError:
-        pass
-
-# Extract output text
-output_value = attributes.get("output.value")
-if output_value and isinstance(output_value, str):
-    try:
-        data = json.loads(output_value)
-        text = self._extract_text_from_data(data)
-        if text:
-            span.set_attribute("output.value", text)
-            modified = True
-    except json.JSONDecodeError:
-        pass
+def _process_attribute(key: str) -> None:
+    value = attributes.get(key)
+    if not (value and isinstance(value, str)):
+        return
+    try:
+        data = json.loads(value)
+        text = self._extract_text_from_data(data)
+        if text:
+            span.set_attribute(key, text)
+    except json.JSONDecodeError:
+        pass
+
+_process_attribute("input.value")
+_process_attribute("output.value")

… user feedback

- Implement TextExtractionSpanProcessor in Python ADK for cleaner trace logs.
- Synchronize OpenTelemetry trace IDs from backend to frontend via custom metadata.
- Integrate Langfuse Web SDK in the frontend to capture user feedback (upvotes/downvotes).
- Update service.template.yaml with required frontend environment variables.

Co-authored-by: MrOrz <108608+MrOrz@users.noreply.github.com>
@github-actions github-actions Bot temporarily deployed to pr-19 March 15, 2026 09:36 Destroyed
@MrOrz
Member Author

MrOrz commented Apr 20, 2026

Let's isolate FE and BE changes. I will close this and restart a task that does only the feedback buttons.

It seems that feedback maps to individual messages (i.e., ADK events), and there can be multiple agent messages in one turn. We may need to attach feedback to a specific span rather than to whole traces.

@MrOrz MrOrz closed this Apr 21, 2026
