Conversation

@BeautyyuYanli BeautyyuYanli commented Jan 26, 2026

Summary

This is part of langgenius/dify-official-plugins#2492.

Adds end-to-end support for preserving the assistant opaque_body produced by plugin LLM calls.

  • Adds message-level and content-level opaque_body fields in the model runtime prompt/message entities (sketched after this list).
  • Parses opaque_body from plugin LLM invocations (including streaming tool-call parsing).
  • Captures assistant opaque_body during app run streaming, persists it into message_metadata as assistant_opaque_body, and restores it when rebuilding memory prompt history.
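
For concreteness, here is a minimal sketch of where the new fields sit, using simplified pydantic stand-ins for the real entities in api/core/model_runtime/entities/message_entities.py (the field types are assumptions for illustration, not the PR's actual annotations):

```python
# Minimal sketch, not the PR's actual code: simplified stand-ins showing
# where the message-level and content-level opaque_body fields live.
from pydantic import BaseModel


class PromptMessageContent(BaseModel):
    # Content-level opaque payload, passed through to/from the plugin untouched.
    opaque_body: dict | None = None  # type is an assumption for illustration


class AssistantPromptMessage(BaseModel):
    content: str | None = None
    # Message-level opaque payload produced by a plugin LLM call.
    opaque_body: dict | None = None
```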

Testing

  • Added unit tests: api/tests/unit_tests/core/model_runtime/__base/test_llm_invoke_opaque_body.py

Copilot AI review requested due to automatic review settings January 26, 2026 17:15
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Jan 26, 2026
@gemini-code-assist gemini-code-assist bot commented
Summary of Changes

Hello @BeautyyuYanli, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the system's ability to handle and retain custom, unstructured data associated with LLM responses from plugins. By integrating opaque_body fields into core message entities and ensuring their persistence and retrieval across the application lifecycle, the change allows more flexible interactions with LLMs that return additional metadata.

Highlights

  • Enhanced Opaque Body Support: Introduced end-to-end support for preserving opaque_body produced by plugin LLM calls, ensuring this custom data is maintained throughout the system.
  • Model Runtime Integration: Added opaque_body fields at both message-level and content-level within model runtime prompt and message entities to accommodate this new data.
  • Plugin LLM Invocation Parsing: Implemented logic to parse opaque_body from plugin LLM invocations, including robust handling for streaming tool-call parsing.
  • Persistence and Memory Reconstruction: Ensured that the assistant's opaque_body is captured during app run streaming, persisted into message_metadata as assistant_opaque_body, and correctly restored when rebuilding memory prompt history (see the sketch after this list).
  • Comprehensive Testing: New unit tests have been added to validate the correct preservation of opaque_body in both non-streaming and streaming LLM invocation scenarios.
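
As a rough illustration of that round-trip, a minimal sketch assuming message_metadata is stored as a JSON string on the message row (the helper names below are hypothetical, not the PR's actual code):

```python
import json

# Metadata key named in the PR description.
ASSISTANT_OPAQUE_BODY_KEY = "assistant_opaque_body"


def persist_opaque_body(message_metadata: str | None, opaque_body) -> str:
    """Merge the assistant opaque_body into the serialized message metadata."""
    metadata = json.loads(message_metadata) if message_metadata else {}
    if opaque_body is not None:
        metadata[ASSISTANT_OPAQUE_BODY_KEY] = opaque_body
    return json.dumps(metadata)


def restore_opaque_body(message_metadata: str | None):
    """Read the opaque_body back when rebuilding memory prompt history."""
    metadata = json.loads(message_metadata) if message_metadata else {}
    return metadata.get(ASSISTANT_OPAQUE_BODY_KEY)


# Round-trip check:
stored = persist_opaque_body(None, {"provider_state": "abc"})
assert restore_opaque_body(stored) == {"provider_state": "abc"}
```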

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The pull request introduces end-to-end support for preserving the assistant's opaque_body from plugin LLM calls. This includes adding opaque_body fields to message entities, parsing it during LLM invocations (both streaming and non-streaming), persisting it in message metadata, and restoring it when rebuilding memory prompt history. The changes are well-implemented and supported by new unit tests.

Copilot AI left a comment


Pull request overview

Adds runtime + app-layer plumbing to preserve and persist an assistant opaque_body returned from plugin-based LLM calls, including streaming flows, so it can be restored into prompt history later.

Changes:

  • Extend model-runtime message entities to carry opaque_body at both message and content levels.
  • Preserve assistant opaque_body when aggregating plugin LLM chunks (streaming and non-streaming) and expose it in after-invoke callbacks (see the sketch after this list).
  • Capture assistant opaque_body during app streaming, persist it into message_metadata as assistant_opaque_body, and restore it in TokenBufferMemory.
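
A hedged sketch of the streaming half: chunks are assumed to follow the model runtime's delta shape (chunk.delta.message), and the merge rule shown here, last non-empty value wins, is an assumption rather than something confirmed by the diff:

```python
# Illustrative only: fold streamed chunks into the final text plus the
# assistant opaque_body. Each chunk is assumed to expose
# chunk.delta.message with .content and the new .opaque_body field.
def aggregate_chunks(chunks):
    full_text = ""
    opaque_body = None
    for chunk in chunks:
        message = chunk.delta.message
        full_text += message.content or ""
        if message.opaque_body is not None:
            opaque_body = message.opaque_body  # keep the latest value seen
    return full_text, opaque_body
```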

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 7 comments.

Summary per file:

  • api/tests/unit_tests/core/model_runtime/__base/test_llm_invoke_opaque_body.py: adds unit coverage ensuring opaque_body survives plugin LLM invoke (stream and non-stream).
  • api/core/model_runtime/model_providers/__base/large_language_model.py: aggregates opaque_body (and tool calls) across chunks into the final LLMResult / after-invoke callback result.
  • api/core/model_runtime/entities/message_entities.py: introduces opaque_body fields on prompt content and assistant messages.
  • api/core/memory/token_buffer_memory.py: restores the persisted assistant_opaque_body from message_metadata into AssistantPromptMessage.
  • api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py: persists the assistant opaque_body into message_metadata under assistant_opaque_body.
  • api/core/app/apps/base_app_runner.py: captures opaque_body during streaming and includes it in the emitted final LLMResult.
  • agent-notes/.gitkeep: no functional change (repo housekeeping).


```python
def __init__(self) -> None:
    self.after_result = None

def on_before_invoke(self, **kwargs: Any) -> None:  # noqa: ANN401
```

Copilot AI Jan 26, 2026


This method requires 1 positional argument, whereas overridden Callback.on_before_invoke requires at least 6.

Suggested change:

```diff
-def on_before_invoke(self, **kwargs: Any) -> None:  # noqa: ANN401
+def on_before_invoke(  # noqa: ANN401
+    self,
+    arg1: Any,
+    arg2: Any,
+    arg3: Any,
+    arg4: Any,
+    arg5: Any,
+    arg6: Any,
+    *args: Any,
+    **kwargs: Any,
+) -> None:
```

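To make the point concrete, a standalone illustration of why a **kwargs-only override fails when the base class invokes the hook positionally; the base signature below is a simplified stand-in, not Dify's actual Callback. The same reasoning applies to the on_new_chunk, on_after_invoke, and on_invoke_error comments that follow.

```python
from typing import Any


class Callback:
    # Simplified stand-in: the real base class passes several positional
    # arguments (model, credentials, prompt_messages, ...) to this hook.
    def on_before_invoke(self, model: str, credentials: dict, prompt_messages: list) -> None:
        pass


class KwargsOnlyCallback(Callback):
    def on_before_invoke(self, **kwargs: Any) -> None:  # cannot bind positional args
        pass


cb = KwargsOnlyCallback()
# TypeError: on_before_invoke() takes 1 positional argument but 4 were given
cb.on_before_invoke("gpt-4", {}, [])
```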
```python
def on_before_invoke(self, **kwargs: Any) -> None:  # noqa: ANN401
    return None

def on_new_chunk(self, **kwargs: Any) -> None:  # noqa: ANN401
```

Copilot AI Jan 26, 2026


This method requires 1 positional argument, whereas overridden Callback.on_new_chunk requires at least 7.

Suggested change:

```diff
-def on_new_chunk(self, **kwargs: Any) -> None:  # noqa: ANN401
+def on_new_chunk(self, *args: Any, **kwargs: Any) -> None:  # noqa: ANN401
```

```python
def on_new_chunk(self, **kwargs: Any) -> None:  # noqa: ANN401
    return None

def on_after_invoke(self, result: LLMResult, **kwargs: Any) -> None:  # noqa: ANN401
```

Copilot AI Jan 26, 2026


This method requires 2 positional arguments, whereas overridden Callback.on_after_invoke requires at least 7.

Suggested change:

```diff
-def on_after_invoke(self, result: LLMResult, **kwargs: Any) -> None:  # noqa: ANN401
+def on_after_invoke(
+    self,
+    tenant_id: str | None = None,  # type: ignore[unused-argument]
+    model_provider_instance_id: str | None = None,  # type: ignore[unused-argument]
+    model_instance_id: str | None = None,  # type: ignore[unused-argument]
+    model: str | None = None,  # type: ignore[unused-argument]
+    credentials: dict[str, Any] | None = None,  # type: ignore[unused-argument]
+    prompt_messages: list[PromptMessage] | None = None,  # type: ignore[unused-argument]
+    result: LLMResult | None = None,
+    **kwargs: Any,  # noqa: ANN401
+) -> None:
```

```python
def on_after_invoke(self, result: LLMResult, **kwargs: Any) -> None:  # noqa: ANN401
    self.after_result = result

def on_invoke_error(self, **kwargs: Any) -> None:  # noqa: ANN401
```

Copilot AI Jan 26, 2026


This method requires 1 positional argument, whereas overridden Callback.on_invoke_error requires at least 7.

Suggested change:

```diff
-def on_invoke_error(self, **kwargs: Any) -> None:  # noqa: ANN401
+def on_invoke_error(self, *args: Any, **kwargs: Any) -> None:  # noqa: ANN401
```

@BeautyyuYanli BeautyyuYanli marked this pull request as draft January 26, 2026 17:27

Labels

  • 💪 enhancement (New feature or request)
  • size:L (This PR changes 100-499 lines, ignoring generated files)
