
Conversation

@centdix centdix commented Dec 16, 2025

Enregistrement.de.l.ecran.2025-12-16.a.21.35.03.mov (attached screen recording)

@centdix centdix changed the title feat(aiagent): allow setting messages history for ai agent feat(aiagent): allow giving messages history Dec 16, 2025

cloudflare-workers-and-pages bot commented Dec 16, 2025

Deploying windmill with Cloudflare Pages

Latest commit: 4b338f6
Status: ✅  Deploy successful!
Preview URL: https://9969e789.windmill.pages.dev
Branch Preview URL: https://fg-arraymessages.windmill.pages.dev


@centdix centdix marked this pull request as ready for review December 16, 2025 20:42

claude bot commented Dec 16, 2025

Claude finished @centdix's task in 3m 38s


Code Review Completed ✅

I've completed a comprehensive review of PR #7395 which adds support for providing message history to AI agents. This is a valuable feature that allows users to bypass the built-in memory system and provide custom conversation histories.


Summary

This PR adds a new messages parameter to the AI agent that allows users to provide a custom array of conversation messages. When provided, this bypasses the memory system and uses the explicit messages as the conversation history.

Key Changes:

  • Backend: Added messages: Option<Vec<OpenAIMessage>> field to AIAgentArgs
  • Backend: Made user_message optional (was previously required)
  • Backend: Logic to use explicit messages when provided, bypassing memory read/write
  • Frontend: Added messages field to AI_AGENT_SCHEMA with proper OpenAI message structure
  • Frontend: Updated field descriptions and made user_message optional in schema
  • Frontend: Minor improvements to nullable handling and JSON parsing

Code Quality Assessment

✅ Strengths

  1. Well-structured feature: The implementation cleanly separates the explicit messages path from the memory-based path
  2. Backward compatible: Existing AI agents continue to work as user_message remains functional
  3. Clear validation: Proper validation ensures at least one of messages or user_message is provided
  4. Good documentation: Field descriptions clearly explain when memory is bypassed
  5. Type safety: Proper use of Option types in Rust reflects the optional nature of fields

⚠️ Issues & Recommendations

I've left 5 inline comments on specific code sections with detailed recommendations:

  1. Performance optimization (ai_executor.rs:425) - Unnecessary clone of messages array
  2. Edge case handling (ai_executor.rs:413) - Validation logic for image output type
  3. Documentation clarity (flowInfers.ts:47) - Clarify interaction with system_prompt
  4. Input validation (types.rs:114) - Consider max message array size limit
  5. Code improvement (JsonEditor.svelte:43) - Positive feedback on cleaner error handling

Critical Issue (Not in diff, but important)

The tool_calls and tool_call_id fields in OpenAIMessage (types.rs:46-48) are marked as nullable: true in the frontend schema but use Option<T> without #[serde(default)] in Rust. This could cause deserialization failures when workers are behind the API version.

Recommended fix:

#[serde(skip_serializing_if = "Option::is_none", default)]
pub tool_calls: Option<Vec<OpenAIToolCall>>,
#[serde(skip_serializing_if = "Option::is_none", default)]
pub tool_call_id: Option<String>,

Security Considerations

Good: No SQL injection risks - uses parameterized queries
Good: No XSS risks - data is properly serialized/deserialized
Good: Input validation ensures required fields are present
⚠️ Minor: Large message arrays could consume significant memory - consider adding size limits (see inline comment)


Performance Considerations

Good: No N+1 query patterns
Good: Efficient use of async/await
⚠️ Memory: The .clone() on line 425 could be avoided (see inline comment)
⚠️ Scalability: Large message arrays are cloned and extended multiple times


Testing Instructions

To test this feature through the Windmill app interface:

  1. Navigate to the Flow Builder and create or edit a flow with an AI Agent step.
  2. In the AI Agent configuration, you should now see a new "messages" field (appearing after "messages_context_length" in the form) that accepts an array of message objects. Each message object requires a "role" field (user/assistant/system) and has optional "content", "tool_calls", and "tool_call_id" fields.
  3. Provide messages through this array field and confirm the agent uses them as the conversation history instead of loading from memory. Verify the bypass behavior by observing that memory is not updated after execution.
  4. Verify that user_message is now optional: you can provide either messages alone, user_message alone, or both together (where user_message gets appended as the final message).
  5. Confirm that output_type defaults to "text" and that both user_message and the new messages field are only visible when output_type is set to "text".
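As a purely hypothetical illustration of the shape such a messages array might take, based on the role/content description above (the authoritative definition is the AI_AGENT_SCHEMA in flowInfers.ts):

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Summarize the last deployment." },
  { "role": "assistant", "content": "The last deployment completed successfully." },
  { "role": "user", "content": "Were there any warnings?" }
]
```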


Overall Assessment

Rating: 7/10 - Good feature implementation with some opportunities for improvement

Strengths:

  • Clean separation of concerns
  • Backward compatible
  • Well-documented in code
  • Type-safe implementation

Areas for improvement:

  • Critical: Add #[serde(default)] to nullable fields in OpenAIMessage
  • Important: Memory optimization opportunity
  • Minor: Error handling for edge cases

This is a solid feature that will be useful for users who want more control over conversation history. The implementation is clean and maintainable. Addressing the critical serde issue is recommended before merging to prevent potential deserialization failures in distributed deployments.

if let Some(ref explicit_messages) = args.messages {
    if !explicit_messages.is_empty() {
        // Use explicitly provided messages (bypass memory)
        messages.extend(explicit_messages.clone());

Performance: Unnecessary clone

This .clone() allocates memory for the entire messages array. Since args.messages is never used after this point, consider taking ownership instead.

If you can make args mutable, you could use:

if let Some(explicit_messages) = args.messages.take() {
    if !explicit_messages.is_empty() {
        messages.extend(explicit_messages);
    }
}

This avoids the allocation, which could be significant for large message histories.
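A self-contained sketch of the ownership pattern suggested above, with `String` standing in for `OpenAIMessage` and a free function standing in for the surrounding executor code (both simplifications are mine, not the PR's actual code):

```rust
// Simplified stand-in for the executor logic: `explicit` plays the role of
// args.messages, `messages` the accumulated conversation.
fn take_messages(explicit: &mut Option<Vec<String>>, messages: &mut Vec<String>) {
    // take() moves the Vec out of the Option (leaving None behind), so the
    // elements are moved into `messages` instead of being cloned.
    if let Some(explicit_messages) = explicit.take() {
        if !explicit_messages.is_empty() {
            messages.extend(explicit_messages);
        }
    }
}

fn main() {
    let mut args_messages = Some(vec!["hello".to_string(), "world".to_string()]);
    let mut messages: Vec<String> = Vec::new();
    take_messages(&mut args_messages, &mut messages);
    assert_eq!(messages, vec!["hello", "world"]);
    assert!(args_messages.is_none()); // contents were moved, not cloned
    println!("{:?}", messages);
}
```

The same shape applies to `Vec<OpenAIMessage>`: the only requirement is mutable access to `args`.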


// Validate: at least one of messages or user_message must be provided
if !use_explicit_messages && !has_user_message {
    return Err(Error::internal_err(

Edge case: Consider image output type

When output_type is Image, the user might not provide user_message (since the description states tools and messages are ignored for image generation). However, this validation would still trigger an error.

Consider either:

  1. Exempting image output from this check
  2. Making the error message more specific
  3. Ensuring the frontend prevents this invalid state

messages: {
    type: 'array',
    description:
        'An array of conversation messages to use as history. When provided, these messages replace any messages that would be loaded from memory, and memory will not be updated at the end of the run. If user_message is also provided, it will be appended as the final message.',

Documentation clarity

Consider clarifying the interaction with system_prompt:

description:
  'An array of conversation messages to use as history. When provided, these messages replace any messages that would be loaded from memory, and memory will not be updated at the end of the run. Note: system_prompt, if provided, will be prepended as the first message. If user_message is also provided, it will be appended as the final message.',

This makes the complete behavior clear to users.

const dispatchIfMounted = createDispatcherIfMounted(dispatch)
function parseJson() {
    try {

Good fix!

This change properly handles the empty string case by setting value = undefined and clearing the error, while the else branch handles the parsing. This is cleaner than the previous early return pattern.


centdix commented Dec 16, 2025

/ai change the input schema of the ai agent, instead of adding messages array, we want to have a history field, which is a oneOf.
It is either 'auto', with a single "number" value, or "manual" with an array of messages.
We can then also remove the messages_context_length input.
We should make sure this change is backward compatible with the schema that only had messages_context_length (no need to support messages, since it was only added in this PR).
That backward compatibility should be handled on both the backend and the frontend.
Add a comprehensive description to the history field.
Do one commit for the backend, one commit for the frontend.
Do one additional commit to correctly render a oneOf of that type in InputSchemaTransform, since we do not handle that correctly.


claude bot commented Dec 16, 2025

Claude encountered an error

Command failed: git checkout fg/arraymessages --

I'll analyze this and get back to you.


centdix commented Dec 16, 2025

/ai change the input schema of the ai agent, instead of adding messages array, we want to have a history field, which is a oneOf.
It is either 'auto', with a single "number" value, or "manual" with an array of messages.
We can then also remove the messages_context_length input.
We should make sure this change is backward compatible with the schema that only had messages_context_length (no need to support messages, since it was only added in this PR).
That backward compatibility should be handled on both the backend and the frontend.
Add a comprehensive description to the history field.
Do one commit for the backend, one commit for the frontend.
Do one additional commit to correctly render a oneOf of that type in InputSchemaTransform, since we do not handle that correctly.


claude bot commented Dec 16, 2025

Claude finished @centdix's task in 11m 37s


AI Agent Schema Changes Complete ✅

Successfully replaced the messages array and messages_context_length fields with a unified history field using a oneOf discriminator pattern.

Summary of Changes

Backend (Rust):

  • Added History enum with two variants:
    • Auto { context_length: usize } - automatically manages conversation history with memory
    • Manual { messages: Vec<OpenAIMessage> } - bypasses memory and uses explicit messages
  • Implemented backward compatibility: old messages_context_length parameter automatically converts to Auto mode
  • Updated message loading and persistence logic to use the new history field
  • Fixed borrow checker issues with proper referencing

Frontend (TypeScript/Svelte):

  • Updated AI_AGENT_SCHEMA in flowInfers.ts with new history field schema
  • Added comprehensive descriptions explaining the behavior of each mode
  • Updated field order to replace messages_context_length and messages with history
  • Enhanced ArgInput.svelte to support mode as a discriminator (in addition to existing kind and label)
  • Added proper type casting to resolve TypeScript strict type checking

Validation:

  • ✅ Backend: cargo check --features $(./all_features_oss.sh) passed
  • ✅ Frontend: npm run check passed (0 errors, 2 pre-existing warnings unrelated to changes)

Commits

  1. feat(backend): Replace messages/messages_context_length with history oneOf field (0a49373)
  2. feat(frontend): Replace messages/messages_context_length with history oneOf field (714b54e)
  3. fix(frontend): Add support for 'mode' discriminator in oneOf rendering (4b338f6)

Backward Compatibility

The changes maintain full backward compatibility:

  • Existing flows using messages_context_length will automatically work with the new code
  • The backend converts old-style parameters to the new History::Auto format transparently
  • No migration needed for existing data
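The conversion described above can be sketched roughly as follows. This is a hypothetical simplification, not the PR's actual code: the `History` variant names are taken from the summary, `OpenAIMessage` is stood in by `String`, and the serde wiring (including the `mode` discriminator) is omitted:

```rust
// Simplified sketch of the History variants described in the summary.
#[derive(Debug, PartialEq)]
enum History {
    Auto { context_length: usize },   // memory-managed conversation history
    Manual { messages: Vec<String> }, // explicit messages, memory bypassed
}

// Old-style `messages_context_length` values convert transparently to Auto
// mode, so flows saved before this change keep working without migration.
fn from_legacy(messages_context_length: usize) -> History {
    History::Auto { context_length: messages_context_length }
}

fn main() {
    assert_eq!(from_legacy(20), History::Auto { context_length: 20 });
    println!("legacy context length mapped to Auto mode");
}
```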


github-actions bot and others added 3 commits December 16, 2025 23:06
…oneOf field

Replace the separate 'messages' array and 'messages_context_length' fields with
a single 'history' field that uses a oneOf discriminator.

The 'history' field can be either:
- 'auto' mode: automatically manages conversation history with memory, takes a
  'context_length' number parameter
- 'manual' mode: bypasses memory and uses explicitly provided messages array

Backward compatibility is maintained: if 'messages_context_length' is provided
in the old schema format, it is automatically converted to 'auto' mode with the
specified context_length.

Co-authored-by: centdix <[email protected]>
… oneOf field

Replace the separate 'messages' array and 'messages_context_length' fields with
a single 'history' field in the AI agent schema.

The 'history' field uses a oneOf discriminator with two modes:
- 'auto': { mode: 'auto', context_length: number } - automatically manages
  conversation history with memory
- 'manual': { mode: 'manual', messages: array } - bypasses memory and uses
  explicitly provided messages

The schema includes comprehensive descriptions for each mode explaining the
behavior. The order array has been updated to include 'history' in place of
the old 'messages_context_length' and 'messages' fields.

Co-authored-by: centdix <[email protected]>
Update ArgInput.svelte to properly handle oneOf schemas that use 'mode' as the
discriminator field, in addition to the existing 'kind' and 'label' support.

Changes:
- Updated tagKey derivation to check for 'mode' first, then 'kind', then 'label'
- Added 'mode' to the onOneOfChange function to track mode changes
- Added 'mode' to the list of keys excluded from enum validation
- Added 'mode' to hiddenArgs to prevent it from being shown in the form
- Added title fields to the history oneOf variants in flowInfers.ts

This allows the AI agent's history field to properly render with toggle buttons
for 'auto' and 'manual' modes.

Co-authored-by: centdix <[email protected]>