```diff
@@ -38,17 +38,35 @@ public class ContextualPromptExecutor(
         @OptIn(ExperimentalUuidApi::class)
         val eventId = Uuid.random().toString()

-        logger.debug { "Executing LLM call (event id: $eventId, prompt: $prompt, tools: [${tools.joinToString { it.name }}])" }
-        context.pipeline.onLLMCallStarting(eventId, context.executionInfo, context.runId, prompt, model, tools, context)
+        logger.debug { "Transforming prompt (event id: $eventId, prompt: $prompt, tools: [${tools.joinToString { it.name }}])" }
+        val transformedPrompt = context.pipeline.onLLMPromptTransforming(
+            eventId,
+            context.executionInfo,
+            context.runId,
+            prompt,
+            model,
+            context
+        )
+
+        logger.debug { "Executing LLM call (event id: $eventId, prompt: $transformedPrompt, tools: [${tools.joinToString { it.name }}])" }
+        context.pipeline.onLLMCallStarting(
+            eventId,
+            context.executionInfo,
+            context.runId,
+            transformedPrompt,
+            model,
+            tools,
+            context
+        )

-        val responses = executor.execute(prompt, model, tools)
+        val responses = executor.execute(transformedPrompt, model, tools)

         logger.trace { "Finished LLM call (event id: $eventId) with responses: [${responses.joinToString { "${it.role}: ${it.content}" }}]" }
         context.pipeline.onLLMCallCompleted(
             eventId,
             context.executionInfo,
             context.runId,
-            prompt,
+            transformedPrompt,
             model,
             tools,
             responses,
```

Review thread on the onLLMPromptTransforming call:

Contributor Author:
TODO: other methods here like "executeStreaming" or "moderate" should also call "onLLMPromptTransforming".

Contributor:
@serge-p7v, could you please clarify the general question here? With this update we get two events with the same set of parameters, one right after the other. Unless I am missing something, this looks a bit redundant: you can transform the prompt inside the onLLMCallStarting handler, and prompt is a variable in the AIAgentLLMWriteSession that can be updated. Would that work as well? Why do we need a separate interceptor here?

Contributor Author:
Currently it is not possible to change the prompt inside onLLMCallStarting (the prompt is immutable); the intercepting approach would require modifying onLLMCallStarting. There are two likely issues with that:

- Prompt transformations through AIAgentLLMWriteSession would change the conversation state. In this PR they apply only to the current LLM call (they are "transient").
- onLLMCallStarting would become a possibly mutating handler, making it harder to reason about what each handler in a chain can do. In this PR, onLLMPromptTransforming does only prompt transformation, so when multiple onLLMCallStarting handlers are chained with multiple onLLMPromptTransforming transformers, it is clear where each transformation happens.

About the same set of parameters: good idea, thank you; most likely the transformer needs only the prompt and the context!

Contributor Author:
@sdubov has suggested a great idea: extracting memory-specific handlers into a separate feature. Converting the PR to a draft.
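To make the "transient" semantics from the thread concrete, here is a minimal feature-side sketch built on the interceptLLMPromptTransforming API added later in this diff. The VectorStore type and its retrieve method are hypothetical stand-ins, import declarations are omitted (package paths as in the PR), and the Message.System usage mirrors the KDoc example in this PR:

```kotlin
// Sketch only: VectorStore and retrieve() are hypothetical stand-ins;
// imports are omitted, package paths as in the surrounding PR.
class VectorStore {
    fun retrieve(query: String): List<String> = emptyList() // placeholder lookup
}

fun installRagTransformer(
    pipeline: AIAgentPipelineAPI,
    feature: AIAgentFeature<*, *>,
    store: VectorStore
) {
    pipeline.interceptLLMPromptTransforming(feature) { prompt ->
        // The returned prompt is used for the current LLM call only
        // ("transient"); the session's conversation state stays untouched.
        val query = prompt.messages.lastOrNull()?.content
        val docs = if (query != null) store.retrieve(query) else emptyList()
        if (docs.isEmpty()) prompt
        else prompt.copy(
            messages = listOf(
                Message.System("Context: ${docs.joinToString()}"), // as in the PR's KDoc example
                *prompt.messages.toTypedArray()
            )
        )
    }
}
```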
```diff
@@ -92,7 +92,14 @@ public sealed interface AgentLifecycleEventType {
     //region LLM

     /**
-     * Represents an event triggered when an error occurs during a language model call.
+     * Represents an event triggered when a prompt is being transformed.
+     * This allows features to modify the prompt before [LLMCallStarting] is triggered
+     * and before the prompt is sent to the language model.
+     */
+    public object LLMPromptTransforming : AgentLifecycleEventType
+
+    /**
+     * Represents an event triggered before a call is made to the language model.
      */
     public object LLMCallStarting : AgentLifecycleEventType
```
```diff
@@ -15,6 +15,32 @@ import ai.koog.prompt.message.Message
  */
 public interface LLMCallEventContext : AgentLifecycleEventContext

+/**
+ * Represents the context for transforming a prompt before it is sent to the language model.
+ * This context is used by features that need to modify the prompt, such as adding context from
+ * a database, implementing RAG (Retrieval-Augmented Generation), or applying prompt templates.
+ *
+ * Prompt transformation occurs before [LLMCallStartingContext] is triggered, allowing
+ * modifications to be applied prior to the LLM call event handlers.
+ *
+ * @property executionInfo The execution information containing parentId and current execution path.
+ * @property runId The unique identifier for this LLM call session.
+ * @property prompt The prompt that will be transformed. This is the current state of the prompt
+ * after any previous transformations.
+ * @property model The language model instance that will be used for the call.
+ * @property context The AI agent context providing access to agent state and configuration.
+ */
+public data class LLMPromptTransformingContext(
+    override val eventId: String,
+    override val executionInfo: AgentExecutionInfo,
+    val runId: String,
+    val prompt: Prompt,
+    val model: LLModel,
+    val context: AIAgentContext
+) : LLMCallEventContext {
+    override val eventType: AgentLifecycleEventType = AgentLifecycleEventType.LLMPromptTransforming
+}
+
 /**
  * Represents the context for handling a before LLM call event.
  *
```
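As a usage sketch for this context type: a transformer can branch on the fields above (for example model or executionInfo) before deciding how to rewrite the prompt. The preamble text below is illustrative only, and the Message.System usage again mirrors the KDoc example later in this PR:

```kotlin
// Sketch: decide on a transformation using the LLMPromptTransformingContext fields.
val systemPreambleTransformer = LLMPromptTransformingHandler { ctx, prompt ->
    // ctx.model, ctx.runId and ctx.executionInfo are all available for decisions here.
    val alreadyHasSystem = prompt.messages.any { it is Message.System }
    if (alreadyHasSystem) {
        prompt // leave prompts that already carry a system message untouched
    } else {
        prompt.copy(
            messages = listOf(
                Message.System("You are a concise assistant."), // illustrative preamble
                *prompt.messages.toTypedArray()
            )
        )
    }
}
```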
```diff
@@ -1,11 +1,31 @@
 package ai.koog.agents.core.feature.handler.llm

+import ai.koog.prompt.dsl.Prompt
+
 /**
  * A handler responsible for managing the execution flow of a Large Language Model (LLM) call.
- * It allows customization of logic to be executed before and after the LLM is called.
+ * It allows customization of logic to be executed before and after the LLM is called,
+ * as well as transformation of the prompt before it is sent to the model.
  */
 public class LLMCallEventHandler {

+    /**
+     * A transformer that can modify the prompt before it is sent to the language model.
+     *
+     * This transformer enables features to implement patterns like:
+     * - RAG (Retrieval-Augmented Generation): Query a database and add relevant context to the prompt
+     * - Prompt templates: Apply standardized formatting or instructions
+     * - Context injection: Add user-specific or session-specific information
+     * - Content filtering: Modify or sanitize the prompt before sending
+     *
+     * Multiple transformers can be chained together.
+     * Each transformer receives the prompt from the previous one and returns a modified version.
+     *
+     * By default, the transformer returns the prompt unchanged.
+     */
+    public var llmPromptTransformingHandler: LLMPromptTransformingHandler =
+        LLMPromptTransformingHandler { _, prompt -> prompt }
+
     /**
      * A handler that is invoked before making a call to the Language Learning Model (LLM).
      *
@@ -29,6 +49,20 @@ public class LLMCallEventHandler {
      */
     public var llmCallCompletedHandler: LLMCallCompletedHandler =
         LLMCallCompletedHandler { _ -> }
+
+    /**
+     * Transforms the provided prompt using the configured prompt transformer.
+     *
+     * This transformation occurs before [LLMCallStartingHandler] is invoked.
+     *
+     * @param context The context containing information about the prompt transformation
+     * @param prompt The prompt to be transformed
+     * @return The transformed prompt
+     */
+    public suspend fun transformRequest(
+        context: LLMPromptTransformingContext,
+        prompt: Prompt
+    ): Prompt = llmPromptTransformingHandler.transform(context, prompt)
 }

 /**
@@ -62,3 +96,50 @@ public fun interface LLMCallCompletedHandler {
      */
     public suspend fun handle(eventContext: LLMCallCompletedContext)
 }
+
+/**
+ * A functional interface for transforming prompts before they are sent to the language model.
+ *
+ * This handler is invoked before [LLMCallStartingHandler], allowing prompt modification
+ * prior to the LLM call event handlers being triggered.
+ *
+ * This handler enables features to implement patterns such as:
+ * - RAG (Retrieval-Augmented Generation): Query a vector database and add relevant context
+ * - Prompt augmentation: Add system instructions, user context, or conversation history
+ * - Content filtering: Sanitize or modify prompts before sending
+ * - Logging and auditing: Record prompts for compliance or debugging
+ *
+ * Multiple transformers can be registered and will be applied in sequence (chain pattern).
+ * Each transformer receives the prompt from the previous one and returns a modified version.
+ *
+ * Example usage:
+ * ```kotlin
+ * LLMPromptTransformingHandler { context, prompt ->
+ *     // Query database for relevant context
+ *     val relevantDocs = database.search(prompt.messages.last().content)
+ *
+ *     // Augment the prompt with retrieved context
+ *     prompt.copy(
+ *         messages = listOf(
+ *             Message.System("Context: ${relevantDocs.joinToString()}"),
+ *             *prompt.messages.toTypedArray()
+ *         )
+ *     )
+ * }
+ * ```
+ */
+public fun interface LLMPromptTransformingHandler {
+    /**
+     * Transforms the provided prompt based on the given context.
+     *
+     * @param context The context containing information about the LLM request, including
+     * the run ID, model, and agent context.
+     * @param prompt The current prompt to be transformed.
+     * @return The transformed prompt that will be sent to the language model
+     * (or passed to the next transformer in the chain).
+     */
+    public suspend fun transform(
+        context: LLMPromptTransformingContext,
+        prompt: Prompt
+    ): Prompt
+}
```
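The chain contract described in the KDoc above (each transformer sees the previous one's output) is plain left-to-right function composition over a fold. A self-contained illustration, independent of the Koog types:

```kotlin
// Each transformer receives the output of the previous one (chain pattern).
typealias Transform = (String) -> String

fun chain(transformers: List<Transform>, initial: String): String =
    transformers.fold(initial) { current, transform -> transform(current) }

fun main() {
    val addContext: Transform = { p -> "Context: retrieved docs\n$p" }
    val redact: Transform = { p -> p.replace("secret", "[redacted]") }

    // addContext runs first; redact then sees its output.
    println(chain(listOf(addContext, redact), "tell me a secret"))
    // prints: Context: retrieved docs
    //         tell me a [redacted]
}
```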
```diff
@@ -21,6 +21,7 @@ import ai.koog.agents.core.feature.handler.agent.AgentExecutionFailedContext
 import ai.koog.agents.core.feature.handler.agent.AgentStartingContext
 import ai.koog.agents.core.feature.handler.llm.LLMCallCompletedContext
 import ai.koog.agents.core.feature.handler.llm.LLMCallStartingContext
+import ai.koog.agents.core.feature.handler.llm.LLMPromptTransformingContext
 import ai.koog.agents.core.feature.handler.strategy.StrategyCompletedContext
 import ai.koog.agents.core.feature.handler.strategy.StrategyStartingContext
 import ai.koog.agents.core.feature.handler.streaming.LLMStreamingCompletedContext
@@ -245,6 +246,30 @@ public expect abstract class AIAgentPipeline(agentConfig: AIAgentConfig, clock:

     //region Trigger LLM Call Handlers

+    /**
+     * Transforms the prompt by applying all registered transformers.
+     *
+     * This method is called before [onLLMCallStarting] and allows features to modify
+     * the prompt before it is sent to the language model. Multiple transformers can be
+     * registered and will be applied in sequence (chain pattern).
+     *
+     * @param eventId Unique identifier for this event
+     * @param executionInfo The execution information containing parentId and current execution path
+     * @param runId The unique identifier for this LLM call session
+     * @param prompt The original prompt to be transformed
+     * @param model The language model that will be used
+     * @param context The AI agent context
+     * @return The transformed prompt that will be sent to the language model
+     */
+    public override suspend fun onLLMPromptTransforming(
+        eventId: String,
+        executionInfo: AgentExecutionInfo,
+        runId: String,
+        prompt: Prompt,
+        model: LLModel,
+        context: AIAgentContext
+    ): Prompt
+
     /**
      * Notifies all registered LLM handlers before a language model call is made.
      *
@@ -622,6 +647,46 @@ public expect abstract class AIAgentPipeline(agentConfig: AIAgentConfig, clock:
         handle: suspend (StrategyCompletedContext) -> Unit
     )

+    /**
+     * Registers a transformer that can modify the prompt before it is sent to the language model.
+     *
+     * This transformer is invoked before handlers registered via [interceptLLMCallStarting],
+     * allowing prompt modification prior to the LLM call event handlers being triggered.
+     *
+     * This interceptor enables features to implement patterns such as:
+     * - RAG (Retrieval-Augmented Generation): Query a vector database and add relevant context
+     * - Prompt augmentation: Add system instructions, user context, or conversation history
+     * - Content filtering: Sanitize or modify prompts before sending
+     * - Logging and auditing: Record prompts for compliance or debugging
+     *
+     * Multiple transformers can be registered and will be applied in sequence (chain pattern).
+     * Each transformer receives the prompt from the previous one and returns a modified version.
+     *
+     * @param feature The feature registering this transformer
+     * @param transform A function that takes the transforming context and current prompt,
+     * and returns the transformed prompt
+     *
+     * Example:
+     * ```
+     * pipeline.interceptLLMPromptTransforming(feature) { prompt ->
+     *     // Query database for relevant context
+     *     val relevantDocs = database.search(prompt.messages.last().content)
+     *
+     *     // Return augmented prompt
+     *     prompt.copy(
+     *         messages = listOf(
+     *             Message.System("Context: ${relevantDocs.joinToString()}"),
+     *             *prompt.messages.toTypedArray()
+     *         )
+     *     )
+     * }
+     * ```
+     */
+    public override fun interceptLLMPromptTransforming(
+        feature: AIAgentFeature<*, *>,
+        transform: suspend LLMPromptTransformingContext.(Prompt) -> Prompt
+    )
+
     /**
      * Intercepts LLM calls before they are made to modify or log the prompt.
      *
```
```diff
@@ -21,6 +21,7 @@ import ai.koog.agents.core.feature.handler.agent.AgentExecutionFailedContext
 import ai.koog.agents.core.feature.handler.agent.AgentStartingContext
 import ai.koog.agents.core.feature.handler.llm.LLMCallCompletedContext
 import ai.koog.agents.core.feature.handler.llm.LLMCallStartingContext
+import ai.koog.agents.core.feature.handler.llm.LLMPromptTransformingContext
 import ai.koog.agents.core.feature.handler.strategy.StrategyCompletedContext
 import ai.koog.agents.core.feature.handler.strategy.StrategyStartingContext
 import ai.koog.agents.core.feature.handler.streaming.LLMStreamingCompletedContext
@@ -126,6 +127,15 @@ public interface AIAgentPipelineAPI {
     //endregion

     //region Trigger LLM Handlers
+    public suspend fun onLLMPromptTransforming(
+        eventId: String,
+        executionInfo: AgentExecutionInfo,
+        runId: String,
+        prompt: Prompt,
+        model: LLModel,
+        context: AIAgentContext
+    ): Prompt
+
     public suspend fun onLLMCallStarting(
         eventId: String,
         executionInfo: AgentExecutionInfo,
@@ -274,6 +284,11 @@
         handle: suspend (StrategyCompletedContext) -> Unit
     )

+    public fun interceptLLMPromptTransforming(
+        feature: AIAgentFeature<*, *>,
+        transform: suspend LLMPromptTransformingContext.(Prompt) -> Prompt
+    )
+
     public fun interceptLLMCallStarting(
         feature: AIAgentFeature<*, *>,
         handle: suspend (eventContext: LLMCallStartingContext) -> Unit
```
```diff
@@ -34,6 +34,8 @@ import ai.koog.agents.core.feature.handler.llm.LLMCallCompletedHandler
 import ai.koog.agents.core.feature.handler.llm.LLMCallEventHandler
 import ai.koog.agents.core.feature.handler.llm.LLMCallStartingContext
 import ai.koog.agents.core.feature.handler.llm.LLMCallStartingHandler
+import ai.koog.agents.core.feature.handler.llm.LLMPromptTransformingContext
+import ai.koog.agents.core.feature.handler.llm.LLMPromptTransformingHandler
 import ai.koog.agents.core.feature.handler.strategy.StrategyCompletedContext
 import ai.koog.agents.core.feature.handler.strategy.StrategyCompletedHandler
 import ai.koog.agents.core.feature.handler.strategy.StrategyEventHandler
@@ -277,6 +279,20 @@ public class AIAgentPipelineImpl(

     //region Trigger LLM Call Handlers

+    public override suspend fun onLLMPromptTransforming(
+        eventId: String,
+        executionInfo: AgentExecutionInfo,
+        runId: String,
+        prompt: Prompt,
+        model: LLModel,
+        context: AIAgentContext
+    ): Prompt {
+        val eventContext = LLMPromptTransformingContext(eventId, executionInfo, runId, prompt, model, context)
+        return llmCallEventHandlers.values.fold(prompt) { currentPrompt, handler ->
+            handler.transformRequest(eventContext.copy(prompt = currentPrompt), currentPrompt)
+        }
+    }
+
     public override suspend fun onLLMCallStarting(
         eventId: String,
         executionInfo: AgentExecutionInfo,
@@ -567,6 +583,17 @@
         )
     }

+    public override fun interceptLLMPromptTransforming(
+        feature: AIAgentFeature<*, *>,
+        transform: suspend LLMPromptTransformingContext.(Prompt) -> Prompt
+    ) {
+        val handler = llmCallEventHandlers.getOrPut(feature.key) { LLMCallEventHandler() }
+
+        handler.llmPromptTransformingHandler = LLMPromptTransformingHandler(
+            function = createConditionalTransformHandler(feature, transform)
+        )
+    }
+
     public override fun interceptLLMCallStarting(
         feature: AIAgentFeature<*, *>,
         handle: suspend (eventContext: LLMCallStartingContext) -> Unit
@@ -996,6 +1023,21 @@
         eventContext.handle(env)
     }

+    @InternalAgentsApi
+    public fun createConditionalTransformHandler(
+        feature: AIAgentFeature<*, *>,
+        handle: suspend LLMPromptTransformingContext.(Prompt) -> Prompt
+    ): suspend (LLMPromptTransformingContext, Prompt) -> Prompt =
+        handler@{ eventContext, prompt ->
+            val featureConfig = registeredFeatures[feature.key]?.featureConfig
+
+            if (featureConfig != null && !featureConfig.isAccepted(eventContext)) {
+                return@handler prompt
+            }
+
+            eventContext.handle(prompt)
+        }
+
     public override fun FeatureConfig.isAccepted(eventContext: AgentLifecycleEventContext): Boolean {
         return this.eventFilter.invoke(eventContext)
     }
```
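Two consequences of this implementation are worth noting. First, the fold in onLLMPromptTransforming iterates llmCallEventHandlers.values, so cross-feature ordering follows the map's iteration order, which is feature registration order assuming a standard mutableMapOf (insertion-ordered). Second, the createConditionalTransformHandler wrapper lets a feature opt its transformer out of events via its config's event filter. A hedged sketch of the latter, assuming FeatureConfig.eventFilter is settable (its use in isAccepted above only shows it is an (AgentLifecycleEventContext) -> Boolean):

```kotlin
// Hedged sketch: assumes eventFilter is assignable on FeatureConfig.
// A declined event makes the conditional wrapper return the prompt unchanged.
featureConfig.eventFilter = { eventContext ->
    eventContext.eventType == AgentLifecycleEventType.LLMPromptTransforming
}
```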
```diff
@@ -144,6 +144,10 @@ public class EventHandler {
             config.invokeOnStrategyCompleted(eventContext)
         }

+        pipeline.interceptLLMPromptTransforming(this) intercept@{ prompt ->
+            config.invokeOnLLMPromptTransforming(this, prompt)
+        }
+
         pipeline.interceptLLMCallStarting(this) intercept@{ eventContext: LLMCallStartingContext ->
             config.invokeOnLLMCallStarting(eventContext)
         }
```
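For completeness, how a user would likely consume this hook through the EventHandler feature. This is a hedged sketch: it assumes EventHandlerConfig gains an onLLMPromptTransforming registration backing the invokeOnLLMPromptTransforming call above, mirroring the existing onLLMCallStarting-style config methods; the block belongs inside an agent's feature-installation scope.

```kotlin
// Hedged sketch: onLLMPromptTransforming on the config is assumed to exist,
// mirroring invokeOnLLMPromptTransforming used in the diff above.
install(EventHandler) {
    onLLMPromptTransforming { prompt ->
        // Inspect or rewrite the prompt for this call only;
        // returning it unchanged makes the hook a no-op.
        prompt
    }
}
```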