Description
What is the bug?
We can perform a RAG Pipeline search using this API call:
```
GET /<index_name>/_search?search_pipeline=rag_pipeline
{
  "query": {
    "match": {
      "text": "Abraham Lincoln"
    }
  },
  "ext": {
    "generative_qa_parameters": {
      "llm_model": "bedrock/anthropic-claude",
      "llm_question": "who is lincoln",
      "system_prompt": "null",
      "user_instructions": "null",
      "context_size": 5,
      "message_size": 5,
      "timeout": 60
    }
  }
}
```
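For context, a search pipeline like rag_pipeline is typically created with the retrieval_augmented_generation response processor. A minimal sketch is shown below; the model_id, tag, and field values are placeholders, not taken from this issue:

```
PUT /_search_pipeline/rag_pipeline
{
  "response_processors": [
    {
      "retrieval_augmented_generation": {
        "tag": "rag_pipeline_demo",
        "description": "Demo RAG pipeline",
        "model_id": "<bedrock_connector_model_id>",
        "context_field_list": ["text"],
        "system_prompt": "You are a helpful assistant",
        "user_instructions": "Answer the question using the provided context"
      }
    }
  ]
}
```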
When we send such a request, the pipeline performs an LLM call (code reference) and then processes the output (code reference).
Users can pass a parameter called llmResponseField, which specifies which field of the LLM output to extract as the final response (see the example below).
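For a custom model this is supplied in the request ext. Assuming the REST-level parameter name is llm_response_field (the snake_case form of the Java field, used here purely for illustration), a request might look like:

```
GET /<index_name>/_search?search_pipeline=rag_pipeline
{
  "query": { "match": { "text": "Abraham Lincoln" } },
  "ext": {
    "generative_qa_parameters": {
      "llm_model": "my_custom_model",
      "llm_question": "who is lincoln",
      "llm_response_field": "generated_text"
    }
  }
}
```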
However, for the default model types, users do not need to provide llmResponseField; these are supported out of the box. The supported types are OPENAI, BEDROCK, COHERE, and BEDROCK_CONVERSE.
For Bedrock models, when "llm_model": "bedrock/anthropic-claude" is specified, the code expects the V2 family of Claude models. These models are now deprecated, and users have moved to the V3 family of Claude models. The newer models have a different output format, which causes the LLM message extraction to fail when llm_model is set to bedrock/anthropic-claude.
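For reference, the Bedrock InvokeModel response shapes differ between the two families: a Claude V2 (Text Completions) response carries the answer in a top-level completion field, while a Claude V3 (Messages API) response wraps it in a content array. Roughly (abbreviated, illustrative values):

```
// Claude V2 (Text Completions) response body
{
  "completion": " Abraham Lincoln was the 16th president of the United States...",
  "stop_reason": "stop_sequence"
}

// Claude V3 (Messages API) response body
{
  "id": "msg_...",
  "type": "message",
  "role": "assistant",
  "content": [
    { "type": "text", "text": "Abraham Lincoln was the 16th president of the United States..." }
  ],
  "stop_reason": "end_turn"
}
```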
We need to update the Default LLM Implementation (code reference) to work with the new V3 family of Claude models.
How can one reproduce the bug?
Steps to reproduce the behavior:
- Go to '...'
- Click on '....'
- Scroll down to '....'
- See error
What is the expected behavior?
A clear and concise description of what you expected to happen.
What is your host/environment?
- OS: [e.g. iOS]
- Version [e.g. 22]
- Plugins
Do you have any screenshots?
If applicable, add screenshots to help explain your problem.
Do you have any additional context?
Add any other context about the problem.