Description
I am attempting to use AWS Bedrock as my model provider, following the docs here: https://docs.crewai.com/concepts/llms#aws-bedrock
When starting my app I get an import error:
ImportError: cannot import name 'LLM' from 'crewai'
Are the docs incorrect? If so, is there a preferred method to use AWS Bedrock as a model provider?
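For reference, the pattern the linked docs describe looks roughly like this (a minimal sketch; the assumption is that Bedrock credentials are supplied via the standard AWS environment variables):
import os
from crewai import LLM, Agent

# Assumption: Bedrock credentials come from the standard AWS environment variables.
os.environ["AWS_ACCESS_KEY_ID"] = "<key>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret>"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"

bedrock_llm = LLM(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0")

test_agent = Agent(
    role="test agent",
    goal="To test",
    backstory="A test",
    verbose=True,
    llm=bedrock_llm,
)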
Steps to Reproduce
- Import LLM from crewai
- Create an LLM config:
bedrock_llm = LLM(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
)
- Create an agent that uses the LLM
test_agent = Agent(
    role="test agent",
    goal="To test",
    verbose=True,
    backstory="A test",
    llm=bedrock_llm
)
- Launch the app and get an error:
ImportError: cannot import name 'LLM' from 'crewai'
Expected behavior
The LLM class is imported successfully and the app runs.
Screenshots/Code snippets
from crewai import Agent, Task, Crew, Process, LLM
from loguru import logger
from app.crews.tools import DatabaseTools
import boto3
import json
bedrock_llm = LLM(model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0")
async def create_crew():
    boto3_session = boto3.Session()
    bedrock_embedder = {
        "provider": "bedrock",
        "config": {
            "session": boto3_session,
            "model": "amazon.titan-embed-text-v2:0",
            "vector_dimension": 1024,
        },
    }
    database_administrator = Agent(
        role="Database Administrator",
        goal="To provide data from a database that can be used to answer questions",
        verbose=True,
        memory=True,
        backstory="An expert database administrator who specializes in reviewing database schemas, generating SQL queries, and executing queries to gather the necessary data to answer questions.",
        llm=bedrock_llm,
        embedder=bedrock_embedder,
        tools=[DatabaseTools.execute_query_tool],
    )
    database_schema_retriever = Agent(
        role="Database Schema Retriever",
        goal="To retrieve the schema of a database for the database administrator",
        verbose=True,
        memory=True,
        backstory="An expert in retrieving database schemas.",
        llm=bedrock_llm,
        embedder=bedrock_embedder,
        tools=[DatabaseTools.get_database_schema_tool],
    )
    answer_user_question = Task(
        description=(
            "Answer the user's question using the data from the database. "
            "The general process should be as follows: "
            "1. Retrieve the database schema. "
            "2. Generate an SQL query to retrieve the necessary data. "
            "3. Execute the query and retrieve the data. "
            "4. Use the data to answer the user's question. "
            "This process should be iterated on until the user's question is answered or it is determined that the question cannot be answered. "
            "If the question cannot be answered, provide a detailed explanation as to why. "
            "User's question: {user_question}"
        ),
        expected_output="Answer to the user question",
        agent=database_administrator,
    )
    # Define crew
    crew = Crew(
        agents=[database_administrator, database_schema_retriever],
        tasks=[answer_user_question],
        process=Process.sequential,
        memory=True,
        embedder=bedrock_embedder,
        verbose=True,
    )
    return crew
async def kickoff_crew(inputs):
    crew = await create_crew()
    crew_output = await crew.kickoff_async(inputs=inputs)
    logger.debug(f"Raw Output: {crew_output.raw}")
    if crew_output.json_dict:
        logger.debug(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
    if crew_output.pydantic:
        logger.debug(f"Pydantic Output: {crew_output.pydantic}")
    logger.debug(f"Tasks Output: {crew_output.tasks_output}")
    logger.debug(f"Token Usage: {crew_output.token_usage}")
    return crew_output.raw
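For context, the crew above is kicked off from an async entry point roughly like this (a sketch; the question text is a placeholder):
import asyncio
from app.crews.crew_manager import kickoff_crew

if __name__ == "__main__":
    # Placeholder input; anything matching the {user_question} template works.
    answer = asyncio.run(kickoff_crew({"user_question": "How many orders were placed last month?"}))
    print(answer)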
Operating System
Ubuntu 24.04
Python Version
3.12
crewAI Version
0.51.1
crewAI Tools Version
NA
Virtual Environment
Venv
Evidence
│ /code/app/crews/crew_manager.py:1 in <module> │
│ │
│ ❱ 1 from crewai import Agent, Task, Crew, Process, LLM │
│ 2 from loguru import logger │
│ 3 from app.crews.tools import DatabaseTools │
│ 4 import boto3 │
╰──────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'LLM' from 'crewai'
(/usr/local/lib/python3.12/site-packages/crewai/__init__.py)
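A quick check along these lines confirms whether the installed crewai release exposes the class at all (a diagnostic sketch):
from importlib.metadata import version

print(version("crewai"))  # reports 0.51.1 in this environment

try:
    from crewai import LLM  # only present in newer crewai releases
    print("LLM import OK")
except ImportError as exc:
    print(f"LLM import failed: {exc}")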
Possible Solution
None confirmed, though crewai 0.51.1 may simply predate the release that introduced the LLM class, in which case upgrading crewai should resolve the import error. A workaround sketch for older releases follows.
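This sketch assumes an older, LangChain-based crewai release that accepts a LangChain chat model via the agent's llm parameter; ChatBedrock comes from the langchain-aws package:
from crewai import Agent
from langchain_aws import ChatBedrock

bedrock_llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",  # assumption: adjust to your Bedrock region
)

test_agent = Agent(
    role="test agent",
    goal="To test",
    backstory="A test",
    verbose=True,
    llm=bedrock_llm,
)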
Additional context
None