Update internal_instructor.py MISSING API_KEY issue #2786
Conversation
In the case where we do not add the API key in the environment and instead pass it manually through the LLM object, it fails to convert the output data to a Pydantic model.

Error:

```
Failed to convert text into a Pydantic model due to error: litellm.AuthenticationError: Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars
```
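The failure stems from how the key is resolved: an explicitly passed key is only used if it is forwarded to the underlying call; otherwise LiteLLM falls back to environment variables. A minimal sketch of that resolution logic (the function name and env-var default are illustrative, not taken from the crewai codebase):

```python
import os

def resolve_api_key(explicit_key, env_var="ANTHROPIC_API_KEY"):
    """Illustrative key resolution: an explicitly passed key wins;
    otherwise fall back to the environment variable. When neither is
    set, the provider call fails with an AuthenticationError."""
    return explicit_key or os.getenv(env_var)
```

If the key set on the `LLM` object never reaches this resolution step, the environment lookup is the only path left, which is exactly the bug reported here.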
Disclaimer: This review was made by a crew of AI Agents.

**Code Review Comment for PR #2786**

**Overview**

The pull request modifies

**Code Analysis**

1. API Key Handling

**Suggested Improvements**

a. Error Handling
b. API Key Validation
c. Documentation

**Security Considerations**

**Testing Recommendations**

**Code Quality Metrics**

**Conclusion**

The pull request effectively resolves the API key handling issue. However, adopting the suggested improvements will enhance the code's robustness and maintainability. The changes are approved, with recommendations for implementation in future iterations. This review emphasizes the importance of proactive error management and proper documentation, which are essential for maintaining code quality in collaborative environments.
- added validation error logic
```python
        ValueError: If no API key is provided or is invalid.
        RuntimeError: If chat completion creation fails.
    """
    if not self.llm.api_key and not os.getenv("ANTHROPIC_API_KEY"):
```
I think the check on `ANTHROPIC_API_KEY` should be changed to check `os.getenv('OPENAI_API_KEY')`, since by default crewai uses the OpenAI model gpt-4.
```python
        RuntimeError: If chat completion creation fails.
    """
    if not self.llm.api_key and not os.getenv("ANTHROPIC_API_KEY"):
        raise ValueError("API key must be provided either through LLM object or ANTHROPIC_API_KEY environment variable")
```
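Since hardcoding `ANTHROPIC_API_KEY` covers only one provider, a provider-agnostic check could map the model's provider prefix to its environment variable. This is a hypothetical sketch; the map and default below are illustrative, and in practice LiteLLM already knows each provider's variable:

```python
import os

# Illustrative provider -> env var map; LiteLLM maintains the real one.
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def has_api_key(model, explicit_key):
    """Return True if a usable key exists for the model's provider.
    Models are assumed to follow the "provider/name" convention."""
    provider = model.split("/", 1)[0] if "/" in model else "openai"
    env_var = PROVIDER_ENV_VARS.get(provider, "OPENAI_API_KEY")
    return bool(explicit_key or os.getenv(env_var))
```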
Again, the `ANTHROPIC_API_KEY` error should be changed, since the user can use any LLM provider. I think we can get the `api_key` from the LLM object, but what if `base_url` is also set? Does instructor accept that argument?
Yes, you are correct, this should not be thrown from here, as the key will be validated internally by LiteLLM for all types of providers.
```python
    Raises:
        ValueError: If no API key is provided or is invalid.
        RuntimeError: If chat completion creation fails.
```
You are not raising it anymore, right?
@amitgeed That makes sense! Would you mind sharing the code to reproduce this issue?
@lucasgomide
@amitgeed Should I try with another model?

```python
from crewai import LLM, Agent, Crew, Task
from pydantic import BaseModel

llm = LLM(
    model="openai/gpt-4o-mini",
    max_completion_tokens=4200,
    api_key="sk-proj-***",
)

class PydanticModel(BaseModel):
    name: str
    age: int

agent = Agent(
    role="",
    goal="",
    backstory="",
    llm=llm,
)

task = Task(
    name="",
    description="",
    expected_output="",
    agent=agent,
    output_pydantic=PydanticModel,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
)

result = crew.kickoff()
print(result)
```
@lucasgomide Have you tried step 4 with Anthropic?
```diff
         messages = [{"role": "user", "content": self.content}]
         model = self._client.chat.completions.create(
-            model=self.llm.model, response_model=self.model, messages=messages
+            model=self.llm.model, response_model=self.model, messages=messages, api_key=self.llm.api_key
```
I think we should send the `api_base` here as well.
Issue #2753
what do you think @lucasgomide?
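One way to forward both the key and the base URL only when they are set, so LiteLLM's env-var fallback still applies, is sketched below. The attribute names `api_key` and `base_url` are assumptions about the LLM object, and `api_base` is the keyword LiteLLM's completion call accepts; this is an illustration, not the merged implementation:

```python
def completion_kwargs(llm):
    """Collect per-call credentials from an LLM-like object, skipping
    unset fields so environment-based fallback keeps working."""
    kwargs = {}
    if getattr(llm, "api_key", None):
        kwargs["api_key"] = llm.api_key
    if getattr(llm, "base_url", None):
        kwargs["api_base"] = llm.base_url
    return kwargs
```

The create call could then spread these in, e.g. `self._client.chat.completions.create(..., **completion_kwargs(self.llm))`.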