
Conversation

@sk5268 (Contributor) commented Dec 30, 2024

Purpose

Cerebras, with its third-generation Wafer-Scale Engine (WSE-3), offers inference speeds of roughly 2000 tokens per second.
This PR therefore adds the Cerebras API to the list of available LLMs via langchain-cerebras.
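For context, the new CerebrasModel wrapper builds on the ChatCerebras chat model from langchain-cerebras. A minimal sketch of the underlying call (the model name and parameters here are illustrative, not the wrapper's actual defaults):

    # Direct langchain-cerebras usage that the OpenAGI wrapper builds on.
    from langchain_cerebras import ChatCerebras

    chat = ChatCerebras(
        model="llama-3.3-70b",           # any model served by Cerebras inference
        temperature=0.5,
        api_key="YOUR_CEREBRAS_API_KEY", # or set the CEREBRAS_API_KEY env variable
    )
    response = chat.invoke("Summarize the benefits of wafer-scale inference.")
    print(response.content)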

Usage

  1. Import the LLM class:
    from openagi.llms.cerebras import CerebrasModel
  2. Set the environment variables (remember to import os first):
    os.environ['CEREBRAS_API_KEY'] = "YOUR_CEREBRAS_API_KEY"
    os.environ['Cerebras_MODEL'] = "llama-3.3-70b"
    os.environ['Cerebras_TEMP'] = "0.5"
  3. Configure the LLM (the combined script is shown below):
    config = CerebrasModel.load_from_env_config()
    llm = CerebrasModel(config=config)
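
Putting the three steps together, a complete minimal script looks like this (a sketch; the final llm.run(...) call is an assumption based on how other OpenAGI LLM wrappers are invoked, so check the class for the exact entry point):

    import os

    from openagi.llms.cerebras import CerebrasModel

    # Step 2: credentials and model settings come from environment variables.
    os.environ['CEREBRAS_API_KEY'] = "YOUR_CEREBRAS_API_KEY"
    os.environ['Cerebras_MODEL'] = "llama-3.3-70b"
    os.environ['Cerebras_TEMP'] = "0.5"

    # Step 3: build the config from the environment and instantiate the LLM.
    config = CerebrasModel.load_from_env_config()
    llm = CerebrasModel(config=config)

    # Hypothetical invocation -- other OpenAGI LLM wrappers expose a run()
    # method, but the exact method name may differ.
    print(llm.run("What makes wafer-scale inference fast?"))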

Note

I tested this change by swapping Gemini (from openagi.llms.gemini import GeminiModel)
for Cerebras (from openagi.llms.cerebras import CerebrasModel) in the example script example/blog_post.py.

Everything worked as expected, matching the behavior of GeminiModel.
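
The swap described above amounts to a two-line change, sketched here assuming the GeminiModel setup in example/blog_post.py follows the same load_from_env_config pattern:

    # Before: example/blog_post.py used Gemini.
    from openagi.llms.gemini import GeminiModel
    config = GeminiModel.load_from_env_config()
    llm = GeminiModel(config=config)

    # After: the same script runs unchanged on Cerebras.
    from openagi.llms.cerebras import CerebrasModel
    config = CerebrasModel.load_from_env_config()
    llm = CerebrasModel(config=config)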


References

  1. Cerebras AI
  2. Langchain Cerebras Integration

@sk5268 (Contributor, Author) left a comment:

Updated docstring format.

@tarun-aiplanet merged commit e20bf4f into aiplanethub:main on Feb 10, 2025 (1 check failed).