Commit 7d4717d

Merge branch 'main' into rcp/ToolCalling-P1
2 parents f215a95 + 043c0c7 commit 7d4717d

38 files changed, +1510 −265 lines

.env.sample

Lines changed: 12 additions & 1 deletion

```diff
@@ -18,10 +18,21 @@ GOOGLE_REGION=
 GOOGLE_PROJECT_ID=
 
 # Hugging Face token
-HUGGINGFACE_TOKEN=
+HF_TOKEN=
 
 # Fireworks
 FIREWORKS_API_KEY=
 
 # Together AI
 TOGETHER_API_KEY=
+
+# WatsonX
+WATSONX_SERVICE_URL=
+WATSONX_API_KEY=
+WATSONX_PROJECT_ID=
+
+# xAI
+XAI_API_KEY=
+
+# Sambanova
+SAMBANOVA_API_KEY=
```
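Each of the keys above is read from the environment at provider construction, with an explicit config value taking precedence. A minimal sketch of that lookup-with-fallback pattern (the helper name `resolve_key` is illustrative, not from the repo):

```python
import os


def resolve_key(config: dict, config_key: str, env_var: str) -> str:
    """Return the credential from config if present, otherwise from the environment."""
    value = config.get(config_key) or os.getenv(env_var)
    if not value:
        raise ValueError(
            f"{env_var} is missing. Provide it in the config or set the environment variable."
        )
    return value
```

For example, after `export HF_TOKEN=...`, `resolve_key({}, "token", "HF_TOKEN")` returns the exported token.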

.github/workflows/run_pytest.yml

Lines changed: 2 additions & 2 deletions

```diff
@@ -18,7 +18,7 @@ jobs:
       run: |
         python -m pip install --upgrade pip
         pip install poetry
-        poetry install
+        poetry install --all-extras --with test
     - name: Test with pytest
-      run: poetry run pytest
+      run: poetry run pytest -m "not integration"
```
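The `-m "not integration"` flag deselects tests carrying the `integration` marker, so CI runs only offline tests. For pytest to accept the marker without warnings it should be registered, e.g. in `pyproject.toml` (a sketch, assuming pytest config lives there; the marker description is illustrative):

```toml
[tool.pytest.ini_options]
markers = [
    "integration: tests that call live provider APIs and require real keys",
]
```

Tests that hit live endpoints would then be tagged with `@pytest.mark.integration`.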

.gitignore

Lines changed: 9 additions & 0 deletions

```diff
@@ -4,3 +4,12 @@ __pycache__/
 env/
 .env
 .google-adc
+
+# Testing
+.coverage
+
+# pyenv
+.python-version
+
+.DS_Store
+**/.DS_Store
```

README.md

Lines changed: 14 additions & 3 deletions

````diff
@@ -1,13 +1,14 @@
 # aisuite
 
+[![PyPI](https://img.shields.io/pypi/v/aisuite)](https://pypi.org/project/aisuite/)
 [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
 
 Simple, unified interface to multiple Generative AI providers.
 
 `aisuite` makes it easy for developers to use multiple LLMs through a standardized interface. Using an interface similar to OpenAI's, `aisuite` makes it easy to interact with the most popular LLMs and compare the results. It is a thin wrapper around python client libraries, and allows creators to seamlessly swap out and test responses from different LLM providers without changing their code. Today, the library is primarily focused on chat completions. We will expand it to cover more use cases in the near future.
 
 Currently supported providers are -
-OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace and Ollama.
+OpenAI, Anthropic, Azure, Google, AWS, Groq, Mistral, HuggingFace, Ollama, Sambanova and Watsonx.
 To maximize stability, `aisuite` uses either the HTTP endpoint or the SDK for making calls to the provider.
 
 ## Installation
@@ -21,11 +22,13 @@ pip install aisuite
 ```
 
 This installs aisuite along with anthropic's library.
+
 ```shell
 pip install 'aisuite[anthropic]'
 ```
 
 This installs all the provider-specific libraries.
+
 ```shell
 pip install 'aisuite[all]'
 ```
@@ -41,12 +44,14 @@ You can use tools like [`python-dotenv`](https://pypi.org/project/python-dotenv/
 Here is a short example of using `aisuite` to generate chat completion responses from gpt-4o and claude-3-5-sonnet.
 
 Set the API keys.
+
 ```shell
 export OPENAI_API_KEY="your-openai-api-key"
 export ANTHROPIC_API_KEY="your-anthropic-api-key"
 ```
 
 Use the python client.
+
 ```python
 import aisuite as ai
 client = ai.Client()
@@ -67,6 +72,7 @@ for model in models:
     print(response.choices[0].message.content)
 
 ```
+
 Note that the model name in the create() call uses the format - `<provider>:<model-name>`.
 `aisuite` will call the appropriate provider with the right parameters based on the provider value.
 For a list of provider values, you can look at the directory - `aisuite/providers/`. The list of supported providers is of the format - `<provider>_provider.py` in that directory. We welcome providers adding support to this library by adding an implementation file in this directory. Please see the section below for how to contribute.
@@ -79,9 +85,10 @@ aisuite is released under the MIT License. You are free to use, modify, and dist
 
 ## Contributing
 
-If you would like to contribute, please read our [Contributing Guide](CONTRIBUTING.md) and join our [Discord](https://discord.gg/T6Nvn8ExSb) server!
+If you would like to contribute, please read our [Contributing Guide](https://github.com/andrewyng/aisuite/blob/main/CONTRIBUTING.md) and join our [Discord](https://discord.gg/T6Nvn8ExSb) server!
 
 ## Adding support for a provider
+
 We have made it easy for a provider or volunteer to add support for a new platform.
 
 ### Naming Convention for Provider Modules
@@ -91,20 +98,24 @@ We follow a convention-based approach for loading providers, which relies on str
 - The provider's module file must be named in the format `<provider>_provider.py`.
 - The class inside this module must follow the format: the provider name with the first letter capitalized, followed by the suffix `Provider`.
 
-#### Examples:
+#### Examples
 
 - **Hugging Face**:
   The provider class should be defined as:
+
   ```python
   class HuggingfaceProvider(BaseProvider)
   ```
+
   in providers/huggingface_provider.py.
 
 - **OpenAI**:
   The provider class should be defined as:
+
   ```python
   class OpenaiProvider(BaseProvider)
   ```
+
   in providers/openai_provider.py
 
 This convention simplifies the addition of new providers and ensures consistency across provider implementations.
````
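The `<provider>:<model-name>` format and the module/class naming convention described in the README can be sketched together; this is not the library's actual loader, just an illustration of the stated convention (`resolve_provider` is a hypothetical helper):

```python
def resolve_provider(model: str) -> tuple[str, str, str]:
    """Map "provider:model" to (module path, class name, model name) per the convention."""
    provider, model_name = model.split(":", 1)
    module_name = f"aisuite.providers.{provider}_provider"
    # Provider name with the first letter capitalized, plus the "Provider" suffix
    class_name = f"{provider.capitalize()}Provider"
    return module_name, class_name, model_name
```

For example, `resolve_provider("openai:gpt-4o")` yields the module `aisuite.providers.openai_provider` and the class `OpenaiProvider`.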

aisuite/framework/message.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,4 +1,4 @@
-"""Interface to hold contents of api responses when they do not confirm to the OpenAI style response"""
+"""Interface to hold contents of api responses when they do not conform to the OpenAI style response"""
 
 from pydantic import BaseModel
 from typing import Literal, Optional
```

aisuite/providers/aws_provider.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -14,7 +14,7 @@ class BedrockConfig:
 
     def __init__(self, **config):
         self.region_name = config.get(
-            "region_name", os.getenv("AWS_REGION_NAME", "us-west-2")
+            "region_name", os.getenv("AWS_REGION", "us-west-2")
        )
 
     def create_client(self):
```
aisuite/providers/cohere_provider.py

Lines changed: 37 additions & 0 deletions

```diff
@@ -0,0 +1,37 @@
+import os
+import cohere
+
+from aisuite.framework import ChatCompletionResponse
+from aisuite.provider import Provider
+
+
+class CohereProvider(Provider):
+    def __init__(self, **config):
+        """
+        Initialize the Cohere provider with the given configuration.
+        Pass the entire configuration dictionary to the Cohere client constructor.
+        """
+        # Ensure API key is provided either in config or via environment variable
+        config.setdefault("api_key", os.getenv("CO_API_KEY"))
+        if not config["api_key"]:
+            raise ValueError(
+                "Cohere API key is missing. Please provide it in the config or set the CO_API_KEY environment variable."
+            )
+        self.client = cohere.ClientV2(**config)
+
+    def chat_completions_create(self, model, messages, **kwargs):
+        response = self.client.chat(
+            model=model,
+            messages=messages,
+            **kwargs,  # Pass any additional arguments to the Cohere API
+        )
+
+        return self.normalize_response(response)
+
+    def normalize_response(self, response):
+        """Normalize the response from Cohere API to match OpenAI's response format."""
+        normalized_response = ChatCompletionResponse()
+        normalized_response.choices[0].message.content = response.message.content[
+            0
+        ].text
+        return normalized_response
```
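The `normalize_response` mapping above can be exercised without the Cohere SDK by using stand-in objects shaped like its response; `SimpleNamespace` stands in for both the Cohere response and the OpenAI-style holder, and `normalize` is a self-contained sketch of the same copy, not the provider's actual method:

```python
from types import SimpleNamespace


def normalize(response) -> SimpleNamespace:
    """Copy the first content block's text into an OpenAI-style choices[0].message.content."""
    message = SimpleNamespace(content=response.message.content[0].text)
    return SimpleNamespace(choices=[SimpleNamespace(message=message)])


# Stand-in shaped like a Cohere v2 chat response: text lives at message.content[0].text
raw = SimpleNamespace(message=SimpleNamespace(content=[SimpleNamespace(text="hello")]))
```

After `normalize(raw)`, the text is reachable at the OpenAI-style path `choices[0].message.content`.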
aisuite/providers/deepseek_provider.py

Lines changed: 34 additions & 0 deletions

```diff
@@ -0,0 +1,34 @@
+import openai
+import os
+from aisuite.provider import Provider, LLMError
+
+
+class DeepseekProvider(Provider):
+    def __init__(self, **config):
+        """
+        Initialize the DeepSeek provider with the given configuration.
+        Pass the entire configuration dictionary to the OpenAI client constructor.
+        """
+        # Ensure API key is provided either in config or via environment variable
+        config.setdefault("api_key", os.getenv("DEEPSEEK_API_KEY"))
+        if not config["api_key"]:
+            raise ValueError(
+                "DeepSeek API key is missing. Please provide it in the config or set the DEEPSEEK_API_KEY environment variable."
+            )
+        config["base_url"] = "https://api.deepseek.com"
+
+        # NOTE: We could choose to remove above lines for api_key since OpenAI will automatically
+        # infer certain values from the environment variables.
+        # Eg: OPENAI_API_KEY, OPENAI_ORG_ID, OPENAI_PROJECT_ID. Except for OPENAI_BASE_URL which has to be the DeepSeek URL.
+
+        # Pass the entire config to the OpenAI client constructor
+        self.client = openai.OpenAI(**config)
+
+    def chat_completions_create(self, model, messages, **kwargs):
+        # Any exception raised by OpenAI will be returned to the caller.
+        # Maybe we should catch them and raise a custom LLMError.
+        return self.client.chat.completions.create(
+            model=model,
+            messages=messages,
+            **kwargs,  # Pass any additional arguments to the OpenAI API
+        )
```
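The DeepSeek and Nebius providers share the same pattern: reuse the OpenAI client but pin `base_url` to the provider's endpoint. A minimal sketch of that configuration step, with the OpenAI client itself left out so it stays self-contained (`build_openai_compatible_config` is a hypothetical helper):

```python
import os


def build_openai_compatible_config(env_var: str, base_url: str, **config) -> dict:
    """Fill api_key from the environment if absent, then pin base_url to the provider endpoint."""
    config.setdefault("api_key", os.getenv(env_var))
    if not config["api_key"]:
        raise ValueError(f"API key is missing. Set {env_var} or pass api_key in the config.")
    config["base_url"] = base_url  # always overridden, as the providers do
    return config
```

The resulting dict can be unpacked straight into an OpenAI-compatible client constructor.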

aisuite/providers/huggingface_provider.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -21,10 +21,10 @@ def __init__(self, **config):
         The token is fetched from the config or environment variables.
         """
         # Ensure API key is provided either in config or via environment variable
-        self.token = config.get("token") or os.getenv("HUGGINGFACE_TOKEN")
+        self.token = config.get("token") or os.getenv("HF_TOKEN")
         if not self.token:
             raise ValueError(
-                "Hugging Face token is missing. Please provide it in the config or set the HUGGINGFACE_TOKEN environment variable."
+                "Hugging Face token is missing. Please provide it in the config or set the HF_TOKEN environment variable."
             )
 
         # Initialize the InferenceClient with the specified model and timeout if provided
```
aisuite/providers/nebius_provider.py

Lines changed: 31 additions & 0 deletions

```diff
@@ -0,0 +1,31 @@
+import os
+from aisuite.provider import Provider
+from openai import Client
+
+
+BASE_URL = "https://api.studio.nebius.ai/v1"
+
+
+class NebiusProvider(Provider):
+    def __init__(self, **config):
+        """
+        Initialize the Nebius AI Studio provider with the given configuration.
+        Pass the entire configuration dictionary to the OpenAI client constructor.
+        """
+        # Ensure API key is provided either in config or via environment variable
+        config.setdefault("api_key", os.getenv("NEBIUS_API_KEY"))
+        if not config["api_key"]:
+            raise ValueError(
+                "Nebius AI Studio API key is missing. Please provide it in the config or set the NEBIUS_API_KEY environment variable. You can get your API key at https://studio.nebius.ai/settings/api-keys"
+            )
+
+        config["base_url"] = BASE_URL
+        # Pass the entire config to the OpenAI client constructor
+        self.client = Client(**config)
+
+    def chat_completions_create(self, model, messages, **kwargs):
+        return self.client.chat.completions.create(
+            model=model,
+            messages=messages,
+            **kwargs,  # Pass any additional arguments to the Nebius API
+        )
```
