This repository was archived by the owner on Jun 12, 2024. It is now read-only.

Commit f3b0955

doc: update async open router doc
1 parent f81fedd commit f3b0955

File tree

1 file changed (+48, -7 lines)

documents/README_OPENROUTER.md

Lines changed: 48 additions & 7 deletions
@@ -145,20 +145,20 @@ For the Gemini API, due to issues like rate limiting and blocking, sync objects

The `OpenRouter` class is designed to manage API interactions with OpenRouter for creating chat completions using AI models asynchronously. This class utilizes `aiohttp` for asynchronous network calls.
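
As background for the `aiohttp` note above, here is a minimal, hypothetical sketch of the kind of request such a class might issue. The endpoint, headers, and payload shape follow the public OpenRouter chat-completions API; none of this is code from the commit:

```python
import asyncio
import aiohttp

# Hypothetical sketch, not this repository's implementation.
async def chat_completion(model: str, api_key: str, prompt: str) -> str:
    url = "https://openrouter.ai/api/v1/chat/completions"  # public OpenRouter endpoint
    headers = {"Authorization": f"Bearer {api_key}"}
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    async with aiohttp.ClientSession() as session:
        async with session.post(url, headers=headers, json=payload) as response:
            data = await response.json()
            return data["choices"][0]["message"]["content"]

# Example:
# asyncio.run(chat_completion('google/gemma-7b-it:free', 'your_api_key_here', 'Hello'))
```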

+<br>

-
-## Class Usage
+## Usage

### Initialization

Initialize an instance of `OpenRouter` with your model identifier and API key:

```python
-from open_router_async import OpenRouter
+from gemini import AsyncOpenRouter

api_key = 'your_api_key_here'
model = 'google/gemma-7b-it:free'
-router = OpenRouter(model, api_key)
+router = AsyncOpenRouter(model, api_key)
```

### Single Chat Completion
@@ -169,13 +169,19 @@ To generate a single chat completion asynchronously:
import asyncio

async def main():
-    completion = await router.create_chat_completion("Hello, how can I help you today?")
+    completion = await router.create_chat_completion("Give me information about Seoul, Korea.")
    print(completion)

if __name__ == "__main__":
    asyncio.run(main())
```

+```python
+from gemini import AsyncOpenRouter
+
+payload = await GemmaClient.create_chat_completion("Give me information about Seoul, Korea.")
+```
+
### Multiple Chat Completions

To handle multiple chat completions concurrently:
@@ -185,18 +191,53 @@ import asyncio

async def main():
    messages = [
-        "Hello, how can I help you today?",
+        "Give me information about Seoul, Korea.",
        "What is the weather like today?",
        "Can you recommend some books?"
    ]
-    completions = await router.create_multi_chat_completions(messages)
+    completions = await GemmaClient.create_multi_chat_completions(messages)
    for completion in completions:
        print(completion)

if __name__ == "__main__":
    asyncio.run(main())
```
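For reference, the concurrent fan-out shown above can also be expressed with `asyncio.gather` over the documented single-completion call. A minimal sketch, assuming `router` is an initialized `AsyncOpenRouter` as in the Initialization section; `gather_completions` is a hypothetical helper, not part of the library (the shorter `GemmaClient` snippets in this diff likewise appear to assume an already-created client and an environment with top-level `await`, such as a notebook):

```python
import asyncio

# Hypothetical helper: run several single completions concurrently.
# Assumes `router` is an AsyncOpenRouter created as shown in the Initialization section.
async def gather_completions(router, prompts):
    # Results come back in the same order as the prompts.
    return await asyncio.gather(*(router.create_chat_completion(p) for p in prompts))
```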

+```python
+messages = [
+    "Give me information about Seoul, Korea.",
+    "What is the weather like today?",
+    "Can you recommend some books?"
+]
+
+completions = await GemmaClient.create_multi_chat_completions(messages)
+
+# Print completions
+for completion in completions:
+    print("-" * 20)
+    print(completion)
+```
+
+### Generate Content
+
+To generate content asynchronously:
+
+```python
+import asyncio
+
+async def main():
+    completion = await router.generate_content("Give me information about Seoul, Korea.")
+    print(completion)
+
+if __name__ == "__main__":
+    asyncio.run(main())
+```
+
+```python
+from gemini import AsyncOpenRouter
+
+payload = await GemmaClient.generate_content("Give me information about Seoul, Korea.")
+```

### More Examples
