Commit 7a4e703

Merge pull request d-run#381 from windsonsea/apicall

update models/api-call.md

2 parents cdf9874 + a8a68cc

File tree: 2 files changed (+64, −52 lines)


docs/zh/docs/en/models/api-call.md

62 additions, 46 deletions
````diff
@@ -1,52 +1,47 @@
----
-status: new
-translated: true
----
-
 # Model Invocation
 
-The `d.run` platform offers two deployment options for large language models, allowing you to choose based on your specific needs:
+d.run offers two ways to host large language models. You can choose based on your needs:
 
-- **MaaS by Token**: Utilizes token-based billing, sharing resources, and enables model invocation without requiring instance deployment
-- **Model Service**: Provides dedicated instances with per-instance billing, offering unlimited API call access
+* **MaaS by Token**: Billed by token usage. Resources are shared, and users can call models without deploying their own instances.
+* **Dedicated Model Service**: Users get exclusive instances, billed by instance, with no limit on API call volume.
 
-## Supported Models and Deployment Options
+## Currently Supported Models and Hosting Options
 
-| Model Name                    | MaaS by Token | Model Service |
-| ----------------------------- | ------------- | ------------- |
-| 🔥 DeepSeek-R1                | ✓             |               |
-| 🔥 DeepSeek-V3                | ✓             |               |
-| Phi-4                         |               | ✓             |
-| Phi-3.5-mini-instruct         |               | ✓             |
-| Qwen2-0.5B-Instruct           |               | ✓             |
-| Qwen2.5-7B-Instruct           | ✓             | ✓             |
-| Qwen2.5-14B-Instruct          |               | ✓             |
-| Qwen2.5-Coder-32B-Instruct    |               | ✓             |
-| Qwen2.5-72B-Instruct-AWQ      | ✓             | ✓             |
-| baichuan2-13b-Chat            |               | ✓             |
-| Llama-3.2-11B-Vision-Instruct | ✓             | ✓             |
-| glm-4-9b-chat                 | ✓             | ✓             |
+| Model Name                    | MaaS by Token | Dedicated Service |
+| ----------------------------- | ------------- | ----------------- |
+| 🔥 DeepSeek-R1                | ✓             |                   |
+| 🔥 DeepSeek-V3                | ✓             |                   |
+| Phi-4                         |               | ✓                 |
+| Phi-3.5-mini-instruct         |               | ✓                 |
+| Qwen2-0.5B-Instruct           |               | ✓                 |
+| Qwen2.5-7B-Instruct           | ✓             | ✓                 |
+| Qwen2.5-14B-Instruct          |               | ✓                 |
+| Qwen2.5-Coder-32B-Instruct    |               | ✓                 |
+| Qwen2.5-72B-Instruct-AWQ      | ✓             | ✓                 |
+| baichuan2-13b-Chat            |               | ✓                 |
+| Llama-3.2-11B-Vision-Instruct | ✓             | ✓                 |
+| glm-4-9b-chat                 | ✓             | ✓                 |
 
-## Model Endpoints
+## Model Endpoint
 
-A model endpoint is a URL or API address that allows users to access and send requests for model inference.
+A model endpoint is a URL or API address users can send requests to in order to run inference.
 
-| Invocation Method | Endpoint             |
-| ----------------- | -------------------- |
-| MaaS by Token     | `chat.d.run`         |
-| Model Service     | `<region>.d.run`     |
+| Access Method     | Endpoint             |
+| ----------------- | -------------------- |
+| MaaS by Token     | `https://chat.d.run` |
+| Dedicated Service | `<region>.d.run`     |
 
-## API Invocation Examples
+## Example API Usage
 
-### Invoking via MaaS by Token
+### Using MaaS by Token
 
-To invoke models using the MaaS by Token method, follow these steps:
+To call a model via MaaS by Token, follow these steps:
 
-1. **Obtain API Key**: Log in to your user console and create a new API Key
-2. **Set Endpoint**: Replace the SDK endpoint with `chat.d.run`
-3. **Invoke Model**: Use the official model name along with the new API Key for invocation
+1. **Get an API Key**: Log in to the user console and [create a new API key](./apikey.md)
+2. **Set the Endpoint**: Set your SDK's endpoint to `https://chat.d.run`
+3. **Call the Model**: Use the official model name along with your API key
 
-**Example Code (Python)**:
+**Example Code (Python):**
 
 ```python
 import openai
@@ -62,30 +57,51 @@ response = openai.Completion.create(
 print(response.choices[0].text)
 ```
 
-### Invoking via Model Service
+### Using a Dedicated Model Instance
 
-To invoke models using the Model Service method, follow these steps:
+To call a model hosted on your own instance, follow these steps:
 
-1. **Obtain API Key**: Log in to your user console and create a new API Key
-2. **Set Endpoint**: Replace the SDK endpoint with `<region>.d.run`
-3. **Invoke Model**: Use the official model name along with the new API Key for invocation
+1. **Deploy a Model Instance**: Deploy in a specified region, e.g., `sh-02`
+2. **Get an API Key**: Log in to the user console and create a new API key
+3. **Set the Endpoint**: Set your SDK's endpoint to `<region>.d.run`, e.g., `sh-02.d.run`
+4. **Call the Model**: Use the official model name and your API key
 
-**Example Code (Python)**:
+**Example Code (Python):**
 
 ```python
 import openai
 
 openai.api_key = "your-api-key"  # Replace with your API Key
-openai.api_base = "<region>.d.run"
+openai.api_base = "https://sh-02.d.run"  # Replace with your instance's region
 
 response = openai.Completion.create(
-    model="u-1100a15812cc/qwen2",
+    model="u-1100a15812cc/qwen2",  # Replace with your model's full name
     prompt="What is your name?"
 )
 
 print(response.choices[0].text)
 ```
 
-## Support and Feedback
+## Frequently Asked Questions
+
+### Q1: How should I choose the invocation method?
+
+* **MaaS by Token**: Best for lightweight or infrequent use cases.
+* **Dedicated Instance**: Ideal for high-performance and high-frequency usage.
+
+### Q2: How do I view my API Key?
+
+Log in to the user console and go to the API Key management page. See [API Key Management](apikey.md) for more details.
+
+### Q3: How do I find the model name?
+
+* For MaaS by Token, model names follow the format `public/<model_name>`, such as `public/deepseek-r1`, which can be found on the model details page.
+* For dedicated services, model names follow the format `<username>/<model_name>`, such as `u-1100a15812cc/qwen2`, and can be copied directly from your model list.
+
+### Q4: How is pricing calculated for dedicated model instances?
+
+Pricing is based on region, instance type, and usage time. For details, refer to the instance pricing page in your user console.
+
+## Support & Feedback
 
-For any questions or feedback, please contact our [Technical Support Team](../contact/index.md).
+For any questions or feedback, please contact our [technical support team](../contact/index.md).
````
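The Python snippets shown in this diff are truncated by the page rendering (only the first and last lines of each block survive). As a rough, self-contained sketch of the MaaS by Token call described above — assuming d.run exposes an OpenAI-compatible `/v1/completions` route with Bearer-token auth, which the doc's use of the `openai` SDK suggests but does not state explicitly — the request could be built with the standard library alone:

```python
import json
import urllib.request

API_KEY = "your-api-key"          # replace with a key created in the d.run console
ENDPOINT = "https://chat.d.run"   # MaaS by Token endpoint from the table above


def build_completion_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style completion request against the MaaS endpoint.

    The /v1/completions path is an assumption based on the OpenAI SDK usage
    in the doc, not something the doc states directly.
    """
    payload = {"model": model, "prompt": prompt}
    return urllib.request.Request(
        f"{ENDPOINT}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )


# Model name format for MaaS is public/<model_name> (see FAQ Q3 in the diff).
req = build_completion_request("public/deepseek-r1", "What is your name?")
print(req.full_url)  # https://chat.d.run/v1/completions

# To actually send it (requires a valid API key):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["text"])
```

The network call is left commented out so the sketch runs without credentials; only the request construction is exercised.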

docs/zh/docs/models/api-call.md

2 additions, 6 deletions

````diff
@@ -1,7 +1,3 @@
----
-status: new
----
-
 # Model Invocation
 
 d.run offers two hosting options for large models; you can choose either based on your needs:
@@ -32,7 +28,7 @@ d.run offers two hosting options for large models; you can choose either based on your needs
 
 | Access Method | Endpoint             |
 | ------------- | -------------------- |
-| MaaS by Token | `chat.d.run`         |
+| MaaS by Token | `https://chat.d.run` |
 | Model Service | `<region>.d.run`     |
 
 ## API Call Examples
@@ -42,7 +38,7 @@ d.run offers two hosting options for large models; you can choose either based on your needs
 To call a model via MaaS by Token, follow these steps:
 
 1. **Get an API Key**: Log in to the user console and [create a new API Key](./apikey.md)
-2. **Set the Endpoint**: Replace the SDK endpoint with `chat.d.run`
+2. **Set the Endpoint**: Replace the SDK endpoint with `https://chat.d.run`
 3. **Call the Model**: Use the official model name and the new API Key
 
 **Example Code (Python)**
````
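The dedicated-service path in both files relies on two naming conventions: the endpoint is `<region>.d.run` and the model name is `<username>/<model_name>`. Those conventions can be captured in two small helpers (a sketch; the function names `instance_endpoint` and `qualified_model_name` are illustrative, not part of any d.run SDK):

```python
def instance_endpoint(region: str) -> str:
    """Build the dedicated-instance base URL from a region code,
    e.g. "sh-02" -> "https://sh-02.d.run"."""
    return f"https://{region}.d.run"


def qualified_model_name(username: str, model: str) -> str:
    """Dedicated models are addressed as <username>/<model_name>."""
    return f"{username}/{model}"


# Values taken from the example in the diff above:
print(instance_endpoint("sh-02"))                       # https://sh-02.d.run
print(qualified_model_name("u-1100a15812cc", "qwen2"))  # u-1100a15812cc/qwen2
```

These strings would then be plugged into whatever OpenAI-compatible client you use, as `api_base` and `model` respectively.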
