---
hide:
  - toc
---

# Deploy, Invoke, and Try DeepSeek on d.run

d.run currently offers a free one-week trial for DeepSeek models.
Visit <https://console.d.run/> to try out the following DeepSeek models with one click:

- **DeepSeek-R1**: The full-powered reasoning model, built on the DeepSeek-V3 Mixture of Experts (MoE) base with 671 billion total parameters (about 37 billion activated per token) and further trained with large-scale reinforcement learning.
  It demonstrates strong capabilities in complex reasoning tasks, excelling in particular at mathematical reasoning, code generation, and logical deduction.
  In these demanding domains, R1 is considered on par with top-tier reasoning models such as OpenAI's o1, offering much stronger reasoning power than typical general-purpose models.

- **DeepSeek-V3**: Based on a Mixture of Experts (MoE) architecture with 671 billion parameters, activating 37 billion per inference.
  It performs excellently on general natural language processing tasks and stands out in terms of response speed.

- **DeepSeek-R1-Distill-Qwen-32B**: A 32B model distilled from DeepSeek-R1 onto the Qwen2.5-32B base, retaining much of R1's reasoning ability at a far lower inference cost.

- **DeepSeek-R1-Distill-Qwen-14B**: A lighter 14B distilled variant built on Qwen2.5-14B, suited to smaller GPU footprints.

However, the free trial only lasts one week, so it is recommended that you deploy your own dedicated model.
The following walkthrough uses DeepSeek-R1-Distill-Qwen-14B as an example.

## Deploying the Model

1. In the **Model Plaza**, find DeepSeek-R1-Distill-Qwen-14B and click the **Deploy** button on the card.

1. Enter the basic information and resource details, select a billing method, then click **Confirm**.

1. When the system prompts "Deployment Successful" and the status changes from **Deploying** to **Running**,
   it means the DeepSeek-R1-Distill-Qwen-14B model has been successfully deployed.

## Chat with the Deployed Model in d.run

On the model list page, click **Experience** in the operations column to start chatting with the DeepSeek-R1-Distill-Qwen-14B distilled model.

## Calling the d.run Model from Third-party Apps

You can also call the d.run API from third-party intelligent apps such as VSCode, Bob Translate, Lobe Chat, and Cherry Studio,
letting them use the just-deployed DeepSeek-R1-Distill-Qwen-14B model.

### Creating an API Key

To call d.run's model services from a third-party app, you will need an API Key.

1. In the d.run LLM service platform, go to **Analytics & Management** -> **API Key Management**, then click **Create** on the right.

1. Enter an easily recognizable name, then click **Confirm**.

1. When prompted that the API Key was created successfully, securely save the generated key and click **Close**.

1. Return to the API Key list. The newly generated key appears at the top by default.

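If you want to sanity-check the new key before wiring it into a GUI app, the snippet below is a minimal sketch that calls the deployment through d.run's OpenAI-compatible chat-completions endpoint (the same endpoint the curl command later on this page uses). The base URL `https://sh-02.d.run/v1` and the model name are assumptions carried over from that example; replace both with the values shown for your own deployment.

```python
# Minimal sketch: verify a d.run API Key against the OpenAI-compatible endpoint.
# Assumptions: the gateway is https://sh-02.d.run and the deployment is exposed
# as "u-3d7a8e49da2a/test14b" (taken from the curl example below); substitute
# the endpoint and model name of your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://sh-02.d.run/v1",  # d.run OpenAI-compatible gateway
    api_key="<your d.run API Key>",     # the key you just created
)

response = client.chat.completions.create(
    model="u-3d7a8e49da2a/test14b",     # model name of your deployment
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```

If this prints a reply, the key and endpoint are working and can be reused in any OpenAI-compatible client.
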
### Calling d.run Model Service

Using the third-party app Bob Translate as an example:

1. After installing it via the App Store, launch Bob and select **Preferences** from the dropdown menu.

1. Click **Services** -> **➕**

1. Select DeepSeek.

1. Fill in the following parameters, enable only d.run's model service, then click **Save**.

    | **Config Item**         | **Example Value**                     |
    |-------------------------|---------------------------------------|
    | **Service Name**        | d.run.deepseek                        |
    | **API Key**             | Enter the newly generated Key         |
    | **Custom API Base URL** | `https://sh-02.d.run`                 |
    | **Custom API Path**     | Leave empty or `/v1/chat/completions` |
    | **Model**               | Custom model                          |
    | **Custom Model**        | DeepSeek-R1-Distill-Qwen-14B          |

    !!! note

        You can also run the following command in the terminal to check if the model is callable:

        ```bash
        curl 'https://sh-02.d.run/v1/chat/completions' \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer <replace with your API Key>" \
          -d '{
            "model": "u-3d7a8e49da2a/test14b",
            "messages": [{"role": "user", "content": "Say this is a test!"}],
            "temperature": 0.7
          }'
        ```

1. Now try translating a sentence with Bob Translate.

🎉 Congratulations! The third-party app Bob Translate has successfully called the DeepSeek-R1-Distill-Qwen-14B model deployed in d.run via the API.
Similarly, you can integrate any DeepSeek model deployed in d.run with any third-party intelligent application.
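
For an application or script that is not in the list above, any OpenAI-compatible client can be pointed at the same endpoint. The sketch below streams a reply token by token, the way chat UIs such as Lobe Chat consume the API; it assumes the gateway supports streaming and reuses the `https://sh-02.d.run/v1` base URL and model name from the curl example, so adjust both to your deployment.

```python
# Minimal streaming sketch against the d.run OpenAI-compatible endpoint.
# Assumptions: streaming is enabled on the gateway, and the base URL / model
# name below (taken from the curl example above) match your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://sh-02.d.run/v1",
    api_key="<your d.run API Key>",
)

stream = client.chat.completions.create(
    model="u-3d7a8e49da2a/test14b",
    messages=[{"role": "user", "content": "Explain model distillation in one paragraph."}],
    temperature=0.7,
    stream=True,  # receive partial tokens as they are generated
)

for chunk in stream:
    # Each chunk carries an incremental delta; print it as it arrives.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```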