<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
In this guide, we will walk through a simple example to demonstrate how function calling works with Mistral models.
Before we get started, let’s assume we have a dataframe consisting of payment transactions. When users ask questions about this dataframe, they can use certain tools to answer questions about this data. This is just an example to emulate an external database that the LLM cannot directly access.
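For instance, such a dataframe could be built as follows. This is only an illustrative sketch: the column names and values are assumptions for this example, not real data.

```python
import pandas as pd

# Illustrative transaction data; all values are made up for this example.
data = {
    "transaction_id": ["T1001", "T1002", "T1003", "T1004", "T1005"],
    "customer_id": ["C001", "C002", "C003", "C002", "C001"],
    "payment_amount": [125.50, 89.99, 120.00, 54.30, 210.20],
    "payment_date": ["2021-10-05", "2021-10-06", "2021-10-07", "2021-10-05", "2021-10-08"],
    "payment_status": ["Paid", "Unpaid", "Paid", "Paid", "Pending"],
}
df = pd.DataFrame(data)
```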
Users can define all the necessary tools for their use cases.
- In many cases, we might have multiple tools at our disposal. For example, suppose we have two functions as our two tools: `retrieve_payment_status` and `retrieve_payment_date`, which retrieve the payment status and payment date for a given transaction ID.
  return JSON.stringify({ error: 'transaction id not found.' });
}
```

</TabItem>
</Tabs>
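For reference, Python counterparts of these two tools might look like the following. This is a sketch assuming the payment dataframe `df` described above, with `payment_status` and `payment_date` columns.

```python
import json
import pandas as pd

def retrieve_payment_status(df: pd.DataFrame, transaction_id: str) -> str:
    # Look up the payment status for a given transaction id.
    if transaction_id in df.transaction_id.values:
        status = df[df.transaction_id == transaction_id].payment_status.item()
        return json.dumps({"status": status})
    return json.dumps({"error": "transaction id not found."})

def retrieve_payment_date(df: pd.DataFrame, transaction_id: str) -> str:
    # Look up the payment date for a given transaction id.
    if transaction_id in df.transaction_id.values:
        date = df[df.transaction_id == transaction_id].payment_date.item()
        return json.dumps({"date": date})
    return json.dumps({"error": "transaction id not found."})
```

Returning a JSON string (rather than a raw value) keeps the tool output in a format that can be passed back to the model verbatim.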
- In order for Mistral models to understand the functions, we need to outline the function specifications with a JSON schema. Specifically, we need to describe the type, function name, function description, function parameters, and the required parameters for each function. Since we have two functions here, let's define both function specifications in a list.
<Tabs groupId="code">
<TabItem value="python" label="python" default>

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "retrieve_payment_status",
            "description": "Get payment status of a transaction",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {
                        "type": "string",
                        "description": "The transaction id.",
                    }
                },
                "required": ["transaction_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "retrieve_payment_date",
            "description": "Get payment date of a transaction",
            "parameters": {
                "type": "object",
                "properties": {
                    "transaction_id": {
                        "type": "string",
                        "description": "The transaction id.",
                    }
                },
                "required": ["transaction_id"],
            },
        },
    }
]
```
</TabItem>
<TabItem value="typescript" label="typescript">

```typescript
const tools = [
  {
    type: "function",
    function: {
      name: "retrievePaymentStatus",
      description: "Get payment status of a transaction",
      parameters: {
        type: "object",
        properties: {
          transactionId: {
            type: "string",
            description: "The transaction id.",
          }
        },
        required: ["transactionId"],
      },
    },
  },
  {
    type: "function",
    function: {
      name: "retrievePaymentDate",
      description: "Get payment date of a transaction",
      parameters: {
        type: "object",
        properties: {
          transactionId: {
            type: "string",
            description: "The transaction id.",
          }
        },
        required: ["transactionId"],
      },
    },
  }
];
```

</TabItem>
</Tabs>
- Then we organize the two functions into a dictionary where keys represent the function name and values are the functions with `df` pre-filled. This allows us to call each function based on its function name.
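As a sketch of this mapping, using `functools.partial` to bind `df` (with trivial stand-in tools here, since the real ones are defined earlier in the guide):

```python
import functools
import json

# Stand-ins for the real tools defined earlier; the real versions query df.
def retrieve_payment_status(df, transaction_id):
    return json.dumps({"status": "Paid"})

def retrieve_payment_date(df, transaction_id):
    return json.dumps({"date": "2021-10-05"})

df = "placeholder dataframe"  # stands in for the real payment dataframe

names_to_functions = {
    "retrieve_payment_status": functools.partial(retrieve_payment_status, df=df),
    "retrieve_payment_date": functools.partial(retrieve_payment_date, df=df),
}

# Each tool can now be invoked by name with only its remaining arguments:
result = names_to_functions["retrieve_payment_status"](transaction_id="T1001")
```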
Suppose a user asks the following question: “What’s the status of my transaction?” A standalone LLM would not be able to answer this question, as it needs to query the business logic backend to access the necessary data. But what if we have an exact tool we can use to answer this question? We could potentially provide an answer!
<Tabs groupId="code">
<TabItem value="python" label="python" default>

```python
messages = [{"role": "user", "content": "What's the status of my transaction T1001?"}]
```
</TabItem>
<TabItem value="typescript" label="typescript">

```typescript
const messages = [{"role": "user", "content": "What's the status of my transaction T1001?"}];
```

</TabItem>
</Tabs>

How do we execute the function? Currently, it is the user's responsibility to execute these functions, and the function execution happens on the user side.
Let's extract some useful function information from the model response, including `function_name` and `function_params`. It's clear here that our Mistral model has chosen to use the function `retrieve_payment_status` with the parameter `transaction_id` set to T1001.
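As a sketch of that extraction step, using an illustrative tool-call payload shaped like a typical chat-completion response (the values and structure here are assumptions for this example, not the exact response object):

```python
import json

# An illustrative tool call as it might appear in a model response.
tool_call = {
    "function": {
        "name": "retrieve_payment_status",
        "arguments": '{"transaction_id": "T1001"}',
    }
}

function_name = tool_call["function"]["name"]
# The arguments arrive as a JSON string and must be parsed into a dict.
function_params = json.loads(tool_call["function"]["arguments"])
```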
We can now provide the output from the tools to Mistral models, and in return, the Mistral model can produce a customised final response for the specific user.
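One common way to hand the tool output back is to append it to the message list as a `tool` role message before requesting the final completion. A sketch, where the `content` and `tool_call_id` values are placeholders standing in for the real tool result and the id returned by the model:

```python
messages = [{"role": "user", "content": "What's the status of my transaction T1001?"}]

# Append the tool result; "call_123" stands in for the id from the model's tool call.
messages.append({
    "role": "tool",
    "name": "retrieve_payment_status",
    "content": '{"status": "Paid"}',
    "tool_call_id": "call_123",
})
# A follow-up chat request with these messages lets the model produce the final answer.
```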