## evi/evi-flutter/README.md

This project features a sample implementation of Hume's [Empathic Voice Interface](https://dev.hume.ai/docs/empathic-voice-interface-evi/overview) using Flutter. It is lightly adapted from the starter project provided by `flutter create`.

**Targets:** The example supports iOS, Android, and Web.

**Dependencies:** It uses the [record](https://pub.dev/packages/record) Flutter package for audio recording, and the [audioplayers](https://pub.dev/packages/audioplayers) package for playback.

2. Install Flutter (if needed) following the [official guide](https://docs.flutter.dev/get-started/install).
3. Install dependencies:

   ```shell
   flutter pub get
   ```

4. Set up your API key:

   You must authenticate to use the EVI API. Your API key can be retrieved from the [Hume AI platform](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your API keys](https://dev.hume.ai/docs/introduction/api-key).

   This example uses [flutter_dotenv](https://pub.dev/packages/flutter_dotenv). Place your API key in a `.env` file at the root of your project.

   ```shell
   echo "HUME_API_KEY=your_api_key_here" > .env
   ```

   You can copy the `.env.example` file to use as a template.

   **Note:** the `HUME_API_KEY` environment variable is for development only. In a production Flutter app you should avoid building your API key into the app -- the client should fetch an access token from an endpoint on your server. Supply the `MY_SERVER_AUTH_URL` environment variable and uncomment the call to `fetchAccessToken` in `lib/main.dart`.
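The server endpoint itself is out of scope for this example, but the token exchange can be sketched in a few lines. The following is a hypothetical Node/TypeScript handler, not code from this repo; the token endpoint URL and response shape are assumptions based on Hume's token-based auth flow -- verify them against the current API reference.

```typescript
// Hypothetical server-side token fetch (not part of this repo). The client
// calls your server instead of embedding HUME_API_KEY in the app bundle.

// Builds the HTTP Basic auth header from your API key and Secret key.
export function basicAuthHeader(apiKey: string, secretKey: string): string {
  return "Basic " + Buffer.from(`${apiKey}:${secretKey}`).toString("base64");
}

// Assumed token exchange (endpoint URL and response shape are assumptions --
// check Hume's auth documentation before relying on this):
export async function fetchHumeAccessToken(
  apiKey: string,
  secretKey: string,
): Promise<string> {
  const res = await fetch("https://api.hume.ai/oauth2-cc/token", {
    method: "POST",
    headers: {
      Authorization: basicAuthHeader(apiKey, secretKey),
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: "grant_type=client_credentials",
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const { access_token } = await res.json();
  return access_token;
}
```

Your Flutter client would then GET this endpoint (the `MY_SERVER_AUTH_URL` mentioned above) and pass the returned token to EVI instead of the raw API key.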
5. Specify an EVI configuration (Optional):

   EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration).

   ```shell
   echo "HUME_CONFIG_ID=your_config_id_here" >> .env
   ```
6. Run the app:

   ```shell
   flutter run
   ```
7. If you are using the Android emulator, make sure to send audio to the emulator from the host.
## Notes

- **Echo cancellation.** Echo cancellation is important for a good user experience with EVI. Without echo cancellation, EVI will detect its own speech as user interruptions, will cut itself off, and become incoherent. This Flutter example _requests_ echo cancellation from the browser or the device's operating system, but echo cancellation is hardware-dependent and may not be provided in all environments.
  - Echo cancellation works consistently on physical iOS devices and on the web.
  - Echo cancellation works on some physical Android devices.
  - Echo cancellation doesn't seem to work in the iOS Simulator or Android Emulator when forwarding audio from the host.
  - If you need to test in a simulator or emulator, or in an environment where echo cancellation is not provided, use headphones, or enable the mute button while EVI is speaking.
## evi/evi-next-js-app-router-quickstart/README.md

Below are the steps to complete deployment:

1. Create a Git Repository for your project.
2. Provide the required environment variables. To get your API key and Secret key, log into the Hume AI Platform and visit the [API keys page](https://platform.hume.ai/settings/keys).
   ```shell
   cd hume-api-examples/evi/evi-next-js-app-router-quickstart
   ```

2. Install dependencies:

   ```shell
   npm install
   ```

3. Set up your API key and Secret key:

   In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your API keys](https://dev.hume.ai/docs/introduction/api-key).

   Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project.

   ```shell
   echo "HUME_API_KEY=your_api_key_here" > .env
   echo "HUME_SECRET_KEY=your_secret_key_here" >> .env
   ```

   You can copy the `.env.example` file to use as a template.
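Next.js loads `.env` files automatically, so no extra code is needed here. Purely to illustrate the file format these `echo` commands produce, a dotenv-style file is just `KEY=value` lines, which could be parsed like this (illustrative sketch only, not code from this repo):

```typescript
// Illustrative only: parse dotenv-style "KEY=value" lines into a record.
// (Next.js performs .env loading for you; this just shows the format.)
export function parseDotenv(contents: string): Record<string, string> {
  const env: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // ignore malformed lines
    const key = trimmed.slice(0, eq).trim();
    // Strip optional surrounding quotes from the value.
    const value = trimmed.slice(eq + 1).trim().replace(/^["']|["']$/g, "");
    env[key] = value;
  }
  return env;
}
```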
4. Specify an EVI configuration (Optional):

   EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration).

   You may pass in a configuration ID to the `VoiceProvider` object inside the [components/Chat.tsx file](https://github.com/HumeAI/hume-api-examples/blob/main/evi/next-js/evi-next-js-app-router-quickstart/components/Chat.tsx).
## evi/evi-next-js-function-calling/README.md

1. [Create a tool](https://dev.hume.ai/docs/empathic-voice-interface-evi/tool-use#create-a-tool) with the following payload:

   ```json
   {
     "name": "get_current_weather",
     "description": "This tool is for getting the current weather.",
     "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the users location.\" } }, \"required\": [\"location\", \"format\"] }"
   }
   ```
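Note that `parameters` is a JSON Schema serialized as a *string*, not a nested object. A simple way to avoid escaping mistakes is to build the schema as an object and stringify it before sending the payload (illustrative sketch, not code from this repo):

```typescript
// The "parameters" field of a tool must be a JSON-encoded string containing
// a JSON Schema. Building the schema as an object and stringifying it avoids
// hand-escaping every quote.
const parametersSchema = {
  type: "object",
  properties: {
    location: {
      type: "string",
      description: "The city and state, e.g. San Francisco, CA",
    },
    format: {
      type: "string",
      enum: ["celsius", "fahrenheit"],
      description: "The temperature unit to use. Infer this from the users location.",
    },
  },
  required: ["location", "format"],
};

const toolPayload = {
  name: "get_current_weather",
  description: "This tool is for getting the current weather.",
  parameters: JSON.stringify(parametersSchema), // string, not an object
};
```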
2. [Create a configuration](https://dev.hume.ai/docs/empathic-voice-interface-evi/tool-use#create-a-configuration) equipped with that tool:
   ```shell
   cd hume-api-examples/evi/evi-next-js-function-calling
   ```

2. Install dependencies:

   ```shell
   pnpm install
   ```

3. Set up your API key and Secret key:

   In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your API keys](https://dev.hume.ai/docs/introduction/api-key).

   Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project.
   ```shell
   pnpm run dev
   ```

   This will start the Next.js development server, and you can access the application at `http://localhost:3000`.
## Example Conversation

Here's an example of how you might interact with the EVI to get weather information:

_User: "What's the weather like in New York City?"_

_EVI: (Uses the get_current_weather tool to fetch data) "Currently in New York City, it's 72°F (22°C) and partly cloudy. The forecast calls for a high of 78°F (26°C) and a low of 65°F (18°C) today."_
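On the server side, fulfilling a tool call like this comes down to parsing the argument string EVI sends and dispatching on the tool name. A hypothetical handler is sketched below -- the function names and the stubbed weather lookup are stand-ins for illustration, not code from this repo:

```typescript
// Hypothetical tool-call dispatcher. The weather lookup is a stub; a real
// implementation would call an actual weather API.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const handlers: Record<string, ToolHandler> = {
  get_current_weather: async (args) => {
    const location = String(args.location);
    const unit = args.format === "celsius" ? "°C" : "°F";
    // Stubbed result; replace with a real weather API call.
    return JSON.stringify({ location, temperature: `72${unit}` });
  },
};

// Tool arguments arrive as a JSON string, mirroring the stringified
// "parameters" schema used when the tool was created.
export async function handleToolCall(name: string, argsJson: string): Promise<string> {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(JSON.parse(argsJson));
}
```

The string returned by the handler is what you would send back to EVI as the tool's result message.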
## evi/evi-next-js-pages-router-quickstart/README.md

Below are the steps to complete deployment:

1. Create a Git Repository for your project.
2. Provide the required environment variables. To get your API key and Secret key, log into the Hume AI Platform and visit the [API keys page](https://platform.hume.ai/settings/keys).
   ```shell
   cd hume-api-examples/evi/evi-next-js-pages-router-quickstart
   ```

2. Install dependencies:

   ```shell
   pnpm install
   ```

3. Set up your API key and Secret key:

   In order to make an authenticated connection we will first need to generate an access token. Doing so will require your API key and Secret key. These keys can be obtained by logging into the Hume AI Platform and visiting the [API keys page](https://platform.hume.ai/settings/keys). For detailed instructions, see our documentation on [getting your API keys](https://dev.hume.ai/docs/introduction/api-key).

   Place your `HUME_API_KEY` and `HUME_SECRET_KEY` in a `.env` file at the root of your project.

   ```shell
   echo "HUME_API_KEY=your_api_key_here" > .env
   echo "HUME_SECRET_KEY=your_secret_key_here" >> .env
   ```

   You can copy the `.env.example` file to use as a template.
4. Specify an EVI configuration (Optional):

   EVI is pre-configured with a set of default values, which are automatically applied if you do not specify a configuration. The default configuration includes a preset voice and language model, but does not include a system prompt or tools. To customize these options, you will need to create and specify your own EVI configuration. To learn more, see our [configuration guide](https://dev.hume.ai/docs/empathic-voice-interface-evi/configuration/build-a-configuration).

   You may pass in a configuration ID to the `VoiceProvider` object inside the [components/Chat.tsx file](https://github.com/HumeAI/hume-api-examples/blob/main/evi/next-js/evi-next-js-pages-router-quickstart/components/Chat.tsx).