@@ -46,7 +46,7 @@ types:
 inflections and tones to the text based on the user’s expressions and
 the context of the conversation. The synthesized audio is streamed
 back to the user as an [Assistant
- Message](/reference/empathic-voice-interface-evi/chat/chat#receive.Assistant%20Message.type).
+ Message](/reference/empathic-voice-interface-evi/chat/chat#receive.AssistantMessage.type).
 type:
   type: literal<"assistant_input">
   docs: >-
@@ -68,7 +68,7 @@ types:
 docs: >-
   Indicates if this message was inserted into the conversation as text
   from an [Assistant Input
- message](/reference/empathic-voice-interface-evi/chat/chat#send.Assistant%20Input.text).
+ message](/reference/empathic-voice-interface-evi/chat/chat#send.AssistantInput.text).
 id:
   type: optional<string>
   docs: >-
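For orientation (this is commentary, not part of the diff): the `assistant_input` message documented here is sent over the chat WebSocket as plain JSON. A minimal TypeScript sketch, assuming only the `type` literal and `text` field shown in this spec:

```ts
// Sketch: inject assistant text into an active EVI chat session.
// Assumes a connected WebSocket and the payload shape implied by this
// spec: a `type` literal of "assistant_input" plus a `text` field.
type AssistantInput = {
  type: "assistant_input";
  text: string;
};

function sendAssistantInput(socket: WebSocket, text: string): void {
  const message: AssistantInput = { type: "assistant_input", text };
  socket.send(JSON.stringify(message));
  // EVI synthesizes this text and streams the audio back to the client
  // as an assistant message on the same socket.
}
```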
@@ -443,7 +443,7 @@ types:

 Once this message is sent, EVI will not respond until a [Resume
 Assistant
- message](/reference/empathic-voice-interface-evi/chat/chat#send.Resume%20Assistant%20Message.type)
+ message](/reference/empathic-voice-interface-evi/chat/chat#send.ResumeAssistantMessage.type)
 is sent. When paused, EVI won’t respond, but transcriptions of your
 audio inputs will still be recorded.
 source:
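As a usage note (outside the diff), pausing and resuming is a pair of typed JSON messages on the same socket. A sketch, where the `pause_assistant_message` / `resume_assistant_message` type literals are assumptions based on the message names above:

```ts
// Sketch: temporarily silence EVI without closing the session.
// The type literals below are assumed from the Pause/Resume Assistant
// message names in this spec. While paused, EVI does not respond, but
// user audio is still transcribed.
function pauseAssistant(socket: WebSocket): void {
  socket.send(JSON.stringify({ type: "pause_assistant_message" }));
}

function resumeAssistant(socket: WebSocket): void {
  // EVI will not respond again until this resume message is received.
  socket.send(JSON.stringify({ type: "resume_assistant_message" }));
}
```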
@@ -710,9 +710,9 @@ types:
 docs: >-
   Indicates whether a response to the tool call is required from the
   developer, either in the form of a [Tool Response
- message](/reference/empathic-voice-interface-evi/chat/chat#send.Tool%20Response%20Message.type)
+ message](/reference/empathic-voice-interface-evi/chat/chat#send.ToolResponseMessage.type)
   or a [Tool Error
- message](/reference/empathic-voice-interface-evi/chat/chat#send.Tool%20Error%20Message.type).
+ message](/reference/empathic-voice-interface-evi/chat/chat#send.ToolErrorMessage.type).
 tool_call_id:
   type: string
   docs: >-
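To make the response requirement concrete (commentary, not part of the diff): every tool call that requires a response should be answered with either a tool response or a tool error carrying the same `tool_call_id`. A sketch, where the `name`, `parameters`, and `response_required` field names and the `runTool` dispatcher are assumptions:

```ts
// Sketch: always answer a tool call, with either a result or an error.
// Field names other than tool_call_id are assumptions about the payload,
// and runTool is a placeholder for your own dispatcher.
type ToolCallMessage = {
  type: "tool_call";
  tool_call_id: string;
  name: string;        // tool to invoke (assumed field name)
  parameters: string;  // JSON-encoded arguments (assumed field name)
  response_required: boolean;
};

declare function runTool(name: string, args: unknown): Promise<unknown>;

async function handleToolCall(
  socket: WebSocket,
  call: ToolCallMessage,
): Promise<void> {
  try {
    const result = await runTool(call.name, JSON.parse(call.parameters));
    socket.send(JSON.stringify({
      type: "tool_response",
      tool_call_id: call.tool_call_id, // must echo the tool_call's id
      content: JSON.stringify(result),
    }));
  } catch (err) {
    socket.send(JSON.stringify({
      type: "tool_error",
      tool_call_id: call.tool_call_id,
      error: String(err),
    }));
  }
}
```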
@@ -773,7 +773,7 @@ types:
   invocation, ensuring that the Tool Error message is linked to the
   appropriate tool call request. The specified `tool_call_id` must match
   the one received in the [Tool Call
- message](/reference/empathic-voice-interface-evi/chat/chat#receive.Tool%20Call%20Message.type).
+ message](/reference/empathic-voice-interface-evi/chat/chat#receive.ToolCallMessage.type).
 tool_type:
   type: optional<ToolType>
   docs: >-
@@ -787,7 +787,7 @@ types:


   Upon receiving a [Tool Call
- message](/reference/empathic-voice-interface-evi/chat/chat#receive.Tool%20Call%20Message.type)
+ message](/reference/empathic-voice-interface-evi/chat/chat#receive.ToolCallMessage.type)
   and failing to invoke the function, this message is sent to notify EVI
   of the tool's failure.
 source:
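For illustration only (not part of the diff), a failed invocation can be reported with a `tool_error` message whose `tool_call_id` echoes the originating tool call; the `error` field name here is an assumption for the human-readable description:

```ts
// Sketch: notify EVI that a tool invocation failed.
// tool_call_id must be copied verbatim from the tool_call message so the
// failure is linked to the right request; `error` is an assumed field
// name for the description passed back to the model.
function sendToolError(
  socket: WebSocket,
  toolCallId: string,
  description: string,
): void {
  socket.send(JSON.stringify({
    type: "tool_error",
    tool_call_id: toolCallId,
    error: description,
  }));
}
```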
@@ -815,7 +815,7 @@ types:
   invocation, ensuring that the correct response is linked to the
   appropriate request. The specified `tool_call_id` must match the one
   received in the [Tool Call
- message](/reference/empathic-voice-interface-evi/chat/chat#receive.Tool%20Call%20Message.tool_call_id).
+ message](/reference/empathic-voice-interface-evi/chat/chat#receive.ToolCallMessage.tool_call_id).
 tool_name:
   type: optional<string>
   docs: >-
@@ -825,7 +825,7 @@ types:
   Include this optional field to help the supplemental LLM identify
   which tool generated the response. The specified `tool_name` must
   match the one received in the [Tool Call
- message](/reference/empathic-voice-interface-evi/chat/chat#receive.Tool%20Call%20Message.type).
+ message](/reference/empathic-voice-interface-evi/chat/chat#receive.ToolCallMessage.type).
 tool_type:
   type: optional<ToolType>
   docs: >-
@@ -839,7 +839,7 @@ types:


   Upon receiving a [Tool Call
- message](/reference/empathic-voice-interface-evi/chat/chat#receive.Tool%20Call%20Message.type)
+ message](/reference/empathic-voice-interface-evi/chat/chat#receive.ToolCallMessage.type)
   and successfully invoking the function, this message is sent to convey
   the result of the function call back to EVI.
 source:
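Complementing the error case (again outside the diff), a successful invocation is reported with a `tool_response`. The `content` field name is an assumption; `tool_call_id` and the optional `tool_name` must match the originating tool call as described above:

```ts
// Sketch: send a successful tool result back to EVI.
// tool_call_id must match the originating tool_call; tool_name is
// optional but, if provided, should match as well. `content` is an
// assumed field name for the stringified result.
function sendToolResponse(
  socket: WebSocket,
  toolCallId: string,
  toolName: string,
  result: unknown,
): void {
  socket.send(JSON.stringify({
    type: "tool_response",
    tool_call_id: toolCallId,
    tool_name: toolName,
    content: JSON.stringify(result),
  }));
}
```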
@@ -853,7 +853,7 @@ types:
 UserInput:
   docs: >-
     User text to insert into the conversation. Text sent through a User Input
- message is treated as the user’s speech to EVI. EVI processes this input
+ message is treated as the user's speech to EVI. EVI processes this input
     and provides a corresponding response.


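As a quick illustration (not part of the diff), inserting user text is a single typed message; EVI treats it as the user's speech and replies:

```ts
// Sketch: insert user text into the conversation as if it were spoken.
// Uses the user_input type literal (following the assistant_input
// pattern above) and a text field. Text input carries no audio, so no
// prosody predictions are returned for the resulting user message.
function sendUserInput(socket: WebSocket, text: string): void {
  socket.send(JSON.stringify({ type: "user_input", text }));
}
```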
@@ -922,7 +922,7 @@ types:
 docs: >-
   Indicates if this message was inserted into the conversation as text
   from a [User
- Input](/reference/empathic-voice-interface-evi/chat/chat#send.User%20Input.text)
+ Input](/reference/empathic-voice-interface-evi/chat/chat#send.UserInput.text)
   message.
 interim:
   type: boolean
@@ -934,7 +934,7 @@ types:
   context. Interim messages are useful to detect if the user is
   interrupting during audio playback on the client. Even without a
   finalized transcription, along with
- [UserInterrupt](/reference/empathic-voice-interface-evi/chat/chat#receive.User%20Interruption.type)
+ [UserInterrupt](/reference/empathic-voice-interface-evi/chat/chat#receive.UserInterruption.type)
   messages, interim `UserMessages` are useful for detecting if the user
   is interrupting during audio playback on the client, signaling to stop
   playback in your application. Interim `UserMessages` will only be
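In client code, the interruption signal described here (commentary, not part of the diff) boils down to watching for interim user messages and user-interruption events. A sketch, where the type literals follow the message names above and `stopPlayback` is a placeholder for your own player logic:

```ts
// Sketch: halt local assistant-audio playback when the user talks over EVI.
// Type literals are assumed from the message names in this spec;
// stopPlayback is a hypothetical callback that flushes queued audio.
function watchForInterruptions(
  socket: WebSocket,
  stopPlayback: () => void,
): void {
  socket.addEventListener("message", (event: MessageEvent) => {
    const msg = JSON.parse(String(event.data));
    const interrupting =
      msg.type === "user_interruption" ||
      (msg.type === "user_message" && msg.interim === true);
    if (interrupting) {
      stopPlayback();
    }
  });
}
```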
@@ -960,9 +960,9 @@ types:
   This message contains both a transcript of the user’s input and the
   expression measurement predictions if the input was sent as an [Audio
   Input
- message](/reference/empathic-voice-interface-evi/chat/chat#send.Audio%20Input.type).
+ message](/reference/empathic-voice-interface-evi/chat/chat#send.AudioInput.type).
   Expression measurement predictions are not provided for a [User Input
- message](/reference/empathic-voice-interface-evi/chat/chat#send.User%20Input.type),
+ message](/reference/empathic-voice-interface-evi/chat/chat#send.UserInput.type),
   as the prosody model relies on audio input and cannot process text
   alone.
 source:
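To show how that distinction surfaces client-side (not part of the diff): expression predictions only appear on user messages that originated from audio. The `models.prosody.scores` path below is an assumption about the received payload layout:

```ts
// Sketch: pull the top prosody-based emotion scores from a received
// user message. The models.prosody.scores path is an assumed layout;
// it is absent when the message originated from a user_input text
// message rather than an audio input.
type UserMessageLike = {
  models?: { prosody?: { scores?: Record<string, number> } };
};

function topEmotions(msg: UserMessageLike, n = 3): Array<[string, number]> {
  const scores = msg.models?.prosody?.scores;
  if (!scores) return []; // text-only input: no prosody predictions
  return Object.entries(scores)
    .sort(([, a], [, b]) => b - a)
    .slice(0, n);
}
```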
@@ -981,21 +981,6 @@ types:
   - type: ToolErrorMessage
 source:
   openapi: evi-asyncapi.json
- TtsInput:
- properties:
- type: optional<literal<"tts">>
- source:
- openapi: evi-asyncapi.json
- TextInput:
- properties:
- type: optional<literal<"text_input">>
- source:
- openapi: evi-asyncapi.json
- FunctionCallResponseInput:
- properties:
- type: optional<literal<"function_call_response">>
- source:
- openapi: evi-asyncapi.json
 HTTPValidationError:
   properties:
     detail:
@@ -3204,12 +3189,6 @@ types:
   minutes).
 source:
   openapi: evi-openapi.json
- PostedPromptSpec:
- docs: A Prompt associated with this Config.
- properties:
- version: optional<unknown>
- source:
- openapi: evi-openapi.json
 PostedVoiceProvider:
   enum:
     - HUME_AI