[ML] Adding configurable inference service #127939
Conversation
Hi @jonathan-buttner, I've created a changelog YAML for you.
…ttner/elasticsearch into custom-inference-service-jon
```java
// Drop null entries, then restrict task settings values to flat scalar types
// (no nested maps or lists).
removeNullValues(parameters);
validateMapValues(
    parameters,
    List.of(String.class, Integer.class, Double.class, Float.class, Boolean.class),
```
Restricting the task settings to these types (no nested fields aka maps or lists).
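As an illustration of that restriction (my own example, not taken from the PR): a `task_settings.parameters` map like the one below would pass validation because every value is a flat string, number, or boolean, whereas any nested object or list would be rejected.

```json
{
  "parameters": {
    "model": "text-embedding-3-small",
    "input_type": "search_document",
    "dimensions": 1536,
    "truncate": true
  }
}
```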
…ttner/elasticsearch into custom-inference-service-jon
@elasticmachine merge upstream
There are no new commits on the base branch.
Pinging @elastic/ml-core (Team:ML)
LGTM
Embedding Services

I successfully created embedding services for Cohere and VoyageAI. It was actually quite simple to get something working, but I have a few questions/suggestions.
```java
}

try {
    var request = new CustomRequest(query, input, model);
```
`EmbeddingsInput` has an `inputType` parameter that should be passed to the `CustomRequest` so that it can be replaced in the request body. Same for the `topN` and `returnDocuments` options in `QueryAndDocsInputs`; these could all be passed to the `CustomRequest` constructor as a loose map.
Rerank
LGTM
Dave and I chatted; I'll address the feedback in follow-up PRs. I added a feature flag to exclude the current functionality from production.
…nference-service-jon
…ttner/elasticsearch into custom-inference-service-jon
…nference-service-jon
…ttner/elasticsearch into custom-inference-service-jon
💔 Backport failed
You can use sqren/backport to manually backport by running the backport command.
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation.
* Inference changes
* Custom service fixes
* Update docs/changelog/127939.yaml
* Cleaning up from failed merge
* Fixing changelog
* [CI] Auto commit changes from spotless
* Fixing test
* Adding feature flag
* [CI] Auto commit changes from spotless

Co-authored-by: elasticsearchmachine <[email protected]>

(cherry picked from commit 9db1837)

Conflicts:
- server/src/main/java/org/elasticsearch/TransportVersions.java
- test/test-clusters/src/main/java/org/elasticsearch/test/cluster/FeatureFlag.java
I'll track addressing the feedback using this comment. Dave's feedback:
I'm looking forward to this. Any idea when…
Taking the ideas and commits from #124299
Notable changes from the initial PR:

- `path` and `method` nesting under the `url` field
- `query_string`: converted it to a list of tuples, to leverage …
- `description` and `version`: removed, as they weren't used
- `sparse_result` and `value` fields
- `response.error_parser` to indicate the location to find the error message field
- `path` field to tell the parser where to find that nested map
- `format` field that specifies how the response is structured (elser's structure is an array of maps, where the key is the token id and the value is the weight; this parser expects the map to have a token id field and a weight field)

Add Custom Model support to Inference API.
You can use this Inference API to invoke models that support the HTTP format.
Inference Endpoint Creation

Supported task_type

Parameter descriptions:
- `secret_parameters`: secret parameters like `api_key` can be defined here.
- `headers` (optional): HTTP(S) header parameters.
- `request.content`: the body structure of the request requires passing in the string-escaped result of the JSON-format HTTP request body. NOTE: unfortunately, if we aren't using Kibana, the content string needs to be a single line.
- `response.json_parser`: we need to parse the returned response into an object that Elasticsearch can recognize (TextEmbeddingFloatResults, SparseEmbeddingResults, RankedDocsResults, ChatCompletionResults). Therefore, we use JsonPath syntax to parse the necessary content from the response. (For the text_embedding task type, we need a `List<List<Float>>` object; the same applies to the other types.) Different task types have different json_parser parameters.
- `response.error_parser`: since each 3rd-party service can have its own error response format, we'll need the user to give us the location to retrieve the base error message. For example, OpenAI's error structure is documented at https://platform.openai.com/docs/api-reference/realtime-server-events/error; we'd want to extract the `message` field. An example of that might look like the sketch after this list.
- `task_settings.parameters`: due to the limitations of the inference framework, if the model requires more parameters to be configured, they can be set in `task_settings.parameters`. These parameters can be placed in the request body as placeholders and replaced with the configured values when constructing the request.
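Putting the parameters above together, here is a rough, non-authoritative sketch of what endpoint creation could look like for an OpenAI-style embeddings endpoint. The overall `PUT _inference/<task_type>/<endpoint>` shape follows the existing Inference API; the `service_settings`/`task_settings` nesting, the `${...}` placeholder syntax, and the parser field names (`text_embeddings`, `path`) are assumptions based on the descriptions above rather than the final schema.

```
PUT _inference/text_embedding/my_custom_endpoint
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<api_key>"
    },
    "url": "https://api.openai.com/v1/embeddings",
    "headers": {
      "Authorization": "Bearer ${api_key}",
      "Content-Type": "application/json"
    },
    "request": {
      "content": "{\"input\": ${input}, \"model\": \"text-embedding-3-small\", \"dimensions\": ${dimensions}}"
    },
    "response": {
      "json_parser": {
        "text_embeddings": "$.data[*].embedding[*]"
      },
      "error_parser": {
        "path": "$.error.message"
      }
    }
  },
  "task_settings": {
    "parameters": {
      "dimensions": 1536
    }
  }
}
```

Note how `request.content` is a single-line, string-escaped JSON body, `response.json_parser` uses JsonPath to pull a `List<List<Float>>` out of the response, `response.error_parser` points at OpenAI's `error.message` field, and the `dimensions` task-setting parameter is substituted into the body via its placeholder.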
Testing

🚧 In progress
Jon Testing

- OpenAI Text Embedding
- Cohere Rerank
- Azure OpenAI
Alibaba Testing
We use the Alibaba Cloud AI Search Model as an example. Please replace the value of `secret_parameters.api_key` with your api_key, as in the fragment below.

text_embedding
sparse_embedding
rerank
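For illustration, the api_key value to replace lives here in each request body (the surrounding `service_settings` nesting is assumed from the parameter descriptions above; the placeholder is not a real key):

```json
{
  "service_settings": {
    "secret_parameters": {
      "api_key": "<your-alibaba-cloud-api-key>"
    }
  }
}
```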