
add ml_inference processor for offline batch inference #5507


Merged
8 commits merged into opensearch-project:main on Apr 3, 2025

Conversation

@Zhangxunmt (Contributor) commented on Mar 6, 2025

Description

Adding a new ml_inference processor to interact with the ml-commons plugin in OpenSearch for ML-related applications.

An example pipeline configuration that works well:

ml-batch-job-pipeline:
  source:
    s3:
      codec:
        ndjson:
      compression: none
      aws:
        region: "us-east-1"
      default_bucket_owner: <your account>
      scan:
        scheduling:
          interval: PT2M
        buckets:
          - bucket:
              name: "offlinebatch"
              data_selection: metadata_only
              filter:
                include_prefix:
                  - bedrock-multisource/my_batch
                exclude_suffix:
                  - .out
          - bucket:
              name: "offlinebatch"
              data_selection: data_only
              filter:
                include_prefix:
                  - bedrock-multisource/output-multisource/
                exclude_suffix:
                  - manifest.json.out

  buffer:
    bounded_blocking:
      buffer_size: 2048 # max number of records the buffer accepts
      batch_size: 512 # max number of records the buffer drains after each read

  processor:
    - ml_inference:
        host: "<your host>"
        aws_sigv4: true
        action_type: "batch_predict"
        service_name: "bedrock"
        model_id: "<your model id in search>"
        output_path: "s3://offlinebatch/bedrock-multisource/output-multisource/"
        aws:
          region: "us-east-1"
        ml_when: /bucket == "offlinebatch"
    - copy_values:
        entries:
          - to_key: chapter
            from_key: /modelInput/inputText
          - to_key: chapter_embedding
            from_key: /modelOutput/embedding
    - delete_entries:
        with_keys: [modelInput, modelOutput, recordId, s3]

  route:
    - ml-ingest-route: "/chapter != null and /chapter_embedding != null"

  sink:
    - opensearch:
        hosts: ["<your host>"]
        aws_sigv4: true
        index: "my-nlp-index-bedrock"
        routes: [ml-ingest-route]
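
For context, assuming the Bedrock batch job output keeps the usual modelInput/modelOutput/recordId shape (the exact fields depend on the model, and every value below is a placeholder, not real output), a record read back from the output prefix might look roughly like:

{
  "modelInput":  { "inputText": "<original chapter text>" },
  "modelOutput": { "embedding": [<float values>] },
  "recordId":    "<record id assigned by the batch job>",
  "s3":          "<object metadata added by the s3 source>"
}

The copy_values entries then lift /modelInput/inputText and /modelOutput/embedding into chapter and chapter_embedding, delete_entries drops the raw batch fields, and the route sends only records that have both new keys to the OpenSearch sink.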

Issues Resolved

#5470
#5433
#5509

Check List

  • New functionality includes testing.
  • New functionality has a documentation issue. Please link to it in this PR.
    • New functionality has javadoc added
  • Commits are signed with a real name per the DCO

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@dlvenable (Member) commented:

@Zhangxunmt , Thank you for this great processor!

We will also need some unit tests. I'm ok accepting this PR without them as long as we have the @Experimental annotation.

@Zhangxunmt (Contributor, Author) commented on Mar 12, 2025, quoting the comment above:

@Zhangxunmt , Thank you for this great processor!

We will also need some unit tests. I'm ok accepting this PR without them as long as we have the @Experimental annotation.

@dlvenable Thanks, David, for the comments. It looks like there are no major concerns. I will add the remaining unit tests soon, along with the @Experimental annotation.

@Zhangxunmt force-pushed the main branch 5 times, most recently from 0b384c9 to 9ce8a77 on March 24, 2025 at 22:26
@Zhangxunmt force-pushed the main branch 3 times, most recently from 5ae7861 to c670ca5 on March 25, 2025 at 18:28
@Zhangxunmt changed the title from "add ml processor for offline batch inference" to "add ml-inference processor for offline batch inference" on Mar 25, 2025
@Zhangxunmt changed the title from "add ml-inference processor for offline batch inference" to "add ml_inference processor for offline batch inference" on Mar 25, 2025
@Zhangxunmt force-pushed the main branch 3 times, most recently from d0aa269 to 0108602 on March 25, 2025 at 20:00
@Zhangxunmt (Contributor, Author) commented on Mar 31, 2025:

@dlvenable Please review the latest commit for the updates addressing the requested changes, made after a rebase onto main. The Gradle builds fail due to unrelated tests.

Review thread on the retry logic in the new processor:

        return true; // Success
    } catch (Exception e) {
        try {
            Thread.sleep(BASE_DELAY_MS * (1L << attempt)); // Exponential backoff
A collaborator commented on this code:

This will increase quite fast with the number of retries. We should use import com.linecorp.armeria.client.retry.Backoff; and the code should be similar to the exponential backoff code in https://github.com/kkondaka/kk-data-prepper-f2/blob/main/data-prepper-plugins/opensearch/src/main/java/org/opensearch/dataprepper/plugins/sink/opensearch/BulkRetryStrategy.java

@Zhangxunmt (Contributor, Author) replied:

Updated to use similar backoff logic with com.linecorp.armeria.client.
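
For reference, a minimal sketch of that kind of armeria-based retry; the class name, method name, and constants below are illustrative assumptions, not the processor's actual code:

import com.linecorp.armeria.client.retry.Backoff;

// Hypothetical helper class, for illustration only.
class MlBatchRetryHandler {

    // Illustrative constants; the processor's actual values may differ.
    private static final long BASE_DELAY_MS = 1000;
    private static final long MAX_DELAY_MS = 60_000;
    private static final int MAX_RETRIES = 5;

    // Exponential backoff with jitter, capped at MAX_DELAY_MS and MAX_RETRIES attempts.
    private final Backoff backoff = Backoff.exponential(BASE_DELAY_MS, MAX_DELAY_MS)
            .withJitter(0.2)
            .withMaxAttempts(MAX_RETRIES);

    boolean invokeWithRetry(final Runnable batchPredictCall) {
        int attempt = 1;
        while (true) {
            try {
                batchPredictCall.run();
                return true; // Success
            } catch (final Exception e) {
                // A negative delay means the retry budget is exhausted.
                final long delayMillis = backoff.nextDelayMillis(attempt++);
                if (delayMillis < 0) {
                    return false;
                }
                try {
                    Thread.sleep(delayMillis);
                } catch (final InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
    }
}

Unlike the raw bit-shift, this caps each delay at MAX_DELAY_MS, adds jitter, and stops after a bounded number of attempts.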

@dlvenable (Member) left a comment:

Thank you @Zhangxunmt for this contribution!

@dlvenable dlvenable merged commit f451272 into opensearch-project:main Apr 3, 2025
70 of 74 checks passed
amdhing pushed a commit to amdhing/data-prepper that referenced this pull request on Apr 16, 2025:

Add ml processor for offline batch inference (opensearch-project#5507)

Signed-off-by: Xun Zhang <[email protected]>
Davidding4718 pushed a commit to Davidding4718/data-prepper that referenced this pull request on Apr 25, 2025:

Add ml processor for offline batch inference (opensearch-project#5507)

Signed-off-by: Xun Zhang <[email protected]>
Mamol27 pushed a commit to Mamol27/data-prepper that referenced this pull request on May 6, 2025:

Add ml processor for offline batch inference (opensearch-project#5507)

Signed-off-by: Xun Zhang <[email protected]>
Signed-off-by: mamol27 <[email protected]>