[enhancement] add dlpack support to to_table #2275

Open · wants to merge 108 commits into base: main
Conversation

@icfaust (Contributor) commented Jan 27, 2025

Description

This PR introduces consumption of __dlpack__ tensors (https://github.com/dmlc/dlpack) by to_table, allowing zero-copy use of the data in oneDAL. This is important for enabling array_api support and is a prerequisite for #2096 (array api dispatching), which is in turn a prerequisite for #2100, #2106, #2189, #2206, #2207 and #2209.

Sklearn provides array_api support for some algorithms. If we wish to fully support zero copy of sycl_usm inputs, we need to be able to consume array_api inputs because of underlying sklearn dependencies (validate_data, check_array, etc.). While we support SYCL USM ndarrays (dpctl, dpnp) via the __sycl_usm_array_interface__ protocol in the onedal-folder estimators, properly interfacing the estimators in the sklearnex folder requires supporting the __dlpack__ method of arrays/tensors. This PR does that and greatly simplifies the necessary logic in #2096 and the follow-up PRs. It also has the added benefit of working with other frameworks that support SYCL GPU data and expose __dlpack__ interfaces (e.g. PyTorch).
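
For orientation, here is a minimal sketch of the standard DLPack consumer handshake that this kind of to_table support builds on. It is written against the CPython C API and dlpack.h and mirrors the protocol specification rather than this PR's exact implementation; consume_dlpack is a hypothetical name used for illustration only:

    #include <Python.h>
    #include <dlpack/dlpack.h>

    // Hypothetical consumer-side helper illustrating the __dlpack__ protocol.
    DLManagedTensor* consume_dlpack(PyObject* producer) {
        // The producer wraps its tensor in a PyCapsule named "dltensor".
        PyObject* capsule = PyObject_CallMethod(producer, "__dlpack__", nullptr);
        if (capsule == nullptr) {
            return nullptr; // Python exception is already set
        }
        auto* dlm = static_cast<DLManagedTensor*>(
            PyCapsule_GetPointer(capsule, "dltensor"));
        if (dlm != nullptr) {
            // Renaming the capsule marks it as consumed; per the spec the
            // consumer now owns the tensor and must call dlm->deleter later.
            PyCapsule_SetName(capsule, "used_dltensor");
        }
        Py_DECREF(capsule);
        return dlm;
    }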

NOTES:

TODO: add a oneDAL function that checks a dlpack tensor for C-contiguity or F-contiguity, similar to the flags attribute of numpy/dpctl/dpnp. This is out of scope for this PR, but is necessary for assert_all_finite support in the next step of the array_api work; a rough sketch of such a check follows below.
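
A hedged sketch of the C-contiguity half of such a check, assuming DLPack's convention that strides are expressed in elements and that a null strides pointer means compact row-major; is_c_contiguous is a hypothetical name and not part of this PR:

    #include <cstdint>
    #include <dlpack/dlpack.h>

    // Hypothetical helper: true if the tensor's layout matches numpy's
    // flags["C_CONTIGUOUS"]. Illustration only, not this PR's code.
    bool is_c_contiguous(const DLTensor& tensor) {
        if (tensor.strides == nullptr) {
            return true; // DLPack: null strides means compact row-major
        }
        std::int64_t expected = 1;
        for (std::int32_t i = tensor.ndim - 1; i >= 0; --i) {
            // Dimensions of extent 1 impose no stride constraint.
            if (tensor.shape[i] != 1 && tensor.strides[i] != expected) {
                return false;
            }
            expected *= tensor.shape[i];
        }
        return true;
    }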


The PR should start as a draft, then move to the ready-for-review state after CI passes and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with only a docs update doesn't require the performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended the benchmarking suite and provided the corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

@ahuber21 (Contributor) left a comment:

Great work, just a few questions from my side. Ping me for my approval once addressed.

(Two resolved review threads on onedal/datatypes/dlpack/data_conversion.cpp)
        MAKE_QUEUED_HOMOGEN(ptr);
    }
    else {
        auto* const mut_ptr = const_cast<T*>(ptr);
Contributor:

Suggested change:
-    auto* const mut_ptr = const_cast<T*>(ptr);
+    auto* mut_ptr = const_cast<T*>(ptr);

Same thought about making the pointer const as above.
By the way, the data is provided non-const; you are making it const by choice above and const-casting it away again here.
At first glance the non-const usage is dominant, and I don't see a point in ever making the data const. Did I miss something?

Contributor Author (@icfaust):

Just as an update: following the discussion of the readonly handling, I removed that aspect entirely.

(Resolved review thread on onedal/datatypes/dlpack/data_conversion.cpp)
@icfaust (Contributor Author) commented Feb 3, 2025

/intelci: run

(Resolved review thread on onedal/datatypes/dlpack/data_conversion.cpp)
}

#define MAKE_HOMOGEN_TABLE(CType) \
    res = versioned ? convert_to_homogen_impl<CType, DLManagedTensorVersioned>(dlmv, q_obj) \
Contributor:
Since you are assigning a value to an already defined object, I would suggest using an if statement instead of a ternary.

Contributor Author (@icfaust):

Yeah, I was trying to keep it small for readability; I know it may cause a slight difference in the output assembly.

                    : convert_to_homogen_impl<CType, DLManagedTensor>(dlm, q_obj);
    SET_CTYPE_FROM_DAL_TYPE(dtype,
                            MAKE_HOMOGEN_TABLE,
                            throw std::invalid_argument("Found unsupported array type"));
Contributor:
Shouldn't the error message be declared in a separate file alongside the other error messages?

Contributor Author (@icfaust):
This follows the convention set out in the original numpy interfaces (https://github.com/uxlfoundation/scikit-learn-intelex/blob/main/onedal/datatypes/numpy/data_conversion.cpp#L205). I'm not saying it's right, but it's consistent.
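
Picking up the ternary discussion above, a hedged sketch of what the suggested if-statement form of MAKE_HOMOGEN_TABLE could look like; the formatting and line continuations are assumptions, and the behavior is intended to match the ternary version exactly:

    // Hypothetical rewrite using an if statement instead of a ternary, as
    // suggested in review; res is assumed to be declared before expansion.
    #define MAKE_HOMOGEN_TABLE(CType)                                                    \
        if (versioned) {                                                                 \
            res = convert_to_homogen_impl<CType, DLManagedTensorVersioned>(dlmv, q_obj); \
        }                                                                                \
        else {                                                                           \
            res = convert_to_homogen_impl<CType, DLManagedTensor>(dlm, q_obj);           \
        }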

std::int32_t get_ndim(const DLTensor& tensor) {
    // check if 1 or 2 dimensional, and return the number of dimensions
    const std::int32_t ndim = tensor.ndim;
    if (ndim != 2 && ndim != 1) {
Contributor:

What about zero-dimensional tensors (scalars)?

Contributor Author (@icfaust):
I will add a test explicitly covering this scenario and handle it gracefully; a sketch of the completed check follows below.
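
For context, a minimal completion of the truncated get_ndim hunk above; the throw body and error message are assumptions for illustration, not the PR's exact code. Note that a zero-dimensional (scalar) tensor already fails both equality checks and is rejected:

    #include <cstdint>
    #include <stdexcept>
    #include <dlpack/dlpack.h>

    std::int32_t get_ndim(const DLTensor& tensor) {
        // check if 1 or 2 dimensional, and return the number of dimensions
        const std::int32_t ndim = tensor.ndim;
        if (ndim != 2 && ndim != 1) {
            // Hypothetical message; ndim == 0 (a scalar) lands here too.
            throw std::invalid_argument("Input array has an unsupported number of dimensions");
        }
        return ndim;
    }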

(Resolved review thread on onedal/datatypes/dlpack/dlpack_utils.cpp)
@@ -35,6 +35,7 @@
     get_dataframes_and_queues,
 )
 from onedal.tests.utils._device_selection import get_queues, is_dpctl_device_available
+from onedal.utils._array_api import _get_sycl_namespace
Contributor:
FYI, it's very uncommon in the Python (and ML) world to name imported entities with a leading underscore. Some Python interpreters and widely used libraries do not support this naming; famously, you cannot use any _* entities in a torch.jit context due to the inability to inspect those functions.

Contributor Author (@icfaust):

Yeah, I totally agree: private functions should be private. I will follow up with changes to fix this (it wasn't my doing, but I guess I did allow it to happen).

Contributor:

+1, but I do want to add that the rules are more relaxed in a testing context.
