
[ArrayAPI] Refactor KMeans estimator to follow oneDAL estimator design pattern#2654

Draft
KateBlueSky wants to merge 59 commits into uxlfoundation:main from KateBlueSky:dev_array_api_kmeans_refactor

Conversation


@KateBlueSky KateBlueSky commented Aug 6, 2025

Refactor KMeans estimator to follow oneDAL estimator design pattern

Depends on #2641

This PR refactors the KMeans estimator to align with the standardized design pattern used across oneDAL estimators, such as DummyEstimator outlined in #2534 . The main goal is to make the estimator consistent, maintainable, and compatible with future extensions (e.g., other algorithms or backends).

Changes Made

  • Reorganized KMeans and _BaseKMeans classes to follow the oneDAL estimator model pattern.

  • Added backend bindings using @bind_default_backend decorators (see the sketch after this list):

      • train() → kmeans.clustering

      • infer() → kmeans.clustering

      • _is_same_clustering() → kmeans_common (no policy)

  • Centralized creation of the params dictionary in _get_onedal_params(...).

  • Ensured type dispatch and method dispatch use fptype and method respectively (e.g., by_default, lloyd_csr).

  • Wrapped all input/output in to_table / from_table.

  • Separated backend logic from estimator logic using _fit_backend() and _predict_backend().

  • Deferred model creation and attribute assignment to keep the init/finalization process minimal and clean.

  • Applied SYCL queue support via @supports_queue decorators for fit/predict/score.

  • Removed redundant or sklearn-only attributes that aren’t required by oneDAL estimators.

  • Preserved full feature parity (e.g., init modes, scoring, CSR support, random_state handling, etc.).
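
To make the intended structure concrete, here is a minimal, hedged sketch of the pattern the bullets above describe. It is not the PR code: the import paths (onedal._device_offload, onedal.common._backend, onedal.datatypes), the backend call signatures, and the result attribute names are assumptions based on the names mentioned in this description.

# Hedged sketch of the oneDAL estimator pattern described above; module paths,
# backend signatures, and result attributes are assumptions, not the PR code.
from onedal._device_offload import supports_queue
from onedal.common._backend import bind_default_backend
from onedal.datatypes import to_table, from_table


class _KMeansSketch:
    def __init__(self, n_clusters=8, max_iter=300, tol=1e-4):
        self.n_clusters = n_clusters
        self.max_iter = max_iter
        self.tol = tol

    # Stubs replaced at dispatch time with the kmeans.clustering backend entry points.
    @bind_default_backend("kmeans.clustering")
    def train(self, params, X_table): ...

    @bind_default_backend("kmeans.clustering")
    def infer(self, params, model, X_table): ...

    def _get_onedal_params(self, dtype, method="by_default"):
        # Centralized params dict: fptype drives type dispatch,
        # method drives method dispatch (e.g. by_default vs lloyd_csr).
        return {
            "fptype": dtype,
            "method": method,
            "cluster_count": self.n_clusters,
            "max_iteration_count": self.max_iter,
            "accuracy_threshold": self.tol,
        }

    def _fit_backend(self, X_table):
        params = self._get_onedal_params(X_table.dtype)
        return self.train(params, X_table)

    @supports_queue
    def fit(self, X, y=None, queue=None):
        X_table = to_table(X, queue=queue)
        result = self._fit_backend(X_table)
        # Deferred finalization: attributes are assigned only after training.
        self._onedal_model = result.model
        self.cluster_centers_ = from_table(result.model.centroids)
        return self

    @supports_queue
    def predict(self, X, queue=None):
        X_table = to_table(X, queue=queue)
        params = self._get_onedal_params(X_table.dtype)
        result = self.infer(params, self._onedal_model, X_table)
        return from_table(result.responses)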


A PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are checked.
This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require the performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added a respective label(s) to PR if I have a permission for that.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least summary table with measured data, if performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended benchmarking suite and provided corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.


codecov bot commented Aug 6, 2025

Codecov Report

❌ Patch coverage is 75.00000% with 35 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
sklearnex/cluster/k_means.py 51.85% 21 Missing and 5 partials ⚠️
onedal/cluster/kmeans.py 89.53% 6 Missing and 3 partials ⚠️
Flag Coverage Δ
azure 77.15% <75.00%> (-2.69%) ⬇️
github ?

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
onedal/cluster/kmeans.py 81.12% <89.53%> (+2.15%) ⬆️
sklearnex/cluster/k_means.py 66.66% <51.85%> (-17.48%) ⬇️

... and 34 files with indirect coverage changes

"The default value of `n_init` will change from "
f"{default_n_init} to 'auto' in 1.4. Set the value of `n_init`"
" explicitly to suppress the warning"
f"{default_n_init} to 'auto' in 1.4. Set `n_init` explicitly "
Contributor

Since we have a utility to check the sklearn version, this could be placed under an if-else, or removed altogether considering we appear to not support versions 1.1 through 1.3.
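
A hedged illustration of the suggestion, assuming the repo's existing sklearn_check_version utility; the version threshold and the branch placement are placeholders, not the actual PR code:

import warnings

from daal4py.sklearn._utils import sklearn_check_version

# Only emit the deprecation message where it is still meaningful; on the
# supported sklearn versions (>= 1.4) the branch can be dropped entirely.
if not sklearn_check_version("1.4"):
    warnings.warn(
        "The default value of `n_init` will change from 10 to 'auto' in 1.4. "
        "Set the value of `n_init` explicitly to suppress the warning",
        FutureWarning,
    )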

Contributor

Bumping up comment.

@yuejiaointel (Contributor) commented Feb 13, 2026

thx, removed!

Reviewed code excerpt (callable init handling):

    elif callable(init):
        cc_arr = init(X, self.n_clusters, random_state)
        cc_arr = np.ascontiguousarray(cc_arr, dtype=dtype)
        if hasattr(cc_arr, "__array_namespace__"):
Contributor

Isn't the function get_namespace supposed to be doing these checks?
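
For reference, a hedged sketch of the helper the question refers to; get_namespace is sklearn's private array API utility and may change between versions:

import numpy as np
from sklearn.utils._array_api import get_namespace

cc_arr = np.zeros((3, 2), dtype=np.float64)
# get_namespace already detects array API compliance (__array_namespace__),
# so an explicit hasattr check on its input should be redundant.
xp, is_array_api_compliant = get_namespace(cc_arr)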

Contributor

Fixed: the array_namespace check is removed. Init on GPU will fall back to sklearn via onedal_gpu_supported, and init_centroids_sklearn will only be called with NumPy data.

@yuejiaointel (Contributor)

/intelci: run


david-cortes-intel commented Feb 11, 2026

@yuejiaointel Not sure if this is an issue with the PR or with the GPU implementation of KMeans, but this hangs:

import os
os.environ["SCIPY_ARRAY_API"] = "1"

import numpy as np
import dpnp

rng = np.random.default_rng(seed=123)
X = rng.standard_normal(size=(1000, 20), dtype=np.float32)
Xd = dpnp.array(X, device="gpu")

from sklearnex import config_context
from sklearnex.cluster import KMeans

with config_context(array_api_dispatch=True):
    km = KMeans().fit(Xd)
    cl = km.predict(Xd)

The snippet above only hangs with certain oneDAL verbosity levels set; it runs fine under default settings.

@david-cortes-intel (Contributor)

Operations with data frames are also not working:

import os
os.environ["SCIPY_ARRAY_API"] = "1"

import numpy as np

rng = np.random.default_rng(seed=123)
X = rng.standard_normal(size=(1000, 20), dtype=np.float32)

from sklearnex import config_context
from sklearnex.cluster import KMeans

import polars as pl
Xdf = pl.DataFrame(X)
with config_context(array_api_dispatch=True, transform_output="polars"):
    km = KMeans().fit(Xdf)
    cl = km.transform(Xdf)

@david-cortes-intel (Contributor)

@yuejiaointel Also having issues with torch:

import os
os.environ["SCIPY_ARRAY_API"] = "1"

import numpy as np
import torch

rng = np.random.default_rng(seed=123)
X = rng.standard_normal(size=(1000, 20), dtype=np.float32)
Xt = torch.tensor(X, device="xpu")

from sklearnex import config_context
from sklearnex.cluster import KMeans

with config_context(array_api_dispatch=True):
    km = KMeans(n_clusters=3).fit(Xt)
    cl = km.predict(Xt)
TypeError: var() received an invalid combination of arguments - got (ddof=int, dtype=NoneType, out=NoneType, axis=int, ), but expected one of:
 * (tuple of ints dim, bool unbiased = True, bool keepdim = False)
 * (tuple of ints dim = None, *, Number correction = None, bool keepdim = False)
      didn't match because some of the keywords were incorrect: ddof, dtype, out, axis
 * (bool unbiased = True)
 * (tuple of names dim, bool unbiased = True, bool keepdim = False)
 * (tuple of names dim, *, Number correction = None, bool keepdim = False)
      didn't match because some of the keywords were incorrect: ddof, dtype, out, axis
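
The traceback boils down to a keyword mismatch: numpy-style reductions pass axis=, ddof=, dtype=, out=, while torch.Tensor.var expects dim= and correction= (or unbiased=). A hedged, numpy-only illustration (the torch lines are commented out so the snippet runs without torch):

import numpy as np

x = np.arange(10.0)
np.var(x, axis=0, ddof=1)        # numpy spelling: axis=, ddof=

# import torch
# t = torch.arange(10.0)
# t.var(dim=0, correction=1)     # torch spelling: dim=, correction=
# t.var(axis=0, ddof=1)          # raises TypeError, as in the report above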

- Add from_table(like=X) to fit/predict/cluster_centers_ for correct
  output array type (dpnp/dpctl) instead of always returning numpy (see the sketch after this list)
- Remove np.asarray in _init_centroids_onedal and cluster_centers_
  setter to avoid crash on GPU arrays
- Store _input_type for deferred from_table in cluster_centers_ property
- Split _onedal_gpu_supported to reject callable init on GPU (falls
  back to sklearn instead of crashing in numpy-only code path)
- Remove n_init=='warn' branch from onedal fit() and sklearnex _resolve_n_init()
- Simplify n_init default to 'auto' (sklearn >=1.4)
- Simplify algorithm default to 'lloyd' (sklearn >=1.1)
- CI only tests sklearn 1.7.2/1.8.0, so these version-conditional defaults were dead code
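
A hedged sketch of the from_table(like=X) round trip described in the first bullet above, assuming the onedal.datatypes helpers; the like= keyword is the behavior this change introduces (results come back as the input's array type, e.g. dpnp/dpctl, instead of always numpy):

import numpy as np
from onedal.datatypes import to_table, from_table

X = np.ones((4, 2), dtype=np.float32)
table = to_table(X)                   # wrap the input for the oneDAL backend
out = from_table(table, like=X)       # unwrap, matching the type of X
assert type(out) is type(X)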
