Conversation

@Adityakushwaha2006 (Contributor)
Reference Issues/PRs

Related to #1248

What does this implement/fix? Explain your changes.

This PR implements kernel and feature parity between CPU and GPU ROCKET implementations by reusing the CPU's kernel generation function while maintaining GPU acceleration for transform operations.

Changes:

  • Import CPU's kernel generation function for identical kernel parameters
  • Add conversion method to transform sparse channel indexing to dense format compatible with TensorFlow convolutions
  • Update CPU-GPU parity test to use decimal=4 threshold (previously xfail, now passes)
  • Remove GPU-specific kernel generation parameters, as they are now derived from the CPU logic
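A minimal sketch of what the tightened parity test described above might look like (the function name `assert_cpu_gpu_parity` and the toy data are illustrative, not from the aeon test suite): with identical kernels, the previously xfail-marked test can assert agreement to 4 decimal places.

```python
import numpy as np


def assert_cpu_gpu_parity(features_cpu, features_gpu, decimal=4):
    """Assert CPU and GPU feature matrices agree to `decimal` places (1e-4)."""
    np.testing.assert_array_almost_equal(features_cpu, features_gpu, decimal=decimal)


# Toy example: two feature matrices whose elementwise difference is below 1e-4.
rng = np.random.default_rng(0)
features_cpu = rng.normal(size=(3, 10))
features_gpu = features_cpu + rng.uniform(-1e-5, 1e-5, size=features_cpu.shape)
assert_cpu_gpu_parity(features_cpu, features_gpu)  # passes: max |diff| < 1e-4
```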

Results:

  • Kernel parity: 100% exact match (identical weights, biases, dilations, channel selections)
  • Feature parity: Features match to within 1e-4 (0.0001); most datasets agree more closely, on the order of 1e-5 to 1e-7
  • Tested on both univariate and multivariate datasets

Key insight:
The sparse-to-dense conversion places the CPU's selected channel weights at the correct positions in a dense kernel, with zeros for the non-selected channels. Since zero weights contribute nothing to the convolution, this achieves mathematical equivalence while using standard TensorFlow operations.
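The conversion can be sketched in NumPy as follows. This is an illustrative reconstruction, not the PR's actual code: the function name `sparse_to_dense_kernel` and the assumed layout (per-kernel weights of shape `(kernel_length, n_selected)` plus an array of selected channel indices, expanded to a dense `(kernel_length, n_channels)` array) are assumptions based on the description above.

```python
import numpy as np


def sparse_to_dense_kernel(sparse_weights, channel_indices, n_channels):
    """Expand sparse per-channel kernel weights to a dense kernel.

    sparse_weights: (kernel_length, n_selected) weights for selected channels.
    channel_indices: indices of the selected channels.
    Returns a (kernel_length, n_channels) array with zeros elsewhere, so the
    dense convolution sum equals the sparse one.
    """
    kernel_length, _ = sparse_weights.shape
    dense = np.zeros((kernel_length, n_channels), dtype=sparse_weights.dtype)
    dense[:, channel_indices] = sparse_weights
    return dense


# Toy example: a length-3 kernel acting on channels 0 and 2 of a 4-channel series.
w = np.array([[0.5, -1.0], [1.5, 0.25], [-0.75, 2.0]])
dense = sparse_to_dense_kernel(w, np.array([0, 2]), n_channels=4)
# Columns 1 and 3 are all zeros; columns 0 and 2 hold the sparse weights.
```

Because the inserted columns are zero, any convolution window dotted with the dense kernel yields the same value as the sparse computation restricted to the selected channels.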

Does your contribution introduce a new dependency? If yes, which one?

No.

Any other comments?

None.

PR checklist

For all contributions
  • I've added myself to the list of contributors. Alternatively, you can use the @all-contributors bot to do this for you after the PR has been merged.
  • The PR title starts with either [ENH], [MNT], [DOC], [BUG], [REF], [DEP] or [GOV] indicating whether the PR topic is related to enhancement, maintenance, documentation, bugs, refactoring, deprecation or governance.
For new estimators and functions
  • I've added the estimator/function to the online API documentation.
  • (OPTIONAL) I've added myself as a __maintainer__ at the top of relevant files and want to be contacted regarding its maintenance. Unmaintained files may be removed. This is for the full file, and you should not add yourself if you are just making minor changes or do not want to help maintain its contents.
For developers with write access
  • (OPTIONAL) I've updated aeon's CODEOWNERS to receive notifications about future changes to these files.

…l parity along with feature divergence <1e-4
@aeon-actions-bot added labels: [bug] Something isn't working, [transformations] Transformations package — Jan 8, 2026
@aeon-actions-bot (Contributor)

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ bug ].
I have added the following labels to this PR based on the changes made: [ transformations ]. Feel free to change these if they do not properly represent the PR.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Discord channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • Run pre-commit checks for all files
  • Run mypy typecheck tests
  • Run all pytest tests and configurations
  • Run all notebook example tests
  • Run numba-disabled codecov tests
  • Stop automatic pre-commit fixes (always disabled for drafts)
  • Disable numba cache loading
  • Regenerate expected results for testing
  • Push an empty commit to re-run CI checks
