
Specialize powi instruction #3148


Draft · wants to merge 2 commits into main
Conversation

ArthurBrussee (Contributor) commented May 4, 2025


Specialize powi for -1 (recip), 1 (identity), and 2 (square); otherwise fall back to the floating-point implementation. Also updates more call sites to use powi_scalar where possible.

TODO: Not 100% sure yet whether this is actually faster, given in-place operations.
TODO: Specialize 3/4/8 as well? -2/-4/-8?
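
For reference, a minimal standalone sketch of the dispatch described above, applied element-wise to a plain f32 slice rather than Burn's tensor API (this powi_scalar free function is hypothetical, not the trait method):

// Hypothetical free-function analogue of float_powi_scalar: replace a
// transcendental powf call with one multiply or reciprocal for common exponents.
fn powi_scalar(xs: &mut [f32], p: i32) {
    match p {
        1 => {}                                                  // identity: x^1 == x, no work
        2 => xs.iter_mut().for_each(|x| *x *= *x),               // square: one multiply
        -1 => xs.iter_mut().for_each(|x| *x = x.recip()),        // reciprocal
        _ => xs.iter_mut().for_each(|x| *x = x.powf(p as f32)),  // float fallback
    }
}

fn main() {
    let mut v = [2.0_f32, 3.0, 4.0];
    powi_scalar(&mut v, 2);
    assert_eq!(v, [4.0, 9.0, 16.0]);

    let mut w = [2.0_f32, 4.0];
    powi_scalar(&mut w, -1);
    assert_eq!(w, [0.5, 0.25]);
}

The hoped-for win, pending the in-place question in the first TODO, is that the common exponents cost a single multiply or reciprocal instead of a powf call.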

codecov bot commented May 4, 2025

Codecov Report

Attention: Patch coverage is 86.48649% with 5 lines in your changes missing coverage. Please review.

Project coverage is 81.33%. Comparing base (5a437b0) to head (c652d19).

Files with missing lines                         Patch %   Lines
crates/burn-tensor/src/tensor/ops/tensor.rs      71.42%    2 Missing ⚠️
crates/burn-core/src/optim/rmsprop.rs            66.66%    1 Missing ⚠️
crates/burn-tensor/src/tensor/api/numeric.rs     66.66%    1 Missing ⚠️
crates/burn-tensor/src/tensor/ops/int_tensor.rs  50.00%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3148      +/-   ##
==========================================
+ Coverage   81.32%   81.33%   +0.01%     
==========================================
  Files         817      817              
  Lines      117804   117807       +3     
==========================================
+ Hits        95802    95820      +18     
+ Misses      22002    21987      -15     

crutcher (Contributor) commented May 4, 2025

I couldn't work out how to push a PR onto a PR when you're on a fork.

But consider specializing the base int op too?

diff --git a/crates/burn-tensor/src/tensor/ops/int_tensor.rs b/crates/burn-tensor/src/tensor/ops/int_tensor.rs
index 6c4a474e..8ced3e71 100644
--- a/crates/burn-tensor/src/tensor/ops/int_tensor.rs
+++ b/crates/burn-tensor/src/tensor/ops/int_tensor.rs
@@ -446,7 +446,13 @@ pub trait IntTensorOps<B: Backend> {
     ///
     /// The elements of `lhs` raised to the value of `rhs`.
     fn int_powi_scalar(lhs: IntTensor<B>, rhs: IntElem<B>) -> IntTensor<B> {
-        B::float_into_int(B::float_powi_scalar(B::int_into_float(lhs), rhs))
+        let p: i32 = rhs.elem();
+
+        match p {
+            1 => lhs,
+            2 => B::int_mul(lhs.clone(), lhs),
+            _ => B::float_into_int(B::float_powi_scalar(B::int_into_float(lhs), rhs)),
+        }
     }
 
     /// Element-wise power with a floatTensor.
diff --git a/crates/burn-tensor/src/tensor/ops/tensor.rs b/crates/burn-tensor/src/tensor/ops/tensor.rs
index 5c4d7044..f0121046 100644
--- a/crates/burn-tensor/src/tensor/ops/tensor.rs
+++ b/crates/burn-tensor/src/tensor/ops/tensor.rs
@@ -844,13 +844,14 @@ pub trait FloatTensorOps<B: Backend> {
     ///
     /// The elements of `lhs` raised to the value of `rhs`.
     fn float_powi_scalar(lhs: FloatTensor<B>, rhs: IntElem<B>) -> FloatTensor<B> {
-        let rhs: i32 = rhs.elem();
+        let p: i32 = rhs.elem();
 
-        match rhs {
-            -1 => B::float_recip(lhs),
+        match p {
             1 => lhs,
             2 => B::float_mul(lhs.clone(), lhs),
-            val => Self::float_powf_scalar(lhs, val as f32),
+            -1 => B::float_recip(lhs),
+            -2 => B::float_recip(B::float_mul(lhs.clone(), lhs)),
+            _ => Self::float_powf_scalar(lhs, p as f32),
         }
     }
 
@@ -875,7 +876,9 @@ pub trait FloatTensorOps<B: Backend> {
     /// # Returns
     ///
     /// A tensor with the same shape as `tensor` with square root values.
-    fn float_sqrt(tensor: FloatTensor<B>) -> FloatTensor<B>;
+    fn float_sqrt(tensor: FloatTensor<B>) -> FloatTensor<B> {
+        Self::float_powf_scalar(tensor, 0.5)
+    }
 
     /// Returns a new tensor with absolute values.
     ///
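
A quick standalone sanity check of the suggested int dispatch, on plain i32 values rather than Burn's IntTensorOps trait (this int_powi_scalar free function is a hypothetical analogue, not the trait method):

// Hypothetical analogue of the int_powi_scalar suggestion above:
// handle p == 1 and p == 2 with integer ops, route the rest through floats.
fn int_powi_scalar(x: i32, p: i32) -> i32 {
    match p {
        1 => x,     // identity: no work
        2 => x * x, // single integer multiply
        _ => (x as f32).powi(p) as i32, // float fallback, mirrors the int_into_float path
    }
}

fn main() {
    assert_eq!(int_powi_scalar(7, 2), 49);
    assert_eq!(int_powi_scalar(2, 10), 1024);
    // Negative exponents truncate toward zero on the float round trip:
    assert_eq!(int_powi_scalar(2, -1), 0);
}

The float_sqrt default in the same diff takes the same shape: a powf_scalar(0.5) fallback that backends with a native sqrt can still override.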
