Conversation

@wermos (Contributor) commented Mar 25, 2025

This PR is a big one because of certain quirks in the Fastor library.

To ease the reviewing process, I will explain the changes that were made and why they were made.

Large Number of Files Changed

The majority of the changes are in files where I removed the f suffix from floating-point literals.

In core/include/detray/builders/cuboid_portal_generator.hpp, we had the line (before I changed it)

vector3_t new_x{1.f, 0.f, 0.f};

This caused problems with Fastor: the compiler tries to instantiate a Fastor::Tensor<double, 3>, but the braced initializer list contains three floats. Because of this, template argument deduction was failing and the compilation would error out.

Conceptually, too, we are converting from a smaller type (float) to a larger type (double), so it makes sense for the default to be a double literal that may be narrowed to float if that is what we define our scalar_t to be.

This was occurring in many places, so I changed it with a search-and-replace. In some cases the change touched files that genuinely use floats and triggered the -Wfloat-conversion warning, so I reverted the change in those files. After that, I was not getting any more warnings of that type, which gives me some assurance that the changes were not too heavy-handed.

However, I still worry that I changed some files that didn't need to be changed. If anyone has any ideas on how to determine if each file that had the f suffix change actually needs it, I'm open to suggestions.

Fastor Expression Template Shenanigans

In some places, I had to special-case the math computations for the Fastor library specifically, using #ifdef, because otherwise the compilation would fail.

First Example

There is this line:

const point_t t = bounds[e_max_r] * vector::normalize(c + b);

in core/include/detray/geometry/shapes/annulus2D.hpp.

When the Fastor backend is used, both c and b are of type point_t, which is Fastor::Tensor<T, 2> (where T is either float or double). Operations between Tensors are not performed eagerly but lazily, to avoid extra computations. To achieve this, c + b returns a Fastor::BinaryAddOp expression object instead of a Tensor.

The issue is that normalize has no overload that takes a Fastor::BinaryAddOp as input, so the call fails to compile. The easiest way around this was to introduce a temporary variable tmp with an explicit type, which forces the expression to be evaluated.

Second Example

There is this line:

const scalar_type sign = vector::dot(r, t - p) > 0. ? -1. : 1.;

in core/include/detray/geometry/coordinates/line2D.hpp.

The behavior here is very similar to the previous case: t - p returns a Fastor::BinarySubOp, which the dot function does not know how to handle, so compilation fails with the Fastor backend. Here, too, a temporary variable fixes the issue.

Performing operations like dot products, cross products, and normalization is a very common use case, so we should not require such workarounds. Until we get upstream fixes in Fastor, however, this is the best workaround I can think of.


@niermann999 (Collaborator) commented:

Hi,
thank you for all of this work! It is nice to finally see Fastor running in detray!

Unfortunately, I don't think we will move away from the single precision literals any time soon. Double precision operations can be costly on some GPUs (and are less efficient to vectorize), and we want to prevent any implicit (unwanted) upcasting from double precision literals in single precision builds. Therefore, the compilers in the CI warn very strictly on type conversions.
In general, the construction of a double precision Fastor tensor from single precision elements should be fine, since a conversion from float to double will not trigger a narrowing warning. If vector3_t is defined as Fastor::Tensor<double, 3>, where does the template autodeduction happen in Fastor?
This kind of construction is also part of the algebra-plugins tests, which are passed by fastor as far as I am aware (https://github.com/acts-project/algebra-plugins/blob/069af81d4c56abfa179183650eb931de63271db6/tests/common/test_host_basics.hpp#L404)

We have also seen this fun with expression templates in other plugins (mostly Eigen and SMatrix). Normally, using explicit types/casts in the plugin or in detray should be enough to fix it. In the Eigen plugin, for example, the functions take a type that can be used both with the vector types and the operation types (I do not know whether that is possible in Fastor, but potentially via the AbstractTensor type?). Anyway, I don't think we should use an ifdef for Fastor; you can just make this change for all plugins if it is needed in only a few places.

@wermos (Contributor, Author) commented Mar 25, 2025

Unfortunately I don't think we will move away from the single precision literals any time soon.

That makes sense. I'll revert that commit and try to debug what the issue with Fastor's template type deduction is.

In the Eigen plugin, for example, the functions take a type that can be used both with the vector types and the operation types (but I do not know if that is possible in Fastor, potentially using the AbstractTensor type?)

I was looking into it, and I think you are correct. I will need to make some changes to the Fastor backend in algebra-plugins so that it doesn't error on something like this.

The issue is not with Fastor, but with some of the function definitions in the Fastor backend for algebra-plugins.

In particular, this code works as intended:

#include <iostream>
#include "Fastor/Fastor.h"

int main() {
    Fastor::Tensor<double, 3> v1({1., 2., 3.});
    Fastor::Tensor<double, 3> v2({4., 5., 7.});

    double ans = Fastor::inner(v1, v1 + v2);

    std::cout << "v1 =\n" << v1 << "\tv2 =\n" << v2 << '\n';
    std::cout << ans << '\n';
}

