
torch/autograd support for the Mie backend #480

Open

edudc wants to merge 8 commits into DeepTrackAI:gv/final/features2 from edudc:mie-autodiff

Conversation


@edudc edudc commented May 12, 2026

Summary

This PR makes torch MieSphere rendering differentiable through both geometric and hybrid modes, including propagation and optical field assembly. The Mie backend now uses DeepTrack's array API namespace where possible, falling back to the existing SciPy-based NumPy path for special functions.
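Roughly, that split looks like the following (a generic sketch for illustration, not DeepTrack's actual dispatch helper):

```python
import torch
from scipy import special

def spherical_j0(x):
    # Generic sketch of the backend split: NumPy inputs keep the exact
    # SciPy behavior, while torch tensors use only differentiable torch
    # ops so the result stays on the autograd graph.
    if isinstance(x, torch.Tensor):
        return torch.sin(x) / x          # j0(x) = sin(x) / x
    return special.spherical_jn(0, x)    # existing SciPy/NumPy path
```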

Prerequisites

This PR depends on two prerequisite fixes that are open in other PRs; please merge those first.

Changes

  • Refactor deeptrack.backend.mie to support torch tensors and autograd.
  • Add Riccati-Bessel polynomial recurrences for non-NumPy backends using the array API.
  • Preserve existing NumPy/SciPy behavior for the NumPy path.
  • Add torch parity and autodiff coverage for:
    • homogeneous Mie coefficients
    • stratified Mie coefficients
    • Mie harmonics
  • Refactor MieScatterer field construction to use Python array API operations for:
    • coordinate grids and masks
    • polarization coefficients
    • geometric and hybrid field assembly
    • FFT / inverse FFT paths
    • propagation matrix application
  • Make get_propagation_matrix backend-aware so torch inputs remain differentiable.
  • Preserve torch tensors through MieSphere coefficients so gradients flow to radius and
    refractive index.
  • Keep MieStratifiedSphere tensor-safe where it shares the same conversion path.
  • Add torch regression tests for:
    • MieSphere.resolve() autodiff in geometric and hybrid modes (see the sketch after this list)
    • multiple torch Mie fields summed through brightfield optics
    • torch autodiff through get_propagation_matrix
    • zero-valued learnable Zernike coefficients staying in the graph
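
As a rough shape for the first test item, something like the following (an illustrative sketch with placeholder optics parameters, not the PR's actual test code; the exact feature arguments and backend setup are defined in the tests):

```python
import torch
import deeptrack as dt

# Illustrative sketch of the resolve() autodiff path. Parameter values
# are placeholders; a torch backend may need to be enabled in DeepTrack.
radius = torch.tensor(0.5e-6, dtype=torch.float64, requires_grad=True)

scatterer = dt.MieSphere(radius=radius, refractive_index=1.45)
optics = dt.Brightfield(
    NA=0.7, wavelength=660e-9, resolution=1e-7, magnification=10
)

image = optics(scatterer).resolve()  # assumed to stay a torch tensor
image.sum().backward()               # gradients reach the Mie inputs
print(radius.grad)                   # non-None: radius is differentiable
```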

Notes

polynomials.py currently depends on SciPy's arbitrary-order cylindrical Bessel/Hankel functions. torch.special does have some Bessel functions, but it covers neither arbitrary-order jv/yv nor h1vp, and its implementations do not work with autograd. In PyTorch 2.11.0 this fails:

```python
import torch

x = torch.tensor(1.0, requires_grad=True)
y = torch.special.bessel_j0(x)  # y comes back with no grad_fn
y.backward()  # RuntimeError: element 0 of tensors does not require grad
```

because y has no grad_fn. (PyTorch forum discussion about this: https://discuss.pytorch.org/t/a-problem-about-autograd-and-special-bessel-j0/192062).

For the integer-order Riccati-Bessel functions used in Mie theory, we can compute them from sin, cos, and the recurrence relations instead, which makes them autograd-friendly (and works on GPU).
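
A minimal sketch of that idea (illustrative, not the PR's actual implementation; note that the upward recurrence loses accuracy once the order greatly exceeds x, so a production version may need a downward pass):

```python
import torch

def riccati_bessel_psi(n_max, x):
    # psi_n(x) = x * j_n(x), built from sin/cos seeds plus the upward
    # recurrence psi_{n+1} = (2n + 1) / x * psi_n - psi_{n-1}.
    # Every op here is a differentiable torch primitive, so autograd
    # (and GPU execution) work out of the box.
    psi = [torch.sin(x), torch.sin(x) / x - torch.cos(x)]
    for n in range(1, n_max):
        psi.append((2 * n + 1) / x * psi[n] - psi[n - 1])
    return torch.stack(psi)  # orders 0..n_max

x = torch.tensor(2.5, requires_grad=True)
riccati_bessel_psi(5, x).sum().backward()
print(x.grad)  # grad_fn exists, unlike torch.special.bessel_j0
```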

