Conversation

@mohseniaref
Owner

Summary

  • implement optional GPU acceleration in ifgram_inversion.py
  • add use_gpu parameter to estimate_timeseries and calc_inv_quality

Testing

  • pytest -q
  • python -m compileall -q src/mintpy

https://chatgpt.com/codex/tasks/task_e_6848ab8ab8748320baac958522fa6f13

@mohseniaref mohseniaref requested a review from Copilot June 10, 2025 22:47

Copilot AI left a comment

Pull Request Overview

This PR adds optional GPU acceleration to the time-series estimation workflow and provides a benchmarking utility.

  • Introduce a use_gpu flag in estimate_timeseries and calc_inv_quality to switch between NumPy and CuPy backends
  • Adapt core linear algebra and array operations to use xp (NumPy or CuPy)
  • Add benchmark_gpu_speedup to compare CPU vs. GPU performance
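
The backend switch described above can be sketched as follows. This is an illustrative reconstruction of the NumPy/CuPy dispatch pattern, not the merged code; the helper names (get_array_module, estimate_timeseries_demo) are hypothetical.

```python
import numpy as np

def get_array_module(use_gpu=False):
    """Return CuPy when a GPU backend is requested and available, else NumPy."""
    if use_gpu:
        try:
            import cupy as cp  # optional dependency
            return cp
        except ImportError:
            print('CuPy not found; falling back to NumPy.')
    return np

def estimate_timeseries_demo(A, y, use_gpu=False):
    """Least-squares solve with the selected backend (illustrative stand-in)."""
    xp = get_array_module(use_gpu)
    A = xp.asarray(A)
    y = xp.asarray(y)
    ts, *_ = xp.linalg.lstsq(A, y, rcond=None)
    # Copy the result back to host memory when it lives on the GPU
    return ts.get() if xp.__name__ == 'cupy' else ts
```

Because both libraries share most of the ndarray API, routing every array call through a single xp variable keeps the CPU and GPU paths in one code body.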
Comments suppressed due to low confidence (3)

src/mintpy/ifgram_inversion.py:146

  • [nitpick] The variable name linmod is not very descriptive. Consider renaming it to something like xp_linalg or backend_linalg for clarity.
linmod = linalg
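
A clearer alias along the lines the reviewer suggests could look like this. The name backend_linalg is the reviewer's proposal and the helper is a sketch, not the merged implementation.

```python
import numpy.linalg

def select_backend_linalg(use_gpu=False):
    """Return the linear-algebra module for the chosen backend (sketch)."""
    if use_gpu:
        try:
            import cupy
            return cupy.linalg  # GPU path
        except ImportError:
            pass  # fall back to NumPy when CuPy is unavailable
    return numpy.linalg

# Descriptive alias instead of the opaque 'linmod'
backend_linalg = select_backend_linalg(use_gpu=False)
```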

src/mintpy/ifgram_inversion.py:93

  • There are no tests covering the use_gpu=True path. It would be helpful to add unit tests that run with use_gpu=True to validate GPU behavior and prevent regressions.
print_msg=True, use_gpu=False):

src/mintpy/ifgram_inversion.py:418

  • The time module isn't imported in this file, so calls to time.perf_counter() will raise a NameError. Please add import time at the top.
t0 = time.perf_counter()
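
With the missing import added, the timing code works as intended. A minimal sketch of the kind of measurement benchmark_gpu_speedup presumably performs (the function name time_solve here is illustrative):

```python
import time
import numpy as np

def time_solve(n=200, repeats=3):
    """Return the best wall-clock time for a dense least-squares solve."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    y = rng.standard_normal(n)
    best = float('inf')
    for _ in range(repeats):
        t0 = time.perf_counter()  # requires 'import time' at module top
        np.linalg.lstsq(A, y, rcond=None)
        best = min(best, time.perf_counter() - t0)
    return best
```

Taking the minimum over several repeats reduces noise from other processes; the CPU and GPU variants would be timed identically and their ratio reported as the speedup.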

@mohseniaref mohseniaref merged commit 7789703 into main Jun 10, 2025
2 of 4 checks passed
@mohseniaref mohseniaref deleted the codex/optimize-code-with-vectorization-and-gpu-support branch June 10, 2025 23:11