Fix CUDA plugin CI. #8593
base: master
Conversation
Looks like the failing jobs are due to a failed clone from kleidiai's GitLab. Is that a widespread issue or a spurious failure?
It doesn't look widespread (I haven't seen it in other PRs). I will try rebasing this PR.
Force-pushed from b5474c1 to afc5707.
@ysiraichi from pytorch/pytorch#138609 (comment), it looks like PyTorch upstream decided to release with a specific set of CUDA versions (see issue). Can we use one of their chosen versions, for example CUDA 12.4 instead of CUDA 12.3?
Problem is: I didn't find a docker image with CUDA 12.4. Also, I'm not sure how to create one, since it seems to be something internal.
Could you clarify this challenge? Do you mean that you were hoping to find a prebuilt docker image with CUDA 12.4?
As far as I understand, PyTorch/XLA CI relies on docker images (see xla/.github/workflows/build_and_test.yml, lines 44 to 52 at fbbdfca).
@ysiraichi got it, thanks for the explanation. I think using CUDA 12.3 for now is fine. IIUC, most of the time we're only using torch CPU + torch_xla GPU in any case.
LMK when I should review. It looks like there are still some failing tests.
Fixes: #8577
This PR reverts #8286 and bumps the CUDA version to 12.3. The bump is needed to successfully compile GPU-dependent source code that uses the `CUgraphConditionalHandle` driver API typedef, which is not available in CUDA 12.1.
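For context, here is a minimal sketch (illustrative only, not code from this PR) of why the newer toolkit is required: any translation unit that names `CUgraphConditionalHandle` needs the CUDA 12.3+ driver API headers, since the typedef and `cuGraphConditionalHandleCreate` were introduced together with conditional graph nodes in CUDA 12.3. The helper name and argument values below are placeholders.

```c
// Illustrative sketch only (not from this PR): code that refers to
// CUgraphConditionalHandle only compiles against CUDA 12.3+ headers;
// the guard below makes that requirement explicit at compile time.
#include <cuda.h>

#if CUDA_VERSION < 12030
#error "CUgraphConditionalHandle requires the CUDA 12.3 (or newer) toolkit"
#endif

// Hypothetical helper: create a conditional handle on an existing CUDA graph
// via the driver API. Default launch value and flags are placeholders.
static CUresult CreateConditionalHandle(CUgraph graph, CUcontext ctx,
                                        CUgraphConditionalHandle* out) {
  return cuGraphConditionalHandleCreate(out, graph, ctx,
                                        /*defaultLaunchValue=*/0,
                                        /*flags=*/0);
}
```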