ARM & PYTORCH 2.6 & CUDA 12.8 #873
Conversation
It is working fine on Jetson AGX Orin with CUDA 12.8 and PyTorch 2.6.0: https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torch3d
Thank you so much for the PR. I'm gonna add a few commits on top just to fix the doc issue and the comments that git removed.
Is there a specific reason to remove the `no_python_abi_suffix` option? Also, the title claims this is for 12.8. How are most of you using 12.8? nvcr or nightly build? It will be difficult for us to have wheels before PyTorch makes a release. We will try to see how we can ship wheels for ARM, but for now it's great to enable at least building from source.
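For readers unfamiliar with the option being asked about, here is a minimal hypothetical sketch (the module name and layout are assumptions, not Kaolin's actual setup.py) of how `no_python_abi_suffix` is typically passed to PyTorch's `BuildExtension`; when enabled, the compiled module is named e.g. `_C.so` instead of carrying the full CPython ABI tag.

```python
# Hypothetical illustration only, not Kaolin's actual setup.py.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension

setup(
    name='example_ext',
    cmdclass={'build_ext': BuildExtension.with_options(no_python_abi_suffix=True)},
)
```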
setup.py (Outdated)
```python
CYTHON_MIN_VER = '0.29.37'
IGNORE_TORCH_VER = os.getenv('IGNORE_TORCH_VER') is not None

# Module required before installation
# trying to install it ahead turned out to be too unstable.
```
Can you keep this comment?
setup.py (Outdated)
```python
extra_compile_args = {'cxx': ['-O3']}
define_macros = []
include_dirs = []
sources = glob.glob('kaolin/csrc/**/*.cpp', recursive=True)
# FORCE_CUDA is for cross-compilation in docker build
```
Can you keep this comment?
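The `FORCE_CUDA` comment in the hunk above refers to a common cross-compilation pattern. Below is a hedged sketch, assuming the standard `torch.utils.cpp_extension` helpers rather than Kaolin's exact logic: when building inside a Docker image with no visible GPU, `torch.cuda.is_available()` returns False, so an environment variable is used to force the CUDA build anyway.

```python
# Hedged sketch of the common FORCE_CUDA pattern (names are illustrative,
# not Kaolin's exact code).
import os
import torch
from torch.utils.cpp_extension import CppExtension, CUDAExtension

def pick_extension(name, sources):
    # Build the CUDA extension if a GPU is visible, or if the caller forces it
    # (e.g. cross-compiling for Jetson inside a docker build with no GPU).
    use_cuda = torch.cuda.is_available() or os.getenv('FORCE_CUDA', '0') == '1'
    extension_cls = CUDAExtension if use_cuda else CppExtension
    return extension_cls(name, sources, extra_compile_args={'cxx': ['-O3']})
```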
setup.py (Outdated)
```python
packages=find_packages(exclude=('docs', 'tests', 'examples')),
scripts=get_scripts(),
include_package_data=True,
install_requires=get_requirements(),
zip_safe=False,
ext_modules=get_extensions(),
cmdclass={
```
Please keep the options; is there a specific reason to remove them?
We build our PyTorch in jetson-containers (NVIDIA) with the official branch.
I pushed my master by mistake and caused the PR to close, and now I can't push anything to your branch anymore 😂 I pushed my modifications on top of your work here: https://github.com/Caenorst/kaolin/tree/arm. Do you mind reopening the PR with this?
Ah, it's because it seems the changes are not in my branch anymore. You can continue if you want.
This pull request includes updates to CUDA dispatch macros and improvements to the `setup.py` script for the Kaolin library. The most important changes are updating the dispatch macros to use `scalar_type()` and enhancing the `setup.py` script for better version handling and logging.

CUDA dispatch macro updates:

- Updated `AT_DISPATCH_FLOATING_TYPES_AND_HALF` macros to use `scalar_type()` instead of `type()` in `kaolin/csrc/ops/spc/point_utils_cuda.cu` (`interpolate_trilinear_cuda_impl`, `coords_to_trilinear_cuda_impl`) [1] [2].
- Updated `AT_DISPATCH_FLOATING_TYPES_AND_HALF` macros to use `scalar_type()` instead of `type()` in `kaolin/csrc/ops/spc/query_cuda.cu` (`query_cuda_impl`, `query_multiscale_cuda_impl`, `query_cuda_impl_empty`) [1] [2] [3].
- Updated `AT_DISPATCH_FLOATING_TYPES_AND_HALF` and `AT_DISPATCH_INTEGRAL_TYPES` macros to use `scalar_type()` in `kaolin/csrc/render/spc/raytrace_cuda.cu` (`mark_pack_boundaries_cuda_impl`, `diff_cuda_impl`, `sum_reduce_cuda_impl`, `cumsum_cuda_impl`, `cumprod_cuda_impl`) [1] [2] [3] [4] [5] [6] [7].

Setup script improvements:

- Enhanced `setup.py` to better handle version constraints and logging for PyTorch and Cython dependencies [1] [2].
- Further changes in `setup.py` [1] [2].
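As a rough illustration of the version-handling item above, here is a hedged sketch of how an escape hatch like the `IGNORE_TORCH_VER` variable quoted in the review hunks is commonly checked for the PyTorch dependency; the version bounds, helper name, and messages are assumptions for illustration, not the code added in this PR.

```python
# Illustrative only: assumed bounds and helper name, not the actual Kaolin setup.py.
import logging
import os

logger = logging.getLogger(__name__)

TORCH_MIN_VER = '2.0.0'   # assumed lower bound, for illustration
TORCH_MAX_VER = '2.6.0'   # assumed upper bound, for illustration
IGNORE_TORCH_VER = os.getenv('IGNORE_TORCH_VER') is not None

def check_torch_version():
    import torch
    from packaging import version
    torch_ver = version.parse(torch.__version__.split('+')[0])
    if not (version.parse(TORCH_MIN_VER) <= torch_ver <= version.parse(TORCH_MAX_VER)):
        msg = (f"Expected torch in [{TORCH_MIN_VER}, {TORCH_MAX_VER}], "
               f"found {torch.__version__}.")
        if IGNORE_TORCH_VER:
            # Log and continue instead of failing when the override is set.
            logger.warning("%s Continuing because IGNORE_TORCH_VER is set.", msg)
        else:
            raise ImportError(msg + " Set IGNORE_TORCH_VER to build anyway.")
```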