Repositories list
647 repositories
- C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows
- GPU accelerated decision optimization
- CUDA Core Compute Libraries
- A unified library of state-of-the-art model optimization techniques (quantization, pruning, distillation, speculative decoding, etc.) that compresses deep learning models for downstream deployment frameworks such as TensorRT-LLM, TensorRT, and vLLM to optimize inference speed.
- Ongoing research on training transformer models at scale
- Differentiable signal processing on the sphere for PyTorch
- cuEquivariance is a math library providing a collection of low-level primitives and tensor operations to accelerate widely used models based on equivariant neural networks, such as DiffDock, MACE, Allegro, and NEQUIP. Also includes kernels for accelerated structure prediction.
- TensorRT LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate inference execution in a performant way.
- Examples for recommender systems that are easy to train and deploy on accelerated infrastructure.
- NVIDIA device plugin for Kubernetes
- Open-source deep-learning framework for exploring, building, and deploying AI weather/climate workflows.