10 changes: 8 additions & 2 deletions docs/source/overview/environments.rst
@@ -260,7 +260,7 @@ We provide environments for both disassembly and assembly.

.. attention::

CUDA is recommended for running the AutoMate environments with 570 drivers. If running with Nvidia driver 570 on Linux with x86_64 architecture, follow the steps below to install CUDA 12.8. This allows rewards in the AutoMate environments to be computed with CUDA. If you have a different operating system or architecture, please refer to the `CUDA installation page <https://developer.nvidia.com/cuda-12-8-0-download-archive>`_ for additional instructions.
CUDA is recommended for running the AutoMate environments. If running with Nvidia driver 570 on Linux with x86_64 architecture, follow the steps below to install CUDA 12.8. This allows rewards in the AutoMate environments to be computed with CUDA. If you have a different operating system or architecture, please refer to the `CUDA installation page <https://developer.nvidia.com/cuda-12-8-0-download-archive>`_ for additional instructions.

.. code-block:: bash

@@ -273,7 +273,13 @@

conda install cudatoolkit
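
After installation, a quick sanity check (assuming ``nvcc`` is on your ``PATH``) confirms the toolkit version:

.. code-block:: bash

   nvcc --version  # should report "release 12.8"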

With 580 drivers and CUDA 13, we are currently unable to enable CUDA for computing the rewards. The code automatically falls back to the CPU, resulting in slightly slower performance.
With 580 drivers on Linux with x86_64 architecture, we install CUDA 13 and additionally install several packages. Please ensure that the PyTorch version is compatible with the CUDA version.

.. code-block:: bash

wget https://developer.download.nvidia.com/compute/cuda/13.0.2/local_installers/cuda_13.0.2_580.95.05_linux.run
sudo sh cuda_13.0.2_580.95.05_linux.run --toolkit
pip install numba-cuda[cu13] coverage==7.6.1
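
To confirm that the installed PyTorch build matches the toolkit, one quick check (not part of the upstream steps themselves) is:

.. code-block:: bash

   python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"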


* |disassembly-link|: The plug starts inserted in the socket. A low-level controller lifts the plug out and moves it to a random position. This process is purely scripted and does not involve any learned policy. Therefore, it does not require policy training or evaluation. The resulting trajectories serve as demonstrations for the reverse process, i.e., learning to assemble. To run disassembly for a specific task: ``python source/isaaclab_tasks/isaaclab_tasks/direct/automate/run_disassembly_w_id.py --assembly_id=ASSEMBLY_ID --disassembly_dir=DISASSEMBLY_DIR``. All generated trajectories are saved to a local directory ``DISASSEMBLY_DIR``.
* |assembly-link|: The goal is to insert the plug into the socket. You can use this environment to train a policy via reinforcement learning or evaluate a pre-trained checkpoint.
@@ -60,11 +60,7 @@ def __init__(self, cfg: AssemblyEnvCfg, render_mode: str | None = None, **kwargs
)

# Create criterion for dynamic time warping (later used for imitation reward)
cuda_version = automate_algo.get_cuda_version()
if (cuda_version is not None) and (cuda_version < (13, 0, 0)):
    self.soft_dtw_criterion = SoftDTW(use_cuda=True, device=self.device, gamma=self.cfg_task.soft_dtw_gamma)
else:
    self.soft_dtw_criterion = SoftDTW(use_cuda=False, device=self.device, gamma=self.cfg_task.soft_dtw_gamma)
self.soft_dtw_criterion = SoftDTW(use_cuda=True, device=self.device, gamma=self.cfg_task.soft_dtw_gamma)

# Evaluate
if self.cfg_task.if_logging_eval:
@@ -855,7 +851,7 @@ def randomize_initial_state(self, env_ids):
self.step_sim_no_action()

grasp_time = 0.0
while grasp_time < 0.25:
while grasp_time < 1.0:
    self.ctrl_target_joint_pos[env_ids, 7:] = 0.0  # Close gripper.
    self.ctrl_target_gripper_dof_pos = 0.0
    self.move_gripper_in_place(ctrl_target_gripper_dof_pos=0.0)
@@ -4,8 +4,6 @@
# SPDX-License-Identifier: BSD-3-Clause

import os
import re
import subprocess
import sys
import torch
import trimesh
@@ -25,52 +23,6 @@
"""


def parse_cuda_version(version_string):
    """
    Parse a CUDA version string into a comparable tuple of (major, minor, patch).

    Args:
        version_string: Version string like "12.8.9" or "11.2"

    Returns:
        Tuple of (major, minor, patch) as integers, where patch defaults to 0 if
        not present.

    Example:
        "12.8.9" -> (12, 8, 9)
        "11.2" -> (11, 2, 0)
    """
    parts = version_string.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    patch = int(parts[2]) if len(parts) > 2 else 0
    return (major, minor, patch)


def get_cuda_version():
    try:
        # Execute the nvcc --version command
        result = subprocess.run(["nvcc", "--version"], capture_output=True, text=True, check=True)
        output = result.stdout

        # Use a regex to find the CUDA version (e.g., V11.2.67)
        match = re.search(r"V(\d+\.\d+(\.\d+)?)", output)
        if match:
            return parse_cuda_version(match.group(1))
        else:
            print("CUDA version not found in output.")
            return None
    except FileNotFoundError:
        print("nvcc command not found. Is CUDA installed and in your PATH?")
        return None
    except subprocess.CalledProcessError as e:
        print(f"Error executing nvcc: {e.stderr}")
        return None
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        return None

def get_gripper_open_width(obj_filepath):

    retrieve_file_path(obj_filepath, download_dir="./")
@@ -32,7 +32,8 @@
import torch.cuda
from torch.autograd import Function

from numba import cuda, jit, prange
import numba.cuda as cuda
from numba import jit, prange


# ----------------------------------------------------------------------------------------------------------------------
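
As a smoke test for the new import path, a minimal kernel can verify that ``numba.cuda`` resolves and compiles (a sketch, assuming the ``numba-cuda`` package and a CUDA-capable GPU are available; the ``add_one`` kernel is illustrative, not part of this change):

.. code-block:: python

   import numpy as np
   import numba.cuda as cuda

   @cuda.jit
   def add_one(x):
       i = cuda.grid(1)  # absolute index of this thread in the 1-D grid
       if i < x.size:
           x[i] += 1.0

   arr = cuda.to_device(np.zeros(8, dtype=np.float32))
   add_one[1, 8](arr)  # launch one block of eight threads
   print(arr.copy_to_host())  # expected: eight ones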