Hello,
Apologies if this is an obvious issue; I'm pretty new to Gym and Python in general. I've installed Isaac Gym and Aerial Gym on a fresh Ubuntu installation (Ubuntu 20.04.6 LTS). While Isaac Gym's 1080-balls example and Aerial Gym's position-control example work fine, the "run_trained_navigation_policy.sh" example behaves strangely: it doesn't seem to spawn the agent or the expected environment, but instead spawns a series of blue walls (see screenshot below) and fills the terminal with "Resetting environments" messages. Does this mean the installation was faulty?
Here is the terminal output from the run:
Terminal output
(aerialgym) meng@meng-MS-7E16:~/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation$ bash run_trained_navigation_policy.sh
bash: /home/meng/miniconda3/envs/aerialgym/lib/libtinfo.so.6: no version information available (required by bash)
Importing module 'gym_38' (/home/meng/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/meng/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.13.1
Device count 1
/home/meng/Downloads/IsaacGym_Preview_4_Package/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/meng/.cache/torch_extensions/py38_cu117 as PyTorch extensions root...
Emitting ninja build file /home/meng/.cache/torch_extensions/py38_cu117/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
Warp 1.0.0 initialized:
CUDA Toolkit 11.5, Driver 12.2
Devices:
"cpu" : "x86_64"
"cuda:0" : "NVIDIA GeForce RTX 4090" (24 GiB, sm_89, mempool enabled)
Kernel cache:
/home/meng/.cache/warp/1.0.0
Gym has been unmaintained since 2022 and does not support NumPy 2.0 amongst other critical functionality.
Please upgrade to Gymnasium, the maintained drop-in replacement of Gym, or contact the authors of your software and request that they upgrade.
See the migration guide at https://gymnasium.farama.org/introduction/migration_guide/ for additional information.
[isaacgym:gymutil.py] Unknown args: ['--train_dir=/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network', '--experiment=selected_network', '--env=test', '--obs_key=observations', '--load_checkpoint_kind=best']
[1348 ms][aerial_gym.examples.dce_rl_navigation.dce_navigation_task] - CRITICAL : Hardcoding number of envs to 16 if it is greater than that. (dce_navigation_task.py:14)
[1348 ms][base_task] - INFO : Setting seed: 42 (base_task.py:38)
[1349 ms][navigation_task] - INFO : Building environment for navigation task. (navigation_task.py:44)
[1349 ms][navigation_task] - INFO : Sim Name: base_sim, Env Name: env_with_obstacles, Robot Name: lmf2, Controller Name: lmf2_velocity_control (navigation_task.py:45)
[1349 ms][env_manager] - INFO : Populating environments. (env_manager.py:73)
[1349 ms][env_manager] - INFO : Creating simulation instance. (env_manager.py:87)
[1349 ms][env_manager] - INFO : Instantiating IGE object. (env_manager.py:88)
[1349 ms][IsaacGymEnvManager] - INFO : Creating Isaac Gym Environment (IGE_env_manager.py:41)
[1349 ms][IsaacGymEnvManager] - INFO : Acquiring gym object (IGE_env_manager.py:73)
[1349 ms][IsaacGymEnvManager] - INFO : Acquired gym object (IGE_env_manager.py:75)
[isaacgym:gymutil.py] Unknown args: ['--train_dir=/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network', '--experiment=selected_network', '--env=test', '--obs_key=observations', '--load_checkpoint_kind=best']
[1349 ms][IsaacGymEnvManager] - INFO : Fixing devices (IGE_env_manager.py:89)
[1349 ms][IsaacGymEnvManager] - INFO : Using GPU pipeline for simulation. (IGE_env_manager.py:102)
[1349 ms][IsaacGymEnvManager] - INFO : Sim Device type: cuda, Sim Device ID: 0 (IGE_env_manager.py:105)
[1349 ms][IsaacGymEnvManager] - INFO : Graphics Device ID: 0 (IGE_env_manager.py:119)
[1349 ms][IsaacGymEnvManager] - INFO : Creating Isaac Gym Simulation Object (IGE_env_manager.py:120)
[1349 ms][IsaacGymEnvManager] - WARNING : If you have set the CUDA_VISIBLE_DEVICES environment variable, please ensure that you set it
to a particular one that works for your system to use the viewer or Isaac Gym cameras.
If you want to run parallel simulations on multiple GPUs with camera sensors,
please disable Isaac Gym and use warp (by setting use_warp=True), set the viewer to headless. (IGE_env_manager.py:127)
[1349 ms][IsaacGymEnvManager] - WARNING : If you see a segfault in the next lines, it is because of the discrepancy between the CUDA device and the graphics device.
Please ensure that the CUDA device and the graphics device are the same. (IGE_env_manager.py:132)
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
[2590 ms][IsaacGymEnvManager] - INFO : Created Isaac Gym Simulation Object (IGE_env_manager.py:136)
[2591 ms][IsaacGymEnvManager] - INFO : Created Isaac Gym Environment (IGE_env_manager.py:43)
[2731 ms][env_manager] - INFO : IGE object instantiated. (env_manager.py:109)
[2731 ms][env_manager] - INFO : Creating warp environment. (env_manager.py:112)
[2731 ms][env_manager] - INFO : Warp environment created. (env_manager.py:114)
[2731 ms][env_manager] - INFO : Creating robot manager. (env_manager.py:118)
[2731 ms][BaseRobot] - INFO : [DONE] Initializing controller (base_robot.py:26)
[2731 ms][BaseRobot] - INFO : Initializing controller lmf2_velocity_control (base_robot.py:29)
[2731 ms][base_multirotor] - WARNING : Creating 16 multirotors. (base_multirotor.py:32)
[2731 ms][env_manager] - INFO : [DONE] Creating robot manager. (env_manager.py:123)
[2731 ms][env_manager] - INFO : [DONE] Creating simulation instance. (env_manager.py:125)
[2731 ms][asset_loader] - INFO : Loading asset: model.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2732 ms][asset_loader] - INFO : Loading asset: panel.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2733 ms][asset_loader] - INFO : Loading asset: cuboidal_rod.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2734 ms][asset_loader] - INFO : Loading asset: 1_x_1_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2734 ms][asset_loader] - INFO : Loading asset: 0_5_x_0_5_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2735 ms][asset_loader] - INFO : Loading asset: small_cube.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2736 ms][asset_loader] - INFO : Loading asset: left_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2736 ms][asset_loader] - INFO : Loading asset: right_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2737 ms][asset_loader] - INFO : Loading asset: back_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2737 ms][asset_loader] - INFO : Loading asset: front_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2738 ms][asset_loader] - INFO : Loading asset: bottom_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2738 ms][asset_loader] - INFO : Loading asset: top_wall.urdf for the first time. Next use of this asset will be via the asset buffer. (asset_loader.py:71)
[2741 ms][env_manager] - INFO : Populating environment 0 (env_manager.py:179)
[3025 ms][robot_manager] - WARNING :
Robot mass: 1.2400000467896461,
Inertia: tensor([[0.0134, 0.0000, 0.0000],
[0.0000, 0.0144, 0.0000],
[0.0000, 0.0000, 0.0138]], device='cuda:0'),
Robot COM: tensor([[0., 0., 0., 1.]], device='cuda:0') (robot_manager.py:427)
[3025 ms][robot_manager] - WARNING : Calculated robot mass and inertia for this robot. This code assumes that your robot is the same across environments. (robot_manager.py:430)
[3025 ms][robot_manager] - CRITICAL : If your robot differs across environments you need to perform this computation for each different robot here. (robot_manager.py:433)
[3102 ms][env_manager] - INFO : [DONE] Populating environments. (env_manager.py:75)
[3113 ms][IsaacGymEnvManager] - WARNING : Headless: False (IGE_env_manager.py:424)
[3113 ms][IsaacGymEnvManager] - INFO : Creating viewer (IGE_env_manager.py:426)
[3178 ms][IGE_viewer_control] - WARNING : Instructions for using the viewer with the keyboard:
ESC: Quit
V: Toggle Viewer Sync
S: Sync Frame Time
F: Toggle Camera Follow
P: Toggle Camera Follow Type
R: Reset All Environments
UP: Switch Target Environment Up
DOWN: Switch Target Environment Down
SPACE: Pause Simulation
(IGE_viewer_control.py:153)
[3178 ms][IsaacGymEnvManager] - INFO : Created viewer (IGE_env_manager.py:432)
*** Can't create empty tensor
[3224 ms][asset_manager] - WARNING : Number of obstacles to be kept in the environment: 9 (asset_manager.py:32)
WARNING: allocation matrix is not full rank. Rank: 4
/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/control/motor_model.py:45: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
torch.tensor(self.min_thrust, device=self.device, dtype=torch.float32).expand(
/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/control/motor_model.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
torch.tensor(self.max_thrust, device=self.device, dtype=torch.float32).expand(
[3374 ms][control_allocation] - WARNING : Control allocation does not account for actuator limits. This leads to suboptimal allocation (control_allocation.py:48)
[3374 ms][WarpSensor] - INFO : Camera sensor initialized (warp_sensor.py:50)
creating render graph
Module warp.utils load on device 'cuda:0' took 1.75 ms
Module aerial_gym.sensors.warp.warp_kernels.warp_camera_kernels load on device 'cuda:0' took 5.50 ms
Module aerial_gym.sensors.warp.warp_kernels.warp_stereo_camera_kernels load on device 'cuda:0' took 8.03 ms
Module aerial_gym.sensors.warp.warp_kernels.warp_lidar_kernels load on device 'cuda:0' took 3.88 ms
finishing capture of render graph
Encoder network initialized.
Defined encoder.
[ImgDecoder] Starting create_model
[ImgDecoder] Done with create_model
Defined decoder.
Loading weights from file: /home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/utils/vae/weights/ICRA_test_set_more_sim_data_kld_beta_3_LD_64_epoch_49.pth
Number of environments 16
CFG is: Namespace(actor_critic_share_weights=True, actor_worker_gpus=[0], adam_beta1=0.9, adam_beta2=0.999, adam_eps=1e-06, adaptive_stddev=True, algo='APPO', async_rl=True, batch_size=2048, batched_sampling=True, benchmark=False, cli_args={'env': 'test', 'experiment': 'selected_network', 'train_dir': '/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network', 'load_checkpoint_kind': 'best', 'obs_key': 'observations'}, command_line='--train_dir=/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network --experiment=selected_network --env=test --obs_key=observations --load_checkpoint_kind=best', continuous_tanh_scale=0.0, decoder_mlp_layers=[], decorrelate_envs_on_one_worker=True, decorrelate_experience_max_seconds=0, default_niceness=0, device='gpu', encoder_conv_architecture='convnet_simple', encoder_conv_mlp_layers=[512], encoder_mlp_layers=[512, 512], enjoy_script=None, env='test', env_agents=-1, env_frameskip=1, env_framestack=1, env_gpu_actions=True, env_gpu_observations=True, eval_deterministic=False, eval_env_frameskip=None, eval_stats=False, experiment='selected_network', experiment_summaries_interval=10, exploration_loss='entropy', exploration_loss_coeff=0.0, flush_summaries_interval=30, force_envs_single_thread=False, fps=0, gae_lambda=0.95, gamma=0.98, git_hash='1ba60ad91635bd3e42442b50800c00aa2a868923', git_repo_name='[email protected]:ntnu-arl/aerial_gym_simulator.git', heartbeat_interval=20, heartbeat_reporting_interval=180, help=False, hf_repository=None, ige_api_version='preview4', initial_stddev=1.0, keep_checkpoints=2, kl_loss_coeff=0.1, learning_rate=0.0003, load_checkpoint_kind='best', log_to_file=True, lr_adaptive_max=0.01, lr_adaptive_min=1e-06, lr_schedule='kl_adaptive_epoch', lr_schedule_kl_threshold=0.016, max_grad_norm=0.0, max_num_episodes=1000000000.0, max_num_frames=1000000000.0, max_policy_lag=1000, no_render=False, nonlinearity='elu', normalize_input=True, normalize_input_keys=None, normalize_returns=True, num_batches_per_epoch=2, num_batches_to_accumulate=2, num_envs_per_worker=1, num_epochs=4, num_policies=1, num_workers=1, obs_key='observations', obs_scale=1.0, obs_subtract_mean=0.0, optimizer='adam', pbt_mix_policies_in_one_env=True, pbt_mutation_rate=0.15, pbt_optimize_gamma=False, pbt_period_env_steps=5000000, pbt_perturb_max=1.5, pbt_perturb_min=1.1, pbt_replace_fraction=0.3, pbt_replace_reward_gap=0.1, pbt_replace_reward_gap_absolute=1e-06, pbt_start_mutation=20000000, pbt_target_objective='true_objective', pixel_format='CHW', policy_index=0, policy_init_gain=1.0, policy_initialization='torch_default', policy_workers_per_policy=1, ppo_clip_ratio=0.2, ppo_clip_value=1.0, push_to_hub=False, recurrence=-1, restart_behavior='overwrite', reward_clip=1000.0, reward_scale=0.1, rnn_num_layers=1, rnn_size=512, rnn_type='gru', rollout=24, save_best_after=5000000, save_best_every_sec=5, save_best_metric='reward', save_every_sec=120, save_milestones_sec=-1, save_video=False, seed=None, serial_mode=True, set_workers_cpu_affinity=True, shuffle_minibatches=True, stats_avg=100, subtask=None, summaries_use_frameskip=True, train_dir='/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network', train_for_env_steps=10000000, train_for_seconds=10000000000, train_script=None, use_env_info_cache=False, use_record_episode_statistics=False, use_rnn=False, value_bootstrap=True, value_loss_coeff=2.0, 
video_frames=1000000000.0, video_name=None, vtrace_c=1.0, vtrace_rho=1.0, wandb_group=None, wandb_job_type='SF', wandb_project='sample_factory', wandb_tags=[], wandb_user=None, with_pbt=False, with_vtrace=False, with_wandb=False, worker_num_splits=1)
[2025-09-04 20:33:04,645][10066] Loading existing experiment configuration from /home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network/selected_network/config.json
[2025-09-04 20:33:04,646][10066] Overriding arg 'env' with value 'test' passed from command line
[2025-09-04 20:33:04,646][10066] Overriding arg 'experiment' with value 'selected_network' passed from command line
[2025-09-04 20:33:04,646][10066] Overriding arg 'train_dir' with value '/home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network' passed from command line
[2025-09-04 20:33:04,646][10066] Overriding arg 'load_checkpoint_kind' with value 'best' passed from command line
[2025-09-04 20:33:04,646][10066] Overriding arg 'obs_key' with value 'observations' passed from command line
[2025-09-04 20:33:04,646][10066] Adding new argument 'fps'=0 that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'eval_env_frameskip'=None that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'no_render'=False that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'save_video'=False that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'video_name'=None that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'max_num_episodes'=1000000000.0 that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'train_script'=None that is not in the saved config file!
[2025-09-04 20:33:04,646][10066] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2025-09-04 20:33:04,647][10066] RunningMeanStd input shape: (81,)
[2025-09-04 20:33:04,647][10066] RunningMeanStd input shape: (1,)
Model:
ActorCriticSharedWeights(
(obs_normalizer): ObservationNormalizer(
(running_mean_std): RunningMeanStdDictInPlace(
(running_mean_std): ModuleDict(
(obs): RunningMeanStdInPlace()
)
)
)
(returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
(encoder): MultiInputEncoder(
(encoders): ModuleDict(
(obs): MlpEncoder(
(mlp_head): RecursiveScriptModule(
original_name=Sequential
(0): RecursiveScriptModule(original_name=Linear)
(1): RecursiveScriptModule(original_name=ELU)
(2): RecursiveScriptModule(original_name=Linear)
(3): RecursiveScriptModule(original_name=ELU)
(4): RecursiveScriptModule(original_name=Linear)
(5): RecursiveScriptModule(original_name=ELU)
)
)
)
)
(core): ModelCoreRNN(
(core): GRU(64, 64)
)
(decoder): MlpDecoder(
(mlp): Identity()
)
(critic_linear): Linear(in_features=64, out_features=1, bias=True)
(action_parameterization): ActionParameterizationDefault(
(distribution_linear): Linear(in_features=64, out_features=6, bias=True)
)
)
[2025-09-04 20:33:04,708][10066] Loading state from checkpoint /home/meng/workspaces/aerial_gym_ws/src/aerial_gym_simulator/aerial_gym/examples/dce_rl_navigation/selected_network/selected_network/checkpoint_p0/best_000052096_26673152_reward_1333.322.pth...
[5289 ms][navigation_task] - CRITICAL : Crash is happening too soon. (navigation_task.py:195)
[5289 ms][navigation_task] - CRITICAL : Envs crashing too soon: tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
device='cuda:0') (navigation_task.py:196)
[5289 ms][navigation_task] - CRITICAL : Time at crash: tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], device='cuda:0',
dtype=torch.int32) (navigation_task.py:197)
dce_nn_navigation.py:48: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
action = torch.tensor(action).expand(rl_task.num_envs, -1)
Resetting environments (tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
device='cuda:0'),) due to Termination
Resetting environments (tensor([12], device='cuda:0'),) due to Termination
Resetting environments (tensor([11], device='cuda:0'),) due to Termination
Resetting environments (tensor([1], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([15], device='cuda:0'),) due to Termination
Resetting environments (tensor([0, 4], device='cuda:0'),) due to Termination
Resetting environments (tensor([5], device='cuda:0'),) due to Termination
Resetting environments (tensor([12], device='cuda:0'),) due to Termination
Resetting environments (tensor([1], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([10], device='cuda:0'),) due to Termination
Resetting environments (tensor([2], device='cuda:0'),) due to Termination
Resetting environments (tensor([3], device='cuda:0'),) due to Termination
Resetting environments (tensor([4], device='cuda:0'),) due to Termination
Resetting environments (tensor([11], device='cuda:0'),) due to Termination
Resetting environments (tensor([ 1, 12], device='cuda:0'),) due to Termination
Resetting environments (tensor([ 0, 10], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([ 7, 8, 9, 13], device='cuda:0'),) due to Timeout
Resetting environments (tensor([3], device='cuda:0'),) due to Termination
Resetting environments (tensor([0], device='cuda:0'),) due to Termination
Resetting environments (tensor([5], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([2], device='cuda:0'),) due to Termination
Resetting environments (tensor([7], device='cuda:0'),) due to Termination
Resetting environments (tensor([11, 15], device='cuda:0'),) due to Termination
Resetting environments (tensor([5], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([ 4, 12], device='cuda:0'),) due to Termination
Resetting environments (tensor([3], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([11], device='cuda:0'),) due to Termination
Resetting environments (tensor([15], device='cuda:0'),) due to Termination
Resetting environments (tensor([2], device='cuda:0'),) due to Termination
Resetting environments (tensor([9], device='cuda:0'),) due to Termination
Resetting environments (tensor([10], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([1], device='cuda:0'),) due to Timeout
Resetting environments (tensor([4, 9], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([10, 12], device='cuda:0'),) due to Termination
Resetting environments (tensor([ 8, 13], device='cuda:0'),) due to Timeout
Resetting environments (tensor([0], device='cuda:0'),) due to Timeout
Resetting environments (tensor([7], device='cuda:0'),) due to Timeout
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([12], device='cuda:0'),) due to Termination
Resetting environments (tensor([5], device='cuda:0'),) due to Timeout
Resetting environments (tensor([9], device='cuda:0'),) due to Termination
Resetting environments (tensor([1], device='cuda:0'),) due to Termination
Resetting environments (tensor([10], device='cuda:0'),) due to Termination
Resetting environments (tensor([3], device='cuda:0'),) due to Timeout
Resetting environments (tensor([11], device='cuda:0'),) due to Timeout
Resetting environments (tensor([15], device='cuda:0'),) due to Timeout
Resetting environments (tensor([2], device='cuda:0'),) due to Timeout
Resetting environments (tensor([12], device='cuda:0'),) due to Termination
Resetting environments (tensor([10], device='cuda:0'),) due to Termination
Resetting environments (tensor([4], device='cuda:0'),) due to Termination
Resetting environments (tensor([5], device='cuda:0'),) due to Termination
Resetting environments (tensor([8], device='cuda:0'),) due to Termination
Resetting environments (tensor([ 0, 14], device='cuda:0'),) due to Termination
Resetting environments (tensor([10], device='cuda:0'),) due to Termination
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([11], device='cuda:0'),) due to Termination
Resetting environments (tensor([12], device='cuda:0'),) due to Termination
Resetting environments (tensor([13], device='cuda:0'),) due to Timeout
Resetting environments (tensor([8], device='cuda:0'),) due to Termination
Resetting environments (tensor([7], device='cuda:0'),) due to Termination
Resetting environments (tensor([1], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
Resetting environments (tensor([15], device='cuda:0'),) due to Termination
Resetting environments (tensor([2], device='cuda:0'),) due to Termination
Resetting environments (tensor([9], device='cuda:0'),) due to Timeout
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([3], device='cuda:0'),) due to Timeout
Resetting environments (tensor([ 4, 14], device='cuda:0'),) due to Termination
Resetting environments (tensor([5], device='cuda:0'),) due to Timeout
Resetting environments (tensor([0], device='cuda:0'),) due to Timeout
Resetting environments (tensor([15], device='cuda:0'),) due to Termination
Resetting environments (tensor([10], device='cuda:0'),) due to Timeout
Resetting environments (tensor([1], device='cuda:0'),) due to Termination
Resetting environments (tensor([11], device='cuda:0'),) due to Timeout
Resetting environments (tensor([6], device='cuda:0'),) due to Termination
Resetting environments (tensor([14], device='cuda:0'),) due to Termination
^CTraceback (most recent call last):
Thank you in advance,
-Meng