Replies: 2 comments
-
I was wondering the same. I tried creating a wrapper around `RLTaskEnv` that adds a tensor storing the initial post-randomisation states of the agents. The idea is to call the constructor of `BaseEnv` and then initialise the `initial_root_states` tensor:
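Something along these lines (a minimal sketch; the import path assumes `omni.isaac.orbit.envs`, and the 13-dim root state is the usual position/quaternion/velocity layout):

```python
import torch

from omni.isaac.orbit.envs import RLTaskEnv, RLTaskEnvCfg


class CustomRLTaskEnv(RLTaskEnv):
    """Wrapper that should record the post-randomisation root states."""

    def __init__(self, cfg: RLTaskEnvCfg, **kwargs):
        # Problem 1: the base constructor already sets up and runs the
        # managers, which fails if any of them touch initial_root_states.
        super().__init__(cfg, **kwargs)
        # Problem 2: moving this above super().__init__() fails instead,
        # because self.num_envs and self.device are only set by the base
        # class constructor.
        self.initial_root_states = torch.zeros(
            (self.num_envs, 13), device=self.device
        )
```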
The issue is that calling the constructor of `BaseEnv` runs the different managers, which throws an error since the `initial_root_states` tensor does not exist yet. On the other hand, if you try to initialise `initial_root_states` first, it throws an error because `self.num_envs` does not exist yet. I can't think of a clean workaround that does not involve directly modifying `BaseEnv` (so that the tensor is created before the managers run). I'd appreciate any help on this!
-
I would suggest an inheritance approach (i.e. `CustomRLTaskEnv(RLTaskEnv)` and `CustomRLTaskEnvCfg(RLTaskEnvCfg)`) to add the functionality.
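A minimal sketch of what that could look like, reusing the `phase_buffer` from the question (the import paths, `configclass`, and the `step_dt`/`_reset_idx` hooks assume an Orbit-style setup, so adapt them to your version):

```python
import torch

from omni.isaac.orbit.envs import RLTaskEnv, RLTaskEnvCfg
from omni.isaac.orbit.utils import configclass


@configclass
class CustomRLTaskEnvCfg(RLTaskEnvCfg):
    """Extend the config with whatever extra fields the env needs."""

    # Hypothetical extra field, just to show where custom config goes.
    track_phase: bool = True


class CustomRLTaskEnv(RLTaskEnv):
    cfg: CustomRLTaskEnvCfg

    def __init__(self, cfg: CustomRLTaskEnvCfg, **kwargs):
        super().__init__(cfg, **kwargs)
        # Create custom buffers *after* the base constructor, once
        # self.num_envs and self.device are available.
        self.phase_buffer = torch.zeros(self.num_envs, device=self.device)

    def step(self, action: torch.Tensor):
        # Accumulate elapsed sim time per environment each step.
        self.phase_buffer += self.step_dt
        return super().step(action)

    def _reset_idx(self, env_ids):
        # Zero the timer for the environments being reset.
        self.phase_buffer[env_ids] = 0.0
        super()._reset_idx(env_ids)
```

Instantiation then works exactly as in the tutorials, just with the custom config and env classes in place of the base ones.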
-
Question
Hi,

How would you add additional custom variables to the `RLTaskEnv`? E.g. let's say a new `phase_buffer` that stores the elapsed time of each environment since the last reset. In the tutorials, `RLTaskEnv` is instantiated indirectly through `RLTaskEnvCfg`, and all setup is done by modifying the configs. Or is the intended workflow to subclass `RLTaskEnv` and `RLTaskEnvCfg` to add custom fields to the environments? Is there a better way to set up additional variables/buffers when `RLTaskEnv` is instantiated?

Thanks a lot