Hi,
Bravo for such useful work!
I am trying to train the agent for longer (e.g., >300k steps) to get a more stable policy, but I am running into issues.
Specifically:
Increasing the number of training steps by modifying ui.cpp reduces the number of agents and makes training very slow.
It also requires changing the Dockerfile and pulling many packages, which I’d like to avoid if possible.
With only ~300k steps, my CF agent still oscillates around the setpoint and does not converge to a stable policy.
My goals are:
Increase total training steps so the policy has time to converge,
Keep the number of parallel agents high (i.e., not have to reduce it),
Avoid heavy Docker modifications if possible (submodule syncing, etc.).
Questions:
What is the recommended way to increase the total number of steps (timesteps) during training?
Is there a config option, hyperparameter, or script argument to set more steps without changing core files like ui.cpp? (See the sketch after these questions for the kind of interface I have in mind.)
Are there known bottlenecks that cause training to slow when increasing steps?
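To make the ask concrete, here is a minimal sketch of the kind of interface I am hoping exists (or could be added). The script name, flag names, and run_training() entry point are all hypothetical placeholders, not identifiers I found in the repo:

```python
# Purely illustrative: train.py, --total-timesteps, --num-agents and
# run_training() are hypothetical names showing the desired interface,
# not actual identifiers from this project.
import argparse


def run_training(total_timesteps: int, num_agents: int) -> None:
    """Placeholder standing in for the real training loop."""
    print(f"Training for {total_timesteps} steps with {num_agents} parallel agents")


def main() -> None:
    parser = argparse.ArgumentParser(description="Hypothetical training entry point")
    # Desired: override total steps from the command line instead of editing ui.cpp
    parser.add_argument("--total-timesteps", type=int, default=300_000)
    # Desired: keep the agent count configurable, independent of the step count
    parser.add_argument("--num-agents", type=int, default=16)
    args = parser.parse_args()
    run_training(args.total_timesteps, args.num_agents)


if __name__ == "__main__":
    main()
```

Even an environment variable or a config-file field would work for me, as long as I don't have to touch ui.cpp or rebuild the Docker image.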
Thanks!