MuJoCo/OpenAI Gym environment for robotic control using Reinforcement Learning. The task is to learn to manipulate large, unknown objects with underactuated robots.
Note: if you are using a GPU, change the torch package reference in setup.py. For more details, see here.
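For orientation, the dependency entry to edit typically sits in install_requires. The snippet below is only an illustrative sketch; the package name and dependency list are assumptions, not the repository's actual setup.py.

```python
# setup.py -- illustrative sketch only; the actual file in this repo may differ.
from setuptools import setup, find_packages

setup(
    name="object_manipulation_rl",  # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "gym",
        "numpy",
        # Replace this entry with a CUDA-enabled torch build (installed from the
        # PyTorch wheel index matching your CUDA version) when training on GPU.
        "torch",
    ],
)
```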
Use pip to install the dependencies:
pip install -e .
If using a GPU, you can check whether the installation was successful with:
python -c "import torch; print(torch.cuda.is_available())"
To run a trained model, 'models/eval_agent.py' takes the following arguments:
- --train_env - trained model to evaluate, e.g. "acorn"
- --sim_env - simulated environment, e.g. "/xmls/acorn_env.xml"
- --render - render the environment for visualisation
- --plot_trajectory - plot the object and robot trajectories
- --direction - target vector direction, e.g. 0 or 45
python eval_agent.py --train_env acorn --sim_env /xmls/acorn_env.xml --render True --plot_trajectory True --direction 0
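Internally, these flags could be parsed along the lines of the sketch below. This is an assumed argparse layout based on the flag descriptions above, not the script's actual implementation (the str2bool helper is hypothetical).

```python
# Illustrative sketch of eval_agent.py's command-line interface (assumed, not the actual code).
import argparse

def str2bool(value):
    """Interpret common truthy strings as booleans (illustrative helper)."""
    return str(value).lower() in ("true", "1", "yes")

def parse_args():
    parser = argparse.ArgumentParser(description="Evaluate a trained manipulation agent.")
    parser.add_argument("--train_env", required=True,
                        help='trained model to evaluate, e.g. "acorn"')
    parser.add_argument("--sim_env", required=True,
                        help='simulated environment XML, e.g. "/xmls/acorn_env.xml"')
    parser.add_argument("--render", type=str2bool, default=False,
                        help="render the environment for visualisation")
    parser.add_argument("--plot_trajectory", type=str2bool, default=False,
                        help="plot the object and robot trajectories")
    parser.add_argument("--direction", type=float, default=0,
                        help="target vector direction in degrees, e.g. 0 or 45")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(args)  # the real script would load the trained model and run the evaluation here
```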
To train a model, 'models/train_agent.py' takes the following arguments:
- --sim_env - simulated environment, e.g. "/xmls/acorn_env.xml"
- --task - adds the training task name to the saved model folder, e.g. "/reward"
- --name - name of the experiment, e.g. "sand_ball"
- --suffix - custom suffix appended to the experiment name (config.name = config.name + suffix), e.g. {model}_{IM_reward}
- --direction - target vector direction, e.g. 0 or 45
python train_agent.py --sim_env /xmls/bread_crumb_env.xml --task reward --name bread_crumb --suffix progress_reward_best_model --direction 0
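As a rough illustration of how --task, --name and --suffix might combine into the saved model folder, see the sketch below; this composition is inferred from the flag descriptions above and is not taken from the project's code.

```python
# Hypothetical sketch of how the saved-model path could be assembled from the flags above.
import os

def build_save_path(task: str, name: str, suffix: str, root: str = "models") -> str:
    """Combine --task, --name and --suffix into a save directory name."""
    full_name = f"{name}_{suffix}" if suffix else name  # config.name = config.name + suffix
    return os.path.join(root, task, full_name)

print(build_save_path("reward", "bread_crumb", "progress_reward_best_model"))
# -> models/reward/bread_crumb_progress_reward_best_model
```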
The repository includes the following example environments, each with forward movement:
- Acorn environment
- Bread crumb environment
- Sand ball environment
- Sugar cube environment
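To inspect one of the environment XMLs outside the training scripts, a minimal sketch using the official mujoco Python bindings is shown below; the project itself may instead expose these XMLs through Gym wrappers, and the relative path is an assumption.

```python
# Minimal standalone sketch: load an environment XML and step the simulation.
# Assumes the `mujoco` Python bindings are installed; path is relative to the repo root.
import mujoco

model = mujoco.MjModel.from_xml_path("xmls/acorn_env.xml")
data = mujoco.MjData(model)

for _ in range(1000):
    data.ctrl[:] = 0.0          # zero actuation; a trained policy would set controls here
    mujoco.mj_step(model, data)

print("final qpos:", data.qpos)
```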
This project is licensed under the MIT License; see the LICENSE.md file for details.



