This is the Roche group's COM-304 Communications project in robotics.
- `load/` – Contains the main agent controller and executor files used to run the trained PPO model on the TurtleBot.
- `targets/` – Contains the 3D model files (`.glb`) and their corresponding object configuration JSON files (`.object_config.json`) used as assets in the Habitat simulation environment.
- `Roche_Robotics.ipynb` – Jupyter Notebook that documents the training process, model architecture, environment setup, and evaluation results for the PPO agent.
- `requirements.txt` – Lists all Python dependencies needed to run the training, evaluation, and deployment scripts, including stable-baselines3, habitat-sim, gymnasium, and visualization libraries.
The Jupyter Notebook `Roche_Robotics.ipynb` provides a detailed overview of the entire project, including:
- The training process of the PPO agent, with explanations on the environment setup and hyperparameters.
- How the simulation environment was configured and run.
- Evaluation metrics and results demonstrating the agent’s performance.
- Code snippets for training and evaluation to facilitate reproduction and understanding.
To use the notebook:
- Access the Izar cluster and perform the same setup as described in the TurtleBot 4 setup guide.
- Add the notebook to your workspace at the same level as `RL_Habitat_Homework/RL_Habitat_Homework.ipynb`, or select the kernel used by that file.
- Place the target files (`.glb` and `.object_config.json`) in `/workspace/habitat-sim/data/test_assets/objects/`, or update the paths in the notebook accordingly.
- Open the notebook with Jupyter Notebook or JupyterLab: `jupyter notebook Roche_Robotics.ipynb` or `jupyter lab Roche_Robotics.ipynb`.
- Install the required Python packages listed in `requirements.txt` using `pip install -r requirements.txt`.
- Run the cells sequentially to explore the training pipeline and evaluation.
- Adjust parameters or experiment with the code to better understand or improve the model.
This notebook is a valuable resource both for reproducing the training and for understanding the design decisions behind the PPO agent deployed on the TurtleBot.
Before anything else, make sure you have a stable connection to a TurtleBot 4 Lite.
- Put the file `ppo_random_ball_and_robot_pos_rgb_v01.zip` at the same level as the files `rl_agent_controller.py` and `robot_action_executor.py` (find them here).
- In `rl_agent_controller.py`, change `MODEL_PATH` (line 13) to the path of the PPO model (`ppo_random_ball_and_robot_pos_rgb_v01.zip`) on your computer.
- Install the required Python packages for Stable Baselines3: `pip install stable-baselines3[extra] opencv-python numpy`.
- Open two terminals and source the following in both: `source /opt/ros/humble/setup.bash` and `source ~/ros2_ws/install/setup.bash`.
- Go to the directory containing `rl_agent_controller.py` and `robot_action_executor.py`.
- In the first terminal, run `python3 robot_action_executor.py`.
- In the second terminal, run `python3 rl_agent_controller.py`.
- (Optional) If the robot undocks and moves backwards into the dock, toggle line 33 in `rl_agent_controller.py`, undock the robot by hand, and move it somewhere else before steps 6 and 7.