
InteLiPlan workspace

Author: Kim Tien Ly

This workspace accompanies the paper "InteLiPlan: Interactive Lightweight LLM-Based Planner for Domestic Robot Autonomy".

More details of the paper are available on our project page.

Prerequisites

  • Docker

Docker usage

Use the following commands to:

  • build the image
make build
  • build without cache
docker compose build --no-cache
  • create the container
make up

Once the container is running, open container terminal:

docker exec -it inteliplan_docker bash

Running

Run the demo simulation

The system was tested on the Toyota Human Support Robot (HSR). The HSR simulation package is installed after the Docker image is built. The simulation runs on ROS 1. A fine-tuned model for the fetchme task is provided at src/inteliplan_robot/inteliplan_interface/models to begin with.

inteliplan_robot is the ROS package for the robot experiment. Change into inteliplan_ws, then build and source the workspace with:

catkin build
source devel/setup.bash

To run graphical user interfaces from inside the Docker container, run the following on the host system:

xhost +local:root

Install octomap-server from inside the Docker container:

sudo apt-get install ros-noetic-octomap-server

Launch the tmux workspace:

rosrun inteliplan_interface run.tmux
  • The first window robot has two terminals:
    • simulation_world.launch launches the robot simulation together with vision recognition
    • manipulation_servers_no_grasp_synthesis.launch launches the manipulation servers, which includes a list of motion APIs.
  • The second window inteliplan is the main inteliplan pipeline, where you input commands and instructions for the robot.
  • Tips for tmux:
    • Ctrl+B then 0 to open robot window
    • Ctrl+B then 1 to open inteliplan window
    • Ctrl+B then left arrow/right arrow to navigate between terminals on the same window.
    • Ctrl+B then [ to scroll in each terminal

Fine-tune the model

Code for fine-tuning is in /inteliplan/. Run the Python files in the following order:

  • create_fetchme_data.py provides a quick example of generating CSV data files. The data is text-only, as described in the paper.
  • data_csv_to_jsonl converts the generated CSV file to a JSONL format dedicated to fine-tuning.
  • train.py fine-tunes the LLM (default: Mistral).
  • inference.py tests the trained model.
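The CSV-to-JSONL conversion step can be sketched as follows. This is an illustrative example of turning a CSV of instruction/response pairs into the line-delimited JSON format commonly used for LLM fine-tuning; the column names (`prompt`, `response`) and the record schema are assumptions, not the exact format used by data_csv_to_jsonl in this repository.

```python
import csv
import json

def csv_to_jsonl(csv_path: str, jsonl_path: str) -> int:
    """Convert a CSV of (prompt, response) pairs to JSONL fine-tuning records.

    Column names and record schema are illustrative assumptions only.
    Returns the number of records written.
    """
    count = 0
    with open(csv_path, newline="") as src, open(jsonl_path, "w") as dst:
        for row in csv.DictReader(src):
            # One JSON object per line, as expected by most fine-tuning tools.
            record = {"prompt": row["prompt"], "completion": row["response"]}
            dst.write(json.dumps(record) + "\n")
            count += 1
    return count
```

Each line of the output file is an independent JSON object, which lets fine-tuning frameworks stream the dataset without loading it all into memory.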
