This is a collection of all the components of the WorldSystem.
```bash
git clone https://github.com/Tom-Sloan/WorldSystem.git
cd WorldSystem
git submodule update --init --recursive
docker compose build
docker compose up --detach $(docker compose config --services | grep -v slam3r)
```
- Clone the repository: `git clone https://github.com/Tom-Sloan/WorldSystem.git`
- Install submodules: `git submodule update --init --recursive`
- Build the docker containers: `docker compose build`
- Run the docker containers: `docker compose up`

Other useful commands:

```bash
# Rebuild and restart
docker-compose build
docker-compose up

# Rebuild a single service (e.g. the website) without cache
docker-compose build --no-cache website

# Open a shell in a running container
docker-compose exec website sh
docker-compose exec server bash

# Develop the website inside its container
docker-compose exec website bash
cd /app && npm run dev

# Develop the server inside its container
docker-compose exec server bash
source activate drone_server && python main.py  # Changes in ./server will be live
```
The visualizer is built with React and react-three-fiber. Its purpose is to visualize what is happening in the system in real time. It receives images at 30 fps and JSON data in real time over a WebSocket. I am considering moving it to a desktop application, though it would have to be cross-platform. It also occasionally sends requests to the server's API, or messages over the WebSocket. I start it with `npm run dev` and open it at localhost. It requests 3D files from the server's API and displays them with three.js. Using the visualization tool I can control the drone.
This component is meant to be the central waypoint for the data. It receives data from the external sensors as images at 30 fps and JSON data over a WebSocket, then forwards that data to the visualizer through another WebSocket. It also writes all the images to disk and the IMU data to text files. The component is written in Python using FastAPI. Forwarding to the visualizer runs in a second process, and writing to disk runs in another. The server manages the data flow between all components and provides real-time updates to the visualization website.
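As a rough illustration of that relay pattern, here is a minimal FastAPI sketch; the endpoint paths, the single in-memory queue, and the frame-dropping policy are assumptions for illustration, not the actual server code.

```python
# Minimal relay sketch (run with: uvicorn relay_sketch:app --port 5001).
# Endpoint paths and the in-memory queue are illustrative assumptions.
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
frames: asyncio.Queue = asyncio.Queue(maxsize=60)  # ~2 s of frames at 30 fps

@app.websocket("/ws/ingest")
async def ingest(ws: WebSocket) -> None:
    """Receive binary frames (and JSON IMU messages) from the sensor side."""
    await ws.accept()
    try:
        while True:
            frame = await ws.receive_bytes()
            if frames.full():           # drop the oldest frame instead of stalling the sender
                frames.get_nowait()
            frames.put_nowait(frame)
    except WebSocketDisconnect:
        pass

@app.websocket("/ws/visualizer")
async def visualizer(ws: WebSocket) -> None:
    """Forward frames to the visualization website as they arrive."""
    await ws.accept()
    try:
        while True:
            await ws.send_bytes(await frames.get())
    except WebSocketDisconnect:
        pass
```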
Using the images and IMU information saved by the server, SLAM3R processes video frames to generate camera poses and trajectory information, which it writes into shared memory. This runs at around 15 fps and enables real-time 3D reconstruction.
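One way to picture that shared-memory hand-off (the block name and the 4x4 float64 pose layout are assumptions of this sketch, not the real SLAM3R interface):

```python
# Hedged sketch: publish the latest camera pose through a named shared-memory block.
import numpy as np
from multiprocessing import shared_memory

POSE_SHAPE = (4, 4)
POSE_BYTES = int(np.prod(POSE_SHAPE)) * np.dtype(np.float64).itemsize

def publish_pose(pose: np.ndarray, name: str = "slam3r_latest_pose") -> shared_memory.SharedMemory:
    """Copy the latest 4x4 camera-to-world pose into shared memory (SLAM3R side)."""
    try:
        shm = shared_memory.SharedMemory(name=name, create=True, size=POSE_BYTES)
    except FileExistsError:
        shm = shared_memory.SharedMemory(name=name)  # block already exists; attach to it
    np.ndarray(POSE_SHAPE, dtype=np.float64, buffer=shm.buf)[:] = pose
    return shm  # keep the handle alive while the service runs

def read_pose(name: str = "slam3r_latest_pose") -> np.ndarray:
    """Read the latest pose (mesh-service side)."""
    shm = shared_memory.SharedMemory(name=name)
    view = np.ndarray(POSE_SHAPE, dtype=np.float64, buffer=shm.buf)
    pose = view.copy()
    del view
    shm.close()
    return pose
```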
The mesh service reads camera poses from shared memory and generates 3D meshes in real time using TSDF + Marching Cubes or Open3D Poisson reconstruction. It produces continuous mesh updates that are visualized through Rerun and sent to the website.
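For the TSDF + Marching Cubes path, the core fusion loop might look roughly like this with Open3D; the intrinsics, voxel size, and frame source are assumptions of the sketch, not the actual service.

```python
# Illustrative TSDF fusion loop (assumed intrinsics/resolution, not the actual service).
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 525.0, 525.0, 319.5, 239.5)
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
)

def integrate_frame(color: np.ndarray, depth: np.ndarray, cam_pose: np.ndarray) -> None:
    """Fuse one RGB-D frame; color is HxWx3 uint8, depth is HxW uint16 (mm),
    cam_pose is the 4x4 camera-to-world pose read from shared memory."""
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(color),
        o3d.geometry.Image(depth),
        depth_scale=1000.0,
        depth_trunc=4.0,
        convert_rgb_to_intensity=False,
    )
    volume.integrate(rgbd, intrinsic, np.linalg.inv(cam_pose))  # integrate() wants world-to-camera

def current_mesh() -> o3d.geometry.TriangleMesh:
    """Extract the latest mesh (Marching Cubes over the TSDF) for Rerun / the website."""
    return volume.extract_triangle_mesh()
```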
The drone can only connect to a specific remote control, which only runs on an Android device. The Android phone connects to the server over a WebSocket and streams the drone's video and IMU information. It also sends the drone's status information to the server over the same WebSocket.
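To exercise the server without the drone or phone, something like the following can emulate that stream; the endpoint path and message shapes are assumptions, not the app's actual protocol.

```python
# Hedged sketch: emulate the phone's WebSocket stream for local testing.
import asyncio
import json
import time
import websockets

async def fake_phone(uri: str = "ws://localhost:5001/ws") -> None:
    async with websockets.connect(uri) as ws:
        while True:
            # A video frame would go out as a binary message ...
            await ws.send(b"\xff\xd8 fake jpeg bytes \xff\xd9")
            # ... and IMU / status updates as JSON text messages.
            await ws.send(json.dumps({
                "type": "imu",
                "accel": [0.0, 0.0, 9.81],
                "gyro": [0.0, 0.0, 0.0],
                "timestamp": str(time.time_ns()),
            }))
            await asyncio.sleep(1 / 30)  # roughly 30 fps

if __name__ == "__main__":
    asyncio.run(fake_phone())
```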
The purpose of this component is to let me build a fantasy version of any room. It is a work in progress. It takes the 3D model of the room generated by the reconstruction pipeline and modifies it, similar to applying a 'skin' in a video game.
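A very rough sketch of the 'skin' idea with Open3D; the file names and the simple vertex-paint stand-in are assumptions, and the real component would swap textures/materials rather than flat colours.

```python
# Hedged sketch: recolor a reconstructed room mesh as a stand-in for a 'skin'.
import open3d as o3d

def apply_skin(mesh_path: str, color=(0.6, 0.4, 0.8)) -> o3d.geometry.TriangleMesh:
    """Load the reconstructed room and paint every vertex a fantasy colour."""
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    mesh.compute_vertex_normals()
    mesh.paint_uniform_color(color)
    return mesh

if __name__ == "__main__":
    o3d.io.write_triangle_mesh("room_fantasy.ply", apply_skin("room.ply"))
```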
I am trying to build something like:
```bash
# RabbitMQ
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.12-management

# Build
docker build -t website:latest ./website

# Run (development mode)
docker run -it --rm \
  -p 3000:3000 \
  -v $(pwd)/website:/app \
  -e VITE_API_URL=http://localhost:5001 \
  -e VITE_WS_HOST=localhost \
  website:latest

# Build
docker build -t server:latest ./server

# Run
docker run -it --rm \
  -p 5001:5001 \
  --gpus all \
  -v $(pwd)/server:/app \
  -e RABBITMQ_URL=amqp://host.docker.internal \
  server:latest

# Profile a running process with py-spy (replace the PID with your own)
sudo /home/sam3/anaconda3/envs/3dreconstruction/bin/py-spy top --pid 331272

# Get the host PID of the slam3r container (for py-spy)
docker inspect --format '{{.State.Pid}}' slam3r
```
SLAM3R now automatically detects video segment boundaries and resets its state when transitioning between segments. This prevents drift and maintains clean reconstruction for each segment.
Features:
- Automatic detection of video segment changes via RabbitMQ message headers (see the sketch after this list)
- Complete SLAM state reset between segments
- Optional saving of point clouds and trajectories per segment
- Segment boundary notifications published to reconstruction visualization exchange
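A minimal sketch of the header-based detection with pika; the queue name, the `segment_id` header key, and the reset hook are assumptions, not SLAM3R's actual code.

```python
# Hedged sketch: reset SLAM state when the segment header on incoming frames changes.
import pika

current_segment = None

def reset_slam_state() -> None:
    """Placeholder for clearing keyframes, poses, and the running map."""
    print("segment boundary -> resetting SLAM state")

def on_frame(ch, method, properties, body) -> None:
    global current_segment
    segment_id = (properties.headers or {}).get("segment_id")
    if segment_id is not None and segment_id != current_segment:
        if current_segment is not None:
            reset_slam_state()
        current_segment = segment_id
    # ... process the frame bytes in `body` as usual ...

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="video_frames", durable=True)
channel.basic_consume(queue="video_frames", on_message_callback=on_frame, auto_ack=True)
channel.start_consuming()
```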
Configuration:
```bash
# Enable saving point clouds when segments change (default: false)
export SLAM3R_SAVE_SEGMENT_POINTCLOUDS=true

# Directory to save segment data (default: /tmp/slam3r_segments)
export SLAM3R_SEGMENT_OUTPUT_DIR=/path/to/output
```

Testing segment reset:
```bash
# Run the test monitor script
python test_segment_reset.py

# Check SLAM3R logs for reset messages
docker logs slam3r | grep -E "(segment|reset)"
```
```bash
# Build
docker build -t fantasy:latest ./fantasy

# Run
docker run -it --rm \
  --gpus all \
  -v $(pwd)/fantasy:/app \
  -e RABBITMQ_URL=amqp://host.docker.internal \
  fantasy:latest
```

| Service | Host Port | Notes / How to Access |
|---|---|---|
| RabbitMQ | 5672 | AMQP messaging (clients connect here) |
| RabbitMQ Management | 15672 | Web-based management UI for RabbitMQ |
| Nginx | 80 | HTTP reverse proxy / website front-end (HTTP) |
| Nginx | 443 | HTTP reverse proxy / website front-end (HTTPS) |
| Server | 5001 | Main server API (includes /api and /ws routes) |
| Slam | 8000 | SLAM service (may expose an API or WebSocket) |
| Reconstruction | 8001 | Reconstruction service API |
| Data Storage | 8002 | Data storage service API |
| cAdvisor | 8080 | Container resource usage metrics (scraped by Prometheus) |
| Prometheus | 9090 | Prometheus metrics UI |
| Grafana | 3000 | Grafana dashboards |
| Jaeger (UI) | 16686 | Distributed tracing UI |
| Jaeger (OTLP) | 4318 | OTLP ingestion port (if using OTLP tracing) |
| Jaeger (Other) | 6831/udp, 6832/udp, 14268 | Legacy agent ports, collector endpoint, etc. |
| NVIDIA DCGM Exporter | 9400 | GPU metrics (scraped by Prometheus) |
- RabbitMQ Management: http://134.117.167.139:15672/
- cAdvisor: http://134.117.167.139:8080/
- Prometheus: http://134.117.167.139:9090/
- Grafana: http://134.117.167.139:3000/
- Jaeger UI: http://134.117.167.139:16686/
New Control Scheme:
- Left Stick / WASD: Movement
  - W: Forward
  - A: Left
  - S: Backward
  - D: Right
- Right Stick / Arrow Keys: Rotation & Altitude
  - Left Arrow: Rotate left
  - Right Arrow: Rotate right
  - Up Arrow: Up
  - Down Arrow: Down
- T: Takeoff/Land toggle
- P: Camera on/off toggle
```jsonc
// Movement (WASD)
{
  "type": "movement",
  "x": 0.0,        // -1.0 (left) to 1.0 (right)
  "y": 0.0,        // -1.0 (backward) to 1.0 (forward)
  "timestamp": "1234567890123456789"
}

// Rotation/Altitude (Arrow Keys)
{
  "type": "rotation",
  "yaw": 0.0,      // -1.0 (rotate left) to 1.0 (rotate right)
  "z": 0.0,        // -1.0 (down) to 1.0 (up)
  "timestamp": "1234567890123456789"
}

// Camera Toggle (P key)
{
  "type": "camera",
  "action": "toggle",   // "toggle", "on", or "off"
  "timestamp": "1234567890123456789"
}

// Takeoff/Land (T key)
{
  "type": "flightmode",
  "action": "takeoff",  // or "land"
  "timestamp": "1234567890123456789"
}
```
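A hedged helper showing how key presses could map onto these message shapes; the function and mapping names are illustrative, not the website's actual code.

```python
# Hedged sketch: build control payloads matching the message formats above.
import json
import time

def _stamp() -> str:
    return str(time.time_ns())

# WASD mapping from the control scheme above: (x, y) in [-1.0, 1.0]
KEY_TO_MOVEMENT = {
    "w": (0.0, 1.0),    # forward
    "a": (-1.0, 0.0),   # left
    "s": (0.0, -1.0),   # backward
    "d": (1.0, 0.0),    # right
}

def on_key(key: str):
    """Return the JSON payload to send over the control WebSocket, or None."""
    if key in KEY_TO_MOVEMENT:
        x, y = KEY_TO_MOVEMENT[key]
        return json.dumps({"type": "movement", "x": x, "y": y, "timestamp": _stamp()})
    if key == "t":
        return json.dumps({"type": "flightmode", "action": "takeoff", "timestamp": _stamp()})
    if key == "p":
        return json.dumps({"type": "camera", "action": "toggle", "timestamp": _stamp()})
    return None

print(on_key("w"))  # {"type": "movement", "x": 0.0, "y": 1.0, "timestamp": "..."}
```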
