👋 Want to contribute to XLeRobot?
If you have a bug or an idea, read the guidelines below before opening an issue.
If you're ready to tackle some open issues, we've collected some good first issues for you. You can also open an issue yourself for any of the tasks below, following the template.
- Computer Vision: YOLO object detection and segmentation, depth estimation, gesture recognition
- Navigation: LIDAR/RGBD/Stereo/RGB-based navigation, path planning, obstacle avoidance
- Simulation: Isaac Sim integration, MuJoCo improvements, etc.
- VLA Integration: Migrating VLA models (Pi0, SmolVLA) from the single SO101 arm
- User Interfaces: Web control, mobile apps, voice recognition
- Hardware Support: More cameras (RealSense, OAK, etc.), tactile sensing, additional sensors
- RL Training: Stable and generalizable RL algorithms, sim2real transfer, benchmark environments
- VLM/LLM/MCP: Inference pipelines, task planning, multimodal reasoning, MCP tool integration
- Your Own Research Ideas: Novel robotics research, embodied AI algorithms, innovative applications
- Share Your Experiences: Document your XLeRobot journey, challenges, and successes
- Educational Use Cases: Robotics courses, workshops, student projects, and learning materials
- Community Events: Organize hackathons, meetups, and collaborative projects
- Real-world Applications: Industrial pilots, research projects, and practical deployments
- Content Creation: Blog posts, videos, tutorials, and social media showcasing XLeRobot capabilities
- Tutorial and guide updates (especially videos)
- Code examples
- API documentation
- Your own video demos
Note: The hardware design is fairly settled. Most contributions should focus on software, examples, and documentation. For major hardware changes, please discuss in issues first!
- Check existing issues - someone might already be working on it
- Comment on the issue - let others know your approach
- Small PRs are better - easier to review and merge
When you want to work on an issue:
Comment with your proposal like this:
I'd like to work on this! My approach:
- Use OpenCV + YOLO11 for object detection
- Create a ROS2 wrapper for easy integration
- Add examples for common objects (bottles, cups, etc.)
- Timeline: ~2 weeks
Let me know if this sounds good or if you have suggestions!
This helps avoid duplicate work and gets feedback early.
- Fork → Branch → Code → Test → PR
- Include examples when adding new features
- Update README if needed
- Reference the issue in your PR description (e.g. `Fixes #123`)
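The workflow above can be sketched with git commands like the following. The repository URL placeholder, branch name, and commit message are hypothetical examples, not project conventions:

```
# Clone your fork (replace <your-username> with your GitHub username)
git clone https://github.com/<your-username>/XLeRobot.git
cd XLeRobot

# Create a descriptive feature branch
git checkout -b feat/yolo-detection-example

# ... make your changes, then test them (at least run the examples) ...

# Commit and push to your fork
git add -A
git commit -m "Add YOLO detection example (fixes #123)"
git push origin feat/yolo-detection-example

# Finally, open a pull request on GitHub that references the issue
```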
- `good first issue` - Perfect for newcomers
- `help wanted` - Community help needed
- `bug` - Something broken
- `enhancement` - New features
- `documentation` - Docs improvements
Areas:
- `area: vision` - Computer vision
- `area: navigation` - Navigation/planning
- `area: simulation` - Sim environments
- `area: ai` - AI/ML features
- `area: hardware` - Hardware integration
- Follow existing patterns in the codebase
- Test your changes (at least run the examples)
- No major hardware redesigns without discussion first
- Keep cost increases minimal
- Don't make assembly significantly harder
- Don't submit PRs without first commenting on an issue
- Don't duplicate existing functionality without good reason
- Don't add heavy dependencies without discussion
- Don't break existing examples
- Discord: XLeRobot Community
- Documentation: https://xlerobot.readthedocs.io/
- Issues: For bugs and feature requests
- Discussions: For questions and brainstorming
- Look for `good first issue` labels
- Comment your approach before starting
- Fork, code, test, PR
- Celebrate! 🎉
Thanks for contributing to making embodied AI accessible to everyone! 🤖✨