
open-lm-engine

🧠 Efficient Kernels

Welcome to Open LM Engine, a GitHub organization dedicated to developing high-performance tools, libraries, and methods for training large-scale machine learning models. Our mission is to push the boundaries of computational efficiency, enabling faster, more scalable, and cost-effective AI training.


🚀 Mission

Training large models—from transformer-based architectures to multimodal networks—demands innovative solutions across the stack: from custom CUDA kernels to memory-efficient training paradigms. At Open LM Engine, we:

  • Develop optimized GPU kernels (other accelerators to follow) for deep learning workloads
  • Explore sparsity, quantization, and other model compression techniques
  • Build training frameworks for large-scale models
  • Share research-driven open-source tools with the community
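
As a toy illustration of one of these directions, here is a minimal symmetric int8 quantization round-trip in plain Python. This is illustrative only and not code from any Open LM Engine repository:

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats into [-127, 127]
    using a single scale derived from the max magnitude."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes."""
    return [q * scale for q in quantized]


weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered value is within half a quantization step of the original.
```

Real quantization schemes (per-channel scales, asymmetric zero points, quantization-aware training) are considerably more involved, but the core idea of trading precision for memory and bandwidth is the same.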

🧪 Research & Engineering

We combine research insights with engineering best practices. Our work is inspired by:

  • Recent breakthroughs in model efficiency (e.g., FlashAttention, ZeRO)
  • Papers and implementations from top conferences (NeurIPS, ICML, ICLR)
  • Real-world scalability needs in LLM and foundation model training

🤝 Contributing

We welcome contributions from the community! Whether you're optimizing a kernel, fixing a bug, or proposing a new training strategy—every contribution counts.

📬 Open a PR or open an issue/discussion to get involved.


📜 License

All repositories are open-source under the Apache 2.0 license. See individual repos for details.
