
# Hi, I'm Lidang Jiang

MLSys engineer passionate about using AI to build everything, from inference systems to embodied robots.

**Interests:** Embodied AI, Robotics, Inference Acceleration, AI Infra, C++

Blog · GitHub


## Open Source Contributions

19 merged PRs across 6 projects (327k+ combined stars) · 46 open PRs in review

| Repository | Stars | PRs Merged | Links |
| --- | --- | --- | --- |
| huggingface/transformers | 158.7k | 1 | #45045 |
| affaan-m/everything-claude-code | 132.7k | 5 | View all |
| Genesis-Embodied-AI/Genesis | 28.4k | 3 | #2612, #2609, #2610 |
| vllm-project/vllm-omni | 4.1k | 2 | #2221, #1687 |
| haosulab/ManiSkill | 2.7k | 2 | #1403, #1402 |
| baidu/vLLM-Kunlun | 390 | 6 | View all |

## Tech Stack

Python · C++ · Go · TypeScript · CUDA · PyTorch · Kubernetes · Docker

## Popular Repositories

1. **Lidang-Jiang.github.io** — personal website (TypeScript)
2. **UniversityResearcherProfiles** (Vue)
3. **vllm-omni** (fork of vllm-project/vllm-omni) — a framework for efficient model inference with omni-modality models (Python)
4. **vLLM-Kunlun** (fork of baidu/vLLM-Kunlun) — a community-maintained hardware plugin designed to seamlessly run vLLM on the Kunlun XPU (Python)
5. **sglang** (fork of sgl-project/sglang) — a high-performance serving framework for large language models and multimodal models (Python)
6. **vllm** (fork of vllm-project/vllm) — a high-throughput and memory-efficient inference and serving engine for LLMs (Python)