GPU passthrough availability? #62
Replies: 20 comments 52 replies
-
Hey @vivekpatani - we do not currently support this. If you have feedback on what you'd like to see specifically, let us know. Converting this to a discussion.
-
So that frameworks such as Ollama or PyTorch can use MPS in the container.
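For context, host-side macOS code typically selects the MPS backend with a check like the one below; inside today's Linux containers MPS is not exposed, so the same check falls back to CPU. A minimal sketch — the helper name `pick_device` is ours, while `torch.backends.mps.is_available()` is PyTorch's actual API (available since PyTorch 1.12):

```python
# Minimal sketch: prefer Apple's MPS backend when PyTorch exposes it,
# and fall back to CPU otherwise. On a macOS host with a recent PyTorch
# this picks "mps"; in a Linux container without GPU access it picks "cpu".
def pick_device() -> str:
    try:
        import torch
        if torch.backends.mps.is_available():
            return "mps"
    except (ImportError, AttributeError):
        # torch missing, or too old to have the mps backend
        pass
    return "cpu"

print(pick_device())
```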
-
@egernst not sure if my issue belongs here or not, but I think it's relevant. To quote my use case, which is pretty much the same as @yongjer's.
-
We need GPU passthrough (preferably), or some workaround like what Docker/Podman did for macOS with Model Runner, to use AI on macOS.
-
This was the first thing I was hoping for when I heard Apple was making a native container solution. Honestly, I have no reason to switch from Podman or Docker if nothing new is added; the existing solutions work well enough now. Please do something well that the existing solutions do poorly.
-
I'd also like to see sharing of the onboard GPU with Vulkan support in the guest. Support for passing through a Thunderbolt/USB4 eGPU to the Linux container would be fantastic too.
-
Apple deliberately used a VM-per-container approach. I expect this is for security, which makes me doubt they will be okay with insecure GPU acceleration approaches like passing Vulkan commands through to the host. Secure GPU acceleration without hardware support (for SR-IOV or PCI passthrough) requires exposing the host kernel driver to the guest and running the shader compiler there. This means either custom container images with a (presumably proprietary) Apple-specific runtime, or providing an interface that Mesa can use and porting Mesa to M3 and M4.
-
I want to see support for external GPU passthrough, without having to trust the external GPU to not emulate a keyboard or mouse.
-
Linux is very often necessary for scientific computing in bioinformatics. Apple's native container support is very exciting, but the lack of native Metal support will undoubtedly limit the benefit of Apple silicon's unified-memory design. I currently use my Mac for bioinformatics analysis. Although PyTorch (MPS) runs well most of the time, the Mac is still treated as a second-class citizen in my research field, even though in my own testing of the overall analysis pipeline its performance is very satisfactory. I hope the development team pays enough attention to the great value of the Mac in academic research and seriously considers adding native Metal support to the container.
-
@mavenugo You seem to be asking several individuals about their use case for containerization and why this abstraction would improve quality of life. I'm genuinely confused as to why there is such focus on this: essentially you're asking why containerization itself has become so popular over the years. It certainly makes rapid development significantly easier, cleaner, and much more portable. There is no shortage of reasons users would be interested in GPU accessibility in Linux containers on a macOS host, and the hardware is insanely powerful compared to the energy required to run it. Is it really so difficult to comprehend why the community would want this feature?

Ideally, I would like to keep individual components abstracted away from other services and the host, especially in dev environments where several services run alongside each other. Of course users can develop on the dedicated host, but the build/test/deploy workflow is not nearly as clean as templating a base image with pinned requirements in a containerized deployment. Being a maintainer of this project, I'm certain you're well aware of all of this.

So the real question is: what is the hesitation with implementing GPU access within this native containerization solution? No doubt it is complex, but this seems like such an obvious feature request. The only reason I could understand hesitation is if there is a known limitation with current macOS and existing hardware, and reading through this thread it seems like this topic is being avoided, or at least delayed, for whatever reason. I sincerely hope this is not the case. Any chance that GPU access is already being considered internally?
-
I'm actually quite amazed this wasn't one of the first things added when designing this project. Looking online, there is a massive number of articles where people are frustrated by not being able to do this without workarounds or performance loss. This should 100% be a part of this project. For me, without this, I really don't see a reason to use it instead of Docker or Podman.
-
Any plans to add this for the v1 release? This would be a real differentiator against any other competitors out there.
-
Would love to be able to attach an NVIDIA or AMD eGPU via a Thunderbolt dock and have access to Vulkan and CUDA/ROCm within the VM container 🙏
-
At least give us a Mac guest on a Mac host container with GPU sharing, if Linux is not an option. It would help a lot with running automated, isolated environments.
-
PoC for Metal flash attention with Python bindings, tested for images.
-
PoC: that's not a general-purpose solution though; it only works for applications relying on the GGML library. It follows the same idea as the Mesa/Venus/Vulkan API forwarding already available in libkrun VMs.
-
Apple Silicon is limiting robotics and STEM students. We need GPU/OpenGL support in Linux VMs for essential tools like ROS 2 and Gazebo.

I'm a CS graduate student studying ML and robotics while using an Apple M4 Max for coursework and research. I run ROS 2 Humble and Gazebo Fortress inside an Ubuntu VM (via UTM) because these tools aren't fully supported natively on macOS. ROS 2 provides official prebuilt binaries only for Ubuntu. Packages like Cartographer and Navigation2 rely on Ubuntu's apt-based dependency chain, and Gazebo's ROS 2 plugins are built and distributed through the same Ubuntu package ecosystem.

The main limitation is that Virtualization.framework doesn't expose GPU or OpenGL 3.3+ to Linux guests. Gazebo's GUI immediately fails with "OpenGL 3.3 not supported," leaving only headless mode. That blocks visualization and debugging, which are central to robotics education and simulation. The hardware is more than capable, but without GPU/OpenGL passthrough, students and developers can't use Apple Silicon for modern robotics or simulation workflows.

Extending ParavirtualizedGraphics or adding GPU/3D acceleration passthrough for Linux guests would make Apple Silicon a first-class platform for robotics, AI, and research. It would let us stay within the macOS ecosystem instead of relying on external, non-Apple devices or resorting to a compromised VM workflow with impactful limitations.
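The "OpenGL 3.3 not supported" failure described above can be diagnosed from inside the guest by checking the version that `glxinfo` reports. A minimal sketch — the helper name `meets_opengl_minimum` is ours; the "OpenGL version string" line is standard `glxinfo` output:

```python
import re

def meets_opengl_minimum(glxinfo_output: str, minimum=(3, 3)) -> bool:
    # glxinfo prints a line like: "OpenGL version string: 4.1 ..."
    # Gazebo's GUI needs at least OpenGL 3.3, hence the default minimum.
    m = re.search(r"OpenGL version string:\s*(\d+)\.(\d+)", glxinfo_output)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= minimum
```

In practice you would feed this the output of `glxinfo` (from the `mesa-utils` package on Ubuntu); in a Virtualization.framework guest today the reported version falls below 3.3, which is why Gazebo only runs headless.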
-
I'm just adding my comment to show I also agree with everything said above. I tend to speak directly, so I'm not trying to be a jerk here :) My apologies if I don't sound kind.

I understand there are security implications any time you share a resource. But why would you ever NOT want to enable access to a large part of your system's resources? If containers are CPU-only, you are eliminating 60% of what I got a Mac Studio for. I've been doing some homelab projects this year, and it is SO much more difficult than it should be. Why handicap the products in any way? "It just works" would win over pros as well as "normal" users. I'm not a dev, just a long-time tech enthusiast.

Final rant: to me this is right up there with a young guy being able to develop Whisky for FREE to let people play Windows games on a Mac. Why wouldn't Apple produce that natively? They are really good at virtualisation. I love my Mac, but the biggest thing that tempts me away is compatibility. OK, that's all I have to say. :D If anyone with pull reads this, realize that when you make our lives easy, we throw money at you.
-
        
Until Apple comes up with their own solution, krunkit does support GPU acceleration for Linux guests/containers on Apple Silicon Macs using libkrun, Venus, and MoltenVK. Podman Desktop has supported krunkit/libkrun for quite a while (it's now the default virtualization engine for Podman Desktop), and Lima just gained support for it too. You can also use it standalone with your favorite Linux distro; just make sure the context (the VM, or a container in the VM) where you run the AI workload has a patched Mesa package installed. We're working on removing this requirement, but since it requires changes in the Linux kernel and Mesa, it is going to take a while to get upstream and trickle down to every distro.
-
Would I be able to pass GPU devices through to the container, either atomically or in slices? Thanks.