mamba language model framework through 2.2.6 vulnerable to insecure deserialization
Critical severity · Unreviewed
Published to the GitHub Advisory Database on May 12, 2026 · Updated May 14, 2026
Description
Published by the National Vulnerability Database: May 12, 2026
Published to the GitHub Advisory Database: May 12, 2026
Last updated: May 14, 2026
The mamba language model framework through 2.2.6 is vulnerable to insecure deserialization (CWE-502) when loading pre-trained models from HuggingFace Hub. The MambaLMHeadModel.from_pretrained() method calls torch.load() on the pytorch_model.bin weight file without the security-restrictive weights_only=True parameter, allowing arbitrary Python objects to be deserialized via the pickle module. An attacker can exploit this by publishing a malicious model repository on HuggingFace Hub; when a victim loads a model from that repository, arbitrary code executes on the victim's system in the context of the mamba process.
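The deserialization mechanism behind this class of bug can be sketched without mamba or PyTorch at all, since torch.load() (with the default weights_only=False) ultimately drives Python's pickle module. The class name and printed message below are illustrative, not taken from any real exploit: a pickled object's __reduce__ method lets the serializer embed an arbitrary callable that runs the moment the bytes are deserialized.

```python
import contextlib
import io
import pickle

# Illustrative sketch of the CWE-502 mechanism described above (not mamba's
# actual code). During unpickling, pickle invokes the callable returned by
# __reduce__, so an attacker-controlled file executes code at load time.
class MaliciousPayload:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",));
        # a benign print() is used here so the effect is observable safely.
        return (print, ("arbitrary code executed during unpickling",))

# A malicious pytorch_model.bin in a published model repository is,
# in essence, just such a pickle stream.
blob = pickle.dumps(MaliciousPayload())

# Deserializing the blob runs the embedded callable -- the equivalent of
# calling torch.load(path) on the file without weights_only=True.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    pickle.loads(blob)
print(buf.getvalue().strip())  # prints: arbitrary code executed during unpickling
```

As a mitigation, recent PyTorch releases accept torch.load(path, weights_only=True), which restricts unpickling to tensor and other allow-listed types and raises an error on arbitrary objects like the payload above; more generally, untrusted checkpoints should be distributed in non-pickle formats such as safetensors.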
References