## 🐛 Issue: `_pickle.UnpicklingError` when loading `.pth` weights on PyTorch 2.6+
### Description
When attempting to train EdgeYOLO using a pretrained `.pth` file (e.g. `edgeyolo_tiny_lrelu_coco.pth`) on **PyTorch 2.6+**, the following error occurs:
```
_pickle.UnpicklingError: Weights only load failed...
Unsupported global: GLOBAL numpy.core.multiarray.scalar...
```
This is due to a change in **PyTorch 2.6**, which now defaults `torch.load()` to `weights_only=True` and limits deserialization unless explicitly allowed. The older checkpoint format includes `numpy.core.multiarray.scalar`, which is now restricted.
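For context, here is a minimal sketch of how the same call behaves before and after the default change (the checkpoint path is just the example file from above; adjust it to your setup):

```python
import torch

CKPT = "edgeyolo_tiny_lrelu_coco.pth"  # any pre-2.6 EdgeYOLO checkpoint

try:
    # PyTorch 2.6+ defaults to weights_only=True, so pickled objects outside
    # the safe allowlist (here numpy.core.multiarray.scalar) are rejected.
    ckpt = torch.load(CKPT, map_location="cpu")
except Exception as e:  # _pickle.UnpicklingError on 2.6+
    print(f"weights_only load failed: {e}")
    # Opting back into full unpickling restores the pre-2.6 behaviour;
    # only do this for checkpoints you trust.
    ckpt = torch.load(CKPT, map_location="cpu", weights_only=False)
```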
---
### ✅ Workaround / Fix
#### In `/edgeyolo/edgeyolo/models/__init__.py`, inside the `EdgeYOLO.__init__()` method:
**1. Insert before `torch.load(...)`:**
```python
import numpy.core.multiarray
from torch.serialization import add_safe_globals
add_safe_globals([numpy.core.multiarray.scalar])
```
**2. Change this line:**
```python
torch.load(weights, map_location="cpu")
```
to:
```python
torch.load(weights, map_location="cpu", weights_only=False)
```
This resolves the `UnpicklingError` by explicitly allowlisting the scalar type and disabling the new `weights_only=True` default.
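As an alternative that keeps `weights_only=True` enabled, newer PyTorch also ships a `safe_globals` context manager alongside `add_safe_globals`; a minimal sketch, assuming the same checkpoint file:

```python
import numpy.core.multiarray
import torch
from torch.serialization import safe_globals

# Allowlist the legacy numpy scalar type only for this one load,
# keeping the stricter weights_only=True default everywhere else.
with safe_globals([numpy.core.multiarray.scalar]):
    ckpt = torch.load("edgeyolo_tiny_lrelu_coco.pth", map_location="cpu")
```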
### 🧪 Environment
- PyTorch: 2.1.2 and 2.6.0
- CUDA: 11.8
- GPU: RTX 4090
- Platform: Docker container with local runtime
- EdgeYOLO: Main branch (May 2025)
### 💡 Suggestion
To future-proof the loader or help others who hit this:
- Add a check like the following (a full helper sketch follows this list):
```python
if hasattr(torch, "serialization") and hasattr(torch.serialization, "add_safe_globals"):
    import numpy.core.multiarray
    from torch.serialization import add_safe_globals
    add_safe_globals([numpy.core.multiarray.scalar])
```
- Or document this in the README under “Common Errors with PyTorch 2.6+”
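For illustration, a minimal sketch of what such a guarded loader could look like; the `load_checkpoint` name and its placement are hypothetical, not part of the repo:

```python
import numpy.core.multiarray
import torch


def load_checkpoint(path: str, map_location="cpu"):
    """Load a legacy .pth checkpoint across old and new PyTorch versions.

    Illustrative helper (not part of EdgeYOLO): on PyTorch versions that
    expose add_safe_globals, allowlist the legacy numpy scalar global first.
    """
    if hasattr(torch, "serialization") and hasattr(torch.serialization, "add_safe_globals"):
        torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])
    try:
        return torch.load(path, map_location=map_location)
    except Exception:
        # Trusted legacy checkpoints: fall back to full unpickling
        # (the pre-2.6 behaviour).
        return torch.load(path, map_location=map_location, weights_only=False)


# ckpt = load_checkpoint("edgeyolo_tiny_lrelu_coco.pth")
```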
Thanks for your work — this repo rocks 🚀
SolSearcher