AI-Assisted Image Segmentation for Machine Learning Dataset Preparation
LazyLabel combines Meta's Segment Anything Model (SAM) with comprehensive manual annotation tools to accelerate the creation of pixel-perfect segmentation masks for computer vision applications.
Install from PyPI and launch:

```bash
pip install lazylabel-gui
lazylabel-gui
```

From source:

```bash
git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
pip install -e .
lazylabel-gui
```

Requirements: Python 3.10+, 8GB RAM, ~2.5GB disk space (for model weights)
LazyLabel leverages Meta's SAM for intelligent object detection:
- Single-click object segmentation
- Interactive refinement with positive/negative points (see the sketch after this list)
- Support for both SAM 1.0 and SAM 2.1 models
- GPU acceleration with automatic CPU fallback
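For intuition about what the positive/negative points do, here is a minimal sketch of prompting SAM directly through Meta's segment_anything package, independent of LazyLabel's GUI; the checkpoint filename, image path, and click coordinates below are placeholder assumptions:

```python
# Minimal sketch of SAM point prompting (not LazyLabel's internal code).
# Assumes the segment_anything package and a downloaded ViT-H checkpoint.
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU with CPU fallback
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
predictor = SamPredictor(sam)

image = np.array(Image.open("boat.jpg").convert("RGB"))
predictor.set_image(image)

# One positive click on the object (label 1), one negative click on background (label 0).
point_coords = np.array([[450, 320], [100, 80]])
point_labels = np.array([1, 0])
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,  # several candidate masks, like AI-mode suggestions
)
best_mask = masks[np.argmax(scores)]  # boolean (H, W) mask
```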
When precision matters:
- Polygon drawing with vertex-level editing
- Bounding box annotations for object detection
- Edit mode for adjusting existing segments
- Merge tool for combining related segments (illustrated in the sketch below)
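As an illustration of the underlying representation (not LazyLabel's internal code), a drawn polygon can be rasterized into a binary mask and two segments merged as a pixel-wise union; OpenCV and NumPy are used here purely for the sketch:

```python
# Illustration only: rasterizing a polygon and merging two segments.
import cv2
import numpy as np

height, width = 480, 640

# Polygon vertices in (x, y) pixel coordinates, as produced by vertex-level editing.
polygon = np.array([[100, 120], [220, 110], [260, 240], [90, 230]], dtype=np.int32)
segment_a = np.zeros((height, width), dtype=np.uint8)
cv2.fillPoly(segment_a, [polygon], 1)  # fill the polygon with label value 1

# A second segment, e.g. from an accepted AI suggestion.
segment_b = np.zeros((height, width), dtype=np.uint8)
segment_b[200:300, 150:280] = 1

# Merging related segments is a pixel-wise union.
merged = np.logical_or(segment_a, segment_b).astype(np.uint8)
```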
Advanced preprocessing capabilities:
- FFT filtering: Remove noise and enhance edges (see the sketch after this list)
- Channel thresholding: Isolate objects by color
- Border cropping: Define crop regions that set pixels outside the area to zero in saved outputs
- View adjustments: Brightness, contrast, gamma correction
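As a rough sketch of the frequency-domain idea (not LazyLabel's exact filter), a low-pass FFT filter suppresses high-frequency noise in a grayscale image:

```python
# Rough sketch of FFT low-pass filtering on a grayscale image (illustration only).
import numpy as np

def fft_lowpass(gray: np.ndarray, keep_radius: int = 40) -> np.ndarray:
    """Zero out frequency components farther than keep_radius from the spectrum center."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    rows, cols = gray.shape
    y, x = np.ogrid[:rows, :cols]
    distance = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    spectrum[distance > keep_radius] = 0  # discard high frequencies (noise, fine texture)
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
    return np.clip(filtered, 0, 255).astype(np.uint8)
```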
Process multiple images efficiently:
- Annotate up to 4 images simultaneously
- Synchronized zoom and pan across views
- Mirror annotations to all linked images
One-hot encoded masks optimized for deep learning:

```python
import numpy as np

data = np.load('image.npz')
mask = data['mask']  # Shape: (height, width, num_classes)

# Each channel represents one class
sky = mask[:, :, 0]
boats = mask[:, :, 1]
cats = mask[:, :, 2]
dogs = mask[:, :, 3]
```

Normalized polygon coordinates for YOLO training:
```
0 0.234 0.456 0.289 0.478 0.301 0.523 ...
1 0.567 0.123 0.598 0.145 0.612 0.189 ...
```
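Each line stores a class id followed by alternating normalized x/y vertex coordinates. A minimal sketch of reading such a label file back into pixel-space polygons (the file name and image size below are placeholders):

```python
# Minimal sketch: parse YOLO-style polygon labels back into pixel coordinates.
def load_yolo_polygons(label_path: str, image_width: int, image_height: int):
    polygons = []
    with open(label_path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            class_id = int(parts[0])
            coords = list(map(float, parts[1:]))
            # Pair up (x, y) values and scale from normalized [0, 1] to pixels.
            vertices = [
                (coords[i] * image_width, coords[i + 1] * image_height)
                for i in range(0, len(coords), 2)
            ]
            polygons.append((class_id, vertices))
    return polygons

# Example (placeholder names): load_yolo_polygons("image.txt", 1920, 1080)
```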
Maintains consistent class naming across datasets:

```json
{
  "0": "background",
  "1": "person",
  "2": "vehicle"
}
```

Basic workflow:

- Open folder containing your images
- Click objects to generate AI masks (mode 1)
- Refine with additional points or manual tools
- Assign classes and organize in the class table
- Export as NPZ or YOLO format (see the loading sketch below)
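After export, a typical training pipeline pairs the NPZ mask with the class-name mapping. The sketch below assumes the layouts shown earlier and a classes.json file name, which may differ in your export, and collapses the one-hot channels to a single class-index map:

```python
# Minimal sketch: combine an exported NPZ mask with the class-name mapping.
import json
import numpy as np

data = np.load("image.npz")
one_hot = data["mask"]                   # (height, width, num_classes), one channel per class
class_indices = one_hot.argmax(axis=-1)  # (height, width) class-index map for training losses

# "classes.json" is an assumed file name for the mapping shown above.
with open("classes.json") as f:
    id_to_name = json.load(f)            # e.g. {"0": "background", "1": "person", ...}

present = {id_to_name[str(i)] for i in np.unique(class_indices)}
print("classes present in this image:", present)
```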
For challenging images:
- Apply FFT filtering to reduce noise
- Use channel thresholding to isolate color ranges (sketched after this list)
- Enable "Operate on View" to pass filtered images to SAM
- Fine-tune with manual tools
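For the channel-thresholding step, the idea is to keep only pixels whose color channels fall inside chosen ranges; a rough sketch with arbitrary example thresholds (not LazyLabel's defaults):

```python
# Rough sketch of channel thresholding to isolate reddish objects (illustration only).
import numpy as np
from PIL import Image

rgb = np.array(Image.open("sample.jpg").convert("RGB"))
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Keep pixels with a strong red channel and comparatively weak green/blue.
mask = (r > 150) & (g < 100) & (b < 100)
isolated = np.where(mask[..., None], rgb, 0)  # zero out everything outside the range
```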
Access via the "Multi" tab to process multiple images:
- 2-view (side-by-side) or 4-view (grid) layouts
- Annotations mirror across linked views automatically
- Synchronized zoom maintains alignment
LazyLabel supports both SAM 1.0 (default) and SAM 2.1 models. SAM 2.1 offers improved segmentation accuracy and better handling of complex boundaries.
To use SAM 2.1 models:
- Install the SAM 2 package:

  ```bash
  pip install git+https://github.com/facebookresearch/sam2.git
  ```

- Download a SAM 2.1 model (e.g., `sam2.1_hiera_large.pt`) from the SAM 2 repository
- Place the model file in LazyLabel's models folder:
  - If installed via pip: `~/.local/share/lazylabel/models/` (or equivalent on your system)
  - If running from source: `src/lazylabel/models/`
- Select the SAM 2.1 model from the dropdown in LazyLabel's settings
Note: SAM 1.0 models are automatically downloaded on first use.
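Before selecting the model in the dropdown, a quick script can confirm the checkpoint landed in a folder LazyLabel can see; the pip-install path below is the Linux-style default and may differ on your system, as noted above:

```python
# Quick check that the SAM 2.1 checkpoint is in one of the expected models folders.
from pathlib import Path

candidates = [
    Path.home() / ".local/share/lazylabel/models",  # pip install (Linux-style path; see note above)
    Path("src/lazylabel/models"),                   # running from a source checkout
]

for folder in candidates:
    checkpoint = folder / "sam2.1_hiera_large.pt"
    print(f"{checkpoint}: {'found' if checkpoint.exists() else 'missing'}")
```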
| Action | Key | Description |
|---|---|---|
| AI Mode | 1 | SAM point-click segmentation |
| Draw Mode | 2 | Manual polygon creation |
| Edit Mode | E | Modify existing segments |
| Accept AI Segment | Space | Confirm AI segment suggestion |
| Save | Enter | Save annotations |
| Merge | M | Combine selected segments |
| Pan Mode | Q | Enter pan mode |
| Pan | WASD | Navigate image |
| Delete | V/Delete | Remove segments |
| Undo/Redo | Ctrl+Z/Y | Action history |
Create a standalone Windows executable with bundled models (no Python required):
Requirements:
- Windows (native, not WSL)
- Python 3.10+
- PyInstaller: `pip install pyinstaller`
Build steps:
```bash
git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
python build_system/windows/build_windows.py
```

The executable will be created in `dist/LazyLabel/`. The entire folder (~7-8GB) can be moved anywhere and runs offline.
- Usage Manual - Comprehensive feature guide
- Architecture Guide - Technical implementation details
- GitHub Issues - Report bugs or request features

