DeepLabV3+ LULC Segmentation is a production-ready, state-of-the-art semantic segmentation framework for Land Use/Land Cover mapping from satellite imagery, offering end-to-end solutions from data preprocessing to intelligent land cover analysis
Tip
New in v2.0: Multi-architecture support with U-Net variants, advanced data augmentation pipeline, and production-ready web interface for real-time land cover analysis.
The DeepLabV3+ LULC Technical Report is now available. See details at: DeepLabV3+ for LULC Segmentation
DeepLabV3+ LULC Segmentation converts satellite imagery into structured land cover maps with industry-leading accuracy powering environmental monitoring applications for researchers, government agencies, and enterprises worldwide. Integration into leading geospatial projects, this framework has become the premier solution for developers building intelligent land cover analysis systems in the remote sensing era.
-
DeepLabV3+ with EfficientNet-B2 β State-of-the-Art LULC Segmentation
Single model achieves 84.01% pixel accuracy across 8 land cover classes with 55.31% mIoU. Handles complex landscape patterns from urban areas to natural environments. -
Multi-Architecture Support β Flexible Model Selection
Choose from DeepLabV3+, U-Net with ResNet34, and U-Net with SegFormer encoders. Each architecture optimized for different deployment scenarios and accuracy requirements. -
Production-Ready Pipeline β From Research to Deployment
Complete framework with Flask web interface, batch processing, and comprehensive evaluation metrics. Seamlessly transition from model training to production deployment.
-
Multi-Architecture Framework:
- Enhanced DeepLabV3+ with EfficientNet-B2, achieving 55.31% mIoU with custom SE-attention mechanism
- U-Net with ResNet34, optimized for balanced performance with comprehensive evaluation metrics
- Advanced training pipeline with PyTorch Lightning integration and mixed precision training
-
Advanced Training Pipeline:
- Integrated sophisticated data preprocessing with satellite-specific normalization
- Smart augmentation strategies including geometric and photometric transformations
- Comprehensive evaluation framework with detailed per-class analysis and confusion matrices
-
Production Features:
- Interactive Flask Web Application with drag-and-drop inference and real-time visualization
- Batch processing capabilities for large-scale satellite image analysis
- Model comparison tools and comprehensive performance benchmarking
Important
Pre-trained models are stored using Git LFS. Make sure you have Git LFS installed before cloning the repository to download the actual model files.
Install Git LFS and PyTorch following the official guide, then clone and set up the repository:
# Install Git LFS (if not already installed)
git lfs install
# Clone the repository (Git LFS will automatically download model files)
git clone https://github.com/VishalPainjane/deeplabv3-lulc-segmentation.git
cd deeplabv3-lulc-segmentation
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txtWhenever you clone this repo on a different machine or re-clone it:
git lfs install
git clone https://github.com/VishalPainjane/deeplabv3-lulc-segmentation.gitGit will automatically pull the actual large files tracked by LFS.
Launch the interactive Flask web interface:
# Start the web application
python app.py
# Application will be available at http://localhost:5000Access the web interface for:
- Drag & drop image upload
- Real-time LULC segmentation
- Interactive result visualization
- Model performance metrics
- Download prediction results
All models trained on SEN-2 LULC preprocessed dataset for 50 epochs with advanced augmentation pipeline.
| Model | Encoder | mIoU | Pixel Acc | Params | GPU Memory | Inference (ms) | Model File |
|---|---|---|---|---|---|---|---|
| DeepLabV3+ | EfficientNet-B2 | 55.31 | 84.01% | 8.1M | 3.2GB | 45 | deeplabv3_effecientnet_b2.pth |
| U-Net | ResNet34 | 46.12 | 81.24% | 24.4M | 5.1GB | 38 | unet_resnet34.pth |
| U-Net | SegFormer | 47.28 | 82.67% | 47.3M | 8.7GB | 52 | unet_segformer.pth |
| Class ID | Land Cover | IoU | F1-Score | Precision | Recall | Area Coverage |
|---|---|---|---|---|---|---|
| 1 | Water Bodies | 37.51 | 54.55 | 61.2% | 49.8% | 18.7% |
| 2 | Dense Forest | 40.03 | 57.15 | 78.9% | 45.1% | 8.2% |
| 3 | Built up | 48.52 | 65.31 | 69.4% | 61.7% | 15.4% |
| 4 | Agriculture land | 53.53 | 69.84 | 72.1% | 67.8% | 28.9% |
| 5 | Barren land | 69.31 | 81.97 | 85.3% | 78.9% | 3.1% |
| 6 | Fallow land | 88.03 | 93.63 | 94.7% | 92.6% | 21.8% |
| 7 | Sparse Forest | 50.27 | 66.89 | 71.4% | 62.9% | 1.6% |
Pre-trained models available in the models/ directory:
| Model | Use Case | Accuracy | Speed | Size | Model File |
|---|---|---|---|---|---|
| DeepLabV3+ Server | High accuracy research | mIoU: 55.31 | 45ms | 32MB | deeplabv3_effecientnet_b2.pth |
| U-Net ResNet34 | Balanced performance | mIoU: 46.12 | 38ms | 97MB | unet_resnet34.pth |
| U-Net SegFormer | Transformer-based | mIoU: 47.28 | 52ms | 189MB | unet_segformer.pth |
The framework expects the following dataset structure (as created by data_preprocessing.py):
SEN-2_LULC_preprocessed/
βββ train_images/ # Training satellite images
βββ train_masks/ # Training segmentation masks
βββ val_images/ # Validation satellite images
βββ val_masks/ # Validation segmentation masks
- Deforestation tracking with temporal analysis using satellite time series
- Urban expansion monitoring for sustainable city planning and development
- Agricultural land assessment for food security and crop yield prediction
- Water body changes monitoring due to climate variations and human impact
- Land use compliance monitoring for regulatory enforcement
- Environmental impact assessment for infrastructure projects
- Disaster response and damage assessment using before/after imagery
- Carbon footprint analysis and emissions reporting
- Real estate development site suitability analysis
- Insurance risk assessment for natural disasters and climate risks
- Precision agriculture for optimized farming and resource management
- Infrastructure planning and optimal site selection for renewable energy
This project is released under the MIT License.
Acknowledgments: This work was supported by [Your Institution/Grant]. Special thanks to the open-source community and contributors to PyTorch, segmentation-models-pytorch, and the geospatial data science ecosystem.
For support, questions, or collaboration opportunities:
- π§ Email: vishalpainjane22@gmail.com
- π¬ GitHub Discussions: Join the community
- π Web Demo: Try the live application
\qualitative_predictions\comparison_9.png)
\qualitative_predictions\comparison_28.png)