This project focuses on brain tumor segmentation using MRI images and a deep learning approach. The U-Net architecture, a popular convolutional neural network for biomedical image segmentation, is used to distinguish tumor regions from non-tumor areas in brain MRI scans. The dataset used for this task is the LGG MRI Segmentation Dataset, which contains paired MRI images and corresponding tumor masks.
Brain tumors, particularly low-grade gliomas (LGGs), are life-threatening and require timely detection. Accurate segmentation of the tumor from MRI images is critical for treatment planning. Manual segmentation is time-consuming and prone to human error. This project automates the segmentation process with U-Net to improve precision and reduce the time required for diagnosis.

- Develop a U-Net model to perform pixel-wise segmentation of brain tumors in MRI images.
- Enhance the model using image preprocessing and data augmentation techniques.
- Achieve high segmentation performance, focusing on metrics like Dice Coefficient and Intersection over Union (IoU).
- The dataset is sourced from Kaggle: LGG MRI Segmentation Dataset.
- Images are preprocessed to match the input size for the model.
The LGG MRI Segmentation Dataset is available on Kaggle via the following link:
Kaggle: LGG MRI Segmentation Dataset
- Resize images and masks to a consistent shape.
- Convert images to grayscale when necessary.
- Perform data augmentation (e.g., random rotations, flips) to increase model robustness.
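The resizing and paired augmentation steps above can be sketched in NumPy (the project itself uses OpenCV/scikit-image for this; the function names and the 128×128 target size here are illustrative assumptions):

```python
import numpy as np

def preprocess(image, mask, size=128):
    """Resize an image/mask pair to (size, size) with nearest-neighbour
    sampling, then scale intensities to [0, 1] and binarise the mask.
    A pure-NumPy stand-in for the cv2/skimage resize used in practice."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    image = image[rows][:, cols].astype(np.float32) / 255.0
    mask = (mask[rows][:, cols] > 0).astype(np.float32)
    return image, mask

def augment(image, mask, rng):
    """Apply the SAME random flip/rotation to image and mask so the
    pixel-wise labels stay aligned with the image."""
    if rng.random() < 0.5:                  # random horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    k = rng.integers(0, 4)                  # random 90-degree rotation
    return np.rot90(image, k), np.rot90(mask, k)
```

Applying identical transforms to image and mask is the key detail: augmenting only the image would silently corrupt the pixel-wise labels.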
The U-Net model is designed for image segmentation tasks, featuring:
- Encoder (Contracting Path): Consists of repeated convolutional layers followed by max-pooling to downsample the feature maps.
- Bottleneck: Central part that captures abstracted features of the image.
- Decoder (Expanding Path): Involves upsampling layers and concatenation with corresponding encoder layers to restore spatial resolution.
- Dropout and Batch Normalization are used to improve generalization.
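A minimal Keras sketch of this encoder-bottleneck-decoder layout (the depth, filter counts, and dropout rate are assumptions; the project's actual model may use different values):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by batch normalisation and ReLU.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return x

def build_unet(input_shape=(128, 128, 1), base_filters=16):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for i in range(3):                              # contracting path
        x = conv_block(x, base_filters * 2 ** i)
        skips.append(x)                             # saved for skip connections
        x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.3)(conv_block(x, base_filters * 8))  # bottleneck
    for i in reversed(range(3)):                    # expanding path
        x = layers.Conv2DTranspose(base_filters * 2 ** i, 2,
                                   strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])     # restore spatial detail
        x = conv_block(x, base_filters * 2 ** i)
    # 1x1 convolution -> per-pixel tumour probability in [0, 1]
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```

The concatenation of each decoder stage with its matching encoder feature map is what lets the network refine segmentation boundaries after downsampling.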
- The model is trained using the Adam optimizer.
- EarlyStopping is employed to prevent overfitting by stopping training when the validation loss plateaus.
- ModelCheckpoint ensures that the best-performing model (based on validation performance) is saved.
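The optimizer and callbacks can be wired together as follows (a sketch only: the checkpoint filename, patience values, and learning rate are placeholders, not the project's actual settings):

```python
import tensorflow as tf

def train(model, train_ds, val_ds, epochs=100):
    """Compile with Adam and fit with EarlyStopping + ModelCheckpoint."""
    callbacks = [
        # Stop when validation loss plateaus; keep the best weights.
        tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                         restore_best_weights=True),
        # Save only the best-performing model seen so far.
        tf.keras.callbacks.ModelCheckpoint("best_unet.keras",
                                           monitor="val_loss",
                                           save_best_only=True),
    ]
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy")
    return model.fit(train_ds, validation_data=val_ds,
                     epochs=epochs, callbacks=callbacks)
```

Monitoring `val_loss` for both callbacks keeps the saved checkpoint consistent with the early-stopping decision.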
- The primary metric used for evaluating segmentation performance is the Dice Coefficient, which measures the overlap between predicted segmentation and ground truth masks.
- Additional metrics include IoU and pixel-wise accuracy.
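Both overlap metrics have simple definitions on binary masks; the NumPy helpers below are illustrative, not the project's training-time Keras implementations:

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

def iou(y_true, y_pred, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B|; never larger than Dice for the same masks."""
    y_true, y_pred = y_true.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y_true, y_pred).sum()
    union = np.logical_or(y_true, y_pred).sum()
    return (intersection + eps) / (union + eps)
```

The small `eps` term keeps both metrics well-defined when a scan contains no tumor pixels at all, a common case in this dataset.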
- The U-Net model successfully segments tumor regions from the MRI images.
- Data augmentation improves model generalization on unseen data.
- Metrics indicate a high degree of overlap between predicted tumor regions and ground-truth masks, with a Dice Coefficient comparable to published benchmarks on this dataset.

- TensorFlow: Model building and training.
- OpenCV: Image processing and visualization.
- Scikit-image: Image transformation utilities.
- NumPy and Pandas: Data handling and matrix operations.
- Matplotlib: Visualizing training progress and results.
- U-Net's encoder-decoder structure: Highly effective for pixel-wise image segmentation, as the skip connections between encoder and decoder let the network recover spatial information and refine segmentation boundaries.
- Data Augmentation: Critical for preventing overfitting, especially with limited medical imaging data.
- Preprocessing: Proper resizing, normalization, and augmentation play an essential role in enhancing model performance.
- Hyperparameter Tuning: Explore different optimizer configurations and learning rates to further boost performance.
- 3D Segmentation: Extend the model to handle 3D MRI data for volumetric tumor segmentation.
- Real-time Segmentation: Improve the model's inference speed for deployment in real-time diagnostic systems.
This project showcases the ability of deep learning models, specifically U-Net, to accurately segment brain tumors from MRI scans. By automating this critical step in the diagnosis process, the model has the potential to greatly assist medical professionals, improving diagnosis speed and accuracy.
- Clone the repository.
- Install required libraries using `pip install -r requirements.txt`.
- Ensure the dataset is downloaded from Kaggle.
- Run the Jupyter notebook or Python script to train and evaluate the model.


