13 changes: 13 additions & 0 deletions .coveragerc
@@ -0,0 +1,13 @@

[run]
source = color_correction_asdfghjkl
omit =
tests/*

[report]
exclude_lines =
pragma: no cover
def __repr__
raise NotImplementedError
if __name__ == .__main__.:
pass
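Worth noting for reviewers: coverage.py interprets each `exclude_lines` entry as a regular expression, which is why the `__main__` guard above is written with `.` wildcards rather than literal quote characters. A quick stdlib-only check of that pattern:

```python
import re

# In .coveragerc, exclude_lines entries are regexes: the "." characters in
# `if __name__ == .__main__.:` match either quote style, so both forms of
# the main guard are excluded from the coverage report.
pattern = re.compile(r"if __name__ == .__main__.:")
matches_double = bool(pattern.search('if __name__ == "__main__":'))
matches_single = bool(pattern.search("if __name__ == '__main__':"))
print(matches_double, matches_single)
```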
19 changes: 9 additions & 10 deletions .github/workflows/tests.yml
@@ -1,4 +1,3 @@
# .github/workflows/test.yml
name: Test

on:
@@ -8,33 +7,33 @@ on:
jobs:
test:
name: Test
runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest, macos-latest]
python-version:
- "3.10"
- "3.11"
- "3.12"

steps:
- uses: actions/checkout@v4

- name: Install uv and set the python version
uses: astral-sh/setup-uv@v5
with:
version: "0.5.23"
version: "0.5.24"
enable-cache: true
cache-dependency-glob: "uv.lock"
python-version: ${{ matrix.python-version }}

- name: Install the project
run: uv sync --all-groups --no-group dev-model

- name: Checking linter and formatting
run: uvx ruff check

- name: Run tests
run: uv run pytest tests -v

- name: Test with Coverage
run: uv run pytest --cov=src tests/
- name: Run tests with Coverage
run: |
uv run pytest --cov-report=term-missing --cov=color_correction_asdfghjkl tests/
uv run coverage report --fail-under=35
22 changes: 22 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,27 @@
# Changelog

## [v0.0.1b1] - 2025-02-03
**Enhanced Color Correction with Improved Documentation and Evaluation**

### ✨ Features
- Enhanced color correction with improved patch comparison and metrics
- Added polynomial correction model with configurable degrees
- Implemented comprehensive color difference evaluation

### 📚 Documentation
- Added "How it works" section with visual explanation
- Updated README with polynomial correction details
- Improved section headers for better clarity
- Added sample debug output visualization
- Enhanced usage examples with evaluation results

### 🔧 Technical
- Added `calc_color_diff_patches()` method for quality evaluation
- Implemented CIE 2000 color difference calculation
- Enhanced debug visualization capabilities
- Added support for multiple correction models


## [v0.0.1b0] - 2025-02-03

### 🔧 Improvements
12 changes: 11 additions & 1 deletion Makefile
@@ -8,4 +8,14 @@ yolo-export-onnx:
half=True

test:
pytest tests -v
pytest tests -v


diff:
git diff main..{branch_name} > diff-output.txt

log:
git log --oneline main..{branch_name} > log-output.txt

update-uv-lock:
uv lock
53 changes: 46 additions & 7 deletions README.md
@@ -5,12 +5,17 @@

This package is designed to perform color correction on images using the Color Checker Classic 24 Patch card. It provides a robust solution for ensuring accurate color representation in your images.

## Installation
## 📦 Installation

```bash
pip install color-correction-asdfghjkl
```
## Usage

## 🏋️‍♀️ How it works
![How it works](assets/color-correction-how-it-works.png)


## ⚡ How to use

```python
# Step 1: Define the path to the input image
@@ -23,12 +28,15 @@ input_image = cv2.imread(image_path)
color_corrector = ColorCorrection(
detection_model="yolov8",
detection_conf_th=0.25,
correction_model="least_squares",
degree=2, # for polynomial correction model
correction_model="polynomial", # "least_squares", "affine_reg", "linear_reg"
degree=3, # for polynomial correction model
use_gpu=True,
)

# Step 4: Extract color patches from the input image
# you can set reference patches from another image (image has color checker card)
# or use the default D50
# color_corrector.set_reference_patches(image=None, debug=True)
color_corrector.set_input_patches(image=input_image, debug=True)
color_corrector.fit()
corrected_image = color_corrector.predict(
@@ -37,17 +45,48 @@ corrected_image = color_corrector.predict(
debug_output_dir="zzz",
)

# Step 5: Evaluate the color correction results
eval_result = color_corrector.calc_color_diff_patches()
print(eval_result)
```
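The usage above selects `correction_model="polynomial"` with a configurable `degree`. As an illustration only — the function names and feature set below are hypothetical, and the package's actual model may differ (for instance by including cross-channel terms) — a degree-`d` polynomial correction fitted by least squares can be sketched like this:

```python
import numpy as np

def poly_features(rgb: np.ndarray, degree: int) -> np.ndarray:
    # Expand each (r, g, b) row into a bias term plus per-channel powers
    # up to `degree`. A fuller model could also add cross terms like r*g.
    cols = [np.ones(len(rgb))]
    for d in range(1, degree + 1):
        cols.extend([rgb[:, 0] ** d, rgb[:, 1] ** d, rgb[:, 2] ** d])
    return np.stack(cols, axis=1)

def fit_poly_correction(src: np.ndarray, ref: np.ndarray, degree: int) -> np.ndarray:
    # Solve for weights W minimizing ||poly_features(src) @ W - ref||^2.
    X = poly_features(src.astype(float), degree)
    W, *_ = np.linalg.lstsq(X, ref.astype(float), rcond=None)
    return W

def apply_poly_correction(rgb: np.ndarray, W: np.ndarray, degree: int) -> np.ndarray:
    return poly_features(rgb.astype(float), degree) @ W

# Sanity check: fitting source patches to themselves should recover identity.
src = np.random.default_rng(0).random((24, 3))  # stand-in for 24 patch means
W = fit_poly_correction(src, src, degree=2)
out = apply_poly_correction(src, W, degree=2)
```

Fitting on the 24 patch pairs (input vs. reference) then applying `W` to every pixel is the general shape of such models; higher degrees fit the patches more tightly at the risk of overfitting.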
Sample output:
![Sample Output](assets/sample-output-usage.png)
- Output evaluation result:
```json
{
  "initial": {
    "min": 2.254003059526461,
    "max": 13.461066402633447,
    "mean": 8.3072755187654,
    "std": 3.123962754767539
  },
  "corrected": {
    "min": 0.30910031798755183,
    "max": 5.422311999126372,
    "mean": 1.4965478752947827,
    "std": 1.2915738724958112
  },
  "delta": {
    "min": 1.9449027415389093,
    "max": 8.038754403507074,
    "mean": 6.810727643470616,
    "std": 1.8323888822717276
  }
}
```
- Sample output debug image (polynomial degree=2):
![Sample Output](assets/sample-output-debug.jpg)
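The `delta` block in the evaluation result is the per-statistic improvement, i.e. initial minus corrected, exactly as the service code in this PR computes it:

```python
# Reproducing the "delta" block from the sample evaluation output:
# each statistic is the initial (uncorrected) value minus the corrected one.
initial = {"min": 2.254003059526461, "max": 13.461066402633447,
           "mean": 8.3072755187654, "std": 3.123962754767539}
corrected = {"min": 0.30910031798755183, "max": 5.422311999126372,
             "mean": 1.4965478752947827, "std": 1.2915738724958112}
delta = {k: initial[k] - corrected[k] for k in initial}
print(delta)
```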

## 📈 Benefits
- **Consistency**: Ensure uniform color correction across multiple images.
- **Accuracy**: Leverage the color correction matrix for precise color adjustments.
- **Flexibility**: Adaptable for various image sets with different color profiles.

![How it works](assets/color-correction-how-it-works.png)

## 🤸 TODO
- [ ] Add Loggers
- [ ] Add detection MCC:CCheckerDetector from opencv
- [ ] Add Segmentation Color Checker using YOLOv11 ONNX
- [ ] Improve validation preprocessing (e.g., auto-match-orientation CC)
- [ ] Add more analysis and evaluation metrics (Still thinking...)

<!-- write reference -->
## 📚 References
Binary file added assets/sample-output-debug.jpg
94 changes: 55 additions & 39 deletions color_correction_asdfghjkl/services/color_correction.py
@@ -1,7 +1,6 @@
import os
from typing import Literal

import colour as cl
import cv2
import numpy as np
from numpy.typing import NDArray
@@ -16,6 +15,7 @@
create_patch_tiled_image,
visualize_patch_comparison,
)
from color_correction_asdfghjkl.utils.image_processing import calc_color_diff
from color_correction_asdfghjkl.utils.visualization_utils import (
create_image_grid_visualization,
)
@@ -84,6 +84,10 @@ def __init__(
self.input_grid_image = None
self.input_debug_image = None

# Initialize correction output attributes
self.corrected_patches = None
self.corrected_grid_image = None

# Initialize model attributes
self.trained_model = None
self.correction_model = CorrectionModelFactory.create(
@@ -178,17 +182,12 @@ def _save_debug_output(
output_directory : str
Directory to save debug outputs.
"""
predicted_patches = self.correction_model.compute_correction(
input_image=np.array(self.input_patches),
)
predicted_grid = create_patch_tiled_image(predicted_patches)

before_comparison = visualize_patch_comparison(
ls_mean_in=self.input_patches,
ls_mean_ref=self.reference_patches,
)
after_comparison = visualize_patch_comparison(
ls_mean_in=predicted_patches,
ls_mean_in=self.corrected_patches,
ls_mean_ref=self.reference_patches,
)

@@ -204,7 +203,7 @@
("Reference vs Corrected", after_comparison),
("[Free Space]", None),
("Patch Input", self.input_grid_image),
("Patch Corrected", predicted_grid),
("Patch Corrected", self.corrected_grid_image),
("Patch Reference", self.reference_grid_image),
]

@@ -239,11 +238,17 @@

@property
def model_name(self) -> str:
"""Return the name of the correction model."""
return self.correction_model.__class__.__name__

@property
def img_grid_patches_ref(self) -> np.ndarray:
return create_patch_tiled_image(self.reference_color_card)
def ref_patches(self) -> tuple:
"""Return the reference patches, grid image, and debug image."""
return (
self.reference_patches,
self.reference_grid_image,
self.reference_debug_image,
)

def set_reference_patches(
self,
@@ -270,6 +275,7 @@ def set_input_patches(self, image: np.ndarray, debug: bool = False) -> None:
self.input_grid_image,
self.input_debug_image,
) = self._extract_color_patches(image=image, debug=debug)
return self.input_patches, self.input_grid_image, self.input_debug_image

def fit(self) -> tuple[NDArray, list[ColorPatchType], list[ColorPatchType]]:
"""Fit color correction model using input and reference images.
@@ -297,6 +303,12 @@ def fit(self) -> tuple[NDArray, list[ColorPatchType], list[ColorPatchType]]:
y_patches=self.reference_patches,
)

# Compute corrected patches
self.corrected_patches = self.correction_model.compute_correction(
input_image=np.array(self.input_patches),
)
self.corrected_grid_image = create_patch_tiled_image(self.corrected_patches)

return self.trained_model

def predict(
@@ -342,42 +354,37 @@ def predict(

return corrected_image

def calc_color_diff(
    self,
    image1: ImageType,
    image2: ImageType,
) -> tuple[float, float, float, float]:
    """Calculate color difference metrics between two images.

    Parameters
    ----------
    image1, image2 : NDArray
        Images to compare in BGR format.

    Returns
    -------
    Tuple[float, float, float, float]
        Minimum, maximum, mean, and standard deviation of delta E values.
    """
    rgb1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
    rgb2 = cv2.cvtColor(image2, cv2.COLOR_BGR2RGB)

    lab1 = cl.XYZ_to_Lab(cl.sRGB_to_XYZ(rgb1 / 255))
    lab2 = cl.XYZ_to_Lab(cl.sRGB_to_XYZ(rgb2 / 255))

    delta_e = cl.difference.delta_E(lab1, lab2, method="CIE 2000")

    return (
        float(np.min(delta_e)),
        float(np.max(delta_e)),
        float(np.mean(delta_e)),
        float(np.std(delta_e)),
    )
def calc_color_diff_patches(self) -> dict:
    initial_color_diff = calc_color_diff(
        image1=self.input_grid_image,
        image2=self.reference_grid_image,
    )

    corrected_color_diff = calc_color_diff(
        image1=self.corrected_grid_image,
        image2=self.reference_grid_image,
    )

    delta_color_diff = {
        "min": initial_color_diff["min"] - corrected_color_diff["min"],
        "max": initial_color_diff["max"] - corrected_color_diff["max"],
        "mean": initial_color_diff["mean"] - corrected_color_diff["mean"],
        "std": initial_color_diff["std"] - corrected_color_diff["std"],
    }

    info = {
        "initial": initial_color_diff,
        "corrected": corrected_color_diff,
        "delta": delta_color_diff,
    }

    return info
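The ΔE helper now imported from `utils.image_processing` evidently returns the same four statistics as a dict keyed `min`/`max`/`mean`/`std` (the README's sample output confirms the keys). A numpy-only sketch of that summarization step — the CIE 2000 computation itself is elided, and the function name here is assumed, not the library's:

```python
import numpy as np

def summarize_delta_e(delta_e: np.ndarray) -> dict:
    # Reduce a per-pixel delta E array to the summary statistics seen in
    # calc_color_diff_patches' output dict.
    return {
        "min": float(np.min(delta_e)),
        "max": float(np.max(delta_e)),
        "mean": float(np.mean(delta_e)),
        "std": float(np.std(delta_e)),
    }

stats = summarize_delta_e(np.array([1.0, 2.0, 3.0]))
print(stats)
```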


if __name__ == "__main__":
# Step 1: Define the path to the input image
image_path = "asset/images/cc-19.png"
image_path = "asset/images/cc-1.jpg"

# Step 2: Load the input image
input_image = cv2.imread(image_path)
@@ -386,16 +393,25 @@ def calc_color_diff(
color_corrector = ColorCorrection(
detection_model="yolov8",
detection_conf_th=0.25,
correction_model="least_squares",
degree=2, # for polynomial correction model
correction_model="polynomial",
# correction_model="least_squares",
# correction_model="affine_reg",
# correction_model="linear_reg",
degree=3, # for polynomial correction model
use_gpu=True,
)

# Step 4: Extract color patches from the input image
# You can set reference patches from another image that contains a
# color checker card, or use the default D50 reference.
# color_corrector.set_reference_patches(image=None, debug=True)
color_corrector.set_input_patches(image=input_image, debug=True)
color_corrector.fit()
corrected_image = color_corrector.predict(
input_image=input_image,
debug=True,
debug_output_dir="zzz",
)

eval_result = color_corrector.calc_color_diff_patches()
print(eval_result)