Commit 0e121db

Merge pull request #5 from agfianf/feat/evaluation
Feat/evaluation
2 parents 778a14d + 6647bc3 commit 0e121db

File tree

10 files changed: +208 additions, −85 deletions


.coveragerc

Lines changed: 13 additions & 0 deletions

```diff
@@ -0,0 +1,13 @@
+
+[run]
+source = color_correction_asdfghjkl
+omit =
+    tests/*
+
+[report]
+exclude_lines =
+    pragma: no cover
+    def __repr__
+    raise NotImplementedError
+    if __name__ == .__main__.:
+    pass
```

.github/workflows/tests.yml

Lines changed: 9 additions & 10 deletions

```diff
@@ -1,4 +1,3 @@
-# .github/workflows/test.yml
 name: Test
 
 on:
@@ -8,33 +7,33 @@ on:
 jobs:
   test:
     name: Test
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.os }}
     strategy:
       matrix:
+        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version:
          - "3.10"
          - "3.11"
          - "3.12"
 
    steps:
      - uses: actions/checkout@v4
-
+
      - name: Install uv and set the python version
        uses: astral-sh/setup-uv@v5
        with:
-          version: "0.5.23"
+          version: "0.5.24"
          enable-cache: true
          cache-dependency-glob: "uv.lock"
          python-version: ${{ matrix.python-version }}
-
+
      - name: Install the project
        run: uv sync --all-groups --no-group dev-model
 
      - name: Checking linter and formatting
        run: uvx ruff check
 
-      - name: Run tests
-        run: uv run pytest tests -v
-
-
-      - name: Test with Coverage
-        run: uv run pytest --cov=src tests/
+      - name: Run tests with Coverage
+        run: |
+          uv run pytest --cov-report=term-missing --cov=color_correction_asdfghjkl tests/
+          uv run coverage report --fail-under=35
```

CHANGELOG.md

Lines changed: 22 additions & 0 deletions

```diff
@@ -1,5 +1,27 @@
 # Changelog
 
+## [v0.0.1b1] - 2025-02-03
+**Enhanced Color Correction with Improved Documentation and Evaluation**
+
+### ✨ Features
+- Enhanced color correction with improved patch comparison and metrics
+- Added polynomial correction model with configurable degrees
+- Implemented comprehensive color difference evaluation
+
+### 📚 Documentation
+- Added "How it works" section with visual explanation
+- Updated README with polynomial correction details
+- Improved section headers for better clarity
+- Added sample debug output visualization
+- Enhanced usage examples with evaluation results
+
+### 🔧 Technical
+- Added `calc_color_diff_patches()` method for quality evaluation
+- Implemented CIE 2000 color difference calculation
+- Enhanced debug visualization capabilities
+- Added support for multiple correction models
+
+
 ## [v0.0.1b0] - 2025-02-03
 
 ### 🔧 Improvements
```

Makefile

Lines changed: 11 additions & 1 deletion

```diff
@@ -8,4 +8,14 @@ yolo-export-onnx:
 	half=True
 
 test:
-	pytest tests -v
+	pytest tests -v
+
+
+diff:
+	git diff main..{branch_name} > diff-output.txt
+
+log:
+	git log --oneline main..{branch_name} > log-output.txt
+
+update-uv-lock:
+	uv lock
```

README.md

Lines changed: 46 additions & 7 deletions

````diff
@@ -5,12 +5,17 @@
 
 This package is designed to perform color correction on images using the Color Checker Classic 24 Patch card. It provides a robust solution for ensuring accurate color representation in your images.
 
-## Installation
+## 📦 Installation
 
 ```bash
 pip install color-correction-asdfghjkl
 ```
-## Usage
+
+## 🏋️‍♀️ How it works
+![How it works](assets/color-correction-how-it-works.png)
+
+
+## ⚡ How to use
 
 ```python
 # Step 1: Define the path to the input image
@@ -23,12 +28,15 @@ input_image = cv2.imread(image_path)
 color_corrector = ColorCorrection(
     detection_model="yolov8",
     detection_conf_th=0.25,
-    correction_model="least_squares",
-    degree=2, # for polynomial correction model
+    correction_model="polynomial", # "least_squares", "affine_reg", "linear_reg"
+    degree=3, # for polynomial correction model
     use_gpu=True,
 )
 
 # Step 4: Extract color patches from the input image
+# you can set reference patches from another image (image has color checker card)
+# or use the default D50
+# color_corrector.set_reference_patches(image=None, debug=True)
 color_corrector.set_input_patches(image=input_image, debug=True)
 color_corrector.fit()
 corrected_image = color_corrector.predict(
@@ -37,17 +45,48 @@ corrected_image = color_corrector.predict(
     debug_output_dir="zzz",
 )
 
+# Step 5: Evaluate the color correction results
+eval_result = color_corrector.calc_color_diff_patches()
+print(eval_result)
 ```
-Sample output:
-![Sample Output](assets/sample-output-usage.png)
+- Output evaluation result:
+```json
+{
+    "initial": {
+        "min": 2.254003059526461,
+        "max": 13.461066402633447,
+        "mean": 8.3072755187654,
+        "std": 3.123962754767539,
+    },
+    "corrected": {
+        "min": 0.30910031798755183,
+        "max": 5.422311999126372,
+        "mean": 1.4965478752947827,
+        "std": 1.2915738724958112,
+    },
+    "delta": {
+        "min": 1.9449027415389093,
+        "max": 8.038754403507074,
+        "mean": 6.810727643470616,
+        "std": 1.8323888822717276,
+    },
+}
+```
+- Sample output debug image (polynomial degree=2):
+![Sample Output](assets/sample-output-debug.jpg)
 
 ## 📈 Benefits
 - **Consistency**: Ensure uniform color correction across multiple images.
 - **Accuracy**: Leverage the color correction matrix for precise color adjustments.
 - **Flexibility**: Adaptable for various image sets with different color profiles.
 
-![How it works](assets/color-correction-how-it-works.png)
 
+## 🤸 TODO
+- [ ] Add Loggers
+- [ ] Add detection MCC:CCheckerDetector from opencv
+- [ ] Add Segmentation Color Checker using YOLOv11 ONNX
+- [ ] Improve validation preprocessing (e.g., auto-match-orientation CC)
+- [ ] Add more analysis and evaluation metrics (Still thinking...)
 
 <!-- write reference -->
 ## 📚 References
````
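The `delta` block in the sample evaluation output is simply the per-statistic difference between the initial and corrected ΔE summaries, so positive deltas mean the correction reduced the color error. A minimal sketch of that relationship, using the `initial` and `corrected` dicts copied from the sample output above as stand-ins for what `calc_color_diff_patches()` returns:

```python
# Values copied from the sample evaluation output above.
initial = {
    "min": 2.254003059526461,
    "max": 13.461066402633447,
    "mean": 8.3072755187654,
    "std": 3.123962754767539,
}
corrected = {
    "min": 0.30910031798755183,
    "max": 5.422311999126372,
    "mean": 1.4965478752947827,
    "std": 1.2915738724958112,
}

# delta = initial - corrected per statistic; positive means improvement.
delta = {key: initial[key] - corrected[key] for key in initial}
```

Here `delta["mean"]` comes out to roughly 6.81, matching the `delta` block in the sample output.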

assets/sample-output-debug.jpg

Binary file added (380 KB)

color_correction_asdfghjkl/services/color_correction.py

Lines changed: 55 additions & 39 deletions

```diff
@@ -1,7 +1,6 @@
 import os
 from typing import Literal
 
-import colour as cl
 import cv2
 import numpy as np
 from numpy.typing import NDArray
@@ -16,6 +15,7 @@
     create_patch_tiled_image,
     visualize_patch_comparison,
 )
+from color_correction_asdfghjkl.utils.image_processing import calc_color_diff
 from color_correction_asdfghjkl.utils.visualization_utils import (
     create_image_grid_visualization,
 )
@@ -84,6 +84,10 @@ def __init__(
         self.input_grid_image = None
         self.input_debug_image = None
 
+        # Initialize correction output attributes
+        self.corrected_patches = None
+        self.corrected_grid_image = None
+
         # Initialize model attributes
         self.trained_model = None
         self.correction_model = CorrectionModelFactory.create(
@@ -178,17 +182,12 @@ def _save_debug_output(
         output_directory : str
             Directory to save debug outputs.
         """
-        predicted_patches = self.correction_model.compute_correction(
-            input_image=np.array(self.input_patches),
-        )
-        predicted_grid = create_patch_tiled_image(predicted_patches)
-
         before_comparison = visualize_patch_comparison(
             ls_mean_in=self.input_patches,
             ls_mean_ref=self.reference_patches,
         )
         after_comparison = visualize_patch_comparison(
-            ls_mean_in=predicted_patches,
+            ls_mean_in=self.corrected_patches,
             ls_mean_ref=self.reference_patches,
         )
 
@@ -204,7 +203,7 @@ def _save_debug_output(
             ("Reference vs Corrected", after_comparison),
             ("[Free Space]", None),
             ("Patch Input", self.input_grid_image),
-            ("Patch Corrected", predicted_grid),
+            ("Patch Corrected", self.corrected_grid_image),
             ("Patch Reference", self.reference_grid_image),
         ]
 
@@ -239,11 +238,17 @@ def _create_debug_directory(self, base_dir: str) -> str:
 
     @property
     def model_name(self) -> str:
+        "Return the name of the correction model."
         return self.correction_model.__class__.__name__
 
     @property
-    def img_grid_patches_ref(self) -> np.ndarray:
-        return create_patch_tiled_image(self.reference_color_card)
+    def ref_patches(self) -> np.ndarray:
+        """Return grid image of reference color patches."""
+        return (
+            self.reference_patches,
+            self.reference_grid_image,
+            self.reference_debug_image,
+        )
 
     def set_reference_patches(
         self,
@@ -270,6 +275,7 @@ def set_input_patches(self, image: np.ndarray, debug: bool = False) -> None:
             self.input_grid_image,
             self.input_debug_image,
         ) = self._extract_color_patches(image=image, debug=debug)
+        return self.input_patches, self.input_grid_image, self.input_debug_image
 
     def fit(self) -> tuple[NDArray, list[ColorPatchType], list[ColorPatchType]]:
         """Fit color correction model using input and reference images.
@@ -297,6 +303,12 @@ def fit(self) -> tuple[NDArray, list[ColorPatchType], list[ColorPatchType]]:
             y_patches=self.reference_patches,
         )
 
+        # Compute corrected patches
+        self.corrected_patches = self.correction_model.compute_correction(
+            input_image=np.array(self.input_patches),
+        )
+        self.corrected_grid_image = create_patch_tiled_image(self.corrected_patches)
+
         return self.trained_model
 
     def predict(
@@ -342,42 +354,37 @@ def predict(
 
         return corrected_image
 
-    def calc_color_diff(
-        self,
-        image1: ImageType,
-        image2: ImageType,
-    ) -> tuple[float, float, float, float]:
-        """Calculate color difference metrics between two images.
-
-        Parameters
-        ----------
-        image1, image2 : NDArray
-            Images to compare in BGR format.
+    def calc_color_diff_patches(self) -> dict:
+        initial_color_diff = calc_color_diff(
+            image1=self.input_grid_image,
+            image2=self.reference_grid_image,
+        )
 
-        Returns
-        -------
-        Tuple[float, float, float, float]
-            Minimum, maximum, mean, and standard deviation of delta E values.
-        """
-        rgb1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
-        rgb2 = cv2.cvtColor(image2, cv2.COLOR_BGR2RGB)
+        corrected_color_diff = calc_color_diff(
+            image1=self.corrected_grid_image,
+            image2=self.reference_grid_image,
+        )
 
-        lab1 = cl.XYZ_to_Lab(cl.sRGB_to_XYZ(rgb1 / 255))
-        lab2 = cl.XYZ_to_Lab(cl.sRGB_to_XYZ(rgb2 / 255))
+        delta_color_diff = {
+            "min": initial_color_diff["min"] - corrected_color_diff["min"],
+            "max": initial_color_diff["max"] - corrected_color_diff["max"],
+            "mean": initial_color_diff["mean"] - corrected_color_diff["mean"],
+            "std": initial_color_diff["std"] - corrected_color_diff["std"],
+        }
 
-        delta_e = cl.difference.delta_E(lab1, lab2, method="CIE 2000")
+        info = {
+            "initial": initial_color_diff,
+            "corrected": corrected_color_diff,
+            "delta": delta_color_diff,
+        }
 
-        return (
-            float(np.min(delta_e)),
-            float(np.max(delta_e)),
-            float(np.mean(delta_e)),
-            float(np.std(delta_e)),
-        )
+        return info
 
 
 if __name__ == "__main__":
     # Step 1: Define the path to the input image
     image_path = "asset/images/cc-19.png"
+    image_path = "asset/images/cc-1.jpg"
 
     # Step 2: Load the input image
     input_image = cv2.imread(image_path)
@@ -386,16 +393,25 @@ def calc_color_diff(
     color_corrector = ColorCorrection(
         detection_model="yolov8",
         detection_conf_th=0.25,
-        correction_model="least_squares",
-        degree=2, # for polynomial correction model
+        correction_model="polynomial",
+        # correction_model="least_squares",
+        # correction_model="affine_reg",
+        # correction_model="linear_reg",
+        degree=3, # for polynomial correction model
         use_gpu=True,
     )
 
     # Step 4: Extract color patches from the input image
+    # you can set reference patches from another image (image has color checker card)
+    # or use the default D50
+    # color_corrector.set_reference_patches(image=None, debug=True)
     color_corrector.set_input_patches(image=input_image, debug=True)
    color_corrector.fit()
    corrected_image = color_corrector.predict(
        input_image=input_image,
        debug=True,
        debug_output_dir="zzz",
    )
+
+    eval_result = color_corrector.calc_color_diff_patches()
+    print(eval_result)
```
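The diff above replaces the inline `calc_color_diff` method with a `calc_color_diff` helper imported from `color_correction_asdfghjkl.utils.image_processing`, whose body is not shown in this commit. As a rough, self-contained sketch of the statistics dict it appears to return: the hypothetical `calc_color_diff_sketch` below summarises per-pixel color differences, but uses CIE76 (plain Euclidean distance in Lab space) on already-converted Lab arrays instead of the BGR-to-Lab conversion and CIE 2000 formula the package uses, to stay dependency-free:

```python
import numpy as np


def calc_color_diff_sketch(lab1: np.ndarray, lab2: np.ndarray) -> dict:
    """Summarise per-pixel color differences between two Lab images.

    Hypothetical stand-in for the package's calc_color_diff helper:
    CIE76 (Euclidean distance per pixel in Lab space) replaces CIE 2000,
    and inputs are Lab arrays rather than BGR images.
    """
    # Per-pixel delta E: Euclidean norm over the last (channel) axis
    delta_e = np.linalg.norm(lab1.astype(float) - lab2.astype(float), axis=-1)
    return {
        "min": float(np.min(delta_e)),
        "max": float(np.max(delta_e)),
        "mean": float(np.mean(delta_e)),
        "std": float(np.std(delta_e)),
    }


# Identical images should report zero difference everywhere
identical = calc_color_diff_sketch(np.zeros((2, 2, 3)), np.zeros((2, 2, 3)))
```

`calc_color_diff_patches()` then calls such a helper twice (input vs reference, corrected vs reference) and subtracts the two stats dicts to build the `delta` block.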
