Commit 41c7d4a

Merge pull request #6 from dw42CSCE/main: Implementing NMF and Adaptive Thresholding

2 parents a32e11c + 19747cb

22 files changed: +860 −73 lines

.github/dependabot.yml

Lines changed: 13 additions & 0 deletions

```yaml
version: 2
updates:
  # Maintain dependencies for GitHub Actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"

  # Maintain dependencies for pip
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "monthly"

```

.github/workflows/build_docs.yml

Lines changed: 69 additions & 0 deletions

```yaml
name: Build Sphinx Documentation

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - 2-add-continuous-integration
      - main

jobs:
  build:
    runs-on: ubuntu-latest  # You can also use `windows-latest` or `macos-latest` if needed

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'  # Set your preferred Python version

      - name: Add contourusv to PYTHONPATH
        run: echo "PYTHONPATH=$PYTHONPATH:$(pwd)/contourusv" >> $GITHUB_ENV

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install sphinx sphinx_rtd_theme opencv-python numpy pandas tqdm scipy matplotlib codecarbon scikit-learn

      - name: Build the documentation
        run: |
          # Create a new branch for the documentation
          git checkout --orphan gh-pages

          # Generate reStructuredText files from the source code
          sphinx-apidoc -o sphinx/ contourusv/ -f

          rm -rf docs/

          # Ensure the docs directory exists
          mkdir -p docs

          # Build the HTML documentation
          sphinx-build -b html sphinx/ docs/
          pwd
          ls

          # Add a .nojekyll file to bypass Jekyll processing on GitHub Pages
          touch ./docs/.nojekyll

      - name: Commit and push generated docs
        run: |
          ls
          git config --global user.email "[email protected]"
          git config --global user.name "Dallas Wade"
          git config --global pull.rebase false  # Or true, or --ff-only based on your preference

          # Ensure docs/ is staged properly
          git add docs/
          git status  # Debugging: See if anything is staged
          git commit -m "Update Sphinx documentation" || echo "No changes to commit."

          # Push the changes
          git push origin gh-pages --force
```

.gitignore

Lines changed: 5 additions & 1 deletion

```diff
-src/__pycache__
+contourusv/__pycache__
+venv
+contourusv/output/
+contourusv/Experiments/
+contourusv/test.ipynb
```

contourusv/__init__.py

Whitespace-only changes.
Lines changed: 226 additions & 0 deletions

```
Metadata-Version: 2.2
Name: contourusv
Version: 0.1.0
Summary: USV Detection Pipeline
Author-email: Sabah Anis <[email protected]>
License: MIT
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: numpy
Requires-Dist: mne
Requires-Dist: matplotlib
Requires-Dist: scipy
Requires-Dist: pandas
Requires-Dist: opencv-python
Requires-Dist: tqdm
Requires-Dist: codecarbon
Requires-Dist: seaborn
Requires-Dist: pillow
```

# ContourUSV: Ultrasonic Vocalization Detection Pipeline

![Python Version](https://img.shields.io/badge/python-%3E%3D3.9-blue.svg)
![License](https://img.shields.io/badge/license-MIT-green.svg)

ContourUSV is an automated pipeline for detecting ultrasonic vocalizations (USVs) in audio recordings. The system uses spectrogram analysis combined with advanced image processing techniques to identify and classify 22kHz and 50kHz USVs.

<!-- ## Features

- Audio preprocessing with bandpass filtering and normalization
- Spectrogram generation with customizable parameters
- Advanced image processing for noise reduction:
  - Median filtering
  - Otsu's thresholding
  - Contrast Limited Adaptive Histogram Equalization (CLAHE)
  - Morphological operations
- Contour-based USV detection
- Annotation generation from multiple file formats (HTML, Excel, CSV)
- Comprehensive evaluation metrics:
  - Precision, Recall, F1 Score, Specificity
- Carbon emissions tracking via CodeCarbon
- Parallel processing support -->

## Installation

1. **Clone the repository:**
   ```bash
   git clone https://github.com/yourusername/contourusv.git
   cd contourusv
   ```

2. **Create and activate a virtual environment:**
   ```bash
   python -m venv venv
   source venv/bin/activate  # Linux/MacOS
   venv\Scripts\activate     # Windows
   ```

3. **Install dependencies:**
   ```bash
   pip install -e .
   ```

## Data Directory Structure

Organize your input data using the following structure:

```
root_path/                 # Passed via --root_path argument
├── EXPERIMENT_NAME/       # Passed via --experiment argument
│   └── TRIAL_NAME/        # Passed via --trial argument
│       ├── *.wav          # Audio recordings
│       ├── *.WAV          # (Alternative capitalization)
│       ├── *.html         # HTML annotations
│       ├── *.xlsx         # Excel annotations
│       └── *.csv          # CSV annotations
```

**Example concrete structure:**
```
/Users/username/data/
└── PTSD16/
    └── ACQ/
        ├── rat12_day1.wav
        ├── rat12_day1.html
        ├── rat13_day1.WAV
        └── rat13_day1.html
```

**File requirements:**
- Audio files: must have a `.wav` or `.WAV` extension
- Annotation files: must match the audio filenames and reside in the same directory
- Supported annotation formats: HTML, Excel, CSV

## Output Structure

```
output/
├── EXPERIMENT_NAME/
│   ├── TRIAL_NAME/
│   │   ├── contour_detections/
│   │   │   └── *.csv (detection annotations)
│   │   ├── evaluation_results/
│   │   │   └── Evaluation_*.csv (performance metrics)
│   │   └── spectrograms/
│   │       └── *.png (annotated spectrograms)
│   └── ground_truth_annotations/
│       └── *.csv (processed ground truth)
```

## Usage

In the `src` directory, execute the following command to run the detection pipeline.

### Basic Command
```bash
python main.py \
    --root_path /path/to/your/data \
    --experiment EXPERIMENT_NAME \
    --trial TRIAL_NAME \
    --file_ext ANNOTATION_FILE_EXT
```

### Example Command
```bash
python main.py \
    --root_path /Users/username/data \
    --experiment PTSD16 \
    --trial ACQ \
    --file_ext .html
```

### Required Arguments
| Argument | Description | Example |
|---------------|-------------------------------------------|------------------|
| `--root_path` | Root directory containing experiment data | `/data/studies` |
| `--experiment`| Name of the experiment | `PTSD16` |
| `--trial` | Name of the trial/condition | `ACQ` |
| `--file_ext` | Annotation file extension (`.html`, `.xlsx`, `.csv`) | `.html` |

### Optional Parameters
| Parameter | Default | Description |
|----------------|---------|------------------------------------------|
| `--overlap` | 3 | Overlap duration between windows (seconds) |
| `--winlen` | 10 | Window length for processing (seconds) |
| `--freq_min` | 15 | Minimum frequency for detection (kHz) |
| `--freq_max` | 115 | Maximum frequency for detection (kHz) |
| `--wsize` | 2500 | Window size for processing |
| `--th_perc` | 95 | Percentile threshold for noise reduction |
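The `--overlap` and `--winlen` parameters control how a recording is split into analysis windows. A minimal sketch of that windowing arithmetic, using a hypothetical helper (not code from the repository) that only assumes the two CLI defaults above:

```python
def window_starts(duration, winlen=10.0, overlap=3.0):
    """Start times (in seconds) of analysis windows covering a
    recording of `duration` seconds. Each window spans `winlen`
    seconds and overlaps the previous one by `overlap` seconds,
    so consecutive windows advance by winlen - overlap."""
    hop = winlen - overlap  # 7 s with the defaults above
    starts = []
    t = 0.0
    while t < duration:
        starts.append(t)
        t += hop
    return starts
```

With the defaults, a 20-second recording yields windows starting at 0 s, 7 s, and 14 s, the last one truncated at the end of the file.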
<!-- ## Pipeline Architecture

1. **Preprocessing**
   - Audio normalization and filtering
   - Spectrogram generation
   - Noise reduction using:
     - Median filtering
     - Otsu's thresholding
     - CLAHE contrast enhancement

2. **Detection**
   - Contour detection using OpenCV
   - USV classification (22kHz vs 50kHz)
   - Bounding box annotation
   - Temporal and spectral feature extraction

3. **Annotation Generation**
   - Supports multiple input formats: HTML, Excel, CSV

4. **Evaluation**
   - Precision/Recall calculations
   - F1 Score and Specificity metrics
   - Carbon emissions tracking
   - Energy consumption monitoring

## Evaluation Metrics

The pipeline calculates four key performance metrics:
- **Precision**: Ratio of correct USV detections to total detections
- **Recall**: Ratio of detected USVs to total actual USVs
- **F1 Score**: Harmonic mean of precision and recall
- **Specificity**: Ability to identify true negative segments

Example output:
```
Mean Precision: 0.92 ± 0.05
Mean Recall: 0.88 ± 0.07
Mean F1 Score: 0.90 ± 0.04
Mean Specificity: 0.95 ± 0.03
```

## Environmental Impact Tracking

The pipeline integrates with CodeCarbon to monitor:
- CO₂ emissions (kg)
- Energy consumption (kWh)
- Computational efficiency

Sample output:
```
ContourUSV_Execution_Time_(s) = 452.783
ContourUSV_Carbon_Emissions_(kgCO2) = 0.127
ContourUSV_Total_Energy_Consumed_(kWh) = 0.342
``` -->

## Contributing

Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/your-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin feature/your-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Contact

Sabah Anis - [[email protected]](mailto:[email protected])
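The evaluation metrics named above (precision, recall, F1 score, specificity) follow the standard confusion-matrix definitions. A minimal illustrative sketch, not code from the repository:

```python
def evaluation_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix metrics:
    precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall,
    specificity = TN/(TN+FP)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return precision, recall, f1, specificity
```

For example, 8 true positives, 2 false positives, 2 false negatives, and 3 true negatives give precision 0.8, recall 0.8, F1 0.8, and specificity 0.6.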
Lines changed: 13 additions & 0 deletions

```
README.md
pyproject.toml
src/__init__.py
src/detection.py
src/evaluation.py
src/generate_annotation.py
src/main.py
src/preprocessing.py
src/contourusv.egg-info/PKG-INFO
src/contourusv.egg-info/SOURCES.txt
src/contourusv.egg-info/dependency_links.txt
src/contourusv.egg-info/requires.txt
src/contourusv.egg-info/top_level.txt
```
Lines changed: 1 addition & 0 deletions (a single blank line)
Lines changed: 10 additions & 0 deletions

```
numpy
mne
matplotlib
scipy
pandas
opencv-python
tqdm
codecarbon
seaborn
pillow
```
Lines changed: 6 additions & 0 deletions

```
__init__
detection
evaluation
generate_annotation
main
preprocessing
```

src/detection.py renamed to contourusv/detection.py

Lines changed: 9 additions & 5 deletions

```diff
@@ -1,7 +1,7 @@
 import cv2
 
 def detect_contours(cleaned_image, start_time, end_time, freq_min, freq_max,
-                    file_name, annotations, call_type_defs=None):
+                    file_name, annotations, call_type_defs=None, processing="adaptive"):
     """
     Detect and classify USVs in cleaned spectrogram images.
 
@@ -34,18 +34,22 @@ def detect_contours(cleaned_image, start_time, end_time, freq_min, freq_max,
         "22kHz": {"freq_min": 15,
                   "freq_max": 45,
                   "freq_span_max": 10,
-                  "duration_min": 0.03,
+                  "duration_min": 0.03,  # 0.03 was the original value
                   "duration_max": 3.0},
         "50kHz": {"freq_min": 40,
                   "freq_max": 80,
                   "freq_span_max": 10,
                   "duration_min": 0.01,
                   "duration_max": 0.3},
+
     }
 
-    # Re-apply Otsu's Thresholding
-    ret, thresholded_image = cv2.threshold(
-        cleaned_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+    # Re-apply Otsu's thresholding (can't do this if using adaptive)
+    if processing == "Otsu":
+        ret, thresholded_image = cv2.threshold(
+            cleaned_image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+    else:
+        thresholded_image = cleaned_image
 
     contours, _ = cv2.findContours(
         thresholded_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
```
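The new `processing` flag skips re-thresholding when the image was already binarized by adaptive thresholding earlier in the pipeline: Otsu picks a single global threshold, while adaptive thresholding compares each pixel against a statistic of its local neighbourhood. A NumPy sketch of mean-based adaptive thresholding, illustrating the technique rather than reproducing the repository's implementation (OpenCV's equivalent is `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C`):

```python
import numpy as np

def adaptive_threshold(img, block=15, c=2.0):
    """Mean-based adaptive threshold: a pixel becomes 255 (foreground)
    when it exceeds the mean of its block x block neighbourhood minus c.
    `block` must be odd; edges are handled by replicate padding."""
    img = np.asarray(img, dtype=np.float64)
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image: any window sum becomes four array lookups.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    window_sum = (ii[block:block + h, block:block + w]
                  - ii[:h, block:block + w]
                  - ii[block:block + h, :w]
                  + ii[:h, :w])
    local_mean = window_sum / (block * block)
    return np.where(img > local_mean - c, 255, 0).astype(np.uint8)
```

On a spectrogram with an uneven noise floor, the local mean tracks the background, so faint contours can survive in regions where a single global (Otsu) threshold would erase them.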
