Commit f99fc42 (parent 5241445)
Committed by Copilot and edeno
Add comprehensive .github/copilot-instructions.md file
Co-authored-by: edeno <[email protected]>
1 file changed: .github/copilot-instructions.md (+158 lines, -0)

# Copilot Instructions for replay_trajectory_classification

## Repository Summary

`replay_trajectory_classification` is a Python package for decoding spatial position from neural activity and categorizing trajectory types, specifically designed for analyzing hippocampal replay events in neuroscience research. The package provides state-space models that can decode position from both spike-sorted cells and clusterless spikes, with support for GPU acceleration and complex 1D/2D environments.

## High-Level Repository Information

- **Size**: ~63 MB with 28 Python files
- **Type**: Scientific Python package for computational neuroscience
- **Primary Language**: Python 3.11+ (configured for Python 3.13 in the current environment)
- **Key Dependencies**: NumPy, SciPy, scikit-learn, numba, xarray, dask, matplotlib, pandas
- **Documentation**: Sphinx-based documentation with ReadTheDocs hosting
- **License**: MIT

## Environment Setup and Build Instructions

### Prerequisites

Always use conda for environment management due to the complex scientific dependencies:

```bash
# Update conda first (required)
conda update -n base conda

# Create the environment from environment.yml (required)
conda env create -f environment.yml

# Activate the environment
conda activate replay_trajectory_classification
```

### Installation Commands

**ALWAYS install in development mode for code changes:**

```bash
# Development installation (preferred)
pip install -e .

# OR, the deprecated alternative:
python setup.py develop
```

**Note**: `python setup.py develop` shows deprecation warnings but works correctly; the modern `pip install -e .` is recommended.

### Validation Commands

#### Package Import Test

```bash
python -c "import replay_trajectory_classification; print('Package imported successfully')"
```

**Expected output**: "Cupy is not installed or GPU is not detected. Ignore this message if not using GPU" followed by "Package imported successfully"

#### Linting

```bash
flake8 replay_trajectory_classification/ --max-line-length=88 --select=E9,F63,F7,F82 --show-source --statistics
```

**Expected**: No output (clean lint)

#### Notebook Testing (CI Validation)

The main test suite runs the Jupyter notebooks. Test an individual notebook with:

```bash
jupyter nbconvert --to notebook --ExecutePreprocessor.kernel_name=python3 --execute notebooks/tutorial/01-Introduction_and_Data_Format.ipynb --output-dir=/tmp
```

**Time required**: ~2-3 minutes per notebook
**Expected**: The notebook executes without errors

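To mirror CI locally, the same `nbconvert` call can be looped over all five tutorial notebooks. A hedged sketch (it dry-runs, printing the commands, when `jupyter` or the notebook files are not present):

```python
# Run the five tutorial notebooks sequentially, as CI does.
import os
import shutil
import subprocess

NOTEBOOKS = [
    "01-Introduction_and_Data_Format",
    "02-Decoding_with_Sorted_Spikes",
    "03-Decoding_with_Clusterless_Spikes",
    "04-Classifying_with_Sorted_Spikes",
    "05-Classifying_with_Clusterless_Spikes",
]


def nbconvert_command(name):
    """Build the same command used for a single notebook above."""
    return [
        "jupyter", "nbconvert", "--to", "notebook",
        "--ExecutePreprocessor.kernel_name=python3", "--execute",
        f"notebooks/tutorial/{name}.ipynb", "--output-dir=/tmp",
    ]


for name in NOTEBOOKS:
    cmd = nbconvert_command(name)
    path = f"notebooks/tutorial/{name}.ipynb"
    if shutil.which("jupyter") is None or not os.path.exists(path):
        # Dry run when outside the repository or conda environment.
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)  # stop at the first failing notebook
```

Expect roughly 10-15 minutes for the full set, given ~2-3 minutes per notebook.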
#### Documentation Build

```bash
# First install the docs dependencies
pip install -r docs/requirements-docs.txt

# Note: the Makefile-driven build has a jupytext dependency issue;
# the docs can be built but may require manual intervention.
# Invoking Sphinx directly, e.g. `sphinx-build -b html docs docs/_build/html`
# (paths assumed, not verified), may bypass the Makefile issue.
```

## Continuous Integration

The repository uses GitHub Actions (`.github/workflows/PR-test.yml`):

- **Trigger**: All pushes
- **OS**: Ubuntu latest only
- **Python**: 3.11 (but `environment.yml` uses the current conda defaults)
- **Test process**: Executes all 5 tutorial notebooks sequentially
- **Environment**: Uses conda with the channels conda-forge, franklab, and edeno
- **Installation**: `pip install -e .` after the conda environment setup

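A workflow matching that description might look roughly like the following sketch (step names, action versions, and the notebook-execution command are assumptions, not copied from `PR-test.yml`):

```yaml
name: PR-test
on: push  # trigger: all pushes
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: conda-incubator/setup-miniconda@v3
        with:
          environment-file: environment.yml
          channels: conda-forge,franklab,edeno
          python-version: "3.11"
      - run: pip install -e .
      - run: |
          # execute the 5 tutorial notebooks sequentially (exact command assumed)
          jupyter nbconvert --to notebook --execute notebooks/tutorial/*.ipynb
```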
## Project Architecture and Layout

### Core Package Structure (`replay_trajectory_classification/`)

- **`__init__.py`**: Main API exports (ClassifierBase, Decoders, Environment, etc.)
- **`classifier.py`**: Base classes for trajectory classification, covering both the sorted-spikes and clusterless approaches
- **`decoder.py`**: Core decoding functionality
- **`environments.py`**: Spatial environment representation with discrete grids
- **`core.py`**: Low-level computational functions
- **`likelihoods/`**: Subpackage with various likelihood models (KDE, GLM, multiunit, GPU variants)

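A rough usage sketch of the exported API (the class names `Environment` and `SortedSpikesDecoder` and the `fit`/`predict` pattern are inferred from the exports above and the tutorial titles; consult the tutorial notebooks for the authoritative calls). The import is guarded so the sketch degrades gracefully where the package is not installed:

```python
# Hedged sketch of the decoding API; not a verified example.
try:
    from replay_trajectory_classification import Environment, SortedSpikesDecoder
    PACKAGE_AVAILABLE = True
except ImportError:
    PACKAGE_AVAILABLE = False


def sketch_decode(position, spikes):
    """Fit a decoder on position/spike arrays and return the posterior.

    `position` and `spikes` are time-indexed arrays; see notebook 01
    for the exact format the package expects.
    """
    decoder = SortedSpikesDecoder(environment=Environment())
    decoder.fit(position, spikes)   # learn encoding models per neuron
    return decoder.predict(spikes)  # posterior over position per time bin


print("package importable:", PACKAGE_AVAILABLE)
```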
### Key Configuration Files

- **`environment.yml`**: Conda environment specification with the scientific computing stack
- **`setup.py`**: Package configuration and dependencies
- **`.readthedocs.yaml`**: Documentation build configuration
- **`docs/conf.py`**: Sphinx documentation configuration
- **`docs/requirements-docs.txt`**: Documentation build dependencies

### Documentation (`docs/`)

- **Sphinx-based** with ReadTheDocs hosting
- **API docs**: Auto-generated from docstrings
- **Installation guide**: `installation.md`
- **Build system**: Makefile (but it has jupytext dependency issues)

109+
Five comprehensive Jupyter notebooks demonstrate package usage:
110+
1. **01-Introduction_and_Data_Format.ipynb**: Data format requirements
111+
2. **02-Decoding_with_Sorted_Spikes.ipynb**: Single movement model with sorted spikes
112+
3. **03-Decoding_with_Clusterless_Spikes.ipynb**: Single movement model with clusterless approach
113+
4. **04-Classifying_with_Sorted_Spikes.ipynb**: Multiple movement models with sorted spikes
114+
5. **05-Classifying_with_Clusterless_Spikes.ipynb**: Multiple movement models with clusterless spikes
115+
116+
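As an illustration of the kind of data notebook 01 covers, a minimal sketch of time-binned inputs (the shapes and binning convention here are illustrative assumptions, not the notebook's exact requirements):

```python
# Hypothetical decoder inputs: binned spikes plus a position trace.
import numpy as np

n_time, n_neurons = 1_000, 25
rng = np.random.default_rng(0)

# spikes: one column of spike indicators per sorted cell
spikes = rng.random((n_time, n_neurons)) < 0.02  # bool, (n_time, n_neurons)

# position: animal position per time bin (a 1D track here)
position = np.cumsum(rng.normal(0.0, 0.5, n_time))  # float, (n_time,)

# Both arrays share the time axis, one row per time bin.
assert spikes.shape[0] == position.shape[0]
```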
### Dependencies Not Obvious from the Structure

- **track_linearization**: External package for spatial track handling (imported in `__init__.py`)
- **regularized_glm**: Custom GLM implementation
- **GPU dependencies**: CuPy for GPU acceleration (optional)
- **franklab & edeno conda channels**: Required for specialized neuroscience packages

## Important Development Notes

### Environment Requirements

- **ALWAYS** use the conda environment; pip-only installations will fail due to the complex scientific dependencies
- **GPU support** requires CuPy installation (optional; warnings are normal without a GPU)
- **Documentation builds** may require manual intervention due to jupytext path issues

### Testing Approach

- **No unit tests**: Validation relies entirely on notebook execution
- **Integration testing**: All 5 tutorial notebooks must execute successfully
- **CI dependency**: The notebooks test real scientific workflows, not isolated functions

### Common Issues and Workarounds

- **Documentation build**: The Makefile expects jupytext on PATH but may not find the conda environment's version
- **setup.py warnings**: Deprecation warnings are expected, but installation succeeds
- **GPU warnings**: "Cupy not installed" messages are normal for CPU-only environments
- **Long notebook execution**: Tutorial notebooks can take 2-3 minutes each to execute

### File Exclusions (from `.gitignore`)

Key files to exclude from commits:

- Jupyter checkpoint files (`.ipynb_checkpoints`)
- Build artifacts (`_build`, `_autosummary`, `dist/`)
- Data files (`*.mat`, `*.csv`, `*.nc`)
- Cache files (`__pycache__`, `*.prof`)

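Those exclusions correspond to `.gitignore` patterns along these lines (the repository's actual patterns may differ slightly):

```
.ipynb_checkpoints
_build/
_autosummary/
dist/
*.mat
*.csv
*.nc
__pycache__/
*.prof
```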
## Validation Checklist for Changes

1. **Environment setup**: The conda environment creates successfully
2. **Installation**: `pip install -e .` (or `python setup.py develop`) succeeds
3. **Import test**: The package imports without errors (GPU warnings are OK)
4. **Lint check**: flake8 passes with the specified parameters
5. **Notebook execution**: At least one tutorial notebook runs successfully
6. **CI compatibility**: Changes don't break the GitHub Actions workflow

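Steps 2-4 of the checklist can be scripted. A hedged helper (the command strings mirror the sections above; `run_checklist` itself is a hypothetical convenience, not part of the repository):

```python
# Hypothetical checklist runner covering the single-command steps.
import subprocess

CHECKLIST = [
    ("installation", "pip install -e ."),
    ("import test",
     'python -c "import replay_trajectory_classification"'),
    ("lint check",
     "flake8 replay_trajectory_classification/ --max-line-length=88 "
     "--select=E9,F63,F7,F82 --show-source --statistics"),
]


def run_checklist(dry_run=True):
    """Run each step in order; stop and report the first failure."""
    for name, cmd in CHECKLIST:
        if dry_run:
            print(f"[{name}] would run: {cmd}")
            continue
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"[{name}] FAILED")
            return False
    return True


run_checklist(dry_run=True)
```

Run it with `dry_run=False` inside the activated conda environment to actually execute the steps.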
## Final Note

This package serves active neuroscience research. Changes should maintain scientific accuracy and computational efficiency. The codebase prioritizes correctness over traditional software engineering practices (hence the notebook-based testing). Trust these instructions, and only search for additional information if specific technical details are missing or incorrect.
