Merged
Changes from 13 commits
2 changes: 1 addition & 1 deletion .dockerignore
Original file line number Diff line number Diff line change
@@ -13,4 +13,4 @@ docker-compose.yml
build/*
dist/*
docs/*
docker/
45 changes: 45 additions & 0 deletions .github/workflows/gpu.yml
@@ -0,0 +1,45 @@
name: GPU Tests

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  gpu-test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python 3.7
        uses: actions/setup-python@v4
        with:
          python-version: '3.7'

      - name: Install FFMPEG
        run: |
          sudo apt-get update
          sudo apt-get install -y ffmpeg

      - name: Install PySide2
        run: |
          python -m pip install --upgrade pip
          pip install "pyside2==5.13.2"

      - name: Install PyTorch with CUDA
        run: |
          pip install torch torchvision --index-url https://download.pytorch.org/whl/cu102

      - name: Install package and test dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-cov
          python setup.py develop

      - name: GPU Tests
        run: |
          pytest -v -m "gpu" tests/
        env:
          CUDA_VISIBLE_DEVICES: 0
59 changes: 59 additions & 0 deletions .github/workflows/main.yml
@@ -0,0 +1,59 @@
name: CPU Tests

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  test:
    name: Test on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python 3.7
        uses: actions/setup-python@v4
        with:
          python-version: '3.7'

      - name: Install FFMPEG (Ubuntu)
        if: matrix.os == 'ubuntu-latest'
        run: |
          sudo apt-get update
          sudo apt-get install -y ffmpeg

      - name: Install FFMPEG (macOS)
        if: matrix.os == 'macos-latest'
        run: |
          brew install ffmpeg

      - name: Install FFMPEG (Windows)
        if: matrix.os == 'windows-latest'
        run: |
          choco install ffmpeg

      - name: Install PySide2
        run: |
          python -m pip install --upgrade pip
          pip install "pyside2==5.13.2"

      - name: Install PyTorch CPU
        run: |
          pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu

      - name: Install package and test dependencies
        run: |
          pip install -r requirements.txt
          pip install pytest pytest-cov
          python setup.py develop

      - name: Run CPU tests
        run: |
          pytest -v -m "not gpu" tests/
28 changes: 28 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,28 @@
name: Pre-commit

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python 3.7
        uses: actions/setup-python@v4
        with:
          python-version: '3.7'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pre-commit ruff

      - name: Run pre-commit
        run: |
          pre-commit install
          pre-commit run --all-files
2 changes: 1 addition & 1 deletion .gitignore
@@ -152,4 +152,4 @@ venv.bak/
dmypy.json

# Pyre type checker
.pyre/
19 changes: 19 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,19 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: check-ast
      - id: check-json
      - id: check-merge-conflict
      - id: detect-private-key

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.9.1
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
3 changes: 0 additions & 3 deletions .style.yapf

This file was deleted.

1 change: 1 addition & 0 deletions CODEOWNERS
@@ -0,0 +1 @@
@jbohnslav
2 changes: 1 addition & 1 deletion MANIFEST.in
@@ -1,3 +1,3 @@
include README.md
include deepethogram/gui/icons/*.png
recursive-include deepethogram/conf *
91 changes: 63 additions & 28 deletions README.md
@@ -2,8 +2,8 @@
- Written by Jim Bohnslav, except where as noted
- [email protected]

DeepEthogram is an open-source package for automatically classifying each frame of a video into a set of pre-defined
behaviors. Designed for neuroscience research, it could be used in any scenario where you need to detect actions from
each frame of a video.

Example use cases:
@@ -12,56 +12,56 @@ Example use cases:
* Counting licks from video for appetite measurement
* Measuring reach onset times for alignment with neural activity

DeepEthogram uses state-of-the-art algorithms for *temporal action detection*. We build on the following previous machine
learning research into action detection:
* [Hidden Two-Stream Convolutional Networks for Action Recognition](https://arxiv.org/abs/1704.00389)
* [Temporal Gaussian Mixture Layer for Videos](https://arxiv.org/abs/1803.06316)

![deepethogram schematic](docs/images/deepethogram_schematic.png)

## Installation
For full installation instructions, see [this readme file](docs/installation.md).

In brief:
* [Install PyTorch](https://pytorch.org/)
* `pip install deepethogram`

## Data
**NEW!** All datasets collected and annotated by the DeepEthogram authors are now available from this Dropbox link:
https://www.dropbox.com/sh/3lilfob0sz21och/AABv8o8KhhRQhYCMNu0ilR8wa?dl=0

If you have issues downloading the data, please raise an issue on GitHub.

## COLAB
I've written a Colab notebook that shows how to upload your data and train models. You can also use this if you don't
have access to a decent GPU.

To use it, please [click this link to the Colab notebook](https://colab.research.google.com/drive/1Nf9FU7FD77wgvbUFc608839v2jPYgDhd?usp=sharing).
Then, click `copy to Drive` at the top. You won't be able to save your changes to the notebook as-is.


## News
We now support Docker! Docker lets you run `deepethogram` in a fully reproducible environment, isolated from
other system dependencies. [See docs/docker.md for more information](docs/docker.md).

## Pretrained models
Rather than start from scratch, we will start with model weights pretrained on the Kinetics700 dataset.
To download the pretrained weights, please use [this Google Drive link](https://drive.google.com/file/d/1ntIZVbOG1UAiFVlsAAuKEBEVCVevyets/view?usp=sharing).
Unzip the files in your `project/models` directory. Make sure that you don't add an extra directory when unzipping! The path should be
`your_project/models/pretrained_models/{models 1:6}`, not `your_project/models/pretrained_models/pretrained_models/{models1:6}`.
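The double-nesting mistake described above is easy to make when unzipping; a small helper like the following can sanity-check the layout. This is a sketch, not part of DeepEthogram — `pretrained_dir` and `is_double_nested` are illustrative names:

```python
from pathlib import Path


def pretrained_dir(project_root):
    """Return the expected pretrained-weights directory for a project."""
    return Path(project_root) / "models" / "pretrained_models"


def is_double_nested(project_root):
    """Detect the common unzip mistake: an extra pretrained_models level."""
    nested = pretrained_dir(project_root) / "pretrained_models"
    return nested.is_dir()
```

If `is_double_nested(...)` returns `True`, move the inner `pretrained_models` contents up one level.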

## Licensing
Copyright (c) 2020 - President and Fellows of Harvard College. All rights reserved.

This software is free for academic use. For commercial use, please contact the Harvard Office of Technology
Development ([email protected]) with cc to Dr. Chris Harvey. For details, see [license.txt](license.txt).

## Usage
### [To use the GUI, click](docs/using_gui.md)
#### [To use the command line interface, click](docs/using_CLI.md)

## Dependencies
The major dependencies for DeepEthogram are as follows:
* pytorch, torchvision: all the neural networks, training, and inference pipelines were written in PyTorch
* pytorch-lightning: for nice model training base classes
* kornia: for GPU-based image augmentations
@@ -76,32 +76,67 @@ The major dependencies for DeepEthogram are as follows:
* tqdm: for nice progress bars

## Hardware requirements
For GUI usage, we expect that users will be working on a local workstation with a good NVIDIA graphics card. For training on a cluster, you can use the command line interface.

* CPU: 4 cores or more for parallel data loading
* Hard Drive: SSD at minimum, NVMe drive is better.
* GPU: DeepEthogram speed is directly related to GPU performance. An NVIDIA GPU is absolutely required, as PyTorch's
GPU acceleration relies on CUDA, which AMD cards do not support.
The more VRAM you have, the more data you can fit in one batch, which generally increases performance.
I'd recommend 6GB VRAM at absolute minimum. 8GB is better, with 10+ GB preferred.
Recommended GPUs: `RTX 3090`, `RTX 3080`, `Titan RTX`, `2080 Ti`, `2080 super`, `2080`, `1080 Ti`, `2070 super`, `2070`.
Some older ones might also be fine, like a `1080` or even a `1070 Ti`/`1070`.

## Testing
Test coverage is still low, but we will be expanding our unit tests in the future.

First, download a copy of [`testing_deepethogram_archive.zip`](https://drive.google.com/file/d/1IFz4ABXppVxyuhYik8j38k9-Fl9kYKHo/view?usp=sharing)
Make a directory in tests called `DATA`. Unzip the archive and move it to the `deepethogram/tests/DATA`
directory, so that the path is `deepethogram/tests/DATA/testing_deepethogram_archive/{DATA,models,project_config.yaml}`.

To run tests:
```bash
# Run all tests except GPU tests (default)
pytest tests/

# Run only GPU tests (requires NVIDIA GPU)
pytest -m gpu

# Run all tests including GPU tests
pytest -m ""
```

GPU tests are skipped by default as they require significant computational resources and time to complete. These tests perform end-to-end model training and inference.

## Developer Guide
### Code Style and Pre-commit Hooks
We use pre-commit hooks to maintain code quality and consistency. The hooks include:
- Ruff for Python linting and formatting
- Various file checks (trailing whitespace, YAML validation, etc.)

To set up the development environment:

1. Install the development dependencies:
```bash
pip install -r requirements.txt
```

2. Install pre-commit hooks:
```bash
pre-commit install
```

The hooks will run automatically on every commit. You can also run them manually on all files:
```bash
pre-commit run --all-files
```

## Changelog
* 0.1.4: bugfixes for dependencies; added docker
* 0.1.2/3: fixes for multiclass (not multilabel) training
* 0.1.1.post1/2: batch prediction
* 0.1.1.post0: flow generator metric bug fix
* 0.1.1: bug fixes
* 0.1: deepethogram beta! See above for details.
* 0.0.1.post1: bug fixes and video conversion scripts added
* 0.0.1: initial version
4 changes: 2 additions & 2 deletions deepethogram/__init__.py
@@ -7,9 +7,9 @@
# from deepethogram.sequence.inference import sequence_inference
import importlib.util

spec = importlib.util.find_spec("hydra")
if spec is not None:
    raise ValueError("Hydra installation found. Please run pip uninstall hydra-core: {}".format(spec))
# try:
# import hydra
# except Exception as e:
4 changes: 2 additions & 2 deletions deepethogram/__main__.py
@@ -6,5 +6,5 @@
# def main(cfg: DictConfig) -> None:
# run(cfg)

if __name__ == "__main__":
    entry()