Depth Completion Research Environment

A high-level environment for exploring, training and evaluating different architectures for depth-completion.

Researchers from other tasks and fields may find it useful as a reference to start their own project.

It uses PyTorch Lightning for easy scaling (from a single GPU to large clusters), Hydra for configuration management and Neptune for logging and experiment tracking.
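
For orientation, the pieces typically fit together roughly as in the minimal sketch below. The class names, config fields and logger arguments are illustrative assumptions (using the legacy NeptuneLogger API), not the repository's actual code.

    # sketch: a Hydra-driven PyTorch Lightning entry point (names and config fields are assumptions)
    import hydra
    import pytorch_lightning as pl
    from omegaconf import DictConfig
    from pytorch_lightning.loggers import NeptuneLogger


    @hydra.main(config_path="configs", config_name="config")
    def main(cfg: DictConfig) -> None:
        # build the LightningModule and DataModule from their Hydra config groups
        model = hydra.utils.instantiate(cfg.model)
        datamodule = hydra.utils.instantiate(cfg.dataset)

        # the legacy NeptuneLogger reads NEPTUNE_API_TOKEN from the environment when no api_key is passed
        logger = NeptuneLogger(project_name=cfg.logger.project_name, experiment_name=cfg.experiment_name)

        trainer = pl.Trainer(gpus=cfg.machine.gpus, logger=logger)
        trainer.fit(model, datamodule=datamodule)


    if __name__ == "__main__":
        main()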

Implemented architectures:

  1. Guidenet
  2. Supervised/Unsupervised Sparse-to-Dense
  3. ACMNet

Datasets:

  1. KITTI Depth Completion

Other architectures and datasets can be added with only small modifications to the existing code.

Setup

  1. Clone this repo:
git clone https://github.com/itsikad/depth-completion-public.git
  2. Install dependencies:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
  3. Edit the configs (a config sketch follows this list):

    1. Set data_root to the datasets root path in the main config file
    2. (Optional) Set the experiment_name, description and tags fields for the Neptune logger
    3. Set project_name for the Neptune logger in the Neptune logger config file
    4. Set the number of GPUs for your debug machine or server
  4. Currently, the Neptune logger doesn't integrate with Hydra, so a small correction is required:

    1. In `.venv/lib/neptune/internal/api_clients/hosted_api_clients/hosted_alpha_leaderboard_api_client.py`, line 129, replace:
    if not isinstance(params, Dict):

    with

    if not isinstance(params, MutableMapping):
    2. In `.venv/lib/bravado_core/schema.py`, line 90, replace:
    return isinstance(spec, (dict, Mapping))

    with

    return isinstance(spec, (dict, typing.MutableMapping))
  5. Set the NEPTUNE_API_TOKEN environment variable, see the Neptune installation docs
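
For orientation, the main config edited in step 3 might look roughly like the sketch below. Beyond the fields named above (data_root, experiment_name, description, tags), the group names and layout are assumptions, not the repository's actual schema; project_name goes in the Neptune logger config and the GPU count in the machine config.

    # sketch of the main Hydra config (group names and layout are assumptions)
    defaults:
      - model: guidenet
      - dataset: kitti_depth_completion   # assumed name of the KITTI config group entry
      - machine: server                   # machine config holds the number of GPUs

    data_root: /path/to/datasets          # root path of the datasets
    experiment_name: guidenet-baseline    # optional, used by the Neptune logger
    description: "baseline run"           # optional
    tags: [kitti, guidenet]               # optional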

How To Use

Train a model

Training an existing architecture is as simple as (example uses guidenet):

python src/run.py model=guidenet machine=server

where guidenet can be replaced with any other model that has a config in ./src/configs/model/<model_name>.yaml (for example, self_sup_sparse_to_dense).

CAUTION: During the first run, the KITTI Depth Completion dataset will be downloaded and processed. This can take several hours and requires roughly 160 GB of disk space.

Add a new model / loss / dataset

Follow these steps to add a new model:

  1. Your new model should subclass models.base_model.BaseModel, an abstract base class (see the sketch after this list).
  2. Its forward pass should return a dictionary with the final depth prediction under the pred key; other tensors (for debugging, auxiliary outputs, etc.) can also be added to the dictionary.
  3. Register your model with the model builder in the model package's __init__.py.
  4. Place the model config in ./src/configs/model/<model_name>.yaml
  5. Train your model using:
python src/run.py model=<model_name> machine=server
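
As a rough illustration, a new model could look like the sketch below. Only the BaseModel base class and the pred output key come from the steps above; the constructor arguments, the forward signature and the toy network are assumptions for illustration.

    # sketch of a new model (everything beyond BaseModel and the `pred` key is assumed)
    import torch
    import torch.nn as nn

    from models.base_model import BaseModel  # abstract base class provided by this repo


    class MyModel(BaseModel):
        def __init__(self, hidden_channels: int = 32):
            super().__init__()
            # toy network: fuse RGB (3 channels) and sparse depth (1 channel) into dense depth
            self.net = nn.Sequential(
                nn.Conv2d(4, hidden_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden_channels, 1, kernel_size=3, padding=1),
            )

        def forward(self, image: torch.Tensor, sparse_depth: torch.Tensor) -> dict:
            pred = self.net(torch.cat([image, sparse_depth], dim=1))
            # the final depth prediction must be keyed by `pred`;
            # extra tensors for debugging can be added to the same dictionary
            return {"pred": pred}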

Follow a similar process to add a new loss or dataset. For a new dataset, don't forget to tell Hydra that you're using a non-default dataset (a dataset config sketch follows the command below):

python src/run.py model=<model_name> machine=server dataset=<your_dataset>
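
For reference, the dataset config might look roughly like the sketch below, presumably placed under ./src/configs/dataset/<your_dataset>.yaml to mirror the model configs. The _target_ path and parameter names are assumptions for illustration.

    # sketch: ./src/configs/dataset/my_dataset.yaml (class path and parameters are assumptions)
    _target_: datasets.my_dataset.MyDataModule   # assumed LightningDataModule for the new dataset
    data_root: ${data_root}                      # reuse the global datasets root path
    batch_size: 8
    num_workers: 4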

About

A modular PyTorch Lightning environment for developing, evaluating and testing deep learning algorithms for guided depth completion. Currently, the KITTI depth completion benchmark is available, along with implementations of several notable architectures for it. This is a public, cleaned-up version of the environment I created during my thesis.
