A high-level environment for exploring, training and evaluating different architectures for depth-completion.
Researchers working on other tasks and fields may find it useful as a reference for starting their own projects.
It uses PyTorch Lightning for easy scaling (from a single GPU to large clusters), Hydra for managing configurations and Neptune for logging and managing experiments.
Implemented architectures:
- Guidenet
- Supervised/Unsupervised Sparse-to-Dense
- ACMNet
Datasets:
- KITTI Depth Completion
Other architectures and datasets can be added with only small modifications to the existing code.
- Clone this repo

  ```shell
  git clone https://github.com/itsikad/depth-completion-public.git
  ```

- Install dependencies

  ```shell
  python -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```
- Edit configs:
  - Set `data_root`, the datasets root path, in the main config file
  - (Optional) Set the `experiment_name`, `description` and `tags` fields for the Neptune Logger
  - Set `project_name` for the Neptune Logger in the neptune logger config file
  - Set the number of GPUs on your debug machine or server
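As an illustration, the fields from the steps above might be laid out like this. The field names come from this README, but the exact file layout below is an assumption; check the files under `./src/configs` for the actual structure:

```yaml
# main config (illustrative layout, not the actual file)
data_root: /path/to/datasets   # root folder for all datasets

logger:
  experiment_name: guidenet-baseline            # optional
  description: "GuideNet with default settings" # optional
  tags: [kitti, guidenet]                       # optional
  project_name: my-workspace/depth-completion   # Neptune project name

machine:
  gpus: 1   # number of GPUs on your debug machine or server
```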
- Currently, the Neptune Logger doesn't integrate with Hydra, so a simple correction is required:
  - In `.venv/lib/neptune/internal/api_clients/hosted_api_clients/hosted_alpha_leaderboard_api_client.py`, line 129, replace:

    ```python
    if not isinstance(params, Dict):
    ```

    with

    ```python
    if not isinstance(params, MutableMapping):
    ```

  - In `.venv/lib/bravado_core/schema.py`, line 90, replace:

    ```python
    return isinstance(spec, (dict, Mapping))
    ```

    with

    ```python
    return isinstance(spec, (dict, typing.MutableMapping))
    ```
- Set the `NEPTUNE_API_TOKEN` environment variable (see the Neptune installation docs).
Training an existing architecture is as simple as (example uses `guidenet`):

```shell
python src/run.py model=guidenet machine=server
```

where `guidenet` can be replaced with any other model located in `./src/configs/model/<model_name>.yaml` (for example, `self_sup_sparse_to_dense`).
CAUTION: During the first run, the KITTI Depth Completion dataset will be downloaded and processed. This might take several hours and roughly 160 GB of disk space.
Follow these steps to add a new model:

- Your new model should subclass `models.base_model.BaseModel`, an abstract model.
- It should return a dictionary in which the final depth prediction is keyed by `pred`; other tensors (for debugging, etc.) can also be added to the dictionary.
- Add your model to the model builder in `model/__init__.py`.
- Place the model config in `./src/configs/model/<model_name>.yaml`.
- Train your model using:

```shell
python src/run.py model=<model_name> machine=server
```
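The steps above can be sketched as follows. This is a minimal illustration, not code from the repo: the exact `BaseModel` interface and input format are assumptions, so it subclasses `torch.nn.Module` here (in the repo, subclass `models.base_model.BaseModel` instead). The only contract taken from this README is that `forward` returns a dictionary with the final depth prediction under `pred`:

```python
import torch
import torch.nn as nn


class TinyCompletionNet(nn.Module):
    """Toy depth-completion model: RGB + sparse depth in, dense depth out."""

    def __init__(self):
        super().__init__()
        # 3 RGB channels + 1 sparse depth channel -> 1 dense depth channel
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, rgb: torch.Tensor, sparse_depth: torch.Tensor) -> dict:
        x = torch.cat([rgb, sparse_depth], dim=1)
        pred = self.net(x)
        # "pred" holds the final depth map; extra tensors (e.g. confidence
        # maps) can be returned under additional keys for debugging/logging.
        return {"pred": pred}
```
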
Follow a similar process to add a new loss or dataset. For another dataset, don't forget to tell Hydra you're using a non-default dataset:

```shell
python src/run.py model=<model_name> machine=server dataset=<your_dataset>
```
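A new dataset can follow the same pattern. The sketch below is purely illustrative and not tied to the repo's actual dataset interface: the class name, the returned keys and the synthetic data are all assumptions standing in for a real loader such as the KITTI one:

```python
import torch
from torch.utils.data import Dataset


class SyntheticDepthDataset(Dataset):
    """Random RGB + sparse depth pairs, a stand-in for a real dataset."""

    def __init__(self, num_samples: int = 8, size: int = 32, sparsity: float = 0.05):
        self.num_samples = num_samples
        self.size = size
        self.sparsity = sparsity

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        rgb = torch.rand(3, self.size, self.size)
        depth = torch.rand(1, self.size, self.size)
        # keep ~5% of depth pixels to simulate a sparse LiDAR projection
        mask = (torch.rand_like(depth) < self.sparsity).float()
        return {"rgb": rgb, "sparse_depth": depth * mask, "gt_depth": depth}
```

A dataset like this would still need to be registered with the data builder and given its own config under `./src/configs` before the `dataset=<your_dataset>` override can find it.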