This repo contains PyTorch model definitions, pre-trained weights, and training/sampling code for DLDMs.
- Clone this repo
- Install the packages and activate the environment

```
$ git clone git@github.com:Yoonho-Na/DLDM.git
$ cd DLDM
$ conda env create -f environment.yaml
$ conda activate dldm
```
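As a quick sanity check after activating the environment, you can confirm that PyTorch sees your GPU. This is a minimal sketch and assumes the `dldm` environment installs PyTorch with CUDA support; the script name is just a placeholder.

```python
# check_env.py -- hypothetical helper, not part of the repo
import torch

# Print the installed PyTorch version and whether CUDA is usable.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPUs:", torch.cuda.device_count())
```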
We provide pretrained weights:

```
$ python scripts/pretrained_dldm.py
$ python sample.py
```
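If you want to inspect the weights before sampling, a small sketch like the one below works with any PyTorch checkpoint. The checkpoint path is an assumption; point it at wherever `scripts/pretrained_dldm.py` places the weights on your machine.

```python
# inspect_ckpt.py -- illustrative only; the checkpoint path is a placeholder
import torch

ckpt_path = "path/to/dldm.ckpt"  # replace with the actual downloaded file
ckpt = torch.load(ckpt_path, map_location="cpu")

# Lightning-style checkpoints usually keep the weights under "state_dict".
state = ckpt.get("state_dict", ckpt)
print(f"{len(state)} entries in checkpoint")
for name, value in list(state.items())[:10]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)
```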
- Put your files (`.jpg`, `.npy`, `.png`, ...) in a folder `custom_folder`
- Create 2 text files, `xx_train.txt` and `xx_valid.txt`, that point to the files in your training and validation set respectively, for example (a Python alternative is sketched below):

```
find $(pwd)/custom_folder/train -name "*.npy" > xx_train.txt
find $(pwd)/custom_folder/valid -name "*.npy" > xx_valid.txt
```
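If `find` is not convenient, the same file lists can be generated from Python. This is a sketch equivalent to the two commands above, assuming the directory layout shown below; the script name is illustrative.

```python
# make_filelists.py -- Python equivalent of the two `find` commands above
from pathlib import Path

root = Path.cwd() / "custom_folder"

for split in ("train", "valid"):
    # Collect every .npy file under custom_folder/<split>/, class folders included.
    files = sorted(str(p) for p in (root / split).rglob("*.npy"))
    Path(f"xx_{split}.txt").write_text("\n".join(files) + "\n")
    print(f"xx_{split}.txt: {len(files)} files")
```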
The expected layout of `custom_folder` is:

```
${pwd}/custom_folder/train/
├── class1
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── class2
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── ...
${pwd}/custom_folder/valid/
├── class1
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── class2
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── ...
```
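How these list files are consumed is configured in the YAML below. Purely for illustration, a minimal PyTorch dataset reading such a list could look like the following sketch; this is an assumption for clarity, not the repo's actual dataloader.

```python
# filelist_dataset.py -- illustrative sketch, not the dataloader used by DLDM
import numpy as np
import torch
from torch.utils.data import Dataset


class FileListDataset(Dataset):
    """Loads .npy arrays from the paths listed in xx_train.txt / xx_valid.txt."""

    def __init__(self, list_file):
        with open(list_file) as f:
            self.paths = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        array = np.load(self.paths[idx])
        return torch.from_numpy(array)
```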
- Adapt `configs/custom_DAE.yaml` to point to these 2 files
- Run `python main.py --base configs/custom_DAE.yaml -t True --gpus 0,1` to train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
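If you are unsure which fields of `configs/custom_DAE.yaml` hold the data paths, you can dump its structure before editing. The snippet below assumes PyYAML is available in the environment and only prints what is already in the file.

```python
# show_config.py -- prints the nested key structure of the training config
import yaml

with open("configs/custom_DAE.yaml") as f:
    cfg = yaml.safe_load(f)

def show(node, indent=0):
    # Walk dicts and lists, printing keys so you can locate the train/valid file lists.
    if isinstance(node, dict):
        for key, value in node.items():
            print("  " * indent + str(key))
            show(value, indent + 1)
    elif isinstance(node, list):
        for item in node:
            show(item, indent + 1)

show(cfg)
```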