
SADER: Structure-Aware Diffusion Framework with Deterministic Resampling for Multi-Temporal Remote Sensing Cloud Removal

Yifan Zhang1,*, Qian Chen2,*, Yi Liu2,*, Wengen Li2, Jihong Guan2
1University of Michigan, Ann Arbor, USA   2Tongji University, Shanghai, China
*Equal contribution
SADER framework: multi-temporal fusion, cloud-aware attention, and deterministic resampling

This is the official repository for SADER, a structure-aware diffusion framework for multi-temporal remote sensing cloud removal that leverages a scalable multi-temporal conditional network, a cloud-aware attention loss, and a deterministic resampling strategy to achieve high-fidelity and reproducible cloud-free reconstruction.

🎉Usage

🔧Setup

There are two methods to set up the development environment for this project. The first method uses requirements.txt for a straightforward installation into your current Python environment. The second method employs an environment.yaml file to create a new, isolated conda environment named sader. Choose the method that best suits your workflow.

Method 1: Using requirements.txt (pip) Install the required Python packages directly into your current environment using pip.

pip install -r requirements.txt

Method 2: Using environment.yaml (conda) Create a new, isolated conda environment named sader with all dependencies specified in the environment.yaml file.

conda env create -f environment.yaml

To activate this environment after creation, use the following command:

conda activate sader

(Optional) If you still find packages missing, refer to the requirements.txt file and install the packages you need manually.

If you run into environment setup problems not covered in this README, please contact us to report them or open an issue.

📌Dataset

We use two datasets: SEN12MS-CR-TS and Sen2_MTC_New. You need to download these datasets first.

We provide the downloading URLs of these datasets as follows:

| Dataset | Type | URL |
| --- | --- | --- |
| SEN12MS-CR-TS | Multi-Temporal | https://patricktum.github.io/cloud_removal/sen12mscrts/ |
| Sen2_MTC_New | Multi-Temporal | https://github.com/come880412/CTGAN |

For a quick start, you can download only the test split and run the testing instructions given below.

🔎Configurations

We provide our configuration files, i.e., *.yaml, in the ./configs/example_training/ folder. The code automatically reads the yaml file and applies the configuration. You can change settings such as the data path, batch size, and number of workers in the data part of each yaml file to match your setup; see the yaml files in ./configs/example_training/ for details.

We have also included the yaml files for our ablation experiments in the ./configs/example_training/ablation/ directory.
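As a rough reference, the data part of a training yaml typically follows a Lightning-style layout like the sketch below. This is an illustration only: the exact keys (`batch_size`, `num_workers`, `data_root`) are assumptions and may differ from the shipped configs, so treat the files in ./configs/example_training/ as authoritative.

```yaml
# Illustrative sketch only -- key names may differ from the shipped configs.
data:
  params:
    batch_size: 4          # reduce if you run out of GPU memory
    num_workers: 8         # dataloader worker processes
    train:
      params:
        data_root: /path/to/SEN12MS-CR-TS   # change to your local dataset path
```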

🔥Train

You can use the following instruction in the root path of this repository to run the training process:

python main.py --base configs/example_training/[yaml_file_name].yaml --enable_tf32

Here, sen2_mtc_new.yaml is for training on the Sen2_MTC_New dataset, and sentinel.yaml is for training on the SEN12MS-CR-TS dataset. Note that you should modify the data.params.train part in the yaml file according to your dataset path.

You can also use the -l parameter to change the save path of logs, with ./logs as the default path:

python main.py --base configs/example_training/[yaml_file_name].yaml --enable_tf32 -l [path_to_your_logs]

If you want to resume from a previous training checkpoint, you can use the following command:

python main.py --base configs/example_training/[yaml_file_name].yaml --enable_tf32 -r [path_to_your_ckpt]

If you want to initiate the model from an existing checkpoint and restart the training process, you should modify the value of model.ckpt_path in your yaml file to the path of your checkpoint.
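For example, initializing from an existing checkpoint might look like the snippet below in your yaml file. The checkpoint path is a placeholder, not a file shipped with the repository:

```yaml
model:
  ckpt_path: logs/example_run/checkpoints/last.ckpt  # replace with your own checkpoint
```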

🏃Test

Run the following instruction for testing:

python main.py --base configs/example_training/[yaml_file_name].yaml --enable_tf32 -t false

The [yaml_file_name].yaml files are the same as those used in training. Note that

  • You should set the data.params.test part, otherwise the test dataloader will not be created.
  • You should set the value of model.ckpt_path in your yaml file to the path of your checkpoint.
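The two test-related entries can be sketched as follows. The nested keys and paths here are placeholders (mirror the structure of the train part in your config rather than copying this verbatim):

```yaml
data:
  params:
    test:
      params:
        data_root: /path/to/test_split   # change to your local test data path
model:
  ckpt_path: /path/to/your.ckpt          # checkpoint to evaluate
```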

💻Predict

The predicting process outputs all cloud-removed images. It supports only a single GPU (set lightning.trainer.devices to one device). You can run prediction with:

python main.py --base configs/example_training/[yaml_file_name].yaml --enable_tf32 -t false --no-test true --predict true

The [yaml_file_name].yaml files are the same as those in the testing process. Note that you should set the data.params.predict part and the model.ckpt_path part (the same way as testing), otherwise you will not obtain the correct results.

📧Contact

If you have encountered any problems, feel free to contact the authors via email at yifanzhg@umich.edu or 2250951@tongji.edu.cn.

📖BibTeX

@article{zhang2026sader,
  title={SADER: Structure-Aware Diffusion Framework with DEterministic Resampling for Multi-Temporal Remote Sensing Cloud Removal},
  author={Zhang, Yifan and Chen, Qian and Liu, Yi and Li, Wengen and Guan, Jihong},
  journal={arXiv preprint arXiv:2602.00536},
  year={2026}
}
