by Hue Nguyen, Trevor D. Canham, Michael S. Brown
Capturing high dynamic range (HDR) video at high frame rates is critical for applications such as cinematic production and autonomous systems, yet it remains challenging with conventional cameras. We present the first unified framework that jointly performs HDR reconstruction and temporal interpolation from sequences with alternating exposures. Unlike prior work that reconstructs only middle frames or uses heavy off-the-shelf interpolators, our lightweight network synthesizes HDR video at arbitrary timesteps in real time on mid-range GPUs. To support this task, we introduce a new dataset of exposure-bracketed video sequences with real-world motion. To reduce reliance on ground-truth HDR data, we also propose a novel self-supervised training scheme that delivers competitive results. Experiments show our approach outperforms existing baselines in efficiency while achieving comparable or better visual quality, establishing a new benchmark for practical HDR video synthesis.
TBU
For a quick demo, run the following command:

python demo.py --video_dir example/20240526_SHGCNCT_S001_S001_T009 --odd_frame_ev 4 --even_frame_ev 0 --gamma 2.4 --playback_fps 15

TBU
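As a note on the exposure flags: assuming --odd_frame_ev and --even_frame_ev denote photographic exposure values in stops (this interpretation is an assumption; each stop doubles the captured light), the demo's 4/0 setting corresponds to a 16x exposure ratio between alternating frames. A minimal sketch of that relation:

```python
# Sketch only: assumes the demo's EV flags follow standard photographic
# stops, where each +1 EV doubles the linear exposure.
def exposure_ratio(ev_a: float, ev_b: float) -> float:
    """Linear exposure ratio between two frames given their EV offsets."""
    return 2.0 ** (ev_a - ev_b)

# With --odd_frame_ev 4 and --even_frame_ev 0, odd frames gather
# 2**4 = 16x the light of even frames.
ratio = exposure_ratio(4, 0)
```

This is only an illustration of the bracketing setup, not part of the repo's code.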
This project is based on HDRFlow; we thank the original authors for their excellent work.
If you have any questions or suggestions about this repo, please feel free to contact me (nthue189@gmail.com).
