Alignment-free HDR Deghosting with Semantics Consistent Transformer

1University of Burgundy, ImViA     2Computer Vision Lab, CAIDAS & IFI, University of Würzburg     3CVL, ETH Zurich     4University of Burgundy, CNRS, ICB     5University of Lorraine, CNRS, Inria, Loria

Abstract

High dynamic range (HDR) imaging aims to retrieve information from multiple low dynamic range (LDR) inputs to generate a realistic output. The crux is to leverage contextual information, including both dynamic and static semantics, for better image generation. Existing methods often focus on the spatial misalignment across input frames caused by foreground and/or camera motion. However, no prior work jointly leverages the dynamic and static context in a simultaneous manner. To address this, we propose a novel alignment-free network, the Semantics Consistent Transformer (SCTNet), equipped with both spatial and channel attention modules. The spatial attention handles intra-image correlation to model dynamic motion, while the channel attention enables inter-image intertwining to enhance semantic consistency across frames. In addition, we introduce a novel realistic HDR dataset with more variation in foreground objects and environmental factors, as well as larger motions. Extensive comparisons on both conventional datasets and ours validate the effectiveness of our method, which achieves the best trade-off between performance and computational cost.
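The two attention mechanisms mentioned above can be illustrated with a minimal NumPy sketch. This is not the actual SCTNet implementation (the paper does not specify these details here); the feature shapes, scaling, and single-head formulation are illustrative assumptions. It only shows the structural difference: spatial attention mixes positions within one frame's features, while channel attention mixes channels of concatenated multi-frame features.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    """Self-attention over spatial positions of a single frame.

    feat: (HW, C) flattened feature map. Each position attends to all
    other positions (intra-image correlation), the kind of operation
    used to reason about dynamic motion. Hypothetical single-head form.
    """
    scores = feat @ feat.T / np.sqrt(feat.shape[1])   # (HW, HW)
    return softmax(scores, axis=-1) @ feat            # (HW, C)

def channel_attention(feats):
    """Attention over channels of stacked multi-frame features.

    feats: (HW, C_total), frames concatenated along the channel axis.
    Mixing channels intertwines information across frames, promoting
    semantic consistency. Again a simplified, assumed formulation.
    """
    scores = feats.T @ feats / np.sqrt(feats.shape[0])  # (C_total, C_total)
    return feats @ softmax(scores, axis=-1)             # (HW, C_total)
```

Note how spatial attention builds an (HW x HW) affinity matrix, which grows quadratically with resolution, whereas channel attention's (C x C) matrix is resolution-independent, which is one reason channel mixing is a cheap way to couple frames.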

Preprint BibTeX

@misc{tel2023alignmentfree,
      title={Alignment-free HDR Deghosting with Semantics Consistent Transformer},
      author={Steven Tel and Zongwei Wu and Yulun Zhang and Barthélémy Heyrman and Cédric Demonceaux and Radu Timofte and Dominique Ginhac},
      year={2023},
      eprint={2305.18135},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}