
SALVE: Self-Supervised Adaptive Low-Light Video Enhancement

Zohreh Azizi, University of Southern California, USA, zazizi@usc.edu; C.-C. Jay Kuo, University of Southern California, USA
 
Suggested Citation
Zohreh Azizi and C.-C. Jay Kuo (2023), "SALVE: Self-Supervised Adaptive Low-Light Video Enhancement", APSIPA Transactions on Signal and Information Processing: Vol. 12: No. 4, e102. http://dx.doi.org/10.1561/116.00000085

Publication Date: 05 Jun 2023
© 2023 Z. Azizi and C.-C. J. Kuo
 

Open Access

This article is published under the terms of the CC BY-NC license.


In this article:
Introduction 
Related Work 
Proposed Method 
Experiments 
Conclusion 
References 

Abstract

A self-supervised adaptive low-light video enhancement method, called SALVE, is proposed in this work. SALVE first enhances a few keyframes of an input low-light video using a retinex-based low-light image enhancement technique. For each keyframe, it learns a mapping from low-light image patches to enhanced ones via ridge regression. These mappings are then used to enhance the remaining frames in the low-light video. The combination of traditional retinex-based image enhancement and learning-based ridge regression leads to a robust, adaptive, and computationally inexpensive solution for enhancing low-light videos. Our extensive experiments, along with a user study, show that 87% of participants prefer SALVE over prior work. Our code is available at: https://github.com/zohrehazizi/SALVE.
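To make the keyframe-plus-regression idea in the abstract concrete, here is a minimal sketch of that pipeline. It is not the authors' implementation (which is at the GitHub link above): the grayscale frames, non-overlapping 8x8 patches, fixed keyframe interval, scikit-learn Ridge model, and the placeholder retinex_enhance function are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

PATCH = 8          # assumed patch size; the paper's exact choice may differ
KEY_INTERVAL = 10  # assumed keyframe spacing, not taken from the paper

def retinex_enhance(frame):
    """Placeholder for the retinex-based image enhancer applied to keyframes."""
    raise NotImplementedError

def to_patches(img, p=PATCH):
    """Split a HxW grayscale image into flattened, non-overlapping p x p patches."""
    h, w = img.shape[:2]
    img = img[: h - h % p, : w - w % p]
    blocks = img.reshape(img.shape[0] // p, p, img.shape[1] // p, p)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, p * p)

def from_patches(patches, shape, p=PATCH):
    """Reassemble patches produced by to_patches into an image of (cropped) shape."""
    h, w = shape[0] - shape[0] % p, shape[1] - shape[1] % p
    blocks = patches.reshape(h // p, w // p, p, p)
    return blocks.transpose(0, 2, 1, 3).reshape(h, w)

def enhance_video(frames, key_interval=KEY_INTERVAL, alpha=1.0):
    """Enhance keyframes with the retinex method, fit ridge regression on
    (low-light patch, enhanced patch) pairs, and reuse that cheap linear
    mapping on the remaining frames."""
    out, model = [], None
    for i, frame in enumerate(frames):
        if i % key_interval == 0:
            enhanced = retinex_enhance(frame)           # expensive path: keyframes only
            X, Y = to_patches(frame), to_patches(enhanced)
            model = Ridge(alpha=alpha).fit(X, Y)        # patch-to-patch mapping
            out.append(enhanced)
        else:
            pred = model.predict(to_patches(frame))     # cheap path: non-keyframes
            out.append(from_patches(pred, frame.shape))
    return out
```

The split between an expensive enhancer on sparse keyframes and a learned linear mapping on the frames in between is what makes the approach adaptive to each video while keeping per-frame cost low.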

DOI:10.1561/116.00000085

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Emerging AI Technologies for Smart Infrastructure