Few-Shot Domain Adaptation for Low Light RAW Image Enhancement

Video Analytics Lab, CDS,
Indian Institute of Science (IISc)
BMVC 2021 [Oral, Best Student Paper Award (Runner-Up)]

*Indicates Equal Contribution

Abstract

Enhancing practical low-light raw images is a difficult task due to the severe noise and color distortions caused by short exposure times and limited illumination. Despite the success of existing Convolutional Neural Network (CNN) based methods, their performance does not transfer across camera domains. In addition, such methods require a large dataset of short-exposure raw images with corresponding long-exposure ground truth for each camera domain, which is tedious to compile. To address these issues, we present a novel few-shot domain adaptation method that leverages the existing labeled source-camera data together with a few labeled samples from the target camera to improve the target domain's enhancement quality in extreme low-light imaging. Our experiments show that only ten or fewer labeled samples from the target camera domain are sufficient to achieve enhancement performance similar to or better than a model trained on a large labeled target-camera dataset. To support research in this direction, we also present a new low-light raw image dataset captured with a Nikon camera, comprising short-exposure images and their corresponding long-exposure ground-truth images.
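For intuition, the sketch below shows the few-shot adaptation recipe in minimal PyTorch form: pre-train an enhancement network on the large labeled source-camera dataset, then fine-tune it on the handful of labeled target-camera pairs. The EnhancementCNN class, dataset objects, and hyperparameters are placeholders, not the paper's exact configuration.

import torch
from torch.utils.data import DataLoader

def train(model, loader, epochs, lr, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()  # a common choice for raw-to-sRGB enhancement
    model.to(device).train()
    for _ in range(epochs):
        for short_raw, long_srgb in loader:
            pred = model(short_raw.to(device))
            loss = loss_fn(pred, long_srgb.to(device))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# model = EnhancementCNN()  # hypothetical architecture
# Pre-train on the large source-camera dataset, then fine-tune on few target pairs:
# train(model, DataLoader(source_ds, batch_size=8, shuffle=True), epochs=100, lr=1e-4)
# train(model, DataLoader(target_few, batch_size=2, shuffle=True), epochs=50, lr=1e-5)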

Method

Given a noisy raw image captured with a short exposure time (i.e., fast shutter speed) as input, our CNN-based approach is trained to predict a clean, long-exposure sRGB output of the same scene. The input is multiplied by an exposure factor equal to the ratio of the output and input exposure times. For example, to generate a 10-second long-exposure output, a 0.1-second short-exposure input must be multiplied by 100. This operation amplifies the noise proportionally along with the illumination. Since the factor is applied in the unprocessed raw domain while the output is expected in the sRGB domain, the network must learn camera hardware-specific enhancement as well as the entire ISP pipeline (lens correction, demosaicing, white balancing, color manipulation, tone curve application, color space transform, and gamma correction). Thus, a model trained on data from one camera (the source domain) does not achieve similar performance on a different camera (the target domain); this is the domain gap. In this work, we propose to transfer the enhancement task learned from large labeled source data to the target domain using only a few labeled target samples.
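To make the amplification step concrete, the following sketch scales a short-exposure Bayer raw frame by the exposure ratio, assuming the rawpy library for raw decoding; the black- and white-level values and function name are illustrative defaults, not the authors' code.

import numpy as np
import rawpy

def amplify_raw(raw_path, t_in, t_out, black_level=512, white_level=16383):
    """Scale a short-exposure raw image by the exposure ratio t_out / t_in."""
    with rawpy.imread(raw_path) as raw:
        bayer = raw.raw_image_visible.astype(np.float32)
    # Subtract the black level and normalize to [0, 1].
    bayer = np.maximum(bayer - black_level, 0) / (white_level - black_level)
    ratio = t_out / t_in  # e.g., 10 s / 0.1 s = 100
    # Brightness and noise are both scaled by the same factor.
    return np.clip(bayer * ratio, 0.0, 1.0)

# Example: amplify a 0.1 s capture to match a 10 s ground-truth exposure.
# amplified = amplify_raw("short.nef", t_in=0.1, t_out=10.0)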

Nikon Dataset

Example short-exposure and long-exposure image pairs from the Nikon dataset. The short-exposure images are almost entirely dark, whereas the long-exposure images contain rich scene information.
We have compiled a dataset of raw low-light images captured with a Nikon D5600 camera to train the proposed few-shot domain adaptation architecture. The Nikon dataset consists of short-exposure images captured at 1/3 or 1/10 seconds and corresponding ground-truth long-exposure images captured at 10 or 30 seconds, stored in the NEF format. For uniformity, there are two short-exposure images for every long-exposure image, such that the exposure ratios (the ratio of exposure time between the ground-truth long-exposure image and the input short-exposure image) are 100 and 300, respectively. Similar to LSID, we mounted the camera on a sturdy tripod, used appropriate camera settings, and triggered the captures of static scenes remotely with a smartphone app. The dataset includes 129 short-exposure and 65 long-exposure ground-truth images of indoor and outdoor low-light scenes (sub-lux illumination).
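As an illustration of how such a dataset can be indexed, the sketch below groups captures by scene and pairs each long-exposure ground truth with its short-exposure inputs by exposure ratio. The file names and tuple format are hypothetical; in practice, exposure times could be read from EXIF metadata.

from collections import defaultdict

def build_pairs(captures):
    """captures: list of (path, exposure_seconds, scene_id) tuples."""
    by_scene = defaultdict(lambda: {"short": [], "long": []})
    for path, t, scene in captures:
        key = "long" if t >= 10 else "short"
        by_scene[scene][key].append((path, t))
    pairs = []
    for scene, group in by_scene.items():
        for gt_path, t_long in group["long"]:
            for in_path, t_short in group["short"]:
                ratio = round(t_long / t_short)
                if ratio in (100, 300):  # ratios used in the Nikon dataset
                    pairs.append((in_path, gt_path, ratio))
    return pairs

# Example: build_pairs([("s1.nef", 0.1, "scene1"), ("l1.nef", 10, "scene1")])
# -> [("s1.nef", "l1.nef", 100)]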

Results

Sony as Source and Nikon as Target

[Result figures]

Sony as Source and Canon as Target

[Result figures]

Sony as Source and OnePlus 7 as Target

[Result figure]

Sony as Source and Google Pixel as Target

[Result figure]

BibTeX

@inproceedings{prabhakar2021fewshot,
        title     = {Few-Shot Domain Adaptation for Low Light RAW Image Enhancement},
        author    = {K. Prabhakar and Vishal Vinod and N. Sahoo and Venkatesh Babu Radhakrishnan},
        booktitle = {British Machine Vision Conference (BMVC)},
        year      = {2021},
}