Project Page | Paper | Colab | HuggingFace
We present a simple yet effective technique for estimating lighting from a single input image. Current techniques rely heavily on HDR panorama datasets to train neural networks that regress an input with a limited field of view to a full environment map. However, these approaches often struggle with real-world, uncontrolled settings due to the limited diversity and size of their datasets. To address this problem, we leverage diffusion models trained on billions of standard images to render a chrome ball into the input image. Despite its simplicity, this task remains challenging: diffusion models often insert incorrect or inconsistent objects and cannot readily generate images in HDR format. Our research uncovers a surprising relationship between the appearance of chrome balls and the initial diffusion noise map, which we utilize to consistently generate high-quality chrome balls. We further fine-tune an LDR diffusion model (Stable Diffusion XL) with LoRA, enabling it to perform exposure bracketing for HDR light estimation. Our method produces convincing light estimates across diverse settings and demonstrates superior generalization to in-the-wild scenarios.
conda env create -f environment.yml
conda activate diffusionlight
pip install -r requirements.txt
python inpaint.py --dataset example --output_dir output
python ball2envmap.py --ball_dir output/square --envmap_dir output/envmap
python exposure2hdr.py --input_dir output/envmap --output_dir output/hdr
To set up the Python environment, run the following Conda and pip commands:
conda env create -f environment.yml
conda activate diffusionlight
pip install -r requirements.txt
Note that Conda is optional. However, if you choose not to use Conda, you must manually install the CUDA toolkit and OpenEXR.
Please resize the input image to 1024x1024. If the image is not square, we recommend padding it with a black border.
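For reference, this preprocessing can be done in a few lines of Pillow; the sketch below is one way to do it (the file names are placeholders, not part of our scripts):

import sys
from PIL import Image

def prepare_input(src_path, dst_path, size=1024):
    # Pad a non-square image to a square with a black border, then resize.
    img = Image.open(src_path).convert("RGB")
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), (0, 0, 0))  # black padding
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    canvas.resize((size, size), Image.LANCZOS).save(dst_path)

if __name__ == "__main__":
    prepare_input(sys.argv[1], sys.argv[2])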
First, we predict the chrome ball at different exposure values (EVs) using the following command:
python inpaint.py --dataset <input_directory> --output_dir <output_directory>
This command outputs three subdirectories: control, raw, and square. The contents of each directory are:
- control: the conditioning depth map
- raw: the inpainted image with a chrome ball in the center
- square: the square-cropped chrome ball (used for the next step)
Next, we project the chrome ball from the previous step onto an LDR environment map using the following command:
python ball2envmap.py --ball_dir <output_directory>/square --envmap_dir <output_directory>/envmap
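For intuition, this projection treats the ball as a mirror sphere viewed by an approximately orthographic camera: each ball pixel gives a surface normal, the viewing ray is reflected about that normal, and the reflected direction indexes the equirectangular map. The following NumPy sketch shows that forward mapping for a single pixel; it is an illustration of the geometry, not the script's exact implementation (which resamples in the inverse direction):

import numpy as np

def ball_pixel_to_latlong(u, v):
    # (u, v) in [-1, 1] on the ball image; camera assumed to look down -z.
    z = np.sqrt(max(0.0, 1.0 - u * u - v * v))
    n = np.array([u, v, z])                     # mirror-sphere surface normal
    d = np.array([0.0, 0.0, -1.0])              # incoming view direction
    r = d - 2.0 * np.dot(d, n) * n              # reflect view ray about normal
    lon = np.arctan2(r[0], -r[2])               # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(r[1], -1.0, 1.0))   # latitude in [-pi/2, pi/2]
    return lon, lat  # index the equirectangular map with these angles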
Finally, we compose an HDR image from multiple LDR environment maps using our custom exposure bracketing:
python exposure2hdr.py --input_dir <output_directory>/envmap --output_dir <output_directory>/hdr
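Conceptually, exposure bracketing linearizes each LDR map, rescales it by its exposure, and blends the results while down-weighting clipped pixels. The snippet below is a simplified illustration of that idea under a plain gamma assumption; the actual script's weighting and tone-curve handling differ:

import numpy as np

def merge_exposures(ldr_maps, evs, gamma=2.4):
    # ldr_maps: list of float arrays in [0, 1]; evs: exposure value per map.
    acc = np.zeros_like(ldr_maps[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, ev in zip(ldr_maps, evs):
        linear = np.power(img, gamma) * (2.0 ** -ev)  # undo gamma and exposure
        weight = 1.0 - np.abs(2.0 * img - 1.0)        # trust mid-tones most
        acc += weight * linear
        wsum += weight
    return acc / np.maximum(wsum, 1e-8)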
The predicted light estimation will be located at <output_directory>/hdr and can be used for downstream tasks such as object insertion. This is also the output we use when comparing against other methods.
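For example, the resulting EXR file can be loaded as a linear radiance map with OpenCV (the file name below is a placeholder; note that recent OpenCV versions gate EXR I/O behind an environment variable):

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before EXR reads
import cv2

# float32 HxWx3 radiance map (BGR channel order)
envmap = cv2.imread("output/hdr/example.exr", cv2.IMREAD_UNCHANGED)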
We use the evaluation code from StyleLight and Editable Indoor Light Estimation; you can use their code to measure our scores.
Additionally, we provide a slightly modified version of the evaluation code at DiffusionLight-evaluation, including the test input.
@inproceedings{Phongthawee2023DiffusionLight,
author = {Phongthawee, Pakkapon and Chinchuthakun, Worameth and Sinsunthithet, Nontaphat and Raj, Amit and Jampani, Varun and Khungurn, Pramook and Suwajanakorn, Supasorn},
title = {DiffusionLight: Light Probes for Free by Painting a Chrome Ball},
booktitle = {ArXiv},
year = {2023},
}