Out-of-distribution (OOD) 3D relighting requires novel view synthesis under unseen lighting conditions that differ significantly from the observed images. Existing relighting methods, which assume consistent light source distributions between training and testing, often degrade in OOD scenarios. We introduce MetaGS to tackle this challenge from two perspectives. First, we propose a meta-learning approach to train 3D Gaussian splatting, which explicitly promotes learning generalizable Gaussian geometries and appearance attributes across diverse lighting conditions, even with biased training data. Second, we embed fundamental physical priors from the Blinn-Phong reflection model into Gaussian splatting, which enhances the decoupling of shading components and leads to more accurate 3D scene reconstruction. Results on both synthetic and real-world datasets demonstrate the effectiveness of MetaGS in challenging OOD relighting tasks, supporting efficient point-light relighting and generalizing well to unseen environment lighting maps.
Out-of-distribution (OOD) refers to cases where test-time light sources deviate from the training distribution. As shown below, the lighting in the training set is arranged on one side of the hemisphere, while the lighting in the test set is arranged on the opposite side (cameras are located on both sides).
Our model decomposes the illumination effects by letting the learned Gaussian points interact with rays originating from both the viewer and the light source.
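As a rough illustration of the Blinn-Phong prior mentioned above (a generic sketch of the classic reflection model, not the paper's actual implementation; the function name and all coefficients are illustrative), shading at a point combines an ambient term, a diffuse term from the light direction, and a specular term from the half-vector between viewer and light:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(normal, view_dir, light_dir, base_color,
                k_d=0.7, k_s=0.3, shininess=32.0, ambient=0.05):
    """Classic Blinn-Phong shading at a single surface point (illustrative)."""
    n = normalize(normal)
    v = normalize(view_dir)
    l = normalize(light_dir)
    h = normalize(v + l)  # half-vector between the viewer and light directions
    diffuse = k_d * max(np.dot(n, l), 0.0)
    specular = k_s * max(np.dot(n, h), 0.0) ** shininess
    return ambient + diffuse * base_color + specular
```

Decomposing radiance into these separate components is what allows the shading terms to be disentangled and relit under a new light position.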
Existing 3D relighting methods exhibit performance degradation when handling out-of-distribution relighting, primarily because they overfit lighting patterns to perspective-constrained observations, producing implausible lighting components (such as incorrect specular highlights and shadows). To address this, we draw insights from MAML [1] and introduce a meta-learning framework based on bilevel optimization, which has been shown to effectively bridge the distribution shift between the training and testing domains, facilitating the generalization of optimized variables to unseen scenarios [2].
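The bilevel structure of MAML-style meta-learning can be sketched in a few lines (a first-order sketch on a toy linear-regression family of tasks, purely illustrative; the function names, learning rates, and task format are assumptions, not the paper's training loop): an inner step adapts the shared parameters on each task's support set, and the outer step updates them with gradients evaluated on the query set after adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    # mean squared error and its gradient for a linear model y ~ X @ w
    err = X @ w - y
    return float(err @ err) / len(y), 2.0 * X.T @ err / len(y)

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.05):
    """One first-order MAML update: adapt on each task's support set,
    then update the meta-parameters using query-set gradients."""
    meta_grad = np.zeros_like(w)
    for (Xs, ys), (Xq, yq) in tasks:
        _, g = loss_and_grad(w, Xs, ys)
        w_task = w - inner_lr * g            # inner (task-specific) step
        _, gq = loss_and_grad(w_task, Xq, yq)
        meta_grad += gq                       # first-order approximation
    return w - outer_lr * meta_grad / len(tasks)
```

In the relighting setting, each "task" would correspond to a subset of lighting conditions, so the outer objective rewards Gaussian attributes that remain accurate after adaptation to held-out lights.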
[1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. "Model-agnostic meta-learning for fast adaptation of deep networks." International Conference on Machine Learning. PMLR, 2017.
[2] Chen, Jiaxin, et al. "A closer look at the training strategy for modern meta-learning." Advances in Neural Information Processing Systems 33 (2020): 396-406.
Out-of-distribution relighting results on synthetic data: We present rendered novel views and error maps. While baselines often misrepresent shadows or light-dependent effects (e.g., incorrect shadows in Plastic Cup), our model better infers surface appearance.
Out-of-distribution relighting results on real-world data: As highlighted with the red boxes, baseline models struggle to maintain global shading consistency, exhibiting color shifts, incorrect shadows, and floating artifacts. Our approach produces physically plausible specular highlights and geometrically consistent shadows that closely match the ground truth.
@misc{he2025metagsmetalearnedgaussianphongmodel,
title={MetaGS: A Meta-Learned Gaussian-Phong Model for Out-of-Distribution 3D Scene Relighting},
author={Yumeng He and Yunbo Wang and Xiaokang Yang},
year={2025},
eprint={2405.20791},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2405.20791},
}