Lighting-grounded Video Generation with Renderer-based Agent Reasoning
Abstract
Diffusion models have achieved remarkable progress in video generation, but their controllability remains a major limitation. Key scene factors such as layout, lighting, and camera trajectory are often entangled or only weakly modeled, restricting applicability in domains like filmmaking and virtual production where explicit scene control is essential. We present LiVER, a diffusion-based framework for scene-controllable video generation that conditions synthesis on explicit 3D scene properties, supported by a new large-scale dataset with dense annotations of object layout, lighting, and camera parameters. Our method disentangles these properties by rendering control signals from a unified 3D representation. We propose a lightweight conditioning module and a progressive training strategy that integrate these signals into a foundation video diffusion model, ensuring stable convergence and high fidelity. The framework enables a wide range of applications, including image-to-video and video-to-video synthesis in which the underlying 3D scene is fully editable. To further enhance usability, we develop a scene agent that automatically translates high-level user instructions into the required 3D control signals. Experiments show that LiVER achieves state-of-the-art photorealism and temporal consistency while enabling precise, disentangled control over scene factors, setting a new standard for controllable video generation.
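The abstract's "lightweight conditioning module" with "stable convergence" can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the paper does not publish its architecture): rendered per-frame control maps for layout, lighting, and camera are stacked and injected into the base model's features through a zero-initialized residual projection, so the conditioning branch is a no-op at the start of fine-tuning and the pretrained model's behavior is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C, H, W = 4, 3, 8, 8  # frames, channels per control map, spatial size

# Rendered control signals from the unified 3D scene (stand-in arrays)
layout = rng.random((T, C, H, W))    # object-layout map
lighting = rng.random((T, C, H, W))  # shading / irradiance map
camera = rng.random((T, C, H, W))    # camera-ray map

cond = np.concatenate([layout, lighting, camera], axis=1)  # (T, 3C, H, W)

class ZeroInitProjection:
    """Linear projection whose weights start at zero, so the
    conditioning branch adds nothing until training updates it."""
    def __init__(self, c_in, c_out):
        self.W = np.zeros((c_in, c_out))
    def __call__(self, x):   # x: (..., c_in)
        return x @ self.W    # (..., c_out)

proj = ZeroInitProjection(3 * C, 16)
tokens = cond.transpose(0, 2, 3, 1).reshape(-1, 3 * C)  # (T*H*W, 3C)
base_features = rng.random((T * H * W, 16))             # stand-in latents
fused = base_features + proj(tokens)  # residual injection

assert np.allclose(fused, base_features)  # exact no-op at initialization
```

The zero-initialized residual is a common trick for grafting new conditioning onto a pretrained diffusion backbone without destabilizing it; whether LiVER uses exactly this mechanism is an assumption here.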
Accepted to CVPR 2026
TL;DR: LiVER is a controllable video diffusion framework that generates videos from explicit 3D scene conditions such as layout, lighting, and camera motion. It combines a densely annotated dataset, a lightweight conditioning module, and a scene agent that converts user instructions into editable 3D controls, achieving strong realism and temporal consistency with substantially more precise scene control than prior methods.
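The scene agent is described only at a high level, but its interface can be sketched: a free-form instruction goes in, and a structured 3D control edit comes out for the renderer to consume. The rule-based translator below is a hypothetical stand-in (an actual agent would likely use an LLM), and every key and value in the returned dictionary is an illustrative assumption, not the paper's schema.

```python
def instruction_to_controls(text: str) -> dict:
    """Map a high-level user instruction to structured 3D control edits.
    Purely illustrative keyword rules; not LiVER's actual agent."""
    text = text.lower()
    controls = {}
    if "sunset" in text:
        # low sun angle and warm color temperature (assumed units)
        controls["lighting"] = {"sun_elevation_deg": 5.0, "color_temp_k": 3000}
    if "orbit" in text:
        controls["camera"] = {"trajectory": "orbit", "radius_m": 3.0}
    if "left" in text:
        controls["layout"] = {"translate_x_m": -1.0}
    return controls

edit = instruction_to_controls("Orbit the car slowly at sunset")
# `edit` now holds lighting and camera entries the generator can render from
```

The useful point is the separation of concerns: the agent never touches pixels, it only edits the 3D scene specification, and the diffusion model is conditioned on signals rendered from that specification.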