Dynamic Gaussians Mesh:
Consistent Mesh Reconstruction from Monocular Videos
Isabella Liu,
Hao Su† ,
Xiaolong Wang†
UC San Diego
† denotes equal advising
Abstract
Modern 3D engines and graphics pipelines require meshes as a memory-efficient representation that supports efficient rendering, geometry processing, texture editing, and many other downstream operations. However, obtaining meshes of high structural and geometric quality from monocular visual observations remains highly difficult, and the problem becomes even more challenging for dynamic scenes and objects. To this end, we introduce Dynamic Gaussians Mesh (DG-Mesh), a framework that reconstructs a high-fidelity, time-consistent mesh from a single monocular video. Our work leverages recent advances in 3D Gaussian Splatting to construct a temporally consistent mesh sequence from the video. Building on this representation, DG-Mesh recovers high-quality meshes from the Gaussian points and tracks the mesh vertices over time, which enables applications such as texture editing on dynamic objects. We introduce Gaussian-Mesh Anchoring, which encourages evenly distributed Gaussians and thus better mesh reconstruction through mesh-guided densification and pruning of the deformed Gaussians. By applying cycle-consistent deformation between the canonical and the deformed space, we can project the anchored Gaussians back to the canonical space and optimize the Gaussians across all time frames. In evaluations on different datasets, DG-Mesh delivers significantly better mesh reconstruction and rendering than baseline methods.
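The anchoring and cycle-consistency ideas above can be summarized in a short sketch. The snippet below is a minimal PyTorch illustration, not the released DG-Mesh implementation: the deformation MLP, the nearest-face anchoring rule, and the loss form are simplifying assumptions chosen only to show how Gaussians anchored in the deformed space can be mapped back to the canonical space and supervised with a round-trip loss.

import torch
import torch.nn as nn

class DeformField(nn.Module):
    # Tiny time-conditioned MLP predicting a per-point offset (illustrative size).
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        t = t.expand(x.shape[0], 1)
        return x + self.net(torch.cat([x, t], dim=-1))

def anchor_to_faces(deformed_xyz, face_centers):
    # Mesh-guided "densify and prune" (simplified): keep one Gaussian per mesh
    # face by snapping each face center to its nearest deformed Gaussian center.
    dists = torch.cdist(face_centers, deformed_xyz)   # (F, N) pairwise distances
    nearest = dists.argmin(dim=1)                     # one Gaussian index per face
    return deformed_xyz[nearest], nearest             # (F, 3) anchored centers

def cycle_loss(canonical_xyz, fwd, bwd, t, face_centers):
    # Canonical -> deformed -> anchored -> back to canonical; penalize the
    # round-trip error so Gaussians stay consistent across time frames.
    deformed = fwd(canonical_xyz, t)
    anchored, nearest = anchor_to_faces(deformed, face_centers)
    back = bwd(anchored, t)
    return (back - canonical_xyz[nearest]).pow(2).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    canonical = torch.randn(2000, 3)   # canonical Gaussian centers (toy data)
    faces = torch.randn(1500, 3)       # face centers of an extracted mesh (toy data)
    fwd, bwd = DeformField(), DeformField()
    loss = cycle_loss(canonical, fwd, bwd, torch.tensor([[0.3]]), faces)
    loss.backward()
    print(float(loss))

In the full pipeline, the face centers would come from the mesh extracted from the deformed Gaussians at each frame, and this term would be optimized alongside the rendering losses; both details are abstracted away here.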
Pipeline
Training Process
Figure: pipeline stages, from the 4D GS centers to the anchored GS centers, the extracted mesh, and the mesh rendering.
Results on Real Data
Nerfies: Toby-sit
Unbiased4D: Real Cactus
iPhone Captured Video
Video
BibTeX
@misc{liu2024dynamic,
      title={Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos},
      author={Isabella Liu and Hao Su and Xiaolong Wang},
      year={2024},
      eprint={2404.12379},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}