Dynamic Gaussians Mesh:
Consistent Mesh Reconstruction from Monocular Videos
ICLR 2025
Isabella Liu, Hao Su, Xiaolong Wang
UC San Diego

denotes equal advisory

DG-Mesh reconstructs high-fidelity, time-consistent meshes for dynamic scenes with complex non-rigid deformations. Given a monocular video and camera parameters, it recovers high-quality surfaces, appearance, and vertex motion while supporting flexible topology changes. Evaluations show it significantly outperforms baselines in both dynamic mesh reconstruction and rendering.


Pipeline


Training Process

Figure: training pipeline — 4D GS centers → anchored GS centers → mesh → mesh rendering.
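The pipeline stages above suggest a per-frame flow: deform canonical Gaussian centers to time t, anchor the centers so they are evenly distributed, then extract and render a mesh. Below is a toy sketch of just the first two steps; the `deform` and `anchor_to_grid` helpers are hypothetical stand-ins (the actual method learns a deformation network and uses differentiable meshing and rendering, which are omitted here):

```python
import numpy as np

def deform(centers, t):
    # Hypothetical deformation field: maps canonical Gaussian centers
    # to their positions at time t. Stands in for the learned
    # deformation network; here it is just a fixed analytic warp.
    return centers + 0.1 * np.sin(t + centers)

def anchor_to_grid(centers, cell=0.5):
    # Toy anchoring: snap Gaussian centers to a uniform grid so the
    # point set is evenly distributed before mesh extraction. This is
    # only an illustration of the anchoring idea, not the real method.
    return np.round(centers / cell) * cell

# Canonical Gaussian centers (random toy point cloud).
canonical = np.random.default_rng(0).uniform(-1.0, 1.0, size=(256, 3))

for t in np.linspace(0.0, 1.0, 4):
    pts = deform(canonical, t)        # 4D GS centers at time t
    anchored = anchor_to_grid(pts)    # anchored GS centers
    # Mesh extraction and differentiable mesh rendering would follow;
    # the quantity below is only a stand-in regularization term that
    # penalizes how far centers drift from their anchors.
    loss = float(np.mean((pts - anchored) ** 2))
```

The snap-to-grid step is deliberately crude; its only purpose is to show where an anchoring operation sits between deformation and meshing in the training loop.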




Results on the D-NeRF Dataset




Results on the DG-Mesh Dataset




Results on Real Data


Nerfies: Toby-sit
Nerfies: Tail
Self-captured iPhone Dataset: Tiger
Self-captured iPhone Dataset: Starbucks
NeuralActor: D2_vlad
NeuralActor: N1_lingjie_yellowpants


Full Video







BibTeX

@article{liu2024dynamic,
  title={Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Monocular Videos},
  author={Liu, Isabella and Su, Hao and Wang, Xiaolong},
  journal={arXiv preprint arXiv:2404.12379},
  year={2024}
}

Acknowledgements

The website template was borrowed from BakedSDF and HexPlane.