Dynamic scene reconstruction is a long-standing challenge in 3D vision. Recently, the emergence of 3D Gaussian Splatting has provided new insights into this problem. Although subsequent efforts rapidly extend static 3D Gaussians to dynamic scenes, they often lack explicit constraints on object motion, leading to optimization difficulties and performance degradation. To address the above issues, we propose a novel deformable 3D Gaussian splatting framework called MotionGS, which explores explicit motion priors to guide the deformation of 3D Gaussians. Specifically, we first introduce an optical flow decoupling module that decouples optical flow into camera flow and motion flow, corresponding to camera movement and object motion, respectively. The motion flow then effectively constrains the deformation of 3D Gaussians, thereby modeling the motion of dynamic objects. Additionally, a camera pose refinement module is proposed to alternately optimize 3D Gaussians and camera poses, mitigating the impact of inaccurate camera poses. Extensive experiments on monocular dynamic scenes validate that MotionGS surpasses state-of-the-art methods, exhibiting clear superiority in both qualitative and quantitative results.
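To make the decoupling idea concrete, the sketch below illustrates one common way such a split can be computed: the camera-induced flow is obtained by back-projecting pixels with a depth map, transforming them by the relative camera pose, and re-projecting, after which the motion flow is the residual between the observed optical flow and this camera flow. This is only a minimal illustration under assumed inputs (per-pixel depth, intrinsics `K`, relative rotation/translation `R_rel`, `t_rel`); the function names are hypothetical and not the paper's actual implementation.

```python
import torch

def camera_flow_from_depth(depth, K, R_rel, t_rel):
    """Illustrative sketch: camera-induced (ego-motion) flow from depth and a
    relative camera pose. Inputs: depth (H, W), K (3, 3), R_rel (3, 3), t_rel (3,).
    Symbols and signature are assumptions, not the MotionGS code."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()      # (H, W, 3)

    # Back-project pixels of frame t into 3D using the depth map
    cam_pts = (torch.linalg.inv(K) @ pix.reshape(-1, 3).T).T * depth.reshape(-1, 1)

    # Transform points into the camera frame of frame t+1 and re-project
    cam_pts2 = (R_rel @ cam_pts.T).T + t_rel
    proj = (K @ cam_pts2.T).T
    uv2 = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    # Camera flow = displacement of each pixel caused purely by camera movement
    return (uv2 - pix.reshape(-1, 3)[:, :2]).reshape(H, W, 2)

# Motion flow as the residual between observed optical flow and camera flow,
# which can then serve as an explicit supervision signal for Gaussian deformation:
#   flow_motion = flow_total - camera_flow_from_depth(depth, K, R_rel, t_rel)
```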