Since hands are the primary interface in daily interactions, modeling high-quality digital human hands and rendering realistic hand images is a critical research problem. Furthermore, given the requirements of interactive and rendering applications, it is essential to achieve real-time rendering and drivability of the digital model without compromising rendering quality. We therefore propose Jointly 3D Gaussian Hand (JGHand), a novel joint-driven 3D Gaussian Splatting (3DGS)-based hand representation that renders high-fidelity hand images in real time across diverse poses and identities. Distinct from existing articulated neural rendering techniques, we introduce a differentiable spatial-transformation process based on 3D keypoints. This process supports deformations from the canonical template to a mesh with arbitrary bone lengths and poses. Additionally, we propose a real-time shadow simulation method based on per-pixel depth that models the self-occlusion shadows caused by finger movements. Finally, we embed the hand prior and propose an animatable 3DGS representation of the hand driven solely by 3D keypoints. We validate the effectiveness of each component of our approach through comprehensive ablation studies. Experimental results on public datasets demonstrate that JGHand achieves real-time rendering speeds with improved quality, surpassing state-of-the-art methods.
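The abstract does not specify how the joint-driven deformation is computed, but a common way to realize "canonical template to arbitrary pose" for point-based representations such as Gaussian centers is linear blend skinning against per-bone rigid transforms derived from the 3D keypoints. The sketch below illustrates that generic idea only; the function name, the plain-list data layout, and the toy two-bone example are all assumptions, not the paper's actual method.

```python
def blend_transform_centers(centers, weights, bone_transforms):
    """Linear blend skinning of 3D points (plain-Python sketch).

    centers:         list of [x, y, z] canonical Gaussian centers
    weights:         per-point bone weights (each row sums to 1)
    bone_transforms: list of 4x4 rigid transforms, one per bone, which in a
                     keypoint-driven setup would be derived from 3D keypoints
    Returns the deformed centers as a list of [x, y, z].
    """
    out = []
    for p, w in zip(centers, weights):
        homo = p + [1.0]  # homogeneous coordinates
        # Blend the bone transforms with this point's skinning weights.
        blended = [[sum(w[b] * bone_transforms[b][i][j] for b in range(len(w)))
                    for j in range(4)] for i in range(4)]
        # Apply the blended transform and drop the homogeneous coordinate.
        out.append([sum(blended[i][j] * homo[j] for j in range(4))
                    for i in range(3)])
    return out

# Toy example: bone 0 translates +1 along x, bone 1 is the identity.
I = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
Tx = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(blend_transform_centers([[0, 0, 0], [1, 0, 0]],
                              [[1, 0], [0.5, 0.5]],
                              [Tx, I]))
# → [[1.0, 0.0, 0.0], [1.5, 0.0, 0.0]]
```

A fully weighted point follows its bone rigidly, while a half-weighted point moves half the translation; making this blend differentiable with respect to the keypoint-derived transforms is what allows end-to-end optimization of such a representation.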