Omnidirectional (or 360-degree) images are increasingly used in 3D applications, since a single image can render an entire scene. Existing methods based on neural radiance fields achieve high-quality 3D reconstruction from egocentric videos, yet they suffer from long training and rendering times. Recently, 3D Gaussian splatting has gained attention for its fast optimization and real-time rendering. However, directly applying a perspective rasterizer to omnidirectional images results in severe distortion due to the different optical properties of the two image domains. In this work, we present ODGS, a novel rasterization pipeline for omnidirectional images with a geometric interpretation. For each Gaussian, we define a tangent plane that touches the unit sphere and is perpendicular to the ray toward the Gaussian center. We then use a perspective-camera rasterizer to project the Gaussian onto the corresponding tangent plane. The projected Gaussians are transformed and combined into the omnidirectional image, completing the omnidirectional rasterization. This interpretation exposes the implicit assumptions within the proposed pipeline, which we verify through mathematical proofs. The entire rasterization process is parallelized with CUDA, achieving optimization and rendering speeds 100 times faster than those of NeRF-based methods. Comprehensive experiments demonstrate the superiority of ODGS, which delivers the best reconstruction and perceptual quality across various datasets. Furthermore, results on roaming datasets show that ODGS restores fine details effectively, even when reconstructing large 3D scenes.
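To make the per-Gaussian geometry concrete, below is a minimal NumPy sketch of the tangent-plane projection described above. It is an illustration, not the paper's implementation: all function names are hypothetical, the equirectangular coordinate convention (longitude along width, latitude along height) is an assumption, and the 2D covariance uses the standard first-order EWA Jacobian approximation common to perspective 3DGS rasterizers. The actual ODGS pipeline runs this logic in parallel CUDA kernels.

```python
import numpy as np

def tangent_frame(mu):
    """Rotation mapping the unit ray toward the Gaussian center onto +z.

    The tangent plane touches the unit sphere at d = mu / ||mu|| and is
    perpendicular to that ray, so a perspective camera looking along d
    rasterizes onto exactly this plane.
    """
    d = mu / np.linalg.norm(mu)                 # ray toward Gaussian center
    up = np.array([0.0, 1.0, 0.0])
    if abs(d @ up) > 0.999:                     # avoid a degenerate cross product
        up = np.array([1.0, 0.0, 0.0])
    x = np.cross(up, d); x /= np.linalg.norm(x) # tangent-plane basis vectors
    y = np.cross(d, x)
    return np.stack([x, y, d])                  # rows: plane axes, then normal

def project_to_tangent_plane(mu, cov):
    """Perspective-project a 3D Gaussian (mean mu, covariance cov)
    onto its tangent plane (first-order EWA approximation)."""
    R = tangent_frame(mu)
    t = R @ mu                                  # in this frame t = (0, 0, depth)
    cov_t = R @ cov @ R.T
    x, y, z = t
    J = np.array([[1 / z, 0, -x / z**2],        # Jacobian of (x/z, y/z)
                  [0, 1 / z, -y / z**2]])
    mean2d = np.array([x / z, y / z])           # plane origin, by construction
    cov2d = J @ cov_t @ J.T                     # 2x2 splat covariance on the plane
    return mean2d, cov2d

def plane_to_equirect(mu, width, height):
    """Equirectangular pixel of the tangent point (hypothetical layout)."""
    d = mu / np.linalg.norm(mu)
    lon = np.arctan2(d[0], d[2])                    # [-pi, pi)
    lat = np.arcsin(np.clip(d[1], -1.0, 1.0))       # [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v
```

Because the tangent plane is perpendicular to the ray toward the Gaussian center, that center always lands at the plane origin, where perspective distortion of the projected splat is smallest; the final step then warps each tangent-plane splat into the omnidirectional image at the pixel returned by the equirectangular mapping.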