We present GaSpCT, a novel view synthesis and 3D scene representation method that generates novel projection views for Computed Tomography (CT) scans. We adapt the Gaussian Splatting framework to enable novel view synthesis in CT from limited sets of 2D image projections and without the need for Structure from Motion (SfM) methodologies, thereby reducing the total scanning duration and the radiation dose the patient receives during the scan. We adapt the loss function to our use case by encouraging a stronger distinction between background and foreground through two sparsity-promoting regularizers: a beta loss and a total variation (TV) loss. Finally, we initialize the Gaussian locations across the 3D space using a uniform prior distribution over the region of the field of view where the brain is expected to lie. We evaluate the performance of our model using brain CT scans from the Parkinson's Progression Markers Initiative (PPMI) dataset and demonstrate that the rendered novel views closely match the original projection views of the simulated scan and outperform other implicit 3D scene representation methods. Furthermore, we empirically observe reduced training time compared to neural-network-based image synthesis for sparse-view CT reconstruction. Finally, the memory requirements of the Gaussian Splatting representation are 17% lower than those of the equivalent voxel-grid image representation.
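To illustrate how the two sparsity-promoting regularizers could combine with a standard photometric term, the sketch below shows a minimal PyTorch formulation. It is an assumption-laden illustration, not the paper's implementation: the function names (`beta_loss`, `tv_loss`, `gaspct_loss`), the weights `lambda_beta` and `lambda_tv`, the use of plain L1 as the photometric stand-in, and the choice to apply TV on the rendered projection are all hypothetical.

```python
# Minimal sketch of a regularized Gaussian Splatting loss for CT projections,
# assuming a PyTorch pipeline. Weights and exact term forms are illustrative
# assumptions, not the configuration reported in the paper.
import torch
import torch.nn.functional as F


def beta_loss(opacities: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Beta-distribution prior on per-Gaussian opacities: minimizing
    log(o) + log(1 - o) pushes opacities toward 0 or 1, sharpening the
    background/foreground separation. (Exact form in GaSpCT may differ.)"""
    o = opacities.clamp(eps, 1.0 - eps)
    return (torch.log(o) + torch.log(1.0 - o)).mean()


def tv_loss(image: torch.Tensor) -> torch.Tensor:
    """Total variation on a rendered projection of shape (C, H, W):
    penalizes high-frequency noise and promotes piecewise-smooth renders."""
    dh = (image[:, 1:, :] - image[:, :-1, :]).abs().mean()
    dw = (image[:, :, 1:] - image[:, :, :-1]).abs().mean()
    return dh + dw


def gaspct_loss(render, target, opacities, lambda_beta=0.01, lambda_tv=0.01):
    """Photometric term plus the two sparsity-promoting regularizers."""
    # L1 here stands in for the full 3DGS photometric loss (L1 + D-SSIM).
    photometric = F.l1_loss(render, target)
    return (photometric
            + lambda_beta * beta_loss(opacities)
            + lambda_tv * tv_loss(render))
```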