Description
The evolution of high-resolution total-body (TB) PET/CT has expanded dynamic PET applications, yet the temporal resolution gap between PET and CT presents challenges for quantitative accuracy. Because CT is acquired at a single time point, organ segmentations derived from it can introduce mismatch errors into kinetic modeling across dynamic frames. This study aims to utilize the enhanced anatomical detail of TB PET for attenuation correction (AC) and scatter correction (SC), incorporating frame-by-frame multi-organ segmentation to address temporal resolution disparities and improve quantitative precision in dynamic PET imaging. Deep learning algorithms were developed using static TB PET images from 430 patients scanned with the United Imaging uExplorer system. A 3D UNet was trained for multi-organ segmentation using non-corrected PET images and CT-derived ground-truth segmentation maps, and a dedicated decomposition-based network was trained for AC and SC. When applied to dynamic data, organ segmentations were predicted for each frame, followed by AC and SC. Comparative analysis used Dice coefficients to measure concordance against manually refined CT-based segmentation labels. The trained model achieved an average Dice coefficient of 0.96 across eight organs and all dynamic frames, outperforming the CT-based approach, which averaged 0.77. The developed deep learning method shows promise for CT-free multi-organ segmentation, AC, and SC in dynamic TB PET scans. Its potential to enhance accuracy and efficiency in dynamic PET imaging could broaden its application scope.
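As a concrete illustration of the evaluation metric reported above, a minimal Dice coefficient computation over binary organ masks might look like the following sketch. The mask arrays and their shapes are hypothetical stand-ins for per-frame organ segmentations; the study's actual evaluation code is not described in the abstract.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical 2-D masks standing in for one organ in one dynamic frame
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True   # predicted organ region (4 voxels)
truth[1:4, 1:4] = True  # ground-truth region (9 voxels)

print(round(dice_coefficient(pred, truth), 3))  # 2*4 / (4+9) ≈ 0.615
```

In practice the reported 0.96 average would come from computing this score per organ and per frame against the refined CT-based labels, then averaging.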
Field: Software and quantification