In PET systems, processing sensor data, such as gamma event positioning, directly at the detector level is beneficial for early data reduction and for scalability to large systems. Especially in total-body PET, with its large number of detectors, low-cost electronics for early data processing are required. Gradient tree boosting (GTB) has been shown to be an accurate method for positioning in the planar and DOI directions in radiation detectors. GTB is a supervised machine learning technique that builds ensembles of independent binary decision trees of a pre-defined depth in an additive manner. Input data traverse the trees to the leaf nodes, where the path is determined in each tree node by comparing one input feature to a split value. We have demonstrated an FPGA implementation with good positioning performance, high throughput and low FPGA logic resource consumption. Since memory resources are scarce in low-cost FPGAs, we investigate a method to reduce the memory requirements of GTB models, which can still be high in the FPGA implementation for larger tree depths.

GTB models were trained using data acquired with a pixelated high-resolution LYSO scintillator with a 1 mm pitch coupled to a sensor array consisting of 16 digital SiPMs (DPC-3200-22, Philips Digital Photon Counting) with 64 photon channels. From these data, GTB models with varying hyperparameters were trained for five different feature sets, where the feature sets consist of different combinations of the raw photon counts of the channels and calculated features of the light distribution. For each model, memory requirements were reduced by finding similar split values in the decision tree nodes of the trained model and assigning a common value to these nodes, such that no comparison result changes. Reducing the number of input features improves both positioning performance and split value reduction.
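The merging step described above can be sketched as follows. Two splits s1 < s2 on the same feature can share one common value whenever no training sample x satisfies s1 <= x < s2, because the comparison x < split then yields the same result for either value. This is an illustrative sketch under that assumption; the function name and data structures are our own, not the published implementation:

```python
def merge_split_values(splits_per_feature, feature_values):
    """Reduce the number of distinct split values without changing any
    comparison result on the training data.

    splits_per_feature: {feature: split values from all tree nodes}
    feature_values:     {feature: training data values of that feature}
    Returns {feature: {original_split: assigned_common_split}}.
    """
    mapping = {}
    for feat, splits in splits_per_feature.items():
        samples = sorted(set(feature_values[feat]))
        remap, group = {}, []
        for s in sorted(set(splits)):
            # Start a new group if a sample lies between the previous
            # split and this one (merging would flip its comparison).
            if group and any(group[-1] <= x < s for x in samples):
                for g in group:
                    remap[g] = group[0]  # assign the common value
                group = []
            group.append(s)
        for g in group:
            remap[g] = group[0]
        mapping[feat] = remap
    return mapping
```

Running the same procedure on a fraction of the training data leaves fewer samples that can fall between two splits, which is why larger reductions become possible at a small cost in fidelity.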
With no loss of positioning performance, we achieve maximum split value reductions of more than 50 %. Running the optimization algorithm on fractions of the training data instead of the full data set further improves the reduction for a model of 100 trees from 42.5 % to around 60 %, while maintaining more than 99.8 % of the positioning performance. With this reduced number of distinct split values, the memory requirement of the FPGA implementation of GTB models can be lowered, allowing deployment on smaller FPGAs and thereby reducing the potential cost of total-body PET systems.
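The memory saving can be pictured with a node layout in which each internal node stores an index into a shared table of distinct split values instead of a full-precision value, so shrinking the table shrinks both the table itself and the per-node index width. The following traversal sketch assumes such a layout for illustration; it is not the authors' actual FPGA format:

```python
def predict_tree(nodes, split_table, features):
    """Traverse one binary decision tree whose internal nodes store
    (feature index, index into a shared split value table); a short
    index is cheaper to store than a full-width split value."""
    i = 0
    while not nodes[i]["leaf"]:
        # The comparison against the shared split value decides the path.
        if features[nodes[i]["feature"]] < split_table[nodes[i]["split"]]:
            i = nodes[i]["left"]
        else:
            i = nodes[i]["right"]
    return nodes[i]["value"]


def predict_ensemble(trees, split_table, features):
    """GTB prediction: the sum of the leaf values of all trees."""
    return sum(predict_tree(t, split_table, features) for t in trees)
```

Because the merging step guarantees that no comparison result changes, predictions of the reduced model are bit-identical to those of the original model on the data used for the reduction.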