Description
The LEGEND experiment searches for neutrinoless double beta decay (0νββ) using high-purity germanium (HPGe) detectors enriched in ⁷⁶Ge, where minimizing backgrounds is essential for improving discovery sensitivity. A key technique for this is pulse shape discrimination (PSD), traditionally performed with A/E-based cuts that exploit differences in current pulse morphology. However, such cuts reduce the full waveform to a single scalar feature and offer limited adaptability.
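For reference, a minimal sketch of such an A/E cut, assuming charge waveforms stored as NumPy arrays and a calibrated energy per event; the `ae_cut` threshold is a hypothetical placeholder for a value that would be tuned on calibration peaks:

```python
import numpy as np

def ae_parameter(charge_waveform: np.ndarray, energy: float) -> float:
    """Classical A/E pulse-shape parameter: maximum current amplitude
    (derivative of the charge pulse) normalized by the event energy."""
    current = np.gradient(charge_waveform)  # current pulse ~ dQ/dt
    return current.max() / energy

def is_single_site(charge_waveform: np.ndarray, energy: float,
                   ae_cut: float = 0.9) -> bool:
    """Accept events whose A/E exceeds a tuned threshold; multi-site
    events split their energy across several charge clouds, so each
    current peak (and hence A/E) is lower than for single-site events."""
    return ae_parameter(charge_waveform, energy) > ae_cut
```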
In this work, we present a transformer-based PSD framework trained on waveforms from ²²⁸Th calibration data. The model leverages self-attention to capture subtle pulse shape variations, enabling robust classification of signal-like single-site events (SSE) against multi-site events (MSE) and surface backgrounds. After applying standard LEGEND-200 quality cuts and semi-automated waveform labeling, the model achieves over 99% accuracy for MSE/SSE discrimination and over 94% for n-/p-contact surface classification on calibration-labeled data.
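A minimal sketch of what such a model could look like in PyTorch, assuming a four-class labeling (SSE, MSE, n-contact, p-contact) and a simple patch-based token embedding; all hyperparameters are illustrative, not the values used in this work:

```python
import torch
import torch.nn as nn

class WaveformTransformer(nn.Module):
    """Sketch: the waveform is split into fixed-size patches, linearly
    embedded with a learned positional embedding, encoded by
    self-attention, mean-pooled into a latent vector, then classified."""

    def __init__(self, patch_size=32, max_patches=64, d_model=64,
                 n_heads=4, n_layers=3, n_classes=4):
        super().__init__()
        self.patch_size = patch_size
        self.embed = nn.Linear(patch_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_patches, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, wf):                         # wf: (batch, n_samples)
        b, n = wf.shape                            # n must divide evenly
        patches = wf.view(b, n // self.patch_size, self.patch_size)
        tokens = self.embed(patches) + self.pos[:, :n // self.patch_size]
        latent = self.encoder(tokens).mean(dim=1)  # pooled latent vector
        return self.head(latent), latent           # logits + embedding

# Toy forward pass: 8 random "waveforms" of 1024 samples each.
logits, latent = WaveformTransformer()(torch.randn(8, 1024))
```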
Compared to traditional A/E cuts, the transformer demonstrates higher PSD efficiency at Qββ, with consistent performance across calibration runs. UMAP projections of the latent space show clear class separation, indicating that the model effectively learns waveform topology.
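Such a projection could be produced along the following lines, assuming the pooled latent vectors are exported as a NumPy array and umap-learn is installed; the data below are random placeholders:

```python
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

# Placeholder latent vectors and labels; in practice these would be the
# pooled transformer embeddings and the calibration-derived class labels.
latents = np.random.randn(500, 64).astype(np.float32)
labels = np.random.randint(0, 4, size=500)

# 2-D UMAP embedding: well-separated classes appear as distinct clusters.
proj = umap.UMAP(n_components=2, n_neighbors=15,
                 min_dist=0.1).fit_transform(latents)

plt.scatter(proj[:, 0], proj[:, 1], c=labels, s=4, cmap="tab10")
plt.xlabel("UMAP 1")
plt.ylabel("UMAP 2")
plt.title("Latent-space projection (placeholder data)")
plt.show()
```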
As an extension, we plan to integrate Domain Adversarial Neural Networks (DANN) to bridge domain differences between calibration and physics datasets, improving generalization and reducing potential selection biases in physics analyses.
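The core of a DANN is a gradient reversal layer between the shared feature extractor and a domain classifier; a minimal sketch, assuming the latent vectors from the classifier above and a hypothetical `domain_head` module that predicts calibration vs. physics origin. In practice the classification loss would use only labeled calibration events while the domain loss uses both domains; the sketch folds both into one call for brevity:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the
    backward pass, so the feature extractor learns to *confuse* the
    domain classifier -- the core DANN mechanism."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def dann_loss(latent, class_logits, class_labels,
              domain_head, domain_labels, lam=0.1):
    """Total loss = supervised PSD loss + adversarial domain loss
    computed on gradient-reversed features."""
    cls_loss = nn.functional.cross_entropy(class_logits, class_labels)
    dom_logits = domain_head(GradReverse.apply(latent, lam))
    dom_loss = nn.functional.cross_entropy(dom_logits, domain_labels)
    return cls_loss + dom_loss

# A plausible (illustrative) domain head for 64-dim latents, 2 domains:
domain_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
```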
AI keywords: Transformers; Domain Adaptation; Simulation-Based Inference; Semi-Supervised Learning