Description
We present the development and optimization of a Variational Autoencoder (VAE) framework designed to accelerate Geant4 Monte Carlo simulations in hadrontherapy applications. This work, conducted as part of ICSC Spoke 2 - Flagship 2.6.2, addresses the critical computational bottleneck of high-resolution Linear Energy Transfer (LET) calculations. Our system employs deep learning to transform low-resolution simulation outputs into high-resolution LET distributions, preserving key Bragg peak characteristics while achieving substantial computational efficiency gains. The framework underwent extensive hyperparameter optimization with Optuna and a complete architectural refactoring, resulting in a modular pipeline with separate components for data engineering, training, optimization, and generation. Validated against Geant4 v11.2.2 with CATANA beamline parameters, the open-source release (baltig.infn.it/gigallo/ML_GEANT4_WP6) includes all components needed for immediate community deployment, including representative datasets and detailed user guides. This work establishes a sustainable foundation for ML-enhanced Monte Carlo simulations, with potential applications extending from medical physics to broader computational science domains that require efficient high-fidelity simulations.
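To make the super-resolution idea concrete, the sketch below shows the core VAE mechanics in NumPy: a low-resolution profile is encoded to a latent mean and log-variance, a latent sample is drawn via the reparameterization trick, and a higher-resolution profile is decoded, with the usual ELBO loss (reconstruction plus KL). All sizes, weights, and names here are illustrative assumptions, not the repository's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    return x @ w + b

# Hypothetical sizes: 64-bin low-resolution input, 256-bin high-resolution output.
LOW, LATENT, HIGH = 64, 8, 256

# Randomly initialised weights stand in for trained encoder/decoder parameters.
w_mu, b_mu = rng.normal(0, 0.05, (LOW, LATENT)), np.zeros(LATENT)
w_lv, b_lv = rng.normal(0, 0.05, (LOW, LATENT)), np.zeros(LATENT)
w_dec, b_dec = rng.normal(0, 0.05, (LATENT, HIGH)), np.zeros(HIGH)

def vae_forward(x_low):
    """Encode a low-res profile, sample the latent, decode a high-res profile."""
    mu = dense(x_low, w_mu, b_mu)            # latent mean
    log_var = dense(x_low, w_lv, b_lv)       # latent log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps     # reparameterization trick
    x_high = dense(z, w_dec, b_dec)          # decoded high-resolution profile
    return x_high, mu, log_var

def elbo_loss(x_high, target, mu, log_var):
    """Reconstruction MSE plus KL divergence to the unit Gaussian prior."""
    recon = np.mean((x_high - target) ** 2)
    kl = -0.5 * np.mean(1.0 + log_var - mu**2 - np.exp(log_var))
    return recon + kl

x_low = rng.random(LOW)     # stand-in low-resolution LET profile
target = rng.random(HIGH)   # stand-in high-resolution reference profile
x_high, mu, log_var = vae_forward(x_low)
print(x_high.shape, elbo_loss(x_high, target, mu, log_var))
```

In a trained model, the decoder would be fitted so that the reconstructed high-resolution profile preserves the Bragg peak position and height of the reference distribution.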
| INFN OpenAccess Repository link | baltig.infn.it/gigallo/ML_GEANT4_WP6 |
|---|---|