Generalizability is a fundamental property of machine learning algorithms, signaled by a grokking transition in the training dynamics. Within a quantum-inspired machine learning framework, we numerically demonstrate that a quantum many-body system exhibits an entanglement transition corresponding to a performance improvement in the binary classification of unseen data. Two datasets are considered as use...
We present Qibo, an open-source quantum computing framework offering a full-stack solution for efficient deployment of quantum algorithms and calibration routines on quantum hardware.
Quantum computers require the compilation of high-level circuits tailored to specific chip architectures, as well as integration with control electronics. Our framework tackles these challenges through Qibolab, a versatile...
This work presents a novel machine learning approach to characterizing the noise affecting a quantum chip and emulating it in simulations. By leveraging reinforcement learning, we train an agent to introduce noise channels that accurately mimic specific noise patterns. The proposed noise characterization method has been tested on simulations of small quantum circuits, where it consistently...
Quantum Neural Networks hold great promise for addressing computational challenges, but noise in near-term quantum devices remains a significant obstacle that limits circuit depth. In this work, we present a preliminary study of a novel noise mitigation strategy based on early exit, traditionally used in classical deep learning to improve computational efficiency. Experiments have been conducted on a...
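The early-exit mechanism the abstract borrows from classical deep learning can be sketched as follows. This is a purely classical, structural illustration with random toy weights; the network sizes, the confidence threshold, and all names are hypothetical and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy two-block network with an auxiliary "early exit" classifier head
# attached after the first block. All weights are random: this is only
# a sketch of the early-exit idea, not the model used in the abstract.
W1 = rng.normal(size=(8, 4))   # block 1: 4 input features -> 8 hidden units
E1 = rng.normal(size=(2, 8))   # early-exit head after block 1
W2 = rng.normal(size=(8, 8))   # block 2 (the deeper, noisier part)
F  = rng.normal(size=(2, 8))   # final classifier head

def predict(x, threshold=0.9):
    """Return (class probabilities, exit index). The intermediate head
    answers immediately when its confidence exceeds the (hypothetical)
    threshold, so the deeper block is skipped for easy inputs."""
    h1 = np.tanh(W1 @ x)
    p1 = softmax(E1 @ h1)
    if p1.max() >= threshold:   # confident enough: exit early
        return p1, 1
    h2 = np.tanh(W2 @ h1)
    return softmax(F @ h2), 2

probs, exit_at = predict(rng.normal(size=4))
print(exit_at, probs.round(3))
```

In the quantum setting the appeal is that exiting early means executing fewer circuit layers, which exposes the state to less hardware noise.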
Quantum machine learning models based on parameterized quantum circuits have attracted significant attention as early applications for current noisy quantum processors. While the advantage of such algorithms over classical counterparts in practical learning tasks is yet to be demonstrated, learning distributions generated by quantum systems, which are inherently quantum, is a promising avenue...
Variational quantum computing provides a versatile computational approach, applicable to a wide range of fields such as quantum chemistry, machine learning, and optimization problems. However, scaling up the optimization of quantum circuits encounters a significant hurdle due to the exponential concentration of the loss function, often dubbed the barren plateau (BP) phenomenon.
Although...
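The "exponential concentration" behind the barren plateau phenomenon can be illustrated numerically: for Haar-random states, the expectation value of a local observable concentrates around zero with variance shrinking roughly as 1/(2^n + 1) in the qubit number n. The sketch below (plain numpy, sample counts and names are illustrative) shows this concentration directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(dim):
    """Sample a Haar-random pure state of the given Hilbert-space dimension."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def z1_expectation(psi, n_qubits):
    """<Z> on the first qubit: +1 on basis states whose leading bit is 0."""
    signs = np.array([1 if (i >> (n_qubits - 1)) == 0 else -1
                      for i in range(2 ** n_qubits)])
    return float(np.sum(signs * np.abs(psi) ** 2))

# The sample variance of <Z_1> over random states shrinks exponentially
# with the number of qubits -- the concentration that makes gradients
# vanish on barren plateaus.
for n in (2, 4, 6):
    vals = [z1_expectation(haar_state(2 ** n), n) for _ in range(2000)]
    print(n, np.var(vals))
```

Deep random parameterized circuits approximate this Haar-random behavior, which is why loss landscapes flatten exponentially as circuits scale up.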
Machine Learning (ML) techniques for background event rejection in Liquid Argon Time Projection Chambers (LArTPCs) have been extensively studied for various physics channels [1,2], yielding promising results. In this contribution, we highlight the performance of Quantum Machine Learning (QML)-based background mitigation strategies to enhance the sensitivity of kton-scale LArTPCs for rare event...
Tracking charged particles in high-energy physics experiments is one of the most computationally demanding steps in the data analysis pipeline.
As we approach the High Luminosity LHC era, which is expected to significantly increase the number of proton-proton interactions per beam collision, particle tracking will become even more challenging due to the massive increase in the volume of data...
I will present a selection of results obtained by my group in recent years on quantifying the complexity of learning with quantum data, such as quantum states, quantum dynamics, and quantum channels. Example applications include the classification of quantum phases of matter, which are encoded into ground states of quantum many-particle systems, decision problems such as...
We investigate the combined use of quantum computers and classical deep neural networks, considering both quantum annealers and universal gate-based platforms.
In the first case, we show that data produced by D-Wave quantum annealers can be used to accelerate Monte Carlo simulations of spin glasses through the training of autoregressive neural networks [1].
In the second case, we show that deep...