Overview
The second quantum computing workshop @ INFN will be held in Padova on 29-31 October 2024.
Quantum computing potentially offers a paradigm shift for issues of interest to INFN, in areas ranging from quantum machine learning to event reconstruction and simulation for experiments, theoretical physics, and many others.
The Quantum Computing @INFN workshop represents an opportunity for the community to come together and receive training, with the objectives of presenting ongoing activities, fostering the exchange of knowledge and experiences, and attracting researchers and technologists who wish to acquire or enhance their skills.
The conference fee is 100 euro for PhD students and post-docs, 150 euro for other participants.
For INFN staff and associates, the fee will be paid exclusively by INFN internal transfer ("storno tra Unità Operative").
Venue
Palazzo della Salute, Via San Francesco 90, 35121, Padova PD
https://www.palazzodellasalute.it
Scientific Committee
Daniele Bonacorsi, Università di Bologna and INFN Bologna
Valter Bonvicini, INFN Trieste
Concezio Bozzi, INFN Ferrara
Elisa Ercolessi, Università di Bologna and INFN Bologna
Claudio Gatti, INFN Laboratori Nazionali di Frascati
Andrea Giachero, Università di Milano-Bicocca and INFN Milano-Bicocca
Stefano Giagu, Sapienza Università di Roma and INFN Roma1
Simone Montangero, Università di Padova and INFN Padova
Francesco Pederiva, Università di Trento and INFN TIFPA
Organizing Committee
Ilaria Siloi, Università di Padova and INFN Padova
Rossana Chiaratti, INFN Padova
Concezio Bozzi, INFN Ferrara
Giuseppe Calajo, Università di Padova and INFN Padova
Andrea Giachero, Università di Milano-Bicocca and INFN Milano-Bicocca
Silvana Schiavo, Università di Padova
Simone Montangero, Università di Padova and INFN Padova
Certifying the presence of a spectral gap of a many-body quantum system is a fundamental challenge with numerous applications across very different fields. In many-body physics, a non-zero gap in the thermodynamic limit implies properties such as area-law scaling of entanglement and exponential decay of correlations, both in 1D [1] and (partially) in 2D [2]. In quantum computing, the size of the spectral gap along the path is in one-to-one correspondence with the efficiency of the adiabatic algorithm [3]. Furthermore, rapidly-mixing classical Monte Carlo algorithms can be directly mapped to gapped many-body quantum Hamiltonians [4].
However, estimating the spectral gap of a many-body quantum system is a very challenging task, and an undecidable problem in general [5]. Nonetheless, given the relevance of the problem, several methods are known to lower-bound the spectral gap in the thermodynamic limit. Two prominent examples are the martingale method [6] and finite-size criteria [7], which have made it possible to prove the existence of a spectral gap in a plethora of physically relevant cases. Yet, each of those results required carefully tailoring the method to the specific model of interest.
I will present a novel and general approach to certify the gap of local Hamiltonians in the thermodynamic limit. We leverage the fact that the gap can be estimated as a minimisation problem under the constraint that a degree-two polynomial of the Hamiltonian is positive semidefinite. By taking ideas from sum-of-squares proofs of positivity, we introduce a relaxation of the minimisation problem that provides lower bounds to the spectral gap. The quality of the lower bound can be systematically improved by controlling a single parameter in our method. Being based on semidefinite programming, our technique provides an efficient and reliable numerical algorithm that can be applied flexibly to any local, frustration-free Hamiltonian. Lastly, our method recovers many previous approaches as special cases.
We benchmark our method on several 1D models and show a clear improvement with respect to previous approaches. In all the observed cases, our technique allows us to estimate a noticeably larger gap and to prove the existence of a gap in parameter regimes that are inaccessible to other methods. When combined with variational estimations of the gap, our lower bounds often match the corresponding upper bound, providing an exact calculation of the gap. Finally, we discuss extensions to 2D systems and future prospects of the method.
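As a minimal illustration of the degree-two condition mentioned above (with our notation, for a frustration-free Hamiltonian $H \succeq 0$ whose ground-state energy is set to zero): every eigenvalue $\lambda$ of $H$ satisfies $\lambda(\lambda-\gamma)\ge 0$ exactly when
\[ H^2 - \gamma H \succeq 0 , \]
so the spectral gap is the largest $\gamma$ for which this degree-two polynomial of $H$ is positive semidefinite; relaxing the positivity certificate to a sum-of-squares form turns the search for the best $\gamma$ into a semidefinite program whose optimum lower-bounds the gap.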
[1] M. B. Hastings, JSTAT P08024 (2007)
[2] A. Anshu et al., Proc. of the 54th Annual ACM SIGACT Symp. on Theory of Computing (2022)
[3] S. Jansen et al., Journal of Mathematical Physics 48, 102111 (2007)
[4] D. Aharonov et al., quant-ph/0301023 (2003)
[5] T. Cubitt et al., Nature 528, 207–211 (2015)
[6] B. Nachtergaele, Communications in Mathematical Physics 175(3) (1996)
[7] S. Knabe, Journal of Statistical Physics 52, 627 (1988)
I will introduce a framework to represent and optimize parametric quantum channels for quantum computing tasks, discussing the current status of research in non-unitary computation protocols and some of their applications to the investigation of quantum lattice systems.
In High-Energy Physics experiments, each detector exhibits a unique signature in terms of efficiency, resolution, and geometric acceptance. The overall effect is that the measured distribution of a given physical observable can be smeared and biased. The unfolding algorithm is the classical statistical technique employed to correct for this distortion, aiming to recover the underlying true distribution. This process is essential to make effective comparisons between experimental results and theoretical predictions.
In this context, the emerging technology of quantum computing represents an opportunity to enhance the unfolding performance and potentially yield more accurate results. QUnfold is a Python package designed to tackle the unfolding problem by harnessing the capabilities of quantum annealing. The regularized log-likelihood maximization formulation of the unfolding problem is translated into a Quadratic Unconstrained Binary Optimization (QUBO) problem, solvable by D-Wave’s quantum annealing systems. The algorithm is validated on a simulated data sample of particle collision events, generated by combining the MadGraph Monte Carlo event generator and the Delphes simulation software to model a realistic scenario. Several kinematic distributions are unfolded, and the results are compared with conventional unfolding methods widely used in precision measurements at the Large Hadron Collider.
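For illustration only, here is a minimal sketch (not the QUnfold implementation) of how a regularized least-squares unfolding objective can be cast as a QUBO via a binary encoding of the truth-level bin contents; the response matrix, bit depth, and brute-force solver below are toy stand-ins for what would be handed to a D-Wave sampler.

```python
# Illustrative sketch (not the QUnfold code): cast regularized unfolding
#   min_x ||R x - d||^2 + lam * ||L x||^2,  x_i >= 0 integer,
# into a QUBO by binary-encoding each truth bin with n_bits bits.
import itertools
import numpy as np

n_bins, n_bits, lam = 3, 3, 0.001        # toy sizes; small lam for this noiseless example
x_true = np.array([5, 3, 6])             # hypothetical truth-level spectrum
R = np.array([[0.8, 0.1, 0.0],           # hypothetical detector response matrix
              [0.2, 0.7, 0.2],
              [0.0, 0.2, 0.8]])
d = R @ x_true                           # smeared (measured) spectrum, no noise here

# Second-difference (curvature) regularization matrix
L = np.diag(np.full(n_bins, -2.0)) + np.diag(np.ones(n_bins - 1), 1) + np.diag(np.ones(n_bins - 1), -1)

# Binary encoding x = E b, with b a vector of n_bins * n_bits binary variables
E = np.zeros((n_bins, n_bins * n_bits))
for i in range(n_bins):
    for k in range(n_bits):
        E[i, i * n_bits + k] = 2 ** k

A = R.T @ R + lam * L.T @ L
c = R.T @ d
Q = E.T @ A @ E                            # quadratic part of the QUBO matrix
Q[np.diag_indices_from(Q)] -= 2 * E.T @ c  # linear part absorbed on the diagonal (b^2 = b)

# Brute-force the QUBO for this toy size; on hardware one would pass Q to a sampler instead.
best_b, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    b = np.array(bits)
    e = b @ Q @ b
    if e < best_e:
        best_b, best_e = b, e

print("unfolded spectrum:", E @ best_b, " true:", x_true)
```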
Generalizability is a fundamental property of machine learning algorithms, signalled by a grokking transition during the training dynamics. Within the quantum-inspired machine learning framework, we numerically prove that a quantum many-body system shows an entanglement transition corresponding to a performance improvement in the binary classification of unseen data. Two datasets are considered as use-case scenarios, namely fashion MNIST and the gene-expression communities of hepatocellular carcinoma. The measurement of qubit magnetizations and correlations is included in the matrix product state (MPS) simulation, in order to define meaningful gene subcommunities, verified by means of enrichment procedures.
We present Qibo, an open-source quantum computing framework offering a full-stack solution for efficient deployment of quantum algorithms and calibration routines on quantum hardware.
Quantum computers require compilation of high-level circuits tailored to specific chip architectures and integration with control electronics. Our framework tackles these challenges through Qibolab, a versatile backend that interfaces with a wide range of electronics, both commercial and open-source, for seamless program execution on quantum devices.
Moreover, frequent calibration is essential for maintaining quantum computers in an operational state. Qibocal simplifies this process, providing a hardware-agnostic interface that automates calibration routines across supported platforms, complete with advanced reporting tools. We tested our software suite on platforms based on superconducting qubit technology to get performance benchmarks using different electronics. The ease of integrating new hardware drivers makes Qibo particularly valuable for labs aiming to control their own self-hosted quantum systems.
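As a flavour of the workflow, a minimal Qibo example (assuming a recent Qibo release and its classical simulation backend; running on hardware would instead select a Qibolab platform configured for the lab's electronics):

```python
# Minimal sketch of the Qibo workflow described above, on a simulation backend.
import qibo
from qibo import Circuit, gates

qibo.set_backend("numpy")        # classical simulation backend

circuit = Circuit(2)
circuit.add(gates.H(0))          # Bell-state preparation
circuit.add(gates.CNOT(0, 1))
circuit.add(gates.M(0, 1))       # measure both qubits

result = circuit(nshots=1000)
print(result.frequencies())      # expect roughly 50/50 between '00' and '11'
```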
This work presents a novel machine learning approach to characterize the noise impacting a quantum chip and emulate it during simulations. By leveraging reinforcement learning, we train an agent to introduce noise channels that accurately mimic specific noise patterns. The proposed noise characterization method has been tested on simulations for small quantum circuits, where it consistently outperformed randomized benchmarking, a widely used noise characterization technique. Furthermore, we show a practical application of the algorithm using the well-known Grover’s circuit.
Quantum Neural Networks hold great promise for addressing computational challenges, but noise in near-term quantum devices remains a significant obstacle to circuit depth. In this work, we propose a preliminary study of a novel noise mitigation strategy based on early exit, traditionally used in classical deep learning to improve computational efficiency. Experiments have been conducted on a classification task over the MNIST dataset, where the early-exit mechanism has been implemented through mid-circuit measurements. The proposed methodology shows promising results under coherent noise, while requiring further refinement under incoherent noise conditions. Despite these limitations, the approach offers a promising path toward enhancing the robustness of QNNs on near-term quantum devices.
Quantum machine learning models based on parameterized quantum circuits have attracted significant attention as early applications for current noisy quantum processors. While the advantage of such algorithms over classical counterparts in practical learning tasks is yet to be demonstrated, learning distributions generated by quantum systems, which are inherently quantum, is a promising avenue for exploration. We propose a quantum version of a generative diffusion model. In this algorithm, artificial neural networks are replaced with parameterized quantum circuits, in order to directly generate quantum states. We present both a fully quantum and a latent quantum version of the algorithm; we also present a conditioned version of these models. The models’ performance has been evaluated using quantitative metrics complemented by qualitative assessments. An implementation of a simplified version of the algorithm has been executed on real NISQ quantum hardware.
Variational quantum computing provides a versatile computational approach, applicable to a wide range of fields such as quantum chemistry, machine learning, and optimization problems. However, scaling up the optimization of quantum circuits encounters a significant hurdle due to the exponential concentration of the loss function, often dubbed the barren plateau (BP) phenomenon.
Although rigorous results exist on the extent of barren plateaus in unitary or in noisy circuits, little is known about the interaction between these two effects, mainly because the loss concentration in noisy parameterized quantum circuits (PQCs) cannot be adequately described using the standard Lie algebraic formalism used in the unitary case.
In this work, we introduce a new analytical formulation based on non-negative matrix theory that enables precise calculation of the variance in deep PQCs, which allows investigating the complex and rich interplay between unitary dynamics and noise. In particular, we show the emergence of a noise-induced absorption mechanism, a phenomenon that cannot arise in the purely reversible context of unitary quantum computing.
Despite the challenges, general lower bounds on the variance of deep PQCs can still be established by appropriately slowing down the speed of convergence to the deep-circuit limit, effectively mimicking the behaviour of shallow circuits. Our framework applies to both unitary and non-unitary dynamics, allowing us to establish a deeper connection between the noise resilience of PQCs and the potential to enhance their expressive power through smart initialization strategies. Theoretical developments are supported by numerical examples and related applications.
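A simple numerical illustration of the loss-concentration phenomenon discussed above (our own toy construction, not the formalism of the talk): in a random layered circuit, the variance of a single-qubit observable over random parameter draws shrinks rapidly with the number of qubits.

```python
# Toy illustration of a barren plateau: variance of <Z_0> over random
# parameters of a layered RY + CZ circuit decays quickly with qubit number.
import numpy as np

rng = np.random.default_rng(1)

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape((2,) * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(state, q0, q1, n):
    """Apply a CZ gate between qubits q0 and q1."""
    psi = state.reshape((2,) * n).copy()
    idx = [slice(None)] * n
    idx[q0], idx[q1] = 1, 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(-1)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def random_circuit_expectation(n, layers):
    state = np.zeros(2 ** n); state[0] = 1.0
    for _ in range(layers):
        for q in range(n):                       # random single-qubit rotations
            state = apply_1q(state, ry(rng.uniform(0, 2 * np.pi)), q, n)
        for q in range(n - 1):                   # entangling CZ chain
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state.reshape((2,) * n)) ** 2
    p0 = probs[0].sum()                          # prob. that qubit 0 reads 0
    return 2 * p0 - 1                            # <Z_0>

for n in [2, 4, 6, 8]:
    samples = [random_circuit_expectation(n, layers=20) for _ in range(200)]
    print(f"n = {n}:  Var[<Z_0>] = {np.var(samples):.3e}")
```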
Machine Learning (ML) techniques for background event rejection in Liquid Argon Time Projection Chambers (LArTPCs) have been extensively studied for various physics channels [1,2], yielding promising results. In this contribution, we highlight the performance of Quantum Machine Learning (QML)-based background mitigation strategies to enhance the sensitivity of kton-scale LArTPCs for rare event searches in the few-MeV energy range. We emphasize their potential in the search for neutrinoless double beta decay (0νββ) of the 136Xe isotope within the Deep Underground Neutrino Experiment (DUNE). These low-energy events generate very short, undersampled tracks in LArTPCs that are difficult to analyze [3].
We present the application of QML algorithms, particularly Quantum Support Vector Machines (QSVMs) [4]. QSVMs exploit quantum computation to map the original features into a higher-dimensional vector space, so that the resulting hyperplane allows better separation between classes. The choice of this transformation, called the feature map, is critical, and it results in a positive semidefinite scalar function called the kernel.
QSVMs exhibit competitive performance but require careful design of their kernel functions. Optimizing a quantum kernel for specific classification tasks remains an open challenge in QML. We address this problem by employing powerful meta-heuristic genetic optimization algorithms, which allow for the discovery of quantum kernel functions tailored to both the dataset and the quantum hardware in use. Specifically, we propose mono-objective and multi-objective fitness function optimization strategies that consider the constraints of current Noisy Intermediate-Scale Quantum (NISQ) devices, optimizing feature maps to align with the specific qubit connectivity and available basis gates.
Our study provides deeper insights into the feasibility of performing genetic optimizations directly on quantum hardware. We evaluate the impact of noise through experiments conducted on different IBM quantum backends with over 100 qubits. We further explore the feasibility of partitioning quantum devices to compute multiple independent quantum kernels in parallel, achieving significant acceleration in the genetic optimization process. This approach demonstrates that genetic optimization on modern quantum hardware is feasible under certain conditions, leading to a substantial speed-up and contributing to the pioneering of quantum hardware parallelization.
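To fix ideas, here is a toy sketch of the fidelity-kernel pipeline underlying a QSVM, using a trivially simulable product feature map and scikit-learn's precomputed-kernel SVM; the actual work replaces this map with entangling, hardware-tailored feature maps found by genetic search, and the dataset below is purely illustrative.

```python
# Toy QSVM sketch: build a fidelity kernel K_ij = |<phi(x_i)|phi(x_j)>|^2
# from a simple angle-encoding feature map and feed it to a classical SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def feature_state(x):
    """|phi(x)> = RY(x_1)|0> (x) RY(x_2)|0> ... : one qubit per feature."""
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return state

def quantum_kernel(XA, XB):
    SA = np.array([feature_state(x) for x in XA])
    SB = np.array([feature_state(x) for x in XB])
    return np.abs(SA @ SB.T) ** 2

# Two-feature toy dataset: two Gaussian blobs standing in for signal/background
X = np.vstack([rng.normal(0.8, 0.3, (50, 2)), rng.normal(2.2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

K = quantum_kernel(X, X)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```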
This work is supported by PNRR MUR projects PE0000023-NQSTI and CN00000013-ICSC.
References
[1] DUNE Collaboration, Phys. Rev. D 102, 092003 (2020).
[2] MicroBooNE Collaboration, Phys. Rev. D 103, 092003 (2021).
[3] R. Moretti et al., Eur. Phys. J. Plus 139, 723 (2024).
[4] V. Havlíček et al., Nature 567, 209–212 (2019).
Lithium niobate is a leading material for integrated optics in quantum and classical applications. Because of its nonlinearity, it supports the fabrication of electro-optical devices for quantum state generation and manipulation. Using this material platform, I will show our experimental results on the generation of squeezed vacuum states on chip, frequency conversion of single photons, and the integration of multiple components on chip. The monolithic nature of these devices means that the correct phase can be stably realized in what would otherwise be an unstable interferometer, greatly simplifying the task of implementing sophisticated photonic quantum circuits.
Loss-tolerant quantum codes (LTCs) are particular error-correcting codes, essential for safeguarding quantum information against physical qubit losses, with significant applications in quantum communication, as well as in quantum computation, where photons can connect various modules of a computer, facilitate remote computation, or even act as the fundamental units of all-photonic processors.
In this work, we enhance the feasibility and performance of loss correction in two main directions: optimizing the code generation protocol and refining the specific details of the code.
Specifically, we develop novel protocols for the generation of photonic LTCs. These protocols move an important step forward with respect to the existing literature by allowing for a hierarchical generation of the LTCs, a feature that, although recognized as highly desirable, had remained elusive in previous proposals; in line with expectations, our numerical analyses show that our hierarchically generated LTCs feature remarkably improved loss-tolerance properties. In addition, our scheme is deterministic and requires only a single quantum emitter; this provides important simplifications to the system layout and is grounded in recent experimental achievements.
Furthermore, we address the design of the LTCs themselves. Unlike current approaches, we demonstrate that introducing asymmetries in the LTCs can systematically enhance success rates while simultaneously reducing the required photon number.
Together, these results go in the direction of improving the performance (i.e., the loss tolerance) while simplifying the required physical resources.
Compilation optimizes the performance of quantum algorithms on real-world quantum computers. To date, it is performed via classical optimization strategies, but its NP-hard nature makes finding optimal solutions difficult. We introduce a class of quantum algorithms to perform compilation via quantum computers, paving the way for a quantum advantage in compilation. We demonstrate the effectiveness of this approach via Quantum- and Simulated-Annealing-based compilation: we successfully compile a Trotterized Hamiltonian simulation with up to 64 qubits and 64 time steps and a Quantum Fourier Transform with up to 40 qubits and 771 time steps. Furthermore, we show that, for a translationally invariant circuit, the compilation results in a fidelity gain that grows extensively with the size of the input circuit, outperforming any local or quasi-local compilation approach.
Individually trapped neutral atoms offer a promising path for engineering controllable many-body quantum systems: coherent manipulation has been demonstrated for arrays featuring hundreds of atoms, encouraging the vision of atom-based quantum processors.
In this talk, I will present a novel approach to use Rydberg atom arrays as platforms for quantum information processing. Our model has the crucial feature of not requiring any local addressing or dynamical rearrangement of the atoms: instead, any quantum algorithm can be executed by driving a universal arrangement of atoms with a global laser field in the Rydberg blockade regime. The arrangement is circuit-independent, and any algorithm is imprinted in the phase profile of the laser; our model thus highlights new ways to understand quantum computation as programmable out-of-equilibrium many-body phenomena.
After briefly introducing the principles of Rydberg-based quantum physics, I will give an overview of our model for universal quantum computation, and make connections with classical cellular automata patterns. I will also discuss the feasibility of the scheme, focussing on error-suppression techniques specific to our model.
Reference: Francesco Cesa and Hannes Pichler, Physical Review Letters 131, 170601 (2023).
Quantum Fuzzy Logic integrates two distinct mathematical frameworks, quantum computing and fuzzy logic, both of which fundamentally deal with uncertainty and imprecision. Quantum mechanics inherently involves uncertainty through its stochasticity, while fuzzy logic addresses vagueness in reasoning and decision-making processes, allowing for degrees of truth rather than binary true/false values. This contribution explores the synergy between these fields, particularly the potential for quantum computing to enhance fuzzy logic systems and, conversely, how classical fuzzy logic can contribute to advancements in quantum computing.
On the one hand, quantum computing, with its ability to process information in superposed states, provides an innovative platform for the development of fuzzy rule-based control systems. Indeed, this well-known classical controller suffers when the number of input variables in the system increases, because of the exponentially growing number of rules to be computed. To address this issue, an innovative quantum fuzzy inference engine capable of firing fuzzy rules exponentially faster than its classical counterpart was recently proposed [1]. Its applicability has been demonstrated through the implementation of quantum control systems in scenarios such as particle accelerators [2] and smart cities [3].
Conversely, classical fuzzy logic can also play a critical role in improving quantum computing. Fuzzy logic algorithms, designed to operate effectively in uncertain and imprecise environments, could offer solutions to some of the challenges in quantum computing, such as noise suppression and mitigation [4]. This contribution presents an application of the fuzzy C-means algorithm to mitigate errors on an actual superconducting quantum device [5].
Overall, Quantum Fuzzy Logic is an innovative bidirectional research line that offers a new perspective for both quantum computing and fuzzy logic. This bidirectionality not only fosters innovation in both domains but also creates a feedback loop where advancements in one field continuously drive progress in the other, opening new pathways for the development of advanced computational systems.
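As a toy illustration of the clustering idea behind [4,5] (not the implementation used in those works), here is a hand-rolled fuzzy C-means applied to synthetic IQ-plane readout shots, whose soft memberships could feed a readout-mitigation matrix.

```python
# Hand-written fuzzy C-means on synthetic single-qubit IQ readout data.
import numpy as np

rng = np.random.default_rng(7)

# Synthetic IQ-plane shots: two overlapping Gaussian blobs for |0> and |1>
shots0 = rng.normal([0.0, 0.0], 0.6, (500, 2))
shots1 = rng.normal([1.5, 1.0], 0.6, (500, 2))
data = np.vstack([shots0, shots1])

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    u = rng.dirichlet(np.ones(c), size=len(x))          # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]    # weighted cluster centers
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ki ~ dist_ki^(-2/(m-1)), row-normalized
        u = 1.0 / (dist ** (2 / (m - 1)) * np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, u

centers, u = fuzzy_cmeans(data)
# u[k, i] is the degree with which shot k belongs to cluster i; these soft
# assignments can be aggregated into a confusion matrix for mitigation.
print("cluster centers:\n", centers)
print("mean membership of |1>-prepared shots in their closest cluster:",
      u[500:].max(axis=1).mean())
```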
[1] G. Acampora, R. Schiattarella and A. Vitiello, "On the Implementation of Fuzzy Inference Engines on Quantum Computers," in IEEE Transactions on Fuzzy Systems, vol. 31, no. 5, pp. 1419-1433, May 2023, doi: 10.1109/TFUZZ.2022.3202348.
[2] G. Acampora, M. Grossi, M. Schenk and R. Schiattarella, "Quantum Fuzzy Inference Engine for Particle Accelerator Control," in IEEE Transactions on Quantum Engineering, vol. 5, pp. 1-13, 2024, Art no. 3101013, doi:10.1109/TQE.2024.3374251.
[3] G. Acampora, R. Schiattarella and A. Vitiello, "Using Quantum Fuzzy Inference Engines in Smart Cities," 2024 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Yokohama, Japan, 2024, pp. 1-8, doi: 10.1109/FUZZ-IEEE60900.2024.10611863.
[4] G. Acampora and A. Vitiello, "Error Mitigation in Quantum Measurement through Fuzzy C-Means Clustering," 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Luxembourg, Luxembourg, 2021, pp. 1-6, doi: 10.1109/FUZZ45933.2021.9494538.
[5] H. G. Ahmad, R. Schiattarella, P. Mastrovito, A. Chiatto, A. Levochkina, M. Esposito, D. Montemurro, G. P. Pepe, A. Bruno, F. Tafuri, A. Vitiello, G. Acampora, D. Massarotti, Mitigating Errors on Superconducting Quantum Processors Through Fuzzy Clustering. Adv Quantum Technol. 2024, 7, 2300400. https://doi.org/10.1002/qute.202300400
Tensor network methods have emerged as powerful tools for addressing complex challenges in quantum science, particularly supporting advances in quantum computing and technologies. In this talk, I will discuss recent developments in tensor network techniques across various domains. First, I will highlight how the integration of hyper-optimized contraction protocols into tensor network algorithms significantly improves the efficiency and accuracy of quantum circuit emulations. Additionally, I will explore novel hybrid tensor network architectures for variational optimization tasks. Finally, I will focus on the simulation of quantum optical systems in non-Markovian regimes, presenting results about the generation of entangled bound states in waveguide quantum electrodynamics.
Quantum computers are a promising platform to efficiently simulate systems that are hard to tackle on classical machines. An important challenge to overcome is the efficient control of errors that, if left unchecked, make quantum simulations useless. A solution to this challenge is quantum error correction, which exploits redundancy to correct errors. In this talk I will explore the connections between quantum error correction and lattice gauge theories and exploit them to propose a path forward for error-corrected simulations of interest for high-energy physics.
Quantum many-body scarring (QMBS) is an intriguing mechanism of ergodicity breaking that has recently spurred significant attention. Particularly prominent in Abelian lattice gauge theories (LGTs), an open question is whether QMBS nontrivially arises in non-Abelian LGTs. Here, we present evidence of robust QMBS in a non-Abelian SU(2) LGT with dynamical matter. Starting in product states that require little experimental overhead, we show that prominent QMBS arises for certain quenches, facilitated through meson and baryon-antibaryon excitations, highlighting its non-Abelian nature. The uncovered scarred dynamics manifests as long-lived coherent oscillations in experimentally accessible local observables and prominent revivals in the state fidelity. Our findings bring QMBS to the realm of non-Abelian LGTs, highlighting the intimate connection between scarring and gauge symmetry, and are amenable for observation in a recently proposed trapped-ion quantum computer.
We show that a viable route to generate strongly interacting chiral phases can exploit the interplay between onsite interactions and flux frustration for bosons in dimerized lattices with pi-flux. By constructing an effective theory, we demonstrate how this setting favours the spontaneous breaking of time-reversal symmetry. This can lead to the realization of the long-sought chiral Mott insulator, a phase characterized by a vortex array, which we study via DMRG and variational calculations. Furthermore, dynamical properties such as the chiral motion of impurities are identified via spectroscopy and quenches. Protocols to perform state preparation and current measurements will also be discussed.
The homogeneous Bethe-Salpeter equation (hBSE) [1], which models a bound system within a fully relativistic quantum field theory, has been solved for the first time using a D-Wave quantum annealer [2]. Following standard discretization methods, the hBSE in the ladder approximation can be reformulated as a generalized eigenvalue problem (GEVP) involving two square matrices, one symmetric and the other non-symmetric (see Ref. [3] for details). This problem is of significant interest in various scientific fields, making the results broadly impactful. The non-symmetric matrix presents a challenge for a formal approach to solving the GEVP on a quantum annealer, as the problem needs to be converted into a quadratic unconstrained binary optimization (QUBO) problem. We have developed a hybrid algorithm: first, we reduce the non-symmetric GEVP to a standard eigenvalue problem classically; then, we employ the quantum annealer to solve the resulting variational problem. Drawing inspiration from approaches for symmetric matrices [4], we generalize the algorithm to accommodate the non-symmetric case, which involves complex eigenvalues (see Ref. [5] for details). A thorough numerical evaluation of the proposed algorithms, applied to matrices of dimension up to 64, was conducted using the proprietary simulated annealing package and the D-Wave Advantage 4.1 system, thanks to the D-Wave-CINECA agreement [6], as part of an international project approved by Q@TN (INFN-UNITN-FBK-CNR) [7]. The results show excellent agreement with classical algorithms and reveal promising scalability properties.
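The classical front end of such a hybrid scheme can be sketched as follows (illustrative only: it assumes the symmetric matrix is positive definite, uses random test matrices, and omits the QUBO encoding that is actually sent to the annealer).

```python
# Reduce the generalized problem  A v = lambda B v  (B symmetric positive
# definite, A non-symmetric) to a standard eigenvalue problem before the
# variational/QUBO step would be handed to the annealer.
import numpy as np

rng = np.random.default_rng(3)
n = 8
B = rng.normal(size=(n, n)); B = B @ B.T + n * np.eye(n)   # symmetric positive definite
A = rng.normal(size=(n, n))                                # generic non-symmetric matrix

# Standard form: (L^-1 A L^-T) w = lambda w, with B = L L^T and w = L^T v
L = np.linalg.cholesky(B)
M = np.linalg.solve(L, np.linalg.solve(L, A.T).T)          # L^-1 A L^-T

eigvals = np.linalg.eigvals(M)                             # may be complex for non-symmetric A
print("generalized eigenvalues:", np.sort_complex(eigvals))
print("direct solve of B^-1 A  :", np.sort_complex(np.linalg.eigvals(np.linalg.solve(B, A))))
```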
[1] E. E. Salpeter and H. A. Bethe, A relativistic equation for bound-state problems, Phys. Rev. 84, 1232 (1951)
[2] F. Fornetti, A. Gnech, T. Frederico, F. Pederiva, M. Rinaldi, A. Roggero, G. Salmè, S. Scopetta, and M. Viviani, Solving the homogeneous Bethe-Salpeter equation with a quantum annealer, Phys. Rev. D 110, 056012 (2024)
[3] T. Frederico, G. Salmè, and M. Viviani, Quantitative studies of the homogeneous Bethe-Salpeter equation in Minkowski space, Phys. Rev. D 89, 016010 (2014)
[4] B. Krakoff, S. M. Mniszewski, and C. F. A. Negre, A QUBO algorithm to compute eigenvectors of symmetric matrices, (2021), arXiv:2104.11
[5] S. Alliney, F. Laudiero, and M. Savoia, A variational technique for the computation of the vibration frequencies of mechanical systems governed by nonsymmetric matrices, Applied Mathematical Modelling 16, 148 (1992)
[6] https://www.quantumcomputinglab.cineca.it/en/2021/05/12/collaboration-agreement-between-cineca-and-d-wave-for-the-distribution-in-italy-of-quantum-computing-resources/
[7] https://quantumtrento.eu/
Classical shadows are a versatile tool to probe many-qubit quantum systems, consisting of a combination of randomised measurements and classical post-processing computations. In a recently introduced version of the protocol, the randomisation step is performed via unitary circuits of variable depth t, defining the so-called shallow shadows. For sufficiently large t, this approach allows one to get around the use of non-local unitaries to probe global properties such as the fidelity with respect to a target state or the purity. Still, shallow shadows involve the inversion of a many-qubit map, the measurement channel, which requires non-trivial computations in the post-processing step, thus limiting their applicability when the number of qubits N is large. In this talk, I will explain a recent proposal to use a simple approximate post-processing scheme where the infinite-depth inverse channel is applied to the finite-depth classical shadows, and I will discuss its performance for fidelity and purity estimation. The scheme is efficient and allows for different circuit connectivities, as I will illustrate for geometrically local circuits in one and two spatial dimensions and for geometrically non-local circuits made of two-qubit gates. I will argue that this approach extends the applicability of shallow shadows to large numbers of qubits and general circuit connectivity, with potential applications to quantum simulation.
Talk based on arXiv:2407.11813
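For context, here is a sketch of the depth-zero (random single-qubit Pauli basis) limit of classical shadows, where the inverse measurement channel factorizes qubit by qubit; the shallow-shadow protocol of the talk replaces the single-qubit rotations with a depth-t circuit, making the inverse channel the nontrivial many-body map discussed above.

```python
# Depth-zero classical shadows: each snapshot is the tensor product of
# 3 U^dag |b><b| U - I per qubit, an unbiased estimator of the state rho.
import numpy as np

rng = np.random.default_rng(4)
n = 3

# Basis-change unitaries mapping the X/Y/Z eigenbases to the computational basis
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.diag([1, -1j])
U_basis = {"X": H, "Y": H @ Sdg, "Z": np.eye(2)}

# Target/measured state: n-qubit GHZ
psi = np.zeros(2 ** n, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)

def snapshot(psi):
    bases = rng.choice(list(U_basis), size=n)
    U = U_basis[bases[0]]
    for b in bases[1:]:
        U = np.kron(U, U_basis[b])
    probs = np.abs(U @ psi) ** 2                 # joint measurement outcome distribution
    outcome = rng.choice(2 ** n, p=probs / probs.sum())
    bits = [(outcome >> (n - 1 - j)) & 1 for j in range(n)]
    rho = np.array([[1.0]], dtype=complex)
    for b, bit in zip(bases, bits):
        proj = np.zeros((2, 2), dtype=complex); proj[bit, bit] = 1.0
        Uj = U_basis[b]
        rho = np.kron(rho, 3 * Uj.conj().T @ proj @ Uj - np.eye(2))
    return rho

# Unbiased fidelity estimate <psi| rho_hat |psi>, averaged over snapshots
shots = 2000
est = np.mean([np.real(psi.conj() @ snapshot(psi) @ psi) for _ in range(shots)])
print("estimated fidelity with GHZ:", est, "(exact value: 1)")
```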
Simulating the low-temperature properties of frustrated quantum Ising models is a paradigmatic problem in condensed matter physics. It has recently gained strong interest in the context of quantum-enhanced optimization performed via quantum annealers and of quantum simulation in Rydberg-atom experiments.
We use a recently-developed self-learning projection quantum Monte Carlo algorithm driven by neural-network states to simulate both short-range and long-range disordered quantum Ising models at zero-temperature.
Our results show that, if the neural ansatz is sufficiently expressive, this technique provides unbiased estimates of ground-state properties, reaching regimes that have so far been difficult to access.
In particular, we investigate the spin-glass phase of the 2D quantum Edwards-Anderson model and analyze the quantum critical point. Furthermore, we obtain results consistent with replica symmetry breaking.
Lastly, we study the properties of geometrically frustrated quantum magnets with either nearest-neighbour or power-law interactions, relevant to describe Rydberg atoms in optical tweezers. Our preliminary results confirm the existence of the so-called “order-by-disorder” phenomenon, in which the ordered clock phase arises from quantum fluctuations. Future findings in these systems could be relevant for comparison with experiments on quantum simulators based on trapped atoms, where the interaction is highly controllable.
The preparation of a given quantum state on a quantum computing register is a typically demanding operation, requiring a number of elementary gates that scales exponentially with the size of the problem. In view of performing quantum simulations of many-body systems, this limitation might severely hinder the actual application of the noisy quantum processors that are currently available.
In our work [https://arxiv.org/abs/2405.03656] we focus on adiabatic processes to prepare quantum states. In addition to the Hamiltonian of the system to be simulated, the adiabatic preparation requires an auxiliary Hamiltonian $H_0$ that can be chosen with high arbitrariness. Our aim is to provide a theoretically guided procedure to select the optimal auxiliary Hamiltonian, i.e. the one that allows one to prepare the highest-fidelity approximation of the target quantum state within a fixed depth of the quantum circuit. We theoretically derive a bound on the state-preparation error that shows an exponential scaling as a function of the adiabatic timescale $\tau$, which is proportional to the circuit depth, and we provide an expression for its characteristic time, where the dependence on $H_0$ is made explicit. Therefore, the auxiliary Hamiltonian minimizing the characteristic-time formula exhibits an exponential suppression of the error compared with a naively chosen one.
We perform extensive numerical experiments to test our mathematical result on typical spin models, such as the one- and two-dimensional Ising and Heisenberg models, confirming that the exponential bound is indeed realized and observing an exponential advantage of the optimized adiabatic processes over the unoptimized ones. Our results provide a promising strategy to perform quantum simulations of many-body models via Trotter evolution on near-term quantum processors.
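Schematically, and in our own notation (which need not match the paper's), the kind of bound described above reads
\[ \big\lVert\, |\psi(\tau)\rangle - |\psi_{\mathrm{target}}\rangle \,\big\rVert \;\lesssim\; C\, e^{-\tau/\tau^{*}(H_0)} , \]
with $\tau$ the adiabatic timescale (proportional to the circuit depth) and $\tau^{*}(H_0)$ a characteristic time whose explicit dependence on the auxiliary Hamiltonian is what the optimization targets: choosing $H_0$ to minimize $\tau^{*}$ yields the exponential error suppression relative to a naive choice.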
Tracking charged particles in high-energy physics experiments is one of the most computationally demanding steps in the data analysis pipeline.
As we approach the High Luminosity LHC era, which is expected to significantly increase the number of proton-proton interactions per beam collision, particle tracking will become even more problematic due to the massive increase in the volume of data to be analysed.
The problem is currently being tackled using a variety of methods. The best classical algorithms are local and scale worse than quadratically with the number of particle hits in the detector layers. Promising results are coming from global approaches. In particular, we explore the possibility of using quantum graph neural networks, a combination of machine learning techniques with quantum computing.
We show recent results on the application of this architecture, with scalability tests for increasing pileup values. We discuss the critical issues and give an outlook on potential improvements and alternative approaches.
I will present a combination of different results obtained by my group in the last few years, about quantifying the complexity of learning with quantum data, such as quantum states, quantum dynamics and quantum channels. Example applications include the classification of quantum phases of matter, which are encoded into ground states of quantum many-particle systems, decision problems such as learning to classify entangled vs. separable states, and sensing applications such as quantum-enhanced object/pattern recognition.
I will show how to adapt bounds from statistical learning theory to assess which of these tasks are easy for a learner, in the sense of requiring few training data-points.
We investigate the combined use of quantum computers and classical deep neural networks, considering both quantum annealers and universal gate-based platforms.
In the first case, we show that data produced by D-Wave quantum annealers allow accelerating Monte Carlo simulations of spin glasses through the training of autoregressive neural networks [1].
In the second case, we show that deep neural networks fed with a combination of data from noisy quantum computers and classical circuit descriptors are able to emulate otherwise classically intractable quantum circuits, thus also achieving an effective error mitigation scheme [2].
This presentation introduces a novel approach to classifying quantum phases of matter through the use of tensor networks within a quantum machine learning (QML) framework. Beginning with an overview of QML principles and their transformative impact on condensed matter physics, I will outline the role of tensor networks in modeling and analyzing many-body systems. By applying these models to tackle phase classification problems, we demonstrate how tensor networks enhance the understanding of intricate quantum behaviors. In particular, I will present results from the ANNNI and Haldane chain models, which illustrate the efficacy of tensor networks in accurately identifying diverse phases, even in complex, strongly correlated quantum systems.
In recent years, much research has investigated the potentialities of quantum computers, ranging from physical implementations to comparisons with well-known algorithms of classical computation.
I would like to present a quantum algorithm to measure quantities related to scattering theory: reflection and transmission amplitudes of a quantum particle interacting with a short-ranged potential.
The main feature of the protocol is the coupling between the particle and an ancillary spin-1/2 degree of freedom. This allows us to reconstruct tomographically the scattering amplitudes, which are in general complex numbers, from the readout of one qubit.
The presentation is based on a recent paper [arXiv:2407.01669].
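As a generic illustration of how a single ancilla qubit gives tomographic access to a complex amplitude (the specific protocol is in the paper; what follows is the standard Hadamard test, not necessarily the scheme used there):

```python
# Hadamard test: reconstruct Re and Im of <psi|U|psi> from ancilla statistics.
import numpy as np

rng = np.random.default_rng(5)

def hadamard_test(U, psi, imaginary=False, shots=20000):
    """Estimate Re<psi|U|psi> (or Im if imaginary=True) from ancilla outcomes."""
    dim = len(psi)
    anc = np.array([1, 1]) / np.sqrt(2)              # ancilla after first Hadamard
    if imaginary:
        anc = np.array([1, -1j]) / np.sqrt(2)        # extra S^dag on the ancilla
    state = np.kron(anc, psi)
    # controlled-U on the system, controlled by the ancilla
    cU = np.block([[np.eye(dim), np.zeros((dim, dim))],
                   [np.zeros((dim, dim)), U]])
    state = cU @ state
    # second Hadamard on the ancilla
    Hanc = np.kron(np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(dim))
    state = Hanc @ state
    p0 = float(np.clip(np.sum(np.abs(state[:dim]) ** 2), 0.0, 1.0))
    counts0 = rng.binomial(shots, p0)                # simulated finite statistics
    return 2 * counts0 / shots - 1                   # <Z_ancilla> = Re (or Im)

# Toy "scattering" unitary and incoming state (placeholders, not the paper's model)
theta = 0.7
U = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)

amp = hadamard_test(U, psi) + 1j * hadamard_test(U, psi, imaginary=True)
print("reconstructed amplitude:", amp, " exact:", psi.conj() @ U @ psi)
```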
We benchmark Quantum TEA, a simulation framework developed also with the support of the INFN quantum initiative and INFN infrastructure. Quantum TEA supports digital, analog, and quantum-inspired quantum simulation on classical hardware. The simulations of many-body quantum systems run on heterogeneous hardware platforms using CPUs, GPUs, and TPUs. We compare different linear algebra backends, e.g., numpy versus the torch, jax, or tensorflow libraries, as well as a mixed-precision-inspired approach and optimizations for the target hardware. Quantum red TEA, part of the Quantum TEA library, specifically addresses handling tensors with different libraries or hardware, where tensors are the building blocks of tensor network algorithms. The benchmark problem is a variational search for the ground state of an interacting model. This is a ubiquitous problem in quantum many-body physics, which we solve using tensor network methods. This approximate state-of-the-art method compresses quantum correlations, which is key to overcoming the exponential growth of the Hilbert space as a function of the number of particles. We present a way to obtain speedups of a factor of 34 when tuning parameters on the CPU, and an additional factor of 2.76 on top of the best CPU setup when migrating to GPUs.
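Not the Quantum TEA benchmark itself, but a minimal illustration of the kind of backend comparison described above: timing the same two-site tensor contraction with numpy and, if available, torch on CPU/GPU.

```python
# Compare linear-algebra backends on a tensor contraction typical of
# tensor-network codes (timings are indicative only).
import time
import numpy as np

chi, d = 512, 4                                   # bond and physical dimensions
theta = np.random.rand(chi, d, d, chi)            # two-site MPS tensor
op = np.random.rand(d, d, d, d)                   # two-site operator

def timeit(f, label, repeat=5):
    t0 = time.perf_counter()
    for _ in range(repeat):
        f()
    print(f"{label:>22s}: {(time.perf_counter() - t0) / repeat * 1e3:8.2f} ms")

timeit(lambda: np.einsum("aijb,ijkl->aklb", theta, op), "numpy einsum (CPU)")

try:
    import torch
    th, to = torch.from_numpy(theta), torch.from_numpy(op)
    timeit(lambda: torch.einsum("aijb,ijkl->aklb", th, to), "torch einsum (CPU)")
    if torch.cuda.is_available():
        # Same contraction on the GPU; no explicit synchronization, so the
        # number is only a rough indication.
        thg, tog = th.cuda(), to.cuda()
        timeit(lambda: torch.einsum("aijb,ijkl->aklb", thg, tog), "torch einsum (GPU)")
except ImportError:
    pass
```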
We present two examples of the application of stochastic calculus in quantum computing.
The first example involves simulating quantum circuits in the presence of noise using classical computers. Instead of directly solving the Lindblad master equation, we utilize its stochastic unravelling to model a random evolution of the state vector. This approach enables us to incorporate noise effects directly into the gates, effectively creating "noisy gates." To study the impact of noise in a circuit, we replace each ideal gate with the corresponding noisy gate, run multiple simulations, and then average the results. We compare this method with the IBM Qiskit simulator, demonstrating that it more accurately reproduces the analytical solution of the Lindblad equation and aligns better with results obtained from real quantum computers.
The second application focuses on simulating open quantum systems using a quantum computer. We begin by constructing a stochastic unravelling of the dynamics we wish to study, employing quantum Itô processes. Then, we introduce a method to simulate this unravelling on a quantum computer. Remarkably, regardless of the number of Lindblad operators that describe the noise, our method requires only a single qubit (in addition to those of the system studied) to simulate all environmental effects.
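In the same spirit as the first example (though not its actual formulation), here is a toy quantum-jump unravelling of a single-qubit Lindblad equation with Rabi driving and amplitude damping, averaged over trajectories and compared with direct integration of the master equation.

```python
# Stochastic unravelling (Monte Carlo wave-function method) vs. Lindblad equation.
import numpy as np

rng = np.random.default_rng(6)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)       # lowering operator |0><1|

Omega, gamma, dt, steps, ntraj = 2.0, 0.5, 0.004, 1000, 200
H = 0.5 * Omega * sx
Heff = H - 0.5j * gamma * sm.conj().T @ sm            # non-Hermitian effective Hamiltonian

def trajectory():
    psi = np.array([0, 1], dtype=complex)             # start in the excited state
    zs = []
    for _ in range(steps):
        zs.append(np.real(psi.conj() @ sz @ psi))
        p_jump = gamma * dt * np.real(psi.conj() @ (sm.conj().T @ sm) @ psi)
        if rng.random() < p_jump:                      # quantum jump: emit and reset
            psi = sm @ psi
        else:                                          # no-jump non-Hermitian step
            psi = psi - 1j * dt * Heff @ psi
        psi = psi / np.linalg.norm(psi)
    return np.array(zs)

avg_z = np.mean([trajectory() for _ in range(ntraj)], axis=0)

# Direct (Euler) integration of the Lindblad master equation for comparison
rho = np.array([[0, 0], [0, 1]], dtype=complex)
z_exact = []
for _ in range(steps):
    z_exact.append(np.real(np.trace(rho @ sz)))
    drho = -1j * (H @ rho - rho @ H) + gamma * (sm @ rho @ sm.conj().T
            - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    rho = rho + dt * drho

# The two should agree within statistical fluctuations of the trajectory average
print("final <sigma_z>: trajectories =", avg_z[-1], " master equation =", z_exact[-1])
```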
Singlet fission (SF) is an electronic transition that in the last decade has been under the spotlight for its applications in optoelectronics, from photovoltaics to spintronics. Despite considerable experimental and theoretical advancements, optimising SF in extended solids remains a challenge, due to the complexity of its analysis beyond perturbative methods. Here, we tackle the case of 1D rings, aiming to promote singlet fission and prevent its back-reaction. We study ultrafast SF non-perturbatively, by numerically solving a spin-boson model, via exact propagation and tensor network methods. By optimising over a parameter space relevant to organic molecular materials, we identify two classes of solutions that can take SF efficiency beyond 85% in the non-dissipative (coherent) regime, and to 99% when exciton-phonon interactions can be tuned. These results are a promising step towards optimising SF in 2D and 3D extended media. After discussing the experimental feasibility of the optimised solutions, we conclude by proposing that this approach can be extended to a wider class of optoelectronic optimisation problems.
Tensor network methods are a family of numerical techniques that efficiently compress the information of quantum many-body systems while accurately capturing their important physical properties. Here, we present a tensor-network-based toolbox developed for constructing the quantum many-body states at thermal equilibrium. Using this framework, we probe classical correlations as well as entanglement monotones of a Rydberg atom array - a promising quantum simulation platform. By examining the entanglement of formation and entanglement negativity of a half-system bipartition, we numerically confirm that a conformal scaling law of entanglement extends from the zero-temperature critical points into the low-temperature regime.
Bell’s inequalities represent a cornerstone of our understanding of quantum theory, as they make it possible to prove that quantum mechanics is non-local and that no local hidden-variable theory can reproduce its predictions.
Although well known to the community, the role of non-stabilizerness, often dubbed magic, in the violation of Bell’s inequalities is seldom highlighted.
In our work we show how much non-stabilizerness, as quantified by the Stabilizer Rényi Entropy, is necessary in order to violate Bell’s inequalities, proving that such violations can be used as a witness for the presence of magic in a quantum state.
Moreover, we prove results on the probabilistic violation of Bell’s inequalities by random unitary operations drawn from the Clifford group and the full unitary group, respectively, highlighting the role of t-doping in the probabilistic violation.
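As a quick, self-contained numerical illustration of the violation itself (unrelated to the specific proofs of the work): the CHSH combination evaluated on a Bell state with rotated settings reaches Tsirelson's bound $2\sqrt{2}$, above the classical bound of 2.

```python
# CHSH value on |Phi+> with the optimal measurement settings.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |Phi+> Bell state

A0, A1 = Z, X                                               # Alice's settings
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)         # Bob's settings

def corr(A, B):
    """Correlator <Phi+| A (x) B |Phi+>."""
    return np.real(phi.conj() @ np.kron(A, B) @ phi)

chsh = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print("CHSH value:", chsh, "  classical bound: 2   Tsirelson bound:", 2 * np.sqrt(2))
```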