Speaker
Description
Classical diffusion models achieve state-of-the-art sample quality and diversity in a wide range of generative tasks. Motivated by this success, several fully quantum and hybrid quantum–classical diffusion schemes have been proposed, yet so far they have mainly been tested either on classical data distributions, without clear advantages over classical baselines, or on toy ensembles of quantum states with a limited number of qubits.
In this work we study, in classical simulation, two diffusion-based strategies for learning distributions of quantum states: (i) a hardware-friendly fully-quantum model, and (ii) a hybrid model whose expressive power is boosted by a classical neural network. Across all considered benchmarks, the hybrid approach consistently outperforms the fully-quantum one in accuracy and generalization. Although the hybrid model incurs a higher computational cost, we show that this overhead can be substantially reduced when partial physical information about the target system is available.
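As background for the diffusion-based strategies above, the core idea can be illustrated with the generic variance-preserving forward process used by classical denoising diffusion models: data is gradually noised until it matches an easy-to-sample Gaussian, which the (quantum or hybrid) model then learns to reverse. A minimal sketch, using a standard DDPM-style linear schedule rather than the specific scheme studied in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic variance-preserving forward-noising schedule (illustrative,
# not the specific scheme of this work)
T = 100
betas = np.linspace(1e-4, 0.2, T)       # per-step noise rates
alpha_bars = np.cumprod(1.0 - betas)    # cumulative signal retention

def forward_noise(x0, t):
    """Sample x_t given x_0 for the variance-preserving forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# Data concentrated at x0 = 3: after T steps it is close to N(0, 1),
# which is the distribution the reverse (generative) model starts from.
x0 = np.full(50_000, 3.0)
xT = forward_noise(x0, T - 1)
print(xT.mean(), xT.std())
```

In the quantum setting the role of the noising channel and of the learned reverse map is played by parameterized quantum circuits (optionally assisted by a classical network in the hybrid case), but the forward/reverse structure is the same.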
In particular, for reconstructing the phase of the one-dimensional transverse-field Ising model from its ground states, we exploit classical-shadow tomography focused on relevant observables to cut the sampling cost. For learning ground-state distributions of molecules, we leverage electronic-structure constraints to compress the Hilbert space and accelerate training. Finally, we outline an extension of the same framework to polymer families.
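The classical-shadow step mentioned above estimates a few relevant observables from randomized single-qubit Pauli measurements instead of full tomography. A minimal single-qubit sketch (generic Pauli-basis shadows in the sense of Huang–Kueng–Preskill; the choice of observable and state is illustrative, not taken from this work):

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

def shadow_estimate(rho, obs, n_snapshots=20_000):
    """Single-qubit classical-shadow estimate of Tr(obs @ rho)
    from random Pauli-basis measurements."""
    total = 0.0
    for _ in range(n_snapshots):
        basis = rng.integers(3)                       # random Pauli basis
        _, evecs = np.linalg.eigh(paulis[basis])      # its eigenbasis
        # Born-rule probabilities of the two measurement outcomes
        probs = np.real([v.conj() @ rho @ v for v in evecs.T])
        probs = np.clip(probs, 0.0, None)
        probs /= probs.sum()
        k = rng.choice(2, p=probs)
        v = evecs[:, k:k + 1]
        # Inverse measurement channel for Pauli shadows: 3|v><v| - I
        snapshot = 3.0 * (v @ v.conj().T) - I2
        total += np.real(np.trace(obs @ snapshot))
    return total / n_snapshots

# Example: estimate <X> on the |+> state (exact value is 1)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
print(shadow_estimate(plus, X))
```

Focusing the shadow estimator on the handful of order parameters relevant to the Ising phase diagram is what reduces the sampling cost relative to reconstructing the full state.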
These findings suggest that combining a hybrid diffusion mechanism with classical inductive biases is a promising strategy for scalable quantum generative learning, and they motivate future validation on broader quantum datasets.
| Sessions | Quantum Machine Learning |
|---|---|
| Invited | No |