Description
Learning provides a common language to (i) make quantum devices usable in realistic regimes and (ii) turn quantum effects into new computational primitives for AI.
On the “hardware-to-learning” direction, I will present sample‑efficient, task‑oriented inference and optimization frameworks that replace full device characterization with directly trainable objectives, including learning‑theoretic guarantees for continuous‑variable photonic processors and for classical‑to‑quantum processes when classical inputs are not under experimental control.
These ideas translate into data‑driven quantum engineering: optical receiver architectures are learned directly from measurement data (reinforcement‑learning calibration and supervised discovery of photonic joint‑detection decoders), and “functional classical shadows” exploit structure shared across experimental sensing settings to operate in photon‑limited regimes.
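As a toy illustration of the classical-shadow idea underlying the "functional classical shadows" mentioned above (this is the standard single-qubit Pauli-shadow estimator, not the functional variant from the talk; all names and parameters are illustrative assumptions), one can estimate an observable's expectation value from randomized single-basis measurements without full tomography:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def eigbasis(P):
    # columns are the eigenvectors of the Pauli P (order irrelevant here)
    _, v = np.linalg.eigh(P)
    return v

BASES = [eigbasis(P) for P in (X, Y, Z)]

def shadow_estimate(psi, O, n_shots):
    """Estimate Tr(O |psi><psi|) from random Pauli-basis measurements.
    Each shot yields a snapshot 3|s><s| - I (the inverted measurement
    channel for single-qubit Pauli shadows); averaging snapshots gives
    an unbiased estimator."""
    total = 0.0
    for _ in range(n_shots):
        V = BASES[rng.integers(3)]                 # random measurement basis
        probs = np.abs(V.conj().T @ psi) ** 2      # Born-rule outcome probs
        b = rng.choice(2, p=probs / probs.sum())
        s = V[:, b]                                # measured eigenstate
        snap = 3 * np.outer(s, s.conj()) - I2      # classical-shadow snapshot
        total += np.real(np.trace(O @ snap))
    return total / n_shots

# |+> state: the true expectation <+|X|+> is 1
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
est = shadow_estimate(plus, X, 20000)
```

The point mirrored from the abstract is sample efficiency: the same randomized measurement record can be reused to estimate many observables, instead of characterizing the full state.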
On the “learning-to-hardware” direction, I introduce quantum‑native learning primitives—notably a variational quantum self‑attention mechanism that realizes nonlinearity via overlap interference and yields a directly measurable loss—enabling sequence prediction on both classical and many‑body quantum data.
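To make the "nonlinearity via overlap interference" idea concrete, here is a minimal classical simulation of attention weights derived from squared state overlaps |⟨ψᵢ|ψⱼ⟩|² (the quantity a swap test estimates on hardware). This is a hedged sketch of the general principle, not the talk's actual variational circuit; the function name and toy data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap_attention(states, values):
    """Attention where the score between tokens i and j is the squared
    overlap |<psi_i|psi_j>|^2 -- nonlinear in the token states, and
    directly measurable on hardware via a swap test."""
    G = np.abs(states.conj() @ states.T) ** 2   # pairwise squared overlaps
    W = G / G.sum(axis=1, keepdims=True)        # normalize each query row
    return W @ values                           # mix values by overlap weight

# toy sequence: 4 normalized complex "token states" and scalar values
states = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
states /= np.linalg.norm(states, axis=1, keepdims=True)
values = np.arange(4.0)
out = overlap_attention(states, values)
```

Because the squared overlap is a measurement probability, the resulting loss is itself directly estimable from measurement statistics, which is the property the abstract highlights.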
Overall, the message is a tight feedback loop: learning tools make quantum platforms scalable, while quantum platforms motivate and implement new learning operations.