Description
Fault-tolerant universal quantum computers still appear to be more than a decade away. However, consistent growth in the field of quantum technologies has led to the development of Noisy Intermediate-Scale Quantum (NISQ) devices. The computational capabilities of these devices are considerably restricted by limited connectivity, short coherence times, poor qubit quality and minimal error correction. A particularly useful class of algorithms that can be executed on these devices is that of variational algorithms, which follow a hybrid approach: prepare a highly entangled quantum state using a limited-depth quantum processor, then apply a classical optimization routine to the gate parameters to converge to the quantum state that minimizes the objective function.

The availability of massive amounts of data in the natural sciences has enabled the use of machine learning (ML) and artificial intelligence (AI) for tasks ranging from molecular discovery to the prediction of new genes. Despite the excitement and promising results, the extensive computational resources required to train ML models largely restrict their current applicability. Various proposals have been made to overcome this bottleneck, one of which is to use quantum computers to achieve significant speedups.

In the same spirit, we showcase a quantum-assisted approach in which a quantum parameterized circuit (QPC) is used as a machine learning model and optimized with a scheme similar to back-propagation in feedforward neural networks. Given a labelled dataset {x, f(x)}, such a circuit (model) can in general be trained on current-generation NISQ devices to perform classification and regression tasks by learning the function f(x). In this work, we investigate a hybrid quantum-classical (HQC) approach to realize a quantum neural network using QPCs. To simulate these quantum circuits, we make use of three different libraries depending on the task at hand: PennyLane by Xanadu, pyQuil by Rigetti and Qiskit by IBM. Our analysis shows that, given enough labelled training data points, a QPC with sufficient qubits and depth can be trained using classical optimization routines to perform both regression and classification tasks.
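
To make the hybrid loop concrete, the following is a minimal sketch in PennyLane (one of the three libraries mentioned above) using its default.qubit simulator. The angle embedding, entangler ansatz, toy sine-regression dataset and hyperparameters are illustrative assumptions only, not the specific circuits or datasets studied in this work.

import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
n_layers = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qpc(params, x):
    # Encode the classical input x into single-qubit rotation angles.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Limited-depth trainable entangling layers: the "parameterized" part of the QPC.
    qml.BasicEntanglerLayers(params, wires=range(n_qubits))
    # A single expectation value serves as the model's prediction.
    return qml.expval(qml.PauliZ(0))

def cost(params, X, Y):
    # Mean-squared error between circuit outputs and the labels f(x).
    loss = 0.0
    for x, y in zip(X, Y):
        loss = loss + (qpc(params, x) - y) ** 2
    return loss / len(X)

# Toy labelled dataset {x, f(x)}; here f(x) = sin(x_0), purely for illustration.
raw = np.linspace(0.0, np.pi, 10)
X = np.array([[v, v] for v in raw], requires_grad=False)
Y = np.array(np.sin(raw), requires_grad=False)

params = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)

for step in range(100):
    # The classical optimizer updates the gate parameters to lower the cost,
    # while the (simulated) quantum device evaluates the circuit.
    params = opt.step(lambda p: cost(p, X, Y), params)

For a classification task, the same expectation value could instead be thresholded or fed into a cross-entropy loss, with the rest of the loop unchanged.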