Speaker
Description
Is it possible to reconcile irreversibility with a time-reversal-symmetric theory such as quantum mechanics (QM)? In this work, we consider a specific type of constructor-based irreversibility, which generalizes the classical irreversibility associated with the second law of thermodynamics. Let the task T be the specification of a physical transformation on qubits that brings a quantum state s to another quantum state f: T = {s -> f}. Its transpose is R = {f -> s}. A constructor for the task T is a physical system that enables T to occur on the substrates without undergoing any net change in its ability to do it again. A task is therefore possible if the laws of physics place no constraint on the accuracy with which it can be performed by a constructor, and impossible otherwise. Constructor-based irreversibility, then, consists in the fact that while the task T is possible, its transpose R is not. A classical example is a cycle (a constructor) that completely converts work into heat: such a cycle is possible, whereas a cycle performing the transpose task is not. For a constructor, this is equivalent to saying that the constructor of the transpose task has lower accuracy than the one for the direct task, i.e., the distance between the task goal and the constructor output is greater in the transpose case than in the direct one.

Our study focuses on a quantum model of constructor-based irreversibility built on homogenisation machines. A homogenisation machine implements a task T = {s -> f} by having the initial qubit, prepared in the pure state s, undergo a series of partial swaps with an environment of qubits in the mixed state f. The transpose task R = {f -> s} is then implemented by having a qubit in the mixed state f undergo the same series of partial swaps with an environment of qubits in the pure state s. In our implementation, we consider two states, the pure state s = |0><0| and the maximally mixed state f = 0.5(|0><0| + |1><1|), and we compare the two cases of an incoming state s (pure-to-mixed task T) and of an incoming state f (mixed-to-pure task R). To compare the two tasks, we measure the accuracy of each machine by computing the error E, defined as one minus the fidelity between the output state and the goal of the task (f for the pure-to-mixed task and s for the mixed-to-pure task).

In our setup, single photons at 1550 nm are generated by a low-noise heralded single-photon source and sent to a 1x4 fiber optical switch, which routes them to four different optical paths, one for the “system” and three for the environment. The photons are prepared in the appropriate states and sent to a cascade of three consecutive fiber beam splitters (FBS, all either 50:50, 90:10 or 75:25), implementing successive partial swaps between the photons. Finally, the four outputs are detected by single-photon avalanche diodes, whose signals are fed to time-tagging electronics. Our preliminary results show that the pure-to-mixed task has a higher accuracy than the mixed-to-pure one, i.e., the error on the final state is larger in the mixed-to-pure case than in the pure-to-mixed one, in accordance with the theory.
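As a sketch of a single step of the machine, assuming the standard partial-swap interaction U(theta) = cos(theta) 1 + i sin(theta) S used in quantum homogenisation models (the interaction form and the swap strength theta are assumptions here, not quantities stated above), one collision between the system state rho and a fresh environment qubit in the state xi acts as

% One partial-swap collision with a fresh environment qubit \xi,
% followed by the error figure of merit defined in the abstract:
\rho \;\mapsto\; \mathrm{Tr}_{\mathrm{env}}\!\big[\, U(\theta)\,(\rho \otimes \xi)\, U^{\dagger}(\theta) \,\big]
  \;=\; \cos^{2}\!\theta\,\rho \;+\; \sin^{2}\!\theta\,\xi \;+\; i\cos\theta\sin\theta\,[\xi,\rho],
\qquad
E \;=\; 1 - F(\rho_{\mathrm{out}}, \rho_{\mathrm{goal}}),

where rho_goal = f for the pure-to-mixed task and rho_goal = s for the mixed-to-pure task. For the two states used here the commutator term vanishes (f is proportional to the identity), so each collision simply mixes the system state toward the environment state with weight sin^2(theta).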
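To make the comparison concrete, the following is a minimal numerical sketch (not the experimental analysis code) that iterates three such partial-swap collisions, mirroring the three-FBS cascade, and evaluates E = 1 - F for both task directions; the swap angle theta and the use of identical swaps at every step are illustrative assumptions.

# Minimal simulation sketch, not the experimental analysis code.
# Models the homogenisation machine as three partial-swap "collisions"
# between the system qubit and fresh environment qubits and reports the
# error E = 1 - F for both tasks. The partial-swap form and the angle
# theta are illustrative assumptions.
import numpy as np
from scipy.linalg import sqrtm

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def partial_swap(theta):
    """Two-qubit partial swap U(theta) = cos(theta) I + i sin(theta) SWAP."""
    return np.cos(theta) * np.eye(4, dtype=complex) + 1j * np.sin(theta) * SWAP

def homogenise(rho_sys, rho_env, theta, n_steps=3):
    """Apply n_steps partial swaps between the system and fresh environment qubits."""
    U = partial_swap(theta)
    for _ in range(n_steps):
        joint = U @ np.kron(rho_sys, rho_env) @ U.conj().T
        # Trace out the environment qubit (second tensor factor).
        rho_sys = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    return rho_sys

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
    root = sqrtm(rho)
    return np.real(np.trace(sqrtm(root @ sigma @ root))) ** 2

s = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state |0><0|
f = 0.5 * np.eye(2, dtype=complex)             # maximally mixed state
theta = np.pi / 4                              # assumed swap strength

out_T = homogenise(s, f, theta)  # task T: pure -> mixed (environment in f)
out_R = homogenise(f, s, theta)  # task R: mixed -> pure (environment in s)

print("E(T, pure -> mixed) =", 1 - fidelity(out_T, f))
print("E(R, mixed -> pure) =", 1 - fidelity(out_R, s))

With these (assumed) parameters the simulated error is larger for the mixed-to-pure direction than for the pure-to-mixed one, which is the qualitative behaviour reported above.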