Physical systems characterized by stick-slip dynamics often display avalanches. Regardless of the diversity of their microscopic
structure, these systems are governed by a power-law distribution of avalanche size and duration. We focus instead on the
interevent times between avalanches and show that, unlike their size and duration, the distribution of interevent times is able to
distinguish different mechanical states of the system, characterized by different volume fractions or confining pressures. We
use experiments in granular media and numerical simulations of emulsions to show that systems having the same probability
distribution for avalanche size and duration can have different interevent time distributions. Remarkably, for large packing
ratios, these interevent time distributions coincide with those for earthquakes and are indirect evidence of large space-time
correlations in the system. Our results therefore indicate that interevent time statistics are more informative for characterizing the dynamics of avalanches.
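The basic analysis step can be sketched on synthetic data: interevent times are simply the differences between successive event times, and an uncorrelated (Poisson) event train yields an exponential distribution against which temporal clustering can be detected. The parameters below are illustrative only.

```python
import numpy as np

# Sketch of the basic analysis: given the times at which avalanches occur,
# the interevent-time distribution is obtained from consecutive differences.
# A Poisson (uncorrelated) event train gives an exponential distribution;
# time clustering would show up as a heavy tail. Synthetic data only.
rng = np.random.default_rng(0)
t_events = np.cumsum(rng.exponential(1.0, size=100_000))  # Poisson train
dt = np.diff(t_events)                                    # interevent times

# For an exponential distribution, the mean equals the standard deviation
print(abs(dt.std()/dt.mean() - 1.0) < 0.02)
```

A coefficient of variation significantly above 1 would instead signal the kind of clustering seen in real seismic catalogs.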
The study of relativistic fluids applies to several fields of modern physics,
covering many different scales: from astrophysics down to atomic scales (e.g. in the study of effective 2D systems such as graphene) and further down to subnuclear scales (e.g. quark-gluon plasmas).
In particular, the experimental results from heavy-ion collisions at RHIC and LHC, with the first experimental observation of the quark-gluon plasma, have significantly boosted in recent years the interest in the study of relativistic fluid dynamics, both at the level of theoretical formulations as well as in the development of reliable numerical simulation methods.
For a long time, relativistic fluid dynamics has been hampered by several theoretical and computational shortcomings, as relativistic versions of the viscous Navier-Stokes equations suffer from causality problems linked to the order of the derivatives appearing in the dissipative terms. Some of these problems can be avoided by employing a lattice kinetic approach, that treats space and time on the same footing (i.e., via first-order derivatives).
This is one of the main reasons why Relativistic Lattice Boltzmann Methods (RLBMs), which discretize the Boltzmann equation in coordinate and momentum space while ensuring that the resulting synthetic dynamics retains all its hydrodynamic properties, have recently been proposed as effective computational tools to study relativistic flows.
Several different RLBMs have been proposed in the last decade, initially limited to the study of ultra-relativistic regimes (in which the ratio $\zeta = \frac{m c^2}{k_B T}$ goes to zero; $m$ is the mass of the particles in the fluid, $T$ is temperature and $k_B$ is Boltzmann constant) and more recently extended to cover a wider range of physics parameters, going from ultra-relativistic to mildly relativistic and eventually to the non-relativistic limit ($\zeta \rightarrow \infty$).
This talk presents an overview of the formal details needed to develop RLBMs and
the available options for deriving effective and robust computational tools; special focus will be placed on the definition of transport coefficients, since the correct link between the kinetic and macroscopic levels has long been debated in the literature.
Finally, we present numerical results on standard benchmarks used to validate the method, comparing with other approaches, as well as a few example applications.
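For readers unfamiliar with the lattice kinetic framework, the following is a minimal sketch of the classical (non-relativistic) stream-and-collide structure that RLBMs generalize. It is a hedged illustration only: the D1Q3 lattice, the diffusive equilibrium and all parameters are textbook choices, not the scheme of the talk.

```python
import numpy as np

# Minimal classical D1Q3 lattice Boltzmann sketch with BGK collision,
# illustrating the first-order-in-time stream-and-collide structure.
# All names and parameters are illustrative, not the RLBM of the talk.
c = np.array([0, 1, -1])          # discrete velocities
w = np.array([2/3, 1/6, 1/6])     # lattice weights
tau = 0.8                         # relaxation time

N = 64
rho = np.ones(N); rho[N//2] += 1.0          # density bump
f = w[:, None] * rho[None, :]               # initialize at equilibrium

for _ in range(100):
    feq = w[:, None] * rho[None, :]         # diffusive equilibrium
    f += (feq - f) / tau                    # BGK collision step
    for i, ci in enumerate(c):              # streaming on a periodic lattice
        f[i] = np.roll(f[i], ci)
    rho = f.sum(axis=0)                     # zeroth moment: density

print(abs(rho.sum() - (N + 1.0)) < 1e-12)   # mass is conserved exactly
```

Both the collision and the streaming step conserve the total density, which is the kinetic-level counterpart of the macroscopic conservation laws discussed in the talk.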
In this seminar we briefly review the current status of tensor network methods, a class of numerical techniques for performing efficient classical simulations of quantum many-body systems. Tensor network methods promise to become a powerful tool for benchmarking and verifying the results of future quantum simulations and computations. We will review some of the possible applications of these versatile techniques, ranging from lattice gauge theories to applications in the fields of machine learning and optimization.
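As a minimal concrete example of the tensor network idea, the sketch below factorizes a small random quantum state into a matrix product state (MPS) by sequential SVDs. The decomposition here is exact and untruncated; real tensor network codes gain their efficiency by truncating the singular values.

```python
import numpy as np

# Hedged sketch: exact matrix-product-state (MPS) factorization of a small
# quantum state by sequential SVD, the basic operation behind tensor
# network methods. Illustrative only; production codes truncate the bonds.
n = 4                                     # number of qubits
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j*rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

tensors, rest = [], psi.reshape(1, -1)
for _ in range(n - 1):
    chi = rest.shape[0]
    u, s, vh = np.linalg.svd(rest.reshape(chi*2, -1), full_matrices=False)
    tensors.append(u.reshape(chi, 2, -1))  # site tensor A[left, phys, right]
    rest = np.diag(s) @ vh                 # carry the remainder rightwards
tensors.append(rest.reshape(-1, 2, 1))

# Contract the chain back and compare with the original state
out = tensors[0]
for A in tensors[1:]:
    out = np.tensordot(out, A, axes=([-1], [0]))
print(np.allclose(out.reshape(-1), psi))   # exact reconstruction
```

The bond dimensions grow exponentially here; the power of the method lies in the fact that physically relevant states admit accurate low-bond-dimension truncations.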
Human groups deal with problem-solving through autonomous and self-organized processes of collective decision-making. Among scientists, statistical physicists have studied large collective systems in nature and, in particular, how the dynamics of collective decision-making affect the overall performance of the system. At the local scale, repeated and non-linear interactions among the system's components (the individuals in the group) trigger the emergence of new and unpredicted patterns at the global scale (the group), where non-linear phenomena like phase transitions, bifurcations, scaling, and self-organization have been observed. In the context of human groups, the emergence of Collective Intelligence (CI) explains why groups manifest problem-solving abilities higher than those of their individual members. Recently, the dynamics of group decision-making has been modelled through a combined process of consensus-seeking and individual search for high-performing solutions on the problem space. The corresponding numerical simulations have shown that groups undergo a critical phase transition from low to high performance, depending on the strength of social interactions among the agents and the level of self-confidence the individuals have in their knowledge of the problem. While social interactions strengthen the mechanism of consensus-seeking within the group, the level of self-confidence drives the agents towards effective exploration of the problem space. Here, we provide empirical evidence for these results. We performed behavioural experiments on group decision-making to assess whether and how the strength of social interactions among the agents in the group influences group performance during problem-solving. The empirical results confirm that social interactions influence the dynamics of group decision-making in real human groups.
We found that a critical strength of social interaction, equal to that predicted by numerical simulations, triggers a phase transition from low to high values in the level of consensus among the agents in the group, together with a similarly steep increase in group performance.
We study the spontaneously broken phase of the XY model in three dimensions, with boundary conditions enforcing the presence of a vortex line. Comparing field theoretical and Monte Carlo determinations of the magnetization profile, we numerically determine the mass of the vortex particle in the underlying O(2)-invariant quantum field theory. The result also shows that Derrick's theorem does not in general pose an obstruction to the existence of stable topological particles in scalar quantum field theories in more than two dimensions.
References:
[1] G. Delfino, W. Selke and A. Squarcini, Phys. Rev. Lett. 122, 050602 (2019).
[2] G. Delfino, J. Phys. A: Math. Theor. 47 (2014) 132001.
Slow processes in physics are typically related to the existence of high (free)
energy barriers, which require strong fluctuations, or to nearly integrable
regions in the phase space, which determine a slow onset of equipartition.
In this talk we intend to discuss a different mechanism appearing in the
Discrete Nonlinear Schroedinger Equation (DNLSE), when we analyze the relaxation
of large excitations (breathers).
Relaxation occurs through a diffusive-type process and through an activated process:
the effectiveness of both mechanisms decreases exponentially with breather height.
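As a hedged illustration of the model itself (not of the relaxation analysis discussed in the talk), here is a minimal split-step integrator for the DNLSE with a large localized excitation, monitoring the conserved norm; lattice size, time step and initial condition are illustrative choices.

```python
import numpy as np

# Toy split-step integrator for the DNLSE
#   i dpsi_n/dt = -(psi_{n+1} + psi_{n-1}) - |psi_n|^2 psi_n,
# with a tall, breather-like localized excitation on a periodic lattice.
# Both sub-steps are exactly unitary, so the norm is conserved.
N, dt, steps = 64, 0.01, 1000
n = np.arange(N)
psi = 0.1*np.exp(-((n - N/2)/4.0)**2).astype(complex)
psi[N//2] += 3.0                       # large localized excitation

k = 2*np.pi*np.fft.fftfreq(N)
hop_phase = np.exp(1j*2*np.cos(k)*dt)  # exact hopping step in Fourier space

norm0 = np.sum(np.abs(psi)**2)
for _ in range(steps):
    psi *= np.exp(1j*np.abs(psi)**2*dt)           # on-site nonlinearity (exact)
    psi = np.fft.ifft(hop_phase*np.fft.fft(psi))  # hopping term (exact)
norm = np.sum(np.abs(psi)**2)
print(abs(norm - norm0) < 1e-10)       # conserved norm, to machine precision
```

Tracking how slowly the excess density stored in the central site spreads, as a function of the initial breather height, is the kind of diagnostic the talk's relaxation analysis is built on.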
We describe a method to probe the quantum phase transition between the short-range and the long-range topological phase in the superconducting Kitaev chain with long-range pairing. We show that, when the leads are biased at a voltage V, the Fano factor is either zero or 2e. As a result, we find that the Fano factor works as a directly measurable quantity to probe the quantum phase transition between the two phases, also showing a remarkable "critical fractionalization effect". Finally, we note that a dual implementation of our proposed device makes it suitable as a generator of large-distance entangled two-particle states.
We study the out-of-equilibrium properties of (1+1)-dimensional quantum electrodynamics (QED), discretized via the staggered-fermion Schwinger model with an Abelian Z_n gauge group. We look at two relevant phenomena: first, we analyze the stability of the Dirac vacuum with respect to particle/antiparticle pair production, both spontaneous and induced by an external electric field; then, we examine the string breaking mechanism. We observe a strong effect of confinement, which acts by suppressing both spontaneous pair production and string breaking into quark/antiquark pairs, indicating that the system dynamics deviates from the expected behavior toward thermalization. We finally comment on the ground state properties of the considered models and on the evidence of phase transitions.
References:
- G. Magnifico et al., Real Time Dynamics and Confinement in the Z_n Schwinger-Weyl lattice model for 1+1 QED, arXiv:1909.04821 (2019)
- E. Ercolessi et al., Phase Transitions in Z_n Gauge Models: Towards Quantum Simulations of the Schwinger-Weyl QED, Phys. Rev. D 98, 074503 (2018)
- S. Notarnicola et al., Discrete Abelian Gauge Theories for Quantum Simulations of QED, J. Phys. A: Math. Theor. 48, 30FT01 (2015)
After a general introduction on Dirac and (multi-)Weyl semimetals, I will discuss the Abelian axial anomaly in multi-Weyl and triple-point semimetals, commenting on its physical consequences.
Reference:
L. Lepori, M. Burrello, and E. Guadagnini, "Axial anomaly in multi-Weyl and triple-point semimetals", J. High En. Phys. (2018).
Bound states in the continuum are extensively studied both theoretically and experimentally, with the aim of implementing noiseless memories. In quantum optics, the models adopted for their description make use of the dipolar interaction, representing an exactly solvable case in the one-excitation sector. We characterize the eigensystem for any number of equally spaced qubits embedded in one dimension using non-perturbative techniques, explicitly showing that the excitation amplitude profiles are governed by spin waves, and describing the degeneracy lifting obtained through the full analytic structure of the complex energy plane imposed by the form factor. For any odd number of qubits, the singularity condition can be factorized, yielding the emergence of multimers, consisting of subsystems separated by two lattice spacings not filled by the electromagnetic field. Our model is suited to the description of steady states in waveguide QED, although in its abstract form it encompasses more general bosonic fields.
This work is about the demographic trends and social dynamics observed at the onset and during the evolution of a financial bubble. Our characterization aims to detect demographic trends in the flux of new investors buying the Nokia asset, i.e. the most representative dotcom stock of the Nordic Stock Exchange during the dotcom bubble. The data for our empirical investigation are taken from two datasets. The first one is maintained by Euroclear, and it tracks the daily ownership of financial assets owned by Finnish investors during the period 1995-2003. The second dataset has demographic information collected by the national statistics office of Finland about age categories, number of inhabitants, income levels, education and jobs per postal code of residence. We track the flux of new investors entering the market daily, and each year we compare their demographic features with those of the whole Finnish population, using a method able to detect over-expression and under-expression of attributes in a heterogeneous system [1]. As for many innovative products or services, we detect bursty dynamics of access to the market by new investors. We investigate the attributes of age, postal code and gender of new Nokia investors. As far as income levels and job information are concerned, these attributes are first assigned through maximum entropy methods using age levels as a conditioning variable. We also present an agent-based model, a variant of Deffuant's opinion model [2], that can qualitatively describe the bursty profile of access to the market by new investors.
[1] Tumminello, M., Miccichè, S., Lillo, F., Varho, J., Piilo, J. and Mantegna, R.N., 2011. Community characterization of heterogeneous complex systems. Journal of Statistical Mechanics: Theory and Experiment, 2011(01), p.P01019.
[2] Deffuant, G., Neau, D., Amblard, F. and Weisbuch, G., 2000. Mixing beliefs among interacting agents. Advances in Complex Systems, 3(01n04), pp.87-98.
Joint work with Federico Musciotto (ETH Zurich) and Jyrki Piilo (University of Turku, Finland)
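A minimal sketch of the Deffuant bounded-confidence dynamics cited in [2]: agents hold opinions in [0, 1] and compromise only when their opinions are closer than a confidence threshold d. The talk uses a variant of this model; the parameters below are illustrative only.

```python
import numpy as np

# Deffuant bounded-confidence model [2], minimal form: random pairs of
# agents move towards each other by a fraction mu of their opinion gap,
# but only if the gap is below the threshold d. Illustrative parameters.
rng = np.random.default_rng(0)
N, d, mu, steps = 200, 0.25, 0.5, 50_000
x = rng.random(N)                       # initial opinions in [0, 1]
mean0 = x.mean()                        # the symmetric update conserves this

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if abs(x[i] - x[j]) < d:
        shift = mu*(x[j] - x[i])
        x[i], x[j] = x[i] + shift, x[j] - shift

print(x.min() >= 0.0 and x.max() <= 1.0)   # opinions stay in [0, 1]
```

For small d the population condenses into a few opinion clusters, which is the kind of mechanism the variant in the talk exploits to reproduce the bursty access profile.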
In recent years, the formulation of statistical ensembles of binary and weighted random graphs satisfying arbitrary constraints has attracted much attention in the physics and mathematics communities for its two-fold potential application [1, 2]: (i) the construction of appropriate null models for the statistical validation of high-order properties of real networks; (ii) the reconstruction of the statistical properties of real networks starting from partially accessible information. The cornerstone of the statistical physics of complex networks is the idea that the links, and not the nodes, are the effective particles of the system. Here we formulate a mapping between weighted networks and lattice gases, making the conceptual step forward of interpreting weighted links as particles with a generalised coordinate [3]. This leads to the definition of the grand canonical ensemble of weighted complex networks. We derive exact expressions for the partition function and thermodynamic quantities, both in the case of global and of local (i.e., node-specific) constraints on the density and mean energy of particles. We further show that, when modeling real networks, the binary and weighted statistics of the ensemble can be disentangled, leading to a simplified framework for a range of practical applications.
References
[1] T. Squartini, G. Caldarelli, G. Cimini, A. Gabrielli, D. Garlaschelli, Phys. Reports 757, 1-47 (2018).
[2] G. Cimini, T. Squartini, F. Saracco, D. Garlaschelli, A. Gabrielli, G. Caldarelli, Nature Reviews Physics 1, 58-71 (2019).
[3] A. Gabrielli, R. Mastrandrea, G. Caldarelli, G. Cimini, Phys. Rev. E 99, 030301(R) (2019).
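The link-as-particle picture admits a simple illustration in the homogeneous case: with only a global constraint on the expected total weight, the maximum-entropy measure factorizes and each node-pair weight is geometrically distributed. This is a hedged toy of that limiting case, not the node-specific formalism of [3].

```python
import numpy as np

# Toy grand canonical ensemble of weighted graphs: under a single global
# constraint on the expected total weight, the maximum-entropy measure
# factorizes over node pairs and each integer weight w >= 0 follows
# P(w) = (1 - x) * x**w, with <w> = x / (1 - x). Illustrative sketch only.
rng = np.random.default_rng(1)
N = 200
pairs = N*(N - 1)//2
target_mean_w = 0.5                     # imposed mean weight per pair
x = target_mean_w/(1 + target_mean_w)   # fugacity solving <w> = x/(1-x)

# numpy's geometric counts trials starting at 1, so shift to weights >= 0
w = rng.geometric(1 - x, size=pairs) - 1

print(abs(w.mean() - target_mean_w) < 0.05)   # sampled vs imposed density
```

The binary projection of the ensemble (a link exists iff w > 0) is what allows the binary and weighted statistics to be treated separately, as stated in the abstract.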
During neurodegeneration, the first stage of neuronal death, abnormal concentrations of structurally disordered proteins and of ions like zinc and copper are observed in the synapse. Some proteins, like amyloid-beta (Abeta) peptides, form, in Alzheimer's disease, oligomers and aggregates that become markers of the irreversible pathway towards death. When such conditions occur in the synapses, the chemical species formed by copper and amyloid peptides produce levels of radicals comparable with those of free copper ions, thus increasing oxidative stress.
Using computational resources offered by the PRACE infrastructure, we have extensively modeled the interactions between copper ions and amyloid-beta peptides in a water environment and in contact with a model synaptic membrane. The models explain why weak Cu-Abeta interactions, specific to amyloid-beta peptides, make copper more aggressive [1].
We simulated about 120 walkers starting from different configurations of one to four copper ions in contact with one to four amyloid-beta peptides, on the basis of empirical models. Each of these configurations was refined with explicit electrons, thus modeling the details of the copper-peptide interactions for all the configurations at the same time. Thousands of computing units can be efficiently used at the same time to provide an approximate statistical view of reactivity.
These models open a new avenue for understanding, at an atomic level, the role of disordered biological molecules in making the chemistry of reactive centers versatile, a general feature of living cells [2].
The progress in high performance computing we are witnessing today offers the possibility of accurate ab initio calculations of the structures of systems in realistic physico-chemical conditions. In this talk we present a parameter-free strategy aimed at performing a first-principles computation of the low-energy part of the X-ray Absorption Spectroscopy (XAS) spectrum, based on density functional theory (DFT).
The starting model system configurations are built by means of classical molecular dynamics simulations of metal-water complexes. Then, DFT is exploited to relax the resulting metal-water geometrical structures. Finally, the XAS spectra associated with the resulting structures are calculated.
As a first application, we determined the coordination mode of divalent metal cations in water, showing that Cu(II) and Zn(II) ions have different coordination modes. Here we will show preliminary results obtained in the more interesting, and much more computationally expensive, case in which metal ions are in complex with molecules of biological relevance, such as the amyloid-beta peptides involved in the pathogenesis of Alzheimer's disease.
The organization in time, space and energy of earthquakes exhibits scaling laws with exponents $\alpha,\beta,\gamma$ and $p$
which are universal, i.e. they are independent of time and of the geographic region.
A possible explanation of this critical-like behavior is provided by describing the evolution of a seismic fault
under friction within the general context of the depinning transition. In fact, minimal models of the seismic fault, such as the Burridge-Knopoff model, can be mapped to classical quenched Edwards-Wilkinson (qEW) interfaces. Nevertheless, the exponents of the scaling laws
of qEW interfaces are different from those exhibited by instrumental earthquakes.
In this talk I will present a more realistic description of the seismic fault which can be viewed as a qEW interface evolving over
a viscous-ductile substrate. I will show that this description produces scaling laws with exponents $\alpha,\beta,\gamma$ and $p$ in very good agreement with experimental values. More precisely, the values of the exponents $\alpha,\beta$ and $\gamma$ are quite independent of the model parameters, which explains their universal character. Conversely, the exponent $p$, controlling the temporal clustering of seismic sequences,
depends on the specific law implemented for the viscosity. The value $p=1$ of instrumental data is recovered assuming a velocity-strengthening law for the viscous layer, which is quite a realistic description of real fault systems.
We investigate the process of formation of large-scale structures in a turbulent flow confined in a thin layer. By means of direct numerical simulations of the Navier-Stokes equations, forced at an intermediate scale, we obtain a split of the energy cascade in which one fraction of the input goes to small scales generating the three-dimensional direct cascade. The remaining energy flows to large scales producing the inverse cascade which eventually causes the formation of a quasi-two-dimensional condensed state at the largest horizontal scale. Our results show that the connection between the two actors of the split energy cascade in thin layers is tighter than what was established before: the small-scale three-dimensional turbulence acts as an effective viscosity and dissipates the large-scale energy thus providing a viscosity-independent mechanism for arresting the growth of the condensate. This scenario is supported by quantitative predictions of the saturation energy in the condensate.
The understanding of the fundamental relation between electrophysiological activity and brain organization, even with respect to performing simple tasks, is a long-standing, fascinating question. The ability of the brain to self-organize information processing in an efficient way is a crucial ingredient in biologically plausible models. Recent experiments have shown that spontaneous brain activity is characterized by bursts, i.e. avalanches with no characteristic size, successfully interpreted in the context of criticality. We introduce a model inspired by self-organized criticality that reproduces the statistical properties of spontaneous activity, and use it to study multitask learning through the implementation of an adaptation mechanism inspired by neurobiology. The system is able to learn all the tested Boolean rules, as well as to recognize patterns with good performance. Finally, the fundamental open question of the relation between spontaneous and evoked activity is addressed by means of the coarse-grained Wilson-Cowan model. An approach inspired by non-equilibrium statistical physics allows us to derive a fluctuation-dissipation relation, suggesting that measurements of the spontaneous fluctuations in the global brain activity alone could provide a prediction for the system's response to a stimulus. Theoretical predictions are in good agreement with MEG data for healthy subjects performing visual tasks.
The Skyrme model is a low-energy effective field theory of strong interactions where nuclei and baryons appear as topological solitons, more concretely as collective excitations of pionic degrees of freedom. Proposed by Tony Skyrme in the sixties, the model received further support when it was discovered that, in the limit of a large number of colours in QCD, an effective theory of mesons arises. In recent years, there has been a revival of Skyrme's ideas, and new related models have been proposed to overcome two of the main drawbacks of the theory, namely the too-large binding energies and the lack of cluster structures. The aim of this talk is to address both issues at the same time, something that has not been done before, by extending the standard Skyrme model with the inclusion of the rho meson, via dimensional deconstruction of pure Yang-Mills theory in one higher dimension. The complexity of the resulting energy functional makes the use of HPC resources mandatory to successfully carry out this task.
The so-called ab initio methods in nuclear physics make it possible to solve the full $A$-body Hamiltonian with realistic potentials. Among the various ab initio approaches, the Hyperspherical Harmonic (HH) method has been successfully applied to the study of bound states and low-energy scattering processes of $A=3$ and $A=4$ systems [1]. The extension of the HH method to $A\geq5$ turns out to be limited by the large number of states needed to construct the Hamiltonian matrix elements up to convergence. However, brute-force parallelization combined with some new computational approaches opens the door to its use for larger systems. In this talk we will introduce the computational solutions used to construct the HH basis and the potential matrix elements in systems larger than $A=4$. Then we will present, as a first result of this new computational approach, the calculation of the $^6\text{Li}$ wave function, discussing also its electromagnetic structure and the $\alpha+d$ clusterization.
[1] A. Kievsky, S. Rosati, M. Viviani, L.E. Marcucci, and L. Girlanda, J. Phys. G: Nucl. Part. Phys. 35, 063101 (2008)
In recent years, the use of forces derived from chiral effective field theory (EFT) has grown exponentially. Chiral two-nucleon forces have been used in many microscopic calculations of nuclear reactions and structure. In some cases they have been complemented by chiral three-nucleon forces, with very successful applications to few-nucleon reactions, the structure of light- and medium-mass nuclei, and nuclear and neutron matter. However, their inclusion in heavier systems is very challenging due to the rapid increase in the number of involved matrix elements with the growing number of nucleons. Therefore, HPC codes and resources are badly needed.
We have implemented the chiral 3N force up to N2LO [1] in realistic shell-model calculations. This 3N potential at N2LO consists of three components: the two-pion term, the one-pion term and the contact term. This 3N force, together with the chiral 2N component, is used to derive effective Hamiltonians for shell-model calculations by resorting to the Kuo-Lee-Ratcliff folded-diagram expansion [2]. This approach is based on a perturbative expansion of the vertex function called the Q-box (for details see Ref. [3]).
Shell-model calculations using this effective Hamiltonian were performed for nuclei belonging to the p shell [1], with the aim of benchmarking against ab initio no-core shell model results. Having obtained satisfactory results, we moved to heavier systems, up to fp-shell nuclei [4], to reproduce the experimental shell evolution towards and beyond the closure at N = 28. We plan to go beyond this mass region and, in order to do that, we need to improve the performance of our three-body code, whose HPC aspects will also be discussed.
We present some results on the dynamics of a driven tracer particle beyond the linear regime, in two different model fluids. We first focus on a lattice gas model, where the tracer interacts via hard-core repulsion with a crowding particle bath, which allows for analytical computations. In this model, two surprising phenomena can occur: negative differential mobility, namely a nonmonotonic force-velocity relation, and enhanced diffusivity induced by the crowding interactions. Then, we consider the dynamics of a driven inertial particle in a steady laminar flow. Here we can observe the phenomenon of absolute negative mobility, where the tracer velocity is opposite to the applied external force. In this framework, we also study the dynamics of an active particle with finite persistence time and discuss a generalized fluctuation-dissipation relation involving correlations with non-equilibrium extra terms.
From large-scale Molecular Dynamics simulations, we performed a complete analysis of the local hexatic parameter, local density and out-of-equilibrium equation of state of self-propelled hard disks in two spatial dimensions. We established the complete phase diagram of the model. The equilibrium melting follows a mixed scenario with a first-order liquid-hexatic transition and a BKT hexatic-solid one. This scenario is maintained at small activities, with coexistence between active liquid and hexatic order. As activity increases, the emergence of hexatic and solid order is shifted towards higher densities. Above a critical activity and for a certain range of packing fractions, the system undergoes motility-induced phase separation and demixes into low- and high-density phases; the latter can be either disordered (liquid) or ordered (hexatic or solid) depending on the activity [1].
We also provide a quantitative analysis of all kinds of topological defects present in 2D passive and active repulsive disk systems. We show that the solid-hexatic melting is driven by the unbinding of dislocations, and that the dissociation of disclinations is present as soon as the liquid phase appears. These two processes are in agreement with the two defect-unbinding mechanisms predicted within the Halperin-Nelson theory of melting. Concerning the hexatic-liquid melting, we additionally observe that extended clusters of defects largely dominate over point defects. Such defect clusters percolate at the hexatic-liquid transition in continuous cases, or within the coexistence region in discontinuous ones, and their shape becomes more ramified with increasing activity [2].
[1] PD, D. Levis, A. Suma, L.F. Cugliandolo, G. Gonnella, I. Pagonabarraga. Full Phase Diagram of Active Brownian Disks: From Melting to Motility-Induced Phase Separation. Phys. Rev. Lett., 121, (2018).
[2] PD, D. Levis, L.F. Cugliandolo, G. Gonnella, I. Pagonabarraga. Clustering of topological defects in two-dimensional melting of active and passive disks. arXiv:1911.06366 (2019).
The determination of the phase diagram of QCD as a function of temperature and chemical potential via lattice simulations remains a challenge. This is because
of the "sign problem" at non-zero matter density. I want to highlight some
recent progress, both technical and analytical, which may bring reliable
answers soon.
One of the most promising quantities for the search of signatures of physics beyond the Standard Model is the anomalous magnetic moment g-2 of the muon, where a comparison of the experimental result with the Standard Model estimate yields a deviation of about 3.5 sigma. On the theory side, the largest uncertainty arises from the hadronic sector, namely the hadronic vacuum polarisation and the hadronic light-by-light scattering. I will review recent progress in calculating the hadronic contributions to the muon g-2 from the lattice and discuss the prospects and challenges to match the precision of the upcoming experiments.
We discuss two implementations of the recently developed density of states functional fit approach (DoS FFA) to lattice QCD at finite density. Both implementations are based on suitable pseudo-fermion representations of lattice QCD. The first approach identifies the imaginary part of the pseudo-fermion action in the grand-canonical picture and treats it with a direct application of DoS FFA. The second approach is based on the canonical formulation where physics at fixed net-quark number may be obtained with Fourier transformation with respect to imaginary chemical potential. The imaginary chemical potential is treated as an additional degree of freedom and DoS FFA is used to compute the corresponding density. The two formulations are discussed in detail and we present first tests.
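The canonical projection used in the second approach can be illustrated with a toy example, assuming nothing about the DoS FFA machinery itself: the fixed-net-quark-number partition function is the Fourier transform of the grand canonical one in imaginary chemical potential, Z_N = (1/2π) ∫ dφ exp(-iNφ) Z(μ = iφ). For M identical free modes with fugacity z, Z(iφ) = (1 + z exp(iφ))^M, so Z_N = C(M, N) z^N can be checked exactly.

```python
import numpy as np
from math import comb

# Toy canonical projection: Fourier transform of the grand canonical
# partition function over imaginary chemical potential. The test case
# Z(i*phi) = (1 + z*exp(i*phi))**M has the exact answer C(M, N) * z**N.
M, z, Nq = 8, 0.7, 3
K = 64                                    # quadrature points on the circle
phi = 2*np.pi*np.arange(K)/K
Z_gc = (1 + z*np.exp(1j*phi))**M          # grand canonical, imaginary mu
Z_N = np.mean(np.exp(-1j*Nq*phi)*Z_gc)    # discrete Fourier projection

exact = comb(M, Nq)*z**Nq
print(abs(Z_N.real - exact) < 1e-10 and abs(Z_N.imag) < 1e-10)
```

Since Z(iφ) here is a polynomial of degree M < K in exp(iφ), the discrete Fourier sum reproduces the integral exactly; in the interacting theory this projection is where the density computed by DoS FFA enters.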
The present status of quark flavour physics, both in the Delta F=1 and Delta F=2 sectors, is reviewed. Possible theoretical implications of present experimental data are discussed and the impact of future experiments is assessed for some observables of interest.
Lattice QCD on modern GPU systems
Quantum computing is emerging as a new paradigm for the solution of a wide class of problems that are not accessible by conventional high performance computers based on classical algorithms. Quantum computers can in principle efficiently solve problems that require exponential resources on classical hardware, even when using the best known classical algorithms. In the last few years, several interesting problems with potential quantum speedup have been brought forward in the domain of quantum physics, like eigenvalue search using quantum phase estimation algorithms and the evaluation of observables in quantum chemistry, e.g. by means of the hybrid variational quantum eigensolver (VQE) algorithm.
The original idea that a quantum computer can potentially solve many-body quantum mechanical problems more efficiently than classical algorithms is due to R. Feynman, who proposed to use quantum algorithms to investigate the fundamental properties of nature at the quantum scale. In particular, the simulation of the electronic structure of molecular and condensed matter systems is a challenging computational task, as the cost of resources increases exponentially with the number of electrons when accurate solutions are required. With the deeper understanding of complex quantum systems acquired over the last decades, this exponential barrier may be overcome by the use of quantum computing hardware. To achieve this goal, new quantum algorithms need to be developed that are able to best exploit the potential of quantum speed-up [1,2]. While this effort should target the design of quantum algorithms for the future fault-tolerant quantum hardware, there is a pressing need to develop algorithms which can be implemented in present-day NISQ (noisy intermediate-scale quantum) devices with limited coherence times [3,4].
In this talk, I will first introduce the basics of quantum computing using superconducting qubits, focusing on those aspects that are crucial for the implementation of quantum chemistry algorithms. In the second part, I will briefly discuss the limitations of currently available classical approaches and highlight the advantages of the new generation of quantum algorithms for the solution of many-electron problems in the ground and excited states [5].
[1] B.P. Lanyon et al., Nature Chem. 2, 106 (2010).
[2] N. Moll et al., Quantum Sci. Technol. 3, 030503 (2018).
[3] A. Kandala et al., Nature 549, 242 (2017).
[4] P. Barkoutsos et al., Phys. Rev. A 98, 022322 (2018).
[5] M. Ganzhorn et al., Phys. Rev. Appl. 11, 044092 (2019).
Real-time correlators are difficult to evaluate on a classical computer due to the sign problem. Simulations performed on a quantum computer naturally give real-time correlators, from which both parton distribution functions and the hadronic tensor may be obtained. In this talk, I describe three ingredients for the evaluation of the hadronic tensor on a quantum computer: the preparation of a proton state, the representation and simulation of SU(3) gauge theory, and the evaluation of real-time correlators.
In this work we demonstrate the effectiveness of the new D-Wave quantum annealer, the D-Wave 2000Q, in dealing with real-world problems. In particular, we show how the quantum annealing process is able to find global optima even for problems that do not directly involve binary variables. The problem addressed here is the following: given a matrix V, find two matrices W and H such that the norm of the difference between V and the matrix product WH is as small as possible. The work is inspired by O'Malley's article, in which the author proposed an algorithm for a closely related problem where, however, the matrix H contained only binary variables. In our case neither W nor H is a binary matrix: the factorization requires the matrix W to be composed of real numbers between 0 and 1, with each of its rows summing to 1. The QUBO problem associated with this type of factorization generates a potential landscape with many local minima, and we show that simple forward-annealing techniques are not sufficient to solve the problem. The D-Wave 2000Q has introduced new solution-refinement techniques, including reverse annealing, which allows one to explore the configuration space starting from a point chosen by the user, for example a local minimum obtained with a preceding forward anneal. We propose an algorithm based on the reverse-annealing technique (which we call adaptive reverse annealing) that is able to reach the global minimum even for QUBO problems where classic forward annealing, or uncontrolled reverse annealing, cannot reach satisfactory solutions.
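To illustrate the encoding step described above (a minimal sketch, not the authors' actual pipeline: the fixed-point bit depth, the update of H with W held fixed, and the brute-force solver standing in for the annealer are all assumptions made here), the least-squares subproblem for H can be mapped onto a QUBO over binary expansion variables:

```python
import itertools
import numpy as np

def qubo_for_H(V, W, n_bits=3):
    """Build a QUBO for min ||V - W H||^2 over H, with each entry of H
    encoded in fixed-point binary, H[i, j] = sum_k 2^{-(k+1)} q[i, j, k],
    so that entries lie in [0, 1). The constant term sum(V^2) is dropped."""
    m, r = W.shape
    n = V.shape[1]
    weights = [2.0 ** -(k + 1) for k in range(n_bits)]
    nvar = r * n * n_bits
    Q = np.zeros((nvar, nvar))

    def idx(i, j, k):
        # flat index of the k-th bit of H[i, j]
        return (i * n + j) * n_bits + k

    # ||V - W H||^2 = sum_{a,j} (V[a,j] - sum_{i,k} W[a,i] w_k x_{ijk})^2
    for a in range(m):
        for j in range(n):
            coeff = {idx(i, j, k): W[a, i] * weights[k]
                     for i in range(r) for k in range(n_bits)}
            for p, cp in coeff.items():
                Q[p, p] += cp * cp - 2.0 * V[a, j] * cp  # x^2 = x for binaries
                for s, cs in coeff.items():
                    if s > p:
                        Q[p, s] += 2.0 * cp * cs
    return Q

def brute_force_min(Q):
    """Exhaustive minimiser of x^T Q x over binary x; stands in for the annealer."""
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=Q.shape[0]):
        x = np.array(bits, dtype=float)
        e = float(x @ Q @ x)
        if e < best_e:
            best_e, best_x = e, x
    return best_x, best_e
```

In an actual run, the QUBO matrix built here would be passed to the annealer (forward or reverse) instead of the exhaustive search.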
We illustrate the application of Quantum Computing techniques to the investigation
of the thermodynamical properties of a simple system, made up of three
quantum spins with frustrated pair interactions and affected by a hard sign problem
when treated within classical computational schemes.
We show how quantum algorithms completely solve the problem, and discuss how
this can be applied to more complex systems of physical interest.
Current cosmological observations are in agreement with the standard cosmological model of a homogeneous and isotropic Universe at large scales, based on General Relativity and on the standard model (SM) of particle physics, complemented with a mechanism for the generation of primordial perturbations, i.e., the inflationary paradigm. When interpreted in minimal extensions of this ΛCDM framework, cosmological data exhibit no evidence for departure from the base model.
Nevertheless, the great accuracy of current observations allows us to reveal some discrepancies between data sets (e.g., CMB versus low-redshift measurements) that might excitingly point to the need for a beyond-ΛCDM paradigm. Next-generation cosmological surveys have the potential to further test the robustness of these discrepancies and, more generally, to unveil signatures of new physics hidden in cosmological probes.
In this talk, I will review the state-of-the-art of cosmological constraints on fundamental physics and prospects for future observations, highlighting the contribution of the INFN community.
The European Space Agency's Euclid Mission will be launched in 2022. During its six-year mission, the Euclid satellite will survey nearly 40% of the sky, providing scientists with an extraordinarily large amount of data that will impact many aspects of modern cosmology and astrophysics. The Euclid Mission's primary science goals are targeted towards constraining the nature of two of the most puzzling quantities in our Universe: Dark Energy and Dark Matter. In order to do so, the Euclid satellite will image billions of galaxies as well as measure tens of millions of spectra. In this talk we will summarize the main scientific objectives of the mission, focusing in particular on the topics closest to the INFN research interests, and on the computational challenges emerging in the analysis of Euclid data.
I will report on the use of INFN HPC resources made by the INFN TEONGRAV group. The focus of the numerical simulations performed within the TEONGRAV collaboration is on the modelling of compact object binaries, as sources of gravitational waves and electromagnetic signals, and on the study of the formation channels of black hole binaries.
We present the properties of the gravitational wave signal emitted after the merger of a binary neutron star system. We show that the post-merger evolution can be subdivided into three phases: an early post-merger phase (where the quadrupole mode and a few subdominant features are active), an intermediate post-merger phase (where only the quadrupole mode is active) and, when the remnant survives for more than 60 ms before collapsing to a black hole, a late post-merger phase (where convective instabilities trigger inertial modes).
Moreover, we show how to perform numerical simulations of binary neutron star mergers using the Einstein Toolkit. We discuss the motivation for going to high resolution, the computational requirements needed to reach it, and the numerical performance of the Einstein Toolkit public code. We present vectorization and scaling tests of the code on Skylake and Knights Landing processors to assess its capability of exploiting a large amount of parallel computing power. Our tests are run on the full infrastructure, evolving both the space-time metric variables and matter.
An out-of-equilibrium, isolated and uniform over-density of massive particles relaxes towards a quasi-stationary state close to virial equilibrium through a monolithic collapse driven by its own mean gravitational field. If the system initially breaks spherical symmetry and has some angular momentum, such dissipationless dynamics may give rise to a disk with persistent far-from-equilibrium structures like spiral arms, bars and/or rings. By considering several numerical experiments on a simple toy model, we also discuss the combined effect of gravitational and gas dynamics in such an out-of-equilibrium framework.
In this work we present new experimental results concerning Acoustic Emission (AE) recorded during cyclic compression tests on two different kinds of brittle building materials, namely concrete and basalt. The AE inter-event times were investigated through a non-extensive statistical mechanics analysis, which shows that their decumulative probability distributions follow q-exponential laws. The entropic index q and the relaxation parameter β_q ≡ 1/T_q, obtained by fitting the experimental data, exhibit systematic changes during the various stages of the failure process: the points (q, T_q) align linearly. The point T_q = 0 corresponds to the macroscopic breakdown of the material. The slope of the linear alignment, including its sign, appears to depend on the chemical and mechanical properties of the sample. These results provide insight into the warning signs of the incipient failure of building materials and could therefore be used in monitoring the health of existing structures such as buildings and bridges.
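For reference, the fitting form used above can be sketched numerically as follows (a minimal sketch; the parameter values in the usage below are placeholders, not the measured ones):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1 - q) x]^{1/(1-q)} when
    1 + (1 - q) x > 0 (and 0 otherwise); reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

def decumulative_model(tau, q, beta_q):
    """Model for the decumulative distribution of AE inter-event times:
    P(> tau) = e_q(-beta_q * tau)."""
    return q_exponential(-beta_q * tau, q)
```

Fitting this model to the empirical decumulative distribution yields the pair (q, β_q) tracked across loading cycles.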
Quantum statistics have been shown to emerge in the description of the statistical properties of growing networks when nodes are associated with a fitness value [1]. Recently it has been shown that quantum statistics also emerge in a growing simplicial complex model called Network Geometry with Flavor (NGF), which allows for the description of many-body interactions between the nodes [2,3]. This model depends on an external parameter called the flavor, which determines the underlying topology of the simplicial complex. When the flavor takes the value s = −1, the d-dimensional simplicial complex is a manifold in which every (d−1)-dimensional face can only have an incidence number nα ∈ {0, 1}. In this case the faces of the simplicial complex are naturally described by the Bose–Einstein, Boltzmann and Fermi–Dirac distributions, depending on their dimension. In this work we extend the study of NGFs to fractional values of the flavor, s = −1/m, in which every (d−1)-dimensional face can only have incidence number nα ∈ {0, 1, 2, ..., m}. We show that in this case the statistical properties of the faces of the simplicial complex are described by the Bose–Einstein or the Fermi–Dirac distribution only. Finally, we comment on the spectral properties of the networks constituting the underlying structure of the considered simplicial complexes [4].
References
1. G. Bianconi and A.L. Barabási, Bose-Einstein Condensation in Complex Networks, Phys. Rev. Lett. 86, 5632 (2001)
2. G. Bianconi and C. Rahmede, Complex Quantum Network Manifolds in Dimension d > 2 are Scale-Free, Sci. Rep. 5, 13979 (2015)
3. G. Bianconi and C. Rahmede, Emergent Hyperbolic Network Geometry, Sci. Rep. 7, 41974 (2017)
4. N. Cinardi, A. Rapisarda and G. Bianconi, Quantum statistics in Network Geometry with Fractional Flavor, J. Stat. Mech. 103403 (2019)
The light-cone definition of Parton Distribution Functions (PDFs) does not allow for a direct ab initio determination employing methods of Lattice QCD simulations, which naturally take place in Euclidean spacetime. In this presentation we focus on pseudo-PDFs, where the starting point is the equal-time hadronic matrix element with the quark and anti-quark fields separated by a finite distance. We concentrate on Ioffe-time distributions, which are functions of the Ioffe time ν and can be understood as the Fourier transforms of parton distribution functions with respect to the momentum-fraction variable x. We present lattice results for the nucleon and the pion, and we also perform a comparison with the pertinent phenomenological determinations.
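Schematically (in standard pseudo-PDF notation, with the scale dependence suppressed; the symbols here are generic rather than those of the talk), the Ioffe-time distribution is the Fourier transform of the PDF in the momentum fraction $x$:

```latex
\mathcal{Q}(\nu) \;=\; \int_{-1}^{1} \mathrm{d}x \; e^{\,i x \nu}\, q(x),
\qquad \nu \;=\; p \cdot z ,
```

where $p$ is the hadron momentum and $z$ the finite (spacelike) separation entering the equal-time matrix element.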
Construction of dual formulations of various abelian Z(N) and non-abelian U(N) and SU(N) lattice gauge theories with static quarks in the presence of non-vanishing chemical potential is reviewed. As an application of the dual formulation for models at finite baryon density we 1) present an exact solution of the Polyakov loop models in the large-N limit and 2) discuss the numerical computations of the Polyakov loop correlations in the SU(3) dual model. Several extensions of the dual transformations are outlined.
The spatial distribution of the chromoelectric field in the presence of a static quark-antiquark pair in the SU(3) gauge theory is determined from numerical simulations. The resulting field can be decomposed into a perturbative part and a nonperturbative confining field, the latter being oriented almost completely along the quark-antiquark line. A way of performing this decomposition using the irrotational property of the perturbative field is proposed. Estimates of the string tension and of the parameters of the nonperturbative flux tube are then obtained from the subtracted field.
T.B.A.
The modular bootstrap program for two-dimensional conformal field theories can be seen as a systematic exploration of the physical consequences of consistency conditions at the elliptic points and at the cusp of their torus partition function. The study at $\tau=i$, the elliptic point stabilized by the modular inversion $S$, was initiated in 2009 by Hellerman, who found a general upper bound for the most relevant scaling dimension $\Delta_0$. Likewise, analyticity at $\tau=i\infty$, the cusp stabilized by the modular translation $T$, yields an upper bound on the twist gap, whereas to date the study at $\tau=\exp[2i\pi/3]$, the elliptic point stabilized by $S\,T$, has been neglected. Here I find a far stronger upper bound in the large-$c$ limit, remarkably close to the minimal mass threshold of the BTZ black holes in the holographic dual $3d$ gravity. Even a modest improvement could push $\Delta_0$ below this threshold, implying that pure Einstein gravity does not exist as a quantum theory.
We discuss some recent applications of non-equilibrium statistical-mechanics theorems in numerical simulations of lattice gauge theories.
We study the nature of the transition of a three-dimensional lattice scalar model characterized by a nonabelian SU($N_c$) gauge symmetry and a continuous global flavor symmetry. For $N_f>1$ this model presents two different phases, associated with the spontaneous breaking of the flavor symmetry, hence it is an ideal tool to investigate whether the presence of nonabelian gauge symmetry can affect the critical properties of a statistical field theory. Two different effective models are studied, which are expected to describe the phase transition when the gauge degrees of freedom are relevant (continuum scalar chromodynamics) or when they are irrelevant (gauge invariant Landau-Ginzburg-Wilson approach), and their predictions are checked against the results of numerical lattice simulations.
We investigate the complex spectrum of the Dirac operator in 2+1-flavor
QCD, at nonzero temperature and isospin chemical potential, using the
extension of the Banks-Casher relation to the case of complex Dirac
eigenvalues (derived in the zero-temperature, high-density limit of
QCD at nonzero isospin chemical potential) as a prescription to obtain
information on the BCS gap from the 2d density of the complex Dirac
eigenvalues.
Such a study is motivated by the prediction, from perturbation theory,
of a superfluid state of $u$ and $\bar{d}$ Cooper pairs (BCS phase) at
asymptotically high isospin densities, plausibly connected via an
analytical crossover to a phase with Bose-Einstein condensation of
charged pions at $\mu_I \geq m_\pi/2$.
Further motivation comes from recent lattice observations (renormalized
Polyakov loop measurements) that indicate a decrease of the
deconfinement transition temperature as a function of $\mu_I$, suggesting
that the deconfinement crossover smoothly penetrates into the pion
condensation phase, and thus favoring a scenario where the deconfinement
transition connects continuously to the BEC-BCS crossover in the
$(T,\mu_I)$ phase diagram.
In this talk I will present recent results showing how the high-temperature phase of the compact U(1) gauge theory without matter fields in 2+1 spacetime dimensions can be studied in terms of conformal-field-theory predictions for the low-temperature phase of the XY model in 2 dimensions. The conformally-invariant analytical description of the XY model is compared with numerical results obtained in lattice simulations of the U(1) gauge theory above the critical temperature, in particular for the two-point correlation function of static charges and for the profile of the flux tube: excellent quantitative agreement is found with predictions for the functional forms and for the critical indices.
We investigate the competition of coherent and dissipative dynamics in many-body systems at continuous quantum transitions. We consider dissipative mechanisms that can be effectively described by Lindblad equations for the density matrix of the system. The interplay between the critical coherent dynamics and dissipation is addressed within a dynamic finite-size scaling framework, which allows us to identify the regime where they develop a nontrivial competition. We analyze protocols that start from critical many-body ground states, and put forward general dynamic scaling behaviors involving the Hamiltonian parameters and the coupling associated with the dissipation. This scaling scenario is supported by a numerical study of the dynamic behavior of a one-dimensional lattice fermion gas undergoing a quantum Ising transition, in the presence of dissipative mechanisms such as local pumping, decay and dephasing.
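For reference (standard notation with $\hbar = 1$; the jump operators $L_o$ and the global coupling $u$ are generic placeholders, not the specific choices of this study), the Lindblad equation for the density matrix $\rho$ reads:

```latex
\partial_t \rho \;=\; -\,i\,[H,\rho]
\;+\; u \sum_{o} \left( L_o\, \rho\, L_o^{\dagger}
\;-\; \tfrac{1}{2}\,\bigl\{ L_o^{\dagger} L_o ,\, \rho \bigr\} \right),
```

with local pumping, decay and dephasing corresponding to different choices of the on-site operators $L_o$.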
Polar flocking is one of the simplest but at the same time richest examples of collective behaviour in active matter systems. Its physical behaviour — stemming from the spontaneous breaking of a continuous symmetry and the nonequilibrium coupling of density and orientation fluctuations — has been thoroughly investigated in the last two decades, and we now have a good understanding of the asymptotic behaviour of isolated systems, at least in the dry and dilute approximation.
Considering flocks that are not isolated, but rather immersed in and interacting with the external world, on the other hand, forces one to consider the effect of boundaries, surface tension and/or the response to external perturbations. Surface tension, for instance, is needed to maintain flock cohesion, and due to their non-equilibrium activity, finite flocks exhibit faster-than-equilibrium surface fluctuations. The information inflow from the boundary, moreover, may also alter bulk correlations, both for isolated flocks and in the presence of external perturbations. Asymptotic linear response theory will also be discussed and compared with recent experimental results in active colloids.
We present a novel framework to compute the non-perturbative decay width of pseudoscalar mesons into charged leptons by means of Lattice QCD calculations, including for the first time the radiative emission of a photon. Together with the non-perturbative determination of the virtual photon corrections to these processes, this allows accurate predictions at O(αem) for the leptonic decay rates of pseudoscalar mesons, significantly improving the precision in the determination of the corresponding Cabibbo-Kobayashi-Maskawa (CKM) matrix elements.
Using lattice simulations, we give evidence of the existence of a non-perturbative mechanism for elementary particle mass generation in models with gauge fields, fermions and scalars, provided an exact invariance forbids power-divergent fermion masses and the fermionic chiral symmetries broken at the UV scale are maximally restored. We show that in the Nambu-Goldstone phase a fermion mass term, unrelated to the Yukawa operator, is dynamically generated.
In this work we investigate the relation between the realization of center symmetry and the theta-dependence of SU(3) and SU(4) Yang-Mills theories defined on $R^3 \times S^1$. In particular, we use the double-trace deformed version of Yang-Mills theory, in which extra pieces coupled to the traces of powers of the Polyakov loop are added to the standard action in order to recover center symmetry even at small compactification radii. First we study the phase diagram in the deformation plane; then we compute the topological susceptibility and the first term of the theta expansion of the free energy in the deformed theory, and compare them with the known values in the undeformed one.
Spectral projectors on the eigenmodes of the Dirac operator can be used to derive a fermionic lattice definition of the topological charge. Studying the renormalization properties of the lattice charge, we extend the spectral-projector definition of the topological susceptibility to the case of staggered fermions. In addition, we generalize the spectral method to any higher-order cumulant of the topological charge. Finally, we present results obtained in the quenched case for the topological susceptibility and for the fourth cumulant, as well as some preliminary results in full QCD.
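Schematically (following the standard spectral-projector construction of Giusti and Lüscher; normalization constants and the staggered-specific taste factors are omitted here and should be regarded as assumptions), the fermionic charge and susceptibility take the form:

```latex
Q \;=\; \mathrm{Tr}\bigl[\,\gamma_5 \, \mathbb{P}_M \bigr],
\qquad
\chi_t \;=\; \frac{\langle Q^2 \rangle}{V},
```

where $\mathbb{P}_M$ projects onto the eigenmodes of $D^{\dagger}D$ below the threshold $M^2$ and $V$ is the spacetime volume.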
Active matter, popularized by the collective motion of bird flocks, constitutes a novel and rapidly growing field gathering interest and contributions from very diverse communities. In particular, it offers new promises in organizing elementary units at different scales in ways that are unavailable to equilibrium systems, like clustering and phase separation without attractive interactions, dense fronts of coherently moving entities, etc. At a fundamental level, this new physics emerging in active matter has been mostly understood in terms of simplified particle models, which allow one to identify the key ingredients giving rise to such non-equilibrium collective phenomena. However, properly controlling the self-assembly of active particles is still an open challenge.
Here we consider chiral active matter, composed of large assemblies of polar circle swimmers, i.e. polar active particles that follow circular trajectories (like bacterial suspensions in two dimensions or asymmetric L-shaped chiral self-propelled colloids). We show that rotations induce a plethora of new collective behaviors compared to non-rotating particles and, in particular, provide a novel generic route to pattern formation. We show that slow rotations induce phase separation, while faster rotations result in the emergence of patterns of smaller synchronized structures with self-limited size, such as those observed in suspensions of sperm cells. Most remarkably, we show that the size of these patterns can be directly controlled by the microscopic parameters of the model in a simple way. Moreover, in the presence of a distribution of rotation frequencies, the swimmers can synchronize over very large distances, even in 2D, as opposed to non-active oscillators on static or time-dependent networks, which usually lead to synchronized domains only. We finally discuss some experimental observations in suspensions of magnetic colloids that can be understood within this framework.