Meeting of the SUMA group.
Super Massive Computations in Theoretical Physics
From Wednesday, 11 February 2015 (14:00) to Friday, 13 February 2015 (15:50)
Wednesday, 11 February 2015
15:00 - 16:00
Atomistic Simulation of Single Molecule Experiments: Molecular Machines and a Dynasome.
Helmut Grubmüller
Proteins are biological nanomachines which operate at many length and time scales. We have combined single molecule, X-ray crystallographic, and cryo-EM data with atomistic simulations to elucidate how these functions are performed at the molecular level. Examples include the mechanics of energy conversion in F-ATP synthase and tRNA translocation within the ribosome. We will show that tRNA translocation between the A, P, and E sites is rate limiting, and identify the dominant interactions. We will also show that the so-called L1 stalk actively drives tRNA translocation, and that 'polygamic' interactions dominate the intersubunit interface, thus explaining the detailed interaction free energy balance required to maintain both controlled affinity and fast translation. We will further demonstrate how atomistic simulations enable one to mimic, one-to-one, single molecule FRET distance measurements, and thereby to markedly enhance their resolution and accuracy. Finally, we will take a more global view on the 'universe' of protein dynamics motion patterns and demonstrate that a systematic coverage of this 'dynasome' allows one to predict protein function.
16:00 - 16:30
Coffee break
16:30 - 16:50
Variational Molecular Dynamics
Silvio Beccara
The Variational Molecular Dynamics (VMD) approach is a framework aimed at the efficient computation of the most probable folding pathways connecting denatured protein configurations with the native state. Its high computational efficiency enables the simulation of very slow (up to hours) transitions in large systems (hundreds of residues) with realistic atomistic force fields. The VMD approach can also be used to predict the most probable pathways for a conformational transition between given states of macromolecules. In my talk I will outline the main features of the method and give an account of our latest results. I will first discuss the folding of a natively knotted protein. I will then talk about the conformational transition of a serpin comprising 373 residues between its active (metastable) state and its latent state. I will also show how our method allowed us to explain the mode of action of a pharmaceutical used in connection with the PAI-1 serpin.
16:50 - 17:10
Zn induced structural aggregation patterns of β-amyloid peptides by first-principle simulations and XAS measurements
Velia Minicozzi
We show in this work that, in the presence of Zn ions, a peculiar structural aggregation pattern of β-amyloid peptides is favored, in which metal ions are sequentially coordinated to either three or four histidines of nearby peptides. To stabilize this configuration, a deprotonated imidazole ring from one of the histidines forms a bridge connecting two adjacent Zn ions. Though present in zeolite imidazolate frameworks, in biological compounds this peculiar Zn–imidazolate–Zn topology is remarkably found only in enzymes belonging to the Cu,Zn-superoxide dismutase family, in the form of an imidazolate bridging Cu and Zn. The results we present are obtained by combining X-ray absorption spectroscopy experimental data with detailed first-principle molecular dynamics simulations.
17:10 - 17:30
Ab initio simulations of X-ray Absorption Spectroscopy spectra. The case of Cu(II) ions in water.
Francesco Stellato (R)
The possibilities offered by high performance parallel computing for first-principle simulations of systems composed of a large number of atoms are opening the way to new approaches, not only for ab initio molecular dynamics simulations, but also for providing improved models for the interpretation of experimental data. In this context, we are developing a strategy to compute the electron density of biologically relevant systems in order to simulate ab initio X-ray Absorption Spectroscopy (XAS) spectra. The XAS signal originates from the scattering of the photoelectron off the surrounding atoms and therefore strongly depends on the potential the photoelectron sees. The structure of the XAS spectrum is usually computed starting from an approximated potential. Here we develop a strategy to evaluate the potential directly from the exact electron density calculated from first principles. After producing a number of atomic configurations (via classical or first-principle MD), our approach consists in calculating the electron density for each of these configurations, from which the photoelectron potential and the simulated XAS spectrum are computed. Under the ergodicity hypothesis, the average of the computed spectra over many configurations will represent the experimental spectrum, as the latter results from the sum of XAS signals from the many thermodynamically accessible configurations of the measured system. We report in this talk the results of an ab initio simulation of the XAS spectra of a system consisting of Cu(II) ions in water (see Figure 1(a)). This system is a good starting point to pave the way to the much more complicated case of computing the XAS spectrum of realistic and biologically relevant systems, such as biomolecules in water in complex with transition metals, recently studied in [1].
For the system in Figure 1(a), consisting of one Cu(II) ion dissolved in 29 water molecules, we selected a number of equilibrated configurations along a classical MD trajectory. On these we performed single-point (i.e. fixed nuclear positions) electron density calculations with the help of the QuantumESPRESSO suite [2]. The XAS spectra were finally computed making use of the Xspectra code [3]. In Figure 1(b) we show a comparison between the experimental XAS data of a Cu(II) sulfate water solution (blue line), acquired at the ESRF BM30B beamline, and the theoretical spectrum (red line) resulting from averaging over the simulated spectra of eight system configurations taken along a classical MD trajectory. Our preliminary analysis shows that the ab initio strategy for simulating XAS spectra we have described is quite effective in providing a good description of XAS experimental data.
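The configuration-averaging step described in this abstract can be sketched as follows; the number of configurations matches the eight mentioned above, but the energy grid and the spectra themselves are random placeholders standing in for the per-configuration Xspectra outputs, not the actual data.

```python
import numpy as np

# Hypothetical per-configuration spectra: n_conf simulated XAS spectra
# sampled on a common energy grid (placeholder values, not real output).
n_conf, n_energy = 8, 400
energy = np.linspace(8970.0, 9050.0, n_energy)  # eV, near the Cu K-edge
rng = np.random.default_rng(0)
spectra = 1.0 + 0.05 * rng.standard_normal((n_conf, n_energy))

# Under the ergodicity hypothesis, the spectrum to compare with experiment
# is the plain average over the thermodynamically sampled configurations.
avg_spectrum = spectra.mean(axis=0)

# The spread across configurations gives a rough statistical error band.
err_band = spectra.std(axis=0) / np.sqrt(n_conf)

print(avg_spectrum.shape)  # (400,)
```

In the actual workflow each row of `spectra` would come from one single-point electron-density calculation followed by an Xspectra run, interpolated onto the common energy grid before averaging.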
17:30 - 18:30
First-principles simulations at the nanoscale (and towards the exascale) with Quantum ESPRESSO.
Paolo Giannozzi
18:30 - 19:30
Thursday, 12 February 2015
09:30 - 10:30
Computing the visible universe via large-scale simulations of QCD.
Constantia Alexandrou
Quantum Chromodynamics (QCD) is the fundamental theory of the strong interactions, which underlie, among other phenomena, the binding of nucleons to form nuclei, nuclear fission, supernovae and stellar evolution. Most of the visible matter in the universe is generated via the strong interactions. Understanding the complex phenomena due to the strong force is particularly demanding, not only for experiments but also for theoretical approaches. The formulation of QCD on a 4-dimensional Euclidean lattice, known as lattice QCD, provides a unique approach for studying the strong interactions. Lattice QCD is particularly demanding in the computational power needed to perform these calculations. We will review the status of lattice QCD calculations and give future perspectives.
10:30 - 11:00
Coffee break
11:00 - 11:20
Chiral symmetry breaking in QCD Lite
Leonardo Giusti (M)
We compute the spectral density of the (Hermitian) Dirac operator near the origin in Quantum Chromodynamics with two light degenerate quarks. We use CLS lattices generated with two flavours of O(a)-improved Wilson fermions, corresponding to pseudoscalar meson masses down to 190 MeV, with spacings in the range 0.05-0.08 fm. Thanks to the coverage of parameter space, we can extrapolate our data to the chiral and continuum limits with confidence. The results show that the spectral density at the origin is non-zero because the low modes of the Dirac operator do condense, as expected in the Banks-Casher mechanism. Within errors, the spectral density turns out to be constant up to eigenvalues of approximately 80 MeV. Its value agrees with the one extracted from the Gell-Mann-Oakes-Renner relation.
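The two relations invoked in this abstract can be stated compactly; below, ρ is the spectral density of the Dirac operator, Σ the chiral condensate, m the light quark mass, and F_π, M_π the pion decay constant and mass (normalization conventions vary between papers):

```latex
% Banks--Casher: the spectral density at the origin measures the condensate
\lim_{\lambda\to 0}\,\lim_{m\to 0}\,\lim_{V\to\infty}\,\rho(\lambda,m) \;=\; \frac{\Sigma}{\pi}

% Gell-Mann--Oakes--Renner: the same condensate governs the pion mass
F_\pi^2\, M_\pi^2 \;=\; 2\, m\, \Sigma \;+\; O(m^2)
```

The agreement quoted in the abstract is between Σ extracted from ρ near the origin and Σ extracted from the second relation.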
11:20 - 11:40
Confinement and extreme conditions in lattice QCD
Leonardo Cosmai (BA)
We discuss some results obtained in studying the QCD vacuum structure at zero temperature and the QCD phase diagram at finite temperature and baryon density.
11:40 - 12:00
Properties of strongly interacting matter in extreme conditions.
Francesco Negro (PI)
We discuss recent results obtained from lattice QCD simulations regarding the properties of strongly interacting matter at finite temperature and density, or in the presence of external background fields.
12:00 - 12:20
Non-perturbative renormalization of the energy-momentum tensor in SU(3) Yang-Mills theory.
Michele Pepe (MIB)
We present a strategy for a non-perturbative determination of the finite renormalization constants of the energy-momentum tensor in the SU(3) Yang-Mills theory. The computation is performed by imposing suitable Ward identities on the lattice, in a finite box in the presence of shifted boundary conditions. We show accurate numerical data for values of the bare coupling g0^2 ranging from 0 to 1.
12:20 - 12:40
Adopting modern hardware for lattice QCD calculations.
Silvano Simula (ROMA3), Mario Schröck
The calculation of observables in lattice quantum chromodynamics (QCD) requires solving many linear systems of equations with matrices of rank up to several millions. We discuss the adoption of modern highly parallel computer hardware, in particular GPUs and Intel MICs, to accelerate this task.
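As an illustration of the kind of solver involved, here is a minimal conjugate-gradient iteration for a Hermitian positive-definite system. In production lattice QCD codes the matrix is the (preconditioned) Dirac operator with rank in the millions and the matrix-vector products run on the accelerators; the small random matrix here is a stand-in, not the actual operator.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A via CG."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged to tolerance
            break
        p = r + (rs_new / rs) * p  # new conjugate direction
        rs = rs_new
    return x

# Small SPD stand-in for the (much larger) lattice operator.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)      # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # small residual
```

The matrix-vector product `A @ p` is the dominant cost and is exactly the kernel offloaded to GPUs and MICs in the codes discussed in the talk.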
12:40 - 13:00
A perturbative study of the Schrödinger Functional in Lattice QCD.
Pol Vilaseca Mainar
Strong interactions are (so far) correctly described by Quantum Chromodynamics (QCD). This theory accurately explains phenomena from the scale of hadronic physics at low energies to the production of jets in high energy collisions and quark-gluon plasmas. Due to the property of asymptotic freedom, the theory can be studied at high energies by means of perturbation theory. At low energies, a non-perturbative formulation of QCD is required. A widely used possibility is to formulate the theory on a discrete space-time lattice, which makes the study of strong interactions numerically tractable. Here we will describe some results in the context of lattice perturbation theory, which on many occasions is the only way of retaining analytic control over the theory before embarking on large scale numerical calculations.
13:00 - 15:00
Lunch
15:00 - 16:00
Numerical simulations of fluids at high and at low Reynolds numbers.
Federico Toschi (INFN)
In this talk we will review a few recent results on the direct numerical simulation of turbulent and laminar fluids. We will discuss some of the open challenges, in particular in relation to the transport of particulate matter.
16:00 - 16:30
Coffee break
16:30 - 16:50
Magnetic field amplification and the search for Magneto-Rotational-Instability in bar-mode unstable Neutron stars.
Roberto De Pietri (PR)
Recent advances in high performance computing are opening the possibility of tackling new problems in the dynamics of magnetic fields inside neutron stars, where very-high-resolution three-dimensional simulations in full General Relativity are needed. We present results on the possible roles that magnetic instabilities may play in the evolution of a neutron star during matter-unstable phases. Our main goal was to find and follow the possible onset and growth of the magneto-rotational instability (MRI).
16:50 - 17:10
The Einstein toolkit on SUMA systems.
Michele Brambilla (INFN)
Recent progress in Numerical Relativity and General Relativistic Magneto-Hydrodynamics (GRMHD) is opening new windows on our understanding of possible sources of Gravitational Waves (GWs) that could be detected in the coming years by the INFN gravitational wave observatory VIRGO. Unfortunately, cutting-edge simulations at the desired accuracy require of the order of 10 PFlops of computing power and 1 TByte of RAM. The SUMA project aims to study which of the possible upcoming architectures will be suited to obtain, in the near future, the computational power that would make these simulations achievable. I will discuss scaling tests, on the systems available to the SUMA collaboration, of one of the leading tools for Numerical Relativity simulations, the Einstein toolkit. This tool implements multithread (OpenMP), multiprocess (MPI) and mixed MPI+OpenMP parallelization techniques. The results show how the Einstein toolkit performs on the current generation of supercomputers.
17:10 - 17:30
Distributed Polychronous Spiking Neural Net (DPSNN) application code for evaluation of off-the-shelf and custom systems dedicated to neural network simulations
Elena Pastorelli
In the framework of the EURETILE European project, a natively distributed application has been developed as a representative of plastic spiking neural network simulators. The DPSNN-STDP simulator (Distributed Simulation of Polychronous Spiking Neural Networks with synaptic Spike-Timing Dependent Plasticity) will be used in the context of the CORTICONIC European FET project to produce simulations of cortical slices for comparison with in-vivo experiments. The application will also be used to drive the development of future parallel/distributed computing systems dedicated to the simulation of plastic spiking networks. The DPSNN-STDP simulator has been designed to generate identical spiking behaviours and network topologies over a varying number of processing nodes, simplifying the quantitative study of scalability on both commodity and custom architectures. Moreover, it can easily be interfaced with standard and custom software/hardware communication interfaces. Being natively distributed and parallel, it should not pose major obstacles to distribution and parallelization on several platforms. During 2015, the DPSNN-STDP application will be further enhanced to enable the description of larger networks and more complex connectomes, and finalized for application to biological simulations. The development of the DPSNN-STDP simulator has been funded by the European FET FP7 projects CORTICONIC (grant 600806) and EURETILE (grant 247846), in cooperation with the SUMA project.
17:30 - 17:50
NaNet: a network interface card family for GPU-based real-time systems.
Andrea Biagioni (ROMA1)
The NaNet project aims to deliver a low-latency, high-throughput data transport mechanism for GPU-based real-time systems. The goal is an FPGA-based PCIe Network Interface Card (NIC) design with GPUDirect P2P/RDMA capability, featuring a configurable and extensible set of data transmission channels. An ad-hoc network stack protocol offload engine and a data stream processing stage, combined with the GPUDirect capability, make NaNet suitable for real-time GPU contexts. The design currently supports both standard channels - GbE (1000BASE-T) and 10GbE (10GBASE-R) - and custom ones - 34 Gbps APElink and 2.5 Gbps deterministic-latency KM3link - but the modularity of the NaNet architecture allows for straightforward inclusion of other link technologies. A description of the NaNet architecture and its performance is given, showing two use cases: the GPU-based low-level trigger for the RICH detector in the NA62 experiment at CERN, and the on-/off-shore data transport system for the KM3NeT-IT underwater neutrino telescope. This work has been funded by the European FET FP7 project EURETILE (grant 247846).
17:50 - 18:10
Multi-dimensional Torus networks for current and next generation HPC systems: APEnet+ status and perspectives.
Roberto Ammendola (ROMA2)
The APEnet family of interconnect cards has been around for more than a decade, pursuing the legacy of the APE custom massively parallel computing machines and bringing a few of their concepts (3D-torus network, high bandwidth, low latency) to commodity clusters. Over the years we have added key features to our custom network, like the Remote-DMA programming paradigm or, lately, NVIDIA peer-to-peer capability for tightly coupling GPUs with our network card. The most strenuous efforts went into keeping our communication interface IP cores (both on the host side, on the PCIe interface, and on the remote link side) up to date with current state-of-the-art technology. APEnet+ is the current production-class interconnect card equipping QUonG, the 16-node CPU-GPU hybrid system we deployed. Exploring and developing innovative ideas in various research fields, such as fault tolerance, fast address translation or embedded processors, has brought interesting results and a significant impact on our system performance. A comprehensive view of these topics will be given in this talk, along with the results achieved so far. Perspectives for future work will also be given, covering topics such as the integration of next-generation ARM Systems-on-Chip, collective communication optimizations and other planned hardware and software enhancements. This work has been funded by the European FET FP7 project EURETILE (grant 247846) and by the MIUR project SUMA.
Friday, 13 February 2015
09:30 - 10:30
The European Supercomputer Projects DEEP and DEEP-ER.
Norbert Eicker (Jülich Supercomputing Centre), Thomas Lippert
10:30 - 11:00
Coffee break
11:00 - 11:20
Theocluster ZEFIRO.
Giuseppe Caruso (P)
The newly installed theocluster named 'Zefiro' (a cluster for theoretical physics computing), funded by GR IV and by the SUMA project, currently consists of 32 machines, each with 512 GB of RAM and 4 processors of 16 cores each, for a total of 2048 AMD Opteron 6380 (2.5 GHz) compute cores, with QDR Infiniband interconnect managed by a 108-port Mellanox IS5100 switch. The cluster is intended for local computing. Access is regulated through the IBM LSF scheduler, version 9. Jobs are submitted through dedicated queues directed to the cluster, where the 64 cores of each node are grouped into 2 job slots. In addition, a machine dedicated to job compilation, served by its own queue and not part of the 32-machine Zefiro architecture, is available; it has 2 processors of 16 cores each, for a total of 32 cores and 256 GB of RAM; its cores are grouped in bunches of 1 core, each counted as a job slot, giving a total of 32 job slots. All compute nodes run SUSE Linux Enterprise Server (SLES) as the base distribution, while for the computational software used in theoretical physics and for the other applications of interest a Scientific Linux v.6 (SL6) distribution is provided, together with a dedicated area for debugging one's jobs on a few separate queues.
11:20 - 11:40
Portability, efficiency and maintainability: the case of OpenACC
Enrico Calore (INFN, Sezione di Ferrara)
An increasing number of HPC systems rely on heterogeneous node architectures combining traditional multicore CPUs with power-efficient accelerators. Writing efficient applications for these systems can be cumbersome today, since porting may require rewriting code in new programming languages, such as CUDA or OpenCL, threatening maintainability, stability and correctness. Several innovative programming environments try to tackle this problem; among them, OpenACC offers a high-level approach based on directives: porting applications to heterogeneous architectures 'simply' requires annotating existing C, C++ or Fortran codes with specific "pragma" clauses that identify the regions to offload and run on accelerators. This approach guarantees high portability of codes, since support for different accelerators relies on compilers; however, one has to carefully assess the relative costs of portability versus computing efficiency. In this presentation we address precisely this issue, using a Lattice Boltzmann code as a test-bench. We describe our experience in implementing and optimizing a multi-GPU Lattice Boltzmann code using OpenACC and OpenMPI, focusing also on overlapping communications and computation to make the code scale on a large number of accelerators.
11:40 - 13:00
Round Table and Conclusions