- Mauro Morandin (INFN), 09/03/2010, 14:30
- Dr David Brown (Lawrence Berkeley National Lab), 09/03/2010, 14:50
- Fabrizio Bianchi (TO), 09/03/2010, 15:20
- 09/03/2010, 15:50
- Vincenzo Innocente (CERN), 09/03/2010, 17:00
- Vincenzo Innocente (CERN), 09/03/2010, 17:15
  In the last few years computing has been characterized by the advent of "multicore" CPUs. Effective exploitation of this new kind of computing architecture requires the adaptation of legacy software and eventually a shift of programming paradigms toward massive parallelism. In this talk we will introduce the reasons that led to the introduction of "multicore" hardware and the consequences on...
- Alfio Lazzaro (MI), 09/03/2010, 18:15
  Increasing the size of data samples for analysis and using advanced algorithms for background suppression (such as unbinned maximum likelihood fits) require high CPU performance. Recently, vendors like Intel and AMD have not increased the performance of single CPU cores as in the past, working instead on multi-core CPUs. Currently we have up to 8 cores implemented on one single...
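The chunk-and-sum decomposition behind parallelizing an unbinned maximum likelihood fit can be sketched as follows. This is an illustration of the general technique, not the speaker's implementation: the Gaussian model and all function names are my own, and a thread pool stands in for true multi-core execution for simplicity.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def chunk_nll(chunk, mu, sigma):
    """Negative log-likelihood of a Gaussian model over one chunk of events."""
    norm = math.log(sigma * math.sqrt(2.0 * math.pi))
    return sum(0.5 * ((x - mu) / sigma) ** 2 + norm for x in chunk)

def parallel_nll(events, mu, sigma, n_workers=4):
    """Split the event sample into one chunk per worker, evaluate the
    partial NLLs concurrently, and sum them. The NLL is a plain sum over
    independent events, so the decomposition is exact."""
    size = max(1, (len(events) + n_workers - 1) // n_workers)
    chunks = [events[i:i + size] for i in range(0, len(events), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(lambda c: chunk_nll(c, mu, sigma), chunks))
```

In a real fit the minimizer would call `parallel_nll` once per parameter point; a production implementation would use process pools or vectorized kernels rather than Python threads, which are serialized by the GIL.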
- 09/03/2010, 18:45
- Leone Battista Bosi (PG), 10/03/2010, 08:30
  Currently, the technological solutions adopted by most manufacturers are leading to CPU architectures with an ever greater degree of parallelism, evolving from the multi-core era to the "many-core" era. In this scenario hundreds, and soon thousands, of processing cores are contained within the same processor. Such a deep change in the architectural paradigm compels an equally deep change in...
- Gerry Ganis, 10/03/2010, 09:00
  Concurrency aims to improve computing performance by executing a set of computations simultaneously, possibly in parallel. Since the advent of today's many-core machines, the full exploitation of the available CPU power has been one of the main challenges for high-performance computing software projects, including the HEP ones. However, in HEP data analysis the bottleneck is not (only)...
- Karen Tomko, 10/03/2010, 09:30
  Linux clusters consisting of multi-core commodity-chip-based nodes, augmented with GPGPU accelerators, are becoming common at computing centers. In this talk we report on some early results of a study investigating the use of this computing paradigm to accelerate the fitting algorithms used in MINUIT. In particular we show that very good speedups are possible for the negative log likelihood (NLL)...
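The key to making an NLL sum GPU-friendly is a two-stage reduction: each thread block reduces its slice of events to one partial sum, and the host then combines the partials. A plain-Python sketch of that pattern (my illustration with hypothetical names, not the study's code; on a GPU, stage 1 would run as a pairwise tree reduction in shared memory):

```python
import math

def nll_term(x, mu, sigma):
    """Contribution of a single event to the Gaussian NLL."""
    z = (x - mu) / sigma
    return 0.5 * z * z + math.log(sigma * math.sqrt(2.0 * math.pi))

def block_reduce(values):
    """Pairwise tree reduction, as done in GPU shared memory:
    at each step, slot i accumulates slot i + stride, halving the
    number of live slots until one sum remains."""
    vals = list(values)
    while len(vals) > 1:
        half = (len(vals) + 1) // 2
        for i in range(len(vals) - half):
            vals[i] += vals[i + half]
        vals = vals[:half]
    return vals[0] if vals else 0.0

def blockwise_nll(events, mu, sigma, block_size=256):
    """Stage 1: each 'thread block' tree-reduces block_size event terms
    to one partial sum. Stage 2: the host reduces the per-block partials."""
    partials = [
        block_reduce(nll_term(x, mu, sigma) for x in events[i:i + block_size])
        for i in range(0, len(events), block_size)
    ]
    return block_reduce(partials)
```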
- 10/03/2010, 09:50
- Peter Elmer (Princeton University), 10/03/2010, 11:00
- Dr Pere Mato (CERN), 10/03/2010, 11:15
  Object-oriented event data processing software frameworks have been developed by all recent HEP experiments to structure and support their data processing programs, such as simulation, reconstruction and data analysis. These frameworks implement a given architectural vision and provide a set of features and functionalities that are very similar across the various experiments. This...
- 10/03/2010, 11:45
- Dr David Brown (Lawrence Berkeley National Lab), 10/03/2010, 14:30
- Paolo Calafiura (Lawrence Berkeley National Lab), 10/03/2010, 14:45
- K.-T. Lim, 10/03/2010, 15:15
  SciDB is a new open source database management system that emerged from the Extremely Large Database (XLDB) workshop series. Specifically designed to meet the needs of many scientific disciplines, it features an array-based data model that adds order and adjacency to the traditional relational set model. SciDB's array model and the SciDB software in its currently projected form is *not*...
- 10/03/2010, 15:55
- Andrea Di Simone (RM2), Prof. Roberto Stroili (Universita' di Padova and INFN), Dr Steffen Luitz (SLAC), 10/03/2010, 17:00
- Emil Obreshkov (CERN), 10/03/2010, 17:20
  The presentation will summarize the release building and validation procedures used by the LHC experiments at CERN, with a brief description of how this is done in each experiment and the main focus on ATLAS. The complexity of the detectors leads to large collaborations with many users and developers providing software code. Every experiment has its own tools, developed and used in-house, but there are also many...
- Dr Steffen Luitz (SLAC), 10/03/2010, 17:50
  A short overview of the BaBar Online/Offline interplay, lessons learned and some candidate topics for SuperB R&D.
- Luciano Orsini (CERN), 10/03/2010, 18:05
  The CMS online software encompasses the whole range of elements involved in the CMS data acquisition function. A central online software framework (XDAQ, the Cross-Platform DAQ Framework) has matured over an eight-year development and testing period and has shown itself able to cope well with the CMS requirements. The framework relies on industry-standard networks and processing equipment. All...
- 10/03/2010, 18:35
- I. Gaponenko (SLAC), 11/03/2010, 08:30
- I. Gaponenko, 11/03/2010, 08:45
  BaBar has been a unique experiment in the history of High Energy Physics, not only for the physics itself but also for the way this physics was extracted. It pioneered the involvement of hundreds of physicists, engineers and students in practical C++ programming using OOAD methodologies. The experiment introduced geographically distributed data processing and event simulation production...
- A. Vaniachine (Argonne National Laboratory), 11/03/2010, 09:15
  Database applications and computing models of LHC experiments. Database integration with frameworks and into the overall data acquisition/production/processing and analysis chain of the experiment(s). Best practices in the database development cycle: schema, contents, code deployment, etc. Choice of database technologies: RDBMS, hybrid, etc. Examples of database interfaces and tools, such as...
- 11/03/2010, 10:00
- Armando Fella (CNAF), Eleonora Luppi (Ferrara University & INFN), 11/03/2010, 14:30
- Claudio Grandi (INFN-Bologna), 11/03/2010, 14:40
  In this presentation, the main aspects of distributed computing that a HEP experiment has to address are discussed. This is done by analyzing what the current experiments, mainly at the CERN LHC, are using, either provided by Grid infrastructures or developed by themselves. After a brief introduction on the overall distributed computing architecture, the specific aspects that are treated...
- F. Giacomini (INFN - CNAF), 11/03/2010, 15:10
  After the end of the EGEE series of projects, the way improvements to the middleware are developed and deployed will change significantly, with a stronger focus on the stability of the infrastructure and on quality assurance. The middleware will evolve according to the requirements coming from users in terms of reliability, usability, functionality, interoperability, security, management,...
- R. Alfieri (Parma University & INFN), 11/03/2010, 15:30
  The talk will focus on the support for parallel programs, MPI and multithreaded, in EGEE, which is currently the Grid infrastructure of which INFN Grid is a member. I will review the status of MPI support and the usage of the upcoming new syntax through the analysis of a case study.
- Davide Salomoni (INFN - CNAF), 11/03/2010, 15:50
  The presentation will first describe what the main current evolution trends for distributed computing seem to be, then move on to explore how computing resources could be uniformly accessed via either grid or cloud interfaces using virtualization technologies. The integration of grids and clouds can on the one hand expand and optimize the use of available computing resources, while on the...
- 11/03/2010, 16:10
- Fabrizio Bianchi (TO), 11/03/2010, 17:30
- Ulrik Egede (Imperial College (London)), 11/03/2010, 17:40
  Using experience gained from the analysis environments in LHCb, BaBar and ATLAS, I will comment on the critical aspects that can make an analysis environment effective. I will do this by creating a set of requirements and then looking at what lessons can be learned from past and current analysis environments with respect to these requirements. Some particular issues to consider are: - The...
- Mat Bellis, 11/03/2010, 18:10
  * A discussion of 4-vector viewers as a visualization tool residing between the event display and final histograms. This has been useful for educating new students/analysts as well as an outreach tool. * Viewpoints: a NASA-developed multi-variate display package used to develop more of an intuition for multi-variate datasets. This has been used with great success with undergraduate students...
- 11/03/2010, 18:40
- Dr Fabrizio Furano (CERN), Vincenzo Maria Vagnoni (BO), 12/03/2010, 08:30
- Dr Fabrizio Furano (CERN), Vincenzo Maria Vagnoni (BO), 12/03/2010, 08:40
- Masahiro Tanaka, 12/03/2010, 09:10
  Gfarm File System is a wide-area distributed file system that federates the local disks of compute nodes in a Grid or in computer clusters. It is a high-performance distributed parallel file system designed for I/O-intensive scientific data analysis conducted in collaboration with multiple distant organizations. Gfarm is under vigorous development for better performance and usability. A parallel...
- Giacinto Donvito (INFN), 12/03/2010, 09:35
  A report on some new emerging technologies suitable for managing, with high efficiency, huge amounts of data will be given. In particular, details about Lustre, Hadoop and Ceph, as examples of different approaches to the problem of providing input/output data to scientific applications, will be presented.
- Dr Moreno Marzolla (Bologna University), 12/03/2010, 10:05
  Capacity planning is a very useful tool to estimate the future resource demand needed to carry out a given activity. Using well-known modelling techniques it is possible to study the performance of a system before actually building it, and to evaluate different design alternatives. However, capacity planning must be done properly in order to be effective. This presentation describes the benefits,...
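One family of well-known modelling techniques for capacity planning is queueing theory. As a self-contained illustration (my example, not material from the talk), the steady-state M/M/1 formulas already let you estimate utilization and response time for a single service resource before building it:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue.
    arrival_rate (lambda) and service_rate (mu) are in jobs per unit time;
    stability requires lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate           # server utilization
    mean_jobs = rho / (1.0 - rho)               # mean number of jobs in the system
    mean_response = 1.0 / (service_rate - arrival_rate)  # mean time in system
    return {
        "utilization": rho,
        "mean_jobs": mean_jobs,
        "mean_response_time": mean_response,
    }
```

For example, a batch node serving 10 jobs/hour that receives 8 jobs/hour runs at 80% utilization with a mean of 4 jobs in the system; pushing the arrival rate toward the service rate makes the response time diverge, which is exactly the kind of behavior capacity planning is meant to expose in advance.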
- 12/03/2010, 11:00
- Peter Elmer (Princeton Univ.), Vincenzo Innocente (CERN), 12/03/2010, 14:00
- Andrea Di Simone (RM2), Prof. Roberto Stroili (Universita' di Padova and INFN), Dr Steffen Luitz (SLAC), 12/03/2010, 14:30
- Dr David Brown (Lawrence Berkeley National Lab), Sasha Vanyashin (CERN), 12/03/2010, 14:45
- Armando Fella (CNAF), Eleonora Luppi (Ferrara University & INFN), 12/03/2010, 15:05
- Fabrizio Bianchi (TO), 12/03/2010, 15:25
- Dr Fabrizio Furano (CERN), Vincenzo Maria Vagnoni (BO), 12/03/2010, 15:45
- 12/03/2010, 16:05