This conference is the fifteenth of a series of meetings initiated in Tirrenia in 1980 and continued in Castiglione della Pescaia and La Biodola, devoted to reviewing progress on advanced detectors and instrumentation for physics experiments. The meeting is sponsored by the Istituto Nazionale di Fisica Nucleare (INFN), the Società Italiana di Fisica (SIF), the European Physical Society (EPS), the University of Pisa and the University of Siena.
Transformative discovery in science is driven by innovation in technology. Our boldest undertakings in fundamental physics have at their foundation precision instrumentation. To reveal the profound connections underlying everything we see from the smallest scales to the largest distances in the Universe, to understand its fundamental constituents, and to reveal what is still unknown, we must invent, develop, and deploy advanced instrumentation. The 2020 European Strategy for Particle Physics requested that ECFA organize a roadmap developed by the community to balance the detector R&D efforts in Europe, taking into account progress with emerging technologies in adjacent fields. The roadmap identified and described a diversified detector R&D portfolio that has the largest potential to enhance the performance of the particle physics programme in the near and long term. This talk will outline some of the great questions in particle physics and how the ECFA Detector Roadmap addresses them.
Early Career Researchers (ECRs) play a crucial role in the LHC experiments. Since future experiments in particle physics can take decades to conceptualise, design, build and operate, today's ECRs are the leaders of tomorrow's experiments. The ECFA ECR panel conducted a survey about the training of ECRs in instrumentation. The survey also yielded many other findings on issues of networking, recognition and diversity in instrumentation, which will also be presented. The goal is to stimulate discussion about the needs of ECRs in instrumentation and what actions can be taken to help them.
The Belle II experiment at the SuperKEKB e+e- collider started data taking in 2019 with the goal of collecting 50 ab$^{-1}$ in the course of the next several years. The detector is working well with very good performance, but the first years of running are revealing novel challenges and opportunities for reliable and efficient detector operation with machine backgrounds extrapolated to full luminosity. For this reason, and also considering that an accelerator consolidation and upgrade shutdown is being studied for the 2026-2027 timeframe to reach the target luminosity of $6\times 10^{35}$ cm$^{-2}$s$^{-1}$, Belle II has started to define a detector upgrade program to make the various sub-detectors more robust and performant even in the presence of high backgrounds, facilitating SuperKEKB running at high luminosity. This upgrade program will possibly include the replacement of some readout electronics, the upgrade of some detector elements, and may also involve the substitution of entire detector sub-systems such as the vertex detector. The process has started with the submission of Expressions Of Interest that are being reviewed internally and will proceed towards the preparation of a Conceptual Design Report. This paper will cover the full range of proposed upgrade ideas and their development plans.
With proton-proton collisions about to restart at the Large Hadron Collider (LHC), the ATLAS detector will double the integrated luminosity the LHC accumulated in the ten previous years of operation. After this data-taking period the LHC will undergo an ambitious upgrade program to deliver an instantaneous luminosity of $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, allowing more than 3 ab$^{-1}$ of data to be collected at $\sqrt{s}=$14 TeV. This unprecedented data sample will allow ATLAS to perform several precision measurements to constrain the Standard Model (SM) in yet unexplored phase-space regions, in particular in the Higgs sector, which is only accessible at the LHC. The price to pay for such a rich data sample is upgrading the detector to cope with challenging experimental conditions, including huge radiation levels and pile-up about a factor of 5 higher than in present conditions. The ATLAS upgrade comprises a completely new all-silicon tracker with extended rapidity coverage that will replace the current inner tracker detector, and a redesigned trigger and data acquisition system for the calorimeters and muon systems allowing the implementation of a free-running readout system. Finally, a new subsystem, the High Granularity Timing Detector, will aid the track-vertex association in the forward region by incorporating timing information into the reconstructed tracks. A final ingredient, relevant to almost all measurements, is a precise determination of the delivered luminosity with systematic uncertainties below the percent level. This challenging task will be achieved by collecting the information from several detector systems using different and complementary techniques.
This presentation will describe the ongoing ATLAS detector upgrade status and the main results obtained with the prototypes, giving a synthetic, yet global, view of the whole upgrade project.
After more than 15 years of successful data taking, the Pierre Auger Observatory started a major upgrade, called AugerPrime, whose main aim is the collection of new information on the primary mass of ultrahigh-energy cosmic rays (UHECRs), as well as new insight into hadronic interactions at ultrahigh energies.
The upgrade program includes: the installation of plastic scintillator detectors (SSDs) on top of each water-Cherenkov detector (WCD) of the surface array; new electronics to process signals from the WCD and the SSD with higher sampling frequency and enhanced resolution in amplitude; an extension of the dynamic range of measurement through an additional small photomultiplier tube in the water-Cherenkov tank; an array of underground scintillator detectors to measure the muonic component of extensive air showers; the deployment of a radio antenna atop each WCD.
After presenting the motivations for upgrading the Observatory, an overview of the detector upgrade is provided, together with the expected performance and the improved physics sensitivity. The first results from the data collected with the already upgraded AugerPrime stations are presented and discussed.
The LHCb Vertex Detector (VELO) will be upgraded for LHC Run 3 to a pixel detector capable of 40 MHz full event readout and operation in very close proximity to the LHC beams. The thermal management of the system is provided by evaporative CO$_2$ circulating in micro-channels embedded within thin silicon plates. The VELO modules host 12 VeloPix ASICs with a total power consumption of up to 30 W. The implementation of an efficient and radiation hard cooling system is mandatory to remove the heat produced by the ASICs and keep the sensors below -20 °C to mitigate the radiation damage. The chosen solution is a cooling substrate composed of thin silicon plates with embedded micro-channels that allow the circulation of boiling CO$_2$. The direct advantages of this technique are the low and uniform material contribution, the same thermal expansion coefficient as that of the sensor-ASIC tiles, the radiation hardness of CO$_2$ and the high heat transfer capacity. The fluidic connector to the substrate must be leak tight in order to withstand the operational pressures and to be placed in vacuum. A flux-free connector soldering solution was developed which respects the planarity and the correct positioning required for the subsequent construction of a precise tracking system. The solder joint was tested for long term effects of creep and fatigue. Alternative solutions were pursued in parallel to the development of the micro-channels, based on 3D printed titanium tubes or on steel capillaries inside a ceramic substrate. However, the micro-channel evaporative cooling provides a better physics performance, due to the low material budget and the absence of CTE mismatch. This talk will cover the key points of the micro-channel R&D, including design optimisation, fabrication, robustness tests, cooling performance and the comparison with the backup options.
The ATHENA (A Totally Hermetic Electron-Nucleus Apparatus) detector is designed to deliver the full physics program of the Electron-Ion Collider (EIC) as set out for the EIC project approval (December 2019), providing the best possible acceptance, resolution, and particle identification capabilities. As an entirely new detector, ATHENA has been designed to accommodate all necessary subsystems without compromising on performance, while leaving room for future upgrades. Central to the proposal is a new, large-bore magnet with a maximum field strength of 3T. Particle tracking and vertex reconstruction are performed by a combination of next-generation silicon pixel sensors and state-of-the-art micro-pattern gas detectors. The combination of magnetic field strength and high resolution, low mass tracking technologies optimizes momentum resolution and vertex reconstruction. The large bore of the magnet allows for layered, complementary, state-of-the-art particle identification technologies. A novel hybrid imaging/sampling electromagnetic calorimeter is proposed for the barrel region of the detector, along with a high resolution crystal calorimeter in the electron-going direction. The hadron endcap has calorimetry, tracking and particle identification detectors that are optimized for high-momentum hadron identification and high-energy jet reconstruction. We have striven for hermeticity by closely integrating the far-forward and far-backward detectors with the central detector to achieve maximal kinematic coverage and to optimize the detection of particles at small scattering angles. Careful balance between choice of cutting-edge and mature detector technologies achieves the necessary detector performance while minimising risk and providing a cost-effective solution. Scalable modern technology choices assure optimum performance for multi-year operation from day one.
The ATHENA detector and its potential are reviewed in the context of the outcome of the EIC Call for Proposals process, which was not yet known at the time this abstract was submitted but will be announced at the beginning of March 2022.
The design of a feasible multi-TeV Muon Collider facility is the mandate of the international Design Study based at CERN and is being considered with great interest within the ongoing US Snowmass process. The physics potential of such a novel future collider is vast, ranging from discovery searches to precision measurements in a single experiment. Despite the machine-design challenges, it is possible to reach the uncharted territory of 10 TeV center-of-mass energy or higher while delivering luminosity up to a few 10^35 cm^-2 s^-1.
The experiment design, the detector technology choices and the reconstruction tools are strongly affected by the presence of the Beam Induced Background (BIB) due to muon-beam decay products interacting at the Machine Detector Interface (MDI).
Full simulation studies at $\sqrt{s}$ = 1.5 and 3 TeV, adopting the CLIC experiment technologies with important tracker modifications to cope with the BIB, are the starting point for optimizing the detector design and proposing future dedicated R&D. Present results and future steps will be discussed.
FASER is a new experiment designed to search for new light weakly-interacting long-lived particles (LLPs) and study high-energy neutrino interactions in the very forward region of the LHC collisions at CERN. The experimental apparatus is situated 480 m downstream of the ATLAS interaction point, aligned with the beam collision axis. The FASER detector includes four identical tracker stations constructed from silicon microstrip detectors. Three of the tracker stations form a tracking spectrometer and enable FASER to detect the decay products of LLPs decaying inside the apparatus, whereas the fourth station is used for the neutrino analysis. The spectrometer has been installed in the LHC complex since March 2021, and the fourth station was installed in November 2021. FASER will start physics data taking when the LHC resumes operation in early 2022. This talk describes the design, construction and testing of the tracking spectrometer, including the associated components such as the mechanics, readout electronics, power supplies and cooling system.
In the High Luminosity era, the Large Hadron Collider (LHC) will be upgraded to deliver instantaneous luminosities up to $5 \times 10^{34} \ \mathrm{cm^{-2}s^{-1}}$, five times more than the original design value. In order to maintain performance of the Compact Muon Solenoid (CMS) experiment under these conditions, ME0 is one of the three new muon sub-detectors being added, along with GE1/1 and GE2/1, which use the triple Gas Electron Multiplier (GEM) technology. ME0 is designed to cover the forward region of 2.0<$|\eta|$<2.8, thus improving muon reconstruction at high background rates by supplementing other overlapping muon subsystems up to $|\eta|$=2.4, while also extending the acceptance for the first time to $|\eta|$=2.8. The readout electronics for ME0 must be designed to deal with high data rates and be sufficiently radiation hard to operate so close to the beamline. The Optohybrid (OH) board for ME0, which reads out data from the front-end VFAT3b ASICs, has therefore been designed to operate without an FPGA (unlike GE1/1 and GE2/1) to ensure radiation hardness. It will use the radiation-hard CERN designed lpGBT ASIC and high bandwidth optical links at 10.24 Gb/s, thus also providing the benefit of high data rates. The backend system will be based on the ATCA standard. The design and development status of the readout electronics for ME0 will be presented, along with recent results from integration tests performed using the first prototypes.
The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, given the large dimensionality of the space of possible choices for geometry, detection technology, materials, and data-acquisition and information-extraction techniques, and the interdependence of the related parameters. On the other hand, enormous potential gains in performance over standard, "experience-driven" layouts are in principle within reach if an objective function fully aligned with the final goals of the instrument is maximized by a systematic search of the configuration space.
The stochastic nature of the involved quantum processes makes the modeling of these systems an intractable problem from a classical statistics point of view, yet the construction of a fully differentiable pipeline and the use of deep learning techniques may allow the simultaneous optimization of all design parameters.
In this presentation I will lay out the plans for the design of a modular and versatile modeling tool for the end-to-end optimization of complex instruments for particle physics experiments, as well as industrial and medical applications that share the detection of radiation as their basic ingredient, and show results of the study of a muon tomography use case to highlight the potential of this approach.
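As a minimal illustrative sketch of the gradient-based approach described above (not the actual modeling tool; a purely hypothetical one-parameter objective stands in for a differentiable surrogate of the simulation and reconstruction chain):

```python
# Minimal sketch: gradient-based optimization of a single hypothetical design
# parameter against a smooth surrogate objective. In a real tool the objective
# would be a differentiable (e.g. neural-network) surrogate of the full
# simulation + reconstruction chain, and gradients would come from autodiff.

def objective(thickness_cm):
    # Hypothetical surrogate: a resolution-like term that improves with absorber
    # thickness, plus a linear material-cost penalty. Both terms are illustrative.
    resolution_term = 1.0 / (1.0 + thickness_cm)
    cost_term = 0.02 * thickness_cm
    return resolution_term + cost_term

def numerical_gradient(f, x, eps=1e-5):
    # Finite-difference gradient, standing in for automatic differentiation.
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

thickness = 1.0          # starting design value (cm), arbitrary
learning_rate = 5.0
for step in range(200):  # plain gradient descent on the surrogate objective
    thickness -= learning_rate * numerical_gradient(objective, thickness)

print(f"optimized thickness ~ {thickness:.2f} cm, objective = {objective(thickness):.4f}")
```

In a realistic pipeline the scalar objective would encode the final physics goal (resolution, cost, or a combination), the design vector would span geometry, materials and readout choices, and all parameters would be updated simultaneously.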
The MoEDAL Apparatus for Penetrating Particles (MAPP) was recently approved by CERN's Research Board to take data during LHC's Run-3. This detector extends the physics reach of the MoEDAL detector, the LHC's first dedicated search experiment that was built to detect highly ionizing avatars of new physics. The MAPP detector will concentrate on the search for feebly (electromagnetically) interacting particles (FIPs) such as milli-charged particles. The MAPP detector will also provide sensitivity to very long-lived neutral and charged particles.
For the Phase 2 upgrade, the CMS experiment foresees the installation of a MIP Timing Detector (MTD) to assign a precise timestamp to every charged particle up to pseudorapidity |$\eta$| = 3, empowering the CMS detector with unique and new capabilities. The target timing resolution of MTD, 40 ps per track, will help reduce the challenging pile-up conditions expected at the High-Luminosity LHC down to current LHC levels. To match the requirements on radiation tolerance and occupancy, the forward region of the MTD, 1.6 < |$\eta$| < 3, will be equipped with silicon low-gain avalanche diodes (LGADs) coupled to the Endcap Timing Read Out Chip (ETROC), currently under development. We will present the current status of LGAD sensor testing, their qualification from beam tests, bench measurements, and the performance of the final ETROC design. Finally, we will discuss the challenges and the road map necessary to achieve timely installation of ETL.
Throughout ATLAS Run 2, the LUCID detector, which is located close to the beampipe on both sides of the interaction point, has been the reference luminosity detector, providing the online and offline luminosity measurement with high stability and a preliminary uncertainty of about 1.7%.
For the high-luminosity LHC, new beampipe equipment, more demanding luminosity precision requirements and different LHC beam conditions are expected. The detector will therefore be completely redesigned, exploiting both new and tried-and-tested technologies. Prototype detectors for the new running conditions and technologies have been developed and installed, and will be tested during the upcoming LHC Run 3. These consist of a PMT-based detector, which uses the quartz window as Cherenkov medium and is positioned further from the beampipe; a low-rate PMT detector, located in the shadow of one of the ATLAS shielding elements; and a fiber detector, in which fiber bundles are used as Cherenkov-light emitter and transmitter and are calibrated with an innovative hybrid LED and radioactive-source system. In these prototypes, the behavior of new Hamamatsu R1635 and R7459 PMTs will be evaluated.
In this contribution, the motivations for the detector redesign and a description of the LUCID upgrade are illustrated, as well as a detailed account of the preliminary tests performed with the prototypes, including PMT characterization and a study of the fiber degradation under irradiation.
In the context of the progress towards the High Luminosity Program of the Large Hadron Collider at CERN, the ATLAS and CMS experiments are boosting the preparation of their new environmentally friendly low temperature detector cooling systems. This paper will present a general overview of the progress in the development and construction of the future CO2 cooling systems for silicon detectors at ATLAS and CMS (trackers, calorimeters and timing layers), due for implementation during the 3rd Long Shutdown of the LHC (LS3). We will describe the selected technology for the primary chillers, based on an innovative transcritical cycle of R744 (CO2) as refrigerant, and the oil-free secondary “on detector” CO2 pumped loop, based on the evolution of the successful 2PACL concept. The different detector layers will profit from a homogenized infrastructure and will share multi-level redundancy, which we will describe in detail. The technical progress achieved by the EP-DT group at CERN over the last years will be discussed in view of the challenges and key solutions developed to cope with the unprecedented scale of the systems. We will finally present how mechanics- and controls-related problems have been addressed via a vigorous prototyping programme, aiming at cost- and resource-effective construction of the final systems, which is starting now.
The LUXE experiment aims at studying high-field QED in electron-laser and photon-laser interactions, with the 16.5 GeV electron beam of the European XFEL and a laser beam with a power of up to 350 TW. The experiment will measure the spectra of electrons, positrons and photons in the expected range of 10^-3 to 10^9 particles per bunch crossing at a 1 Hz rate, depending on the laser power and focus. These measurements have to be performed in the presence of a high background of low-energy radiation. To meet these challenges, for high-rate electron and photon fluxes the experiment will use Cherenkov radiation detectors, scintillator screens, sapphire sensors, as well as lead-glass monitors for backscattering off the beam dump. A four-layer silicon-pixel tracker and a compact electromagnetic tungsten calorimeter with GaAs sensors will be used to measure the positron spectra. The layout of the experiment and the expected performance under the harsh radiation conditions will be presented. Beam tests of the Cherenkov detector and the electromagnetic calorimeter were performed at DESY recently and results will be presented. The experiment has received stage-0 critical approval (CD0) from the DESY management and is in the process of preparing its technical design report (TDR). It is expected to start running in 2024/25.
With its increased number of proton-proton collisions per bunch crossing, track reconstruction at the High-Luminosity LHC (HL-LHC) is a complex endeavor. The Inner Tracker (ITk) is a silicon-only replacement of the current ATLAS tracking system as part of its Phase-II upgrade. It is specifically designed to handle the challenging conditions resulting from the increase in luminosity.
Having undergone a series of layout optimizations, the ITk pixel detector now features a reduced radius of its innermost barrel layer, among other changes. This contribution will discuss the evolution of the ITk design, alongside its impact on the tracking performance and some higher-level object reconstruction and identification.
To ensure stable data-taking conditions, it is critical to manage the rate at which ITk data is being read out. ITk information is read out for bunch crossings selected by the first level trigger with an expected rate of 1 MHz. Recent calculations on the expected data rates at the design frequencies will be presented, and handles to ensure rates stay below the bandwidth thresholds will be discussed.
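As a rough back-of-envelope sketch of how such a data-rate estimate scales (all numbers below are illustrative placeholders, not the ITk calculations referred to above, except for the 1 MHz trigger rate quoted in the text):

```python
# Back-of-envelope sketch of a per-module readout-rate estimate; all values other
# than the 1 MHz trigger rate are hypothetical placeholders for illustration only.

trigger_rate_hz = 1.0e6            # first-level trigger rate quoted in the text
hits_per_module_per_event = 150    # hypothetical average occupancy for one module
bits_per_hit = 30                  # hypothetical encoded hit size (address + charge)

data_rate_bps = trigger_rate_hz * hits_per_module_per_event * bits_per_hit
link_bandwidth_bps = 5.12e9        # hypothetical uplink bandwidth for the module

print(f"estimated module data rate: {data_rate_bps/1e9:.2f} Gb/s "
      f"({100*data_rate_bps/link_bandwidth_bps:.0f}% of an assumed "
      f"{link_bandwidth_bps/1e9:.2f} Gb/s link)")
```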
The CMS silicon strip tracker with its more than 15000 silicon modules and 200 m$^2$ of active silicon area will resume operation after 3 years of Long Shutdown 2 in the Spring of 2022. We present the status of the detector before the start of the LHC Run 3 data taking. The performance of the detector during the Run 2 data taking is presented, including the signal-to-noise ratio, fraction of bad components in the detector, hit efficiency, and single hit resolution. We discuss projections of the detector performance during Run 3. In addition, the change of detector parameters with increasing radiation damage is reviewed.
The tracking performance of the ATLAS detector relies critically on its 4-layer Pixel Detector. As the closest detector component to the interaction point, this detector is subjected to a significant amount of radiation over its lifetime. By the end of the LHC proton-proton collision Run 2 in 2018, the innermost layer, the IBL, consisting of planar and 3D pixel sensors, had received an integrated fluence of approximately Φ = 9 × 10¹⁴ 1 MeV n_eq/cm².
The ATLAS collaboration is continually evaluating the impact of radiation on the Pixel Detector. During the LHC Long Shutdown 2 (LS2), dedicated cosmic-ray data were taken for this purpose.
In this talk the key status and performance metrics of the ATLAS Pixel Detector are summarised, and the operational experience and requirements to ensure optimum data quality and data taking efficiency will be described, with special emphasis on the radiation damage experience. A quantitative analysis of charge collection, dE/dx, occupancy reduction with integrated luminosity, under-depletion effects, and annealing effects will be presented and discussed, as well as the operational issues and mitigation techniques adopted during LHC Run 2 and those foreseen for Run 3.
The Electromagnetic Calorimeter (ECAL) barrel of the CMS experiment at CERN is made of 36 Supermodules, each consisting of 1700 lead tungstate scintillating crystals. Each Supermodule weighs 2.7 tonnes and is a highly sensitive and fragile object. The 36 Supermodules, 18 on each side of the CMS barrel, were successfully inserted inside the Hadronic Calorimeter (HCAL) barrel of CMS in 2007 with a dedicated insertion tool called “Enfourneur”. The movements of the Enfourneur are controlled by a fine adjustment system for Supermodule insertion and extraction. During Long Shutdown 3, foreseen in 2026, the Enfourneur will be used to extract the Supermodules for their electronics upgrade in view of the future HL-LHC runs and to insert the Supermodules again in CMS.
Based on the past operations, modifications to the current Enfourneur have been implemented in order to improve and facilitate its functionality, in compliance with the up-to-date international standards concerning machinery safety and the applicable CERN internal rules. This work was carried out through several stages and iterations covering a complete design study, FEA simulation within the scope of Eurocode 3, installation of the modifications, and validation tests. The modified Enfourneur fulfills all the intended technical and safety requirements.
In this paper, a review of the Enfourneur functionalities, the applied modifications, and the performed validation tests will be presented.
At the Mainz Microtron MAMI, the technique of high-resolution spectroscopy of decay-pions in strangeness electroproduction has been established to extract $\Lambda$ ground state binding energies of light hyperfragments. In a first series of measurements, a $^9$Be target was used to determine the $^4_\Lambda$H binding energy with unprecedented precision in a momentum setting near 133 MeV/c. The current measurement employs a novel lithium target of 50 mm length and only 0.75 mm thickness to precisely determine the hypertriton binding energy in a 114 MeV/c setting.
The complex setup in the spectrometer hall comprises a pre-target beam-line chicane, a high-luminosity lithium target, two high-resolution pion spectrometers, one zero-degree forward spectrometer for strangeness tagging, one photon beam-line and one electron exit beam-line. The focusing magnetic spectrometers provide a high momentum resolution at the 10$^{-4}$ level over the momentum range of hypernuclear decay-pions, a large acceptance in both angle and momentum, good position and angular resolution in the scattering plane, an extended target acceptance, and a large angular range to optimally accommodate different beam-target angles. A thermal imaging system controls the target alignment with respect to the beam. A recalibration of the pion spectrometers will be possible thanks to the precise beam energy determination with the undulator light interference method.
The experiment aims for a statistical and systematic error of about 20 keV and will run during the summer of 2022.
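For context, the underlying two-body kinematics (standard relativistic relations, not a description of this experiment's specific analysis): a hypernucleus of mass $M_{\mathrm{hyp}}$ decaying at rest via ${}^{3}_{\Lambda}\mathrm{H} \to {}^{3}\mathrm{He} + \pi^-$ emits a monochromatic pion with momentum
$$p_\pi = \frac{1}{2M_{\mathrm{hyp}}}\sqrt{\left[M_{\mathrm{hyp}}^2-(M_{^3\mathrm{He}}+m_\pi)^2\right]\left[M_{\mathrm{hyp}}^2-(M_{^3\mathrm{He}}-m_\pi)^2\right]},$$
so a high-resolution measurement of $p_\pi$ determines $M_{\mathrm{hyp}}$ and hence the $\Lambda$ binding energy $B_\Lambda = M_d + M_\Lambda - M_{\mathrm{hyp}}$, the deuteron being the core nucleus in the hypertriton case.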
The Mu2e experiment at Fermilab will search for the Standard Model forbidden coherent conversion of a negative muon into an electron in the field of an aluminum nucleus. The calorimeter complements the tracking information, providing track-seeding and particle identification to help reconstruct the mono-energetic electron candidates. The calorimeter is based on 1348 undoped CsI crystals arranged in two donut-shaped, staggered matrix disks. Each crystal is read out by two custom made arrays of UV-extended Silicon Photomultipliers (SiPMs). The system is completed by a radioactive calibration source, a fast laser calibration system and the digitizing electronics.
The two SiPMs glued on a copper holder, two independent Front-End Electronics (FEE) boards coupled to each SiPM, and the guide for the calibration fiber needle form a Readout Unit (ROU). The ROU holder has a size of approximately 34$\times$34$\times$70 mm$^3$ and consists of a copper bulk structure on which the two SiPMs and readout boards are mounted, a fiber-needle centering tube, and a copper Faraday cage fastened with 4 custom stainless steel screws to a brazed copper cooling line. There are 674 ROUs per calorimeter disk, packed next to each other and vertically staggered. From each ROU, two SiPM cable bundles and a fiberglass fiber run towards different locations.
The very compact matrix of SiPM modules, the multiplicity of services they require, and the narrow access space after installation complicate any handling of these modules in the experimental hall that may be necessary for maintenance. This poster shows the conceptual mechanical design of a robotic arm composed of a gantry structure for xyz positioning over the desired ROU and equipped with custom-designed grippers to unscrew fasteners, un-clip connectors, unscrew the fiber needle and pick up the module, in a dedicated sequence.
FASER, or the Forward Search Experiment, is a new experiment at CERN designed to complement the LHC's ongoing physics program, extending its discovery potential to light and weakly-interacting particles that may be produced copiously at the LHC in the far-forward region. New particles targeted by FASER, such as long-lived dark photons or dark scalars, are characterized by a signature with two oppositely charged tracks or two photons in the multi-TeV range that emanate from a common vertex inside the detector. The experiment is composed of a silicon-strip tracking-based spectrometer using three dipole magnets with a 20-cm aperture, supplemented by four scintillator stations and an electromagnetic calorimeter. The full detector was successfully installed in March 2021 in an LHC side-tunnel 480 meters downstream from the interaction point in the ATLAS detector. FASER is planned to be operational for the upcoming LHC Run 3.
In 2021 a test beam campaign was carried out using one of the CERN SPS beam lines to characterize and calibrate a subset of the FASER detector in preparation for physics data taking. Placed in the test beam was a FASER tracking station composed of spare ATLAS SCT modules, followed by a simple preshower system consisting of two layers of tungsten and scintillator, and lastly a 3x2 stack of spare LHCb electromagnetic calorimeter modules. Beams of electrons with energies between 10 and 300 GeV, as well as high energy muons and pions, were scanned across the entire face of the setup. The performance of the detector components as measured in the test beam will be presented, including the calorimeter resolution, particle identification capabilities, and the efficiencies of the tracker and scintillators.
Lepton beam facilities at the intensity frontier open new opportunities for precision and BSM physics. Jefferson Lab currently hosts the CEBAF accelerator, which delivers a 12 GeV high power electron beam (up to 1 MW) to run up to four fixed target experiments in parallel. The comprehensive physics program includes nucleon and nuclear structure, hadron spectroscopy and physics beyond the SM. While the future Electron Ion Collider is being built at Brookhaven National Lab, JLab is considering an upgrade in intensity (up to 2.5 MW) and energy (up to 24 GeV). The upgraded machine will be able to extend the current electron-scattering program to unexplored kinematical regions and add new capabilities, including a polarized positron beam and high intensity secondary muon and neutrino beams. In this contribution I will give an overview of the physics opportunities, the status of the proposal, and plans for accelerator and detector upgrades.
Magnetic and electric dipole moments of fundamental particles provide powerful probes for physics within and beyond the Standard Model. For the case of short-lived particles, these have not been experimentally accessible to date due to the difficulties imposed by their short lifetimes. The R&D on bent crystals and the experimental techniques developed to enable such measurements are discussed. An experimental test at the insertion region IR3 of the LHC is considered for the next few years as proof of principle of a future fixed-target experiment for the measurement of charm baryon dipole moments. The layout of the experiment, the instrumentation to be developed, and the main goals of the test are also presented.
The BRAND experiment aims to search for Beyond Standard Model (BSM) physics via the measurement of exotic components of the weak interaction. For this purpose, eleven correlation coefficients of neutron beta decay will be measured simultaneously. Seven of them, H, L, N, R, S, U, and V, are sensitive to the transverse polarization of electrons from free neutron decay. Measurements of the coefficients H, L, S, U, and V have never been attempted experimentally before. The BRAND detection system is designed for the registration of the charged products of the beta decay of polarized, free neutrons. With the measurement of the 4-momenta of the electron and proton, the complete kinematics of the decay will be determined. Moreover, the transverse spin component of the electron will be measured via Mott scattering, which is a key factor in probing the BSM weak interaction.
The electron detection system features both tracking and energy measurement capability. It is also responsible for the determination of the electron spin orientation. For the 3D tracking, a low-density, helium-based drift chamber with a hexagonal cell structure, optimized for beta particles, is used. The Mott polarimeter is an integral part of the tracker. It is realized by a thin Pb foil acting as Mott scatterer installed inside the drift chamber and two plastic scintillators providing the trigger and the energy of the scattered electrons.
The challenging detection of the low-energy protons from the beta decay is performed with a system that involves the acceleration of the protons and their subsequent conversion into bunches of electrons. The electrons ejected (~25 keV) from a thin LiF layer are finally registered in a thin position-sensitive plastic scintillator read out with arrays of SiPMs.
After ten years of intense work, the two New Small Wheels (NSW) for the upgrade of the ATLAS Muon Spectrometer are now installed in the experiment and ready for final commissioning and data taking in LHC Run 3, starting in March 2022.
The NSW is the largest Phase-1 upgrade project of ATLAS. Its challenging completion and readiness for data taking are a remarkable achievement of the Collaboration.
The two wheels (10 meters in diameter) replace the first muon stations in the high-rapidity regions of ATLAS and are equipped with multiple layers of two completely new detector technologies: the small-strip Thin Gap Chambers (sTGC) and the Micromegas (MM). The latter belong to the family of Micro Pattern Gaseous Detectors (MPGD), used for the first time on such a large scale in HEP experiments. Each detector technology will cover more than 1200 m² of active area.
The new system is required to maintain the same level of efficiency and momentum resolution as the present detector in the higher background levels expected from the ongoing series of LHC luminosity upgrades, while keeping an acceptable muon trigger rate at the same muon momentum threshold.
In this presentation the motivation for the NSW upgrade and the steps from construction to assembly and surface commissioning will be reviewed, with particular focus on the main challenges, the adopted solutions and the measured performance of the system. First results from commissioning data and from the first signals recorded in the experiment will be reported.
The small sensitive area of commercial silicon photomultipliers (SiPMs) is the main limitation for their use in many experiments and applications where large detection areas, low cost and low power consumption are needed. Since capacitance, dark count rate and cost increase with the SiPM size, they are rarely found in sizes larger than 6 mm $\times$ 6 mm. Photo-Trap offers a low-cost solution to build SiPM pixels of a few cm$^2$ by combining a wavelength-shifting plastic (WLS), a dichroic filter and a standard commercial SiPM (not larger than 6 mm $\times$ 6 mm). Photo-Trap collects light over an area that can be $\sim10-100$ times larger than the area of a commercial SiPM, while keeping the noise, single-photoelectron resolution, power consumption and likely the cost of a single, small SiPM. We developed and characterized, through laboratory measurements and simulations, four different proof-of-concept pixels, the largest one being 40 mm $\times$ 40 mm. These pixels are sensitive in the near UV and achieve an optical gain that ranges from ~5 to ~15, depending on the areas of the WLS and the SiPM employed. In all pixels we measured a time resolution of ~3 ns or better. Photo-Trap could provide a solution for using SiPM technology in applications in which large collection areas, low cost and low noise are needed (e.g., optical wireless communication, free-space quantum key distribution, Cherenkov detectors). Here we present the results of our laboratory measurements and Geant4 simulations of the pixels, and we briefly discuss some of the potential applications of Photo-Trap.
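As a rough illustration using only the numbers quoted above (an order-of-magnitude estimate, not a result of the measurements): the geometric area ratio between the largest pixel and a 6 mm $\times$ 6 mm SiPM is $(40\times40)/(6\times6)\approx 44$, so the measured optical gain of ~5-15 corresponds to trapping and transferring onto the SiPM roughly 10-35% of the purely geometric collection-area limit.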
The Belle II Time-Of-Propagation (TOP) counter is a novel particle identification detector based on the combined measurement of a particle's time of flight, the propagation time of the Cherenkov photons it emits when crossing a thin fused silica bar, and their geometrical pattern. The Cherenkov radiation is internally reflected to an array of micro-channel-plate photomultipliers located at one end of the bars. The photomultiplier signal is digitized by a zero-deadtime waveform-sampling ASIC with a time resolution of 20 ps. Waveform features such as timing, amplitude and integral are extracted online using a Xilinx FPGA-ARM device. The single photo-electron time resolution of the readout chain is better than 100 ps.
Similar devices have been proposed, but TOP is the only operational detector of this kind at the moment.
We will describe the status of the detector hardware in its fourth year of operation, the stability and quality of the time calibration, and the particle identification performance, and we will present an outlook for possible upgrades.
We present the development of a single-photon detector encapsulating the analog and digital front-end electronics, and of the connected data acquisition electronics.
This 'hybrid' detector is composed of a vacuum tube, a transmission photocathode, a micro-channel plate stack and a pixelated CMOS read-out anode encapsulating the analog and digital front-end electronics.
The detector will be capable of sustaining a rate of up to $10^9$ photons per second with simultaneous measurement of position and time.
This assembly will be able to reach $5$-$10~\mathrm{\mu m}$ position resolution and timing resolution of $o(10)~\mathrm{ps}$.
The detector will be highly compact thanks to the encapsulated front-end electronics allowing local data processing and digitization.
A dual-micro-channel plate chevron stack operated at low gain ($<10^4$) and treated with atomic layer deposition, allows a lifetime of $>20~\mathrm{C/cm^2}$ accumulated charge.
The pixelated read-out anode used is based on the Timepix4 ASIC designed in the framework of the Medipix collaboration.
This ASIC integrates an array of $512\times448$ pixels distributed with a $55~\mathrm{\mu m}$ square pitch over a sensitive area of $6.94~\mathrm{cm}^2$.
It features a $50$-$70~\mathrm{e^{-}}$ equivalent noise charge and a maximum rate of $2.5~\mathrm{Ghits/s}$, and it allows time-stamping of the leading edge and measurement of the Time-over-Threshold (\textit{ToT}) for each pixel.
The pixel-cluster position combined with its ToT information allows a $5$-$10~\mathrm{\mu m}$ position resolution to be reached.
This information can also be used to correct for the leading-edge time-walk, achieving a timing resolution of $o(10)~\mathrm{ps}$.
An FPGA-based data acquisition board, placed far from the detector, will receive the detector hits using $16$ links operated at $10.24~\mathrm{Gbps}$.
The data acquisition board will decode the information and store the relevant data in a server for offline analysis.
This performance will allow significant advances in particle physics, life sciences, quantum optics and other emerging fields where the detection of single photons with excellent timing and position resolution is simultaneously required.
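As a minimal sketch of the kind of ToT-based leading-edge time-walk correction mentioned above (the calibration curve and all numbers are hypothetical; the actual per-pixel Timepix4 calibration is more involved):

```python
# Minimal sketch of a ToT-based time-walk correction (illustrative only; the
# real per-pixel calibration described in the text is more involved).
import numpy as np

def timewalk_correction(tot_ns, a=2.5, b=1.0, c=0.05):
    # Hypothetical calibration curve: small ToT (low charge) crosses the
    # leading-edge threshold later, so the correction decreases with ToT.
    return a / (tot_ns + b) + c   # correction in ns, parameters are placeholders

leading_edge_ns = np.array([10.40, 10.12, 10.05])   # raw leading-edge timestamps
tot_ns = np.array([5.0, 40.0, 200.0])               # measured Time-over-Threshold

corrected_ns = leading_edge_ns - timewalk_correction(tot_ns)
print(corrected_ns)   # hit times after subtracting the charge-dependent delay
```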
Large Area Picosecond Photodetectors (LAPPDs) are microchannel-plate-based photosensors featuring hundreds of square centimeters of sensitive area in a single package and timing resolution on the order of 50 ps for single-photon detection. However, LAPPDs currently do not exist in finely pixelated 2D readout configurations that, in addition to the high-resolution timing, would also provide the high spatial resolution required for Ring Imaging CHerenkov (RICH) detectors. One of the recent LAPPD models, the so-called Gen II LAPPD, provides the opportunity to overcome the lack of pixellation in a relatively straightforward way. The readout plane of the Gen II LAPPD is external to the sealed detector itself. It is a conventional, inexpensive, capacitively coupled printed circuit board (PCB) that can be laid out in a custom application-specific way for 1D or 2D sensitive-area pixellation. This allows for a much shorter readout-plane prototyping cycle and provides unprecedented flexibility in choosing an appropriate segmentation that can then be optimized for any detector needs in terms of pad size, orientation, and shape. We fully exploit this feature by designing and testing a variety of readout PCBs with conventional square pixels and interleaved anode designs.
Data acquired in the lab with LAPPD tile 97, provided by Incom, will be shown, using a laser system to probe the response of several interleaved and standard pixelated patterns. Results from a beam test at the Fermilab Test Beam Facility will be presented as well, including the world's first Cherenkov ring measurement with this type of photosensor. 2D spatial resolutions well below 1 mm will be demonstrated for several pad configurations. Future plans, including a direct demonstration of e/π/K/p separation by a proximity-focusing RICH detector prototype with an LAPPD as photosensor in a forthcoming beam test at Fermilab in summer 2022, will be discussed.
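As a minimal sketch of charge-weighted centroiding over capacitively coupled pads, which is one common way sub-pad spatial resolution is obtained from such readout boards (pad pitch and charge values below are hypothetical, not taken from the measurements described above):

```python
# Minimal sketch of charge-weighted centroiding across capacitively coupled pads
# (one common route to sub-pad spatial resolution; all numbers are hypothetical).
import numpy as np

pad_pitch_mm = 5.0
pad_centers_mm = np.arange(5) * pad_pitch_mm                 # 1D row of 5 pads
induced_charge = np.array([0.05, 0.60, 1.00, 0.40, 0.02])    # e.g. integrated pulse areas

hit_position_mm = np.sum(pad_centers_mm * induced_charge) / np.sum(induced_charge)
print(f"reconstructed position: {hit_position_mm:.2f} mm")
```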
The DSSC camera was developed for photon science applications in the energy range 0.25-6 keV at the European XFEL in Germany. The first 1-Megapixel DSSC camera is available and is successfully used for scientific experiments at the “Spectroscopy and Coherent Scattering” and the “Small Quantum System” instruments. The detector is currently the fastest existing 2D camera for soft X-rays.
The camera is based on Si-sensors and is composed of 1024×1024 pixels. 256 ASICs provide full parallel readout, comprising analog filtering, digitization and data storage. In order to cope with the demanding X-ray pulse time structure of the European XFEL, the DSSC provides a peak frame rate of 4.5MHz. The first megapixel camera is equipped with Miniaturized Silicon Drift Detector (MiniSDD) pixels. The intrinsic response of the pixels and the linear readout limit the dynamic range but allow one to achieve noise values of ~60 electrons r.m.s. at 4.5MHz frame rate.
The challenge of providing high dynamic range (~10^4 photons/pixel/pulse) and single photon detection simultaneously requires a non-linear system, which will be obtained with the DEPFET active pixels foreseen for the advanced version of the camera. This technology provides lower noise and a non-linear response at the sensor level.
We will present the architecture of the whole detector system with its key features. We will summarize the main experimental results obtained with the MiniSDD-based camera and give a short overview of the performed user experiments.
We will present for the first time the experimental results obtained with complete sub-modules of the DEPFET camera, which is in the final stages of assembly. Measurements obtained with full-size sensors and the complete readout electronics have shown a mean noise of ~15 el. rms at MHz frame rate and a dynamic range more than one order of magnitude higher than that of the MiniSDD camera.
Silicon Photo-Multipliers (SiPMs) have emerged as a compelling photo-sensor solution over the course of the last decade; thanks to their optimal operation at cryogenic temperatures and low radioactivity levels, they are the baseline photo-sensor solution for several next-generation dark matter detectors.
In particular, SiPMs are the baseline photo-sensors for the DarkSide-20k detector, where their high timing resolution and photon detection efficiency will allow excellent pulse shape discrimination between electron recoil and nuclear recoil events.
To establish experimentally the effect of the timing resolution on the pulse shape discrimination for DarkSide-20k, a detailed characterisation study of the SiPM Single Photon Timing Resolution (SPTR) was carried out at the Laboratori Nazionali del Gran Sasso (LNGS). More precisely, we studied the SiPM SPTR from room temperature down to 40 K as a function of the over-voltage and for different wavelengths. The electronic factors affecting the SPTR, bandwidth and rise time, were also investigated to identify the quantities that can potentially improve the detector timing resolution. The SPTR was also studied at different scales of SiPM integration in order to identify the key quantity that limits the final detector SPTR with increasing photodetector readout area.
Photon Science X-ray Sources (PSXSs) are divided between Synchrotron Rings (SRs), and Free Electron Lasers (FELs), having either low (<200Hz) or high (>=1MHz) repetition rate. SRs and low-repetition-rate FELs are usually served by imagers capable of continuous readout up to a few k-frame/s; while high-repetition-rate FELs need dedicated detectors capable of M-frame/s, but only for short imaging bursts.
However, PSXSs are also being upgraded: SRs evolve towards diffraction-limited operation, expected to increase brilliance by 2 orders of magnitude and asking for proportionally faster (continuously-operating) imagers. Several high-repetition-rate FELs are considering Continuous Wave operation, which will marginally reduce the repetition rate (to a few 100kHz), but will make short imaging bursts no longer an option.
A common need thus emerges: to bridge the gap and provide imagers able to operate continuously at a frame rate of a few 100 kframe/s.
Our collaboration is developing such an X-ray imager: our goals include continuous operation in excess of 100 kframe/s, single-photon sensitivity at 12 keV, a full well of 10k photons/pixel/image, and a 100 μm pixel pitch. A readout ASIC is being developed for this purpose, compatible with traditional silicon sensors (for our main energy range), high-Z sensors (for shorter wavelengths), and sensors with built-in amplification (for soft X-rays).
The ASIC architecture includes an adaptive-gain charge integrator (building on the experience of the AGIPD detector), a battery of on-chip ADCs (embedded in the pixel array) and a fast readout system (based on the principle of the GWT-CC developed by Nikhef for Timepix4). These stages are pipelined to allow for continuous writing and reading.
Exploratory prototypes of the ASIC circuit blocks have been designed in TSMC 65 nm technology and are presently under test.
We plan to develop the imager in two phases, first targeting the continuous readout scheme and the frame-rate goal, and later aiming at extending the dynamic range and reducing the noise.
A typical gamma camera for full-body Single Photon Emission Computed Tomography (SPECT) employs a lead collimator and a scintillator crystal of (∼ 50 x 40 x 10) cm$^3$. The crystal is coupled to an array of 50-100 photo-multiplier tubes (PMTs). The camera is shielded by a thick layer of lead, making it heavy and bulky. Its weight and size could be significantly reduced by replacing the PMTs with silicon photomultipliers (SiPMs). However, one would need a few thousand channels to fill a camera with SiPMs, even with the largest commercially available SiPMs of 6 x 6 mm$^2$. As a solution we propose using Large-Area SiPM Pixels (LASiPs) in SPECT, which are built by summing the individual currents of several SiPMs into a single output. We developed a LASiP prototype that sums 8 SiPMs of 6 x 6 mm$^2$ (pixel area ∼2.9 cm$^2$) and built a proof-of-concept micro-camera holding 4 of those prototypes coupled to a NaI(Tl) crystal. We measured an energy resolution of ∼ 11.6% at 140 keV and were able to reconstruct simple images of a $^{99m}$Tc capillary of 0.5 mm diameter with an intrinsic spatial resolution of ∼2 mm. The micro-camera was also simulated with Geant4 and validated with experimental measurements. To study the possibility of using (eventually larger) LASiPs in a full-body SPECT camera, we extended the simulations to a camera of 50 x 40 cm$^2$. We optimized the trigger and reconstruction settings for LASiPs summing 9, 16, 25 and 36 SiPMs (pixel area up to ∼13 cm$^2$). We found an intrinsic spatial resolution going from ∼2 to ∼6 mm, depending on the pixel size and the simulated LASiP noise (dark counts, crosstalk), and were able to reconstruct images of phantoms. At the conference we will present the results of this study.
Sensors based on GaAs are of particular interest as X-ray detectors since they have several advantages over Si, like a wider bandgap (lower dark current) and higher atomic number (higher detection efficiency).
In recent years we have developed and studied Separate Absorption and Multiplication Avalanche PhotoDiodes in GaAs, designed explicitly for synchrotron and free-electron-laser applications and featuring multiplication layers based on superlattices with staircase structures.
The effects of the doping level of the various layers and of the number of multiplication steps, as well as the role of the "separation layer", have been analyzed.
Here we present further studies concerning quantum efficiency and the possibility of working in a "non punch-through" regime.
Devices with different thicknesses of the absorption zone have been studied using synchrotron light, producing electrons in the absorption layer at variable distances from the multiplication zone, and the role of the interfaces in the loss of efficiency has been measured.
We then analyzed devices on which a δ p-doped sub-monolayer of carbon atoms is deposited, so as to achieve complete depletion of the multiplication region but not punch-through, and thin enough to allow most of the electrons produced in the absorption zone to enter the multiplication zone. In this way, the efficiency is high and the absorption zone is never subjected to a field strong enough to induce unwanted charge multiplication or band-to-band tunneling.
Photon science with extreme ultraviolet (EUV) to soft X-ray photons generated by state-of-the-art synchrotrons and FEL sources imposes an urgent need for suitable photon imaging detectors. Requirements on such EUV detectors include high quantum efficiency, high frame rates, a very large dynamic range, single-photon sensitivity with low probability of false positives, small pixel pitch and (multi-)megapixel formats. Such characteristics can be found in a few state-of-the-art commercial detectors based on scientific CMOS (sCMOS), which have been recently developed for applications in the visible light regime. In particular, back-thinned sCMOS sensors are suited for experiments in the photon energy range between 30 eV and 2000 eV, which require vacuum operation.
In this contribution we describe the adaptation of a commercial back-illuminated sCMOS imager for soft X-rays in the energy range from 35 eV to 2000 eV. The sCMOS imager comprises 2048 x 2048 pixels with a pixel size of 6.5 µm x 6.5 µm. The sensor exhibits a full well capacity of 48 000 e- and a readout noise of 1.9 e- (rms), with a dynamic range of 88 dB. The integration time can be adjusted between 10 µs and 2 s. The maximum frame rate is 48 fps for the full frame. Vacuum compatibility has been obtained by sealing the carrier board of the sensor, which constitutes the barrier between vacuum and normal atmosphere, allowing the entire readout and trigger electronics to be kept in air. At the moment a KF flange is utilized to attach the camera, and hence the sensor, to the experimental vacuum chamber. Here we present the first measurements, showing a very high quantum efficiency for energies between 100 eV and 2000 eV. Soft X-ray (spectral) imaging capabilities with single photon resolution have been assessed.
TRICK is a project funded by the INFN CSNV Young grant 2021. It will deploy an innovative 5D technique to provide incoming particles' 3D position, time, and ID information. The proposed idea is based on the well-known technology of a GEM-based TPC together with a conventional aerogel proximity-focusing RICH in a single box. Both the TPC and RICH parts will be read out simultaneously and instrumented by the same TIGER ASIC, developed for the BESIII CGEM-IT detector. By combining information from both systems, the TRICK technique will improve on the performance of the individual instruments: precise time information will help the extraction of the TPC position, while the tracking will help the ring identification by providing the expected ring center, also in a magnetic field.
The TRICK-box prototype, instrumented with triple-GEM and Hamamatsu H12700 MA-PMT, aims to reach a spatial resolution of 100 microns, time resolution below 1 ns, and 3 sigma separation for pi/K up to 4 GeV.
In this poster the project will be presented, with a focus on the initial studies with the prototype, the preparation of the first cosmic stand, and the next steps.
Silicon Photo-Multipliers (SiPMs) are widely used as light detectors for the new generation of experiments dedicated to high energy physics. For this reason, we tested several recent devices from different manufacturers: Hamamatsu 13xxx and 14xxx series; Ketek; SensL (ON Semiconductor); AdvanSiD; Broadcom. Particular emphasis has been put on measurements of breakdown voltage, dark counts, dark current and gain, performed at different temperatures by means of a climatic chamber (F.lli Galli model Genviro-030LC) with a temperature range from -60 °C to +60 °C housing the SiPM under test, and of a cryo-pump with a cold head allowing the temperature to be scanned from 300 K down to 50 K. In this way it was also possible to evaluate the temperature coefficient of all models. Moreover, all devices have been successfully tested in a liquid nitrogen bath (77 K), having in mind possible applications to detectors for neutrino and dark matter searches using liquefied noble gases such as xenon and argon as a target medium. In this case, the thermal component of the noise decreases at low temperature, thus allowing the use of the devices at higher overvoltage.
Organometal halide perovskite (OMHP) semiconductors are promising candidates for fast, sensitive and large-area photodetectors. A gain in OMHP-based detectors has been observed in several architectures, but usually in association with a slow time response. A model describing the underlying mechanism is still missing, or at least incomplete. In this talk the state of the art of photo-detectors based on OMHP perovskites will be presented, as well as the activities carried out within the PEROV experiment. One goal of the PEROV project is to find out whether OMHPs exhibit an internal avalanche multiplication. Several CH3PbBr3 perovskite based devices have been developed, fabricated and characterized: film-based devices with 300 nm thickness, and devices based on high-quality single crystals grown with seeding techniques or with unconventional lithographic techniques, with thicknesses from microns to millimetres.
A 7.25 x 12.04 cm^2 Silicon Drift Detector (SDD) has been developed for the enhanced X-Ray Timing and Polarimetry (eXTP) mission of the Chinese Academy of Science, with a large contribution by a European consortium inherited from the ESA-M3 LOFT mission study. In the frame of the project X-Ray Observatories (XRO), active in the National Scientific Commission 2 of the INFN, we report the details of the qualification procedure to select from the mass production the 640 detectors that will equip the Large Area Detector (the eXTP instrument dedicated to the X-ray spectroscopy in the range 2-30 keV), with energy resolution below 240 eV FWHM at 6 keV during the entire mission duration of at least 5 years. This stringent requirement dictates the need to thoroughly verify the characteristics of each single detector before integration in the final layout. We describe the dedicated testing facilities that have been developed. We report on the detector selection criteria and test results obtained in the pre-series production.
ABALONE is a new type of photosensor produced by PhotonLab, designed for cost-effective mass production, robustness and high performance. This modern technology provides sensitivity to visible and UV light, exceptional radio-purity and excellent detection performance in terms of intrinsic gain, afterpulsing rate, timing resolution and single-photon sensitivity. This hybrid photosensor, which works as a light intensifier, is based on the in-vacuum acceleration of photoelectrons generated in a traditional photocathode and guided towards a window of scintillating material that can be read out from the outside through a Silicon PhotoMultiplier (SiPM). In this contribution we present the extensive characterization of ABALONE as a possible photosensor for future astroparticle physics experiments.
The Mu3e experiment searches for the rare lepton-flavour-violating decay μ+ → e+e+e− and aims at an ultimate sensitivity of 10^−16 on its branching fraction, four orders of magnitude better than the current limit B(μ+ → e+e+e−) < 10^−12. The experiment will be hosted at the Paul Scherrer Institute (Villigen, Switzerland), which delivers the most intense low-momentum continuous muon beam in the world (up to a few ×10^8 μ/s).
To be sensitive to the signal at such a level, to reject the background, and to run at the beam-intensity frontier, excellent detector performance is needed.
We will report on the R&D performed, presenting some of the prototypes of the scintillating fiber detector and defining the path towards the final detector. These studies have been supported by detailed Monte Carlo simulations from the fiber, through the photosensors, up to the electronics and the data acquisition. The fiber detector is designed to detect minimum ionizing particles (m.i.p.) with a minimal amount of material (detector thickness below 0.4% of a radiation length X0), full detection efficiency, timing resolutions well below 1 ns, and a spatial resolution of ≈ 100 μm. While expertise in scintillating fibers and SiPMs has been around for a while, this detector will be the first to match all these demands simultaneously. A very high detection efficiency (≥ 99%) and timing resolutions < 500 ps have been measured. The optical cross-talk between aluminum-coated fibers has been kept at a negligible level (< 1%), for which spatial resolutions < 50 μm are foreseen. The very good agreement between data and Monte Carlo simulation predictions will also be presented and discussed.
TORCH is a large-area time-of-flight (ToF) detector proposed for the Upgrade II of the LHCb experiment. The detector will provide charged-hadron identification over the 2-20 GeV/c range, extending LHCb's particle identification to lower momentum. To achieve this level of performance, a 15 ps timing resolution per track is required, given a 10 m flight distance from the LHC interaction point. TORCH utilizes a 1 cm thick quartz plate which, on the passage of a charged particle, acts as a source of prompt Cherenkov photons. The photons are propagated to the periphery of the plate via total internal reflection, where they are focused by a cylindrical mirrored surface onto an array of micro-channel-plate photomultiplier tubes (MCPs). The MCPs record the positions and arrival times of the Cherenkov photons, which allows a correction for chromatic dispersion in the quartz. The MCPs are custom-developed with an industrial partner and give a 1 mrad precision on the photon trajectory; the anode of each MCP is finely segmented to give an effective granularity of 8 x 128 pixels over a 53 x 53 mm^2 square area. The MCP single-photon time resolution has been measured at around 50 ps in the laboratory, including the contribution from the customised TORCH electronics-readout system. A TORCH prototype module with a 125 x 66 x 1 cm^3 fused-silica radiator plate and housing two MCP-PMTs has been tested in an 8 GeV/c CERN test beam. Single-photon time resolutions between 70 and 100 ps have been achieved, depending on the beam position in the radiator. The measured photon yields also agree with expectations. The performance approaches the ToF design goal for LHCb, considering that a fully instrumented TORCH module will detect around 30 photons. Finally, the future TORCH R&D plans and the expected particle-identification performance at LHCb will be presented.
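As a rough cross-check of the quoted numbers (assuming the ~30 detected photons give independent, uncorrelated time measurements and neglecting common-mode contributions such as the track time reference), the per-track resolution scales as
$$\sigma_{\mathrm{track}} \approx \frac{\sigma_{\gamma}}{\sqrt{N_{\gamma}}} \approx \frac{70\text{--}100~\mathrm{ps}}{\sqrt{30}} \approx 13\text{--}18~\mathrm{ps},$$
which is indeed in the neighbourhood of the 15 ps per-track design goal.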
X-ray photon science at free-electron lasers (FEL) and synchrotron light sources supports diverse research spanning from medicine to solid-state physics. Detectors that are able to cope with the brilliance, repetition rate, and pulse duration of these X-ray sources are in high demand. The hybrid silicon pixel detector JUNGFRAU provides low noise and, simultaneously, high dynamic range, fast readout, and high position resolution. It is optimized for a photon energy range between $2$ keV and $16$ keV and can resolve single photons down to $\sim 1.5$ keV with a dynamic range of $10^4$ photons at $12$ keV. For this purpose, JUNGFRAU combines a charge-integrating architecture and three linear, dynamically switching gains per pixel. JUNGFRAU systems of various sizes (i.e. up to 16 megapixels to date) are operated at FEL and synchrotron facilities worldwide. The success of these systems promotes ongoing research to further improve the JUNGFRAU detector and make it applicable for photon science at the low and high-energy ends of the X-ray spectrum. For instance, the combination of the low-noise JUNGFRAU readout ASIC with inverse LGAD (iLGAD) sensors with thin entrance windows is expected to extend the sensitive range of the system down to $250$ eV.
In this contribution, we present the state of the art of current JUNGFRAU systems and discuss recent improvements. We cover measurement results of prototypes for low-energy X-ray detection and present an outlook on possible combinations of JUNGFRAU with high-Z sensor materials to facilitate experiments with high-energy X-rays.
A novel imaging technique for thermal neutrons using a fast optical camera is presented. Thermal neutrons react with Lithium-6 to produce a 2.73 MeV triton and a 2.05 MeV alpha particle, which in turn interact in a thin layer of LYSO crystal scintillator to produce a localized flash of light. These photons are directed by a pair of lenses onto a micro-channel-plate intensifier; the fast optical camera, TPX3CAM, is connected to the intensifier output. The setup is shown in figure 1 (attached).
The results from the camera are reconstructed with a custom algorithm. Each reconstructed neutron event is made up of several sub-clusters; each sub-cluster represents a group of photons produced by the intensifier from a single photon at its input. A neutron hit is estimated to produce 3-6 photons at the intensifier input. The background in this experiment consists of low-energy beta particles and X-rays, which produce single photons. Figure 2 (attached) shows 3 groups of photons that are relatively close to each other both spatially and temporally; such an event is identified as the result of a neutron hit.
In conclusion, this new optical neutron imaging technique allows remote, long-distance detection away from the radiation source and can also magnify the field of view of the detector by using an appropriate set of focusing lenses.
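As a rough illustration of the two-stage reconstruction described above (not the authors' code; the clustering method, data layout and thresholds are all assumptions chosen for readability), raw camera hits can first be grouped into sub-clusters and neutron candidates then selected as groups of several sub-clusters close in space and time:

```python
# Illustrative sketch only (not the authors' algorithm): two-stage space-time
# clustering of fast-camera hits. Data layout, metric and thresholds are assumed.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster


def cluster(points, d_max):
    """Single-linkage clustering; returns one label per point."""
    if len(points) < 2:
        return np.ones(len(points), dtype=int)
    return fcluster(linkage(points, method="single"), t=d_max, criterion="distance")


def neutron_candidates(hits, d_sub=5.0, dt_sub=50.0, d_evt=30.0, dt_evt=500.0,
                       min_subclusters=3):
    """hits: structured array with fields x, y [pixels] and t [ns]."""
    # Stage 1: group raw hits into sub-clusters (one per photon at the intensifier).
    pts = np.column_stack([hits["x"], hits["y"], hits["t"] * d_sub / dt_sub])
    sub_labels = cluster(pts, d_sub)
    centroids = np.array([[hits["x"][sub_labels == k].mean(),
                           hits["y"][sub_labels == k].mean(),
                           hits["t"][sub_labels == k].mean()]
                          for k in np.unique(sub_labels)])
    # Stage 2: group sub-cluster centroids; >= 3 nearby sub-clusters -> neutron hit,
    # while isolated single photons (betas, X-rays) are rejected.
    pts2 = np.column_stack([centroids[:, 0], centroids[:, 1],
                            centroids[:, 2] * d_evt / dt_evt])
    evt_labels = cluster(pts2, d_evt)
    return [centroids[evt_labels == k] for k in np.unique(evt_labels)
            if np.count_nonzero(evt_labels == k) >= min_subclusters]
```

Each returned group of centroids can then be reduced to a single neutron position and time, for example by averaging.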
Due to their single-photon sensitivity and timing resolution, SiPMs are now the baseline solution for a large fraction of noble-liquid experiments and for medical imaging such as positron emission tomography, among others. Following this trend, digital SiPMs, or Photon-to-Digital Converters (PDCs), are foreseen as the next generation of photon sensors. PDCs and SiPMs are both based on an array of Single-Photon Avalanche Diodes (SPADs), with the major difference that a CMOS circuit is used to quench and read out each SPAD in the former, compared to a passive resistor and an analog sum of all SPADs in the latter. PDCs offer major advantages over SiPMs due to the one-to-one SPAD-CMOS readout coupling. It enables, among other things, control of afterpulsing, improved timing resolution, the disabling of noisy SPADs, and single-photon counting with a dynamic range equal to the number of SPADs in the array.
Our team and collaborators are working to develop 3D PDCs, where a SPAD array is vertically integrated on a CMOS readout circuit with digital signal processing. In this contribution, the SPAD array developed by U. of Sherbrooke and Teledyne DALSA Semiconductor Inc (Bromont, Canada) will be presented in public for the first time. The structure of the SPAD array will be detailed. Measurements and wafer-level test setups will be presented and discussed.
In the forward end-cap of the Belle II spectrometer, the proximity-focusing Ring Imaging Cherenkov counter with an aerogel radiator (ARICH) has been in operation since 2018. The single Cherenkov photons emitted from a double-layer aerogel radiator are detected by 420 Hamamatsu hybrid avalanche photodetectors (HAPDs) with 144 channels each, working in a perpendicular 1.5 T magnetic field. The sensor signals are digitized by a custom front-end ASIC and sent to the experiment acquisition system. The detector has shown very reliable behaviour over several years of operation: 94% of channels are fully operational and there has not been any significant degradation since the beginning. Although each HAPD requires six different high voltages for operation, the intelligent slow-control and monitoring system supports the ARICH functions, and the ARICH runs almost without any human intervention; during the last run period, for example, there was no significant downtime due to ARICH. Precise alignment and calibration of the detector and the quality assessment of the components before installation contributed to these capabilities. The particle-identification performance measured with $D^{*\pm}$ decays meets the design expectations: the kaon identification efficiency is above 96% in the wide momentum range from 0.5 to 4 GeV/c at a relatively low pion misidentification probability of 10%. The ARICH was designed to operate up to the nominal design luminosity of $8\times10^{35}$ cm$^{-2}$ s$^{-1}$. Until then, the leakage current of the HAPDs will increase, causing a degradation of the HAPD performance, and single-event upsets will affect the electronics. We are implementing several new mitigation measures to ensure the ARICH functionality. For operation beyond the design luminosity, we are studying different possible HAPD replacements: silicon photomultipliers and large-area picosecond photon detectors.
We report on the calibration and performance of the TOF-Wall detector of the FOOT (FragmentatiOn Of Target) experiment. The experiment aims at measuring the fragmentation cross sections of 200-800 MeV/u carbon and oxygen ions impinging onto carbon and polyethylene targets for applications in hadrontherapy and radioprotection in space. The TOF system of the experiment is composed of a thin plastic scintillator, positioned in the upstream region of the experiment, and the TOF-Wall. This system allows the identification of the charge of each fragment by measuring the energy deposited in the TOF-Wall and the time of flight (TOF) between the two detectors. The TOF-Wall is composed of 20 + 20 plastic scintillator bars arranged in two orthogonal layers, coupled to silicon photomultipliers and covering an active area of 40 cm x 40 cm. The analog signals are digitized by the WaveDAQ system. The TOF-Wall detector was characterized by scanning its surface with 400 MeV/u oxygen ions and by detecting the fragments produced by a carbon ion beam onto a graphite target. The results for the TOF-Wall timing performance with different impinging particles and the energy calibration of the detector will be reported in this contribution. A time resolution of 41 ps was obtained between the two layers of the TOF-Wall using 200 MeV/u carbon ions, corresponding to a contribution of about 20 ps to the time resolution of the TOF system. The energy resolution achieved with carbon ions was 4-5% when both layers are considered. The fragments produced by the C-C interactions were used to study the saturation of the plastic scintillator bars as a function of the released energy and of the impinging ion. The uniformity of the performance over the whole TOF-Wall area was also analyzed and will be discussed.
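For orientation, the two quoted numbers are related by standard error propagation if one assumes the two layers have equal and independent resolutions:
$$\sigma_{\Delta t}=\sqrt{2}\,\sigma_{\mathrm{layer}}\;\Rightarrow\;\sigma_{\mathrm{layer}}\approx\frac{41~\mathrm{ps}}{\sqrt{2}}\approx 29~\mathrm{ps},\qquad \sigma_{\bar t}=\frac{\sigma_{\mathrm{layer}}}{\sqrt{2}}=\frac{\sigma_{\Delta t}}{2}\approx 20~\mathrm{ps},$$
where $\sigma_{\Delta t}$ is the resolution of the time difference between the two layers and $\sigma_{\bar t}$ that of their average, which is the quantity entering the TOF measurement.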
Single-photon detectors are a cornerstone of many scientific experiments. While some require precise timing resolution under 100 ps, others need components that are radiopure and operational at noble-liquid temperatures. To this end, the team at Université de Sherbrooke and their collaborators have been working on the development of a photodetection module. This module comprises Photon-to-Digital Converters (PDCs) – arrays of Single-Photon Avalanche Diodes (SPADs) vertically integrated on a CMOS readout circuit with digital signal processing, where the photon-to-bit conversion is performed. To match the coefficient of thermal expansion of silicon-based PDCs in cryogenic experiments, a silicon interposer was implemented. To manage and read out the PDCs, a tile controller was implemented and tested with an FPGA, and we are now designing a radiopure custom integrated circuit to fulfil this purpose. Finally, to provide low-power and radiopure communication, R&D on a silicon-photonics-based interface is ongoing, with devices currently being characterized. In this contribution, an overview of these key components with their most recent results will be presented. This includes the SPAD array characterization, a demonstration of a photodetection module prototype converting a pulse of light into a digital signal, interposer DC and RF characterizations, and the modulation and performance of the silicon photonic communication interface at cryogenic temperature.
MYTHEN III is the latest generation of single photon-counting strip detectors developed by the PSD detector group at the Paul Scherrer Institut. It presents the same geometry as its predecessor MYTHEN II (50 μm pitch, 8 mm long strips, 6.4 cm wide modules), but its performance has been greatly improved, in terms of noise, threshold dispersion, count rate capability and frame rate.
The new readout chip, developed in 110 nm UMC technology, contains 128 readout channels. Every channel features a double-polarity preamplifier and a shaper with variable gain and shaping time. Three discriminators, each with a dedicated threshold, trim-bit set and gate signal, process the shaped signal independently. The outputs of the three discriminators feed a counting logic that, according to the selected mode of operation, generates the increment signals for the three subsequent 24-bit counters.
The various modes of operation of the chip open up new applications: the three fully independent counters per strip enable energy binning (sketched after this abstract) and time-resolved pump-probe applications, and can also push the count-rate capability above 20 MHz per strip with 90% efficiency, thanks to the possibility of counting piled-up photons. Additionally, we implemented an innovative digital communication logic between channels, allowing charge-sharing suppression and improving the spatial resolution beyond the strip pitch, as a first demonstration of on-chip interpolation in a single-photon-counting detector.
A full MYTHEN III detector has been commissioned, consisting of 48 modules with 10 chips each and covering 120°; it recently started user operation at the powder diffraction end station of the Swiss Light Source.
We will present the architecture of the new detector, starting from the readout chip, and its latest characterization results, showing its superior performance with respect to MYTHEN II. Particular emphasis will be given to the many unpublished results of the novel modes of operation.
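The energy-binning mode mentioned above can be pictured with a simple sketch (illustrative only, not the chip firmware; thresholds and photon energies are arbitrary example values): each of the three counters counts photons above its own threshold, so counter differences recover two energy bins plus an overflow bin.

```python
# Illustrative sketch of three-threshold energy binning (not the MYTHEN III
# firmware): each counter registers photons above its own threshold, and
# counter differences recover the bin contents offline.
import numpy as np

E1, E2, E3 = 5.0, 10.0, 15.0                    # example thresholds [keV]
photons = np.random.uniform(3.0, 20.0, 10_000)  # toy photon energies [keV]

n1, n2, n3 = (int(np.count_nonzero(photons > t)) for t in (E1, E2, E3))

bins = {"E1-E2": n1 - n2, "E2-E3": n2 - n3, ">E3": n3}
print(bins)
```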
Collider experiments such as the upcoming Phase-II LHC or the Future Circular Collider (FCC) will increase the demands on the detectors used for tracking. At the FCC, sensors will not only face fluences of up to $1\times10^{17}~n_\mathrm{eq}/\mathrm{cm}^2$, but also high pile-up scenarios. Therefore, sensors will be required that not only have a good spatial resolution and a very high radiation hardness, but also an excellent time resolution of 5 ps. Currently, Low Gain Avalanche Diodes (LGADs), which have an additional gain layer to achieve fast signals through charge multiplication, are the prime candidate when it comes to timing, reaching resolutions below 30 ps. However, their radiation hardness is not sufficient for future colliders. As an alternative, 3D sensors are an interesting research area, as they are known to be extremely radiation hard. In 3D sensors, columns are etched into the sensor from the top (junction columns) and from the back (ohmic columns), resulting in short drift distances, low depletion voltages and high electric fields and, therefore, fast signals.
In this study, the time resolution of both LGADs and 3D sensors was investigated with MIP-like signals generated by a beta source, as well as with measurements using an infrared laser. We will demonstrate that 3D sensors can achieve time resolutions competitive with LGADs.
Transient Current Technique (TCT) timing measurements allow a position-resolved study of the time resolution. This is especially interesting for the 3D sensors, where the time-walk component arising from the more complex electric-field structure strongly influences the time resolution. We will show that this can be observed in the position-dependent time-resolution measurements. Additionally, the timing performance of 3D sensors before and after irradiation with reactor neutrons will be demonstrated.
The ATLAS experiment is currently preparing for the High Luminosity Upgrade of the LHC.
An all-silicon Inner Tracker (ITk), which will replace the current ATLAS Inner Detector, is under development, with a pixel detector surrounded by a strip detector. The strip system consists of 4 barrel layers and 6 end-cap (EC) disks. After the completion of final design reviews in key areas, such as Sensors, Modules, Front-End electronics and ASICs, and a successful large-scale prototyping program, the ITk Strip system has started the pre-production phase. We present an overview of the Strip System and highlight the final design choices for sensors, module designs and ASICs. We will summarise the results achieved during prototyping and the current status of pre-production on various detector components, with an emphasis on QA and QC procedures and on the preparation for the production phase.
The LHC machine is planning an upgrade program which will smoothly bring the luminosity to about $5-7.5\times10^{34}$cm$^{-2}$s$^{-1}$, to possibly reach an integrated luminosity of $3000-4500\;$fb$^{-1}$ by the end of 2039. This High Luminosity LHC scenario, HL-LHC, will require an upgrade program of the LHC detectors known as Phase-2 upgrade. The current CMS Outer Tracker, already running beyond design specifications, and CMS Phase-1 Pixel Detector will not be able to survive HL-LHC radiation conditions and CMS will need completely new devices, in order to fully exploit the highly demanding conditions and the delivered luminosity.
The Phase-2 Outer Tracker (OT) is designed to ensure at least the same performance as the Phase-1 detector, in terms of tracking and vertexing capabilities, at the high pile-up (100-200 collisions per bunch crossing) expected at the HL-LHC. The Phase-2 OT will have higher radiation tolerance, granularity and track-separation power with respect to the Phase-1. Moreover, the Phase-2 OT will also have trigger capabilities, since tracking information will be used at the L1 trigger stage. In order to achieve such capabilities, the Phase-2 OT must be able to perform data reduction directly in the front-end electronics. This has been implemented through the $p_{T}$-discriminating module concept: each OT module will be composed of two closely spaced silicon sensors read out by a single ASIC, which correlates data from both sensors and selects track "stubs". These stubs will then be used to perform the tracking for the L1 trigger.
This report focuses on the replacement of the CMS Outer Tracker system, describing the new layout and technological choices together with some highlights of the research and development activities.
The ALICE collaboration is pursuing the development of a novel and considerably improved vertexing detector, called ITS3, to replace the three innermost layers of the Inner Tracking System during the LHC Long Shutdown 3. The primary goals are to reduce the material budget to the unprecedented value of 0.05% X_{0} per layer and to place the first layer at a radial distance of 18 mm from the interaction point. These features will improve the impact-parameter resolution by a factor of two over all momenta and drastically enhance the tracking efficiency at low transverse momentum.
The new detector will consist of truly cylindrical layers. Each half-cylinder is based on curved wafer-scale monolithic pixel sensors. The bending radii are 18, 24 and 30 mm, and the length of the sensors in the beam direction is 27 cm.
The sensors will be produced in a commercial 65 nm CMOS imaging technology using a recent technique called stitching, which allows the manufacture of chips with dimensions up to 27 cm x 9 cm on silicon wafers of 300 mm diameter. The chips will be thinned down to 50 um or below.
The ITS3 concept foresees cooling by air flow, ultra-light carbon-foam support elements and no flexible printed circuits in the active area. This imposes a power-density limit of 20 mW/cm^{2} on the sensor and requires the supply distribution and the data transfer to run over the entire sensor towards circuits located at the short edges of the chip.
This contribution will summarise the status of the microelectronic developments and present selected results from the characterisation of the first prototype chips. Furthermore, it will describe the ongoing efforts on the design of a first wafer-scale stitched sensor prototype, the MOSS (Monolithic Stitched Sensor) chip.
Major advances in silicon pixel detectors with outstanding timing performance have recently attracted significant attention in the community. In this work we present and discuss the use of state-of-the-art Geiger-mode APDs, also known as single-photon avalanche diodes (SPADs), for the detection of minimum ionizing particles (MIPs) with best-in-class timing resolution. The SPADs were implemented in a standard CMOS technology and integrated with on-chip quenching and recharge circuitry. Two devices in coincidence allowed the time-of-flight of 180 GeV/c pions to be measured with a coincidence time resolution of 22 ps FWHM (9.5 ps Gaussian sigma). This result paves the way for a new generation of inexpensive plug-and-play trackers with extremely high spatial and timing resolution, intended for use in beam-test facilities.
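For reference, the quoted figures follow the usual Gaussian relation between FWHM and sigma; assuming in addition that the two devices contribute equally and independently, the single-device resolution can be estimated as
$$\sigma=\frac{\mathrm{FWHM}}{2\sqrt{2\ln 2}}\approx\frac{22~\mathrm{ps}}{2.355}\approx 9.3~\mathrm{ps},\qquad \sigma_{\mathrm{single}}\approx\frac{\sigma}{\sqrt{2}}\approx 6.6~\mathrm{ps},$$
where the small difference with respect to the quoted 9.5 ps is compatible with rounding or residual non-Gaussian tails.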
Depleted Monolithic Active Pixel Sensors (DMAPS) are of great interest for the HL-LHC and beyond for the replacement of the pixel trackers at the innermost radii of HEP experiments, where maximum performance and cost effectiveness are required. They aim to provide high granularity and a low material budget over large surfaces, together with ease of integration. This research includes the development of radiation-hard DMAPS with a small collection electrode in the TowerJazz 180 nm CMOS imaging technology with asynchronous read-out (the MALTA sensor), the design and fabrication of prototypes, and their characterization under highly demanding conditions. The MALTA sensor features a pixel pitch of 36 um and has been optimised for radiation hardness and the best possible time resolution. The presentation will summarise the latest measurement results on sensor design and process optimisation towards a radiation hardness of >2x10^15 n_eq/cm^2 (NIEL) and 100 Mrad (TID). Special emphasis will be given to the optimisation of its time resolution of 2 ns in order to utilise the sensor for demanding time-tagging applications in a fine-pitch pixel tracker.
Both the current upgrades to accelerator-based HEP detectors (e.g. ATLAS, CMS) and future projects (e.g. CEPC, FCC) feature large-area silicon-based tracking detectors. Using the production lines of industrial CMOS foundries to fabricate silicon radiation detectors, both for pixels and for large-area strip sensors, would be highly beneficial in terms of availability, throughput and cost. In addition, the availability of multi-layer routing of signals provides the freedom to optimize the sensor geometry and performance, with biasing structures implemented in poly-silicon layers and MIM capacitors allowing for AC coupling. First samples of pixel sensors from the LFoundry production line have already been tested and showed good performance up to irradiation levels of 10^16 neq/cm^2 for their potential operation as sensors for the CMS inner tracker. This presentation will focus on the systematic characterization of pixel modules at high irradiation levels, up to 1.64 x 10^16 neq/cm^2, studying the performance in terms of charge collection, position resolution and hit efficiency with measurements performed in the laboratory and at beam tests.
The unprecedented density of charged particles foreseen at the next generation of experiments at future hadron machines poses a significant challenge to the tracking detectors, which are expected to withstand extreme levels of radiation as well as to efficiently reconstruct a huge number of tracks and primary vertices. To meet this challenge, new extremely radiation-hard materials and sensor designs will be needed to build tracking detectors with high granularity and excellent time resolution. In particular, the availability of the time coordinate ("4D tracking") significantly simplifies the track and vertex reconstruction problem. Diamond 3D pixel sensors, with thin columnar resistive electrodes orthogonal to the surface and specifically optimised for timing applications, may provide an optimal solution to the above problems. The 3D geometry enhances the well-known radiation hardness of diamond and allows its excellent timing properties to be exploited, possibly improving on the performance of the extensively studied planar diamond sensors.
We report on the timing characterization, based on beta-source and particle beam tests, of innovative 3D diamond detectors optimised for timing applications, fabricated by laser graphitisation of conductive electrodes in the bulk of 500μm thick single-crystal diamonds, developed within the INFN TimeSpot initiative.
A time resolution well below 100 ps has been obtained with a prototype 55x55 μm^2 pitch sensor at a recent beam test at CERN, with a measured efficiency above 99%.
We have also fabricated ten 32x32-pixel sensors with 55x55 μm^2 pitch, which are being bump-bonded to a dedicated 28 nm ASIC and will be tested during this year.
Preliminary results on the simulation of the full chain of signal formation in the sensor will also be presented and plans for further optimisation briefly discussed.
Finally, prospects for the construction and test of a "4D" diamond tracker demonstrator will be discussed.
We describe the status of the ATLAS Forward Proton Detectors (AFP and ALFA) for LHC Run 3 after all refurbishments and improvements done during Long Shutdown 2. Based on analysis of Run 2 data, the expected performance of the Tracking and Time-of-Flight Detectors, the electronics, the trigger, and the readout and detector control and data quality monitoring are described. Finally, the physics interest and the most recent studies of beam optics and detector options for participation at the HL-LHC are discussed.
The Inner Tracker (ITk) will be one of the major upgrades that the ATLAS experiment will undergo during Long Shutdown 3 of the LHC. The ITk Pixel detector will be composed of an Inner System (IS), two Endcaps (EC) and an Outer Barrel (OB). The OB itself will comprise more than 4,000 pixel modules, arranged on modular "local support" structures (longerons and half-rings).
In total, 158 local support structures will compose the OB. QC testing will be performed at the different stages of production (standalone modules, modules loaded on cells, modules integrated onto loaded local supports, and after the integration of several loaded local supports).
Dedicated environmental boxes will be developed for this purpose, providing the required connectivity to services (CO2 cooling, power and data), light tightness and a safe operating environment during testing.
In order to ensure the safe operation of several modules at the loaded-local-support QC testing and integration stage, a dedicated DCS and interlock system was developed at CERN, based entirely on industrial PLC solutions and providing a SCADA WinCC-OA interface. The system is meant to be employed in a standalone configuration during QC tests, while at the integration stage it is foreseen to be coupled to the specific interlock crate of the ITk.
The system is meant to be modular and adaptable to the several different test configurations which are foreseen at the QC and integration stage.
The talk will give an overview of the system and its capabilities as well as describe the validation of its operation in a representative use case, with a system test setup currently operating at CERN.
The ATLAS experiment will undergo substantial upgrades to cope with the higher radiation environment and particle hit rates foreseen for the HL-LHC. The Phase-II upgrade will include the replacement of the inner detector with a completely new silicon-based tracker. The ATLAS Phase-II Inner Tracker (ITk) will consist of hybrid pixel detector layers and silicon strip detector layers. The innermost five barrel layers and several endcap rings will be equipped with hybrid pixel detector modules. The modules consist of bare modules connected to flexible printed circuits. Bare modules are made of a silicon pixel sensor connected to either four FE chips, to form a quad module, or one FE chip, to form a single-chip module. The ITk Phase-II pixel community has conducted many developments geared towards meeting the necessary module production quality and throughput. These include establishing quality-checking routines for bare-module components, tooling developments for their assembly, as well as the electrical testing infrastructure to assess their operability to specification. A dedicated program to set this effort in motion and streamline these various stages was established using the RD53A front-end chip. Subsequent test and assembly work is being carried out using the ITkPix chips, which are final-size FE chips. This talk will provide a detailed overview of these developments and their results in preparation for the ATLAS ITk pixel Phase-II upgrade module production.
The High-Luminosity upgrade of the CERN Large Hadron Collider (HL-LHC) requires new radiation-tolerant silicon pixel sensors. In the case of the CMS experiment, the first layer of pixel detectors will be installed at about 3 cm distance from the beam pipe: fluences up to 2E16 neq/cm2 (1 MeV equivalent neutrons) are expected. The 3D concept for silicon pixel sensors presents several advantages with respect to traditional planar sensors. Thanks to their peculiar structure, 3D sensors are resistant to radiation damage, making them suitable for use in the inner layer of the future CMS tracker. In this presentation, results obtained in beam-test experiments with highly irradiated 3D and planar pixel sensors interconnected with the RD53A readout chip are reported. RD53A is the first prototype, in 65 nm technology, issued by the RD53 collaboration for the future readout chip to be used in the upgraded pixel detectors. The sensors were made at the FBK foundry in Trento, Italy, and their development was carried out in collaboration with INFN (Istituto Nazionale di Fisica Nucleare, Italy). Both 3D and planar sensors feature a pixel area of 2500 μm2 and an active thickness of 150 μm. The interconnected modules, irradiated to fluences up to 2.4E16 neq/cm2, were tested at various beam-test facilities: the analysis of the collected data shows excellent performance after unprecedented irradiation fluences. All results were obtained in the framework of the CMS R&D activities.
The current prototype of the proposed sensor was developed in a 180 nm TSI HV technology with a 24x40 pixel matrix. Single pixels exploit deep n-well on p-substrate diodes. The charge released by secondary particles is collected on the deep n-wells, which also contain the front-end pixel electronics. The front-end electronics contains an integrator followed by a comparator. Each time the accumulated charge surpasses the comparator threshold, a pump pulse is generated and counted in an 8-bit register, and the integrator is reset. By storing an 8-bit timestamp of the first and of the last pump, it is possible to determine with high precision the charge acquired during the integration time. A 16-bit output resolution is achieved by this pump-timestamp method; the output is transmitted over 2 LVDS lines with 4 bits in parallel, to increase the data-transfer speed as well as to maintain the integrity of the output. Preliminary tests showed a noise floor of 0.8 fC with a maximum charge of 3000 fC, limited by the resolution bits. The sensor presents a linear response over the whole dynamic range. A test with a high-energy particle beam was carried out; the results show the performance of the sensor under realistic conditions, as well as its radiation-hardness capabilities.
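A minimal behavioural sketch of the pump-and-timestamp front end as described above (our reading of the abstract, not the actual design; the pump charge, clock and units are illustrative assumptions):

```python
# Behavioural sketch (illustrative only) of the pump-and-timestamp front end:
# an integrator accumulates the incoming charge; whenever it crosses the
# comparator threshold a "pump" is emitted, the 8-bit pump counter is
# incremented, the integrator is reset, and the 8-bit timestamps of the first
# and last pump are stored for the later fine charge reconstruction.
def pump_frontend(charge_per_tick, q_pump=12.0, n_ticks=256):
    """charge_per_tick: charge arriving in each clock tick (fC), hypothetical values."""
    integrator, n_pumps = 0.0, 0
    t_first = t_last = None
    for t, dq in enumerate(charge_per_tick[:n_ticks]):
        integrator += dq
        if integrator >= q_pump:            # comparator fires
            n_pumps = (n_pumps + 1) & 0xFF  # 8-bit pump counter
            t_last = t & 0xFF               # 8-bit timestamps
            if t_first is None:
                t_first = t & 0xFF
            integrator = 0.0                # integrator reset after each pump
    return n_pumps, t_first, t_last

# Example: a constant 1.7 fC per clock tick over the full integration window.
print(pump_frontend([1.7] * 256))
```

Offline, the pump count gives the coarse charge estimate, while the first and last timestamps presumably allow the pump rate to be interpolated over the window, which is how we read the quoted 16-bit effective resolution.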
In order to cope with the occupancy and radiation doses expected at the High-Luminosity LHC, the ATLAS experiment will replace its Inner Detector with an all-silicon Inner Tracker (ITk), containing pixel and strip subsystems. The strip detector will be built from modules consisting of one or two n+-in-p silicon sensors, PCB hybrids accommodating the front-end electronics, and a powerboard providing high voltage, low voltage, and monitoring electronics. The aluminium strips of the silicon sensors developed for the ITk project are AC-coupled to n-type implants in a p-type float-zone silicon bulk. The module powering configuration includes a voltage of up to 0.5 V across the sensor coupling capacitor. However, this voltage is usually not applied in sensor irradiation studies because of significant technical and logistical complications. To study the effect of irradiation and of the subsequent beneficial annealing on the ITk strip sensors under realistic experimental conditions, four prototype ATLAS17LS miniature sensors were irradiated with a Co-60 source and annealed for 80 minutes at 60°C, both with and without a bias voltage of 0.5 V applied across the coupling capacitors. The values of interstrip resistance measured on the irradiated samples before and after annealing indicate that the increase of radiation damage caused by the applied voltage can be compensated by the presence of the same voltage during annealing.
The high-luminosity upgrade of the Large Hadron Collider, foreseen for 2028, requires the replacement of the ATLAS Inner Detector with a new all-silicon Inner Tracker (ITk). The expected total integrated luminosity of 4000 fb^−1 means that the strip part of the ITk detector will be exposed to total particle fluences and ionizing doses reaching 1.6E15 1 MeV n_eq/cm^2 and 0.66 MGy, respectively, including a safety factor of 1.5. Radiation-hard n+-in-p micro-strip sensors were developed by the ATLAS ITk strip collaboration and are produced by Hamamatsu Photonics K.K. The active area of each ITk strip sensor is delimited by the n-implant bias ring, which is connected to each individual n+ implant strip by a polysilicon bias resistor. The total resistance of the polysilicon bias resistor should be within a specified range to keep all the strips at the same potential, prevent signal discharge through the grounded bias ring, and avoid an increase of the readout noise. While polysilicon is a ubiquitous semiconductor material, the fluence and temperature dependence of its resistance is not easily predictable, especially for a tracking detector with an operating temperature significantly below the values typical for commercial microelectronics.
The dependence of the polysilicon bias-resistor resistance on temperature, as well as on the total delivered fluence and ionizing dose, was studied on specially designed test structures called ATLAS Testchips, both before and after their irradiation with protons, neutrons, and gammas up to the maximal expected fluence and ionizing dose. The resistance shows an atypical negative temperature dependence, different from that of crystalline silicon, which indicates that the grain boundaries contribute significantly to the resistance. We will discuss these contributions by parameterizing the activation energy of the polysilicon resistance as a function of temperature for unirradiated and irradiated ATLAS Testchips.
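For context, a common textbook parametrization of grain-boundary-limited conduction in polysilicon (given here for orientation only; the authors' parametrization may differ in detail) is
$$R(T)\;\propto\;\exp\!\left(\frac{E_a(T)}{k_B T}\right),$$
so that a negative temperature coefficient corresponds to a positive activation energy $E_a$, which can be extracted from the local slope of $\ln R$ versus $1/T$.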
The ATLAS experiment is currently preparing for an upgrade of the inner tracking detector for High-Luminosity LHC operation, scheduled to start in 2027. The new detector, known as the Inner Tracker or ITk, employs an all-silicon design with five inner Pixel layers and four outer Strip layers. The staves are the building blocks of the ITk Strip barrel layers. Each stave consists of a low-mass support structure which hosts the common electrical, optical and cooling services as well as 28 silicon modules, 14 on each side. To characterize the staves, a set of electrical and functional measurements has been performed both at room temperature and cold. In this contribution, the results on the first fully instrumented pre-production staves assembled at Brookhaven National Laboratory will be presented.
During the era of the High-Luminosity (HL) LHC, the experimental devices will be subjected to enhanced radiation levels, with fluences of neutrons and charged hadrons in the outer tracker detectors (200 mm - 1200 mm from the beam axis) from $3\times10^{14}$ to $1\times10^{15}$ neq/$cm^{2}$ and total ionization doses from 10 kGy to 750 kGy after 3000 $fb^{-1}$ of integrated luminosity. A systematic program of radiation tests with neutrons and charged hadrons is being run by the LHC collaborations in view of the upgrade of the experiments, in order to cope with the higher luminosity of the HL-LHC and the associated increase in pile-up events and radiation fluxes. In this work, complementary radiation studies with gamma photons from a 60Co source are presented. The doses are of the order of tens of kGy. The irradiated test structures contain, among others, gate-controlled diodes (GCD) and field-effect transistors (FET). The alterations in the current components after irradiation are investigated. The results of IV measurements on these devices are presented as a function of the total absorbed radiation dose following a specific annealing protocol. The measurements are compared with the results of a TCAD simulation. The devices under test are made of oxygen-enriched float-zone p-type silicon.
The ATLAS collaboration is working on a major upgrade of the Inner Tracker, able to withstand the extreme operational conditions expected at the forthcoming High-Luminosity Large Hadron Collider (HL-LHC). During the prototyping phase of the new large-area silicon strip sensors, the community observed a degradation of the breakdown voltage (down to 200-500 V from $\geq$1 kV in bias voltage) when devices with the final technology options were exposed to high humidity, with the electrical performance prior to the exposure recovered after a short period in dry conditions [J. Fernandez-Tejero, et al., NIM A 978 (2020) 164406]. These findings helped to understand the humidity sensitivity of the new sensors, defining the optimal working conditions and handling recommendations during production testing.
In 2020, the ATLAS strip sensor community started the pre-production phase, receiving the first sensors fabricated by Hamamatsu Photonics K.K. using the final layout design. The work presented here is focused on the analysis of the humidity sensitivity of production-like sensors with different surface properties, providing new results on their influence on the humidity sensitivity observed during the prototyping phase.
Additionally, the new production strip sensors were subjected to short-term (days) and long-term (months) exposures to high humidity. This study makes it possible to recreate and evaluate the influence of the detector-integration environment expected during Long Shutdown 3 (LS3) in 2025, when the sensors will be exposed to ambient humidity for prolonged times. A subset of the production-like sensors was irradiated up to the fluences expected at the end of the HL-LHC lifetime, allowing the study of the evolution of the humidity sensitivity and of the influence of the passivation layers on sensors exposed to extreme radiation conditions.
A new generation of Monolithic Active Pixel Sensors (MAPS), produced in a 65 nm CMOS imaging process, promises higher densities of on-chip circuits and, for a given pixel size, more sophisticated in-pixel logic compared to larger feature size processes. MAPS are a cost-effective alternative to hybrid pixel sensors since flip-chip bonding is not required. In addition, they allow for significant reductions of the material budget of detector systems, due to the smaller physical thicknesses of the sensor and the absence of a readout chip.
The TANGERINE project aims for a sensor with a spatial resolution below 3 μm, a temporal resolution below 10 ns, and a total physical thickness below 50 μm, suitable for future Higgs factories or as a beam telescope in beam-test facilities. The sensors will have small collection electrodes (of the order of μm), to maximize the signal-to-noise ratio and hence minimize the power dissipation in the circuitry. An extensive program of electric-field and Monte Carlo simulations is being pursued to optimize the sensor layout and to reach full depletion of the epitaxial layer, and hence high hit-detection efficiencies, despite the small collection electrodes. This includes different types of process modifications to enlarge the depletion region and enhance the lateral electric-field strength.
The first batch of test chips, featuring the full front-end amplifiers with Krummenacher feedback, was produced and tested at the Mainzer Mikrotron (MAMI) at the end of 2021. MAMI provides an electron beam with currents up to 100 μA and an energy of 855 MeV. The analog output signal of the test chips is recorded with a high-bandwidth oscilloscope and used to study the charge-sensitive amplifier of the chips through waveform analysis. A beam telescope was used as a reference system, to also allow a track-based analysis of the recorded data.
High-luminosity upgrades will be performed on all experiments at CERN's Large Hadron Collider. The increased number of events will provide larger statistics and, consequently, a better chance of discovering new phenomena. Not only will this cause an increase in the radiation damage to the detector systems, but it will also increase the event overlap. As a result, radiation-tolerant detectors with a fast response time are being researched and developed by several detector development groups. 3D silicon sensors have been shown to be one of the most radiation-hard silicon sensor technologies. In 3D sensors the inter-electrode distance is decoupled from, and can be made much shorter than, the substrate thickness. The proximity of the electrodes to the point of charge-carrier formation allows for a fast signal response and reduced trapping probability, and suppresses effects caused by radiation damage. The poster will present results on the timing properties of 3D sensors and discuss them in the perspective of luminosity-upgrade applications.
A modern silicon-based detector acting as an active target capable of imaging particles in 3D, similar to a bubble chamber, does not yet exist. Ideas for a silicon active target providing continuous tracking were put forward almost 40 years ago, but the required technology did not exist until recently.
In this talk, a project to construct the first silicon active target based on silicon pixel sensors, called Pixel Chamber, will be described. The aim is to create a bubble chamber-like high-granularity stack of hundreds of very thin monolithic active pixel sensors glued together, capable of performing continuous, high resolution (O($\mu m$)) 3D tracking, including open charm and beauty particles. For the stack, the ALPIDE sensor, designed for the ALICE experiment at the CERN LHC, will be used.
The power consumption of a stack consisting of hundreds of sensors could result in very high temperatures, affecting the performance of the detector, thus requiring a cooling scheme. Simulations were carried out to evaluate different options and converge on a cooling solution. Preliminary results of laboratory cooling tests will be presented.
High-efficiency tracking and vertexing algorithms were developed to reconstruct tracks and vertices inside Pixel Chamber. They were tested on Monte Carlo simulations of proton-silicon interactions occurring inside the detector. The vertex resolution can be up to one order of magnitude better than that of state-of-the-art detectors such as those of the LHC experiments. The tracking algorithm has also been tested with real data, using tracks produced in a single ALPIDE sensor exposed to electron and hadron beams, with very good results.
Finally, the first results obtained in the development of prototypes of stacks of a few ALPIDE sensors will be presented. Future perspectives of the project will be illustrated at the end of the talk.
The MONOLITH ERC Advanced project aims at producing a monolithic silicon pixel ASIC with picosecond-level time stamping by using fast SiGe BiCMOS electronics and a novel sensor concept, the Picosecond Avalanche Detector (PicoAD).
The PicoAD uses a multi-PN junction to engineer the electric field and produce a continuous gain layer deep in the sensor volume. The result is an ultra-fast current signal with low intrinsic jitter in a full-fill-factor, highly granular monolithic detector.
A proof-of-concept ASIC prototype confirms that the PicoAD principle works according to simulations. Test-beam measurements show that the prototype is fully efficient and achieves time resolutions down to 24 ps.
Single-Photon Avalanche Diodes (SPADs) are attracting growing attention in the field of optical sensing, since they can offer outstanding time and space resolution in a wide range of applications. In addition, SPADs can take advantage of CMOS planar technology, which enables the integration of both the sensor and the processing electronics in the same chip.
This work will present the results of the characterization of a SPAD-based sensor, fabricated in a 150 nm CMOS technology, for charged-particle tracking. In order to compensate for the relatively high dark noise, mostly deriving from the use of a non-custom technology, two SPAD chips were vertically interconnected by means of bump-bonding techniques, making up a dual-layer structure. The detection scheme is based on the coincidence of the signals coming from the two layers of SPAD sensors. If a particle passes through both sensing elements of a bi-layer cell, the two pulses overlap with each other and a coincidence signal is generated. On the other hand, obtaining overlapping signals as a consequence of dark pulses is unlikely, due to the statistical nature of the noise (a simple estimate of the accidental coincidence rate is sketched after this abstract). Dark count rate (DCR) measurements, performed on both independent single-layer and dual-layer chips, yielded median values of approximately $2\;Hz/\mu m^2$ and $100\;\mu Hz/\mu m^2$, respectively, demonstrating the beneficial impact of the two-layer approach on the noise performance.
In the conference paper, measurement results on the crosstalk exhibited by single- and dual-layer chips will be discussed. The structure under test consists of $1728$ cells with a pitch of $50\;\mu m$. Different measurement procedures, described in the final paper, have been used to study the crosstalk contribution from pixels that strongly affect the noise performance of their neighbours. Finally, some considerations on the crosstalk probability will be provided.
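The noise rejection of the coincidence scheme described above can be estimated with the standard accidental-rate formula for two uncorrelated Poisson sources, $R_{\rm acc}\approx 2R_1R_2\tau$. The sketch below applies it to one cell; the active cell area and the coincidence window are assumptions (they are not given above), so it is an order-of-magnitude illustration only:

```python
# Order-of-magnitude sketch (not the authors' analysis): accidental dark
# coincidences of one dual-layer cell, assuming two uncorrelated Poisson
# sources and a coincidence window tau. Cell area and tau are assumed values.
CELL_AREA_UM2 = 50 * 50        # one 50 um x 50 um cell, assumed fully active
DCR_DENSITY = 2.0              # measured single-layer DCR density [Hz/um^2]
TAU = 10e-9                    # assumed coincidence window [s]

r_layer = DCR_DENSITY * CELL_AREA_UM2        # dark rate of one layer's cell [Hz]
r_acc = 2 * r_layer * r_layer * TAU          # accidental coincidence rate [Hz]

print(f"single-layer cell DCR: {r_layer:.0f} Hz")
print(f"accidental coincidences: {r_acc:.2f} Hz "
      f"= {r_acc / CELL_AREA_UM2 * 1e6:.0f} uHz/um^2")
```

With these assumed numbers the estimate lands within a factor of a few of the measured dual-layer DCR density, as expected for an order-of-magnitude check.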
In the past few years, thanks to the introduction of controlled low gain and to the optimization of the sensor design, silicon sensors have become the detector of choice for the construction of 4D trackers. Presently, both the ATLAS and CMS experiments are building large timing layers (about 20 m2) to add to their experiments the capability of time-tagging charged particles.
In this contribution, I will present the 4DInSiDe project that aims at developing the next generation of 4D silicon detectors characterized by a fully active detecting volume, low material budget and high radiation tolerance.
To this purpose, different areas of research have been identified, involving the development, design, fabrication and testing of radiation-hard devices that are guaranteed to operate efficiently in future high-energy physics experiments. This has been enabled by ad-hoc advanced TCAD modelling of LGAD devices, accounting both for technological issues, e.g. the sensitivity of the gain layer, and for physical aspects, such as different avalanche-generation models and the combined modelling of surface and bulk radiation-damage effects. A massive test campaign has been carried out on specifically devised LGAD structures, both unirradiated and irradiated, supporting the validation of the development framework and the evaluation of the impact of several design options, thus guiding the sensor design and optimization before its large-volume production.
This work is focused on reviewing the progress and the relevant detector developments obtained during the research activities in the framework of the Italian 4DInSiDe collaboration.
Monolithic Active Pixel Sensors (MAPS) are a promising technology that provides large sensitive areas at potentially low power consumption and low material budget. The ARCADIA project is developing Fully Depleted MAPS (FD-MAPS) with an innovative sensor design that uses a backside bias to improve the charge-collection efficiency and timing over a wide range of operational and environmental conditions. The sensor design is based on a modified 110 nm CMOS process and incorporates a low-doped n-type silicon active volume with a p+ region at the bottom. The p-n junction sits at the bottom of the sensor, so the depletion region grows from the backside surface with increasing bias voltage. These FD-MAPS are thus operational at low front-side supply voltages while achieving a fully depleted silicon bulk, which allows the electrode on the top to read out the fast electron signal produced by drift.
The ARCADIA collaboration has produced a large set of prototypes in a first engineering run, with a main design consisting of a 512×512 pixel matrix with 25 $\mu$m pixel pitch and other smaller active sensor arrays. Test structures of pixel matrices with pixel pitches ranging from 10 to 50 $\mu$m and total thicknesses of 50 to 200 $\mu$m have been included, to ease the characterization of the sensors independently from integrated electronics.
We will give an overview of the status of the project including first results of the operation of the main demonstrator chip, and then focus on the characterization of the passive pixel matrices which include Capacitance-Voltage (CV) and Current-Voltage (IV), as well as Transient Current Technique (TCT) measurements with a red and an infrared laser. The results are supported by Technology Computer Aided Design (TCAD) simulations. An additional emphasis will be put on the design of pixels optimized for timing applications with sub-100 ps resolution.
The ALICE collaboration is currently carrying out the final commissioning of the upgraded Inner Tracking System (ITS), a new ultralight and high-resolution silicon tracker designed to match the requirements of the experiment in terms of material budget, readout speed and low power consumption of the sensors. The upgraded ITS has an active area of about 10 m2, consisting of 24120 Monolithic Active Pixel Sensors (referred to as ALPIDE) produced in the 180 nm TowerJazz CMOS image sensor process. They are assembled in seven concentric layers around the beam pipe, with radii ranging from 22 mm to 406 mm. The extremely low material budget of 0.35% X0, the fine granularity with a pixel size of 27 um x 29 um and the small distance of the innermost layer from the beam axis will allow a major improvement of the detector performance in terms of impact-parameter resolution and tracking efficiency, in particular at low p_T.
After the end of production in late 2019, the fully assembled ITS was thoroughly characterised during on-surface commissioning before being installed in the ALICE experiment at the beginning of 2021. Since then, the full ITS detector system has been extensively studied in terms of performance and operational stability, both in stand-alone mode, including cosmic-ray data taking, and integrated with the full ALICE detector system.
In this contribution we present the operational experience gained with the upgraded ITS during commissioning as well as selected results of the LHC pilot beam tests providing first measurements of the detector performance in terms of efficiency and spatial resolution.
Without an external magnetic field, the position resolution of standard silicon sensors is about $\mathrm{pitch}/\sqrt{12}$; in identical conditions, silicon sensors with resistive read-out achieve a resolution of a few percent of the pitch. This remarkable improvement is due to the introduction of resistive read-out in the silicon sensor design. Resistive silicon sensors are based on the LGAD technology, characterised by a continuous gain layer and by an internal signal-sharing mechanism. Thanks to an innovative electrode design aimed at maximising signal sharing, the second FBK production of RSD sensors, RSD2, achieves a position resolution over the whole pixel surface of about 3 microns for a 200-micron pitch, 15 microns for a 450-micron pitch and less than 40 microns for a 1300-micron pitch. RSD2 arrays have been tested in the Laboratory for Innovative Silicon Sensors in Torino using a Transient Current Technique setup equipped with a 16-channel digitizer, allowing all the detector channels to be recorded simultaneously. In this contribution, I will present the characteristics of RSD2 and the results obtained with analytic methods and with machine-learning algorithms.
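For scale, taking the 200-micron pitch as an example, the benchmark quoted above evaluates to
$$\sigma_{\mathrm{binary}}=\frac{p}{\sqrt{12}}=\frac{200~\mu\mathrm{m}}{\sqrt{12}}\approx 58~\mu\mathrm{m},$$
to be compared with the roughly 3 μm (about 1.5% of the pitch) reported for RSD2 at the same pitch.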
The High-Luminosity Large Hadron Collider is expected to start in 2027 and to provide an integrated luminosity of 3000 fb-1 in ten years, about a factor of 20 more than what has been collected so far. These high statistics will allow precise measurements in the Higgs sector and improved searches for new physics at the TeV scale.
The luminosity needed is L ~ 7.5x10^34 cm-2 s-1, corresponding to ~200 additional proton-proton pile-up interactions, which can significantly degrade the reconstruction performance. To face such a harsh environment, some sub-detectors of the ATLAS experiment will be upgraded or completely replaced. The current Inner Detector will be replaced with a new all-silicon Inner Tracker (ITk) designed to face the challenging environment associated with the high number of collisions per bunch crossing.
In this poster an overview of the ITk performance in reconstructing and identifying high-level objects will be shown. A particular focus will be given to pile-up jet tagging and to the impact of the spatial density of the collisions per bunch crossing.
The Belle II experiment at the SuperKEKB $e^+e^-$ collider has started developing an upgrade program, in the time frame of 2026-2027, to improve the detector performance and robustness against beam-induced backgrounds.
To replace the current Belle II pixel and strip system (VXD), the VTX detector concept has been developed, a fully pixelated system based on thin Depleted Monolithic Active Pixel Sensors organized in 5, or possibly more, barrel layers.
To optimize the VTX design and compare it with the VXD system, a full simulation framework has been developed and integrated with the standard Geant4-based Belle II simulation, allowing direct comparison among different layouts.
This is made possible by the flexibility of the Belle II track reconstruction code, which can be retrained to operate on any detector layout without changing the code itself.
This poster will present the VTX detector concept, the development of the simulation framework, and the simulation results obtained, such as tracking efficiency and vertex resolution on benchmark physics channels (including D mesons from B decays), showing significant improvements with respect to the VXD.
This simulation work forms the basis for further optimization of the VTX design.
The EUDET-style telescopes provide excellent spatial resolution, but their timing capabilities are limited by the rolling-shutter architecture. The Telepix prototype has been developed to significantly improve the time stamping of the telescopes and to provide a fast trigger signal with a selectable region of interest. This will be used to efficiently take data with small sensor prototypes.
Telepix is designed in the TSI 180 nm HV-CMOS process and profits from a decade of research for the Mu3e experiment and others. In a test submission, multiple pixel matrices with 29x124 pixels and a pitch of 25 um x 165 um have been submitted, featuring different amplifiers based on PMOS only, NMOS only, as well as full CMOS. These are systematically characterised and compared in both laboratory and test beam measurements.
During a test beam campaign, the fast region-of-interest trigger has been studied extensively; a delay below 25 ns with respect to a trigger scintillator was determined, along with a jitter of less than 5 ns, making Telepix a well-suited trigger plane. A full column was used in order to also include the different transmission line lengths that might influence the jitter of the trigger signal.
The performance of the three amplifiers has also been studied in test beam and laboratory measurements: efficiencies above 99% and time resolutions below 5 ns have been observed with the fully integrated readout. The threshold range with efficiencies above 99% depends strongly on the amplifier type, as does the threshold dependence of the time resolution. Finally, the spatial resolution as a function of the detection threshold has been determined.
The EUDET-style beam telescopes are introduced, the specifications derived and the sensor design presented. A comparison of the different chips from laboratory measurements and a test beam campaign will be shown, and the region-of-interest triggering capabilities will be demonstrated.
The TRISTAN project is the upgrade of the KATRIN experiment that will search for sterile neutrinos with mass in the keV range through precise measurements of the entire Tritium $\beta$-spectrum.
For this purpose, the current KATRIN detector must be replaced with a multi-pixel detector based on Silicon Drift Detectors (SDDs). SDDs have a small anode capacitance, which is reflected in a small equivalent noise charge and therefore in a very high energy resolution, close to the Fano limit in silicon. Moreover, thanks to this small capacitance, the signal rise times are of the order of a few tens of nanoseconds. These features make SDDs ideal for high-rate spectroscopy, and they are commonly used for X-ray measurements. Electron spectroscopy is a relatively novel application; it is therefore necessary to characterize the SDD response to electrons.
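To put the "close to the Fano limit" statement in numbers, the sketch below gives the standard textbook estimate of the FWHM energy resolution in silicon, combining Fano fluctuations and electronic noise in quadrature; it is only an illustration, and the ENC value used is a placeholder rather than a TRISTAN measurement.

```python
# Simplified estimate of an SDD energy resolution: Fano fluctuations plus
# electronic noise added in quadrature. Illustration only; the ENC value
# below is a placeholder.
import math

W_SI = 3.65      # eV per electron-hole pair in silicon
FANO_SI = 0.115  # approximate Fano factor in silicon

def fwhm_ev(energy_ev: float, enc_electrons: float) -> float:
    """FWHM energy resolution in eV for a deposited energy and an ENC (e- rms)."""
    sigma_fano_sq = FANO_SI * W_SI * energy_ev
    sigma_noise_sq = (W_SI * enc_electrons) ** 2
    return 2.355 * math.sqrt(sigma_fano_sq + sigma_noise_sq)

# Example: 5.9 keV with a hypothetical ENC of 10 electrons rms -> ~145 eV FWHM,
# compared with ~117 eV for the Fano-only limit.
print(fwhm_ev(5_900, 10.0), fwhm_ev(5_900, 0.0))
```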
We focused our attention on two aspects: the detector dead layer and the electron backscattering probability. We performed precise measurements in a dedicated setup consisting of an SDD matrix and an electron gun used as a monochromatic and collimated electron source. In both cases we compared our results with Geant4 Monte Carlo simulations.
The precise knowledge of the SDD response to electrons is mandatory in order to accurately reconstruct the continuous $\beta$-spectrum that will be measured in TRISTAN.
We have also investigated the possibility of using an SDD as a versatile and compact $\beta$ spectrometer that can be operated with standard technologies. The goal is to make precise measurements of some interesting $\beta$-decaying isotopes that can have an impact in neutrino and nuclear physics.
In the last few years, fast timing detectors have become more and more important for high energy physics and for technological applications. The CMS Precision Proton Spectrometer (PPS), operating at the LHC, makes use of 3D silicon tracking stations to measure the kinematics of protons scattered in the very forward region, as well as timing detectors based on planar single-crystal CVD diamond to measure the proton time-of-flight with high precision. The time information is used to reconstruct the longitudinal position of the proton interaction vertex and to suppress pile-up background. To move the PPS detectors closer to the circulating LHC beams, they are housed in special movable vacuum chambers, the Roman Pots, placed in the beam pipe. A novel architecture with two diamond sensors read out in parallel by the same electronic channel has been used to enhance the timing performance of the detector. A dedicated amplification and readout chain has been developed to sustain particle rates of $\sim$1 MHz/channel. The PPS timing detector has operated demonstrating its capability to reconstruct the interaction vertex and to suppress pile-up background. In Run 2 the detectors were exposed to a highly non-uniform irradiation, with local peaks above $10^{16}$ neq/cm$^2$; a similar value is expected in Run 3. LHC data and subsequent test beam results show that the observed radiation damage only led to a moderate decrease of the detector timing performance. We will present the PPS timing system in detail. Detector performance in Run 2 will be reported, including recent studies of radiation effects. The timing system has been upgraded and new detector packages are currently being installed, with the goal of reaching an ultimate timing resolution of better than 30 ps on protons in the TeV energy range.
Data quality monitoring (DQM) and data certification (DC) are of vital importance to advanced detectors such as CMS, and are key ingredients in assuring solid results of high-level physics analyses using its data. The current approach for DQM and DC is mainly based on manual monitoring of reference histograms summarizing the status and performance of the detector. This requires a large amount of person power while having a rather coarse time granularity in order to keep the number of histograms to check manageable. We discuss some ideas for automatic DQM and DC using machine learning at the CMS detector, focusing on a number of case studies in the pixel tracker. In particular, using legacy data taken in 2017, we show that data certification using autoencoders is able to accurately spot anomalous detector behaviour, with a time granularity previously inaccessible to the human certification procedure. We propose some ideas and plans to commission these automatic DQM and DC procedures in the coming Run 3 of CMS data taking.
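As an illustration of the kind of approach described above, the sketch below shows a minimal reconstruction-error autoencoder for flattened monitoring histograms. It is a generic PyTorch example under simple assumptions (fixed histogram length, training only on data certified as good), not the CMS implementation.

```python
# Minimal sketch of autoencoder-based anomaly detection on monitoring
# histograms: train on "good" histograms, flag large reconstruction errors.
import torch
import torch.nn as nn

n_bins = 100  # hypothetical number of bins per flattened, normalised histogram

model = nn.Sequential(
    nn.Linear(n_bins, 32), nn.ReLU(),
    nn.Linear(32, 8), nn.ReLU(),   # bottleneck
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, n_bins),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(good_histograms: torch.Tensor, epochs: int = 50) -> None:
    """Train the autoencoder on histograms from runs certified as good."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(good_histograms), good_histograms)
        loss.backward()
        optimizer.step()

def anomaly_score(histogram: torch.Tensor) -> float:
    """Reconstruction error; large values flag anomalous detector behaviour."""
    with torch.no_grad():
        return loss_fn(model(histogram), histogram).item()
```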
The Quality Control (QC) of pre-production strip sensors for the Inner Tracker (ITk) of the ATLAS Inner Detector upgrade has finished, and the collaboration has embarked on the QC test programme for production sensors. This programme will last more than 3 years and comprises the evaluation of approximately 22000 sensors. Eight types of sensors, 2 barrel and 6 endcap, will be measured at many different collaborating institutes. The sustained throughput requirement of the combined QC processes is around 500 sensors per month in total. Measurement protocols have been established and acceptance criteria have been defined in accordance with the terms agreed with the supplier. For effective monitoring of test results, common data file formats have been agreed upon across the collaboration. To enable the evaluation of test results produced by many different test setups at the various collaborating institutes, common algorithms have been developed to collate, evaluate, plot and upload measurement data. This allows for objective application of pass/fail criteria and compilation of the corresponding yield data. These scripts have been used to process the data of more than 2500 sensors so far, and have been instrumental in identifying faulty sensors and monitoring the QC testing progress. The analysis algorithms and criteria were also used in a dedicated study of strip tests on gamma-irradiated full-size sensors.
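As a purely illustrative sketch of how such common evaluation scripts can operate, the example below applies invented pass/fail criteria to invented sensor records and compiles the corresponding yield; the field names and cut values are placeholders, not the actual ITk acceptance criteria.

```python
# Illustration of objective pass/fail evaluation and yield compilation for
# sensor QC data. Field names and cut values are invented placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class SensorQC:
    serial: str
    leakage_current_uA: float   # at the nominal test voltage
    breakdown_voltage_V: float

MAX_LEAKAGE_UA = 10.0    # hypothetical acceptance criteria
MIN_BREAKDOWN_V = 500.0

def passes(s: SensorQC) -> bool:
    return s.leakage_current_uA <= MAX_LEAKAGE_UA and s.breakdown_voltage_V >= MIN_BREAKDOWN_V

def yield_fraction(sensors: List[SensorQC]) -> float:
    return sum(passes(s) for s in sensors) / len(sensors)

batch = [SensorQC("W001", 2.1, 700.0), SensorQC("W002", 15.4, 650.0)]
print(f"batch yield: {100 * yield_fraction(batch):.0f}%")   # 50%
```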
PIONEER is a next-generation experiment to measure the charged-pion branching ratio to electrons vs muons, Re/μ, and pion beta decay (Pib), π+→π0eν. Re/μ provides the best test of e-μ universality and is extremely sensitive to new physics at high mass scales; Pib could provide a clean, high-precision value for Vud. PIONEER was approved with high priority at the Paul Scherrer Institute (PSI), with the plan to start data taking as early as 2028.
PIONEER features a high granularity active target (ATAR), designed to suppress the muon decay background sufficiently so that the eν tail can be directly measured. In addition, the ATAR will provide detailed 4D tracking information to suppress other significant systematic uncertainties, and to separate the energy deposits of the pion decay products in both position and time.
The chosen technology for the ATAR is the Low Gain Avalanche Detector (LGAD): thin silicon detectors with moderate internal signal amplification (up to a gain of ~50). LGADs have a fast rise time and a short full charge collection time, and are capable of providing measurements of minimum-ionizing particles (MIPs) with a time resolution as good as 17 ps. The ATAR baseline design is 48 planes of 2x2 cm strip LGADs with 120 μm of active thickness. To achieve a ~100% active region, several technologies still under research are being evaluated, such as AC-LGADs and TI-LGADs. As a dynamic range from MIPs (positrons) to several MeV (pions/muons) of deposited energy is expected, the detection and separation of close-by hits over such a wide dynamic range will be a main challenge.
In this contribution, a brief introduction to the PIONEER experiment will be given, followed by studies of ATAR candidate LGAD sensors performed with a TCT laser and particle beams. Furthermore, results obtained with integrated amplifier chips and an interposed flex cable will be presented.
Scientific Charge-Coupled Devices (CCDs) have been widely used in astronomy and particle physics due to their excellent spatial resolution and sensitivity to low-energy signals. The skipper-CCD, a recently developed sensor, makes it possible to measure single-electron signals with sub-electron noise, making its application very attractive in experiments where a low energy threshold is required. In this talk I will describe the skipper-CCD technology and discuss its current usage in dark matter and neutrino experiments. Furthermore, I will give an overview of the ongoing efforts towards constructing multi-kg experiments with skipper-CCDs.
The proven potential of 3D geometries at radiation fluences above $10^{16}\,n_{eq}/cm^{2}$, in combination with a small-cell approach, makes them an excellent choice for a combined precision timing tracker. In this study, the timing resolution of a single 50 x 50 μm$^2$ 3D pixel cell is presented at various temperatures through charge collection measurements with discrete electronics in a laboratory setting. The series is complemented by an extensive test-beam campaign with 160 GeV SPS pions, using a multi-plane timing telescope with an integrated pixelated matrix. Through a varied-incidence-angle study, the field uniformity, Landau contribution and collected charge are treated at incidence angles of $\pm 12^{\circ}$. Using state-of-the-art numerical methods, the influence of the choice of instrumentation on the signal composition and the induced bias on the results is also evaluated. Finally, with the help of the EUDAQ telescope, a detailed timing, field and efficiency map is presented with a 5 μm spatial resolution through MIMOSA CMOS tracking at CERN SPS pion beams.
Experiments at the future Electron-Ion Collider (EIC) pose stringent requirements on the tracking system for the measurement of the scattered electron and of the charged particles produced in the collision, as well as of the position of the collision point and of any decay vertices of hadrons containing heavy quarks. Monolithic Active Pixel Sensors (MAPS) offer the possibility of high granularity in combination with low power consumption and low mass, making them ideally suited for the inner tracker of the EIC detector(s). In this talk, we will discuss the configuration optimized for the ATHENA detector, selected physics performance metrics, and the associated R&D towards a well-integrated, large-acceptance, precision tracking and vertexing solution for the EIC based on a new generation of MAPS sensors in 65 nm CMOS imaging technology.
The negative capacitance (NC) feature of doped high-k dielectric HfO2 has emerged with important technological applications in CMOS nanoscale electronic devices. The discovery of ferroelectricity in HfO2 reveals a new perspective for manufacturability and scalability in multiple fields, with groundbreaking implications for the design of low-power, steep-switching transistors. Ferroelectricity in thin HfO2 films does not degrade with thickness scaling, showing excellent miniaturization properties. The voltage amplification triggered by the ferroelectric material properties further pushes its use in almost every low-power application. The NC concept promises to provide a room-temperature sub-60 mV/decade subthreshold swing in FET devices. The presence of a negative capacitor in the gate stack of a transistor can provide an amplified internal potential (step-up voltage), which can potentially overcome the fundamental limit in the subthreshold swing of conventional transistors. The theory of "capacitance matching" is of utmost importance for obtaining hysteresis-free operation with maximum amplification of the internal potential.
In this contribution, the INFN-CSN5 NegHEP (NEGative capacitance field effect transistors for the future High Energy Physics applications) project will be presented. The project proposes the use of the NC working principle in the detection systems of High Energy Physics experiments at future colliders, fostering the fabrication of tracking devices with high spatial resolution and extremely thin layers, capable of extracting signals from noise in harsh radiation environments. The project intends to study, for the first time, the radiation hardness of this innovative technology.
Advanced TCAD (Technology Computer Aided Design) modeling will be used to investigate the potential of Negative Capacitance (NC) devices in non-conventional application domains (e.g., radiation detection). Once the numerical simulations are able to reproduce experimental results, they will also gain predictive power, resulting in reduced time and cost in detector design and testing.
The Belle II experiment is taking data at the asymmetric Super-KEKB collider, which operates at the Y(4S) resonance. The vertex detector is composed of an inner two-layer pixel detector (PXD) and an outer four-layer double-sided strip detector (SVD). The SVD-standalone tracking allows the reconstruction and identification, through dE/dx, of low transverse momentum tracks. The SVD information is also crucial to extrapolate the tracks to the PXD layers, for efficient online PXD-data reduction.
A deep knowledge of the system has been gained since the start of operations in 2019 by assessing the high-quality and stable reconstruction performance of the detector. Very high hit efficiency and a large signal-to-noise ratio are monitored via online data-quality plots. The good cluster-position resolution is estimated using the unbiased residuals with respect to the tracks, and it is in reasonable agreement with the expectations.
Currently the SVD average occupancy, in its most exposed part, is still < 0.5%, which is well below the estimated limit for acceptable tracking performance. With higher machine backgrounds expected as the luminosity increases, the excellent hit-time information will be exploited for background rejection, improving the tracking performance. The front-end chip (APV25) is operated in “multi-peak” mode, which reads six samples. To reduce background occupancy, trigger dead-time and data size, a 3/6-mixed acquisition mode based on the timing precision of the trigger has been successfully tested in physics runs.
Finally, the SVD dose is estimated by the correlation of the SVD occupancy with the dose measured by the diamonds of the radiation-monitoring and beam-abort system. First radiation damage effects are measured on the sensor current and strip noise, although they are not affecting the performance.
After the manufacture and delivery of a state-of-the-art detection system for the XRF-XAFS beamline of the synchrotron light source SESAME, a new and improved detection system has been realized. This new multichannel modular detection system based on Silicon Drift Detectors consists of 8 monolithic multi-pixel arrays, each comprising 8 SDD cells, with a total area of 570 mm$^2$. Like the previous one, this 64-channel integrated detection system includes ultra-low-noise front-end electronics, a dedicated acquisition system, digital filtering, and temperature control and stabilization. With respect to the SESAME version, the new instrument implements a collimation system yielding a total collimated sensitive area of 499 mm$^2$. Optimized to work in the 3-30 keV energy range, the system shows an overall energy resolution (sum of its 64 cells) below 170 eV FWHM at the Mn K$\alpha$ line (5.9 keV) at room temperature. We highlight the system performance, and in particular the peak-to-background ratio, before and after the collimation of the sensors.
Owing to its excellent radiation hardness, diamond has been widely used in solid-state particle detectors and dosimeters in high-radiation environments. A system based on single-crystal synthetic-diamond detectors has been developed and installed to monitor the radiation level and detect beam losses near the interaction region of the SuperKEKB collider for the Belle II experiment.
In order to assess the crystal quality and the response of these devices, all diamond sensors are characterized with different radiation sources, comparing the measurement results with dedicated simulations. We devised a novel current-to-dose-rate calibration method for steady irradiation, which employs a silicon diode as a reference in order to greatly reduce the uncertainties associated with the radiation source. The calibration results, which are mutually consistent among the radiation types used, span a dose-rate range from tens of nrad/s to rad/s.
In addition, beam tests of the devices are being carried out at the linac of the FERMI@Elettra FEL in Trieste (Italy), with short 1 GeV electron bunches of 1 ps duration and bunch charges varying over more than two orders of magnitude. The aim is to test the transient response to very high intensity pulses and to study possible saturation effects due to the very high charge-carrier density in the diamond bulk.
A two-step numerical simulation approach is employed to study the time response of the diamond sensor, separating the effects of charge carriers drifting in the diamond bulk from the effects of the circuit on the signal shape.
Validation of the approach is conducted by comparing the simulation with measurements of the TCT (Transient Current Technique) signals generated by particles.
Preliminary results show remarkable agreement between measurements and numerical simulation, where the diamond resistance is modeled as a function of the variable charge density in the diamond bulk.
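The two-step idea can be illustrated with a toy numerical sketch in which the induced current from carriers drifting in the bulk is computed first and then convolved with the impulse response of the readout circuit; all parameter values below are placeholders, not those of the actual simulation.

```python
# Toy illustration of the two-step approach: (1) induced current from carriers
# drifting in the diamond bulk, (2) convolution with the impulse response of
# the readout circuit. All parameter values are placeholders.
import numpy as np

dt = 1e-12                        # time step: 1 ps
t = np.arange(0.0, 10e-9, dt)     # 10 ns window

# Step 1: simplistic bulk signal: all carriers drift together at constant
# velocity, so the Ramo-induced current is flat during the transit time.
n_pairs = 18_000                  # placeholder number of e-h pairs
transit_time = 2.5e-9             # placeholder carrier transit time
q_total = n_pairs * 1.602e-19
i_bulk = np.where(t < transit_time, q_total / transit_time, 0.0)

# Step 2: fold in the circuit response (single-pole RC with tau = 1 ns)
tau = 1e-9
h_circuit = np.exp(-t / tau) / tau
signal = np.convolve(i_bulk, h_circuit)[: t.size] * dt

print(f"peak bulk current: {i_bulk.max():.2e} A, peak shaped signal: {signal.max():.2e}")
```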
LHCb physics achievements to date include the world's most precise measurements of the CKM phase $\gamma$ and of the rare decay $B^0_s \to \mu^+\mu^-$, the discovery of $CP$ violation in charm, and intriguing hints of lepton-universality violation. These accomplishments have been possible thanks to the enormous data samples collected and the high performance of the sub-detectors, in particular the silicon vertex detector (VELO). The experiment is being upgraded to run at higher luminosity, which requires 40 MHz readout for the entire detector and newer technologies for most of the sub-detectors. The VELO upgrade modules are composed of hybrid pixel detectors and electronics circuits mounted onto a cooling substrate, composed of thin silicon plates with embedded micro-channels that allow the circulation of liquid CO$_2$. This cooling substrate gives excellent thermal efficiency, no thermal-expansion mismatch with the front-end electronics, and optimises physics performance thanks to the low and very uniform material distribution. The detectors are located in vacuum, separated from the beam by a thin aluminium foil. The foil was manufactured through a novel milling process and thinned further by chemical etching. The detectors are linked to the opto-and-power board (OPB) by 60 cm electrical data tapes running at 5 Gb/s. The tapes are vacuum compatible, radiation hard and flexible enough to allow the VELO to retract during LHC beam injection. The upgraded VELO is composed of 52 modules placed along the beam axis, divided into two retractable halves. The modules are currently being assembled into the two halves before final installation into LHCb. The design, production, installation and commissioning of the VELO upgrade system will be presented together with test results.
The High Luminosity upgrade of the Large Hadron Collider will force the experiments to cope with harsh radiation environments. The CMS experiment is considering the option of installing 3D pixel sensors in the innermost layer of its tracking system, where a fluence up to 2e16 neq/cm2 is expected. This pixel technology should maintain high detection efficiency and manageable power dissipation at such unprecedented fluences. Results from beam test experiments with pixelated 3D sensors fabricated at IMB-CNM and bump-bonded to RD53A readout chips are presented. The irradiation with protons of 400 MeV momentum to fluences of roughly 1.3-2.0e16 neq/cm2, as well as the measurement of these sensors in a test beam, were both performed at Fermilab.
The FOOT (FragmentatiOn On Target) experiment aims to measure, with sufficient precision, the double differential cross-sections of nuclear fragmentation in the energy range of therapeutic interest (100-400 MeV). These data will allow a better modelling of the dose imparted to the healthy tissues traversed, and therefore an accurate assessment of the damage induced during therapy. To succeed, the experiment will use a magnetic spectrometer operated in inverse kinematics mode, i.e. sending ions of the appropriate energy onto a proton-rich target, and studying the charge, energy and emission angle of the fragments.
The Microstrip Silicon Detector (MSD) apparatus is the last tracking station of the magnetic spectrometer, located downstream of the magnets; it consists of 6 layers of silicon microstrip sensors, organised in three x-y stations with mutually orthogonal sensors. The MSD is used to measure the spatial points of the track needed for the fragments' momentum reconstruction, while also providing additional information about the charge and the energy loss of the charged fragments.
To characterise both its tracking capabilities (namely, its spatial resolution and detection efficiency) and its response to particles that are not at the ionizing minimum, a series of tests at several accelerators has been performed.
We present the results of the complete Microstrip Silicon Detector apparatus, obtained during the construction and testing performed in the laboratory phase, as well as those obtained from data taken at beam facilities delivering protons and heavier ions (carbon and oxygen).
The High Luminosity Large Hadron Collider (HL-LHC) at CERN is expected to collide protons at a centre-of-mass energy of 14 TeV and to reach the unprecedented peak instantaneous luminosity of $5-7.5\times10^{34} cm^{-2}s^{-1}$ with an average number of pile-up events of 140-200. This will allow the CMS experiment to collect integrated luminosities up to 3000-4000 fb$^{-1}$ during the project lifetime. The current CMS Pixel Detector will not be able to survive the HL-LHC radiation conditions, and thus CMS will need a completely new Inner Tracker in order to fully exploit the highly demanding conditions and the delivered luminosity. The new pixel detector will feature increased radiation hardness, higher granularity and the capability to handle higher data rates and a longer trigger latency. The design choices for the Inner Tracker Phase-2 upgrade are discussed, along with some highlights of the technological approaches and R&D activities.
In the last few years, Low Gain Avalanche Diodes (LGADs) have been considered one of the most promising solutions for timing applications in HEP experiments, as well as for 4-dimensional tracking, due to some important advantages: a larger internal signal, better time resolution and higher radiation hardness with respect to standard p-i-n sensors.
Although the LGAD technology has recently reached a good readiness level, an increasing number of foundries and R&D laboratories are proposing novel design schemes and microfabrication technologies mainly focused on improving two key aspects of the technology: i) increasing the radiation hardness at fluences higher than 3e15 neq/cm$^2$ and ii) improving the spatial resolution by moving towards finely pixelated, high-fill-factor sensor designs.
In this contribution, the major technology developments in these directions done at Fondazione Bruno Kessler together with INFN Torino will be presented and discussed, supported by experimental results and simulation studies.
To improve the spatial resolution, a novel segmentation scheme named Trench-Isolated LGAD (TI-LGAD) has been developed. In this technology, the pixel segmentation is obtained by means of trenches, physically etched in the silicon and filled with silicon oxide. The electrical and functional characterization of the first prototypes before and after irradiation will be presented, proving the possibility of producing LGAD sensors with a pixel pitch of 50 $\mu$m and a non-sensitive inter-pixel width of less than 5 $\mu$m.
Moreover, to improve the radiation hardness at high fluences, novel junction schemes based on dopant co-implantation with electrically inactive elements (like carbon) and compensated doping profiles are under investigation. The outcome of a simulation campaign and the first experimental results will be presented, showing the potential of these techniques to mitigate the effect of radiation damage on important figures of merit of the sensor, such as the gain and the breakdown voltage.
The MALTA pixel chip is a 2 cm x 2 cm monolithic sensor developed in the 180 nm TowerJazz imaging process. The chip contains four CMOS transceiver blocks at its sides which allow chip-to-chip data transfer. The power pads are located mainly on the side edges of the chip, which allows chip-to-chip power transmission. The MALTA chip has been used to study module assembly techniques using different interconnection technologies to transmit data and power from chip to chip and to minimise the overall material budget. Several 2-chip and 4-chip modules have been assembled using standard wire bonding, ACF and laser reflow interconnection techniques. This presentation will summarise the experience with the different interconnection techniques and the performance tests of MALTA modules with 2 and 4 chips in a cosmic muon telescope. It will also show first results on the effect of serial powering on chip performance, the impact of the different interconnection techniques, and the results of mechanical tests. Finally, a conceptual study for a flex-based ultra-lightweight monolithic pixel module based on the MALTA chip with a minimum of interconnections is presented.
The upgrade of the MEG experiment, MEG II, started physics data taking in fall 2021, collecting ~8x10^13 muons on target during 34 days of DAQ live time, searching for the lepton-flavour-violating decay mu -> e gamma, forbidden in the Standard Model, with a sensitivity improved by an order of magnitude. During this period the pixelated Timing Counter (pTC), a time-of-flight detector devoted to extrapolating the muon decay time on target by measuring the positron hit time, has been fully read out and has operated stably. The detector consists of 512 fast plastic scintillator pixels (120x50(40)x5 mm^3), each read out by two arrays of 6 SiPMs connected in series and glued on opposite sides. Its goal is to achieve a resolution on the positron hit time of about 40 ps by exploiting multiple-hit events. This contribution will show how the detector achieved the design performance during the 2021 run, reaching ~39 ps for events with 8 hits, corresponding to the average number of hits expected from MC simulation for mu -> e gamma events. This result was obtained in spite of suboptimal electronic-noise conditions and of a slow increase of the SiPM dark current due to radiation damage. Instrumental in achieving this performance was a full set of hardware and software calibration tools developed to align the counters precisely in time and space relative to each other and to the rest of the MEG II detector.
For the HL-LHC upgrade, the current ATLAS Inner Detector will be replaced by an all-silicon system. The Pixel Detector will consist of 5 barrel layers and a number of rings, resulting in about 14 m2 of instrumented area. Due to the huge non-ionizing fluence (1e16 neq/cm2) and ionizing dose (5 MGy), the two innermost layers, instrumented with 3D pixel sensors (L0) and 100 μm thin planar sensors (L1), will be replaced after about 5 years of operation. All hybrid detector modules will be read out by novel ASICs, implemented in 65 nm CMOS technology, with a bandwidth of up to 5 Gb/s. Data will be transmitted optically to the off-detector readout system. To save material in the servicing cables, serial powering is employed for the low voltage.
Large scale prototyping programs are being carried out by all sub-systems.
The talk will give an overview of the layout and current status of the development of the ITk Pixel Detector.
For the upgrade of the Large Hadron Collider (LHC) to the High-Luminosity Large Hadron Collider (HL-LHC), the ATLAS detector will install a new Inner Tracker (ITk), which consists entirely of silicon detectors. Although different technologies were chosen for the inner and outer parts, the major risk for all silicon detectors is overheating, which can cause irreparable damage. As detector elements, once the detector is installed, are not accessible for several years or even for the lifetime of the detector, such damage must be avoided by all means.
The ITk interlock system is a hardwired safety system that acts as the last line of defense and is designed to protect the sensitive detector elements against upcoming risks. The core of the interlock system is an FPGA, which houses an interlock matrix. It collects signals from interlock-protected devices and distributes signals to interlock-controlled units (e.g. power supplies). Additionally, signals from external systems can be integrated. To keep the number of detector elements out of operation to a minimum, the power supplies are controlled with high granularity. The resulting large number of channels also explains why no commercial solution was selected.
We explain the concept in detail, report on the realization of the interlock system, and present future plans.
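As a purely conceptual illustration of the interlock-matrix idea (the real system is FPGA firmware with far more channels; the signal names and mapping below are invented), the logic can be pictured as a boolean mapping from input signals to power-supply enable lines, so that a single failing input disables only the units mapped to it.

```python
# Conceptual illustration of an interlock matrix: each controlled unit is
# enabled only if all input signals mapped to it are OK. Signal names and the
# mapping are invented; the real system is implemented as FPGA firmware.
from typing import Dict, List

MATRIX: Dict[str, List[str]] = {
    "LV_module_017": ["temp_module_017", "cooling_loop_3", "global_ok"],
    "LV_module_018": ["temp_module_018", "cooling_loop_3", "global_ok"],
    "HV_sector_B":   ["humidity_sector_B", "global_ok"],
}

def evaluate(inputs: Dict[str, bool]) -> Dict[str, bool]:
    """Return the enable state of every controlled unit given the input signals."""
    return {unit: all(inputs.get(sig, False) for sig in signals)
            for unit, signals in MATRIX.items()}

# A cooling failure on loop 3 disables only the two modules on that loop.
print(evaluate({"temp_module_017": True, "temp_module_018": True,
                "cooling_loop_3": False, "humidity_sector_B": True,
                "global_ok": True}))
```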
Within the RD50 Collaboration, a large and dedicated R&D program has been underway for more than two decades, across experimental boundaries, to develop silicon sensors with high radiation tolerance for the Phase-II LHC trackers. Based on the success of this R&D, these trackers are now entering their construction phase. RD50 is continuing its mission to study silicon sensors for particle tracking, shifting the focus to applications beyond the LHC. The next generation of collision experiments, such as the FCC, requires unprecedented radiation hardness in the range of a few $10^{17}\,n_{eq}/cm^2$ as well as time resolutions of the order of 10 ps. Another key challenge is to move the sensor technology away from the traditional planar passive float-zone sensors, which form large parts of the current trackers, towards sensor technologies such as CMOS, where front-end electronics can be integrated and where wide availability in industry promises cost advantages.
Key areas of recent RD50 research include technologies such as Low Gain Avalanche Diodes (LGADs), where a dedicated multiplication layer creating a high-field region is built into the sensor, resulting in time resolutions of a few tens of ps. We also study 3D sensors as a radiation-hard alternative to LGADs for fast timing applications. In another R&D line we seek a deeper understanding of the connection between macroscopic sensor properties, such as the radiation-induced increase of leakage current, doping concentration and trapping, and the microscopic properties at the defect level. A new measurement tool available within RD50 is the Two-Photon-Absorption (TPA) TCT system, which allows position-resolved measurements down to a few μm.
We will summarise the current state of the art in silicon detector development in terms of radiation hardness and fast timing, and give an outlook on silicon sensor options for e.g. the FCC.
Low Gain Avalanche Detectors (LGADs) are thin silicon detectors with moderate internal signal amplification, providing a time resolution as good as 17 ps for minimum ionizing particles. In addition, their fast rise time and short full charge collection time (as low as 1 ns) are suitable for high-repetition-rate measurements in photon science and other fields. However, a major limiting factor for the spatial resolution is the electric field termination structures, which currently limit the granularity of LGAD sensors to the mm scale.
AC-LGADs, also referred to as resistive silicon detectors, are a recent variety of LGADs based on a sensor design in which the multiplication and n+ layers are continuous and only the metal layer is patterned. This simplifies sensor fabrication and reduces the dead area on the detector, improving the hit efficiency while retaining the excellent fast-timing capabilities of the LGAD technology. In AC-LGADs, the signal is capacitively coupled from the continuous, resistive n+ layer over a dielectric to the metal electrodes. A spatial precision at the level of a few tens of micrometers is achieved by using the information from multiple pads, exploiting the intrinsic charge-sharing capabilities provided by the common n+ layer. The response depends on the hit location and on the pitch and size of the pads.
Using focused IR-laser scans, the following detector parameters have been investigated with the aim of optimizing the sensor design: the sheet resistance and termination resistance of the n+ layer, the thickness of the isolation dielectric, and the pitch and size of the readout pads. Furthermore, capacitance-voltage characterization of the sensors will be shown. Finally, charge-sharing distributions produced with data taken at the Fermilab test beam facility will be presented. The results will be used to recommend a baseline sensor for near-future large-scale detector applications like the Electron-Ion Collider, where simultaneous precision timing and position resolution is required.
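One simple way to turn multi-pad information into a position estimate is an amplitude-weighted centroid, sketched below with invented numbers; this is only an illustrative estimator, not necessarily the reconstruction used in the studies described here.

```python
# Minimal sketch of position reconstruction from charge sharing in an AC-LGAD:
# an amplitude-weighted centroid over the pads of one readout row.
from typing import Sequence

def centroid_position(pad_centres_um: Sequence[float],
                      amplitudes_mv: Sequence[float]) -> float:
    """Return the amplitude-weighted hit position along one coordinate (um)."""
    total = sum(amplitudes_mv)
    if total <= 0:
        raise ValueError("no signal above baseline")
    return sum(x * a for x, a in zip(pad_centres_um, amplitudes_mv)) / total

# Hypothetical event: pads centred at 0, 500, 1000 um seeing 5, 60, 20 mV
print(centroid_position([0.0, 500.0, 1000.0], [5.0, 60.0, 20.0]))  # ~588 um
```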
Future collider experiments operating at very high instantaneous luminosity will greatly benefit from detectors with excellent time resolution to facilitate event reconstruction. For the LHCb Upgrade II, when the experiment will operate at 1.5x10^34 cm^-2 s^-1, 2000 tracks from 40 pp interactions will cross the vertex detector (VELO) at each bunch crossing. To properly reconstruct primary vertices and b-hadron decay vertices, VELO hit time stamping with 50 ps accuracy is required. To achieve this, several technologies are under study, and one of the most promising today is the 3D trench silicon pixel developed by the INFN TimeSPOT collaboration. These 55 µm x 55 µm pixels are built on 150 µm-thick silicon and consist of a 40 µm-long planar junction located between two continuous bias junctions, providing charge-carrier drift paths of about 20 µm and total signal durations close to 300 ps. Two sensor batches were produced by FBK in 2019 and 2021. The most recent beam test was performed at SPS/H8 in 2021. Various test structures were read out by means of low-noise custom electronics boards featuring a two-stage transimpedance amplifier, and the output signals were acquired with an 8 GHz, 20 GS/s oscilloscope. The arrival time of each particle was measured with an accuracy of about 7 ps using two 5.5 mm-thick quartz-window MCP-PMTs. Two 3D trench silicon pixel test structures and the two MCP-PMTs were aligned on the beam line and acquired in coincidence. Signal waveforms were analyzed offline with software algorithms, and pixel signal amplitudes, particle times of arrival and efficiencies were measured. A preliminary analysis indicates efficiencies close to 100% for particles impinging at more than 10 degrees with respect to normal incidence, and time resolutions close to 10 ps. More up-to-date results will be presented at the Conference. 3D trench-type silicon pixels appear to be a promising technology for future vertex detectors operating at very high instantaneous luminosity.
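For context, sensor time resolutions in such setups are commonly obtained by measuring the spread of the time difference with respect to the reference counters and subtracting the reference contribution in quadrature; the sketch below shows that arithmetic with placeholder numbers, not the TimeSPOT results.

```python
# Sketch of the standard quadrature subtraction used to extract a sensor time
# resolution from measurements against a reference counter. Placeholder values.
import math

def reference_resolution(sigma_t1_minus_t2_ps: float) -> float:
    """Resolution of one of two identical reference counters, from their time difference."""
    return sigma_t1_minus_t2_ps / math.sqrt(2)

def sensor_resolution(sigma_dut_minus_ref_ps: float, sigma_ref_ps: float) -> float:
    """Subtract the reference contribution in quadrature."""
    return math.sqrt(sigma_dut_minus_ref_ps**2 - sigma_ref_ps**2)

sigma_ref = reference_resolution(10.0)        # two identical reference counters
print(sensor_resolution(12.0, sigma_ref))     # ~9.7 ps for these placeholder inputs
```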
The success of the Belle II experiment in Japan relies on the very high instantaneous luminosity, close to 6x10^35 cm^-2 s^-1, expected from the SuperKEKB collider. The corresponding beam conditions generate large rates of background particles and create stringent constraints on the vertex detector, in addition to the physics requirements.
Current prospects for the occupancy rates in the present vertex detector (VXD) at full luminosity fall close to the acceptable limits and bear large uncertainties.
In this context, the Belle II collaboration is considering the possibility of installing an upgraded VXD system around 2026 to provide a sufficient safety factor with respect to the expected background rate and possibly enhance the tracking and vertexing performance.
Our international consortium has started the design of a fully pixelated VXD, dubbed VTX, based on a depleted CMOS Monolithic Active Pixel Sensor prototype developed for LHC-type conditions and on recent lightweight detection-layer concepts.
The striking technical features of the VTX proposal are the use of the same sensor over the few layers of the system and the reduction of the overall material budget to below 2% of a radiation length. The new dedicated OBELIX sensor is under development, starting from the existing TJ-MONOPIX-2 sensor. A time-stamping precision below 100 ns will allow all VTX layers to take part in the track-finding strategy, contrary to the current situation. The first detection layers are designed according to a self-supported all-silicon concept, where 4 contiguous sensors are diced out of a wafer, thinned and interconnected with post-processed redistribution layers. Beyond a radius of 3 cm, the detection layers follow a more conventional approach with a carbon-fiber support structure and long but light flex cables interconnecting the sensors.
This talk will review the context, technical details and development status of the proposed VTX, as well as discussing the performance expectations from simulations.
LHCb has recently submitted a physics case for an Upgrade II detector to begin operation in 2031. The upcoming upgrade is designed to run at instantaneous luminosities of $1.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$ and to accumulate a sample of more than 300 fb$^{-1}$. The LHCb physics programme relies on an efficient and precise vertex detector (VELO). Compared to Upgrade I, the data output rates, radiation levels and occupancies will be about ten times higher during LHC Runs 5 and 6. To cope with the pile-up increase, new techniques to assign b hadrons to their origin primary vertex and to perform real-time pattern recognition are needed. To solve these problems, a new 4D hybrid pixel detector with enhanced rate and timing capabilities in the ASIC and sensor will be developed. This presentation will discuss the most promising technologies to be used in the future upgrade for the HL-LHC, with emphasis on timing precision as a tool for vertexing in next-generation detectors. An initial simulation effort has been made to investigate the temporal resolution required to mitigate pile-up and identify secondary vertices, which points to at least 20 ps per track. The most recent results from beam tests motivated by time measurements will be presented together with the R&D scenarios for the future upgrade. Improvements in the mechanical design of the Upgrade II VELO will also be needed to allow for periodic module replacement. The design will be further optimised to minimise the material before the first measured point on a track and to achieve a fully integrated module design with thinned sensors and ASICs combined with a lightweight cooling solution.
Proton beam therapy (PBT) is a more advanced form of radiotherapy that allows dose to be delivered more precisely, sparing healthy tissue. In recent years there has been increasing interest in a new high-dose-rate form of radiotherapy called FLASH. In FLASH radiotherapy, extremely high dose rates above 40 Gy/s and delivery times below 100 ms have shown an exceptional reduction in damage to healthy tissue with tumour control similar to standard radiotherapy. In addition, such short delivery times have the potential to eliminate dose delivery inaccuracies related to patient movement during treatment. Research is currently underway to develop the first clinical systems capable of delivering therapeutic beams at FLASH rates with protons, electrons and photons.
Two key challenges exist in the development of FLASH PBT:
1) The development of accelerator systems fast enough to deliver spot-scanned PBT beams within a suitably short time frame to elicit the FLASH effect;
2) The improvement of diagnostic and Quality Assurance (QA) detectors capable of making dosimetric measurements at FLASH rates.
A background to PBT and the advantages over conventional radiotherapy is presented. A brief history of FLASH radiotherapy is given with a focus on progress in delivering FLASH PBT. The challenges in both accelerator and diagnostics development are outlined. Finally, the UCL QuARC project to develop a FLASH-ready QA detector for fast proton range measurements is described, with experimental results of the first clinical tests of the prototype detector system.
Muons of cosmic origin have a great capability to penetrate through matter. This property is exploited in muon radiography, a technique which makes it possible to highlight the presence of discontinuities of different possible origins in the subsoil, such as cavities, tunnels or rock masses. More generally, it provides two-dimensional maps of the mass distribution; if multiple measurements from different points are available, 3D distributions can be obtained. We have developed, in collaboration with TECNO-IN SpA and S.c.a r.l. STRESS, a detector optimized for borehole studies. The cylindrical shape is realized with arc-shaped plastic scintillator bars combined with rectangular-section bars, arranged vertically. This geometry maximises the effective surface of the detector and provides a large investigation volume. Currently the first constructed prototype is 1 m high and has a diameter of about 20 cm. It consists of 64 vertical bars for measuring the azimuth angle and 256 arcs for the z-coordinate measurement, in a cylindrical coordinate system. The scintillation light is read out by 384 Silicon Photomultipliers directly coupled to the bars. Particular attention has been paid to the transport of photons inside the scintillators, with light guides realized from the bars themselves.
The front-end and acquisition electronics, entirely housed inside the detector, are based on the EASIROC chip and are characterized by limited power consumption (about 30 W for the entire detector).
The detector is enclosed in a waterproof case and is remotely controlled via Ethernet. The presentation will describe the detector and the results obtained in a series of measurements carried out in the subsoil of the hill of Mt Echia in Naples, where its ability to reveal some known cavities and to identify hypothesized hidden cavities was tested.
Proton therapy offers a highly localised dose distribution and better healthy-tissue sparing than conventional radiotherapy. Crucial in optimising patient safety is the proton range: this is the largest source of uncertainty in proton therapy and prevents full advantage being taken of the superior dose conformality. In the clinic, daily Quality Assurance (QA) is performed each morning before patient treatment, including verification of the proton range in water (a proxy for human tissue) for specific beam energies. This process, however, often compromises between speed and accuracy. Recently, there has been increased interest in FLASH: a high-dose-rate form of radiotherapy offering even greater healthy-tissue sparing. However, standard detectors used in QA become unusable at FLASH dose rates.
The Quality Assurance Range Calorimeter (QuARC) is currently under development at UCL with our industrial partners Cosylab to provide fast, accurate, water-equivalent proton range measurements for daily QA, with the capability to operate at FLASH dose rates. Based on plastic scintillator developed for the SuperNEMO experiment, the detector is a series of optically isolated scintillator sheets that sample the proton energy deposition along its path. Light from each sheet is measured by a series of photodiodes: this light output is proportional to the deposited energy. An analytical depth-light model is used to fit the data and measure the proton range to sub-mm precision.
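The range-extraction step can be pictured with the toy fit below, where a simple sigmoid distal fall-off stands in for the analytical depth-light model and all depths, light yields and parameter values are invented for the example.

```python
# Sketch of extracting a range-like parameter by fitting a depth-light profile.
# The sigmoid here is only a placeholder for the analytical depth-light model
# referred to above; depths, light yields and parameters are invented.
import numpy as np
from scipy.optimize import curve_fit

def distal_falloff(z_mm, amplitude, range_mm, width_mm):
    """Placeholder model: plateau with a sigmoid distal fall-off at the range."""
    return amplitude / (1.0 + np.exp((z_mm - range_mm) / width_mm))

# Hypothetical per-sheet data: depth of each scintillator sheet and its light output
z = np.arange(0.0, 120.0, 3.0)                               # sheet centres in mm
light = distal_falloff(z, 1.0, 85.0, 1.5)
light += np.random.default_rng(0).normal(0.0, 0.01, z.size)  # measurement noise

popt, _ = curve_fit(distal_falloff, z, light, p0=(1.0, 80.0, 2.0))
print(f"fitted range: {popt[1]:.2f} mm")                     # ~85 mm
```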
Two preliminary beam tests at UCLH with proton pencil beams between 70 and 110 MeV found that the QuARC is able to consistently recover proton ranges with good accuracy, even at low light levels. Fast curve fitting enables stable real-time range reconstruction at 40 Hz as protons are delivered to the detector. Due to its large dynamic range, the detector can be scaled up to FLASH dose rates. Further measurements are required to fully characterise the detector performance and light output at FLASH rates.
Recent developments in scintillators, together with fast digital signal processing, enable the implementation of innovative techniques for the identification and accountancy of Special Nuclear Material through combined gamma counting and spectrometry as well as neutron counting with time-stamp-correlated information. These techniques leverage the excellent Pulse Shape Discrimination and Time-of-Flight measurements that have recently become achievable. This work presents extensive tests executed with many radionuclides in agreement with ANSI standards, measurements of SNM (Pu, U, HEU, HEPu) and of n-alpha neutron sources, and the technical solutions implemented for the realization of two nuclear measurement systems dedicated to nuclear safeguards and nuclear security applications.
In the uRANIA project (μ-RWELL Advanced Neutron Imaging Apparatus) the μ-RWELL technology is applied to neutron detection, a key point for homeland security. The device is a compact resistive detector composed of two elements: the μ-RWELL PCB, incorporating the amplification stage and the readout plane, and the cathode. The latter also serves as the main element for thermal neutron detection: a thin 10B layer, sputtered on its metallic surface, allows neutron capture with the release of heavy charged particles (an alpha particle or a lithium ion) in the detector gas active volume. The sputtering has been performed by the ESS Neutron Detector Coatings Section (Linköping, SE).
Prototypes with a 10 x 10 cm2 active area and different cathode profiles have been realized and tested at the HOTNES facility of ENEA Frascati. Meshes sputtered with boron have moreover been introduced in the device active volume and tested at the same facility.
A remarkable efficiency between 5 and 10% for thermal neutrons has been measured with a single detector, using two methods: current mode and counting mode (with a CREMAT pre-amplifier). This work required an extensive simulation and validation campaign performed with GEANT4.
The project also pushes for strong engineering activities to include the FEE and the HV supply system in a compact device. The final goal is to produce an optimized design to start the development of large-area and cost-effective neutron detectors for Radioactive Portal Monitors (RPM) and Radioactive Waste Monitors (RWM), exploiting the compactness of the device, which allows stacking different detectors to increase the efficiency.
The Cosmic Ray Cube is a portable tracking device conceived for outreach activities allowing a direct scientific experience for secondary school students. In the context of the PTOLEMY project, the detector was used to measure the differential muon flux inside the bunker of Monte Soratte, a suitable location at about 50 km north of Rome (Italy). Its simple operation was crucial to finalise the measurements, carried out during the COVID-19 lockdown in a site devoid of scientific equipment. The fine scanning of the differential muon rate highlights the details of the mountain above the bunker providing a map of the thickness of the rock which surrounds the detector. The result shows a muon flux at the Soratte hypogeum of about two orders of magnitude lower than the one observed on the surface.
A 32$\times$32 Bicron 1 mm$^2$ polystyrene scintillating-fibre beam hodoscope, with an entrance window of 6$\times$6 cm$^2$, has been designed and characterised for monitoring low-energy charged particle beams. The hodoscope has been designed to fit into the 60 MeV/c negative muon beam at Port 1 of the RIKEN-RAL muon facility (UK) as a beam monitor for the FAMU experiment. Each fibre is read out by a 1 mm$^2$ Hamamatsu SiPM biased at around $-$70 V, and the signal is fanned out and digitised by means of CAEN VME digitisers.
After calibrations made using cosmic muons and a $^{90}$Sr/$^{90}$Y 3.7 kBq source, the detector was exposed to a calibrated single-proton beam at CNAO (Italy), in the momentum range between 340 MeV/c and 690 MeV/c. The activation of the instrument materials was tested by exposing a mock-up to the same particle beam in advance of the measurement run.
This experimental campaign provides a further calibration in dE/dx and shows the feasibility of the detector as an instrument for proton beam characterisation as well. In particular, aside from its usage in FAMU, we investigated the possibility of using our hodoscope as a beam monitor in hadron therapy at CNAO.
The current and the next decade will be characterized by an exponential increase in the exploration of Beyond Low Earth Orbit (BLEO) space. Moreover, the first attempts to create structures that will enable a permanent human presence in BLEO are foreseen. In this context, a detailed characterization of the space radiation field will be crucial to optimize radioprotection strategies (e.g., spaceship and lunar space station shielding), to assess the health hazards related to human space exploration, and to reduce the damage potentially induced in astronauts by galactic cosmic radiation. Since the beginning of the century, many astroparticle experiments aimed at investigating the unknown components of the universe (i.e., dark matter, antimatter, dark energy) have collected enormous amounts of data on the cosmic-ray (CR) components of the radiation in space.
Such experiments are cosmic-ray observatories. The collected data (cosmic-ray events) cover a significant period and provide integrated information on CR fluxes and their time variations on a daily basis. Furthermore, the energy range is of particular interest, since these detectors measure CRs over a very wide energy range, usually from the MeV scale up to the TeV scale, not usually covered by other space radiometric instruments. Finally, there is the possibility of acquiring knowledge of the full range of CR components and of their radiation quality.
The collected data contain valuable information that can enhance the characterization of the space radiation field.
In this talk, the state of the art in this research topic will be presented, together with the research topic initiative titled "Astroparticle Experiments to Improve the Biological Risk Assessment of Exposure to Ionizing Radiation in the Exploratory Space Missions".
The initiative was launched in December 2021 in three different Frontiers journals (Astronomy and Space Science/Astrobiology, Public Health/Radiation and Health, Physics/Detectors and Imaging).
Several hereditary diseases due to retina degeneration affect about one in ~4000 persons, resulting in total or partial blindness. These diseases cannot be cured, and the only chance of improving the quality of life of the patients is a visual prosthesis replacing the damaged layers of the retina. Some prosthesis prototypes already exist and have been implanted; nevertheless, the improvement in visual acuity is still very limited. SPEye proposes a novel approach based on a subretinal implant of a matrix of silicon photodetectors with internal amplification, SiPMs (Silicon PhotoMultipliers). The advantage over solutions employing traditional silicon diodes is that the large internal amplification avoids the need for a preamplifier, reducing the power consumption to a much lower level. This also makes it possible to reduce the size of the single photodiode down to the size of the cones and rods, increasing visual acuity without increasing power consumption. A number of preliminary tests have been performed on commercial SiPMs, including electric field calculations, simulation of the cell response to electrical stimuli, detailed measurement of the SiPM response to focused light, biocompatibility of the materials involved, mechanical matching to a spherical surface, the design and test of a remote power system, and cell deposition on the SiPM surface. These results are presented together with ideas on how to proceed in designing an optimized custom photodetector for surgical implantation in animals and humans.
This contribution deals with the development, production and testing, within the ANET project, of a new concept of compact neutron collimator for neutron radiography and tomography. The novel multi-channel collimator has proved, in extensive experimental campaigns, to deliver highly collimated neutron beams within very limited distances, outperforming other types of neutron collimators. This new instrument has been tested at different facilities, demonstrating its applicability to both reactor- and accelerator-based sources. The performance of the ANET collimator and its first application to tomography are shown and discussed.
Positron emission tomography (PET) is an effective functional imaging technique, especially for cancer diagnosis. Its performance is strictly connected to the ability to detect and reconstruct the photons emitted by positron-electron annihilation. Its sensitivity is enhanced when time information is included (time-of-flight, ToF, PET). The measurement of the detection time difference between the two photons leads to a higher-contrast image and more accurate diagnoses.
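The benefit of timing can be quantified with the standard ToF-PET relation between the photon arrival-time difference and the annihilation position along the line of response, sketched below with illustrative numbers.

```python
# Standard ToF-PET relation: the annihilation point is displaced from the
# midpoint of the line of response by c * dt / 2. Numbers are illustrative.
C_MM_PER_PS = 0.2998  # speed of light in mm/ps

def offset_from_midpoint_mm(delta_t_ps: float) -> float:
    """Displacement along the line of response for an arrival-time difference dt."""
    return C_MM_PER_PS * delta_t_ps / 2.0

print(offset_from_midpoint_mm(100.0))  # a 100 ps difference localises to ~15 mm
```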
We describe the studies for a possible development of a ToF-PET based on Micro Pattern Gas Detectors (MPGDs). This kind of detector has very good spatial and time resolution (of the order of 100 $\mu$m and of a few ns, respectively) and a very low price, making it suitable for a full-body scanner. A further improvement in the time precision (a suitable goal is to reach values of the order of 100 ps) could be obtained thanks to the Fast Timing MPGD (FTM) design, in which multiple MPGD layers compete in measuring the time information.
In order to detect the PET photons, an additional element is needed: the converter. In this material the photons interact, mostly by the Compton effect, producing electrons that drift towards the MPGD, where the multiplication step takes place.
In these studies, we show PET photon detection using an FTM in several configurations, varying not only the number of MPGD layers but also the converter material.
Currently, cancer is one of the most frequent causes of death in the world, and radiation therapy is used in approximately 50% of patients diagnosed with cancer. This implies the need for the treatment to be as efficient and safe as possible. In this work, a novel reconfigurable Dose-3D detector intended for full spatial therapeutic dose reconstruction, aiming to improve radiotherapy treatment planning by providing a breakthrough detector with active voxels, is presented. The device comprises a customizable detector head, a scalable data acquisition system (including hardware, firmware and low-level software) and state-of-the-art high-level software.
The detector head is being designed as a set of 3D-printed scintillator pieces, whose shape and arrangement can be changed to accommodate the patient's needs. A feasibility study was performed to ensure the quality of detectors manufactured with this method. The results show that the light output of the 3D-printed scintillators provides a sufficient signal-to-noise ratio for the project.
The data acquisition system (DAQ) is designed to accommodate the changing
geometry by varying the number of slices, each capable of aggregating
64 detection channels into 1 Gbps Ethernet link. The low-level software can
interact with virtually any number of DAQ units. Prototype devices have been
tested successfully with the whole detection chain in place.
The high-level software is being designed to automatically convert medical data (CT scans) into accurate 3D models of the tumor and neighbouring cells using machine learning. The obtained geometry will be used to create a dedicated detector head for the patient, as well as an environment for dose simulation in GEANT.
In conclusion, the research undertaken so far confirms the possibility of building a device that greatly personalises and improves radiotherapy planning and effectiveness.
Radiation detection in the environment is of great importance and suitable instruments are highly needed. One possibility for the detection of gamma-ray sources is a Compton gamma camera (CGC) which uses electronic collimation based on the kinematics of the Compton scattering. Most realizations comprise two separate detector planes, a scatterer and an absorber, with some recent attempts to make a single plane CGC in order to enhance compactness and reduce costs. We have designed a novel single plane CGC based on pixelated GaGG scintillators read out by silicon photomultipliers (SiPM). The CGC comprises the scatterer and the absorber layers consisting of 8x8 arrays of 3 mm x 3 mm x 3 mm GaGG scintillator pixels. In the introduced concept, the individual pixels in the scatterer layer are optically coupled to the corresponding pixels in the absorber by the matching 3 mm x 3 mm plexiglass lightguides, and hence both the scatterer and the absorber pixel in one column are readout by the same SiPM. The single-pixel energy resolution is measured to be 12.3% for 662 keV gammas. GEANT4 simulations have been done to estimate the intrinsic efficiency of various detector configurations in dependence on the lightguide length. The angular resolution is estimated from the point-source image reconstructed by the simple back-projection method. The length of 20 mm is chosen for the final design, with an estimated intrinsic efficiency of 0.11% and angular resolution of about 10.5o (FWHM). The first results of the measured characteristics of the detector will be shown. A successful realization of the described detector may be a significant step in the realization of a compact, efficient, cost-effective and easily transportable Compton gamma camera, also with the realistic potential for upgrading to application-specific larger systems comprising more identical modules.
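As an illustration of the electronic-collimation principle (a schematic example, not the collaboration's reconstruction code; function and variable names are ours), the Compton cone opening angle can be computed from the energies deposited in the scatterer and absorber pixels:
    import numpy as np

    ME_C2 = 511.0  # electron rest energy in keV

    def compton_cone_angle(e_scatter_kev, e_absorber_kev):
        """Opening angle (rad) of the Compton cone for a photon that deposits
        e_scatter_kev in the scatterer and is fully absorbed with e_absorber_kev."""
        e0 = e_scatter_kev + e_absorber_kev           # initial photon energy
        cos_theta = 1.0 - ME_C2 * (1.0 / e_absorber_kev - 1.0 / e0)
        if not -1.0 <= cos_theta <= 1.0:
            return None                               # kinematically inconsistent event
        return np.arccos(cos_theta)

    # e.g. a 662 keV photon depositing 200 keV in the scatterer -> ~48 degrees
    print(np.degrees(compton_cone_angle(200.0, 462.0)))
The source position is then imaged, for example by back-projecting such cones for many events, as done with the simple back-projection method mentioned above.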
Anatomical changes occurring during proton therapy treatment are considered a relevant source of uncertainty in the delivered dose. The INSIDE in-beam Positron Emission Tomography scanner, installed at the National Oncological Center of Hadrontherapy (CNAO), performs in-vivo range monitoring to obtain information about morphological changes in the irradiated tissue. Our purpose is to assess the sensitivity of the INSIDE PET system in detecting anatomical changes using inter-fractional range variation methods.
Eight proton treated patients, enrolled during the first phase of the INSIDE clinical trial at CNAO, were considered. Range variations along the beam direction were estimated using the Most-Likely Shift (MLS) method, which was for the first time applied to in-beam PET images. It was tested on a simulated patient, for which notable anatomical changes occurred, and validated on six patients without and two with anatomical changes. In order to establish the efficacy of the MLS method, we made a comparison with the previously used Beam Eye View (BEV) method. The sensitivity of the INSIDE in-beam-PET scanner in detecting range variation was evaluated by the standard deviation of the range difference distributions for each patient. The range differences obtained were superimposed on the CT scan as colorized maps, which indicate where an anomalous activity range variation was found.
For patients showing no morphological changes, the average standard deviation of the range variation was found to be 2.5 mm with the MLS method and 2.3 mm with the BEV method. On the other hand, for the two patients where small anatomical changes occurred, we found larger standard deviation values. In the simulated patient case, the standard deviation increases gradually with the extent of the anatomical changes. The changes detected with our range analysis were localized in the same zones as the ones observed in the control CT scans.
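A minimal sketch of a most-likely-shift-style comparison between two one-dimensional activity profiles (our schematic illustration, not the actual INSIDE analysis code): the shift that minimises the summed squared difference between a reference and a test profile is taken as the range variation along the beam direction.
    import numpy as np

    def most_likely_shift(reference, test, max_shift_bins):
        """Return the shift (in bins) that best aligns `test` to `reference`,
        scanning integer shifts in [-max_shift_bins, +max_shift_bins]."""
        shifts = list(range(-max_shift_bins, max_shift_bins + 1))
        costs = [np.sum((reference - np.roll(test, s)) ** 2) for s in shifts]
        return shifts[int(np.argmin(costs))]

    # toy profiles (z in mm, ~0.5 mm per bin): the test profile is 1.5 mm deeper
    z = np.linspace(0, 100, 200)
    ref = np.exp(-0.5 * ((z - 60.0) / 8.0) ** 2)
    tst = np.exp(-0.5 * ((z - 61.5) / 8.0) ** 2)
    print(most_likely_shift(ref, tst, 20))   # ~ -3 bins, i.e. ~1.5 mm shift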
This work presents a systematic study of multiple-count detection in a Pixirad/Pixie-II detection system. To characterize the dependence of multiple counts on the energy and the discriminator threshold, monochromatic photons have been employed. Measurements have been performed at the SYRMEP (SYnchrotron Radiation for Medical Physics) beamline of the Elettra synchrotron, Trieste. For each energy, the beam has been attenuated to obtain a very low fluence rate at the detector. By combining this low-fluence filtered beam with a short acquisition time, the probability of detecting two or more photons in neighboring pixels in a single frame has been made negligible. With this setup, when multiple counts occur, clusters of different sizes (one, two or more adjacent pixels), each induced by a single interacting photon, appear in the recorded images. For each combination of energy and threshold, the number and size of the clusters have been quantified.
Results show that, when photons with energies below the Cd K-edge are employed, the plots of the number and size of the detected clusters against the relative threshold (i.e. Threshold/Energy) are independent of the energy of the impinging photons. In particular, when the relative threshold is set to 0.1, the relative frequencies of clusters corresponding to single, double and triple counts are 0.4, 0.4 and 0.2, respectively. Conversely, when imaging with photons having energies above the Cd K-edge, clusters of more than 4 pixels are observed. In this case, the number and the maximum size of the clusters increase with the energy of the impinging photons.
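A minimal sketch of the kind of per-frame cluster analysis described above (a schematic example using scipy; array shapes, thresholds and the toy frames are illustrative, not the actual analysis code): above-threshold pixels are grouped into clusters of adjacent pixels and the cluster-size distribution is histogrammed.
    import numpy as np
    from scipy import ndimage

    def cluster_sizes(frame, threshold):
        """Label groups of adjacent above-threshold pixels in one frame and
        return the list of cluster sizes (number of pixels per cluster)."""
        mask = frame > threshold
        labels, n_clusters = ndimage.label(mask)       # 4-connectivity by default
        if n_clusters == 0:
            return np.array([])
        return ndimage.sum(mask, labels, index=range(1, n_clusters + 1))

    frames = np.random.poisson(0.01, size=(1000, 64, 64))   # stand-in for sparse frames
    sizes = np.concatenate([cluster_sizes(f, 0) for f in frames])
    counts, edges = np.histogram(sizes, bins=np.arange(0.5, 10.5))
    print(dict(zip(edges[:-1] + 0.5, counts)))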
We compare the performance of gamma-ray detectors based on monolithic BGO crystals versus LYSO ones, using a novel neural-network event characterization algorithm. LYSO represents the gold standard in applications such as Positron Emission Tomography and is considered a key component for time-of-flight (ToF) photon detection. By contrast, BGO has so far been used only for non-ToF applications because of its long scintillation decay time and low light yield.
The setup consists of a 22Na point source between two detectors composed of a 25.9 mm x 25.9 mm x 12 mm scintillating crystal coupled to Hamamatsu MPPC arrays. The acquired events are reconstructed using a neural network trained with both experimental and simulated data. The experimental data are acquired by moving the detector on a 2 mm step grid, so as to irradiate a regular mesh. The simulated data are obtained by modeling the photon interactions and the optical tracking using Geant4, and subsequently using the timestamp of each detected optical photon to simulate the response of the SiPM arrays. In each scan, about 500 coincidence events are acquired for each point, equally divided between the training and the test sets.
The x and y positions of the interactions in both crystals can be reconstructed with a full width at half maximum (FWHM) of 0.8 mm with either crystal. An energy resolution of 20.2% and 12.7% is obtained for BGO and LYSO, respectively. The time difference distribution between the monolithic and the coincidence detector shows an average coincidence time resolution (CTR) of 320 ps FWHM for BGO and 160 ps for LYSO.
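For reference, a minimal sketch (illustrative only, assuming Gaussian-distributed time differences; not the analysis code of this study) of how a coincidence time resolution can be quoted as the FWHM of the time-difference distribution between the two detectors:
    import numpy as np

    def ctr_fwhm_ps(t_monolithic_ps, t_reference_ps):
        """FWHM (ps) of the coincidence time-difference distribution,
        assuming it is well described by a Gaussian."""
        dt = np.asarray(t_monolithic_ps) - np.asarray(t_reference_ps)
        return 2.355 * np.std(dt)   # FWHM = 2*sqrt(2*ln2) * sigma for a Gaussian

    # toy example: two channels with ~96 ps jitter each -> ~320 ps FWHM
    rng = np.random.default_rng(0)
    print(ctr_fwhm_ps(rng.normal(0, 96, 10000), rng.normal(0, 96, 10000)))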
The obtained results show that the performance gap between BGO and the more performant LYSO in terms of CTR can be reduced significantly to the level that BGO becomes a valid alternative for time-of-flight applications.
Muon tomography is increasingly used in geoscience, and in glacier monitoring in particular, and has shown that these detectors can provide insights on relevant topics such as the time evolution and dynamics of glacier melting. The latest experimental results in the literature rely on detectors placed in tunnels beneath the target of the study. This approach limits the number of glaciers that can be studied, because of the limited number of sites where such experiments can take place.
We present here a novel concept for a muon tomography detector for open-sky applications, lightweight and of limited cost, so that it can be used in the field and produced in large numbers to provide large-area monitoring of glacier evolution. The aim of the detector is to measure the directional flux of muons with an angular accuracy better than 0.010 radians. The results presented show the feasibility and optimization of a detector based on scintillating fiber bundles read out by silicon photomultipliers. We will also show that such a detector is fast enough to detect and reject background muons not traversing the target under study, and that it can measure the ice thickness with a resolution of the order of 5 meters. This resolution would allow us to measure the seasonal increase and reduction of the ice thickness and the melting trend of the glacier under study, and also to monitor the formation of melting channels inside the glacier, which are one of the hot topics in glacier evolution studies.
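As a rough illustration of the measurement principle (a back-of-the-envelope sketch under a continuous-slowing-down approximation with <dE/dx> ≈ 2 MeV cm²/g, not the collaboration's simulation), the minimum muon energy needed to traverse a given ice thickness scales roughly linearly with the thickness, so the transmitted flux along a line of sight carries information on the amount of ice crossed:
    ICE_DENSITY = 0.92        # g/cm^3
    MEAN_DEDX = 2.0           # MeV cm^2/g, approximate minimum-ionising energy loss

    def min_muon_energy_mev(ice_thickness_m):
        """Very rough minimum energy (MeV) for a muon to cross `ice_thickness_m`
        of ice, ignoring radiative losses and straggling."""
        path_g_cm2 = ice_thickness_m * 100.0 * ICE_DENSITY
        return MEAN_DEDX * path_g_cm2

    for depth in (10, 50, 100):   # meters of ice
        print(depth, "m ->", min_muon_energy_mev(depth) / 1000.0, "GeV")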
The application of safety margins in treatment planning, introduced to account for possible morphological variations, prevents particle therapy from fully exploiting its intrinsic precision. Thus, the development of an in vivo verification system for particle therapy treatments is considered a crucial step towards improving the clinical outcome, allowing the consistency of the planned and delivered dose to be checked experimentally and the treatment to be re-scheduled when needed. The Dose Profiler is a device designed and built to operate as an in vivo verification system for carbon ion treatments, exploiting the secondary charged fragments escaping the patient's body. The capability of the Dose Profiler to spot morphological variations has been investigated for pathologies of the head-neck district in the context of a clinical trial (ClinicalTrials.gov Identifier: NCT03662373) carried out at CNAO (Centro Nazionale di Adroterapia Oncologica, Pavia, Italy) by the INSIDE collaboration. The measured 3D fragment emission map has been compared, computing the gamma-index after each monitored fraction, to spot possible modifications. The results obtained analysing the full patient sample are presented in detail, and the potential of charged-fragment detection for spotting the onset of morphological changes in clinical conditions is discussed.
Currently PSI delivers the most intense continuous muon beam in the world, with up to a few 10^8 μ+/s, and aims at keeping its leadership by upgrading its beamlines within the HIMB project to reach intensities up to 10^10 μ+/s, with a huge impact on low-energy, high-precision muon-based searches.
Here we present two novel beam monitors designed for the current PSI beams, which will be upgraded for HIMB operation: the scintillating fiber (SciFi) detector, a grid of scintillating fibers coupled to SiPMs, and the MatriX detector, a matrix of plastic scintillators coupled to SiPMs.
The advantage of these highly segmented detectors is that they can withstand high magnetic fields (up to 1.25 T) and measure the full beam rate at once.
The final version of the SciFi detector is going to be assembled in 2022 to be installed permanently along the MEG II beamline, and it will include an insertion system to perform measurements on demand. Being a grid of fibers, it is quasi non-invasive: 80% of the beam passes through without being affected by the detector, so it could be used for real-time monitoring of the muon beam during data taking. It is able to perform particle identification through energy deposition and TOF measurements.
The final version of the MatriX detector is also going to be assembled in 2022. It is intended for beam tuning in high-magnetic-field environments and can easily be redesigned to fit space requirements. A major upgrade with respect to the prototype will be the use of thinner scintillators, from 2 mm down to 250 μm in thickness, and the introduction of a plexiglas light guide between the scintillator and the sensor to stop low-energy particles and increase the separation from MIPs.
The performance of these detectors as measured along the beamline, their detailed MC simulations and the beam characteristics will be presented.
The Low-Temperature Cofired Ceramic (LTCC) technology is highly suitable for the production of 3D electronic microstructures. In particular, the material is characterized by good mechanical and electrical properties, a wide range of operating temperatures, high thermal conductivity and low outgassing. Additionally, the high radiation resistance of such materials has already been confirmed. This combination of parameters makes LTCC an excellent candidate for High Energy Physics (HEP) applications. Preliminary tests have already been conducted at Wroclaw University of Science and Technology on the manufacturing of Gas Electron Multiplier (GEM) amplification elements as well as readout plates. The first LTCC-GEM prototypes have been manufactured, and the results are presented. Research continues to improve their parameters as well as to produce microstructures with diameters that are difficult to obtain with standard "wet etching" techniques. It is anticipated that the developed technology will be used for specialized applications and for the production of prototype systems or small series.
The ORIGIN project (Optical Fiber Dose Imaging for Adaptive Brachytherapy), supported by the European Commission within the Horizon 2020 framework program, targets the production and qualification of a real-time radiation dose imaging and source localization system for both Low Dose Rate (LDR) and High Dose Rate (HDR) brachytherapy treatments, namely radiotherapy based on the use of radioactive sources implanted in the patient’s body.
This goal will be achieved with a 16-fiber sensor system, engineered to house a small scintillator volume at the tip of a clear fiber, allowing point-like measurements of the delivered dose. The selected scintillating materials feature a decay time of about 500 μs, and the signal associated with the primary γ-ray interaction results in the emission of a sequence of single photons distributed in time. Therefore, the operation requires a detector with single-photon sensitivity, i.e. a system designed to provide dosimetry by photon counting. The instrument being developed is based on Silicon Photomultipliers (SiPMs), with a solution fully qualified on a single-fiber prototype and currently being scaled up, relying on the CITIROC1A ASIC by WEEROC, embedded in the FERS-DT5202 scalable platform designed by CAEN S.p.A.
The paper presents the laboratory qualification of the system in terms of response uniformity, stability, and reproducibility. Moreover, the commissioning and assessment in a clinical environment, both for Low and High Dose Rate brachytherapy, will be discussed. The measurements performed in the laboratory using an X-ray cabinet show that the uncertainty due to fiber positioning, fiber non-uniformity, and geometrical acceptance is less than 1%. According to the laboratory measurement results, and taking into account the fiber non-uniformity, the source position can be obtained from the measurements in the hospital with the precision required by the ORIGIN project specifications.
Resistive plate chambers (RPCs) with electrodes of high-pressure phenolic laminate (HPL) and small gas gap widths down to 1 mm provide large-area tracking at relatively low cost, combining high rate capability and fast response with an excellent time resolution of better than 500 ps. These chambers offer a wide range of applications. In particular, they are perfectly suited for experiments requiring sub-nanosecond time resolution and a spatial resolution of the order of a few millimeters over large areas. Thin-gap RPCs will therefore be employed in the upgrade of the barrel muon system of the ATLAS experiment at the HL-LHC, and they are candidates for the instrumentation of future collider detectors and for experiments searching for long-lived particles. RPCs are also frequently used in large-area cosmic ray detectors. The large demand for RPCs exceeds the presently available production capacities. At the same time, the requirements on mechanical precision, reliability and reproducibility for collider detectors have increased. Additional suppliers with industry-style quality assurance are urgently needed. We have established RPC production procedures compliant with industrial requirements and are in the process of certifying several companies for RPC production for the ATLAS upgrade for the HL-LHC and beyond. We will report on the technology transfer, the RPC prototype production at the selected companies and the results of the certification procedure.
We propose a gamma-ray detection module for the development of a SPECT system for real-time dose monitoring in Boron Neutron Capture Therapy (BNCT). BNCT is a radiotherapy technique in which the tumor volume is loaded with boron-10 and irradiated with thermal neutrons. Boron neutron capture reactions occur, and their products deposit their energy within the tumor cells, thus sparing normal cells. Moreover, the 10B(n,α)7Li reactions produce gamma rays at 478 keV, and their detection can be used to quantify and localize the dose delivered to the patient. However, this detection is very challenging because of the mixed radiation field present during BNCT irradiations and the low 10B concentration. We report here on the performance of the BeNEdiCTE (Boron Neutron CapTurE) module, based on a 2-inch cylindrical LaBr3(Ce+Sr) scintillator crystal optically coupled to a matrix of Silicon Photomultipliers (SiPMs), when irradiating 10B-loaded samples with neutrons. Vials filled with different boron concentrations have been irradiated in the TRIGA MARK II nuclear reactor of the University of Pavia (Italy), and spectra have been acquired with the BeNEdiCTE module, wrapped in cadmium foils to avoid neutron activation of the detector. The excellent energy resolution of the module (<3% at 662 keV) makes it possible to resolve the photopeak of the boron neutron capture events at 478 keV. A very good linear correlation between the number of events detected at 478 keV and the boron concentration has been achieved, down to 62 ppm, with a neutron flux of approximately 10^5 n/cm2/s.
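As a schematic of how such a calibration can be used (an illustrative least-squares fit on made-up numbers, not the measured data of this work), the boron concentration of an unknown sample can be read off a linear fit of the 478 keV counts versus known concentrations:
    import numpy as np

    # Illustrative calibration points: known boron concentrations (ppm) and
    # background-subtracted counts in the 478 keV photopeak (made-up values).
    ppm = np.array([62.0, 125.0, 250.0, 500.0, 1000.0])
    counts = np.array([210.0, 430.0, 880.0, 1750.0, 3520.0])

    slope, intercept = np.polyfit(ppm, counts, 1)   # counts = slope*ppm + intercept

    def concentration_ppm(measured_counts):
        """Invert the linear calibration for an unknown sample."""
        return (measured_counts - intercept) / slope

    print(concentration_ppm(1200.0))   # ~a few hundred ppm with these toy numbers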
Traditional thermal neutron detectors are based on Helium-3 as the conversion and detection material, due to its large neutron cross-section.
In light of the upgrade and construction of several neutron scattering facilities, such as the European Spallation Source (ESS), and the simultaneous shortage of Helium-3, new detection technologies have been introduced. The most prominent one is to use solid converters with a large thermal neutron cross-section, such as gadolinium and boron. These materials emit charged particles when hit by a neutron. The technique then relies on the detection and/or tracking of the charged particle, as in particle physics detectors. At the same time, this requires an increase of the number of readout channels by an order of magnitude, with the advantage of also improving the position resolution by the same amount compared to traditional neutron detectors. A prime example is the Gadolinium Gas Electron Multiplier (GdGEM) detector for the NMX instrument at ESS, jointly developed by the CERN Gaseous Detector Group and the ESS Detector Group.
In this contribution, some of our efforts to transfer particle physics detector and readout electronics to neutron science will be presented.
We employed the VMM chip, originally designed for the ATLAS New Small Wheel upgrade, to read out a GEM-based neutron detector. The Timepix3 chip is employed in a neutron Time Projection Chamber as well as to read out a neutron-sensitive Micro-Channel Plate detector. These readout chips are integrated into the Scalable Readout System of the RD51 collaboration.
Here we present an overview of the Jagiellonian University PET scanner versions and their performance. The independent J-PET detector variants are the barrel J-PET, the Modular J-PET, and the Total-Body Jagiellonian-PET (TB J-PET) concept. Experimental results from the barrel J-PET and the Modular J-PET will be presented, while the TB J-PET project will be conveyed through GATE simulations [1,2].
Our objective is to develop a cost-effective positron emission tomograph with capabilities for simultaneous PET/CT and PET/MR imaging and diagnosis. J-PET detectors are the first of their kind made of plastic scintillators, with purely digital front-end electronics and a triggerless data acquisition system. A Modular J-PET prototype has been built and tested as a first step towards a total-body J-PET tomograph. An axial arrangement of plastic scintillator strips, which feature minimal light attenuation, excellent timing properties, and the possibility of cost-effectively extending the axial field-of-view, opens promising prospects for the low-cost construction of a total-body PET scanner [3]. The TB J-PET is based on the novel idea of plastic scintillators in conjunction with wavelength shifters (WLS) to improve the axial resolution of the scanner [4]. The estimated TB J-PET sensitivity and NECR are higher than those of existing commercial PET systems, making it an alternative for the wide range of clinical applications of total-body PET scanners.
Geometries, electronics, and the use of WLS are the main elements that differentiate the detectors. The system and the elaborated calibration methods, including the first image reconstruction results, will be presented on the basis of experimental and simulation results.
References:
[1] P. Moskal et al., Sci. Adv. 7, eabh4394 (2021).
[2] P. Moskal et al., Nature Communications 12, 5658 (2021).
[3] P. Moskal et al., IEEE Trans. Instrum. Meas. 70 (2021).
[4] J. Smyrski et al., Bio-Algorithms and Med-Systems 10, 59 (2014).
Recent developments in semiconductor pixel detectors allow for a new generation of positron-emission tomography (PET) scanners that, in combination with advanced image reconstruction algorithms, will achieve spatial resolutions of a few hundred microns. Such novel scanners will pioneer ultra-high-resolution molecular imaging, a field that is expected to have an enormous impact in several medical domains, neurology among others.
The University of Geneva, the Hôpitaux Universitaires de Genève, and the École Polytechnique Fédérale de Lausanne have launched the 100µPET project that aims to produce a small-animal PET scanner with ultra-high resolution. This prototype, which will use a stack of 60 monolithic silicon pixel sensors as a detection medium, will provide volumetric spatial resolution one order of magnitude better than today’s best operating PET scanners.
The R&D on the optimisation of the monolithic pixel ASIC, the readout system and the mechanics, as well as the simulation of the scanner performance, will be presented.
Alto Ritmo Concert
Jacopo Taddei (sax) & Samuele Telari (accordion)
The Mu2e experiment at Fermilab searches for the neutrino-less conversion of a negative muon into an electron, with the distinctive signature of a mono-energetic electron with an energy of 104.967 MeV. Mu2e aims to improve the sensitivity by four orders of magnitude with respect to the current best limit.
The calorimeter plays an important role, providing excellent particle identification capabilities and an online trigger filter while improving the track reconstruction capabilities; this requires a 10% energy resolution and a 500 ps timing resolution for 100 MeV electrons. It consists of two disks, each made of 674 un-doped CsI crystals, read out by two large-area UV-extended SiPMs.
In this talk, we present the status of construction and the QC performed on the produced crystals and photosensors, the development of the rad-hard electronics, and the most important results of the irradiation tests done on the different components, from crystals to SiPMs and electronics. Irradiation has been carried out with ionising doses, neutrons and protons. Production of the electronics is now underway. We summarize the QC in progress on the analog electronics and on the integrated SiPM+FEE units. Construction of the mechanical parts is also well underway. Status and plans for the final assembly and commissioning are described.
A large calorimeter prototype (dubbed Module-0) has been tested with an electron beam between 60 and 120 MeV at different impact angles and the obtained results are reported. A full vertical slice test with the final electronics is in progress on Module-0 at the Frascati Cosmic Rays test setup. Stability of response and calibration results are shown.
The existing CMS endcap calorimeters will be replaced with a High Granularity Calorimeter (HGCAL) for operation at the High-Luminosity LHC (HL-LHC). Radiation hardness and excellent physics performance will be achieved by utilising silicon pad sensors and SiPM-on-scintillator tiles with high longitudinal and transverse segmentation. One of the major challenges of the HL-LHC will be the high pileup environment, with interaction vertices spread over a few centimetres, equivalent to a few hundred ps in time. In order to efficiently reject particles originating from pileup, the HGCAL is designed to provide timing measurements for individual energy depositions with signals above an equivalent of 10 MIPs. By this means, precision timing information of the order of 30 ps for clusters above 5 GeV will be achieved. Given the complexity and size of the system, this poses a particular challenge to the readout electronics as well as to the calibration and reconstruction procedures. Recently, the proof-of-principle of the envisaged concept was demonstrated using experimental data from more than 100 time-calibrated readout channels of an HGCAL prototype tested with a particle beam in 2018.
In this contribution, we present the general challenges for the front-end electronics in the final design, the recent proof-of-concept with the HGCAL prototype in test beam, as well as the anticipated timing performance from simulation at HL-LHC.
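As a simple illustration of how per-cell time stamps can be combined at the cluster level (a generic precision-weighted average shown as a sketch under our own assumptions, not the HGCAL reconstruction algorithm), cells with larger signals, and hence smaller expected time uncertainty, dominate the cluster time:
    import numpy as np

    def cluster_time(cell_times_ps, cell_time_sigmas_ps):
        """Precision-weighted mean of per-cell times; cells with smaller expected
        uncertainty (typically larger signals) get larger weights."""
        w = 1.0 / np.asarray(cell_time_sigmas_ps) ** 2
        t = np.asarray(cell_times_ps)
        mean = np.sum(w * t) / np.sum(w)
        sigma = 1.0 / np.sqrt(np.sum(w))   # uncertainty of the weighted mean
        return mean, sigma

    # toy cluster: five cells with time uncertainties between 40 and 120 ps
    print(cluster_time([10, -5, 20, 0, 15], [40, 60, 80, 100, 120]))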
Progress in experimental high-energy physics has been closely tied to developments of high-performance calorimeters. Since their invention, crystal calorimeters have consistently achieved the best resolution for measurements of the energies of electromagnetic (e.m.) particles (electrons and photons). Recently, we experimentally demonstrated the possibility to significantly accelerate the e.m. shower development inside a lead tungstate (PWO) crystal when the incident beam is aligned with the crystal axes to within a few tenths of a degree. Here, we present the results obtained at the H2 line of the CERN SPS with a hundred-GeV electron beam and different PWO samples (0.5, 1 and 2 radiation lengths thick), coupled with SiPMs for a direct measurement of the scintillation-light enhancement when the beam is aligned with the main crystal axes. This is indeed the first direct measurement of the scintillation-light enhancement due to the shower acceleration caused by the strong axial field. Since the angular acceptance of the crystal strong field depends little on the particle energy, while the reduction of the shower length remains pronounced at very high energy, a calorimeter based on oriented crystals would feature a substantial gain in compactness while rivaling the current state of the art in terms of energy resolution, in the range of interest for present and future forward detectors, beam dumps for light dark matter searches and source-pointing space-borne $\gamma$-ray telescopes.
The project of a Multi-TeV Muon Collider represents a unique opportunity to explore the high-energy physics frontier and to measure with high precision the Higgs couplings to the other particles of the Standard Model, as well as the Higgs self-coupling, in order to confirm the results already achieved within the SM and possibly to find evidence for new physics. One of the major challenges for the design and optimization of the technologies suitable for a Muon Collider experiment is the high background induced by the decay of the muons in the beam.
This contribution will present the design of an innovative MPGD-based hadronic calorimeter.
The detector consists of a sampling calorimeter exploiting MPGDs as active layers: MPGDs offer a fast and robust technology for high-radiation environments and a high granularity for precise spatial measurements. Moreover, the detector is designed to optimize jet reconstruction and background suppression. The calorimeter is simulated using the Geant4 toolkit to support the detector R&D. The detector design and the layout optimization supported by the simulation will be presented.
We are developing a new type of electromagnetic calorimeter based on a SiW sampling design using silicon pixel sensors with digital readout. The R&D is performed in the context of the Forward Calorimeter upgrade proposal within the ALICE experiment and is strongly related to studies of imaging in proton CT; it is equally applicable to other future collider projects such as EIC, ILC, CLIC or FCC. Based on experience with a first full prototype of a digital calorimeter, which demonstrated a proof of principle, we have constructed an advanced second prototype, EPICAL-2, which makes use of the Alpide MAPS sensor developed for the ALICE ITS upgrade. A binary readout is possible due to the pixel size of $\approx 30 \times 30 \, \mu \mathrm{m}^2$. The prototype consists of alternating W absorber and Si sensor layers, with a total thickness of ~20 radiation lengths, an area of $\mathrm{30mm\times30mm}$, and ~25 million pixels. This prototype has been successfully tested with cosmic muons and with test beams at DESY and the CERN SPS.
We will report on performance results obtained at DESY, showing good energy resolution and linearity, and compare to detailed MC simulations. We will also show preliminary results of shower-shape studies with unprecedented spatial precision and of the high-energy performance as measured at the SPS.
A highly granular silicon-tungsten electromagnetic calorimeter (SiW-ECAL) is part of the ECAL design of many detectors conceived for future Higgs factories, in particular of the International Large Detector (ILD) concept, one of the two detector concepts for the detector(s) at the future International Linear Collider.
Prototypes for this type of detector are developed within the CALICE Collaboration.
The technological prototype features integrated front-end electronics and a compact layer and readout design.
During 2019-20 a stack of 15 layers, each with dimensions of ~$250\times180\times10\,{\rm mm^3}$, was assembled, for a record number of 15360 cells, one of the biggest for this type of calorimeter. A beam test at DESY was carried out in November 2021 and a second one is scheduled for March 2022. These tests will allow for first detailed studies of energy resolution and linearity, but also of the homogeneity and efficiency of the individual layers and cells. The beam test will be a proof of feasibility for a highly compact readout system that already meets the compactness requirements of detector systems at future Higgs factories. At the Pisa Meeting we will present first beam test results and the status of the implementation in simulation.
In 2021/22 we have developed a new version of the detector layers, notably optimised for power pulsing with an innovative local storage of power for the readout ASICs. The results of first tests with these layers will be available at the time of the Pisa Meeting.
Note finally that for 2022 and 2023 large-scale beam test campaigns with CALICE prototypes of hadronic calorimeters are planned. The common readout of the SiW-ECAL with the CALICE Analogue HCAL will be tested in March 2022.
The Mu2e experiment at Fermi National Accelerator Laboratory (Batavia, Illinois, USA) searches for the charged-lepton flavor violating neutrino-less conversion of a negative muon into an electron in the field of an aluminum nucleus. The dynamics of such a process is well modelled by a two-body decay, resulting in a mono-energetic electron with energy slightly below the muon rest mass (104.967 MeV). Mu2e will reach a single event sensitivity of about 3x10−17 that corresponds to four orders of magnitude improvement with respect to the current best limit.
The calorimeter plays an important role, providing excellent particle identification capabilities and an online trigger filter while aiding the track reconstruction; this requires a 10% energy resolution and a 500 ps timing resolution for 100 MeV electrons. It consists of two disks, each made of 674 un-doped CsI crystals, read out by two large-area UV-extended SiPMs. In order to meet the requirements of reliability, fast and stable response, high resolution and radiation hardness (100 krad, 10^12 n/cm^2) needed to operate inside the evacuated bore of a long solenoid (providing a 1 T magnetic field) and in the presence of a harsh radiation environment, fast and radiation-hard analog and digital electronics have been developed. To support the crystals, cool the SiPMs, and dissipate the heat produced by the electronics, a sophisticated mechanical and cooling system has also been designed and realized.
We describe the mechanical details, design and performance, along with the assembly status of all the calorimeter components and their integration in the Mu2e experiment.
The Crilin calorimeter is a semi-homogeneous calorimeter based on Lead Fluoride (PbF2) crystals read out by surface-mount UV-extended Silicon Photomultipliers (SiPMs). It is a proposed solution for the electromagnetic calorimeter of the Muon Collider. High granularity is required in order to distinguish signal particles from the background and to resolve the substructures needed for jet identification. Time-of-arrival measurements in the calorimeter could play an important role, since a large occupancy due to beam-induced backgrounds is expected, and the timing could be used to assign clusters to the corresponding interaction vertex. The calorimeter energy resolution is also fundamental to measure the kinematic properties of jets. Moreover, the calorimeter has to operate in a very harsh radiation environment: a 10 Mrad/year total ionizing dose (TID) and a 10^14 1 MeV-eq/cm^2 neutron fluence.
In June 2021, a dedicated test beam was performed at the Beam Test Facility (BTF) of the INFN-LNF with electrons. The timing resolution, evaluated as the time difference of the two SiPMs as a function of the collected energy, shows a sigma below 300 ps for deposited energy in the range 150-500 MeV. Another test beam has been performed at H2 at CERN in August 2021 with electrons of energy between 20 and 120 GeV and with 150 GeV muons. Analysis results will be shown: a timing resolution better than 100 ps has been achieved for deposited energies greater than 1 GeV. The first radiation tolerance studies and the development and tests of the small size prototype (Proto-0) are reported along with the relative results.
A bigger prototype (Proto-1), made of two layers of 3x3 PbF2 crystals each, will be realized in 2022, aiming at operation at temperatures between 0 and -20 degrees; this calorimeter will be qualified in a dedicated test beam at CERN before the end of 2022.
The Liquid Argon Calorimeters are employed by ATLAS for all electromagnetic calorimetry and for hadronic calorimetry in the region from |η| = 1.5 to |η| = 4.9. They also provide inputs to the first level of the ATLAS trigger. After a successful period of data taking during LHC Run 2, the ATLAS detector entered a long shutdown period starting in 2019. In 2022, LHC Run 3 should see an increased pile-up of 80 interactions per bunch crossing. To cope with these harsher conditions, a new trigger path has been installed during the long shutdown. This new path should improve the triggering performance significantly by increasing the number of readout units available at the trigger level by a factor of ten.
The installation of this new trigger chain required updating the legacy system to cope with the new components. More than 1500 boards of the precision readout have been extracted from the ATLAS pit, refurbished and re-installed. For the new system, 124 new on-detector boards have been added. These boards digitize the calorimeter signals at 40 MHz in a radiation environment. The digital signal is then processed online to provide the measured energy for each readout unit, which corresponds to 31 Tb/s of data. To minimize the trigger latency, the processing system had to be installed underground, where the limited space available imposed a very compact hardware structure. For this purpose, large FPGAs with high throughput have been mounted on ATCA mezzanine boards. Since more modern technologies have been used compared to the previous system, all the monitoring and control infrastructure had to be adapted.
This contribution presents the challenges of such an installation, what has been achieved, and the first results with the new system, including calibration and data-taking performance.
Noble liquid calorimetry is a well proven technology that successfully operated in numerous particle physics detectors (D0, H1, NA48, NA62, ATLAS, …). Its excellent energy resolution, linearity, stability, uniformity and radiation hardness as well as good timing properties make it a very good candidate for future hadron and lepton colliders. Recently, a highly granular noble liquid sampling calorimeter was proposed for a possible FCC-hh experiment. It has been shown that, on top of its intrinsic excellent electromagnetic energy resolution, noble liquid calorimetry can be optimized in terms of granularity to allow for 4D imaging, machine learning and - in combination with the tracker measurements - particle-flow reconstruction. This talk will discuss the ongoing R&D to adapt noble liquid sampling calorimetry for an electromagnetic calorimeter of an FCC-ee experiment with a focus on signal extraction, noise mitigation and cryostat material budget. First electrical tests on a high granularity PCB prototype and performance studies realized with the FCCSW full simulation framework will also be presented.
The Mu2e experiment at Fermilab aims to search for the SM forbidden process of muon to electron conversion in the Coulomb field of Al nuclei. The signal signature consists of 104.96 MeV monoenergetic
conversion electrons, identified by a complementary
measurement carried out by a very precise straw-tube tracker and an electromagnetic calorimeter.
The calorimeter is composed of 3.4×3.4×20 cm$^3$ undoped CsI crystals, each one
coupled to two custom UV-extended Mu2e-SiPMs, arranged in two annular disks for a total of 1348 elements, to achieve high granularity and high resolution in energy ($<10\%$) and timing ($<500$ ps) for 100 MeV electrons. In order to calibrate the calorimeter with cosmic ray muons in the assembly area, we have designed and realized a Cosmic Ray Tagger (CRT) at Laboratori Nazionali di Frascati (LNF) of INFN.
The CRT consists of two planes of eight 2.5x1.5x160 cm$^3$ plastic scintillator (EJ-200) bars, each coupled to Mu2e-SiPMs at both ends to reconstruct the hit position from their time difference. A template fit algorithm is used for the timing reconstruction of both sensors of each bar, achieving position measurements in the longitudinal direction with a resolution $\sigma_Z<1.5$ cm, as measured in dedicated runs where a 1x1 cm$^2$ scintillator is used as an external trigger. The 2D reconstruction of the hits in the two modules, placed one above and one below the calorimeter disk, allows tracking muons in 3D.
The selected tracks are finally used to equalise and calibrate the energy response of all calorimeter channels to a level below 1$\%$ using the MIP energy deposition. The CRT will also be employed to estimate the dependence of energy and time response and resolution along the crystal longitudinal coordinate. A first test will be carried out at LNF on the 51 crystals arranged in the large size calorimeter prototype named Module-0.
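A minimal sketch of the longitudinal position reconstruction from the two-end time difference (a schematic relation with an assumed effective light propagation speed, not the template-fit code used by the experiment):
    # Effective light propagation speed in the scintillator bar (assumed value,
    # of the order of 15 cm/ns for plastic scintillator bars).
    V_EFF_CM_PER_NS = 15.0

    def longitudinal_position_cm(t_left_ns, t_right_ns):
        """Hit position along the bar, measured from its centre, from the
        difference of the arrival times at the two SiPM-instrumented ends."""
        return 0.5 * V_EFF_CM_PER_NS * (t_right_ns - t_left_ns)

    # a 0.2 ns time difference corresponds to ~1.5 cm from the bar centre
    print(longitudinal_position_cm(5.0, 5.2))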
The Mu2e experiment at Fermilab will search for the Standard Model forbidden conversion of a negative muon into an electron and the calorimeter is an important part of this experiment. It is based on undoped CsI crystals, each one read by two custom-made arrays of UV-extended Silicon Photomultipliers (SiPMs). Two SiPMs glued on a copper holder and two independent Front End Electronics (FEE) boards, coupled to each SiPM, form a Readout Unit (ROU). To ensure consistency and reliability of the ROUs, we have built an automated Quality Control (QC) station to test them.
The QC station is located at LNF (Laboratori Nazionali di Frascati) and can test two ROUs at the same time.
The SiPMs are exposed to the light of a 420 nm pulsed LED attenuated by means of an automated nine-position filter wheel. The transmitted light is diffused on the SiPM surface using a box with sanded glass that also provides light tightness and a controlled environment, ensuring good reproducibility of the measurements. The ROUs are held in place by an aluminum plate that also serves as a conductive medium for temperature stabilization.
The ROUs are powered by a low voltage and a high voltage supply controlled remotely. The data acquisition of the FEE signals is handled by a Mezzanine Board and a Master Board (Dirac) USB-controlled with Python and C++ programs. The data acquisition has been parallelized and 10000 events per wheel position can be acquired in around one minute.
A scan at different light intensities is performed for each of the selected supply voltages, V$_{i}$, around the SiPM operational voltage, V$_{op}$, thus allowing the reconstruction of the response, gain, photon detection efficiency and their dependence on V$_{i}$-V$_{op}$. We will present the first results obtained on a large sample of production ROUs and the achieved reproducibility.
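For illustration, a minimal sketch of how a SiPM gain can be extracted from an LED charge spectrum (a generic peak-spacing estimate using scipy, not the station's actual analysis; histogram binning, peak-finding thresholds and the toy spectrum are assumptions):
    import numpy as np
    from scipy.signal import find_peaks

    def gain_from_charge_spectrum(charges, n_bins=500):
        """Estimate the SiPM gain (in charge units) as the typical spacing between
        consecutive photoelectron peaks of a low-light charge spectrum."""
        counts, edges = np.histogram(charges, bins=n_bins)
        centres = 0.5 * (edges[:-1] + edges[1:])
        peaks, _ = find_peaks(counts, prominence=0.02 * counts.max(), distance=20)
        if len(peaks) < 2:
            raise RuntimeError("not enough resolved photoelectron peaks")
        return np.median(np.diff(centres[peaks]))

    # toy spectrum: Poisson(3) photoelectrons, 1 p.e. = 100 charge units, smeared
    rng = np.random.default_rng(1)
    npe = rng.poisson(3.0, 200000)
    q = npe * 100.0 + rng.normal(0, 20.0, npe.size)
    print(gain_from_charge_spectrum(q))   # ~100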
The Tile Calorimeter (TileCal) is a sampling hadronic calorimeter covering the central region of the ATLAS experiment, with steel as absorber and plastic scintillators as active medium. The High-Luminosity phase of LHC, delivering five times the LHC nominal instantaneous luminosity, is expected to begin in 2029. TileCal will require new electronics to meet the requirements of a 1 MHz trigger, higher ambient radiation, and to ensure better performance under high pile-up conditions. Both the on- and off-detector TileCal electronics will be replaced during the shutdown of 2026-2028. PMT signals from every TileCal cell will be digitized and sent directly to the back-end electronics, where the signals are reconstructed, stored, and sent to the first level of trigger at a rate of 40 MHz. This will provide better precision of the calorimeter signals used by the trigger system and will allow the development of more complex trigger algorithms. The modular front-end electronics feature radiation-tolerant commercial off-the-shelf components and redundant design to minimise single points of failure. The timing, control and communication interface with the off-detector electronics is implemented with modern Field Programmable Gate Arrays (FPGAs) and high speed fibre optic links running up to 9.6 Gb/s. The TileCal upgrade program has included extensive R&D and test beam studies. A Demonstrator module with reverse compatibility with the existing system was inserted in ATLAS in August 2019 for testing in actual detector conditions. The ongoing developments for on- and off-detector systems, together with expected performance characteristics and results of test-beam campaigns with the electronics prototypes will be discussed.
TileCal, the central hadron calorimeter of the ATLAS experiment at the Large Hadron Collider (LHC), is read out by about 10,000 photomultipliers (PMTs). Earlier performance studies showed a degradation of the PMT response as a function of the integrated anode charge. At the end of the High-Luminosity LHC (HL-LHC) program, the expected integrated charge for PMTs reading out the most exposed cells is 600 C. A model of the evolution of the PMT response as a function of the integrated charge, based on the response measurements during Run 2, was built. The projected response loss at the end of the HL-LHC is 25% for 8% of the total TileCal PMTs. These PMTs will be replaced with a newer version, in order to keep the global detector performance at an optimal level. A local test setup is being used in the Pisa laboratory to study the long-term response of the new PMT model considered for replacement in the TileCal readout of the most exposed calorimeter cells. Furthermore, the performance of the new PMT model is compared to that of the old model, the version currently used to read out TileCal cells. For the first time, this new PMT model has been tested after integrating more than 300 C of anode charge. Preliminary results obtained from data collected in the Pisa laboratory over a period exceeding six months are shown in this presentation.
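As an illustration only (the functional form below is an assumption for the sketch, not necessarily the model built by the collaboration), a response-versus-integrated-charge trend can be parameterised and fitted, for example with a single exponential, to extrapolate the loss to the HL-LHC end-of-life charge:
    import numpy as np
    from scipy.optimize import curve_fit

    def response_model(q_coulomb, r0, q0):
        """Illustrative single-exponential parameterisation of the relative
        PMT response versus integrated anode charge (assumed form)."""
        return r0 * np.exp(-q_coulomb / q0)

    # made-up calibration points: (integrated charge [C], relative response)
    q = np.array([0.0, 50.0, 100.0, 200.0, 300.0])
    r = np.array([1.00, 0.975, 0.952, 0.905, 0.862])

    popt, _ = curve_fit(response_model, q, r, p0=[1.0, 1000.0])
    print("projected response at 600 C:", response_model(600.0, *popt))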
The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the LHC. It is made of steel plates acting as absorber and scintillating tiles as active medium. The TileCal response is calibrated to electromagnetic scale by means of several dedicated calibration systems.
The accurate time calibration is important for the energy reconstruction, for non-collision background removal, as well as for specific physics analyses. The initial time calibration using so-called splash events and the subsequent fine-tuning with collision data are presented. The monitoring of the time calibration with the laser system and physics collision data is discussed, as well as the corrections for sudden changes, applied before the recorded data are processed for physics analyses. Finally, the cell time resolution as measured with jet events in Run 2 is presented.
The CMS Collaboration is preparing to build replacement endcap calorimeters for the HL-LHC era. The new high-granularity calorimeter (HGCAL) is, as the name implies, a highly granular sampling calorimeter with 47 layers of absorbers (mainly lead and steel) interspersed with active elements: silicon sensors in the highest-radiation regions, and scintillator tiles equipped with on-tile SiPMs in regions of lower radiation. The active layers include copper cooling plates embedded with thin pipes carrying biphase CO2 as coolant, front-end electronics and electrical/optical services. The scale and density of the calorimeter pose many engineering challenges that we discuss here. These include: the design and production of 600 tonnes of stainless-steel absorber plates to very high physical tolerances; the development of the CO2 cooling system to maintain each 220-tonne endcap at -35 °C whilst the electronics dissipate up to 140 kW; the need to cantilever the calorimeters from the existing CMS endcap disks, using titanium wedges; the production of a thin but strong inner cylinder to take the full weight while having little impact on physics performance; the development of low-power, high-dynamic-range front-end electronics for over 6 million detector channels; and the integration of all services in a volume of only a couple of mm in height.
We give an overview of the design of the HGCAL, focusing on the materials and techniques being used to overcome the many challenges for this calorimeter, the world's first of its type at a hadron collider.
The reconstruction of electrons and photons in CMS depends on topological clustering of the energy deposited by an incident particle in different crystals of the electromagnetic calorimeter (ECAL). These clusters are formed by aggregating neighbouring crystals according to the expected topology of an electromagnetic shower in the ECAL. The presence of upstream material causes electrons and photons to start showering before reaching the ECAL. This effect, combined with the 3.8T CMS magnetic field, leads to energy being spread in several clusters around the primary one. It is essential to recover the energy contained in these satellite clusters to achieve the best possible energy resolution. Historically, satellite clusters have been associated to the primary cluster using a purely topological algorithm which does not attempt to remove spurious energy deposits from additional pileup interactions (PU). The performance of this algorithm is expected to degrade during LHC Run 3 (2022+) because of the larger average PU levels and the increasing levels of noise due to the ageing of the ECAL detector. New methods are being investigated that exploit state-of-the-art deep learning architectures like Graph Neural Networks (GNN) and self-attention algorithms. These more sophisticated models improve the energy collection and are more resilient to PU and noise. This talk will cover the challenges of training the models and the opportunities that this new approach offers.
The FoCal-E detector is a part of the FoCal detector aiming to provide unique capabilities to measure small-x gluon distributions via prompt photon production. It represents an upgrade to the ALICE experiment, and will be installed during LS3 for data taking in 2027–2029 at the LHC.
The detector is a Si+W sampling calorimeter with a hybrid design combining two different silicon readout technologies: pad layers and pixel layers.
A first prototype is under development to demonstrate the performance of the proposed readout electronics. It is composed of 18 single E-pad boards and 2 MAPS layers, all connected via an interface board to an aggregator system. Each single E-pad contains 72 Si sensors and a front-end ASIC (HGCROC). This ASIC ensures that the response of each sensor is read out using an integrated charge-sensitive amplifier-shaper and an analog-to-digital conversion system (from a few fC up to 10 pC), enabling the transmission of data over a standard digital connection. This board also contains probes to monitor the temperature and the power consumption, and a local power converter to provide clean supply voltages. The aggregator board is used to gather the data and trigger information from the detector (data rate of 1.28 Gb/s). It is based on an FPGA, allowing the extraction of data via multiple interfaces.
This prototype is first used to validate the choice of the ASIC, with the design of a test board capable of emulating the response of the Si sensors while the aggregator board and its associated firmware and software are being developed. It also allows measuring the performance of the system, through beam measurements and a cosmic-ray test for the MIP response. The results are used to optimize the design of the final E-pad modules and to finalize the aggregator system.
Jets play a central role in many physics analyses. Initially, jets based on topological clusters (Topo jets), using only the calorimeter information, were used. In recent years, jets reconstructed with the Particle-Flow algorithm (PFlow jets), which also leverages the tracking information, have found increasing application. It is thus necessary to test whether the calibration methods applied to Topo jets can also be used for PFlow jets in ATLAS. Two different studies will be discussed.
First of all, estimating the uncertainty on the Jet-Energy-Scale (JES) calibration at very high pT (pT > 2 TeV) by using the calorimeter response to single particles (single particle uncertainties) is studied. It is found to be very well applicable to PFlow jets in this pT regime. Further, a good agreement between data and Monte Carlo simulation is observed, which is stable with respect to $\eta$ as well as pT.
Secondly, the performance of the Local Hadron Calibration (LCW) for PFlow jets (LCPFlow) is investigated. It aims at correcting for the difference in the calorimeter response to processes at the electromagnetic and hadronic scale. This yields very promising results as well: Overall, a better agreement of LCPFlow jets with truth jets is found compared to PFlow jets at the electromagnetic scale (EMPFlow jets). On top of that, LCPFlow jets show an overall better resolution.
The Light-only Liquid Xenon (LoLX) experiment is designed to study the properties of light emission and transport in liquid xenon (LXe) using silicon photomultipliers (SiPMs). In addition, we also plan to perform long-term stability studies of the SiPMs in LXe. Another important goal of the LoLX experiment is to characterize and exploit the timing differences between Cherenkov and scintillation light production to develop a background discriminator for low-background LXe experiments such as neutrino-less double beta decay searches. The first phase of LoLX is operational and consists of an octagonal 3D-printed structure housing 24 Hamamatsu VUV4 SiPM modules, for a total of 96 individual SiPM channels. The LoLX structure is placed in a cryostat that allows for the liquefaction of Xe, with a Sr-90 beta emitter placed at the center of the LoLX detector volume. The beta-decay electrons, interacting with the LXe, produce the Cherenkov and scintillation light studied with LoLX. This talk will cover the current status of the LoLX experiment and present the results obtained from its first runs. This data-taking campaign focused on validating the optical transport simulations of LoLX performed in GEANT4 by the collaboration. In addition, the effect of external cross-talk (eXT) between the SiPMs was also explored. The DAQ system has been recently upgraded with a GSPS ADC, allowing for improved timing resolution of the light signals.
Several future Higgs factories based on the electron-positron collider are planned for precision Higgs physics to search for the new physics beyond the Standard Model. The calorimeters with the high granularity play a crucial role on the precision Higgs measurement. Especially the high granularity of the cell size of the $5~\mathrm{mm}\times5~\mathrm{mm}$ is required for the electromagnetic calorimeter.
The Scintillator Electromagnetic CALorimeter (Sc-ECAL) is one of the technology options for the ECAL at the future Higgs factories. It is based on a scintillator strip readout by a Silicon Photomultiplier (SiPM) to realize the $5~\mathrm{mm}\times5~\mathrm{mm}$ cell size by aligning the strips orthogonally in x-y configuration. In order to demonstrate the performance of the Sc-ECAL and the scalability to the full-scale detector, the technological prototype has been developed with the full 30 layers.
The commissioning of the prototype is based on long-term tests with LED and cosmic-ray. The per-channel calibrations are successfully done for the key parameters of the Sc-ECAL. It is found that the Sc-ECAL can be properly calibrated and operated.
The performance of the Sc-ECAL is evaluated. The key parameters are successfully monitored and it is found that most of the parameters show excellent stabilities over a long period. The efficiency and position resolution are found to be consistent with the Monte Carlo simulation, and the position resolution meets the requirement of the cell size of $5~\mathrm{mm}\times5~\mathrm{mm}$. The shower analysis is performed using the cosmic-ray. The showers induced by the cosmic-ray are successfully measured as expected in the simulation.
In conclusion, the Sc-ECAL is found to be a promising and mature technology for highly granular calorimetry, enabling precision physics at the future Higgs factories.
The MEG II experiment searches for the $\mu \rightarrow e \gamma$ decay, one of the charged-lepton-flavor-violating decays; its discovery would be clear evidence of new physics beyond the Standard Model. The liquid xenon (LXe) gamma-ray detector, which precisely measures the energy, position, and timing of the gamma-ray from $\mu \rightarrow e \gamma$, is key to the unprecedented sensitivity of the MEG II experiment. The LXe scintillation light is read out by VUV-sensitive photosensors (4092 SiPMs and 668 PMTs) specially developed for the MEG/MEG II LXe detector. In 2021, a full commissioning of the LXe detector with all channels read out was carried out for the first time, and a pilot physics run was also performed during the 2021 beamtime. The detector response was monitored using a muon beam and several calibration sources, and the timing and energy resolutions were measured using gamma-rays with energies around the signal energy, from $\pi^0$ decays following charge-exchange reactions of charged pions in a liquid hydrogen target. The performance of the entire LXe detector as a function of the gamma-ray interaction point was evaluated. Further investigations were performed on the degradation of the photosensor sensitivity by radiation damage observed in previous years. The MEG II LXe detector has been successfully commissioned and is now ready for the long physics run of MEG II starting in 2022. In this presentation, the performance of the LXe detector measured during commissioning will be reported.
The MEGII experiment searches for the μ+ → e+γ decay with a sensitivity of $6\times10^{-14}$ at 90% C.L. The precise measurement of the kinematic variables of the two particles in the final state, generated by muons stopped in a thin target, is key to finding the signature of this process. A major upgrade has been carried out over the last years and a new Liquid Xenon (LXe) calorimeter has been introduced, equipped with both PMTs and SiPMs immersed in the xenon, collecting the Xe scintillation light emitted in the Vacuum Ultra Violet region.
MEGII has successfully completed its engineering run and has just started data taking.
The characterization of the 1000 L LXe calorimeter is a cardinal (and non-trivial) task. To fully and precisely characterize the performance of this detector, physics events in the μ → eγ signal region are desired.
The production at rest of π0 in the charge exchange reaction π− + p → π0 + n matches this requirement. Gammas from the π0 decay have a flat energy spectrum in the interval 54.9 < Eγ < 82.9 MeV, and a 54.9 MeV γ can be easily selected by detecting a coincident γ emitted in the opposite direction. An auxiliary detector facing the LXe calorimeter is therefore required to select the higher-energy γ, while the other γ is used for calibration. This method allows the energy, position and time resolutions of the LXe calorimeter to be established.
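For reference, the quoted endpoints follow from standard two-body kinematics (particle masses from the PDG, not given in the abstract): in at-rest charge exchange the π0 is produced with a momentum of about 28 MeV/c, so that
\[
E_\gamma^{\pm} = \tfrac{1}{2}\left(E_{\pi^0} \pm p_{\pi^0}\right) \simeq \tfrac{1}{2}\left(137.8 \pm 28.0\right)\ \mathrm{MeV} \simeq 54.9\ \text{and}\ 82.9\ \mathrm{MeV},
\]
with a flat spectrum in between because the π0 → γγ decay is isotropic in the π0 rest frame.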
A core component of these measurements is a target with the right properties and able to work in the presence of a high magnetic field. Here we present the liquid hydrogen target designed, built and used for this purpose during the first data-taking period of MEGII.
The challenge for new calorimetry at upcoming intensity-frontier experiments is to provide detectors with ultra-precise time resolution and excellent energy resolution.
Two very promising materials on the market are BrilLanCe (cerium-doped lanthanum bromide, LaBr3(Ce)) and LYSO (Lutetium Yttrium OxyorthoSilicate, Lu2(1-x)Y2xSiO5(Ce)), supported by recent developments aimed at providing relatively large crystals.
The response of both LaBr3(Ce) and LYSO detectors, read out with silicon photomultipliers, has been studied via detailed Monte Carlo (MC) simulations, for impinging gammas in the range of 50-100 MeV. The MC simulations are based on GEANT4 and include the full electronic chain up to the waveform digitization and, finally, the reconstruction algorithms.
For the (R = 4.45 cm, L = 20.3 cm) LaBr3 (Ce) crystal an energy resolution of σE /E ∼ 2.3(1)% and a timing resolution of σt ∼ 35(1) ps have been predicted. The energy resolution can be further improved by using larger crystals (either R = 6.35 cm or R = 7.6 cm, L = 20.3 cm) approaching respectively a σE/E ∼ 1.20(3)% or a σE /E ∼ 0.91(1)%.
Due to its shorter radiation length and smaller Molière radius, the LYSO crystal of the available size (R = 3.5 cm, L = 16 cm) performs better in terms of energy deposit than the currently available larger LaBr3(Ce) crystal. An energy resolution of σE/E ∼ 1.48(4)% can be obtained, which can be further improved using bigger crystals (R = 6.5 cm, L = 25 cm, σE/E ∼ 0.74(1)%). A σt ∼ 40(1) ps can also be achieved.
The size of the crystals considered here is optimal for assembling large segmented detectors, as will be shown. These results place such future high-energy calorimeters at the forefront of detector development for intensity-frontier experiments.
SiPMs (Silicon Photo-Multipliers), also referred to as Multi-Pixel Photon Counters (MPPCs), are solid-state photodetectors consisting of a high-density matrix of avalanche photodiodes. Each photodiode operates in Geiger mode and works as an independent photon counter. They are characterized by a high internal gain, which allows the detection of anything from a single photon to several thousand photons. Furthermore, their internal avalanche amplification is fast enough to provide good timing properties. Thanks to their insensitivity to magnetic fields, low operating voltages, low cost and compactness, SiPMs have a wide range of applications in high-energy physics instrumentation.
The present study aims to investigate the performance of a SiPM readout for applications in calorimetry.
Hamamatsu MPPCs, with an effective photosensitive area of $3\times3~\mathrm{mm}^{2}$ and $\lambda_{MAX} = 450$ nm, have been tested in two different configurations of 16 and 64 channels for reading out a sampling calorimeter.
A dedicated experimental set-up has been realised using an electromagnetic calorimeter made of thin (0.5 mm) lead layers and scintillating fibres. The calorimeter is segmented in modules with a diameter of 4.3 cm; the internal modules are read out by conventional photomultiplier tubes (PMTs) connected to light guides at one end. Similar light guides are used to couple the SiPMs under test to the other end, in different guide configurations.
The possibility of a direct SiPM readout, without light guides, is also evaluated.
The SiPM efficiency and the time and space resolutions have been studied using secondary cosmic rays, with an external trigger provided by a system of scintillators.
Preliminary results, compared with the PMT performance, will be presented.
The dual-readout calorimetric technique reconstructs the event-by-event electromagnetic fraction of hadronic showers through the simultaneous measurement of the scintillation (S) and Cherenkov (C) light produced during the shower development. The new generation of prototypes, based on Silicon Photomultiplier (SiPM) readout, adds unprecedented granularity to the well-known energy resolution.
A highly granular prototype ($10\times10\times100~\mathrm{cm}^3$), designed to fully contain electromagnetic showers, has recently been built and qualified on beam. It consists of 9 modules, each made of 320 brass capillaries (OD = 2 mm) equipped alternately with scintillating and clear fibres. All the fibres of the central module are instrumented with SiPMs (one per capillary), while PMTs are used for the other modules. The SiPM readout is based on the new FERS system designed by CAEN to fully exploit the CITIROC 1A performance (i.e. wide dynamic range, linearity and multi-photon quality) even with SiPMs of small pitch (15 μm) and small gain ($1\text{--}3\times10^{5}$).
The recent test beam allowed the readout system to be qualified and a procedure to be defined to calibrate the SiPM response from ADC counts to photoelectrons (ph-e) over a wide dynamic range: from 1 to 4000 ph-e (almost 60% of the cells available in the SiPMs in use). In addition, this calibration makes it possible to compensate for the intrinsic non-linear response of the sensor, when needed. The number of ph-e per GeV has been measured for both scintillation and Cherenkov light, together with the calorimetric performance in the energy range of 10–100 GeV.
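As an illustration of the kind of non-linearity compensation mentioned above (the actual CITIROC/FERS calibration procedure is not detailed in the abstract), a minimal sketch assuming the common single-exponential SiPM saturation model:

    import numpy as np

    def true_npe(n_fired, n_cells):
        # Invert the single-exponential saturation model
        #   n_fired = n_cells * (1 - exp(-n_pe / n_cells))
        # to recover the number of impinging photoelectrons.
        n_fired = np.asarray(n_fired, dtype=float)
        return -n_cells * np.log1p(-n_fired / n_cells)

    # Example: 4000 fired cells on a hypothetical 7000-cell SiPM
    # correspond to roughly 5900 true photoelectrons.
    print(true_npe(4000, 7000))

The cell count above is purely illustrative; in practice it would be taken from the datasheet of the SiPM actually used.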
In this talk, I will review the system qualification, the test beam results, and the ongoing R&D required to build a demonstrator capable of fully containing hadronic showers, as needed to assess the hadronic energy resolution.
CUPID is a proposed upgrade to CUORE, the ton-scale neutrinoless double beta decay experiment currently operating at the Laboratori Nazionali del Gran Sasso (LNGS). The primary background in CUORE is degraded $\alpha$'s, and CUPID aims to reduce this background by over a factor of 100 via a two-channel energy-collection approach using scintillation light and heat. This will allow for event-by-event discrimination of $\alpha$ and $\gamma$/$\beta$ interactions. In order to meet the timing and energy resolution requirements of CUPID and beyond, large-area light detectors using low-Tc transition-edge sensors (TES) deposited on Si wafers are a promising technology. Here we will present the current state of the ongoing collaboration with ANL to develop light detectors using an IrPt bilayer TES with Au pads to enhance the thermal conductivity to the Si wafer. We report on preliminary measurements of the timing and energy resolution, and on possible differences in response due to position. Additionally, we will discuss ongoing plans to explore multiplexed readout and other improvements.
Noise at the quantum limit over a large bandwidth is a fundamental requirement in forthcoming particle physics applications operating at low temperatures, such as neutrino measurements, X-ray observations, CMB measurements, and axion dark matter detection---involving MKIDs, TESs and microwave resonant cavity detectors---as well as in quantum technology applications, such as the high-fidelity readout of qubits. The readout sensitivity of these detectors is currently limited by the noise temperature and bandwidth of available cryogenic amplifiers such as HEMTs or JPAs. The DARTWARS (Detector Array Readout with Traveling Wave AmplifieRS) project has the goal of developing high-performing, innovative traveling-wave parametric amplifiers with high gain, high saturation power, and nearly quantum-limited noise. The practical development follows two different promising approaches, one of which is based on Josephson junctions and is presented in this contribution: the Josephson Traveling Wave Parametric Amplifier (JTWPA).
Our JTWPA is designed as a coplanar waveguide embedded with a serial array of non-hysteretic single-junction rf-SQUID cells, which allow operation in both three-wave-mixing and four-wave-mixing modes. To avoid the presence of additional undesired tones besides the signal and the idler, two layouts are currently being studied: resonant phase matching and quasi-phase matching.
A preliminary characterization was performed on a prototype JTWPA with 990 cells, in a dilution refrigerator with a base temperature of 15 mK. Operation in three-wave mixing was demonstrated, although with some non-homogeneity issues, and a gain of about 25 dB was obtained.
The next step consists in improving the homogeneity of the junctions: a sample of junctions with critical current $4~\mu\text{A}$ and self-capacitance 225 fF was fabricated. Their room-temperature normal resistances were tested with a probe station, showing a good resistance spread, between 5% and 10%.
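For orientation, these junction parameters imply, via the standard relations (an illustrative estimate, not a quoted design value),
\[
L_J = \frac{\Phi_0}{2\pi I_c} \simeq \frac{2.07\times10^{-15}\ \mathrm{Wb}}{2\pi \times 4\ \mu\mathrm{A}} \approx 82\ \mathrm{pH},
\qquad
\frac{\omega_p}{2\pi} = \frac{1}{2\pi\sqrt{L_J C_J}} \approx 37\ \mathrm{GHz},
\]
i.e. a junction plasma frequency well above the few-GHz signal band of the amplifier.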
Spectral information and imaging at photon wavelengths longer than 1.1 µm (corresponding to the Si bandgap) are highly valued in astronomical applications. Thin-film-based image sensors are considered one of the next-generation imaging platforms for this long-wavelength spectral range, which cannot be covered by Si image sensors. Colloidal Quantum Dot (CQD)-based imagers are appealing due to their potential for scaling the pixel pitch and array size. Monolithic processing of the photodiode (PD) layer onto the Si Readout Integrated Circuit (ROIC) allows the pixel dimensions of CQD-based imagers to be scaled substantially compared to flip-chip integrated ones with bulk crystalline PDs made of III-V (InGaAs, InSb, ...) or II-VI semiconductor materials. In addition, the light-absorption peak of CQD PDs made from PbS can be tuned to cover the extended Short-Wave InfraRed (SWIR) wavelength region, providing the capability for hyperspectral imaging and spectroscopy. In the sensors presented, the scalability is demonstrated by a pixel pitch down to sub-2 µm for our CQD SWIR imagers, beneficial for better resolution: it enables diffraction-limited imaging with oversampling of the optical point spread function, which can be used to correct aberrations and relax the requirements on optical-system tolerances. Making a single CQD PD imager chip as large as the full wafer becomes possible thanks to the full-wafer-level processing capability. Thus a sensor area of up to 20,000 mm2, with up to 6 gigapixels, can be processed, assuming 200 mm Fab processing and a minimum 1.82 µm pixel pitch. The external quantum efficiency (EQE) is shown to be 40% at the peak absorption wavelength of 1450 nm. We believe that this scalable SWIR imager with competitive EQE values can be applied to payload-limited satellites (e.g., CubeSats), as the high spatial and spectral resolution sensor enables on-the-fly reconfigurability, extending the mission capacity of the satellites.
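For illustration, the quoted maximum pixel count follows directly from the quoted area and pitch:
\[
\frac{20\,000\ \mathrm{mm}^2}{(1.82\ \mu\mathrm{m})^2} \approx 6\times10^{9}\ \text{pixels}.
\]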
Large arrays of superconducting transition-edge sensor (TES) X-ray microcalorimeters are becoming the key technology for space- and ground-based observatories in the fields of astrophysics, laboratory astrophysics, particle physics, plasma physics and material analysis. TES-based X-ray detectors are non-dispersive spectrometers combining high resolving power, imaging capability and high quantum efficiency.
TES X-ray calorimeter technology is entering a new era in which arrays with more than 1000 pixels are routinely fabricated, and cutting-edge instruments with dozens of multiplexed channels are being built for fundamental research at synchrotron and free-electron-laser facilities and plasma sources.
At SRON, we are developing the focal-plane assembly and the back-up detector array for the X-ray Integral Field Unit (X-IFU) on board Athena. X-IFU will host an array of more than 3000 TES pixels with $T_c\simeq 90\, \mathrm{mK}$, sensitive in the energy range 0.2--12 $\mathrm{keV}$, with an unprecedented energy resolution of 2.5 eV at 7 keV.
We have recently demonstrated the Frequency Division Multiplexing (FDM) readout of 37 TiAu TES calorimeters with an exquisite energy resolution of 2.23 eV at 5.9 keV. Our FDM technology has proven to have low electrothermal cross-talk and to be relatively insensitive to external magnetic fields, compared to other multiplexing schemes.
We will discuss the prospects of using our cryogenic high-resolution X-ray imaging spectrometer, based on TES detectors and FDM readout, as a diagnostic instrument for existing and future fusion reactors. Moreover, our detectors could contribute to the study of the atomic properties of high-Z metals, such as tungsten and its many ionization stages.
We will finally show the challenges of developing and reading out very large arrays of TES X-ray calorimeters, with more than 10000 pixels, for future astrophysics and fundamental research in particle physics, such as the detection of solar axions and the direct detection of the neutrino mass.
The electron electric dipole moment (e-EDM) is a model-independent probe of parity and time-reversal violation at energies beyond those that can be reached in particle colliders. The PHYDES project is an R&D experiment funded by CSN V of INFN aimed at testing innovative approaches for e-EDM studies. In particular, the proposed idea is to use diatomic polar molecules, where e-EDM effects are amplified by the large internal molecular field, embedded in cryogenic matrices made of unreactive elements. In such solids a diatomic molecule substitutes one atom or molecule of the host matrix and, since the host-guest ratio can be 1:200, the density of the host molecules could be as large as $10^{22}\,\mathrm{cm}^{-3}$.
The main goal of the PHYDES R&D program is to embed Barium Fluoride (BaF) molecules in a solid matrix of para-hydrogen (p-H2), to study their alignment with an external electric field, and to verify the assumption that the BaF molecules are all polarized in the p-H2 matrix.
The set-up we are developing, to grow cryogenic crystals of around 1 cm$^3$ doped with about 100 ppm of BaF, consists of five different chambers. In the first one the BaF molecules are produced, ionized, accelerated and focused into the Wien-filter chamber, which is necessary for mass selection. The molecular beam is then neutralized and cooled in order to prepare the BaF for insertion into the cryogenic crystal. In parallel we are developing a suitable system for para-hydrogen production and storage. Finally, the last chamber is the condensation chamber, where a crystal of p-H2 doped with BaF can be grown through the matrix-isolation technique.
Coherent elastic neutrino nucleus scattering (CEvNS) is a well-predicted Standard Model process only recently observed for the first time. Its precise study could reveal non-standard neutrino properties and open a window to search for physics beyond the Standard Model.
NUCLEUS is a CEvNS experiment conceived for the detection of neutrinos from nuclear reactors with unprecedented precision at recoil energies below 100 eV. Thanks to the large cross-section of CEvNS, an extremely sensitive cryogenic target of 10g of CaWO4 and Al2O3 crystals is sufficient to provide a detectable neutrino interaction rate.
NUCLEUS will be installed between the two 4.25 GW reactor cores of the Chooz-B nuclear power plant in the French Ardennes, which provide an antineutrino flux of $1.7\times10^{12}\ \bar{\nu}/(\mathrm{s\,cm^{2}})$. At present, the experiment is under construction. The commissioning of the full apparatus is scheduled for 2022, in preparation for the move to the reactor site.
This talk will present the concept and design of the experimental setup and describe in detail the sensitive detector technology enabling an advance of neutrino physics at the low-energy frontier.
Quantum sensing is a rapidly growing field of research which is already improving the sensitivity of fundamental physics experiments. The ability to control quantum devices to measure physical quantities has received a major boost from superconducting qubits and from the improved capacity to engineer and fabricate this type of device. Superconducting qubits have already been successfully applied to the detection of single photons via Quantum Non-Demolition (QND) measurements: this technique enables multiple measurements of the same single photon, improving the sensitivity and reducing the dark-count rate. The goal of the Qub-IT project is to realize an itinerant single-photon counter exploiting QND measurements and entangled qubits, in order to surpass current devices in terms of efficiency and dark-count rate. Such a detector has direct applications in axion dark-matter experiments (such as QUAX), which require the photon to travel along a transmission line before being measured: since large magnetic fields are needed for the axion to interact, the superconducting device must be placed far from the interaction region.
In this contribution we present the design and simulation of the first superconducting device, consisting of a transmon qubit coupled to a resonator, carried out with Qiskit-Metal (IBM): this Python package provides a user-friendly toolkit for chip prototyping and simulation. Qiskit-Metal offers different analyses to extract the circuit Hamiltonian parameters, such as the resonant frequencies, the anharmonicity and the qubit-resonator couplings, as well as an estimate of the qubit decay time ($T_{1}$). The Lumped Oscillator Model (LOM) and Energy Participation Ratio (EPR) analyses exploit Ansys Q3D and Ansys HFSS, respectively, to perform the electromagnetic simulations before calculating the Hamiltonian of the circuit.
The simulation phase is fundamental in order to tune each parameter of the chip design to obtain the desired Hamiltonian before moving to the manufacturing stage.
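As a flavour of this workflow, a minimal Qiskit-Metal sketch (with hypothetical geometry values; the actual LOM extraction additionally requires an Ansys Q3D installation and junction parameters) could look like:

    from qiskit_metal import designs
    from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket
    from qiskit_metal.analyses.quantization import LOManalysis

    # Planar chip design with a single transmon pocket and one readout pad.
    design = designs.DesignPlanar()
    design.overwrite_enabled = True
    q1 = TransmonPocket(design, 'Q1', options=dict(
        pad_width='425 um',
        pocket_height='650 um',
        connection_pads=dict(readout=dict(loc_W=+1, loc_H=+1))))

    # Lumped Oscillator Model analysis backed by the Ansys Q3D renderer.
    lom = LOManalysis(design, "q3d")
    # lom.sim.run(components=['Q1'], open_terminations=[('Q1', 'readout')])
    # lom.run_lom()  # yields qubit frequency, anharmonicity, couplings

The two commented lines indicate where the capacitance extraction and Hamiltonian calculation would be launched once the renderer is available.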
To achieve the extreme sensitivities necessary to perform elusive particle searches like $\beta$-decay spectroscopy for neutrino mass measurement or dark matter detection, future experiments will employ large arrays of cryogenic detectors, such as metallic-magnetic calorimeters or transition-edge sensors (TES).
A TES is a thin film of superconducting material weakly coupled to a thermal bath typically at $T < 100$ mK, which can be used as a radiation detector by exploiting its very sharp phase transition. We have been developing X-ray TES micro-calorimeters optimized for X-ray astronomy up to energies of 12 keV, as well as a frequency-domain multiplexing (FDM) technology to perform their readout. Energies up to $\sim$10 keV are compatible with the expected spectrum of axion-like particles generated in the Sun by electron processes and Primakoff conversion and arriving on Earth, which will be investigated in the future by axion helioscopes. A fundamental instrumental requirement is the background of the X-ray detectors, which should be at a level of $10^{-7}$ keV$^{-1}$cm$^{-2}$s$^{-1}$. TESs represent a suitable choice for this science case, given their high energy resolution and quantum efficiency, low intrinsic background and scalability to large ($\sim 1000$s) arrays.
In this contribution we present a measurement of the X-ray detectors background, using a TES array with $240\times240\ \mu \text{m}^2$ absorber area and energy resolution at a level of 2 eV at 5.9 keV with an FDM readout. With an effective integration time of 40 days, we measured a background rate at a level of $10^{-3}$ keV$^{-1}$cm$^{-2}$s$^{-1}$ in the energy range of 1 to 10 keV.
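For scale, a back-of-the-envelope estimate (per pixel, not quoted in the contribution): a rate of $10^{-3}$ keV$^{-1}$cm$^{-2}$s$^{-1}$ over a $240\times240\ \mu\text{m}^2$ absorber corresponds to
\[
10^{-3}\ \mathrm{keV^{-1}cm^{-2}s^{-1}} \times (0.024\ \mathrm{cm})^2 \times 9\ \mathrm{keV} \times 3.5\times10^{6}\ \mathrm{s} \approx 18\ \text{counts}
\]
in the 1--10 keV band over the 40-day effective integration, for a single pixel.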
We show the data analysis method and discuss possible improvements, such as the coupling with a cryogenic anti-coincidence and the introduction of a PTFE and Cu shielding around the sensitive area of the setup, to further reduce the background rate.
The 50 mK cryogenic focal-plane anti-coincidence detector of the Athena X-ray observatory (CryoAC) is a suspended silicon absorber sensed by a network of 400 Ir/Au Transition Edge Sensors (TES) and connected through silicon bridges to a surrounding gold-plated silicon frame (RIM). The device is shaped by Deep Reactive Ion Etching (DRIE) from a single 500 µm thick silicon wafer. Two different geometries are possible: a single monolithic absorber and a segmented one with 4 distinct absorbers. As part of the payload of a space mission, the detector must withstand several mechanical excitations. We have therefore tested a set of CryoAC prototypes by vibrating several hexagonal silicon samples, using the vibration mask provided by CNES for the future ARIANE 6. The aim is to obtain a first indication of the mechanical response of the silicon bridges that connect the absorber to the RIM, to start a trade-off between the two geometries, and to validate the elastic-mechanical response.
Future experiments pursuing scientific breakthroughs in the fields of astronomy, cosmology or astro-particle physics will take advantage of the extreme sensitivities of cryogenic detectors, such as transition-edge sensors (TES).
A TES is a thin film of superconducting material weakly coupled to a thermal bath typically at $T < 100$~mK, used as a radiation detector by exploiting its sharp phase transition, providing unprecedented resolving power and imaging capabilities. We have been developing TES micro-calorimeters for X-ray spectroscopy for the Athena X-ray Integral Field Unit (X-IFU), demonstrating under AC bias resolving power capabilities of $E/\Delta E \simeq 3000$.
Performing the readout of thousands of detectors operating at sub-K temperatures represents an instrumental challenge. We have been developing, in the framework of X-IFU, a frequency-domain multiplexing (FDM) technology, where each TES is coupled to a superconducting band-pass LC resonator and AC biased at MHz frequencies through a common readout line. The TES signals are summed at the input of a superconducting quantum interference device (SQUID), performing a first amplification at cryogenic stage. A custom analog front-end electronics further amplifies the signals at room temperature. A custom digital board handles the digitization and modulation/demodulation of the TES signals and bias carriers.
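The following toy sketch (illustrative only; carrier frequencies, signal shapes and the digital filtering are simplified stand-ins for the actual SRON electronics) shows the basic idea of recovering individual TES signals from a single summed output by lock-in demodulation at each bias frequency:

    import numpy as np

    fs = 20e6                                   # sample rate [Hz]
    t = np.arange(0, 2e-3, 1 / fs)              # 2 ms of data
    carriers = [2.0e6, 3.1e6]                   # illustrative bias frequencies [Hz]

    # Slowly varying "TES signals" (e.g. a pulse and a sine wave).
    sig = [1.0 + 0.2 * np.exp(-t / 3e-4),
           1.0 + 0.1 * np.sin(2 * np.pi * 1e3 * t)]

    # Summed SQUID output: each TES amplitude-modulates its own carrier.
    squid_out = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(sig, carriers))

    def demodulate(v, f, cutoff=50e3):
        # Lock-in style demodulation: mix down with the carrier, then
        # low-pass with a moving average (stand-in for a proper filter).
        mixed = 2 * v * np.cos(2 * np.pi * f * t)
        n = int(fs / cutoff)
        return np.convolve(mixed, np.ones(n) / n, mode="same")

    recovered = [demodulate(squid_out, f) for f in carriers]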
Using Ti/Au TES micro-calorimeters, high-Q LC filters and analog/digital electronics developed at SRON, and low-noise two-stage SQUID amplifiers from VTT Finland, we demonstrated the feasibility of our FDM readout technology in two experimental setups, with the simultaneous readout of 31 pixels with an energy resolution of 2.14 eV and of 37 pixels with an energy resolution of 2.23 eV, exploiting 5.9 keV photons from an $^{55}$Fe source.
We report the technological challenges of the FDM development and their solutions, already implemented or envisaged to further improve the maturity of this technology, as well as prospects for further scaling up and possible future applications.
The current technology of thermal detectors for rare-event physics is based on large cryogenic calorimeters read out with NTD thermistors. Measuring the total energy deposition via the heat released in the crystal lattice allows for optimal energy resolution when the detectors are operated at 10 mK. If the crystals are made of a scintillating material, a double readout of heat and scintillation light can allow for an improved discrimination between alpha and beta/gamma events.
Cryogenic detectors read out with NTDs are generally characterised by a slow time response, limited by the several thermal factors playing a role in the signal formation. For example, the traditional glue coupling between the NTD and the absorber can introduce spurious and variable thermalisation time constants in the signal. A new technique for coupling the NTD to the absorber crystal is silicate bonding, already applied in several fields of satellite and optical physics. We will show the first results of NTDs coupled to an LMO crystal with the silicate-bonding technique, operated as cryogenic calorimeters at 10 mK.
A second fundamental aspect for these detectors is the improvement of the scintillation-light collection. In general, the light detectors are Ge or Si wafers, also operated as thermal detectors, which absorb the scintillation photons and convert them into heat. We propose to use instead a plastic film with high absorbance for optical photons, wrapped around the scintillating calorimeter. In particular, we will show the results obtained with a thermal light detector made of KAPTON® film, operated in the Milano Cryogenic Lab.
We have developed a SQUID controller unit for the readout of TES sensors, designed to be used in a space mission. The unit is made of 8 boards, and each board can condition four SQUID array amplifiers. The board design is inspired by a similar one developed for ground-based experiments, but specific changes have been made to adopt COTS components with space-grade equivalents and to implement redundancy and cross-strapping capabilities. The design also includes the thermal path to lift the heat off the boards towards an in-house designed monolithic aluminum rack. In this contribution we report the board performance in terms of cross-talk, bandwidth and noise, together with the thermo-mechanical simulations.
Nowadays, many experiments with very-high-energy-resolution detectors rely on the faithful detection of low-power microwave signals at cryogenic temperatures. This is especially true in the field of superconducting quantum computation, where quantum-limited-noise microwave amplification is paramount to infer the qubit state with high fidelity.
For these applications, the goal is to maximize the signal-to-noise ratio of extremely feeble microwave signals while allowing a broad readout bandwidth.
The latter is also important because both the quantum and the particle physics fields require the readout of very large arrays of qubits and Low Temperature Detectors in order to achieve meaningful results in terms of computational power and acquired statistics, respectively. To address this problem, parametric amplification, a well-known technique used for low-noise amplifiers, will be exploited and developed to its technical limits.
DARTWARS (Detector Array Readout with Traveling Wave AmplifieRS) is a three-year project that aims to develop high-performing, innovative traveling-wave parametric amplifiers (TWPAs). The practical development follows two different promising approaches, one based on Josephson junctions (TWJPA) and the other on the kinetic inductance of a high-resistivity superconductor (KITWPA). The technical goal is to achieve a gain of around 20 dB, comparable to that of the currently used semiconductor low-temperature amplifiers (HEMTs), with a high saturation power (around -50 dBm) and quantum-limited or nearly quantum-limited noise ($T_N<$ 600 mK). These features will allow the readout of large arrays of detectors or qubits with no noise degradation. In particular, this contribution will present the progress made so far in the design and development of a KITWPA, built as a weakly dispersive artificial transmission line, by the DARTWARS collaboration.
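The benefit of such a first-stage amplifier follows from the standard cascade (Friis) formula for the system noise temperature; with purely illustrative numbers (a ~20 dB TWPA in front of a HEMT with $T_{\mathrm{HEMT}}\sim4$ K, values assumed here only for the estimate),
\[
T_{\mathrm{sys}} \simeq T_{\mathrm{TWPA}} + \frac{T_{\mathrm{HEMT}}}{G_{\mathrm{TWPA}}} \approx T_{\mathrm{TWPA}} + \frac{4\ \mathrm{K}}{100} = T_{\mathrm{TWPA}} + 40\ \mathrm{mK},
\]
so the added noise of the following HEMT stage becomes almost negligible and the chain is dominated by the (nearly quantum-limited) TWPA itself.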
BULLKID (Bulky and low-threshold kinetic inductance detectors) is an R&D project on an innovative cryogenic particle detector to search for low-energy nuclear recoils induced by neutrino coherent scattering or Dark Matter interactions. The detector unit consists of an array of 60 silicon absorbers of 0.3 g each sensed by phonon-mediated, microwave-multiplexed Kinetic Inductance Detectors. The arrays built up to now feature a total active mass of 20 g and the technology is engineered to ensure an easy scalability to a future kg-scale experiment. In this talk we will describe BULLKID and we will present the recent and encouraging results obtained from the operation of the first prototypes.
Advances in superconducting detector arrays are driving progress in the field of cosmic microwave background (CMB) measurements. In the last decade ground-based CMB projects have employed arrays of thousands of superconducting transition-edge sensors (TESes) to make great progress in cosmological constraints from early universe inflation to the Hubble expansion rate. These arrays are operated at sub-Kelvin temperatures and utilize superconducting quantum interference device (SQUID) amplifiers to multiplex and amplify the TES signals before they are digitized. Kinetic inductance detectors (KIDs) are a newer superconducting detector technology that are naturally multiplexed in frequency and have been deployed at smaller scales with promising results. We review the development of arrays of TESes and KIDs, then describe the progress being made in scaling these technologies towards tens of thousands and hundreds of thousands of detectors for the upcoming Simons Observatory, CCAT-prime, and CMB-S4 projects.
The next 10 years will be exciting for High Energy Physics, with new experiments entering data taking (High-Luminosity LHC) or being designed and eventually approved (FCC, CEPC, ILC, MU_COLL). In all cases, the computing infrastructures, including the software stacks for selection, simulation, reconstruction and analysis, will be crucial for the success of the physics programs. Many directions are being explored by the community, such as heterogeneous computing for the most time-critical tasks and AI-inspired techniques, to squeeze out the ultimate performance while matching reasonable resource budgets.
This contribution addresses the landscape and the state of the art in the field, highlighting the strong and weak points and the aspects which still need sizeable R&D.
After a concise description of the bolometric technique, including hybrid devices with double readout, the advantages of this technology for rare-event searches are highlighted and discussed. Special methods for the reduction of different forms of background are reviewed. Three examples of bolometric double-beta decay experiments, spanning a wide range of background rejection techniques, are examined in detail: CUPID-Mo, CROSS and BINGO.
The nSOL experiment, which aims to operate a neutrino detector close to the Sun, is building a small test detector to orbit the Earth and validate the concept in space. The detector concept must provide a new way to detect neutrinos without shielding in space: a double delayed coincidence on gallium nuclei, which have a large cross section for solar neutrino interactions that convert them into an excited state of germanium decaying with a well-known energy and half-life. This unique signature permits operation of the detector volume mostly unshielded in space, despite a high single-particle counting rate of gamma and cosmic-ray events. The test detector concept has been studied in the lab and is planned for a year of operation orbiting the Earth, with launch scheduled for late 2024. Surrounded by an active veto and shielding, it will be operated in a polar orbit around the Earth to validate the detector concept and to study in detail the background spectra in which random galactic cosmic rays or gamma rays can fake the timing and energy signature. The success of this new technology development will permit the design of a larger spacecraft with a mission to fly close to the Sun, of importance to the primary science mission of the Heliophysics division of the NASA Science Mission Directorate: to better understand the Sun by measuring details of our Sun's fusion core.
Serendipitously discovered by the BATSE mission in the nineties, Terrestrial Gamma-ray Flashes (TGFs) represent the most intense and energetic natural emission of gamma rays from our planet. TGFs consist of sub-millisecond bursts of gamma rays (with energies up to one hundred MeV) generated by lightning during powerful thunderstorms (average ignition altitude of about 10 km), and are in general accompanied by several other counterparts (electron beams, neutrons, radio waves). The ideal observatory for TGFs is therefore a fast detector, possibly with spectral capabilities, orbiting the Earth in LEO (Low Earth Orbit). To date, the benchmark observatory is ASIM, an instrument flying onboard the International Space Station (ISS). LIGHT-1 is a 3U CubeSat mission launched on December 21st, 2021 and deployed from the ISS on February 3rd, 2022. The LIGHT-1 payload consists of two similar instruments conceived to effectively detect TGFs on a few-hundred-nanosecond timescale. The detection unit is composed of a scintillating crystal organised in four optically independent channels, read out by as many photosensors. The detection unit is surrounded by a segmented plastic scintillator layer that acts as an anti-coincidence veto for charged particles. The customised electronics consists of three different boards embedding the power supplies and detector readout, the signal processing, the detector controls and the interface with the bus of the spacecraft. LIGHT-1 makes use of two different scintillating crystals, namely (low-background) Cerium Bromide ($\mathrm{CeBr_3}$) and Lanthanum Bromo Chloride (LBC), and two different photosensing technologies based on PhotoMultiplier Tubes (R11265-200 manufactured by Hamamatsu) and Silicon Photomultipliers (ASD-NUV1C-P manufactured by Advansid and S13361-6050AE-04 manufactured by Hamamatsu). A detailed description of the payload and its performance will be provided, along with simulations and pre-flight diagnostic tests and calibration. The first release of in-orbit science data will also be presented.
Only a few years after the first direct detections by LIGO and Virgo, the gravitational-wave (GW) field is at a turning point, with a rapidly increasing number of confirmed signals – all from compact binary mergers so far. This dataset offers a wealth of information and allows scientists to study the populations of compact objects and the rates at which they merge, permitting tests of general relativity in a strong regime that had not been probed previously. This progress is made possible by the improvements in sensitivity and duty cycle of the second-generation ground-based GW detectors, as well as by advances in the analysis of the recorded data.
Key to this dataflow are the detector characterization and data quality activities, collectively referred to as “DetChar” in the following. The former help improve knowledge of the instruments while fighting against their dominant noise sources (transient or continuous), while the latter shape the dataset for the analysts, vet the GW candidates found either in low latency or offline, and contribute to mitigating the effect of noise when inferring the GW source properties from the detected signals.
With this joint abstract, the LIGO and Virgo DetChar groups present their main contributions to the GW detection effort during the third Observing Run (O3, April 2019 – March 2020). After that summary, we describe the main developments and improvements in progress to cope with the increase in detection rate expected for the fourth LIGO-Virgo-KAGRA Observing Run (O4), currently planned to start at the end of the year. The goals are manifold: to extend the data-quality coverage, to decrease the latency of the main DetChar products, and to automate the different analyses – both their processing and their reporting – as much as possible.
We present the latest results on the development of the Dark-PMT, a novel light Dark Matter (DM) detector. The detector is designed to be sensitive to DM particles with mass between 1 MeV and 1 GeV. The detection scheme is based on DM-electron scattering inside a target made of vertically-aligned carbon nanotubes. Carbon nanotubes are made of wrapped sheets of graphene, which is a 2-dimensional material: therefore, if enough energy is transferred to overcome the carbon work function, the electrons are emitted directly into the intra-tube vacuum. Vertically-aligned carbon nanotubes have reduced density in the direction of the tube axes, so the scattered electrons are expected to leave the target without being reabsorbed only if their momentum makes a small enough angle with that direction, which is what happens when the tubes are parallel to the DM wind. This grants directional sensitivity to the detector, a unique feature in this DM mass range. We will report on the construction of the first Dark-PMT prototype, on the establishment of a state-of-the-art carbon-nanotube growing facility in Rome, and on the characterization of the nanotubes with XPS and angle-resolved UPS spectroscopy performed at Sapienza University, Roma Tre University, and at synchrotron facilities. This project was recently awarded a PRIN2020 grant with which we aim, over the course of the next three years, to construct the first large-area-cathode Dark-PMT prototype with a target of 10 mg of carbon. The main focus of the R&D will be the development of a superior nanotube synthesis capable of producing optimal nanotubes for use as a DM target. In particular, the nanotubes will have to exhibit a high degree of parallelism at the nanoscale, in order to minimize electron re-absorption.
The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment searching for 0νββ decay that has been able to reach the one-tonne mass scale. The detector, located at the LNGS in Italy, consists of an array of 988 TeO2 crystals arranged in a compact cylindrical structure of 19 towers. CUORE began its first physics data run in 2017 at a base temperature of about 10 mK, and in April 2021 it released its third 0νββ result, corresponding to a tonne-year of TeO2 exposure. This is the largest amount of data ever acquired with a solid-state detector and the most sensitive measurement of 0νββ decay in 130Te ever conducted, with a median exclusion sensitivity of 2.8×10^25 yr. We find no evidence of 0νββ decay and set a lower bound on the 130Te half-life for this process of 2.2×10^25 yr at 90% credibility. In this talk, we present the current status of the CUORE search for 0νββ with the updated statistics of one tonne-yr. We finally give an update of the CUORE background model and of the measurement of the 130Te 2νββ decay half-life, a study performed using an exposure of 300.7 kg⋅yr.
The detection of the first gravitational wave in 2015 by the LIGO-Virgo collaboration opened a new era for astronomy, which can now correlate ever more types of signals from space: multi-messenger astronomy is a new approach providing a more complete and deeper vision of the universe. The goal of this approach is to evaluate how the synchronized arrival of quite different signals from the same astronomical source can give us a more detailed description of the events. Focusing on photon detection, different approaches and technologies are necessary for observing remote sources in gamma rays or X-rays, UV or visible light, infrared, microwaves or radio waves.
Considering infrared detection, some telescopes are satellite-borne, like JWST (the James Webb Space Telescope) and WISE, while others are ground-based, like three recent infrared instruments: VIRCAM (the VISTA InfraRed Camera), MOONS (the Multi-Object Optical and Near-infrared Spectrograph) and ERIS-NIX, the latter two at ESO's Very Large Telescope (VLT). These detectors are designed to work "transversally", i.e. to acquire images and spectra.
The FAIRTEL experiment has been funded for 2022 by CSN V of INFN to study a little-explored and, in our opinion, interesting case: the detection of ultra-fast infrared transients. Observations of astrophysical signals with fast transients offer more promising and exciting cases every day, as demonstrated by gamma-ray bursts/flares or by fast radio bursts; in the latter case, transients <1 ms have been observed. In FAIRTEL (FAst InfraRed ground-based TELescope), the authors aim to design a low-cost detector based on HgCdTe semiconductors to be used to search for fast astronomical infrared transients, even of the order of nanoseconds. The philosophy of the proposed detector consists in observing the IR signal longitudinally (i.e. recording time tracks) rather than transversely (i.e. taking pictures), as is usually done.
The ICARUS T600 LAr TPC is located at shallow depth on the Booster Neutrino Beam at Fermilab. To reduce the cosmic-ray background, in addition to a full-coverage cosmic-ray tagger, a system based on 360 large-area Hamamatsu R5912-MOD PMTs is used to detect the 128 nm scintillation light from ionizing particles. An important asset for this system is the calibration in gain and time of each PMT. This calibration is based on a custom laser-diode system, in which laser pulses at 405 nm are delivered to each PMT. The laser pulses reach a 1x36 optical switch and then a UHV flange through 20 m long optical patch cords; the light is then delivered to the ten PMTs connected to a single flange through 7 m long injection patch cords. Extensive tests of the components used and care in the design of the optical system have guaranteed a sizeable signal at each PMT, with minimal distortion with respect to the original pulse, even in a situation where the available power is low. Gain equalization has reached a 1% resolution, starting from an initial 15% from gain measurements at room temperature; in this procedure, data from background photons are used.
Timing calibration, to take into account the different time delays of the different electronic channels, due to temperature excursions, ..., is still in progress. The status of the construction of the laser system and its possible upgrades, as well as the performance of the calibration procedure, will be reported.
We have overwhelming evidence that 85% of the matter content in the Universe is composed of dark matter (DM), some new kind of beyond-the-Standard-Model particle that has yet to be detected. Probing the nature of DM is recognised as one of the most pressing scientific pursuits worldwide.
Since DM candidates span about 45 orders of magnitude in mass, from ultra-light bosons to massive primordial black holes, a discovery could come from anywhere. While WIMPs are becoming less motivated, with large experiments ruling out the traditionally favoured parameter space, dark photons (DPs) are receiving increasing attention among the alternative DM models. The DP arises naturally in extensions of the Standard Model that postulate an extra U(1) symmetry coupled to the U(1) gauge group of electromagnetism via kinetic mixing.
I am one of the few developers of a new type of detector to search for DPs, called a "dielectric haloscope," having built one of the only two in operation anywhere in the world. Dielectric haloscopes consist of thin dielectric layers with alternating high and low refractive indices and can convert DPs into SM photons, thus making them detectable. I have developed and run a SiO$_2$/Si$_3$N$_4$ dielectric haloscope coupled to a single-photon avalanche diode operated in Geiger mode. As no excess of events was observed in the data, a maximum-likelihood method was used to set exclusion limits at 90% confidence level on the kinetic-mixing coupling constant between dark photons and ordinary photons. This prototype experiment, baptised MuDHI (Multilayer Dielectric Haloscope Investigation), has been designed, developed and run at the Astroparticle Laboratory of New York University Abu Dhabi, marking the first time a dark matter experiment has been operated in the Middle East.
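As a toy illustration of how a counting-experiment exclusion limit of this kind can be derived (a simple Poisson construction; the actual MuDHI likelihood analysis is not detailed in the abstract):

    from scipy.stats import poisson
    from scipy.optimize import brentq

    def upper_limit(n_obs, bkg, cl=0.90):
        # Smallest signal expectation s such that the probability of observing
        # <= n_obs counts from Poisson(s + bkg) drops to 1 - cl.
        # (Assumes no strong background underfluctuation, so that a root exists.)
        f = lambda s: poisson.cdf(n_obs, s + bkg) - (1.0 - cl)
        return brentq(f, 0.0, 100.0 + 10.0 * (n_obs + bkg))

    # Example: zero observed events and negligible background give the
    # familiar 90% CL limit of about 2.3 expected signal events.
    print(upper_limit(0, 0.0))

The resulting limit on the number of signal counts is then translated into a limit on the kinetic-mixing parameter through the expected dark-photon conversion rate in the haloscope.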
The JUNO-TAO detector is a liquid-scintillator detector that will be placed near one core of the Taishan nuclear power plant in China.
It is a satellite detector of the Jiangmen Underground Neutrino Observatory (JUNO) and will provide a precise measurement of the reactor antineutrino spectrum with unprecedented energy resolution, improving the sensitivity of JUNO to the mass hierarchy.
Furthermore, JUNO-TAO will provide benchmark tests of nuclear databases.
In this talk, JUNO-TAO's design, the status of its R&D program, and its physics potential will be presented.
Nowadays additive manufacturing is catching on and spreading across various fields at an astonishing rate. High-energy physics, where materials are often exposed to special environmental conditions, is also starting to use this technology. The aim of this paper is to compare traditional and 3D-printed stainless steel AISI 316L products, with an eye to the specific high-energy physics applications. The manufactured samples are subjected to different heat treatments, including vacuum firing, which is usually adopted for ultra-high-vacuum applications. Experimental tests are carried out on a set of samples to analyse the material composition and to assess properties such as the mechanical performance in cryogenic applications, the resistance to high radiation and the ultra-high-vacuum compatibility. Such an analysis of the material behaviour allows the weaknesses and strengths of the technology to be identified with respect to traditional AISI 316L.
The proposed MATHUSLA experiment (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles) could open a new avenue for the discovery of physics beyond the Standard Model at the LHC. The large-volume detector will be placed above the CMS experiment, with O(100) m of rock separating it from the LHC interaction point, and will be instrumented with a tracking system to observe long-lived particle decays inside its empty volume. The experiment is composed of a modular array of detectors covering, together, (100 × 100) m^2 × 25 m in height. It is planned in time for the high-luminosity LHC runs. With its large detection area and good-granularity tracking system, MATHUSLA is also an efficient cosmic-ray Extensive Air Shower (EAS) detector. With good timing, spatial and angular resolution, the several tracking layers allow precise cosmic-ray measurements up to the PeV scale that complement other experiments.
We will describe the detector concept and layout, the status of the project, the ongoing cosmic-ray studies, and the future plans. We will focus on the current R&D on 2.5 m long extruded plastic scintillator bars read out by wavelength-shifting fibers connected to Silicon Photomultipliers (SiPMs) located at each end of the bar. We will discuss the studies made on possible fiber layouts and dopant concentrations, and report on the timing-resolution measurements obtained using Saint-Gobain and Kuraray fibers. We will also describe the tests made on Hamamatsu and Broadcom SiPMs and a possible SiPM cooling system using chillers, and highlight the structure of the trigger and data acquisition. Moreover, we will discuss the proposal of adding a 10^4 m^2 layer of RPCs, with both digital and analogue readout, to significantly improve cosmic-ray studies in the 100 TeV – 100 PeV energy range, with a focus on large-zenith-angle EAS.
The CYGNO experiment aims at the development of a large gaseous TPC with GEM-based amplification and an optical readout by means of PMTs and scientific CMOS cameras for 3D tracking down to O(keV) energies, for the directional detection of rare events such as low mass Dark Matter and solar neutrino interactions.
The largest prototype built so far towards the realisation of the CYGNO experiment demonstrator is LIME, with a 50 L active volume, 4 PMTs and a single sCMOS camera imaging a $33\times 33$ cm$^{2}$ area over a 50 cm drift length, which was installed underground at the Laboratori Nazionali del Gran Sasso in February 2022.
We will illustrate the LIME performance as evaluated above ground at the Laboratori Nazionali di Frascati by means of radioactive X-ray sources and cosmic rays, and in particular the detector stability, energy response and energy resolution. We will discuss the MC simulation developed to reproduce the detector response and show the comparison with actual data. We will furthermore examine the background simulation worked out for the LIME underground data taking and illustrate the foreseen measurements and results in terms of the characterisation of the natural and material-intrinsic radioactivity and of the measurement of the LNGS underground natural neutron flux. The results obtained with the underground LIME installation will be paramount for the optimisation of the CYGNO demonstrator, since the latter is foreseen to be composed of multiple modules with the same dimensions and characteristics as LIME.
We are going to present the CYGNO project for the development of a high-precision optical-readout gaseous TPC for directional Dark Matter searches and solar neutrino spectroscopy, to be hosted at the Laboratori Nazionali del Gran Sasso (LNGS). CYGNO's peculiar features are the use of sCMOS cameras and PMTs coupled to GEM amplification of a He-CF4 gas mixture at atmospheric pressure, in order to achieve 3D tracking and background rejection down to O(keV) energy and to boost the sensitivity to low WIMP masses. By measuring not only the energy but also the direction of the nuclear recoils of the gas atoms, CYGNO (a CYGNus TPC with Optical readout) fits into the wider context of the CYGNUS proto-collaboration for the development of a Galactic Nuclear Recoil Observatory at the ton scale with directional sensitivity.
We will illustrate the characteristics of the optical readout approach in terms of the effective energy threshold, the capability of 3D tracking, the possibility of inferring the absolute Z coordinate and the particle identification properties down to O(keV) energies.
The project timeline foresees, in the next 2-3 years, the realisation and installation of an O(1) m$^3$ TPC in the underground laboratories at LNGS to act as a demonstrator of the scalability of the technology and of its performance; we will therefore show its sketches and design, including a rather complex gas purification and recirculation system, the DAQ and the trigger system.
Finally, we will present the results and studies of the expected background from external and internal radioactivity and thus the expected Dark Matter sensitivities of the CYGNO demonstrator.
ixpeobssim is a simulation and analysis framework, based on the Python programming language and the associated scientific ecosystem, specifically developed for the Imaging X-ray Polarimetry Explorer (IXPE). Given a source model and the response functions of the telescopes, it is designed to produce realistic simulated observations, in the form of event lists in FITS format, containing a strict super-set of the information provided by standard IXPE level-2 files. The core ixpeobssim simulation capabilities are complemented by a full suite of post-processing applications, allowing for the implementation of complex, polarization-specific analysis pipelines and facilitating the inter-operation with the standard visualization and analysis tools traditionally in use by the X-ray community.
We emphasize that, although a significant part of the framework is specific to IXPE, the modular nature of the underlying implementation makes it potentially straightforward to adapt it to different missions with similar polarization capabilities.
One important optimization for a successful detection of neutrinoless double-beta decay is the energy resolution at its Q-value. nEXO is a tonne-scale experiment aiming to search for such a decay in the isotope Xe-136 using a five-tonne single-phase TPC filled with liquid xenon (LXe) and equipped with scintillation readout capability. A major factor affecting the energy resolution in LXe is the event-by-event fluctuation of the ionization charge and scintillation light. nEXO exploits the microscopic anticorrelation between ionization and scintillation in xenon to maximize the energy resolution.
In a TPC detector, the electron collection efficiency is usually close to one. Conversely, the collection of photons can vary dramatically depending, among other factors, on the overall light-sensitive area of the detector.
The Stanford liquid xenon TPC is a nEXO prototype test stand planning to host the first large-area (~200 cm2) VUV SiPM array. The setup aims first to study the feasibility of such a system with dedicated readout electronics, and ultimately to investigate how the light collection affects the detector performance.
In this talk, I will report on the status of the assembly of this photodetector array, along with characterization measurements and a comparison with simulation.
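As a reminder of why the charge-light anticorrelation helps (a textbook argument, not specific to nEXO): for charge and light estimators $Q$ and $L$ with variances $\sigma_Q^2$, $\sigma_L^2$ and correlation coefficient $\rho<0$, the linear combination $E = wQ + (1-w)L$ has variance
\[
\sigma_E^2(w) = w^2\sigma_Q^2 + (1-w)^2\sigma_L^2 + 2w(1-w)\rho\,\sigma_Q\sigma_L ,
\]
which is minimized at $w^{*} = (\sigma_L^2 - \rho\sigma_Q\sigma_L)/(\sigma_Q^2 + \sigma_L^2 - 2\rho\sigma_Q\sigma_L)$ and, for negative $\rho$, lies below both individual variances.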
The new experiment at Laboratori Nazionali del Gran Sasso, called LEGEND-200, will search for neutrinoless double beta decay ($0\nu \beta \beta$), a yet-to-be-observed weak radioactive transition. The discovery of $0\nu \beta \beta$ decay would prove unambiguously not only the existence of new lepton-number-violating physics but also its connection to the mysterious origin of the neutrino mass.
LEGEND-200 will use about 200 kg of high-purity germanium, acting simultaneously as source and detector, deployed bare in ultra-pure Liquid Argon (LAr). The LAr scintillation light was successfully used in the previous GERDA experiment to actively veto background events, such as $ \beta $ decays near or on the detector surface and the $ \gamma $ background from natural decay chains, and was one of the key factors in achieving the lowest background index and the best half-life sensitivity. For this reason an upgraded version of the LAr veto will be deployed in LEGEND-200.
The LAr veto system, which surrounds the germanium detectors, is composed of two concentric curtains of WaveLength-Shifting (WLS) fibers coated with TetraPhenyl Butadiene (TPB). The TPB first shifts the LAr scintillation light from the vacuum ultraviolet to blue light; the WLS fibers then shift the light to green light, which is read out by SiPMs mounted on both ends of the fibers. The readout system, based on SiPMs and front-end electronics, is already in the commissioning phase. In this talk an overview of the LEGEND-200 LAr veto system and its preliminary performance will be presented.
The Penetrating particle ANalyzer (PAN) is an instrument designed to operate in space to precisely measure and monitor the flux, composition, and direction of highly penetrating particles with energies ranging from 100 MeV/n to 20 GeV/n, filling the current observational gap in this energy interval.
The detector design is based on a modular magnetic spectrometer of small size and reduced power consumption and weight, making the instrument suitable for deep-space and interplanetary missions. The high-field permanent magnet sectors are instrumented with high-resolution silicon microstrip detectors, Time Of Flight scintillator counters read out by SiPMs, and active pixel detectors to maintain the detection capabilities in the high-rate conditions occurring during solar energetic particle events (SEPs) or when traversing radiation belts around planets. We will present the PAN concept together with the ongoing activity, funded in the framework of the EU H2020 FETOPEN program, on the development and first performance of a demonstrator, Mini.PAN, for the validation of the key functionalities of the instrument.
DarkSide-20k is a global direct dark matter search experiment situated at Laboratori Nazionali del Gran Sasso, designed to reach a total exposure of 200 tonne-years free from instrumental backgrounds. The core of the detector is a dual-phase time projection chamber (TPC) filled with 50 tonnes of low-radioactivity liquid argon. This is surrounded by an active neutron veto, employing gadolinium-loaded polymethylmethacrylate (Gd-PMMA), and hosted inside a protoDUNE-like cryostat. The most dangerous background to the dark matter search comes from nuclear recoils induced by radiogenic neutrons, since this process can mimic a dark-matter-induced recoil. Neutron-induced nuclear recoils are rejected by identifying the presence of the neutron. The DarkSide-20k detector has a novel design in which the neutron veto and the TPC are integrated into a single mechanical unit that sits in a common bath of low-radioactivity argon. The entire TPC wall is surrounded by a Gd-PMMA shell equipped with large-area Silicon Photomultiplier (SiPM) arrays. The SiPMs are arranged in a compact unit, called the vPDU+, designed to minimise the number of Printed Circuit Boards (PCBs), cables and connectors. The components of a vPDU+ are a Tile+, which contains the SiPMs and front-end electronics, and an MB+, which distributes voltage and control signals, sums the Tile+ channels, and drives the electrical signal transmission. The neutron veto will be equipped with 120 vPDU+. The talk will focus on the preliminary results of the vPDU+ prototype and on the expected neutron veto performance.
The BDX-Mini experiment is the first electron beam-dump experiment specifically designed to search for Light Dark Matter (LDM) particles in the MeV-GeV mass range. BDX-Mini was exposed for about six months in 2019-2020 to weakly interacting particles (neutrinos and DM) produced by a 2.176 GeV electron beam incident on the beam dump of experimental Hall A at Jefferson Lab. The detector, positioned 20 m downstream of the dump, is an electromagnetic calorimeter made of lead tungstate crystals with a total volume of 4 dm³. The calorimeter is surrounded by a multi-layer veto aimed at rejecting cosmic background: the innermost layer of the veto is a passive tungsten shield, while the middle and outer layers are made of plastic scintillators to detect charged cosmic background particles. In addition to the veto system, the dirt between the dump and the detector is sufficient to shield the detector from the beam-related background.
In this contribution I will describe the BDX-Mini detector and its excellent performance during a long LDM measurement campaign performed at JLab. I will also present the data analysis technique used for the LDM search and the results obtained.
Successfully launched in December 2021, only four years after its adoption, IXPE belongs to the NASA Explorers program, which offers frequent flight opportunities for world-class scientific investigations from space.
IXPE will accomplish the first-ever survey of the polarization properties of tens of celestial X-ray sources, with percent accuracy, and within the boundaries of a small explorer program.
This goal can be achieved with the use of Gas Pixel Detectors (GPDs), which precisely reconstruct the sub-mm tracks of single electrons generated by the photoelectric interactions of incoming soft X-rays.
This poster summarises the most important design elements of the GPDs and of the Detector Unit housing them, the qualifications obtained for operating them onboard IXPE, and the fast-paced integration and verification cycle entirely developed in Italy to make the IXPE mission a reality.
The Penetrating particle ANalyzer (PAN) is an astroparticle instrument designed to operate in space to measure and monitor the flux, composition and direction of highly penetrating particles with energies ranging from 100 MeV/n to 20 GeV/n. The main parts of the PAN spectrometer are a high-field permanent magnet, a Silicon Microstrip Tracker (SMT), a Time-of-Flight counter and an active pixel detector. Here we report on the design, construction and test of the silicon tracker built for a demonstrator, called MiniPAN, in the framework of the EU H2020 FETOPEN program. The SMT is composed of three tracking planes of single-sided thin microstrip sensors with a fine readout pitch of 25 µm on the x-coordinate (the bending plane in the magnetic field), alternated with sensors with a readout pitch of 400 µm on the y-coordinate. The SMT characteristics and assembly will be described and the quality and performance of the complete detector will be reported.
Future satellite experiments for cosmic-ray and gamma-ray detection will employ plastic scintillators to discriminate gamma rays from charged particles and to identify nuclei up to iron. The High Energy Cosmic Radiation Detector (HERD) facility will be one of those new experiments, and will be installed onboard the Chinese Tiangong Space Station (TSS). The main goal of the HERD experiment is to detect charged cosmic rays up to PeV energies and gamma rays up to hundreds of GeV. The Plastic Scintillator Detector (PSD) surrounds the inner detectors on five sides. For energies above a few GeV a high detector segmentation is required in order to mitigate the back-splash effect due to the interaction between the high-energy particles and the innermost calorimeter. Each PSD basic element (bar or tile) is coupled to several Silicon Photomultipliers (SiPMs) for the scintillation-light detection. In 2021 we performed a beam-test campaign at the CERN PS and SPS to test all the subdetectors of the HERD experiment. We tested two different PSD prototypes, one made of bars and one made of tiles, of different scintillating materials (BC-404 and BC-408). Both prototypes were equipped with SiPMs of two different sizes (MPPC S14160-3050 and S14160-1315) and were read out with the CAEN Citiroc-based DT5550W board. In this talk we will describe the PSD design and show the beam-test results.
Launched on December 9, 2021, the Imaging X-ray Polarimetry Explorer (IXPE) is the first imaging polarimeter ever flown providing sensitivity in the 2--8 keV energy range; during the 2-year prime phase of the mission it will sample tens of X-ray sources among different source classes. While most of the measurements will be statistics-limited, for some of the brightest objects observed and for long integration times the systematic uncertainties on the detector response (primarily the effective area, the modulation factor and the absolute energy scale) will be important.
In this contribution, we describe a framework to propagate the systematic uncertainties connected with the detector response onto high-level observables (e.g., spectro-polarimetric fit parameters); these uncertainties are estimated from the relevant ground calibrations and from observations of celestial point sources. We illustrate our approach in a few real-life case studies.
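A minimal sketch of the underlying idea, with purely illustrative numbers rather than the actual IXPE calibration values, is to sample the response parameters within their systematic uncertainties and record the spread induced on the derived observables:

```python
# Minimal sketch (illustrative numbers, not the IXPE calibration values):
# propagate fractional systematic uncertainties on the modulation factor and
# on the effective area onto a measured polarization degree and source flux
# by Monte Carlo sampling of the response parameters.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# "Measured" quantities obtained with the nominal response (assumed values).
measured_modulation = 0.06        # observed modulation amplitude
measured_rate = 1.2               # counts/s through the nominal effective area

# Nominal response and assumed fractional systematic uncertainties.
mu_nominal, mu_frac_err = 0.30, 0.03       # modulation factor, 3% systematic
aeff_nominal, aeff_frac_err = 500.0, 0.02  # cm^2, 2% systematic

mu = mu_nominal * (1 + mu_frac_err * rng.standard_normal(n_trials))
aeff = aeff_nominal * (1 + aeff_frac_err * rng.standard_normal(n_trials))

pol_degree = measured_modulation / mu      # P = m / mu
flux = measured_rate / aeff                # photons / s / cm^2

print(f"P = {pol_degree.mean():.3f} +/- {pol_degree.std():.3f} (systematic)")
print(f"F = {flux.mean():.2e} +/- {flux.std():.2e} (systematic)")
```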
The absolute mass of neutrinos is one of the most important riddles yet to be solved, since it has many implications in Particle Physics and Cosmology.
HOLMES is an ERC project started in 2014 that will tackle this topic. It will perform a model-independent calorimetric measurement of the neutrino mass with a sensitivity of the order of 1 eV using 1000 low-temperature microcalorimeter detectors based on Transition Edge Sensors (TES). A TES is a sensitive thermometer, able to detect the energy of an X-ray photon with a resolving power $E / \Delta E < 10^3$.
The goal is to employ these detectors to study the end-point region of the electron capture (EC) decay of $^{163}$Ho. In such a measurement, all the energy is measured except for the fraction carried away by the neutrino.
Although the neutrino is not detected, the value of its mass affects the shape of the de-excitation spectrum, also reducing the end-point of the spectrum by an amount equal to the effective neutrino mass. The spectrum distortion is statistically significant only in a region close to the end-point, where the count rate is lowest and the background can easily hinder the signal.
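Schematically, keeping only the neutrino phase-space factor and omitting the atomic de-excitation structure, the end-point behaviour can be written as
\[
  \frac{dN}{dE_c} \;\propto\; (Q - E_c)\,\sqrt{(Q - E_c)^2 - m_\nu^2}\,, \qquad E_c \le Q - m_\nu ,
\]
so a non-zero $m_\nu$ both distorts the spectral shape near the end-point and moves the end-point from $Q$ to $Q - m_\nu$.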
HOLMES has adopted a high-risk/high-gain approach: with a target single-pixel activity of 300 Bq, both the detectors and the readout will be pushed to their technical limits, also requiring advanced discrimination techniques to reduce the resulting number of pile-up events.
In this contribution, I will present the recent results that lay the groundwork for the low-activity phase of the HOLMES experiment, which will lead to its first limit on the neutrino mass.
The ICARUS collaboration employed the 760-ton T600 detector in a successful three-year physics run at the underground LNGS laboratories, studying neutrino oscillations with the CNGS neutrino beam from CERN and searching for atmospheric neutrino interactions. ICARUS performed a sensitive search for LSND-like anomalous $\nu_e$ appearance in the CNGS beam, which helped constrain the allowed parameters to a narrow region around 1 eV$^2$, where all the experimental results can be coherently accommodated at 90% C.L. After a significant overhaul at CERN, the T600 detector has been installed at Fermilab. In 2020 cryogenic commissioning began with detector cool-down, liquid argon filling and recirculation. ICARUS has started operations and is presently in its commissioning phase, collecting the first neutrino events from the Booster Neutrino Beam (BNB) and the off-axis NuMI beam. The main goal of the first year of ICARUS data taking will be the definitive verification of the recent claim by the Neutrino-4 short-baseline reactor experiment, both in the $\nu_\mu$ channel with the BNB and in the $\nu_e$ channel with NuMI. After the first year of operations, ICARUS will commence its search for evidence of a sterile neutrino jointly with the SBND near detector, within the Short Baseline Neutrino (SBN) program. The ICARUS exposure to the NuMI beam will also give the possibility for other physics studies such as light dark matter searches and neutrino-argon cross-section measurements. The proposed contribution will address ICARUS achievements, its status and plans for the new run at Fermilab, and the ongoing developments of the analysis tools needed to fulfill its physics program.
The observation of few optical photons is a very common requirement in instrumentation, for instance for the detection of scintillation photons from liquid noble gases in dark matter search experiments. The photon sensors must in particular offer low dark count rate (DCR), high fill factor and good quantum efficiency. Commonly used SiPMs require a single photon sensitive readout for each of many channels. We propose using Single Photon Avalanche Diodes (SPADs) fabricated in a CMOS technology so that the readout circuitry can be integrated and noisy pixels can be disabled. Chips fabricated in an optimized manufacturing process reach a DCR of $0.04~\mathrm{Hz}$ per $\mathrm{mm}^2$ of active SPAD area at $160~\mathrm{K}$ for typical pixels. In the latest design, the geometric fill factor is above $80\%$ and should still be around $70\%$ after disabling noisy SPADs. The implemented low power readout is fully data driven and provides time-stamped hits with a spatial granularity of $\approx 250\times200\mu\mathrm{m}^2$. The proposed approach could provide a performant, compact, low power single photon readout for large area detectors.
ULTRASAT (ULtraviolet TRansient Astronomy SATellite) is a wide-angle space telescope that will perform deep time-resolved surveys in the near ultraviolet spectrum. ULTRASAT is led by the Weizmann Institute of Science (WIS) in Israel and the Israel Space Agency (ISA) and is planned for launch in 2024. The telescope implements a backside-illuminated, stitched pixel detector. The pixel
has a 4T architecture with a pitch of 9.5 μm and is produced in a 180 nm process by Tower Semiconductor.
The final flight-design sensors have been received by DESY and their operational parameters have been tested; preliminary results will be presented in this talk. As part of the space qualification of the sensors, radiation tests are to be performed on both the test sensors provided by Tower and the final flight design. One of the main contributions to sensor degradation due to radiation for the ULTRASAT mission is the Total Ionizing Dose (TID). TID measurements on the test sensors have been performed with a Co-60 gamma source at Helmholtz-Zentrum Berlin (HZB) and at the CC60 facility at CERN, and preliminary results are presented in this talk.
A next-generation magnetic spectrometer in space will open the opportunity to investigate frontiers in direct high-energy cosmic-ray measurements and to accurately measure the rare antimatter component in cosmic rays beyond the reach of current missions. We propose the concept of an Antimatter Large Acceptance Detector In Orbit (ALADInO), designed to take up the legacy of direct cosmic-ray measurements in space by PAMELA and AMS-02. ALADInO presents technological solutions designed to overcome the current limitations of magnetic spectrometers in space, with a layout that provides an acceptance larger than 10 m² sr. A high-temperature superconducting toroidal magnet is coupled with precision tracking and time-of-flight systems to provide the required matter-antimatter separation capabilities and rigidity measurement resolution, with a maximum detectable rigidity better than 20 TV. The inner 3D-imaging deep calorimeter, designed to maximise the isotropic particle acceptance, allows cosmic rays to be measured up to PeV energies with accurate energy resolution. ALADInO is planned to operate at the Sun-Earth L2 Lagrangian point for at least 5 years. It would enable unique observations with groundbreaking discovery potential in the field of astroparticle physics, through precise measurements of electrons, positrons and antiprotons up to 10 TeV and of nuclear cosmic rays up to PeV energies, and through the possible unambiguous detection and measurement of low-energy antideuterons and antihelium in cosmic rays.
Future spaceborne spectrometers for astroparticle detection need high bending power; the use of superconducting magnets is therefore the only viable solution. The main requirements for superconducting magnets for space applications are: (i) low mass budget, i.e. high stored-energy-to-mass ratio; (ii) low power consumption, i.e. efficient cryogenics; (iii) very high stability. In addition, the presence of liquid helium tanks is regarded as a drawback. The use of high-temperature superconductors (HTS) or magnesium diboride (MgB2) satisfies all these requirements. The magnet envisaged for the Antimatter Large Acceptance Detector IN Orbit (ALADInO) is a large HTS toroid operating at about 40 K. It will host a silicon tracker inside its internal volume and a 3D isotropic calorimeter in the center. The inner and outer diameters are 1 m and 4.3 m, respectively, and the magnet mass is about 1200 kg.
The HOLMES experiment aims to directly measure the $\nu$ mass by studying the $^{163}$Ho electron-capture decay spectrum, developing arrays of TES-based micro-calorimeters implanted with $^{163}$Ho at an activity of O(10$^{2}$) Bq per detector.
The embedding of the source inside the detectors is a crucial step of the experiment. Because $^{163}$Ho is produced by neutron irradiation of a $^{162}$Er sample, the source must be separated from many contaminants. A chemical process removes every species other than Ho, but it is not sufficient to remove all background sources: in particular, $^{166\mbox{m}}$Ho beta decay can produce fake signals in the region of interest. For this reason a dedicated implantation and beam-analysis system has been set up and commissioned in the Genoa laboratory. It is designed to achieve more than 5$\sigma$ separation at 163/166 a.m.u. while allowing efficient embedding of Ho atoms inside the microcalorimeter absorbers. Its main components are a 50 kV sputter-based ion source, a magnetic dipole and a target chamber. A co-evaporation system has been designed to “grow” the gold microcalorimeter absorber during the implantation process, increasing the maximum activity that can be embedded. The machine performance in terms of achievable current, beam profile and mass separation has been evaluated by means of calibration runs using Cu, Mo, Au and $^{165}$Ho beams. Special care has been devoted to the study of the most effective way to populate the source plasma with Ho ions obtained from different Ho compounds, testing different target production techniques. In this work, the machine development and commissioning will be described.
The Advanced Virgo+ gravitational wave interferometer (AdVirgo+) has recently completed the first phase of its upgrades and is currently being commissioned for the O4 observing run, scheduled for December 2022 and expected to last one year.
The O4 run will be followed by a major upgrade, called phase II, whose main objective is the reduction of thermal noise, expected to increase the observing horizon to more than 150 Mpc. This will be achieved by enlarging the laser beam size on the end test masses and by implementing better mirror coatings with lower mechanical losses. To do so, the most critical mirrors (test masses and recycling mirrors) must be replaced. In particular, the end mirrors will have to be larger, and therefore heavier, to deal with the larger beam size; mirrors 55 cm in diameter and about 105 kg in weight will be used.
Seismic isolation of AdVirgo+ mirrors and of the injection and detection benches will be provided, both in phase I and phase II, by the SuperAttenuator (SA), a passive attenuation system capable of reducing the seismic noise by more than 10 orders of magnitude in all six degrees of freedom above a few Hz.
In order to implement the design changes of the phase II upgrade, the SAs of the end towers will need to be upgraded. In particular, the payload will be re-scaled and all the elastic elements (blade springs, suspension wires, magnetic antisprings and inverted-pendulum flex joints) will need to be re-designed in order to stiffen the suspension in the vertical direction and sustain the new loads. Several studies are being performed in order to identify and validate the required mechanical updates to the SA. These studies are also providing useful insights for the design of seismic isolation systems for third-generation detectors.
Hyper-Kamiokande (Hyper-K) is a next-generation underground water Cherenkov detector designed to study neutrinos from the J-PARC accelerator and from astronomical sources, as well as nucleon decay, with the main focus on the determination of leptonic CP violation.
To detect the weak Cherenkov light generated by neutrino interactions or proton decay, newly developed 20-inch PMTs by Hamamatsu will be used. The addition of a system of small photomultipliers, the so-called multi-PMT module (mPMT) as implemented in the KM3NeT experiment, is being considered to improve the Hyper-K physics capability. The mPMT system is composed of 19 3-inch PMTs and all the electronics required for the system, with a power budget of only 4 W.
Each PMT is equipped with an active HV board capable of providing up to 1500 V with only 3.2 mW of power consumption. Each PMT is also equipped with a Front End Board based on a fast discriminator for trigger generation and a slow shaper, followed by a 12-bit, 2 Msps ADC.
A Main Board collects the information from the 19 channels and provides a timestamp for each event using a per-channel TDC with 200 ps RMS resolution. The data are then transferred to the DAQ system using the Ethernet protocol. The only connection to the experiment is the Ethernet cable, which also provides the power and the clock to the system. For the power we designed a custom PoE+ power supply with an efficiency of 87% at 4 W, while for the clock distribution we developed a system based on the transmission of a 25 MHz clock in which a PPS signal is embedded by means of duty-cycle modulation.
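As a conceptual sketch of this clock-embedded PPS scheme (not the actual Hyper-K firmware or protocol; the marker duty cycle and detection threshold are assumptions), the receiver only needs to flag the clock period whose duty cycle deviates from the nominal 50%:

```python
# Conceptual sketch: a PPS marker is embedded in a 25 MHz clock by stretching
# the duty cycle of one clock period per second; the receiver recovers the PPS
# by flagging periods whose high-time deviates from the nominal 50%.
# All numbers are illustrative assumptions.
import numpy as np

NOMINAL_DUTY = 0.50
MARKER_DUTY = 0.75            # assumed duty cycle of the PPS-marked period

def recover_pps(duty_cycles, threshold=0.10):
    """Return indices of clock periods carrying the PPS marker."""
    duty_cycles = np.asarray(duty_cycles)
    return np.flatnonzero(np.abs(duty_cycles - NOMINAL_DUTY) > threshold)

# Simulate a short window of clock periods with one marked period plus jitter.
rng = np.random.default_rng(4)
duty = rng.normal(NOMINAL_DUTY, 0.01, 25_000)
duty[12_345] = MARKER_DUTY
print(recover_pps(duty))      # -> [12345]
```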
The developed system has been fully validated.
The HV and read-out electronics will be described and the results of the measurements will be discussed.
Hyper-Kamiokande (Hyper-K) is to be the next generation of large-scale water Cherenkov detectors. With a volume an order of magnitude larger than its predecessor Super-Kamiokande (Super-K), an improved photosensor system and an upgraded beamline, Hyper-K aims to obtain exciting results in many fields, such as the study of CP violation in the leptonic sector, the search for proton decay, and the study of accelerator, atmospheric and solar neutrinos as well as neutrinos of astronomical origin.
For the Hyper-K far detector, to improve Hyper-K physics capability, there are plans to adopt a hybrid configuration that combines the 20" PMTs, already adopted in Super-K, with the multi-PMT (mPMT) modules, a novel technology first designed for the KM3NeT experiment.
An mPMT module, based on a pressure vessel instrumented with 19 small-diameter (7.7 cm) photosensors, each with a different orientation, together with readout electronics and power, offers several advantages such as increased granularity, reduced dark rate, weaker sensitivity to the Earth’s magnetic field, improved timing resolution, and directional information with an almost isotropic field of view.
In this contribution the prospects of the physics capabilities of a hybrid configuration, the test results on mPMT prototypes, and the Hyper-K mPMT program will be discussed.
The Einstein Telescope (ET) gravitational wave interferometer will be the biggest research infrastructure built in Europe in the next decade and its design and construction will bring unprecedented technological challenges.
Third-generation gravitational wave detectors, such as ET, aim at reducing their noise to the lowest possible level for an Earth-bound detector, broadening the detection band down to 2 Hz. This improved sensitivity with respect to Virgo and LIGO gives access to the early Universe through the detection of high-redshift black hole mergers, and to the extreme space-time curvature of high-mass black holes. It also makes it possible to detect neutron star inspirals well before they merge, allowing multimessenger observations of extreme states of matter.
The sensitivity increase in the low-frequency region will however put challenging constraints on the suppression of seismic noise. On the basis of the experience accumulated in constructing and running the Virgo interferometer over the last two decades, we are developing a suspension system that seismically isolates the test masses of ET at frequencies above 2 Hz with the same height, about 10 m, as the current Virgo SuperAttenuator. With respect to the baseline design for ET, this study aims at reducing the size of the isolation system, resulting in very significant cost savings in the ET civil works. The project foresees an evaluation of possible solutions through simulations with software tools validated by previous experience, a detailed mechanical design of the first (and most critical) isolation stages, and their construction and subsequent tests at the Sar-Grav laboratory located in the Sos Enattos mine in Sardinia, a candidate site to host ET thanks to its unique seismic characteristics.
The Plastic Scintillation Detector (PSD) is one of the subdetectors of the HERD apparatus, planned to fly onboard the Chinese Space Station (CSS) in the second half of the 2020s to study high-energy Cosmic Rays (CR).
The main requirements of the PSD are to provide an online trigger for CRs and an online veto for gamma rays, and to determine the charge of CR ions by measuring the energy loss on five sides. The veto requires a very high hermeticity, as close as possible to 100%. Rejection of backsplash particles produced by interactions in the calorimeter surrounded by the PSD also requires precise timing. The requirement of measuring the CR charge for Z up to iron calls for a large dynamic range that cannot be covered with a single readout chain.
With all these constraints we developed a prototype for testing with ion beams at CNAO (Centro Nazionale Adroterapia Oncologica). The prototype consists of long (50 cm) Printed Circuit Boards (PCBs), each housing 5 tiles. SiPMs of different sizes (5 of 3x3 mm² and 4 of 1x1 mm²) mounted on the PCB read out each tile on the wide side to guarantee uniform light collection. Signals are routed to one end of the PCB through internal traces, avoiding cables. Two long PCBs can be electrically and mechanically joined to create a 100 cm long ladder with 10 tiles. In addition, the edges of the tiles are shaped so as to overlap, guaranteeing full hermeticity.
This configuration has been tested extensively with proton and carbon beams at CNAO at different energies, chosen so as to mimic the energy loss of higher-Z ions. The full SiPM waveforms are acquired with a high-frequency digital oscilloscope and stored for offline analysis.
Large volumes of liquid argon or xenon constitute an excellent medium for the detection of neutrino interactions and for dark matter searches. As an alternative or a complement to a Time Projection Chamber type of readout, we explore the use of imaging of the scintillation light to provide information on the event topology and the deposited energy.
Designing such an imaging detector presents several challenges: the performance of both photon detectors and conventional optical elements in the deep UV is limited; thousands of photosensor channels in dense matrices must be operated in cryogenic conditions; the optical system must provide a sufficiently wide and deep field of vision to maximize the fiducial volume.
Through the use of Coded Aperture Masks and SiPM matrices, coupled to cryogenic readout ASICs, we have developed a first "camera" with 256 channels. Prototypes have been built and characterized for operations in cryogenic conditions, and a concept demonstrator detector has been assembled.
We developed a custom image reconstruction algorithm to obtain a 3D map of the deposited energy, based on a probabilistic approach. The algorithm is designed to be run on GPUs to provide the required performance, and in simulations with low levels of detected light it improves on the results obtained with traditional deconvolution techniques.
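As an illustration of this class of probabilistic reconstruction (not the collaboration's actual algorithm), the sketch below implements a standard MLEM update for coded-aperture data with a random stand-in system matrix; the same matrix-vector operations map naturally onto GPUs.

```python
# Minimal sketch of an MLEM-style (expectation-maximization) update, a common
# probabilistic scheme for coded-aperture image reconstruction. This is only an
# illustration of the approach; the system matrix here is random and stands in
# for the real optical response of the mask and SiPM matrix.
import numpy as np

def mlem(system_matrix, measured, n_iter=50):
    """system_matrix[i, j]: probability that a photon emitted in voxel j
    is detected in SiPM channel i; measured[i]: detected counts per channel."""
    n_channels, n_voxels = system_matrix.shape
    estimate = np.ones(n_voxels)                      # flat starting image
    sensitivity = system_matrix.sum(axis=0)           # per-voxel detection prob.
    for _ in range(n_iter):
        expected = system_matrix @ estimate           # forward projection
        ratio = measured / np.clip(expected, 1e-12, None)
        estimate *= (system_matrix.T @ ratio) / np.clip(sensitivity, 1e-12, None)
    return estimate

# Toy usage with a random response and a single bright voxel.
rng = np.random.default_rng(2)
A = rng.random((256, 1000)) * 1e-3                    # 256 channels, 1000 voxels
truth = np.zeros(1000); truth[123] = 1e5
data = rng.poisson(A @ truth)
image = mlem(A, data)
```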
Extensive simulations of the performance of larger cameras and of detectors with a larger number of cameras will also be presented.
Baikal-GVD (Gigaton Volume Detector) is a neutrino telescope deployed in Lake Baikal, in the south-eastern part of Russia, whose primary purpose is to observe high- and ultra-high-energy (TeV–PeV) neutrinos and to identify and explore their sources. As of 2021, the detector consists of eight clusters of 288 optical modules each, immersed in the water at depths spanning from 750 m to 1300 m below the surface. Eight more clusters are scheduled to be deployed by 2025. Each optical module is equipped with a 10'' photomultiplier tube to collect the Cherenkov photons emitted by secondary particles produced in neutrino interactions in the vicinity of the detector. The spatial and temporal distribution of the signal in the optical modules is used to reconstruct the neutrino interaction type, energy and direction. The talk will cover the components and structure of the detector, its deployment process and the first physics results.
The mitigation of all potential noise sources detrimentally affecting gravitational wave (GW) detection is mandatory for present and future GW interferometers. Here we approach two apparently uncorrelated issues: the electrostatic charge forming on test masses at room and cryogenic temperature, and the build-up of a frost layer on cryogenically cooled mirrors.
Electrostatic charge has been shown to affect LIGO data taking. Its mitigation routinely requires long exposures (hours) of the mirror to a flux of N$_2$ ions at relatively high pressure (tenths of mbar).
Cryogenic mirrors have been identified as a viable solution for future GW detectors to reduce thermal noise and obtain the desired detection sensitivity at low frequency. Operating at temperatures down to ~10 K presents several extraordinary challenges, one concerning the cryogenic vacuum system hosting the cold mirrors. Gases composing the residual vacuum tend to cryosorb on the mirror surface, forming a contaminant ice layer (“frost”). This can severely perturb the mirror optical properties, preventing detection at the design sensitivity.
Noticeably, the method used at LIGO to mitigate electrostatic charging cannot be applied to cryogenically cooled mirrors without forming an unacceptably thick condensed N$_2$ layer on their surface.
Low-energy (10 to 100 eV) electrons are known to be very efficient in inducing gas desorption. Moreover, by properly tuning the energy of the incident electrons, an electron beam can be used to neutralize both positive and negative charges on the mirror’s dielectric surface. Electrons are also known to interact only with the very top layers (a few nm) of any irradiated surface, and thus seem ideal to neutralize charge and induce frost desorption without damaging the optical properties of the mirror surface.
Here we present an experimental proof of principle suggesting that low energy electrons may be indeed used as a mitigation method to cure surface charging and frost formation.
The Jiangmen Underground Neutrino Observatory (JUNO) is an experiment aiming to detect rare events, such as antineutrinos originating from nuclear reactors and from the interior of the Earth, as well as neutrinos from galactic and extragalactic sources. JUNO’s active target is made of 20 kton of organic liquid scintillator, monitored by more than 40,000 photosensors.
JUNO will act as a huge homogeneous calorimeter, designed to measure MeV-scale energy depositions with a resolution better than $3\%/\sqrt{E}$ and with a sub-percent bias. As a consequence, JUNO’s most challenging task will be to ensure that such a demanding calorimetric performance is met consistently over a volume larger than $2\times 10^4$ m$^3$, and for a time period longer than 10 years.
In my contribution, I will introduce the innovative instrumentation that has been developed and is being built to calibrate the JUNO detector. I will describe how we plan to ensure a linear detector response when using a scintillation medium known to be non-linear at low energy. More importantly, I will explain how we will make JUNO’s main photosensors, and their readout electronics, respond linearly over a dynamic range spanning three orders of magnitude, i.e. from a single photoelectron (PE) up to more than 1000 PE. While this challenge has been met before on benchtop setups, JUNO’s novelty will be the capability to precisely assess the performance of the light-detection system during the actual data taking, by simultaneously employing two sets of PMTs of different sizes, which will experience very different illumination levels. I will finally show how JUNO’s calorimetric performance is so demanding that it might be affected even by the shape and polishing of the cm-sized envelopes enclosing the radioactive sources.
Low-threshold (sub-100 eV recoil), large-mass (100 g to kg scale) detectors are essential for searches for new physics in Dark Matter and Coherent Elastic Neutrino-Nucleus Scattering experiments. I will present the latest developments in this area at Texas A&M as they apply to experiments such as the SuperCDMS Dark Matter search, the MINER CENNS experiment and the SPICE/HERALD low-mass Dark Matter search.
The surface array of the IceCube Neutrino Observatory currently consists of 162 ice-Cherenkov tanks and is used both as a veto for the in-ice neutrino observations and as a capable cosmic-ray detector. In order to further enhance the science case of the IceCube surface array, the existing detectors will be complemented by an array of scintillation panels and radio antennas. The scintillation detectors will lower the energy threshold, and the radio antennas will significantly improve the energy and Xmax reconstruction performance, especially for inclined showers at higher energies. The radio-quiet environment at the South Pole and the design of the radio antennas allow air-shower radio emission to be measured in the novel, higher frequency band between 70 and 350 MHz. The use of this higher frequency band gives a higher signal-to-noise ratio and a lower shower detection threshold compared to traditional sparse cosmic-ray radio arrays, which mostly use the 30-80 MHz frequency range. A prototype station consisting of 8 scintillation panels and 3 radio antennas was deployed at the South Pole in January 2020 and has been collecting data since then. Detection and successful reconstruction of air showers using this single station have proven the viability of the hardware and inform further optimizations of the detector design and of the shower analysis techniques that will be applied to the full array when deployed in a few years. It has also been confirmed that we can indeed measure the radio emission from air showers with energies of a few tens of PeV. Thanks to this successful validation, the surface station design forms the baseline for the layout of the future IceCube-Gen2 surface array. In this talk, I will introduce the IceCube Surface Array Enhancement with a focus on air-shower detection with the radio antennas.
Rare-event search experiments are one of the challenges of modern physics. The sensitivity of this kind of experiment is limited by the background, usually coming from the materials of the experimental apparatus.
In this context it is crucial to develop high-sensitivity analysis techniques to select the most suitable materials, in order to reduce the radioactivity contribution to the background from the different components of the detector.
For this purpose we have developed a methodological approach which combines neutron activation analysis (NAA), radiochemical treatments and high-sensitivity measurements with a novel low-background β-γ detector, made of a liquid scintillator and a high-purity germanium (HPGe) detector operating in time coincidence. This measurement system is suited to detecting well-defined time-correlated events, allowing a strong background reduction and an increased sensitivity. The procedure developed enables measurements of uranium and thorium trace concentrations on activated liquid samples.
Tests conducted so far have demonstrated that this methodology is well suited to ultra-trace element measurements, achieving contamination limits for $^{238}\textrm{U}$ and $^{232}\textrm{Th}$ at the $10^{-14}$ g/g (<1 μBq/kg) level.
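As an order-of-magnitude cross-check of the quoted figures, using the $^{238}$U half-life $T_{1/2} \simeq 4.47\times10^{9}$ yr $\simeq 1.41\times10^{17}$ s:
\[
  a_{^{238}\mathrm{U}} = \frac{\ln 2}{T_{1/2}}\,\frac{N_A}{M}
  \simeq \frac{0.693}{1.41\times10^{17}\,\mathrm{s}}\times\frac{6.02\times10^{23}}{238\ \mathrm{g}}
  \simeq 1.2\times10^{4}\ \mathrm{Bq/g},
\]
so a concentration of $10^{-14}$ g/g corresponds to $10^{-11}\ \mathrm{g/kg} \times 1.2\times10^{4}\ \mathrm{Bq/g} \simeq 0.1\ \mu\mathrm{Bq/kg}$, consistent with the sub-$\mu$Bq/kg level quoted above.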
The Large Area Telescope (LAT) is the primary instrument of the Fermi Gamma-ray Space Telescope, launched on June 11, 2008. It is an imaging, wide field-of-view gamma-ray telescope covering the energy range from 30 MeV to more than 300 GeV. The LAT tracker is formed by silicon-strip planes alternated with tungsten foils and is used to convert the incoming photon into an electron-positron pair and to measure its direction. The tracker comprises 36 mutually orthogonal planes of single-sided silicon strip detectors, for a total of 73 square meters of silicon and about 900,000 independent electronics channels. The tracker system was designed to achieve a single-plane hit efficiency in excess of 99% within the active area and a noise occupancy at the level of ~1 channel per million. Here we describe the performance of the LAT tracker, which has been constantly monitored using calibration and science data. In particular we show that, after almost 14 years of continuous operation in space, the fraction of defective channels is less than 0.5% while the readout noise has increased by less than 5%.
The IceCube Collaboration plans to upgrade IceTop, the surface array located on the South Pole ice sheet, with scintillation detectors augmented by radio antennas. This IceCube Surface Array Enhancement will measure and mitigate the effects of snow accumulation on the operating 162 IceTop Cherenkov tanks, as well as improve the measurements of high-energy cosmic rays by lowering the energy threshold. The enhancements also provide R&D experience for the next generation (IceCube-Gen2) surface instrumentation.
A full prototype station was installed near the center of IceTop in January 2020. The station features custom-designed DAQ electronics and consists of three radio antennas and eight scintillation detectors read out by silicon photomultipliers (SiPM).
This contribution will focus on the scintillation detectors developed for the Surface Array Enhancement and their DAQ, calibration methods, deployment and operation experience, as well as first results of the reconstruction of extensive air showers measured in coincidence by the prototype station and IceTop. Future plans for instrumenting the entire IceTop array, as well as the surface of IceCube-Gen2, with hybrid stations will also be presented.
The High Energy Cosmic Radiation Detection (HERD) facility is a large field-of-view, high-energy cosmic-ray detector planned to be installed on the Chinese Space Station in 2027. The Silicon Charge Detector (SCD) is a specialized HERD sub-detector that accurately measures the absolute charge magnitude $Z$ of cosmic rays, separating chemical species from hydrogen ($Z=1$) to iron ($Z=26$) and beyond. The SCD concept is based on layers of single-sided micro-strip sensors able to measure the energy loss and coordinates of the traversing particle. The status of the SCD design, the expected performance, and its prototyping will be discussed.
The TRISTAN project represents an upgrade under development of the KATRIN (KIT, Germany) focal plane detector (FPD) for the search for a sterile neutrino in the keV range. After having assessed and modeled the response of SDDs to electrons on smaller arrays, we present the design and characterization of the detection module featuring a monolithic array of 166 SDD pixels (3 mm diameter) with integrated JFET readout coupled to dedicated preamplifiers (Ettore ASIC). The module is 4-side buttable and mounted on a cold finger. 21 modules will be juxtaposed in the FPD, in high vacuum (10$^{-11}$ mbar) and magnetic fields (~T), for a total of 3486 pixels, each one operating at 100 kcps while targeting the best spectroscopic performance for electrons up to 20 keV. Given the compact size (4 cm side), high-density PCBs and interconnections were adopted. Data processing is performed with the custom-developed, Ethernet-controlled 192-channel Athena platform, made of four 48-channel Kerberos units combining integrated analog shapers and peak sampling (SFERA ASIC) with FPGA data acquisition and concentration. Preliminary results were obtained in a planar configuration, showing an average energy resolution across all 166 pixels better than 250 eV at 5.9 keV, compliant with the experiment specifications, with moderate cooling (0°C) and at a count rate of 1 kcps. Noise and cross-talk have been carefully investigated, leading to an improved design of the detector traces. Commissioning of the first module in the Monitor Spectrometer of KATRIN, in operating conditions similar to the FPD and for characterization with electron sources, is planned for mid-2022.
The High Energy cosmic Radiation Detector (HERD) is one of the leading projects among future space-borne instruments. It will be installed onboard the Chinese Space Station (CSS) thanks to a collaborative effort among Chinese and European institutions.
The HERD core is a 3D calorimeter (~55 X$_0$, ~3 $\lambda_\mathrm{I}$), forming an octagonal prism. Five calorimeter sides will be surrounded by three subdetectors, in order from the innermost: the Fiber Tracker (FiT), the Plastic Scintillator Detector (PSD) and the Silicon Charge Detector (SCD). Finally, a Transition Radiation Detector (TRD) is installed on one of the lateral faces for energy calibration in the TeV range.
The qualification of SiPM technologies for space applications allows the design requirements to be pushed further and enhances the detection capabilities. HERD is thus uniquely configured to accept particles from both its top and its four lateral sides. Thanks to this pioneering design, HERD shows an order-of-magnitude increase in geometric acceptance compared to current-generation experiments. This will allow for precise measurements of Cosmic Ray (CR) energy spectra and mass composition up to the highest energies achievable in space (~ a few PeV), gamma-ray astronomy and transient studies, along with indirect searches for Dark Matter particles.
The advent of systems on a chip (SoC) that integrate a CPU and multiple analog and digital input-output peripherals, together with the simultaneous improvement of Silicon Photomultiplier detectors with low dark count rates, allowed our group at INFN Rome1 to build in 2014 the first all-in-one scintillation detector reported in the literature, called ArduSiPM.
The original idea is to use the minimum possible number of COTS components external to the SoC (typically fast analog parts) and to make the most of the peripherals inside the chip, thus obtaining compact electronics without using ASICs or an external data acquisition system.
The system consists of a scintillator, a SiPM, conditioning electronics, and a microcontroller with fast peripherals. The detector can measure the rate, the arrival time with an accuracy of tens of nanoseconds, and the number of photoelectrons produced in the SiPM.
Compared to an ASIC solution, having a processor and the related communication interfaces on board has allowed us to build an all-in-one detector, including a control system, data acquisition and processing, and data transmission, using only COTS components.
This approach has proved to be very fruitful, as the importance of a technology is given not only by its current state but also by its growth trend.
In the last few years, this class of microcontrollers has gained performance in terms of internal CPU speed and the number and speed of peripherals.
Within the INFN CSN5 MICRO experiment, we explore different circuits and engineering solutions to use this class of detectors in various fields, such as picosatellites, Transient Luminous Events in the high atmosphere, and analytical chemistry to measure bioluminescence.
In the case of picosatellites, we plan to integrate our detector into the onboard computer firmware, making it one of the peripherals of the system, which consists of actuators, navigation instruments, and attitude control.
The Crystal Eye detector is proposed as a space-based X- and gamma-ray all-sky monitor active from 10 keV up to 30 MeV.
In its full-scale configuration, it consists of a 32 cm diameter hemisphere made of 112 pixels, with an overall weight lower than 50 kg, a wide field of view (FOV, about 6 sr), full sky coverage and a very large effective area (about 6 times higher than Fermi-GBM at 1 MeV) in the energy range of interest.
Each pixel consists of two layers of scintillating LYSO crystals read by arrays of Silicon PhotoMultipliers (SiPMs), equipped with a segmented anticoincidence detector for charged Cosmic Ray (CR) identification and hard X-ray detection.
The primary scientific goals include the observation of transient X- and gamma-ray flashes from Gamma Ray Bursts (GRBs), gravitational wave follow-up, supernova explosions, etc., and the observation of stable gamma-ray sources in the MeV energy range. The pioneering design optimizes these observations in terms of source localization and timing. By using specific triggers for charged particles, solar flares and space-weather phenomena can also be studied.
Custom electronics based on the CITIROC1A ASIC is in use. A pathfinder mission is foreseen onboard the Space Rider vehicle operated by ESA, allowing technology tests and qualification, as well as both deep-space and Earth observation during the mission. We present here the Crystal Eye technology and the prototype for the pathfinder mission.
On December 9th 2021, the Imaging X-ray Polarimetry Explorer (IXPE) was launched on a Falcon 9 from Cape Canaveral into its equatorial, low-Earth orbit, where it began scientific observations on January 11th 2022. Equipped with three identical telescopes---each providing simultaneous polarimetric, spatial, spectroscopic and temporal information---IXPE will measure, for the first time in the soft X-ray band, the polarization of tens of celestial objects of different classes: supernova remnants, pulsars and pulsar wind nebulae, magnetars, active galactic nuclei and accreting black holes.
In this contribution I will describe the design and construction of the innovative polarization-sensitive gas detectors at the IXPE focal plane, with emphasis on the lessons learned through the development phase of the mission. In addition, I will report on the instrument commissioning and early in-orbit experience, as well as the first scientific results.
The largest gaseous time projection chamber (TPC) in the world, the ALICE TPC, has been upgraded with gas electron multiplier (GEM) readout for continuous operation. The new readout chambers consist of stacks of four GEM foils combining different hole pitches. In addition to a low ion backflow, other key requirements such as energy resolution and operational stability were met. The new readout electronics send the continuous data stream to the new ALICE online computing farm at a rate of 3.5 TByte/s. The detector has been optimised to read all minimum-bias Pb-Pb events that the LHC will deliver in the high-luminosity heavy-ion era, at the anticipated peak interaction rate of 50 kHz. We will report on the commissioning of the ALICE GEM TPC and the first operational experience.
The High-Luminosity LHC (HL-LHC) program will pose a great challenge for the different components of the CMS Muon Detector. The existing systems, which consist of Drift Tubes (DT), Resistive Plate Chambers (RPC) and Cathode Strip Chambers (CSC), will have to operate at an instantaneous luminosity 5 times larger than their design value and, consequently, will have to sustain about 10 times the original LHC integrated luminosity. Additionally, to cope with the high-rate environment and maintain good performance, additional Gas Electron Multiplier (GEM) and improved RPC (iRPC) detectors will be installed in the innermost region of the forward muon spectrometer of the CMS experiment. The design of these new detectors will have to ensure their long-term operation in a harsh environment. Finally, RPC and CSC use gases with a high global warming potential (GWP), and a search for new eco-friendly gases is therefore necessary, as part of a CERN-wide program. To address all of these challenges, a series of accelerated irradiation studies has been performed for all the muon systems, mainly at the CERN Gamma Irradiation Facility (GIF++) or with specific X-ray sources. This talk will report the status of the longevity studies of the different systems of the CMS Muon Detector after the large charge integrated over the last years. Additionally, the actions taken to reduce detector aging and to minimize greenhouse-gas consumption will be discussed.
Micromegas (Micro-MEsh GAseous Structures) detectors are a modern form of micro-pattern gaseous detectors. The primary charges are amplified by electron avalanches between a planar anode and a mesh 120$\,$µm above the anode. For resistive strip type Micromegas detectors the signal is read out via readout strips below the anode. A 2D particle position is reconstructed using two perpendicular readout strip layers below the resistive anode structure.
With a standard 2D resistive-strip Micromegas readout structure, an unambiguous 2D particle position reconstruction is only possible if the detector is hit by a single particle at a time. If multiple particles arrive simultaneously, ambiguities occur and a unique X-Y assignment is not possible.
This issue can be resolved by replacing the mesh with a GEM (Gas Electron Multiplier) foil, which is segmented into 0.5$\,$mm wide strips on one side. The GEM strips need to be turned by 45° with respect to the Micromegas readout strips. Thus the detector has three readout strip directions (X, Y and V).
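A minimal sketch of how the third coordinate removes the ambiguity is given below; the geometry (readout strips measuring x and y, GEM strips measuring v = (x + y)/√2) and the numbers are assumptions for illustration only.

```python
# Minimal sketch of how a third strip direction resolves two-hit ambiguities.
# Assumed geometry for illustration: readout strips measure x and y, GEM strips
# at 45 degrees measure v = (x + y) / sqrt(2). With two simultaneous hits, the
# (x, y) pairing that best matches the measured v values is kept.
import itertools
import numpy as np

def resolve_pairing(x_hits, y_hits, v_hits, tol=0.5):
    """Return the (x, y) pairing whose predicted v values best match v_hits (mm)."""
    best, best_residual = None, np.inf
    for y_perm in itertools.permutations(y_hits):
        pairs = list(zip(x_hits, y_perm))
        v_pred = sorted((x + y) / np.sqrt(2) for x, y in pairs)
        residual = np.sum(np.abs(np.array(v_pred) - np.array(sorted(v_hits))))
        if residual < best_residual:
            best, best_residual = pairs, residual
    return best if best_residual < tol * len(x_hits) else None

# Two simultaneous hits at (10, 40) mm and (30, 5) mm: x and y alone allow two
# pairings; the v measurements single out the correct one.
true_hits = [(10.0, 40.0), (30.0, 5.0)]
v_meas = [(x + y) / np.sqrt(2) for x, y in true_hits]
print(resolve_pairing([10.0, 30.0], [40.0, 5.0], v_meas))
```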
A prototype of such a Segmented GEM Readout (SGR) detector has been built, with the GEM strips perpendicular to the readout strips. Test-beam measurements with this detector were performed using 120$\,$GeV muons. The GEM and the Micromegas strips show a similar pulse height. For perpendicularly incident particles, a position reconstruction efficiency better than 90$\,$% is reached on both the GEM strips and the readout strips, and the detector achieves a resolution better than 80$\,$µm on both.
Angle and position reconstruction also works for inclined tracks, achieving position reconstruction efficiencies better than 90$\,$%.
MicroMEGAS (Micro-MEsh GAseous Structure) and Gas Electron Multiplier (GEM) detectors are commonly used as readout technologies in Time Projection Chambers (TPCs) for particle physics experiments. However, for these two types of detectors, a small fraction of the secondary ions produced in the amplification region returns to the drift volume, i.e. the TPC itself, causing local distortions of the electric field. This effect is known as the space-charge effect. We present a new Micro-Pattern Gaseous Detector (MPGD) structure that combines a micro-mesh with a set made of a GEM surmounted by a micro-mesh at a distance of only a few hundred μm. We report the performance of two prototypes using this new structure, capable of reducing the ion backflow fraction to less than 0.2% at a total gain of around 2000.
A large, worldwide community of physicists is working to realise an exceptional physics program of energy-frontier electron-positron collisions with the International Linear Collider (ILC). The International Large Detector (ILD) is one of the proposed detector concepts for the ILC. The ILD tracking system consists of a Si vertex detector, forward tracking disks and a large-volume Time Projection Chamber (TPC) embedded in a 3.5 T solenoidal field. The TPC is designed to provide up to 220 three-dimensional points for continuous tracking with a single-hit resolution better than 100 μm in rφ and about 1 mm in z. An extensive research and development program for the TPC has been carried out within the framework of the LCTPC collaboration. A Large Prototype TPC in a 1 T magnetic field, which can accommodate up to seven identical Micropattern Gaseous Detector (MPGD) readout modules of the near-final design proposed for ILD, has been built as a demonstrator at the 5 GeV electron test beam at DESY. Three MPGD concepts are being developed for the TPC: Gas Electron Multiplier, Micromegas and GridPix. Successful test-beam campaigns with the different technologies have been carried out. Fundamental parameters such as transverse and longitudinal spatial resolution and drift velocity have been measured. In parallel, a new gating device based on large-aperture GEMs has been produced and studied in the laboratory. In this talk, we will review the track reconstruction performance results and summarize the next steps towards the TPC construction for the ILD detector.
The extension of data acquisition for the Beijing Electron Spectrometer (BESIII) experiment until at least 2030 has resulted in upgrades to both the accelerator and the detector.
An innovative Cylindrical Gas Electron Multiplier (CGEM) is under construction to upgrade the inner tracker, which is suffering from aging. The CGEM Inner Tracker (CGEM-IT) is designed to restore efficiency and enhance the reconstruction of secondary vertex positions, with a resolution of 130 μm in the xy-plane and 350 μm along the beam direction. For reconstruction in the 1 T magnetic field, an analog readout and an electronics contribution to the time resolution of better than 5 ns are required. The entire system consists of about 10,000 electronic channels and must sustain a peak signal-hit rate of 14 kHz/strip on the innermost layer of the CGEM-IT. The CGEM readout system is based on the innovative TIGER ASIC, manufactured in 110 nm CMOS technology. A dedicated readout chain based on GEM Read Out Cards (GEMROC) was developed for data acquisition.
Two out of three layers, instrumented with final electronics, have been operating in Beijing since January 2020 and are being remotely controlled by Italian groups due to the pandemic situation.
In July 2020, a test beam was conducted at CERN with the final electronics configuration on a small prototype consisting of four planar GEM detectors. About 250M triggers were acquired. Both muon (80 GeV) and pion (150 GeV) beams were used, with the beam incidence angle varying from 0° to 45°.
In this presentation, the general status of the CGEM-IT project will be presented, with particular emphasis on the results from the test-beam data.
Additive manufacturing is a popular technique currently providing new opportunities in several domains. In this work, we apply this technology to detector construction. We hypothesized that fully automated 3D printing of a detector would drastically reduce 1) detector construction cost and assembly time and 2) the probability of mistakes during construction.
We introduced and optimized a new electrode material to match the properties of the Bakelite featured in the RPC detectors installed in the LHC experiments. The new material we present is extruded in filament form, readily usable by any general-purpose desktop 3D printer.
Using our custom-made filament, we developed and printed several detector prototypes. Preliminary results obtained with cosmic rays will be presented to demonstrate the proof of concept of this new RPC, fully built with additive manufacturing.
The search for weakly interacting, light particles that couple to photons has received significant attention in recent years. When such particles are produced at high energies, they may decay into two collinear photons that can be detected by an electromagnetic calorimeter system. The typical dominant background in searches for these highly energetic, weakly interacting particles consists of single, high-energy photons, which leave similar signatures in a standard calorimeter system.
One promising approach to separate signal from background events is to employ a dedicated pre-shower detector in front of the calorimeter that can distinguish one- and two-photon signatures. In this work we present a conceptual design of such a detector, able to separate single photons from two collinear photons. For energies above 300 GeV, it allows for efficiencies between 20% and 80% for two photons separated by 100 µm to 2000 µm, respectively, and a background rejection of more than 90%. Our pre-shower detector design has an active surface area of 10 x 10 cm$^2$ and a depth of 230 mm, and is based on Micromegas detectors operated with Ar:CO$_2$ ($93\% : 7\%$), thus offering a cost-effective solution.
In view of the LHC Phase-2, the CMS experiment is being upgraded with three stations of triple-GEM detectors (GE1/1, GE2/1 and ME0) to maintain the excellent trigger pT resolution of its muon spectrometer in the high-luminosity LHC environment and to extend its coverage to the very-forward pseudorapidity region 2.4<|η|<2.8. The challenges faced in adapting the triple-GEM technology to a large-area detector have required the introduction of innovations such as discharge protection, an optimized GEM foil segmentation, and the development of complex front-end electronics. The CMS GEM detectors were tested for the first time under beam irradiation in their final design, with their complete front-end electronics and data acquisition software, in Fall 2021 at the CERN North Area, with the goals of demonstrating the operation of the full readout chain, measuring the efficiency and space resolution under intense beam irradiation, and verifying the operating principle of a novel foil sectorization. We describe the setup of the test beam, made of a GE2/1 detector and a second-generation ME0 detector and completed by a high-space-resolution beam telescope made of four 10x10 cm2 triple-GEMs. We discuss the preparation of the full DAQ chain, made of the VFAT3 front-end ASIC, an OptoHybrid front-end FPGA and a custom back-end based on a commercial FPGA board (CVP-13), all operated with the final CMS GEM acquisition software. We report on the performance of both the large-area detectors and the tracker, measured with muons and pions.
With the increase of the LHC luminosity foreseen in the coming years, many detectors currently used in the different LHC experiments will be impacted dramatically and some will need to be replaced. The new ones should be capable not only of supporting the high particle rate, but also of providing time information to reduce the data ambiguity due to the expected high pileup. The CMS collaboration has shown that RPCs using a smaller gas gap (1.4 mm) and low-resistivity High Pressure Laminate can stand rates of a few kHz/cm2. They are equipped with new electronics sensitive to low signal charges. This electronics was developed to read out the RPC detectors from both sides of a strip and, using the timing information, to identify the position along it. The excellent relative resolution of ~200 ps leads to a space resolution of a few cm. The absolute time measurement, of around 500 ps as determined by the RPC signal, will also reduce the data ambiguity due to the expected high pileup at the Level-1 trigger. An engineering prototype of the final chamber was qualified in test beams at the Gamma Irradiation Facility (GIF), located on one of the SPS beam lines at CERN. In addition, 4 demonstrator chambers have just been installed in the CMS cavern. This talk will present the results of the tests done at GIF, as well as new results from the demonstrator chambers.
Position resolution and gain uniformity of gaseous ionization detectors play essential roles in tracking charged particles and subsequent imaging. In the present work, experimental studies have been conducted to investigate the position resolution, charge spread and gain uniformity of a Gas Electron Multiplier (GEM) based detector. These results are essential for understanding the performance of the detector. The data have been recorded using a front-end APV25 board combined with the Scalable Readout System (SRS) as DAQ. A position resolution down to 36.7 microns has been achieved with a double-GEM configuration and an Ar:CO2 based gas mixture. The studies of gain uniformity and charge spread are important to understand the detector performance for future application-based experiments.
The Large Hadron Collider (LHC) will undergo a major upgrade in the mid-2020s, referred to as the High-Luminosity LHC (HL-LHC), to extend its operability by another decade and to increase its luminosity, with the aim of increasing the potential for new discoveries. In order to meet the experimental challenges of the unprecedented proton-proton luminosity, which will increase both radiation levels and rates, the experiments have to upgrade their electronics and detector performance. The Drift Tube (DT) detectors in the CMS muon barrel region, serving both as offline tracking and triggering devices, are also planning an upgrade of their current readout and trigger electronics with the new On-Board DT electronics (OBDT), to withstand the high-rate environment of the HL-LHC. During Long Shutdown 2 (LS2), prototypes of the new electronics were installed in four DT chambers with the same azimuthal acceptance, instrumenting a demonstrator of the HL-LHC DT upgrade (DT slice test) integrated into the central data acquisition and trigger systems. A series of tests aiming to optimise the installation in LS3 was performed. This report summarizes the present status of the slice test, as well as its performance assessed with cosmic-ray events.
In ongoing large-scale experiments for particle physics and dark matter searches (rare-event experiments), new gas media are being used or sought. One of the key features required for these media is a low electron transverse diffusion, to achieve the good tracking accuracy that enables the much-needed background rejection. The properties of these new mixtures are mostly unknown and assessing them beforehand is paramount. We present a new device, especially designed to measure the electron transverse diffusion in a gas medium. The device is composed of a xenon lamp and a measuring chamber separated by a quartz window. On the chamber side of the window, where a transmissive CsI photocathode was deposited, photoelectrons produced by the lamp are guided through a drift electric field to a GEM, where they are multiplied, before being collected on a flange - an insulating substrate, on which metal strips are deposited and duly biased, located at a fixed distance (0.5 mm) from the GEM. The flange is mounted on a precision motion feedthrough that allows the drift distance to be varied between 3.78 and 60 mm. The charge is collected on each strip with an electrometer, at several drift distances. Results for the transverse diffusion coefficients are obtained from the charge distributions for the different drift distances, taken at each setup setting. Preliminary results have already been obtained in pure Xe and CH4 - two gases with different electron drift properties - and are in good agreement with available data.
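For illustration only, the extraction described above can be sketched as follows: the width of the strip-charge profile is computed at each drift distance, and the transverse diffusion coefficient follows from the linear growth of the squared width with drift distance, sigma^2(z) = sigma0^2 + D_T^2 z. This is a minimal sketch with invented numbers, not the authors' analysis code.

```python
# Illustrative sketch (invented numbers, not the authors' analysis code): extract a
# transverse diffusion coefficient D_T from strip-charge profiles recorded at several
# drift distances, using the model sigma^2(z) = sigma0^2 + D_T^2 * z.
import numpy as np

def profile_width(strip_x, strip_charge):
    """Charge-weighted RMS width of a measured strip-charge profile."""
    mean = np.average(strip_x, weights=strip_charge)
    return np.sqrt(np.average((strip_x - mean) ** 2, weights=strip_charge))

strips = np.arange(-10.0, 10.5, 0.5)                 # strip positions in mm (assumed pitch)
z_values = np.array([3.78, 10.0, 20.0, 40.0, 60.0])  # drift distances in mm
sigma0, D_T_true = 0.3, 0.22                         # mm and mm/sqrt(mm), invented values

widths = []
for z in z_values:
    sigma = np.hypot(sigma0, D_T_true * np.sqrt(z))  # expected profile width at distance z
    widths.append(profile_width(strips, np.exp(-0.5 * (strips / sigma) ** 2)))

# Straight-line fit of sigma^2 versus z: slope = D_T^2, intercept = sigma0^2
slope, intercept = np.polyfit(z_values, np.array(widths) ** 2, 1)
print(f"D_T = {np.sqrt(slope):.3f} mm/sqrt(mm), sigma0 = {np.sqrt(max(intercept, 0.0)):.3f} mm")
```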
Resistive-strip Micromegas (MICRO-MEsh GAseous Structure) detectors provide, even at square-meter sizes, a high spatial resolution for the reconstruction of Minimum Ionizing Particles (MIPs) such as muons. Micromegas detectors consist of three parallel planar structures: a cathode, a grounded mesh and a segmented anode structure form the detector. Square-meter sizes challenge the high-voltage stability during operation when using the commonly used gas mixture of Ar:CO$_2$ (93$:$7 vol$\%$). To improve the HV stability and enhance the discharge quenching, different gas mixtures have been investigated. A very promising one has an admixture of isobutane which replaces part of the CO$_2$ to form the ternary gas Ar:CO$_2$:iC$_4$H$_{10}$ (93:5:2 vol$\%$).
Long-term irradiation studies investigating both gas mixtures with cosmic-muon tracking-efficiency measurements will be presented. The comparison shows a gain increase under Ar:CO$_2$:iC$_4$H$_{10}$ in addition to the more HV-stable operation of the detector. This leads to a better timing resolution and higher pulse heights, improving the position reconstruction.
The longevity of the detector has been studied by irradiation with neutrons and gammas from a 10$\,$GBq Am-Be source for a period of two years. The detector is checked for any performance deterioration with each of the two gas mixtures, with a focus on pulse height and changes in efficiency.
Additionally, a characterization of the Am-Be neutron source was performed, determining the effectively irradiated area as well as disentangling the effects of neutrons and photons on the detector. For this, different shielding materials, including lead, borated plastic and polyethylene, were introduced between the source and the detector.
He-CF$_4$ is a very attractive gas mixture for Electroluminescence (EL)-based tracking detectors, with applications in Optical Readout Time Projection Chambers for Dark Matter Search. Whereas He maintains a low target mass (relevant for track reconstruction and low WIMP mass sensitivity), CF$_4$ is a fast and efficient scintillator in the UV and visible wavelengths, also providing sensitivity to spin-dependent WIMP-nucleon interactions due to its high fluorine content.
The addition of hydrocarbons to He-CF$_4$ increases the H content of the mixture, improving the tracking capabilities and low WIMP mass sensitivity. Nevertheless, hydrocarbons are known to quench the EL photons produced by some scintillating species, reducing the signal amplification through EL. Therefore, it is necessary to find the optimum hydrocarbon admixture to He-CF$_4$ (species and concentration); one that improves the gas tracking capabilities without compromising the EL readout.
In this work, we studied how small percentages of isobutane and methane influence the charge gain, EL yield and corresponding energy resolution of He-CF$_4$ mixtures. The detector, operated in continuous flow mode, was irradiated with low-energy x-rays and a Large Area Avalanche Photodiode (LAAPD) was used to read out the EL produced in the avalanches of a single Gas Electron Multiplier (GEM). Besides the total EL yield, the visible component of the He-CF$_4$ emission was also quantified by placing a borosilicate glass window on top of the LAAPD window to cut off the UV photons.
Our results show that small percentages of both isobutane (1% to 5%) and methane (up to 10%) do not compromise the EL readout, which makes them good admixtures for EL readout gas tracking detectors based on He-CF$_4$.
The IDEA detector concept has been designed to operate at a future large circular e+e- collider, like FCC-ee or CEPC. The IDEA detector has an innovative design with a central tracker enclosed in a superconducting solenoidal magnet. Going outwards, a preshower system followed by a dual-readout calorimeter is foreseen. Three stations of muon detectors are then located in the iron yoke that closes the magnetic field. The preshower and muon detectors are based on the μ-RWELL technology, which inherits the best characteristics of GEMs, in particular the layout of the amplification stage, and of Micromegas detectors, which inspired the presence of a resistive stage.
To profit from the industrial production capabilities of this technology, a modular design has been adopted for both systems: the μ-RWELL "tile" will have an active area of 50x50 cm2, but with a pitch between the readout strips of 400 μm for the preshower and of about 1 mm for the muon system. Other requirements are a spatial resolution of the order of 100 μm for the preshower and a reasonable total number of front-end channels for the muon system.
To optimize the resistivity and the strip pitch, we have built two sets of prototypes, 5 detectors for the preshower and 3 detectors for the muon system, with an active area of 16x40 cm2 and 40 cm long strips. For the preshower prototypes the DLC resistivity ρs ranges from 10 to 200 MOhm/square, while for the muon ones ρs is about 20 MOhm/square. All these detectors were exposed in October 2021 to a muon/pion beam at the CERN SPS. The very positive results obtained pave the way for a completely new and competitive MPGD tracking device for high-energy physics experiments. Preliminary results from a long detector stability measurement will also be presented.
Large Resistive Plate Chamber (RPC) systems have their roots in High Energy Physics (HEP) experiments at the European Organization for Nuclear Research (CERN): ATLAS, CMS and ALICE, where hundreds of square meters of both trigger and timing detectors have been deployed. These devices operate with complex gas systems, equipped with re-circulation and purification units, which require the addition of fresh gas at a rate of the order of 6 cm³/min/m², creating logistical, technical and financial problems. Recently, new EU legislation for the progressive phasing out of the main gas used in RPCs, Tetrafluoroethane (C2H2F4), due to its high Global Warming Potential (GWP) of 1430, has further increased the pressure on these systems. This poses problems for existing experiments, but especially for new ones, where current solutions will most likely not be allowed.
In this communication, we present a new concept in the construction of RPCs which allows the detector to be operated in an ultra-low gas flow regime. In this construction, the glass stack (the sensitive part of the detector) is encapsulated in a tight polypropylene plastic box, which presents excellent water-vapor blocking properties as well as good blocking of atmospheric gases. A detector module of almost 2 m² was operated for more than one month with a gas flow of less than 1 cm³/min/m² in stable conditions.
The applications of this technology in high-energy physics as well as in cosmic-ray experiments are discussed. In particular, results are presented concerning a cosmic-ray telescope equipped with four such planes, built for the precise monitoring of the cosmic-ray flux and also capable of performing muon tomography.
Timing Resistive Plate Chambers (tRPCs) are a mature and widely used gaseous detector technology (ALICE@CERN, STAR@RHIC or HADES@GSI) for the precise time tagging of charged particles, exhibiting an excellent timing precision, down to 50 ps, together with a high efficiency, larger than 98%, for minimum ionizing particles - characteristics that can be implemented over large areas.
tRPCs have traditionally been used at relatively low particle loads (a few kHz/cm2) due to the inherent limitation on the counting rate imposed by the resistive electrodes. Since tRPCs are one of the main large-area timing detectors, extending their counting-rate capability is of great interest for future HEP experiments, where the luminosity is expected to increase considerably.
Attempts have already been made to increase the count-rate capability by using materials with lower electrical resistivity compared to the commonly used float glass, such as ceramics, special glasses or some technical plastics. As a result, the operation of small-area detectors was successfully achieved, but the implementation of medium/large-area detectors failed due to the lack of homogeneity of the materials, which present low-electrical-resistivity paths, resulting in an unstable behavior of the detector. Another possibility, still very little explored, is to decrease the resistivity of standard float glass by increasing the operating temperature of the detectors, providing a ten-fold decrease in resistivity every 25 ºC.
In this communication, test beam results of common float-glass RPCs operated up to 40 ºC are presented. The results suggest an improvement of the count-rate capability by a factor of four compared with room temperature, while keeping the timing precision and efficiency unchanged.
As a practical case, the use of RPCs operated in this regime in the forward region of the HADES spectrometer is discussed and preliminary results are shown.
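As a minimal illustration of the scaling quoted above (a ten-fold resistivity drop every 25 ºC), and assuming a room-temperature reference of 20 ºC, one can estimate a naive rate-capability gain as the inverse of the resistivity factor; the measured factor of four at 40 ºC sits somewhat below this naive estimate, as the rate capability need not scale exactly inversely with the electrode resistivity. The arithmetic below is an illustration only, not part of the presented results.

```python
# Illustrative arithmetic only: resistivity scaling rho(T) = rho0 * 10**(-(T - T0)/25),
# assuming the ten-fold decrease per 25 degC quoted above and T0 = 20 degC (assumed).
def resistivity_factor(T_celsius, T0=20.0, decade_per_degC=25.0):
    return 10.0 ** (-(T_celsius - T0) / decade_per_degC)

for T in (20, 30, 40):
    f = resistivity_factor(T)
    # To first order the rate capability scales inversely with the electrode resistivity.
    print(f"T = {T} degC: resistivity x{f:.2f}, naive rate-capability gain x{1/f:.1f}")
```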
The theoretical prediction for the distribution of the angle between electrons and positrons originating from internal pair creation is a monotonic, featureless decrease with the opening angle. Recent studies on excited states of 8Be and 4He nuclei, made at ATOMKI, Hungary, have however revealed deviations from this expectation. If true, such a result may have a fundamental impact: the anomaly can be explained by introducing a new short-lived neutral boson that can still fit within known experimental and theoretical constraints. Although serious work has been done on the theoretical side, an independent laboratory has not yet verified these results, although related experiments are being prepared worldwide. In this work we describe the ongoing construction of a suitable Time Projection Chamber (TPC) based spectrometer for light charged particles, utilising a magnetic field as a means for energy measurement, together with Multiwire Proportional Chambers (MWPC) and Timepix3 pixel detectors for spatial and angular resolution. The experiment will be operated at the Institute of Experimental and Applied Physics (IEAP) Van-de-Graaff accelerator facility in order to either confirm or refute the above-mentioned anomaly. Details of the detectors will be described, together with relevant technical, theoretical and experimental aspects of the experimental setup, as well as results obtained with prototypes built for the current development phase of this project.
The tracker of MEG II and those under development for the CREMLIN+, FCC and CEPC experiments consist of ultralight drift chambers operated with a mixture of Helium and Isobutane. A stable performance of the tracker in terms of its electron transport parameters, avalanche multiplication, and composition and purity of the gas mixture is of crucial importance. In order to monitor the gas quality continuously, we plan to install a small drift chamber with a simple geometry that allows the electron drift velocity to be measured very precisely and promptly. The monitoring chamber will be supplied with the gas mixture coming from the inlet and the outlet of the detector, to determine whether any gas contamination originates inside the main chamber or in the gas supply system. The chamber is a small box with cathode walls that define a highly uniform electric field inside two adjacent drift cells. Along the axis separating the two drift cells, four staggered sense wires alternated with five guard wires collect the drifting electrons. The trigger is provided by two weak $^{90}$Sr calibration radioactive sources placed on top of a telescope of two thin scintillator tiles. The whole system is designed to give a prompt response (within a minute) on drift velocity variations at the $10^{-3}$ level. We will present a detailed description of the chamber layout, its simulations and the preliminary measurements.
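For illustration of the measurement principle (with invented geometry and numbers, not the actual monitoring-chamber analysis), the drift velocity follows from a straight-line fit of the measured drift times against the known drift distances to the staggered sense wires in a uniform field:

```python
# Illustrative sketch (invented geometry/values): with a uniform drift field and known
# drift distances to staggered sense wires, the drift velocity follows from a
# straight-line fit of measured drift times versus distance.
import numpy as np

drift_distance_cm = np.array([0.5, 1.0, 1.5, 2.0])          # assumed cell geometry
drift_time_us     = np.array([0.115, 0.228, 0.341, 0.455])  # invented measured times

slope_us_per_cm, offset_us = np.polyfit(drift_distance_cm, drift_time_us, 1)
v_drift = 1.0 / slope_us_per_cm            # cm/us
print(f"v_drift = {v_drift:.2f} cm/us (t0 offset = {offset_us*1e3:.1f} ns)")

# A relative change of the fitted slope at the 1e-3 level between inlet and outlet
# samples would flag a contamination of the gas mixture.
```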
In the MEG II detector, the measurement of the momentum of the charged particle is performed by a high-transparency, single-volume, full-stereo cylindrical Drift Chamber (CDCH). It is composed of 9 concentric layers, each consisting of 192 drift cells. The single drift cell is approximately square, with a $20~\mu m$ gold-plated W sense wire surrounded by $40~\mu m$/$50~\mu m$ silver-plated Al field wires in a ratio of 5:1. During the construction of the first CDCH, about a hundred cathode wires broke: of these, 97 are $40~\mu m$ aluminum wires while 10 are $50~\mu m$ wires. Since the number of broken cathodes is less than 1\% of the total, one can expect the influence on the track reconstruction efficiency to be limited. We verified by means of simulations that the loss of one cathode does not change the cell electric field appreciably. We present the results of the analysis of the effects of mechanical stress and chemical corrosion observed on these broken wires, and a simple empirical model that relates the number of broken wires to their exposure time to atmospheric relative humidity and to their mechanical tension. Finally, we show the study carried out on new wires to overcome the weaknesses found and the process that will be used for the construction of the new drift chamber (CDCH2). It will be built with the same modular technique as the first one; the wiring robot will be reused after improving some weak points, and new wires with a 25\% thicker diameter will be employed, which has very little effect on the resolution and efficiency of the detector. Furthermore, these wires are made with a manufacturing process different from the one used previously.
In December 2018 the Large Hadron Collider entered the Long Shutdown 2 phase, during which a maintenance program of the LHC took place.
The upgrades of the accelerator aim at increasing the instantaneous luminosity, in order to enlarge the statistics collected in the data-taking runs, up to a factor 5-7 beyond the original LHC design in the HL-LHC program.
To cope with this, the experiments must be upgraded accordingly. Concerning the muon spectrometer, the CMS collaboration has begun the installation of detectors based on the triple-GEM technology. These aim at keeping the trigger rate under control in the high-pseudorapidity region, sustaining a high level of radiation and increasing the redundancy in the muon track reconstruction.
The installation of the first CMS GEM station, GE1/1, has been completed and the final phases of its commissioning are ongoing, in preparation for Run 3. In addition, a first demonstrator of the GE2/1 station chambers was installed in CMS.
In this contribution the status of the commissioning of GE1/1 services (gas, cooling, low voltage and high voltage) will be presented, focusing in particular on the role played by HV trips in the chamber operation and on the strategies adopted to minimize their occurrence.
In October and November 2021 the GE1/1 chambers were operated for the first time in the presence of the magnetic field in CMS.
The phenomena observed suggested a dedicated test to study the behavior of GE1/1 chambers during a magnetic field ramp, using the Goliath magnet in the CERN North Area.
The aim of the test was to gain a deep understanding of the behavior of GE1/1 chambers in a magnetic field and, consequently, to develop procedures to be followed in CMS.
The results obtained will be illustrated and the phenomena observed in CMS will be discussed in light of them.
CMD-3 is a general-purpose detector at the VEPP-2000 collider whose purpose is to study the exclusive modes of $e^{+}e^{-}\longrightarrow hadrons$ in the center-of-mass energy range below 2 GeV. The CMD-3 results will provide an important input for the calculation of the hadronic contribution to the muon anomalous magnetic moment. An upgrade of the CMD-3 tracker is currently in progress. The proposed tracker is an ultra-light drift chamber equipped with cluster counting/timing techniques. The main features of this design are the high transparency, in terms of the multiple-scattering contribution to the momentum measurement of charged particles, and precise particle identification (PID). The central tracker is a down-sized version of the larger drift chamber designed for the IDEA detector at both the FCC-ee and CEPC colliders. The chamber is divided into two parts. The innermost part is a drift chamber with jet cells: open cells in which the wires are arranged axially. Outside this part, the chamber has single-wire cells with the wires arranged in an appropriate stereo-angle configuration. This external part is divided into three different cell configurations: the first (innermost) has 4 layers with 4 cells per sector, the second has 4 layers with 5 cells per sector and the last (outermost) has 8 layers with 6 cells per sector. The structure of this drift chamber is presented, with a focus on the mechanical design of the end plates and the novel tension recovery scheme, which has two main objectives: the minimization of the amount of material in front of the end-plate crystal calorimeter and the maximization of the mechanical stability.
IDEA (Innovative Detector for an Electron-positron Accelerator) is an innovative general-purpose detector concept, designed to study electron-positron collisions over the wide energy range provided by a very large circular lepton collider. The IDEA drift chamber is designed to provide efficient tracking, a high-precision momentum measurement and excellent particle identification by exploiting the cluster counting technique. To investigate the potential of the cluster counting technique on physics events, a simulation of the ionization cluster generation is needed; we therefore developed an algorithm which uses the energy deposit information provided by the Geant4 toolkit to reproduce, in a fast and convenient way, the cluster number distribution and the cluster size distribution. The results obtained confirm that the cluster counting technique allows a resolution two times better than the traditional dE/dx method to be reached. A beam test was performed during November 2021 at CERN on the H8 beam line to validate the simulation results, to establish the most efficient cluster counting algorithms, to define the limiting effects for a fully efficient cluster counting, to demonstrate the ability to count the number of electron clusters released by an ionizing track at a fixed $ \beta\gamma$ as a function of the operating parameters, and to establish the limiting conditions for an efficient cluster counting. Once a set of parameters optimizing the cluster counting efficiency has been defined, the setup will undergo a new test in a muon beam with momenta in the relativistic-rise range, in order to define the particle identification capabilities of the cluster counting approach over the full range of interest for all future lepton machines. We will present a detailed description of the simulation analysis and the beam test results.
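As a toy illustration of why cluster counting can improve on dE/dx (a stand-in model with invented parameters, not the Geant4-based algorithm described above): the number of primary clusters on a track follows Poisson statistics, whereas a truncated-mean dE/dx estimator remains degraded by the long tails of the per-cell energy deposits.

```python
# Toy comparison (not the authors' Geant4-based algorithm): relative resolution of
# cluster counting (Poisson) versus a truncated-mean dE/dx estimator (long-tailed
# per-cell charges used as a crude stand-in for Landau fluctuations).
import numpy as np

rng = np.random.default_rng(1)
n_tracks, n_cells = 20000, 100
clusters_per_cell = 15.0                     # assumed mean number of clusters per cell

# Cluster counting: the total number of clusters on a track is Poisson distributed.
n_cl = rng.poisson(clusters_per_cell * n_cells, size=n_tracks)
res_cluster = n_cl.std() / n_cl.mean()

# dE/dx: per-cell charge with a long tail, estimated with a 70% truncated mean per track.
cell_charge = rng.poisson(clusters_per_cell, (n_tracks, n_cells)) \
              * rng.lognormal(0.0, 0.5, (n_tracks, n_cells))
trunc = np.sort(cell_charge, axis=1)[:, : int(0.7 * n_cells)].mean(axis=1)
res_dedx = trunc.std() / trunc.mean()

print(f"cluster counting: {100*res_cluster:.2f}%   truncated-mean dE/dx: {100*res_dedx:.2f}%")
```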
Extreme Energy Events (EEE) detectors are designed to measure secondary cosmic-ray tracks, mainly muons, to study high-energy primary cosmic rays. The EEE 'telescope' is made of three Multigap Resistive Plate Chambers (MRPC), each with an active area of 160x82 cm2. Each detector is part of a large network of about sixty telescopes spread over the Italian territory. GPS time synchronization of the telescopes allows the detection of extensive air showers produced by high-energy primary cosmic-ray interactions in the Earth's atmosphere. Thanks to the good (or excellent) tracking capabilities (100 ps time resolution and cm-level spatial resolution), the EEE telescope can also be used as a test station for large-area detectors. The link between the EEE track and the signals from the detector under test is obtained by implementing a streaming DAQ with a common time reference between the two systems given by the GPS signal. In this contribution I will present the installation and first results of the cosmic-muon test facility with the MRPC telescopes, based on the low-cost, streaming-compatible 12-channel, 250 MHz, 14-bit digitizer (INFN WaveBoard or WB) developed by the JLAB12 Collaboration. Depending on the detector under examination, different measurements can be performed: in scintillator crystal bars, for example, the efficiency and the optical attenuation along the detector length can be easily tested. In a first test run, we characterized some PbWO4 scintillator crystals from the POKER detector. The system can be easily replicated, instrumenting any existing EEE telescope and providing a convenient cosmic-ray test facility across Italy.
RPC detectors are widely employed in LHC experiments thanks to their excellent trigger efficiency, time performance, and contained production costs. They are operated with 90-95% R-134a, 5-10% isobutane and 0.3% SF6. R-134a and SF6 are nowadays known to be greenhouse gases, with GWPs of 1430 and 22800 respectively, and are therefore subject to European regulations aiming at reducing their availability on the market. The CERN gas group has adopted several strategies to reduce greenhouse gas emissions, among which the use of alternative gases for RPC detectors was identified. Several low-GWP gases were identified as possible alternatives to R-134a and SF6. First, the detectors were tested in laboratory conditions with cosmic muons. Different gas mixtures were tested by evaluating the detector performance in terms of currents, efficiency, streamer probability, prompt charge, cluster size and time resolution. A few selected gas mixtures were then used to test the RPC performance in the presence of a muon beam and a high gamma background rate at CERN GIF++.
RPCs were tested by adding up to 30-40% of He or CO2 to the standard gas mixture, at gamma counting rates of up to 500 Hz/cm2. A few gas mixtures based on the addition of R-1234ze combined with R-134a and He or CO2 were also tested to investigate possible GWP reductions.
Several alternatives to SF6 were evaluated: C4F8O, CF3I, Novec 5110, Novec 4710 and Amolea 1224yd. For some of these gases, preliminary results showed performance similar to that of the SF6-based gas mixture.
A fluoride measurement campaign was also started to investigate the F- production of different gas mixtures at different gamma rates.
We have studied the ageing behaviour of 6~mm diameter straw tubes and found the ageing rates with the binary gas mixture Ar/CO$_2$ to justify its use in future high-energy physics experiments. Two separate experiments were performed with straw tube detector prototypes in the laboratory. In the first experiment, the performance of the straw tube detector is studied using a single straw irradiated with 40 kHz/mm X-rays for more than 800 hrs at a stretch (charge accumulation of 0.6 C/cm). In the second experiment, gain and energy-resolution measurements with two straws (one under test and one reference straw) connected in parallel to the same gas line are carried out for a total period of 1200 hrs under constant irradiation. The test straw is operated at high gain (~10$^4$) and high X-ray flux (~35 kHz/mm) whereas the reference straw is operated at low gain (~6 $\times$ 10$^3$) and low X-ray flux (~0.9 kHz/mm), purposely to observe the effect of high radiation doses. The gain of the aged straw is found to depend on the gas flow rate. This is called `transient ageing', which is typically observed in straw tubes. We have also estimated the time required for the gain of an aged straw tube detector to recover upon increasing the gas flow rate. Our observations of the ageing behaviour of straw tubes have also been compared to past results. The details of the measurement process and the experimental results will be presented.
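For reference, ageing results of this kind are commonly summarized as a normalized gain loss per unit of accumulated charge per unit wire length; a minimal sketch of such an extraction, with invented numbers rather than the measured data, is:

```python
# Illustrative sketch (invented numbers): the ageing rate is usually quoted as the
# normalized gain loss per unit of accumulated charge per cm of wire,
# R = -(1/G0) * dG/dQ, in % per C/cm.
import numpy as np

accumulated_charge = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6])              # C/cm
normalized_gain    = np.array([1.00, 0.99, 0.985, 0.975, 0.97, 0.962, 0.955])   # G/G0

slope, _ = np.polyfit(accumulated_charge, normalized_gain, 1)  # d(G/G0)/dQ
print(f"ageing rate R = {-100*slope:.1f} % per C/cm")
```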
Resistive Plate Chamber (RPC) detectors are currently used in High Energy Physics (HEP) experiments for triggering and tracking purposes thanks to their low fabrication cost, high efficiency (> 90%) and good time resolution (∼ 1-2 ns). The RPC is also a potential candidate for high-resolution medical imaging.
Keeping in mind the requirements of detectors with high-rate handling capability, cost-effectiveness, and large-area coverage for future HEP experiments, commercially available bakelite plates with moderate bulk resistivity are used to build RPC prototypes.
An RPC prototype was built using an indigenous bakelite sheet, and the inner sides of the electrode plates were coated with linseed oil using a new technique. The newly built detector was tested with 100% Tetrafluoroethane (C2H2F4), and efficiency plateaus of ∼95% from 9.4 kV onwards and ∼85% from 10.1 kV onwards are obtained for the -15 mV and -20 mV discriminator threshold settings, respectively.
The chamber has recently been tested with the conventional gas mixture of 90% Tetrafluoroethane (C2H2F4) and 10% Isobutane (iC4H10). The HV conditioning of the chamber over time is also studied with the conventional gas mixture. The new results will be presented.
The characterization of a Single Mask triple Gas Electron Multiplier (GEM) detector of dimensions 10$\times$10 cm$^{2}$ is carried out using an Ar/CO$_{2}$ gas mixture in a 70/30 volume ratio in continuous flow mode. A strong Fe$^{55}$ X-ray source is used for this work. The gain and energy resolution are studied from the 5.9 keV X-ray energy spectra. Conventional NIM electronics is used for detector biasing and data acquisition. The characterization includes the study of the effect of environmental parameters (temperature and pressure) on the gain and energy resolution of the prototype. After correcting for the effects of temperature and pressure variations, the effect of relative humidity on the gain and energy resolution of the chamber is investigated. The details of the experimental setup, measurement methods and results will be presented.
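As an illustration of the temperature and pressure correction mentioned above (with invented monitoring values, not the authors' data), the effective gain is commonly normalized by fitting an approximately exponential dependence on T/p and dividing it out:

```python
# Illustrative sketch (invented data): the effective gas gain of a GEM varies roughly
# exponentially with T/p, G = A * exp(B * T/p); fitting A and B on monitoring data
# allows the measured gain to be normalized to reference conditions.
import numpy as np

T_over_p = np.array([0.290, 0.292, 0.295, 0.298, 0.300])    # K/mbar, invented points
gain     = np.array([8200., 8700., 9500., 10400., 11000.])  # measured effective gain, invented

B, logA = np.polyfit(T_over_p, np.log(gain), 1)              # straight-line fit in log space
ref = 0.295                                                  # chosen reference T/p
gain_corrected = gain * np.exp(B * (ref - T_over_p))         # normalize to the reference
print(f"B = {B:.1f} (K/mbar)^-1, A = {np.exp(logA):.2e}, "
      f"corrected-gain spread = {gain_corrected.std()/gain_corrected.mean():.3%}")
```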
Gas Electron Multipliers (GEM) are widely used in various high-energy particle physics experiments world-wide. A thorough understanding of the working of GEM detectors is, thus, a matter of priority. Space-charge accumulation within the GEM holes is one of the vital phenomena which affect many of the key working parameters of such detectors, through its direct influence on the resulting electric field in and around the holes. This accumulation is found to be significantly affected by the initial primary charge configuration and the operating parameters of the detector, since they determine the charge sharing and the subsequent evolution of the detector response. A recent numerical study on the possible effect of charge sharing on space-charge accumulation in GEM holes has motivated us to investigate the phenomenon in greater detail. It has been observed that charge sharing among a larger number of holes allows a higher gain, since the space-charge accumulation effect also gets shared among these holes. In this work, we have studied the effects of space charge on different parameters of single, double and triple GEM detectors using numerical simulation. A hybrid approach has been adopted, as given below:
1) Geant4, Garfield/Garfield++, neBEM, HEED and Magboltz have been used to identify the primary cluster, the transport properties and the resulting charge sharing.
2) 2D-axisymmetric and 3D hydrodynamic models based on COMSOL Multiphysics have been developed to simulate the temporal evolution of the primary cluster, to model space-charge effects and to estimate the detector response. Finally, an attempt has been made to optimize the 3D hydrodynamic model to make it computationally economical.
The Extreme Energy Events (EEE) Project is based on the deployment of cosmic-ray telescopes in Italian high schools with the active contribution of students and teachers in the construction and operation of the detectors.
The telescope network counts ~60 tracking detectors, each made of three Multigap Resistive Plate Chambers (MRPC) with an active surface of 158x82 cm$^2$. The induced signal is read out by 24 strips. Signal time-of-arrival measurements allow for a reconstruction of the particle position along the strip. The MRPCs have so far been fluxed with a gas mixture composed of 98% C$_2$H$_2$F$_4$ and 2% SF$_6$, both of them greenhouse gases.
The search for new eco-friendly gas mixtures is a challenge that many experiments are facing, an effort even more important for the EEE project given its important role in outreach and student education. Therefore, the collaboration has decided to phase out the gas mixture in use and to start an R&D program on environmentally sustainable alternative mixtures. Apart from maintaining the telescope performance, further requirements for the new mixtures are related to cost, safety and compatibility with the current gas distribution system. It should also be possible to operate the detector within the bias voltage that the electronics is currently able to provide.
New binary gas mixtures, compatible with the project requirements, have been identified and are currently under investigation, with a few telescopes already fully running with the new mixtures. The MRPC performance, in terms of resolution (time and space), efficiency, stability and ageing, is currently being evaluated and optimized. In this contribution the EEE project and the performance of the telescopes will be described. The test strategy and preliminary results with the new gas mixtures will be reported.
The MEG experiment has set the latest limit of 4.2 x 10^(-13) (90% C.L.) on the branching ratio of the charged lepton flavour violating decay µ+ -> e+ γ, making use of the most intense continuous surface muon beam in the world at the Paul Scherrer Institut (PSI), Villigen, Switzerland. An extensive upgrade of the experiment, MEG II, has been carried out and successfully concluded with the beginning of data taking just a few months ago. The aim of the MEG II experiment is to search for the µ+ -> e+ γ decay with a sensitivity of 6 x 10^(-14) (90% C.L.) with a few years of data taking.
In order to achieve such a challenging scientific goal, all MEG II sub-detectors have been pushed to the edge of their performance. Furthermore, measuring the kinematic variables (energy, timing and relative opening angle) of the positron and the gamma resulting from the muon decay with high resolution requires careful calibration and monitoring of the experimental apparatus.
A new calibration method for the MEG II spectrometer has been studied to fully exploit its unique features. The range of the measured momentum can be selected by tuning the gradient magnetic field in which the detector is placed. It will be shown how the basic parameters of the detector (active cells, working channels, gain alignment) and the major kinematic variables (momentum, direction and timing) can be extracted using straight and curved charged-particle tracks, respectively.
The future Electron-Ion Collider (EIC) at Brookhaven National Laboratory (BNL) will collide polarized electrons with polarized protons and ions. This unique environment imposes stringent requirements on the tracking system needed for the measurement of the scattered electron and of the charged particles produced in the collisions. A Totally Hermetic Electron-Nucleus Apparatus (ATHENA) has been proposed as a potential day-one EIC detector. The ATHENA tracking system implements a hybrid of silicon and Micro-Pattern Gaseous Detector (MPGD) technologies. The MPGDs positioned at larger radii complement the silicon-based tracking and vertexing detectors to provide an optimized and cost-effective tracking system. This presentation will focus on the MPGD technology choices for the ATHENA detector, their performance, and an overview of the ongoing R&D.
Coarsely segmented (pitch > 1 mm) zigzag-shaped anode strip arrays have been shown to have considerable advantages over similarly pitched straight strip arrays for standard planar MPGDs, including GEM, Micromegas, and micro-RWELL detectors. Once the geometric parameters of the zigzag are precisely tuned for a specific detector application, the spatial resolution remains high and approximately flat for very large pitches, up to 3.3 mm or more. Additionally, the response of the optimized zigzags along the measured coordinate and in the orthogonal direction is highly uniform without the need for differential non-linearity corrections. We extend the enhanced charge-sharing characteristic of the zigzags to the case of a 2D readout by employing anode structures that are interleaved along two distinct directions. This allows for the possibility of choosing arbitrary coordinate axes suitable for particular detector applications. As in the 1D case, the segmentation of the 2D anodes can also be large, to minimize the channel count and save considerably on the readout electronics. Thus, to achieve the desired position resolution and uniformity of response, we rigorously optimize the basic geometric parameters of various 2D interleaved anode shapes with the goal of minimizing the channel count while maintaining the detector performance. A variety of 2D interleaved readout patterns coupled to GEM, micro-RWELL, and Micromegas detectors were evaluated at a beam test and in the lab using a highly collimated X-ray source for this purpose. In addition to providing a survey of the geometric parameters of the anode structure, various 2D readouts were studied with angles other than 90 deg. between the coordinate axes. Results will be presented at the conference that demonstrate the viability of such 2D interleaved readouts, with an emphasis on the spatial resolution and the uniformity of response along each coordinate.
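Independently of the detailed zigzag geometry, the hit position on such coarsely pitched readouts is typically reconstructed from a charge-weighted centroid of the strips sharing the induced charge; the following one-dimensional sketch (with an assumed pitch and charge spread, not the authors' optimized structures) illustrates the idea:

```python
# Minimal illustration (not the authors' optimized zigzag geometry): position
# reconstruction on a coarsely pitched strip readout via a charge-weighted centroid.
import numpy as np

pitch_mm = 3.0
strip_centres = np.arange(-5, 6) * pitch_mm           # 11 strips around the hit (assumed)

def centroid(charges, centres=strip_centres):
    """Charge-weighted centroid of the strips above a 5% threshold."""
    mask = charges > 0.05 * charges.max()
    return np.average(centres[mask], weights=charges[mask])

true_x = 0.7                                          # mm, invented hit position
sigma_share = 1.8                                     # mm, assumed effective charge spread
charges = np.exp(-0.5 * ((strip_centres - true_x) / sigma_share) ** 2)
print(f"reconstructed x = {centroid(charges):.2f} mm (true {true_x} mm)")
```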
A muon collider has great potential for high-energy physics. It combines the high precision of electron-positron machines, with a low level of beamstrahlung and synchrotron radiation, and the high centre-of-mass energy and luminosity of hadron colliders. The main challenges, which impact both the machine and detector design, arise from the short muon lifetime and the harsh Beam-Induced Background (BIB). The latter is due to electrons and positrons from muon decays, and to the photons they radiate, which interact with the machine generating secondary and tertiary particles that eventually reach the detector.
A full simulation is crucial to understand the feasibility of the experiment implementation. Focusing in particular on the muon system, the geometry inherited from CLIC foresees layers of track sensitive chambers interleaved with iron yoke plates. Currently, the CLIC adopted technology is glass Resistive Plate Chambers (GRPC) both for the barrel and endcap regions. However, a preliminary simulation of sensitivity and hit rate in a muon collider reveals that GRPC are already at the limit of their rate capability, and, therefore, alternative MicroPattern Gaseous Detector (MPGD) solutions are under investigation to try to match the required performance.
In parallel, studies of muon reconstruction are ongoing. The low BIB occupancy in the muon system with respect to the other detectors, tracker and electromagnetic calorimeter in particular, suggests using standalone muon objects to seed the global muon track reconstruction.
Results of the muon reconstruction efficiency, BIB sensitivity and background mitigation will be presented for single-muon and multi-muon final-state processes at a centre-of-mass energy of 1.5 TeV. In addition, new technologies based on a Micromegas detector coupled to a Cherenkov radiator and equipped with a photocathode, such as PICOSEC, will also be discussed.
The small (15 mm)-diameter Muon Drift Tube (sMDT) detector technology has been chosen and constructed in order to cope with the increased background counting rates expected at the High-Luminosity Large Hadron Collider (HL-LHC) and the Future Circular Collider (FCC-hh). Unfortunately, the rate capability of the sMDT drift tubes, in terms of muon detection efficiency and spatial resolution, is strongly limited by the performance of the readout electronics. For this reason, to support the present and future sMDT-based high-resolution trigger system with the continuous readout of the muon chambers and the increased overall trigger rates, and to further enhance the rate capability of those detectors, a new Amplifier-Shaper Discriminator (ASD) readout chip with a faster peaking time compared to the old chip has been developed, which reduces the discriminator threshold-crossing time jitter and thus improves the time and spatial resolution with and without $\gamma$-background radiation. In this contribution, we show the results of extensive studies of sMDT detectors with the old and new readout chips, as well as with discrete readout circuits with baseline-restoration functionality, performed at varying background irradiation from the $^{137}$Cs $\gamma$-source at the CERN Gamma Irradiation Facility (GIF++) with a high-energy muon beam ($\sim$100 GeV) from the SPS. In addition, a method compensating the gas gain drop due to space charge at high $\gamma$-background hit flux by adjusting the sMDT operating voltage will be presented. Simulations show that active baseline-restoration circuits added to the front-end electronics chips in order to suppress signal pile-up effects at high counting rates lead to a further significant improvement of the efficiency and resolution of the sMDT detector.
The Surface Resistive Plate Counter (sRPC) is a novel RPC based on surface resistivity electrodes, a completely different concept with respect to traditional RPCs that use electrodes characterized by volume resistivity (phenolic-resin or float-glass).
The electrodes of the sRPC exploit the well-established industrial Diamond-Like-Carbon (DLC) sputtering technology on thin (50 µm) polyimide foils, already introduced in the manufacturing of resistive MPGDs such as the micro-RWELL. The DLC foil is then glued on a 2 mm thick glass, characterized by excellent planarity. With this scalable and cost-effective DLC technology it should be possible to realize large-area (up to 2x0.5 m2) electrodes with a resistivity spanning several orders of magnitude (0.01÷10 GOhm/square).
Different sRPC layouts have been tested: symmetric, with both electrodes made of DLC foils, and hybrid, with one electrode made of DLC and the other made of float glass. With these layouts we measured an efficiency of 95-97% and a time resolution of 1 ns, performance that is quite standard for 2 mm gas-gap RPCs.
In addition, exploiting the concept of the high density current evacuation scheme, already introduced for the micro-RWELL, we realized the first prototypes of high-rate electrodes by screen printing a conductive grid onto the DLC film.
With this high-rate layout, with a 7 GOhm/square DLC resistivity and a 10 mm grounding pitch, we measured a rate capability of about 1 kHz/cm2 with X-rays, corresponding to about 3 kHz/cm2 with m.i.p.s. By lowering the DLC resistivity and optimizing the current evacuation scheme, rate capabilities largely exceeding the 10 kHz/cm2 barrier appear to be easily achievable.
The sRPC, based on innovative technologies, opens the way towards cost-effective, high-performance muon devices for applications in large HEP experiments at the future generation of high-luminosity colliders.
An R&D project has recently been started to consolidate the technology of resistive Micromegas for operation well beyond the current conditions at HEP experiments, aiming at stable, reliable, and high-gain operation up to particle fluxes of the order of 10 MHz/cm2, over large surfaces.
To cope with these challenges, readout copper pads of a few mm2 size have been proposed to reduce the occupancy of the readout elements, calling for innovative solutions for the spark-protection resistive scheme. It is known that single-amplification-stage Micro-Pattern Gaseous Detectors suffer from sparks when operated in harsh environments. Resistive anodes drastically mitigate the spark intensities but, on the other hand, they reduce the rate capability when high currents flow into the detectors, generating a drop in the amplification voltage. Ad-hoc solutions must be adopted.
Two resistive schemes have been studied. The first one is based on a pad-patterned resistive double layer, superimposed on the readout pads, with an embedded resistor connecting the resistive pads. In this scheme, each pad is independent of the others. The second scheme exploits the recently developed Diamond-Like Carbon (DLC) resistive foils. A double layer of DLC is superimposed on the readout pads, with a grid of interconnecting vias to ground for a fast evacuation of the accumulated charge. In this case the pads are not completely independent, since the charge can spread over several pads. For each of these resistive schemes, detectors with different configurations and construction techniques have been built.
All detectors have been thoroughly tested and fully characterized with radioactive sources, with X-rays and in a test beam carried out at CERN in 2021. The performance and achievements in terms of gain, rate capability, and energy, space and time resolution will be reported, along with a detailed comparison among the different schemes and configurations.
A new experimental system was recently developed by our group to measure the mobility of both positive and negative ions: the Dual-Polarity Ion Drift Chamber (DP-IDC). This new system is intended to foster the understanding of the transport properties of ions in gases, as these are especially relevant for the performance of gaseous detectors, namely large-volume ones, and in particular for the development and optimization of Negative Ion Time Projection Chambers (NITPCs) for rare-event searches such as the CYGNUS, XENON or NEXT experiments. The optimization and fine tuning of gas mixtures for such detectors gains special relevance, as the drift of negative ions in these detectors can significantly affect the signal formation, the tracking capability and the spatial resolution, eventually limiting their rate capability. In addition, a comprehensive understanding of the different ion species expected in particular gas mixtures can also be of extreme importance, as it may allow the identification of potential minority charge carriers (negative ions), which are the basis for the development of additional internal trigger methods in NITPCs while enabling a further reduction of the background in such detectors.
In this work, we present a description of the experimental setup and technique used, and the initial studies carried out in mixtures of interest for NITPCs, namely Xe-SF6 mixtures, which have attracted attention as a possible alternative in searches for neutrinoless double-beta decay.
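For orientation, the quantity extracted in such measurements is the reduced mobility, obtained from the drift time over a known distance in a uniform field and scaled to standard conditions; a minimal sketch with invented numbers (not the DP-IDC data) is:

```python
# Illustrative sketch (invented numbers): reduced ion mobility from a measured drift
# time over a known drift distance in a uniform field, K = v_d / E, scaled to
# standard conditions K0 = K * (p / 760 Torr) * (273.15 K / T).
drift_length_cm  = 10.0        # assumed drift gap
drift_time_ms    = 100.0       # invented arrival time of the ion peak
E_V_per_cm       = 100.0       # assumed drift field
p_torr, T_kelvin = 760.0, 293.0

v_d = drift_length_cm / (drift_time_ms * 1e-3)   # cm/s
K   = v_d / E_V_per_cm                           # cm^2 V^-1 s^-1
K0  = K * (p_torr / 760.0) * (273.15 / T_kelvin)
print(f"K = {K:.2f} cm^2/(V s), K0 = {K0:.2f} cm^2/(V s)")
```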
We will present the ongoing work on the development and testing of a prototype Micromegas (MM) detector and its front-end electronics, intended as a candidate for the substitution of some of the Multi-Wire Proportional Chambers (MWPC) in the future AMBER (NA66) experiment at CERN.
Presently the MWPCs are used in the COMPASS (NA58) experiment at CERN as one of the main trackers, and it is planned that they will still be operated in the recently approved AMBER program. Unfortunately, due to the ageing of some of the structural elements of the MWPC chambers, only a part of them can be operated for the whole expected life span of the AMBER program. Our main candidate technology to substitute the most aged MWPCs is the now well-established Micro-Pattern Gaseous Detector (MPGD) technology. The main challenges we need to address are the operation in a fixed-target environment, which leads to particle rates varying from 120 $kHz/mm^2$ at the centre to a few $kHz/cm^2$ at the periphery of the detector, the size of the active area, which should be of the order of 1x1.2 $m^2$, and the necessity to maintain a reasonable material budget. To these requirements on the detector itself, we need to add the necessity to operate the detector in the new AMBER triggerless DAQ, which requires a specific ASIC. We are now qualifying the TIGER (Turin Integrated Gem Electronics for Readout) custom front-end ASIC as a possible candidate for the readout. Moreover, during 2022 we will submit a new custom ASIC that is being designed at INFN Torino specifically for application with MM detectors in AMBER conditions. The new custom-designed 64-channel ASIC will provide time and charge measurements, featuring a fully digital output, and will be operated in triggerless mode.
Within the family of Micro Strip Gas Detectors (MSGD), the intrinsic characteristics of the bulk Micromegas (MM) device represent the most promising features for the construction of a new instrument to be operated as a TPC gas chamber in a low-pressure regime. In this study, we present the main properties of a low-pressure bulk MM detector in which the amplification gap was slightly increased to improve the gas gain. Two configurations have been studied in depth: the first one with a gap of 128 μm and a second one with a 192 μm gap, both filled and operated with an Ar-CO2 gas mixture at pressures below 100 mbar. The dependence of the gain and the energy resolution on the amplification field, gas pressure and drift field has been evaluated. The reliability of the measured performance, combined with the simple and robust structure of the detector even with an increased amplification gap, makes it an attractive choice for applications where the track length of low-energy particles is measured using a low-pressure filling gas.
Triple GEM technology has been selected to extend the acceptance of the CMS muon spectrometer to the region 2.4< |η| <2.8, the so called ME0 project. The ME0 stations will be formed by stacks (six-layer stations) of triple-GEM chambers, which must operate in a harsh environment with expected background particle fluxes ranging between 3 and 150 kHz/cm2 on the chamber surface. Both the maximum background rate and the large range in particle rate set a new challenge for particle detector technologies. The rate capability of triple-GEM detectors is limited by voltage drops on the chamber electrodes due to avalanche induced currents flowing through the resistive protection circuits (discharge quenchers).
Studies with large-area triple-GEM detectors at moderate fluxes show drops of up to 40% of the nominal detector gas gain. The traditional GEM foil segmentation does not allow for a feasible gain compensation by acting on the HV settings. To overcome this strong limitation and to cope with the large variation in background flux, a novel GEM foil design with electrode segmentation in the radial direction, instead of the "traditional" transverse segmentation, has been introduced.
The advantages of the new design include a uniform hit rate across different sectors, the minimization of the gain loss, limiting the need for voltage compensation, and the independence of the detector gain from the shape of the background flux.
Rate-capability studies with an ME0 chamber prototype, performed using a high-intensity 22 keV X-ray generator, will be presented. We demonstrate the possibility of restoring the original gain by compensating the voltages applied to each GEM electrode; this makes the novel GEM foil layout suitable for the CMS ME0 application and for all experiments which expose GEM detectors to a high background rate and a large rate variation over the detector surface. Additional results from a beam test with pion and muon beams carried out in October 2021 will also be presented.
Visual investigation of a Single Mask (SM) Gas Electron Multiplier (GEM) foil is performed manually using an optical microscope with magnification factors of 20x and 40x. The scanned SM GEM foil had been used as the 3rd GEM foil of a triple-GEM chamber prototype employed for long-term studies. During the long-term test, it was observed that the detector suddenly stopped giving signals. To understand the problem, the triple-GEM chamber prototype was disassembled and the measured resistance of the 3rd GEM foil was found to be ∼ 40 kΩ, which indicated that short paths had been created between the top and bottom electrodes of the GEM foil. The short-circuited path might be due to the accumulation of impurities inside the GEM holes or to the degradation of the foil itself. The GEM foil is scanned and the different damaged parts of the foil are identified. The details of the techniques and results will be presented.
The Large Hadron Collider (LHC) will be upgraded to increase its luminosity by a factor of 7.5 relative to the design luminosity (referred to as the HL-LHC). The ATLAS detector will undergo a major upgrade to fully exploit the physics opportunities provided by the HL-LHC. In order to improve the Level-1 muon trigger efficiency at the HL-LHC, the precision Monitored Drift Tube (MDT) chambers will be upgraded with smaller-diameter MDT (sMDT) chambers, designed by the Max Planck Institute (MPI), in the barrel inner station of the Muon Spectrometer, to make space for additional Resistive Plate Chamber (RPC) triggering layers. This talk will report on the design and construction of the infrastructure for the sMDT tube and chamber production as well as on the procedures of the detector construction and the QA/QC tests. Data on mechanical precision measurements and on sMDT efficiencies and tracking resolutions measured with cosmic-ray muons will be presented, based on the first 50 chambers produced at MPI (Germany) and Michigan (USA) in the past year.
Simulation of High Energy Physics experiments is indispensable for both detector and physics studies. Detailed Monte-Carlo simulation algorithms are often limited in the number of samples that can be produced due to the computational complexity of such methods, and therefore faster approaches are desired. Generative Adversarial Networks (GANs) are a deep learning framework that is well suited for aggregating a number of detailed simulation steps into a surrogate probability density estimator readily available for fast sampling. In this work, we demonstrate the power of the GAN-based fast simulation model on the use case of simulating the response of the Time Projection Chamber in the MPD experiment at the NICA accelerator complex. We show that our model can generate high-fidelity TPC responses throughout the full input parameter space, while accelerating the TPC simulation by at least an order of magnitude. We describe different representation approaches for this problem and discuss tricks and pitfalls of using GANs for the fast simulation of physics detectors. We also outline the roadmap for the deployment of our method into the software stack of the experiment.
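As a schematic of the adversarial setup (a generic toy, not the model used for the MPD TPC), a conditional generator maps noise and track parameters to a response vector while a discriminator is trained to separate generated samples from detailed-simulation samples; all dimensions and the stand-in "detailed simulation" below are assumptions for illustration.

```python
# Minimal conditional-GAN skeleton on toy 1D "detector response" data (illustrative only).
import torch
import torch.nn as nn

NOISE, COND, RESP = 16, 3, 64          # assumed dimensions: latent, track parameters, response

G = nn.Sequential(nn.Linear(NOISE + COND, 128), nn.ReLU(), nn.Linear(128, RESP))
D = nn.Sequential(nn.Linear(RESP + COND, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def detailed_sim(cond):                 # stand-in for the slow detailed Monte-Carlo
    return torch.sin(torch.linspace(0, 6.28, RESP)) * cond[:, :1] + 0.05 * torch.randn(cond.size(0), RESP)

for step in range(200):
    cond = torch.rand(64, COND)
    real = detailed_sim(cond)
    fake = G(torch.cat([torch.randn(64, NOISE), cond], dim=1))

    # Discriminator update: real samples -> 1, generated samples -> 0.
    d_loss = bce(D(torch.cat([real, cond], 1)), torch.ones(64, 1)) + \
             bce(D(torch.cat([fake.detach(), cond], 1)), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator.
    g_loss = bce(D(torch.cat([fake, cond], 1)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```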
The MEG experiment at the Paul Scherrer Institut (PSI) represents the state of the art in the search for the charged Lepton Flavour Violating $\mu^+ \rightarrow e^+ \gamma$ decay, setting the most stringent upper limit on the branching ratio, BR$(\mu^+ \rightarrow e^+ \gamma) \leq 4.2 \times 10^{-13}$ ($90\%$ C.L.). An upgrade of MEG, MEG II, was designed and recently started physics data taking, aiming at a sensitivity level of $6 \times 10^{-14}$. In order to reconstruct the positron momentum vector, a Cylindrical Drift CHamber (CDCH) was built, featuring angular and momentum resolutions at the 6.5~mrad and 100~keV/c level. The MEG II drift chamber presents a series of unprecedented peculiarities. CDCH is a 2-meter long, 60 cm diameter, low-mass, single-volume detector with high granularity: 9 layers of 192 drift cells, a few mm wide, defined by 12000 wires in a stereo configuration for longitudinal hit localization. CDCH is the first drift chamber ever designed and built in a modular way. The filling gas mixture is Helium:Isobutane 90:10. The total radiation length is $1.5 \times 10^{-3}$ X$_0$, thus minimizing the Multiple Coulomb Scattering and allowing for a single-hit resolution $< 120$ $\mu$m. After the assembly phase at INFN Pisa, CDCH was transported to PSI and integrated into the MEG II experimental apparatus in 2018. The commissioning phase lasted for the following three years, with several hardware improvements, until operational stability was reached in 2020. The analysis software is continuously developing and the tuning of the reconstruction algorithms is one of the main activities. The latest updates on the single-hit and positron momentum vector resolutions and on the tracking efficiency will be presented.
Fast neutron spectroscopic measurements are an invaluable tool for many scientific and industrial applications, in particular for Dark Matter (DM) searches. In underground DM experiments, the neutron-induced background produced by cosmic-ray muons and by cavern radioactivity can mimic the expected DM signal. However, detection methods are complex and measurements remain elusive.
The widely used $^3$He based detectors are expensive, while the low atomic mass requires large target masses, prohibitive for underground laboratories.
A safe, inexpensive, effective and reliable alternative is the N$_2$-filled Spherical Proportional Counter (SPC). The neutron energy is estimated by measuring the products of the $^{14}$N(n,$\alpha$)$^{11}$B and $^{14}$N(n,p)$^{14}$C reactions, which have cross sections comparable to that of the $^3$He(n,p)$^3$H reaction. Furthermore, the use of a light element such as N$_2$ keeps the γ-ray efficiency low and enhances the signal-to-background ratio in mixed radiation environments. Earlier partial proofs of principle of this idea suffered from issues such as the wall effect, electron attachment and low charge-collection efficiency.
In this work, we tackle these challenges by incorporating the latest SPC instrumentation developments, such as resistive multi-anode sensors for high-gain operation with high charge-collection efficiency and gas purifiers that reduce gas contaminants to negligible levels. This allows operation with increased target masses, reducing the wall effect and increasing the sensitivity.
Two 30 cm diameter detectors are used at the University of Birmingham (UoB) and at the Boulby underground laboratory, operating above atmospheric pressure. We demonstrate spectroscopic measurements of fast and thermalised neutrons from an Am-Be source and from the MC40 cyclotron facility at UoB. Additionally, the response of the detector to neutrons is simulated using a framework developed at UoB, based on GEANT4 and Garfield++. The simulation provides the expected efficiency, the pulse-shape characteristics and the means to discriminate the events according to their interaction, in good agreement with the measurements.
The micro-RWELL is a single amplification stage resistive MPGD. The device is realized with a copper-clad polyimide foil, micro-patterned with a well matrix and coupled to a readout PCB through a Diamond-Like-Carbon (DLC) resistive film (10–100 MΩ/square).
The detector is proposed for several applications in HEP that require fast and efficient triggering in harsh environment (LHCb muon-upgrade), low mass fine tracking (FCC-ee, CepC, SCTF) or high granularity imaging for hadron calorimeter applications (Muon collider).
For the Phase-2 upgrade of the LHCb experiment, proposed for LHC Run 5, the excellent performance of the current muon detector will need to be maintained at a pile-up level 40 times that experienced during Run 2.
Requirements are especially challenging for the innermost regions of the muon stations, where detectors with a rate capability of a few MHz/cm$^2$ and able to withstand an integrated charge of up to ~10 C/cm$^2$ are needed.
In this framework, an intense optimization programme for the micro-RWELL has been launched over the last year, together with a technology transfer to industry operating in the PCB field.
In order to fulfill the requirements, a new layout of the detector with a very dense current evacuation grid of the DLC has been designed.
The detector, co-produced by the CERN-EP-DT-MPT Workshop and the ELTOS company, has been characterized in terms of rate capability exploiting a high-intensity 5.9 keV X-ray gun with a spot size (10–50 mm diameter) larger than the DLC grounding pitch. A rate capability exceeding 10 MHz/cm$^2$ has been achieved, in agreement with previous results obtained with m.i.p.s at PSI.
A long-term stability test is in progress: a charge of about 100 mC/cm$^2$ has been integrated over a period of about 80 days. The test will continue with the goal of integrating about 1 C/cm$^2$ in one year, while a slice test of the detector is under preparation.
The Compressed Baryonic Matter (CBM) fixed-target experiment at the future SIS100 accelerator of the FAIR facility in Darmstadt is dedicated to studying the properties and dynamics of highly compressed baryonic matter, looking for rare probes accessible at unprecedentedly high interaction rates. At low polar angles, therefore, the CBM detectors will be exposed to challenging counting rates and track densities.
Two Multi-Strip Multi-Gap Resistive Plate Counters (MSMGRPCs), designed with high granularity for the inner zone of the CBM-TOF sub-detector, were assembled using low-resistivity glass. The prototypes were successfully tested in the laboratory with cosmic rays, proving a very good efficiency (97%) and time resolution (60 ps). In-beam tests were performed in 2021 in the mCBM experimental setup installed at the SIS18/GSI Darmstadt facility (FAIR Phase-0). At low counting rate the efficiency plateau was confirmed, while a very good 40 ps time resolution was obtained. A scan of the beam intensity was converted into the particle flux incident on the chambers. At the highest beam intensity a counting rate of 25 kHz/cm$^{2}$, the landmark value for CBM-TOF, was reached over the whole active area of the chambers, while the measured time resolution and efficiency remained very good.
In parallel, detailed ageing tests of an MSMGRPC foreseen for the inner zone of the CBM-TOF sub-detector were performed at the IRASM multipurpose irradiation facility of IFIN-HH/Bucharest, based on a high-intensity $^{60}$Co source.
To mitigate the observed ageing effects due to gas pollution, we proposed a new detector architecture which ensures a directed gas flow through the gas gaps. The designed prototype was tested in-beam in July 2021 in the mCBM setup. The obtained performance, similar to the results for the prototype with gas exchange via diffusion, will be reported.
In order to accurately establish leptonic CP violation, the T2K collaboration has planned to upgrade both the neutrino beam line, by doubling its intensity, and the ND280 near detector, to collect neutrino interactions over the full phase-space acceptance. The innovative concept of this neutrino detection system consists in combining a fine-grained fully active target (Super-Fine-Grained Detector, SFGD) with two large-volume, rectangular Time Projection Chambers (High-Angle TPCs, HATPCs) and six TOF planes. The sub-detectors are being assembled and commissioned at CERN and J-PARC and will be installed within ND280 by March 2023.
This talk will focus on the HATPCs which will be used for 3D track reconstruction, momentum measurement and identification of final state particles from neutrino interactions in the SFGD.
The HATPCs operate with the “T2K gas” mixture Ar:CF4:isoC4H10 (95%:3%:2%) at atmospheric pressure, in a thin-walled Field Cage (3 cm thickness, 4% of a radiation length, 2×1.8×0.8 m$^3$ volume) with a central cathode (-30 kV) and fully instrumented anodes at the opposite end-plates.
Each end-plate is instrumented with 8 “Encapsulated Resistive Anode bulk Micromegas” (ERAM) sensors covering the full transverse surface (1.8×0.8 m$^2$). The ERAMs provide the primary charge amplification and use a Diamond-Like-Carbon (DLC) resistive anode to "spread" the charge over several pads, with several advantages including enhanced spatial resolution.
In this talk I will report about the construction of the new HATPCs, focusing on the innovative developments and discussing our results concerning the following:
1) Mechanical and electrical characterization of the Field Cages, including assessment of mechanical properties, E-field uniformity, HV insulation limits and studies about inner surfaces characteristics;
2) Performance characterization of the ERAM detectors including studies about electrical response, X-ray Test-Bench results for series production validation, Test-Beam results including assessment of track reconstruction in magnetic fields;
3) Commissioning of the first TPC module at CERN, exploiting cosmic rays and a test beam (April 2022).
The Schwarzschild-Couder Telescope (SCT) is a Medium-Sized Telescope proposed for the Cherenkov Telescope Array (CTA). The first prototype (named pSCT) has been constructed and is being commissioned at the Fred Lawrence Whipple Observatory (FLWO) in Arizona, USA. The SCT is characterized by a dual-mirror optical design that removes comatic aberrations across its field of view. The pSCT camera is currently only partially equipped with Silicon Photomultiplier (SiPM) matrices produced by Fondazione Bruno Kessler (FBK) and is now in the upgrade phase. A new design of the front-end electronics (FEE) based on the TARGET ASICs will be installed to obtain an improvement especially in the noise performance. The new FEE design will also include a 16-channel integrated pre-amplifier, called SMART, developed and tested by INFN to match the signal produced by the FBK SiPMs.
The results of the performance of the SMART ASIC coupled to the FBK SiPMs and to the new FEE modules will be shown in terms of gain and noise.
The CMS Phase-2 upgrade for the HL-LHC aims at preserving and expanding the current physics capability of the experiment under extreme pileup conditions. A new tracking system incorporates a track finder processor, providing tracks to the Level-1 (L1) trigger. A new high-granularity calorimeter provides fine-grained energy deposition information in the endcap region. New front-end and back-end electronics feed the L1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded L1 will be based primarily on the Xilinx Ultrascale Plus series of FPGAs, capable of sophisticated feature searches with resolution often similar to the offline reconstruction. The L1 Data Scouting system (L1DS) will capture L1 intermediate data produced by the trigger processors at the beam-crossing rate of 40 MHz, and carry out online analyses based on these limited-resolution data. The L1DS will provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements, and, in some cases, calibrations. It also has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the L1 trigger accept budget or with requirements that are orthogonal to “mainstream” physics. The requirements and architecture of the L1DS system are presented, as well as some of the potential physics opportunities under study. The first results from the assembly and commissioning of a demonstrator currently being installed for LHC Run-3 are also presented. The demonstrator collects data from the Global Muon Trigger, the Layer-2 Calorimeter Trigger, the Barrel Muon Track Finder, and the Global Trigger systems of the current CMS L1. This demonstrator, as a data acquisition system operating at the LHC bunch-crossing rate, faces many of the challenges of the Phase-2 system, albeit with scaled-down connectivity, reduced data throughput and physics capabilities, providing a testing ground for new techniques of online data reduction and processing.
To maximally exploit the physics potential reachable with proton-proton collisions, the Large Hadron Collider is undergoing an ambitious upgrade program that will increase the delivered instantaneous luminosity to $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, allowing the collection of more than 3 ab$^{-1}$ of data at $\sqrt{s}=$14 TeV. One of the most challenging experimental conditions is posed by the largely increased pile-up, which will grow by more than a factor 5 with respect to present conditions. In order to face this unprecedented problem, the ATLAS detector will be equipped with new sets of both front-end and back-end electronics and a new trigger and data acquisition system able to cope with higher rates. The large number of detector channels, huge volumes of input and output data, short time available to process and transmit data, harsh radiation environment and the need for low power consumption all impose great challenges on the design and operation of the electronic systems. This talk will offer an overview of the solutions adopted by the ATLAS collaboration for the electronic systems, in order to highlight the global strategy and picture; full details on the individual projects will instead be addressed in dedicated talks. A summary of the status of these projects and the most important results from prototypes and tests will also be reported.
This contribution describes the system performing the trigger and the readout of the PMTs for the High Energy Particle Detector (HEPD-02) onboard the second satellite of the China Seismo Electromagnetic Satellite (CSES-02) mission.
CSES is a project developed to research the ionospheric perturbations associated with earthquakes. The mission aims at building a constellation of multi-instrument satellites to conduct a thorough study of ionospheric phenomena.
HEPD-02 is designed to detect cosmic rays, i.e. electrons and protons, along with light nuclei, in the energy range from a few MeV to a few hundred MeV. The instrument consists of a tracker, a trigger system and a calorimeter surrounded by a veto.
All scintillating detectors are read out by a single board, which also issues and manages the trigger signals for the whole apparatus. The HEPD-02 trigger system must be extremely versatile because, along the orbit of CSES-02, particle fluxes span several orders of magnitude and data acquisition must guarantee the measurement of energy spectra with a high duty cycle. The HEPD-02 trigger system features concurrent trigger configurations and prescaling capability to match the amount of data the instrument can process and send to the ground. Each trigger pattern is optimized according to the scientific requirements on the field of view and on the nature of the particles impinging on HEPD-02, with the prescaling settings suitably adjusted.
All the trigger configurations will be monitored by ratemeters. In addition, a trigger configuration dedicated to gamma-rays will be tracked on a time basis of 10 milliseconds, to measure photon fluxes in the MeV-tens of MeV energy range and provide sensitivity for Gamma Ray Bursts. We provide a comprehensive description of the design criteria and the architecture of the trigger system, including results from laboratory tests on engineering and pre-flight models.
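For illustration only, a minimal software sketch of the prescaling idea described above; the configuration names and factors are invented, and the real HEPD-02 logic lives in the trigger-board firmware.

    class Prescaler:
        """Accept 1 event out of `factor` that satisfy a given trigger pattern."""
        def __init__(self, factor):
            self.factor = max(1, factor)
            self.counter = 0

        def accept(self):
            self.counter += 1
            if self.counter >= self.factor:
                self.counter = 0
                return True
            return False

    # independent prescalers for each concurrent trigger configuration (names are illustrative)
    prescalers = {"electron_wide_fov": Prescaler(1),
                  "proton_high_flux": Prescaler(64),
                  "gamma_burst": Prescaler(1)}

    def global_trigger(fired):
        """fired: dict of configuration name -> bool (pattern satisfied this event)."""
        decisions = [fired.get(name, False) and ps.accept() for name, ps in prescalers.items()]
        return any(decisions)

    print(global_trigger({"proton_high_flux": True}))   # accepted only once every 64 such events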
The WaveDAQ data acquisition system has been developed at PSI (Switzerland), in collaboration with INFN Pisa, over the past nine years. It features integrated data acquisition at up to 5 GSPS with 12-bit resolution using the DRS4 chip, combined with sophisticated triggering capabilities. The DAQ boards of this system integrate bias-voltage generation for SiPMs, shaping, pre-amplification and scalers. Each 16-channel board can be read out either directly via on-board Gigabit Ethernet or in a custom crate using gigabit serial links. A 3U crate houses up to 256 channels with central clock distribution, triggering and data readout, allowing timing measurements down to 10 ps resolution.
The paper will describe the design principles and their implementation, and present our experience in deploying almost 9000 channels of this system in the MEG experiment and in other applications such as beam profile monitors and the FOOT experiment. Emphasis will be given to the lessons learned and to best practices in designing such large custom systems.
Within the Phase-II upgrade of the LHC, the readout electronics of the ATLAS LAr Calorimeters is being prepared for high-luminosity operation, expecting a pile-up of up to 200 simultaneous pp interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which increases the difficulty of the energy reconstruction. Real-time processing of digitized pulses sampled at 40 MHz is therefore performed using FPGAs.
To cope with the signal pile-up, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct bunch crossing and in energy resolution.
Very good agreement between neural network implementations in FPGA and software based calculations is observed. The FPGA resource usage, the latency and the operation frequency are analysed. Latest performance results and experience with prototype implementations will be reported.
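As a purely illustrative sketch of the kind of network involved, the snippet below is a small 1D convolutional model mapping a sequence of digitized samples to per-bunch-crossing energy estimates; the actual ATLAS LAr networks are fixed-point FPGA implementations with their own, tuned architecture.

    import torch
    import torch.nn as nn

    class PulseCNN(nn.Module):
        """Toy 1D CNN: one energy value per 40 MHz sample (i.e. per bunch crossing)."""
        def __init__(self, kernel=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=kernel, padding=kernel // 2), nn.ReLU(),
                nn.Conv1d(8, 8, kernel_size=kernel, padding=kernel // 2), nn.ReLU(),
                nn.Conv1d(8, 1, kernel_size=1))

        def forward(self, samples):               # samples: (batch, n_samples)
            return self.net(samples.unsqueeze(1)).squeeze(1)

    model = PulseCNN()
    pulses = torch.randn(4, 32)                   # 4 toy sequences of 32 ADC samples
    energies = model(pulses)                      # (4, 32): energy assigned to each bunch crossing

The sliding convolutional window is what allows overlapping pulses from neighbouring bunch crossings to be disentangled, which is where such networks can outperform a linear optimal filter.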
The increase of the particle flux (pile-up) at the HL-LHC, with instantaneous luminosities up to $7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, will have a severe impact on the ATLAS detector reconstruction and trigger performance. The end-cap and forward region, where the liquid-argon calorimeter has coarser granularity and the inner tracker has poorer momentum resolution, will be particularly affected. A High Granularity Timing Detector (HGTD) will be installed in front of the LAr end-cap calorimeters for pile-up mitigation and luminosity measurement.
The HGTD is a novel detector introduced to augment the new all-silicon Inner Tracker, adding the capability to measure charged-particle trajectories in time as well as space. Two double-sided layers of silicon sensors will provide precision timing information for minimum-ionizing particles, with a resolution as good as 30 ps per track. Readout cells with a size of 1.3 mm × 1.3 mm lead to a highly granular detector with 3.7 million channels, which poses a major challenge to the readout electronics system, implemented as Peripheral Electronics Boards (PEBs).
A PEB demonstrator system was developed to validate critical aspects in advance. The demonstrator is designed as a flexible platform on which many kinds of validations and tests can be performed, such as exercising the data transmission of the critical versatile link (lpGBT + VTRx+), validating the integration with the FELIX DAQ system, and testing HGTD modules efficiently. The demonstrator is a simplified but complete version of the PEB: it has two uplinks running at 10.24 Gb/s each and one downlink at 2.56 Gb/s, and it adopts a modular design that makes the ASICs replaceable.
The demonstrator has been tested and works as expected. The measured Bit Error Rates (BER) for both the uplinks and the downlink are below $10^{-14}$.
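For reference, assuming the standard zero-error confidence-limit estimate (not necessarily the procedure adopted here): if $N$ bits are transferred without a single error, the bit error rate is bounded at confidence level CL by $\mathrm{BER} < -\ln(1-\mathrm{CL})/N$, so a limit of $10^{-14}$ at $95\%$ CL requires roughly $3\times 10^{14}$ error-free bits, i.e. about 8 hours of continuous running on a 10.24 Gb/s uplink.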
The FOOT experiment aims at measuring the nuclear fragmentation of carbon and oxygen nuclei to characterise the secondary products in hadron therapy. C and O beams with energies in the range 200-400 MeV/u are shot onto thin targets, and the emerging fragments are reconstructed by the FOOT detector.
Since projectile fragmentation occurs in less than 10% of the events, a sophisticated trigger logic has been implemented to enrich the data sample with fragments, which are distinguished from primaries by the energy deposited in the scintillator detectors. The trigger was commissioned in 2021 and used in two data-taking campaigns at GSI and CNAO; as a result, the collected sample contains six times more fragments than with the previous setup. The efficiency was also measured to be close to 100% for all fragments.
The trigger algorithm, its implementation in the FPGA-based logic into the WaveDAQ system and the experimental results will be presented.
Time Series Classification (TSC) is an important and challenging problem for many subject-matter domains and applications. It consists in assigning a class to a specific time series, recorded from sensors or live observations over time.
TSC finds application in different fields, such as finance, medicine, robotics and physics, and it is mainly used for failure prediction, anomaly detection, pattern recognition and alert generation.
There are many algorithms designed to carry out time series classification; depending on the data, one type might produce higher classification accuracy than the others. In the last decade, with the advent of Machine Learning and AI, many algorithms have been developed to perform this task, for example using Neural Networks.
Here we present a new Neural Network architecture, called Convolutional Echo State Network (CESN), for the detection of patterns and for the classification of univariate and multivariate time series. This architecture arises from the union of Convolutional Neural Networks (CNNs), typically used for pattern recognition in images and videos, and Echo State Networks (ESNs), used mainly for forecasting time series from their past history.
CESN proves suitable for TSC tasks, both for univariate and multivariate time series, showing good accuracy and good sensitivity on datasets previously tested with other existing algorithms. We applied this technique to a simulated data set based on accelerometers and gyroscopes to detect fall conditions.
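A conceptual sketch of the CESN idea, combining random convolutional features, a fixed echo-state reservoir and a ridge-regression readout; dimensions, nonlinearities and the readout training are illustrative and may differ from the actual architecture.

    import numpy as np

    class CESN:
        """Toy Convolutional Echo State Network for time series classification."""
        def __init__(self, n_channels, n_kernels=16, kernel_len=7, n_res=200,
                     spectral_radius=0.9, seed=0):
            rng = np.random.default_rng(seed)
            self.kernels = rng.normal(0, 1, (n_kernels, kernel_len, n_channels))
            self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_kernels))
            W = rng.uniform(-0.5, 0.5, (n_res, n_res))
            self.W = W * spectral_radius / max(abs(np.linalg.eigvals(W)))  # echo-state scaling
            self.W_out = None

        def _state(self, series):                 # series: (T, n_channels)
            L = self.kernels.shape[1]
            feats = np.array([np.tanh(np.tensordot(series[t:t + L], self.kernels,
                                                    axes=([0, 1], [1, 2])))
                              for t in range(len(series) - L + 1)])
            x = np.zeros(self.W.shape[0])
            for u in feats:                       # drive the fixed random reservoir
                x = np.tanh(self.W_in @ u + self.W @ x)
            return x

        def fit(self, series_list, labels, reg=1e-3):
            X = np.array([self._state(s) for s in series_list])
            Y = np.eye(int(max(labels)) + 1)[labels]          # one-hot targets
            self.W_out = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

        def predict(self, series):
            return int(np.argmax(self._state(series) @ self.W_out))

    # toy usage: 20 random bivariate series (e.g. accelerometer x/y) with binary labels
    rng = np.random.default_rng(1)
    data = [rng.normal(0, 1, (100, 2)) for _ in range(20)]
    labels = [i % 2 for i in range(20)]
    net = CESN(n_channels=2)
    net.fit(data, labels)
    print(net.predict(data[0]))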
Coordinating firmware development among many international collaborators is an increasingly widespread challenge in high-energy physics. Guaranteeing the reproducibility of firmware synthesis and assuring the traceability of binary files is paramount.
We devised Hog (HDL on git), a set of Tcl and shell scripts that tackles these issues and is deeply integrated with HDL IDEs, such as Xilinx Vivado Design Suite and ISE/PlanAhead or Intel Quartus Prime, and with all major simulation tools, like Siemens ModelSim or Aldec Riviera Pro.
Git is a very powerful tool and has been chosen as a standard by several research institutions, including CERN. Hog integrates seamlessly with git to ensure absolute control of HDL source files, constraint files, and IDE and simulation settings. It guarantees traceability by automatically embedding the git commit SHA and a numeric version into the binary file, which is also automatically renamed.
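As a purely illustrative sketch of the traceability idea (Hog itself is implemented in Tcl and shell; the helper below, its names and the generic-setting commands are assumptions, not Hog's actual code):

    import subprocess

    def git_describe():
        """Return (short SHA, latest tag, dirty flag) of the current checkout."""
        sha = subprocess.check_output(["git", "rev-parse", "--short=8", "HEAD"], text=True).strip()
        dirty = bool(subprocess.run(["git", "diff", "--quiet"]).returncode)
        tag = subprocess.check_output(["git", "describe", "--tags", "--abbrev=0"], text=True).strip()
        return sha, tag, dirty

    def version_to_int(tag):
        """Encode a tag like 'v1.4.2' into a 32-bit word, e.g. 0x00010402 (illustrative scheme)."""
        major, minor, patch = (int(x) for x in tag.lstrip("v").split("."))
        return (major << 16) | (minor << 8) | patch

    if __name__ == "__main__":
        sha, tag, dirty = git_describe()
        if dirty:
            raise SystemExit("Uncommitted changes: refusing to build an untraceable bitfile")
        # hand the values to synthesis as top-level generics (Vivado-style Tcl shown as an example)
        print(f"set_property generic {{GIT_SHA=32'h{sha}}} [current_fileset]")
        print(f"set_property generic {{FW_VERSION=32'h{version_to_int(tag):08x}}} [current_fileset]")

Hog performs the equivalent steps within its Tcl workflow, so the values read back from dedicated firmware registers can be compared directly with the repository history.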
Hog does not rely on any external tool apart from the HDL IDE and git, so it is highly portable and does not require any installation. Developers can get up to speed quickly: clone the repository, run the Hog script, and work normally with the IDE.
The learning curve for users is minimal. Once the HDL project is created, developers can work on it either using the IDE graphical interface or with the provided shell scripts to run the workflow.
Hog works on Windows and Linux, supports IPbus, Sigasi and provides pre-made YAML files to set up a working Continuous Integration on GitLab (Hog-CI) with no additional effort, which runs the HDL implementation for the desired projects. Other features of Hog-CI are the automatic creation of tags and GitLab releases with timing and utilisation reports.
Currently, Hog is successfully used by several firmware projects within the High-Energy Physics community, e.g. in the ATLAS and CMS Phase-II upgrades.
The ATLAS level-1 calorimeter trigger (L1Calo) is a hardware-based system that identifies events containing calorimeter-based physics objects, including electrons, photons, taus, jets, and missing transverse energy. In preparation for Run 3, when the LHC will run at higher energy and instantaneous luminosity, L1Calo is currently implementing a significant programme of planned upgrades. The existing hardware will be replaced by a new system of FPGA-based feature extractor (FEX) modules, which will process finer-granularity information from the calorimeters and execute more sophisticated algorithms to identify physics objects; these upgrades will permit better performance in a challenging high-luminosity and high-pileup environment. This talk will introduce the features of the upgraded L1Calo system and the current status of installation and commissioning. In addition, the expected performance of L1Calo in Run 3 will be discussed.
The Large Hadron Collider (LHC) has envisaged a series of upgrades towards the High Luminosity LHC (HL-LHC), delivering five times the LHC nominal instantaneous luminosity, that will take place during the Long Shutdown 3 throughout 2026-2028. During this upgrade, the ATLAS Tile Hadronic Calorimeter (TileCal) will completely replace its on- and off-detector electronics, adopting a new read-out architecture.
Signals captured from the TileCal are digitized by the on-detector electronics and transmitted to the TileCal PreProcessor (TilePPr) located off-detector, which provides the interface with the ATLAS trigger and data acquisition systems.
The TilePPr sends the data from the on-detector electronics to the FELIX system. FELIX is the common ATLAS hardware across all subdetectors, designed to act as a data router, receiving data and forwarding them to the SoftWare Read-Out Driver (SWROD) computers. FELIX also distributes the TTC signals to the TilePPr, to be propagated to the on-detector electronics.
The SWROD is an ATLAS common software solution to perform detector specific data processing, including configuration, calibration, control and monitoring of the partition.
In this contribution we will introduce the new read-out elements for TileCal at the HL-LHC, the interconnection between the off-detector electronics and the FELIX system, results from the test-beam campaigns, as well as the developments of the preprocessing and of the monitoring of the status of the calorimeter modules through the SWROD infrastructure.
The Schwarzschild-Couder Telescope (SCT) is a Medium-Sized Telescope proposed for the Cherenkov Telescope Array (CTA). The current prototype is installed at the Fred Lawrence Whipple Observatory (FLWO) in Arizona, USA. The camera is only partially equipped and is being upgraded with improved SiPM sensors and a new Front End Electronics Module (FEEM) for the full focal plane. The new FEEMs aim to read out and digitize the SiPM pre-amplified signals down to the single photoelectron (phe) level. This phe signal is assumed equivalent to a signal with 2 mV peak amplitude and 500 MHz maximum bandwidth. The FEEM should have a linear response up to 2 V, for a required dynamic range of about 1000 phe. A noise equivalent of 0.5 phe is acceptable. Due to the severe mechanical constraints, requiring compact electronics and low-noise performance, the FEEM consists of two stacked submodules, one dedicated to the power supplies and the other housing the FPGA, which reads out and sends the digitized data to the main backplane. The new FEEM is capable of digitizing 64 analog channels at a sampling frequency of 1 GSamples/s.
A first prototype of the FEEM has been produced. In this contribution we will present the performance of these FEEM prototypes.
The HL-LHC upgrade will not only significantly increase the collider's physics reach, but also pose challenging requirements on the performance of the detector. To exploit its full physics potential, more selective hardware triggers are required. At the ATLAS experiment, a huge gain in the selectivity of the first-level muon trigger will be accomplished by incorporating the data of the precision tracking muon drift-tube (MDT) chambers into the trigger decision in addition to the fast RPC and TGC trigger chambers. For this purpose, the Sector Logic system processing data from the trigger chambers will be complemented by the novel MDT trigger processor (MDTTP) boards.
The front-end electronics of the muon system will be replaced to cope with the expected increased rates and latencies, streaming all hit data continuously to back-end trigger electronics over high-speed optical links. The hits of the fast trigger chambers will be processed by the Sector Logic to determine the bunch crossing in which a muon has been created and a region of interest in which the muon is detected. This coarse position information is then used as a seed for the reconstruction of the muon trajectory from the spatially precise MDT hits by the MDTTP. Based on this, the final muon trigger decision is taken. The achieved online muon transverse-momentum resolution leads to a sharp trigger turn-on curve and to a very low fake-trigger rate.
Preliminary designs for the new trigger processors exist. A prototype of the MDTTP ATCA blade is in production. It consists of two coplanar printed circuit boards, the Service Module providing the basic infrastructure and the Command Module incorporating a high-performance FPGA for data processing. The presentation will describe the new ATLAS first-level muon trigger architecture, the MDT trigger processor blade and the firmware running on it.
One of the proposed Medium-Sized Telescopes for the Cherenkov Telescope Array (CTA) is the dual-mirror-optics Schwarzschild-Couder Telescope (SCT). The prototype SCT camera is currently equipped with 24 SiPM modules, each made of 64 pixels. The upgrade of the current camera is in progress, with the aim of fully equipping all 177 SiPM modules. New front-end electronics is being developed and tested in order to improve the noise performance and match the CTA requirements. In this process, more than 11000 SiPMs and the related electronics will be tested in the laboratory before assembly on the telescope camera.
The SiPM Multichannel ASIC for high Resolutions cherenkov Telescope (SMART) has been developed by INFN to amplify the SiPM signals before they are digitized and injected into the trigger logic based on the TARGET ASICs.
An experimental setup has been devised to test the about 750 SMART chips which will equip the full camera of the prototype SCT. Each SMART is tested for proper operation in response to a laser pulse. In this contribution we present a detailed scheme of the test bench and the first results of the quality-control measurements.
The Serenity boards are ATCA-format boards used in the readout of the CMS High-Luminosity upgrade detector. They receive data from the front-end (FE) over up to 144 optical input links (up to 10 Gb/s each); the data are formatted by high-performance FPGAs and eventually routed via 4 output optical 25 Gb/s links to other back-end (BE) boards.
We will present the architecture and behavior of the system that handles these data from the FE to the final Data Acquisition system (DAQ) of CMS: the DAQPATH.
After the L1-accept, data from the FE are received in input buffers. When all the data of the current event are present, the DAQPATH uses a main FSM with a token-ring architecture that sequentially starts the buffer readout and merges the fragments into data packets in output buffers feeding the 25 Gb/s output links. The DAQPATH system has a modular and parametric structure: each DAQPATH module feeds one output link with data from a programmable number of sources. Different input data sizes are allowed. Input channels can be organized in groups and pipelined to meet timing requirements.
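A hedged behavioural model of this token-ring event building, useful only to illustrate the data flow (not the actual firmware):

    from collections import deque

    class DAQPathModel:
        def __init__(self, n_sources):
            self.buffers = [deque() for _ in range(n_sources)]

        def push_fragment(self, source, event_id, payload):
            self.buffers[source].append((event_id, payload))

        def build_packet(self):
            """Return one merged packet if every source holds the next event, else None."""
            if any(not b for b in self.buffers):
                return None
            event_id = self.buffers[0][0][0]
            if any(b[0][0] != event_id for b in self.buffers):
                raise RuntimeError("event fragments out of sync")
            packet = bytearray()
            for b in self.buffers:               # the token visits the sources in a fixed order
                _, payload = b.popleft()
                packet += len(payload).to_bytes(2, "big") + payload
            return event_id, bytes(packet)

    daq = DAQPathModel(n_sources=3)
    for src in range(3):
        daq.push_fragment(src, event_id=1, payload=bytes([src]) * 4)
    print(daq.build_packet())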
The DAQPATH firmware has been validated through extensive functional simulations. It has been implemented in different configurations in the Xilinx Kintex Ultrascale+ FPGA housed in the Serenity board and successfully tested on hardware at the Tracker Integration Facility at CERN, at the required core frequency of 360 MHz.
The Extensible Modular Processor (EMP) is the common framework for the development of DAQ, trigger and control firmware in the CMS experiment. EMP provides a flexible platform that allows sharing of firmware modules and compatibility with FPGA devices housed in the different boards. The DAQPATH firmware has been successfully integrated in the EMP framework and it is now available to CMS firmware developers.
Optical transceivers have rapidly become essential components in the readout sub-systems of high-energy physics (HEP) experiments. Given the ever-increasing radiation hardness requirements for next-generation colliders, existing readout systems based on directly modulated laser diodes, e.g., VTRx+, will rapidly become ineffective [1]. Properly engineered silicon-based photonic modulators have been shown to sustain higher radiation tolerance than current VCSEL-based devices [2]. In addition, silicon photonics (SiPh) solutions could enable higher data rates and lower power consumption with further possibilities of data aggregation, e.g., wavelength division multiplexing (WDM).
A full-custom photonic integrated circuit (PIC) in IMEC’s iSiPP50G silicon-on-insulator (SOI) technology has been designed in the context of the INFN-funded projects PHOS4BRAIN and FALAPHEL to further explore the suitability of SiPh for radiation-pervaded use cases. The latter project aims at the development of a radiation-tolerant 4-lane SiPh WDM transmitter, driven by custom-designed electronic integrated circuits (EICs), implementing an aggregated 100 Gb/s transmission bandwidth.
The PIC includes different flavors of SiPh optical modulators (Mach-Zehnder, ring, and silicon-germanium electro-absorption modulators) in order to understand which may best fit as building blocks in a future radiation-hard integrated optoelectronic readout module. This contribution will present recent developments and preliminary device characterizations of the SiPh modulators, designed to target total ionizing doses (TIDs) up to 1 Grad.
References
[1] J. Troska, et al., “The VTRx+, an Optical Link Module for Data Transmission at HL-LHC”, Topical Workshop on Electronics for Particle Physics (TWEPP-17), doi: https://doi.org/10.22323/1.313.0048
[2] M. Zeiler et al., "Radiation Damage in Silicon Photonic Mach–Zehnder Modulators and Photodiodes," in IEEE Transactions on Nuclear Science, vol. 64, no. 11, pp. 2794-2801, Nov. 2017, doi: https://doi.org/10.1109/TNS.2017.2754948
Using a bulk Micro-Megas (MM) detector, a precise energy measurement can be obtained by collecting the total charge reaching the mesh electrode, connected to a low-noise charge-sensitive preamplifier. When operating such a device in a low-pressure gas regime, it is necessary to modify the amplification-gap geometry to reach the optimal detector gain, reducing the discharge probability between the anode and the mesh while keeping a reasonable avalanche volume for track-length development and resolution. This implies changes in the input capacitance of the preamplifier, which influences its signal-to-noise ratio and thus the detector energy resolution. An ad-hoc high-gain, low-noise charge preamplifier has been developed to cope with the requirements of our application field. In this short report, we present the development activities focused on the study of a configurable charge amplifier to be connected to MM detectors having different mesh capacitances.
The Jiangmen Underground Neutrino Observatory (JUNO) is a neutrino experiment under construction with a broad physics program. The main goal of JUNO is the determination of the neutrino mass ordering by precisely measuring the fine structure of the neutrino energy spectrum. Precise reconstruction of the event energy is crucial for the success of the experiment.
The JUNO detector is equipped with a huge number of photomultiplier tubes (PMTs) of two types: 17 612 large PMTs (20 inches) and 25 600 small PMTs (3 inches). The detector is designed to provide an energy resolution of 3% at 1 MeV. Compared to traditional reconstruction methods, Machine Learning (ML) is significantly faster for a detector with so many PMTs.
In this work we study ML approaches for energy reconstruction from the signals gathered by the PMT array and present fast models using aggregated features: a fully connected deep neural network and boosted decision trees. The problem of the domain adaptation of a model trained on Monte Carlo (MC) data to real data will also be discussed. The dataset for training and testing is generated with the full detector MC simulation using the official JUNO software.
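As an illustration of the aggregated-feature approach only: a toy boosted-decision-tree regression on quantities summarising the PMT array. The feature names and the dataset are placeholders, not the official JUNO inputs.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 5000
    true_energy = rng.uniform(1.0, 8.0, n)                               # MeV, toy events
    features = np.column_stack([
        true_energy * 1500 + rng.normal(0, 40, n),                       # total collected charge [p.e.]
        np.minimum(17612, true_energy * 2300 + rng.normal(0, 60, n)),    # number of fired large PMTs
        rng.uniform(-1, 1, n),                                           # charge-weighted position (dummy)
    ])

    X_train, X_test, y_train, y_test = train_test_split(features, true_energy, test_size=0.2)
    model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
    model.fit(X_train, y_train)
    resolution = np.std((model.predict(X_test) - y_test) / y_test)
    print(f"toy relative energy resolution: {resolution:.3%}")

Because the model acts on a handful of aggregated features rather than on tens of thousands of PMT channels, inference is essentially instantaneous, which is the speed advantage mentioned above.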
The VMM3a is a custom Application Specific Integrated Circuit (ASIC) developed by Brookhaven National Laboratory (BNL), capable of simultaneous precise measurements of both the charge and the time characteristics of signals in gaseous detectors. The flexibility of its operation modes makes it attractive as a front-end electronics solution for a wide range of applications, including readout systems of Straw Trackers in future High Energy and Neutrino Physics experiments.
We present the first results on the performance of straw drift tubes operated with a VMM3a-based readout implemented by the RD51 collaboration (CERN) within the Scalable Readout System (SRS). A dedicated measurement setup developed at JINR allows the readout performance to be studied with generator test pulses, cosmic-ray muons and radioactive sources. Along with the laboratory studies, we review the results obtained with the SPS muon beam at CERN.
We also present examples of Garfield simulations of the straw-tube response interfaced to the PSpice electronics simulation package. This approach allows an efficient optimization of the signal path and of the VMM3a operation mode, and supports performance studies for Straw Trackers operated in a magnetic field and with different gas mixtures.
Future potential applications of VMM3a include the Straw Tube Trackers for the Near Detector complex of the DUNE experiment, the central tracker of the SPD experiment at NICA, and the Spectrometer Straw Tracker of the SHiP experiment.
The Mu2e experiment at Fermilab will search for the coherent neutrinoless conversion of a muon into an electron in the field of an aluminum nucleus, with a sensitivity improvement by a factor of 10,000 over existing limits. The Mu2e Trigger and Data Acquisition System (TDAQ) uses the otsdaq framework as its online Data Acquisition System (DAQ) solution. Developed at Fermilab, otsdaq integrates several framework components - an artdaq-based DAQ, art-based event processing, and an EPICS-based detector control system (DCS) - and provides a uniform multi-user interface to its components through a web browser.
Data streams from the Mu2e tracker and calorimeter are handled by the artdaq-based DAQ and processed by a one-level software trigger implemented within the art framework. Events accepted by the trigger have their data combined, post-trigger, with the separately read out data from the Mu2e Cosmic Ray Veto system.
The foundation of the Mu2e DCS, EPICS (Experimental Physics and Industrial Control System), is an open-source platform for monitoring, controlling, alarming, and archiving.
A prototype of the TDAQ and DCS systems has been built and tested over the last three years at Fermilab’s Feynman Computing Center, and the production system installation is now underway. The talk will present their status and focus on the installation plans and procedures for racks, workstations, network switches, gateway computers and DAQ hardware; the slow-controls implementation and testing; and the installation of air curtains and the fire-protection system. It will also discuss the network design and cabling, the quality-assurance plans and procedures for the trigger-farm computers, and the system and software maintenance plans.
We describe the in-line real-time trigger module that provides a majority trigger decision for the ATLAS Forward Proton detector (AFP). A forward proton traverses a sequence of four successive Cherenkov radiators (a “Train”) connected to a fast multi-anode MCP photomultiplier. Four such Trains are mounted next to one another and subdivide the AFP acceptance for diffractive protons into “slices” with roughly equal occupancy. Every Train that passes the majority trigger encodes a bit in the 5-bit trigger word (the first bit is an OR of all trains firing), which is sent over a 220 m foam-core coaxial cable to the ATLAS Central Trigger Processor (CTP). The fast real-time Trigger Processor is described, including the trigger decoder that interfaces with the CTP.
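A minimal sketch of how such a 5-bit word can be formed; the bit assignment here is illustrative, not the documented AFP encoding.

    def trigger_word(train_fired):
        """train_fired: list of 4 booleans, one per Cherenkov train passing the majority condition."""
        word = 0
        for i, fired in enumerate(train_fired):
            if fired:
                word |= 1 << (i + 1)     # bits 1..4: the individual trains
        if word:
            word |= 1                     # bit 0: OR of all trains firing
        return word

    assert trigger_word([False, True, False, True]) == 0b10101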
The Mu2e experiment will search for the CLFV neutrinoless coherent conversion of a muon to an electron in the field of a nucleus. A custom Event Display has been developed using TEve, a ROOT-based 3-D event visualisation framework. Event displays are crucial for monitoring and debugging during live data taking as well as for public outreach. A custom GUI allows event selection and navigation. Reconstructed data such as tracks, hits and clusters can be displayed within the detector geometries upon GUI request. True Monte Carlo trajectories of particles traversing the muon beam line, obtained directly from Geant4, can also be displayed. Tracks are coloured according to their particle identification, and users can select which trajectories to display. Reconstructed tracks are refined using a Kalman filter. The resulting tracks can be displayed alongside truth information, allowing visualisation of the track resolution. The user can remove or add data based on the energy deposited in a detector or on the arrival time. This is a prototype; an online event display is currently under development using REve, which allows remote access during live data taking.
The FASER experiment at the LHC will be instrumented with a high-precision W-Si preshower to identify and reconstruct electromagnetic showers produced by two O(TeV) photons at distances down to 200 µm.
The new detector will feature a monolithic silicon ASIC with hexagonal pixels of 65 µm side, with extended dynamic range for the charge measurement and capability to store the charge information for thousands of pixels per event. The ASIC will integrate SiGe HBT-based fast front-end electronics with O(100) ps time resolution. Analog memories inside the pixel area will be employed to allow for a frame-based event readout with minimum dead area. A description of the pre-shower and its expected performance will be presented together with the design of the monolithic ASIC and the testbeam results of prototypes.
The barrel part of the CMS electromagnetic calorimeter (ECAL) consists of 61200 PbWO4 crystals coupled to avalanche photodiodes (APDs). A decrease of the ECAL operating temperature from 18 °C to 9 °C is needed to mitigate the increase in APD noise from radiation-induced dark current in the conditions of the high luminosity upgrade of the LHC. Moreover, a full re-design of the front-end electronics has been undertaken in order to deal with the increase of pile-up events and to improve the rejection of anomalous signals generated from direct interaction with the APDs. The VFE (very front-end) card will be equipped with two new ASICs: a fast trans-impedance amplifier named CATIA as well as a data conversion and compression ASIC named LiTE-DTU. The VFE will interface with the radiation tolerant LpGBT transceiver and the VTRx+ optical board, while trigger primitive generation will be moved off-detector to FPGA-based processors. The CATIA ASIC has a single input and two differential outputs with different gains in order to have better resolution for low energy signals. CATIA is designed in commercial CMOS 130 nm technology and can be controlled via an I2C interface. The LiTE-DTU ASIC embeds two 12-bit 160 MS/s ADCs, a sample selection logic, a lossless compression digital logic, and a 1.28 Gb/s serializer that will directly interface with the LpGBT e-links. LiTE-DTU is designed in commercial CMOS 65 nm technology. It embeds a PLL for the generation of the low jitter 1.28 GHz clock required by the ADCs and the serializer. Both ASICs have been extensively tested in lab and beam tests. This new system has been verified to fulfill the requirements of the experiment in terms of performance and radiation tolerance. The ASICs are now in the pre-production phase.
This work discusses the design of analog front-end circuits for future, high-rate pixel detector applications. The front-end design activity is being carried out in the framework of the INFN Falaphel project, aiming at the development and integration of silicon photonics modulators with high speed, rad-hard electronics in a 28 nm CMOS technology. The project targets the tracker of the hadronic Future Circular Collider (FCC-hh) experiments, with the opportunity to replace the inner pixel systems of the high-luminosity LHC experiments after 2030.
Two front-end architectures are being developed, one with Time-over-Threshold digitization of the input signal and the other based on in-pixel flash ADCs. The first architecture includes a charge sensitive amplifier (CSA) featuring a Krummenacher feedback network for detector leakage current compensation. The CSA output is connected to a comparator implemented by means of a differential pair driving a common source output stage. In the second version of the front-end, a novel, clocked comparator is being developed and conceived to dramatically reduce the threshold dispersion of the front-end.
The conference paper will include a thorough description of the analog processors being developed. The main analog performance parameters, as obtained from circuit simulations and including equivalent noise charge, threshold dispersion and time-walk, will be presented.
The LHCb Upgrade in Run 3 has changed its trigger scheme to a full software selection in two steps. The first step, HLT1, will be entirely implemented on GPUs and will run a fast selection aimed at reducing the visible collision rate from 30 MHz to 1 MHz.
This selection relies on a partial reconstruction of the event. One version of this reconstruction starts with two stand-alone tracking algorithms, the VELO-pixel tracking and the HybridSeeding, which reconstruct track segments in the VELO and SciFi trackers, respectively. These segments are then combined by a matching algorithm to produce ‘long’ tracks, which form the basis of the HLT1 reconstruction.
We discuss the principles of these algorithms as well as the details of their implementation, which allow them to run in a high-throughput configuration. An emphasis is put on the optimisation of the algorithms themselves in order to take advantage of the GPU architecture. Finally, results are presented in the context of the LHCb performance requirements for Run 3.
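A hedged sketch of the matching step, with invented units and cuts rather than the tuned HLT1 values; the GPU implementation parallelises this search over candidate pairs.

    def match_segments(velo, scifi, z_match=5200.0, sig_x=10.0, sig_ty=0.002, chi2_max=25.0):
        """velo, scifi: lists of straight-line segments (x, y, z, tx, ty); toy matching in one plane."""
        pairs = []
        for iv, (xv, yv, zv, txv, tyv) in enumerate(velo):
            xv_m = xv + txv * (z_match - zv)              # VELO segment at the matching plane
            best, best_chi2 = None, chi2_max
            for js, (xs, ys, zs, txs, tys) in enumerate(scifi):
                xs_m = xs + txs * (z_match - zs)          # SciFi seed extrapolated backwards
                # x difference at the matching plane plus y-slope difference (non-bending plane)
                chi2 = ((xv_m - xs_m) / sig_x) ** 2 + ((tyv - tys) / sig_ty) ** 2
                if chi2 < best_chi2:
                    best, best_chi2 = js, chi2
            if best is not None:
                pairs.append((iv, best, best_chi2))
        return pairs

    # toy usage: one VELO segment and two SciFi seeds (lengths in mm, dimensionless slopes)
    velo = [(1.0, 2.0, 0.0, 0.01, 0.002)]
    scifi = [(60.0, 18.0, 8000.0, 0.012, 0.0021), (500.0, -40.0, 8000.0, 0.05, -0.01)]
    print(match_segments(velo, scifi))                    # only the first seed survives the chi2 cut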
The CBC3.1 is the final version of the readout ASIC for 2S modules in the outer radial region of the upgraded CMS Tracker at the High-Luminosity LHC. The chip development was completed in an engineering run in 2018. Subsequently, two pre-production lots of wafers were delivered in 2019, and large-scale production deliveries began in May 2021. So far almost 270 production wafers have been received. The engineering-run wafers and pre-production lots were tested using an automatic wafer prober, intended for final acceptance tests, and showed some variations in the yield of good chips across the wafers, whose patterns suggested non-optimal processing in the foundry. To try to understand this better, the probe station was adapted so that measurements could be carried out at low temperatures, down to -30 °C.
Although both pre-production and production wafers have a high yield of good chips, some unexpected behaviour was observed at low temperature. Rare memory errors were observed when the hit data are read out. The wafers affected are from lots with patterns of central yield loss, and the errors do not seem to be present in better-quality wafers. However, the rate of occurrence is so low that there should be negligible impact on CMS track reconstruction.
The second issue is occasional corruption of some registers which store fine-tuning values for individual channel pedestals, following certain write operations into specific registers. This also appears to be correlated with manufacturing quality and, although the impact should be minor and probably avoidable during tracker operation, the origin of the problem is not yet fully understood and is the subject of ongoing investigations.
The status of the wafer probing will be presented, with results from studies to date.
Next-generation cryogenic bolometric detectors, like those used in the CUPID and CROSS experiments searching for neutrinoless double-beta decay, will also identify the type of interacting particle by measuring the amount of scintillation light produced in the crystals. Light signals are characterized by a faster response with respect to heat signals and will thus require different characteristics of the readout electronics. The signal filtering and digitization for these experiments will be based on a custom solution comprising several analog-to-digital boards interfaced to Altera Cyclone V SoC FPGA modules installed on the backplane of the DAQ crates. Each analog-to-digital board hosts 12 channels that allow signal digitization up to 25 ksps per channel and an effective resolution of 21 bits at 5 ksps. The anti-aliasing filter cut-off frequency can be digitally adjusted with 10 bits of resolution from 24 Hz to 2.5 kHz and thus adapted to a wide range of detectors. The SoC FPGA modules control all the acquisition parameters through a Python-based server, and are responsible for the synchronization of the analog-to-digital boards and for the data transfer to the storage, using the RTP protocol on a standard Ethernet interface. Each FPGA module manages the data coming from 8 boards, offering excellent scalability. In this contribution, we will present an overview of the new filtering and digitization system, a detailed characterization of its performance, and the results of the first tests with real detectors during CUPID and CROSS test runs.
The cluster counting/timing technique in a drift chamber is a well-established technique to obtain a bias-free estimate of the impact parameter. Its application requires identifying the clusters of avalanching electrons from each primary ionization event. This is done by digitizing the signal from the sense wire of the drift chamber and applying a peak-identification algorithm. The rise time of the signal from a cluster is approximately 1~ns; therefore front-end electronics with about 1~GHz bandwidth is required. High linearity and low distortion are also required to resolve the signal of each cluster.
Dedicated front-end electronics based on commercial components has been developed in order to detect signals from individual ionization clusters in a drift chamber. The readout channel is characterized by high linearity, low distortion, and a bandwidth adequate for the expected spectral density of the signal. Furthermore, the readout electronics has been designed with an easily variable gain, so that the signal is always suitable for the digitizer despite possible changes in the working point of the drift chamber.
Signal amplification is obtained through two gain stages, made of a variable gain amplifier (VGA) and an output driver. Both devices have been chosen to be suitable for pulsed applications. Specific compensation techniques have been implemented to obtain an overall bandwidth of the order of 1~GHz. The serial interface is a generic 4-wire synchronous interface that is compatible with the SPI-type interfaces used on many microcontrollers and DSP controllers. Since the chosen VGA is equipped with a chip-select pin, it is possible to set different gains for the amplifiers of a multichannel front-end. Measurements of the gain, bandwidth and linearity of the described device will be presented, together with its response to the signal obtained from a drift tube.
The HGCROC ASICs are dedicated very front-end electronics designed to read out the High Granularity Calorimeter (HGCAL), which will replace the present end-cap calorimeters of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). They come in two flavours: HGCROC to read out the silicon pads of the electromagnetic and front hadronic sections, and H2GCROC to read out the SiPMs coupled to the scintillating tiles of the back hadronic sections, where the radiation constraints are less severe.
H2GCROC is a radiation-hardened 130nm CMOS chip with 78 channels (72 reading out standard cells, 2 reading out calibration cells, and 4 channels not connected to any sensor cells for common-mode noise estimation). The front-end preamplifier is adapted for the SiPM's higher signal level, expecting pC/MIP rather than fC/MIP ranges. A current conveyor at the FE ASIC's input modifies the signal to guarantee that the remainder of the chain is compatible with the one used to read the silicon sensors. Each channel also contains a low noise and high gain preamplifier and shapers connected to a 10-bit 40 MHz SAR-ADC allowing charge measurement over the preamplifier's linear range. A discriminator and TDC provide charge information from TOT (Time over Threshold) over a 200 ns dynamic range in the preamplifier's saturation zone. Additionally, timing information with an accuracy of 25 ps is produced via a fast discriminator and TDC. Finally, DRAM memory is used to store charge and timing data for later processing.
The chip was received at the end of 2020 and has been extensively tested since then, in the laboratory and in test beams. This work examines the very front-end design and performance, including the timing performance with the sensor.
The ICARUS T600 LAr TPC is located at shallow depth along the Booster Neutrino Beam (BNB) and NuMI off-axis beam lines at Fermilab, with the aim of searching for sterile neutrinos in the context of the SBN program. A system of 360 large-area Hamamatsu R5912-MOD Photo Multiplier Tubes (PMTs) is used to detect the VUV scintillation light emitted by ionizing particles, allowing for the trigger and timing of the neutrino events and for the reduction of the cosmic-ray background due to the operation of ICARUS T600 at the surface.
The ICARUS trigger system exploits the coincidence of the BNB and NuMI off-axis beam spills with the prompt scintillation light detected by the PMT system. This system is based on PXIe logic modules processing PMT signals discriminated by CAEN V1730 digitizers. The logic system consists of: an INCAA Computers SPEXI PXIe board, based on a CERN project, handling the beam-extraction information needed for time synchronization; three FPGA boards (NI model PXIe-7820) processing the PMT information to recognize an event interaction in coincidence with the beam spill, thus providing a global trigger that starts the acquisition of the TPC and PMT signals; and a PXIe RT controller implementing all the features for communication with the DAQ.
The SPEXI board and the FPGAs are programmed according to the different requirements for debugging, calibration and data taking, using VHDL and the NI LabVIEW FPGA packages. The implementation of the ICARUS T600 trigger system, its logical block diagram and possible upgrades, as well as its performance, will be reported.
Trackers in next-generation high-energy experiments must cope with unprecedentedly high rates and track densities. This poses the need for timing information at the pixel level, high readout frequency and radiation hardness. 28-nm CMOS technology appears to have the whole set of characteristics needed to satisfy such experimental requirements.
Within the TimeSPOT project, we have developed a complete 28-nm CMOS ASIC to explore technical solutions to the challenging issues of such complex future detectors.
The ASIC, named Timespot1, features a 32×32-channel hybrid-pixel matrix and integrates one analogue front-end, one discriminator and one high-resolution time-to-digital converter per pixel. The system aims to achieve a timing resolution of 30 ps or better at a maximum event rate of 3 MHz per channel with a data-driven interface. The power consumption can be programmed to range between 1.2 W/cm$^2$ and 2.6 W/cm$^2$. The present paper illustrates the experience and the results gained in the design and tests of the ASIC. Possible future developments are also addressed.
Ultra-low-mass, high-granularity drift chambers fulfill the requirements of tracking systems for modern High Energy Physics experiments at future high-luminosity accelerators (FCC or CEPC). For this purpose, the ability to reach the expected resolutions and rate performance is required. The application of the cluster counting/timing technique adds valuable PID capability, with resolutions outperforming the usual dE/dx technique. By measuring the arrival time of each individual ionization cluster at the sense wire and by using suitable statistical tools, it is possible to perform a bias-free estimate of the impact parameter and a precise PID in drift chambers operating with helium-based gas mixtures. The cluster counting/timing technique consists in isolating the pulses due to different ionization clusters; it is therefore necessary to have a read-out interface capable of processing such fast signals. This requires a data acquisition chain able to manage the low-amplitude signals from the sense wires (a few mV) with a high bandwidth ($\sim$1~GHz). The signals are first converted from analog to digital by a fast ADC. Requirements on the drift-chamber performance impose conversion at sampling frequencies of at least 1~GS/s with 14-bit resolution.
These constraints, together with the maximum drift times and the large number of readout channels, impose a sizable data-reduction strategy while preserving all relevant information. Measuring both the amplitude and the arrival time of each peak in the signal, associated to each ionization cluster, is the minimum requirement on the data transferred for storage to prevent any loss of information. An electronic board including a fast ADC and an FPGA for real-time processing of the signals coming from a drift chamber is presented. Moreover, the implementations of various peak-finding algorithms are compared.
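As a purely offline illustration of peak finding on a digitized waveform (thresholds, pulse shapes and sampling rate are invented; the FPGA algorithms compared in this work are fixed-point, real-time implementations):

    import numpy as np
    from scipy.signal import find_peaks

    FS = 2.0e9                                   # toy sampling frequency: 2 GS/s

    def cluster_peaks(waveform, threshold_mv=3.0, min_separation_ns=2.0):
        """Return the arrival times [ns] and amplitudes of candidate ionization-cluster peaks."""
        distance = max(1, int(min_separation_ns * 1e-9 * FS))
        idx, props = find_peaks(waveform, height=threshold_mv, distance=distance)
        return idx / FS * 1e9, props["peak_heights"]

    # toy waveform: baseline noise plus three cluster-like pulses
    t = np.arange(0, 200e-9, 1 / FS)
    wave = np.random.normal(0, 0.5, t.size)
    for t0, amp in [(40e-9, 8.0), (55e-9, 5.0), (120e-9, 6.5)]:
        wave += amp * np.exp(-(t - t0) ** 2 / (2 * (1e-9) ** 2)) * (t > t0 - 3e-9)

    times, amps = cluster_peaks(wave)
    print(list(zip(np.round(times, 1), np.round(amps, 1))))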
The ICARUS-T600 liquid argon (LAr) time projection chamber (TPC) is presently used as the far detector of the Short Baseline Neutrino (SBN) program at Fermilab (USA) to search for a possible LSND-like sterile neutrino signal at $\Delta m^2 \sim \mathcal{O}(\mathrm{eV}^2)$ with the Booster Neutrino Beam (BNB).
A light detection system, based on large-area Photo-Multiplier Tubes (PMTs), has been realized for ICARUS-T600 to detect the VUV photons produced by ionizing particles in LAr. This system is fundamental for the TPC operation, providing an efficient trigger and contributing to the 3D reconstruction of events. Moreover, since the detector is exposed to a huge flux of cosmic rays because of its shallow-depth operation, the light detection system allows for the time reconstruction of events, contributing to the identification and selection of neutrino interactions within the BNB spill gate.
Based on 360 Hamamatsu R5912-MOD PMTs deployed behind the four TPC wire chambers, the system requires a high-performance electronics set-up. The electronics consists of fast sampling digitizers (500 MSa/s, 14-bit) allowing for the recording and discrimination of the signals directly extracted from the PMT anodes, providing a fast identification of interactions and the exploitation of the scintillation light for trigger purposes.
The main features of the electronics for the ICARUS-T600 scintillation light detection system are introduced together with a presentation of its installation and commissioning at Fermilab.
Future experiments at the LHC (beyond Phase-II upgrades) and in future colliders (like FCC) will need radiation tolerant multi-Gbps serial links for the detector readout. The most challenging components of those links are radiation tolerant (up to 1 Grad) Serializers and Deserializers (SERDES).
Current SERDES developed for the HL-LHC have limited radiation hardness (200 Mrad). We will present the status of the R&D for 10-bit 3.2 Gbps SERDES with increased radiation tolerance in 65 nm CMOS technology.
Two circuits have been implemented in silicon, a serializer (SER) and a deserializer (DES), with a modular architecture that makes them easily scalable and with full-custom Current Mode Logic (CML) cells (registers, clock buffers, CML/CMOS and CMOS/CML converters).
To reach the desired radiation tolerance, dedicated design techniques are required, such as the use of Enclosed Layout Transistors, n-MOS-only logic and "long" MOS devices. In addition, we introduced (and later patented) an innovative compensation technique based on a tunable bias voltage in the CML cells. Irradiation tests have shown that this technique recovers most of the performance degradation due to TID effects.
Two prototypes have been produced over the past couple of years. Tests and characterization of the first prototype led to improvements in the SERDES design that allowed a significant reduction of the power dissipation. Thorough test and characterization results of both devices will be presented.
We present our developments towards a readout chip prototype for future pixel detectors with timing capabilities. The readout chip is intended for characterizing 4D pixel arrays with a pixel size of the order of 100x100 µm², where the sensors are LGADs. The long-term focus is a possible application in the extended forward pixel system (TEPX) of the CMS experiment during the HL-LHC. The requirements for this ASIC are the incorporation of a TDC (Time-to-Digital Converter) in the small pixel area, low power consumption and radiation tolerance up to $5\times10^{15} \text{n}_\text{eq}/\text{cm}^2$ to withstand the radiation levels in the innermost rings of the TEPX modules during the HL-LHC. Prototype structures with different versions of the TDC, together with front-end circuitry to interface with the sensors, have been designed and produced in 110 nm CMOS technology at LFoundry and UMC. The design of the front end will be discussed, together with the test set-up used for the measurements and the first results comparing the performance of the different structures.
The Tracking Ultraviolet Setup (TUS) and the Multiwavelength Imaging New Instrument for the Extreme Universe Space Observatory (Mini-EUSO) are the first two space missions of the JEM-EUSO (Joint Experiment Missions for the Extreme Universe Space Observatory) program devoted to demonstrating the principle of detecting Ultra-High Energy Cosmic Rays (UHECRs) from space. TUS operated in 2016 and 2017 as part of the Lomonosov satellite, orbiting 500 km above ground, while Mini-EUSO has been operational since 2019 on the International Space Station. Both telescopes are based on an optical system (Fresnel mirrors for TUS and Fresnel lenses for Mini-EUSO) which focuses near-UV light (290 - 430 nm) onto an array of photomultiplier tubes. Both instruments adopt a multi-level trigger scheme with time resolutions ranging from microseconds to tens of milliseconds to search for UHECRs and for slower phenomena occurring in the atmosphere, such as transient luminous events, meteors and macroscopic dark matter.
In this contribution, a review of the two trigger and data acquisition systems will be presented, showing their in-flight performance with emphasis on the search for UHECRs and macroscopic dark matter.
Gravitational waves have opened a new window on the Universe and paved the way to a new era of multimessenger observations of cosmic sources. Second-generation ground-based detectors such as Advanced LIGO and Advanced Virgo have been extremely successful in detecting gravitational wave signals from coalescences of black holes and/or neutron stars. However, in order to reach the required sensitivities, the background noise must be investigated and removed. In particular, transient noise events called "glitches" can affect data quality and mimic real astrophysical signals; it is therefore of paramount importance to characterize them and find their origin, a task that supports the detector characterization activities of Virgo and other interferometers. Machine learning is one of the most promising approaches to characterize and remove noise glitches in real time, thus improving the sensitivity of the interferometers. A key input to the preparation of training datasets for these machine learning algorithms can come from citizen science initiatives, where volunteers contribute to classifying and analysing the signals collected by the detectors. We will present GWitchHunters, a new citizen science project focused on the study of gravitational wave noise, developed within the REINFORCE project (a "Science With And For Society" project funded under the EU's H2020 programme). We will present the project, its development and the key tasks that citizens are participating in, as well as its impact on the study of noise in the Advanced Virgo detector.
In this paper, a highly customisable and comprehensive data acquisition system (DAQ) is presented. It is applied to a novel reconfigurable Dose-3D detector intended for a full spatial therapeutic dose reconstruction to improve radiotherapy treatment planning by providing a breakthrough detector with active voxels (more details are provided in another contribution).
The basic element of the DAQ is a slice, housing a multianode photomultiplier tube (PMT), a 64-channel readout ASIC and an FPGA. The slices are assembled in a crate; a single crate holds eight slices and a Precision Time Protocol (PTP) unit providing synchronisation between the slices. The modularity of the system is enhanced by the possibility of crate stacking. Moreover, the FPGA firmware is based on the open-source eXtensible FPGA Control Platform.
This firmware enables mapping of ASIC readout logic and access to continuous data streams through UDP/IP over Ethernet communication. For each functional block of the ASIC, a set of data upstreams, configuration storage and ASIC-specific modules were created, resulting in independent control of each part of the firmware.
The operation of all slices is managed by dedicated software running on a central computer. First, it queries each slice for its firmware module hierarchy in order to create entities that maintain seamless input-output communication between hardware and software. These entities can be subscribed to by any consumer process through a TCP/IP connection. In this way, users can prepare tools distributed over the network that check the online status of each slice or gather data from all available channels for the 3D dose reconstruction.
This firmware and software combination has proved to work effectively and efficiently, ensuring high performance and reliability. Moreover, the presented DAQ architecture is highly customisable and requires little effort to be adapted to other systems based on different front-end ASICs and design requirements.
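As a purely illustrative sketch of the entity/subscription idea described above (class and method names are hypothetical, and an in-process callback stands in for the actual TCP/IP subscription):

```cpp
// One entity per firmware module; consumers register for its data stream.
#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

class StreamEntity {
public:
    explicit StreamEntity(std::string path) : path_(std::move(path)) {}

    // A consumer process (online-status monitor, 3D dose reconstruction, ...)
    // subscribes with a callback that receives every new data packet.
    void subscribe(std::function<void(const std::vector<uint16_t>&)> consumer) {
        consumers_.push_back(std::move(consumer));
    }

    // Called when the data upstream of the corresponding firmware module
    // delivers a new packet of ASIC samples.
    void publish(const std::vector<uint16_t>& samples) const {
        for (const auto& c : consumers_) c(samples);
    }

private:
    std::string path_;   // e.g. "crate0/slice3/asic/data_upstream" (hypothetical)
    std::vector<std::function<void(const std::vector<uint16_t>&)>> consumers_;
};
```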
Several detectors for the next generation of particle physics experiments will make use of silicon photomultipliers (SiPMs) to detect scintillation photons in liquid argon. Cryogenic operation reduces dark counts by orders of magnitude and allows single-photon sensitivity to be retained even when large arrays of SiPMs are read out by a single amplifier. The total capacitance of a SiPM array with an area of tens of cm² can reach 100 nF. The series noise of the amplifier is then the dominant factor determining the signal-to-noise ratio of the readout chain.
In this contribution, we present a cryogenic amplifier designed to operate in liquid argon. The base version has a series white noise of 0.37 nV/√Hz while dissipating only 2 mW. Design variants have also been tested, which reduce the noise to 0.22 nV/√Hz at a power consumption close to 4.5 mW, still low enough not to cause bubbling.
The amplifier base design and variants have been tested reading out SiPM arrays consisting of up to 96 6x6 mm^2 SiPMs, for a total photosensitive area of 35 cm^2, demonstrating good single photon sensitivity even at low overvoltage values.
The Extreme Universe Space Observatory on a Super Pressure Balloon 2 (EUSO-SPB2) is a stratospheric balloon mission developed within the JEM-EUSO Program that will serve as a prototype for future satellite-based missions, including K-EUSO and the Probe of Extreme Multi-Messenger Astrophysics (POEMMA). EUSO-SPB2 consists of two telescopes. The first is a Cherenkov Telescope, based on silicon photomultipliers, devoted to the study of the background for future below-the-limb very-high-energy (E > 10 PeV) astrophysical neutrino observations. The second is a Fluorescence Telescope (FT) developed for the detection of Ultra-High Energy Cosmic Rays (UHECRs). The FT consists of a Schmidt telescope and a focal plane based on multi-anode photomultipliers, for a total of 6192 pixels. This ultraviolet camera is read out with an integration time of 1.05 $\mu$s by a set of dedicated ASICs. A trigger code looks for multiple clusters of excess signal within a certain time window. Its hardware implementation and performance, both in terms of noise rejection and of the ability to detect fast signals, are tested taking advantage of the TurLab facility hosted at the University of Turin.
TurLab is a laboratory equipped with a rotating tank of 5 m diameter and 1 m depth. The tank is located in a large room, more than 50 m long, where the intensity of the background light can be adjusted in a controlled way. In the past, TurLab has been used to test and validate the data acquisition systems of EUSO-Balloon, EUSO-SPB1 and Mini-EUSO. The data acquisition and trigger system of EUSO-SPB2 is tested by hanging from the ceiling a scaled-down version of the FT, made of a square matrix of 16x16 pixels. Different passive and active light sources have been placed inside the tank, and the response of the trigger logic has been tested as the different sources enter the field of view. This contribution describes the tests and discusses the obtained results.
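As a simple illustration of an excess-cluster trigger of this kind (the box size, thresholds and frame format are assumptions, not the flight firmware):

```cpp
// Illustrative first-level excess trigger on one 16x16 focal-surface frame:
// trigger when a 3x3 box contains at least 'min_pixels' pixels above their
// per-pixel thresholds. All parameters are placeholders.
#include <array>
#include <cstddef>

constexpr std::size_t N = 16;
using Frame = std::array<std::array<int, N>, N>;   // photon counts in one frame

bool cluster_trigger(const Frame& counts, const Frame& threshold, int min_pixels)
{
    for (std::size_t r = 0; r + 3 <= N; ++r)
        for (std::size_t c = 0; c + 3 <= N; ++c) {
            int n_hot = 0;
            for (std::size_t i = 0; i < 3; ++i)
                for (std::size_t j = 0; j < 3; ++j)
                    if (counts[r + i][c + j] > threshold[r + i][c + j]) ++n_hot;
            if (n_hot >= min_pixels) return true;   // keep this frame
        }
    return false;
}
```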
The Mu2e experiment at Fermilab searches for the neutrino-less $\mu^- \to e^-$ conversion in the Coulomb field of an Al nucleus. The kinematics of this process is well modelled by a two-body decay, resulting in a mono-energetic electron, the recoiling atomic nucleus and no neutrinos in the final state. The conversion electron (CE) has an energy close to the muon rest mass (104.967 MeV).
After three years of data taking, Mu2e will reach a sensitivity of $R_{\mu e}\leq 6 \times 10^{-17}$ (at 90$\%$ C.L.), improving the current limit on $R_{\mu e}$ by four orders of magnitude. A very intense pulsed muon beam ($\sim 10^{10}\ \mu/$s) is stopped on a target inside a long, curved solenoid that also houses the detector. The Mu2e detector consists of a 3.2 m long straw-tube tracker and an electromagnetic crystal calorimeter.
The TDAQ (Trigger and Data Acquisition) system provides a continuous data stream from the readout controller boards to the TDAQ farm, which performs the online event reconstruction. Events are then selected based on the decision of dedicated software filters. The bulk of the trigger lines use the tracking information to perform the event selection; these include lines for the CE candidate search and others for calibration, background characterization and monitoring. The trigger system needs to deliver a signal efficiency $>90\%$ and a processing time $\leq 5$ ms/event while providing $>99\%$ background rejection. These are challenging requirements for the track reconstruction software and the TDAQ system.
The TDAQ group recently installed a large-scale prototype that provides advanced testing capabilities. The prototype consists of 20 detector-transfer-control units in 10 servers. We will present the track reconstruction algorithms and the expected performance of the trigger system, in terms of signal selection efficiency and trigger rate, as well as the expected timing performance obtained with the prototype.
The Scalable Read-out System (SRS), developed within CERN's RD51 collaboration to assist the development and use of Micropattern Gaseous Detectors, suits applications ranging from small-scale bench-top experiments with a few hundred channels up to large setups with several thousand channels. Any ASIC collecting charge signals from the detectors can be integrated in the SRS, which then transmits the data in the format defined by its FPGA.
In this work we present some of the most recent developments related to the integration in the SRS of the SAMPA chip, developed for the ALICE TPC and Muon Chambers. The SAMPA chip is a 32-channel ASIC fabricated in 130 nm CMOS technology, which provides a charge-sensitive amplifier, a shaper and a 10-bit ADC for each channel, with a sampling rate that can reach 20 MS/s.
To test the acquisition system with charged particles, a muon telescope setup was mounted using a coincidence trigger provided by two scintillators and a small triple-GEM based TPC prototype (0.8L). The read-out is composed of 120 pads which are read by 4 SAMPA chips integrated with the SRS. In this work, we present a detailed description of the experimental setup, including detailed information on the SAMPA/SRS integration, as well as the experimental results obtained from muon tracks.
The High-Luminosity LHC will start operation with LHC Run 4 in 2029, and the ATLAS detector will face an increasingly harsh collision environment. This has motivated a series of upgrades to be installed during the long shutdown from 2026 to 2029. One key change among these is the upgrade of the Front-End Link eXchange (FELIX) system, which was developed to improve the capacity and flexibility of the detector readout system for selected ATLAS systems in LHC Run 3 (2022). After the Run 4 upgrade, all ATLAS systems will be read out via FELIX, with sub-detector-specific processing taking place in a common software processing framework.
The FELIX system functions as a router between custom serial links from front-end ASICs and FPGAs and the data collection and processing components on a commodity switched network, while also forwarding TTC signals to the front-end electronics. FELIX uses commodity server technology hosting FPGA-based PCIe I/O cards that transfer data to a software routing platform connected to a high-bandwidth switched network.
Commodity servers connected to FELIX systems via the same network run newly developed multi-threaded 'Data Handler' infrastructure for event fragment building, buffering and detector-specific processing to facilitate online selection.
One major upgrade challenge for the system in Run 4 is to support the higher trigger rate, which increases from 100 kHz in Run 3 to 1 MHz in Run 4, along with an overall increase in event size.
This presentation will cover the FELIX design for Run 4, as well as the results of a series of performance tests carried out with Run 3 hardware at Run 4 rates, which will inform the next stages of development.
This talk introduces and shows the simulated performance of two FPGA-based techniques to improve fast track finding in the ATLAS trigger. A fast track trigger is being developed in ATLAS for the High-Luminosity upgrade of the Large Hadron Collider (HL-LHC); its goal is to provide the high-level trigger with full-scan tracking at 100 kHz in the high pile-up conditions of the HL-LHC. One option under development is a method based on the Hough transform (whereby detector hits are mapped onto a 2D parameter space, with one parameter related to the transverse momentum and one to the initial track direction) run on FPGAs.
This method can benefit from a pre-filtering step that reduces the number of hit clusters to be considered, and hence the overall system size and/or power consumption, by examining pairs of clusters in adjacent strip detector layers (or the lack thereof). This stub filtering was first investigated by CMS but had not previously been explored in ATLAS; we will show the reduction in throughput it enables, its impact on the track-finding performance of the Hough transform system, and estimates of the resource usage.
One feature of the Hough transform method is that it identifies a large number of track candidates, which must be reduced before a second stage precision fit. A neural network has been developed to identify the most promising track candidates, and its promising performance will also be shown, in combination with and independent of stub filtering, along with the resources required to run it on an FPGA.
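To make the accumulator idea concrete, the sketch below fills a small $(q/p_T, \phi_0)$ Hough accumulator in plain software; the binning, curvature constant and layer-bitmask candidate criterion are illustrative assumptions, not the ATLAS implementation:

```cpp
// Each hit (r, phi, layer) fills a line in the (q/pT, phi0) accumulator;
// cells with hits from enough distinct layers become track candidates.
#include <bitset>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct Hit { double r_mm; double phi; int layer; };   // layer index < 32 assumed

constexpr std::size_t NQ = 64, NPHI = 128;   // accumulator binning (illustrative)
constexpr double QPT_MAX = 1.0;              // |q/pT| range in 1/GeV (assumed)
constexpr double A = 3.0e-4;                 // curvature constant (assumed units)
constexpr double PI = 3.14159265358979323846;

std::vector<std::pair<std::size_t, std::size_t>>
hough_candidates(const std::vector<Hit>& hits, std::size_t min_layers)
{
    // Each cell stores a bitmask of contributing detector layers.
    std::vector<std::vector<unsigned>> acc(NQ, std::vector<unsigned>(NPHI, 0u));
    for (const auto& h : hits)
        for (std::size_t iq = 0; iq < NQ; ++iq) {
            const double qpt  = -QPT_MAX + 2.0 * QPT_MAX * (iq + 0.5) / NQ;
            const double phi0 = h.phi - A * h.r_mm * qpt;   // hit -> line in parameter space
            double x = phi0 / (2.0 * PI);
            x -= std::floor(x);                             // wrap into [0, 1)
            const std::size_t ip = std::size_t(x * NPHI) % NPHI;
            acc[iq][ip] |= (1u << h.layer);
        }
    std::vector<std::pair<std::size_t, std::size_t>> candidates;
    for (std::size_t iq = 0; iq < NQ; ++iq)
        for (std::size_t ip = 0; ip < NPHI; ++ip)
            if (std::bitset<32>(acc[iq][ip]).count() >= min_layers)
                candidates.emplace_back(iq, ip);
    return candidates;   // passed on to the second-stage precision fit
}
```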
The High-Luminosity Large Hadron Collider is expected to start in 2027 and to provide an integrated luminosity of 3000 fb$^{-1}$ in ten years, a factor of 10 more than what will have been collected before then. These high statistics will allow precise measurements in the Higgs sector and improved searches for new physics at the TeV scale. The required luminosity is L $\sim 7.5\times 10^{34}$ cm$^{-2}$ s$^{-1}$, corresponding to about 200 additional proton-proton pile-up interactions, which will increase the rates at each level of the trigger and degrade the reconstruction performance.
To face such a harsh environment, some sub-detectors of the ATLAS experiment will be upgraded or completely replaced, and the DAQ system will be upgraded as well.
In this poster, a fast-simulation framework developed to study the impact of the tracking resolution and acceptance on multi-(b)jet and multi-lepton triggers at the HL-LHC will be shown, together with the resulting performance of these triggers on the HH→4b channel.
The main idea of TASS is a built-in "library of modules", where each module reproduces in a realistic way the front-panel appearance (see the figure in the attachment; note that it is not a photo, but the TASS representation of a small setup) and the electrical and logical behaviour of the real module.
A sophisticated GUI allows the user to push buttons, turn knobs, make cable connections, set CAMAC/VME functions and so on.
The user builds a virtual trigger system by choosing the needed modules from the library, placing them in the crates, making the cable connections and running the simulation. Any parameter can be set interactively; input signals provided by waveform generators can be used to stimulate the system, and the resulting outputs can be displayed on a virtual digital scope.
The advantage of this approach is that the user can very quickly try different configurations of the system under construction by simply replacing modules and cable connections. Moreover, TASS provides a built-in way to store and maintain the actual configuration of a real trigger/DAQ setup.
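As an illustration of such a module-and-cable model (the names and interfaces are hypothetical and do not describe the TASS internals), a minimal software sketch could look as follows:

```cpp
// Each virtual module exposes named input/output ports and a behaviour
// function; cables connect an output port of one module to an input port
// of another. One simulation step evaluates all modules and propagates
// the logic levels along the cables.
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Module {
    std::string name;                              // e.g. "NIM discriminator"
    std::map<std::string, bool> inputs, outputs;   // logic levels on the ports
    std::function<void(Module&)> behaviour;        // e.g. AND of all inputs
};

struct Cable { Module* from; std::string out; Module* to; std::string in; };

void step(std::vector<Module>& modules, const std::vector<Cable>& cables)
{
    for (auto& m : modules) m.behaviour(m);                       // evaluate modules
    for (const auto& c : cables)
        c.to->inputs[c.in] = c.from->outputs[c.out];              // propagate signals
}
```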
The CMS pixel detector for the High-Luminosity LHC (Inner Tracker) will be instrumented with hybrid pixel modules whose sensors are read out by the CMS Readout Chip (CROC), an ASIC developed in 65 nm CMOS technology. The CROC contains more than one billion transistors, with a digital architecture of unprecedented complexity for High Energy Physics, and is novel in being powered from a constant-current source to allow chains of serially powered modules.
The Inner Tracker will be equipped with 13256 CROCs. Testing all the critical functionalities of the CROC has to be performed during production and is fundamental to guarantee the reliable performance of the CMS pixel detector, and therefore the success of the experiment.
The CROC chips will be delivered by the foundry on 12-inch wafers carrying 138 chips each, and every chip must be tested at wafer level before being sent for dicing and hybridization.
This talk will present the procedure currently put in place for the automatic testing of the CROC chips while they are still on the wafer, during the construction of the Inner Tracker.
The testing set-up relies on a semi-automated probe system and a custom-designed probe card hosting a mezzanine equipped with a micro-controller, and uses the CMS DAQ software under development for the experiment.
The talk will illustrate the hardware, the software and the test procedures developed to validate the analog and the digital sections of the chip, and show the results from a first qualification campaign based on wafers from a production of the full-size prototype of the CROC delivered in August 2021.
The aim of the XENONnT upgrade (at the INFN Laboratori Nazionali del Gran Sasso) is to increase the experimental sensitivity to Dark Matter detection by an order of magnitude with respect to the previous XENON1T.
This goal will be achieved through several important improvements to the detector and other systems: a three times larger target mass ($\sim$8.4 t of LXe with respect to 3.2 t in XENON1T) and enhanced background suppression. The latter relies on an upgraded purification system, a new online cryogenic radon distillation column and, finally, a new Neutron Veto (NV) sub-detector to tag radiogenic neutrons, especially those interacting only once in the TPC, which closely mimic the WIMP signal.
The NV sub-detector consists of an octagonal structure (3 m high and 4 m wide) inside the water tank surrounding the cryostat. In order to improve the neutron detection efficiency, the water is loaded with gadolinium. A total of 120 Hamamatsu 8-inch high-QE PMTs with low-radioactivity windows are placed along the lateral walls to detect the Cherenkov photons.
New-generation waveform digitizer boards developed by CAEN digitize the 120 PMT signals. The NV DAQ is designed around a triggerless data collection scheme: since the pulse shape and the time stamp of each PMT signal are available, the system can run without a hardware event trigger, and the event building is implemented entirely in software processes running on the online server. This architecture lowers the energy threshold and provides each channel with an independent data readout based on an individual trigger threshold (self-trigger).
This paper describes the implementation and the performance of the NV DAQ system.
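For illustration only, the sketch below shows the core of a software event builder of this kind under simplifying assumptions (the coincidence window, data layout and gap-based clustering rule are placeholders, not the actual XENONnT NV software):

```cpp
// Self-triggered, time-stamped PMT hits are sorted in time and grouped into
// candidate events whenever the gap to the previous hit exceeds a window.
#include <algorithm>
#include <vector>

struct Hit { double t_ns; int channel; double charge_pe; };
using Event = std::vector<Hit>;

std::vector<Event> build_events(std::vector<Hit> hits, double window_ns)
{
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.t_ns < b.t_ns; });
    std::vector<Event> events;
    for (const auto& h : hits) {
        if (events.empty() || h.t_ns - events.back().back().t_ns > window_ns)
            events.emplace_back();              // start a new candidate event
        events.back().push_back(h);
    }
    return events;   // multiplicity/energy selections are applied downstream
}
```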
The TEAM-NET project "A reconfigurable detector for measuring the spatial distribution of radiation dose for applications in the preparation of individual patient treatment plans" is supported by Machine Learning (ML) techniques in building a reconfigurable three-dimensional (3D) detector for the rapid and precise measurement of the 3D radiation dose distribution and the improvement of individual treatment plans. Individual treatment plans are currently prepared using tools based on analytical methods, which introduces uncertainties. The project idea is to use a high-quality GEANT engine and Monte Carlo techniques for the simulations. The key point is to deliver the proper geometry for the three-dimensional detector; in our case this is a 3D Computed Tomography (CT) scan of the human body with a precise delineation of the affected area.
Medical image segmentation refers to the process of extracting the desired object from a medical image, which can be done manually or automatically. The presentation will describe the current status and future plans of the fully automatic segmentation tool, based on advanced Machine Learning models, that is able to distinguish between the affected area and the surrounding healthy organs inside the patient's body on the basis of 3D CT scan images. Working with the specific medical data formats required for this purpose calls for a dedicated preprocessing pipeline. Three-dimensionality requires high computational power and GPU support for the training of the ML models. This is the reason for using dedicated platforms such as MONAI and NVIDIA CLARA which, apart from increasing the training performance with domain-specific GPU optimization, also provide state-of-the-art pre-trained ML models and modern, powerful techniques. The output of the automatic segmentation, after transformation into standard formats, is the input to the simulation, which is the crucial part of the measurement process.
The abundance of data arriving in the new runs of the Large Hadron Collider creates tough requirements on the amount, and consequently on the speed, of simulation production. Current approaches can suffer from long generation times and from the large storage resources needed to preserve the simulated datasets. The development of new fast generation techniques is thus crucial for the proper functioning of the experiments. We present a novel approach to simulating LHCb detector events using generative machine learning algorithms and other statistical tools. The approach combines the speed and flexibility of neural networks with knowledge about the detector encapsulated in the form of statistical patterns. Whenever possible, the algorithms are trained on real data, which enhances their robustness against differences between real data and simulation. We discuss the particularities of neural-network detector simulation implementations and the corresponding systematic uncertainties.
The search for signals beyond the Standard Model can be pursued through precision measurements of flavour-changing processes, such as muon decays. In this regard, the MEG II experiment at PSI searches for the $\mu \to e \gamma$ decay with a sensitivity of $6 \cdot 10^{-14}$ at $90\%$ confidence level. Furthermore, the MEG II apparatus is also competitive in searching for more exotic processes, in which the lepton flavour violation is mediated by an invisible axion-like particle $X$. The experimental search for such an elusive signal requires an exhaustive Monte Carlo simulation, including extremely accurate theoretical predictions for the event generation.
We present an improved simulation of muon decays, both in the Standard Model and beyond, implemented in the Geant4 framework of MEG II.
The event generation is based on McMule, acronym of Monte Carlo for MUons and other LEptons. McMule is a novel numerical tool for the fully-differential computation of higher-order radiative corrections for low-energy processes involving leptons. The code features the most accurate theoretical predictions ever made for polarised muon decays and notably achieves a precision of $10^{-6}$ on the $\mu \to e \nu \bar\nu$ energy spectrum.
Such predictions, specifically developed for this project, are used as theoretical input to the Geant4 simulation of the MEG II detectors. The new implementation is tested by studying the reconstruction of $\mu \to e \nu \bar\nu$, $\mu \to e \gamma$ and $\mu \to e X$ events in the MEG II positron spectrometer.
The analysis shows that the new simulation has a noticeable impact on experimental observables and is therefore required for an experiment such as MEG II, featuring state-of-art detectors for precision studies of low-energy leptons.
The Jiangmen Underground Neutrino Observatory (JUNO) is a state-of-the-art, large liquid-scintillator neutrino detector. Thanks to its 20 kt liquid scintillator mass and to the tight requirements on its optical and radio-purity properties, it will be able to perform leading measurements of terrestrial and astrophysical neutrinos over a wide energy range (from 200 keV to several MeV). An important requirement for the success of the experiment is an unprecedented energy resolution (3% at 1 MeV) and a sub-percent energy non-linearity. Another key ingredient is the use of high-speed, high-resolution sampling electronics located very close to the 20,012 20-inch photomultipliers. Compared to legacy large liquid-scintillator neutrino experiments, this novel concept achieves the best performance in terms of signal-to-noise ratio, since the analog signal is digitized at a very early stage. Moreover, the data readout throughput is lowered thanks to the reduced number of cables needed to communicate with the back-end electronics. Finally, local data storage is possible, opening the possibility of performing complex signal pre-processing tasks locally before the data are sent to the Data Acquisition system. In this contribution, the design of the front-end and read-out electronics will be presented, together with the performance measured on prototype modules and during the mass production of the final electronics.
During the LHC Long Shutdown 2, the ATLAS Small Wheel was replaced with a new detector (the New Small Wheel, NSW) based on technologies such as Micromegas and sTGC chambers, able to sustain the harsher data-taking conditions.
The sTGC Pad Trigger system has been designed to reduce the trigger fake rate in the endcap region through a multi-layer hit coincidence selection. The Pad Trigger board takes care of the sTGC pad data acquisition, the trigger algorithm execution and the interface with the NSW trigger processor. During 2021, one Pad Trigger board was installed in the rim crate of each NSW sector, and its connectivity with the sTGC front-end electronics and with the NSW trigger processor was commissioned with dedicated software before the NSW was moved underground.
This contribution, after an introduction on the Pad Trigger functionalities, will highlight the commissioning procedure and corresponding outcome.
Traditionally FPGA firmware was developed solely with Hardware Description Languages (HDL) such as Verilog or VHDL.
However, with the steady improvements of tools like Vivado HLS (High Level Synthesis) it is now possible to write parts of the firmware with higher level languages like C++.
Using HLS allows faster development cycles, easier code reuse and, most importantly, the efficient implementation of complex algorithms on the FPGA.
The Compressed Baryonic Matter (CBM) experiment at the Facility for Antiproton and Ion Research (FAIR) will investigate the QCD phase diagram at high net-baryon densities.
The experiment employs a free streaming data acquisition with self-triggered front-end electronics (FEE).
At interaction rates of up to 10 MHz, the readout hardware has to process very high data loads.
The CBM Transition Radiation Detector (TRD) is equipped with the SPADIC front-end ASIC. The SPADIC allows for an oscilloscope-like sampling of the detector signals.
Additionally, the ASIC implements a forced-neighbour readout logic, which allows the pads adjacent to the triggered pad to be read out without lowering the threshold.
In order to do online event selection it is necessary to reduce the incoming data load inside the FPGA by combining the SPADIC trigger messages into clusters.
Achieving this with traditional HDLs is a very complex, time consuming task, which can be sped up significantly by using HLS.
In this contribution I will present how I developed and implemented a cluster-finding algorithm in the FPGA with Vivado HLS.
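As a purely illustrative sketch of such a cluster finder (written in plain C++ so it also compiles off-FPGA; the pad count, message fields and pipelining pragma are assumptions, not the actual CBM TRD firmware):

```cpp
// Combine per-pad SPADIC trigger messages into clusters of adjacent pads:
// open a cluster on a fired pad, extend it over fired neighbours, and close
// it at the first empty pad.
#include <cstdint>

constexpr int N_PADS = 32;

struct PadMsg  { bool fired; uint16_t adc_max; };
struct Cluster { uint8_t first_pad; uint8_t size; uint32_t charge; };

int find_clusters(const PadMsg pad[N_PADS], Cluster out[N_PADS / 2 + 1])
{
// #pragma HLS PIPELINE II=1   (in Vivado HLS the loop would be pipelined)
    int n = 0;
    Cluster cur{0, 0, 0};
    for (int i = 0; i < N_PADS; ++i) {
        if (pad[i].fired) {
            if (cur.size == 0) cur.first_pad = static_cast<uint8_t>(i);
            ++cur.size;
            cur.charge += pad[i].adc_max;
        } else if (cur.size != 0) {       // first empty pad closes the cluster
            out[n++] = cur;
            cur = Cluster{0, 0, 0};
        }
    }
    if (cur.size != 0) out[n++] = cur;    // cluster touching the row edge
    return n;                             // number of clusters found
}
```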
The Mu2e experiment at Fermilab will search for the neutrino-less coherent conversion of a muon into an electron in the field of a nucleus. The observation of this process would be unambiguous evidence of physics beyond the Standard Model. The Mu2e detectors comprise a straw-tube tracker, an electromagnetic calorimeter and a cosmic-ray veto. The calorimeter provides excellent electron identification, pattern recognition and track reconstruction. It employs 1348 Cesium Iodide crystals read out by silicon photomultipliers with fast front-end and digitization electronics. A design consisting of two disks positioned at a distance of 70 cm satisfies the Mu2e physics requirements. The front-end electronics consists of two discrete chips for each crystal, which provide the amplification and shaping stage, the linear regulation of the SiPM bias voltage and the monitoring. The SiPM and front-end control electronics is implemented in a battery of mezzanine boards, each equipped with an ARM processor that controls a group of 20 Amp-HV chips, distributes the low voltage and the high-voltage reference values, and sets and reads back the locally regulated voltages. The electronics is hosted in crates located on the external surface of the calorimeter disks. The crates also host the waveform digitizer boards (DIRAC) that digitize the front-end signals and transmit the digitized data to the Mu2e DAQ. The core of the DIRAC board is a large FPGA (Microsemi MPF300T) that handles 10 dual-channel, 12-bit analog-to-digital converters (ADCs) with a maximum sampling rate of 250 MSPS. Digitized data are sent to the main DAQ system through a CERN custom-designed optical transceiver (VTRx).
The calorimeter electronics is hosted inside the cryostat and must withstand very high radiation levels and a strong magnetic field, so a full qualification was necessary.
The constraints on the calorimeter front-end and readout electronics, the design technological choices and the qualification tests will be reviewed.
The Mu2e collaboration has developed a digitizer board that samples up to 20 signals at a sampling frequency of 200 MHz with 12-bit resolution. The digitizer has been qualified to operate in the hostile environment of Mu2e: a Total Ionizing Dose (TID) of 12 krad and a neutron fluence of $5\times10^{10}$ n/cm$^2$ (1 MeV equivalent in Si) per year, a 1 T magnetic field, and a vacuum level of $10^{-4}$ Torr.
The digitizer has aroused considerable commercial interest, as there are currently no digitizers with similar characteristics on the market. Possible applications are in the aeronautical industry, the medical sector and accelerators. The Mu2e board requires both hardware and firmware changes related to the use of custom electronic components and communication protocols specific to the Mu2e collaboration; these developments were funded by INFN through a dedicated research and development project called HAMLET.
As an example of application of the electronics developed within HAMLET, a demonstrator based on an array of SiPMs coupled to a scintillating crystal and connected to the digitizer was built. The demonstrator constitutes a complete and scalable qualified hardware platform that can be used in hostile environments. Each channel of the demonstrator consists of a CsI crystal coupled, through an array of 8 Broadcom® SiPMs, to a front-end chip called MUSIC, developed by ICCUB (a University of Barcelona spin-off), which sums, shapes and amplifies the signals of up to 8 SiPMs. An interface card connects the front end with the digitizer, manages 20 independent and programmable HV channels, 20 I2C interfaces to the MUSIC chips and one I2C/SPI interface to the digitizer. The system is temperature-stabilised so as to keep the SiPM gain stable.
The readout of detectors with FPGAs is a common practice. Traditionally, this is done using a hardware description language such as VHDL, Verilog, or System Verilog. Data is preprocessed, filtered, and passed to high-performance computing clusters for the next processing steps. Many experiments require a continuous, trigger-less data stream, which significantly increases the algorithmic requirements.
In this work we show how to implement these increasingly complex algorithms using dataflow programming and modern HLS C++ template programming. A further focus is resource consumption, which must remain within an acceptable margin of a pure VHDL implementation. Additional design goals are shorter development times, easy maintenance, the ability to adapt algorithms to changing needs, and a throughput high enough to handle continuous data streams. We present the concept of a dataflow template library that can be used to realize the above design goals. The template library allows an algorithm to be represented as a dataflow graph. It is designed in such a way that the maximum possible data throughput can be achieved with an initiation interval of 1. The locality of the algorithm is expressed through local streaming buffers, and it is shown how these buffers are implemented so that a nearly balanced graph is obtained.
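The following sketch illustrates the dataflow-graph idea under stated assumptions: the Stream and run_node templates are hypothetical simplifications executed here in software, not the actual library, and the FIFO stands in for the local streaming buffers between stages.

```cpp
// Processing stages communicate through small local streaming buffers and
// are chained into a graph; with an initiation interval of 1, each call to a
// stage consumes one item and produces one item.
#include <queue>
#include <utility>

template <typename T>
struct Stream {                       // local streaming buffer between stages
    std::queue<T> fifo;
    void write(T v)     { fifo.push(std::move(v)); }
    bool empty() const  { return fifo.empty(); }
    T    read()         { T v = std::move(fifo.front()); fifo.pop(); return v; }
};

// A node applies a stage functor to every item flowing from 'in' to 'out'.
template <typename In, typename Out, typename Stage>
void run_node(Stream<In>& in, Stream<Out>& out, Stage stage)
{
    while (!in.empty()) out.write(stage(in.read()));
}

// Example graph: raw sample -> pedestal subtraction -> threshold filter.
int main()
{
    Stream<int> raw, corrected, filtered;
    for (int s : {12, 40, 9, 55}) raw.write(s);
    run_node(raw, corrected, [](int s) { return s - 10; });              // pedestal
    run_node(corrected, filtered, [](int s) { return s > 20 ? s : 0; }); // threshold
    return 0;
}
```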
The Compact Muon Solenoid (CMS) detector at the CERN Large Hadron Collider (LHC) is undergoing an extensive Phase-II upgrade program to cope with the challenging conditions of the High-Luminosity LHC (HL-LHC). A new timing detector is designed to measure MIPs with a time resolution of 30-60 ps during the entire HL-LHC phase. The MIP Timing Detector (MTD) will consist of a central barrel region based on LYSO:Ce crystals read out with silicon photomultipliers and two end-caps instrumented with radiation-tolerant, low-gain avalanche diodes. A common data acquisition (DAQ) system will collect data from the readout chips, reconstruct the timing information, and send the data to the event builder. The MTD DAQ system is built around the state-of-the-art ATCA-form-factor Serenity board with two high-speed FPGAs. On the detector, it communicates with the lpGBT ASIC in both the barrel and endcap timing detectors. Control signals and a precision clock with an RMS jitter of less than 5 ps, essential for the timing resolution, are transmitted over the same DAQ link. The precision clock is synchronized to the LHC collision rate of 40 MHz, received at the subsystem and transmitted to the detector via the high-speed data links. An advanced monitoring system is being developed to ensure timing synchronization during the operation of the detector. The detector system with the full readout chain has been tested using prototypes of the DAQ, on-detector electronics and sensors, showing that the system can successfully achieve a timing resolution below 30 ps. This talk is organized in four parts: first, the infrastructure of the DAQ system will be discussed, followed by the precision clock distribution and the monitoring system. The third part will focus on software development and organization. The final part will be dedicated to system tests, in which a 30 ps resolution has been demonstrated between independent measurement channels.
Recently, Field Programmable Gate Arrays (FPGAs) have become the best platforms for data acquisition, providing both a high capacity of logic resources and unique real-time pipelined processing capabilities. The talk will present details of the FPGA-based data acquisition system and readout electronics designed for the Jagiellonian Positron Emission Tomography (J-PET) detector [1-4]. The system works in a continuous readout mode, allowing the collection and pre-processing of all signals registered by the tomograph, which in particular allows for subsequent filtering of the signals for targeted analysis of various types of events, such as multi-photon imaging [5] and positronium imaging [6]. Data processing in programmable logic opens the possibility of initial analysis and coincidence building, as well as on-the-fly preliminary image reconstruction [1,2]. It also allows a significant data reduction by extracting only the valuable parameters of the collected signals, estimated during processing.
[1] G. Korcyl et al., "Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA SoC Devices", IEEE Trans. Med. Imaging 37 (2018) 2526-2535
[2] M. Pałka et al., "Multichannel FPGA based MVT system for high precision time (20 ps RMS) and charge measurement", JINST 12 (2017) P08001
[3] P. Moskal et al., "Synchronisation and calibration of the 24-modules J-PET prototype with 300 mm axial field of view", IEEE Trans. Instrum. Measur. 70 (2021) 2000810
[4] S. Niedźwiecki et al., "J-PET: A New Technology for the Whole-body PET Imaging", Acta Phys. Pol. B 48 (2017) 1567
[5] P. Moskal et al., "Testing CPT symmetry in ortho-positronium decays with positronium annihilation tomography", Nature Comm. 12 (2021) 5658
[6] P. Moskal, K. Dulski et al., "Positronium imaging with the novel multiphoton PET scanner", Science Adv. 7 (2021) eabh4394
Intending to improve the current sensitivity on $\mu^+\rightarrow e^+ \gamma$ decay by one order of magnitude, the MEG II experiment at Paul Scherrer Institute completed the integration phase in 2021 with all detectors successfully operated throughout the subsequent beamtime.
Earlier in 2021, the WaveDAQ integrated Trigger and Data Acquisition System for the complete readout of the experiment was commissioned.
Receiving almost 9000 channels from the detectors, the MEG II TDAQ system is the largest WaveDAQ deployment so far, proving the scalability of an overall design that has grown from bench-top setups through various smaller-size experiments.
In this contribution, I will describe how the MEG II trigger system reduces the $\sim10^7$ muon decays per second at the experiment target down to a 10 Hz event rate by exploiting the signal event characteristics at the online level.
The trigger system performs the calorimetric reconstruction of the photon shower and then compares the timing and direction with positron candidates within a 600 ns hard latency time.
The first release of the online reconstruction, deployed in 2021, achieved a 2.5% photon energy resolution at the signal energy of 52.8 MeV and a 4 ns coincidence time resolution between the decay particles.
I will show the trigger performance and limiting factors in the last beamtime, and how a progressively better understanding of the experiment behaviour will improve them throughout the three-year-long MEG II data-taking campaign.
This work reports the design and the experimental results from the characterization of a readout ASIC developed for the General AntiParticle Spectrometer (GAPS) balloon mission that will search for an indirect signature of dark matter through the detection of low-energy (< 0.25 GeV/n) cosmic-ray antiprotons, antideuterons, and antihelium.
GAPS relies on a tracker system which serves as the target and tracker for the initial cosmic-ray particle and its annihilation products. The lithium-drifted silicon, Si(Li), detectors of the GAPS tracker will be read out with a mixed-signal processor fabricated in a 180 nm CMOS technology. The ASIC, named SLIDER32 (32-channel Si-Li DEtector Readout ASIC), comprises 32 analog readout channels, an 11-bit SAR ADC and a digital back-end section, which is responsible for defining the channel settings and for sending the digital information to the data acquisition system (DAQ). The core of the ASIC is a low-noise analog channel implementing a dynamic signal compression, which makes the conditioning network suitable for resolving both X-rays in the range of 20 to 100 keV and charged particles with energy depositions of up to 100 MeV. It also features an energy resolution of 4 keV FWHM in the 20-100 keV range with a 40 pF detector capacitance, needed to clearly distinguish the X-rays emitted by antiprotonic or antideuteronic exotic atoms. The readout electronics, which is expected to run at a temperature of about $-40$ °C, has to comply with a detector leakage current of the order of 5-10 nA per strip and with a power dissipation limited to less than 10 mW/channel to be compatible with the balloon nature of the experiment.
The ASIC has been thoroughly tested and a complete set of experimental results, focused on the performance of the low-noise analog channel, will be presented at the conference.
A new generation of electron scattering experiments dedicated to the study of QCD is underway at world-leading facilities such as BNL and JLab. All these experiments are characterized by modern detectors with millions of active readout channels and by the unprecedented data rates produced by the high-luminosity operation of the accelerators. They therefore require a suitable DAQ system that can record the interesting events and filter out the unnecessary background. Thanks to the continuous progress in computing and networking technology, the FPGA-based trigger scheme can be replaced by a streaming readout (SRO) DAQ system, which aims to use a software-based trigger that considers the whole detector information for efficient real-time data tagging and selection. Given the crucial role of the DAQ in an experiment, validation with on-field tests is essential to demonstrate the SRO performance.
In this contribution, I will describe the first Jefferson Lab implementation of a full SRO DAQ system, including several components such as an FPGA stream source, a distributed hit processing system, and software plugins that allow offline analysis code written in C++ to be used for online event filtering. In particular, I will report the results of on-beam tests performed to validate the JLab SRO framework.
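As a purely illustrative sketch of what a software filter plugin in such a streaming chain can look like (the interface and names are hypothetical and do not reproduce the JLab SRO framework API):

```cpp
// The framework hands each plugin a time slice of reconstructed hits from the
// whole detector; the plugin returns a keep/drop decision for that slice.
#include <vector>

struct Hit { int detector_id; double time_ns; double energy_mev; };

class FilterPlugin {
public:
    virtual ~FilterPlugin() = default;
    virtual bool keep(const std::vector<Hit>& time_slice) const = 0;
};

// Example plugin: keep the slice if the summed energy exceeds a cut.
class TotalEnergyFilter : public FilterPlugin {
public:
    explicit TotalEnergyFilter(double cut_mev) : cut_mev_(cut_mev) {}
    bool keep(const std::vector<Hit>& slice) const override {
        double sum = 0.0;
        for (const auto& h : slice) sum += h.energy_mev;
        return sum > cut_mev_;
    }
private:
    double cut_mev_;
};
```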
Breakthrough advances in particle and photon detectors have occurred when the sensors were enabled by the readout electronics, and especially so when they were developed as an integral concept of charge and light sensing and low-noise electronics. Some of the most successful detector systems, either in operation today or planned and being constructed for the future, have been spawned by pioneering efforts in the past. Examples in which the integral concept has led to breakthrough advances are high-purity germanium gamma-ray detectors, noble liquid calorimetry, and time projection chambers (both gaseous and liquid), most involving cold electronics. A brief glimpse at those is illustrative, but cannot be all-inclusive.