Although gravitational-wave (GW) interferometers detect GWs in a very complicated (and very clever) way, the final output (i.e. the one used for data analysis, obtained after calibration and other processing) has so far been quite simple: the detected strain is linear in the two GW polarizations, each multiplied by the corresponding antenna pattern, which describes the detector's response to a signal from a given source location in the sky.
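In the long-wavelength limit this output takes the familiar form

\[
h(t) = F_{+}(\theta,\phi,\psi)\, h_{+}(t) + F_{\times}(\theta,\phi,\psi)\, h_{\times}(t),
\]

where \(h_{+}\) and \(h_{\times}\) are the two polarizations and \(F_{+}\), \(F_{\times}\) are the antenna patterns, functions of the sky location \((\theta,\phi)\) and polarization angle \(\psi\).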
The usual antenna patterns do not take into account the transfer function of the interferometers and are therefore not frequency-dependent: this approximation works quite well as long as the signal frequencies are well below the detector's free spectral range (FSR), which is about 37.5 kHz for the LIGO detectors and 50 kHz for Virgo.
Next-generation detectors, such as Cosmic Explorer (CE) and the Einstein Telescope (ET), are expected to have far longer arms (40 km for CE, 10 km for ET) so that phenomena associated with frequencies around 1000 Hz (like a signal from a core-collapse supernova) are no longer so "low" with respect to the FSR of these detectors (3750 Hz for CE, 15000 Hz for ET).
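The FSR values quoted above follow directly from the arm lengths via \(f_{\rm FSR} = c/(2L)\); a minimal sketch of this arithmetic (the detector names and arm lengths are those given in the text):

```python
# Free spectral range of an interferometer arm cavity: f_FSR = c / (2 L).
C = 299_792_458.0  # speed of light, m/s

def fsr_hz(arm_length_m: float) -> float:
    """Free spectral range (Hz) for the given arm length (m)."""
    return C / (2.0 * arm_length_m)

# Arm lengths: current detectors (LIGO 4 km, Virgo 3 km) and
# next-generation designs (CE 40 km, ET 10 km).
detectors = {"LIGO": 4_000.0, "Virgo": 3_000.0, "CE": 40_000.0, "ET": 10_000.0}

for name, L in detectors.items():
    print(f"{name}: f_FSR ~ {fsr_hz(L) / 1e3:.2f} kHz")
```

Running this reproduces the values in the text: roughly 37.5 kHz (LIGO), 50 kHz (Virgo), 3.75 kHz (CE), and 15 kHz (ET), which shows why a 1 kHz signal is no longer negligible compared to the FSR of the longer-armed detectors.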
This means that the transfer function must be taken into account, which makes the antenna patterns frequency-dependent. However, while the frequency-domain version of the response function has been extensively explored, the time-domain version has not: that time-domain description is the subject of this work.
Moreover, at a fixed frequency the frequency-dependent corrections depend on the source location as seen by each detector. This produces a frequency- and location-dependent systematic effect that must be taken into account.
In this work I explore the implications of these corrections for data analysis, above all for the next generation of detectors, and try to quantify how much an analysis could be biased by the long-wavelength approximation.