First joint search for burst gravitational waves by the AURIGA-EXPLORER-NAUTILUS-VIRGO collaboration
Lucio BAGGIO, for the VIRGO+bars Collaboration
GWDAW11, Potsdam, Dec 21st 2006
Outline
• VIRGO+bars Network: AURIGA, EXPLORER, NAUTILUS and VIRGO
• Main methodology: coincidence search on trigger lists provided by each detector, with expected accidental coincidences computed by time shifts.
• Goal: assess interpreted confidence intervals on the flux of gravitational wave signals coming from the galactic center (GC), drawn from the template class of damped sinusoids (DS):
• Detection efficiency is measured with software injections (MDC). The injected population has amplitudes derived from the assumption of elliptical polarization from a randomly oriented rotation axis of the source.
• Optimization of thresholds: for each template and each given target amplitude, the best compromise between efficiency and FAR is searched for, using a variable threshold for each detector in ½ hour time bins.
• Blind analysis: in order not to bias the results by feeding what we see back into the methods, a “secret” time offset has been added to the detector times.
$h(t) = h_0\, e^{-t/\tau} \cos(2\pi f_0 t)$
The VIRGO-bars network

24 hours of data taking during C7, starting from GPS time 810774700 (UTC 14 Sep 2005, 23:11:27).

[Figure: VIRGO; AURIGA = NAUTILUS = EXPLORER]
Software injections details
Damped Sinusoids:
• 11 waveforms to investigate several damping times and central frequencies:
• For each template, we generated N = 8640 signals (one every 10 s), with a uniform random time jitter of ±0.5 s.
• Polarization is elliptic, distributed as for signals generated by rotating systems at the GC: random polarization angle ψ in [0, 2π], and random inclination angle ι such that cos ι is distributed uniformly in [−1, 1].
τ (ms) | f0 (Hz)
1      | 914
3      | 882, 946
10     | 866, 898, 930
30     | 866, 874, 906, 930, 938
$h(t) = h_0\, e^{-t/\tau} \cos(2\pi f_0 t)$
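A minimal sketch of how such an injection population could be generated (illustrative names, not the collaboration's MDC code; the plus/cross amplitude factors assume the standard pattern of a rotating l = m = 2 source):

```python
# Illustrative sketch of the DS injection population (hypothetical names).
import numpy as np

def damped_sinusoid(t, f0, tau, t0=0.0):
    """h(t) = exp(-(t - t0)/tau) * cos(2*pi*f0*(t - t0)) for t >= t0, else 0."""
    dt = t - t0
    env = np.exp(-np.maximum(dt, 0.0) / tau)   # clamp avoids overflow for t < t0
    return np.where(dt >= 0.0, env * np.cos(2.0 * np.pi * f0 * dt), 0.0)

rng = np.random.default_rng(0)
n_inj = 8640                                   # one injection every 10 s over 24 h
t_inj = 10.0 * np.arange(n_inj) + rng.uniform(-0.5, 0.5, n_inj)   # +/- 0.5 s jitter

psi = rng.uniform(0.0, 2.0 * np.pi, n_inj)     # random polarization angle in [0, 2*pi]
cos_iota = rng.uniform(-1.0, 1.0, n_inj)       # cos(inclination) uniform in [-1, 1]

# Elliptical polarization factors of an l=m=2 rotator (assumed pattern):
h_plus_fac = 0.5 * (1.0 + cos_iota**2)
h_cross_fac = cos_iota
```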
Observables provided by each detector
• AURIGA: WaveBurst (S. Klimenko et al, LIGO-T050222-00-Z) was successfully adapted to AURIGA data. The cluster S/N (close to the optimal one) was used as an indicator of the signal magnitude.
• NAUTILUS and EXPLORER: a single linear Wiener-Kolmogorov filter matched to the impulse response is applied to the output data. The impulse S/N was used as an indicator of the signal magnitude.
• VIRGO: PowerFilter is the chosen trigger generator. The normalized logarithmic power was used as an indicator of the signal magnitude.
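For illustration, a textbook frequency-domain matched filter is sketched below; this is not the NAUTILUS/EXPLORER pipeline code, and normalization conventions differ between implementations:

```python
# Textbook frequency-domain matched filter (illustrative sketch).
import numpy as np

def matched_filter_snr(data, template, psd, fs):
    """SNR time series of `template` correlated against `data`.

    data, template : equal-length time series (template zero-padded)
    psd            : one-sided noise PSD sampled on the rfft frequency grid
    fs             : sampling rate [Hz]
    """
    n = len(data)
    df = fs / n
    d = np.fft.rfft(data) / fs                 # approximate continuous FT
    h = np.fft.rfft(template) / fs
    # Noise-weighted correlation of data with the template at all time lags:
    corr = 2.0 * fs * np.fft.irfft(d * np.conj(h) / psd, n)
    # Template normalization: sigma^2 = 4 * sum(|h|^2 / psd) * df
    sigma = np.sqrt(4.0 * np.sum(np.abs(h) ** 2 / psd) * df)
    return corr / sigma
```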
Exchanged data: triggers + MDC at hrss = 10^-19 Hz^-1/2
AURIGA: N = 1413
VIRGO: N = 24241
NAUTILUS: N = 8628
EXPLORER: N = 5614
Assessing the background of accidentals
• To assess the significance of rates, we need an estimate of the rate of accidentals.
• Ideally one would like to have events at each detector distributed as independent Poisson processes. The auto-correlogram of the events at each detector should be flat.
• Instead, because of non-Gaussianity, oscillations occur, for instance in VIRGO, which was still under commissioning.
• However, the cross-correlogram is flat! So the coincidences can be regarded as a Poisson process.
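A minimal sketch of this time-shift procedure, assuming simple event-time lists and a fixed coincidence window (the helper names and the shift grid are illustrative):

```python
# Time-shift estimate of the accidental coincidence background (sketch).
import numpy as np

def count_coincidences(t1, t2, window):
    """Number of pairs with |t1[i] - t2[j]| <= window; t2 must be sorted."""
    lo = np.searchsorted(t2, t1 - window, side="left")
    hi = np.searchsorted(t2, t1 + window, side="right")
    return int(np.sum(hi - lo))

def accidental_background(t1, t2, window, shifts):
    """Mean and spread of the coincidence counts over nonzero time shifts."""
    t2 = np.sort(t2)
    counts = np.array([count_coincidences(t1, t2 + s, window)
                       for s in shifts if s != 0.0])
    return counts.mean(), counts.std()

# e.g. 400 shifts of a few seconds each, the zero lag excluded:
shifts = 5.0 * np.arange(-200, 201)
```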
A better view in the frequency domain

[Plot: the correlograms of the previous slide viewed in the frequency domain]
Optimization strategies
The real work of false alarm rate reduction is done in the network analysis.

The reduction is obtained by requiring the event magnitude to exceed a higher threshold than the exchanged minimum. The detection efficiency will also be affected. The trade-off between background and efficiency depends on our goal:
A) Detect a single GW event with high confidence, or low false alarm probability:
expected background counts << 1
In this case we shall measure the efficiency (which may turn out to be very low) at the working point determined by the chosen confidence.
B) Define the best exclusion region (upper limit on rate vs amplitude):
the ratio efficiency / background fluctuations is maximized
It is understood that we are able to estimate the mean background counts and subtract them from the total. That is why we are limited only by the background fluctuations, i.e. ~sqrt(background).
Optimization in practice (1)
The time axis is subdivided into ½-hour-long bins to account for variable efficiency (and possibly variable background rate).
The optimization proceeds by incremental steps, each time affecting the threshold on the event magnitude in one of the two detectors at one particular time bin.
A) If the target is low accidental coincidence probability, we stop the process only when the expected background has reached the desired level.
B) If the target is to have better exclusion regions, we stop when the ratio efficiency/sqrt(background) starts to decrease.

How much better will the total background be with the new threshold? How much worse the total efficiency? We need a benchmark in order to rank the optimization steps and to decide the next best move. The benchmark is defined as the ratio

(total efficiency) / sqrt(total background)
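A schematic version of this greedy procedure is sketched below; the per-cell background and efficiency curves are toy stand-ins for the ones measured from the exchanged triggers and injections:

```python
# Schematic greedy threshold optimization over (detector, half-hour bin) cells.
import numpy as np

# Toy stand-ins for the measured per-cell curves: raising a cell's threshold
# suppresses its accidentals (down to an irreducible floor) and slowly costs
# efficiency. The real curves come from the exchanged triggers and injections.
def total_background(thr):
    return (1.0 + 10.0 * np.exp(-thr)).sum()

def total_efficiency(thr):
    return np.exp(-0.1 * thr).sum()

def benchmark(thr):
    return total_efficiency(thr) / np.sqrt(total_background(thr))

def greedy_optimize(thr, step=0.25, max_steps=10000):
    """thr: thresholds over (detector, half-hour bin). Each step raises the one
    cell that most improves the benchmark; stop when no increment helps (case B)."""
    best = benchmark(thr)
    for _ in range(max_steps):
        gains = []
        for idx in np.ndindex(thr.shape):
            trial = thr.copy()
            trial[idx] += step
            gains.append((benchmark(trial) - best, idx))
        gain, idx = max(gains)
        if gain <= 0:
            break
        thr[idx] += step
        best += gain
    return thr, best

thr, best = greedy_optimize(np.zeros((2, 48)))   # 2 detectors x 48 half-hour bins
```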
Optimization in practice (2)

Only half of the time-delayed accidental coincidences and half of the injected signals are used to rank the bins and the thresholds: with the full set, the limited statistics would let the thresholds “overfit” the input data fluctuations.

The other halves of the data are then used to give an unbiased estimate of the background and efficiency entering the confidence interval calculation.

[Diagram: detector data → accidental coincidences from time lags and injections, each split in two halves; one half drives the optimization of thresholds, the other the unbiased characterization of background & efficiency]
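The even/odd splitting could look like the following sketch (names and the random partition are illustrative; the slides only specify that halves are used):

```python
# Split the time lags (or the injections) into a tuning half and a holdout half.
import numpy as np

def split_half(items, seed=1):
    idx = np.random.default_rng(seed).permutation(len(items))
    half = len(items) // 2
    return [items[i] for i in idx[:half]], [items[i] for i in idx[half:]]

tuning_lags, holdout_lags = split_half(list(range(400)))
# Thresholds are tuned on `tuning_lags` only; the background and efficiency
# entering the confidence intervals are then measured on `holdout_lags`.
```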
Example: DS(914 Hz, 1 ms, 10^-19 Hz^-1/2)

[Plots: optimized thresholds for AURIGA and VIRGO]
Statistical Analysis (1)

The confidence intervals were set according to the confidence belt already used by IGEC1 (see L. Baggio and G. A. Prodi, “Setting confidence intervals in coincidence search analysis”, in “Statistical problems in particle physics, astrophysics and cosmology”, R. Mount, L. Lyons and R. Reitmeyer eds., Stanford (2003) 238). The resulting confidence intervals are the supports which maximize the likelihood integral, and they are chosen so as to guarantee (conservatively) a minimum frequentist coverage for all possible values of the source parameter.
We may loosely say that the confidence intervals are “more confident” when including the null hypothesis than when bounding the expected number of GW events. However, here we made an important modification: an additional null hypothesis test modifies the coverage at low signal rates (or no signal at all).
Modifying confidence belts

[Figure: confidence belts in the plane (coincidence counts, GW event number) for background Nb = 7, with false assessment probability P{wrong estimate} < 5%. Two belts are compared, with the boundary between upper limit and affirmative claim set by P{false alarm} < 5% and by P{false alarm} < 0.1%.]
Statistical Analysis (2)

• The coincidence search and optimization procedure were performed for different populations (f0, τ, hrss) and for many detector pairs, which amounts to a large number of tests (~100). This large trial factor increases the false claim probability.
• The effective overall confidence is defined as the probability of not having a single false claim in any of the performed tests. It is clearly linked to the confidence of the single trial. Knowing this relation, we can compensate with a higher confidence on the single trial to achieve the desired global confidence.
• This relation may be empirically estimated by measuring the frequency of false claims in the time-delayed configurations. In other words, for each time lag we simulate the confidence interval obtained across all optimized configurations, and we check for false rejection of the null hypothesis.
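A sketch of how this empirical estimate could be tallied; the array shape and the toy per-test false-claim rate are illustrative assumptions:

```python
# Empirical global confidence from the time-delayed "fake zero lags" (sketch).
import numpy as np

def empirical_global_confidence(claims):
    """claims: boolean array (n_lags, n_configurations), True where a
    configuration rejects the null hypothesis at that lag. Returns the
    fraction of lags containing no false claim in any configuration."""
    return 1.0 - claims.any(axis=1).mean()

# Toy numbers: 400 lags, ~100 configurations; about 8 false claims spread over
# distinct lags reproduce the 98% ballpark quoted for the 2-fold search.
rng = np.random.default_rng(2)
claims = rng.random((400, 100)) < 2e-4
print(empirical_global_confidence(claims))
```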
Trial factor in practice

[Diagram: for each configuration (waveform amplitude x with detectors A1-A2, amplitude y with detectors B1-B2, amplitude z with detectors C1-C2), the background & efficiency and the coincidences at time lags 1-5 feed one test per lag, while the zero-lag coincidences yield a confidence interval; the tests from all configurations are combined into the global confidence.]
Trial factor in 2-fold search

In the two-fold coincidence search, it was possible to assess an empirical confidence of 98-99% on the results using 400 time-delayed configurations.

[Plot: global confidence vs single-trial confidence]
• A lower trial factor is obtained by analyzing only the best detector pairs for each template/amplitude.
Statistical Analysis (4)

Disclaimer: rejecting the null hypothesis amounts to claiming excess correlation in the observatory. This can be due to:
o GW signals.
o cross-correlated noise (not taken into account in the background measuring procedure with time lags);
o bad chance (statistically improbable but not impossible)
Excluding cross-correlated noise is not an easy task: when assessing the results, this duality in the physical interpretation should always be kept in mind by the reader.
The bottom line: if you find an excess of coincidences, what are you going to blame first: glitchiness of the data and poor sensitivity, or rather that risky 90% confidence level threshold?
The probability of an accidental claim depends in the first place on the chosen threshold on the p-level of the statistical test. The p-levels themselves are affected by measurement errors (background coincidence counts) and systematics (edge effects, ergodic approximation), but normally we can properly account for them.
Results for the 2-fold coincidence searches (1)

• Goal: to obtain the best exclusion regions.
• Since the level of residual accidental background is relatively high (~0.1/day), the detection of a single coincidence does not lead to a claim of excess correlation.
• It was possible to assess an empirical confidence of 98-99% on the results using 400 time delayed configurations
No excess of coincidences was found. The null hypothesis is confirmed at 99%.
Results for the 2-fold coincidence searches (2)

Upper limits at 95% coverage.

hrss = 10^-20 Hz^-1/2 would correspond to ~10^-3 M☉ radiated at 10 kpc.
3-fold coincidence searches

Goal: to be able to issue a claim at 99.5% confidence on a single observed triple coincidence.

• The background for some configurations is low enough to reach this confidence; 400×400 time lags allow us to estimate such a low false alarm probability.
• In order to limit the trial factor, for each waveform only a small subset of MDC amplitudes (e.g. 10^-19, 5×10^-18 and 10^-18 Hz^-1/2) will be tried. The zero lag will then be analyzed with the optimization for the lowest signal amplitude that still allows an efficiency of at least 40%. Detector/template configurations which do not reach this minimal level for any of the chosen amplitudes will be discarded.
• The analysis and checks are in progress, results to appear soon.
Summary and final remarks (1)

The injection of many waveforms and amplitudes multiplies the computation time by orders of magnitude! However, this is not intrinsic to the method, which only requires a hint of the efficiency variability (it could be provided by an empirical formula using the noise characterization and a model of the injected signals).

While only the magnitude at the output of the event search algorithm was used, in principle any test statistic provided with, or derived from, the coincidences as a function of time may be included in the optimization process.
In order to extract the maximum of information from the collected data, we defined optimized thresholds which took into account the characteristics of the tested population (direction, amplitudes, polarization...) via the efficiency of detection.
No attempt was made to regularize the final output of the threshold optimization. The implemented optimization algorithm is very primitive, and the correspondence between microscopic states (threshold time series) and macroscopic observables (efficiency, background) has not been systematically investigated.
Summary and final remarks (2)
The issue which prevents the systematic use of the ad hoc optimization is the trial factor. The overall false claim probability can be controlled, but at the price of reduced sensitivity. Eventually, the affordable number of independent optimizations has to be limited to the most promising cases, based on a preliminary survey of the expected backgrounds and efficiency.
Gabriele Vedovato (AURIGA) is implementing a different analysis scheme based on WaveBurst in association with cross-correlation tests. This semi-coherent all-sky network analysis is being preliminarily tested on AURIGA-VIRGO data and is giving promising results.
A Virgo-note was produced to discuss the methodology:
VIR-NOT-FIR-1390-328
EXTRA SLIDES
Background estimate

[Plot: coincidence counts at time shifts within ±7 min]
Time coincidence

• Our event trigger generators (ETGs) are not “matched” to the DS templates.
• Time errors are dominated by systematic biases.
• The narrower the bandwidth, the greater the signal distortion.
• Example: AURIGA-VIRGO coincidences. The double peak is due to the multimodal time error of VIRGO.
• The coincidence window: Tw = 40 ms.
[Panels: time-residual distributions for f0 = 914 Hz, τ = 1 ms; f0 = 866 Hz, τ = 10 ms; f0 = 930 Hz, τ = 30 ms]
Efficiency

[Plots: single-detector efficiency curves for DS f0 = 914 Hz, τ = 1 ms; f0 = 930 Hz, τ = 30 ms; f0 = 866 Hz, τ = 10 ms]

• Single detector efficiencies.
• For VIRGO, ~7 hrs out of 24 have been excluded by epoch vetoes, hence the asymptotic efficiency is ~70%.
Software injections details (1)
[Plot: decay time (1-100 ms, log scale) vs central frequency (850-990 Hz) of the injected templates]
τ (ms) | δf (Hz) | N°
1      | 64      | 3
3      | 32      | 5
10     | 16      | 13
30     | 8       | 22
100    | 4       | 38
We chose a family of DS with several combinations of damping time and central frequency in order to span our parameter space evenly. For all the classes, we set hrss = 10^-23 Hz^-1/2 (this can be changed by simply rescaling all the signals).
• For each class, we generated a series of injection times (signal arrival at the center of the Earth), spaced by 10 s plus a uniform random jitter of ±0.5 s.
• For each class and for each injection time we generated a random polarization angle in [0, 2π], and a random inclination angle ι such that cos ι is distributed uniformly in [−1, 1].
[Plot: VIRGO, DS(930 Hz; 10 ms): projected amplitudes [10^-23 Hz^-1/2] vs time [h:min] over the 24-hour run, with the optimal-alignment level marked]
Signals and astrophysical motivation

• fgw and τ are the frequency and damping time.
• hrss is the scale factor (we will define it precisely later).
• ψ and ι are geometrical factors (polarization angle and “source plane” inclination).

Such signals could be produced by the ringdown of a system excited in an l = m = 2 mode:
BH-BH ring-down: Andersson N. and Kokkotas K., Mon. Not. Roy. Astron. Soc. 299 (1998); Kokkotas K.D. and Schmidt B.G., http://www.livingreviews.org/lrr-1999-2 (1999).
f-mode of neutron stars: in this case the f-mode could produce a wave with variable frequency and damping time; to take this into account we did not use matched filtering. Ferrari V. et al., Mon. Not. Roy. Astron. Soc. 342 (2003) 629.
Astrophysical motivations

The chosen signals can be produced by:

BH-BH ring-down: in this case the energy release needed to make a detection realistic should be about 10^-3 - 10^-4 M☉ for a galactic event. Andersson N. and Kokkotas K., Mon. Not. Roy. Astron. Soc. 299 (1998); Kokkotas K.D. and Schmidt B.G., http://www.livingreviews.org/lrr-1999-2 (1999).

f-mode of neutron stars: here too the energy release must be high. Moreover, the f-mode could produce a wave with variable frequency and damping time, which should anyway sweep through the observed frequency band. Ferrari V. et al., Mon. Not. Roy. Astron. Soc. 342 (2003) 629.
The wave polarisation should be linear for SN explosions, but elliptic for BH-BH coalescences (V. Ferrari, private communication).
Which physical parameters?
• Take just the example of Quasi Normal Modes of Black Holes, and assume that an l=m=2 mode dominates the signal.
• The mass M and the spin ratio j = J/M² are correlated with the frequency and damping time.
• So, we are also looking at values which are incompatible with these modes (see the illustrative mapping sketched below).
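For illustration only, the mapping between (f0, τ) and the black-hole parameters can be sketched with the commonly used fits for the fundamental l = m = 2 quasi-normal mode (Echeverria 1989); this specific mapping is an assumption, not taken from the slides:

```python
# Invert (f0, tau) to black-hole mass and spin via the Echeverria (1989) fits.
import numpy as np

G, C_LIGHT, MSUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def qnm_mass_spin(f0, tau):
    """Invert (f0 [Hz], tau [s]) to (M [Msun], j) for the fundamental
    l = m = 2 mode, using the assumed fits
        Q = pi * f0 * tau ~ 2 (1 - j)^(-9/20)
        2*pi*f0 ~ (c^3 / (G*M)) * (1 - 0.63 (1 - j)^(3/10)).
    Templates outside the fit's range give unphysical j (e.g. j < 0),
    matching the remark that some values are incompatible with these modes."""
    Q = np.pi * f0 * tau
    j = 1.0 - (Q / 2.0) ** (-20.0 / 9.0)
    M = (C_LIGHT**3 / (2.0 * np.pi * G * f0)) * (1.0 - 0.63 * (1.0 - j) ** 0.3)
    return M / MSUN, j

print(qnm_mass_spin(914.0, 1e-3))   # ~ (18 Msun, j ~ 0.55)
```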
Interpretation of the found limits

We go back to the signal model: the hrss is just a spectral scale of the signal. From the definition of the energy flux distribution over angle and frequency, the total radiated energy is easily computed with the signal model.
An hrss = 10^-20 Hz^-1/2 would correspond to ~10^-3 M☉ radiated at 10 kpc.
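As a rough consistency check of this number, one can use the standard isotropic, narrow-band estimate of the radiated energy (a sketch; the exact prefactor depends on the assumed polarization average):

```latex
% Isotropic, narrow-band estimate (prefactor depends on the polarization average):
E_{\rm gw} \simeq \frac{\pi^2 c^3}{G}\, r^2 f_0^2\, h_{\rm rss}^2
\approx 3 \times 10^{43}\,{\rm J} \approx 2 \times 10^{-4}\, M_\odot c^2
\qquad (r = 10\,{\rm kpc},\ f_0 \simeq 900\,{\rm Hz},\ h_{\rm rss} = 10^{-20}\,{\rm Hz}^{-1/2})
```

This agrees to within an order of magnitude with the ~10^-3 M☉ quoted above.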
Confidence Belt & Coverage

[Diagram: confidence belt in the plane (experimental data x, physical unknown μ); the confidence interval I_x is the slice of the belt at the observed outcome x.]

For each outcome x one should be able to determine a confidence interval I_x.

For each possible μ, the measurements which lead to a confidence interval consistent with the true value have probability C(μ), i.e. 1 − C(μ) is the false dismissal probability:

$C(\mu) = \int_{\{x \,:\, \mu \in I_x\}} \mathrm{pdf}(x;\mu)\, dx$
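To make the coverage definition concrete, here is a toy Monte Carlo check for a simple counting experiment with a known background; this illustrates the concept only, not the IGEC-style belt construction used in the analysis:

```python
# Monte Carlo coverage check for a toy Poisson counting experiment.
import numpy as np
from scipy import stats

def upper_limit(n_obs, bkg, cl=0.95):
    """Classical upper limit: smallest mu with P(N <= n_obs | mu + bkg) <= 1 - cl."""
    mus = np.linspace(0.0, 50.0, 5001)
    tail = stats.poisson.cdf(n_obs, mus + bkg)
    below = tail <= 1.0 - cl
    return mus[np.argmax(below)] if below.any() else mus[-1]

def coverage(mu_true, bkg, cl=0.95, n_trials=20000, seed=3):
    """Fraction of simulated experiments whose upper limit covers mu_true."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(mu_true + bkg, n_trials)
    counts, inv = np.unique(n, return_inverse=True)   # cache one limit per count
    ul = np.array([upper_limit(k, bkg, cl) for k in counts])[inv]
    return float(np.mean(mu_true <= ul))

print(coverage(mu_true=2.0, bkg=7.0))   # >= 0.95 up to Poisson discreteness
```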