
A comparison of potential Raven centroiding algorithms

David Andersen

November 5, 2012

1. Introduction

The initial performance modeling for Raven was conducted using MAOS, a C++-based AO simulation tool built by L. Wang for simulating MCAO on TMT. It has the advantages of being very fast and having a well-established tomographic reconstructor. However, while it is built to simulate NFIRAOS on TMT very well, it is somewhat difficult to configure. In particular, it is difficult to address the problem of open-loop centroiding using this code. We do try to account for open-loop errors both in the Raven CoDR report and the Raven modeling paper (Andersen et al. 2012), but which open-loop centroiding algorithm to use has remained an open question. As a reminder, we summarize the wavefront error terms associated with the WFSs and DMs in Table 1. In particular, we highlight the "Sampling" error, which arises from the fact that we do not oversample the spots on the WFS. The value in Table 1 assumed the use of a Matched Filter algorithm (described below). In this paper, we explore ways of minimizing this error (and WFS noise) using different centroiding algorithms.

Table 1: Raven Error Budget

WFE term                              WFE (nm RMS), T/T removed
Tomography                            175
DM Fitting, σ² ~ 0.25 (d0/r0)^(5/3)   155
Aliasing, σ² ~ 0.1 (d0/r0)^(5/3)      103
Sampling                              72
Noise (m=14; Fs=180 Hz)               95

It is also important to note the limitations of the Raven WFSs. We are using the Andor iXon cameras with 128x128 pixels. As part of our trade study, we determined that we need a ~5" FOV and at least 10x10 subapertures to meet, respectively, the dynamic range and sensitivity needed to achieve the overall system requirements. With 128 pixels shared across 10 subapertures, there are only going to be ~12x12 pixels per subaperture, so the pixel size needs to be ~0.4"/pixel to cover the 5" field. Since the spot size on the WFS will most often be smaller than 0.8", the spots will be under-sampled. We expect, given Table 1, that we have a ~70 nm WFE due to this under-sampling, but this will be algorithm dependent. We need to determine which centroiding algorithm works best at both high and low S/N.

In this document, we describe simulations carried out to test the sensitivity of different spot centroiding algorithms in the context of the Raven Open-Loop Wavefront Sensors (OL WFSs). In section 2, we describe the basic pixel processing required for use with the different centroiding algorithms we explored, and in section 3 we describe the simulations. We present results of the centroiding tests in section 4. We also apply these simulations to help determine whether Raven requires an atmospheric dispersion corrector (ADC) or a blue cutoff filter. Finally, we summarize our results in section 5.

2. Pixel Processing

We have explored the performance of three basic centroiding algorithms: thresholded center of gravity (tCOG), correlation, and matched filter (MF) centroiding. Thresholded center of gravity is the simplest method and will be the computationally fastest centroiding algorithm. In this flux-threshold variant, the maximum flux is determined in each subaperture and only those pixels with a flux above some percentage of the peak flux are used for determining a flux-weighted mean. (There should also be a minimum threshold, which removes almost all pixels containing only background and read noise, applied whenever it is greater than the threshold determined from the peak flux.) Correlation centroiding relies on knowledge of the PSF. We describe how we generate a reference PSF from our measurements and then correlate our subaperture images with this reference PSF. This correlation process creates a new subaperture image in which all objects with a shape similar to the PSF are highlighted, while structure that does not look like a PSF (shot noise and cosmic rays) is suppressed. The centroid can then be determined from the correlation image using a thresholded center-of-gravity technique with a relatively high threshold. Finally, the matched filter also uses knowledge of the PSF to generate slopes, but the MF technique has a limited dynamic range which may not be well-suited to an open-loop system like Raven. We would not expect the MF to work better than the correlation method, but it would be more efficient computationally. We do not explore the performance of the MF in great detail here, but we do describe the pixel processing steps for all three centroiding methods in the next sections.

A. Thresholded Center of Gravity (tCOG)

Figure 1: Data flow for the tCOG centroiding algorithm. Each of the boxes is described below.

A.1 Basic Pixel Processing

The following steps are included for all centroiding methods.

Camera
Parameters: frame rate, gain, etc.

Andor iXon 860 128x128 pixel camera capable of 500 fps. Data are 14 bit.

Convert Raw to Subap
Parameters: subaperture number, ROI

The RTC reads in the pixel stream and starts building images for each subaperture. After each subaperture image is filled, further computations can be parallelized. There will be of order 80 subapertures per WFS.

Back Subtract
Parameters: dark current/background constant or image (TBD)

Subtract the background pixel by pixel. Values are converted from 14 bit to double (single should be OK). The background flux could be measured in the Realtime Parameter Generator (RPG), or could be an input parameter. The RTC should allow for either possibility.

Flatfield (optional)
Parameters: flat field frame for subaperture

Divide the subap image by the flatfield (pixel-to-pixel variation) map. One could instead multiply pixel by pixel by the inverse of the flat field to speed things up. The flatfield may be taken during calibration, or could be a theoretical function.
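Taken together, these A.1 steps form a short per-subaperture pipeline. The following is a minimal Matlab sketch under assumed names (raw is the 128x128 camera frame; roi(l,:) = [x0 y0] is a hypothetical corner table for subaperture l; back and invflat are per-subaperture background and inverse-flat maps); it is an illustration only, not the RTC implementation.

% Minimal sketch of the A.1 basic pixel processing chain (assumed names).
function subs = basic_pixel_proc(raw, roi, npix, back, invflat)
nsub = size(roi,1);
subs = zeros(npix, npix, nsub);
for l = 1:nsub
    % carve the subaperture image out of the raw frame; convert to double
    sub = double(raw(roi(l,2):roi(l,2)+npix-1, roi(l,1):roi(l,1)+npix-1));
    sub = sub - back(:,:,l);              % background subtraction
    subs(:,:,l) = sub .* invflat(:,:,l);  % optional flatfield correction
end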

A.2 Windowing (optional)

Windowing reduces the number of pixels that are considered in a subaperture by creating a subwindow around an initial guess of the centroid. Windowing should improve the accuracy of the centroid measurements by removing the effects of noise peaks far from the WFS spot.

Centroid
Parameters: method

Initial centroid measurement for the subaperture. The method can be correlation, thresholded center of gravity (tCOG), matched filter (MF), etc. The X and Y slopes are used in subsequent steps.

If the tCOG method is chosen (the baseline), the operations consist of: 1) determining the threshold (usually some fraction of the peak flux); 2) applying that threshold on a pixel-by-pixel basis (2a: meanwhile building a standard deviation of "background" pixels to estimate the noise); 3) for pixels above threshold, summing fluxes and summing the x and y pixel locations times flux (3a: the summed flux is the signal for the S/N calculation); and 4) calculating the two slopes by dividing the two pairs of sums (see fCOG in the appendix).

Report low S/N
Parameters: predicted S/N for subaperture (based on subaperture vignetting, r0, frame rate, NGS magnitude, user input extinction, airmass), number of frames for averaging

Take the time average of the S/N over N frames (specified by the user). If the S/N calculated during the previous centroid step is lower than predicted/allowed, broadcast a message to pub/sub alerting the user of a potential problem. There could be a "smart" fix of slightly increasing the exposure time.

Round
Parameters: none

Round centroids to integers for use with windowing.

Window
Parameters: window size

Crop the image by a user-specified amount, centered on the initial (rounded) centroid guess. This is useful for reducing the size of the image and, more importantly, removing noise far from the spot. If the window exceeds the size of the subaperture, use a window as close to the edge as possible and trigger a warning.

Report Window Failure
Parameters: none

The user-specified window exceeds the dimensions of the subaperture. (A sketch of the windowing logic follows below.)
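As an illustration of the Window and Report Window Failure steps, here is a minimal Matlab sketch with assumed names (im is one subaperture image; xc, yc the initial centroid guess in pixels; wsize the window size); the clamped flag corresponds to the window-failure warning, and the returned corner (x0, y0) is what gets added back to the centroid in step A.4.

% Minimal windowing sketch (assumed names; not the RTC implementation).
function [win, x0, y0, clamped] = window_subap(im, xc, yc, wsize)
npix = size(im,1);
x0 = round(xc) - floor(wsize/2);   % window corner from the rounded centroid
y0 = round(yc) - floor(wsize/2);
% flag a window that would extend past the subaperture edge
clamped = x0 < 1 || y0 < 1 || x0+wsize-1 > npix || y0+wsize-1 > npix;
x0 = min(max(x0,1), npix-wsize+1); % slide the window back inside the subap
y0 = min(max(y0,1), npix-wsize+1);
win = im(y0:y0+wsize-1, x0:x0+wsize-1);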

A.3 Centroiding

Determine Threshold
Input: fraction of peak flux to be used for threshold (typically 15%) and minimum threshold.

Find the maximum of the image. Set the threshold to a fraction of the maximum image flux. If that fraction of the maximum image flux is below the noise, use the second, user-supplied minimum threshold.

tCOG
Input: none

Check each pixel (of possibly windowed image) to ensure it is above threshold (determined from previous step). For all pixels above threshold, calculate flux-weighted x and y centroids.

Report Bad spot

Calculate S/N of spot used to generate measurement in tCOG. If lower than expected, report error. If centroid is near edge of subaperture or window, report error.

A.4 Applying Reference Values

Add initial Rounded centroids

If windowing is used, the difference in center between the subaperture and the window generated by the Round step above (A.2) needs to be added back to the new centroid measurement.

Determine Reference Slopes
Parameters: probe location, slopes

The OL WFSs are presented with different field-dependent aberrations as they move. The location of the probe arms helps determine the reference slopes (look-up table in the RPG). The mean tip and tilt of the WFSs will also be used to apply a non-common path aberration (NCPA) correction (a beam with a global tip or tilt will have a different optical footprint in the OL WFSs). Unlike all previous steps, this process is not done one subaperture at a time; all slopes are needed to calculate the average tip/tilt of the beam.

Subtract Reference Slopes
Parameters: none

The reference slopes are subtracted from the slope measurements. Slope measurements at this point may or may not be projected into phase space or onto a modal basis.

B. Correlation Centroiding

Figure 2: Pixel processing and Correlation Centroiding Flow diagram. All boxes are described below.

B.1 Basic Pixel Processing

Camera, Convert Raw to Subap, Back Subtract, and Flatfield are described in (A.1) above.

B.2 Windowing (optional)

Centroid, Report low S/N, Round, Window and Report window failure are described in (A.2) above. The Window function is slightly modified as described here:

Window/blkrep
Parameters: window size, drizzling factor

Crop the image by a user-specified amount, centered on the initial (rounded) centroid guess. This is useful for reducing the size of the image and, more importantly, removing noise far from the spot. If the window exceeds the size of the subaperture, use a window as close to the edge as possible and trigger a warning.

If Drizzle is used, block-replicate the windowed image by the drizzling factor. For example, if the drizzling factor is 2, create a new subaperture image with dimensions 2x the window size, copying each pixel into 4 new sub-pixels. Drizzling only improves the resolution of the reference image, not the current subaperture spot image. (A sketch of the block replication follows below.)
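The block replication described here (and the blkrep call in the appendix drizzle code, which is not listed there) can be written in a few lines with kron. The following is an assumed implementation, shown only as a sketch; any flux renormalization is left to the reference-image scaling described in B.3.

% Minimal sketch of a blkrep helper (assumed implementation).
% Each pixel of each subaperture image becomes an f x f block of sub-pixels.
function big = blkrep(im, f)
nsub = size(im,3);
big = zeros(size(im,1)*f, size(im,2)*f, nsub);
for l = 1:nsub
    big(:,:,l) = kron(im(:,:,l), ones(f));
end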

B.3 Create Reference Image

For Correlation Centroiding and MF centroiding, a reference image is needed. Generating this reference image is not a real time task, as not all images need to be processed. We describe the steps below.

Drizzle (shift/add)
Parameters: drizzling factor

Each spot image is re-sampled onto a finer (by the drizzling factor) grid. The spot is shifted by the fractional measured centroid, and the flux from 1 camera pixel is divided between multiple subpixels with the fraction of flux going into each subpixel proportional to the area. The drizzling factor can be 1, which means that the light from a subaperture is shifted and resampled on a grid with the same pixel scale. Our experience indicates that a drizzling factor greater than 2 does not produce reference images with significantly higher spatial resolution.

Window
Parameters: window size

Trim the drizzled image to provide an image with the same dimensions as that produced by the window/blkrep step described in (B2).

Integrate i0

Parameters: # of frames to add, switch to produce 1 reference image or 1 reference image/subaperture

Add drizzled and windowed images from many exposures to create a high signal-to-noise reference image that is not dependent on the instantaneous turbulence.

This task is performed by the RPG, and does not need to be real-time. New reference images will be produced on the order of every ~10 seconds. If some frames are dropped from the integration, that is acceptable.

The user can choose whether to produce a single reference image from the images of all subapertures, which will increase the S/N, or build a different reference image for each subaperture (accounts for diffraction effects and laser elongation in the LGS WFS). In the end, the reference image will be scaled to the mean of the individual spot images.

Exposure Time
Input: user specified S/N for reference image

Uses the guide star magnitude, r0 estimate, and frame rate to calculate the total number of images that should be integrated to reach a sufficient S/N ratio in the final reference image, i0.

Current i0

Store the current reference image for use by the correlation function. The reference images i0 should be saved to the DMS. A new reference image will be produced on the order of every 5-10 seconds.

B.4 Centroiding

Correlate
Produce the cross-correlation image of the reference image versus the current frame, either through FFTs or by brute-force calculation (see cor in the appendix).

Determine Threshold
Input: fraction of peak flux to be used for threshold (typically 15-25%) and minimum threshold.

Find the maximum of the image. Set the threshold to a fraction of the maximum image flux. If that fraction of the maximum image flux is below the noise, use the second, user-supplied minimum threshold. A higher fractional threshold can typically be used after applying correlation since the S/N of the correlated image is so great.

tCOG
Input: none

Check each pixel (of possibly windowed image) to ensure it is above threshold (determined from previous step). For all pixels above threshold, calculate flux-weighted x and y centroids. We use tCOG rather than fitting a functional form to the correlation peak to increase computational speed and robustness.

Report Bad spot
Input: none

Calculate S/N of spot used to generate measurement in tCOG. If lower than expected, report error. If centroid is near edge of subaperture or window, report error.

B.5 Applying Reference Values

Same steps as in (A.4) above.

C. Matched Filter (MF) Centroiding

Figure 3: Pixel processing and MF centroiding flow diagram.

C.1 Basic Pixel Processing

Camera, Convert Raw to Subap, Back Subtract, and Flatfield are described in (A.1) above.

C.2 Windowing

Centroid, Report low S/N, Round, Window and Report window failure are described in (B.2) above. Windowing is more important in the case of MF centroiding because the MF only works if the spot center is within a FWHM of the reference image; otherwise the MF centroiding accuracy is low. By windowing, we place the spots within a pixel of the center of the window using our best guess.

C.3 Create Reference Image

Drizzle (shift/add), Window, Integrate i0, Exposure Time and Current i0 are all described in step (B.3) above.

C.4 Create MF

Gradient
Input: none

Take the gradient of the reference image, i0, thereby producing 2 Gradient matrices (Gx and Gy).

G = [Gx, Gy].

Create MF
Input: choice of constrained or unconstrained MF. For the constrained MF, i0 and the read noise (RN) are also needed.

For the unconstrained MF, the reconstructor is simply the pseudo-inverse of G:

R = (G^T G)^-1 G^T

For the constrained MF:

M = [1 0 0 0 0 -1 1; 0 1 0 -1 1 0 0]

i1 = i0 shifted 1 pixel left
i2 = i0 shifted 1 pixel right
i3 = i0 shifted 1 pixel up
i4 = i0 shifted 1 pixel down

G2 = [Gx, Gy, i0, i1, i2, i3, i4]

Σ^-1(j,j) = [RN^2 + 2 i0(j)]^-1

where Σ is the diagonal pixel-noise covariance matrix (the factor of 2 on i0 accounts for the EMCCD excess photon noise described in section 3). Then

G3 = (G2^T Σ^-1 G2)^-1 G2^T Σ^-1

R = M G3
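In Matlab, the constrained MF construction above can be sketched as follows (assumed variable names; the circshift directions are one possible sign convention, and i0 is taken to be in electrons so that RN^2 + 2 i0 is the pixel noise variance). This is a sketch of the equations, not the tested RTC code.

% Minimal sketch of building the constrained matched filter R from i0.
[Gx, Gy] = gradient(i0);                 % 2 gradient matrices of i0
i1 = circshift(i0, [0 -1]);              % i0 shifted 1 pixel left
i2 = circshift(i0, [0  1]);              % i0 shifted 1 pixel right
i3 = circshift(i0, [-1 0]);              % i0 shifted 1 pixel up
i4 = circshift(i0, [ 1 0]);              % i0 shifted 1 pixel down
G2 = [Gx(:) Gy(:) i0(:) i1(:) i2(:) i3(:) i4(:)];
M  = [1 0 0 0 0 -1 1; 0 1 0 -1 1 0 0];   % constraint matrix
Sinv = diag(1./(RN^2 + 2*i0(:)));        % inverse pixel-noise covariance
G3 = (G2'*Sinv*G2) \ (G2'*Sinv);         % noise-weighted pseudo-inverse of G2
R  = M*G3;                               % 2 x Npix matched filter

In the real-time path, the Multiply step below then reduces to s = R*im(:) for each subaperture image im.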

C.5 Centroid

Multiply
Input: none

Multiply the spot image by the MF, R, to produce centroids.

Report Bad spot
Input: none

Calculate the S/N of the spot used to generate the measurement. If lower than expected, report an error. If the centroid is near the edge of the subaperture or window, report an error.

C.6 Applying Reference Values

Same steps as in (A.4) above.

3. Open Loop Centroiding Simulations

We performed our open-loop centroiding simulations in Matlab using the UVic AO library. Our centroiding simulation process is shown in Figure 4. The goal of the simulation was to assess the amount of extra WFE and loss in EE due to aliasing, sampling error, and WFS noise (isolated from tomographic, DM fitting and temporal errors).

Figure 4: Flow diagram showing the steps involved in simulating the expected performance of the different centroiding algorithms.

Produce Phase Screens: We began by simulating 200 independent phase screens with a Fried parameter of r0=15 cm and a sampling of 48 pixels per subaperture (Figure 5).

Figure 5: Sample phase screen.

Produce WFS spot images with 0.1” pixels: For each of these phase screens, we simulated spots in 80 0.8x0.8 m subapertures with a pixel sampling of 0.1 arcsec per pixel (48x48 pixels per subaperture; Figure 6) and a FOV of 4.8 arcseconds per subaperture. These spots are simulated with no noise initially.

Figure 6: Sample WFS spots sampled on 480x480 pixel WFS with 0.1 arcsec/pixel spatial resolution.

Bin WFS Image and Add noise: We can then bin these spot images by 2, 3, or 4 pixels, scale the flux, and add noise to simulate stars on WFSs with different samplings and different brightnesses (Figure 7). We simulated stars with brightnesses corresponding to R=10 to R=15 and WFS integration times of 80 ms (125 Hz). We assumed A0 stars and assumed that the WFS received half the V-band, all the R-band and half the I-band. The number of photons per subaperture is given by:

Nphotons = 7740 x 10^(-0.4 (mag - 10))

Raven uses EEV low-readnoise detectors. The low read noise is achieved by applying a high gain and then "counting" photons. The effective read noise is just 0.23 electrons, but we pay a square-root-of-2 penalty in photon noise.
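The appendix drizzle code calls bin_im and addnoise helpers that are not listed there. The sketches below are assumed implementations consistent with this noise model (photon-noise variance doubled by the EMCCD excess-noise factor, plus Gaussian read noise of RN electrons); the Gaussian approximation to photon noise is adequate at these flux levels, and each function would live in its own file.

% Minimal sketch of bin_im: sum f x f pixel blocks of each subaperture
% image (assumes the image size is divisible by f).
function binned = bin_im(im, f)
[ny, nx, nsub] = size(im);
binned = squeeze(sum(sum(reshape(im, f, ny/f, f, nx/f, nsub), 1), 3));

% Minimal sketch of addnoise: draw a noise realization with the EMCCD
% excess-noise factor (variance = 2 x flux) plus read noise RN.
function noise = addnoise(im, RN)
noise = sqrt(2*max(im,0)).*randn(size(im)) + RN*randn(size(im));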

Figure 7: Left Panel: WFS spot for 1 subaperture sampled with 0.1 arcsec/pixel and not including noise. Right Panel: Same WFS spot as on the left, but now sampled with 0.4 arcsec/pixel detector and including realistic photon and read noise.

Measure Slopes: Once realistic WFS spot images have been produced, we can measure centroids using various algorithms, including the thresholded COG, correlation and MF techniques described in section 2. Both the correlation and MF methods require reference images (i0) to be constructed. We have constructed these reference images from the real data. For the high resolution WFSs (WFS sampling of 0.1 or 0.2 arcsec/pixel), it is probably best to just shift and add each of the individual images (Figure 8). Note we create one reference image for the whole WFS. One could possibly do better by creating a reference image for each individual subaperture to account for pupil edge diffraction and possible lenslet-to-lenslet variations, but for the case of interest to us (0.4"/pixel sampling) we expect these variations to be smaller than the differences due to pixel blurring. We also find that there is some potential gain to be had by shifting and adding the individual WFS spot images onto a finer plate scale (drizzling is described in section 2 B.3; Figure 8).

Figure 8: Comparison of reference images. The Left Panel shows the spot for the 0.1 arcsec/pixel sampling: each spot for 200 phase screens has been shifted and added into a single frame. If the same process is done using a 0.4 arcsec/pixel detector, one finds the reference image shown in the Central Panel. The Right Panel shows the reference image that can be measured from 0.4 arcsec/pixel spots drizzled onto a 0.1 arcsec/pixel synthetic grid. Drizzling can increase the spatial resolution of the reference spot by about a factor of 2.

Project Phase onto Modes: To evaluate the impact of noise and sampling on Raven WFE and EE, we need to compare the measurements of slopes against some fiducial measurement of phase. We create this fiducial by projecting each phase screen onto 44 Zernike polynomials (Figure 9). This representation of the phase screen removes any DM fitting error from the results, but still allows us to assess WFS aliasing (higher modes not measured by the WFS produce slope offsets that thereby increase the lower order wavefront errors).

Figure 9: Comparison of a random phase screen and the projection of the phase screen onto the first 44 Zernike polynomials. The phase screen on the right excludes turbulence that cannot be fit by the Raven science DMs.

Simulation Performance Metrics: For each set of slopes measured from a phase screen (for a given magnitude/sampling/centroiding algorithm), we multiply the resultant slopes by a modal reconstructor generated from a noiseless WFS with 0.1"/pixel. We then compare the fiducial modal representations of the phase screen to our noisy reconstructions (Figure 10). We use two performance metrics to evaluate the performance: the rms WFE difference between these two phase maps, and the loss of ensquared energy in a 150 mas spaxel due only to aliasing, sampling and WFS noise. (A short sketch of the WFE metric follows below.)
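For orthonormal (Noll-normalized) Zernike coefficients, the rms WFE metric reduces to the norm of the modal residual. A minimal sketch, with assumed names (Rmod is the noiseless 0.1"/pixel modal reconstructor; a_fid the fiducial 44 Zernike coefficients in nm):

% Minimal sketch of the WFE performance metric (assumed names).
a_hat = Rmod * slopes;   % reconstruct 44 Zernike coefficients from slopes
resid = a_fid - a_hat;   % modal residual for this phase screen
wfe_rms = norm(resid);   % rms WFE in nm for an orthonormal modal basis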

Figure 10: Left Panel: Residual phase map between the perfect projection of 44 modes onto the original phase screen and the reconstructed shape from the WFS (including sampling, aliasing, and noise errors). The comparison of the mode amplitude estimates is shown in the Right Panel.

4. Simulation Results

In general, the results of the simulations show that both the tCOG and correlation centroiding algorithms perform well. Results are summarized in Table 2. We have not yet had success using the MF in open loop. More work can be done on the MF: we have not yet studied the MF with windowing (which would re-center the spots), and our MF has an artifact at the edge due to the discontinuity of the gradients. The correlation centroiding algorithm out-performed tCOG for faint stars (it should be less susceptible to noise artifacts), and worked well for the worst sampling (0.4 arcseconds/pixel, which we are using in Raven; Figure 11). While we quote magnitudes below, we should note that here we are taking a rather naive approach to assigning a certain number of photons detected per subaperture. One should read R=11 as being a bright star and R=16 as being a faint star. A much more thorough approach to estimating NGS photons is used in section 4.2 below.

Table 2: For different WFS pixel scales and centroiding methods, we present the WFE and EE loss for bright and faint stars including aliasing, WFS noise and WFS sampling errors.

Pixel Scale   tCOG (15%)                 Correlation
              R=11 Star     R=16 Star    R=11 Star    R=16 Star
0.1"          121 (88%)     202 (80%)    -            -
0.2"          121 (88%)     158 (84%)    116 (90%)    151 (86%)
0.3"          124 (88%)     164 (83%)    113 (90%)    150 (86%)
0.4"          136 (87%)     185 (80%)    122 (89%)    165 (83%)

Figure 11: Left Panel: WFS spot image for 0.1 arcsec/pixel scale and a faint star. Left-Center Panel: Same spot convolved with the reference image. The effective S/N of the spot has been increased many-fold. Center-Right Panel: Same spot sampled onto a 0.4 arcsec/pixel WFS. Right Panel: Same 0.4”/pixel spot convolved with the reference image. Again, the effective S/N has been greatly enhanced.

4.1 Does a mismatch in the reference image create an error for correlation centroiding?

Since correlation centroiding seems to work better in theory than tCOG, we decided to explore the errors introduced by having reference images that are not well-matched to the current image quality. We therefore ran the same simulations described above, but with synthetic reference images of varying widths. The results are shown in Figure 12.

Figure 12: Wavefront Error (WFE; top) and H-band Ensquared Energy (bottom) versus NGS magnitude for reference images with FWHM of 1.2 arcsec (purple), 1.4 arcsec (blue), 1.8 arcsec (green), 2.3 arcsec (yellow) and 2.8 arcsec (red). The best fit reference image (produced through drizzling) had a FWHM of 1.8 arcsec.

As Figure 12 shows, there is an impact on performance if the reference image does not match the average size of the spot images. We find about a 7% loss in EE if the reference image FWHM is 65% of that of the spot images. The correlation method is less sensitive to an over-estimation of the spot size: we record only a 3.5% loss in EE when the reference image is 1.5 times bigger than the measured spots. We assume this is because a narrower reference spot amplifies narrow noise peaks after correlation. The relative losses due to a mismatch between reference image and spot size decrease at faint magnitudes; Figure 12 shows only a 3.5% loss in EE at faint magnitudes for the same reference image that is 65% of the width of the spot images. This is because the dominant error becomes WFS noise. This result is good news for Raven: for faint guide stars, where it is not practical to create reference images from the WFS data, we can use (relatively broad) synthetic reference images with little loss in performance, and for brighter stars we can create reference images from the WFS data (as described above).

4.2 Do Raven OL WFSs need ADCs?

Raven as designed does not include an atmospheric dispersion corrector in any of its WFSs. Originally, we intended to use Raven only down to a zenith angle of 45 degrees, but several interesting science cases push Raven to be used at zenith angles up to 60 degrees (2 airmasses). As light passes through more atmosphere, it undergoes more refraction, and point sources are dispersed into very low resolution spectra. Making assumptions about the atmosphere over Mauna Kea, we can model the effect of atmospheric dispersion (Figure 13).

Figure 13: Atmospheric Dispersion in arcseconds versus wavelength for 3 zenith angles: 30 degrees (blue), 45 degrees (green) and 60 degrees (red) for Mauna Kea. Courtesy of O. Lardiere.

Figure 13 shows that the amount of dispersion can be large (~2 arcseconds at 60 degrees zenith angle). We wanted to assess how much of an effect it has on Raven performance and whether we needed to change the Raven design to mitigate it. The three options we looked into were: 1) Do nothing. No ADC means no loss in throughput; also, for bright stars, one can imagine that with correlation centroiding there would actually be an advantage to having an elongated guide star. 2) Use a filter that cuts off light below 600 nm. This could have three potential advantages: it would greatly decrease the amount of atmospheric dispersion (down to ~0.4 arcseconds at 60 degrees zenith angle), a filter with a 600 nm cutoff would remove all scattered Na beacon light (589 nm), and the sky background could be greatly reduced. The disadvantage, of course, is that the WFS would receive much less light in total. 3) Redesign the WFSs to include an ADC. There would be a small loss of light due to the extra optical surfaces, but almost all the optical light would be concentrated within the seeing-limited PSF. We expect that the ADC option would give the best performance at low S/N.

Modeling of the ADC options is very similar to the process described above in Section 3, except that we simulate the spots corresponding to the same initial turbulence but imaged at different wavelengths. We simulated spots at wavelengths of 400, 500, 600, 700, 800, and 900 nm. We combine these spot images into a single WFS frame by shifting the single-wavelength images using the dispersion corresponding to zenith angles of 30, 45 and 60 degrees (Figure 13). We weight the images at different wavelengths by the throughput of the optics + detector and by the spectrum of a given star (Figure 14 and Table 3).
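This combination step amounts to a weighted, shifted co-addition of the monochromatic spots. A minimal sketch with assumed names (spots(:,:,i) is the noiseless spot at wavelength i, disp_as(i) the dispersion offset in arcsec read from Figure 13, w(i) the throughput-times-flux weight from Table 3, and pixscale the pixel scale in arcsec/pixel); the xyshift helper from the appendix is reused, and the shift direction along one axis is an assumption.

% Minimal sketch: build an atmospherically dispersed spot from
% monochromatic spot images (assumed names).
spot = zeros(size(spots,1));
for i = 1:numel(w)
    shifted = xyshift(spots(:,:,i), [disp_as(i)/pixscale; 0]);
    spot = spot + w(i)*shifted;
end
spot = spot/sum(w);   % normalize so the total flux is unchanged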

Figure 14: Left Panel: Sample WFS spot (0.4 arcsec/pixel) if no atmospheric dispersion is included. Right Panel: Same spot including atmospheric dispersion (A0 star at 60 degrees zenith angle).

In Table 3 below, we show the relative flux for a blue A0 star (A0 stars have B-V = V-R = V-I = 0 colors and serve as the basis of the Vega photometric system), a solar type G0 star, and a red K5 star (K stars probably dominate the selection of NGSs at the faint limit towards most science fields). The very different spectra of these stars produce different atmospherically dispersed PSFs (Figure 15).

Table 3: Detector throughput and relative star fluxes for representative blue (A0), solar (G0) and red (K5) star types.

λ (nm)   Detector Throughput   A0 star relative flux   G0 star relative flux   K5 star relative flux
400      55%                   1.059                   0.388                   0.148
500      95%                   1.724                   0.807                   0.415
600      95%                   1.164                   0.919                   0.767
700      92%                   1                       1                       1
800      77%                   0.835                   0.945                   1.099
900      47%                   0.670                   0.892                   1.197

Figure 15: Comparison of the reference images (produced through drizzling) for a zenith angle of 60 degrees and an A0 star (left) and a K5 star (right).

Unlike the simulations above, we only assessed performance for one pixel sampling, 0.4"/pixel as per the Raven optical design, but we looked at the drop in performance as a function of magnitude, zenith angle and NGS type (Figures 16, 17, and 18).

Figure 16: Drop in EE versus guide star magnitude for an A0 star observed through different WFS options (600 nm filter, ADC or no change) at zenith angles of 30, 45 and 60 degrees.

Figure 17: Drop in EE versus guide star magnitude for a G0 star observed through different WFS options (600 nm filter, ADC or no change) at zenith angles of 30, 45 and 60 degrees.

Figure 18: Drop in EE versus guide star magnitude for a K5 star observed through different WFS options (600 nm filter, ADC or no change) at zenith angles of 30, 45 and 60 degrees.

Note that these figures show the drop in Raven performance due only to the effects of aliasing, WFS noise, and atmospheric dispersion. The simulations do not include the losses associated with the increased tomographic error due to the apparent separation of layers at higher zenith angles, nor the decrease in the effective r0, which scales as cos(zenith angle)^(3/5). We have modeled this last effect, and the performance definitely goes down as r0 gets smaller, but the magnitude difference in sky coverage is almost unchanged. As expected, the differences in expected performance for the 3 Raven options (ADC, filter or no change) are greatest for the A0 star because atmospheric dispersion is greatest at blue wavelengths (Table 4). Focusing on the K5 star, we find for a zenith angle of 60 degrees that we need NGSs that are on average 0.25 magnitudes brighter without an ADC than with one. Since most of the high-airmass Raven science cases use fields near the Galactic Center, we do not think the lack of an ADC will significantly impact Raven sky coverage (or performance). We also note that for all but the reddest stars, choosing the filter option is a detriment compared to the ADC or no-change options.

Table 4: Loss in limiting magnitude at different zenith angles and for different stellar types if no ADC is included in the WFS design.

Zenith Angle   Magnitude loss in sky coverage w/o ADC
               A0      G0      K5
30             0.05    0       0
45             0.30    0.15    0.10
60             0.80    0.40    0.25

5. Conclusions

We have provided a detailed description of the Raven pixel processing and centroiding algorithms. We have included code (in the appendix) that was used for simulations but can be adapted for use in the RTC. We have confirmed that the combination of aliasing plus sampling error determined from MAOS simulations and included in our Raven modeling paper does not exceed 125 nm RMS.

Correlation and thresholded Center of Gravity (tCOG) centroiding techniques work equally well for bright stars with finely sampled WFSs.

Correlation centroiding works better than tCOG when NGS PSFs are under-sampled and when the S/N is low.

Correlation works (only) slightly better when the reference images are created through drizzling.

Correlation centroiding can work with synthetic reference images with a small penalty in performance (2% loss in EE) if the synthetic reference image is up to 50% larger than the best data-derived reference image.

For bright stars, Raven performance is not much affected by atmospheric dispersion.

Raven would work best with an ADC, but without one there is a penalty in sky coverage.

For typical red NGSs, the limiting magnitude drops by less than 0.2 mag for moderate zenith angles (< 45 degrees).

Given the complexity and cost of adding ADCs to Raven, it is probably not worth adding them now.

Appendix: Correlation Matlab Code

I have included the Matlab functions used for the simulations described above that would be relevant to implementing tCOG or correlation centroiding in the RTC.

% fCOG returns a vector "slopes" which contains x followed by y slopes
% for all subapertures (length is 2x # of subaps).
% "im" contains all the subaperture (assumed square) images in a
% NX x NY x Nsubap 3D matrix.
% "frac" is a fraction which sets the flux threshold.
% "thresh" is the lower limit on the threshold (if frac*max < thresh,
% threshold = thresh).
% NOTE: reference vector is subtracted in a subsequent step.

function slopes = fCOG(im, frac, thresh)

npix = size(im,1);       % assumes as many pixels in X as in Y
nslopes = size(im,3);
slopes = zeros(nslopes*2,1);

for l = 1:nslopes
    sums = 0;
    threshold = frac*max(max(im(:,:,l)));  % threshold = max flux in subap x frac
    if threshold < thresh
        threshold = thresh;
    end
    for m = 1:npix
        for n = 1:npix
            if im(m,n,l) > threshold
                slopes(l) = slopes(l) + m*im(m,n,l);
                slopes(nslopes+l) = slopes(nslopes+l) + n*im(m,n,l);
                sums = sums + im(m,n,l);
            end
        end
    end
    slopes(l) = slopes(l)/sums;
    slopes(nslopes+l) = slopes(nslopes+l)/sums;
end

% cor returns a vector "slopes" which contains x followed by y slopes
% for all subapertures (length is 2x # of subaps).
% "im" contains all the subaperture (assumed square) images in a
% NX x NY x Nsubap 3D matrix.
% "i0" is a NX x NX reference image.
% "flat" is a vector containing the flat fields for all Nsubap subapertures.
% It is optional if a separate reference image is created for each
% subaperture independently.

function slopes = cor(im, i0, flat)

npix = size(im,1);
nsub = size(im,3);
slopes = zeros(nsub*2,1);
corim = zeros(npix,npix,nsub);

% Fourier transform of the reference image for making the
% cross-correlation function (CCF); computed once outside the loop
ffttemp = fft(i0,[],1);
ffttemp = fft(ffttemp,[],2);
ffttemp = conj(ffttemp);

for l = 1:nsub
    % apply the scalar flat(l) here (equivalent to transforming i0*flat(l))
    fftref = fft(im(:,:,l),[],1);
    fftref = fft(fftref,[],2).*ffttemp*flat(l);
    corim(:,:,l) = ifft(fftref,[],1);
    corim(:,:,l) = ifft(corim(:,:,l),[],2);
    corim(:,:,l) = ifftshift(fftshift(real(corim(:,:,l)),1),2);
end

frac = 0.2;   % frac=0.2 and thresh=0 because the correlation image has high S/N
thresh = 0;
slopes = fCOG(corim, frac, thresh);

% drizzle returns a reference image "i0" that is the average of many
% subapertures that have been shifted and resampled to a higher resolution.
% "im" is a set of subaperture images from several phase screens. It has
% dimensions Npix x Npix x Nsubap x NphaseScreens.
% "back" is the average background flux per pixel (double).
% "flux" is the flux of the image per subaperture (double).
% "RN" is the readnoise (double).
% "binFac" is the amount by which the original image is binned (int). This
% won't be used for the real WFS.
% "binDriz" is the amount by which the resolution of the drizzled image is
% increased (int). In practice, it doesn't make sense for binDriz>2.
% NOTE: drizzle creates a single reference image to be used with every
% subaperture. It could be easily modified to create a reference image for
% each subaperture separately.

function i0 = drizzle(im, back, flux, RN, binFac, binDriz)

npix = size(im,1);   % assumes square subapertures
nsub = size(im,3);
nscreen = size(im,4);
% The original high resolution image from the simulation will be binned
% by binFac and then drizzled to a higher resolution. For the real
% system, binFac=1.
i0 = zeros(npix/binFac*binDriz);
slopes = zeros(nsub*2, nscreen);

for k = 1:nscreen
    % The following 4 lines just scale the highly sampled no-noise spot
    % images to the proper spatial resolution and flux level and determine
    % the noise. Just needed for the simulation.
    bin = bin_im(im(:,:,:,k), binFac);
    total = median(sum(sum(bin)));
    bin = bin*flux/total + back;
    noise = addnoise(bin, RN);
    % The following uses fCOG, but one could use correlation centroiding
    % with a stale reference image as well.
    % bin+noise is what comes off the camera naturally; back is measured.
    slopes(:,k) = fCOG(bin+noise-back, 0.12, RN*3)*binDriz/binFac;
    % Take the images and block-replicate them by a factor binDriz
    block = blkrep(bin+noise-back, binDriz);
    % Shift and resample the block-replicated images
    temp2 = xyshift(block, slopes(:,k));
    % coadd all shifted and resampled spot images onto i0
    for j = 1:nsub
        i0 = i0 + temp2(:,:,j);
    end
end

% xyshift shifts and resamples an image onto a new image of the same resolution.
% "final" is the new shifted image with the same dimensions as "im".
% "im" is Npix x Npix x Nsubap in size.
% "slopes" is a 2*Nsubap vector containing the pixel centers in x and then
% y for each subaperture.

function final = xyshift(im, slopes)

nsub = size(im,3);
npix = size(im,1);
final = zeros(npix,npix,nsub);

for l = 1:nsub
    for j = 1:npix
        for k = 1:npix
            u = j - slopes(l);
            v = k - slopes(l+nsub);
            m = floor(u);
            n = floor(v);
            a = m - u + 1;   % fractional overlap in x
            b = n - v + 1;   % fractional overlap in y
            if a > 1
                a = 1;
            elseif a < 0
                a = 0;
            end
            if b > 1
                b = 1;
            elseif b < 0
                b = 0;
            end
            % Check that centers are within the subaperture
            if m > 0 && m < npix && n > 0 && n < npix
                final(m,n,l)     = final(m,n,l)     + im(j,k,l)*a*b;
                final(m+1,n,l)   = final(m+1,n,l)   + im(j,k,l)*(1-a)*b;
                final(m,n+1,l)   = final(m,n+1,l)   + im(j,k,l)*a*(1-b);
                final(m+1,n+1,l) = final(m+1,n+1,l) + im(j,k,l)*(1-a)*(1-b);
            end
        end
    end
end