To appear in Journal of Microscopy, 2009 (This version contains corrections through July 14, 2009.)

Recording and controlling the 4D light field in a microscopeusing microlens arrays

MARC LEVOY *, ZHENGYUN ZHANG *, & IAN MCDOWALL †

*Computer Science Department, Stanford University, Stanford, CA 94305, USA
†Fakespace Labs, 241 Polaris Ave., Mountain View, CA 94043, USA
Correspondence: Marc Levoy, [email protected]

Keywords. plenoptic function, light field, light field microscope, synthetic aperture imaging,illumination, spatial light modulator, microlens array, spherical aberration, Shack-Hartmann sensor.

Summary

By inserting a microlens array at the intermediate image plane of an optical microscope, one can record 4D light fields of biological specimens in a single snapshot. Unlike a conventional photograph, light fields permit manipulation of viewpoint and focus after the snapshot has been taken, subject to the resolution of the camera and the diffraction limit of the optical system. By inserting a second microlens array and video projector into the microscope's illumination path, one can control the incident light field falling on the specimen in a similar way. In this paper we describe a prototype system we have built that implements these ideas, and we demonstrate two applications for it: simulating exotic microscope illumination modalities and correcting for optical aberrations digitally.

1. Introduction

The light field is a four-dimensional function representing radiance along rays as a function of position and direction in space. Over the past 10 years our group has built several devices for capturing light fields (Levoy, 2000; Levoy, 2002; Wilburn, 2005; Ng, 2005b). The last of these is a handheld camera in which a microlens array has been inserted between the sensor and main lens. A photograph taken by this camera contains a grid of circular subimages, one per microlens. Each subimage records one point in the scene, and within a subimage each pixel records one direction of view of that point. Thus, each pixel in the photograph records the radiance along one ray in the light field.

Recently, we have shown that by inserting a microlens array at the intermediate image plane of an optical microscope, we can capture light fields of biological specimens in the same way. We call this a light field microscope (LFM) (Levoy, 2006). From the image captured by this device, one can employ light field rendering (Levoy, 1996) to generate oblique orthographic views or perspective views, at least up to the angular limit of rays that have been captured. Since microscopes normally record orthographic imagery, perspective views represent a new way to look at specimens. Figure 1 shows three such views computed from a light field of mouse intestine villi.

Starting from a captured light field, one can alternatively use synthetic aperture photography (Isaksen, 2000; Levoy, 2004) to produce views focused at different depths. Two such views, computed from a light field of Golgi-stained rat brain, are shown in figure 10. The ability to create focal stacks from a single input image allows moving or light-sensitive specimens to be recorded. Finally, by applying 3D deconvolution to these focal stacks (Agard, 1984), one can reconstruct a stack of cross sections, which can be visualized using volume rendering (Levoy, 1988).

Summarizing, the light field microscope allows us to capture the 3D structure of microscopic objects in a single snapshot (and therefore at a single instant in time). The sacrifice we make to obtain this capability is a reduction in image size. Specifically, if each microlens subimage contains N × N pixels, then our computed images will contain N² fewer pixels than if the microlenses were not present. In return, we can compute N² unique oblique views of the specimen, and we can generate a focal stack containing N slices with non-overlapping depths of field (Levoy, 2006). Note that this tradeoff cannot be avoided merely by employing a sensor with more pixels, because diffraction places an upper limit on the product of spatial and angular bandwidth for a given aperture size and wavelength, regardless of sensor resolution. Despite this limit, light fields contain much useful information that is lost when an object is photographed with an ordinary microscope.
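To make the arithmetic of this tradeoff concrete, here is a minimal Python sketch; the sensor width is an illustrative value, not one taken from the paper:

```python
# Spatio-angular tradeoff of the LFM: a sketch with illustrative numbers.
sensor_px = 2048        # sensor width in pixels (hypothetical value)
N = 17                  # pixels per microlens subimage (N x N)

spatial_width = sensor_px // N   # width of computed images, in microlenses
num_views = N * N                # unique oblique views obtainable
focal_slices = N                 # focal-stack slices with non-overlapping depths of field

print(spatial_width, num_views, focal_slices)   # -> 120 289 17
```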


Figure 1: Three oblique orthographic views from the indicated directions, computed from a light field captured by our LFM. The specimen is a fixed whole mount showing Listeria monocytogenes bacteria in a mouse intestine villus seven hours post infection. The bacteria are expressing green fluorescent protein, and the actin filaments in the brush border of the cells covering the surface of the villus are labeled with rhodamine-phalloidin. Scale bar is 10 µm. Imaging employed a 60×/1.4NA oil objective and an f/30 microlens array (see section 3). Contrast of the bacteria was enhanced by spatially masking the illumination as described in figure 7. The inset at left shows a crop from the light field, corresponding to the square in the first oblique view. In this inset, we see the circular subimages formed by each microlens. The oblique views were computed by extracting one pixel from each subimage - a pixel near the bottom of each subimage to produce the leftmost view and a pixel near the top to produce the rightmost view.

While technologies for recording light fields have existed for more than a century, technologies for generating light fields have been limited until recently by the low resolution and high cost of spatial light modulators (SLMs). With the advent of inexpensive, high-resolution liquid crystal displays (LCDs) and digital micromirror devices (DMDs), interest in this area has burgeoned. In microscopy, SLMs have been used to control the distribution of light in space (Hanley 1999; Smith 2000; Chamgoulov, 2004) or in angle (Samson, 2007). The former is implemented by making the SLM conjugate to the field of the microscope, and the latter by making it conjugate to the aperture. Systems have also been proposed for manipulating the wavefront using an SLM placed in the aperture plane (Neil, 2000). However, in these systems the illumination must be coherent, a significant limitation.

As an extension of our previous work, we show in this paper that by inserting a microlens array at the intermediate image plane of a microscope's illumination path, one can programmatically control the spatial and angular distribution of light (i.e. the 4D light field) arriving at the microscope's object plane. We call this a light field illuminator (LFI). Although diffraction again places a limit on the product of spatial and angular bandwidth in these light fields, we can nevertheless exercise substantial control over the quality of light incident on a specimen.

In particular, we can reproduce exotic lighting effects such as darkfield, oblique illumination, and the focusing of light at planes other than where the microscope is focused. In addition, by generating structured light patterns and recording the appearance of these patterns with our LFM after they have passed through a specimen, we can measure the specimen's index of refraction, even if this index changes across the field of view. In this application we are essentially using the LFI as a "guide star" and the LFM as a Shack-Hartmann sensor. Finally, we can use this information to correct digitally for the optical aberrations induced by these changes in index of refraction.

2. The four-dimensional light field

We begin by briefly reviewing the theory of light fields. In geometrical optics, the fundamental carrier of light is a ray (figure 2(a)). The radiance traveling along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights has been dubbed the plenoptic function (Adelson and Wang, 1992). Since rays in space can be parametrized by three coordinates and two angles as shown in 2(b), the plenoptic function is five-dimensional. However, if we restrict ourselves to regions of space that are free of occluders, then this function contains redundant information, because the radiance of a ray remains constant from point to point along its length. In fact, the redundant information is exactly one dimension, leaving us with a four-dimensional function historically called the light field (Gershun, 1936).

Although the 5D plenoptic function has an obvious parametrization, the 4D light field can be parametrized in a variety of ways (Levoy, 2006c). In this paper we parametrize rays by their intersection with two planes (Levoy, 1996) as shown in figure 2(c).


Figure 2: Plenoptic functions and light fields. (a) The radiance L of a ray is the amount of light traveling along all possible straight lines through a tube of a given solid angle and cross-sectional area. The units of L are watts (W) per steradian (sr) per meter squared (m²). (b) In three-dimensional space, this function is five-dimensional. Its rays can be parametrized by three spatial coordinates x, y, and z and two angles θ and φ. (c) In the absence of occluders, the function becomes four-dimensional. Shown here is one possible parametrization, by pairs of intersections (u, v) and (s, t) between rays and two planes in general position.

While this parametrization cannot represent all rays - for example rays parallel to the two planes if the planes are parallel to each other - it has the advantage of relating closely to the analytic geometry of perspective imaging. Indeed, a simple way to think about a two-plane light field is as a collection of perspective views of the st plane (and any objects that lie beyond it), each taken from an observer position on the uv plane.

One important extension of the two-plane representation is that one of the planes may be placed at infinity. If for example we place the uv plane at infinity, then the light field becomes a collection of orthographic views of the st plane, where each view is composed of parallel rays having fixed direction uv. This parametrization is similar to the ray matrices used in geometrical optics (Halbach, 1964), where s and t represent position on some imaging surface and u and v represent ray direction, which is expressed as a plane intercept in the paraxial approximation. We will need this extension when we consider microscopes, which record orthographic rather than perspective imagery due to the presence of a telecentric stop (Kingslake, 1983) one focal distance behind the objective's second principal point.

Light field rendering and focusing

Whether one of the two planes is placed at infinity or not, we can treat the samples in a two-plane light field as an abstract 4D ray space indexed by u, v, s, and t. If we extract from this space a 2D slice parallel to the st axes, we obtain a pinhole view of the scene taken from one of the original observer positions. For the easier-to-draw case of a 2D ("flatland") light field, this procedure is diagrammed in figure 3(a). More interestingly, if we extract a 2D slice that is not axis-aligned, we obtain a novel view, i.e. from an observer position not contained in the original dataset (figure 3(b)). We call this technique light field rendering (Levoy, 1996), and it has found numerous applications in computer graphics and computer vision.

Alternatively, if we compute a projection of the 4D space perpendicular to its st axes, i.e. by adding together samples of constant u and v, then we obtain a non-pinhole view. Such a view is focused on the st plane in 3D and has an aperture equal in extent to the uv plane (figure 3(c)). We call this synthetic aperture photography (Levoy, 2004). Finally, if we shear the 4D space parallel to its st axes before summing it (figure 3(d)), then we move the plane of focus perpendicular to the st plane (Isaksen, 2000; Ng, 2005a). If we shear the array in other directions in 4D, we obtain a tilted focal plane (Vaish, 2005). Curved focal surfaces are also possible, if we warp the space nonlinearly.
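These two operations are straightforward to express in code. The following NumPy sketch (our illustration, not the authors' implementation) performs shear-and-sum synthetic aperture focusing, approximating the shear with integer pixel shifts:

```python
import numpy as np

def refocus(L, a):
    """Shear-and-sum synthetic aperture focusing.

    L : 4D light field indexed [u, v, s, t] (a grid of orthographic views).
    a : shear amount; a = 0 focuses on the st plane itself.
    Integer pixel shifts stand in for proper interpolation.
    """
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du, dv = u - U // 2, v - V // 2
            # shift view (u, v) by a times its direction offset, then accumulate
            out += np.roll(L[u, v], (round(a * du), round(a * dv)), axis=(0, 1))
    return out / (U * V)

# example: refocus a random 9 x 9 x 64 x 64 light field off the st plane
lf = np.random.rand(9, 9, 64, 64)
img = refocus(lf, a=0.5)
```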

Light field illumination

The algorithms described in the last two paragraphs for generating views from light fields can also be applied to shaping illumination. Consider the 4D light field falling on a wall after passing through an aperture. Let the wall be the st plane and the aperture be the uv plane. As before, indices u, v, s, and t form an abstract 4D ray space. If we illuminate only those rays lying on a 2D slice parallel to the st axes, we produce the appearance of a point light source located somewhere inside the boundaries of the aperture. If the uv plane (hence the aperture) is placed at infinity, the light source becomes collimated, similar to figure 3(a). If the slice is not axis-aligned in 4D ray space, the light source becomes either converging or diverging (like 3(b)).

Alternatively, if we assign a pattern to a slice that is parallel to the st axes, and we extrude the pattern in 4D perpendicularly to the st plane, then we produce a finite-aperture image of the pattern focused on the wall (like 3(c)).


Figure 3: Light field rendering and focusing in flatland. In a 2D world, rays (like R) are parametrized by their intersection with two lines u and s. To represent the situation in a microscope, we place the u line at infinity as described in the text. (a) A bundle of parallel rays with direction u0 (upper diagram) constitutes a 1D pinhole orthographic view of the s line. In the abstract ray space defined by u and s (lower diagram), this view corresponds to a horizontal line at u0. Each point on this line denotes one ray. (b) Rays diverging from a point (like O) constitute a perspective view. These correspond to a tilted line in ray space, comprised of rays having directions from u1 to u2. (c) Summing part or all of a column in ray space produces one pixel (like P) in a non-pinhole view focused on the s line. Its aperture spans ray directions from umin to umax. (d) Shearing the ray space leftward in s with respect to u, then summing as before, we produce a view focused above the s line, or above the object plane if this were a microscope. A rightward shear would produce a view focused below the s line. In (c-d), the bundle of rays constitutes the PSF of a synthetic aperture. In a microscope this PSF is spatially invariant, i.e. the vertical integral for every s spans the same range of directions in u.

Finally, if we shear or warp the 4D ray space after performing the extrusion, we can focus the pattern at a different depth (like 3(d)), or onto a tilted plane or curved surface - without moving any optics. We call this synthetic aperture illumination (Levoy, 2004).

3. Prototype light field illuminator & microscope

To explore these ideas, we have built a system that combines a light field illuminator (LFI) and light field microscope (LFM). Our design is a standard optical microscope in which microlens arrays have been inserted at the intermediate image planes of the illumination and imaging paths. In the illumination path this plane is imaged by a video projector, and in the imaging path by a digital camera. Since we need optical quality in both paths, we chose epi-illumination for our prototype rather than transmitted light, since the latter would require building a custom microscope having objectives above and below the stage. Epi-illumination also facilitates fluorescence studies, an important application as we shall see. The microlens arrays in the illumination and imaging paths are similar but distinct. In fact, their optical recipes may differ, permitting different spatio-angular tradeoffs in lighting and viewing.

Optical layout

Figure 4 shows our optical layout. Our spatial light modulator is a Texas Instruments Digital Light Processor (DLP) having 1024 × 768 pixels, of which our prototype uses the innermost 800 × 600. When imaged onto light field plane Lf_LFI in figure 4(b), this produces a 12mm × 9mm field. The illumination microlens array at Π′_LFI is a standard part made by Adaptive Optics. Each microlens is a square-sided, plano-convex lenslet with a pitch of 300 µm and a focal length of 3.0mm or 7.6mm. This gives us an addressable spatial resolution of 12mm / 300 µm = 40 tiles horizontally and 9mm / 300 µm = 30 tiles vertically. The angular resolution within each tile is 800 pixels / 40 = 20 addressable directions horizontally and vertically.

The DLP is driven by a Discovery 1100 Controller (Digital Light Innovations), which is fed by a PC's DVI output using a custom interface (Jones, 2007). In this interface a 24-bit image drawn on the PC results in an endless loop of 24 binary images being displayed in rapid succession on the DLP. To produce gray, a fraction of the bits in each pixel are turned on. The resulting sequence of binary images integrates over time to approximate the desired intensity. While this interface supports only 24 gray levels, it has proven sufficient for our applications.
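As a rough sketch of this temporal dithering (our rendition; the controller firmware itself is not described here), a desired gray level maps to a count of "on" bit planes among the 24 binary images:

```python
def dlp_bitplanes(gray):
    """Approximate a gray level in [0, 1] with the 24-image binary loop
    described above: turn on round(24 * gray) bit planes; the DLP's rapid
    succession of binary images integrates over time to this intensity."""
    on = round(24 * gray)
    return [1] * on + [0] * (24 - on)   # one entry per binary image in the loop

assert sum(dlp_bitplanes(0.5)) == 12    # mid-gray: half the bit planes on
```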


Figure 4: Prototype light field illuminator and microscope. (a) The base microscope is a Nikon 80i. The LFI (yellow outline) consists of a video projector A and microlens array B. The LFM (red outline) is a camera C and second microlens array D. (b) Light from a 120W metal halide source (X-Cite 120, EXFO) is conducted through a 1.5m liquid light guide (3mm diameter, 0.59NA, EXFO) to a rectangular hollow reflective light pipe (Discovery Visible Optics Kit, Brilliant Technologies). From there it is relayed to a TIR prism and focused onto a DLP-based spatial light modulator. The DLP's image is focused by a 1:1 telecentric relay lens (Brilliant Technologies) onto light field plane Lf_LFI (also shown in (c)). Solid lines are the chief rays for the DLP, and dashed lines are the marginal rays for one micromirror. (c) Light field plane Lf_LFI is the front focal plane of the illumination microlens array, which lies at a conjugate Π′_LFI of the microscope's intermediate image plane. Plane Π′_LFI is imaged by a Nikon imaging tube lens TL_LFI, beam splitter BS, and standard microscope objective Ob, which focuses it onto object plane Π. Returning light passes through the objective and the microscope's native tube lens TL_LFM, which focuses it onto the imaging microlens array at the native intermediate image plane Π′_LFM. The back focal plane Lf_LFM of this array is imaged by a 1:1 relay lens system (not shown), which consists of two Nikon 50mm f/1.4 AIS manual focus lenses mounted front-to-front, and is recorded by a digital camera. Solid lines are the chief rays for a single microlens, and dotted lines are the marginal rays for that microlens. To simplify the drawings, only three microlenses are shown in each array. Further details on the optical recipes of LFMs can be found in Levoy (2006b).

Imaging is performed using a Retiga 4000R monochrome CCD camera (2048 × 2048 pixels, each 7.4 × 7.4 µm). When imaged onto light field plane Lf_LFM in figure 4(c), this corresponds to a field 15mm × 15mm. The imaging microlens array at Π′_LFM is a custom design manufactured to our specifications by Adaptive Optics and consists again of square-sided, plano-convex lenslets, this time with a pitch of 125 µm. This gives us an addressable spatial resolution during imaging of 15mm / 125 µm = 120 tiles horizontally and 120 tiles vertically. The angular resolution within each tile is 2048 pixels / 120 = 17 addressable directions horizontally and vertically.

The optics connecting the illumination and imaging paths consists of microscope objective Ob in figure 4(c) and two tube lenses TL_LFI and TL_LFM. We used a Nikon imaging tube lens at TL_LFI rather than a Nikon illuminator tube lens because we believe the former to have higher optical quality, which we need when making focused images. The effective focal length of this tube lens matches that of the microscope's native tube lens at TL_LFM, thereby making the two intermediate image planes Π′_LFI and Π′_LFM conjugates. As a result, we can treat the two microlens arrays as if they were superimposed. Comparing their fields, we see that the projector somewhat underfills our camera's field of view. More importantly, the camera has a higher pixel count than the projector, reflecting the current state of the art of these two technologies at roughly comparable component costs. By our selection of microlens pitches (125 µm versus 300 µm), we have chosen to devote those extra camera pixels to spatial rather than angular resolution.

To complete the optical layout, we must specify the focal lengths of the microlenses. To optimize use of our projector and camera, we want the telecentric stop D_Ob in figure 4(c) to map through the microlens arrays to arrays of imaging circles that exactly fill light field planes Lf_LFI and Lf_LFM. Ideally these circles should abut, neither overlapping nor leaving a gap between them. (Our square-sided microlenses produce a rectilinear grid of circles, which wastes additional space; a hexagonal grid would be more efficient.)


Figure 5: Definition of coordinate systems. (a) The image returned by our camera or displayed on our projector. The pixel grid (black lines) is denoted using coordinates (x, y). With the microlens arrays in place, these images can be treated as an array of circular subimages (gray circles), one per microlens. This array is generally not aligned to the pixel grid, as shown. Given a pixel in one of these images (red dot in (a)), the cubic warps described in the text tell us the integer coordinates (s, t) of the microlens it lies in (red circle in (b)), as well as the position in rectangular coordinates (u, v) or polar coordinates (r, φ) of the pixel (red square in (c)) relative to the centroid of its microlens. From r and subimage radius r_NA, we can apply equation (1) to obtain declination angle θ of the pixel's ray bundle (pink tube in (d)) with respect to the microscope's optical axis. Polar coordinate φ in (c) directly gives the ray's azimuthal angle in (d). Finally, microlens coordinates s and t (from (b)) tell us the spot in the field of view (red ellipse in (d)) where the ray bundle intersects object plane Π. The optics reflect θ and φ around the origin, as shown in the diagrams, but not s and t. In the text, camera coordinates bear the subscript c, and projector coordinates bear the subscript p.

This mapping is achieved if the NA of the microlenses matches the image-side NA of the objective. Expressing this constraint as an F-number N gives N ≈ M / (2 NA), where M is the objective's magnification. For example, for a 40×/0.95NA objective, we should employ f/20 microlenses for both illumination and imaging. The LFI microlenses in our prototype have a pitch of 300 µm, so their focal length should be 300 µm × f/20 = 6.0mm for this objective. Similarly, the LFM microlenses with a pitch of 125 µm should have a focal length of 125 µm × f/20 = 2.5mm.
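The following sketch reproduces this matching calculation; the function name is ours, and M denotes the objective's magnification:

```python
def matched_microlens(M, NA, pitch_um):
    """F-number and focal length matching a microlens array to an
    objective of magnification M and numerical aperture NA (N = M / (2 NA))."""
    N = M / (2 * NA)            # image-side F-number of the objective
    return N, pitch_um * N      # focal length = pitch x F-number, in um

# the paper's example, a 40x/0.95NA objective:
print(matched_microlens(40, 0.95, 300))   # LFI array: ~f/21, ~6300 um (rounded to f/20, 6.0 mm)
print(matched_microlens(40, 0.95, 125))   # LFM array: ~f/21, ~2600 um (rounded to f/20, 2.5 mm)
```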

At this point it is instructive to look back at our discussion of light field parametrizations. In figure 3 we represented orthographic light fields abstractly by placing the u line at infinity. In figure 4 the st plane can be treated as lying at the microscope's object plane Π (or any conjugate, such as image planes Π′_LFI or Π′_LFM), and the uv plane can be treated as lying at the objective's back focal plane, i.e. at telecentric stop D_Ob. Since this stop is the microscope's aperture, we say formally that −1 ≤ (s, t) ≤ +1 gives a position in the microscope's field of view, and −1 ≤ (u, v) ≤ +1 gives a position in its aperture or exit pupil.

From the foregoing it should now be clear that each circular subimage on illumination light field plane Lf_LFI controls the light incident at one position in the microscope's field of view, and each pixel within a subimage controls the light passing through one part of the microscope's aperture, hence one direction of light for that position. Similarly, each circular subimage on imaging light field plane Lf_LFM records the light reflected from one position in the field of view, and each pixel within a subimage records one part of the aperture, hence one direction of view for that position.

It costs a few thousand dollars to convert a microscope with a digital camera into a light field microscope, and about the same to convert a Nikon epi-fluorescence module and Digital Light Innovations projector into a light field illuminator. While we replaced the illumination optics in the Nikon module to ensure optical quality as noted earlier, we continue to use the module's rotating turret of beam splitters and fluorescence filters, as well as the Nikon trinocular for switchable observation or photography. In the future we envision a system having ports for the camera and projector and two turrets of microlenses. These turrets would permit the user to optimize the microlens arrays' pitches and focal lengths for each microscope objective. This would permit the user to trade off spatial and angular resolutions, and to do so separately for illumination and imaging, as needed for their particular application.

Alignment and calibration

No attempt is made in our prototype to physically align the microlens arrays with the camera or projector chips. Instead, we calibrate our system in software. The coordinate systems we use are defined in figure 5. We first find the mapping from camera pixels to camera microlenses, and from there to positions and directions of imaging rays. Then we find the mapping from projector pixels to camera pixels. Combining this mapping with the first one gives us the mapping from projector pixels to illumination rays, since they pass through the same microscope optics. The entire calibration procedure, including image capture, takes about one minute on a personal computer.

To calibrate the imaging path, we place a blank slide on the microscope stage, illuminate it with transmitted light, focus the microscope for Koehler illumination, reduce the condenser aperture to a pinhole, and capture a light field. This light field will contain a grid of circular spots each a few pixels in diameter as shown in figure 5(a). We find the centroids (x_c, y_c) of these spots in camera coordinates (with subpixel accuracy), form a correspondence between the grid of centroids and points (s_c, t_c) on a 2D integer lattice (gray grid in figure 5(b)) representing the camera microlens array, and solve for a cubic warp mapping lattice points to camera coordinates. In addition to translation, scaling, rotation, and shear, this warp corrects for the radial distortion in our relay lenses. We assume that the microlens array, while not aligned, is not itself distorted.
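The paper does not specify the solver, but a cubic warp of this kind can be fit by linear least squares over the ten 2D cubic monomials, as in this hypothetical sketch:

```python
import numpy as np

def fit_cubic_warp(st, xy):
    """Fit a 2D cubic polynomial warp from microlens lattice points (s, t)
    to measured spot centroids (x, y) by linear least squares.
    st, xy: float arrays of shape (n, 2). Returns (10, 2) coefficients."""
    s, t = st[:, 0], st[:, 1]
    A = np.stack([np.ones_like(s), s, t,                   # affine terms
                  s*s, s*t, t*t,                           # quadratic terms
                  s**3, s*s*t, s*t*t, t**3], axis=1)       # cubic terms
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)
    return coeffs
```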

Assuming the microlenses are themselves distortion-free¹, a pixel at distance r from the centroid of a microlens subimage (figure 5(c)) can be mapped to ray declination angle θ relative to the microscope's optical axis (figure 5(d)) using the Abbe sine condition (Oldenbourg, 2008)

(1)    sin θ = (r / r_NA) · (NA / n)

where r_NA is the radius of the microlens subimage produced by an objective having a numerical aperture NA, and n is the refractive index of the medium at the object plane.
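Equation (1) translates directly into code; the following sketch (names ours) converts a pixel's radial distance into its ray's declination angle:

```python
import math

def declination(r, r_NA, NA, n):
    """Equation (1): declination angle theta of a pixel's ray bundle,
    from its distance r to the subimage centroid (Abbe sine condition)."""
    return math.asin((r / r_NA) * (NA / n))

# e.g. a pixel halfway out in a subimage formed under a 1.2NA water objective:
theta = declination(r=0.5, r_NA=1.0, NA=1.2, n=1.33)   # ~0.47 rad
```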

To calibrate the illumination path, we place a mirror on the stage, we display on the projector a sequence of patterns consisting of black-and-white stripes, and for each pattern we capture a light field. First, we divide the light field by a previously captured image with every projector pixel on. This performs radiometric correction, to account for vignetting across the field of view and aperture. (The former affects the entire image; the latter affects each microlens.) We then subtract from each pixel a fraction (40%) of the average intensity in its subimage, clamping at zero. This partly corrects for light scattered from the seams between microlenses. This scattering is not completely understood and may include diffraction from the edges of the microlenses; characterizing it is a topic of ongoing work. Finally, thresholding the intensities seen in each camera pixel on each frame of the sequence, we produce a binary code per pixel. We perform this procedure first using horizontal stripes, then vertical stripes. These codes tell us (to the nearest pixel) which projector pixel (x_p, y_p) was seen by each camera pixel.

¹ Although our microlenses are uncorrected, we are using them at long conjugates (between f/10 and f/30) to produce small images (≤ 20 pixels on a side). We believe that aberrations or distortions are smaller than one pixel and can be ignored in a first-order approximation.

Figure 6: Calibrating our projector using Gray code sequences. (a) This plot contains 10 rows, one for each frame in the sequence. Each row is 1024 pixels long, the width of our video projector. A row is extruded vertically to make the pattern displayed on the projector for that frame. Within row t, each pixel x is either black or white. Taking a vertical slice through the plot gives the 10-bit Gray code for that pixel. Two adjacent codes, on either side of the red line, are shown. Note that they differ in only one digit, in keeping with the Gray code's unique property. At bottom are the images captured by our camera for the second (b) and sixth (c) patterns. The specimen is a first-surface mirror with cover slip. Imaging employed a Nikon 60×/1.2NA water objective, an f/25 illumination array, and an f/20 imaging array. See text for explanation of the red dots.


The stripe patterns we employ are Gray codes (Bitner 1976). These codes have the property that if a stripe boundary falls inside a pixel, making it hard to determine if the pixel should be zero or one, it will do so only once in the sequence of patterns. Thus, the error we suffer as a result of making the wrong determination will be no more than ±1 projector pixel. The number of patterns required (horizontal and vertical) is 2 log₂ W, where W is the width of the projector in pixels. For example, for W = 1024 pixels we must display 20 patterns, producing a 20-bit code for each pixel. Such a code is shown in figure 6(a).
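A Gray-code pattern sequence of this kind can be generated with the standard binary-to-Gray conversion, as in this illustrative sketch:

```python
import math

def gray_code_patterns(W):
    """Stripe patterns encoding each of W projector columns with a Gray code.
    Returns ceil(log2 W) rows; row k holds one bit of every column's code."""
    bits = math.ceil(math.log2(W))
    codes = [x ^ (x >> 1) for x in range(W)]     # binary index -> Gray code
    return [[(c >> k) & 1 for c in codes] for k in range(bits - 1, -1, -1)]

rows = gray_code_patterns(1024)                  # 10 patterns; 20 with vertical stripes too
a = [r[511] for r in rows]; b = [r[512] for r in rows]
assert sum(x != y for x, y in zip(a, b)) == 1    # adjacent codes differ in one digit
```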

Figure 7: Spatial control over illumination. The specimen and imaging arrangement are described in figure 1. Illumination employed an f/25 microlens array. Scale bar is 10 µm. (a) The GFP channel under brightfield illumination. The bacteria are visible, but contrast is low due to autofluorescence from surrounding structures. (b) A video projector pattern designed to restrict the illuminated field to a circular mask 18 µm in diameter. The circle spans 29 microlenses. Each microlens is represented in the mask by a solid disk, indicating that within the 18 µm circle the microscope aperture is filled with light. (c) The GFP channel under the masked illumination. Contrast between the bacteria and its background, computed using (I1 − I2) / (I1 + I2), is improved 6× over (a). Figure 1 shows three views of (c) (visualized in green) together with the surrounding structures (visualized in red).

While Gray codes are commonly used to calibrate projector-camera pairs (Rusinkiewicz 2002), the microlens arrays in our system make the task harder. In particular, no attempt is made to align our arrays to one another, nor to match the pitches of their microlenses. This produces a mapping from projector pixels to camera pixels that folds back on itself repeatedly. This in turn permutes the sequence of Gray codes observed along a row (or column) of camera pixels, destroying its error-tolerant property. To avoid this problem, we first discard any camera microlens whose subimage is blurry, since this represents a fold in the mapping. We evaluate blurriness by thresholding the sum of the absolute values of the pixel gradients in X and Y, integrated across a microlens subimage. For example, we would discard the second column of microlenses in figure 6(b). Fortunately, the projector microlenses in our prototype are more than twice as wide as the camera microlenses, so we can afford to be conservative. We then transpose the light field, producing a 2D array of "imagelets" each of which represents a single position (u_c, v_c) in the microscope's exit pupil, hence a constant ray direction. Two imagelets can be seen in figures 17(h-i). Adjacent pixels in an imagelet come from corresponding positions in adjacent microlenses, for example the red dots in figures 6(b-c). In these pixels, the Gray codes are guaranteed not to be permuted with respect to camera pixel position, preserving their error-tolerant property.

Once we have assembled an imagelet, which may be sparse because we have discarded some of its pixels, we form a correspondence between the remaining pixels, whose Gray codes give the coordinates (x_p, y_p) of the projector pixel they saw, and points (s_c, t_c) on a 2D integer lattice representing the camera microlens array. We then solve for a quadratic warp mapping projector coordinates to lattice points. Concatenating this warp with the mapping we already know from the camera microlens lattice to ray positions and directions tells us the position and direction of the ray produced by turning on any projector pixel. To illuminate a ray of fractional position and direction, we turn on a weighted set of pixels. If we assume linear interpolation, then to turn on a single ray we partially activate 16 projector pixels (the two neighbors in each dimension of the 4D light field). As a result of this interpolation, the resulting ray bundle will be broader in space and angle than we would like, but this is the best we can do given our discrete sampling of the light field.
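Here is a sketch of this 16-pixel activation, assuming quadrilinear (linear per dimension) interpolation as described; mapping each 4D sample to a projector pixel via the calibrated warps is left abstract:

```python
import itertools, math

def ray_weights(u, v, s, t):
    """Quadrilinear weights for illuminating one fractional ray (u, v, s, t):
    the 16 surrounding 4D lattice samples and the weight of each. Each sample
    is then mapped to a projector pixel via the calibrated warps."""
    out = []
    for offs in itertools.product((0, 1), repeat=4):
        corner = [math.floor(c) + d for c, d in zip((u, v, s, t), offs)]
        w = 1.0
        for c, cc in zip((u, v, s, t), corner):
            w *= 1.0 - abs(c - cc)               # linear weight per dimension
        out.append((tuple(corner), w))
    return out

weights = ray_weights(3.2, 7.8, 10.5, 11.1)
assert abs(sum(w for _, w in weights) - 1.0) < 1e-9   # weights sum to one
```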

4. Shaping microscope illumination

The control provided by our light field illuminator over the lighting in a microscope can be described formally as

(2)    L′(u, v, s, t) = L(u, v, s, t) × M_LFI(u, v, s, t)

where L is the light field that would arrive at the object plane if we performed no modulation, and M_LFI is the attenuation provided by our spatial light modulator. Although the SLM is two-dimensional, the presence of a microlens array allows us to define M_LFI as a 4D function.

Unfortunately, equation (2) is rather general, making it difficult to appreciate which attenuation functions might be useful. In this paper, we consider two specializations of this equation: separable spatio-angular control and synthetically focused illumination.

Separable spatio-angular control

In a microscope whose illumination path provides physical access to conjugates of the field and aperture planes, substantial control can be exercised by inserting masks at one or both planes. This control can be described as

(3)    M_LFI(u, v, s, t) = M_a(u, v) × M_f(s, t)

where M_a and M_f are 2D attenuation functions, and (u, v) and (s, t) denote lateral positions in the exit pupil and field, respectively. In this formulation M_f controls the spatial character of the illumination, and M_a controls its angular character. To control spatial or angular character alone, one merely sets the other function to unity.
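Equation (3) amounts to an outer product of the two masks. A minimal sketch, with illustrative tile counts drawn from our prototype's resolution figures:

```python
import numpy as np

def separable_mask(Ma, Mf):
    """Equation (3): build the 4D mask M_LFI[u, v, s, t] from an aperture
    mask Ma(u, v) and a field mask Mf(s, t) by an outer product."""
    return Ma[:, :, None, None] * Mf[None, None, :, :]

# e.g. oblique illumination over the full field (sizes illustrative):
U = V = 20                    # angular tiles (the prototype's 20 x 20 directions)
S, T = 40, 30                 # spatial tiles (40 x 30)
Ma = np.zeros((U, V)); Ma[:, :U // 4] = 1.0   # light only one edge of the aperture
Mf = np.ones((S, T))                          # leave the field unmodulated
M_LFI = separable_mask(Ma, Mf)                # shape (20, 20, 40, 30)
```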

Figure 8: Angular control over illumination using the LFI alone. The specimen is a blond human hair. Scale bar is 100 µm. At top are the patterns displayed on our projector. As they pass through objective Ob, these patterns produce the illumination shown in the middle row. At bottom are photographs captured by a color camera (Canon 5D) using a 10×/0.45NA objective and no imaging microlens array. The illumination microlens array was f/10. (a) Brightfield. M_a (see equation (3)) is a full-aperture disk convolved with a Gaussian (σ = aperture diameter × 0.1), which apodizes M_a to reduce diffraction. (b) Quasi-darkfield. M_a is a full-aperture annulus apodized as before. Note the enhanced visibility of scattering inside the hair fiber. (c) Headlamp, i.e. brightfield with a reduced aperture. M_a is a Gaussian (σ = aperture diameter × 0.4). This produces a specular highlight, whose undulations arise from scales on the fiber surface. (d) Oblique, produced by shifting (c) to the edge of the aperture. See text for explanation of the arrow.

We first consider spatial control. Microscopists know that by restricting illumination to a small region of the field, inscattered light from objects outside this region is reduced, improving contrast. Indeed, this is the basis for confocal microscopy. Since we can programmatically mask illumination and imaging, we can implement confocal imaging. However, since we are partitioning our projector's pixels between spatial and angular resolution, and we have chosen to restrict spatial resolution to 40 × 40 tiles, our confocal performance would be poor. In particular, it would be worse than using the projector to perform only spatial modulation (Hanley, 1999; Smith, 2000; Chamgoulov, 2004).

Nevertheless, our spatial resolution is high enough to improve the contrast of ordinary fluorescence imaging. Figure 7 demonstrates this idea for the difficult case of imaging L. monocytogenes bacteria in the intestinal epithelium. By restricting the illumination to a small circle interactively positioned to enclose only the bacteria, contrast was improved 6×, allowing us to see them clearly even though they are embedded 40 µm into the tissue sample.

We now consider angular control. Textbooks on optical microscopy typically devote an entire chapter to exotic ways of illuminating the condenser aperture, including brightfield, darkfield, oblique illumination, optical staining, and so on (Pluta, 1988, vol. 2). As currently practiced, each of these techniques requires a special apparatus. It has been previously shown that by placing a spatial light modulator conjugate to this aperture, many of these effects can be simulated programmatically (Samson, 2007). Since we are partitioning our projector's pixels among spatial and angular resolution as previously mentioned, our angular resolution is relatively low (20 × 20 directions).

Nevertheless, we too can simulate these effects, as demonstrated in figure 8. The patterns at top differ only within the circular subimage destined for each illumination microlens. These differences represent different aperture attenuation functions (M_a in equation (3)). The oblique illumination in 8(d) is particularly interesting. Some of the light reflects from the top surface of the hair fiber, producing a specular highlight, but some of it continues through the fiber, taking on a yellow cast due to selective absorption by the hair pigment. Eventually this light reflects from the back inside surface of the fiber, producing a second highlight (white arrow), which is colored and at a different angle than the first. This accounts for the double highlight characteristic of blond-haired people (Marschner, 2003).

Figure 9: Using the LFI and LFM together to obtain true darkfield imaging. At top are views computed from light fields captured by our LFM. The specimen is L-glutamic acid crystals on a slab of polished black marble, imaged with a 20×/0.75NA objective. The illumination microlens array was f/10, and the imaging array was f/13. Scale bar is 100 µm. The middle row shows excerpts from the light fields, showing the microlenses used to compute the pixels inside the red rectangles at top. Visualized in yellow are the rays summed to compute one pixel. The bottom row contains blowups of one microlens. The yellow rectangle shows the rays used.

Strictly speaking, darkfield imaging requires the condenser to overfill the objective back aperture. This allows light to be cast on the specimen from directions not seen by the objective. Since we are illuminating and imaging through the same objective, figure 8(b) does not satisfy this condition. However, if we capture a light field instead of an ordinary photograph, and we compute a synthetic aperture photograph using only the center of each microlens subimage, then we can correctly simulate darkfield imaging, albeit with a reduced aperture. This idea is shown in figure 9. Under brightfield (a), the pixels in the yellow rectangle at bottom include a specular reflection from the marble slab. This reflection makes the background appear bright in the view at top. Under quasi-darkfield illumination (b), light arrives obliquely, reflects specularly from the marble, and leaves obliquely. Thus, it fills only the periphery of each microlens; the center remains dark. Extracting the center then produces a true darkfield image, shown at top.

Figure 10: Synthetic refocusing by shearing a light field. The specimen is a 100 µm slice of Golgi-stained rat brain, imaged with transmitted light and a 40×/1.3NA oil objective. At top are views computed from a single light field captured by a variant of the optical layout in figure 4. Scale bar is 100 µm. At bottom are excerpts from the light field. Visualized in yellow are the rays summed to compute the pixel under the crosshair at top. (a) Finite-aperture view focused on the microscope's natural object plane. (b) Focused 10 µm higher. Note that rays are drawn from more microlenses than in (a).

Synthetically focused illumination

If we relax the requirement in equation (3) that M_a and M_f be functions of (u, v) and (s, t) alone, we obtain a formulation with more generality,

(4)    M_LFI(u, v, s, t) = M_a(T_a(u, v, s, t)) × M_f(T_f(u, v, s, t))

where T_a and T_f are arbitrary mappings from 4D to 2D. For example, if T_a = (u, v) and T_f = (s + au, t + av), then we obtain a shear of st with respect to u and v, where a controls the magnitude and sign of the shear. As shown in figure 3, shearing a light field is equivalent to refocusing it (Ng, 2005a). Thus, using T_f to shear M_f corresponds to creating a spatial illumination pattern focused on a plane parallel to the microscope's natural object plane but displaced from it. As another example, if T_a = (u + bs, v + bt) and T_f = (s, t), then the bundle of directions from which light arrives at the object plane, i.e. the illumination PSF, will vary as a function of field position. This allows us to simulate point sources at arbitrary 3D positions relative to the specimen. These capabilities differ fundamentally from existing programmable-array illumination systems.
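Here is a sketch of the field-mask shear T_f = (s + au, t + av), using nearest-neighbor sampling; a production implementation would interpolate, and as the text notes below, such shears can run in real time on a graphics card:

```python
import numpy as np

def sheared_field_mask(Mf, a, U, V):
    """Equation (4) with Ta = (u, v) and Tf = (s + a*u, t + a*v): sample a
    field mask Mf on sheared coordinates so that the projected pattern
    focuses above or below the natural object plane (nearest-neighbor)."""
    S, T = Mf.shape
    s_idx, t_idx = np.meshgrid(np.arange(S), np.arange(T), indexing="ij")
    M = np.zeros((U, V, S, T))
    for u in range(U):
        for v in range(V):
            du, dv = u - U // 2, v - V // 2       # centered pupil coordinates
            ss = np.clip(np.rint(s_idx + a * du).astype(int), 0, S - 1)
            tt = np.clip(np.rint(t_idx + a * dv).astype(int), 0, T - 1)
            M[u, v] = Mf[ss, tt]
    return M
```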

What does it look like to shear a light field? Figure 10 shows two views from a light field of Golgi-stained neurons. To produce a view focused on the microscope's natural object plane, it suffices to sum the rays in each microlens subimage. (The pixel at the crosshair lies between microlenses, so rays were drawn from four microlenses.) To focus higher in the specimen, we shear the light field. This is equivalent to summing the central rays from the central microlenses and the peripheral rays from nearby microlenses, with the latter taken from the periphery furthest from the central microlens. If we wished to focus lower, we shear the light field the other way. This causes us to draw the latter rays from the periphery closest to the central microlens.

Shearing an illumination light field works the same way. If we replace the camera in figure 10 with a projector, and we turn on pixels corresponding to the yellow areas in (a) or (b), we would produce a spot of light at the lateral position indicated by the crosshair but focused at different depths. The minimum lateral extent of this spot would equal the spacing between microlenses in object space (3 µm for this example), and its minimum axial extent would equal the depth of field given by equation (5.2) in Levoy (2006) (4 µm for this example). These shears can be performed in real time using a modern graphics card.

Experimental demonstration

To demonstrate our ability to refocus light programmatically, we use our LFI to throw a light pattern onto a partially reflective slide (see figure 11). We then move the microscope stage axially, throwing the slide out of focus and the illumination doubly out of focus, since its light passes twice through this increased distance. Shearing the incident light field in 4D, we can refocus it back to the plane of the slide. Shearing the captured light field instead, we can refocus our view back to the same plane. Doing both at once brings the slide and illumination back into focus - without moving any optics.

Figure 11: Refocusing the illumination in a microscope. The specimen is a micrometer slide (Edmunds M53-713) consisting of diffuse diamonds on a mirror background, imaged using a 20×/0.75NA objective. The illumination microlens array was f/10, and the imaging microlens array was f/13. Scale bar is 100 µm. (a) The word "LIGHT FIELD" was rasterized with anti-aliasing and displayed on our video projector. (b) A view computed from a light field, focused on the microscope's object plane as in figure 10(a). The diamonds and words are sharp. (c) Lowering the stage by 50 µm defocuses our view of the slide and doubly defocuses our view of the words. (d) Modifying the projected image as in figure 10(b) refocuses the illumination downwards by 50 µm. (e) The words are now focused at the plane of the slide, but our view of them is still blurry. (f) Synthetically refocusing our view downwards by 50 µm brings the diamonds and words back into focus.

Let us analyze these results. Applying Sparrow's criterion, the minimum resolvable spot on the intermediate image plane (Levoy, 2006, equation (2)) is 6.9 µm for our 20×/0.75NA objective, assuming λ = 550 nm. On the imaging side, this gives us 18 spots within each 125 µm-wide microlens. Our camera resolution is slightly worse than this spot size (pixel size = 7.4 µm), so in practice we resolve only 16.8 spots. Taking this as N_u, the depth of field of a synthetically focused view (Levoy, 2006, equation 5.2) is 18.4 µm, and the axial range over which these views can be focused before their circle of confusion exceeds one microlens (Levoy, 2006, equation 5.3) is 139 µm.

On the illumination side, we have 43 resolvable spots below each 300 µm-wide microlens by Sparrow's criterion. Our projector pixel size (13.7 µm) is much larger, so in practice we can address only 21.8 spots. Taking this as N_u, the depth of field of a synthetically focused pattern is 11.6 µm, and the axial range over which these patterns can be focused before their circle of confusion exceeds one illumination microlens is 233 µm. However, if we wish the pattern to appear sharp in the captured light field as well as the incident light field, then the axial range must be reduced by the ratio of the two microlens pitches, yielding a maximum range of only 97 µm. Thus, we can expect to refocus the illumination by only 40 µm before it becomes noticeably blurry in views of the light field. Indeed, the words in figure 11(f) appear slightly less sharp than in (b).
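These figures can be checked with a few lines of arithmetic (values from the two preceding paragraphs):

```python
# Re-deriving the spot counts above (Sparrow limit at the intermediate
# image plane; all lengths in micrometers).
spot = 6.9                      # minimum resolvable spot for 20x/0.75NA

print(125 / spot)               # imaging: ~18 spots per 125 um microlens
print(125 / 7.4)                # ~16.9 spots actually resolved (7.4 um pixels)

print(300 / spot)               # illumination: ~43 spots per 300 um microlens
print(300 / 13.7)               # ~21.9 spots addressed (13.7 um projector pixels)

print(233 * 125 / 300)          # axial range sharp in both light fields: ~97 um
```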


What is synthetically focused illumination good for? First, it allows us to move a focused spot of light axially without moving the microscope optics. Combined with our synthetically refocusable imaging, this facilitates the illumination and viewing of rapidly moving organisms. Second, it allows us to create masked illumination at multiple depths simultaneously, by summing several patterns each sheared by a different amount. In ongoing work we are using this technique to create steerable "follow spots" (by analogy to theatrical lighting) to stimulate fluorescence in multiple L. monocytogenes as they move through live tissue, while minimizing autofluorescence from surrounding structures. Finally, it allows us to implement a low-resolution scanning confocal microscope capable of acquiring a 3D volume without moving the microscope optics.

5. Digital correction of optical aberrations

Aberrations arise in an optical system when light leaving a point on the object does not reconverge to a point on the image plane. Since 4D light fields separately record each ray passing through the system, it is possible in principle to compute focused views free of aberrations - without altering the optics - by capturing and resampling a light field. This idea is shown geometrically in figure 12. Whether one can accomplish these corrections in practice depends on capturing rays at fine enough resolution in space and angle, and accurately characterizing the rays actually captured. In previous work in our laboratory, Ng (2006) showed that non-chromatic aberrations in photographic lenses whose optical recipe he knew could be reduced in this way.

Like any imaging system, microscopes are sensitive to optical aberrations. Minimizing these aberrations requires controlling the index of refraction of the immersion medium, and for a non-dipping objective, the thickness of the cover slip. If the specimen has a different index of refraction than the immersion medium, then the objective can only be well-corrected for a single depth. Researchers have proposed numerous ways to reduce aberrations, including introducing additional optics (akin to the cover slip corrector found on some objectives), changing the index of refraction of the immersion medium, or employing computer-controlled deformable mirrors (Albert, 2000; Booth, 2002; Sherman, 2002; Kam, 2007; Wright, 2007). However, these techniques offer only a limited ability to correct for aberrations that vary spatially. When imaging thick biological specimens the index of refraction may vary both laterally and axially, making full correction difficult.

In a light field microscope, aberrations affect us in two ways. First, they cause light field samples to deviate from the nominal position and direction we expect for them, as shown in figure 12(a). We can correct for these deviations by resampling the light field as shown in 12(b).

Figure 12: Aberrations in a generalized plenoptic imaging system, having an object plane Π, objective lens Ob, image plane Π′ with microlenses, and light field plane Lf. (a) The subimage formed on Lf by one microlens records the different directions of light (gray paths) leaving a point (B) on the object plane. In the presence of aberrations, light field sample A on the periphery of that subimage maps (solid black path) to a laterally displaced point C. Adding this sample to the others in its subimage during synthetic focusing would produce a hazy image. What we actually wanted was the dashed path, which was recorded (at D) in the subimage formed by another microlens. (b) By selecting and summing light field samples from several microlenses, we can synthesize a full-aperture, aberration-free image. Drawing is not to scale.

by resampling the light field as shown in figure 12(b). The paths in 12(a) arise if the medium between π and Ob has a lower refractive index than expected, causing rays to bend excessively as they pass through the objective. In this case, D is to the left of A, and D is closer to the center of its microlens subimage than A is to the center of its subimage. If the medium had a higher refractive index than expected, D would be to the right of A and further from the center of its subimage than is A. In fact, it may fall outside the objective's exit pupil and be irretrievable. This unavoidably reduces the angular spread of rays captured.
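As an illustration of this ray selection strategy, here is a minimal sketch (the light field layout lf[s, t, u, v], with (s, t) the microlens index and (u, v) the pixel within each subimage, is our assumption; shift is a measured per-pupil-position displacement map in whole microlenses, and wrap-around at the field border is a simplification):

    import numpy as np

    def corrected_view(lf, shift):
        # shift[u, v] = (ds, dt): integer lateral displacement, in
        # microlenses, measured for pupil position (u, v).  Each angular
        # sample is read from the microlens where the aberrated ray was
        # actually recorded, rather than from its nominal microlens.
        S, T, U, V = lf.shape
        view = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                ds, dt = shift[u, v]
                view += np.roll(lf[:, :, u, v], (-int(ds), -int(dt)),
                                axis=(0, 1))
        return view / (U * V)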

The second effect of aberrations is that the light we record for each sample may itself be aberrant. Since a sensor pixel records an integral over space and angle, in the presence of aberrations our samples may represent integrals that are asymmetric with respect to their nominal position and direction or that overlap those of adjacent samples.

In this paper, we measure and correct for the first effect, but we ignore the second. Since aberrations are proportional to aperture size, and the aperture of a single light field sample is small, this simplification is justifiable. However, for severely aberrant imaging the intra-sample aberrations become significant. We will return to this issue later.

Figure 13: Lateral shifts due to aberrations. The specimen is a mirror, imaged using a Nikon 60×/1.0NA water dipping objective. The immersion medium is distilled water (a) or 3% glycerol (b). The illumination microlens array is f/25, and the imaging array is f/30. Scale bar is 5 µm. In row 1 we illuminate the center pixel (pupil position 0.0) in one projector microlens. This produces a ray parallel to the optical axis, which reflects from the mirror and is recorded by several imaging microlenses (due to their smaller pitch). A blowup of one microlens (yellow rectangle) is shown at right. As the illuminated pixel moves leftward, the illumination angle increases. This induces an opposite tilt in the reflection, causing it to move rightward in the imaging microlenses (rows 2-5). Eventually the angle exceeds the pupil diameter (row 6), and the light is blocked. Continuing leftward, we pass into an adjacent projector microlens (rows 7-12), producing a similar sequence in reverse. In glycerol the paraxial rays (rows 1-2) behave as in (a), but peripheral rays (rows 3-4) start shifting left (red arrows).


There are several ways we could measure the deviations of light field samples from their expected position and direction. Here are two methods we have tried:

Method #1: tracking guide stars

In terrestrial telescopes, aberrations due to refraction in the atmosphere can be reduced by projecting a laser guide star into the sky, observing it with a Shack-Hartmann wavefront sensor (Shack, 1971), and physically modulating the shape of the telescope's main reflector to improve convergence of the rays - a technique called adaptive optics. A Shack-Hartmann sensor is simply a microlens array focused on a sensor. In microscopy Beverage et al. (2002) have used a Shack-Hartmann sensor the same way, employing as a guide star a single fluorescent microsphere placed at the microscope's object plane. In a related approach, Pavani (2008) measures the deformation of a structured illumination pattern as it passes through a phase object.

In our method we create a grid of widely spaced "guide stars" by reflecting a sparse grid of spots produced by our LFI off a mirror or glass slide. After two passages through the specimen, we record images of these spots using our LFM. Figure 13 shows the light field captured for a single moving guide star under normal and aberrant conditions. By measuring the shift in rows 3-4 of 13(b) relative to 13(a), we can correct digitally for the aberrations using the ray selection strategy shown diagrammatically in figure 12.
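A minimal sketch of the shift measurement (hypothetical inputs: two crops of the same microlens subimage, one non-aberrant and one aberrant; centroid tracking and the crude background suppression are our simplifications, not the system's actual code):

    import numpy as np

    def spot_shift(subimage_ref, subimage_aber):
        # Estimate the lateral shift of a reflected guide-star spot by
        # comparing intensity centroids of the two captures.
        def centroid(img):
            img = img - img.min()          # crude background suppression
            ys, xs = np.indices(img.shape)
            total = img.sum()
            return np.array([(ys * img).sum() / total,
                             (xs * img).sum() / total])
        return centroid(subimage_aber) - centroid(subimage_ref)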

While similar in some respects to Beverage et al., our method differs in several ways. First, we do not characterize the complete PSF; we measure only lateral shifts. However, this simplification is justified by the relatively low resolution of images we can produce from our light fields. Second, our method permits aberrations to be measured simultaneously in each part of the field of view, although we have not yet implemented this. Third, rays deviate twice in our approach due to the mirror. This makes it twice as sensitive as observing a fluorescent microsphere.

Although the analogy with guide stars is alluring, this method of measuring aberrations has several problems. First, it requires tracking the lateral motion of a reflected spot over a potentially long sequence of frames. In the presence of severe aberrations, the reflection can shift substantially, causing it to be confused with the reflections from adjacent spots. More importantly, the reflection from a single projector pixel is weak, so its SNR is poor, especially if the medium exhibits scattering or attenuation.

Method #2: analyzing Gray code sequences

The calibration procedure in section 3 depends on finding correspondences between projector pixels and camera pixels after reflection from a mirror. In the presence of aberrations, axial rays will be unaffected, but marginal rays will be shifted laterally, leading them to strike the "wrong" camera pixel. Since we can predict the "right" pixel from the axial rays, we can measure these shifts, allowing us to solve for the refractive index of the medium as before.
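For reference, decoding a binary-reflected Gray code sequence into per-pixel stripe indices looks roughly like this (a sketch, not our calibration code; the per-pixel threshold, e.g. the mean of all-white and all-black captures, is an assumption):

    import numpy as np

    def decode_gray(bit_images, thresholds):
        # bit_images: camera images of the Gray code, most significant
        # bit first; thresholds: per-pixel black/white decision level.
        bits = [(img > thresholds).astype(np.uint32) for img in bit_images]
        # Gray-to-binary conversion: b[0] = g[0]; b[i] = b[i-1] XOR g[i]
        index = bits[0]
        for g in bits[1:]:
            index = (index << 1) | ((index & 1) ^ g)
        return index   # projector stripe index seen by each camera pixel

Comparing this index against the index predicted from the axial rays then gives the lateral ray shift at each camera pixel.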

This method has the important advantage over the guide star method that it can be performed simultaneously at every camera pixel with no danger of confusion. This provides robustness and good SNR. It also allows us to solve for the refractive index no matter how fast it changes across the field, limited only by the resolution of our camera. Of course, if we perform this measurement independently in every pixel, we trade off spatial resolution against SNR. In this paper, we demonstrate only the ability to measure the refractive index of a homogeneous medium; mapping a spatially varying index of refraction is left as future work.

Figure 14: The effect of aberrations on calibration. The specimen is a mirror to which a cover slip has been adhered using water, but leaving air bubbles in parts of the field. Illumination consists of the same Gray code as figure 6(c). Imaging is performed with a Nikon 60×/1.2NA water objective. The illumination microlens array is f/25, and the imaging array is f/20. (a) Non-aberrant imaging. Each camera microlens focuses at a point on the object plane π. In particular, the microlens outlined in yellow sees the boundary between a black and white stripe, as diagrammed below (not to scale). (b) Aimed at an air bubble. The core of paraxial rays still focuses on the stripe boundary, but marginal rays strike π at shifted positions, causing them to see other stripes, as shown in the diagram. The dots and inset image are explained in the text.


Experimental demonstration

To apply this method, we need only run our calibration procedure, then compare the results in axial rays and marginal rays. For a specimen consisting of air bubbles caught between a mirror and cover slip, figure 14 illustrates the procedure. When aimed at the bubbles, marginal rays undergo a strong lateral shift. Squinting at one of the microlens subimages in (b), one can imagine its annulus as a separate image, focused higher in z than the object plane π. A cartoon depicting this effect is inset at top. In reality no sharp boundary exists between the core and annulus.

Applying this procedure to the optical arrangement in figure 13 allows us to measure the ray shift seen by each camera pixel. These shifts are plotted as blue dots in figure 15.

[Figure 15 plot: lateral ray shift (in microlenses; equivalently, in microns on the specimen) versus position within the exit pupil. Legend: observations, n = 1.3442, n = 1.3447. Shifts between the dashed lines are not worth correcting.]

Figure 15: Measuring aberrations using Gray codes. The specimen and imaging are described in figure 13, but with 10% glycerol instead of 3%. The blue dots give the lateral ray shift observed in pixels taken from a transect through the microlens subimages, averaged across the field of view and plotted as a function of pixel position in the pupil. The dots are collectively positioned vertically so that the center dot (pupil position 0.0) is assigned zero shift on the assumption that axial rays are aberration-free. Moving away from this pixel, ray shifts increase, then decrease to zero again at pupil positions -0.3 and +0.3. These secondary zeros arise from our attempt to maximize sharpness by refocusing the microscope. Beyond these positions, ray shifts accelerate rapidly. The red curves are fits to these observations, as explained in the text.

In this experiment we employed only vertical stripes, and observations are made using pixels from a horizontal transect through the microlenses; similar results are obtained for horizontal stripes and a vertical transect. The geometry underlying these ray shifts is diagrammed in figure 16. Summarizing the formulas in that figure's caption, the predicted shift x′a for rays of a given ideal angle θ is

    x′a = za tan θ − (za − δz) tan[sin⁻¹((n/na) sin θ)]        (5)

Applying this equation to the present imaging situation, we obtain the dashed red curve in figure 15.
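In code, equation (5) is a direct transcription (function and parameter names are ours; angles in radians, lengths in consistent units):

    import numpy as np

    def predicted_shift(theta, n, n_a, z_a, dz):
        # Equation (5): lateral shift of a ray with ideal angle theta,
        # for design index n, actual index n_a, immersion column height
        # z_a, and manual refocus dz.
        theta_a = np.arcsin((n / n_a) * np.sin(theta))
        return z_a * np.tan(theta) - (z_a - dz) * np.tan(theta_a)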

Alternatively, assuming we do not know the refractive index na of the medium, we can solve for it using unconstrained nonlinear optimization. The function we minimized is the weighted sum of squared differences between the shifts predicted by equation (5) and the measured shifts (blue dots). The weighting we employed was a Gaussian tuned to match the observed falloff in light intensity towards the pupil boundary for this optical arrangement. Optimization was performed using Matlab's fminsearch function.

For this arrangement the minimum occurred at na = 1.3442, plotted as the solid red curve in figure 15. This agrees closely with the known value of 1.3447 (a difference of 0.0005). The entire procedure, including image capture, takes about one minute on a personal computer.
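In the same spirit, the fit can be sketched using SciPy's Nelder-Mead simplex method (the derivative-free method that fminsearch implements). This reuses predicted_shift from the sketch above; theta_of_pos, mapping normalized pupil position to ray angle, is an assumed helper, and all names are ours rather than the authors' code:

    import numpy as np
    from scipy.optimize import minimize

    def fit_index(pupil_pos, observed, weights, n, z_a, dz, theta_of_pos):
        # Weighted least squares between equation (5) and the measured
        # shifts (the blue dots), solved for the unknown index n_a.
        theta = theta_of_pos(pupil_pos)
        def cost(params):
            n_a = params[0]
            residual = predicted_shift(theta, n, n_a, z_a, dz) - observed
            return np.sum(weights * residual ** 2)
        return minimize(cost, x0=[n], method="Nelder-Mead").x[0]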

Despite this agreement, the observed data (blue dots) do not agree well with the model curves in the extreme periphery of the exit pupil. We ascribe these discrepancies to incomplete subtraction of scattered light. In addition, we believe there is slight miscalibration of projector pixels relative to camera pixels for axial (non-aberrant) rays. We represent this error on the figure as horizontal error bars of length ±0.5 pixels, or equivalently ±0.05 of pupil diameter.

Whether we measure the refractive index of a specimen experimentally or we know it a priori, we can employ the ray selection strategy diagrammed in figure 12 to correct digitally for aberrations. As an example, we replaced the mirror used in figures 13 and 15 with a micrometer slide, focused the microscope on the scale markings to maximize image sharpness, captured a light field using our camera, and used the ray shifts given by the dashed curve in figure 15 to improve the quality of focus synthetically.
Figure 16: A dipping objective used with the wrong immersion medium. M is the objective's first glass surface and π is the object plane. (a) Normally, rays leaving the objective converge at point A on π. If the immersion medium has index of refraction na instead of n, then a marginal ray, which normally makes an angle θ with the optical axis, will instead make an angle θa = sin⁻¹((n/na) sin θ). This causes it to strike π at B instead of A. In this example, na > n, so θa < θ. If za is the height of the immersion column, then the shift (from A to B) is xa = za tan θ − za tan θa. (b) To measure these shifts empirically, we must focus the microscope as best we can despite the aberrations. If for example we move the stage down by δz, so that paraxial rays (dotted lines) come to an approximate focus at C, then our marginal ray strikes π at D instead of B, and its shift is reduced by δx = δz tan θa. If na were less than n, the signs of xa, δz, and δx would be reversed.


The results of this experiment are shown in figure 17. Note the microlens subimages in the visualization for (b); they are distorted as in figure 14(b). Comparing the yellow highlighting in (c) to the diagram in figure 12(b), one can see how rays are being borrowed from nearby microlenses to improve the focus. In this case, rays were borrowed from the inner edges of microlenses. If the index of refraction had been lower rather than higher than expected, these rays would have been borrowed from the outer edges instead. Note also that the ray borrowing is half as extensive laterally as the ray shifts in figure 15, because those shifts reflect two passages through the aberrant medium rather than one. Finally, note that the sampling required to correct aberrations is quite different from the sampling required to perform focusing in the absence of aberrations (figure 10(b)). In focusing, relatively few samples are drawn from any one microlens, while in aberration correction most samples are drawn from the central microlens. These samples are the paraxial rays, which suffer little aberration.

Discussion

While this experiment proves that we can measure and correct for aberrations, the improvement was modest. Why is this? When a user manually refocuses a microscope in an aberrant situation to maximize image sharpness, this reduces the magnitude of ray shifts. This reduction appears in figure 15 as a downward shear of the observed data for positive pupil positions and upward for negative positions. In mildly aberrant situations, this shear causes rays in the inner part of the pupil to shift by less than one microlens, indicated by the zone between the horizontal dashed lines. Since each microlens corresponds to one pixel in the final view, shifts smaller than a microlens are barely worth correcting. The outer part of the pupil is still worth correcting, and we have shown the utility of doing so. Discarding this part of the pupil after recording the light field is not a good alternative, since it reduces SNR. However, most of the energy passing through a microscope objective is paraxial, so the improvement we obtain is modest.

In severely aberrant situations, we must recall that light field samples are not abstract rays; they are integrals over space and direction. The PSF of a synthetically focused view, like that of the underlying microscope, is hourglass-shaped with its waist at the plane of focus. If this plane coincides with the microscope's natural object plane, we have the situation in figure 17(f). The upper half of this PSF is depicted by a conical bundle of gray lines. The half-PSF of one light field sample (shaded area) is the same shape, but slenderer by a factor equal to the number of pixels behind each microlens (tick marks). In an aberrant situation (figure 17(g)), paraxial sample A is unaffected, but the focus of peripheral sample B is shifted both laterally and axially. The procedure described in this paper can correct only for the lateral shift; the axial shift causes light field samples to see a blurry view of the object plane π.

Figure 17: Digital correction of aberrations. The specimen is a micrometer slide (Edmunds M53-713), imaged as in figure 13. The minor scale ticks are 10 µm. The synthetic views at top are focused on the ticks. At middle are blowups and line plots. At bottom are portions of the light field, showing which rays (yellow) were summed to compute the pixel under the crosshair at top. (a) Distilled water. The ticks exhibit a contrast (as defined in figure 7) of 54%. (b-c) 10% glycerol (na = 1.3447), without and with correction for aberrations. Contrast is 25% and 39%, respectively (an improvement of 56%). The visualization at bottom shows that rays have been drawn from several microlenses to form the corrected focus in (c). (d-e) A failure case: 40% glycerol (na = 1.3895), without and with correction for aberrations. Contrast is 12% and 10%, respectively. See text for explanation of (f-i).


To illustrate this secondary blur, figure 17(h) is a view of the same light field as in (d), but constructed using only the middle pixel in each microlens, i.e. the red dots in figure 14(b), and (i) is constructed using only pixels from pupil position (-0.44, 0), the yellow dots in 14(b). These views were called "imagelets" in section 3. Note the horizontal blur in (i); these samples will be of little use in correcting aberrations. To reduce this blur, one must increase the angular resolution of the light field. However, the space-angle bandwidth product is limited by diffraction, so this requires a sacrifice in spatial resolution, which in turn widens the zone (dashed lines in figure 15) inside which it is not worth performing correction. It may be fruitful to investigate aspherical microlenses, which could provide finer angular discrimination in the periphery of the aperture without sacrificing spatial resolution.

In addition to improving image quality, correcting aberrations should increase the accuracy of volume data reconstructed from light fields using synthetic focusing followed by 3D deconvolution (Levoy, 2006). Discarding aberrant marginal rays is likely to be a markedly inferior solution in this application, because 3D deconvolution quality is known to depend critically on high numerical aperture. Indeed, our proof in (Levoy, 2006) that synthetic focusing followed by 3D deconvolution is equivalent to limited-angle tomography underscores the importance of maximizing the angular range of rays.

Finally, our correction methodology could lead to new designs for microscope objectives, in which optical performance is relaxed with respect to aberrations that can be corrected using microlenses, thereby allowing performance to be optimized in other respects, as suggested by Ng (2006).

Figure 18: Scatterometry using a light field microscope. The specimen is a live skin patch from the longfin inshore squid Loligo pealeii. Imaging was performed using a 20×/0.5NA objective and an f/20 microlens array. Scale bar is 100 µm. (a) Oblique views 15° (top) and 30° (bottom) from the optical axis, computed from a single light field captured under reflected light. Note that the centermost iridescent spot (iridophore) turns from green to blue. (b) Moving a physical pinhole across the condenser aperture, one can change the direction of incident illumination. The polar plots at top show the angular distribution of reflected light for 9 incident light directions (white stars and red line) ranging from axial (at left) to 30° (at right). Each plot was computed by summing several microlens subimages in a non-iridescent part of the skin. The yellow line is a linear fit to the path taken by the specular highlight. The green line and circle in the rightmost image show the estimated surface normal, halfway between the light and highlight. The surface thus points slightly to the left. The plots at bottom show the light reflected from the centermost iridophore in (a) after highlight subtraction and contrast expansion. At normal incidence, the iridescence is red-orange and preferentially to the right. As the illumination declines to the north, the iridescence becomes green, then blue, and rotates towards the highlight direction.

6. Conclusions and future work

In this paper, we have described a system for controlling microscope illumination in space and angle. Coupled with our light field microscope, we have demonstrated two applications for this system: programmable illumination and correction of aberrations. The major limitation of our system is that spatial resolution is sacrificed to obtain angular resolution. This limits the precision with which we can direct and focus illumination, and it limits the degree to which we can correct for aberrations. A secondary limitation is the difficulty of obtaining a light source that is uniform in both space and angle. In our prototype we used a reflective light pipe, but this was not entirely successful.

Regarding future work, we have measured the improvement in contrast obtained by restricting illumination to a spot at the microscope's natural object plane; we now need to measure the improvement obtainable for other planes. Similarly, we have corrected for aberrations arising from a mismatch in the index of refraction that is homogeneous across the field, and we have argued that our technique also works if the index varies locally in X and Y; we now need to verify this argument. (It is less clear whether our technique can be used to measure 3D changes in refractive index.) Finally, we need to build a model predicting for which objectives, microlenses, and specimens one can obtain worthwhile correction.

Since inserting microlenses into the illumination path of a microscope has not previously been done, it is not surprising that many potential applications remain untapped. Here are three ideas:

Scattering light field microscope. The bidirectional reflectance distribution function (BRDF) gives reflectance as a function of incident and reflected light direction (Nicodemus, 1977). Kaplan (1999) has recorded 2D slices of this 4D function by imaging an objective's back focal plane. Our system can generate 4D incident light fields and record 4D reflected light fields. Thus, we can measure reflectance as a function of incident and reflected position and direction. This 8D function is called the bidirectional scattering-surface reflectance distribution function (BSSRDF). While our spherical gantry (Levoy, 2002) can measure this function for macroscopic objects, ours is the first system that can measure it microscopically. We have not yet tried this, but figure 18 shows how it might work - using an LFM and a mockup of the angular control provided by an LFI to examine the spatio-angular dependence of scattering from iridescent squid skin.

Reconstructing the 3D shape of opaque objects. While most objects are partially transparent at the microscopic scale, thick specimens viewed at low magnification are largely opaque. At these scales shape cannot be reconstructed using 3D deconvolution; instead we must employ methods from computer vision. One example is shape-from-focus (Nayar, 1990; Noguchi, 1994). Since our LFI can illuminate specimens with sheets of light from any angle, and our LFM can compute views from any direction, we should be able to digitize 3D shape using structured-light triangulation rangefinding (Levoy, 2000). Combining range data with BSSRDFs could lead to new appearance models for complex materials such as bird feathers.

General spatio-angular control over illumination. The ability to independently specify field and aperture masks, and to refocus illumination, gives us great flexibility when designing incident light fields. Figure 19 provides an intuition for what this flexibility looks like - using the LFI to create a light pattern and the LFM to visualize its passage through a fluorescent liquid. What might this flexibility be good for? Using an array of video projectors, we previously demonstrated the ability to illuminate an object hiding behind foliage while not illuminating the foliage itself (Levoy, 2004). An analogous application in microscopy is optical stimulation of a single neuron while avoiding stimulating nearby neurons. Alternatively, one can imagine an interactive 4D paint program, in which the user outlines regions of interest in the field of view, then paints for each region a mask specifying from which directions light should arrive at that region. In both applications, one should be able to correct for aberrations while planning the illumination, using the methods described in this paper.
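As a sketch of how such an incident light field might be specified (the array layout is our assumption, echoing the field mask Mf and aperture mask Ma of equation (3)): each projector microlens displays the aperture mask, scaled by the field mask value at that microlens.

    import numpy as np

    def lfi_pattern(field_mask, aperture_mask):
        # field_mask: (S, T) over microlenses (positions in the field);
        # aperture_mask: (U, V) over pixels within a microlens
        # (directions in the condenser pupil).
        return field_mask[:, :, None, None] * aperture_mask[None, None, :, :]

For figure 19(a), field_mask would be three vertical stripes and aperture_mask two pinholes; refocusing the illumination, as in 19(d), amounts to shearing this pattern as in the earlier sketch.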

In closing, we note that there is a large body of literature on the use of coherent or partially coherent illumination to measure 3D structure (Poon, 1995), phase (Barty, 1998), or refractive index (Charrière, 2006; Choi, 2007), to correct for aberrations (Neil, 2000), or to create illumination patterns in 3D (Piestun, 2002; Nikolenko, 2008). The techniques described in (Levoy, 2006) and the present paper employ microlens arrays and incoherent light for similar aims. A comparison needs to be undertaken between these approaches. A useful starting point for this comparison might be to study the relationship between the Wigner distribution, a wave optics representation, and the light field (Zhang, 2009). Our expectation is that each approach will prove to have advantages and disadvantages.²

² High-resolution images, light fields, and videos related to this work can be found at http://graphics.stanford.edu/papers/lfmicroscope/.

Figure 19: Visualizing incident light fields. The specimen is a chamber filled with 100 µM fluorescein isothiocyanate in 1M Tris, imaged using a 20×/0.75NA objective. The illumination microlens array was f/10, and the imaging microlens array was f/13. Scale bar is 100 µm. (a) The field mask at bottom (Mf in equation (3)) is three vertical stripes, and the aperture mask at top (Ma) is two pinholes. (b) The resulting pattern displayed on our projector. (c) An oblique view from the captured light field. The three fluorescing shapes correspond to the three stripes, and the two sheets per stripe show that light is arriving from two directions, corresponding to the pinholes. (d) The pattern was altered to refocus the illumination 25 µm lower. (e) The sheets now intersect inside the chamber, as indicated by the superimposed drawings.

Acknowledgements

This research was supported by the National Science Foundation under award 0540872, the Stanford University Office of Technology Licensing, and the Stanford University School of Engineering. We thank Matthew Footer and Mickey Pentecost for several of the specimens used in this paper, Shinya Inoue for the rat brain, Oliver Hamann for the L-glutamic acid, and Lydia Mathger for the squid. Finally, we thank Mark Horowitz, Ren Ng, Julie Theriot, Matthew Footer, Stephen Smith, Mark Schnitzer, and Rudolf Oldenbourg for many useful discussions, and our anonymous reviewers for several useful suggestions.

References

Adelson, T., Wang, J.Y.A. (1992) Single lens stereo with a plenoptic camera. Trans. Pattern Analysis and Machine Intelligence 14(2), 99-106.
Agard, D.A. (1984) Optical sectioning microscopy: Cellular architecture in three dimensions. Ann. Rev. Biophys. Bioeng. 13, 191-219.
Albert, O., Sherman, L., Mourou, G., Norris, T.B. (2000) Smart microscope: an adaptive optics learning system for aberration correction in multiphoton confocal microscopy. Optics Letters 25(1), 52-54.
Barty, A., Nugent, K.A., Paganin, D., Roberts, A. (1998) Quantitative optical phase microscopy. Optics Letters 23(11), 817-819.
Beverage, J., Shack, R., Descour, M. (2002) Measurement of the three-dimensional microscope point spread function using a Shack-Hartmann wavefront sensor. J. Microsc. 205, 61-75.
Bitner, J.R., Erlich, G., Reingold, E.M. (1976) Efficient generation of the binary reflected Gray code and its applications. CACM 19(9), 517-521.
Booth, M.J., Neil, M.A.A., Juskaitis, R., Wilson, T. (2002) Adaptive aberration correction in a confocal microscope. Proc. Nat. Acad. of Sciences (PNAS) 99(9), 5788-5792.
Chamgoulov, R.O., Lane, P.M., MacAulay, C.E. (2004) Optical computed-tomography microscope using digital spatial light modulation. SPIE 5324, 182-190.
Charrière, F., Marian, A., Montfort, F., Kuehn, J., Colomb, T., Cuche, E., Marquet, P., Depeursinge, C. (2006) Cell refractive index tomography by digital holographic microscopy. Optics Letters 31(2), 178-180.
Choi, W., Fang-Yen, C., Badizadegan, K., Oh, S., Lue, N., Dasari, R.R., Feld, M.S. (2007) Tomographic phase microscopy. Nature Methods 4, 717-719.
Gershun, A. (1936) The Light Field. Translated by P. Moon and G. Timoshenko in J. Mathematics and Physics XVIII (1939), 51-151.
Halbach, K. (1964) Matrix Representation of Gaussian Optics. American Journal of Physics 32(2), 90-108.
Hanley, Q.S., Verveer, P.J., Gemkow, M.J., Arndt-Jovin, D., Jovin, T.M. (1999) An optical sectioning programmable array microscope implemented with a digital micromirror device. J. Microsc. 196, 317-331.
Isaksen, A., McMillan, L., Gortler, S.J. (2000) Dynamically reparameterized light fields. Proc. SIGGRAPH 2000, 297-306.
Jones, A., McDowall, I., Yamada, H., Bolas, M., Debevec, P. (2007) Rendering for an Interactive 360° Light Field Display. ACM Trans. on Graphics (Proc. SIGGRAPH) 26(3), 40.
Kam, Z., Kner, P., Agard, D., Sedat, J.W. (2007) Modelling the application of adaptive optics to wide-field microscope live imaging. J. Microsc. 226, 33-42.
Kaplan, P.D., Trappe, V., Weitz, D.A. (1999) Light-scattering microscope. Applied Optics 38(19), 4151-4157.
Kingslake, R. (1983) Optical system design. Academic Press.
Levoy, M. (1988) Display of Surfaces from Volume Data. Computer Graphics and Applications 8(3), 29-37.
Levoy, M., Hanrahan, P. (1996) Light field rendering. Proc. SIGGRAPH 1996, 31-42.
Levoy, M., et al. (2000) The Digital Michelangelo Project: 3D scanning of large statues. Proc. SIGGRAPH 2000, 131-144.
Levoy, M. (2002) The Stanford Spherical Gantry. http://graphics.stanford.edu/projects/gantry/.
Levoy, M., Chen, B., Vaish, V., Horowitz, M., McDowall, I., Bolas, M. (2004) Synthetic aperture confocal imaging. ACM Trans. on Graphics (Proc. SIGGRAPH) 23(3), 825-834.
Levoy, M., Ng, R., Adams, A., Footer, M., Horowitz, M. (2006) Light field microscopy. ACM Trans. on Graphics (Proc. SIGGRAPH) 25(3), 924-934.
Levoy, M. (2006) Optical recipes for light field microscopes. Stanford Computer Graphics Laboratory Technical Memo 2006-001.
Levoy, M. (2006) Light fields and computational imaging. IEEE Computer 39(8), 46-55.
Marschner, S.R., Jensen, H.W., Cammarano, M., Worley, S., Hanrahan, P. (2003) Light Scattering from Human Hair Fibers. ACM Trans. on Graphics (Proc. SIGGRAPH) 22(3), 780-791.
Nayar, S.K., Nakagawa, Y. (1990) Shape from focus: An effective approach for rough surfaces. Proc. International Conference on Robotics and Automation (ICRA), 218-225.
Neil, M.A.A., Wilson, T., Juskaitis, R. (2000) A wavefront generator for complex pupil function synthesis and point spread function engineering. J. Microsc. 197, 219-223.
Ng, R. (2005) Fourier slice photography. ACM Trans. on Graphics (Proc. SIGGRAPH) 24(3), 735-744.
Ng, R., Levoy, M., Bredif, M., Duval, G., Horowitz, M., Hanrahan, P. (2005) Light Field Photography with a Hand-Held Plenoptic Camera. Stanford Technical Report CTSR 2005-02.
Ng, R., Hanrahan, P. (2006) Digital correction of lens aberrations in light field photography. SPIE 6342.
Nicodemus, F., Richmond, J., Hsia, J., Ginsberg, I., Limperis, T. (1977) Geometrical Considerations and Nomenclature for Reflectance. National Bureau of Standards.
Nikolenko, V., Watson, B.O., Araya, R., Woodruff, A., Peterka, D.S., Yuste, R. (2008) SLM Microscopy: Scanless Two-Photon Imaging and Photostimulation with Spatial Light Modulators. Front. Neural Circuits 2(5).
Noguchi, M., Nayar, S. (1994) Microscopic shape from focus using active illumination. Proc. IAPR International Conference on Pattern Recognition (ICPR) A, 147-152.
Oldenbourg, R. (2008) Polarized light field microscopy: an analytical method using a microlens array to simultaneously capture both conoscopic and orthoscopic views of birefringent objects. J. Microsc. 231, 419-432.
Pavani, S.R.P., Libertun, A.R., King, S.V., Cogswell, C.J. (2008) Quantitative structured-illumination phase microscopy. Applied Optics 47(1), 15-24.
Piestun, R., Shamir, J. (2002) Synthesis of Three-Dimensional Light Fields and Applications. Proc. IEEE 90(2).
Pluta, M. (1988) Advanced Light Microscopy (3 vols). Elsevier.
Poon, T.-C., Doh, K.B., Schilling, B.W., Wu, M.H., Shinoda, K., Suzuki, Y. (1995) Three-dimensional microscopy by optical scanning holography. Optical Engineering 34(5), 1338-1344.
Rusinkiewicz, S., Hall-Holt, O., Levoy, M. (2002) Real-Time 3D Model Acquisition. ACM Trans. on Graphics (Proc. SIGGRAPH) 21(3), 438-446.
Samson, E.C., Blanca, C.M. (2007) Dynamic contrast enhancement in widefield microscopy using projector-generated illumination patterns. New Journal of Physics 9(363).
Shack, R.V., Platt, B.C. (1971) Production and use of a lenticular Hartmann screen. J. Opt. Soc. Am. 61, 656.
Sherman, L., Ye, J., Albert, O., Norris, T. (2002) Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror. J. Microsc. 206, 65-71.
Smith, P.J., Taylor, C.M., Shaw, A.J., McCabe, E.M. (2000) Programmable array microscopy with a ferroelectric liquid-crystal spatial light modulator. Applied Optics 39(16), 2664-2669.
Vaish, V., Garg, G., Talvala, E., Antunez, E., Wilburn, B., Horowitz, M., Levoy, M. (2005) Synthetic aperture focusing using a shear-warp factorization of the viewing transform. Proc. CVPR 2005.
Wilburn, B., Joshi, N., Vaish, V., Talvala, E., Antunez, E., Barth, A., Adams, A., Levoy, M., Horowitz, M. (2005) High Performance Imaging Using Large Camera Arrays. ACM Trans. on Graphics (Proc. SIGGRAPH) 24(3), 765-776.
Wright, A.J., Poland, S.P., Girkin, J.M., Freudiger, C.W., Evans, C.L., Xie, X.S. (2007) Adaptive optics for enhanced signal in CARS microscopy. Optics Express 15(26), 222-228.
Zhang, Z., Levoy, M. (2009) Wigner Distributions and How They Relate to the Light Field. Proc. IEEE International Conference on Computational Photography (ICCP).
