
MULTISPECTRAL IMAGING COURSE PROJECT FOR AS-75.2128 IMAGING AND DISPLAY TECHNOLOGY HELSINKI UNIVERSITY OF TECHNOLOGY (TKK)

PETTERI TEIKARI ([email protected]) VERSION 6/4/2008

1 INTRODUCTION

2 MULTISPECTRAL IMAGING

3 APPLICATIONS

3.1 Biometric pattern recognition
3.1.1 Face recognition
3.1.2 Iris analysis

3.2 Fluorescence microscopy
3.3 Retinal physiology
3.4 Non-image forming (NIF) visual responses in humans
3.4.1 Personal dosimeters
3.4.2 Circadian-weighted luminance photometers
3.4.3 Digital photography

3.5 Plant physiology
3.5.1 Multispectral imaging in the food industry

3.6 Prospective uses of multispectral imaging

4 CONCLUSION

5 REFERENCES

Picture from: http://probes.invitrogen.com/servlets/photohigh?fileid=g004686&company=probes


MULTISPECTRAL IMAGING PETTERI TEIKARI, 2007


1 INTRODUCTION Multispectral imaging refers to the use of several different spectral bands when acquiring images. The most common application for multispectral imaging has been remote sensing, more precisely the imaging of ultraviolet (UV) and infrared (IR) bands alongside normal RGB-based visible-light imaging. Recently, however, multispectral imaging has been used in many new applications, which are discussed in this work with emphasis on methods for quantifying biological processes. These applications include biometric pattern recognition (face, fingerprint, iris), fluorescence microscopy, retinal physiology, non-image forming (NIF) responses to light, and plant physiology, among others. The work starts with a brief overview of multispectral imaging (chapter 2), followed by an overview of the abovementioned applications. The review is limited to the basic principles; details such as algorithms and protein-dependent emission spectra have been left out.

2 MULTISPECTRAL IMAGING Spectral imaging combines the strengths of conventional imaging with those of spectroscopy to accomplish tasks that neither can perform separately [1,2]. The product of a spectral imaging system is a stack of images of the same object or scene, each at a different narrow spectral band. The field is divided into techniques called multispectral, hyperspectral, and ultraspectral. While no formal definition exists, the difference is usually based on the number of bands. Multispectral imaging deals with several images (from 6 to 31) at discrete and fairly narrow bands, which is what distinguishes it in the visible range from conventional red-green-blue (RGB) image capture. Hyperspectral imaging deals with narrow spectral bands (up to 100) over a contiguous spectral range. Ultraspectral is typically reserved for interferometer-type imaging sensors with very fine spectral resolution and more than 100 bands; these sensors often have a low spatial resolution of only several pixels, a restriction imposed by the high data rate. Such devices, when combined with computational image-processing algorithms [3], can produce the spectra of all the pixels in the scene [4]. Traditionally, multispectral imaging has been associated with remote sensing (taking aerial photos from satellites), of which an example is shown in Figure 1 [5]. Recently, however, the knowledge gained from remote sensing has been applied to other fields that can benefit from multispectral imaging [6]. These applications include recovering fluorescent/skylight spectra [4,7,8], artwork analysis [9,10], document authentication [11], and skin burn monitoring [12], among other

Figure 1. Aerial multispectral image of Amsterdam (the Netherlands) acquired by NASA's Landsat Thematic Mapper. (left) Spectral bands 3,2,1; (center) spectral bands 4,3,2; (right) spectral bands 7,4,3. Band 1: 450–520 nm, resolution 30 m; Band 2: 520–600 nm, 30 m; Band 3: 630–690 nm, 30 m; Band 4: 760–900 nm, 30 m; Band 5: 1550–1750 nm, 30 m; Band 6: 10400–12500 nm, 120 m; Band 7: 2080–2350 nm, 120 m. Images can be further processed, for example with the MultiSpec program [5].
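As a concrete illustration of the "stack of images" described above, a multispectral capture is conveniently handled as a three-dimensional data cube with one image per band; the spectrum of any pixel is then simply a slice through the bands. A minimal sketch in Python/NumPy (the band centers are illustrative, loosely following the Landsat TM bands of Figure 1):

```python
import numpy as np

# Illustrative band centers in nm for a 7-band capture (not a real instrument spec)
BAND_CENTERS_NM = [485, 560, 660, 830, 1650, 11450, 2215]

def make_cube(images):
    """Stack per-band grayscale images into a (bands, H, W) data cube."""
    return np.stack([np.asarray(im, dtype=float) for im in images], axis=0)

def pixel_spectrum(cube, row, col):
    """Return the spectrum (one value per band) of a single pixel."""
    return cube[:, row, col]

# Example with 7 synthetic 4x4 band images
rng = np.random.default_rng(0)
bands = [rng.random((4, 4)) for _ in BAND_CENTERS_NM]
cube = make_cube(bands)
spec = pixel_spectrum(cube, 2, 3)  # spectrum at pixel (2, 3), one value per band
```

The same layout scales directly from the 6–31 bands of multispectral imaging to the 100+ bands of hyper- and ultraspectral sensors; only the first axis grows.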


applications. In the next chapter, some applications of multispectral imaging are reviewed, emphasizing those related to biology. Several methods exist for filtering the incoming light so that a multispectral image can be composed. The simplest and most inexpensive solution is to use a monochrome camera and manually place different color filters in front of its sensor. However, this is rarely done in real-world situations where more money can be invested in the equipment. Monochromators are among the most commonly used wavelength-selection devices. A typical monochromator utilizes a single grating for angular dispersion and spectral separation, and monochromators are used in all spectral regions from the UV to the IR [13]. A more advanced way to filter light is to use tunable filters, which can be controlled electrically and, above all, switched very fast compared to the manual method, improving the temporal resolution of the measurement. Tunable filters can be roughly divided into two categories: 1) liquid crystal tunable filters (LCTF) [14-16] and 2) acousto-optic tunable filters (AOTF) [13,17]. The two operate on different principles, but both are capable of rapid wavelength selection, with microsecond to millisecond tuning speeds, while preserving imaging integrity. An AOTF is a diffraction-based optical band-pass filter that can be rapidly tuned to pass various wavelengths of light by varying the frequency of an acoustic wave propagating through an anisotropic crystal medium. An LCTF is a birefringent filter that uses the retardation in phase between the ordinary and extraordinary light rays passing through a liquid crystal to create constructive and destructive interference, passing a single wavelength of light. By combining several electronically tunable stages in series, high spectral resolution can be achieved (e.g., 8 cm⁻¹) [18]. The choice of filter type is in most cases application-dependent. An example of the spectral sensitivity of an LCTF setup is shown in Figure 2.

Figure 2. Spectral transmittances of the narrow-band (left) and wideband (right) setups of the LCTF when varying the peak wavelength in 10-nm steps from 400 to 720 nm [16].
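The filter-based acquisition described above amounts to a simple loop: tune the filter to a band, grab a frame, repeat. A minimal sketch, where `TunableFilter` and `Camera` are hypothetical stand-ins for real LCTF/AOTF and camera drivers:

```python
import numpy as np

class TunableFilter:
    """Stand-in for an LCTF/AOTF driver; tune() selects the pass band.
    Real hardware tunes in milliseconds (LCTF) to microseconds (AOTF)."""
    def __init__(self):
        self.wavelength_nm = None
    def tune(self, wavelength_nm):
        self.wavelength_nm = wavelength_nm

class Camera:
    """Stand-in for a monochrome camera returning a 2-D frame."""
    def __init__(self, shape=(4, 4), seed=0):
        self._rng = np.random.default_rng(seed)
        self.shape = shape
    def grab(self):
        return self._rng.random(self.shape)

def acquire_multispectral(filter_, camera, wavelengths_nm):
    """Step the filter over the requested bands; return a (bands, H, W) cube."""
    frames = []
    for wl in wavelengths_nm:
        filter_.tune(wl)
        frames.append(camera.grab())
    return np.stack(frames)

# 400-720 nm in 10-nm steps, mirroring the LCTF sweep of Figure 2 (33 bands)
cube = acquire_multispectral(TunableFilter(), Camera(), range(400, 730, 10))
```

Because the filter is switched electrically, the loop's temporal resolution is limited mainly by the tuning speed and the camera exposure, not by any mechanical filter change.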


3 APPLICATIONS In this chapter some applications of multispectral imaging are reviewed. The focus is on biological applications; within the scope of this work it would be impossible to review all existing applications of multispectral imaging.

3.1 BIOMETRIC PATTERN RECOGNITION Multispectral imaging has not yet been used extensively in biometric analysis. Most of the existing studies have addressed face recognition, whereas iris [39] and fingerprint [19] recognition have received only little attention. In this chapter, the principles of face recognition are briefly reviewed, accompanied by a longer discussion of iris analysis, as the latter can be of value when designing pupillometry setups [20].

3.1.1 FACE RECOGNITION Face recognition technology has a wide range of potential applications related to security and safety, including surveillance, information security, access control, and identity fraud. Considerable progress has been made in face recognition over the last decade, especially with the development of powerful models of face appearance (i.e. eigenfaces [21]; Figure 6, Figure 7). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Several factors affect face recognition performance, including pose variations, facial expression changes, occlusions, and, most importantly, illumination changes. Recently, a number of studies have demonstrated that thermal IR offers a promising alternative to visible imagery for handling variations in face appearance due to illumination changes [22] (Figure 4A-B, D-E), facial expression [23,24] (Figure 4C, F), and face pose [24]. In particular, thermal IR imagery is nearly invariant to changes in ambient illumination [22,23] and enables identification under all lighting conditions, including total darkness (Figure 5) [25]. Thus, while visible-based algorithms must seek purely algorithmic solutions to inherent phenomenology problems, IR-based algorithms have the potential to offer simpler and more robust solutions, improving performance in uncontrolled environments and under deliberate attempts to obscure identity [26]. Despite its robustness to illumination changes, IR has several drawbacks. First, it is sensitive to temperature changes in the surrounding

Figure 3. Changes in the thermal appearance of the face in the presence of eyeglasses and with body temperature changes. (A) and (B) wearing glasses. (C) and (D) after physical exercise [28].

Figure 5. Face images taken under very low lighting. (A) Visual image. (B) Corresponding thermal IR image [28].

Figure 4. Comparison of visual and thermal IR images under variations in illumination and facial expression. (A) and (B) visual face images with different illumination directions. (C) Different facial expression. (D), (E), and (F) are the thermal images corresponding to (A), (B), and (C) [28].


environment. Currents of cold or warm air can influence the performance of systems using IR imagery; as a result, IR images should be captured in a controlled environment. Second, it is sensitive to variations in the heat patterns of the face (Figure 3C-D). Factors contributing to these variations include facial expressions (e.g. open mouth), physical conditions (e.g. lack of sleep), and psychological conditions (e.g. fear, stress, excitement). Finally, glass is opaque to thermal IR; as a result, a large part of the face might be occluded (e.g. by eyeglasses, Figure 3A-B). In contrast to IR, visible imagery is more robust to the above factors but more sensitive to illumination changes [27]. Face recognition in the IR spectrum has received relatively little attention compared to the visible spectrum, mostly because of the high cost of IR sensors and the lack of IR data sets, although a number of recent studies have shown that face recognition in the IR offers many benefits [28]. In the past, IR and visible image fusion has been used successfully for visualization purposes [29,30], especially in remote sensing. Choosing an appropriate fusion scheme is both application- and data-dependent. In the context of face detection, infrared cameras measure the heat energy emitted by the face while optical cameras capture the light reflected by the face surface. As the reflectance of the face surface and its temperature have little in common, the information in the IR and visible images is largely independent and complementary. In general, multi-resolution fusion is performed by applying a multiscale transform to each source image, constructing a composite multiscale representation according to specific fusion rules, and obtaining the fused image by taking the inverse multiscale transform [31]. Multiscale face representations have been used in several systems [32]. Some of the most popular multiscale techniques include the Laplacian pyramid [33], the Fourier transform, and wavelets [34]. High frequencies are relatively independent of global changes in the illumination, while the low frequencies take into account

Figure 6. Eigenfaces for thermal face images. (A) A set of thermal IR face images for training. (B) Eigenfaces created from the training set [28].

Figure 7. Computation of the eigenfaces from a set of face images. (A) Sample training set. (B) Eigenfaces [28].
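The eigenface computation of Figures 6 and 7 is, at its core, principal component analysis of the flattened training images. A minimal sketch using the SVD (the array shapes are illustrative, and this is a sketch of the general technique rather than the exact pipeline of [28]):

```python
import numpy as np

def eigenfaces(faces, n_components):
    """Compute eigenfaces from a training set.

    faces: (n_images, H, W) array (visible or thermal IR alike).
    Returns (mean_face, components) where components has shape
    (n_components, H*W), ordered by decreasing explained variance.
    """
    n, h, w = faces.shape
    X = faces.reshape(n, h * w).astype(float)
    mean = X.mean(axis=0)
    # SVD of the mean-centred data; the rows of Vt are the eigenfaces
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, components):
    """Represent a face by its coordinates in eigenface space."""
    return components @ (face.ravel().astype(float) - mean)

# Example with synthetic 8x8 "faces"
rng = np.random.default_rng(1)
train = rng.random((10, 8, 8))
mean, comps = eigenfaces(train, n_components=4)
coords = project(train[0], mean, comps)
```

Recognition then reduces to comparing the low-dimensional coordinate vectors (e.g. by nearest neighbor) instead of the raw pixel images.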


the spatial relationships among the pixels and are less sensitive to noise and small changes (e.g. facial expression). However, despite recent advances [28] in IR face recognition, there still exists room for improvement. Additional issues for future research include investigating the effects of environmental (e.g. temperature changes), physical (e.g. lack of sleep), and psychological conditions (e.g. fear, stress) on IR performance.
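The multiscale fusion scheme outlined above can be sketched as follows. For simplicity this uses an undecimated multiscale decomposition (blur-and-subtract at each level, which sums back to the original exactly) rather than a full Laplacian pyramid, with a choose-max rule on the detail planes and averaging of the low-frequency bases; it illustrates the general idea, not the exact method of [31]:

```python
import numpy as np

def _blur(img):
    """Separable 5-tap binomial low-pass filter with reflect padding."""
    k = np.array([1., 4., 6., 4., 1.]) / 16.
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda v: np.convolve(np.pad(v, 2, mode="reflect"), k, mode="valid"),
            axis, img)
    return img

def decompose(img, levels=3):
    """Undecimated multiscale decomposition: detail planes plus a base.
    The returned planes sum back to the original image exactly."""
    planes, cur = [], img.astype(float)
    for _ in range(levels):
        low = _blur(cur)
        planes.append(cur - low)  # high-frequency detail at this scale
        cur = low
    planes.append(cur)            # low-frequency base
    return planes

def fuse(visible, thermal, levels=3):
    """Fuse two registered images: keep the stronger detail coefficient
    at each level, average the low-frequency bases, then invert (sum)."""
    pv, pt = decompose(visible, levels), decompose(thermal, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pv[:-1], pt[:-1])]
    fused.append(0.5 * (pv[-1] + pt[-1]))
    return np.sum(fused, axis=0)
```

The choose-max rule keeps the sharper edge or texture from whichever modality exhibits it, which matches the observation that the detail (high-frequency) content of IR and visible images is largely complementary.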

3.1.2 IRIS ANALYSIS The need for accurate iris recognition and analysis has been growing with the development of biometric detection systems [35]. Commercial iris recognition systems operate predominantly in the near-infrared (near-IR) range of the electromagnetic spectrum. The intricate textural pattern of differently colored irises is revealed in the near-IR range and has traditionally been used as a biometric indicator [36]. However, it can be argued that incorporating the visible region of the spectrum into iris analysis could improve its accuracy. Increased accuracy of iris detail can also be used for image segmentation, to better detect the pupil in eye tracking [37] and pupillometry applications [38]. To better appreciate the significance of multispectral iris analysis, a short review of iris anatomy and reflectance characteristics is first presented. A cross section of the iris reveals two layers: the anterior stroma layer and the posterior epithelial layer. The anterior portion of the iris is the foremost visible portion of the eye (Figure 8); it is therefore easily imaged and is the focus of all iris recognition systems. The anterior surface of the iris is separated into two zones, the pupillary zone and the ciliary zone, divided by a circular zigzag ridgeline known as the collarette. The two zones often differ in color. Many pit-like oval structures appear mainly in the zone around the collarette and at the outer edge of the iris. These structures are called crypts (Fuchs' crypts), and they permit fluids to quickly enter and exit the iris during dilation and contraction of the pupil [39]. The anterior surface has a velvety appearance showing a series of radial streaks caused by trabeculae, bands of connective tissue that enclose the crypts.
These radial streaks straighten when the pupil is constricted and turn wavy when the pupil is dilated. Near the outer part of the ciliary zone, concentric lines can be seen. These lines, called contraction furrows, are caused by the folding of the iris as the pupil dilates; they become deeper with dilation and are easily seen in darkly pigmented irises. At the pupillary margin, the heavily pigmented epithelium extends around the edge of the pupil. The radial folds of the epithelium give the pupillary margin a beaded or pearl-like appearance; this region is termed the pupillary fringe or ruff. The average diameter of the iris is approximately 12 mm, with an average thickness of about 0.5 mm, thickest at the collarette and thinning radially away from the pupil [39]. The posterior layer of the iris is composed of two pigmented epithelial layers. The anterior of these lies in contact with the stroma, is associated with the muscular process of the dilator pupillae, and contains relatively few melanin granules. The cells of the posterior layer are larger than those of the anterior layer and cubical in shape; they are stacked together in a compact and orderly arrangement and are heavily loaded with melanin granules. The iris's main function is to

Figure 8. Anatomy of the anterior portion of the iris [39].


regulate the amount of light that can enter the eye and impinge on the retina. It does this through dilation and constriction of the pupil. In low-light conditions the dilator pupillae muscle is activated through sympathetic nerve activity, dilating the pupil to let in more light; in bright light the constrictor (sphincter) pupillae muscle is activated through parasympathetic nerve activity, constricting the pupil [39]. The iris can vary in color from light blue to dark brown [40]. This variation can occur (a) across the population, (b) between the left and right eyes of an individual, or (c) in different regions of the iris of the same eye. The main contributors to color are the cellular density of the extracellular matrix of the iris stroma (vascular connective tissue containing collagen fibers, fibroblasts, melanocytes, nerve fibers, smooth muscle, myoepithelial cells, radial vessels, and matrix), the pigment contained in the iris stroma, and the pigment contained in the iris pigment epithelium (IPE) layer. The color of the iris depends on the cellular density of the stroma and, more heavily, on the pigmentation of the stroma. Heavy melanin synthesis corresponds to a dark brown iris, whereas light melanin synthesis corresponds to a light blue iris. Light impinging on the iris gives the appearance of color: longer-wavelength light readily penetrates the iris and is absorbed, whereas shorter-wavelength (blue) light is reflected back and scattered by the iris stroma. Thus, irises with low pigmentation have a bluish appearance. Dark-colored eyes have been observed to show richer texture under IR illumination, whereas the texture of light-colored eyes is enhanced under visible light [36]. The striated trabecular meshwork of the elastic pectinate ligament (the anterior layer) creates the predominant texture under visible light; under near-IR (NIR) light, deeper stromal features dominate the iris pattern.
Although images of light-colored eyes can be relatively pale under NIR light, the texture normally contains enough information for alignment purposes. In visible light, for example, Imai (2000) [41] measured the spectral reflectance of eyes of different colors using a normal DSLR camera. Through the use of multispectral imagery, an iris can be broken down into its own unique reflection pattern according to its phenotypical traits [39]. Boyce et al. (2006) [39] studied the possibility of using an adaptive equalization scheme to enhance the iris structure in an optimal way depending on the spectral reflectance (eye color) of the subject. The multispectral images consisted of four spectral channels/wavelengths (IR, red, green, and blue). To capture the different wavelengths reflected from an iris, a multispectral camera was employed that incorporates three charge-coupled devices (CCDs) and three band-pass prisms behind the lens to simultaneously capture the four wavelength bands (Figure 10). The IR and red sensors output images of size 1300x1040, giving an average of 56,000 pixels inside the segmented iris. The green and blue images are recorded on an RGB Bayer-pattern sensor and are therefore one third the resolution of the other images; they are extracted and scaled to the same resolution as the IR and red images using linear interpolation of the nearest neighbors [42]. To image the iris accurately and conveniently, the multispectral camera was mounted onto an ophthalmologist's slit-lamp mount (Figure 9). The mount consisted of a chin rest, to position the head, and a mobile

Figure 10. The normalized transmittance of the band-pass prisms and sensor spectral response of the acquisition device. Filled portions of the graph indicate the actual combined response of the sensors and prisms [39].

Figure 9. Image acquisition [39].


camera-mount arm that could be easily manipulated to focus finely on the iris. Figure 11, Figure 12, and Figure 14 show the intensity of iridal reflection across the four channels for different eye colors. Note that the CIR (color infrared) images are obtained by dropping the blue channel and including the IR channel (these are false-color images). The graphical plots in these figures show the variation in pixel intensity across the four channels as one moves radially outward from the pupil/iris boundary toward the iris/sclera boundary. The color of each iris was determined by visual inspection, since it is difficult to automatically elicit the eye color from the images given the rapid

Figure 11. Example of a dark brown iris. The iris exhibits high iridal reflectance in the IR channel. The reflectance is observed to decrease significantly with wavelength [39].

Figure 14. Example of a blue iris. The iridal reflection is comparable across all four channels [39].

Figure 12. Example of a light-brown/green iris. The iris exhibits high iridal reflectance in the IR and Red channels. Reflectance decreases significantly with other wavelengths [39].

Figure 13. Result of segmentation on the IR, red, green, and blue channels of a brown iris image [39].


variations in texture chromaticity within the high-resolution image. Figure 13 shows an example of the segmentation accuracy using individual channels with dark irises. In general, the IR and red channels are observed to perform very well for brown irides. The authors anticipate that for irides of other colors, the use of multispectral information will be beneficial. Pixel clustering [43,44] based on color information may be used to elicit and examine the various components of the iris; one application of such an analysis would be the detection of moles and freckles on the surface of the iris. The analysis of multispectral images also offers the option of processing the iris information contained in high-resolution color images of the face.
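The radial intensity plots described above (pixel intensity per channel as a function of distance from the pupil boundary) can be computed as a simple binned average. A sketch, assuming a registered multichannel eye image and known pupil center and boundary radii:

```python
import numpy as np

def radial_profiles(image, center, r_pupil, r_iris, n_bins=20):
    """Mean intensity per radial bin between the pupil/iris and
    iris/sclera boundaries, computed per channel.

    image:  (H, W, C) array, one channel per spectral band (e.g. IR, R, G, B)
    center: (row, col) of the pupil centre
    Returns an (n_bins, C) array of mean intensities, inner to outer.
    """
    h, w, c = image.shape
    rows, cols = np.ogrid[:h, :w]
    r = np.hypot(rows - center[0], cols - center[1])  # distance map
    edges = np.linspace(r_pupil, r_iris, n_bins + 1)
    out = np.zeros((n_bins, c))
    for i in range(n_bins):
        mask = (r >= edges[i]) & (r < edges[i + 1])   # annulus for this bin
        out[i] = image[mask].mean(axis=0) if mask.any() else np.nan
    return out

# Example on a synthetic uniform "eye" image with 4 channels
profiles = radial_profiles(np.full((40, 40, 4), 0.5), (20, 20), 5, 15, n_bins=10)
```

Comparing the per-channel curves (e.g. IR vs. blue) reproduces the kind of analysis shown in Figures 11, 12, and 14, where the channel ordering differs markedly between brown and blue irises.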

3.2 FLUORESCENCE MICROSCOPY Cell biological approaches try to understand the dynamic interactions among intracellular molecular components within the spatial and temporal context of the cell, which has proved difficult to quantify in living cells. In this context, multispectral imaging has become increasingly important as an approach to observing the dynamics of many proteins within a living cell. One of the major multispectral imaging methods used for remote sensing is Fourier transform spectroscopy [45], in which interferometric measurement of an image generates an interferogram, and a Fourier transform of the interferogram recovers the spectral distribution of the image. Recently, microscope applications implementing an interferometer in a fluorescence microscope have been reported [46,47]. Other methods that can be implemented in confocal [48,49] or widefield fluorescence microscopes have also been devised to obtain spectral resolution of microscopic images [50-54]. An essential part of a spectral imaging microscope system is a spectral dispersion element that separates the incident light into its spectral components. Figure 15 schematically illustrates different types of multispectral imaging microscope designs, based on an interferometer, tunable filter, or grating [55]. Multispectral imaging fluorescence microscopy of biological specimens provides versatility in combination with fluorescent dyes [56,57]. Multispectral images are obtained as a set of two-dimensional images as a function of wavelength (Figure 16A). The example shows fixed HeLa cells (an immortal cell line used in medical research) stained with GFP (green fluorescent protein), FITC (fluorescein isothiocyanate), and YFP (yellow fluorescent protein), whose fluorescence spectra closely overlap with each other (emission peaks at 509 nm, 518 nm, and 527 nm, respectively). Spectral overlap between fluorescent dyes can be

Figure 15. Microscope systems for multispectral imaging. (A) Filter-based spectral imaging. (B) Fourier transform-based interferometric image spectroscopy. The diagram merely shows the principle of an interferometer, not the exact light paths. (C) Grating-based spectral imaging using a laser scanning confocal microscope. (D) Grating-based spectral imaging using a widefield fluorescence microscope with an optical fibre coupling [55].


separated into the spectra of each dye by computational processing called "linear unmixing" [52,53]. This method (simple mathematical equations in Hiraoka et al. (2002) [55]) was capable of separating the different dyes (GFP, FITC, YFP), as shown in Figure 16B. Figure 17 shows a summary of the emission maxima of the current collection of GFP-like proteins and their mutants [58]. The temporal resolution of the imaging depends mainly on the speed of wavelength switching, which is on the order of seconds with a filter wheel, tens of milliseconds with an LCTF, and submilliseconds with an AOTF. Through the use of a grating or prism, continuous spectra can be obtained simultaneously with no moving elements; grating-based spectral imaging microscope systems provide simultaneous spectral dispersion and are thus useful for multispectral imaging of rapid processes in living cells. Further research on this topic [59-61] requires interdisciplinary approaches in chemistry, materials science, and engineering to achieve further technological development of live cell imaging. There is also a need for dyes that change their spectra under certain conditions (e.g. phosphorylation and dephosphorylation events, molecular interactions, or conformational changes of proteins) to better detect the dynamics of the intracellular environment.
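Linear unmixing treats each pixel's measured spectrum as a linear combination of known reference spectra and solves for the per-dye contributions by least squares. A minimal sketch with toy Gaussian emission curves standing in for the tabulated GFP/FITC/YFP spectra (real reference spectra, and the exact equations of [55], differ):

```python
import numpy as np

def gaussian_emission(wavelengths, peak, width=12.0):
    """Toy emission spectrum; real dye spectra are tabulated, not Gaussian."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

def linear_unmix(measured, references):
    """Solve measured = references @ abundances in the least-squares sense.

    measured:   (n_bands,) spectrum at one pixel
    references: (n_bands, n_dyes) reference emission spectra
    """
    abundances, *_ = np.linalg.lstsq(references, measured, rcond=None)
    return abundances

wl = np.arange(480, 561)                          # detection bands, nm
refs = np.column_stack([gaussian_emission(wl, p)  # GFP, FITC, YFP peaks
                        for p in (509, 518, 527)])
true = np.array([0.5, 0.3, 0.2])                  # simulated dye contributions
pixel = refs @ true                               # noiseless mixed spectrum
est = linear_unmix(pixel, refs)
```

Even with emission peaks only ~9 nm apart, the least-squares solution recovers the contributions from a noiseless mixture; in practice, noise and the conditioning of the reference matrix limit how heavily overlapping the dyes can be.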

Figure 16. Multispectral images and linear unmixing. (A) Multispectral images of HeLa cells. HeLa cells expressing histone H2B-GFP and YFP-RanGAP1 were fixed with 3.7% formaldehyde, then stained with an anti-α-tubulin antibody and detected with an FITC-conjugated secondary antibody. The cells were excited by 488 nm light from an argon laser, and observed using a water-immersion objective lens (C-Apochromat 40x/NA=1.2) and a dichroic mirror (HFT413/488). (B) Images of GFP (a), YFP (b), and FITC (c) were separated by linear unmixing and merged with pseudocolor (d). Images were obtained using a commercially available multispectral microscope (Zeiss LSM510 META) and processed with its standard linear unmixing software package [55].

Figure 17. Summary of the emission maxima in the current collection of GFP-like proteins and their mutant variants available for biotechnology applications [58].


3.3 RETINAL PHYSIOLOGY Multispectral imaging can be used to study different characteristics of the retina (the part of the eye comparable to the film in a camera) both in clinical and in research environments. The retina can be roughly modeled either as an optical system (like a camera) [62,63] or as a biological system [64,65]. There are three main types of photoreceptors in the human retina: 1) rods, which work mainly under low illumination (scotopic vision) with no color sensation; 2) cones, which work under normal daytime illumination (photopic vision); and 3) intrinsically photosensitive retinal ganglion cells (ipRGCs) [66], which are responsible for non-image forming (NIF) responses of the human visual system, such as entrainment to the environmental light-dark cycle. In the following short example of retinal research, multispectral imaging is used to quantify the thickness of the retinal nerve fiber layer (RNFL), which is of interest when studying age-related changes of the retina [67] or ocular changes in diseases such as glaucoma [68,69] and multiple sclerosis [70]. Other retinal parameters studied with multispectral imaging include the birefringence (double refraction: the decomposition of a ray of light into two rays, the ordinary and the extraordinary ray, when it passes through certain types of material) of the RNFL for glaucoma diagnosis [71], the oxygen saturation of blood in retinal arteries and veins (retinal vessel oximetry [72]), and cross-sectional imaging of the retina [73]. The retinal nerve fiber layer in humans consists of bundles of unmyelinated axons of retinal ganglion cells running just under the surface of the retina. The RNFL is damaged in glaucoma and other diseases of the optic nerve. Various optical methods have been developed to assess the RNFL, and several are moving toward routine clinical use.
Quantitative measurements, however, require a comprehensive understanding of the basic optical properties of the RNFL and their underlying anatomic basis. Optical properties of the RNFL have been studied in vitro by various imaging methods [74-78]. In order to relate an optical measurement to RNFL structure, one must be able to retrieve the imaged tissue for histologic study (the study of tissue sectioned as thin slices). Such comparisons have been made, for example, by Toth et al. [79] and Huang et al. [80] using optical coherence tomography (OCT [81,82]; an example image is shown in Figure 18). Although standard histologic measurements of RNFL thickness in tissue sections are widely used, the method has at least three major drawbacks: it is hard to locate the exact positions measured optically, only a limited number of nerve fiber bundles can be measured, and the process is very time consuming [83]. It should also be noted that the anatomical basis of the RNFL is currently not completely known, nor is it clear which structures contribute to RNFL birefringence and reflectance, as pointed out by Huang et al. (2006) [84].

Figure 18. In vivo spectral domain optical coherence tomography (SD-OCT) image of the human retina around the optic nerve head, consisting of 1000 depth profiles acquired in 34 ms. Dimensions are 6.4 mm wide x 1.7 mm deep. Dynamic range within the image was 40 dB. The image shows features that can be identified as the nerve fiber layer (NFL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), interface between the inner and outer segments of the photoreceptors (OS), and retinal pigment epithelium (RPE). A blood vessel (indicated with an arrow) can be distinguished near the left side of the optic nerve head [82].


3.4 NON-IMAGE FORMING (NIF) VISUAL RESPONSES IN HUMANS While the role of the mammalian eye in detecting light for vision has long been known, it has only recently emerged that the eye performs a dual role, also detecting light for a range of behavioral and physiological responses that are distinct from sight. In humans, ocular light exposure resets circadian rhythms, suppresses the nocturnal pineal hormone melatonin, induces pupillary constriction, increases heart rate, enhances subjective ratings of alertness, and changes the frequency of electroencephalogram (EEG) brain waves indicative of a more alert state [85-91]. These effects of light are collectively termed 'non-visual' or 'non-image forming' responses and are sometimes grouped under the term 'circadian photoreception', as much of the behavioral and neuroanatomical work that first identified these effects focused on the ability of light to shift the timing of the endogenous circadian (near-24 hour) pacemaker [65]. The rapid advances in understanding how light stimulates non-visual functions were led by the discovery of a novel opsin (a visual pigment responding to light) termed melanopsin in the mammalian eye [92,93]. Surprisingly, melanopsin protein was found to be located not in the outer retina, where rhodopsin and the cone opsins are present, but in about 1-3% of cells in the ganglion cell layer, spread across the retina in a network-like distribution. These melanopsin-containing ganglion cells project to multiple brain areas involved in non-image forming responses, including the suprachiasmatic nuclei (SCN) of the hypothalamus, the site of the principal mammalian circadian pacemaker, as illustrated in Figure 19 [65]. One of the key features of the non-image forming responses is that their spectral sensitivity (λmax≈440-480 nm [94], Figure 20) differs from the V(λ)-function defined for photopic vision (λmax≈555 nm).
Therefore the normal photometric units such as candela and lux cannot be used to express the effects of a given light on these responses in humans. The topic is currently under research [95,96], and there are hopes (despite some efforts even to patent such a metric [97]) that in the near future a consensus will exist on how to measure non-image forming visual responses in humans, equivalent to the one that already exists for visual responses outside the mesopic range [98]. There is then a need for measurement methods

Figure 20. The histogram shows the distribution of the estimated optimal wavelength. The curve is the least-squares best-fitting retinaldehyde template function; λmax (479.2 nm) [4].

Figure 19. Pathways for light-induced activation of non-visual brain areas. (A) Light exposure activates melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGC), which are most sensitive to short-wavelength visible light, and cone-driven classical ganglion cells (cRGC) of the color vision system, which are most sensitive to mid-wavelength light (λmax = 555 nm). (B) Melanopsin-containing ipRGCs project to a range of 'non-visual' areas of the brain, including the suprachiasmatic nuclei (SCN), which then project multisynaptically to the pineal gland, as well as to many areas that share input from the visual photoreceptor system, such as the lateral geniculate nucleus (LGN), pretectum and superior colliculus (SuC). Through as yet unidentified pathways, light stimulates the ascending arousal system and eventually the cortex to enhance alertness and cognition. INL, inner nuclear layer; ONL, outer nuclear layer; RPE, retinal pigment epithelium [65].


and devices that incorporate the newly defined spectral sensitivity, preferably with the possibility of recording NIF responses along with photopic responses. In the ideal case this could be done using spectroradiometer-based imaging, but in many practical applications this would be impractical and expensive. There is a need for field measurements with lightweight personal dosimeters equipped with at least a photopic sensor and an NIF sensor, and possibly also with scotopic, infrared and UV sensors. The visible range of electromagnetic radiation could also be further divided into several bands, allowing a more accurate estimation of the given spectral power distribution, as already mentioned in chapter 2 [7,8,4]. In addition to quantifying the exposure experienced by a person, it is valuable to know the "NIF-luminance" distribution in a space, which should again be spectrally weighted differently from photopic luminance. In the following chapters, some existing technologies are briefly reviewed.

3.4.1 PERSONAL DOSIMETERS At the moment there exist two documented personal dosimeters that measure different spectral bands. The first is called LuxBlick (formerly LichtBlick) [99]. LuxBlick is a low-cost device (75€ estimated component price) that has been developed and tested for use in long-term field experiments. With two separate sensors, illuminance and the effective irradiance for non-visual biological effects are measured and recorded. The device and its spectral sensitivity are shown in Figure 21. The other device is called the Daysimeter [100,101]; it is slightly more sophisticated than LuxBlick, containing two sensors in a similar manner but in addition a two-axis accelerometer to measure activity and head angle. Figure 22 shows illustrative data from the Daysimeter as a practical research tool. Figure 22A shows values obtained while worn at the office and then during the drive home at night. Figure 22B shows data recorded while performing a computer-based numerical verification task. The room was illuminated throughout the experiment with fluorescent lamps having a CCT of 3500 K. Towards the end of the experimental session, the photopic illuminance was approximately 1,000 lx, whereas the blue sensor recorded approximately 3,000 b-lux due to increased daylight in the room. In conclusion, without the Daysimeter (or an equivalent system) it would be difficult to accurately determine people's circadian (NIF) radiation exposure.
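The two-sensor principle can be sketched numerically: each channel integrates the spectral irradiance against its own weighting function. The Gaussian weightings below are rough stand-ins for V(λ) and a short-wavelength NIF weighting, not the real responsivities of either device.

```python
import numpy as np

wl = np.arange(380.0, 781.0, 5.0)      # wavelength grid, nm
dwl = wl[1] - wl[0]

# Gaussian stand-ins for the two channel weightings: a V(lambda)-like
# photopic curve peaking at 555 nm and an assumed NIF/"circadian"
# weighting peaking near 460 nm (neither is the real published function).
V = np.exp(-0.5 * ((wl - 555.0) / 40.0) ** 2)
c = np.exp(-0.5 * ((wl - 460.0) / 30.0) ** 2)

# Illustrative flat spectral irradiance at the sensor, W/(m^2 nm).
E = np.full_like(wl, 1e-3)

Km = 683.0                              # maximum luminous efficacy, lm/W
illuminance = Km * np.sum(E * V) * dwl  # photopic channel, ~lux
nif_dose = np.sum(E * c) * dwl          # NIF channel, weighted W/m^2

print(round(illuminance, 1), round(nif_dose, 4))
```

A blue-rich spectrum would raise the NIF channel relative to the photopic one, which is exactly the contrast the two-sensor dosimeters are built to capture.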

Figure 21. The two sensors are fixed at the frame of the glasses. The graph on the right shows the spectral distributions of c(λ) and V(λ) together with the spectral responses of the two sensors at a light incidence angle of 0° (data from the manufacturer, HAMAMATSU) [99].

Figure 22. (A) Daysimeter data showing the recordings of the photopic and blue sensors when worn at the office and then during the drive home at night. Local time is displayed on the abscissa, and the ordinate is in units of photopic lux and b-lux. Annotations on the graph describe some of the lighting conditions experienced by the subject. (B) Daysimeter data showing the recordings of the photopic and blue sensors worn by a subject during a 5 h experimental session while seated in an electrically illuminated room next to a north-facing window. Local time is indicated on the abscissa, as is the occurrence of sunrise. Values on the ordinate are in units of photopic lux and b-lux. Breaks taken away from the window are evident as light-level drops in both channels [100].



Yotter et al. (2003) [102] have reviewed the technical characteristics of different photodetectors used in observing biological systems. The intent was to provide an overview of the performance metrics and trade-offs among popular photodetectors in order to facilitate an easier match among the photodetector, biological stimulus, and optical pathway. The characteristics and nonidealities of fluorescent and phosphorescent reporters, and the properties of optical components such as filters, lenses, and light sources, are also reviewed there. The conclusions of that work are reviewed only briefly here. Systems that detect light-emitting reporters rely predominantly on the (photoemissive) photomultiplier tube (PMT), cooled charge-coupled devices (CCDs), or the (junction-based) avalanche photodiode (APD) for high-sensitivity detection. Table 1 compares the performance of these three popular choices with other common photodetectors. In biological measurements, CMOS-based devices are becoming increasingly common, as they are in commercial photographic devices [103]. A short comparison of different CMOS structures is shown in Table 2. Table 1. Comparison of photodetectors (T = 25 °C) [102].

Table 2. Representative performance of alternative CMOS structures [102].


3.4.2 CIRCADIAN-WEIGHTED LUMINANCE PHOTOMETERS A method for measuring the "NIF-light" distribution in a space was presented by Gall et al. [104] in 2004. The method enables the derivation of "NIF/circadian" quantities from photometric quantities using a circadian action factor acv. The values can be measured using either spectroradiometers or c(λ)-adapted detectors. Using the circadian action function c(λ), it is possible to calculate circadian radiation quantities Xec (with the constant K = 1):

Xec = K ∫ Xeλ c(λ) dλ    (1)

where Xeλ = Le,λ = spectral radiance [W/(m²·nm·sr)].

The ratio of the integrals of the circadian and the photometric quantities is called by Gall and Lapuente [105] the circadian action factor acv:

acv = ∫ Xeλ c(λ) dλ / ∫ Xeλ V(λ) dλ    (2)

where V(λ) is the photopic spectral luminous efficiency function. This action factor allows a comparison of different light spectra. The relation between circadian quantities and photometric quantities Xv is as follows:

Xec = (acv / Km) Xv    (3)

where Km = maximum spectral luminous efficacy = 683 lm/W.

These equations can be used to calculate the circadian quantities from the spectral power distribution measured by a spectroradiometer. It would also be possible to manufacture a custom c(λ)-filter for a measurement camera. The authors used the luminance photometer LMK color [106], which gives the graphic distribution of acv-values within an area of measurement (Figure 23A). As a first approximation, the CIE standard color-matching function z̄(λ) can be used in place of c(λ), so that acv can also be measured with, for example, a tristimulus colorimeter, whose Y-detector measures the photometric quantity; with x, y, and z the CIE chromaticity coordinates:

acv ≈ ∫ Xeλ z̄(λ) dλ / ∫ Xeλ V(λ) dλ = z/y = (1 − x − y)/y    (4)

Figure 23B shows lines of equal acv-values in the CIE standard chromaticity diagram. This enables a circadian evaluation of light sources by their correlated color temperature (CCT). As the circadian response depends heavily on the illuminance at the eye rather than on task areas, the final response can be calculated by taking into account the reflectance characteristics of the environment. One benefit of this model is that it allows a direct transformation from photometric quantities to circadian quantities measured with traditional lighting measurement devices or with spectroradiometers. However, the main difficulty is defining the exact spectral characteristics of NIF responses in humans, as recent research has shown that cones and rods are also involved in the regulation of NIF responses [107-109].
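The chromaticity approximation of equation (4) is straightforward to compute. A minimal sketch (the chromaticity coordinates below are illustrative round figures for a warm white and a daylight-like source, not measured values):

```python
def acv_from_xy(x: float, y: float) -> float:
    """First-approximation circadian action factor a_cv from CIE 1931
    chromaticity, using a_cv ≈ z/y = (1 - x - y)/y (equation 4)."""
    return (1.0 - x - y) / y

# Illustrative chromaticities: warm white vs. daylight-like white.
warm = acv_from_xy(0.448, 0.407)
cool = acv_from_xy(0.313, 0.329)
print(round(warm, 2), round(cool, 2))
```

As expected from Figure 23B, the bluer (higher-CCT) source obtains a clearly higher action factor than the warm one.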

3.4.3 DIGITAL PHOTOGRAPHY The measurement of NIF-effective luminance can also be done with "non-scientific" cameras that offer a raw format. Hollan [110] compared two commercial digital cameras (Fuji S5000 and Canon EOS D60) with respect to how well the sensitivity of their blue pixels matches the action spectrum of the non-image forming (NIF) human visual system. The examined cameras were calibrated using the solar spectrum. The images were taken at an appropriate angular height of the Sun in the sky, so that its light passed through 1.5 times the thickness of the atmosphere. After a series of attempts, a CD-based cardboard spectroscope (with a slit made from two razor blades) was used. The solar spectrum has a lower intensity at a handful of wavelengths, the so-called spectral lines. After processing the images, a graph of the solar spectrum as recorded by the three types of camera pixels was obtained. The results can be seen in Figure 24, with a comparison of the CCD colors (Fuji S5000, Figure 24A) and the CMOS colors (Canon EOS D60, Figure 24B) to the three sensitivity functions (photopic, scotopic, and NIF ("circadian")).

Figure 23. (A) The distribution of acv-values within an area of measurement. (B) acv-values in the CIE standard chromaticity diagram [104].


Figure 24. Comparison of the camera sensors to the photopic, scotopic and proposed "metabolic" (circadian) spectral sensitivity functions. (A) Fuji S5000 CCD sensitivity, and (B) Canon EOS D60 CMOS sensitivity [110].


From the two graphs it can be seen that at least some CCD cameras can measure melatonin-affecting light rather well. All the software needed for calibrating the camera is available at the website of the author [111]. The author points out that the software is not easy to use, but what is important is that the effective amount of radiation affecting melatonin secretion can be documented for further use. It should be noted that the proposed method can only provide an easy and cheap way (for example with an existing DSLR) to estimate the "NIF-luminance" in a space, and cannot match the spectral accuracy of a spectroradiometer. Examples of b-luminance (blue = circadian-effective luminance) measurements with the Fuji S5000 digital camera can be seen in Figure 25. The b-luminance distribution is illustrated using color coding. As the study by Hollan [110] was part of scotobiology (the study of biology as affected by darkness [112]) research, the introduced method could be used to quantify the light pollution affecting human and animal physiology, as this area is relatively poorly known [113]. For example, 5% of the Czech population perceives unwanted artificial light from outdoors as one of the two main causes of their sleep problems [114].


Figure 25. Examples of color-coded b-luminances (blue) of a scene on a logarithmic scale, taken with a Fuji S5000. The middle of the red color range corresponds to 1 cd/m2, the green to 10 cd/m2, yellow to 100 cd/m2, etc. The luminance of the face around the eyes is about one candela per square meter (or one nit, using a convenient non-SI name for the unit). (A) A scene from a Childhood Leukemia Conference. The black spot indicates oversaturated pixels. Average luminances of the tiles are given at their bottom; the number in the centre is the average raw reading of the green pixels. (B) Westminster Abbey. Its luminances are in the one-nit and one-decinit range. Note the obsolete glaring luminaires at right (the only light which should be visible is the red traffic light) [110].


3.5 PLANT PHYSIOLOGY When subjected to changes in environmental conditions originating from biotic (relating to, produced by, or caused by living organisms) or abiotic (non-living chemical and physical factors in the environment, such as light, temperature, water, atmospheric gases and wind, as well as soil (edaphic) and physiographic (nature of the land surface) factors) sources, plants can respond with adjustments in their biochemical and physiological processes, either as adaptive mechanisms or as mechanisms due to stress. For example, adverse environmental conditions can result in reduced synthesis or breakdown of photosynthetic pigments, or in loss of water. These adjustments are often followed by changes in the reflectance, transmittance, and absorbance of plant leaves [115-118]. For several decades, in vivo leaf spectroscopic measurements have been exploited as a noninvasive means of providing information on these plant mechanisms. Several studies have demonstrated that the magnitudes of fluorescence emissions change under several stress conditions, of both natural and anthropogenic origin. Such changes in the fluorescence emissions are frequently expressed as ratios of two-band combinations of the blue, green, red, and far-red regions of the spectrum, used as indicators of plant vigor [117,119-123]. The mechanisms and plant constituents governing the fluorescence changes, especially in the blue-green region of the spectrum, are complex and require additional research for full elucidation of the processes involved [121,124].
Fluorescence sensing instrumentation may be classified in terms of the excitation light source into 1) those using pulsed excitation sources (i.e., pulsed lasers), and 2) those using a non-pulsed (continuous) excitation source [125]. The main advantage of using a pulsed laser as the excitation source is that fluorescence emissions can be captured in the presence of ambient radiation with the use of a pulse-gated high-speed detector system such as an intensified CCD. Hence laser-induced fluorescence (LIF) is a feasible remote sensing tool for assessing plant vigor. In fact, several studies have used LIF techniques to demonstrate fluorescence measurements from airborne platforms [126-129]. However, the currently available lasers typically have some pulse-to-pulse peak power variations. To avoid the adverse effects of these variations, fluorescence emission band ratios that normalize fluctuations in emission magnitude are often used [125]. Kim et al. [125] have presented an example of a laboratory-based multispectral fluorescence imaging system (MFIS) for plant leaves. The authors use fluorescence emissions with 360-nm excitation that are captured at four spectral bands in the blue, green, red, and far-red regions of the spectrum, centered at 450, 550, 680, and 740 nm, respectively. The imaging system consists of a UV excitation source (four 12-W long-wave UV-A fluorescent lamps), a sample holder, interference filters (AB300 automated filter wheel), a CCD digital camera (Peltier-cooled [-15 °C], with a spatial resolution of 192x165 pixels), a 12-bit-resolution analog-to-digital converter, and a computer interface for instrument control and data collection. The imaging system was designed for qualitative and quantitative characterization of spectral fluorescence changes, including multiple band ratios resulting from biochemical and physiological changes in plants, with the added benefit of allowing spatial characterization.
Figure 26 illustrates the transmittance characteristics of the interference filters. The blue filter has a maximum transmittance at 450 nm with a 25-nm FWHM, the green filter at 550 nm with a 25-nm FWHM, the red filter at 680 nm with a 10-nm FWHM, and the far-red filter at 740 nm with a 10-nm FWHM. For the blue–green

Figure 26. Transmittance of fluorescence imaging system filters shown on top of a typical fluorescence emission spectrum (shaded area) of a soybean leaf under UV excitation [125].


region of the spectrum, broader-FWHM filters are used to increase the sensitivity of the imaging system, since the quantum efficiency of the CCD camera in this region is lower than in the red and far-red regions of the spectrum. Figure 27 shows the gray-scale fluorescence ratio images of the soybean [Glycine max Merr.] leaves used in the study for F450/F550 (blue/green), F680/F740 (red/far-red), F450/F680 (blue/red), and F550/F680 (green/red) of the adaxial and abaxial surfaces. For a given ratio, the images of the adaxial (facing toward the axis, as the upper surface of a leaf) and abaxial (facing away from the axis; sometimes also called dorsal) surfaces are presented on the same gray scale. These ratio images also show significant variation across leaf surfaces, especially between the major vascular bundles (veins) and the interveinal regions. In addition, within the interveinal regions, the ratios of the abaxial surfaces vary less than those of the adaxial surfaces. Results from the study [125] confirm that instruments measuring fluorescence from an entire leaf surface give a more accurate assessment than those limited to separate, individual integrated-point measurements taken across a leaf area.
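Once the per-band images are captured, such ratio indices reduce to simple per-pixel operations; a minimal sketch with synthetic data (band names and values are illustrative, not from [125]):

```python
import numpy as np

def band_ratio(num, den, eps=1e-9):
    """Per-pixel ratio of two single-band images; eps guards against
    division by zero in dark pixels."""
    return num / (den + eps)

# Synthetic stand-ins for F450 and F550 fluorescence images (counts).
rng = np.random.default_rng(0)
f450 = rng.uniform(50.0, 200.0, size=(4, 4))
f550 = rng.uniform(50.0, 200.0, size=(4, 4))

r_blue_green = band_ratio(f450, f550)   # F450/F550 ratio image
print(r_blue_green.shape)               # (4, 4)
```

The same function yields the other ratios (F680/F740, F450/F680, F550/F680) by swapping in the corresponding band images.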

3.5.1 MULTISPECTRAL IMAGING IN FOOD INDUSTRY Light-based sensing techniques, especially near-infrared spectroscopy (NIRS), offer great potential for nondestructively predicting the internal quality of fresh fruit. NIRS is fast and relatively easy to implement; more importantly, it has the potential to measure multiple quality attributes simultaneously. NIRS has been used to measure the soluble solids content (SSC) of apples and other tree fruits [130,131], and commercial applications of NIRS for sorting fruit by SSC have been reported [132]. NIRS has also been studied for measuring fruit firmness [133]; however, the results are still unsatisfactory for grading and sorting purposes. When a light beam is incident upon a fruit, some of it is absorbed and some is scattered in the form of either backscattered reflectance or transmission. Absorption and scattering are the two basic phenomena in the interaction of light with biological tissue. Light absorption is related to the concentration of certain chemical constituents such as sugar, water and chlorophyll. Scattering properties, on the other hand, offer insight into composition, density, and tissue structures such

Figure 27. Fluorescence ratio images of soybean leaves: (a) F450/F550, (b) F680/F740, (c) F450/F680, and (d) F550/F680. Images in the left-hand column represent ratios of the adaxial surface, and images on the right-hand column are those of the abaxial surface. The same gray scales are used for adaxial and abaxial surfaces within each of the ratios [125].


as cells, intracellular bonds (or the middle lamella), and extracellular matrices [134]. Hence, with knowledge of both absorption and scattering, it is possible to better measure the chemical and physical/mechanical properties of fruit. Conventional NIRS measures an aggregate amount of light reflected or transmitted from a specific area of the sample (a point measurement); it does not provide quantitative information on how light scatters in the sample. As a result, conventional NIRS has limited capability for quantifying structurally related properties such as fruit firmness. Recently, multispectral (hyperspectral) imaging has emerged as a useful sensing technique for quality evaluation and safety inspection of food and agricultural products [135]. Hyperspectral imaging combines the main features of conventional imaging and spectroscopy to obtain both spectral and spatial information from an object simultaneously. Hence, it is ideally suited for quantifying light absorption and scattering in fruits and other agricultural products over a broad range of wavelengths [135]. The objective of the research by Lu et al. [135] was to use a hyperspectral scattering technique to acquire spectral scattering images from apple fruit and to develop a data analysis method relating hyperspectral scattering characteristics to fruit firmness and soluble solids content (SSC). A hyperspectral imaging system was assembled, consisting mainly of an imaging unit and an illumination unit (Figure 29). The imaging unit was composed of a high-performance CCD camera, an imaging spectrograph with a zoom lens, and a computer with a frame grabber. As the light enters the imaging spectrograph, it is dispersed into different wavelengths while preserving the original spatial information for each pixel of the scanning line. As a result, a special two-dimensional image is created: one dimension represents the spectral information and the other the spatial information.
As a light beam hits the fruit, it penetrates into the fruit tissue; photons are either absorbed or scattered. Backscattered light illuminates a portion of the fruit adjacent to the incidence area, generating spectral scattering images at the surface of the apple fruit (Figure 28). To acquire a complete hyperspectral scattering image over an area of 25 mm or greater, multiple scans would be required. This would be time consuming and create a large amount of image data. Instead, a different approach was implemented by the authors so that hyperspectral scattering images would be acquired instantaneously. Since the light beam was small and circular (1.6 mm) with a large incidence angle between the beam and fruit (horizontally), scattering could be assumed to be symmetric with respect to the beam incidence point. Because of this symmetric feature, it was possible to measure scattering from one single line passing the incidence center (Figure 28). As a result, it greatly improved the speed of image acquisition and processing, without losing essential information about light scattering in the fruit [135].

Figure 29. Schematic of the hyperspectral imaging system for acquiring spectral scattering images from apple fruit [135].

Figure 28. Scattering of light in apple fruit and the scanning position by the hyperspectral imaging system with respect to the beam incidence center [135].

MULTISPECTRAL IMAGING PETTERI TEIKARI, 2007

Typical hyperspectral scattering images for GD (‘Golden Delicious’) and RD (‘Red Delicious’) apples are shown in Figure 30. A horizontal line taken from the scattering image represents the scattering profile at a specific wavelength or waveband, whereas a vertical line gives the spectral profile at a specific spatial location (pixel) measured from the illumination center. Thus, each image can be viewed as being composed of more than 150 scattering profiles for the wavelengths of 500–1,000 nm, or of 127 reflectance spectra for the pixels of the 25 mm scanning line at the fruit surface. A method combining principal component analysis (PCA) [137,138] and neural networks [3] was used to develop calibration models; in practice this was done using MATLAB and its Neural Network Toolbox. The combination was shown to provide an effective means of relating scattering characteristics to fruit firmness and soluble solids content (SSC). The results of the study [135] supported the idea that hyperspectral scattering is potentially useful for assessing and grading the internal quality of apple fruit. However, further research is needed to investigate other data analysis methods in order to accurately characterize scattering profiles at individual wavelengths and to better relate scattering characteristics to fruit firmness and SSC.
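The PCA-plus-regression pipeline can be sketched in a few lines. This is not the authors' MATLAB code: the scattering data and firmness values below are synthetic, and an ordinary least-squares fit on the principal-component scores stands in for their neural network model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data: rows = apples, columns = the flattened
# scattering image (150 wavelengths x 127 line pixels). A few hidden latent
# factors generate both the profiles and the firmness target.
n_fruit, n_features = 60, 150 * 127
latent = rng.normal(size=(n_fruit, 5))
loadings = rng.normal(size=(5, n_features))
X = latent @ loadings + 0.1 * rng.normal(size=(n_fruit, n_features))
firmness = latent @ np.array([2.0, -1.0, 0.5, 0.0, 1.0]) \
           + 0.05 * rng.normal(size=n_fruit)

# PCA via SVD of the mean-centered data: project onto the first k components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
scores = Xc @ Vt[:k].T        # low-dimensional inputs for the calibration model

# Least-squares calibration on the PC scores (a neural network would be
# trained on these same scores in the study's setup).
coef, *_ = np.linalg.lstsq(scores, firmness - firmness.mean(), rcond=None)
pred = scores @ coef + firmness.mean()
r = np.corrcoef(pred, firmness)[0, 1]
print(r > 0.9)                # the latent structure is easily recovered
```

The dimensionality reduction is the essential step: fitting a model directly on ~19,000 correlated features with a few dozen fruit would overfit badly, whereas a handful of PC scores is tractable.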

3.6 PROSPECTIVE USES OF MULTISPECTRAL IMAGING Multispectral imaging has not yet been used very extensively; the list of prospective uses is therefore very large. In practice, many image segmentation tasks can benefit from multispectral imaging if the studied phenomenon exhibits wavelength-dependent characteristics. For static objects, the imaging setup can be very simple, since at the most basic level a cheap monochrome camera with a set of color filters suffices to compose a multispectral image. Naturally, the choice of spectral bands is highly application-dependent, and the studied phenomenon needs to be understood before carrying out the measurements. There are also two projects at the Department of Electrical Engineering of TKK that could possibly benefit from this type of multispectral imaging: 1) research on the effects of LED illumination on plant growth, where multispectral imaging could provide more detailed information on the changes at the cellular level [139]; and 2) quantification of the amount of chlorophyll in pine trees.
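As a toy example of the monochrome-camera-plus-filters approach, the frames below are stacked into a small multispectral cube and a simple band-ratio index is computed, of the kind that could drive vegetation segmentation in the plant-growth application. The filter names, band choices, and reflectance values are purely illustrative assumptions:

```python
import numpy as np

# One monochrome frame per color filter; constant synthetic reflectances
# stand in for real captures. Filter names are hypothetical.
h, w = 4, 4
bands = {
    "green_550nm": np.full((h, w), 0.30),
    "red_670nm":   np.full((h, w), 0.10),   # strong chlorophyll absorption
    "nir_780nm":   np.full((h, w), 0.50),   # high near-infrared reflectance
}
cube = np.stack(list(bands.values()), axis=-1)   # (h, w, n_bands)

# An NDVI-style band ratio: high for healthy vegetation, low elsewhere,
# so thresholding it yields a segmentation mask.
nir, red = bands["nir_780nm"], bands["red_670nm"]
ndvi = (nir - red) / (nir + red)
print(round(float(ndvi[0, 0]), 3))               # 0.667
```

With real captures the same per-pixel arithmetic applies unchanged; only the filter set and the chosen index vary with the application.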

Figure 30. Typical scattering images for ‘Golden Delicious’ (a-1) and ‘Delicious’ (b-1) apples where the horizontal axis represents the spatial dimension and the vertical axis the spectral. Plots in (a-2) and (b-2) represent spectral profiles for selected locations from the beam incidence point and plots in (a-3) and (b-3) represent spatial scattering profiles for selected wavelengths [135].

4 CONCLUSION In this work, some of the recently emerged applications of multispectral imaging in biology were reviewed. The studies discussed here suggest that multispectral imaging has considerable potential for making some biological measurements more accurate and feature-specific. In some applications multispectral imaging is used merely for image segmentation, so that different features can be extracted more easily during image analysis even though the individual spectral bands themselves carry little physical significance. In other applications the interest lies in the physical quantities themselves, for example the amount of ultraviolet radiation, or the luminous intensity of a fluorescent protein within a specific spectral band. However, many applications still require further research in order to find more optimal measurement methodology and to elucidate the underlying physics behind the observed spectral characteristics. As noted, the basic implementation of multispectral imaging with a monochrome camera and color filters for static targets is still relatively inexpensive and easy to use and analyze, whereas more advanced systems require more engineering knowledge as well as larger investments. Therefore, in many practical cases a cost-benefit analysis will show that multispectral imaging is not a feasible way to improve imaging quality (as in pupillometry), whereas when purchasing a new microscope the multispectral feature can be relatively inexpensive compared to the total cost of the instrument.

5 REFERENCES

1 Gerrit Polder and Gerie W. van der Heijden, "Calibration and characterization of spectral imaging systems," Proceedings of SPIE, Volume 4548, Multispectral and Hyperspectral Image Acquisition and Processing, Qingxi Tong, Yaoting Zhu, Zhenfu Zhu, Editors, September 2001, pp. 10-17, http://dx.doi.org/10.1117/12.441362

2 K. Wojciechowski, B. Smolka, H. Palus, R.S. Kozera, W. Skarbek and L. Noakes, "Radiometric calibration of a multispectral camera," Computational Imaging and Vision 32:273-278, http://dx.doi.org/10.1007/1-4020-4179-9_39

3 M. Egmont-Petersen, D. de Ridder and H. Handels, "Image processing with neural networks—a review," Pattern Recognition 35(10):2279-2301 (2002), http://dx.doi.org/10.1016/S0031-3203(01)00178-9

4 J. L. Nieves, E. M. Valero, S. M. C. Nascimento, J. Hernández-Andrés, and J. Romero, "Multispectral synthesis of daylight using a commercial digital CCD camera," Appl. Opt. 44, 5696-5703 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=ao-44-27-5696 (accessed October 18, 2007).

5 Anon. Landsat Thematic Mapper Data Sets. Available from: http://observe.arc.nasa.gov/nasa/education/tools/stepby/archive.html (accessed October 18, 2007).

6 Anon. Laser Focus World. Multispectral imaging completes the picture. Available from: http://www.laserfocusworld.com/display_article/31072/12/ARCHI/none/News/Multispectral-imaging-completes-the-picture (accessed October 18, 2007).

7 J. L. Nieves, E. M. Valero, J. Hernández-Andrés, and J. Romero, "Recovering fluorescent spectra with an RGB digital camera and color filters using different matrix factorizations," Appl. Opt. 46:4144-4154 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=ao-46-19-4144 (accessed October 18, 2007).

8 M. A. López-Álvarez, J. Hernández-Andrés, E. M. Valero, and J. Romero, "Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight," J. Opt. Soc. Am. A 24:942-956 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=josaa-24-4-942 (accessed October 18, 2007).

9 G Fredlund and L Sundstrom, "Digital infra-red photography for recording painted rock art," Antiquity 81(313):733 (2007), http://antiquity.ac.uk/ant/081/ant0810733.htm (accessed October 18, 2007).

10 P Carcagnì, A Della Patria, R Fontana, M Greco, M. Mastroianni, M. Materazzi, E. Pampaloni and L. Pezzati, “Multispectral imaging of paintings by optical scanning,” Optics and Lasers in Engineering 45(3):360-367 (2007), http://dx.doi.org/10.1016/j.optlaseng.2005.02.010

11 Chieu D. Tran, Yan Cui, and Sergey Smirnov, "Simultaneous Multispectral Imaging in the Visible and Near-Infrared Region: Applications in Document Authentication and Determination of Chemical Inhomogeneity of Copolymers," Anal. Chem. 70(22):4701-4708 (1998), http://dx.doi.org/10.1021/ac980647q

12 Afromowitz MA, Callis JB, Heimbach DM, DeSoto LA, Norton MK, "Multispectral imaging of burn wounds: a new clinical instrument for evaluating burn depth," IEEE Trans Biomed Eng. 35(10):842-850 (1988), http://dx.doi.org/10.1109/10.7291

13 Ling Bei, Glenn I. Dennis, Heather M. Miller, Thomas W. Spaine and Jon W. Carnahan, "Acousto-optic tunable filters: fundamentals and applications as applied to chemical analysis techniques," Progress in Quantum Electronics 28(2):67-87, http://dx.doi.org/10.1016/S0079-6727(03)00083-1

14 H. Brettel, J. Y. Hardeberg, and F. Schmitt, "Multispectral image capture across the Web," in Proc. IS&T and SID's 7th Color Imaging Conf.: Color Science, Systems and Applications, pp. 314–316, Scottsdale, AZ (1999).

15 Nahum Gat, "Imaging Spectroscopy Using Tunable Filters: A Review," available online: http://www.techexpo.com/WWW/opto-knowledge/E-tunable-filters.pdf (accessed October 18, 2007).

16 Jon Y. Hardeberg, Francis Schmitt and Hans Brettel, "Multispectral color image capture using a liquid crystal tunable filter," Optical Engineering 41(10):2532-2548 (2002), http://dx.doi.org/10.1117/1.1503346

17 Javier Calpe-Maravilla et al., "400- to 1000-nm imaging spectrometer based on acousto-optic tunable filters," Proceedings of SPIE, Volume 5570, Sensors, Systems, and Next-Generation Satellites VIII, Roland Meynart, Steven P. Neeck, Haruhisa Shimoda, Editors, November 2004, pp. 460-471, http://dx.doi.org/10.1117/12.565587

18 Dimitra N. Stratis, Kristine L. Eland, J. Chance Carter, Samuel J. Tomlinson, and S. Michael Angel, "Comparison of Acousto-optic and Liquid Crystal Tunable Filters for Laser-Induced Breakdown Spectroscopy," Applied Spectroscopy, Vol. 55, Issue 8, pp. 999-1004, http://www.opticsinfobase.org/abstract.cfm?URI=as-55-8-999 (accessed October 24, 2007)

19 R. Rowe, K. Nixon, S. Corcoran, “Multispectral Fingerprint Biometrics,” Proceedings of the IEEE Workshop on Information Assurance and Security, West Point, NY, 2005, http://dx.doi.org/10.1109/IAW.2005.1495928

20 P. Teikari, (2007) “Automated pupillometry,” Project Work on Measurement Science and Technology, Helsinki University of Technology, Finland.

21 W. Zhao, R. Chellappa, P. Phillips and A. Rosenfeld, Face recognition: a literature survey, ACM Computing Surveys 35 (4):399–458 (2003), http://doi.acm.org/10.1145/954339.954342

22 L. Wolff, D. Socolinsky, C. Eveland, Quantitative measurement of illumination invariance for face recognition using thermal infrared imagery, in: IEEE Workshop on Computer Vision Beyond the Visible Spectrum, Hawaii, 2001, http://dx.doi.org/10.1117/12.457626

23 D. Socolinsky, A. Selinger and J. Neuheisel, Face recognition with visible and thermal infrared imagery, Computer Vision and Image Understanding 91(1-2):72–114 (2003), http://dx.doi.org/10.1016/S1077-3142(03)00075-4

24 G. Friedrich, Y. Yeshurun, Seeing people in the dark: face recognition in infrared images, in: Second BMCV, 2003, http://portal.acm.org/citation.cfm?id=648248.751741 (accessed October 25, 2007)

25 A. Jain, R. Bolle and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Publishers, Dordrecht (1999).

26 I. Pavlidis, P. Symosek, The imaging issue in an automatic face/disguise detection system, in: IEEE Workshop on Computer Vision Beyond the Visible Spectrum, 2000, pp. 15–24, http://dx.doi.org/10.1109/CVBVS.2000.855246

27 G Bebis, A Gyaourova, S Singh and I Pavlidis, "Face recognition by fusing thermal infrared and visible imagery," Image and Vision Computing 24(7):727-742 (2006), http://dx.doi.org/10.1016/j.imavis.2006.01.017

28 S.G. Kong, J. Heo, B.R. Abidi, J. Paik and M.A. Abidi, Recent advances in visual and infrared face recognition—a review, Computer Vision and Image Understanding 97: 103–135 (2005), http://dx.doi.org/10.1016/j.cviu.2004.04.001

29 P. Scheunders, Local mapping for multispectral image visualization, Image and Vision Computing 19(13):971–978 (2001), http://dx.doi.org/10.1016/S0262-8856(01)00058-0

30 S.S.K. Vani and K. Lakshmi, Comparison of conventional and wavelet transform techniques for fusion of IRS-1C, LISS-III, and PAN images, in: Proceedings of the ACRS, vol. 1, 2001, pp. 140–145.

31 G. Piella, A general framework for multiresolution image fusion: from pixels to regions, Information Fusion 4:259–280 (2003), http://dx.doi.org/10.1016/S1566-2535(03)00046-0

32 B. Manjunath, R. Chellappa and C. von der Malsburg, A feature based approach to face recognition, in: Computer Vision and Pattern Recognition, pp. 373–378 (1992), http://dx.doi.org/10.1109/CVPR.1992.223162

33 E. Adelson and P. Burt, Image data compression with the laplacian pyramid, in: Pattern Recognition and Image Processing, Dallas TX, 1981, pp. 218–223.

34 C. Chui, An Introduction to Wavelets, Academic Press, London (1992).

35 J.G. Daugman, “How iris recognition works,” Image Processing. 2002. Proceedings. 2002 International Conference on, 2002, http://www.cl.cam.ac.uk/~jgd1000/irisrecog.pdf (accessed October 24, 2007).

36 J.G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” Pattern Analysis and Machine Intelligence, IEEE Transactions on 15(11):1148-1161 (1993), http://doi.ieeecomputersociety.org/10.1109/34.244676

37 Andrew T Duchowski, “A breadth-first survey of eye-tracking applications,” Behavior Research Methods, Instruments, & Computers 34(4):455-470 (2002).

38 A. De Santis and D. Iacoviello, “Optimal segmentation of pupillometric images for estimating pupil shape parameters,” Computer Methods and Programs in Biomedicine 84(2-3):174-187 (2006), http://dx.doi.org/10.1016/j.cmpb.2006.07.005

39 Christopher Boyce, Arun Ross, Matthew Monaco, Lawrence Hornak, Xin Li, "Multispectral Iris Analysis: A Preliminary Study," cvprw, p. 51, 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06), 2006, http://doi.ieeecomputersociety.org/10.1109/CVPRW.2006.141 [Abstract only]

40 S. Fan, C. Dyer, L. Hubbard, “Quantification and Correction of Iris Color,” Technical Report 1495, University of Wisconsin-Madison, 2003, Available from: http://www.cs.wisc.edu/~shaohua/research/iris/IrisColor_files/IrisTr1495.pdf (accessed October 24, 2007)

41 FH Imai, "Preliminary Experiment for Spectral Reflectance Estimation of Human Iris using a Digital Camera,", 2000, Available online: http://www.cis.rit.edu/mcsl/research/PDFs/IrisImaging.pdf (accessed October 24, 2007)

42 T. Sakamoto, C. Nakanishi, T. Hase, “Software Pixel Interpolation for Digital Still Cameras Suitable for a 32-Bit MCU,” IEEE Transactions on Consumer Electronics, 44(4):1342-1352 (1998), http://dx.doi.org/10.1109/30.735836

43 Li Q, Fraley C, Bumgarner RE, Yeung KY, Raftery AE, "Donuts, scratches and blanks: robust model-based segmentation of microarray images," Bioinformatics 21(12):2875-2882 (2005), http://dx.doi.org/10.1093/bioinformatics/bti447

44 A. Blansché, P. Gançarski and J.J. Korczak, "MACLAW: A modular approach for clustering with local attribute weighting," Pattern Recognition Letters 27(11):1299-1306, http://dx.doi.org/10.1016/j.patrec.2005.07.027

45 Chamberlain, J. 1978. The Principles of Interferometric Spectroscopy. Wiley, New York.

46 Malik, Z., Cabib, D., Buckwald, R.A., Talmi, A., Garini, Y., and Lipson, S. G., “Fourier transform multipixel spectroscopy for quantitative cytology,” J. Microsc., 182: 133-140 (1996), http://dx.doi.org/10.1046/j.1365-2818.1996.131411.x

47 Tsurui, H., Nishimura, H., Hattori, S., Hirose, S., Okumura, K., and Shirai, T., “Seven-color fluorescence imaging of tissue samples based on Fourier spectroscopy and singular value decomposition,” J. Histochem. Cytochem., 48: 653-662 (2000), http://www.jhc.org/cgi/content/abstract/48/5/653 (accessed October 25, 2007)

48 A. E. Dixon, S. Damaskinos and M. R. Atkinson, "A scanning confocal microscope for transmission and reflection imaging," Nature 351:551-553 (1991), http://dx.doi.org/10.1038/351551a0

49 José-Angel Conchello and Jeff W Lichtman, "Optical sectioning microscopy," Nature Methods 2: 920 - 931 (2005), http://dx.doi.org/10.1038/nmeth815

50 Wachman, E.S., Niu, W., and Farkas, D.L., “AOTF microscope for imaging with increased speed and spectral versatility,” Biophys. J., 73: 1215-1222 (1997), http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1181021 (accessed October 25, 2007)

51 Ford, B.K., Volin, C.E., Murphy, S.M., Lynch, R.M., and Descour, M.R., “Computed tomography-based spectral imaging for fluorescence microscopy,” Biophys. J., 80: 986-993 (2001), http://www.biophysj.org/cgi/content/abstract/80/2/986 (accessed October 25, 2007)

52 Lansford, R., Bearman, G., and Fraser, S.E., “Resolution of multiple green fluorescent protein color variants and dyes using two-photon microscopy and imaging spectroscopy,” J. Biomed. Opt., 6: 311-318 (2001), http://dx.doi.org/10.1117/1.1383780

53 Haraguchi, T., Shimi, T., Koujin, T., Hashiguchi, N., and Hiraoka, Y., “Spectral imaging for fluorescence microscopy,” Genes Cells, 7: 881-887 (2002), http://dx.doi.org/10.1046/j.1365-2443.2002.00575.x

54 Timo Zimmermann, "Spectral Imaging and Linear Unmixing in Light Microscopy", in Advances in Biochemical Engineering/Biotechnology, Springer Berlin, Heidelberg, 2005, http://dx.doi.org/10.1007/b102216

55 Yasushi Hiraoka, Takeshi Shimi and Tokuko Haraguchi; “Multispectral Imaging Fluorescence Microscopy for Living Cells”. Cell Struct. Funct., 27: 367-374 (2002), http://dx.doi.org/10.1247/csf.27.367

56 R. Neher and E. Neher, “Optimizing imaging parameters for the separation of multiple labels in a fluorescence image” Journal of Microscopy 213 (1): 46–62 (2004), http://dx.doi.org/10.1111/j.1365-2818.2004.01262.x

57 Jennifer Lippincott-Schwartz and George H. Patterson, "Development and Use of Fluorescent Protein Markers in Living Cells," Science 300(5616):87-91 (2003), http://dx.doi.org/10.1126/science.1082520

58 Mikhail V. Matz, Konstantin A. Lukyanov, Sergey A. Lukyanov, "Family of the green fluorescent protein: Journey to the end of the rainbow," BioEssays 24(10):953-959, http://dx.doi.org/10.1002/bies.10154

59 Jason R. Swedlow, Ilya Goldberg, Erik Brauner, Peter K. Sorger, "Informatics and Quantitative Analysis in Biological Imaging," Science 300(5616):100-102 (2003), http://dx.doi.org/10.1126/science.1082602

60 Wuwei Wu and Alexander DQ Li, "Optically switchable nanoparticles for biological imaging," Nanomedicine 2(4): 523-531 (2007), http://dx.doi.org/10.2217/17435889.2.4.523

61 Nadya G Gurskaya, Vladislav V Verkhusha, Alexander S Shcheglov, Dmitry B Staroverov, Tatyana V Chepurnykh, Arkady F Fradkov, Sergey Lukyanov and Konstantin A Lukyanov, "Engineering of a monomeric green-to-red photoactivatable fluorescent protein induced by blue light," Nature Biotechnology 24:461-465 (2006), http://dx.doi.org/10.1038/nbt1191

62 P Artal and DR Williams, "Introduction to the Special issue on “Optics in Vision”," Journal of Vision, 4(4):i (2004), http://dx.doi.org/10.1167/4.4.i

63 G Westheimer, "Specifying and controlling the optical image on the human retina," Prog Retin Eye Res. 25(1):19-42 (2006), http://dx.doi.org/10.1016/j.preteyeres.2005.05.002

64 H Kolb, "How the Retina Works," American Scientist 91(1):28 (2003), http://dx.doi.org/10.1511/2003.1.28

65 Steven W. Lockley and Joshua J. Gooley, "Circadian Photoreception: Spotlight on the Brain," Current Biology 16(18):R795-R797 (2006), http://dx.doi.org/10.1016/j.cub.2006.08.039

66 Berson DM, Dunn FA and Takao M. Phototransduction by retinal ganglion cells that set the circadian clock. Science 295:1070–1073 (2002), http://dx.doi.org/10.1126/science.1067262

67 Parikh RS, Parikh SR, Sekhar GC, Prabakaran S, Babu JG and Thomas R, "Normal age-related decay of retinal nerve fiber layer thickness," Ophthalmology 114(5):921-926 (2007), http://dx.doi.org/10.1016/j.ophtha.2007.01.023

68 LM Zangwill and C Bowd, "Retinal nerve fiber layer analysis in the diagnosis of glaucoma," Current Opinion in Ophthalmology 17(2):120-131 (2006), http://www.co-ophthalmology.com/pt/re/coophth/fulltext.00055735-200604000-00003.htm (accessed October 24, 2007).

69 C. Chiquet, C. Gronfier, C. Rieux, R.A. Hut, B. Claustrat, J. Brun, P. Denis and H.M. Cooper, “Reduced Sensitivity To Light Suppression Of Nocturnal Plasma Melatonin In Glaucoma Patients,” Invest Ophthalmol Vis Sci 2004;45: E-Abstract 4339. http://abstracts.iovs.org/cgi/content/abstract/45/5/4339 (accessed October 24, 2007).

70 Fisher JB, Jacobs DA, Markowitz CE, Galetta SL, Volpe NJ, Nano-Schiavi ML, Baier ML, Frohman EM, Winslow H, Frohman TC, Calabresi PA, Maguire MG, Cutter GR and Balcer LJ, "Relation of visual function to retinal nerve fiber layer thickness in multiple sclerosis," Ophthalmology 113(2):324-332, http://dx.doi.org/10.1016/j.ophtha.2005.10.040

71 Xiang-Run Huang and Robert W. Knighton, "Linear birefringence of the retinal nerve fiber layer measured in vitro with a multispectral imaging micropolarimeter," Journal of Biomedical Optics 7(2):199-204 (2002), http://dx.doi.org/10.1117/1.1463050

72 Arthur Lompado, Matthew H. Smith, Lloyd W. Hillman and Kurt R. Denninghoff, "Multispectral confocal scanning laser ophthalmoscope for retinal vessel oximetry," Proceedings of SPIE, Volume 3920, Spectral Imaging: Instrumentation, Applications, and Analysis, Gregory H. Bearman, Dario Cabib, Richard M. Levenson, Editors, March 2000, pp. 67-73, http://dx.doi.org/10.1117/12.379584 [Abstract only]

73 Anon. Specim: Advertisement for hyperspectral microscopy, available online: http://www.specim.fi/media/pdf/spectral-imaging-references/hyperspectral-microscopy-kestrel.pdf (accessed October 18, 2007)

74 R.N. Weinreb, A.W. Dreher, A. Coleman, H. Quigley, B. Shaw and K. Reiter, Histopathologic validation of Fourier-ellipsometry measurements of retinal nerve fiber layer thickness, Arch. Ophthalmol. 108:557–560 (1990),

75 R.W. Knighton and Q. Zhou, The relation between reflectance and thickness of the retinal nerve fiber layer, J. Glaucoma 4:117–123 (1995).

76 X.-R. Huang and R.W. Knighton, Linear birefringence of the retinal nerve fiber layer measured in vitro with a multispectral imaging micropolarimeter, J. Biomed. Opt. 7:199–204 (2002), http://dx.doi.org/10.1117/1.1463050

77 X.-R. Huang and R.W. Knighton, “Diattenuation and polarization preservation of retinal nerve fiber layer,” Appl. Opt. 42:5737–5743 (2003), http://www.opticsinfobase.org/abstract.cfm?URI=ao-42-28-5737 (accessed October 25, 2007)

78 R.W. Knighton and X.-R. Huang, “Visible and near-infrared imaging of the nerve fiber layer of the isolated rat retina,” J. Glaucoma 8:31–37 (1999).

79 C.A. Toth, D.G. Narayan, S.A. Boppart, M.R. Hee, J.G. Fujimoto, R. Birngruber, C.P. Cain, C.D. DiCarlo and W.P. Roach, A comparison of retinal morphology viewed by optical coherence tomography and by light microscopy, Arch. Ophthalmol. 115:1425–1428 (1997), http://archopht.ama-assn.org/cgi/content/abstract/115/11/1425 (accessed October 25, 2007)

80 Y.J. Huang, A.V. Cideciyan, G.I. Papastergiou, E. Banin, S.L. Semple-Rowland, A.H. Milam and S.G. Jacobson, Relation of optical coherence tomography to microanatomy in normal and rd chickens, Invest. Ophthalmol. Vis. Sci. 39:2405–2416 (1998), http://www.iovs.org/cgi/content/abstract/39/12/2405 (accessed October 25, 2007)

81 M. R. Hee, J. A. Izatt, E. A. Swanson, D. Huang, J. S. Schuman, C. P. Lin, C. A. Puliafito and J. G. Fujimoto, "Optical coherence tomography of the human retina," Arch Ophthalmol 113(3): 325, http://archopht.highwire.org/cgi/content/abstract/113/3/325 (accessed October 25, 2007)

82 Nader Nassif, Barry Cense, B. Hyle Park, Seok H. Yun, Teresa C. Chen, Brett E. Bouma, Guillermo J. Tearney, and Johannes F. de Boer, "In vivo human retinal imaging by ultrahigh-speed spectral domain optical coherence tomography," Optics Letters 29(5):480-482, http://www.opticsinfobase.org/abstract.cfm?URI=ol-29-5-480 (accessed October 25, 2007)

83 Huang XR, Knighton RW and Shestopalov V, "Quantifying retinal nerve fiber layer thickness in whole-mounted retina," Exp Eye Res., 83(5):1096-101 (2006), http://dx.doi.org/10.1016/j.exer.2006.05.020

84 Xiang-Run Huang, Robert W. Knighton, and Lora N. Cavuoto, "Microtubule Contribution to the Reflectance of the Retinal Nerve Fiber Layer," Investigative Ophthalmology and Visual Science 47:5363-5367, http://dx.doi.org/10.1167/iovs.06-0451

85 C. Cajochen, J.M. Zeitzer, C.A. Czeisler and D.J. Dijk, “Dose-response relationship for light intensity and ocular and electroencephalographic correlates of human alertness,” Behav. Brain Res. 115:75–83 (2000), http://dx.doi.org/10.1016/S0166-4328(00)00236-9

86 Kavita Thapan, Josephine Arendt and Debra J. Skene, "An action spectrum for melatonin suppression: evidence for a novel non-rod, non-cone photoreceptor system in humans," Journal of Physiology 535(1):261-267 (2001), http://jp.physoc.org/cgi/content/abstract/535/1/261 (accessed October 25, 2007)

87 S.W. Lockley, G.C. Brainard and C.A. Czeisler, “High sensitivity of the human circadian melatonin rhythm to resetting by short wavelength light,” J. Clin. Endocrinol. Metab. 88 (2003), pp. 4502–4505, http://jcem.endojournals.org/cgi/content/abstract/88/9/4502 (accessed October 25, 2007)

88 C. Cajochen, M. Munch, S. Kobialka, K. Krauchi, R. Steiner, P. Oelhafen, S. Orgul and A. Wirz-Justice, “High sensitivity of human melatonin, alertness, thermoregulation, and heart rate to short wavelength light,” J. Clin. Endocrinol. Metab. 90:1311–1316 (2005), http://dx.doi.org/10.1210/jc.2004-0957

89 Revell VL, Arendt J, Fogg LF, Skene DJ, "Alerting effects of light are sensitive to very short wavelengths," Neurosci Lett. 399(1-2):96-100 (2006), http://dx.doi.org/10.1016/j.neulet.2006.01.032

90 G. Vandewalle, E. Balteau, C. Phillips, C. Degueldre, V. Moreau, V. Sterpenich, G. Albouy, A. Darsaud, M. Desseilles and T. Dang-Vu et al., “Daytime light exposure dynamically enhances brain responses,” Curr. Biol. 16:1616–1621 (2006), http://dx.doi.org/10.1016/j.cub.2006.06.031

91 P. Teikari, (2006) “Biological effects of light,” Master’s thesis, Lighting Laboratory, Helsinki University of Technology, Finland.

92 I. Provencio, I.R. Rodriguez, G. Jiang, W.P. Hayes, E.F. Moreira and M.D. Rollag, A novel human opsin in the inner retina, J. Neurosci. 20:600–605 (2000), http://www.jneurosci.org/cgi/content/abstract/20/2/600 (accessed October 25, 2007)

93 O. Dkhissi-Benyahya, Rieux C, Hut RA and Cooper HM, “Immunohistochemical evidence of a melanopsin cone in human retina,” Investig Ophthalmol Vis Sci. 47:1636—1641 (2006), http://dx.doi.org/10.1167/iovs.05-1459

94 X. Qiu, T. Kumbalasiri, S.M. Carlson, K.Y. Wong, V. Krishna, I. Provencio and D.M. Berson, Induction of photosensitivity by heterologous expression of melanopsin, Nature 433:745–749 (2005), http://dx.doi.org/10.1038/nature03345

95 George C. Brainard and John P. Hanifin, "Photons, Clocks, and Consciousness," Journal of Biological Rhythms 20(4):314-325 (2005), http://dx.doi.org/10.1177/0748730405278951

96 Anon. CIE TC6-61 “Action Spectra and Dosimetric Quantities for Circadian and Related Neurobiological Effects,” Available online: http://physics.nist.gov/Divisions/Div844/CIE/CIE6/TCs/6-61.htm (accessed October 25, 2007)

97 G Brainard, "Photoreceptor system for melatonin regulation and phototherapy," United States Patent 20010056293. Available online: http://www.freepatentsonline.com/20010056293.html (accessed October 25, 2007)

98 Anon. CIE TC 1-58 “Visual Performance in the Mesopic Range,” Available online: http://lightinglab.fi/CIETC1-58/index.html (accessed October 25, 2007)

99 S Hubalek, D Zöschg, and C Schierz, "Ambulant recording of light for vision and non-visual biological effects," Lighting Research and Technology 38(4): 314-321 (2006), http://dx.doi.org/10.1177/1477153506070687

100 A Bierman, TR Klein, MS Rea, “The Daysimeter: a device for measuring optical radiation as a stimulus for the human circadian system,” Meas. Sci. Technol. 16:2292-2299 (2005), http://dx.doi.org/10.1088/0957-0233/16/11/023

101 Anon. Rensselaer Polytechnic Institute, Lighting Research Center. Press release: "Researchers Developing Device To Predict Proper Light Exposure for Human Health", Available from: http://news.rpi.edu/update.do?artcenterkey=2303&setappvar=page(1) (accessed October 25, 2007)

102 R.A. Yotter and D.M. Wilson, "A review of photodetectors for sensing light-emitting reporters in biological systems," IEEE Sensors Journal 3(3):288-303 (2003), http://dx.doi.org/10.1109/JSEN.2003.814651

103 Anon. SiliconImaging. CMOS Market Opportunity. Available from: http://www.siliconimaging.com/cmos_market_opportunity.htm (accessed October 25, 2007)

104 Gall D, Bieske K. 2004. Definition and measurement of circadian radiometric quantities. CIE Symposium ’04 “Light and Health”, pp. 129-132.

105 Gall D, Lapuente V. 2002. Beleuchtungsrelevante Aspekte bei der Auswahl eines förderlichen Lampenspektrums. Licht 54:860-871.

106 Anon. LMK (98-3) Color. Online brochure. Available from: http://www.technoteam.de/e898/e97/e286/e287/e292/LMK98-3-Color-engl_eng.pdf (accessed October 25, 2007).

107 O. Dkhissi-Benyahya, C. Gronfier, W. De Vanssay, F. Flamant, H. Cooper, Modeling the Role of Mid-Wavelength Cones in Circadian Responses to Light, Neuron 53(5):677-687 (2007), http://dx.doi.org/10.1016/j.neuron.2007.02.005

108 Satchidananda Panda, Multiple Photopigments Entrain the Mammalian Circadian Oscillator, Neuron 53(5):619-621 (2007), http://dx.doi.org/10.1016/j.neuron.2007.02.017

109 Elise Drouyer, Camille Rieux, Roelof A. Hut and Howard M. Cooper, Responses of Suprachiasmatic Nucleus Neurons to Light and Dark Adaptation: Relative Contributions of Melanopsin and Rod–Cone Inputs, The Journal of Neuroscience, September 5, 2007, 27(36):9623-9631, http://dx.doi.org/10.1523/JNEUROSCI.1391-07.2007

110 J Hollan, “Metabolism-influencing light: measurement by digital cameras,” Poster at Cancer and Rhythm, Oct 14-16, Graz, Austria, 2004. Available from: http://amper.ped.muni.cz/noc/english/canc_rhythm/g_camer.pdf (accessed October 25, 2007).

111 J Hollan, "Luminance and Radiance measurement using raw formats of common cameras", Available online: http://amper.ped.muni.cz/light/luminance/ (accessed October 25, 2007)

112 Anon. Ecology of the Night Symposium. Scotobiology. Online article. Available from: http://www.muskokaheritage.org/ecology-night/scotobiology.asp (accessed October 25, 2007)

113 Lyytimäki J. 2006. Unohdetut ympäristöongelmat [In Finnish: Forgotten environmental problems], Gaudeamus, Helsinki, Finland.

114 Forejt M, Hollan J, Skočovský, Skotnice R. 2004. Sleep disturbances by light at night: two queries made in 2003 in Czechia. Poster at Cancer and Rhythm, Oct 14-16, Graz, Austria, 2004. Available from: http://amper.ped.muni.cz/noc/english/canc_rhythm/g_sleep.pdf (accessed October 25, 2007)

115 B. Yoder and L. S. Daley, "Development of a visible spectroscopic method for determining chlorophyll a and b in vivo in leaf samples," Spectroscopy 5, 44–50 (1990).

116 E. W. Chappelle, M. S. Kim, and J. E. McMurtrey, “Ratio analysis of reflectance spectra (RARS): an algorithm for the remote estimation of the concentrations of chlorophyll a, chlorophyll b, and carotenoids,” Remote Sens. Environ. 37, 121–128 (1992).

117 J. E. McMurtrey, E. W. Chappelle, M. S. Kim, and L. A. Corp, “Distinguishing nitrogen fertilization levels in field corn (Zea mays L) with actively induced fluorescence and passive reflectance measurements,” Remote Sens. Environ. 47, 36–44 (1994).

118 L. Ning, L. S. Daley, W. J. Bower, E. H. Piepmeier, G. A. Strobel, and J. B. Callis, "Spectroscopic imaging of water in living plant leaves," Spectroscopy 11, 34–44 (1996).

119 E. W. Chappelle, F. M. Wood, J. E. McMurtrey, and W. W. Newcomb, “Laser-induced fluorescence of green plants. 1. A technique for the remote detection of plant stress and species differentiation,” Appl. Opt. 23, 134–138 (1984), http://www.opticsinfobase.org/abstract.cfm?&id=27181 (accessed October 26, 2007).

120 E. W. Chappelle, J. E. McMurtrey, F. M. Wood, and W. W. Newcomb, "Laser-induced fluorescence of green plants. 2. LIF changes caused by nutrient deficiencies in corn," Appl. Opt. 23, 139–142 (1984), http://www.opticsinfobase.org/abstract.cfm?&id=27182 (accessed October 26, 2007).

121 M. Lang, P. Stiffel, Z. Braunova, and H. K. Lichtenthaler, “Investigation of the blue–green fluorescence emission of plant leaves,” Bot. Acta 105, 395–468 (1992).

122 F. Stober, M. Lang, and H. K. Lichtenthaler, “Studies on the blue, green, red fluorescence signature of green etiolated and white leaves,” Remote Sens. Environ. 47, 65–71 (1994).

123 W. W. Chappelle, J. E. McMurtrey, and M. S. Kim, “Identification of the pigment responsible for the blue fluorescence band in laser induced fluorescence (LIF) spectra of green plants, and the potential use of this band in remotely estimating rates of photosynthesis,” Remote Sens. Environ. 36, 213–218 (1991).

124 Moon S. Kim, James E. McMurtrey, Charles L. Mulchi, Craig S. T. Daughtry, Emmett W. Chappelle, and Yud-Ren Chen, "Steady-state multispectral fluorescence imaging system for plant leaves," Applied Optics 40(1):157-166 (2001), http://ao.osa.org/abstract.cfm?id=62839 (accessed October 24, 2007).

125 IY Zayas and PW Flinn, "Detection of insects in bulk wheat samples with machine vision," Transactions of the ASAE. 41(3):883-888 (1998), http://asae.frymulti.com/abstract.asp?aid=17206&t=2 (accessed October 28, 2007).

126 H. H. Kim, "New algae mapping technique by the use of an airborne laser fluorosensor," Appl. Opt. 12, 1454–1459 (1973), http://www.opticsinfobase.org/abstract.cfm?&id=17932 (accessed October 28, 2007).

127 F. E. Hoge, R. N. Swift, and J. K. Yungel, “Feasibility of airborne detection of laser induced fluorescence from green plants,” Appl. Opt. 22, 2991–2998 (1983), http://www.opticsinfobase.org/abstract.cfm?&id=26950 (accessed October 28, 2007).

128 G. Cecchi, M. Bazzani, V. Raimond, and L. Pantani, "Fluorescence LIDAR in vegetation remote sensing: system features and multiplatform operation," in Proceedings of the International Geoscience and Remote Sensing Symposium IGARSS '94 (Institute of Electrical and Electronics Engineers, New York, 1994), pp. 637–639.

129 H Nilsson, "Remote Sensing and Image Analysis in Plant Pathology," Annual Review of Phytopathology 33:489-528, (1995) http://dx.doi.org/10.1146/annurev.py.33.090195.002421

130 Dull et al., "Near Infrared Analysis of Soluble Solids in Intact Cantaloupe," Journal of Food Science 54(2):393-395 (1989), http://dx.doi.org/10.1111/j.1365-2621.1989.tb03090.x

131 Lu R, Guyer DE and EM Beaudry, "Determination of firmness and sugar content of apples using near-infrared diffuse reflectance," Journal of Texture Studies 31(6):615-630 (2000), http://dx.doi.org/10.1111/j.1745-4603.2000.tb01024.x

132 S. Kawano, H. Abe, M. Iwamoto, in Nondestructive Techniques for Quality Evaluation of Fruits and Vegetables. Proc. Int. Workshop 15–19 June, 1993, ed. by G. Brown, Y. Sarig (US-Israel BARD Fund, Spokane, WA, 1994), pp. 1–18.

133 B. Park, J. A. Abbott, K. J. Lee, C. H. Choi and K. H. Choi, "Near-infrared diffuse reflectance for quantitative and qualitative measurement of soluble solids and firmness of Delicious and Gala apples," Transactions of the ASAE. Vol. 46(6): 1721-1731 (2003), http://asae.frymulti.com/abstract.asp?aid=15628&t=2 (accessed October 28, 2007).

134 GS Birth, "The light scattering characteristics of ground grains," International agrophysics 2(1):59-67 (1986).

135 Renfu Lu, "Nondestructive measurement of firmness and soluble solids content for apple fruit using hyperspectral scattering images," Sensing and Instrumentation for Food Quality and Safety 1(1):19-27 (2007), http://dx.doi.org/10.1007/s11694-006-9002-9

136 B. Park, K.C. Lawrence, W.R. Windham and R.J. Buhr, “Hyperspectral imaging for detecting fecal and ingesta contaminants on poultry carcasses,” Trans. ASAE 45:2017-2026 (2002), http://asae.frymulti.com/abstract.asp?aid=7433&t=2 (accessed October 28, 2007).

137 R Vidal, Y Ma and S Sastry, "Generalized principal component analysis (GPCA)," IEEE Transactions on Pattern Analysis and Machine Intelligence 27(12):1945-1959 (2005), http://dx.doi.org/10.1109/TPAMI.2005.244

138 Di-Yuan Tzeng and Roy S. Berns, "A review of principal component analysis and its applications to color technology," Color Research & Application 30(2):84-98 (2005), http://dx.doi.org/10.1002/col.20086

139 P Pinho, E Tetri and L Halonen, "LED-based Lighting System for Plants Illumination," Available from: http://lightinglab.fi/research/national/LEDplant/index.html (accessed October 28, 2007).