
Highlight detection and removal from spectral image

Pesal Koirala,1,2,* Paras Pant,1 Markku Hauta-Kasari,1 and Jussi Parkkinen1,3

1University of Eastern Finland, School of Computing, P.O.B. 111, FI-80101 Joensuu, Finland
2Norsk Elektro Optikk AS, Solheimveien 62 A, N-1473 Lørenskog, Norway

3Monash University Sunway Campus, Jalan Lagoon Selatan, 46150 Bandar Sunway Selangor, Malaysia
*Corresponding author: [email protected]

Received July 8, 2011; revised September 7, 2011; accepted September 8, 2011; posted September 16, 2011 (Doc. ID 150512); published October 13, 2011

We present a constrained spectral unmixing method to remove highlight from a single spectral image. In the constrained spectral unmixing method, the constraints have been imposed so that all the fractions of diffuse and highlight reflection sum up to 1 and are positive. As a result, the spectra of the diffuse image are always positive. The spectral power distribution (SPD) of the light source has been used as the pure highlight spectrum. The pure diffuse spectrum of the measured spectrum has been chosen from the set of diffuse spectra. The pure diffuse spectrum has a minimum angle among the angles calculated between spectra from a set of diffuse spectra and the measured spectrum projected onto the subspace orthogonal to the SPD of the light source. The set of diffuse spectra has been collected by an automated target generation program from the diffuse part in the image. Constrained energy minimization in a finite impulse response linear filter has been used to detect the highlight and diffuse parts in the image. Results by constrained spectral unmixing have been compared with results by the orthogonal subspace projection (OSP) method [Proceedings of International Conference on Pattern Recognition (2006), pp. 812–815] and probabilistic principal component analysis (PPCA) [Proceedings of the 4th WSEAS International Conference on Signal Processing, Robotics and Automation (2005), paper 15]. Constrained spectral unmixing outperforms OSP and PPCA in the visual assessment of the diffuse results. The highlight removal method by constrained spectral unmixing is suitable for spectral images. © 2011 Optical Society of America

OCIS codes: 100.2960, 100.2980, 110.4234, 330.5000.

1. INTRODUCTION
The enormous amount of information carried by the large number of bands at each pixel position in spectral images has always been a great advantage for analysis. For better processing results, it is necessary to have the image without highlights; otherwise, the desired objects may be obscured by highlights or detected as different objects. The highlight can be removed by using a cross setting of polarizing filters in front of the camera sensor and the light source during image acquisition. This is based on the idea that the highlight or specular component is polarized and the diffuse component is not polarized for any dielectric objects [1]. However, it may not always be feasible to put a polarizing filter in front of the camera and light source, and it is not suitable for an already measured image. Therefore, an alternative method for an already measured image is required. Much work has been done on removing highlights from RGB images [1–5]. To our knowledge, there has been only limited work regarding highlight removal from spectral images [6,7]. Most of the highlight removal methods for RGB, as well as for spectral images, depend on the basic concepts of the dichromatic reflection model (DRM) [8]. In a spectral image, the radiance spectrum at each pixel position is a mixed spectrum, a linear combination of the diffuse component and the highlight component [8]. The highlight component is due to the contribution of the spectral power distribution (SPD) of the light source. Here the goal is to separate the diffuse and highlight components from a single radiance or reflectance spectrum at each pixel position. Bochko and Parkkinen [7] exploited principal component analysis (PCA) to remove highlights from spectral images. In their method, a mixture model of probabilistic PCA was used to detect the highlight-affected part and the diffuse part in the image. Finally, the highlight-detected part mapped across the first eigenvector of the diffuse part was used to remove highlight during the reconstruction process by PCA. Since the accuracy of the result depends on the information that is carried by the first principal component, a multicolored image with a background may not be properly segmented between the diffuse and highlight parts. The PCA method [7] does not need the information of the light source. Fu et al. [6] proposed a highlight-free spectral image using OSP. In their method, the projector maps the radiance spectrum to the subspace orthogonal to the illumination spectrum (the SPD of the light source). Therefore, the OSP projector [6,9,10] rejects the uninteresting signature (the SPD of the light source) from the mixed spectra and, as a result, a highlight-free spectral image is achieved. The OSP method removes one band of information during rejection of the highlight in the single-illuminant case and removes multiple bands with multiple illuminants. However, this should not be a problem in a spectral image, since it consists of tens or hundreds of spectral bands. Because the resultant spectra from the OSP method have negative values at some wavelengths, they may not be physically realistic. The OSP method utilizes the prior information of the light source. In the OSP method, the highlight-free image accurately preserves the shades of color. However, a shift of color value is detected when the highlight-free spectral image is rendered in RGB space, because it produces the equivalent of the diffuse component projected onto the orthogonal subspace of the SPD of the light source instead of the exact diffuse component. Nevertheless, this method is computationally fast and can be applied to a spectral image rendered with multiple light sources [6].


In this paper, we propose the linear positive constrained spectral unmixing method to separate the spectral image into its highlight and diffuse components at each pixel position. The spectral unmixing method [11] has been widely used in remote sensing to detect the fractions (abundances) of the target signatures (endmembers) of different components of images, such as sand, trees, and roads. The spectral unmixing method defines the measured spectra as a linear combination of endmembers. In our case, we have assumed that the endmember of the highlight-affected part is the SPD of the light source, and the other endmembers are assumed to be the spectra of pure pixels selected from the diffuse part of the image. Endmembers of pure pixels have been selected using an automated target generation program (ATGP) [12]. In the case of a known light source, the first target spectrum in the ATGP is the SPD of the light source; otherwise, the spectrum of the pixel that has maximum length has been used as the first pure target spectrum and assumed to be the signature of the highlight-affected part. Here the length of a spectrum is the sum of its squared values over the wavelengths, and the maximum length is taken over all pixels. The endmembers for the diffuse component are determined by the ATGP only in the diffuse part of the image. The number of endmembers for the diffuse part can be selected by using hypothesis testing based on the eigenthresholding method [13]. The diffuse part of the image has been detected by using constrained energy minimization (CEM) in a finite impulse response (FIR) linear filter [9]. The diffuse part of the image should be understood as the part of the image that is not affected by highlight. Similarly, the highlight part of the image should be understood as the part of the image that is affected by highlight. Linear unmixing with the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC) gives the abundances of the different endmembers at each pixel position. After the abundances of the SPD of the light source and the diffuse endmember are known, the diffuse and highlight images are calculated.

The rest of the paper is organized as follows. Section 2 describes the spectral image acquisition process. Section 3 introduces the DRM. Sections 4 and 5 describe the spectral image highlight detection and highlight removal methods, respectively. The results are discussed in Section 6.

2. SPECTRAL IMAGE ACQUISITION
The spectral images have been acquired by a liquid crystal tunable filter (LCTF) camera [14], a Hyspex VNIR-1600 hyperspectral camera covering the visible to near-infrared region [15], and an RGB-to-spectral-image conversion method [16]. Spectral images were acquired by the LCTF camera under single and multiple light sources. The wavelengths of the images are within the range from 420 to 720 nm at 10 nm sampling. As a result, there are 31 different bands. The LCTF acquires the spectral image by filtering in the spectral domain. Spectral images captured by the Hyspex VNIR-1600 cover the 400 to 1000 nm wavelength range with sampling of about 3.7 nm, yielding a total of 160 bands. The Hyspex VNIR-1600 is a pushbroom line-scan camera. Diffuse spectral images of the same objects have been measured by the Hyspex VNIR-1600 camera. In the diffuse image measurement, a polarizing filter has been fitted in front of the light source and the camera. This setting, called a cross polarizer setting, removes the highlight effect in the image.

We also tested our method on spectral images obtained from an RGB image. RGB-to-spectral-image conversion estimates a spectral image from a three-band RGB image using known basis functions derived from training sets [16]. In the method, the spectral image of a color checker and the corresponding RGB image were measured as a training set. The basis functions were calculated by applying the transformation from the RGB image to the corresponding spectral image by the Wiener estimation method. The calculated basis functions have been used to find the spectral image from the measured RGB image. A more accurate spectral image can be computed if the same type of camera sensor and light source are used for the RGB image measurement as were used for the training sets. The wavelengths from 400 to 700 nm with 10 nm sampling have been considered. As a result, there are 31 different bands.
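To make the conversion step concrete, the following is a minimal sketch of a least-squares variant of such an RGB-to-spectral transform. It is not the authors' code from [16] (the published Wiener estimation may include a regularizing noise term); it assumes NumPy, and the array shapes and function names are illustrative.

```python
import numpy as np

def rgb_to_spectral_transform(train_spectra, train_rgb):
    """Fit a linear transform from RGB to spectra on a training set.

    train_spectra: (n_wavelengths, n_samples) measured training spectra
    train_rgb:     (3, n_samples) corresponding RGB responses
    """
    # W minimizes ||train_spectra - W @ train_rgb||_F^2 over the training set
    return train_spectra @ train_rgb.T @ np.linalg.inv(train_rgb @ train_rgb.T)

def rgb_to_spectral(rgb_image, W):
    """Apply the (n_wavelengths x 3) transform W pixelwise to an (H, W, 3) RGB image."""
    h, w, _ = rgb_image.shape
    flat = rgb_image.reshape(-1, 3).T          # (3, H*W)
    return (W @ flat).T.reshape(h, w, -1)      # (H, W, n_wavelengths)
```

With a 24-patch color checker as the training set, train_spectra would be 31 × 24 and train_rgb 3 × 24; as the text notes, the estimate is only as good as the match between the training and measurement sensor and illuminant.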

3. DICHROMATIC REFLECTION MODEL
Shafer [8] described a DRM for modeling the reflectance of dielectric objects. The model suggests that the radiance spectrum of the receptor is a linear combination of the diffuse and highlight components. Based on this model, the response of the receptor for each wavelength λ at geometric position (x) is defined as in Eq. (1):

R(x, \lambda) = d(x) R_d(\lambda, x) S(\lambda) q(\lambda) + s(x) S(\lambda) q(\lambda) + e(x), \qquad (1)

where R(x, λ) is the radiance value or response of the receptor at geometric position (x) at wavelength λ. The camera sensor sensitivity at each wavelength is represented by q(λ). d(x) and s(x) are the weighting factors for the diffuse and highlight reflections, respectively, at each pixel position (x). The noise at each pixel is represented by e(x). The weighting factors depend on the geometric structure of the surface. R_d(λ, x) is the diffuse reflectance, or surface albedo, at pixel position (x) and wavelength λ. S(λ) is the SPD of the illuminant, which is independent of the geometry of the surface. In the highlight component, the highlight reflectance is not considered, since it is assumed to be equal to the SPD of the light source. Even if the sensor sensitivity is not flat, it can be omitted by merging it into the illumination term S(λ), provided the same image sensor is used [6]. Equation (1) is rewritten in simple matrix form in Eq. (2):

R = dD + sS + e, \qquad (2)

where D is the radiance of the diffuse reflectance, derived as the elementwise product of the diffuse reflectance R_d and the SPD of the light source S; that is, D = R_d · S. The DRM can be extended to multiple light sources by describing the effect of the multiple light sources as a linear combination. If the multiple light sources illuminate the scene uniformly, the response of the receptor for each wavelength λ at geometric position (x) is extended to Eq. (3):

R^m = \sum_{i=1}^{L} d_i D_i + \sum_{i=1}^{L} s_i S_i + e. \qquad (3)


In Eq. (3), L represents the number of exposed light sources. D_i is the radiance value of the diffuse reflectance contributed by the ith light source, and S_i is the SPD of the ith light source. Accordingly, d_i and s_i are the diffuse and highlight weighting factors corresponding to the ith light source. Equation (4) represents Eq. (3) in matrix form:

R^m = D^m d^m + S^m s^m + e. \qquad (4)

The superscript [·]^m denotes the case of multiple light sources. R^m is the response of the receptor, a column matrix of size n × 1, where n is the number of wavelengths. S^m = [S_1 S_2 … S_L] is the collection of the SPDs of the L different light sources. Similarly, D^m = [D_1 D_2 … D_L] is the collection of the diffuse spectra contributed by the L different light sources. Since each radiance value at each pixel is of size n × 1, the sizes of D^m and S^m are n × L. The fractions of the diffuse component and the highlight component are represented by d^m = [d_1 d_2 … d_L]^T and s^m = [s_1 s_2 … s_L]^T, respectively. All d_i and s_i are scalar values; therefore, the sizes of d^m and s^m are L × 1. Here, i varies from 1 to L. In the case of an available reflectance image, the DRM is written as Eq. (5):

R_r = d D_r + s I + \tilde{e}, \qquad (5)

where R_r is the measured reflectance of a pixel, D_r is the reflectance of the diffuse component, and I is a column vector whose size equals the number of wavelengths. Here, I contains values equal to 1, since the reflectance is calculated by dividing the radiance by the SPD of the light source S.
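As a small numerical illustration of the mixing in Eq. (2) (a sketch with synthetic values, not data from the paper; NumPy assumed), a mixed radiance spectrum can be composed from a diffuse radiance D = R_d · S and the illuminant SPD S:

```python
import numpy as np

n = 31                                        # number of bands (420-720 nm at 10 nm)
wavelengths = np.linspace(420, 720, n)

S = np.ones(n)                                # illuminant SPD (flat, for illustration only)
Rd = 0.2 + 0.6 * np.exp(-((wavelengths - 600) / 60.0) ** 2)  # synthetic diffuse reflectance
D = Rd * S                                    # diffuse radiance, elementwise product

d, s = 0.8, 0.3                               # geometry-dependent weights at this pixel
R = d * D + s * S                             # mixed radiance per the DRM, Eq. (2)
```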

4. HIGHLIGHT DETECTION
CEM in an FIR filter [9] has been used as the highlight detection method in this paper. Highlight detection is used to segment the highlight and diffuse parts of the spectral image. The segmented diffuse part has been used to select the number of endmembers for the diffuse reflection. In the method, the signature of the highlight-affected part (the SPD of the light source) has been used as the desired signature. For a better result, a correlation matrix of the image without the pure pixels of the desired signature is required in CEM. The detector (CEM filter) may give fractions equal to zero or less than zero for the background pixels. The K-means clustering algorithm has been used to classify the fraction image into two parts: the diffuse part and the highlight part. Figure 1 shows the classification results of the diffuse and highlight parts of the spectral image. The classification results in Fig. 1 were obtained by applying the K-means clustering algorithm to the fraction image produced by CEM.
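A sketch of this two-class labeling step is given below, assuming the CEM fraction image has already been computed (see Subsection 4.A). It is not the authors' code; scikit-learn is assumed for K-means, and the function and variable names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_highlight(fraction_image):
    """Split a CEM fraction image into diffuse/highlight parts with 2-class K-means.

    fraction_image: (H, W) array of CEM filter outputs (fractions of the SPD signature).
    Returns a boolean mask that is True for highlight pixels.
    """
    km = KMeans(n_clusters=2, n_init=10).fit(fraction_image.reshape(-1, 1))
    highlight_label = int(np.argmax(km.cluster_centers_))   # cluster with larger mean fraction
    return (km.labels_ == highlight_label).reshape(fraction_image.shape)
```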

A. CEM in the FIR Linear Filter
The CEM method [9] is useful for finding the measure of a particular desired signature at each pixel without knowing prior information about the other mixing targets and undesired signatures. In the method, an FIR linear filter is designed. The filter coefficient vector W that minimizes the filter output energy is obtained subject to the constraint D^T W = W^T D = 1. Here, D is the desired signature. In our case, the desired signature D is the target signature of the highlight-affected part, i.e., the SPD of the light source. The size of the filter coefficient W^T is 1 × n, where n is the number of bands in the spectral image. The output of the FIR filter for the ith pixel of input image R is written in Eq. (6):

f_i = \sum_{j=1}^{n} W_j R_{ij} = W^T R_i = (R_i)^T W. \qquad (6)

The average energy of the filter output is written as

\frac{1}{N} \sum_{i=1}^{N} (f_i)^T f_i = \frac{1}{N} \left[ \sum_{i=1}^{N} \left( (R_i)^T W \right)^T (R_i)^T W \right] = W^T \left[ \frac{1}{N} \sum_{i=1}^{N} R_i (R_i)^T \right] W = W^T K W, \qquad (7)

where N is the total number of pixels in the image and K = \frac{1}{N} \sum_{i=1}^{N} R_i (R_i)^T is the autocorrelation matrix of the spectral image. The CEM was developed to solve the following linearly constrained optimization problem:

Fig. 1. (Color online) Highlight detection in spectral image. (a) and (b) Spectral images rendered in three bands (420, 550, and 700 nm). (c) and (d) Highlight detection results. Images in (c) and (d) were obtained by applying the K-means algorithm to the fraction image obtained by CEM.


\min_{W} \; W^T K W \quad \text{subject to} \quad D^T W = W^T D = 1. \qquad (8)

The optimal weight vector W, as shown in Eq. (9), was derived in [17]:

W^T = \left( \frac{K^{-1} D}{D^T K^{-1} D} \right)^T. \qquad (9)

The CEM filter output after applying the optimal weight vector W is W^T R. The CEM filter output is the fraction value of the desired signature at each pixel position; collectively, it forms the fraction image of the desired signature. In our case, the target signature of the highlight-affected part is considered the desired signature. In the highlight-affected part, the spectra of the pixels are similar to the desired signature; therefore, the fractions are close to 1. If the spectrum of a pixel is exactly the desired signature, i.e., R = D, the fraction value will be exactly 1, since W^T D = 1. However, in the diffuse part of the image, the spectra of the pixels are less similar to the desired signature, and, therefore, the fractions are smaller. After applying the K-means algorithm to the fraction image, the diffuse part and the highlight part of the image have been classified, as shown in Fig. 1. CEM is quite suitable for classifying the highlight-affected part and the diffuse part, since it is easy to find the fractions of the signature of the highlight-affected part without knowing all the other target signatures of the diffuse part. For this reason, CEM is sometimes called a partial unmixing method.
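The filter itself reduces to a few lines of linear algebra. The following sketch (NumPy assumed; the reshaping conventions and names are ours, not the authors') computes the autocorrelation matrix of Eq. (7), the weights of Eq. (9), and the resulting fraction image:

```python
import numpy as np

def cem_fraction_image(cube, d):
    """CEM filter output for each pixel.

    cube: (H, W, n) radiance image; d: (n,) desired signature (SPD of the light source).
    """
    h, w, n = cube.shape
    R = cube.reshape(-1, n)                 # one spectrum per row
    K = (R.T @ R) / R.shape[0]              # sample autocorrelation matrix, Eq. (7)
    Kinv_d = np.linalg.solve(K, d)          # K^{-1} d
    W = Kinv_d / (d @ Kinv_d)               # optimal weights, Eq. (9); satisfies W^T d = 1
    return (R @ W).reshape(h, w)            # fraction of the desired signature per pixel
```

The text notes that, for a better result, the correlation matrix should be estimated without the pure pixels of the desired signature; the sketch above uses all pixels for brevity.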

5. HIGHLIGHT REMOVAL
The spectrum at each pixel in the image has been defined as a linear combination of the highlight spectrum and the diffuse spectrum in the DRM [8]. The highlight removal method should separate the highlight spectrum and the diffuse spectrum at each pixel position to get the diffuse image. Here we propose a method that relies on constrained spectral unmixing to separate the highlight and diffuse spectra pixelwise from spectral images based on the DRM. Our method guarantees positive values in the highlight and diffuse spectra at all pixels. Other methods exist, such as OSP and PPCA, to separate the highlight from the spectral image. In OSP, the highlight effect is removed by projecting the image onto the orthogonal subspace of the SPD. The results by constrained linear unmixing and OSP are shown in Fig. 2. The results show that constrained linear unmixing improves the visual color of the image. Bochko and Parkkinen [7] presented the PPCA method for obtaining a highlight-free image. In their approach, there is no need for the information of the light source. This method [7] works quite well for an image having a uniformly colored surface, but our method, constrained spectral unmixing, clearly outperforms it on an image with a textured surface and different colors. Figure 3 shows the results obtained by the constrained spectral unmixing and PPCA methods. There is also a physical approach to removing the highlight from an image, where a polarizer is used in front of the light source and the sensor. It is based on the idea that the highlight or specular component in a dielectric object is polarized and the diffuse component is not polarized [1]. However, this method is not always useful, especially for an already measured image, and setting a polarizer in front of the light source, except for a point light source, may be difficult. Figure 4(a) shows the raw spectral image, and Figs. 4(b)–4(d) show a diffuse image by a polarizing filter, a diffuse image by the proposed constrained linear unmixing method, and a highlight-free image by OSP, respectively. These figures show that the result obtained by constrained linear unmixing is more similar to the image obtained by the polarizer. Figure 5 shows the radiance of the selected part in the original image, in the diffuse image by a polarizer, in the diffuse image by constrained spectral unmixing, and in the highlight-free image by OSP. The figure shows that the spectra by OSP suffer from negative values. Below we review the methods: OSP [6], highlight analysis by PPCA [7], and the constrained linear unmixing method [11].

Fig. 2. (Color online) Constrained spectral unmixing method compared to the OSP method. (a) Original spectral image. (b) Highlight-removed image obtained by constrained spectral unmixing. (c) Highlight-removed image obtained by OSP. All images are rendered in three bands (420, 550, and 700 nm).

Fig. 3. (Color online) Constrained spectral unmixing compared to PPCA. (a) Original spectral image. (b) Highlight-removed image obtained by constrained spectral unmixing. (c) Highlight-removed image obtained by PPCA. All images are rendered in three bands (420, 550, and 700 nm).


A. Orthogonal Subspace Projection
Since the highlight effect in the image is due to the added, scaled illumination term, as shown in Eq. (2), we construct the operator P, which eliminates the highlight effect produced by the SPD of the illuminants. The operator P projects the radiance R onto a subspace orthogonal to the illumination spectrum S. As a result, a new vector is obtained with highlight-free information. The operator P is called the orthogonal subspace projector or the least-squares optimal rejection operator [9,10]. The operator P is given as

P = I - S S^{\#}, \qquad (10)

where S^{\#} = (S^T S)^{-1} S^T is the pseudoinverse of S. Here P is a matrix of size n × n, where n is the total number of wavelengths, and I is an identity matrix. [·]^T and [·]^{-1} represent the transpose and the inverse of a matrix, respectively. The operator P maps the diffuse part into a space orthogonal to the space spanned by the light source S [6]. The operator has been applied to the radiance image of the measured spectra in Eq. (2):

PR = PDd + PSs + Pe. \qquad (11)

Since the projector P completely rejects S, the component PSs vanishes from Eq. (11). As a result, a highlight-free image is obtained, as shown in Eq. (12):

PR = PDd + Pe. \qquad (12)

This approach is an optimal interference rejection process in the least-squares sense, as P reduces the contribution of S in Eq. (2) to zero. The resulting vector gives the radiance spectrum without the highlight effect at each pixel, and the resultant image is a highlight-free (specular-free) spectral image. For multiple light sources, the orthogonal subspace projector of each light source is applied in sequence to Eq. (4).

Fig. 4. (Color online) Constrained spectral unmixing compared to the polarizer method and OSP. (a) Original spectral image (the radiance spectra of the selected part are shown in Fig. 5). (b) Highlight-removed image obtained by the polarizer method. (c) Highlight-removed image obtained by constrained spectral unmixing. (d) Highlight-removed image obtained by OSP. All images are rendered in three bands (420, 550, and 700 nm).

Fig. 5. (Color online) Comparison of radiance spectra obtained by the polarizer method, constrained spectral unmixing, and OSP. (a) Radiancespectra from the original image. (b) Radiance spectra from a highlight-removed image obtained by the polarizer method. (c) Radiance spectra froma highlight-removed image obtained by constrained spectral unmixing. (d) Radiance spectra from a highlight-removed image obtained by OSP. Theradiance spectra are from the selected parts of the images shown in Fig. 4.


The OSP operator built from one light source removes the highlight effect of that light source, but the highlight contributed by the other light sources remains. The highlight from each light source is removed by applying the OSP operator for each light source in turn. The OSP method produces an image without highlight, preserves the spatial structure of the image, and preserves the shading of the image.
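A sketch of the projector of Eq. (10) applied to a whole cube is given below (NumPy assumed; names are ours). For multiple light sources, the SPDs can be stacked as columns of S, or the single-source projectors can be applied in sequence as the text describes.

```python
import numpy as np

def osp_projector(S):
    """P = I - S S#, where S is (n,) for one illuminant or (n, L) for L illuminants."""
    S = S.reshape(-1, 1) if S.ndim == 1 else S
    S_pinv = np.linalg.inv(S.T @ S) @ S.T        # S# = (S^T S)^{-1} S^T
    return np.eye(S.shape[0]) - S @ S_pinv

def remove_highlight_osp(cube, S):
    """Project every pixel spectrum of an (H, W, n) cube onto the subspace orthogonal to S."""
    P = osp_projector(S)
    h, w, n = cube.shape
    return (cube.reshape(-1, n) @ P.T).reshape(h, w, n)
```

Consistent with the discussion above, the output spectra can contain negative values at some wavelengths; the projection removes the highlight but does not recover the exact diffuse component.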

B. Probabilistic PCA
A method of removing highlight from a single spectral image by using PPCA, PCA, and the K-nearest-neighbor algorithm was proposed in [7]. Segmentation of the entire colored object was performed in advance, and the interreflection regions were excluded from the analysis. PPCA was used to cluster the data into the highlight cluster and the body cluster. In the method of [7], the K-nearest-neighbor algorithm replaces highlight pixels with body reflection pixels, which leads to highlight removal in the spectral image.

C. Spectral Unmixing
Spectral unmixing is a subpixel analysis method. Linear constrained spectral unmixing is a commonly accepted approach in mixed-pixel classification in spectral imaging. In linear spectral unmixing, the spectrum is defined as a linear combination of pure target signatures, and the fractions of the target materials in each pixel spectrum are estimated. In highlight removal from a spectral image, the signature of the highlight component and the signatures of the diffuse components are taken as the target signatures. The SPD of the light source is the target signature of the highlight component, but the target signature of the diffuse component is not known in advance. The spectra chosen from the diffuse parts using the ATGP method [12] have been used as candidates for the target signatures of the diffuse parts. A single signature from the diffuse parts may not represent all diffuse parts, so the required number of diffuse signatures from the diffuse parts is estimated by eigenthresholding [13]. Figure 6 shows the target signature of the highlight parts (highlight signature) and the target signatures of the diffuse parts (diffuse signatures) for the spectral image shown in Fig. 4. Out of the selected diffuse signatures, the best diffuse signature at each pixel is the signature that has the minimum distance to the highlight-free spectrum obtained by OSP. Here, the distance is calculated as the angular distance by the spectral angle mapper, as shown in Eq. (15). Consider a mixed spectrum R composed of p endmembers e_i, as formulated in Eq. (13):

R(x) = \sum_{i=1}^{p} a_i(x) e_i, \qquad (13)

where a_i(x) is the fractional coverage, called the abundance, of endmember e_i at pixel position (x). Equation (13) is represented in matrix form in Eq. (14):

R = ea, \qquad (14)

where R is a column vector of size n × 1 and n is the number of bands. The size of the endmember matrix e is n × p, and the size of the abundance matrix a is p × 1. Here p is the number of target signatures. All these calculations should be done at each pixel. The value of a for each endmember at each pixel position can be calculated quite easily by using unconstrained linear least squares. However, the unconstrained linear least-squares estimate may give negative abundance values [11]. To prevent negative values and to obtain fractional percentages, two constraints are usually imposed on Eq. (13): the ANC and the ASC, as shown below:

a_i(x) \geq 0 \quad \text{for all } 1 \leq i \leq p, \qquad \sum_{i=1}^{p} a_i(x) = 1.

In the highlight-removal case, the value of p is 2: the diffuse endmember and the highlight endmember. Figure 7 shows the diffuse image calculated by the constrained spectral unmixing method at different wavelengths. The SPD of the light source and the candidate diffuse endmembers shown in Fig. 6 have been used to calculate the output image in Fig. 7.
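For p = 2 the constrained problem has a simple closed form: with the sum-to-one constraint, the measured spectrum lies on the segment between the two endmembers, and clipping the least-squares abundance to [0, 1] also enforces nonnegativity. The sketch below (NumPy assumed; names are ours) uses this shortcut rather than a general fully constrained least-squares solver, which would be needed for larger p.

```python
import numpy as np

def unmix_two_endmembers(R, D_b, S):
    """Constrained abundances for two endmembers: diffuse D_b and highlight S (all (n,) arrays).

    With a_d + a_s = 1, R ≈ a_d*D_b + (1 - a_d)*S, so the least-squares a_d is a
    projection onto the segment between S and D_b; clipping to [0, 1] enforces the ANC.
    """
    diff = D_b - S
    a_d = float(np.dot(diff, R - S) / np.dot(diff, diff))
    a_d = float(np.clip(a_d, 0.0, 1.0))
    return a_d, 1.0 - a_d                      # (diffuse fraction, highlight fraction)

def separate_pixel(R, D_b, S):
    """Return the diffuse and highlight contributions at one pixel."""
    a_d, a_s = unmix_two_endmembers(R, D_b, S)
    return a_d * D_b, a_s * S
```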

1. Selection of Diffuse Spectra
ATGP [12] has been used to find the target signatures of the diffuse component by using the OSP approach. In our method, the initial signature S = [S_1] is either the SPD of the light source or a vector of ones, in the case of radiance and reflectance images, respectively. As a first step, the spectral image R has been projected onto the subspace orthogonal to S = [S_1] to get the projected image R̃. If the pixel at position (x) has the spectrum with length d̃(x) that is maximum among the lengths of all spectra in the projected image R̃, the second target signature S_2 is the spectrum from image R at pixel position (x). The length of each spectrum from image R̃ is calculated as d̃(i) = R̃(i)_1^2 + R̃(i)_2^2 + … + R̃(i)_n^2, where n is the number of wavelengths and i varies from 1 to the number of pixels in the image. The maximum length d̃(x) is the maximum among the d̃(i). To get the third target signature, the image R is projected again onto the subspace orthogonal to S = [S_1 S_2], and the third target signature S_3 is found from the pixel with maximum length among all spectra in the projected image. The required number of target signatures S = [S_1 S_2 S_3 … S_{t+1}] has been found by repeating this process.

Fig. 6. (Color online) Target signatures of the highlight and diffuse components. Target signatures of the diffuse components are collected fromthe image in Fig. 4(a). (a) Power distribution of the light source; it is used as the target signature or endmember of the highlight component of theimage. (b) Candidate target signatures or endmembers of the diffuse components collected by using ATGP. Out of these candidate target signatures,the best target signature is selected pixelwise.


The count t of target spectra or endmembers of the diffuse component is unknown. Hypothesis testing based on the eigenthresholding method [13] can be applied to estimate the number of distinctive spectra in the spectral image, and this number is taken as the number of endmembers [18]. The first spectrum S_1 from the set of target spectra S is the target spectrum of the specular component, and the remaining spectra are the target spectra of the diffuse components in the single-light-source case. The target spectra of the diffuse components have been calculated from the diffuse part of the image as segmented by CEM or other methods. The set of diffuse target spectra is D = [S_2 S_3 … S_{t+1}]. From the set of diffuse target spectra, the best diffuse target spectrum is chosen for each measured spectrum R(x) at each pixel position (x). The best diffuse spectrum D_b(x) for pixel position (x) is the spectrum in D that has the minimum distance to R̃(x), where R̃ is the highlight-free image obtained by OSP. The distance between two spectra has been calculated using the spectral angle mapper [19], as shown in Eq. (15). A smaller angle between two spectra means a smaller distance between them. The spectral angle mapper is suitable for calculating the similarity between two spectra, since the illumination affects the magnitude of the spectra more than their shape. Figure 6(b) shows the set of candidate diffuse endmembers for the image shown in Fig. 4(a). Out of these three endmembers, the best endmember is chosen for each pixel:

\theta = \cos^{-1}\!\left( \frac{\sum_{\lambda=1}^{n} R_1 R_2}{\sqrt{\sum_{\lambda=1}^{n} R_1^2}\,\sqrt{\sum_{\lambda=1}^{n} R_2^2}} \right). \qquad (15)
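The sketch below summarizes the two pieces of this subsection: ATGP-style generation of candidate diffuse endmembers, and per-pixel selection of the best candidate via the spectral angle of Eq. (15). It is not the authors' code; NumPy is assumed, the stopping count t would come from eigenthresholding, and the candidate generation should be run only on the pixels segmented as diffuse, as described above.

```python
import numpy as np

def atgp_candidates(R_diffuse, first_target, t):
    """Generate t diffuse endmember candidates from the diffuse pixels.

    R_diffuse: (n_pixels, n_bands) spectra of the CEM-detected diffuse part.
    first_target: (n_bands,) initial signature (light-source SPD, or all ones for reflectance).
    """
    targets = [np.asarray(first_target, dtype=float)]
    n = R_diffuse.shape[1]
    for _ in range(t):
        U = np.stack(targets, axis=1)                  # current signatures as columns
        P = np.eye(n) - U @ np.linalg.pinv(U)          # projector orthogonal to span(U)
        lengths = np.sum((R_diffuse @ P.T) ** 2, axis=1)
        targets.append(R_diffuse[np.argmax(lengths)])  # pixel of maximum projected length
    return np.stack(targets[1:], axis=1)               # (n_bands, t) candidates, SPD excluded

def spectral_angle(r1, r2):
    """Spectral angle mapper of Eq. (15), in radians."""
    c = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def best_diffuse_index(r_osp, candidates):
    """Index of the candidate (column) with the smallest angle to the OSP-projected spectrum."""
    return int(np.argmin([spectral_angle(r_osp, candidates[:, j])
                          for j in range(candidates.shape[1])]))
```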

6. RESULTS AND DISCUSSION
As the scaling factors of the diffuse and highlight reflections depend on the geometric properties of the surface, the diffuse and highlight components should be separated pixelwise. It is an ill-posed problem to separate the highlight and diffuse reflectance correctly from a single radiance spectrum at each pixel position. The highlight component of the image can be eliminated by projecting the radiance of the image onto the orthogonal subspace of the SPD of the light source [6]. As a result, a highlight-free image has been obtained by OSP, as shown in Figs. 2 and 4. These figures show that OSP produces a highlight-free image and preserves the geometric structure of the object accurately, but a significant color difference between the original image and the highlight-free image has been noticed in the three-band visualization. Figure 5(d) shows that the spectra of the highlight-free image obtained by OSP suffer from negative values in a considerable number of bands. It is difficult to separate the diffuse reflectance and the highlight reflectance from a single pixel, since the scaling factors of the diffuse and highlight reflectances depend on the surface geometry. To simplify the problem, we have used the SPD of the light source as the highlight reflectance for each pixel. The next problem is then to find the diffuse reflectance for each pixel. The set of candidate diffuse reflectances has been gathered from the diffuse part of the image by using ATGP. The number of candidate diffuse reflectances is determined by eigenthresholding. The best diffuse spectrum for each pixel is the spectrum selected from the set of candidate diffuse spectra that has the minimum spectral angle with the spectrum at the same pixel position in the OSP image. The remaining problem is to find the fractions of the diffuse reflectance and the highlight reflectance. We have exploited the constrained spectral unmixing method: to obtain positive spectra in the diffuse image, we have imposed the nonnegativity constraint, and to obtain fractional percentages, we have imposed the sum-to-one constraint. The results of our method depend on how well we segment the diffuse and highlight parts of the image. CEM in an FIR filter has been exploited to segment the diffuse and highlight parts of the image; Fig. 1 shows the segmentation results by CEM. However, another segmentation method, such as PPCA as described in Bochko and Parkkinen [7], can also be used. Figure 3 shows the results obtained by applying constrained spectral unmixing and PPCA [7] to the image. The results show that PPCA does not preserve the boundaries or the color of the image in some highlight-affected parts, and it also suffers from black spots. The results show that spectral unmixing produces a more desirable result than PPCA, as shown in Fig. 3.

Fig. 7. Original spectral image and highlight-removed spectral image obtained by constrained spectral unmixing in different bands: (a)–(d) Imageswith highlight. (e)–(h) Highlight-removed images obtained by constrained spectral unmixing.


Comparison of our result with the result obtained by a polarizing filter, as shown in Fig. 4, shows that the result with a polarizer is better, but our result from spectral unmixing also preserves the color and shape of the image better than PPCA or OSP does. Nevertheless, the diffuse image by spectral unmixing may also suffer from black spots in some parts of the image, as shown in the red apple (at right) in Fig. 4(c). A dark spot may occur in pixels whose spectra are similar to the SPD of the light source, because a spectrum that is similar to that of the light source has a diffuse fraction near zero. Figure 7 shows the diffuse image obtained by spectral unmixing in different bands. Figures 1(a) and 2(a) are images acquired by the LCTF camera. Figure 3(a) is the spectral image estimated from the RGB image taken from [20]. Figure 4(a) is the spectral image measured by the Hyspex VNIR-1600. All the color images are rendered in the 420, 550, and 700 nm bands.

7. CONCLUSIONS
We have proposed a method that detects and removes highlight from a spectral image. The highlight in the spectral image has been successfully detected using a K-means clustering algorithm on the fraction image provided by CEM. A highlight removal method based on positive constrained spectral unmixing has been presented. The method separates the highlight and diffuse components of the spectral image at each pixel position. The proposed highlight removal method looks promising for spectral imaging.

ACKNOWLEDGMENTS
We thank Norsk Elektro Optikk AS, Norway, for providing support for the spectral image measurements. We also thank Mr. Vladimir Bochko for providing his application for highlight removal by probabilistic principal component analysis and some spectral images for testing.

REFERENCES
1. S. K. Nayar, X. S. Fang, and T. Boult, "Separation of reflection components using color and polarization," Int. J. Comput. Vision 21, 163–186 (1997).
2. R. T. Tan and K. Ikeuchi, "Separating reflection components of textured surfaces using a single image," IEEE Trans. Pattern Anal. Machine Intell. 27, 178–193 (2005).
3. P. Koirala, M. Hauta-Kasari, and J. Parkkinen, "Highlight removal from single image," in Proceedings of Advanced Concepts for Intelligent Vision Systems (Springer, 2009), Vol. 5807, pp. 176–187.
4. H.-C. Lee, "Method for computing the scene illuminant chromaticity from specular highlights," J. Opt. Soc. Am. A 3, 1694–1699 (1986).
5. P. Tan, S. Lin, L. Quan, and H.-Y. Shum, "Highlight removal by illumination-constrained inpainting," in Proceedings of the 9th IEEE International Conference on Computer Vision (IEEE Computer Society, 2003), pp. 164–169.
6. Z. Fu, R. T. Tan, and T. Caelli, "Specular free spectral imaging using orthogonal subspace projection," in Proceedings of International Conference on Pattern Recognition (IEEE Computer Society, 2006), pp. 812–815.
7. V. Bochko and J. Parkkinen, "Highlight analysis using a mixture model of probabilistic PCA," in Proceedings of the 4th WSEAS International Conference on Signal Processing, Robotics and Automation (World Scientific and Engineering Academy and Society, 2005), paper 15.
8. S. A. Shafer, "Using color to separate reflection components," Color Res. Appl. 10, 210–218 (1985).
9. C.-I. Chang, "Orthogonal subspace projection (OSP) revisited: a comprehensive study and analysis," IEEE Trans. Geosci. Remote Sens. 43, 502–518 (2005).
10. J. C. Harsanyi, "Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach," IEEE Trans. Geosci. Remote Sens. 32, 779–785 (1994).
11. N. Keshava, "A survey of spectral unmixing algorithms," Lincoln Lab. J. 14, 55–78 (2003).
12. H. Ren and C.-I. Chang, "Automatic spectral target recognition in hyperspectral imagery," IEEE Trans. Aerosp. Electron. Syst. 39, 1232–1248 (2003).
13. C.-I. Chang and Q. Du, "Estimation of number of spectrally distinct signal sources in hyperspectral imagery," IEEE Trans. Geosci. Remote Sens. 42, 608–619 (2004).
14. http://www.spectralcameras.com/files/downloads/VariSpec_Technote.pdf (last viewed 24.09.2010).
15. http://www.hyspex.no/products/hyspex/vnir1600.php (last viewed 03.07.2011).
16. P. Stigel, K. Miyata, and M. Hauta-Kasari, "Wiener estimation method in estimating of spectral reflectance from RGB images," Pattern Recogn. Image Anal. 17, 233–242 (2007).
17. O. L. Frost III, "An algorithm for linearly constrained adaptive array processing," Proc. IEEE 60, 926–935 (1972).
18. Q. Du, I. Kopriva, and H. Szu, "Investigation on constrained matrix factorization for hyperspectral image analysis," in IEEE International Geoscience and Remote Sensing Symposium Proceedings (IEEE, 2005), pp. 4304–4306.
19. R. H. Yuhas, A. F. H. Goetz, and J. W. Boardman, "Discrimination among semiarid landscape endmembers using the spectral angle mapper (SAM) algorithm," in Summaries of the Third Annual JPL Airborne Geoscience Workshop, Publication 92-14 (Jet Propulsion Laboratory, 1992), Vol. 1, pp. 147–149.
20. http://people.cs.uu.nl/robby/textureSeparation/results.html (last viewed 02.09.2011).
