



Simplifying Irradiance Independent Color Calibration

Pouya BASTANI*, Brian FUNT†
School of Computing Science, Simon Fraser University
8888 University Drive, Burnaby, BC, Canada V5A 1S6
*[email protected]  †[email protected]

ABSTRACT

An important component of camera calibration is to derive a mapping of a camera’s output RGB to a device-independent color space such as CIE XYZ or sRGB6. Commonly, the calibration process is performed by photographing a color chart in a scene under controlled lighting and finding a linear transformation M that maps the chart’s colors from linear camera RGB to XYZ. When the XYZ values corresponding to the color chart’s patches are measured under a reference illumination, it is often assumed that the illumination across the chart is uniform when it is photographed. This simplifying assumption, however, is often violated even in such relatively controlled environments as a light booth, and it can lead to inaccuracies in the calibration. The problem of color calibration under non-uniform lighting was investigated by Funt and Bastani2,3. Their method, however, uses a numerical optimizer, which can be complex to implement on some devices and has a relatively high computational cost. Here, we present an irradiance-independent camera color calibration scheme based on least-squares regression on the unit sphere that can be implemented easily, computed quickly, and performs comparably to the previously suggested technique.

1. INTRODUCTION

An important component of camera calibration is to derive a mapping of a camera’s output RGB to a device-independent color space such as CIE XYZ or sRGB. In addition, in certain colorimetry applications where the camera is used for accurate measurement of colors in the scene, it is often desirable to reproduce the color of surfaces in the scene as they would be observed under a reference illuminant such as D65. In such cases, to obtain an accurate mapping of colors, a color chart is often placed in the scene being photographed. The goal of camera calibration is then to obtain a linear transformation M that accurately maps the camera RGB values corresponding to the color chart’s patches as recorded in the scene to the CIE XYZ coordinates of the patches as measured under the reference illuminant. Applying the transformation M to the whole image then maps all image colors to a device-independent space. Assuming for the moment that the camera response is a linear function of scene luminance, the main step in the calibration is to determine a 3×3 linear transformation matrix M mapping data from (linear) uncalibrated camera RGB to XYZ. Determining the transformation M is often done by performing a least-squares regression on the difference between the camera's RGB digital counts obtained for each color chart patch and their corresponding true XYZ values.

One difficulty with this method is that in many applications it is hard to guarantee that the lighting is completely uniform. For instance, in the problem of camera characterization for soil color measurement using a mobile phone camera investigated by Gómez-Robledo et al.4, many factors such as the position of the light and shadows from surrounding objects can cause the illuminant irradiance to vary across the color chart. This variation in the irradiance causes the camera's RGB digital counts to vary.
Since the amount of this variation at each point on the color chart is often unknown, calibrating the camera based on its RAW digital counts results in an incorrect transformation M, and hence, inaccurate mapping of scene colors. This happens because the camera RGB associated with each patch of the color chart varies proportionally with the illuminant irradiance incident on the patch. If the irradiance is not uniform across the scene, the camera RGB for different patches will be scaled differently as a function of the scene irradiance gradient. Thus, calibration methods

Preprint version. Final version available: Proc. SPIE 9015, Color Imaging XIX: Displaying, Processing, Hardcopy, and Applications, 90150N (January 8, 2014); doi:10.1117/12.2042507; http://dx.doi.org/10.1117/12.2042507

that do not account for the irradiance variation, such as least-squares regression that minimizes the difference between transformed RGBs and XYZs, produce color correction matrices that depend on the irradiance gradient in the scene, an unwanted result. One way to account for illumination variation is to measure the irradiance at each patch, but this extra step can be quite time consuming. Funt and Bastani2,3 noted that in a single-illuminant scene, even though the illuminant irradiance may vary across the color chart, the relative spectral power distribution of the illuminant remains constant. They suggested a method based on minimizing the angular difference between RGBs and XYZs to derive a calibration matrix that is independent of the irradiance gradient. This technique, however, relies on a non-linear optimization solver, Nelder-Mead5, that, in addition to being computationally expensive, can be complex to implement on some devices. It would be preferable to have a color correction algorithm that, as well as being irradiance-independent, can be implemented easily and computed efficiently, thus making it suitable for applications where the color correction needs to be performed repeatedly on a given device with limited computational resources.

Here we propose a method based on least-squares regression on the unit sphere that discounts the effect of illumination variation and is efficient to compute. Unlike the traditional least-squares method of calibration, which takes into account both the direction and magnitude of RGB vectors, and thus is affected by any shading arising from irradiance variation, the proposed method seeks the 3×3 linear transform that best aligns the vectors from RGB space with those in XYZ space, without regard to their magnitudes. As a result, the effect of irradiance is eliminated from the minimization.

2. BACKGROUND

The camera color calibration process involves imaging a target with a camera. Let A represent the 3×N matrix of camera responses for N patches. Similarly, let B represent the 3×N matrix of the corresponding tristimulus values computed using the measured spectral power distribution of the illuminant, the spectral reflectance function of each patch, and the CIE color matching functions. Conventional least-squares regression minimizes the energy functional E(M) = ‖MA − B‖². It is well known that the best mapping in this case is given by the Moore-Penrose pseudo-inverse

M = BAᵀ(AAᵀ)⁻¹. As is clear from the above equation, both the direction and magnitude of the RGB vectors represented in A affect the regression results. Funt and Bastani2,3 suggested a calibration method based on minimizing the angular difference between {aᵢ} and {bᵢ}, i = 1, …, N, the column vectors of matrices A and B, respectively. Their proposed technique, Angle Minimization, finds a 3×3 linear transform M mapping RGB to XYZ by minimizing
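As a concrete illustration, the closed-form regression above can be sketched in a few lines of NumPy. The function and variable names here are ours, not from the paper's code; the toy data at the bottom only checks that the pseudo-inverse recovers a known mapping.

```python
import numpy as np

def calibrate_lsq(A, B):
    """Return the 3x3 matrix M minimizing ||MA - B||^2 (Frobenius).

    A: 3xN linear camera RGB for N patches; B: 3xN corresponding XYZ.
    Uses the Moore-Penrose solution M = B A^T (A A^T)^(-1).
    """
    return B @ A.T @ np.linalg.inv(A @ A.T)

# Toy check: if B was generated by a known M, the regression recovers it.
rng = np.random.default_rng(0)
M_true = rng.random((3, 3))
A = rng.random((3, 24)) + 0.1   # 24 synthetic patches, strictly positive
B = M_true @ A
M = calibrate_lsq(A, B)
```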

E(M) = Σᵢ₌₁ᴺ cos⁻¹( (M aᵢ · bᵢ) / (|M aᵢ| |bᵢ|) ).

By taking into account only the angles between pairs of vectors, E(M) becomes independent of vector magnitudes, and thus independent of the irradiance gradient. Since the above functional is nonlinear, Funt and Bastani employed the Nelder-Mead nonlinear multivariable optimization scheme5 to solve for M. This minimization, however, is computationally expensive, and while it may be acceptable for calibrating the camera once, it is ill-suited to repeated application, such as measuring the color of soil using a mobile device camera. We next propose another irradiance-independent calibration technique that can be computed efficiently, with comparable performance when measured in terms of the CIEDE2000 color difference between mapped and true colors.
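For reference, the angular objective itself is simple to evaluate; the sketch below (our naming, not the paper's code) computes E(M) and leaves the Nelder-Mead minimization to any off-the-shelf optimizer. Note that rescaling the columns of A, as an irradiance gradient would, leaves the objective unchanged.

```python
import numpy as np

def angular_error(M, A, B):
    """Sum over patches of the angle (radians) between M a_i and b_i.

    A, B: 3xN matrices of camera RGB and XYZ column vectors.
    """
    MA = M @ A
    cosines = np.sum(MA * B, axis=0) / (
        np.linalg.norm(MA, axis=0) * np.linalg.norm(B, axis=0))
    # Clip guards against arccos domain errors from rounding.
    return np.sum(np.arccos(np.clip(cosines, -1.0, 1.0)))
```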

3. NORMALIZED LEAST-SQUARES

Minimizing angular differences between vector pairs is one way of eliminating the effect of the irradiance gradient. Another method is to map each pair of RGB and XYZ vectors onto the unit sphere and then perform least-squares


regression. In other words, we first normalize both sets of vectors {aᵢ} and {bᵢ} to obtain {pᵢ} and {qᵢ}, respectively. We then minimize the squared difference between each pair, leading to the following functional

E(M) = Σᵢ₌₁ᴺ ‖M pᵢ − qᵢ‖².

The matrix M that minimizes E(M) is given by the Moore-Penrose pseudo-inverse as before:

M = QPᵀ(PPᵀ)⁻¹, where P and Q are the matrices with normalized column vectors pᵢ and qᵢ. The calibration is thus a least-squares minimization based on the normalized color vectors. We shall refer to this technique as Normalized Least-Squares regression. Since the above minimization discards vector magnitudes, the resulting color correction matrix does not necessarily have the correct overall scaling. To correct for the overall magnitude of M, Funt and Bastani3 suggested rescaling the matrix by dividing it by the sum of the entries in its second row. In this way, the maximum Y that can be obtained from the RGB → XYZ mapping is 1. Note that this scale factor, just as all entries of M, is independent of any irradiance gradient across the calibration target. The color correction matrix obtained using the above normalized least-squares minimization, therefore, is independent of the scene irradiance gradient and is scaled to ensure that the maximum Y arising from any possible surface under the reference illuminant is 1.

The above regression scheme can also be applied to higher-order regression methods. Specifically, the Root-Polynomial regression proposed by Finlayson et al.1 finds a transformation M that maps, for instance, [R, G, B, √(RG), √(GB), √(RB)] to CIE XYZ. This method eliminates the dependence on camera exposure that conventional polynomial regression schemes suffer from, since scaling of each channel’s response by a constant k in the vector [R, G, B, R², G², B², RG, GB, RB] results in unequal scaling of the different terms: the linear and quadratic terms are multiplied by k and k², respectively. While Root-Polynomial color correction is successful at eliminating the effect of camera exposure and the overall illuminant power, the method is still susceptible to gradients of scene irradiance. By simply normalizing the RGB and XYZ vectors prior to performing root-polynomial regression, we obtain a scheme that is irradiance-independent as well as efficient to compute.
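A minimal sketch of the Normalized Least-Squares calibration as described above, including the second-row rescaling (the function name is ours):

```python
import numpy as np

def calibrate_normalized_lsq(A, B):
    """Normalized Least-Squares calibration.

    A: 3xN camera RGB, B: 3xN XYZ. Columns are projected onto the unit
    sphere before regression, so any per-patch irradiance scaling of A
    drops out; the result is rescaled so its second (Y) row sums to 1.
    """
    P = A / np.linalg.norm(A, axis=0)   # unit RGB directions
    Q = B / np.linalg.norm(B, axis=0)   # unit XYZ directions
    M = Q @ P.T @ np.linalg.inv(P @ P.T)
    return M / M[1].sum()               # rescaling suggested in the paper
```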
We shall refer to this calibration scheme as Normalized Root-Polynomial regression. We perform several experiments to measure the dependence of the aforementioned calibration methods on irradiance variation. We next discuss the details and results of these experiments.
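The degree-2 root-polynomial feature expansion underlying the (Normalized) Root-Polynomial scheme can be sketched as follows; the helper name is ours. The point of the root terms is that every feature scales linearly with an exposure factor k, unlike the plain polynomial terms RG or R².

```python
import numpy as np

def root_poly_features(A):
    """Map 3xN linear RGB to the 6xN degree-2 root-polynomial features
    [R, G, B, sqrt(RG), sqrt(GB), sqrt(RB)] of Finlayson et al.

    Each feature is homogeneous of degree 1 in the input, so a global
    exposure change k*RGB scales the whole feature vector by k.
    """
    R, G, B = A
    return np.vstack([R, G, B,
                      np.sqrt(R * G), np.sqrt(G * B), np.sqrt(R * B)])
```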

4. EXPERIMENTS

To measure the effectiveness of the proposed calibration technique in handling irradiance gradient across the calibration chart, we perform two sets of experiments:

1. Calibration using real camera-captured images
2. Calibration using synthesized images

In the first experiment, we take 10 RAW linear-RGB images of the X-Rite Digital ColorChecker SG in the Macbeth Judge II light booth under D65 illumination using a Grasshopper-20S4C camera made by Point Grey Research Inc. (the position and orientation of the camera are fixed during the multiple captures). These 10 images are then averaged to reduce the effect of noise. The camera’s RGB coordinates for each color patch were obtained by averaging the RGB values across each patch, further reducing noise. We exclude the border achromatic patches of the color chart, leaving the 96 interior patches. Figure 1 shows the resulting set of colors corresponding to these interior patches. We shall refer to this image as the original unadjusted image.

Even though the light booth is a controlled lighting environment, there is significant variation in the irradiance. To measure this gradient, we remove the color chart and photograph the grey back wall of the light booth against which the color chart had been placed. By repeating the same procedure as discussed above, we obtain the RGB coordinates of the positions on the back wall corresponding to the color chart positions. We use R+G+B as a measure of the irradiance at each patch. Figure 1 (right) shows the resulting image due to this irradiance variation,


gamma-adjusted to reveal the gradient clearly. As expected, due to the positioning of the light at the top of the booth, the patches at the top and center of the color chart are brighter than the rest of the patches (the ratio of the intensity of the brightest patch to the dimmest patch was 1.45). This intensity variation must be accounted for in order to obtain an accurate calibration. Specifically, we rescale each of the RGBs inversely by the measured background intensity corresponding to each patch. This results in a new image of the color chart, which we refer to as the adjusted ground-truth image, since the intensity variation is accounted for in this image. To obtain the mapping M: RGB → XYZ, we also need the XYZ coordinates of each of the interior patches of the color chart. These are computed using the previously measured reflectance spectra of the patches (from an earlier experiment) and the illuminant spectral power distribution measured using the PR-650 SpectraScan spectroradiometer. Of the 96 interior patches, we exclude 10 for which X+Y+Z < 0.25. This is done so that only colors with a high signal-to-noise ratio are included in the calibration process.
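The irradiance adjustment described above amounts to a per-patch division by the relative background intensity. A sketch, assuming patch and wall RGBs are stored as 3×N arrays (the names are ours):

```python
import numpy as np

def adjust_for_irradiance(patch_rgb, wall_rgb):
    """Rescale patch RGBs inversely by the background (wall) intensity.

    patch_rgb, wall_rgb: 3xN arrays; wall_rgb holds the RGBs measured
    off the grey back wall at each patch's position. R+G+B serves as
    the irradiance measure, normalized to its maximum.
    """
    intensity = wall_rgb.sum(axis=0)          # R+G+B per patch position
    intensity = intensity / intensity.max()   # relative irradiance in (0, 1]
    return patch_rgb / intensity
```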

We perform experiments with the following calibration methods to test their effectiveness in accounting for irradiance variation:

1. Least-Squares
2. Root Polynomial
3. Angle Minimization
4. Normalized Least-Squares
5. Normalized Root-Polynomial

Having both a ground-truth adjusted image and an original unadjusted image, we can compute the color correction matrix for both images using each of the above calibration methods. Let us define M and M̃ as the color correction matrices corresponding, respectively, to the adjusted ground-truth image and the original unadjusted image.

The relative difference between the two is a measure of the degree to which the calibration process is affected by the irradiance gradient. We measure this difference using the Frobenius matrix norm (the square root of the sum of the squares of the matrix elements):

relative difference = ‖M − M̃‖_F / ‖M‖_F.
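This relative difference is straightforward to compute; a small sketch (our naming), noting that NumPy's matrix norm defaults to the Frobenius norm:

```python
import numpy as np

def relative_difference(M_adj, M_unadj):
    """Frobenius-norm relative difference between two calibration matrices."""
    # np.linalg.norm on a 2-D array computes the Frobenius norm by default.
    return np.linalg.norm(M_adj - M_unadj) / np.linalg.norm(M_adj)
```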

The first column in Table 1 shows this relative difference for the five calibration methods mentioned above. As can be seen, for the first two methods, Least-Squares and Root-Polynomial, M and M̃ can vary significantly even for the small amount of irradiance gradient present in the light booth. On the other hand, the other calibration schemes are completely unaffected by this variation.

Figure 1. Left: Camera RGB values of the interior patches of the X-Rite Digital ColorChecker SG. Right: The shading arising from irradiance variation on the color checker as measured by removing the color checker and photographing the back wall of the light booth.


The error in the calibration process caused by the irradiance variation translates into inaccurate mapping of colors when the image is taken under a different lighting condition. To measure this accuracy, we applied the color correction matrix M̃ to the ground-truth image and measured the mean, median, and maximum CIEDE2000 color difference between the mapped XYZ and the measured XYZ values of the patches. As the results of Table 1 show, the irradiance-independent methods perform an accurate mapping of colors from camera RGB to CIE XYZ even though they discard all intensity information.

Method                       ‖M − M̃‖_F/‖M‖_F   mean ΔE₀₀   median ΔE₀₀   max ΔE₀₀
Least-Squares                0.1400             3.46        2.70          16.77
Root Polynomial              0.4357             2.47        2.05          8.78
Angle Minimization           0.0000             2.78        2.31          9.44
Normalized Least-Squares     0.0000             3.31        2.93          9.44
Normalized Root-Polynomial   0.0000             2.80        2.61          7.26

Table 1: Experimental results for color correction using real images in the presence of a small irradiance gradient. The first column denotes the relative error in the computation of the color correction matrix as measured by the Frobenius norm. The other columns summarize statistics for the CIEDE2000 color differences between mapped and measured XYZ data when the camera is calibrated using the original unadjusted image and tested using the ground-truth adjusted image.

As mentioned earlier, in addition to performing experiments with real captured images, we performed tests using synthesized images of the color chart, generated from the camera’s spectral sensitivity functions and the measured illuminant spectrum. The resulting image, shown in Figure 2 (left), models a scene free of irradiance variation, and as such, we shall refer to it as the ground-truth image.

We also model a scene with a single illuminant placed closer to the upper right corner of the color chart, resulting in the gradient map shown in Figure 2 (right); the ratio of the largest intensity value (top right corner) to the lowest one (bottom left corner) is 2.5. To obtain an image affected by this intensity variation, we scale each of the RGBs in the left image by the corresponding scalar value in the right image. This image is analogous to the original image in the real-image experiments. Using these synthesized images, we repeat the same set of experiments as before. Table 2 shows the results of the calibration in all five cases. They show that under more significant variations in irradiance, the conventional calibration methods fall short of producing an accurate mapping; the mean and median ΔE₀₀ indicate that with more pronounced shading caused by the illuminant positioning, the irradiance-independent methods can outperform the rest. In addition, the proposed scheme does not require the computational effort that Angle Minimization demands, since the latter was formulated as a nonlinear optimization problem requiring an iterative procedure for finding the optimum matrix M, whereas the newly proposed technique is posed as a linear least-squares problem with a closed-form solution.
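The invariance of the normalized scheme to such a gradient can be checked directly; the self-contained sketch below uses random values rather than the paper's measured chart data, and re-implements the normalized regression inline for clarity.

```python
import numpy as np

# Synthetic stand-ins for the experiment above (random, not measured data).
rng = np.random.default_rng(1)
A = rng.random((3, 96)) + 0.1                 # 96 synthetic patch RGBs
B = (np.eye(3) + 0.3 * rng.random((3, 3))) @ A  # synthetic XYZ targets
g = 1.0 + 1.5 * rng.random(96)                # per-patch irradiance scaling

def normalized_lsq(A, B):
    """Normalized least-squares regression on unit-sphere vectors."""
    P = A / np.linalg.norm(A, axis=0)
    Q = B / np.linalg.norm(B, axis=0)
    M = Q @ P.T @ np.linalg.inv(P @ P.T)
    return M / M[1].sum()

M_uniform = normalized_lsq(A, B)       # calibration from the uniform scene
M_shaded = normalized_lsq(A * g, B)    # calibration from the shaded capture
```

Because normalization maps each shaded column A·gᵢ to the same unit vector as the unshaded column, the two recovered matrices coincide, mirroring the zeros in the first column of Table 2.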

Figure 2. Left: Synthesized Camera RGB values of the interior patches of the X-Rite Digital ColorChecker SG, modelling uniform irradiance. Right: Synthesized image of the irradiance variation on the color checker used for producing the image of the color checker under non-uniform lighting.


Method                       ‖M − M̃‖_F/‖M‖_F   mean ΔE₀₀   median ΔE₀₀   max ΔE₀₀
Least-Squares                0.2611             4.93        5.06          9.87
Root Polynomial              0.7813             5.11        5.44          7.95
Angle Minimization           0.0000             3.26        3.09          8.07
Normalized Least-Squares     0.0000             3.23        3.05          7.36
Normalized Root-Polynomial   0.0000             2.95        2.54          6.72

Table 2: Experimental results for color correction using the five methods with synthesized images in the presence of a significant irradiance gradient. The first column denotes the relative difference in the computed color correction matrices as measured by the Frobenius norm. The other columns summarize statistics for the CIEDE2000 color differences between mapped and measured XYZ data when the camera is calibrated using the simulated shaded image and tested using the ground-truth image.

5. CONCLUSION

In summary, the proposed calibration scheme is an alternative to the angle minimization method suggested by Funt and Bastani2,3. The results show that the performance of this scheme, as measured by the CIEDE2000 color difference between the mapped and measured XYZ values, is comparable to that obtained by the angle minimization method. The advantage offered by the proposed Normalized Least-Squares technique is that it avoids the need to solve a nonlinear optimization problem by posing the problem as that of minimizing squared differences between pairs of normalized color vectors on the unit sphere. Furthermore, as we demonstrated, this new technique can easily be combined with higher-order regressions, such as root-polynomial color correction.

REFERENCES

[1] Finlayson, G., Mackiewicz, M., and Hurlbert, A., “Root Polynomial Color Correction,” Proc. IS&T/SID Nineteenth Color Imaging Conference, 115–119 (2011).
[2] Funt, B. and Bastani, P., “Intensity Independent RGB-to-XYZ Colour Camera Calibration,” Proc. AIC, 128–131 (2012).
[3] Funt, B. and Bastani, P., “Irradiance-Independent Camera Color Calibration,” (2013, in press).
[4] Gómez-Robledo, L., López-Ruiz, N., Melgosa, M., Palma, A. J., and Sánchez-Marañón, M., “Mobile Phone Camera Characterization for Soil Colour Measurements Under Controlled Illumination Conditions,” Proc. AIC (2013).
[5] Nelder, J. A. and Mead, R., “A Simplex Method for Function Minimization,” Comp. Jour. 7(4), 308–313 (1965).
[6] Wyszecki, G. and Stiles, W. S., [Color Science, Concepts and Methods: Quantitative Data and Formulas], John Wiley and Sons, New York (1967).