Adaptive Light Field Sampling and Sensor Fusion for Smart Lighting Control

Fangxu Dong
Smart Lighting Engineering Research Center and Dept. of Electrical, Computer and Systems Engineering
Rensselaer Polytechnic Institute, Troy, NY 12180
[email protected]

Vadiraj Hombal
Dept. of Civil and Environmental Engineering
Vanderbilt University, Nashville, TN 37201
[email protected]

Arthur C. Sanderson
Smart Lighting Engineering Research Center and Dept. of Electrical, Computer and Systems Engineering
Rensselaer Polytechnic Institute, Troy, NY 12180
[email protected]

Abstract—For the development of flexible and adaptive control in smart lighting, it is important to have a systematic methodology for monitoring the generated light field and for fusion of the sensor information. This paper introduces a systematic approach to light field sampling using a distributed sensor network. This approach is based on a multiscale representation of the light field and adaptive selection of sample locations to maximize the information obtained from the field. Experimental results have shown that a systematic selection of sensor locations can significantly reduce the error in the representation of the light field, with a corresponding improvement in lighting control.

I. INTRODUCTION

Recent lighting design for both commercial and residential buildings has begun to shift from electric lighting to electronic smart lighting based on solid-state light sources such as light emitting diodes (LEDs). The goal of smart lighting is to provide high quality light that meets occupant needs while balancing energy utilization. The criteria for lighting quality today are no longer just "visibility", which aims to provide the necessary illuminance in a workspace, but also include other aspects of occupant satisfaction such as color preference, human productivity and health. These broad considerations of energy efficiency and lighting quality impose new constraints on lighting design. Previous research [1], [11], [12], [15] has revealed various factors affecting lighting conditions and indicated that an adaptive approach to lighting control can be very effective in improving lighting quality. Since adaptive control requires real-time feedback of the actual lighting condition, obtaining an efficient and accurate estimate of lighting fields through real-time monitoring is critical in the implementation of smart lighting.

In recent years, real-time observation and monitoring of the physical environment has been facilitated by emerging sensor network technology [3], [8], [9], [10], [13], [14]. A spatially distributed network of compact and low cost sensor nodes can extract and estimate important features of the underlying phenomena from a set of point measurements in time and space. An example is the deployment of Autonomous Underwater Vehicles (AUVs) with embedded sensors to collaboratively observe and monitor oceanographic processes in order to understand the ocean's health [4], [8], [16]. The understanding

of environmental phenomena such as plankton layer structures [5] and harmful algal blooms [2] will be enhanced by three-dimensional monitoring and mapping. The research presented in this paper is based on the use of sensor network technology in the development of systematic methodologies for real-time light field monitoring in smart lighting applications. Light field sensing is based on multi-scale modeling of the light field, and is implemented adaptively by sequentially refining the field model. A sensor fusion algorithm is introduced to systematically select sensors and incorporate the sensor information into a functional representation of the light field. Effective sampling and fusion of information are required to meet the performance goals of the lighting field while maintaining the energy efficiency of the system.

This paper is organized as follows. In Section II, a color-science-based specification of the light field is presented, and real-time light field monitoring is formulated as a sampling problem based on a distributed sensor network. In Section III, an adaptive sampling algorithm is presented for the systematic selection of sensor locations for light field monitoring. In Section IV, the synthesis of adaptive light field sampling and lighting control is discussed, and a prototype lighting system with synthetic light field estimation is presented. In Section V, experimental results from the implementation of the prototype lighting system on a lighting testbed are presented and analyzed.

II. REAL-TIME LIGHT FIELD MONITORING

A. Light Field Specification

The light field is defined as a function describing the radiance traveling in any direction through any point in space. In 3D space, the light field is a 7D function, parameterized by 3D position, 2D direction, time and wavelength. In smart lighting design, a primary goal is to develop a full-spectrum lighting system that produces light with desired spectral properties (color) according to occupant needs. To meet this requirement, light field monitoring should take into consideration both the physical light spectrum and the human perception of the resulting colors. Therefore, it is important to appropriately


specify the light field based on both color science and a human light response model.

Color is a perceptual property arising from a spectral distribution of light received by the human retina and processed by the brain. The human eye has three types of photoreceptors, with different sensitivity peaks, to perceive a color. Any color can be represented as a mixture of three primary colors in an additive color model. For a given color sample, the magnitudes of the three matching primaries are called the tristimulus values. A widely used color space is the RGB color space, based on the RGB color model, which is an additive model with the primary colors red, green and blue. Thus, any color in the RGB color space can be represented by the three additive primaries R, G and B. Given a light spectral distribution $f(\lambda)$, with proper color matching functions $r(\lambda)$, $g(\lambda)$ and $b(\lambda)$, the corresponding tristimulus values can be derived by projection of the color onto the RGB vector space:

$$f_R = \int_0^\infty f(\lambda)\, r(\lambda)\, d\lambda, \qquad
f_G = \int_0^\infty f(\lambda)\, g(\lambda)\, d\lambda, \qquad
f_B = \int_0^\infty f(\lambda)\, b(\lambda)\, d\lambda. \tag{1}$$

The matching functions in the above projection are not always positive over all wavelengths; therefore, there may be negative values, which are not physically realizable. To avoid this problem, the XYZ color space was created by the International Commission on Illumination (CIE) based on measurements of human color perception. In the XYZ color space, each color is expressed by its tristimulus values, denoted $X$, $Y$ and $Z$. Given the light spectral distribution $f(\lambda)$, the corresponding tristimulus values are calculated as:

$$f_X = \int_0^\infty f(\lambda)\, x(\lambda)\, d\lambda, \qquad
f_Y = \int_0^\infty f(\lambda)\, y(\lambda)\, d\lambda, \qquad
f_Z = \int_0^\infty f(\lambda)\, z(\lambda)\, d\lambda, \tag{2}$$

where $\lambda$ is the wavelength, and $x(\lambda)$, $y(\lambda)$ and $z(\lambda)$ are the three CIE 1931 color matching functions, defined as the spectral response curves of the detectors in the human eye.
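As an illustration, the projections in Eqs. (1)-(2) reduce to numerical quadrature once the spectrum and the matching functions are sampled on a common wavelength grid. The following minimal Python sketch (the array names are illustrative, and tabulated CIE 1931 matching functions are assumed to be available from standard data) computes the XYZ tristimulus values:

```python
import numpy as np

def tristimulus(wavelengths, spectrum, cmf_x, cmf_y, cmf_z):
    """Approximate Eq. (2) by trapezoidal quadrature over a sampled spectrum.

    wavelengths : 1D array of sample wavelengths (nm)
    spectrum    : f(lambda) sampled at `wavelengths`
    cmf_x/y/z   : CIE 1931 matching functions x, y, z sampled at `wavelengths`
    """
    fX = np.trapz(spectrum * cmf_x, wavelengths)
    fY = np.trapz(spectrum * cmf_y, wavelengths)
    fZ = np.trapz(spectrum * cmf_z, wavelengths)
    return np.array([fX, fY, fZ])
```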

By the linearity of the projection, there exists a linear transformation between the RGB projection and the XYZ projection of the light spectrum $f(\lambda)$:

$$f_{XYZ} = T f_{RGB}, \qquad
f_{XYZ} = \begin{bmatrix} f_X \\ f_Y \\ f_Z \end{bmatrix}, \qquad
f_{RGB} = \begin{bmatrix} f_R \\ f_G \\ f_B \end{bmatrix}. \tag{3}$$

The XYZ color space is a device-independent color space. It mathematically represents the perceived light color and provides a basis for sensor fusion of spectral information as perceived by a human observer. However, in this color space, too much of the space is assigned to green, and most of the color variation is allocated to a small area, so it is not perceptually uniform. That is, a change of the same amount in a color value may not produce the same perceived visual difference. For perceptual uniformity, the CIE L*a*b* (CIELAB) color space (Fig. 1) was created to approximate human vision, where the $L$ component denotes lightness and the $a$ and $b$ components represent chromaticity. The CIELAB color space is three-dimensional and is obtained by a nonlinear transformation from the CIE XYZ color space:

$$\begin{aligned}
f_L &= 116\, g(f_Y/Y_n) - 16,\\
f_a &= 500\,[\,g(f_X/X_n) - g(f_Y/Y_n)\,],\\
f_b &= 200\,[\,g(f_Y/Y_n) - g(f_Z/Z_n)\,],
\end{aligned} \tag{4}$$

where $f_L$, $f_a$, $f_b$ are the three coordinates of $f(\lambda)$ in CIELAB space, $X_n$, $Y_n$, $Z_n$ are the XYZ tristimulus values of the reference white point, and the function $g(t)$ is defined as

$$g(t) = \begin{cases}
t^{1/3} & \text{if } t > \left(\tfrac{6}{29}\right)^3,\\[4pt]
\tfrac{1}{3}\left(\tfrac{29}{6}\right)^2 t + \tfrac{4}{29} & \text{otherwise.}
\end{cases} \tag{5}$$
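Equations (4)-(5) translate directly into code. Below is a minimal sketch of the XYZ-to-CIELAB conversion; the D65 reference white point shown as a default is an illustrative assumption, since the paper does not specify the white point used:

```python
import numpy as np

def xyz_to_lab(f_xyz, white=(95.047, 100.0, 108.883)):
    """Convert XYZ tristimulus values to CIELAB via Eqs. (4)-(5)."""
    def g(t):
        # Piecewise nonlinearity of Eq. (5).
        return np.where(t > (6 / 29) ** 3,
                        np.cbrt(t),
                        t / (3 * (6 / 29) ** 2) + 4 / 29)

    fX, fY, fZ = f_xyz
    Xn, Yn, Zn = white  # reference white point (assumed D65 here)
    L = 116 * g(fY / Yn) - 16
    a = 500 * (g(fX / Xn) - g(fY / Yn))
    b = 200 * (g(fY / Yn) - g(fZ / Zn))
    return np.array([L, a, b])
```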

Fig. 1. 3D view of CIELAB color space

As shown in Fig. 1, the CIELAB color space is organized in a cube form, and colors within the space are separated on a perceptually uniform color scale. The $L$, $a$ and $b$ axes are mutually orthogonal. The $L$ axis runs from 0 to 100, while the $a$ and $b$ axes have no numerical limits. The perceptual difference between any two colors can be approximated by the Euclidean distance between their corresponding color points in this space. Because of this perceptual uniformity, the CIELAB color space is chosen for light field specification in smart lighting. In this case, a light field with spectrum $f(\lambda)$ can be represented by a three-dimensional vector in CIELAB space, $f_{Lab} = [f_L\ f_a\ f_b]^T$.

B. Problem Formulation

This paper describes an approach to sampling and estimation of the generated light field using a distributed sensor network. The goal is to sample and recover the features of any generated light field for feedback lighting control. To achieve this goal, one needs to consider two general problems in smart lighting design: where to optimally deploy the sensors (sampling) and how to recover the color distribution from the sensor samples (fusion).

While the smart lighting system aims to produce the right light, a desired light field is usually designed to satisfy the requirements of a specific lighting application. This desired light field should be the target of lighting control, and thus the generated light fields will be a set of fields similar to the target one. In this case, it is reasonable to deploy sensors based on the target field to maximize the information obtained from the deployed sensor array.

Suppose there exists a target light field $f^{tar}_{Lab} : \Omega \to \mathbb{R}^3$, with $f^{tar}_{Lab} = [f^{tar}_L\ f^{tar}_a\ f^{tar}_b]^T$. For analysis, the space domain $\Omega$ is discretized at a prescribed spatial resolution $\Delta x$. For the target field $f^{tar}_{Lab}$, a discrete estimate $\hat{f}^{tar}_{Lab} := [\hat{L}^{tar}\ \hat{a}^{tar}\ \hat{b}^{tar}]^T$ with spatial resolution $\Delta x$ can be generated from a set of point samples $f^{tar}_{Lab}(X_s)$, $X_s = \{x_i\}_{i=1}^n$, $x_i \in \Omega$. The estimation quality depends on the approximation model $\hat{f}$ and the choice of samples $X_s$. With the light field specified in CIELAB color space, a metric of the estimation quality can be defined as the Euclidean norm between the target field and the estimate:

$$I(X_s) = \sum_{x \in \Omega} \left\| f^{tar}_{Lab}(x) - \hat{f}^{tar}_{Lab}\big(x, X_s, f(X_s)\big) \right\|^2. \tag{6}$$

In light field sampling design, the goal is to select the best set of samples $\{f^{tar}_{Lab}(X_s), X_s\}$ together with an appropriate model $\hat{f}$ to minimize the estimation error $I(X_s)$ of the target light field. The most commonly used sampling approach is uniform grid sampling [16], in which the sampling points in each spatial direction are equally separated. Generally speaking, uniform sampling is relatively easy to design and implement. However, it can be quite inefficient when applied in an environment with non-uniform features. Since the samples are uniformly distributed over the space, regions with features may be undersampled while regions lacking features are oversampled. Thus, efficient and effective sampling requires a regime that can adaptively allocate the different spatial resolutions appropriate to capture the variation of the process in different regions. The resulting approach is adaptive sampling [6], [7], [9], [17]. In this paper, an adaptive sampling approach is developed to guide sensor deployment for target field sampling.

III. ADAPTIVE SAMPLING DESIGN

A. Hierarchical Radial Basis Functions

In adaptive sampling, it is important to choose an appropriate field model to integrate initial samples into an estimate of the underlying function. In [9], Hombal et al. introduced a multi-scale surrogate model for sampling and estimation of an unknown underlying process based on localized radial basis functions. To be consistent with this general sampling regime for an unknown process, in this paper we employ hierarchical radial basis functions (HRBF) to implement coarse-to-fine modeling of the underlying light field. The HRBF network [6], [14] may be viewed as a neural model for multi-scale approximation of a function through multi-layer decomposition of the approximation error space. Each layer of the model is approximated by a radial basis function (RBF) network at a different scale.

1) Decomposition and HRBF Synthesis: Given an underlying function $f$, an $M$-level hierarchical decomposition can be expressed as:

$$\begin{aligned}
e_1 &= f, &&\\
e_1 &= \hat{e}_1 + e_2, & k &= 1,\\
e_2 &= \hat{e}_2 + e_3, & k &= 2,\\
&\;\;\vdots &&\\
e_M &= \hat{e}_M + e_{M+1}, & k &= M,
\end{aligned} \tag{7}$$

where $e_k$ is the approximation error in the $(k-1)$th layer, with $e_1 = f$ initially; $\hat{e}_k$ is the approximation to the error $e_k$ in the $k$th layer, and $\hat{e}_k = e_k - e_{k+1}$. From this decomposition mechanism, an $M$-level approximation $\hat{f}_M$ of the underlying function can be constructed as:

$$\hat{f}_M = f - e_{M+1} = \sum_{k=1}^{M} \hat{e}_k. \tag{8}$$

In any layer $k$ of the above decomposition, the error $e_k(x)$ at point $x$ is approximated by an RBF network of $N_k$ basis functions:

$$\hat{e}_k(x) = \sum_{j=1}^{N_k} \omega_{k,j}\, \phi_{k,j}(x) = \Phi_k(x)\, w_k, \tag{9}$$

where $\phi_{k,j}(x)$ denotes a basis function $\phi(\|x - c_{k,j}\|; \sigma_k)$ centered at $c_{k,j}$ with scale $\sigma_k$, $\Phi_k(x)$ is the interpolation matrix, and $w_k$ is the vector of approximation parameters. Then $\hat{f}_M$ can be represented by a multi-scale RBF approximation model:

$$\hat{f}_M(x) = \sum_{k=1}^{M} \sum_{j=1}^{N_k} \omega_{k,j}\, \phi_{k,j}(x) = \sum_{k=1}^{M} \Phi_k(x)\, w_k = \Phi(x)\, W, \tag{10}$$

where $W$ is the matrix of approximation parameters and $\Phi$ is the multi-scale interpolation matrix with structural parameters $c$ and $\sigma$.
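To make the structure of Eq. (10) concrete, the sketch below evaluates an HRBF model as a sum of layers. The Gaussian basis and the layer data structure are illustrative assumptions; the paper requires only localized radial basis functions:

```python
import numpy as np

def rbf(x, centers, sigma):
    """Localized Gaussian bases phi(||x - c||; sigma), one value per center."""
    d = np.linalg.norm(x[None, :] - centers, axis=1)
    return np.exp(-(d / sigma) ** 2)

def hrbf_eval(x, layers):
    """Evaluate Eq. (10): the sum over layers of Phi_k(x) w_k.

    `layers` is a list of (centers, sigma_k, w_k) tuples, one per scale,
    where w_k has shape (N_k, 3) for the three CIELAB channels.
    """
    f = np.zeros(3)
    for centers, sigma, weights in layers:
        f += rbf(x, centers, sigma) @ weights  # Phi_k(x) w_k
    return f
```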

2) Structural and Approximation Parameters: To determine the structural parameters of the HRBF model, a hierarchical analysis grid is constructed on the problem domain, where each layer is a dyadic partition of the previous layer and the intersections between partitions are taken as nodes. An analysis grid on a 1D domain $[a, b]$ is shown in Fig. 2. The first layer starts with three nodes and resolution $\rho_1 = (b - a)/2$. The nodes in subsequent layers are generated as the child nodes of the nodes in the previous layer [9]:

$$C^{(k,j)}_{k+1} = \begin{cases}
\{c_{k+1,2j-1},\ c_{k+1,2j}\} & j = 1,\\
\{c_{k+1,2j-2},\ c_{k+1,2j-1},\ c_{k+1,2j}\} & 1 < j < 2^k + 1,\\
\{c_{k+1,2j-2},\ c_{k+1,2j-1}\} & j = 2^k + 1,
\end{cases} \tag{11}$$

where the nodes $c_{k+1,2j-2}$, $c_{k+1,2j-1}$ and $c_{k+1,2j}$ respectively represent the left, middle and right children of node $c_{k,j}$. Their locations are calculated as:

$$\begin{aligned}
c_{k+1,2j-2} &= \tfrac{1}{2}(c_{k,j} + c_{k,j-1}),\\
c_{k+1,2j-1} &= c_{k,j},\\
c_{k+1,2j} &= \tfrac{1}{2}(c_{k,j+1} + c_{k,j}).
\end{aligned} \tag{12}$$

Given a problem domain, the structural parameters of the HRBF model are uniquely defined by this analysis grid.


Fig. 2. 1D analysis grid [9]

In each layer of the grid, the location and density of the nodes determine the centers and scales of the RBFs in the corresponding HRBF layer. Thus a node $c_{k,j}$ in the $k$th layer corresponds to a basis function $\phi_{k,j}$ centered on this node with scale $\sigma_k = (b - a)/2^k$. This partition strategy can also be used in higher dimensions. For a problem domain $[a, b]^\lambda$ of dimension $\lambda$, a $\lambda$-dimensional analysis grid can be constructed by applying the above partition approach to each dimension of the domain. Thus in the $k$th layer, the total number of nodes is $N_k = (2^k + 1)^\lambda$ and the corresponding resolution is $\rho_k = (b - a)/2^k$. Fig. 3 shows the analysis grid on a 2D domain $[-4, 4]^2$.

Fig. 3. 2D analysis grid ((a) layer 1, (b) layer 2, (c) layer 3)
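The dyadic grid can be generated directly from the layer index. The hypothetical helper below (1D for brevity) returns the node locations and RBF scale for layer $k$ on $[a, b]$, consistent with $N_k = 2^k + 1$ nodes per dimension and $\sigma_k = (b - a)/2^k$:

```python
import numpy as np

def analysis_grid_1d(a, b, k):
    """Node locations and RBF scale for layer k of the dyadic grid on [a, b]."""
    n_nodes = 2 ** k + 1                  # N_k nodes per dimension
    centers = np.linspace(a, b, n_nodes)  # equally spaced layer-k nodes
    sigma = (b - a) / 2 ** k              # layer scale sigma_k
    return centers, sigma

# Example: layer 2 on [-4, 4] gives 5 nodes at -4, -2, 0, 2, 4 with sigma = 2.
centers, sigma = analysis_grid_1d(-4.0, 4.0, 2)
```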

Once the structural parameters are fixed, optimization of an $M$-layer RBF network involves determining all of the weights $W$ to minimize $\|f - \hat{f}_M\|$. However, if the underlying function $f$ is unknown, the adaptive sampling approach generates a sequential evolution of the model. In this case, the approximation parameters $w_k$ in layer $k$ are chosen to minimize $\|e_k - \hat{e}_k\|$ for optimal approximation of the $k$th layer error.

B. Adaptive Sample Selection

Based on the choice of the HRBF network as the approximation model, given a target light field, a multi-layer analysis grid is first constructed on the problem domain $\Omega$. As shown in Fig. 2, such an analysis grid discretizes the domain $\Omega$ into a set of available sample points, which are equally separated at a prescribed spatial scale $\Delta x$. Suppose we are interested in the target color at these $m$ sample points, but have only $n$ available sensors, where $n < m$. Then sampling and approximation of the target field should consider the optimal deployment of sensors on the sample points.

In HRBF modeling, each sampled point is associated with a set of localized radial basis functions centered on this point but with different scales. When a node on the analysis grid is selected for sampling, its corresponding basis function will be employed in the estimation of the underlying function. In this case, the sensor deployment problem is equivalent to a basis function selection problem. In this problem, there is a set of radial basis functions centered at the $m$ available sample points $X = \{x_i\}_{i=1}^m$, $x_i \in \Omega$, and we would like to select a subset of basis functions associated with $n$ points for the best approximation of the target light field $f^{tar}_{Lab}$. However, for a non-trivial data set, selecting the optimal subset of basis function centers is NP-complete. Thus, instead of globally selecting the optimal $n$ samples, suboptimal selection methods must be considered.

A greedy algorithm may be used to sequentially select basis functions and the corresponding samples. At every iteration of this algorithm, only one basis function is selected; it, along with the previously selected basis functions, is introduced to best approximate the target field at that iteration. Let $\Psi$ denote the set of available basis functions to be selected, and $\Phi_s$ denote the vector of selected basis functions, initially $\Phi_s = \emptyset$. In the first iteration, a basis function is selected from $\Psi$ for the best approximation of the target field $f^{tar}_{Lab}$. Specifically, for each basis function $\phi_i$ in $\Psi$, based on the measurement of the target field at its center $x_i$, an interpolation function $\hat{f}^i_1$ is constructed to approximate $f^{tar}_{Lab}$:

$$\hat{f}^i_1 = w_i \phi_i = \frac{f^{tar}_{Lab}(x_i)}{\phi_i(x_i)}\, \phi_i =
\begin{bmatrix}
\dfrac{L^{tar}(x_i)}{\phi_i(x_i)}\, \phi_i \\[8pt]
\dfrac{a^{tar}(x_i)}{\phi_i(x_i)}\, \phi_i \\[8pt]
\dfrac{b^{tar}(x_i)}{\phi_i(x_i)}\, \phi_i
\end{bmatrix}. \tag{13}$$

Then the basis function $\phi^*_1$ with the minimum interpolation error is selected:

$$\phi^*_1 = \arg\min_{\phi_i \in \Psi} \sum_{x \in \Omega} \left\| f^{tar}_{Lab}(x) - \hat{f}^i_1(x) \right\|^2. \tag{14}$$

Once selected, $\phi^*_1$ is removed from $\Psi$ and added to $\Phi_s$. In each successive iteration, one basis function is selected from $\Psi$ such that the interpolation function formed by it and the previously selected basis functions in $\Phi_s$ best approximates the data $f^{tar}_{Lab}$. For example, in the $k$th iteration, where $1 < k < n$, let $\Phi_s = \{\phi^*_1, \phi^*_2, \ldots, \phi^*_{k-1}\}$ be the vector of previously selected basis functions and $X_s$ be the set of their centers. For every basis function $\phi_i$ with center $c_i$ remaining in $\Psi$, an interpolation function is constructed using the previously selected basis $\Phi_s$ and $\phi_i$:

$$\hat{f}^i_k = W_i \begin{bmatrix} \Phi_s \\ \phi_i \end{bmatrix}, \tag{15}$$

where $W_i$ is a weighting vector chosen to satisfy the interpolation constraints at $c_i$ and at the previously selected centers in $X_s$:

$$\hat{f}^i_k(c_i) = f^{tar}_{Lab}(c_i), \qquad
\hat{f}^i_k(X_s) = f^{tar}_{Lab}(X_s). \tag{16}$$

Since these interpolation functions are used to approximate $f^{tar}_{Lab}$, the basis function $\phi^*_k$ corresponding to the minimum approximation error is selected:

$$\phi^*_k = \arg\min_{\phi_i \in \Psi} \sum_{x \in \Omega} \left\| f^{tar}_{Lab}(x) - \hat{f}^i_k(x) \right\|^2. \tag{17}$$

Once $\phi^*_k$ is selected, it is removed from $\Psi$ and added to $\Phi_s$ to update these sets. This greedy selection procedure is repeated until the number of centers of the selected radial basis functions equals $n$.
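A compact version of this greedy loop is sketched below, under illustrative assumptions: the candidate bases are precomputed as columns of `Phi` on the analysis grid, `target` holds $f^{tar}_{Lab}$ sampled on the grid, and the interpolation constraints of Eq. (16) are imposed by a dense linear solve:

```python
import numpy as np

def greedy_select(Phi, centers_idx, target, n):
    """Greedy basis selection following Eqs. (13)-(17).

    Phi         : (m_grid, m) candidate basis functions evaluated on the grid
    centers_idx : (m,) grid index of each candidate's center
    target      : (m_grid, 3) target field f_tar_Lab on the grid
    n           : number of sensors / basis functions to select
    """
    remaining = list(range(Phi.shape[1]))
    selected = []
    for _ in range(n):
        best, best_err = None, np.inf
        for i in remaining:
            trial = selected + [i]
            rows = [centers_idx[j] for j in trial]
            # Interpolation constraints of Eq. (16) at the trial centers.
            A = Phi[np.ix_(rows, trial)]
            W = np.linalg.solve(A, target[rows])
            # Approximation error over the grid, as in Eqs. (14)/(17).
            err = np.sum((target - Phi[:, trial] @ W) ** 2)
            if err < best_err:
                best, best_err = i, err
        selected.append(best)
        remaining.remove(best)
    return selected
```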

C. Recovery of the Generated Light Field

The previous section provides an iterative approach to selecting a set of samples based on the target light field. Sensors can then be deployed at the selected sample locations to sample and recover the generated light field. Suppose the locations of these sensors are $X_s = \{x_i\}_{i=1}^n$, $x_i \in \Omega$, and the corresponding radial basis functions are collected in the vector $\Phi_s = [\phi^*_1, \phi^*_2, \ldots, \phi^*_{n'}]^T$, where $n \le n'$. For any newly generated light field $f_{Lab}$, the estimate $\hat{f}_{Lab}$ can be derived as an HRBF interpolation based on the measurements of $f_{Lab}$ on $X_s$:

generated light field fLab, the estimation fLab can be derivedas an HRBF interpolation based on the measurements of fLabon Xs:

fLab = WΦs =

WL

Wa

Wb

Φs, (18)

where the interpolation weights are determined by the sensor measurements $f_{Lab}(X_s)$ such that:

$$\begin{bmatrix} W_L \\ W_a \\ W_b \end{bmatrix} \Phi_s(X_s) = f_{Lab}(X_s) = \begin{bmatrix} f_L(X_s) \\ f_a(X_s) \\ f_b(X_s) \end{bmatrix}. \tag{19}$$

Through this interpolation, a multi-scale functional approximation $\hat{f}_{Lab}$ of the light field is generated, which provides an effective basis for lighting control.
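A minimal sketch of this recovery step, under the same illustrative conventions as the selection sketch above (selected bases as columns of `Phi_s`, deployed sensors indexed by `sensor_idx`):

```python
import numpy as np

def recover_field(Phi_s, sensor_idx, measurements):
    """Recover the generated light field per Eqs. (18)-(19).

    Phi_s        : (m_grid, n') selected basis functions evaluated on the grid
    sensor_idx   : (n,) grid indices of the deployed sensors X_s
    measurements : (n, 3) CIELAB sensor readings f_Lab(X_s)
    """
    # Interpolation weights from Eq. (19); lstsq also covers the case n <= n'.
    W, *_ = np.linalg.lstsq(Phi_s[sensor_idx, :], measurements, rcond=None)
    # Evaluate the multi-scale estimate on the whole grid (Eq. 18).
    return Phi_s @ W
```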

IV. A PROTOTYPE LIGHTING SYSTEM

In this section, we describe a prototype lighting system incorporating the proposed sampling methodology and adaptive lighting control. For full-spectrum lighting, the system consists of multispectral LED modules as light sources, together with a set of color sensors used for light field sampling and estimation.

A. System Model

Effective feedback control in a lighting system requires knowledge of the light propagation under given conditions. For an LED lighting system, Afshari et al. [1] introduced a linear light transport model to map the system input to the generated light field at the sensors:

$$f_{RGB} = G\, u_{RGB} + w, \tag{20}$$

where $f_{RGB} \in \mathbb{R}^{m \times 1}$ is the generated light field, represented in RGB color space at a set of spatial points, and $u_{RGB} \in \mathbb{R}^{n \times 1}$ is the RGB LED input. $G$ is an $m \times n$ matrix called the Light Transport Matrix (LTM), and $w \in \mathbb{R}^{m \times 1}$ denotes light disturbances in the system. The LTM $G$ depends on the room configuration and can be identified using a least squares approach. Specifically, for a set of random lighting inputs $U_{RGB} = [u^1_{RGB}, u^2_{RGB}, \ldots, u^N_{RGB}]^T$, the corresponding color outputs are measured as $F_{RGB} = [f^1_{RGB}, f^2_{RGB}, \ldots, f^N_{RGB}]^T$, and the LTM $G$ can be derived as:

$$G = U^{\dagger}_{RGB} F_{RGB}, \tag{21}$$

where $U^{\dagger}_{RGB}$ is the pseudoinverse of $U_{RGB}$, $U^{\dagger}_{RGB} = (U^T_{RGB} U_{RGB})^{-1} U^T_{RGB}$.
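An identification sketch of Eq. (21) follows; the excitation loop and `measure_field` are hypothetical, and only the pseudoinverse step comes from the paper:

```python
import numpy as np

def identify_ltm(U, F):
    """Least-squares LTM identification, Eq. (21): G = pinv(U) F.

    U : (N, n) matrix of random RGB LED inputs, one experiment per row
    F : (N, m) matrix of the corresponding RGB sensor readings
    """
    return np.linalg.pinv(U) @ F

# Hypothetical usage with N random excitations of an n-channel system:
#   U = np.random.rand(N, n)
#   F = np.stack([measure_field(u) for u in U])  # sensor readings per input
#   G = identify_ltm(U, F)
```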

B. Feedback Control

Once an approximation to the generated light field is constructed, it can be used in feedback mode for adaptive lighting control. The objective of the control is to minimize the perceptual difference between the estimated and target light fields. This can be formulated as an optimization problem:

$$\begin{aligned}
\min_u\ & J(u)\\
\text{s.t.}\ & f_{RGB} = G u + w,\\
& f_{Lab} = h(f_{RGB}),\\
& \hat{f}_{Lab} = W\big(X_s, f_{Lab}(X_s)\big)\, \Phi_s,
\end{aligned} \tag{22}$$

where

$$J(u) = \sum_{x \in \Omega} \left\| f^{tar}_{Lab}(x) - \hat{f}_{Lab}(x) \right\|_2^2$$

and $h(\cdot)$ is a nonlinear mapping from the RGB color space to the CIELAB space. With this formulation, a gradient-based control method is developed to iteratively obtain the optimal LED input for the target light field. The control input at the $i$th time step is calculated as:

$$u_{i+1} = u_i + \varepsilon
\begin{bmatrix}
\nabla_u (L^{tar} - \hat{L}(u))\\
\nabla_u (a^{tar} - \hat{a}(u))\\
\nabla_u (b^{tar} - \hat{b}(u))
\end{bmatrix}^{\dagger}_i
\begin{bmatrix}
L^{tar} - \hat{L}\\
a^{tar} - \hat{a}\\
b^{tar} - \hat{b}
\end{bmatrix}_i
\tag{23}$$

where $\varepsilon$ is a tunable step size and $\nabla_u(\cdot)$ is the gradient with respect to $u$.
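The sketch below performs one such iteration, with a finite-difference Jacobian of the estimated field standing in for the analytic gradients in Eq. (23). The helper names, the finite-difference choice, and the sign convention (Jacobian of the estimate, so that the pseudoinverse update contracts the error) are assumptions rather than the authors' implementation:

```python
import numpy as np

def control_step(u, target_lab, estimate_lab, eps=0.1, du=1e-3):
    """One gradient-style update toward Eq. (23).

    u            : current LED input vector
    target_lab   : (m_grid, 3) target field f_tar_Lab on the grid
    estimate_lab : function u -> (m_grid, 3) estimated field via Eqs. (18)-(19)
    """
    err = (target_lab - estimate_lab(u)).ravel()   # stacked L, a, b errors
    # Finite-difference Jacobian of the estimate: one column per LED channel.
    J = np.stack([
        (estimate_lab(u + du * e) - estimate_lab(u)).ravel() / du
        for e in np.eye(len(u))
    ], axis=1)
    # Pseudoinverse update: for small eps this contracts the stacked error.
    return u + eps * np.linalg.pinv(J) @ err
```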

V. EXPERIMENTAL RESULTS

The prototype lighting system described here has been tested and evaluated on the smart space lighting control testbed at the Smart Lighting Engineering Research Center (Fig. 4). This testbed is a room with full power, communications and computational access at all points, and a supporting software architecture to facilitate the implementation of distributed lighting and sensor systems. It consists of 12 Renaissance multispectral LED modules (Acuity Brands), which are accessible through wireless communications and offer real-time control of spectral characteristics. In addition, 10 Colorbug wireless RGB color sensors (Ocean Optics) are used in the testbed to monitor the generated light field for closed-loop lighting control. In our laboratory experiments, the light field is observed on a 2D horizontal plane 81.28 cm high in the smart space. Figs. 5(a)-5(c) show one such target light field, specified in the CIELAB color space as $f^{tar}_{Lab} = [L^{tar}\ a^{tar}\ b^{tar}]^T$. Fig. 5(d) shows the sample distribution generated by adaptive sampling corresponding to this target field.

Fig. 4. Smart space lighting control testbed (Smart Lighting Engineering Research Center, Rensselaer Polytechnic Institute)

Figs. 6-9 show the results of one adaptive sampling and feedback lighting control experiment. Fig. 6 illustrates the light field measurements from a sensor located at $(33.75, -21.5)$ at different time steps while the set of twelve multispectral light sources is controlled in real time based on sensor feedback at the adaptive sampling locations. The target light field at this point is $L = 38.3$, $a = 4.2$ and $b = 19.5$, and the lighting system sequentially adjusts the generated light field toward this target. Fig. 7 shows the generated light field $f_{Lab} = [L\ a\ b]^T$ produced by the lighting system at time $t = 50$, when the control system was stable and fully converged. The corresponding light field estimate $\hat{f}_{Lab} = [\hat{L}\ \hat{a}\ \hat{b}]^T$, derived from 10 discrete, adaptively chosen sensor measurements at $t = 50$, is shown in Fig. 8. In this experiment, the lighting control error $e_{cntl}$ is defined to quantify the perceptual difference between the target and generated light fields:

$$e_{cntl} := \| f^{tar}_{Lab} - f_{Lab} \|_2.$$

Fig. 5. Target light field $f^{tar}_{Lab}$ ((a) $L^{tar}$, (b) $a^{tar}$, (c) $b^{tar}$) and (d) the selected sensor locations, for a planar lighting configuration in the smart space laboratory.

Similarly, the light field estimation error $e_{est}$ is a measure of the perceptual difference between the generated and estimated light fields:

$$e_{est} := \| f_{Lab} - \hat{f}_{Lab} \|_2.$$

Fig. 9 shows the steady-state control error $e_{cntl}$ at time $t = 50$ and the corresponding estimation error $e_{est}$. Both errors are very small in the region of observation. In this case, with accurate feedback of the generated light fields, the lighting system is capable of reproducing the target field with undetectable perceptual distortion.

Fig. 6. Sensor measurement vs. time (target: $L = 38.3$, $a = 4.2$, $b = 19.5$) for a single discrete sensor in the light field under closed-loop feedback control.

In order to further evaluate the adaptive light field sampling approach, a parallel experiment was conducted to test the lighting system with a uniform discrete sampling of the light field using an equal number of sensors. Fig. 10 shows the steady-state control error of this lighting system, where the magenta dots denote the locations of the deployed sensors. In this case, the lighting system fails to adequately reproduce the target field in the region of interest due to the lack of sensors and the resulting insufficient feedback of the generated light field. The mean control error over the problem domain is 4 times higher than that of the system with adaptive sampling.


Fig. 7. Generated light field $f^{act}_{Lab}$ ((a) $L^{act}$, (b) $a^{act}$, (c) $b^{act}$) at time $t = 50$ for the planar light field under closed-loop control with the target shown in Fig. 5.

Fig. 8. Estimated light field $f^{est}_{Lab}$ ((a) $L^{est}$, (b) $a^{est}$, (c) $b^{est}$) at time $t = 50$ for the planar light field under closed-loop control with the target shown in Fig. 5.

Fig. 9. (a) Control error $e_{cntl}$ and (b) estimation error $e_{est}$ at time $t = 50$ for the planar light field under closed-loop control with the target shown in Fig. 5.

TABLE I
MEAN CONTROL ERRORS FOR DIFFERENT TARGET LIGHT FIELDS

                                         Target Field 1   Target Field 2   Target Field 3
Lighting system with adaptive sampling        5.90             5.82             6.61
Lighting system with uniform sampling        28.03            18.11            20.99


Fig. 10. Steady-state control error of the lighting system with uniform sampling for the planar light field under closed-loop control with the target shown in Fig. 5.

A series of similar experiments was implemented to test the performance of the adaptive sampling and feedback control systems in the smart space. Different target light fields were selected in these experiments, and the corresponding mean control errors of the two lighting systems are shown in Table I. In these experiments, the lighting system with adaptive light field sampling consistently reduced the control error by more than 65% relative to the system with uniform sampling.

VI. CONCLUSION AND FUTURE WORK

In this paper, an adaptive sampling approach was introduced for the systematic selection of sensors to sample the generated light field and for the fusion of sensor information to support feedback control of lighting. In this approach, a functional representation of the field is generated that provides an effective basis for sensor fusion and lighting control. Experimental results have shown that this systematic selection of sensor sample locations can significantly reduce the error in the representation of the light field, with a corresponding improvement in lighting control.

For more accurate estimation of the generated light field, future work will consider light field sampling subject to dynamic disturbances such as user activities and natural light. Further experiments will be run to test and analyze the performance of the algorithm under different disturbances. In addition, with the incorporation of robotic technologies into the sensor network, real-time reallocation of sensor locations may be used to adaptively sample a dynamic light field.


ACKNOWLEDGMENT

The authors would like to thank Mr. Sina Afshari, Prof. Agung Julius, Prof. Sandipan Mishra and Prof. John Wen of the Smart Lighting ERC for their contributions to this research. This research is supported primarily by the Engineering Research Centers Program of the National Science Foundation under NSF Cooperative Agreement No. EEC-0812056, in part by New York State under NYSTAR contract C090145, and in part by NSF Grant OCE-1050534.

REFERENCES

[1] S. Afshari, S. Mishra, A. Julius, F. Lizarralde and J.T. Wen, Modeling and feedback control of color-tunable LED lighting systems, 2012 American Control Conference, Montreal, Canada.

[2] M. Babin, J.J. Cullen, C.S. Roesler, P.L. Donaghay, G.J. Doucette, M. Kahru, M.R. Lewis, C.A. Scholin, M.E. Sieracki and H.M. Sosik, New approaches and technologies for observing harmful algal blooms, Oceanography, vol. 18, no. 2, 2005.

[3] M.A. Batalin and G.S. Sukhatme, Coverage, exploration and deployment by a mobile robot and communication network, Telecommunication Systems, vol. 26, no. 2, pp. 181-196, 2004.

[4] D.R. Blidberg, The development of autonomous underwater vehicles (AUVs): a brief summary, IEEE ICRA, vol. 4, 2001.

[5] T.J. Cowles, Planktonic layers: physical and biological interactions on the small scale, Handbook of Scaling Methods in Aquatic Ecology: Measurements, Analysis, Simulation, 2004.

[6] S. Ferrari, M. Maggioni and N.A. Borghese, Multiscale approximation with hierarchical radial basis functions networks, IEEE Transactions on Neural Networks, vol. 15, no. 1, pp. 178-188, 2004.

[7] E. Fiorelli, P. Bhatta and N.E. Leonard, Adaptive sampling using feedback control of an autonomous underwater glider fleet, Proceedings 13th International Symposium on Unmanned Untethered Submersible Technology, 2003.

[8] D. Fries, S. Ivanov, H. Broadbent and A.C. Sanderson, Broadband low-cost coastal sensor networks, Oceanography, vol. 20, no. 4, pp. 73-79, December 2007.

[9] V. Hombal, A.C. Sanderson and D.R. Blidberg, Adaptive multiscale sampling in robotic sensor networks, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), pp. 122-128, 2009.

[10] V. Isler, S. Kannan and K. Daniilidis, Sampling based sensor-network deployment, 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), vol. 2, pp. 1780-1785, 2004.

[11] A. Mahdavi and H. Eissa, Subjective evaluation of architectural lighting via computationally rendered images, Journal of the Illuminating Engineering Society, vol. 31, no. 2, pp. 11-20, 2002.

[12] G.R. Newsham, R.G. Marchand and J.A. Veitch, Preferred surface luminances in offices, by evolution, Journal of the Illuminating Engineering Society, vol. 33, no. 1, pp. 14-29, 2004.

[13] D.O. Popa, H.E. Stephanou, C. Helm and A.C. Sanderson, Robotic deployment of sensor networks using potential fields, 2004 IEEE International Conference on Robotics and Automation (ICRA '04), vol. 1, 2004.

[14] A.C. Sanderson, V.J. Hombal, D.P. Fries, H.A. Broadbent, J.A. Wilson, P.I. Bhanushali, S.Z. Ivanov, M. Luther and S. Meyers, Distributed environmental sensor network: design and experiments, 2006 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 79-84, 2006.

[15] J.A. Veitch and G.R. Newsham, Lighting quality and energy-efficiency effects on task performance, mood, health, satisfaction and comfort, Journal of the Illuminating Engineering Society, vol. 27, no. 1, pp. 107-129, 1998.

[16] J.S. Willcox, J.G. Bellingham, Y. Zhang and A.B. Baggeroer, Performance metrics for oceanographic surveys with autonomous underwater vehicles, IEEE Journal of Oceanic Engineering, vol. 26, no. 4, pp. 711-725, 2001.

[17] M. Zoghi and M.H. Kahaei, Adaptive sensor selection in wireless sensor networks for target tracking, IET Signal Processing, vol. 4, no. 5, pp. 530-536, 2010.
