
Tracking colour objects using adaptive mixture models

Stephen J. McKenna a,*, Yogesh Raja b, Shaogang Gong b

a Department of Applied Computing, University of Dundee, Dundee DD1 4HN, UK
b Department of Computer Science, Queen Mary and Westfield College, Mile End Road, London E1 4NS, UK

Received 6 October 1997; received in revised form 16 March 1998; accepted 26 March 1998

* Corresponding author. Tel.: 0044 171 975 5230; fax: 0044 181 980 6533.

Abstract

The use of adaptive Gaussian mixtures to model the colour distributions of objects is described. These models are used to perform robust, real-time tracking under varying illumination, viewing geometry and camera parameters. Observed log-likelihood measurements were used to perform selective adaptation. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Real-time tracking; Colour model; Gaussian mixture model; Adaptive learning

1. Introduction

Colour can provide an efficient visual cue for focus of attention, object tracking and recognition, allowing real-time performance to be obtained using only modest hardware. However, the apparent colour of an object depends upon the illumination conditions, the viewing geometry and the camera parameters, all of which can vary over time. Approaches to colour constancy attempt to reconstruct the incident light and adjust the observed reflectances accordingly (e.g. Ref. [1]). In practice, these methods are only applicable in highly constrained environments. In this paper, a statistical approach is adopted in which colour distributions are modelled over time. These stochastic models estimate an object's colour distribution on-line and adapt to accommodate changes in the viewing conditions. They are used to perform robust, real-time object tracking under variations in illumination, viewing geometry and camera parameters.

Swain and Ballard [2] renewed interest in colour-based recognition through their use of colour histograms for real-time matching. Kjeldsen used Gaussian kernels to smooth the histograms [3]. These colour histogram methods can be viewed as simple, non-parametric forms of density estimation in colour space. They gave reasonable results only because the number of data points (pixels) was always high and because the colour space was coarsely quantised. In the absence of a sufficiently accurate model for apparent colour, good parametric models for density estimation cannot be obtained. Instead, a semi-parametric approach has been adopted using Gaussian mixture models. Estimation is, thus, possible in a finely quantised colour space using relatively few data points without imposing an unrealistic parametric form on the colour distribution. Gaussian mixture models can also be viewed as a form of generalised radial basis function network in which each Gaussian component is a basis function or 'hidden' unit. The component priors can be viewed as weights in an output layer.

The mixture models are adapted on-line using stochastic update equations. It is this adaptation process which is the main focus of this paper. In order to boot-strap the tracker for object detection and re-initialisation after a tracking failure, a set of predetermined generic object colour models which perform reasonably in a wide range of illumination conditions can be used. These are determined off-line using an iterative algorithm. Once an object is being tracked, the model adapts and improves tracking performance by becoming specific to the observed conditions.

Finite mixture models have also been discussed at length elsewhere [4-10]. In particular, Priebe and Marchette [8] describe an algorithm for recursive mixture density estimation. It was extended to model non-stationary data series through the use of temporal windowing. Their algorithm adds new components dynamically when the mixture model fails to account well for a new data point. The approach adopted here differs in that the number of mixture components is determined using a fixed data set. These components' parameters are then adapted on-line while keeping the number of components fixed.


The remainder of this paper is organised as follows. Gaussian mixtures for modelling objects' colour distributions are described in Section 2. In Section 3, a method for adapting the mixture models over time is given. Section 4 describes selective adaptation. Experimental results and conclusions are given in Sections 5 and 6.

2. Colour mixture models

The conditional density for a pixel, x, belonging to an object, O, is modelled as a Gaussian mixture with m component densities:

\[ p(x \mid O) = \sum_{j=1}^{m} p(x \mid j)\,\pi(j) \]

where a mixing parameter π(j) corresponds to the prior probability that x was generated by the jth component and \(\sum_{j=1}^{m} \pi(j) = 1\). Each mixture component, p(x | j), is a Gaussian with mean μ_j and covariance matrix Σ_j, i.e. in the case of a two-dimensional colour space:

\[ p(x \mid j) = \frac{1}{2\pi\,\lvert\Sigma_j\rvert^{1/2}}\,\exp\!\Big(-\tfrac{1}{2}\,(x-\mu_j)^{\mathrm{T}}\,\Sigma_j^{-1}\,(x-\mu_j)\Big) \]

Expectation-maximisation (EM) provides an effective maximum-likelihood algorithm for fitting such a mixture to a data set X^(0) of size N^(0) [4,11,12]. The EM algorithm is iterative, with the mixture parameters being updated in each iteration. Let E^old denote the sum of the posterior probabilities of the data evaluated using the old model from the previous iteration, \(E^{\mathrm{old}} = \sum_{x \in X^{(0)}} P^{\mathrm{old}}(j \mid x)\). The following update rules are applied in the order given:

\[ \mu_j^{\mathrm{new}} = \frac{\sum P^{\mathrm{old}}(j \mid x)\,x}{E^{\mathrm{old}}} \]

\[ \pi^{\mathrm{new}}(j) = \frac{E^{\mathrm{old}}}{N^{(0)}} \]

\[ \Sigma_j^{\mathrm{new}} = \frac{\sum P^{\mathrm{old}}(j \mid x)\,(x-\mu_j^{\mathrm{new}})(x-\mu_j^{\mathrm{new}})^{\mathrm{T}}}{E^{\mathrm{old}}} \]

where all summations are over x ∈ X^(0). Bayes' theorem gives the posterior probabilities:

\[ P(j \mid x) = \frac{p(x \mid j)\,\pi(j)}{p(x \mid O)} \]
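For concreteness, a minimal NumPy sketch of one such EM iteration is given below. The function and variable names (em_step, pis, mus, covs, resp) are illustrative choices rather than the authors' implementation; the E-step applies Bayes' theorem and the M-step follows the update order stated above.

import numpy as np

def em_step(X, pis, mus, covs):
    """One EM iteration for a two-dimensional Gaussian mixture (illustrative sketch).
    X: (N, 2) colour pixels; pis: (m,) priors pi(j); mus: (m, 2) means; covs: (m, 2, 2) covariances."""
    N, m = X.shape[0], len(pis)

    # E-step: posterior probabilities P(j|x) via Bayes' theorem.
    resp = np.empty((N, m))
    for j in range(m):
        diff = X - mus[j]
        inv = np.linalg.inv(covs[j])
        norm = 2.0 * np.pi * np.sqrt(np.linalg.det(covs[j]))
        resp[:, j] = pis[j] * np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff)) / norm
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step, in the order given in the text: means, then priors, then covariances.
    E_old = resp.sum(axis=0)                      # sum of posteriors per component
    new_mus = (resp.T @ X) / E_old[:, None]
    new_pis = E_old / N
    new_covs = np.empty_like(covs)
    for j in range(m):
        diff = X - new_mus[j]
        new_covs[j] = (resp[:, j, None] * diff).T @ diff / E_old[j]
    return new_pis, new_mus, new_covs

Iterating em_step until the data likelihood stops increasing reproduces the fitting procedure described here.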

EM monotonically increases the likelihood with each iteration, converging to a local maximum. The resulting mixture model will depend on the number of components m and the initial choice of parameters for these components. Initially, the following simple procedure was used. A suitable value for m was chosen based upon visual inspection of the object's colour distribution. The component means were initialised to a random subset of the training data points. All priors were initialised to π(j) = 1/m and the covariance matrices were initialised to σI, where σ was the Euclidean distance from the component's mean to its nearest neighbouring component's mean. In practice, this initialisation method invariably provided good performance for the tracking application described in this paper. Alternatively, a constructive algorithm which uses cross-validation to automatically select the number of components m and their parameters has been used [13]. In this method, disjoint training and validation sets are used. A single Gaussian is first fit to the training set. The number of components is then increased by iterating the following steps: (1) the likelihood of the validation set given the current mixture model is estimated; (2) the component with the lowest responsibility for the training set is split into two separate components with equal covariance matrices and principal axes equal to the original component's principal axis; (3) the iterative EM algorithm is run. These three steps are repeated until the validation likelihood is maximised or is considered to be sufficiently large.
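The initialisation just described might be sketched as follows; this is an illustrative NumPy fragment with hypothetical names, and m is assumed to have been chosen by inspection as in the text.

import numpy as np

def init_mixture(X, m, rng=None):
    """Initialise an m-component mixture as described in the text: means drawn
    from a random subset of the data, priors 1/m, and covariances sigma*I with
    sigma the distance to the nearest neighbouring mean (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    mus = X[rng.choice(len(X), size=m, replace=False)]
    pis = np.full(m, 1.0 / m)
    covs = np.empty((m, X.shape[1], X.shape[1]))
    for j in range(m):
        dists = np.linalg.norm(mus - mus[j], axis=1)
        sigma = np.min(dists[dists > 0]) if m > 1 else 1.0   # nearest neighbouring mean
        covs[j] = sigma * np.eye(X.shape[1])
    return pis, mus, covs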

Most colour cameras provide an RGB (red, green, blue) signal. In order to model objects' colour distributions, the RGB signal is first transformed to make the intensity or brightness explicit so that it can be discarded in order to obtain a high level of invariance to the intensity of ambient illumination. Here the HSI (hue, saturation, intensity) representation was used and colour distributions were modelled in the two-dimensional hue-saturation space. Hue corresponds to our intuitive notion of 'colour' whilst saturation corresponds to our idea of 'vividness' or 'purity' of colour. At low saturation, measurements of hue become unreliable and are discarded. Likewise, pixels with very high intensity are discarded.
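A sketch of this colour-space preparation is given below, assuming 8-bit RGB pixels and using the hue and saturation returned by Python's colorsys module as a stand-in for the HSI transform; the two thresholds are illustrative values rather than the ones used in the paper.

import numpy as np
import colorsys

def hs_pixels(rgb, min_saturation=0.1, max_intensity=0.9):
    """Map (N, 3) RGB pixels in [0, 255] to hue-saturation pairs, discarding
    low-saturation pixels (unreliable hue) and very bright pixels (sketch)."""
    keep = []
    for r, g, b in np.asarray(rgb, dtype=float) / 255.0:
        h, s, _ = colorsys.rgb_to_hsv(r, g, b)   # hue and saturation in [0, 1]
        intensity = (r + g + b) / 3.0            # HSI-style intensity
        if s >= min_saturation and intensity <= max_intensity:
            keep.append((h, s))
    return np.asarray(keep)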

It should be noted that the HSI system does not relate well to human vision. In particular, the usual definition of intensity as (R + G + B)/3 is at odds with our perception of intensity. However, this is not important for the tracking application described here. If in other applications it was deemed desirable to relate the colour models to human perception, then perceptually based systems like CIE L*u*v* and CIE L*a*b* should be used instead of HSI.

Gaussian mixture models have been used to perform real-time object tracking given reasonably constrained illumination conditions. The resulting tracking system is surprisingly robust under large rotations in depth, changes of scale and partial occlusions [14,15]. However, in order to cope with large changes in illumination conditions in particular, an adaptive model is required.

3. Adaptive colour mixture models

A method is presented here for modelling colour dynamically by updating a colour model based on the changing appearance of the object. Fig. 1 illustrates a colour mixture model of a multi-coloured object adapting over time. While the components' parameters are adapted over time, the number of components is fixed. The assumption made here is that the number of components needed to accurately model an object's colour does not alter significantly with changing viewing conditions. However, the parameters of the model will definitely need to adapt. An initial mixture model is obtained by running the EM algorithm discussed in the previous section. Each mixture component then has an 'initial' parameter set (π_0, μ_0, Σ_0). In each subsequent frame, t, a new set of pixels, X^(t), is sampled from the object and can be used to update the mixture model¹. These colour pixel data are assumed to sample a slowly varying non-stationary signal. Let E^(t) denote the sum of the posterior probabilities of the data in frame t, \(E^{(t)} = \sum_{x \in X^{(t)}} P(j \mid x)\). The parameters are first estimated for each mixture component, j, using only the new data, X^(t), from frame t:

\[ \mu^{(t)} = \frac{\sum P(j \mid x)\,x}{E^{(t)}} \]

\[ \Sigma^{(t)} = \frac{\sum P(j \mid x)\,(x-\mu_{t-1})(x-\mu_{t-1})^{\mathrm{T}}}{E^{(t)}} \]

\[ \pi^{(t)} = \frac{E^{(t)}}{N^{(t)}} \]

where N^(t) denotes the number of pixels in the new data set and all summations are over x ∈ X^(t). The mixture model components then have their parameters updated using weighted sums of the previous recursive estimates, (π_{t-1}, μ_{t-1}, Σ_{t-1}), estimates based on the new data, (π^(t), μ^(t), Σ^(t)), and estimates based on the old data, (π^(t-L-1), μ^(t-L-1), Σ^(t-L-1)) (see Appendix A):

\[ \mu_t = \mu_{t-1} + \frac{E^{(t)}}{D_t}\big(\mu^{(t)} - \mu_{t-1}\big) - \frac{E^{(t-L-1)}}{D_t}\big(\mu^{(t-L-1)} - \mu_{t-1}\big) \]

\[ \Sigma_t = \Sigma_{t-1} + \frac{E^{(t)}}{D_t}\big(\Sigma^{(t)} - \Sigma_{t-1}\big) - \frac{E^{(t-L-1)}}{D_t}\big(\Sigma^{(t-L-1)} - \Sigma_{t-1}\big) \]

\[ \pi_t = \pi_{t-1} + \frac{N^{(t)}}{\sum_{\tau=t-L}^{t} N^{(\tau)}}\big(\pi^{(t)} - \pi_{t-1}\big) - \frac{N^{(t-L-1)}}{\sum_{\tau=t-L}^{t} N^{(\tau)}}\big(\pi^{(t-L-1)} - \pi_{t-1}\big) \]

where \(D_t = \sum_{\tau=t-L}^{t} E^{(\tau)}\). The following approximation is used for efficiency:

\[ E^{(t-L-1)} \approx \frac{D_{t-1}}{L+1} \]

This yields a recursive expression for D_t:

\[ D_t = \Big(1 - \frac{1}{L+1}\Big)\,D_{t-1} + E^{(t)} \]

The parameter L controls the adaptivity of the model².

¹ Throughout this paper, superscript (t) denotes a quantity based only on data from frame t. Subscripts denote recursive estimates.

² Setting L = t and ignoring terms based on frame t - L - 1 gives a stochastic algorithm for estimating a Gaussian mixture for a stationary signal [4,16].

Fig. 1. A mixture model superimposed onto plots of a bottle's colour distribution. Hue corresponds to angle and saturation to the distance from the centre. Ellipses show the four Gaussian components. The leftmost plot shows the original mixture model. The remaining two plots show the model adapting to the illumination and viewing conditions.
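As an illustration, the per-frame estimation and recursive blending above might be implemented along the following lines for a single component. The class and attribute names are our own; a deque of per-frame statistics is kept so that the t - L - 1 terms are available exactly, whereas the paper approximates E^(t-L-1) by D_{t-1}/(L+1) for efficiency.

import numpy as np
from collections import deque

class AdaptiveComponent:
    """Recursive estimates (pi_t, mu_t, Sigma_t) for one mixture component,
    blended over a sliding window of L + 1 frames (illustrative sketch)."""

    def __init__(self, pi0, mu0, cov0, L):
        self.pi = pi0
        self.mu = np.asarray(mu0, dtype=float)
        self.cov = np.asarray(cov0, dtype=float)
        self.L = L
        self.window = deque()                    # per-frame (E, mu, cov, pi, N)

    def update(self, X, resp):
        """X: (N, 2) pixels from the current frame; resp: (N,) posteriors P(j|x)."""
        pi_prev, mu_prev, cov_prev = self.pi, self.mu, self.cov
        N = len(X)
        E = resp.sum()

        # Estimates from the new data only (superscript (t) quantities).
        mu_f = (resp[:, None] * X).sum(axis=0) / E
        diff = X - mu_prev                       # relative to mu_{t-1}
        cov_f = (resp[:, None] * diff).T @ diff / E
        pi_f = E / N

        self.window.append((E, mu_f, cov_f, pi_f, N))
        old = self.window.popleft() if len(self.window) > self.L + 1 else None

        D = sum(w[0] for w in self.window)       # D_t = sum of E over the window
        N_sum = sum(w[4] for w in self.window)   # sum of N over the window

        # Weighted blending of new, previous and dropped-frame estimates.
        self.mu = mu_prev + (E / D) * (mu_f - mu_prev)
        self.cov = cov_prev + (E / D) * (cov_f - cov_prev)
        self.pi = pi_prev + (N / N_sum) * (pi_f - pi_prev)
        if old is not None:
            E_o, mu_o, cov_o, pi_o, N_o = old
            self.mu -= (E_o / D) * (mu_o - mu_prev)
            self.cov -= (E_o / D) * (cov_o - cov_prev)
            self.pi -= (N_o / N_sum) * (pi_o - pi_prev)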

4. Selective adaptation

An obvious problem with adapting a colour model during tracking is the lack of ground-truth. Any colour-based tracker can lose the object it is tracking due, for example, to occlusion. If such errors go undetected, the colour model will adapt to image regions which do not correspond to the object. In order to alleviate this problem, observed log-likelihood measurements were used to detect erroneous frames. Colour data from these frames were not used to adapt the object's colour model.

The adaptive mixture model seeks to maximise the log-likelihood of the colour data over time. The normalised log-likelihood, ℒ^(t), of the data, X^(t), observed from the object at time t is given by:

\[ \mathcal{L}^{(t)} = \frac{1}{N^{(t)}} \sum_{x \in X^{(t)}} \log p(x \mid O) \]

At each time frame, ℒ^(t) is evaluated. If the tracker loses the object there is often a sudden, large drop in its value. This provides a way to detect tracker failure. Adaptation is then suspended until the object is again tracked with sufficiently high likelihood. A temporal filter was used to compute a threshold, T_t. Adaptation was only performed when ℒ^(t) > T_t. The median, ν, and standard deviation, σ, of ℒ were computed for the n most recent above-threshold frames, where n < L. The threshold was set to T_t = ν - kσ, where k was a constant. In all the experiments described here, k = 1.5, n = 2f and L = 6f, where f denotes the frame rate in Hz.
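A sketch of this gating logic is given below (NumPy; the names are illustrative). The default n = 30 corresponds to n = 2f at the reported frame rate of roughly 15 Hz, and only above-threshold measurements are added to the temporal filter.

import numpy as np
from collections import deque

def normalised_log_likelihood(X, pis, mus, covs):
    """Normalised log-likelihood (1/N) * sum_x log p(x | object) under the mixture."""
    dens = np.zeros(len(X))
    for pi_j, mu, cov in zip(pis, mus, covs):
        diff = X - mu
        inv = np.linalg.inv(cov)
        norm = 2.0 * np.pi * np.sqrt(np.linalg.det(cov))
        dens += pi_j * np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff)) / norm
    return np.mean(np.log(dens))

class AdaptationGate:
    """Adapt only when the normalised log-likelihood exceeds the threshold
    median - k * std computed over the n most recent above-threshold frames (sketch)."""

    def __init__(self, k=1.5, n=30):
        self.k = k
        self.history = deque(maxlen=n)

    def should_adapt(self, loglik):
        if len(self.history) < 2:                # too little history yet: accept and record
            self.history.append(loglik)
            return True
        threshold = np.median(self.history) - self.k * np.std(self.history)
        if loglik > threshold:
            self.history.append(loglik)          # only above-threshold frames enter the filter
            return True
        return False                             # treat the frame as an outlier; suspend adaptation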

5. Experiments

The adaptive mixture modelling described in the previous two sections was integrated with an existing colour-based tracking system [14,15] implemented on a standard 200 MHz Pentium PC platform with a Matrox Meteor framegrabber. This system performs tracking at approximately f ≈ 15 Hz. The tracker estimates the centroid, height and width of the object. New samples of data for adaptation are gathered from a region of appropriate aspect ratio centred on the estimated object centroid. It is assumed that these data form a representative sample of the object's colours. This will hold for a large class of objects.

Figs. 2 and 3 illustrate the use of the mixture model for face tracking and the advantage of an adaptive model over a non-adaptive one. In this sequence, the illumination conditions coupled with the camera's auto-iris mechanism resulted in large changes in the apparent colour of the face as the person approached the window. Towards the end of the sequence the face became very dark, making hue and saturation measurements unreliable. In Fig. 2, a non-adaptive model was trained on the first image of the sequence and used to track throughout. It was unable to cope with the varying conditions and failure eventually occurred. In Fig. 3, the model was allowed to adapt and successfully maintained lock on the face.

Fig. 4 illustrates the advantage of selecting when to adapt. The person moved through challenging tracking conditions before approaching the camera at close range (frames 50–60). Since the camera was placed in the doorway of another room with its own lighting conditions, the person's face underwent a large, sudden and temporary change in apparent colour. When adaptation was performed in every frame, this sudden change had a drastic effect on the model and ultimately led the tracker to fail when the person receded into the corridor. With selective adaptation, these sudden changes were treated as outliers and adaptation was suspended, permitting the tracker to recover.

Fig. 5 depicts the tracking of a multi-coloured item of clothing with adaptation performed in every frame. Although tracking was robust over many frames, erroneous adaptation eventually resulted in failure. Fig. 6 shows the last four frames from the same sequence tracked correctly using selective adaptation.

Fig. 2. Eight frames from a sequence in which a face was tracked using a non-adaptive model. The apparent colour of the face changes due to: (i) varying illumination; and (ii) the camera's auto-iris mechanism which adjusts to the bright exterior light.

Fig. 3. The sequence depicted in Fig. 2 tracked with an adaptive colour model. Here, the model adapts to cope with the change in apparent colour. Only the last four images are shown for conciseness. Performance in previous frames was similar.

Fig. 4. At the top are frames 5, 15, 25, 35, 45, 55, 65 and 75 from a sequence. There is strong directional and exterior illumination. The walls have a fleshy tone. At around frame 55, the subject rapidly approaches the camera, which is situated in a doorway, resulting in rapid changes in illumination, scale and auto-iris parameters. This can be seen in the three-dimensional plot of the hue-saturation distribution over time. In the top sequence, the model was allowed to adapt in every frame, resulting in failure at around frame 60. The lower sequence illustrates the use of selective adaptation. The right-hand plot shows the normalised log-likelihood measurements and the adaptation threshold.

Fig. 5. A green, yellow and black shirt tracked using the adaptive mechanism. Eventually, tracking inaccuracies cause the model to adapt erroneously and the system fails.

Fig. 6. The sequence shown in Fig. 5 tracked using selective adaptation. The shirt was correctly tracked throughout. Only the last four frames are shown for brevity.

6. Conclusions

Objects' colour distributions were modelled using Gaussian mixture models in hue-saturation space. An adaptive learning algorithm was used to update these colour models over time and was found to be stable and efficient. These adaptive models were used to perform colour-based object tracking in real-time under varying illumination, viewing geometry and camera parameters. Outlier detection based on a normalised log-likelihood statistic was used to detect tracking failures. This adaptive scheme outperformed the non-adaptive colour models.

Topics for further work include: (i) emphasised co-operation with other visual cues during periods when colour becomes unreliable; (ii) adaptive modelling of background scene colours; and (iii) adaptive model order selection, i.e. adaptation of the mixture size during tracking.

Acknowledgements

S.J.M. was supported by EPSRC grant IMV GR/K44657 whilst at Queen Mary and Westfield College. Y.R. was supported by an EPSRC/BBC CASE studentship.

Appendix A

Here we derive the update equations for the adaptive mixture model components. For each mixture component, let μ_t and Σ_t be the mean and the covariance matrix estimated from the L + 1 most recent time-slots:

\[ \mu_t = \frac{\sum_{\tau=t-L}^{t} \sum_{x \in X^{(\tau)}} P(j \mid x)\,x}{\sum_{\tau=t-L}^{t} E^{(\tau)}} \]

\[ \Sigma_t = \frac{\sum_{\tau=t-L}^{t} \sum_{x \in X^{(\tau)}} P(j \mid x)\,(x-\mu_{\tau-1})(x-\mu_{\tau-1})^{\mathrm{T}}}{\sum_{\tau=t-L}^{t} E^{(\tau)}} \]

The above expressions are both of the form:

\[ \theta_t = \frac{\sum_{\tau=t-L}^{t} E^{(\tau)}\,\theta^{(\tau)}}{D_t} \]

where θ_t denotes either μ_t or Σ_t. A recursive expression for θ_t is derived as follows:

\[ \theta_t = \frac{1}{D_t}\left[\sum_{\tau=t-1-L}^{t-1} E^{(\tau)}\theta^{(\tau)} + E^{(t)}\theta^{(t)} - E^{(t-L-1)}\theta^{(t-L-1)}\right] \]

\[ = \frac{1}{D_t}\left[D_{t-1}\,\theta_{t-1} + E^{(t)}\theta^{(t)} - E^{(t-L-1)}\theta^{(t-L-1)}\right] \]

\[ = \frac{1}{D_t}\left[\big(D_t - E^{(t)} + E^{(t-L-1)}\big)\theta_{t-1} + E^{(t)}\theta^{(t)} - E^{(t-L-1)}\theta^{(t-L-1)}\right] \]

\[ = \theta_{t-1} + \frac{E^{(t)}}{D_t}\big(\theta^{(t)} - \theta_{t-1}\big) - \frac{E^{(t-L-1)}}{D_t}\big(\theta^{(t-L-1)} - \theta_{t-1}\big) \qquad \mathrm{(A1)} \]

The update expression for the component priors π_t is obtained similarly to Eq. (A1). If the number of data points is the same in every time frame (i.e. N^(τ) = N for all τ) then we have:

\[ \pi_t = \pi_{t-1} + \frac{\pi^{(t)} - \pi^{(t-L-1)}}{L+1} \]
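As a sanity check on the rearrangement leading to Eq. (A1), the short SymPy fragment below (our own illustration, for a scalar θ and a window of L = 2) confirms symbolically that the recursive form equals the windowed estimate.

import sympy as sp

L = 2
E = sp.symbols('E0:%d' % (L + 2), positive=True)     # E^(t-L-1), ..., E^(t)
th = sp.symbols('th0:%d' % (L + 2))                  # theta^(t-L-1), ..., theta^(t)

D_prev = sum(E[:L + 1])                              # D_{t-1}
D_t = sum(E[1:])                                     # D_t
theta_prev = sum(e * x for e, x in zip(E[:L + 1], th[:L + 1])) / D_prev
windowed = sum(e * x for e, x in zip(E[1:], th[1:])) / D_t

# Eq. (A1): recursive blending of theta_{t-1} with the new and dropped estimates.
recursive = (theta_prev + (E[-1] / D_t) * (th[-1] - theta_prev)
             - (E[0] / D_t) * (th[0] - theta_prev))

assert sp.cancel(windowed - recursive) == 0          # the two forms agree identically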

References

[1] D.A. Forsyth, Colour constancy and its applications in machine vision, Ph.D. thesis, University of Oxford, 1988.
[2] M.J. Swain, D.H. Ballard, Colour indexing, International Journal of Computer Vision (1991) 11–32.
[3] R. Kjeldsen, J. Kender, Finding skin in color images, in: 2nd International Conference on Automatic Face and Gesture Recognition, Killington, Vermont, USA, 1996.
[4] C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995.
[5] B.S. Everitt, D.J. Hand, Finite Mixture Distributions, Chapman and Hall, New York, 1981.
[6] G.J. McLachlan, K.E. Basford, Mixture Models: Inference and Applications to Clustering, Marcel Dekker Inc., New York, 1988.
[7] C.E. Priebe, Adaptive mixtures, Journal of the American Statistical Association 89 (427) (1994) 796–806.
[8] C.E. Priebe, D.J. Marchette, Adaptive mixtures: recursive nonparametric pattern recognition, Pattern Recognition 24 (12) (1991) 1197–1209.
[9] C.E. Priebe, D.J. Marchette, Adaptive mixture density estimation, Pattern Recognition 26 (5) (1993) 771–785.
[10] D.M. Titterington, A.F.M. Smith, U.E. Makov, Statistical Analysis of Finite Mixture Distributions, Wiley, New York, 1985.
[11] A.P. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, Journal of the Royal Statistical Society B-39 (1977) 1–38.
[12] R.A. Redner, H.F. Walker, Mixture densities, maximum likelihood and the EM algorithm, SIAM Review 26 (2) (1984) 195–239.
[13] Y. Raja, S.J. McKenna, S. Gong, Colour model selection and adaptation in dynamic scenes, in: European Conference on Computer Vision, Freiburg, Germany, June 1998.


[14] S. McKenna, S. Gong, Y. Raja, Face recognition in dynamic scenes, in: British Machine Vision Conference, University of Essex, UK, 1997, pp. 140–151.
[15] Y. Raja, S. McKenna, S. Gong, Segmentation and tracking using colour mixture models, in: Asian Conference on Computer Vision, Hong Kong, 1998, pp. 607–614.
[16] H.G.C. Traven, A neural network approach to statistical pattern classification by 'semiparametric' estimation of probability density functions, IEEE Transactions on Neural Networks 2 (3) (1991) 366–378.
