
Mapping of Visible to IR Data for Scene Matching

M. Ali Chaudhry, Asim Baig and Rab Nawaz

CESAT, Islamabad, Pakistan

Abstract: Scene matching has become a challenging problem due to multi-temporal and multi-modal image acquisition. There is no direct and linear relation between EO (electro-optical) and IR (infra-red) images, which are required to be matched. In this paper, we propose a statistical technique for mapping EO to IR data by a transformation function deduced from their gray level distributions. As the proposed technique is statistical and deals with multi-modal data, the MI (mutual information) index and its variants are the more appropriate similarity measures in this case. Therefore, we have used mutual information as a measure of statistical dependence between the two images. The MI results show that the technique is effective in mapping the visible to the IR spectrum.

Keywords: Multi-temporal, multi-modal, infra-red, electro-optical, mutual information.

I. INTRODUCTION

Infrared and visible spectrum imaging systems are widely used in military and non-military applications. Normally, information from both spectra is fused together to extract more of the information present in the scene. Sometimes it is desirable to compare EO and IR images in applications such as registration, navigation, remote sensing, medical imaging, etc. [1,2,3,4]. Normally, in such applications, the task is to search for an IR template image in a visible-spectrum reference image (multi-modal analysis). There is no strong relation or correspondence between infrared and visible spectrum images because of their nature. The relationship between the gray values of corresponding pixels is complex and unknown.

The imaging mechanisms in the IR and visible spectrum are entirely different because of the physical and electrical characteristics of the imaging sensors. Visible spectrum imaging (0.3-0.7 µm) depends mainly on the amount of light falling on the CCD (charge-coupled device) from the scene. In IR imaging, the amount of IR radiation (3-14 µm) reaching the IRFPA (infrared focal plane array) sensor from the scene is converted to a meaningful gray scale image.

Due to the nature of the sensors, images in the two spectra differ a great deal; therefore correspondence between gray level features in the two domains cannot be established with a linear relationship. The quality of features such as edges, lines and corners is also somewhat dependent upon the gray level distribution. Though visible and IR images are formed by different mechanisms, they do share commonalities since they represent the same scene. There exists some relationship between the two images, which can be learnt statistically.

The gray level distribution in IR images of the same scene under different temperatures and consistent illumination would differ, which would result in a one-to-many mapping. Therefore, the mapping of visible data to the IR spectrum will not be unique; it will be temperature/season dependent. The mapping problem can be divided across different weathers/temperatures depending upon the availability of IR data along with visible data of the same scene. Transformation functions for different temperatures can be estimated and then used for mapping visible data to IR according to the weather conditions.

In this paper, we propose a mapping of the visible image to the IR image using Bayesian iterative estimation. It consists of two phases: in the first phase, many-to-one mapping data are used to build the likelihood transformation function; in the next phase, the previously deduced transformation is used to estimate IR values from unseen input visible data.

II. MAPPING TECHNIQUE OF VISIBLE TO IR DATA

A. Motivation

Due to the nature of the imaging systems, images in the visible and IR domains do not have an explainable relationship. Apart from scaling and orientation differences, the main difference is contrast reversal between the two images. Corresponding regions in the IR and visible image might have either partial or complete intensity reversal. Image similarity measures such as cross correlation, mean square difference, mutual information, etc. [5] can be used to quantitatively estimate the amount of resemblance between two images. A measure like cross correlation will yield a matching index close to unity for identical regions, but in case of partial/complete contrast reversal it may report a non-match. The relationship that does exist between the pixel values of two images in different spectra can be learnt statistically.
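As a toy illustration of why correlation fails under contrast reversal (not taken from the paper; the array below is a random stand-in region), consider the normalized cross correlation between a region and its inverted copy:

import numpy as np

region = np.random.randint(0, 256, (64, 64)).astype(float)   # stand-in image region
inverted = 255 - region                                       # complete contrast reversal

# Normalized cross correlation is +1 for the identical region but -1 after
# reversal, so a correlation-based matcher would declare a non-match even
# though both regions depict the same scene content.
ncc = np.corrcoef(region.ravel(), inverted.ravel())[0, 1]
print(ncc)   # approximately -1.0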

B. Image Pre-processing

Visible and IR imaging sensors have their own characteristics, which results in different spatial resolution and scaling. If we are to estimate the likelihood transformation function from the data, the two images must first be roughly aligned. There are three main differences: rotation, field of view (translation) and scaling. The rotation, translation and scaling factors can be estimated using the MATLAB image registration toolbox. Although the alignment of micro features in the two images will only be approximate, the corresponding regions will be aligned.
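For illustration only, a rough alignment of this kind can also be sketched in Python with OpenCV, assuming the similarity parameters (scale, rotation, shift) have already been estimated by a registration tool; the file names and numeric values below are placeholders, not values from the paper:

import cv2
import numpy as np

# Placeholder similarity parameters assumed to come from a registration step.
scale, angle_deg, tx, ty = 1.7, 2.0, 12.0, -5.0

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE)

# Median filtering removes salt-and-pepper sensor noise from the IR image
# before alignment, as described in the offline learning stage.
ir = cv2.medianBlur(ir, 3)

# Build a 2x3 similarity transform (rotation about the IR image centre plus
# scale), append the translation, and warp the IR image onto the visible grid.
h_ir, w_ir = ir.shape
M = cv2.getRotationMatrix2D((w_ir / 2, h_ir / 2), angle_deg, scale)
M[:, 2] += (tx, ty)
h, w = vis.shape
ir_aligned = cv2.warpAffine(ir, M, (w, h), flags=cv2.INTER_LINEAR)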

C. Bayesian Estimation of IR from Visible Data

The relationship between the corresponding gray levels of the two images is unknown, and contrast reversal often occurs. IR pixel values can be estimated by statistically extracting the transformation function from the available learning data. Contrast and gray levels may change entirely with a change in temperature, which makes the transformation matrix temperature dependent. The proposed technique is divided into two phases: estimation of the likelihood transformation matrix through offline learning from the available IR data, followed by mapping of visible to IR data using this transformation matrix.

Offline learning: The block diagram of offline likelihood estimation is shown in Fig. 1. Pairs of visible and IR images of roughly the same area, possibly with different scaling and orientation, are used for offline learning. The IR image is filtered with a median filter to remove salt-and-pepper sensor noise before alignment.

Figure 1: Offline Learning from IR Data (block diagram: visible image and IR image → median filtering of the IR image → alignment of the two images → generation of samples from the visible and IR images → many-to-one assignment of gray levels → curve estimation from gray levels (likelihood estimation) → transformation matrix)

Both images are aligned to the best possible extent so that corresponding regions match during the pixel assignment process, in which many IR gray levels are assigned to one visible gray level.

Figure 2: Many to One Assignment

As the IR and visible images are obtained from different sensors, there is no direct relation between them. Diverse gray level values in the IR image are assigned to a single gray level of the visible image.
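A minimal sketch of this many-to-one collection step, assuming the two images have already been aligned onto a common pixel grid as 8-bit arrays (function and variable names are illustrative only):

import numpy as np

def collect_ir_samples(vis_aligned, ir_aligned):
    # For every visible gray level 0..255, gather the IR gray values of the
    # co-located pixels: many IR values are assigned to one visible level.
    samples = [[] for _ in range(256)]
    for v, t in zip(vis_aligned.ravel(), ir_aligned.ravel()):
        samples[int(v)].append(int(t))
    return samples   # samples[i] plays the role of row i of the R matrix in (1)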

Suppose Vni are the gray levels of the visible image, where i = {1...256} and n indexes the samples, and Ti are the gray levels of the IR image assigned to each visible gray level, represented by a polygon as shown in Fig. 2. A maximum of 256 polygons are possible in an eight-bit image, one for each gray level. These polygons overlap and there are no clear, distinct boundaries. Initially, R is a matrix of dimension 256 x 10000 filled with zero entries, as given in (1). After IR pixel assignment, each row is filled with non-zero data.

R = [r(i, n)] = 0,   where i = 1, 2, ..., 256 and 1 ≤ n ≤ 10000        (1)

The R matrix is thus a set of row vectors of different lengths containing, as non-zero data, the IR pixel values assigned to each gray level of the visible image. The probability distribution of the assigned IR pixel values in each row follows, more or less, a Gaussian bell shape. The data for gray level Ti = 100 are plotted in Fig. 3, showing a Gaussian distribution which is approximated by the Gaussian function given in (2).

Figure 3: Gaussian Approximation of Data

f(x) = a1 exp(-((x - b1) / c1)^2)        (2)

where a1 is the scaling factor, b1 is the mean value and c1 is the spread of the Gaussian function. The Gaussian function closely represents the data and is used in the online process for mapping visible to IR pixel values. The set of Gaussians forms a transformation matrix, each row of which contains the mean and spread of the Gaussian likelihood approximation modeling the IR data for every gray level present in the image. A plot of the transformation matrix is shown in Fig. 4.
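A sketch of building this transformation matrix is given below; as an assumption, the Gaussian parameters of each row are taken from sample moments (peak count a1, mean b1, spread c1) rather than from a least-squares curve fit, which the paper does not specify:

import numpy as np

def build_transformation_matrix(samples):
    # One Gaussian (a1, b1, c1) per visible gray level, estimated from its
    # assigned IR samples; rows without data are left as NaN.
    T = np.full((256, 3), np.nan)
    for i, row in enumerate(samples):
        if len(row) < 2:
            continue
        row = np.asarray(row, dtype=float)
        counts, _ = np.histogram(row, bins=256, range=(0, 256))
        a1 = counts.max()                          # peak height of the empirical distribution
        b1 = row.mean()                            # mean of the assigned IR gray levels
        c1 = max(row.std() * np.sqrt(2.0), 1e-6)   # spread in the exp(-((x-b)/c)^2) form
        T[i] = (a1, b1, c1)
    return T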

Figure 4: Transformation function used for mapping (plotted against the gray levels of the visible image)

Online mapping: Online mapping of the visible to the IR spectrum is very much dependent upon the characteristics of the data (temperature, weather conditions) used in the calculation of the transformation matrix. The block diagram of online Bayesian estimation is shown in Fig. 5.

Figure 5: Online Mapping of Visible to IR Image (block diagram: Bayesian estimation over all classes, followed by gray level assignment to the class with the highest probability)

In an eight-bit gray scale visible image, there are 256 gray shades, which need to be replaced with new gray levels so that the resulting image attains the characteristics of an IR image. Each gray shade can be treated as a separate class, or adjacent shades can be binned together and assigned to the same class. The prior probability of each class is known, p(Vi) = Ni/N, and the class conditional probabilities P(Vi | Tki), describing the distribution of IR pixels in each class, were already estimated in the offline learning phase. The class conditional probabilities are stored as Gaussian models in the transformation matrix. Using this information, we can iteratively estimate the IR mapped pixels from the visible image pixels by Bayesian estimation using (3).

P(Tki | Vi) = P(Vi | Tki) P(Tki) / P(Vi)        (3)

where P(Vi) is the normalizing factor, P(Vi | Tki) is the likelihood transformation function deduced from the available IR training data, and P(Tki) is the prior IR probability. Two scenarios arise:

• If the same likelihood transformation is used for mapping the data regardless of temperature/seasonal changes, then the prior probabilities must reflect the temperature profile under which the mapping is performed.

• The prior probabilities become ineffective if the likelihood transformation corresponds to the same temperature profile as the mapping data.

In the second scenario, different data models are required for generation of the likelihood transformation, which are difficult to obtain. This research work falls in the second scenario, which makes the Bayesian estimation more like maximum likelihood estimation.
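Since the priors drop out in the second scenario, the online mapping amounts to picking, for each visible gray level, the IR level that maximizes the Gaussian likelihood stored in the transformation matrix. A minimal sketch under that assumption, reusing the hypothetical T matrix from the offline sketch above:

import numpy as np

def map_visible_to_ir(vis_img, T, ir_prior=None):
    # Score all 256 candidate IR levels with the Gaussian likelihood (optionally
    # weighted by a prior over IR levels) and assign the most probable one, as
    # in Fig. 5; with no prior this reduces to maximum likelihood estimation.
    t = np.arange(256, dtype=float)
    lut = np.zeros(256, dtype=np.uint8)          # lookup table: visible -> IR level
    for i in range(256):
        a1, b1, c1 = T[i]
        if np.isnan(b1):                         # no training data for this level
            lut[i] = i                           # leave the gray level unchanged
            continue
        likelihood = a1 * np.exp(-((t - b1) / c1) ** 2)
        posterior = likelihood if ir_prior is None else likelihood * ir_prior
        lut[i] = int(np.argmax(posterior))
    return lut[vis_img]                          # apply the table pixel-wise

# usage: mapped = map_visible_to_ir(vis_aligned, build_transformation_matrix(samples))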

III. EXPERIMENTAL RESULTS

We evaluated the performance of the proposed technique using pairs of IR and EO images of the same scene; a sample pair is shown in Fig. 6 [6]. Fifteen sample pairs of visible and IR images, of dimensions 656 x 490 and 384 x 284 respectively, were chosen. Since the two images have different resolution and scaling, they were aligned before computation of the transformation matrix.

Figure 6: Pair of Visible and IR Images ((a) visible image, (b) IR image)

The Bayesian-estimated mapped image from the visible to the IR spectrum is shown in Fig. 7. The contrast and gray level distribution of the mapped image lie between those of the visible and IR images. Visually, it is difficult to judge the effectiveness of the proposed technique; therefore a quantitative similarity measure is required to evaluate the performance.

Figure 7: Mapped Image

A. Similarity Measure

The choice of an image similarity measure depends on the nature of the images and the application. Common examples of such measures are cross correlation, mutual information, mean square difference, etc. [5].

We have used mutual information, which is an information-theoretic measure of statistical dependence between two random variables. In terms of images, it is the amount of information that one image contains about the other, and can be considered a quantitative measure of how well one image explains the other. The mutual information between two images is given by (4).

MI(X, Y) = H(Y) - H(Y | X) = H(X) + H(Y) - H(X, Y)        (4)

H(X) = -Σx Px log Px represents the entropy, where Px is the probability mass function, and H(X, Y) = -Σx Σy Px,y log Px,y is the joint entropy with joint probability mass function Px,y.
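A minimal sketch of estimating this index from a 256-bin joint histogram of two equally sized 8-bit images (an illustrative implementation, not necessarily the one used by the authors):

import numpy as np

def mutual_information(img_a, img_b, bins=256):
    # MI(X;Y) = H(X) + H(Y) - H(X,Y), estimated from the joint gray-level histogram.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)                      # marginal distribution of image A
    p_y = p_xy.sum(axis=0)                      # marginal distribution of image B

    def entropy(p):
        p = p[p > 0]                            # ignore empty bins (0 log 0 = 0)
        return -np.sum(p * np.log2(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# usage (hypothetical arrays from the sketches above): mutual_information(mapped, ir_aligned)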


Mapping of the visible to the IR spectrum was performed on the 15 image pairs, and the mutual information index was then calculated between the visible/IR and mapped/IR images to show the effectiveness of the proposed technique, as shown pictorially in Fig. 8.

Analyzing the mutual information index, it can be observed that 10 of the 15 sample pairs show a fair improvement in the MI index. Even in such a small database, the MI index improved for 66% of the images, which was expected because a probabilistic model is used in the estimation.

Figure 8: Mutual Information Index for Proposed Technique (curves: self MI, MI of mapped/IR, MI of IR/visible; x-axis: number of images, 0 to 15)

IV. CONCLUSION

In this paper, we have proposed a technique for mapping data from the visible to the IR spectrum using Bayesian iterative estimation. The MI index results show that the technique is effective in estimating data of one spectrum from another. The MI index shows significant improvements for image pairs whose gray level distributions closely resemble the Gaussian model used in the calculation of the transformation matrix.

The proposed technique is statistical and probabilistic, based on the available data, which makes it dependent on the amount of data used for learning the prior probabilities. The characteristics of IR images depend on seasonal/temperature changes, and the data set should cover urban as well as rural areas for effective mapping.

REFERENCES

[1] B. Wang, D. Wu and W. Xu, "A New Image Registration Method for Infrared Images and Visible Images", 3rd International Congress on Image and Signal Processing (CISP), 2010.

[2] C. Venkateswarlu, S. Yenduri and S. S. Iyengar, "Digital Analysis of Thermal Infrared Imagery using Temperature Mapping", Proc. of International Conference on Information Technology (ITCC), 2004.

[3] M. Dou, C. Zhang, P. Hao and J. Li, "Converting Thermal Infrared Face Images into Normal Gray-Level Images", Part II, LNCS 4844, pp. 722-732, 2007.

[4] I. Fujimasa, T. Chinzei and I. Saito, "Converting Far Infrared Image Information to Other Physiological Data", IEEE Engineering in Medicine and Biology, pp. 71-76, May 2000.

[5] B. Zitova and J. Flusser, "Image Registration Methods: A Survey", Image and Vision Computing, pp. 977-1000, Elsevier, 2003.

[6] www.dgp.toronto.edu/~nmorris/data/IRData
