Content-Based Rotary Kiln Flame Image Retrieval
Zhang Hongliang1, Chen Xiangtao2, Zou Zhong1, Li Jie1
1School of Metallurgical Science and Engineering, Central South University, Changsha, Hunan 410083, China
2School of Information Science and Engineering, Central South University, Changsha, Hunan 410083, China
Corresponding Author: Chen Xiangtao ([email protected])
Abstract
In this paper, a content-based image retrieval system for rotary kiln flame images (CBIR-RKFI) is introduced for the purpose of making effective use of rotary kiln flame images. It calculates the texture and fire & clinker features of the flame image and then, through similarity comparison, returns a set of initial retrieval results. Moreover, with the users' relevance feedback, it optimizes the final retrieval results. A prototype system was realized based on a rotary kiln image database containing more than 500 rotary kiln flame images (sampled from an alumina rotary kiln). Retrieval experiments with different features were carried out. The results demonstrate the effectiveness of the retrieval methods; among them, the integrated-features-based method has the highest precision (84%). The research can provide strong support for modern rotary kiln supervision and management.
Key Words: rotary kiln flame image, content-based image
retrieval, texture features, relevance feedback, fire &
clinker features
1. Introduction
The traditional control of a rotary kiln is usually performed by experienced operators who "watch the flame". This is a subjective and inefficient job. Recently, with the development of rotary kiln automation, computer software and hardware technology, and digital image processing technology, "computer flame watching" is replacing the traditional "flame watching" by experienced workers. As a consequence, massive digital flame image data, which is important for rotary kiln control and management, is produced every day. Therefore, how to effectively manage and utilize these image data has become a challenging and difficult research issue, one of great importance for rotary kiln management and control.
Traditional database management systems (DBMS), which rely on keyword-based queries, are not efficient for image retrieval: first, unlike traditional text data, digital image data cannot be fully described by a few keywords; second, the annotation procedure is time-consuming and subjective. Therefore, traditional image retrieval methods have low precision and are not suitable for digital image retrieval. In contrast, content-based image retrieval (CBIR) calculates the similarities between images and returns a group of similar results, and it has become the main method for image retrieval. At present, it has been studied in many fields [1-2], such as medical image retrieval [3], human face image database retrieval [4], and traffic tool image database retrieval [5]. Some of these systems have been applied in practice and achieved very good performance. However, in the rotary kiln industry, due to the low level of automation and other reasons, no studies have been made on rotary kiln flame image retrieval.
Therefore, in this paper we propose a content-based image retrieval system for rotary kiln flame images. For the queried image, we calculate its texture and fire & clinker features to form the query vector, then calculate the similarities between all the images in the image database and the queried one, and obtain the initial result from the similarity comparison. Moreover, with the users' relevance feedback, the query vector is updated to produce the final retrieval results. Experimental results indicate the effectiveness of the proposed system.
The rest of the paper is organized as follows. Section 2 presents the extraction of the features. In Section 3, we describe the similarity calculation and relevance feedback, while Section 4 explains the system architecture. Experiments and performance evaluation are given in Section 5. Finally, concluding remarks are drawn in Section 6.
2. Feature extraction
Flame images differ from other images in the following ways: first, all the flame images are sampled from the same rotary kiln and are therefore similar in some respects; second, the main objects in the flame images are the clinker and the fire shape. So, in order to capture the overall image information, four texture features were extracted. At the same time, according to the expertise of "flame watch" operators, features of the fire & clinker were also extracted, such as the color, the fire length, and the clinker height, because the
2008 Congress on Image and Signal Processing
978-0-7695-3119-9/08 $25.00 © 2008 IEEE
DOI 10.1109/CISP.2008.578
490
color indicates the temperature of the fire and the clinker, while the fire length and clinker height reflect the combustion state inside the kiln. They represent the key features of the main content that operators must acquire when watching the flame. One thing to note is that the features of the queried image are calculated on-line, while the features of the images in the database are calculated off-line and saved into the database.
2.1. Texture features extraction
Although there aren’t standard definitions about the tex-
ture, but it provides the features such as smoothness, rough-
ness and orderliness. In image processing, there are three
kinds of methods to calculate texture: statistical, structural
and frequency spectrum method [6]. In this paper, the statis-tical method is adopted, which means that the texture fea-
tures are calculated by Gray Level Co-occurrence Matrix
(GLCM) [7-8].
The GLCM is a matrix of probabilities P(g_1, g_2) that reflects the probability that two points (x_1, y_1) and (x_2, y_2), separated by (d, θ), have gray-values g_1 and g_2 respectively. That is:

P(g_1, g_2) = \frac{\#\{[(x_1,y_1),(x_2,y_2)] \in S \mid f(x_1,y_1) = g_1 \;\&\; f(x_2,y_2) = g_2\}}{\#S}  (1)

where f is the gray-value function; g_1 and g_2 are the image gray-values; the symbol "#" denotes the number of elements in the set; and S is the whole image area. The GLCM reflects the gray-value information in terms of direction, neighbour interval, and scope, and forms the basis for the texture features below.
In this paper, we selected d = 1, θ = 0° according to the experimental results. Features 1-4 are calculated as follows:

Entropy:
f_1 = -\sum_{g_1}\sum_{g_2} P(g_1, g_2) \log(P(g_1, g_2))  (2)

Energy:
f_2 = \sum_{g_1}\sum_{g_2} [P(g_1, g_2)]^2  (3)

Inverse differential moment:
f_3 = \sum_{g_1}\sum_{g_2} \frac{P(g_1, g_2)}{1 + (g_1 - g_2)^2}  (4)

Contrast:
f_4 = \sum_{g_1}\sum_{g_2} (g_1 - g_2)^2 P(g_1, g_2)  (5)
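The GLCM construction of Eq. (1) and features (2)-(5) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the 16-level gray quantization is an assumption made here to keep the co-occurrence matrix small.

```python
import numpy as np

def glcm_features(img, d=1, levels=16):
    """GLCM at offset (d, 0 degrees) and the four texture features.

    img: 2-D array of gray values f(x, y), quantized to `levels` bins.
    """
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # Count horizontally adjacent pixel pairs (d apart, angle 0 degrees)
    for g1, g2 in zip(q[:, :-d].ravel(), q[:, d:].ravel()):
        glcm[g1, g2] += 1
    p = glcm / glcm.sum()                      # normalize to P(g1, g2), Eq. (1)
    g1, g2 = np.indices(p.shape)
    nz = p > 0                                 # avoid log(0)
    entropy = -np.sum(p[nz] * np.log(p[nz]))   # Eq. (2)
    energy = np.sum(p ** 2)                    # Eq. (3)
    idm = np.sum(p / (1.0 + (g1 - g2) ** 2))   # Eq. (4)
    contrast = np.sum((g1 - g2) ** 2 * p)      # Eq. (5)
    return entropy, energy, idm, contrast
```

A perfectly uniform image yields the degenerate case: entropy and contrast of 0, energy and inverse differential moment of 1, which is a quick sanity check for the implementation.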
2.2. Fire & clinker features extraction
Since the sampling camera is fixed and the rotary kiln is in a stable state, the fire and clinker areas of the flame image will appear in fixed regions. So, according to expertise, we can define the reference point (P_0 in Fig.1 and Fig.2) and the axis of the fire (l_0 in Fig.1) for the calculation.
2.2.1. Fire features calculation. As one of the main objects in rotary kiln flame images, the fire information is a basic criterion for flame image classification. The features of the fire include color and shape: the color is connected with the combustion temperature, and the shape reflects the combustion status. In this paper, the following fire features are extracted: average gray-value, fire length, and fire offset degree.
Feature 5: Average gray-value of the fire area, f_5. We can define and segment the fire area according to expertise, and calculate the average gray-value of the pixels belonging to the fire using the following formula:

f_5 = \frac{1}{N_F} \sum_{(i,j) \in S_F} f(i, j)  (6)

where f(i, j) is the image function, (i, j) is a point in the image plane, N_F is the number of pixels belonging to the fire, and S_F is the segmented fire area.
Features 6 and 7: As shown in Fig.1, l_1 is the line perpendicular to l_0 passing through P_0, and l_2 is the line perpendicular to l_0 passing through P_i; d_i is the distance between them. If d_i attains its maximum value d_max at the point P_max, then the fire length f_6 is d_max, and the fire offset f_7 is d_max / |P_max P_0|.
Fig.1 Fire Length Extraction
2.2.2. Clinker features extraction. The clinker is the other main object in the rotary kiln, being the product of the kiln. In this paper, the features we use are the clinker average gray-value and the clinker height. The former is related to the clinker temperature, and the latter to the clinker status, which indicates whether the clinker has satisfied the requirement.
Feature 8: Average gray-value of the clinker area, f_8. We define the "approximate clinker area", where the clinker will appear, according to expertise. We then segment the clinker from this area, and finally calculate the average gray-value of the pixels belonging to the clinker with the following equation:

f_8 = \frac{1}{N_M} \sum_{(i,j) \in S_M} f(i, j)  (7)

where N_M is the number of pixels belonging to the clinker after segmentation, and S_M is the clinker area after segmentation.
Feature 9: Clinker height, meaning the height to which the clinker is raised by the rotary kiln, as shown in Fig.2, where l_0 is the base line of the clinker. The clinker height feature f_9 is the distance (H_clinker) between the top point P_max and the base line l_0.
Fig.2 Clinker Height Extraction
2.2.3. Gray feature of the whole image. Feature 10: the gray feature of the whole flame image, f_10. It can be calculated as follows:

f_{10} = \frac{1}{N_{IMG}} \sum_{(i,j) \in S_{IMG}} f(i, j)  (8)

where N_IMG is the number of pixels of the whole image, and S_IMG is the whole image area.
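Features 5, 8, and 10 share the same masked-mean form (Eqs. (6)-(8)). A minimal sketch follows; representing the segmented areas S_F and S_M as boolean masks is an assumption of this illustration, not something the paper specifies.

```python
import numpy as np

def mean_gray(img, mask=None):
    """Average gray-value over a region, the common form of Eqs. (6)-(8).

    img:  2-D array of gray values f(i, j).
    mask: boolean array marking the region S (fire, clinker, ...);
          None means the whole image, i.e. feature f_10.
    """
    if mask is None:
        mask = np.ones(img.shape, dtype=bool)
    region = img[mask]
    # (1/N) * sum of f(i, j) over all (i, j) in S
    return region.sum() / region.size
```

The same helper serves all three features by passing the fire mask, the clinker mask, or no mask at all.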
3. Similarity calculation and relevance feedback
The system follows the nearest-neighbour search rule and uses a simple Euclidean distance between the feature vector F_q of the query image and F_j of an image in the database. However, since each feature has a different meaning, the amplitudes may differ greatly, so we have to normalize the features.
3.1. Features normalization
Suppose there are N frames of flame images in the database. Then for the ith feature f_i of all the images, a vector can be formed: F_i = [f_{i1}, f_{i2}, ..., f_{ij}, ..., f_{iN}]. If its maximum and minimum values are MAX_F and MIN_F, then the ith feature of the jth image can be normalized into [0, 1] according to the following equation:

f'_{ij} = \frac{f_{ij} - MIN_F}{MAX_F - MIN_F}  (9)
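Applied across the whole database, Eq. (9) can be sketched as below. This NumPy illustration assumes the features are stacked into an N×M matrix (one row per image), a layout the paper does not prescribe.

```python
import numpy as np

def normalize_features(F):
    """Min-max normalize each feature to [0, 1], Eq. (9).

    F: array of shape (N images, M features); each column holds one
       feature f_i across all N database images.
    """
    fmin = F.min(axis=0)  # MIN_F per feature
    fmax = F.max(axis=0)  # MAX_F per feature
    # Guard against a constant feature (MAX_F == MIN_F) to avoid 0/0
    span = np.where(fmax > fmin, fmax - fmin, 1.0)
    return (F - fmin) / span
```

The divide-by-zero guard is an addition of this sketch; the paper's formula assumes MAX_F > MIN_F.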
3.2. Similarity calculation between feature vectors
Given that the feature vectors of the queried image and a database image are:

F_q = [f_{q1}, f_{q2}, ..., f_{qM}];  F_j = [f_{j1}, f_{j2}, ..., f_{jM}]  (10)

the similarity SIM(F_q, F_j) can be calculated with the following Euclidean distance:

SIM(F_q, F_j) = \sqrt{\sum_{i=1}^{M} (f_{qi} - f_{ji})^2}  (11)
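Evaluating Eq. (11) against every database image and keeping the nearest matches might look like the following sketch; the top-k ranking mirrors the 10-most-similar behaviour described in Section 4, and the vectorized form is an implementation choice of this illustration.

```python
import numpy as np

def retrieve(Fq, DB, k=10):
    """Rank database images by Euclidean distance to the query, Eq. (11).

    Fq: normalized query feature vector, shape (M,)
    DB: normalized database features, shape (N, M)
    Returns the indices of the k nearest images, nearest first.
    """
    dists = np.sqrt(((DB - Fq) ** 2).sum(axis=1))  # SIM(Fq, Fj) for all j
    return np.argsort(dists)[:k]
```

Since the square root is monotonic, ranking on the squared distances would give the same order; the root is kept here to match Eq. (11) literally.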
3.3. Relevance feedback
The aim of relevance feedback is to optimize the retrieval results according to the user's requirements through man-machine interaction. Recently, most studies have concentrated on two aspects: query-point movement and re-weighting [9-10]. In this paper, we use the former method (the Rocchio algorithm [11]) to adjust the query point. It can be described by the following equation:

F_{i+1} = \alpha F_i + \beta \frac{1}{N_R} \sum_{D_i \in D_R} D_i - \gamma \frac{1}{N_{IR}} \sum_{D_i \in D_{IR}} D_i  (12)

where F_i and F_{i+1} are the query vectors in the ith and (i+1)th queries; D_i is a feature vector; D_R and D_IR are the positive (relevant) and negative (irrelevant) feedback sets; and N_R and N_IR are the corresponding numbers of images. The relevance of an image is determined by experienced kiln operators with the help of expert knowledge; α, β, and γ are parameters controlling the relative weight of each component. In this paper, we selected α = 0.5, β = 0.3, and γ = 0.2 based on the experimental results.
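One Rocchio update (Eq. (12)) can be sketched as follows, with the paper's α = 0.5, β = 0.3, γ = 0.2 as defaults; skipping an empty feedback set is an assumption of this illustration for the case where the user marks no images in one category.

```python
import numpy as np

def rocchio(Fq, relevant, irrelevant, alpha=0.5, beta=0.3, gamma=0.2):
    """One Rocchio query-point update, Eq. (12).

    Fq:         current query vector F_i, shape (M,)
    relevant:   feature vectors labelled "relevant" (D_R), shape (NR, M)
    irrelevant: feature vectors labelled "irrelevant" (D_IR), shape (NIR, M)
    Returns the updated query vector F_{i+1}.
    """
    Fq = alpha * Fq
    if len(relevant):                               # centroid of D_R
        Fq = Fq + beta * np.mean(relevant, axis=0)
    if len(irrelevant):                             # centroid of D_IR
        Fq = Fq - gamma * np.mean(irrelevant, axis=0)
    return Fq
```

Each feedback iteration feeds the updated vector back into the Eq. (11) search, which is what Step 2 of the online processing in Section 4 repeats until the user is satisfied.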
4. System architecture
Our proposed image retrieval system is composed of the following modules:
Graphics User Interface Module - provides the graphical user interface and displays retrieval results;
Features Extraction Module - extracts the low-level features, including texture and fire & clinker features;
Similarity Calculation and Relevance Feedback Module - calculates the distance between two images and returns the similarity; with the retrieved images labelled "relevant" or "irrelevant" by the user, it iterates the retrieval process;
Image Database Module - since there is no standard rotary kiln flame image database, we construct one with flame images sampled from a rotary kiln.
The framework of our proposed CBIR-RKFI system is depicted in Fig.3. The main processing of the system involves offline and online stages. Offline processing includes feature extraction, representation, and organization. Online processing is the interaction between the user and the system through the Graphics User Interface (GUI). The online processing steps are described as follows:
Step 1: Initial query
The user can browse the image collection and input the initial query image to the system, which calculates the feature vector of the queried image. The system follows the nearest-neighbour search rule and uses a simple Euclidean distance measure to match the query against the images in the database, and subsequently returns the 10 most similar images.
Step 2: Relevance feedback
The user provides his/her evaluation by labelling each displayed image as "relevant" or "irrelevant". Based on the current feedback images, a new query vector is created, and a new ranked list of images that better approximates
the user's preferences is obtained through the retrieval process. The newly retrieved result is presented to the user.
Step 3: End
Stop if the user is satisfied with the retrieval result; otherwise, repeat Step 2.
5. Experiments and performance evaluation
The system prototype in this paper was developed in VC++ 6.0, and the rotary kiln flame image database was realized with Microsoft SQL Server 2000. Since there is no standard flame image database, we built an experimental database in this work. The raw image data were sampled by a CCD (Charge-Coupled Device) camera and a digital image sampling card (see the hardware list in Tab.1) at the head of the rotary kiln of an alumina plant. 500 gray-scale images, whose size was unified to 768×576 pixels, were selected randomly; they were pre-processed, their features were extracted, and they were then saved into the image database.
Fig.3 Framework of the proposed CBIR-RKFI system
Tab.1 Hardware List of Image Sampling
Hardware               | Type and Configuration
CCD                    | SC-4183BRH color 1/3" SONY CCD, 0.8 LUX
Water-cooling Shield   | SL-I
Image Sample Card      | MicroView V5.0
Image Process Platform | Windows XP SP2; CPU: 2.20 GHz; Memory: 512 MB
5.1. Experiments and results
In order to evaluate the system's performance, four kinds of experiments were carried out:
1) Flame image retrieval based on fire & clinker features without relevance feedback;
2) Flame image retrieval based on fire & clinker features with relevance feedback;
3) Flame image retrieval based on integrated features (texture and fire & clinker features) without relevance feedback;
4) Flame image retrieval based on integrated features with relevance feedback.
During the experiments, we first selected a random flame image (the "Queried Image" displayed in Fig.4) to carry out the above four retrievals. The results are shown in Fig.5 and Fig.6.
Then, 10 images were chosen randomly as query images to perform experiments 2 and 4, and the following precision measurement was used to evaluate the retrieval performance:

P_{avg} = \frac{1}{N_q} \sum_{i=1}^{N_q} P_i  (13)

where N_q is the number of selected queries (in this paper, N_q = 10) and P_i is the precision, defined by:

P_i = \frac{\text{Number of relevant retrieved images}}{\text{Number of retrieved images}}  (14)

We calculated the average precisions of the queries with different numbers of feedback iterations. The results are displayed in Fig.7.
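The evaluation measure of Eqs. (13)-(14) can be sketched in a few lines; the (relevant, retrieved) pair representation of each query's outcome is an assumption of this illustration.

```python
def average_precision(results):
    """Average precision over a set of queries, Eqs. (13)-(14).

    results: list of (num_relevant_retrieved, num_retrieved) pairs,
             one pair per query (N_q pairs in total).
    """
    precisions = [rel / ret for rel, ret in results]  # P_i, Eq. (14)
    return sum(precisions) / len(results)             # P_avg, Eq. (13)
```

With N_q = 10 queries each returning 10 images, as in the paper's setup, the measure reduces to the mean fraction of relevant images among the top 10.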
Fig.4 Graphics User Interface of the prototype CBIR-RKFI system
(a) (b) (c)
Fig.5 Retrieval results of fire & clinker based queries. a: Retrieval result without relevance feedback; b: Retrieval results after the first iteration of relevance feedback; c: Retrieval results after the 5th iteration of relevance feedback
(a) (b) (c)
Fig.6 Retrieval results of integrated-features-based queries. a: Retrieval result without relevance feedback; b: Retrieval results after the first iteration of relevance feedback; c: Retrieval results after the 5th iteration of relevance feedback
Fig.7 Retrieval performance comparison of fire & clinker based and integrated-features-based queries
5.2. Performance evaluation
From Fig.4-Fig.6, it is very clear that all the methods can return a set of images similar to the queried image, and that after relevance feedback the number of similar images increases. Contrasting Fig.5 with Fig.6, we find that the images in Fig.6 (query by integrated features) are more similar to the query image; this is displayed more clearly in Fig.7. Moreover, Fig.7 also indicates that the integrated-features-based queries have higher precision than the fire & clinker based queries; the precision increases quickly after the first feedback iterations and then becomes stable; and the highest precision of the integrated-features-based queries (more than 83%) is much higher than that without feedback (lower than 67%). Thus, we can conclude that the proposed system, an integrated-features-based flame image retrieval system with relevance feedback, is effective and can obtain satisfactory retrieval results for the rotary kiln application.
However, the proposed relevance feedback is subjective and depends on expert knowledge, so there may be discrepancies between different users. Therefore, based on this system's framework, building a semantic model for the rotary kiln flame image will be an interesting direction for further studies.
6. Conclusions
In this paper, we presented a content-based image retrieval system for rotary kiln flame images based on fire & clinker and texture features. We first analyzed and extracted the features from the flame images, then combined them with the relevance feedback information from the user to realize flame image retrieval. A system prototype with a friendly GUI was also designed. Experimental results demonstrate that our approach is effective in addressing different user information needs. Therefore, it can provide strong support for rotary kiln control and management.

Acknowledgments
This work was supported by the National Natural Science Foundation of China (60634020).
7. References
[1] J. R. Smith and S.-F. Chang, "VisualSEEk: a fully automated content-based image query system", Proc. of the 4th ACM International Conference on Multimedia, pp. 87-93, Boston, USA, November 1996.
[2] M. Flickner, H. Sawhney, W. Niblack, et al., "Query by image and video content: the QBIC system", IEEE Computer Special Issue on Content-Based Retrieval, vol. 28, no. 9, pp. 23-32, September 1995.
[3] M. Unser and A. Aldroubi, "Wavelets in Medical Imaging", IEEE Transactions on Medical Imaging, vol. 22, no. 3, pp. 285-288, January 2003.
[4] Tae-Kyun Kim, Hyunwoo Kim, Wonjun Hwang, et al., "Component-based LDA face description for image retrieval and MPEG-7 standardisation", Image and Vision Computing, vol. 23, no. 7, pp. 631-642, July 2005.
[5] Xuelong Li, "Watermarking in secure image retrieval", Pattern Recognition Letters, vol. 24, no. 14, pp. 2431-2434, October 2003.
[6] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing (2nd Edition), Beijing: Publishing House of Electronics Industry, 2003.
[7] Zhang Yujing, Image Engineering (Volume A): Image Processing and Analysis, Beijing: Tsinghua University Press, 1999.
[8] Xudong Xie and Kin-Man Lam, "Elastic shape-texture matching for human face recognition", Pattern Recognition, vol. 41, no. 1, pp. 396-405, January 2008 (in press).
[9] G. Ciocca and R. Schettini, "A relevance feedback mechanism for content-based image retrieval", Information Processing and Management, pp. 605-632, September 1999.
[10] Peng-Yeng Yin and Shin-Huei Li, "Content-based image retrieval using association rule mining with soft relevance feedback", Journal of Visual Communication & Image Representation, vol. 17, no. 5, pp. 1108-1125, October 2006.
[11] J. J. Rocchio, "Relevance Feedback in Information Retrieval", The SMART Retrieval System, pp. 313-323, Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1971.