Turk J Elec Eng & Comp Sci (2015) 23: 2253 – 2266
© TUBITAK
doi:10.3906/elk-1307-9
Turkish Journal of Electrical Engineering & Computer Sciences
http://journals.tubitak.gov.tr/elektrik/
Research Article
A new method for segmentation of microscopic images on activated sludge
Halime BOZTOPRAK∗, Yuksel OZBAY
Department of Electrical-Electronics Engineering, Selcuk University, Konya, Turkey
Received: 02.07.2013 • Accepted/Published Online: 16.09.2013 • Printed: 31.12.2015
Abstract: Activated sludge samples were taken from the Konya Wastewater Treatment Plant. Two hundred images for each sample were captured by a systematic examination of the slides. Segmentation of microscopic images is a challenging process due to lack of focus, so the focus must be readjusted for every movement of the mobile stage. Because the mobile stage does not have a z axis, however, the focus cannot be adjusted. A new method for automatic segmentation of the captured images is developed to solve this problem. The proposed method is independent of image content, has minimal computational complexity, and is robust to noise. It uses a cellular neural network (CNN) whose adaptive iteration number is calculated from the wavelet transform and the spatial frequency. A model is embedded in the system to estimate the iteration number of the CNN. Our system integrates automatic image capture with automatic analysis of large numbers of images by the developed evaluation software. Approximately 1000 microscopic images are processed in this experiment. The proposed method is compared with the traditional threshold method and with the CNN run for a constant number of iterations. The experimental results are given.
Key words: Automatic image capture, wastewater treatment, segmentation, activated sludge, cellular neural network,
wavelet transform, spatial frequency, entropy
1. Introduction
Wastewater treatment is classified generally into physical, biological, and chemical processes. The aerobic
activated sludge system is one of the most used technologies for the secondary treatment of wastewaters.
Activated sludge contains organic and inorganic substances, which consist of live and dead microorganisms. Microorganisms convert soluble organic matter into their own biomass, oxidizing carbonaceous and nitrogenous matter and removing phosphates. The formed semiliquid material (a community of microorganisms in flocs) is then separated from the treated supernatant. Such structures settle easily, enabling the treated effluent to be separated from the sludge. Activated sludge is a process for treating sewage and industrial wastewaters using air and a biological floc composed of bacteria and protozoa. In our process, the amount of dissolved organic matter in wastewater is reduced by microorganisms growing in aeration tanks [1].
Sludge characterization is crucial for efficient operation of the activated sludge process [2]. Image processing techniques are successfully used for segmentation of flocs and filamentous organisms, estimation of biomass concentration [3,4] and activated sludge volume index [5,6], and prediction of bulking events [7]. Typically, between 30 and 200 images per slide are captured by a systematic examination [5–8]. Manual methods for everyday operation of wastewater treatment plants are laborious and time-consuming; therefore, wastewater treatment plants are monitored by image analysis. Morphological processing is generally used as the image analysis method for the segmentation of flocs and filaments.
∗Correspondence: [email protected]
2253
BOZTOPRAK and OZBAY/Turk J Elec Eng & Comp Sci
The discrete wavelet transform and spatial frequency techniques are used in image fusion studies [9,10]. High-frequency coefficients are usually calculated with the spatial frequency method [11]. There are several studies on autofocus in the literature. In most of them, three or more frames were taken with the same camera and the focus was found by comparing these frames with each other. The high-frequency components [12] and spatial frequency, visibility, edge detection, and wavelet transform [13] were applied to match the images.
Acquiring a well-focused image is important, but out-of-focus images are unavoidable when capturing microscopic images. Since the other steps depend on it, autofocusing is the critical step. The aim of autofocusing is to locate the z-axis position that gives the best-focused image. It is generally useful to begin with a coarse search over a wide range and then follow with a finer search over a narrower range [14]. An efficient example of automatic microscopic image analysis with potential relevance is an early warning system for the excessive growth of the filamentous population in aeration tanks. Manual methods are usually laborious and time-consuming [15].
In this study, a new method was developed for automatic image acquisition and analysis. Floc segmentation was done using a cellular neural network (CNN) in which an adaptive iteration number was calculated with the wavelet transform and spatial frequency. The aim of the proposed approach is to effectively segment the flocs in images with different characteristics.
2. Materials and methods
Activated sludge samples were obtained from the Konya Wastewater Treatment Plant. The plant was designed for a flow of 200,000 m3/day and an equivalent population of about 1,000,000 (p.e.). It was planned to treat all domestic and pretreated industrial wastewaters of Konya (Turkey). An important feature of the proposed system is that it automates several processes; on the other hand, it still requires the intervention of a user with expert domain knowledge. An overview of the system architecture is shown in Figure 1.
[Image equipment → image analysis (CNN, wavelet transform, and spatial frequency) → segmentation result]
Figure 1. Overview of the system architecture.
2.1. Image acquisition
The samples of activated sludge were collected from the aeration basins and tested in the laboratory. The prepared slide was placed on the microscope's stage for image acquisition. The microscope was equipped with a visualization system that consisted of a Moticam microscope camera and a motorized stage. Images were acquired at 1288 × 966 pixels through the designed interface using a Motic AE21 microscope at 40× magnification. Each slide was scanned from the top left corner to the bottom right. Therefore, adjacent fields
were taken systematically and periodically using the motorized stage; it would not have been possible to capture them manually. Two methods can be used for scanning adjacent fields: the first is to use a lamella specially divided into squares; the second is to use the moving stage.
Motorized stages are assembled in different configurations, such as XY stages, XYZ stages, and XYZ/rotation stages. Motorized XY scanning stages are designed for applications where high accuracy and repeatability are required. They are PC-controlled via a USB interface, and custom support brackets are available for various microscope types used in image analysis. Every scanning stage requires a motion controller for its operation. A motorized XY scanning stage (8MTF) was used in this study; its travel range was 102 mm × 102 mm. The stage was controlled with the 8SMC1-USBhF microstep driver (USB interface).
2.1.1. Improved image acquisition software
Digital image storage and processing give users the opportunity to save, annotate, and analyze large image collections. A software program is required to coordinate the actions of all these microscope components. The importance of controlling the microscope by software in microscopy-based research and classification is progressively increasing. In this study, a MATLAB GUI, shown in Figure 2, was designed for the various scans that may be required for microscope image acquisition. The application offers advanced customization tools and is easy for microscope users to operate. The acquisition of an image with a modern biological microscope often involves a sequence of steps.
Figure 2. The improved user interface for automated data collection.
A typical imaging sequence includes moving to a position on a specimen slide, setting a camera exposure
time, retrieving the image from the camera, and storing the image. Such sequences are automated so that they
are easily repeated and reproducible. The main window permits the user to snapshot the images, shows the
state of the microscope system, controls the camera and the motorized stage, and sets up the experiments. An
image is captured using several settings and displayed in an image window by using the “start” and “scan”
buttons.
The interface offers different viewing options, and many refinements were made at the image retrieval stage. The software provides speed adjustment and information about the current position; it can move to a selected position without jumping around, scan in a pattern of squares, and capture images automatically, labeled with their date of registration. The motorized stage was adjusted to work synchronously with the camera: the stage moved at regular intervals and the camera automatically captured an image at each interval. Approximately 1000 images were obtained through the interface. The stored images were processed and analyzed using MATLAB.
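The capture loop described above can be sketched as follows. This is a minimal Python sketch, not the study's MATLAB GUI; `stage` and `camera` are hypothetical driver objects standing in for the actual 8SMC1-USBhF stage controller and Moticam camera interfaces, whose APIs are not reproduced here.

```python
def raster_scan(stage, camera, nx, ny, step_mm):
    """Scan a slide field by field in a serpentine raster, one image per field.
    `stage.move_to(x, y)` and `camera.snap()` are hypothetical driver calls."""
    images = []
    for iy in range(ny):
        # Serpentine ordering: alternate direction on each row so the stage
        # never has to jump back across the whole slide.
        xs = range(nx) if iy % 2 == 0 else range(nx - 1, -1, -1)
        for ix in xs:
            stage.move_to(ix * step_mm, iy * step_mm)
            images.append(camera.snap())
    return images
```

The serpentine ordering avoids the long jump back to the row start, which matters when the stage must settle before each exposure.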
2.2. Cellular neural network
The CNN model was proposed by Chua and Yang [16]. The CNN is an analog parallel-computing paradigm defined in space and characterized by local connections between processing elements such as cells or neurons. CNNs are known to offer high-speed implementations.

A standard CNN consists of an M × N rectangular array of cells. The smallest part of the CNN is called a cell, C(i, j), with Cartesian coordinates (i, j) (i = 1, 2, ..., M; j = 1, 2, ..., N). Each cell is defined by the following linear and nonlinear mathematical equations [16]:
dx_ij(t)/dt = −x_ij(t) + Σ_{C(k,l)∈S_r(i,j)} A(i,j;k,l) y_kl(t) + Σ_{C(k,l)∈S_r(i,j)} B(i,j;k,l) u_kl(t) + z_ij,   (1)

y_ij = (1/2)|x_ij + 1| − (1/2)|x_ij − 1|,   (2)

where:
y_kl: outputs of cells,
u_kl: inputs of cells,
z_ij: threshold,
A(i,j;k,l): feedback operator,
B(i,j;k,l): control operator.

Additionally, t is the iterative step (time) and S_r(i,j) is the sphere of influence of radius r (r ∈ N+) of cell C(i,j), defined as the set of all neighboring cells in a (2r+1) × (2r+1) window centered at (i,j).

The input u_kl is the intensity of the pixels of an M × N grayscale image, normalized to the range −1 ≤ u_kl ≤ +1, where white is coded as −1 and black as +1.
A template specifies a relation between each cell and all of its neighboring cells, characterized in terms of the input, state, and output variables. The intercell interaction can be linear, nonlinear, or of delay type [17].

There are many templates in the CNN Software Library for various image processing tasks. Templates may be simple, standard, or complex, including first-order and higher-order cells [18]. A series of CNN templates was used to convert images in [19].
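The dynamics of Eqs. (1) and (2) can be sketched with forward-Euler integration. The paper simulates the CNN in MatCNN with the gradt template; the sketch below is a generic NumPy version, and the edge-extraction template (A, B, z) shown is an illustrative one as commonly quoted in CNN template collections, not the paper's template.

```python
import numpy as np

def conv3x3(img, tpl):
    """Sum of template-weighted values over each 3x3 neighborhood (zero-padded borders)."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += tpl[di, dj] * p[di:di + h, dj:dj + w]
    return out

def cnn_output(x):
    """Piecewise-linear output nonlinearity of Eq. (2)."""
    return 0.5 * np.abs(x + 1.0) - 0.5 * np.abs(x - 1.0)

def cnn_run(u, A, B, z, n_iter, dt=0.1):
    """Forward-Euler integration of the state equation, Eq. (1).
    u is the input image scaled to [-1, 1] (white = -1, black = +1)."""
    x = u.astype(float).copy()  # a common choice: initial state equals the input
    bu = conv3x3(u, B) + z      # the input term is constant over the iterations
    for _ in range(n_iter):
        x = x + dt * (-x + conv3x3(cnn_output(x), A) + bu)
    return cnn_output(x)

# Illustrative edge-extraction template; the gradt template entries are not
# reproduced here.
A_EDGE = np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
B_EDGE = np.array([[-1., -1., -1.], [-1., 8., -1.], [-1., -1., -1.]])
Z_EDGE = -1.0
```

Run on a black square over a white background, this template erases the square's interior and keeps its boundary, which is the qualitative behavior exploited for floc contours.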
2.3. Discrete wavelet transform
The discrete wavelet transform (DWT) uses multiresolution filter banks and particular wavelet filters for the analysis and reconstruction of signals. The different output signals of the analysis filter bank are called subbands. A filter bank consists of filters that separate a signal into frequency bands. When the input signal x(k) enters the analysis bank, it is filtered by the filters g(n) and h(n), which are a low-pass and a high-pass filter, respectively. Downsampling by a factor of two, denoted by ↓2, is applied to the outputs of the filters, as shown in Figure 3. Although each downsampled output contains only half of the frequency content, together the outputs contain the same number of samples as the input signal; without downsampling, the quantity of data would be doubled. The low- and high-pass filtering branches provide the approximations and details of the signal, respectively [20].
[x(k) → analysis bank {g(n), h(n)} with ↓2 → subbands c_l(k), c_h(k) → synthesis bank {g′(n), h′(n)} → y(k)]
Figure 3. Two-channel filter bank. The analysis filters, g (n) and h (n), are the low-pass and high-pass filters,
respectively.
The two-level 2D transform shown in Figure 4 is computed using filter banks. LL1, LH1, HL1, and HH1 form the one-level decomposition of the 2D-DWT. After the first-level decomposition is finished, the second-level decomposition is performed: the transform decomposes the LL1 band into the four subbands LL2, LH2, HL2, and HH2. This procedure is repeated until the desired decomposition level N is reached [21].
[x(n) → vertical filters {g(n), h(n)} with ↓2 → horizontal filters {g(n), h(n)} with ↓2 → LL1, LH1, HL1, HH1; LL1 is decomposed again into LL2, LH2, HL2, HH2]

Figure 4. Two-level 2D-DWT subband decomposition [21].
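The subband cascade of Figure 4 can be sketched with the orthonormal Haar filters. This is a minimal NumPy illustration under that assumption; a production implementation would normally use a wavelet library.

```python
import numpy as np

def haar_split(x):
    """One-level Haar analysis along axis 0: approximation and detail, each downsampled by 2."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[1::2] - x[0::2]) / np.sqrt(2.0))

def haar2d(img):
    """One 2D-DWT level: vertical then horizontal filtering -> LL, LH, HL, HH."""
    L, H = haar_split(img)        # filter the columns (vertical direction)
    LL, LH = haar_split(L.T)      # filter the rows of the low band
    HL, HH = haar_split(H.T)      # filter the rows of the high band
    return LL.T, LH.T, HL.T, HH.T

def haar2d_two_level(img):
    """Two levels, as in Figure 4: the LL1 band is decomposed again."""
    LL1, LH1, HL1, HH1 = haar2d(img)
    LL2, LH2, HL2, HH2 = haar2d(LL1)
    return {"LL1": LL1, "LH1": LH1, "HL1": HL1, "HH1": HH1,
            "LL2": LL2, "LH2": LH2, "HL2": HL2, "HH2": HH2}
```

Each level halves both image dimensions, and because the Haar filters are orthonormal the total energy of the four subbands equals that of the input.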
2.4. Image processing and analysis
Figure 5 shows activated sludge images. The stored images are processed and analyzed.
Figure 5. Activated sludge samples.
For each sample, between 30 and 200 images are captured by a systematic examination of the slide [5–8]. After 10 min, it was observed that the samples start to deteriorate on the slide, as shown in Figure 6.
Figure 6. Deterioration of activated sludge samples.
Out-of-focus images inevitably occur when capturing microscopic images. For example, when the scanning stage moves back and forth, a blurred image is obtained due to being out of focus. Therefore, the focus needs to be adjusted at every movement of the mechanism. However, since our mobile stage does not have a z axis, adjustment of the focus is not possible. Accordingly, a well-focused image cannot be obtained everywhere. To solve this problem, we present a new method that distinguishes noisy and blurred images from uncorrupted ones via the wavelet transform and the CNN. The aim of the proposed approach is to effectively segment the flocs in images with different characteristics.

Our proposed image analysis method consists of preprocessing and CNN-based segmentation. In the preprocessing phase, the original image is first converted into a grayscale image, and median filtering is applied to remove unwanted noise. In the segmentation phase, floc segmentation is performed by the CNN, whose adaptive iteration number is derived from the wavelet transform and the spatial frequency. Segmentation of microscopic images is a challenging process because focus is not guaranteed for any structure. The captured images all differ in characteristics such as color, intensity, and texture. Finding one suitable template is difficult, since no autofocus solution exists for all captured images; templates therefore have to be found empirically. In this case one template may not be sufficient, and a different template might be needed for each captured image. To solve this problem, an adaptive iteration number is used in our system.
The images are then decomposed by the DWT. The Haar wavelet is discontinuous and resembles a step function. For a function f of N samples, the Haar wavelet transform is defined as:

f → (a^L | d^L),
a^L = (a_1, a_2, ..., a_{N/2}),
d^L = (d_1, d_2, ..., d_{N/2}),   (3)
2258
BOZTOPRAK and OZBAY/Turk J Elec Eng & Comp Sci
where d is the detail subband, a is the approximation subband, and L is the level of the decomposition.
a_m = (f_{2m} + f_{2m−1}) / √2,
d_m = (f_{2m} − f_{2m−1}) / √2.   (4)
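Eq. (4) and its inverse (the synthesis side of the filter bank in Figure 3) amount to a few lines. A NumPy sketch using 0-based indexing:

```python
import numpy as np

def haar_analysis(f):
    """Eq. (4), 0-based: pairwise sums and differences scaled by 1/sqrt(2)."""
    f = np.asarray(f, dtype=float)
    a = (f[1::2] + f[0::2]) / np.sqrt(2.0)   # a_m = (f_2m + f_{2m-1}) / sqrt(2)
    d = (f[1::2] - f[0::2]) / np.sqrt(2.0)   # d_m = (f_2m - f_{2m-1}) / sqrt(2)
    return a, d

def haar_synthesis(a, d):
    """Invert Eq. (4): perfect reconstruction of f from (a^L | d^L)."""
    f = np.empty(2 * len(a))
    f[0::2] = (a - d) / np.sqrt(2.0)
    f[1::2] = (a + d) / np.sqrt(2.0)
    return f
```

Solving Eq. (4) for the pair gives f_{2m−1} = (a_m − d_m)/√2 and f_{2m} = (a_m + d_m)/√2, so the round trip is exact.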
One-level Haar wavelet transform is applied to the activated sludge images. The resultant image is
decomposed into four subbands: the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) subbands.
The LL subband is decomposed further. After this decomposition, the LL1 band is divided into four subbands:
LL2 , LH2 , HL2 , and HH2 [22]. In [23], a method applied the HL subband feature of two-level, 2D Haar wavelet
transform to significantly highlight vertical edges of license plates and suppress background noise. Figure 7 shows
a decomposed image with two-level DWT.
[Subband layout: the top-left quadrant contains LL2, HL2, LH2, and HH2; HL1 occupies the top right, LH1 the bottom left, and HH1 the bottom right]
Figure 7. Two-level decomposition. Graphical depiction of wavelet coefficient placement.
As a result of this transform, there are four subband images at each scale. The coefficients of an image are reconstructed to the same size as the input image. Each image is examined with the following measures: entropy and spatial frequency.

Entropy (E): Entropy is a measure of unpredictability or information content; in information theory, it quantifies the uncertainty in a random variable. It is also used as a measure of sharpness in many studies.
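As a concrete reading of E, a histogram-based Shannon entropy can be sketched as follows. The paper does not specify its exact entropy estimator, so the 8-bit gray-level histogram used here is an assumption.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (in bits) of the gray-level histogram; 8-bit levels assumed."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # by convention, 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A flat image yields 0 bits, while an image split evenly between two gray levels yields exactly 1 bit.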
Spatial frequency (SF): Spatial frequency indicates the overall information level in an image and measures the variation of its pixels [24]. It is usually used together with wavelets in image fusion studies [9–13,25]. In those studies, three or more images of the same scene are taken and compared with each other to determine the focus. In this work, however, there is no second image to compare with the first one.
The spatial frequency decreases as the image becomes more blurred; the higher the spatial frequency, the higher the quality and contrast of the image [25]. For an M × N image, the spatial frequency is calculated as follows:
SF = √(RF² + CF²),   (5)

RF = √( (1/(MN)) Σ_{m=1}^{M} Σ_{n=2}^{N} [F(m,n) − F(m,n−1)]² ),   (6)
where RF is the row frequency.
CF = √( (1/(MN)) Σ_{n=1}^{N} Σ_{m=2}^{M} [F(m,n) − F(m−1,n)]² ),   (7)

where CF is the column frequency.
SF denotes the spatial frequency of the gray image, SF1 that of LL2, SF2 that of HH2, and Dsf the difference between them (SF1 − SF2).

S: the ratio of the two subband spatial frequencies, S = SF1 / SF2.

Ead: edge detection is applied to LL2 and HH2, yielding edge measures Ea and Ed, and Ead is defined as Ead = Ea / Ed.

The abbreviations SF1, SF2, Dsf, S, and Ead are used below.
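Eqs. (5)–(7) translate directly into a few lines of NumPy; a minimal sketch:

```python
import numpy as np

def spatial_frequency(F):
    """Eqs. (5)-(7): spatial frequency of an M x N gray image."""
    F = np.asarray(F, dtype=float)
    M, N = F.shape
    RF = np.sqrt(((F[:, 1:] - F[:, :-1]) ** 2).sum() / (M * N))  # row frequency, Eq. (6)
    CF = np.sqrt(((F[1:, :] - F[:-1, :]) ** 2).sum() / (M * N))  # column frequency, Eq. (7)
    return np.sqrt(RF ** 2 + CF ** 2)                            # Eq. (5)
```

SF1 and SF2 are then `spatial_frequency(LL2)` and `spatial_frequency(HH2)`, and S is their ratio. A flat image has SF = 0, and a checkerboard, where every neighboring pair differs, maximizes it.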
Some features of LL2, HH2, and the original image are determined and shown in Tables 1–3. These features are SF, SF1, SF2, Ead, E, Dsf, and S. The values in Tables 1–3 are compared across blurred, normal, and sharp images and with each other. The most distinctive features have to be found empirically. It is observed that some of the features (e.g., entropy and edge detection) do not discriminate between these classes. The most distinctive feature is the ratio S between SF1 and SF2. The value of S is about 1.5 for the best-focused image, and it increases as the image blurring increases.
Table 1. Determination of the distinctive features for sharp images.

Feature     Sample image 1   Sample image 2   Sample image 3
E           6.0691           6.8362           6.9853
SF          1.1649e+04       1.7569e+04       1.7983e+04
SF1         6.3078e+03       9.3706e+03       9.9786e+03
SF2         4.9550e+03       7.2947e+03       7.4136e+03
Dsf         1.3528e+03       2.0760e+03       2.5650e+03
S           1.2730           1.2846           1.3460
Ead         1.3768           1.3490           1.3897
Iter num*   13.7423          13.8600          14.4612
A mathematical model is then created based on the value of S. A curve-fitting method is used to estimate the iteration number (Eq. (8)); the most appropriate mathematical model is fitted to the limited amount of available data. The fitted curve is shown in Figure 8, where the data points are blue dots and the fitted line is red.

The iteration number for the CNN is estimated as follows:

y = −42.21 · S^(−0.3343) + 52.68,   (8)

where y is the iteration number and S is the calculated ratio.
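Eq. (8) can be checked directly against the tables: for example, S = 1.2730 (Table 1, first image) gives an estimated iteration number of about 13.74.

```python
def iter_from_S(S):
    """Eq. (8): estimated CNN iteration number from the subband ratio S = SF1 / SF2."""
    return -42.21 * S ** (-0.3343) + 52.68
```

Because the exponent is negative, the estimate increases monotonically with S: blurrier images (larger S) receive more CNN iterations.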
Table 2. Determination of the distinctive features for blurred and normal images.

Feature     Sample image 1   Sample image 2   Sample image 3
E           6.1963           6.4759           5.3614
SF          6.8788e+03       6.2796e+03       2.8158e+03
SF1         5.6276e+03       6.3485e+03       3.4550e+03
SF2         2.6827e+03       2.1895e+03       912.5774
Dsf         2.9449e+03       4.1590e+03       2.5424e+03
S           2.0978           2.8995           3.7859
Ead         1.4038           1.9451           1.8261
Iter num*   19.7302          23.1095          25.6322
Table 3. Determination of the distinctive features for very blurred images (in these tables, the iteration number (Iter num*) was estimated by the proposed technique).

Feature     Sample image 1   Sample image 2   Sample image 3
E           6.3523           5.0802           6.3244
SF          1.2248e+04       1.5877e+03       4.2816e+03
SF1         6.9534e+03       2.0391e+03       6.0490e+03
SF2         4.8641e+03       464.5568         1.2589e+03
Dsf         2.0893e+03       1.5745e+03       4.7901e+03
S           1.4295           4.3893           4.8052
Ead         1.4158           1.6983           2.1277
Iter num*   15.2230          26.9368          27.7041
The value of S is then calculated for each image fed to the CNN. The gradt template is selected as the template of the CNN [18], the time step is 0.1, and the iteration number is the value estimated from S via Eq. (8). The architecture of the CNN is simulated with MatCNN [17]. Different iteration numbers can be required for segmentation depending on the characteristics of each image. Figure 9 shows images processed by the CNN with different iterations.
Figure 8. The fitted curve; the data points are blue dots and the obtained fitted line is red.
Figure 9. Output images in different moments of the iterative step (time): (a) iteration (iter) = 0, (b) iter = 5,
(c) iter = 10, (d) iter = 15, (e) iter = 20, (f) iter = 30, (g) iter = 40, (h) iter = 50.
The entire procedure consists of two stages. In the first stage, the image acquisition software captures images using the current settings and saves them automatically. In the second stage, all images are processed sequentially according to the following steps:

Step 1: Decompose the image into its LL2, LH2, HL2, and HH2 subbands.
Step 2: Calculate the spatial frequency SF1 of the LL2 image.
Step 3: Calculate the spatial frequency SF2 of the HH2 image.
Step 4: Find the value of S from S = SF1 / SF2.
Step 5: Calculate the iteration number using Eq. (8).
Step 6: Apply the CNN to the gray image.
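Steps 1–5 can be chained as follows. This is a compact, self-contained sketch that assumes the plain orthonormal Haar transform of Section 2.3; Step 6 would then pass the gray image and the rounded iteration number to the CNN.

```python
import numpy as np

def _haar2(img):
    """One orthonormal 2D Haar level: LL, LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2.0)   # vertical low-pass + downsample
    d = (img[1::2, :] - img[0::2, :]) / np.sqrt(2.0)   # vertical high-pass + downsample
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
    LH = (a[:, 1::2] - a[:, 0::2]) / np.sqrt(2.0)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2.0)
    HH = (d[:, 1::2] - d[:, 0::2]) / np.sqrt(2.0)
    return LL, LH, HL, HH

def _sf(F):
    """Spatial frequency, Eqs. (5)-(7)."""
    M, N = F.shape
    rf2 = ((F[:, 1:] - F[:, :-1]) ** 2).sum() / (M * N)
    cf2 = ((F[1:, :] - F[:-1, :]) ** 2).sum() / (M * N)
    return np.sqrt(rf2 + cf2)

def adaptive_iterations(gray):
    """Steps 1-5: two-level decomposition, S = SF1/SF2, then Eq. (8)."""
    LL1, _, _, _ = _haar2(gray)             # Step 1: first level
    LL2, _, _, HH2 = _haar2(LL1)            #         second level
    S = _sf(LL2) / _sf(HH2)                 # Steps 2-4
    return -42.21 * S ** (-0.3343) + 52.68  # Step 5: Eq. (8)
```

The estimate is bounded above by the fitted asymptote of 52.68 iterations, and larger S (blurrier input) yields a larger iteration number.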
3. Experimental results
Our software script captures and stores images at fixed intervals along the x and y axes to produce a single stack of 200 images for a given slide. The comparison of microscopic images is shown in Figures 10a–10d: the original image, the threshold result of the traditional method, and the floc regions detected by the CNN and by the proposed method, respectively.
Figure 10. (a) The original image, (b) threshold result, (c) results of using CNN (iterative value: 15), (d) results of the
proposed method.
At the same time, the experiment is performed to compare microscopic images with different colors as
shown in Figure 11.
4. Conclusions
Automatic image capture and systematic scanning of a sample are critical processes for many applications. The sample should be screened without skipping any part; this would help substantially in the everyday operation of a wastewater treatment plant. The samples on the slide begin to deteriorate after about 10 min, and 200 images per sample cannot possibly be captured manually within that time. With the motorized scanning stage moving back and forth, some parts are out of focus, and it is quite difficult to obtain optimum focus for all objects in the image; the focus would need to be adjusted with every movement of the mechanism. In the usual autofocus approach, three or more images of the same scene are taken and compared with each other to determine the focus; in this work, however, there is no second image to compare with the first. Because the mobile stage lacks a z axis, the focus cannot be adjusted, and it is impossible to obtain an image that is in focus everywhere.
A new method that uses automatic segmentation of the captured images is developed for solving this problem
via the designed interface. The designed software of image acquisition captures images by using the current
settings and saves them automatically. It also provides the analysis of large numbers of images automatically
and effectively.
Figure 11. (a) The original image, (b) traditional results obtained for binary image, (c) results of using CNN (iterative
value: 15), (d) results of the proposed method.
The LL and HH subband features of the 2D Haar wavelet transform of each image are investigated by spatial frequency, edge detection, and entropy. The features are compared across blurred, normal, and sharp images and with each other. The most distinctive feature (defined as S) is found empirically. This value is about 1.5 for the best-focused image and increases as the image blurring increases. Curve fitting is then used to
estimate the iteration number of the CNN; the value of S is central to this process. The captured images all differ in characteristics such as color, intensity, and texture, so different iteration numbers can be required for segmentation depending on the characteristics of each image. At the same time, the experiment compares microscopic images with different colors. The method is independent of image content, has minimal computational complexity, and is robust to noise. It can also serve as one of the criteria for measuring the quality of captured images. Approximately 1000 microscopic images are analyzed in this experiment. The proposed method is compared with the traditional threshold method and with the CNN run for a constant number of iterations. The experimental results are examined visually. The CNN with adaptive iteration performs better than the one with uniform constant iteration, especially in the separation of filamentous structures.
Acknowledgment
This work was supported by the Coordinatorship of Selcuk University’s Scientific Research Projects under
Project No. 11201043.
References
[1] Lin S, Lee C. Water and Wastewater Calculations Manual. 2nd ed. New York, NY, USA: McGraw-Hill, 2007.
[2] Mesquita DP, Amaral AL, Ferreira EC. Characterization of activated sludge abnormalities by image analysis and
chemometric techniques. Anal Chim Acta 2011; 705: 235–242.
[3] Ernst B, Neser S, O’Brien E, Hoeger SJ, Dietrich DR. Determination of the filamentous cyanobacteria Planktothrix
rubescens in environmental water samples using an image processing system. Harmful Algae 2006; 5: 281–289.
[4] Liwarska-Bizukojc E. Application of image analysis techniques in activated sludge wastewater treatment processes.
Biotechnol Lett 2005; 27: 1427–1433.
[5] Da Motta M, Amaral LP, Casellas M, Pons MN, Dagot C, Roche N, Ferreira EC, Vivier H. Characterisation of activated sludge by automated image analysis: validation on full-scale plants. In: Dochain D, Perrier M, editors. IFAC Computer Applications in Biotechnology 2001. London, UK: Pergamon Press, 2002. pp. 427–431.
[6] Mesquita DP, Dias O, Amaral AL, Ferreira EC. Monitoring of activated sludge settling ability through image
analysis: validation on full-scale wastewater treatment plants. Bioproc Biosyst Eng 2009; 32: 361–367.
[7] Jenne R, Banadda EN, Smets I, Deurinck J, Van Impe J. Detection of filamentous bulking problems: developing
an image analysis system for sludge composition monitoring. Microsc Microanal 2007; 13: 36–41.
[8] Perez YG, Leite SGF, Coelho MAZ. Activated sludge morphology characterization through an image analysis
procedure. Braz J Chem Eng 2006; 23: 319–330.
[9] Li S, Kwok JT, Wang Y. Combination of images with diverse focuses using the spatial frequency. Inform Fusion 2001;
2: 169–176.
[10] Maddali R, Prasad KS, Bindu CH. Discrete wavelet transform based medical image fusion using spatial frequency
technique. International Journal of Systems Algorithms & Applications 2012; 2: 2277–2677.
[11] Bindu CH, Prasad KS. Automatic scheme for fused medical image segmentation with nonsubsampled contourlet
transform. International Journal of Advanced Computer Science and Applications 2012; 3: 50–53.
[12] Turgay E, Teke O. Autofocus method in thermal cameras based on image histogram. In: IEEE 19th Conference
on Signal Processing and Communications Applications; 20–22 April 2011; Antalya, Turkey. New York, NY, USA:
IEEE. pp. 462–465.
[13] Jain A, Kanjalkar PM, Kulkarni JV. Estimation of image focus measure and restoration by Wavelet. In: IEEE 4th
International Conference on Intelligent Networks and Intelligent Systems; 1–3 November 2011; Kunming, China.
New York, NY, USA: IEEE. pp. 73–76.
[14] Wu Q, Merchant FA, Castleman KR. Microscope Image Processing. New York, NY, USA: Elsevier, 2008.
[15] Jones PA. Temporal biomass density and filamentous bacteria effects on secondary settling. MSc, University of New
Mexico, Albuquerque, NM, USA, 2010.
[16] Chua LO, Yang L. Cellular neural networks: theory and applications. IEEE Circuits Syst 1988; 35: 1257–1290.
[17] Rekeczky C. MATCNN: Analogic CNN Simulation Toolbox for MATLAB. Budapest, Hungary: Hungarian Academy
of Science, 2006.
[18] Karacs K, Cserey G, Zarandy A, Szolgay P, Rekeczky C, Kek L, Szabo V, Pazienza G, Roska T. Software Library
for Cellular Wave Computing Engines in an Era of Kilo-Processor Chips, Version 3.1. Budapest, Hungary: Cellular
Sensory and Wave Computing Laboratory of the Computer and Automation Research Institute of the Hungarian
Academy of Sciences, 2010.
[19] Gavrilut I, Tepelea L, Gacsadi A. CNN processing techniques for image-based path planning of a mobile robot. In:
WSEAS 15th International Conference on Systems; 2011. pp. 259–263.
[20] Merry RJE, Steinbuch M, van de Molengraft MJG. Wavelet Theory and Applications Literature Study. Eindhoven,
the Netherlands: Eindhoven University of Technology, Department of Mechanical Engineering, Control Systems
Technology Group, 2005.
[21] Dia D, Zeghid M, Saidani T, Atri M, Bouallegue B, Machhout M, Tourki R. Multi-level discrete wavelet transform
architecture design. In: Proceedings of the World Congress on Engineering; 2009. pp. 1–3.
[22] Chun-Lin L. A Tutorial of the Wavelet Transform. Taipei, Taiwan: NTUEE, 2010.
[23] Wu MK, Wei JS, Shih HC, Ho CC. License plate detection based on 2-level 2D Haar wavelet transform and edge
density verification. In: IEEE International Symposium on Industrial Electronics, 5–8 July 2009; Seoul, Korea. New
York, NY, USA: IEEE. pp. 1699–1704.
[24] Eskicioglu AM, Paul SF. Image quality measures and their performance. IEEE T Commun 1995; 43: 2959–2965.
[25] Bulatov A. Cok odaklı goruntu fuzyonu (Multi-focus image fusion). MSc, Erciyes University, Kayseri, Turkey, 2006 (in Turkish).