Contemporary Engineering Sciences, Vol. 10, 2017, no. 16, 751 - 771
HIKARI Ltd, www.m-hikari.com
https://doi.org/10.12988/ces.2017.7880
Analyzing Pre-Processing Filters Sequences for
Underwater-Image Enhancement
A. Ceballos, I. Diaz Bolaño and G. Sanchez-Torres
Engineer Faculty, Universidad del Magdalena
Carrera 32 No. 22-08, Santa Marta, Colombia
Copyright © 2017 A. Ceballos, I. Diaz Bolaño and G. Sanchez-Torres. This article is distributed
under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Abstract
Enhancement of the quality of a digital image is desirable in several scenarios such
as underwater-image analysis, where improving visibility is necessary to reduce
alterations caused by unbalanced lighting and the presence of sediments, among
others. Many algorithms have been proposed oriented towards treatment of specific
factors, such as contrast, color fidelity, noise, and lighting. This work explores the
literature techniques and establishes a filter sequence for enhancing underwater
images based on a pre-established quality metric. The resulting sequence begins by
balancing the lighting using homomorphic filtering, improves the contrast with the
Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, and
finally performs noise reduction and edge enhancement using bilateral filtering.
The results of the implementation suggest a qualitative improvement in contrast,
color, and sharpness of borders.
Keywords: Image Processing, Underwater Image Enhancement, Algorithms,
Contrast, Fish
1 Introduction
Digital image processing is a branch of signal processing whose objective is to
obtain either a processed image or a set of characteristics from a digital image [1].
A digital image is defined as a bi-dimensional function 𝑓(𝑥, 𝑦) that specifies one or
more levels of intensity depending on the color model for every coordinate (𝑥, 𝑦) or pixel [2]. The most common procedures used to improve visibility are contrast
enhancement, noise reduction, and degradation and reconstruction models.
Underwater environments add several difficulties because of the density of the
medium and refraction, which gradually diminish the intensity of colors other than
blue as depth increases (see Fig. 1). This results in reduced contrast, weak colors,
and a visibility range of between 5 and 20 m [3], or even less when there are
disturbances in the water [4]. Underwater image (UI) treatment must
consider physical phenomena originating from the medium and the effects resulting
from the presence of organic and inorganic matter.
The UI treatment techniques have become a tool for pre-processing used by
computer vision systems that require the identification of objects in underwater
environments [5]. These techniques can be classified into two categories:
restoration and enhancement techniques [3]. The first defines a model that simulates
the degradation of the scene to reconstruct it.
Enhancement methods improve visibility through manipulation of the components
of the scene. The latter are faster and simpler than image restoration, as they do not
involve physical models of the formation of the image.
Fig. 1: Underwater penetration of the different light waves [6].
UI enhancement is defined as a process that receives a low-quality image as input
and generates as output an image with some increased features [7]. These
techniques improve images through the enhancement of their characteristics. The
contrast and noise characteristics are the most frequently mentioned in the literature
[1], [3].
Measuring the effectiveness of these techniques is usually done through the
Mean Square Error (MSE) [7], [8] and Signal-to-Noise Ratio (SNR) [8]. Both
metrics depend on having both the degraded image and an undegraded copy to
compare. If such images are not available, quality can instead be measured through
edge detection algorithms and histogram analysis [9].
This work aims to provide a methodology for UI enhancement through applying
existing techniques oriented towards the treatment of their main deficiencies:
lighting enhancement, contrast enhancement, and noise reduction.
2 Previous Work
A proposed taxonomy classifies UI enhancement methods into five categories [3]:
a. Histogram and contrast stretching and manipulation.
b. The retinex-based model.
c. Techniques based on filters.
d. Polarization or stereo photography.
e. Mixed techniques.
2.1 Histogram and Contrast Stretching and Manipulation.
These are the most common techniques for the enhancement of UIs. They
consist in extracting the intensity histograms in order to normalize them, either
through equalization or through stretching, establishing a desired range of values to
which the pixels in the image are restricted. Methods such as contrast stretching, histogram
equalization, and Contrast Limited Adaptive Histogram Equalization (CLAHE) are
some examples [1], [7], [10]–[12], with CLAHE being one of the most recurrent
with versions based on the RGB (Red, Green, Blue) and HSV (Hue, Saturation,
Value) / HSI (Hue, Saturation, Intensity) color models [8]. The stretching of
intensity values of an image is defined as [13]:
𝐼𝑖,𝑗 = { (𝐼𝑖,𝑗 − 𝑚𝑖𝑛𝐼)/(𝑚𝑎𝑥𝐼 − 𝑚𝑖𝑛𝐼)   if 0 < 𝐼𝑖,𝑗 < 1
         0                              if 𝐼𝑖,𝑗 < 0
         1                              if 𝐼𝑖,𝑗 > 1        (1)
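As a concrete illustration, the stretching of Eq. (1) can be sketched in NumPy. The function name is ours, and we assume stretching against the image's own extrema followed by clipping to [0, 1]; the original may use fixed bounds instead:

```python
import numpy as np

def stretch_contrast(img):
    """Linear contrast stretching per Eq. (1): map intensities to [0, 1]
    using the image minimum and maximum, then clip out-of-range values."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(img)
    stretched = (img - lo) / (hi - lo)
    return np.clip(stretched, 0.0, 1.0)
```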
In [14], a method based on stretching several attributes of the image was
proposed. It begins with the use of a stretching algorithm to normalize the contrast of
images in the RGB color model. Afterward, the image is transformed into the HSI
model and stretching is done on the saturation and intensity layers to increase the
color quality and to solve lighting problems. The enhanced image is then obtained
by transforming it back to the RGB color model. In [15], the authors observed differences in
effectiveness between the use of Adobe Photoshop’s color correction technique
with manual and automatic adjustment mode. They worked with under- and
overexposed UIs with depths of between 1 and 5 m.
In [16], the authors proposed color correction based on machine learning with Markov random
fields. The color of input images is corrected with the value that best describes its
surroundings.
In [17], the authors used contrast stretching; local contrast is treated using CLAHE. The
resulting images have contrast enhancement at both local and global levels and are
successfully used to perform image matching. Histogram manipulation is an
effective way of improving UIs, but it is often necessary to couple it with other
techniques to remove the noise it amplifies.
2.2 The Retinex-based Model
The retinex theory uses the capacity of the human brain to recognize two
instances of the same color under different lighting conditions as being the same
color; that is, it searches for the constancy or invariance of color [18].
Computationally speaking, its goal is to eliminate the impact of the lighting
component and to obtain the real component from the image resulting in improved
images when there is little contrast due to inhomogeneous lighting [3].
Chambah et al. [19] applied constancy color theory for the real-time recognition
of fish. The methodology implements an Adaptive Contrast Equalization (ACE)
method that combines the mechanisms of the gray world and white patch methods
with histogram equalization to obtain an adequate constancy for both lighting and
contrast, avoiding the creation of an inverse color emission over the image and
facilitating subsequent segmentation and feature extraction.
2.3 Techniques Based on filters
Filter-based techniques use highly efficient filtering methods to reduce
noise and thereby improve quality in UIs [3]. This noise is often a result of the acquisition
process of the image, resulting in pixel values that do not represent the real intensity
of the scene [10]. Within this category, methods such as homomorphic filtering,
anisotropic filtering, bilateral filtering, linear filtering, median filtering, and wavelet
noise reduction, among others, can be found [10]–[12], [20].
Arnold-Bos et al. [21] describe a methodology for the enhancement of UI
through the use of filters. The first step is contrast equalization using a low-pass
filter and histogram thresholding to treat lighting inequalities. This step increases
noise, so an adaptive smoothing filter is applied to reduce noise in the borders of
objects. Two kinds of adaptive filtering are tested: anisotropic filtering, which
requires manually setting diffusion constants, and Kovesi’s wavelet filtering, which
is applied automatically.
Prabhakar and Kumar [22] presented an algorithm based on several image-
filtering methods. The first filter is the homomorphic filter, which corrects non-
uniform lighting, improves contrast, and then performs wavelet noise reduction.
The last filtering step is bilateral filtering, which smooths the image while
preserving borders through a non-linear combination of image values close to each
other. Finally, contrast stretching is applied to improve contrast and color correction
is used to balance colors. The algorithm was tested with images where objects were
at distances of between 1 and 2 m from the camera. Applying filters is effective
when needed to treat specific problems in images, but when handling UIs, several
attributes need to be corrected at the same time, resulting in the need to apply more
than one method and, consequently, a higher processing cost.
2.4 Polarization or Stereo Photography
Polarization techniques consist in using several images of the same scene
taken in rapid succession with varying degrees of polarization [7]. These techniques are hard to
apply in subaquatic environments where the speed of variation of the environment
exceeds the acquisition speed. Techniques based on stereo photography avoid this
problem by requiring two images of the same object taken from distinct angles.
Although they can be used to enhance some of the image attributes directly, they
are mostly applied in image restoration.
2.5 Mixed Techniques
Mixed techniques are remarkable for their often-consecutive application of more
than one technique in the same image enhancement algorithm. They imply the
application of not only methods from several image enhancement technique
categories but also image restoration methods [3]. This tends to increase the
computational cost but results in higher quality images.
Bazeille et al. [13] described a methodology composed of several
independent methods. First, spectral analysis filtering is applied to remove
repetitive wave patterns. Afterward, the image is scaled to a square power of two,
allowing the use of Fourier transform and Fast Fourier transform algorithms. The
color space is then transformed from RGB to YCbCr, making use of the luminance
(Y) channel only, where homomorphic filtering is applied to correct non-uniform
lighting. Afterward, wavelet noise reduction is used, and anisotropic filtering
smooths the image while preserving borders. Intensity is adjusted by suppressing
outlier pixels and stretching contrast. The image is then taken back to RGB space,
and the original size is reestablished. The last step is to normalize the color median
to ensure a balance between the three-color channels.
The procedure proposed by [23] is based on genetic algorithms and uses the
Empirical Mode Decomposition (EMD) methodology, which is widely used for
non-linear data analysis. Through EMD, the image is decomposed into Intrinsic
Mode Functions (IMFs), with weights for each channel of the RGB model: red,
green, and blue. The improved image results from the combination of the IMFs with
weights calculated through the genetic component of the implementation. The
optimal individual is selected after processing the established generation, and then
a color correction method based on that of Bazeille et al. [13] is applied.
Table 1 depicts a set of works reported in the literature, detailing their
approaches and most relevant characteristics as strengths and weaknesses. Table 2
shows the most relevant sequences according to the number of citations in the
literature. This indicates that sequence-based approaches are effective at handling
contrast, noise, and lighting simultaneously, in a similar way to [13].
3 Methodology
To define the most appropriate sequence for the improvement of UIs, a set of
filters or procedures is established considering the three most important problems
present in a UI: lighting balance, contrast enhancement, and noise removal.
Based on the analysis reported in Tables 1 and 2, the most common procedures
in the literature are homomorphic filtering, CLAHE, and bilateral filtering for
noise removal.
We determined CLAHE to be the pivotal step of the sequence due to its importance
as a contrast-enhancing method. Since it is known that applying contrast
enhancement methods can introduce artifacts in the image, noise removal was
determined to be the last step in the methodology. Thus, the sequence begins by
balancing lighting using homomorphic filtering, then contrast is enhanced through
CLAHE, and finally noise is removed by bilateral filtering (see Fig. 3).
Fig. 3: Proposed sequence.
Table 1. Some reported works and their main features.

[21] Utilization of consecutive filters.
  + Shows a need to apply noise reduction techniques after equalizing contrast.
  + Establishes Kovesi's wavelet filtering as superior to anisotropic filtering in automated implementations.

[13] Utilization of consecutive filters, alternating between the RGB and YCbCr color models.
  + The algorithm is fully automatic and does not require adjustment of parameters.
  + Positive results regarding the improvement of color and contrast.
  – The implementation is not optimal; it can be improved by a C translation and better selection of filters.

[14] Contrast stretching in the RGB model and intensity and saturation stretching in the HSI model.
  + Enhancement of both contrast and lighting.
  – The increase in noise after applying two stretching techniques is not considered.

[16] Utilization of paired Markov random fields to infer the true color of underwater images.
  + Inference-based automatic enhancement of color and contrast.
  – Requires an appropriately constructed dataset of training images.

[23] Application of a genetic algorithm to improve images, treating RGB color channels separately and using [13] for color correction.
  + Results are superior to [13] and to other techniques such as contrast stretching and histogram equalization.
  – The genetic algorithm results in images with unbalanced color that must be corrected through other techniques.

[15] Comparison of manual and automatic color correction using histogram equalization.
  + The manual technique shows favorable results in terms of MSE.
  – No improvement is proposed for the automatic methodology.
Table 1 (continued). Some reported works and their main features.

[22] Consecutive application of filters followed by equalization of color and contrast.
  + Positive results regarding noise elimination and contrast and lighting enhancement.
  + Best parameters for bilateral filtering determined, based on PSNR (Peak Signal-to-Noise Ratio).
  – Could not be compared with other techniques.

[17] Separate applications of contrast stretching to improve global contrast and Rayleigh distribution-based CLAHE to improve local contrast.
  + The use of Rayleigh-based CLAHE with a significance of 5% resulted in a remarkable improvement of MSE.
  – Manual contrast stretching did not result in a large improvement despite the time spent on its implementation.

[19] Separate implementations of the retinex model with a WP/WW hybrid and ACE to improve the balance of colors and contrast.
  + Removes color and/or lighting cast over the scene.
  + The ACE method results in images that are appropriate for segmentation.
  – The WP/WW method did not prove effective.

+ Strength, – Weakness
Table 2. List of sequences applied to underwater-image enhancement
Year Author Implemented sequence
2002 [24] 1. CLAHE in RGB model. 2. Homomorphic filtering.
2005 [21] 1. Global contrast equalization. 2.a. Noise removal by wavelet. 2.b Noise
removal by anisotropic filtering.
2006 [13] 1. Removing Moiré effect. 2. Converting image to square proportions. 3.
Homomorphic filtering in YCbCr model. 4. Noise removal by wavelet in YCbCr
model. 5. Anisotropic filtering in YCbCr model. 6. Contrast stretching in YCbCr
model. 7. Conversion of the image back to original proportions. 8. Equalization of
color means in RGB model.
2007 [14] 1. Contrast stretching in RGB model. 2. Contrast stretching in HSI model.
2010 [9] 1. Color correction in RGB model. 2. Contrast correction in RGB model. 3.
Contrast correction in HSV model.
2011 [25] 1. Weighted CLAHE. 2. Gaussian filtering.
2012 [22] 1. Homomorphic filtering. 2. Noise removal by wavelet. 3. Bilateral filtering.
4. Contrast stretching.
2013 [26] 1. CLAHE in RGB model. 2. CLAHE in HSV model. 3. The combination of
results from steps 1 and 2.
2014 [27] 1. Contrast correction. 2. Color correction.
2014 [28] 1. Homomorphic filtering. 2. Contrast stretching. 3. Noise removal by wavelet.
2015 [29] 1. Homomorphic filtering. 2. CLAHE in RGB model. 3. Gamma correction.
2015 [30] 1. CLAHE in RGB model. 2. Homomorphic filtering.
A homomorphic filter is a non-linear filter that converts the multiplicative
components of an image into additive ones so that they can be manipulated through
the Fourier transform. The technique was originally developed to handle
multiplicative and signal-dependent noise, but it has been successfully utilized in
digital image processing [31]. Ideally, the lighting component of an image is
uniform, while only the reflection component varies abruptly. However, in some
cases, such as the UI context, lighting is not constant. These variations can be modeled
as non-linear noise and treated through homomorphic filtering. The transformation
applies the filter through a sequence of steps. The first step is to carry out a logarithmic
space transform through natural logarithms. Fast Fourier transform is used to pass
into the frequency domain, and the chosen filter H is a high-pass filter to attenuate
the low-frequency components without damaging high-frequency information [32].
Once the image has been treated in frequency space, inverse fast Fourier transform
and the exponential function are used to return to the original space and obtain the
processed image (see Fig. 4).
Fig. 4: Steps of the homomorphic filter [32].
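The steps of Fig. 4 can be sketched in NumPy as follows. The Gaussian high-emphasis transfer function and its parameters (`cutoff`, `gamma_low`, `gamma_high`) are illustrative assumptions of ours, not necessarily the filter H used in the works cited:

```python
import numpy as np

def homomorphic_filter(img, cutoff=15.0, gamma_low=0.5, gamma_high=1.5):
    """Homomorphic filtering sketch: log transform -> FFT -> high-emphasis
    filter -> inverse FFT -> exponential, as in Fig. 4."""
    img = img.astype(np.float64)
    log_img = np.log1p(img)                      # log(1 + I) avoids log(0)
    F = np.fft.fftshift(np.fft.fft2(log_img))    # frequency domain, DC at center
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2       # squared distance from DC
    # High-emphasis filter: attenuates low frequencies (lighting) while
    # keeping or boosting high frequencies (reflectance, edges).
    H = (gamma_high - gamma_low) * (1 - np.exp(-D2 / (2 * cutoff ** 2))) + gamma_low
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(filtered)                    # back to intensity space
```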
Contrast enhancement is often done through histogram equalization based on the
Cumulative Density Function. In a reduced contrast image, it is more likely that
certain intensity values will be found, and thus the function will not have a linear
shape. An image with good contrast is one where every intensity value in the
selected range has a similar representation. This is achieved using a mapping table,
which assigns a more balanced frequency of occurrence to every intensity
value [2]. Mapping values 𝑔 are determined for each intensity value 𝑓 of the original
image as defined in (2), based on a maximum intensity 𝑔𝑚𝑎𝑥 and a minimum
intensity 𝑔𝑚𝑖𝑛, with 𝑃(𝑓) as a uniform probability density distribution.
𝑔 = [𝑔𝑚𝑎𝑥 − 𝑔𝑚𝑖𝑛] ∗ 𝑃(𝑓) + 𝑔𝑚𝑖𝑛 (2)
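A minimal NumPy sketch of this mapping, assuming 8-bit intensities and taking the cumulative distribution of the histogram as 𝑃(𝑓); the function name and bin count are ours:

```python
import numpy as np

def equalize(img, g_min=0, g_max=255):
    """Eq. (2): map each intensity f to g = (g_max - g_min) * P(f) + g_min,
    where P is the cumulative distribution of intensities (a mapping table)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / img.size          # P(f)
    table = (g_max - g_min) * cdf + g_min     # mapping table g(f)
    return table[img.astype(np.intp)]         # apply table to every pixel
```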
Although the increase in contrast is often noticeable, it often results in wrong
mappings for some colors, undesirable amplification of noise, and the appearance
of artifacts. The CLAHE algorithm solves this problem by combining two
techniques: Adaptive Histogram Equalization (AHE) and Contrast Limited
Histogram Equalization (CLHE).
The principle behind AHE is to divide the image into several tiles and increase
the contrast for each of these individually to improve the contrast locally rather than
globally. Once this step is finished, the image is reconstructed through linear
interpolation [33]. In CLHE, a parameter is established to limit the increase in
contrast by removing pixels from the highest points in the histogram and
redistributing them evenly throughout it [34]. Usually, this implies that the cut zone
surpasses the established threshold, so the process is repeated until the area above
the limit is small enough.
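The clip-and-redistribute step described above can be sketched on a single histogram as follows; the function name and the convergence tolerance are our assumptions:

```python
import numpy as np

def clip_histogram(hist, clip_limit, max_iters=100):
    """Clip-limit step of CLHE: counts above `clip_limit` are cut off and
    the excess is redistributed evenly over all bins. Redistribution can push
    bins back over the limit, so the process repeats until the remaining
    excess is negligible."""
    hist = hist.astype(np.float64).copy()
    for _ in range(max_iters):
        excess = np.sum(np.maximum(hist - clip_limit, 0.0))
        if excess < 1e-6:
            break
        hist = np.minimum(hist, clip_limit)   # cut the peaks
        hist += excess / hist.size            # spread the excess evenly
    return hist
```

Note that the total pixel count is preserved: whatever is cut from the peaks is returned to the histogram as a uniform offset.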
CLAHE not only combines AHE and CLHE to increase contrast locally in a
limited manner but also makes it possible to define a preferred probability density
distribution, thus also defining the desired shape of the histogram. Some examples
of these distributions are the exponential and Rayleigh distributions.
Bilateral filtering has the goal of reducing noise and enhancing borders in images
at the same time, operating in both the spatial and frequency domains. The
technique used often results in loss of detail in areas where transitions between
colors are softer, easing the use of segmentation techniques when compared to other
noise-removal and border-enhancing algorithms [35]. However, this filter is very
costly in computational terms.
The original bilateral filter implementation of [36], which can be used with both
gray-scale and color images, is employed with the recommended parameters of a
semi-width of 𝑤 = 5 and standard deviations for the spatial and frequency domains
of σspatial = 3 and σfrequency = 0.1.
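A direct, unoptimized NumPy sketch of bilateral filtering for a gray-scale image in [0, 1], using the recommended parameters above; the loop-based layout is chosen for clarity, not speed:

```python
import numpy as np

def bilateral_filter(img, w=5, sigma_spatial=3.0, sigma_range=0.1):
    """Bilateral filter: each output pixel is a weighted mean of its
    (2w+1)x(2w+1) neighborhood, where the weights combine spatial closeness
    and intensity similarity, so strong edges are preserved."""
    img = img.astype(np.float64)
    padded = np.pad(img, w, mode='reflect')
    y, x = np.mgrid[-w:w + 1, -w:w + 1]
    spatial_w = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_spatial ** 2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + 2 * w + 1, j:j + 2 * w + 1]
            # Range weight: pixels with a very different intensity barely count.
            range_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_range ** 2))
            weights = spatial_w * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

With σrange = 0.1, an intensity jump of 1.0 carries a weight of roughly exp(−50), which is why a sharp step edge survives the smoothing almost untouched.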
3.1 Selection of CLAHE Parameters
The CLAHE parameters are NumTiles, ClipLimit, NBins, Range, Distribution,
Alpha, and the color space. Because results can differ between color spaces, we were
interested in analyzing this effect, so four color-space variants were selected: LAB,
HSV, RGB-HSV, and Red-HSV.
Parameter selection is carried out through experimental testing by verifying the
effectiveness of each parameter combination for each CLAHE color space variant.
The evaluation methodology was based on the principle that contrast improvement
should happen mainly over the area or object of interest in comparison with the
image background. For this reason, for each image, two windows are manually
selected: one focusing on the object of interest, subject to a high contrast increment,
and the other limited to background areas, where a noticeable change is not
expected when applying CLAHE.
The difference in average intensity 𝐶 between the original and treated windows
is defined as:
𝐶 = (∑𝑖,𝑗 𝑅1(𝑖, 𝑗)) / (𝐼 ∗ 𝐽) − (∑𝑚,𝑛 𝑅2(𝑚, 𝑛)) / (𝑀 ∗ 𝑁)     (3)
where 𝑅1 and 𝑅2 are the selected windows while 𝐼, 𝐽 and 𝑀,𝑁 are their respective
proportions.
The effectiveness 𝐸𝑘 ∈ [0,1] of the contrast enhancement for image 𝑘 in each
CLAHE iteration is established as the difference between the window contrasts
calculated for the original image, 𝐶𝑜, and the improved image, 𝐶𝑓, defined as follows:
𝐸𝑘 = 𝐶𝑜 − 𝐶𝑓 (4)
where 𝐶𝑜 is the difference in average intensity between the two windows for the
original image and 𝐶𝑓 is the difference in average intensity for the improved image.
The total effectiveness of the CLAHE method measured for a set of 𝑁 images is:
𝐸𝑡𝑜𝑡𝑎𝑙 = (1/𝑁) ∑𝑖=1..𝑁 𝐸𝑖
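These window-based measures can be sketched directly from Eqs. (3) and (4); the function names are ours, and the sign convention follows the formulas as stated in the text:

```python
import numpy as np

def window_contrast(r1, r2):
    """Eq. (3): difference between the mean intensities of the object
    window r1 and the background window r2."""
    return np.mean(r1) - np.mean(r2)

def effectiveness(r1_orig, r2_orig, r1_enh, r2_enh):
    """Eq. (4): E_k as the difference between the window contrast of the
    original image (C_o) and that of the enhanced image (C_f)."""
    return window_contrast(r1_orig, r2_orig) - window_contrast(r1_enh, r2_enh)

def total_effectiveness(pairs):
    """Average effectiveness over a set of images; `pairs` holds one
    (r1_orig, r2_orig, r1_enh, r2_enh) tuple per image."""
    return np.mean([effectiveness(*p) for p in pairs])
```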
Once the CLAHE parameters for each color space variant have been determined,
the effectiveness of the algorithm sequences is analyzed from the perspectives
established in the literature: subjective and objective. The subjective approach
considers color perception and visibility. The objective approach is based on [9],
which determines the quality of the method according to some detected borders
with a border-detection algorithm and analysis of the histograms of the processed
images.
3.2 Objective Evaluation Methodology
The two basic ideas behind the objective approach are that a well-enhanced image
permits improved border-detection results, and that an image with enhanced
contrast has a histogram covering a wide range of values.
Border detection is carried out with the Sobel border-detection algorithm. This
method is a convolution technique in which two 3×3 kernels are applied,
one for detecting vertical borders and the other for detecting horizontal ones. The
intensity gradient is calculated for the image at every point, showing how abrupt
the changes in intensity are, and therefore indicating the probability that a border
exists [37]. Eq. (5) defines both masks used in the algorithm, with ∆𝑥 being the
horizontal mask and ∆𝑦 the vertical one. The method is sensitive to noise, and its
selection was based on that fact, since in an image that has not been corrected
properly, not only the borders but also noise will be detected, so even if the border
count is favorable, an extremely high number of detected borders indicates that
some of them are noise or artifacts, so the image can be of poor quality and hard to
segment.
∆𝑥 = [ −1  0  1        ∆𝑦 = [  1   2   1
       −2  0  2                0   0   0
       −1  0  1 ]             −1  −2  −1 ]        (5)
Eq. (6) depicts the counting of separate elements in a binary image, thus:
𝑄 = { 𝑒𝑖 ∈ 𝐸 | 𝑒𝑖 ∩ 𝑒𝑗≠𝑖 = ∅ } (6)
where Q is the container including all separate elements in a binary image,
where an individual element 𝑒𝑖 is one that does not intersect with any other element
of the image.
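Eqs. (5) and (6) together can be sketched in NumPy as a Sobel gradient magnitude followed by a count of connected elements in the thresholded binary image. The helper names and the choice of 4-connectivity are our assumptions:

```python
import numpy as np

def conv3(img, k):
    """3x3 correlation with edge padding (sign differences from true
    convolution do not affect the gradient magnitude)."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the two Sobel masks of Eq. (5)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=np.float64)
    return np.hypot(conv3(img, kx), conv3(img, ky))

def count_elements(binary):
    """Eq. (6): count separate (4-connected) elements in a binary image
    using an iterative flood fill."""
    visited = np.zeros(binary.shape, dtype=bool)
    rows, cols = binary.shape
    count = 0
    for i in range(rows):
        for j in range(cols):
            if binary[i, j] and not visited[i, j]:
                count += 1
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count
```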
Additionally, histogram analysis is done with the assumption that an image with
good contrast is one whose histogram covers a wide range of values. In an image
with poor contrast, however, most intensity values will be skewed towards either 0
(dark tonalities) or 255 (bright tonalities). Even if the histogram is not skewed in
either direction, contrast is still reduced if the range of values covered is narrow.
The two measures used to determine whether the histogram of the processed
image corresponds to improved contrast are the difference between its maximum
and minimum values, and the standard deviation of the per-intensity counts, which
will be higher if some color or intensity tones are overrepresented and lower
if the representation is balanced, indicating better contrast. The intensity
difference 𝑤 of the histogram and the standard deviation σ are defined as:
𝑤 = 𝐼𝑚𝑎𝑥 − 𝐼𝑚𝑖𝑛 (7)
σ = √( (1/(𝑛 − 1)) ∑𝑖=0..𝑛 (𝑥𝑖 − x̄)² ) (8)
where 𝐼𝑚𝑎𝑥 and 𝐼𝑚𝑖𝑛 are the maximum and minimum intensity values, respectively,
and n is the highest possible intensity value, �̅� the average number of instances, and
𝑥𝑖 the number of instances for any given intensity value i.
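Both histogram measures can be sketched as follows; we assume 8-bit intensities and take the width over the occupied bins, with the function name being ours:

```python
import numpy as np

def histogram_metrics(img, levels=256):
    """Eqs. (7)-(8): histogram width w = I_max - I_min over occupied bins,
    and the standard deviation of the per-bin counts (lower means a more
    balanced histogram, i.e. better contrast)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    occupied = np.nonzero(hist)[0]
    w = occupied.max() - occupied.min() if occupied.size else 0
    sigma = np.sqrt(np.sum((hist - hist.mean()) ** 2) / (levels - 1))
    return w, sigma
```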
4 Experimental Results
The images used were taken from the QUT image dataset [38] as well as other
smaller databases used in similar works [39]–[44]. The computer used for testing
was a desktop computer with an i7-4820k 3.70-GHz processor and 32 GB of RAM,
with code being executed in high-performance mode.
4.1 CLAHE Parameters
The effectiveness of each combination of parameters was estimated over a
set of 30 images, and the results were averaged for each of the 5000 parameter
combinations. The 5000 results were placed in descending order, and the best results
were then manually selected for every CLAHE variant, discarding parameter values for
which the contrast increase produced extremely noisy images (such as NBins values
higher than 512, which distort color) and fixing the distribution as Rayleigh, as
recommended for UI treatment [45], reducing the results to a subset of 2000. Table 3 shows the
parameters of the CLAHE algorithm as well as the evaluated values of each of them.
Although the best results were obtained by applying CLAHE on the LAB color
model, this does not guarantee that the use of LAB is a strictly superior option
compared to using models such as combined RGB-HSV. This is because
manipulating the luminance layer of the LAB color model does not have such a
noticeable effect on background areas as manipulating the red, green, and blue
layers of RGB, skewing the comparison and making it valid only for determining
the best parameters to use for each color model separately. Table 4 shows the
preferred parameter combination for each CLAHE color model variant.
4.2 Selection of CLAHE Variant and Measurement of Effectiveness
According to the test results in Table 5, three of the variants of the original
sequence of Homomorphic Filtering (HF), CLAHE, and Bilateral Filtering (BF)
performed positively; the exception is the Red-HSV implementation, in which
CLAHE is applied on the red layer of the RGB color model and the hue and
saturation layers of the HSV color space.
Table 5 shows the average results for each measure of the execution of each variant
of the proposed sequence on 30 images, including the average execution times in
seconds.
Table 3. List of CLAHE algorithm parameters

NumTiles — Size of the tile matrix [M N] into which the image is divided for processing (M*N tiles). Evaluated values: [6 6], [7 7], [8 8], [9 9], [10 10].
ClipLimit — Scalar between 0 and 1 that limits the contrast increase. Higher values result in higher contrast but also more distortion of colors and noise. Evaluated values: 0.0025, 0.005, 0.01, 0.015, 0.02.
NBins — Number of bins of the histogram used to build the transformation for the image. A higher value results in a higher dynamic range of colors but also a higher processing cost and occasional color distortion. Evaluated values: 64, 128, 256, 512, 1024.
Distribution — Distribution used to shape the corrected histogram. Evaluated values: 'Rayleigh', 'exponential'.
Alpha — Parameter of the distribution, also called sigma in some implementations. Evaluated values: 0.3, 0.35, 0.4, 0.45, 0.5.
Table 4. Best CLAHE parameter combination by color space

Color model | Tiles | ClipLimit | NBins | Distribution | Alpha | Effectiveness | # Ej. CLAHE
LAB         | [7 7] | 0.02      | 512   | Rayleigh     | 0.35  | 0.072102      | 1
HSV         | [6 6] | 0.02      | 512   | Rayleigh     | 0.5   | 0.057974      | 2
RGB-HSV     | [6 6] | 0.02      | 256   | Rayleigh     | 0.5   | 0.054282      | 5
RED-HSV     | [6 6] | 0.02      | 256   | Rayleigh     | 0.5   | 0.036053      | 3
Table 5. Sequence evaluation

Sequence             | Variation in detected borders | Avg. width, red hist. | Avg. width, green hist. | Avg. width, blue hist. | Avg. std dev., red hist. | Avg. std dev., green hist. | Avg. std dev., blue hist. | Avg. execution time (s)
Original image       | –          | 175.8438 | 203.0625 | 204.3125 | 1.4244 | 0.4976 | 0.5705 | –
HF-LAB CLAHE-BF      | +63.9718%  | 203.3750 | 235.4375 | 226.3750 | 2.5168 | 0.3855 | 0.7441 | 18.7575
HF-HSV CLAHE-BF      | +96.3990%  | 212.6250 | 238.6250 | 239.5000 | 0.6406 | 0.3377 | 0.4031 | 18.4341
HF-RGBHSV CLAHE-BF   | +19.5811%  | 182.1250 | 220.4375 | 214.3750 | 0.6975 | 0.3847 | 0.4752 | 18.5046
HF-RedHSV CLAHE-BF   | –31.3528%  | 180.9375 | 197.1250 | 194.3125 | 0.7040 | 0.4560 | 0.5224 | 18.0022

Green: best result. Red: worst result.
It can be observed that the most effective sequence for almost every evaluated
aspect is HF-HSV CLAHE-BF, with an augmentation of borders detected by means
of the Sobel technique and histogram width superior to that of the other sequences,
as well as lower standard deviation in the histograms for all the color layers, which
indicates higher contrast and color balance. The only aspect in which it is not the
superior implementation is the average execution time. However, the difference
from the other sequences is relatively small (less than 5%) and not significant when
compared to the execution time of the bilateral filter itself.
It is important to note that, as observed in Figs. 10 to 12, the HF-HSV
CLAHE-BF sequence shows a higher occurrence of noise, although subjective
analysis of the images also shows that the borders are better preserved. The HF-
RGBHSV CLAHE-BF results are not remarkable; it has neither the best result nor
the worst for any measurement. On the other hand, although its contribution to
border detection is not big, it generates less noise than the rest of the proposed
sequences. Furthermore, it is the only one of the four implementations that results
in improved color when images with a predominant color cast are processed. The
HF-LAB CLAHE-BF sequence produces histograms with a width very close to that
of its counterpart in the HSV model and detects borders with less noise but has the
worst average standard deviation for the red and blue histograms.
Finally, the HF-RedHSV CLAHE-BF implementation had the worst results overall, with poorer border detection and narrower color histograms than those of the original images. Its average standard deviation is not favorable for the red and green layers either, confirming that it does not produce images with improved contrast. Its only positive point is its running time, which is not an important advantage given the small differences among the implementations.
5 Conclusions
In the field of underwater-image enhancement, it is often difficult to directly apply techniques designed for non-underwater contexts. Uneven lighting, poor contrast, and noise are obstacles when applying computer vision technologies to real-world problems. In this case, segmenting fish for measurement requires images free of the mentioned issues, but such images are difficult to obtain even in controlled environments.
Another difficult aspect is measuring the quality of the processed images. Usually, metrics such as the mean squared error and the peak signal-to-noise ratio are applied; however, they cannot be used without ideal, undegraded images against which to compare the results, or a model to generate degraded images from them. Therefore, it becomes necessary to measure the effectiveness of the implemented methods through histogram analysis and border detection, and it is important to define a model for the generation of degraded images that simulate underwater conditions. Other metrics, such as the difference in contrast between the windows used to choose the best CLAHE parameters, are skewed when methods that operate in different color spaces are compared.
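The reference-image dependence can be made concrete with a minimal PSNR sketch (NumPy; the paper itself ran in MATLAB). The metric only works when a clean reference frame exists, which is precisely what real underwater scenes lack:

```python
import math
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference
    image and a processed one; undefined without the reference."""
    mse = np.mean((reference.astype(np.float64)
                   - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return math.inf                  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Usable only on synthetically degraded data, where the clean frame
# that produced the degraded one is known.
clean = np.zeros((8, 8), dtype=np.uint8)
noisy = np.full((8, 8), 16, dtype=np.uint8)
print(round(psnr(clean, noisy), 2))   # 24.05
```

This is why the evaluation above falls back on histogram and border statistics: no `reference` argument is available for genuine underwater footage.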
The HF-HSV CLAHE-BF, HF-RGBHSV CLAHE-BF, and HF-LAB
CLAHE-BF sequences result in improved visibility, more balanced lighting, and
reduction of noise that could make segmentation harder. The processed images have
a wider range of colors that is more representative of the scene characteristics, with
more contrast between the distinct objects present in the scene, as confirmed by
Sobel border detection and histogram analysis. Subjectively, removal of color casts
is observed without the appearance of many noticeable artifacts when using
CLAHE on the combined RGB-HSV color model; this is not the case when CLAHE
is executed on the HSV and LAB models, as the color is not directly manipulated.
However, HF-HSV CLAHE-BF is the best option when handling images where
there is no dominant color tonality.
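The Sobel border count used above as a contrast proxy can be sketched as follows. This NumPy version and its 0.5 normalized-magnitude threshold are illustrative assumptions, not the settings of the paper's MATLAB implementation:

```python
import numpy as np

def sobel_edge_count(gray, threshold=0.5):
    """Count pixels whose normalized Sobel gradient magnitude exceeds
    `threshold`; more edge pixels is read here as sharper borders."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = gray.astype(float)
    h, w = g.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                   # valid-mode 2-D correlation,
        for j in range(3):               # written out for clarity
            patch = g[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()                 # normalize to [0, 1]
    return int(np.count_nonzero(mag > threshold))

# A vertical step edge: the two output columns straddling the step
# carry the full gradient, giving 12 edge pixels on a 6x6 output grid.
step = np.zeros((8, 8), dtype=np.uint8)
step[:, 4:] = 255
print(sobel_edge_count(step))   # 12
```

Comparing this count before and after processing mirrors the "change in detected borders" column of the results table: sharper borders after enhancement yield more above-threshold gradient pixels.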
Although not all the images used for testing showed fish in controlled environments, the effectiveness of the proposed sequences in handling scenes with particle noise and background elements is a good indicator of progress towards the upcoming segmentation step to be carried out on the processed images. Considering the advantages of using CLAHE on three of the
four proposed color models, future research should address the issue of
automatically selecting an adequate method depending on the attributes of the input
images. It is important to note, however, that the LAB and HSV results are similar
in that they are not capable of fully eliminating color casts typical of many
underwater environments. Considering that the HSV-based implementation has
better results than the LAB one, the automatic selection can be restricted to the HF-
HSV CLAHE-BF and HF-RGB HSV CLAHE-BF sequences.
In closing, the MATLAB implementation produced the expected results. CLAHE and the homomorphic filter did not have a large impact on running time, even when the CLAHE algorithm was run five times for the combined RGB-HSV model. On the other hand, although bilateral filtering has a positive impact on the segmentation of processed images, a real-time implementation will require faster code, possibly making use of programming languages such as C or C++ and parallel processing techniques to obtain adequate running times.
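The homomorphic-filtering stage that opens every sequence can be sketched in NumPy (the paper's own code was MATLAB). The Gaussian high-emphasis transfer function and the parameter values (gamma_l, gamma_h, c, d0) below are illustrative assumptions, not the settings used in the study:

```python
import numpy as np

def homomorphic_filter(v, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=10.0):
    """Homomorphic filtering of one channel with values in [0, 1]:
    log -> FFT -> high-frequency emphasis -> inverse FFT -> exp.
    gamma_l < 1 attenuates the slowly varying illumination component,
    while gamma_h > 1 boosts the reflectance detail."""
    log_v = np.log1p(v.astype(np.float64))
    spec = np.fft.fftshift(np.fft.fft2(log_v))
    rows, cols = v.shape
    u = np.arange(rows) - rows // 2          # centered frequency grid
    w = np.arange(cols) - cols // 2
    d2 = u[:, None] ** 2 + w[None, :] ** 2
    transfer = (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / d0 ** 2)) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(transfer * spec)).real
    return np.expm1(filtered)

# A perfectly flat channel carries only the DC (illumination) term,
# which the filter attenuates by gamma_l: (1 + 1) ** 0.5 - 1.
flat = np.ones((8, 8))
out = homomorphic_filter(flat)
print(round(out[0, 0], 4))   # 0.4142
```

In a full pipeline this stage would run on the V channel (or on each layer, depending on the sequence), followed by CLAHE and bilateral filtering, for which OpenCV's cv2.createCLAHE and cv2.bilateralFilter are common drop-in choices.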
Fig. 10: (a) Original image, (b) image after HF-LAB CLAHE-BF processing, (c) image after HF-HSV CLAHE-BF processing, (d) image after HF-RGBHSV CLAHE-BF processing, and (e) image after HF-RedHSV CLAHE-BF processing. From left to right: the corresponding image, border detection with n detected borders, and normalized color histogram.
Fig. 11: To the left, three underwater images. To the right, the same three images after being processed with the HF-HSV CLAHE-BF sequence.
Fig. 12: To the left, three underwater images. To the right, the same three images after being processed with the HF-RGBHSV CLAHE-BF sequence.
References
[1] B. Singh, R. S. Mishra, and P. Gour, Analysis of contrast enhancement
techniques for underwater image, Int. J. Comput. Technol. Electron. Eng., 1
(2011), no. 2, 190–194.
[2] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Boston,
MA, USA: Addison-Wesley Longman Publishing Co., Inc., 1992.
[3] R. Wang, Y. Wang, J. Zhang, and X. Fu, Review on underwater image
restoration and enhancement algorithms, Proceedings of the 7th International
Conference on Internet Multimedia Computing and Service - ICIMCS '15,
(2015), 1–6. https://doi.org/10.1145/2808492.2808548
[4] S. Serikawa and H. Lu, Underwater image dehazing using joint trilateral
filter, Comput. Electr. Eng., 40 (2014), no. 1, 41–50.
https://doi.org/10.1016/j.compeleceng.2013.10.016
[5] J. R. Martinez-De Dios, C. Serna and A. Ollero, Computer vision and robotics
techniques in fish farms, Robotica, 21 (2003), no. 3, 233–243.
https://doi.org/10.1017/s0263574702004733
[6] J. Y. Chiang, Y.-C. Chen, and Y.-F. Chen, Underwater Image Enhancement:
Using Wavelength Compensation and Image Dehazing (WCID), Chapter in
Advanced Concepts for Intelligent Vision Systems, Vol. 6915, J. Blanc-Talon,
R. Kleihorst, W. Philips, D. Popescu and P. Scheunders, Eds. Berlin,
Heidelberg: Springer Berlin Heidelberg, 2011, 372–383.
https://doi.org/10.1007/978-3-642-23687-7_34
[7] D. S. Baajwa, S. A. Khan and J. Kaur, Evaluating the Research Gaps of
Underwater Image Enhancement Techniques, Int. J. Comput. Appl., 117
(2015), no. 20, 19-23. https://doi.org/10.5120/20672-3369
[8] G. Padmavathi, P. Subashini, M. M. Kumar, and S. K. Thakur, Comparison
of filters used for underwater image pre-processing, IJCSNS International
Journal of Computer Science and Network Security, 10 (2010), no. 1, 58–65.
[9] K. Iqbal, M. Odetayo, A. James, Rosalina Abdul Salam and Abdullah Zawawi
Hj Talib, Enhancing the low quality images using Unsupervised Colour
Correction Method, 2010 IEEE International Conference on Systems, Man
and Cybernetics, (2010), 1703–1709.
https://doi.org/10.1109/icsmc.2010.5642311
[10] P. Sahu, N. Gupta and N. Sharma, A Survey on Underwater Image
Enhancement Techniques, Int. J. Comput. Appl., 87 (2014), no. 13, 19–23.
https://doi.org/10.5120/15268-3743
[11] M. S. Sowmyashree, S. K. Bekal, R. Sneha and N. Priyanka, A Survey on the
various underwater image enhancement techniques, Int. J. Eng. Sci. Invent.,
3 (2014), no. 5, 40–45.
[12] S. S. Thakare and A. Sahu, Comparative Analysis of Various Underwater
Image Enhancement Techniques, International Journal of Computer Science
and Mobile Computing, 3 (2014), no. 4, 33–38.
[13] S. Bazeille, I. Quidu, L. Jaulin and J.-P. Malkasse, Automatic underwater
image pre-processing, in CMM’06, 2006.
[14] K. Iqbal, R. Abdul Salam, A. Osman and A. Zawawi Talib, Underwater image
enhancement using an integrated colour model, Int. J. Comput. Sci., 34
(2007), no. 2, 529–534.
[15] N. bt Shamsuddin, W. F. bt Wan Ahmad, B. b Baharudin, M. Kushairi, M.
Rajuddin and F. bt Mohd, Significance level of image enhancement
techniques for underwater images, 2012 International Conference on
Computer & Information Science (ICCIS), (2012), 490–494.
https://doi.org/10.1109/iccisci.2012.6297295
[16] L. A. Torres-Méndez and G. Dudek, Color Correction of Underwater Images
for Aquatic Robot Inspection, Chapter in Energy Minimization Methods in
Computer Vision and Pattern Recognition, A. Rangarajan, B. Vemuri, and A.
L. Yuille, Eds. Springer Berlin Heidelberg, 2005, 60–73.
https://doi.org/10.1007/11585978_5
[17] P. N. Andono, I. K. E. Purnama and M. Hariadi, Underwater Image
Enhancement Using Adaptive Filtering for Enhanced Sift-Based Image
Matching, Journal of Theoretical & Applied Information Technology, 52
(2013), 273-280.
[18] T. Ji and G. Wang, An approach to underwater image enhancement based on
image structural decomposition, J. Ocean Univ. China, 14 (2015), no. 2, 255–
260. https://doi.org/10.1007/s11802-015-2426-2
[19] M. Chambah, D. Semani, A. Renouf, P. Courtellemont and A. Rizzi,
Underwater color constancy: enhancement of automatic live fish recognition,
16th Annual symposium on electronic imaging, 2003, Vol. 5293, 157–168.
[20] A. Mahiddine, J. Seinturier, J.-M. Boï, P. Drap and D. Merad, Performances
Analysis of Underwater Image Preprocessing Techniques on the
Repeatability of SIFT and SURF Descriptors, in WSCG 2012: 20th
International Conferences in Central Europe on Computer Graphics,
Visualization and Computer Vision, 2012.
[21] A. Arnold-Bos, J.-P. Malkasse, and G. Kervern, A preprocessing framework
for automatic underwater images denoising, European Conference on
Propagation and Systems, Brest, France, 2005.
[22] C. J. Prabhakar and P. U. P. Kumar, An Image Based Technique for
Enhancement of Underwater Images, ArXiv12120291 Cs, Dec. 2012.
[23] A. T. Çelebi and S. Ertürk, Empirical mode decomposition based visual
enhancement of underwater images, 2010 2nd International Conference on
Image Processing Theory Tools and Applications (IPTA), (2010), 221–224.
https://doi.org/10.1109/ipta.2010.5586758
[24] R. Garcia, T. Nicosevici and X. Cufi, On the way to solve lighting problems
in underwater imaging, OCEANS ’02 MTS/IEEE, Vol. 2, 2002, 1018–1024.
https://doi.org/10.1109/oceans.2002.1192107
[25] R. Chauhan and S. S. Bhadoria, An Improved Image Contrast Enhancement
Based on Histogram Equalization and Brightness Preserving Weight
Clustering Histogram Equalization, 2011 International Conference on
Communication Systems and Network Technologies (CSNT), (2011), 597–
600. https://doi.org/10.1109/csnt.2011.128
[26] W. Yussof, M. S. Hitam, E. A. Awalludin and Z. Bachok, Performing contrast
limited adaptive histogram equalization technique on combined color models
for underwater image enhancement, Int. J. Interact. Digit. Media, 1 (2013),
no. 1, 1–6.
[27] A. S. Abdul Ghani and N. A. Mat Isa, Underwater image quality enhancement
through composition of dual-intensity images and Rayleigh-stretching,
SpringerPlus, 3 (2014), no. 1, 757. https://doi.org/10.1186/2193-1801-3-757
[28] X. Luan, G. Hou, Z. Sun, Y. Wang, D. Song and S. Wang, Underwater Color
Image Enhancement Using Combining Schemes, Mar. Technol. Soc. J., 48
(2014), no. 3, 57–62. https://doi.org/10.4031/mtsj.48.3.8
[29] K. Srividhya and M.M. Ramya, Performance analysis of pre-processing
filters for underwater images, 2015 International Conference on Robotics,
Automation, Control and Embedded Systems (RACE), (2015), 1–7.
https://doi.org/10.1109/race.2015.7097234
[30] M. Bhowmik, D. Ghoshal and S. Bhowmik, An improved method for the
enhancement of under ocean image, 2015 International Conference on
Communications and Signal Processing (ICCSP), (2015), 1739–1742.
https://doi.org/10.1109/iccsp.2015.7322819
[31] I. Pitas and A. N. Venetsanopoulos, Homomorphic Filters, Chapter in
Nonlinear Digital Filters, Springer US, 1990, 217–243.
https://doi.org/10.1007/978-1-4757-6017-0_7
[32] University of Patras, Enhancement by Point Processing, 29-Sep-2004.
[Online]. Available: http://www.upatras.gr/en [Accessed: 20-Apr-2017].
[33] N. M. Sasi and V. K. Jayasree, Contrast Limited Adaptive Histogram
Equalization for Qualitative Enhancement of Myocardial Perfusion Images,
Engineering, 5 (2013), no. 10, 326–331.
https://doi.org/10.4236/eng.2013.510b066
[34] R. C. Austin and A. Geselowit, Adaptive Histogram Equalization and Its
Variations, 1986.
[35] S. Paris, P. Kornprobst, J. Tumblin, and F. Durand, Bilateral Filtering: Theory
and Applications, Found. Trends® Comput. Graph. Vision, 4 (2008), no. 1,
1–75. https://doi.org/10.1561/0600000020
[36] C. Tomasi and R. Manduchi, Bilateral filtering for gray and color images,
Sixth International Conference on Computer Vision, (1998), 839–846.
https://doi.org/10.1109/iccv.1998.710815
[37] D. Ray, Edge Detection in Digital Image Processing, Univ. Wash. Dep.
Math., 2013.
[38] K. Anantharajah, ZongYuan Ge, Chris McCool, Simon Denman, Clinton
Fookes, Peter Corke, Dian Tjondronegoro, Sridha Sridharan, Local inter-
session variability modelling for object classification, 2014 IEEE Winter
Conference on Applications of Computer Vision (WACV), (2014), 309–316.
https://doi.org/10.1109/wacv.2014.6836084
[39] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, Enhancing underwater
images and videos by fusion, 2012 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), (2012), 81–88.
https://doi.org/10.1109/cvpr.2012.6247661
[40] N. Carlevaris-Bianco, A. Mohan and R. M. Eustice, Initial results in
underwater single image dehazing, Oceans 2010 MTS/IEEE Seattle, (2010),
1–8. https://doi.org/10.1109/oceans.2010.5664428
[41] S. Emberton, L. Chittka and A. Cavallaro, Hierarchical rank-based veiling
light estimation for underwater dehazing, Proceedings of the British Machine
Vision Conference 2015, (2015). https://doi.org/10.5244/c.29.125
[42] R. Fattal, Single image dehazing, ACM Trans. Graph. TOG, 27 (2008), no. 3,
72. https://doi.org/10.1145/1360612.1360671
[43] A. Galdran, D. Pardo, A. Picón and A. Alvarez-Gila, Automatic Red-Channel
underwater image restoration, J. Vis. Commun. Image Represent., 26 (2015),
132–145. https://doi.org/10.1016/j.jvcir.2014.11.006
[44] H. Lu, Y. Li, L. Zhang and S. Serikawa, Contrast enhancement for images in
turbid water, Journal of the Optical Society of America A, 32 (2015), no. 5,
886–893. https://doi.org/10.1364/josaa.32.000886
[45] MATLAB, “Contrast-limited adaptive histogram equalization (CLAHE) -
MATLAB adapthisteq,” 2006. [Online]. Available:
https://www.mathworks.com/help/images/ref/adapthisteq.html.
[Accessed: 26-Feb-2017].
Received: August 23, 2017; Published: September 28, 2017