Detection and Recognition of U.S. Speed Signs from Grayscale Images for Intelligent Vehicles

International Journal of Advanced Computer Science, Vol. 3, No. 3, Pp. 139-143, Mar., 2013.

Manuscript Received: 13 Nov. 2012; Revised: 5 Dec. 2012; Accepted: 25 Jan. 2013; Published: 15 Feb. 2013

Keywords: speed-limit sign, connected component labeling, regions, speed sign detection, speed sign recognition, neural networks

Abstract: This paper presents a simple and robust U.S. speed sign detection and recognition algorithm applied to grayscale images. The algorithm has two phases. In the detection phase, possible speed sign regions in the input image are obtained by labeling the grayscale image with a modified connected component labeling (CCL) algorithm and then analyzing region characteristics such as aspect ratio and size. In the recognition phase, the speed-limit information is extracted from the hypothesized regions after the hypothesis is verified by recognizing their inner parts. Invariant properties of the speed-limit sign board characters, calculated from Hu's moments, are used as the features in the recognition scheme. A statistical method is used to classify the hypothesized regions, and a neural network based classifier extracts the assigned speed limit. The proposed algorithm was tested on a number of road scenes and the results are compared with other available methods.

1. Introduction

In the U.S., different speed limits apply to different types of roads, ranging from rural local roads and residential streets to urban freeways. Sometimes these speed limits vary depending on the weather and road conditions, so it is not easy for the driver to keep up with the changing limits: the driver has to watch for the speed sign while simultaneously performing other driving tasks. A system within the vehicle that automatically detects the speed sign, extracts the required information from it, and displays it for the driver would therefore reduce the driver's effort to some extent.

Nowadays vehicles are being equipped with navigation systems that store the assigned speed limits of different roadways in memory, but changes made to the speed limits due to weather and road conditions are not reflected in these systems. According to the automaker BMW, the speed limits provided by navigation systems are only 70 percent accurate, and by utilizing vision-based speed detection systems in vehicles this accuracy can be improved to 95 percent.

Pradeep K. Kanaparthi and Ezzatollah Salari are with the Electrical Engineering and Computer Science Dept., University of Toledo, Toledo, OH, USA ([email protected], [email protected]).

The speed detection system uses a camera mounted on the rear-view mirror of the vehicle. It regularly captures the scene in front of the vehicle at very short intervals of time, and these images are processed in real time by an image processor to detect the speed signs and extract the speed limits from them. A U.S. speed sign has distinguishing features such as its shape and the characters printed on it. In spite of these useful features, it is not easy to develop an automatic speed detection system because of obstacles like partial occlusions, varying perspectives, tilted and damaged signs, and cluttered backgrounds. Our detection algorithm uses region characteristics, because regions encode shape and scale information naturally and are only mildly affected by background clutter [1]. Our recognition algorithm is a combination of statistical and neural network based approaches.

2. Related Work

Many algorithms are available in the literature for automatic traffic sign detection. Almost every one of them divides the process into two parts: sign detection and sign recognition.

A. Traffic Sign Detection

Shape is a very useful feature of any object, and many authors have used the shape of the signs in the detection stage. Fabien and Alexandre [2] used a circular Hough transform and a rectangle detection method for the detection of European and American speed-limit signs, respectively. Angela Tam and Hua Shen [3] also used the Hough transform for the detection of quadrilateral sign boards. The success of these algorithms depends on the efficiency of the edge filters used, and the edges themselves are easily affected by noise present in the image.

Zelinsky and Barnes [4] used the symmetry of the signs, detecting circular signs with a measure of radial symmetry in the image. In another paper, Nick Barnes [5] used radial symmetry and a regular polygon detection algorithm after reducing the noise in the images to improve the detection results. The symmetry of an object, when measured from its image, may change depending on occlusions and the camera angle.

Jim Torresen, Jorgen W. Bakke and Sekanina [6] used color information for the detection of Norwegian speed-limit signs. Fang, Chen and Fuh [7] proposed an algorithm combining both color and shape features.

Neural network based detection algorithms such as [8] also exist in the literature. Betke and Makris [9] and Gavrila [10] used template matching for the detection stage; template-based methods are time consuming compared to other methods.

B. Traffic Sign Recognition

Artificial neural networks (e.g., [15], [16], [17]) are a popular method for optical character recognition in different applications. In [18] a probabilistic neural network is employed for recognizing alphanumeric characters. J. R. Parker and Pavol Federl [19] employed a genetic algorithm for recognizing the characters on number plates. Statistical approaches such as the Hausdorff distance [11], support vector machines [12], template matching [13], and hidden Markov models [14] also exist in the literature.

This paper focuses mainly on detecting and recognizing speed signs under different noisy conditions. Our detection algorithm exploits appearance-based features to limit the effects of noise: it detects the potential speed sign regions in the input image, and these regions are then processed in the recognition phase to analyze their inner parts. Section 3 of this paper discusses the proposed method and section 4 presents the obtained results.

3. Proposed Method

The steps involved in each phase of the algorithm are shown as a flowchart in Fig. 1.

A. Speed Sign Detection

In this phase, also known as the hypothesis generation phase, all the possible speed signs in the input image are detected. The input image is a collection of multiple objects, only a few of which possess speed sign like features. Before trying to interpret the speed sign, it must first be located in the input image and separated from the rest of the image so that it can be processed separately. Our detection algorithm uses the connected component labeling (CCL) technique to assign a unique label value to all the pixels representing the white region of the speed sign. These white pixels together form a region whose features match those of the speed sign. The labeling process also produces other regions in the image, and we use the region features to generate the hypothesis. The original two-pass CCL algorithm and how it is adapted for grayscale images are explained in the next section.

1) Two-Pass Connected Component Labeling: Our CCL algorithm is an extension of the algorithm proposed by Rosenfeld and Pfaltz [21]. The image is scanned twice, from left to right and top to bottom. During the first scan, the 4-neighbor mask shown in Fig. 2 is considered at each pixel; the pixel "e" in the mask is the current pixel.

Fig. 1 Flow chart of the algorithm

Fig. 2 Forward raster scan mask

If the value at "e" is equal to 0, the mask is moved to the next pixel. If the value is equal to 1 and all of the 4-neighbor labels are zeros, a new label is assigned to the current pixel. Otherwise, the minimum label value among the 4-neighbors is assigned to the current pixel, and if there is more than one label value among the 4-neighbors, those labels are considered equivalent and stored in a one-dimensional array.

g(x, y) = { 0,                 if b(x, y) = 0;
            m, (m = m + 1),    if b(x, y) = 1 and ∀(i, j) ∈ N: g(i, j) = 0;
            g_min(x, y),       otherwise }                                   (Equ. 1)

g_min(x, y) = min{ g(i, j) | (i, j) ∈ N and b(i, j) = 1 }                    (Equ. 2)

Note that b(x, y) represents the intensity value of the input image b at location (x, y), g(x, y) represents the label value assigned to the pixel, N is the neighborhood region, and m represents the new label value. After the first scan of the image, the label equivalence information is resolved to eliminate the redundant labels, and these resolved labels are written back in the second scan of the image.
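The two-pass scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: it assumes the "4-neighbor mask" contains the four already-visited neighbors of the forward raster scan, and it resolves label equivalences with a simple union-find, as the text describes.

```python
import numpy as np

def two_pass_ccl(b):
    """Two-pass connected component labeling of a binary image.

    First scan: assign each foreground pixel a provisional label from its
    already-visited neighbors (Equ. 1) and record label equivalences.
    Between scans the equivalences are resolved with union-find; the
    second scan rewrites each pixel with its resolved label.
    """
    h, w = b.shape
    g = np.zeros((h, w), dtype=int)
    parent = [0]                      # parent[k]: union-find parent of label k

    def find(k):                      # root of label k, with path halving
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k

    m = 0
    for y in range(h):
        for x in range(w):
            if b[y, x] == 0:
                continue
            # labels of the already-visited neighbors in the scan mask
            nbrs = []
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and b[ny, nx]:
                    nbrs.append(g[ny, nx])
            if not nbrs:
                m += 1                # new label, its own root
                parent.append(m)
                g[y, x] = m
            else:
                g[y, x] = min(nbrs)   # minimum neighbor label
                for n in nbrs:        # labels meeting here are equivalent
                    ra, rb = find(n), find(g[y, x])
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    # second scan: replace provisional labels by resolved labels
    for y in range(h):
        for x in range(w):
            if g[y, x]:
                g[y, x] = find(g[y, x])
    return g
```

After the second pass, every pixel of a connected white region carries the same label, which is what the region-selection step operates on.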

To adapt the algorithm for grayscale images, some changes are made following the ideas mentioned earlier.

This algorithm employs the same mask and scanning method. In a binary image, two adjacent pixels are considered connected if their values are equal, and disconnected otherwise. In a grayscale image, two adjacent pixels can be considered connected if their intensity values are close to each other: the difference between their intensity values is calculated, and if it falls below a chosen threshold the two pixels are considered connected. At each pixel the mask is applied and this connectivity condition is checked between the center pixel and each of its 4-neighbor pixels. If none of the neighbor pixels satisfies the connectivity condition, a new label is assigned to the center pixel. If only one of them satisfies the condition, its label is assigned to the center pixel. If two or more of them satisfy the condition, their labels are considered equivalent and the minimum label among them is assigned to the center pixel. The following equations represent these steps.

g(x, y) = { m, (m = m + 1),    if ∀(i, j) ∈ N: |b(x, y) − b(i, j)| > T;
            g_min(x, y),       otherwise }                                   (Equ. 3)

g_min(x, y) = min{ g(i, j) | (i, j) ∈ N and |b(x, y) − b(i, j)| ≤ T }        (Equ. 4)

Note that T represents the threshold value in the above equations. The label equivalences are stored in a one-dimensional array and resolved using the union-find algorithm. In the second scan the pixels' labels are updated with their equivalent labels.
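The grayscale adaptation changes only the connectivity test: instead of comparing binary values, the intensity difference between the center pixel and each mask neighbor is compared against the threshold T (Equ. 3 and 4). A self-contained sketch of that variant, under the same assumptions as the binary version above:

```python
import numpy as np

def grayscale_ccl(b, T=35):
    """Two-pass CCL adapted to grayscale images: the center pixel and a
    mask neighbor are connected when |b(x,y) - b(i,j)| <= T."""
    h, w = b.shape
    g = np.zeros((h, w), dtype=int)
    parent = [0]                      # union-find parents of labels

    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k

    m = 0
    for y in range(h):
        for x in range(w):
            nbrs = []
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w
                        and abs(int(b[y, x]) - int(b[ny, nx])) <= T):
                    nbrs.append(g[ny, nx])
            if not nbrs:              # no connected neighbor: new label
                m += 1
                parent.append(m)
                g[y, x] = m
            else:                     # minimum connected label; merge rest
                g[y, x] = min(nbrs)
                for n in nbrs:
                    ra, rb = find(n), find(g[y, x])
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    for y in range(h):                # second scan: resolved labels
        for x in range(w):
            g[y, x] = find(g[y, x])
    return g
```

Unlike the binary version, every pixel receives a label here, since there is no explicit background value; nearly uniform areas simply collapse into large regions.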

2) Region Selection: After labeling the input image and dividing it into a set of regions, we formed two conditions on the region parameters to select possible speed sign regions. The first condition checks the size of a region and discards it if its size falls outside the range of 1,000 to 10,000 pixels; this ignores regions that are too small or too big to represent a speed sign. The U.S. speed sign has an aspect ratio of 0.8 or 0.833, so an aspect ratio range of 0.5 to 1 is chosen as the second condition. The regions that satisfy both conditions are hypothesized as speed signs.
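The two conditions can be expressed as a simple filter over the labeled image. This sketch assumes the aspect ratio is bounding-box width divided by height (consistent with a speed sign being taller than it is wide, giving the 0.8 and 0.833 values quoted above); the function name and return format are illustrative.

```python
import numpy as np

def select_sign_regions(labels, min_area=1000, max_area=10000,
                        ar_lo=0.5, ar_hi=1.0):
    """Hypothesize speed-sign regions from a labeled image.

    A region passes if its pixel count lies in [min_area, max_area] and
    its bounding-box aspect ratio (width / height) lies in [ar_lo, ar_hi].
    Returns (label, (top, left, bottom, right)) for each candidate.
    """
    candidates = []
    for lab in np.unique(labels):
        if lab == 0:                      # skip background label
            continue
        ys, xs = np.nonzero(labels == lab)
        area = ys.size
        if not (min_area <= area <= max_area):
            continue                      # too small or too big for a sign
        hgt = ys.max() - ys.min() + 1
        wid = xs.max() - xs.min() + 1
        if ar_lo <= wid / hgt <= ar_hi:
            candidates.append((lab, (ys.min(), xs.min(), ys.max(), xs.max())))
    return candidates
```

Each returned bounding box is then cropped out of the input image and handed to the recognition phase.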

B. Region Recognition and Speed-limit Extraction

The recognition phase first verifies the hypothesis generated by the detection phase and, if it holds, extracts the assigned speed limit. Both tasks require recognizing the inner parts of the speed sign: the upper half components validate its identity, while the lower half components give the assigned speed limit. The recognition phase broadly consists of three steps.

1) Pre-processing: In this step the detected region is resized to the size of the database speed sign image, converted into a black and white image, and then each connected component of it is labeled using the CCL technique.

2) Feature Extraction: The feature vector for each component of the region is calculated in this step. Hu [22] proposed a set of seven moments that are invariant under transformations such as rotation, scaling and translation. However, we found that for the speed sign characters the last three of the seven moments take very small values and exhibit very little difference among different patterns. Hence, we formed our feature vector from the first four Hu moments, which are listed below.

φ1 = η20 + η02
φ2 = (η20 − η02)² + 4η11²
φ3 = (η30 − 3η12)² + (3η21 − η03)²
φ4 = (η30 + η12)² + (η21 + η03)²                                             (Equ. 5)

where η_pq is the normalized central moment of order (p + q). To bring the four moments to comparable magnitudes, each is multiplied by an appropriate power of ten, and the resulting feature vector is given as

[M] = [log(φ1 · 10^a1)  log(φ2 · 10^a2)  log(φ3 · 10^a3)  log(φ4 · 10^a4)]   (Equ. 6)

where the exponents a_i are the scaling factors chosen for the individual moments.
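The feature extraction step can be sketched directly from the definitions above. This is an illustrative implementation, not the authors' code: the normalized central moments η_pq are computed from the pixel coordinates of a binary component, and the power-of-ten balancing factors in the log step are assumed values standing in for the paper's unstated ones.

```python
import numpy as np

def hu_feature_vector(comp):
    """First four Hu moments of a binary component (Equ. 5), log-scaled
    as in Equ. 6. `comp` is a 2-D binary array (the component mask)."""
    ys, xs = np.nonzero(comp)
    m00 = float(ys.size)                  # zeroth moment: component area
    xb, yb = xs.mean(), ys.mean()         # centroid

    def eta(p, q):
        # normalized central moment: mu_pq / m00^(1 + (p+q)/2)
        mu = np.sum((xs - xb) ** p * (ys - yb) ** q)
        return mu / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2

    phis = np.array([phi1, phi2, phi3, phi4])
    scale = np.array([10.0, 1e3, 1e4, 1e4])   # assumed balancing factors
    # small epsilon keeps the log defined when a moment vanishes
    return np.log(np.abs(phis * scale) + 1e-12)
```

Because the η_pq are normalized by powers of the area, the resulting vector is essentially unchanged when the same shape appears at a different scale, which is the invariance the recognition scheme relies on.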

3) Classification: In this step the features of each component of the detected region are compared with those of the speed sign components. The feature vectors of the speed sign letters and numbers are already stored in the database. The upper half and lower half components are compared in different ways. First, the locations of the upper half components are matched with the locations of the letters of the speed sign. Then the correlation coefficient between the feature vectors of the location-matched components is calculated using equation (7).

C = Σ_{i=1}^{4} (Mo(i) − M̄o)(Mt(i) − M̄t) / √[ (Σ_{i=1}^{4} (Mo(i) − M̄o)²)(Σ_{i=1}^{4} (Mt(i) − M̄t)²) ]   (Equ. 7)

where Mo and Mt represent the feature vectors of the object and template respectively, and M̄o and M̄t represent their mean values. If the value of C is greater than a chosen threshold value (T = 0.7), the two components are considered a match, and the number of matches found is counted. The upper half of the speed sign contains a total of 10 letters. If at least 6 matches are found, the detected region is classified as a speed sign region and its lower half components are processed to extract the assigned speed limit.

The lower half of the speed sign contains a two-digit number representing the assigned speed limit. We have used a feed-forward neural network with a back-propagation learning algorithm to recognize the numbers from 0 to 8. Our neural network consists of 4 input neurons, 7 hidden layer neurons and 9 output neurons. Initially the weights of the network are chosen randomly, and the network is trained using the feature vectors of the numbers from 0 to 8. The feature vectors of the lower half components of a region classified as a speed sign region are fed to this trained network, and the assigned speed limit is extracted from its output.
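A sketch of the 4-7-9 network trained with back-propagation follows. The layer sizes come from the text; everything else is assumed for illustration: sigmoid hidden units, a softmax output with cross-entropy loss, and synthetic stand-in feature vectors (one per digit 0-8) rather than the paper's actual Hu-moment features.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 input, 7 hidden, 9 output neurons (digits 0-8), random initial weights
W1 = rng.normal(0.0, 0.5, (7, 4)); b1 = np.zeros(7)   # input -> hidden
W2 = rng.normal(0.0, 0.5, (9, 7)); b2 = np.zeros(9)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return hidden activations and softmax class probabilities."""
    h = sigmoid(W1 @ x + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())            # stabilized softmax
    return h, e / e.sum()

X = rng.normal(0.0, 1.0, (9, 4))       # synthetic feature vectors
Y = np.eye(9)                          # one-hot targets for digits 0-8

def train(epochs=300, lr=0.5):
    """Plain stochastic gradient descent with back-propagation."""
    global W1, b1, W2, b2
    losses = []
    for _ in range(epochs):
        total = 0.0
        for x, y in zip(X, Y):
            h, p = forward(x)
            total += -np.log(p[y.argmax()] + 1e-12)   # cross-entropy
            d2 = p - y                          # softmax + CE gradient
            d1 = (W2.T @ d2) * h * (1.0 - h)    # through sigmoid layer
            W2 -= lr * np.outer(d2, h); b2 -= lr * d2
            W1 -= lr * np.outer(d1, x); b1 -= lr * d1
        losses.append(total)
    return losses
```

At inference time, the index of the largest output probability is taken as the recognized digit, and the two recognized lower-half digits together give the assigned speed limit.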

4. Results

A. Detection Phase

Two sample input images and their respective detection

phase results are shown in the Fig. 3. A threshold level of 35

is chosen to check the connectivity condition in the CCL

algorithm used in the detection phase. The white region in

the output image represents the detected region.

Fig. 3 Hypothesis generation

B. Recognition Phase

The detected part of the input image is pre-processed and decomposed into its constituent parts by connected component labeling.

Fig. 4 Pre-processing

A feature vector is calculated for each component of the binary image using Hu's moments. The upper half components are analyzed first and the number of matches found is counted. For this particular example we found 8 matches, and thus the hypothesis is validated. In the next step the lower half components are recognized using the neural network, and the obtained results are shown in Fig. 6.

Fig. 5 Recognizing the inner parts

Fig. 6 Successful detection and recognition of speed signs

5. Conclusions

The detection and recognition of speed signs under noisy conditions such as partial occlusions, deformations and varying perspectives is performed with the proposed algorithm. The use of region features instead of shape-based features is found to be more effective for detecting speed signs. The proposed detection algorithm never missed the speed sign when tested on different traffic scenes. Our recognition algorithm makes use of the structural and invariant features of the regions, and the proposed algorithm produced promising results when tested under different conditions.

References

[1] C. Gu, J. J. Lim, P. Arbelaez, & J. Malik, "Recognition using regions," (2009) IEEE Conference on Computer Vision and Pattern Recognition, USA, pp. 1030-1037.
[2] F. Moutarde, A. Bargeton, A. Herbin, & L. Chanussot, "Modular traffic signs recognition applied to on-vehicle real-time visual detection of American and European speed limit signs," (2007) IEEE Intelligent Vehicles Symposium Proc., Istanbul, Turkey, pp. 1122-1126.
[3] Angela Tam, Hua Shen, Liu, & Xiaoou Tang, "Quadrilateral signboard detection and text extraction," (2003) CISST, pp. 708-713.
[4] N. Barnes & A. Zelinsky, "Real-time radial symmetry for speed sign detection," (2004) IEEE Intelligent Vehicles Symposium (IV), Parma, Italy, pp. 566-571.
[5] N. Barnes, "Improved signal to noise ratio and computational speed for gradient-based detection algorithms," (2005) Proc. 2005 IEEE International Conference on Robotics and Automation (ICRA), Australia, pp. 4661-4666.
[6] J. Torresen, J. W. Bakke, & L. Sekanina, "Efficient recognition of speed limit signs," (2004) Proc. 7th International IEEE Conference on Intelligent Transportation Systems, pp. 652-656.
[7] C.-Y. Fang, S.-W. Chen, & C.-S. Fuh, "Road-sign detection and tracking," (2003) IEEE Trans. Vehicular Technology, vol. 52, no. 5, pp. 1329-1341.
[8] A. Lorsakul & J. Suthakorn, "Traffic sign recognition for intelligent vehicle/driver assistance system using neural network on OpenCV," (2007) Proc. 4th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), POSTECH, Korea, pp. 279-284.
[9] M. Betke & N. Makris, "Fast object recognition in noisy images using simulated annealing," (1994) Technical report, Massachusetts Institute of Technology, Cambridge, MA, USA.
[10] D. M. Gavrila, "Traffic sign recognition revisited," (1999) Mustererkennung (DAGM), Bonn, Germany, Springer Verlag.
[11] F. Martín, M. García, & J. L. Alba, "New methods for automatic reading of VLPs (vehicle license plates)," (2002) IASTED International Conference on Signal Processing, Pattern Recognition, and Applications.
[12] K. K. Kim, K. I. Kim, J. B. Kim, & H. J. Kim, "Learning-based approach for license plate recognition," (2000) Proc. IEEE Signal Processing Society Workshop, Neural Networks for Signal Processing, vol. 2, pp. 614-623.
[13] Y.-P. Huang, S.-Y. Lai, & W.-P. Chuang, "A template-based model for license plate recognition," (2004) Proc. IEEE Int. Conf. on Networking, Sensing and Control, pp. 737-742.
[14] T. D. Duan, T. L. H. Du, T. V. Phuoc, & N. V. Hoang, "Building an automatic vehicle license-plate recognition system," (2005) Proc. Intl. Conf. in Computer Science (RIVF), pp. 59-63.
[15] A. Broumandnia & M. Fathy, "Application of pattern recognition for Farsi license plate recognition," (2005) ICGST International Conference on Graphics, Vision and Image Processing (GVIP-05).
[16] S.-L. Chang, L.-S. Chen, Y.-C. Chung, & S.-W. Chen, "Automatic license plate recognition," (2004) IEEE Trans. on Intelligent Transportation Systems, vol. 5, no. 1, pp. 42-53.
[17] C. Anagnostopoulos, T. Alexandropoulos, S. Boutas, V. Loumos, & E. Kayafas, "A template-guided approach to vehicle surveillance and access control," (2005) Proc. IEEE Conf. on Advanced Video and Signal Based Surveillance, pp. 534-539.
[18] C. Anagnostopoulos, E. Kayafas, & V. Loumos, "Digital image processing and neural networks for vehicle license plate identification," (2000) Journal of Electrical Engineering, vol. 1, no. 2, pp. 2-7.
[19] J. R. Parker & P. Federl, "An approach to license plate recognition," (1995) Laboratory for Computer Vision, Computer Graphics Laboratory, University of Calgary.
[20] R. D. Yapa & K. Harada, "Connected component labeling algorithms for gray-scale images and evaluation of performance using digital mammograms," (2007) International Journal of Computer Science and Network Security, vol. 8, no. 6, pp. 146-152.
[21] A. Rosenfeld & J. L. Pfaltz, "Sequential operations in digital picture processing," (1966) Journal of the ACM, vol. 13, pp. 471-494.
[22] M. K. Hu, "Visual pattern recognition by moment invariants," (1962) IRE Trans. Information Theory, vol. 8, issue 2, pp. 179-187.