Edge Detection Using Matlab: Final Report


    INTRODUCTION

    Edge detection is a type of image segmentation technique that determines the presence of an edge or line in an image and outlines it in an appropriate way.

    The main purpose of edge detection is to simplify the image data in order to minimize

    the amount of data to be processed. Generally, an edge is defined as the boundary

    pixels that connect two separate regions with changing image amplitude attributes

    such as different constant luminance and tristimulus values in an image. It contains

    rich information, step property, shape etc, which is able to describe the target object.

    There are two types of edges: the step edge, where the grayscale values on the two sides differ significantly, and the roof edge, which is the turning point at which the gray value changes from increasing to decreasing. An edge is essentially the mark and reflection of a local discontinuity in the image: it signals the end of one region and the beginning of another. The detected edge may become wide or broken in the presence of noise and ambiguity.

    The detection operation begins with the examination of the local discontinuity

    at each pixel element in an image. Amplitude, orientation, and location of a particular

    subarea in the image that is of interest are essentially important characteristics of

    possible edges. Based on these characteristics, the detector has to decide whether each

    of the examined pixels is an edge or not. This paper gives an overview of first and

    second order derivative edge detections, edge fitting detection model as well as the

    detector performance evaluation. Also, several Matlab functions that underlie the

    principle of first and second order derivative edge detection techniques are written.

    The results of the simulations were analyzed and compared to the theoretical results of

    the edge detectors introduced. By writing simple edge detection Matlab functions, one

    can have a better understanding of the various edge detection algorithms developed in

    the past.

    There are an extremely large number of edge detection operators available,

    each designed to be sensitive to certain types of edges. Variables involved in the

    selection of an edge detection operator include Edge orientation, Noise environment

    and Edge structure. The geometry of the operator determines a characteristic direction


    in which it is most sensitive to edges. Operators can be optimized to look for

    horizontal, vertical, or diagonal edges. Edge detection is difficult in noisy images,

    since both the noise and the edges contain high frequency content. Attempts to reduce

    the noise result in blurred and distorted edges. Operators used on noisy images are

    typically larger in scope, so they can average enough data to discount localized noisy

    pixels. This results in less accurate localization of the detected edges. Not all edges

    involve a step change in intensity. Effects such as refraction or poor focus can result

    in objects with boundaries defined by a gradual change in intensity. The operator

    needs to be chosen to be responsive to such a gradual change in those cases. So, there

    are problems of false edge detection, missing true edges, edge localization, high

    computational time and problems due to noise etc. Therefore, the objective is to do

    the comparison of various edge detection techniques and analyze the performance of

    the various techniques in different conditions.

    In this project we have compared and implemented five techniques of edge

    detection: Canny edge detection, the Sobel operator, the Roberts cross operator, the Prewitt operator and zero-crossing edge detection. The Sobel method finds edges

    using the Sobel approximation to the derivative. It returns edges at those points where

    the gradient of I is maximum.

    The Prewitt method finds edges using the Prewitt approximation to the

    derivative. It returns edges at those points where the gradient of I is maximum. The

    Roberts method finds edges using the Roberts approximation to the derivative. It

    returns edges at those points where the gradient of I is maximum.

    The Laplacian of Gaussian method finds edges by looking for zero crossings

    after filtering I with a Laplacian of Gaussian filter. The zero-cross method finds edges

    by looking for zero crossings after filtering I with a filter you specify.

    The Canny method finds edges by looking for local maxima of the gradient

    of I. The gradient is calculated using the derivative of a Gaussian filter. The method

    uses two thresholds, to detect strong and weak edges, and includes the weak edges in

    the output only if they are connected to strong edges. This method is therefore less

    likely than the others to be fooled by noise, and more likely to detect true weak edges.
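    As a quick illustration of these five built-in methods, a minimal MATLAB sketch (assuming the Image Processing Toolbox is available; 'test.png' is a placeholder file name) applies each detector to the same grayscale image:

    % Sketch: compare the five edge detectors discussed above.
    % Assumes the Image Processing Toolbox; 'test.png' is a placeholder file name.
    I = imread('test.png');
    if size(I, 3) == 3
        I = rgb2gray(I);          % the detectors expect a grayscale image
    end

    methods = {'sobel', 'prewitt', 'roberts', 'log', 'canny'};
    figure;
    subplot(2, 3, 1); imshow(I); title('Original');
    for m = 1:numel(methods)
        BW = edge(I, methods{m}); % each call uses its own automatically chosen threshold
        subplot(2, 3, m + 1); imshow(BW); title(methods{m});
    end

    Each call to edge chooses its own threshold automatically; supplying an explicit threshold, e.g. edge(I, 'canny', [0.05 0.2]), makes the comparison reproducible.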


    Literature Review

    2.1. Edge Detection

    Torre, V. Poggio, Tomaso A. in March 1986. This paper appears in the proceedings of

    the IEEE. Volume: 8, Issue: 2 Page(s): 147-163

    Edge detection is the process that attempts to characterize the intensity changes

    in the image in terms of the physical processes that have originated them. A critical,

    intermediate goal of edge detection is the detection and characterization of significant

    intensity changes. This paper discusses this part of the edge detection problem. This

    shows that this part of edge detection consists of two steps, a filtering step and a

    differentiation step. Following this perspective, the paper discusses in detail the

    following theoretical aspects of edge detection. 1) The properties of different types of filters (with minimal uncertainty, with a bandpass spectrum, and with limited support) are derived. Minimal uncertainty filters optimize a tradeoff between computational

    efficiency and regularizing properties. 2) Relationships among several 2-D differential

    operators are established. In particular, we characterize the relation between the

    Laplacian and the second directional derivative along the gradient. Zero crossings of the

    Laplacian are not the only features computed in early vision. 3) Geometrical and

    topological properties of the zero-crossings of differential operators are studied in

    terms of transversality and Morse theory.

    2.2. Study And Comparison of Edge Detection Algorithms

    Joshi S. R., Koju R. in Nov. 2012. This paper appears in the proceedings of the IEEE, vol. 2, no. 3, pp. 23-25.

    Edge characterizes boundaries. If the edges of images could be identified

    accurately, all of the objects can be located and basic parameters such as area, perimeter

    and shape can be measured. This paper proposes fusion of Haar wavelet and Prewitt

    operator and compares its performance with frequently used gradient edge detection

    algorithms and canny edge detection method in different conditions. Canny edge

    detection algorithm is implemented with adaptive parameters. Software programs for all

    edge detectors are developed in Visual C#.NET. It has been shown that the Canny edge detection algorithm with adaptive parameters performs better in almost all conditions in comparison to other operators, at the expense of its execution time.

    2.3. Canny Edge Detection on a Virtual Hexagonal Image Structure

    Xiangjian He, Jianmin Li in Dec, 2009. This paper appears in the Proceedings of the

    IEEE. Page(s): 167-172

    Canny edge detector is the most popular tool for edge detection and has many

    applications in the areas of image processing, multimedia and computer vision.

    The Canny algorithm optimizes the edge detection through noise filtering using an

    optimal function approximated by the first derivative of a Gaussian. It identifies

    the edge points by computing the gradients of light intensity function based on the fact

    that the edge points likely appear where the gradient magnitudes are large. Hexagonal

    structure is an image structure alternative to traditional square image structure. Because

    all the existing hardware for capturing and displaying images is produced based on the square structure, an approach that uses linear interpolation is described for conversion between square and hexagonal structures. Gaussian filtering together with

    gradient computation is performed on the hexagonal structure. The experimental results

    show the edge detection on hexagonal structure using static and video images, and the

    comparison with the results using Canny algorithm on square structure.

    2.4. Multiscale Edge Detection Based On The Sobel Method

    Lopez-Molina, C. Bustince in Nov, 2011. This paper appears in the Proceedings of the

    IEEE. Volume: 2, Issue: 3 Page(s): 666-671

    The multiscale techniques for edge detection represent an effort to combine the

    spatial accuracy of small-scale methods with the ability to deal with spurious responses

    inherent to the large scale ones. In this work we introduce a multiscale extension of

    the Sobel method for edge detection based on Gaussian smoothing and fine-to-

    coarse edge tracking. We include examples illustrating the procedure and its results, as

    well as some quantitative measurements of the improvement obtained with the

    multiscale approach with respect to the original one.


    2.5. A Hardware Architecture of Prewitt Edge Detection

    Seif, A. Salut, M. M. Marsono in Nov, 2010. This paper appears in the Proceedings of

    the IEEE. Page(s): 20-21

    This paper presents an efficient hardware architecture of Prewitt edge detection for

    high speed image processing applications. The hardware design is implemented

    by using Verilog hardware description language, whereas the software part is developed

    by using Matlab. The zero computational error analysis indicates that the proposed

    architecture produces outputs similar to the ideal results obtained by Matlab software

    simulation. The architecture is capable of operating at a clock frequency of 145 MHz at

    550 frames per second (fps), which implies that the system is suitable for

    both image processing and computer vision applications.

    2.6. Detection Of Composite Edges

    Ghosal, S. Mehrotra in Jan 1994. This paper appears in the Proceedings of the

    IEEE.Volume: 3, Issue: 1 Page(s): 14-25

    The paper presents a new parametric model-based approach to high-precision

    composite edge detection using orthogonal Zernike moment-based operators. It deals

    with two types of composite edges: (a) generalized step and (b) pulse/staircase edges. A

    2-D generalized step edge is modeled in terms of five parameters: two gradients on two

    sides of the edge, the distance from the center of the candidate pixel, the orientation of

    the edge and the step size at the location of the edge. A 2-D pulse/staircase edge is

    modeled in terms of two steps located at two positions within the mask, and

    the edge orientation. A pulse edge is formed if the steps are of opposite polarities

    whereas a staircase edge results from two steps having the same polarity. Two complex

    and two real Zernike moment-based masks are designed to determine parameters of both

    the 2-D edge models. For a given edge model, estimated parameter values at a point are

    used to detect the presence or absence of that type of edge. Extensive noise analysis is

    performed to demonstrate the robustness of the proposed operators. Experimental results

    with intensity and range images are included to demonstrate the efficacy of the proposed

    edge detection technique as well as to compare its performance with the geometric

    moment-based step edge detection technique and Canny's (1986) edge detector.

    Problems Identified

    The differential masks act as high-pass filters which tend to amplify noise.

    To reduce the effects of noise, the image needs to be smoothed first with a low

    pass filter.

    The noise suppression-localization tradeoff: a larger filter reduces noise, but

    worsens localization (i.e., it adds uncertainty to the location of the edge) and vice

    versa.

    Edge thinning and linking are required to obtain good contours.

    Changes in lighting conditions.

    Luminance and geometrical features.


    Methodology:

    4.1. First Order Derivative Edge Detection

    There are two methods for first order derivative edge detection.

    4.1.1. Evaluating The Gradients Generated Along Two Orthogonal

    Directions

    An edge is judged present if the gradient of the image exceeds a defined threshold value T. The gradient can be computed from the derivatives along the two orthogonal axes:

    G(x, y) = ∂F(x, y)/∂x · cosθ + ∂F(x, y)/∂y · sinθ                              (1)

    The gradient is estimated in a direction normal to the edge. The spatial gradient amplitude can be written as

    G(j, k) = √( [G_R(j, k)]² + [G_C(j, k)]² )                                      (2)

    A simple discrete row and column gradient is given by

    G_R(j, k) = F(j, k) - F(j, k-1)                                                 (3)

    G_C(j, k) = F(j, k) - F(j+1, k)                                                 (4)

    Running the difference of contiguous pixels in the horizontal and vertical directions is found to be inefficient, since the edges cannot be delineated and the detector is quite sensitive to small fluctuations. The diagonal edge gradients proposed by Roberts are

    G_1(j, k) = F(j, k) - F(j+1, k+1)                                               (5)

    G_2(j, k) = F(j, k+1) - F(j+1, k)                                               (6)


    Fig. 1 the convention for 3 by 3 edge detection operator

    Roberts model is still susceptible to fluctuations in the image even though the edges

    can be properly positioned. Prewitt has developed another edge gradient detector

    which uses a different approach to approximate row and column edge gradients. The

    proposed gradients are defined as

    G_R(j, k) = 1/(K+2) · [ (A_3 + K·A_6 + A_9) - (A_1 + K·A_4 + A_7) ]             (7)

    G_C(j, k) = 1/(K+2) · [ (A_1 + K·A_2 + A_3) - (A_7 + K·A_8 + A_9) ]             (8)

    The equations above follow the convention shown in Fig. 1, with the nine pixels of the 3 by 3 window denoted A_1 to A_9 row by row. The K in the equations

    is equal to one, so that row and column gradient are normalized to provide unit gain

    positive weighted and unit gain negative weighted averages about a separated edge

    position. Sobel edge detector doubles the north, south, west, and east pixels of the

    Prewitt operator (i.e. K=2). This makes the Sobel edge detector more sensitive to

    diagonal edge than horizontal and vertical edges [4]. Frei and Chen [1] have adapted

    the Sobel model and proposed a pair of isotropic operators which makes K equal to √2. This makes the gradient for horizontal, vertical, and diagonal edges the same at

    the edge center. The isotropic smoothed weighting operator proposed by Frei and

    Chen can easily pick up subtle edge detail and produce thinner edge lines, but it also

    increases the possibility of erroneously detecting noise as real edge points. In [5], Ding analyzed the one-dimensional outputs of a general edge detector using the differentiation method. The 1-D outputs reveal that the differentiation method is quite susceptible to noise and unable to accurately detect step edges corrupted by noise and ramp edges.
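    As a rough sketch of equations (7) and (8) (not the project's own code; the image file, the threshold and the value of K are placeholders), the masks can be applied with conv2 and combined using equation (2):

    % Sketch: first order derivative edge detection with the Prewitt/Sobel masks.
    % K = 1 gives the Prewitt operator, K = 2 the Sobel operator (see eqs. (7)-(8)).
    I = double(imread('cameraman.tif'));   % built-in test image; any grayscale image works
    K = 2;                                 % set to 1 for Prewitt, sqrt(2) for Frei-Chen

    Hrow = (1 / (K + 2)) * [-1 0 1; -K 0 K; -1 0 1];   % row (horizontal) gradient mask
    Hcol = (1 / (K + 2)) * [ 1 K 1;  0 0 0; -1 -K -1]; % column (vertical) gradient mask

    GR = conv2(I, Hrow, 'same');
    GC = conv2(I, Hcol, 'same');
    G  = sqrt(GR.^2 + GC.^2);              % spatial gradient amplitude, eq. (2)

    T  = 0.25 * max(G(:));                 % placeholder threshold
    E  = G >= T;                           % binary edge map
    imshow(E);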


    Pratt [4] mentioned that properly extending the size of the neighborhoods over

    which the differential gradients are computed can alleviate the inability to detect the

    edges precisely in a high noise environment. The first order derivative edge detectors

    do provide solutions to edge detection process but none of the detectors can localize

    the edge to a single pixel.

    4.1.2. Utilizing A Set Of Discrete Edge Templates With Different

    Orientations

    This method convolves an image F(j, k) with a set of template gradient impulse response arrays H_m(j, k). The general form of the edge template gradient is

    G(j, k) = MAX{ |G_1(j, k)|, ..., |G_m(j, k)|, ..., |G_M(j, k)| }                (9)

    where

    G_m(j, k) = F(j, k) ⊛ H_m(j, k)                                                 (10)

    and ⊛ denotes convolution. The edge angle is determined by the direction of the largest gradient. The direction of that particular template is not the exact orientation of the edge; in fact, the direction is only an approximation. The exact orientation lies within π/4 of the orientation that gives the maximum gradient. The following equation shows a directional

    gradient proposed by Kirsch:

    G(j, k) = MAX_{i=0,...,7} | 5·S_i - 3·T_i |                                     (11)

    where

    S_i = A_i + A_{i+1} + A_{i+2}

    T_i = A_{i+3} + A_{i+4} + A_{i+5} + A_{i+6} + A_{i+7}                           (12)

    Please note that the A_i are the eight neighbors of pixel (j, k) in the compass gradient convention, with the subscripts evaluated modulo 8.

    3-level and 5-level impulse response arrays are the other two 3 by 3 templates

    proposed by Robinson. Nevatia and Babu have developed the gain-normalized 5 by

    5 masks that can be used to detect edges in various degree increments. The larger

    template size will result in finer quantization of the edge orientation angle, and less

    noise. The tradeoff is that more computational power will be required.
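    A minimal sketch of the template-matching idea of equation (11), assuming the eight Kirsch templates are generated by rotating the border weights of the north mask (the image and threshold are placeholders):

    % Sketch of compass (template) gradient edge detection in the spirit of eq. (11).
    % The eight Kirsch templates are rotations of the "north" mask; 'cameraman.tif'
    % is a stand-in image.
    I = double(imread('cameraman.tif'));

    border = [5 5 5 -3 -3 -3 -3 -3];   % clockwise border weights of the north Kirsch mask
    idx    = [1 4 7 8 9 6 3 2];        % clockwise border positions of a 3x3 matrix (column-major)
    G = zeros(size(I));
    for k = 0:7
        h = zeros(3);
        h(idx) = circshift(border, [0 k]);       % rotate the template by k * 45 degrees
        G = max(G, abs(conv2(I, h, 'same')));    % eq. (11): keep the strongest template response
    end
    E = G >= 0.25 * max(G(:));                   % placeholder threshold
    imshow(E);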

    4.1.3. Threshold Selection:

    The edge is detected by comparing the edge gradient to a defined threshold

    value. This threshold represents the sensitivity of the edge detector. When

    dealing with noisy edges, one could miss valid edges while creating noise-induced

    false edges. Edge detection can be represented by the following conditional probability densities of G(j, k):

    PD = p(G ≥ t | edge) = ∫_t^∞ p(G | edge) dG

    PF = p(G ≥ t | no edge) = ∫_t^∞ p(G | no edge) dG                               (13)

    Fig. 2 Conditional probability densities of edge gradients [6].

    where PD and PF represent the probability of correct detection and the probability

    of false edge detection respectively, and t denotes the detection threshold. Fig. 2 exhibits the conditional probability densities of the edge gradient, which differ in edge and non-edge regions. The probability of misclassification can be represented as

    PE = [1 - PD]·P(edge) + PF·P(no edge)                                           (14)

    According to the Neyman-Pearson test, a threshold t is chosen to minimize PF for a fixed

    PD. An ideal threshold must produce minimum error PE. This condition can be

    achieved if the following maximum likelihood ratio test associated with the Bayes

    minimum error decision rule of classical decision theory is satisfied:

    p(G | edge) / p(G | no edge)  ≥  P(no edge) / P(edge)                           (15)

    The conditional densities for 2 by 2 and 3 by 3 edge detection operators were

    derived by Abdou. The densities apply when the width of a ramp edge is one (w=1)

    and additive Gaussian noise is present. However, the reliability of the stochastic edge model and the analytic difficulties in deriving the edge gradient conditional densities are two obstacles when determining the optimal threshold for an edge detector.

    Table 1. The relation of G(x, y) with F(x, y) [4]

    F(x, y) is constant:                       G(x, y) is zero.

    F(x, y) is changing linearly:              G(x, y) is zero.

    Rate of change of F(x, y) is increasing:   The sign of G(x, y) changes at the point of inflection of F(x, y) (indicates the presence of an edge).

    Abdou and Pratt have developed an approach based on pattern recognition

    techniques. Their design produced a table which lists the optimal threshold value for

    several 2 by 2 and 3 by 3 edge detectors and the probability of correct and false edge

    detection. When edges and non-edges are equally probable, PF equals 1 - PD. The

    edge detection threshold should be inversely proportional to SNR (Signal-to-Noise

    Ratio) [4].
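    The probabilities PD and PF of equation (13) can also be estimated empirically once a reference edge map is available. In the sketch below, the reference map, the test image and the threshold are all placeholders (the Canny output merely stands in for ground truth):

    % Sketch: empirical estimate of PD and PF (eq. (13)) for a given threshold t.
    I     = imread('circuit.tif');          % built-in grayscale test image
    truth = edge(I, 'canny');               % stand-in "ground truth" edge map for illustration
    G     = imgradient(double(I));          % gradient magnitude (Sobel differences by default)

    t  = 0.3 * max(G(:));                   % candidate detection threshold
    PD = nnz(G(truth)  >= t) / nnz(truth);  % detections among true edge pixels
    PF = nnz(G(~truth) >= t) / nnz(~truth); % false alarms among non-edge pixels
    fprintf('PD = %.3f, PF = %.3f\n', PD, PF);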

    4.2. Second Order Derivative Edge Detection

    If there is a significant spatial change in the second derivative, an edge is

    detected. The following sub-sections introduce different approaches using second

    order derivative on edge detection:

    4.2.1. Laplacian Generation in Continuous and Discrete Domain

    Since the Laplacian is

    ∇² = ∂²/∂x² + ∂²/∂y²                                                            (16)

    the edge Laplacian of an image F(x, y) in the continuous domain can be written as

    G(x, y) = -∇²{ F(x, y) }                                                        (17)

    The negative sign gives the zero crossing of G(x, y) a positive slope for an edge

    detected. Table 1 shows the behavior of G(x, y) relative to F(x, y). Computing the

    difference of slopes along each axis, as shown in the equation below, is the simplest

    way to approximate the continuous Laplacian in discrete domain.

    G(j, k) = [F(j, k) - F(j, k-1)] - [F(j, k+1) - F(j, k)] + [F(j, k) - F(j+1, k)] - [F(j-1, k) - F(j, k)]        (18)

    The convolution operation

    G(j, k) = F(j, k) ⊛ H(j, k)                                                     (19)

    with the two arrays

    H = [ 0  0  0 ]   [ 0 -1  0 ]
        [-1  2 -1 ] + [ 0  2  0 ]                                                   (20)
        [ 0  0  0 ]   [ 0 -1  0 ]

    or

    H = [ 0 -1  0 ]
        [-1  4 -1 ]                                                                 (21)
        [ 0 -1  0 ]

    can generate this four-neighbor Laplacian. The gain-normalized version of the previous impulse response is

    H = (1/4) [ 0 -1  0 ]
              [-1  4 -1 ]                                                           (22)
              [ 0 -1  0 ]

    The gain normalized eight-neighbor Laplacian impulse response array proposed by

    Prewitt is

    H = (1/8) [-1 -1 -1 ]
              [-1  8 -1 ]                                                           (23)
              [-1 -1 -1 ]

    In a separable eight-neighbor Laplacian, the difference of slopes is averaged over

    three rows and three columns. This is given by

    H = [-1  2 -1 ]   [-1 -1 -1 ]
        [-1  2 -1 ] + [ 2  2  2 ]                                                   (24)
        [-1  2 -1 ]   [-1 -1 -1 ]

    The gain-normalized version is

    H = (1/8) [-2  1 -2 ]
              [ 1  4  1 ]                                                           (25)
              [-2  1 -2 ]
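    A small sketch of the discrete Laplacian of equation (21) followed by a crude zero-crossing test (the image is a placeholder and the sign-change rule is one simple choice among several):

    % Sketch: four-neighbor Laplacian of eq. (21) followed by a simple zero-crossing test.
    I = double(imread('cameraman.tif'));
    H = [0 -1 0; -1 4 -1; 0 -1 0];            % four-neighbor Laplacian, eq. (21)
    G = conv2(I, H, 'same');

    % A pixel is marked as an edge when the Laplacian changes sign between
    % horizontally or vertically adjacent pixels (a crude zero-crossing detector).
    sx = G(:, 1:end-1) .* G(:, 2:end) < 0;    % sign change along rows
    sy = G(1:end-1, :) .* G(2:end, :) < 0;    % sign change along columns
    E  = false(size(G));
    E(:, 1:end-1) = sx;
    E(1:end-1, :) = E(1:end-1, :) | sy;
    imshow(E);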

    4.2.2. Laplacian of Gaussian (LoG) Edge Detection in Continuous and Discrete Domain

    According to the Laplacian of Gaussian edge detector operator proposed by

    Marr and Hildreth, Gaussian-shaped smoothing is applied prior to the application of

    the Laplacian. The LoG gradient in continuous domain can be written as

    G(x, y) = -∇²{ F(x, y) ⊛ H_s(x, y) }                                            (26)

    where

    H_s(x, y) = g(x, s) · g(y, s)                                                   (27)

    is the impulse response of the Gaussian smoothing function

    g(x, s) = (1 / √(2π s²)) · exp( -x² / (2s²) )                                   (28)

    Due to the linearity of the second derivative operation and of the linearity of

    convolution, we can express the gradient as

    G(x, y) = F(x, y) ⊛ H(x, y)                                                     (29)

    and the impulse response is

    H(x, y) = -∇²{ g(x, s) · g(y, s) } = (1 / (π s⁴)) · ( 1 - (x² + y²)/(2s²) ) · exp( -(x² + y²)/(2s²) )        (30)

    To obtain the LoG operator in the discrete domain, one can simply sample the impulse response H(x, y) of the continuous domain over a W x W window. To avoid any negative truncation effects, W should be greater than or equal to 3c, where c = 2√2·s is the width of the central positive part of the LoG function.
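    A sketch of discrete LoG filtering under these assumptions (s and the image are placeholders); fspecial('log', ...) samples the impulse response, and edge with the 'log' method adds the zero-crossing detection:

    % Sketch: sampled LoG filtering and zero-crossing edge detection.
    I  = imread('cameraman.tif');          % built-in grayscale test image
    Id = double(I);
    s  = 2;                                % Gaussian standard deviation (placeholder)
    W  = 2 * ceil(3 * sqrt(2) * s) + 1;    % odd window size, at least 3c with c = 2*sqrt(2)*s
    h  = fspecial('log', W, s);            % sampled LoG impulse response, eq. (30)
    G  = conv2(Id, h, 'same');             % LoG-filtered image; edges lie at its zero crossings

    % The toolbox performs the equivalent filtering plus zero-crossing detection:
    E = edge(I, 'log', [], s);
    imshow(E);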

    4.2.3. Directed Second Order Derivative Generation

    There are two approaches that involve second order derivative generation to

    detect edges. The ability to precisely detect the edge direction is the major advantage

    of the directed second order derivative. Equation (31) displays the directed second

    order derivative in the continuous domain with an edge angle θ:

    F''(x, y) = ∂²F(x, y)/∂x² · cos²θ + 2 · ∂²F(x, y)/∂x∂y · sinθ·cosθ + ∂²F(x, y)/∂y² · sin²θ        (31)

    An easier approach to detect edge is to determine the edge direction using first

    order derivative method before taking the approximation to the equation (31) in

    discrete domain.

    Haralick proposed an approach called facet modeling which approximates

    continuous F(x, y) in discrete domain using a 2-D polynomial shown in equation

    (32):

    F(r, c) ≈ k_1 + k_2·r + k_3·c + k_4·r² + k_5·r·c + k_6·c² + k_7·r·c² + k_8·r²·c + k_9·r²·c²        (32)

    θ = tan⁻¹( k_2 / k_3 )                                                          (33)

    Through this approach, the directed second order derivative can be computed

    analytically. In [4], Pratt suggested that in principle any polynomial expansion can

    be used in the approximation; therefore, the quadratic expansion form of equation

    (32) is presented as

    F(r, c) = Σ_{n=1}^{N} a_n · P_n(r, c)                                           (34)

    Fig. 3 Nine 3 by 3 impulse response arrays based on Chebyshev polynomial.

    As a result of the linear property of the approximated weighting coefficients a_n, the weighting coefficient A_n(j, k) at each point of the image F(j, k) can be found by the convolution

    A_n(j, k) = F(j, k) ⊛ H_n(j, k)                                                 (35)


    4.2.4. Edge Detection Using Edge Fitting Method

    The image data of a real edge could be similar to the ideal edge model in

    either one-dimensional or two-dimensional aspects. Edge fitting detection, however,

    requires more computation in comparison with derivative edge detection techniques.

    In the one-dimensional edge fitting model as shown in Fig. 4, the actual image f(x) is

    fitted to an ideal step function. The 1-D ideal step function is defined as

    s(x) = a,      x < 0
           a + h,  x ≥ 0                                                            (36)

    Fig. 4 One-dimensional edge fitting model [4].

    Fig. 5 Two-dimensional edge fitting model [4].

    The two-dimensional ideal step function is

    S(x, y) = a,      (x·cosθ + y·sinθ) < ρ
              a + h,  (x·cosθ + y·sinθ) ≥ ρ                                         (37)

    Fig. 5 demonstrates the two-dimensional edge fitting model.


    4.3. Edge Detection Performance Evaluation

    It is quite difficult to develop standard performance criteria and methods to

    evaluate the effectiveness of each edge detector. Locating a real edge pixel

    becomes extremely crucial. Edge slope angle and its spatial orientation are also

    important criteria in the evaluation. A good edge detector must have a good edge

    decision in which the closeness of fit between the actual and the detected image is

    optimized.

    4.3.1. Edge Detection Probability

    When we determine the performance of an edge detector, the probability of

    correct detection PD and the probability of false edge detection PF play a key

    role. Both probabilities are displayed in equation (38) and (39).

    PD = ∫_t^∞ p(G | edge) dG                                                       (38)

    PF = ∫_t^∞ p(G | no edge) dG                                                    (39)

    Pratt [4] plotted the probability of correct edge detection against probability of false

    detection and made a comprehensive comparison of several edge detectors. Based

    on his plots, he found that Sobel and Prewitt 3-by-3 operators are superior to the

    Roberts 2-by-2 operator and the performances of Sobel and Prewitt differential

    operators are slightly better than the Robinson 3-level and 5-level operators.

    4.3.2. Edge Detection Orientation and Localization

    The sensitivity to edge orientation and the ability to localize an edge are both

    important properties of an edge detector. Pratt [4] plotted the edge gradient as a

    function of actual edge orientation and concluded that square root combination of

    orthogonal gradients is superior to the magnitude combination of the orthogonal

    gradients. He also sampled a continuous ramp edge and examined the edge displacement from the center of the first order derivative operator. Fig. 6 contains the analysis performed to determine the edge detector's ability of edge localization.

    Pratt's analysis also reveals that all the edge detectors except the Kirsch operator have the same edge displacement property, in which the edge displacement increases as the

    edge gradient amplitude decreases. Similar properties are possessed by variable size

    boxcar operators and several orthogonal gradient operators. By setting the threshold

    to half or higher of the edge height, edge location can be properly localized. Setting

    a high threshold could cause the detector to miss the real edge with low amplitude.

    4.4. Edge Detector Performance Characterization

    Failure to detect real edges, misclassification of noise-induced points as edge

    points, and inability to localize edge points are the three major errors that could be

    made by edge detectors. The probability of true edge detection can be found by comparing the detected image with the edge map resulting from an ideal edge detector.

    Fig. 6 Edge localization analysis for (a) 2-by-2 model and (b) 3-by-3 model


    Fig. 7 The 3 by 3 Line Detector Impulse Response [4].

    4.5. Colour Edge Detection

    Three tristimulus values T1, T2, and T3 can be used to quantify the amount of

    RGB colors at each pixel of a color image. Several different definitions of color edge

    detection have been proposed. A definition states that the detection depends on the

    vector sum gradient of the three tristimulus values:

    G(j, k) = { [G_1(j, k)]² + [G_2(j, k)]² + [G_3(j, k)]² }^(1/2)                  (41)

    Each gradient in (41) represents one of the three tristimulus component values. A color edge is detected if the gradient exceeds the defined threshold.
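    A sketch of equation (41) for an RGB image, assuming a Sobel-type gradient per channel (the image and threshold are placeholders):

    % Sketch of eq. (41): combine the per-channel gradients of an RGB image into one
    % edge gradient. 'peppers.png' is a built-in color test image.
    RGB = double(imread('peppers.png'));
    G   = zeros(size(RGB, 1), size(RGB, 2));
    for c = 1:3
        Gc = imgradient(RGB(:, :, c));     % gradient magnitude of one tristimulus channel
        G  = G + Gc.^2;                    % sum of squared channel gradients
    end
    G = sqrt(G);                           % vector-sum gradient, eq. (41)
    E = G >= 0.25 * max(G(:));             % a colour edge where the gradient exceeds the threshold
    imshow(E);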

    4.6. Line and Spot Detection

    The approach introduced by Pratt [4] for the unit width line and spot detection

    can be achieved by finding a line gradient

    G(j, k) = MAX_{m=1,...,M} { | F(j, k) ⊛ H_m(j, k) | }                           (42)

    The Hm(j,k) could be one of the weighted or unweighted 3 by 3 line detector impulse

    response shown in Fig. 7. For spot detection, a spot gradient displayed in equation

    (43) is used to detect unit width step spots.

    G(j, k) = F(j, k) ⊛ H(j, k)                                                     (43)

    There are several ways to implement the impulse response operator in the above equation. One of the approaches is to use the Laplacian operators displayed in equations (22), (23), and (25). These operators are thresholded for spot detection, but they could end up detecting false spots in a noisy image. Prewitt has developed another operator,

    H = (1/8) [ 1 -2  1 ]
              [-2  4 -2 ]                                                           (44)
              [ 1 -2  1 ]

    which only detects diagonally oriented edges. By using the operator proposed by Prewitt, the noise-induced false spot detections can be reduced.

    4.7. Matlab Simulations for Edge Detection

    The general differential edge detection process contains two fundamental elements: a spatial differentiator and a differential detector. A spatial differentiator takes the original image F(j, k) as an input and produces an output differential image G(j, k). The

    differential image G(j, k) is the spatial amplitude changes between the pixels in a defined

    direction. After the spatial differentiation process, a differential detection operation is

    performed to determine the pixel locations of the significant differentials [4]. A block

    diagram in Fig. 12 illustrates the process of differential edge detection.

    4.7.1. Simple Edge Detectors

    Edge1, a Matlab function for simple edge detection, was written based on the

    differential edge detection process. The edge1 function allows two inputs, f and t.

    The input f is the image to be processed, shown in Fig. 8, and t is a defined

    edge detection threshold. The function first computes the row and column gradients as

    shown in equation (45) and (46):

    G_C(j, k) = F(j+1, k) - F(j, k)                                                 (45)

    G_R(j, k) = F(j, k+1) - F(j, k)                                                 (46)

    Then the spatial gradient amplitude is calculated using the equation (2). Instead of

    using a differential detector, the edge1 function directly compares the spatial gradient to

    the defined threshold input by the function user. Through the comparison, a binary

    indicator map is generated indicating the position of edges detected within the

    original image. Fig. 9 displays an example binary map produced by this function

    and the binary image obtained is quite satisfactory considering the edge1 algorithm

    is only a simple approximation of row and column gradients. A similar function,

    edge2, calculates the spatial differential in both orthogonal directions. The Gradient

    equations in both orthogonal directions are defined as

    G_1(j, k) = F(j+1, k+1) - F(j, k)                                               (47)

    G_2(j, k) = F(j, k+1) - F(j+1, k)                                               (48)

    Again, the spatial gradient amplitude is computed in the following square root form:

    G(j, k) = √( [G_1(j, k)]² + [G_2(j, k)]² )                                      (49)

    An example binary map was generated in Fig. 10. This figure demonstrates the

    strength of the orthogonal edge detector: comparing Fig. 10 to Fig. 9, the

    resolution of the orthogonal edges in Fig. 10 is improved. In both Fig. 9 and Fig. 10,

    the undesired edge thickness reveals that neither of the outcomes of edge1 and

    edge2 functions can precisely position the edge within only a few pixels.
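    The report does not reproduce the edge1 source; a minimal reconstruction consistent with equations (45), (46) and (2) might look as follows (the boundary handling is an assumption). The edge2 function differs only in using the orthogonal differences of equations (47)-(49).

    function E = edge1(f, t)
    % EDGE1  Simple differential edge detector: row/column gradients, eqs. (45)-(46).
    %   f - grayscale input image, t - detection threshold.
    %   (A reconstruction sketch; the original project code is not listed in the report.)
    f  = double(f);
    GC = [f(2:end, :); f(end, :)] - f;        % F(j+1,k) - F(j,k), eq. (45), last row replicated
    GR = [f(:, 2:end), f(:, end)] - f;        % F(j,k+1) - F(j,k), eq. (46), last column replicated
    G  = sqrt(GR.^2 + GC.^2);                 % spatial gradient amplitude, eq. (2)
    E  = G >= t;                              % binary indicator map of detected edges
    end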

    According to the Laplacian Generation in discrete domain, the simplest way to

    approximate the continuous Laplacian in discrete domain is displayed in equation

    (50):

    G(j, k) = [F(j, k) - F(j, k-1)] - [F(j, k+1) - F(j, k)] + [F(j, k) - F(j+1, k)] - [F(j-1, k) - F(j, k)]        (50)

    A Matlab program edge3 was written to simulate the Laplacian approximation. Fig. 11 shows an example binary map generated by the edge3 function. The figure demonstrates the advantageous edge locating ability of second order derivative edge detectors.

    Fig. 8 The original input image.

    Fig. 9 The binary indicator map generated by the approximation to row and

    column gradients.


    Fig. 10 The binary indicator map generated by the approximation to orthogonal

    Gradients.

    Fig. 11 The binary image generated by the Laplacian approximation.


    4.8. General Steps In Edge Detection

    Generally, edge detection contains four steps, namely:

    Filtering

    Enhancement

    Detection

    Localization

    4.8.1. Filtering

    Some major classical edge detectors work fine with high quality pictures, but often

    are not good enough for noisy pictures because they cannot distinguish edges of different

    significance. Noise is unpredictable contamination on the original image. There are various

    kinds of noise, but the most widely studied two kinds are white noise and salt and pepper

    noise. In salt and pepper noise, pixels in the image are very different in color or intensity

    from their surrounding pixels; the defining characteristic is that the value of a noisy pixel

    bears no relation to the color of surrounding pixels. Generally this type of noise will only

    affect a small number of image pixels. When viewed, the image contains dark and white

    dots, hence the term salt and pepper noise. In Gaussian noise, each pixel in the image will be

    changed from its original value by a small amount. Random noise describes an unknown

    contamination added to an image. To reduce the influence of noise, Marr suggested filtering

    the images with the Gaussian before edge detection.
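    The two noise models and the Gaussian pre-filtering suggested by Marr can be illustrated with the toolbox functions imnoise, fspecial and imfilter (the noise levels and the standard deviation are placeholders):

    % Sketch: the two noise types discussed above and Gaussian pre-filtering.
    I   = imread('cameraman.tif');
    Isp = imnoise(I, 'salt & pepper', 0.02);   % impulsive noise on about 2% of the pixels
    Ig  = imnoise(I, 'gaussian', 0, 0.01);     % zero-mean Gaussian noise, variance 0.01

    h  = fspecial('gaussian', [5 5], 1);       % 5x5 Gaussian mask (sigma = 1, a placeholder)
    Is = imfilter(Ig, h, 'replicate');         % smoothed image, as suggested by Marr

    figure;
    subplot(2, 2, 1); imshow(Isp); title('Salt & pepper noise');
    subplot(2, 2, 2); imshow(Ig);  title('Gaussian noise');
    subplot(2, 2, 3); imshow(Is);  title('Gaussian-smoothed');
    subplot(2, 2, 4); imshow(edge(Is, 'sobel')); title('Edges after smoothing');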

    4.8.2. Enhancement:

    Digital image enhancement techniques are concerned with improving the quality of

    the digital image. The principal objective of enhancement techniques is to produce an image

    which is better and more suitable than the original image for a specific application. Linear

    filters have been used to solve many image enhancement problems. Throughout the history

    of image processing, linear operators have been the dominating filter class. Not all image

    sharpening problems can be satisfactorily addressed through the use of linear filters. There is

    a need for nonlinear geometric approaches, and selectivity in image sharpening is the key to its success. A powerful nonlinear methodology that can successfully address the image

    sharpening problem is mathematical morphology.

    4.8.3. Detection

    Some method must be used to determine which points are edge points and which are not. The detected edges are encoded as ones and the rest of the image as zeros.

    4.8.4. Localization

    After the edges are detected, the gradient is calculated so that each edge is located accurately and its orientation is estimated accurately.

    4.9. Challenges in Classification and Detection Methods:

    Extraction and segmentation has to deal with the following challenges:

    1. The changes in lighting conditions

    2. The background is dynamic

    3. Luminance and geometrical features,

    4. Noise volume has a great impact on shaping the edge.

    5. Missing to detect existing edges

    6. Detecting edges where it does not exist (false edge) and

    7. Position of the detected edge to be shifted from its true location (shifted edge or

    dislocated edge). The classification of the edge detection algorithms is based on the behavioural study of edges with respect to the operators:

    Classical or gradient-based edge detectors (first derivative)

    Zero crossing (second derivative)

    Laplacian of Gaussian (LoG)

    Gaussian edge detectors

    Coloured edge detectors


    4.10. Block Diagram Of Edge Detection

    Fig.12 Block diagram of edge detection

    This project uses Matlab where an image is treated as a matrix. However, in hardware the

    pixels are stored in memory and are not similar to the matrix structure of Matlab. In

    hardware (Microchip dsPIC), each memory location is identified by a unique address. For a

    QVGA image of size 320x240, Matlab stores the data in a matrix, whereas in hardware the data is stored in memory locations rather than in a matrix. As a precursor to implementing an algorithm such as Canny edge detection in hardware, it would first be implemented in Matlab without using built-in functions. This can then be transferred to a software language implementation that can be converted to machine code. Edge detection comprises the following procedure:

    1. Smoothing: Blurring of the image to remove noise.

    2. Finding gradients: The edges should be marked where the gradients of the image have large magnitudes.

    3. Thresholding: Potential edges are determined by thresholding. It is inevitable that all images taken from a camera will contain some amount of noise. In order to prevent noise being mistaken for edges, noise must be reduced. Therefore the image is first smoothed by applying a Gaussian filter, using a 5x5 Gaussian mask with a suitable standard deviation. A sketch of this three-step procedure is given below.
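    A possible sketch of the smoothing / gradient / thresholding sequence without the built-in edge function (mask size, standard deviation and threshold are placeholders):

    % Sketch of the three-step procedure: smoothing, finding gradients, thresholding.
    I = double(imread('cameraman.tif'));

    % 1. Smoothing: blur the image with a 5x5 Gaussian mask to suppress noise.
    g = fspecial('gaussian', [5 5], 1.0);
    S = imfilter(I, g, 'replicate');

    % 2. Finding gradients: Sobel approximation of the horizontal/vertical derivatives.
    hx = [-1 0 1; -2 0 2; -1 0 1];
    hy = hx';
    Gx = imfilter(S, hx, 'replicate');
    Gy = imfilter(S, hy, 'replicate');
    Gmag = sqrt(Gx.^2 + Gy.^2);

    % 3. Thresholding: keep only the pixels whose gradient magnitude is large.
    E = Gmag >= 0.25 * max(Gmag(:));
    imshow(E);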


    4.11. Flowchart for Edge Detection:

    Fig.13 Flowchart of Edge Detection


    4.12. Methods of Edge Detection:

    The various methods of edge detection are:

    Sobel Operator

    Roberts cross operator

    Prewitt's operator

    Laplacian of Gaussian or zero-crossing

    Canny Edge Detection

    4.12.1. Sobel Operator:

    The Sobel method uses the derivative approximation to find edges. Therefore, it returns edges at those points where the gradient of the considered image is maximum. The Sobel operator performs a 2-D spatial gradient measurement on images. It uses a pair of horizontal and vertical gradient matrices whose dimensions are 3x3 for edge detection operations. This section will also demonstrate how to build a Sobel detector function of 5x5 dimension in Matlab to find edges.

    For the standard Sobel operators on a 3x3 neighborhood, each simple central gradient estimate is the vector sum of a pair of orthogonal vectors [1]. Each orthogonal vector is a directional derivative estimate multiplied by a unit vector specifying the derivative's direction. The vector sum of these simple gradient estimates amounts to a vector sum of the 8 directional derivative vectors. Thus, consider a point on a Cartesian grid and its eight neighbors having density values as shown:

    a  b  c
    d  e  f
    g  h  i

    The directional derivative estimate vector G was defined as the density difference divided by the distance to the neighbor, with its direction given by the unit vector toward the appropriate neighbor. Note that the neighbors group into antipodal pairs: (a, i), (b, h), (c, g), (f, d). The vector sum for this gradient estimate is

    G = (c - g)/R · [1, 1] + (a - i)/R · [-1, 1] + (b - h) · [0, 1] + (f - d) · [1, 0]

    where R = 2. This vector works out to

    G = [ (c - g - a + i)/2 + (f - d),  (c - g + a - i)/2 + (b - h) ]

    Here, this vector is multiplied by 2 to remove the division by 2. The resultant formula is given as follows:

    G' = 2·G = [ (c - g - a + i) + 2·(f - d),  (c - g + a - i) + 2·(b - h) ]

    The following weighting functions for the x and y components were obtained by using the above vector:

    Gx:  -1   0   1        Gy:   1   2   1
         -2   0   2              0   0   0
         -1   0   1             -1  -2  -1

    Fig.14 Sobel operator convolution kernels

    The following shows a 5x5 neighborhood:

    a  b  c  d  e
    f  g  h  i  j
    k  l  m  n  o
    p  r  s  t  u
    v  w  x  y  z

    The horizontal and vertical 5x5 masks are obtained by using the coefficients in this equation:

    Gx:   -5   -4    0    4    5        Gy:    5    8   10    8    5
          -8  -10    0   10    8               4   10   20   10    4
         -10  -20    0   20   10               0    0    0    0    0
          -8  -10    0   10    8              -4  -10  -20  -10   -4
          -5   -4    0    4    5              -5   -8  -10   -8   -5

    These masks are used by the edge detection function in the following section. Each direction of the Sobel masks is applied to the image, and two new images are created: one image shows the vertical response and the other shows the horizontal response. The two images are then combined into a single image. The purpose is to determine the existence and location of edges in a picture.

    The two images are combined by summing, at each coordinate, the squares of the corresponding pixel estimates produced by the two masks. The new image, on which the edge pixels are located, takes the square root of this sum. A threshold is then applied to this combined image to decide which pixels are edge pixels.
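    A sketch of a 5x5 Sobel-style detector built from the extended masks above (the image and threshold are placeholders):

    % Sketch: 5x5 Sobel-style detection with the extended masks listed above.
    I = double(imread('cameraman.tif'));

    Gx5 = [ -5  -4   0   4   5;
            -8 -10   0  10   8;
           -10 -20   0  20  10;
            -8 -10   0  10   8;
            -5  -4   0   4   5];
    Gy5 = -Gx5';                          % equivalently, the vertical mask listed above

    X = imfilter(I, Gx5, 'replicate');    % horizontal response image
    Y = imfilter(I, Gy5, 'replicate');    % vertical response image
    G = sqrt(X.^2 + Y.^2);                % combined image: root of the summed squares
    E = G >= 0.25 * max(G(:));            % threshold to obtain the edge pixels
    imshow(E);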

    4.12.1.1. Flowchart for Sobel Edge Detection:

    Fig.15 Flowchart for Sobel Edge Detection

    4.12.2. Prewitts Operator:

    Prewitt operator is similar to the Sobel operator and is used for detecting vertical and

    horizontal edges in images.

    Fig.16 Prewitts operator convolution kernels


    The Prewitt operator is one type of edge model operator. Fig. 16 shows the two convolution kernels that form the Prewitt operator. A model operator is built from compositions of ideal edge sub-images. The image is matched against each edge model in turn, and the maximum response of the model operator that is most similar to the detected region is taken as the output of the operator. The Prewitt operator and the Sobel operator use the same differential and filtering operations; the only difference is that they do not use the same template.

    4.12.3. Roberts Cross Operator:

    The Roberts Cross operator performs a simple, quick to compute, 2-D spatial gradient

    measurement on an image. It thus highlights regions of high spatial frequency which often

    correspond to edges. In its most common usage, the input to the operator is a grayscale image, as

    is the output. Pixel values at each point in the output represent the estimated absolute magnitude

    of the spatial gradient of the input image at that point.

    In theory, the operator consists of a pair of 2x2 convolution kernels, as shown in Fig. 17. One kernel is simply the other rotated by 90°. This is very similar to the Sobel operator.

    Fig.17 Roberts Cross convolution kernels

    These kernels are designed to respond maximally to edges running at 45° to the pixel grid, one

    kernel for each of the two perpendicular orientations. The kernels can be applied separately to

    the input image, to produce separate measurements of the gradient component in each orientation

    (call these Gx and Gy). These can then be combined together to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by:

    |G| = √( Gx² + Gy² )

    although typically, an approximate magnitude is computed using:

    |G| = |Gx| + |Gy|

    which is much faster to compute. The angle of orientation of the edge giving rise to the spatial gradient (relative to the pixel grid orientation) is given by:

    θ = arctan( Gy / Gx ) - 3π/4

    In this case, orientation 0 is taken to mean that the direction of maximum contrast from black to

    white runs from left to right on the image, and other angles are measured clockwise from this.

    Often, the absolute magnitude is the only output the user sees; the two components of the gradient are conveniently computed and added in a single pass over the input image using the pseudo-convolution operator shown in Fig. 18.

    Fig.18 Pseudo-convolution kernels used to quickly compute approximate gradient magnitude

    Using this kernel, the approximate magnitude is given by:

    |G(j, k)| = |F(j, k) - F(j+1, k+1)| + |F(j, k+1) - F(j+1, k)|

    The main reason for using the Roberts Cross operator is that it is very quick to compute. Only

    four input pixels need to be examined to determine the value of each output pixel, and only

    subtractions and additions are used in the calculation. In addition there are no parameters to set.

    Its main disadvantages are that since it uses such a small kernel, it is very sensitive to noise. It

    also produces very weak responses to genuine edges unless they are very sharp. The Sobel

    operator performs much better in this respect.

    4.12.4. Canny Edge Detection:

    The Canny edge detection algorithm is known to many as the optimal edge detector.

    Canny's intentions were to enhance the many edge detectors already out at the time he started his

    work. He was very successful in achieving his goal and his ideas and methods can be found in

    his paper, "A Computational Approach to Edge Detection"[11]. In his paper, he followed a list of

    criteria to improve current methods of edge detection. The first and most obvious is low error

    rate. It is important that edges occurring in images should not be missed and that there be no

    responses to non-edges. The second criterion is that the edge points be well localized. In other

    words, the distance between the edge pixels as found by the detector and the actual edge is to be

    at a minimum. A third criterion is to have only one response to a single edge. This was

    implemented because the first two were not substantial enough to completely eliminate the

    possibility of multiple responses to an edge. Based on these criteria, the canny edge detector first

    smoothes the image to eliminate any noise. It then finds the image gradient to highlight regions

    with high spatial derivatives. The algorithm then tracks along these regions and suppresses any

    pixel that is not at the maximum (nonmaximum suppression). The gradient array is now further

    reduced by hysteresis. Hysteresis is used to track along the remaining pixels that have not been

    suppressed. Hysteresis uses two thresholds and if the magnitude is below the first threshold, it is

    set to zero (made a non edge). If the magnitude is above the high threshold, it is made an edge.

    And if the magnitude is between the two thresholds, then it is set to zero unless there is a path from this pixel to a pixel with a gradient above the high threshold.

    Step 1:

    In order to implement the canny edge detector algorithm, a series of steps must be

    followed. The first step is to filter out any noise in the original image before trying to locate and

    detect any edges. And because the Gaussian filter can be computed using a simple mask, it is

    used exclusively in the Canny algorithm. Once a suitable mask has been calculated, the Gaussian

    smoothing can be performed using standard convolution methods. A convolution mask is usually

    much smaller than the actual image.

    As a result, the mask is slid over the image, manipulating a square of pixels at a time. The

    larger the width of the Gaussian mask, the lower is the detector's sensitivity to noise. The

    localization error in the detected edges also increases slightly as the Gaussian width is increased.

    Step 2:-

    After smoothing the image and eliminating the noise, the next step is to find the edge strength by

    taking the gradient of the image. The Sobel operator performs a 2-D spatial gradient

    measurement on an image. Then, the approximate absolute gradient magnitude (edge strength) at

    each point can be found. The Sobel operator uses a pair of 3x3 convolution masks, one

    estimating the gradient in the x-direction (columns) and the other estimating the gradient in the

    y-direction (rows). They are shown below:

    Gx:  -1   0  +1        Gy:  +1  +2  +1
         -2   0  +2              0   0   0
         -1   0  +1             -1  -2  -1

    Fig.19 Canny's convolution kernels

    The magnitude, or edge strength, of the gradient is then approximated using the formula:

    |G| = |Gx| + |Gy|

    Step 3:-

    The direction of the edge is computed using the gradient in the x and y directions.

    However, an error will be generated when sumX is equal to zero. So in the code there has to be a

    restriction set whenever this takes place. Whenever the gradient in the x direction is equal to

    zero, the edge direction has to be equal to 90 degrees or 0 degrees, depending on what the value

    of the gradient in the y-direction is equal to. If GY has a value of zero, the edge direction will

    equal 0 degrees. Otherwise the edge direction will equal 90 degrees. The formula for finding the

    edge direction is just:

    θ = arctan( Gy / Gx )

    Step 4:-

    Once the edge direction is known, the next step is to relate the edge direction to a direction that

    can be traced in an image. So if the pixels of a 5x5 image are aligned as follows:

    x x x x x

    x x x x x

    x x a x x

    x x x x x

    x x x x x

    Then, it can be seen by looking at pixel "a", there are only four possible directions when

    describing the surrounding pixels - 0 degrees (in the horizontal direction), 45 degrees (along the

    positive diagonal), 90 degrees (in the vertical direction), or 135 degrees (along the negative

    diagonal). So now the edge orientation has to be resolved into one of these four directions

    depending on which direction it is closest to (e.g. if the orientation angle is found to be 3

    degrees, make it zero degrees). Think of this as taking a semicircle and dividing it into 5 regions.


    Therefore, any edge direction falling within the yellow range (0 to 22.5 & 157.5 to 180 degrees)

    is set to 0 degrees. Any edge direction falling in the green range (22.5 to 67.5 degrees) is set to

    45 degrees. Any edge direction falling in the blue range (67.5 to 112.5 degrees) is set to 90

    degrees. And finally, any edge direction falling within the red range (112.5 to 157.5 degrees) is

    set to 135 degrees.

    Step 5:-

    After the edge directions are known, non-maximum suppression now has to be applied.

    Non-maximum suppression is used to trace along the edge in the edge direction and suppress any

    pixel value (sets it equal to 0) that is not considered to be an edge. This will give a thin line in the

    output image.

    Step 6:-

    Finally, hysteresis is used as a means of eliminating streaking. Streaking is the breaking

    up of an edge contour caused by the operator output fluctuating above and below the threshold.

    If a single threshold, T1 is applied to an image, and an edge has an average strength equal to T1,

    then due to noise, there will be instances where the edge dips below the threshold. Equally it will

    also extend above the threshold making an edge look like a dashed line. To avoid this, hysteresis

    uses 2 thresholds, a high and a low. Any pixel in the image that has a value greater than T1 is

    presumed to be an edge pixel, and is marked as such immediately. Then, any pixels that are

    connected to this edge pixel and that have a value greater than T2 are also selected as edge

    pixels. If you think of following an edge, you need a gradient above T1 to start, but you don't stop until you hit a gradient below T2.
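    The six steps can be sketched in MATLAB as follows; the standard deviation and the two thresholds are placeholders, and the built-in edge(I, 'canny') remains the reference implementation:

    % Sketch of the Canny steps above: smoothing, gradient, direction quantization,
    % non-maximum suppression, and double-threshold hysteresis.
    I = double(imread('cameraman.tif'));

    % Step 1: Gaussian smoothing.
    S = imfilter(I, fspecial('gaussian', [5 5], 1.4), 'replicate');

    % Step 2: Sobel gradients and magnitude (masks of Fig. 19).
    Gx = imfilter(S, [-1 0 1; -2 0 2; -1 0 1], 'replicate');
    Gy = imfilter(S, [ 1 2 1;  0 0 0; -1 -2 -1], 'replicate');
    Gmag = sqrt(Gx.^2 + Gy.^2);

    % Steps 3-4: quantize the edge direction to 0, 45, 90 or 135 degrees.
    ang = mod(atan2d(Gy, Gx), 180);                  % direction in [0, 180)
    bin = mod(round(ang / 45), 4);                   % 0: 0 deg, 1: 45, 2: 90, 3: 135

    % Step 5: non-maximum suppression against the two neighbours along the direction.
    off = [0 1; -1 1; -1 0; -1 -1];                  % (row, col) offsets for each direction bin
    [M, N] = size(Gmag);
    nms = zeros(M, N);
    for r = 2:M-1
        for c = 2:N-1
            d  = off(bin(r, c) + 1, :);
            n1 = Gmag(r + d(1), c + d(2));
            n2 = Gmag(r - d(1), c - d(2));
            if Gmag(r, c) >= n1 && Gmag(r, c) >= n2
                nms(r, c) = Gmag(r, c);
            end
        end
    end

    % Step 6: hysteresis with a high threshold T1 and a low threshold T2.
    T1 = 0.20 * max(nms(:));
    T2 = 0.08 * max(nms(:));
    strong = nms >= T1;
    weak   = nms >= T2;
    E = imreconstruct(strong, weak);                 % keep weak pixels connected to strong ones
    imshow(E);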


    4.12.4.1. Flowchart For Canny Edge Detection

    Fig.20 Flowchart for canny edge detection

    4.12.5. Laplacian of Gaussian:

    The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The

    Laplacian of an image highlights regions of rapid intensity change and is therefore often used for

    edge detection. The Laplacian is often applied to an image that has first been smoothed with

    something approximating a Gaussian Smoothing filter in order to reduce its sensitivity to noise.

    The operator normally takes a single graylevel image as input and produces another graylevel

    image as output. The Laplacian L(x, y) of an image with pixel intensity values I(x, y) is given by:

    L(x, y) = ∂²I/∂x² + ∂²I/∂y²

    Since the input image is represented as a set of discrete pixels, we have to find a discrete

    convolution kernel that can approximate the second derivatives in the definition of the Laplacian.

    Three commonly used small kernels are shown in Fig. 21.

    Fig.21 Three commonly used discrete approximations to the Laplacian filter.

    Because these kernels are approximating a second derivative measurement on the image,

    they are very sensitive to noise. To counter this, the image is often Gaussian Smoothed before

    applying the Laplacian filter. This pre-processing step reduces the high frequency noise

    components prior to the differentiation step.

    In fact, since the convolution operation is associative, we can convolve the Gaussian

    smoothing filter with the Laplacian filter first of all, and then convolve this hybrid filter with the

    image to achieve the required result. Doing things this way has two advantages:

    Since both the Gaussian and the Laplacian kernels are usually much smaller than the

    image, this method usually requires far fewer arithmetic operations.

    The LoG (`Laplacian of Gaussian') kernel can be precalculated in advance so only one

    convolution needs to be performed at run-time on the image.

    The 2-D LoG function centered on zero and with Gaussian standard deviation σ has the form:

    LoG(x, y) = -(1 / (π σ⁴)) · ( 1 - (x² + y²)/(2σ²) ) · exp( -(x² + y²)/(2σ²) )

    and is shown in Fig. 22.

    Fig.22 Discrete approximation to the LoG function with Gaussian σ = 1.4

    Note that as the Gaussian is made increasingly narrow, the LoG kernel becomes the same as the

    simple Laplacian kernels shown in Fig. 21. This is because smoothing with a very narrow Gaussian (σ < 0.5 pixels) on a discrete grid has no effect. Hence on a discrete grid, the simple

    Laplacian can be seen as a limiting case of the LoG for narrow Gaussians.

    4.13. Software Used:

    4.13.1. Matlab:

    MATLAB (matrix laboratory) is a numerical computing environment and fourth-

    generation programming language. Developed by MathWorks, MATLAB

    allows matrix manipulations, plotting of functions and data, implementation of algorithms,

    creation of user interfaces, and interfacing with programs written in other languages,

    including C, C++, Java, and Fortran. Although MATLAB is intended primarily for numerical

    computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic

    computing capabilities. An additional package, Simulink, adds graphical multi-domain

    simulation and Model-Based Design for dynamic and embedded systems.

    In 2004, MATLAB had around one million users across industry and

    academia.[2]

    MATLAB users come from various backgrounds of engineering, science,

    and economics. MATLAB is widely used in academic and research institutions as well as

    industrial enterprises.

    4.13.2. Matlab R2010a:

    Release 2010a includes new features in MATLAB and Simulink, one new product,

    and updates and bug fixes to 85 other products. Subscribers to MathWorks Software

    Maintenance Service can download product updates. Visit the License Center to download

    products, activate software, and manage your license and user information.


    New Capabilities for the MATLAB Product Family Include:

    Custom enumerated data types, 64-bit integer arithmetic, and desktop enhancements

    in MATLAB.

    GPU computing with CUDA-enabled NVIDIA devices in Parallel Computing Toolbox.

    Support for the GigE Vision hardware standard in Image Acquisition Toolbox.

    Automated PID tuning in Control System Toolbox.

    New System objects for communications design in MATLAB, supporting 95 algorithms

    in Communications Blockset.

    Spline Toolbox capabilities merged into Curve Fitting Toolbox.

    OAS and CDS calculations in Fixed Income Toolbox, Reuters Contribute functionality

    in Datafeed Toolbox, and credit risk enhancements in Financial Toolbox.

    Graphical tool for fitting dynamic networks to time-series data in Neural Network Toolbox


    Result

    Comparison Of Edge Detection

    The result of the Roberts operator is:

    (a) (b) (c) (d)

    Fig.23

    The result of Sobel operator

    (a) (b) (c)

    Fig.24

    The result of Prewitt operator

    (a) (b) (c)

    Fig.25

    The result of Canny operator

    (a) (b) (c)

    Fig.26


    Result analysis: the figures above show the results of first order derivative edge detection. The greater the threshold, the cleaner the processed edge image and the more coherent the significant edge points. However, when the threshold is over 0.3, the effective information of the image edge is lost. We can see that the Canny algorithm is the best among all the algorithms, because Canny filters noise while maintaining the integrity of the valid information, and the Canny operator also ensures high positioning accuracy. The other operators are more sensitive to noise than Canny, and that noise cannot be filtered out.

    The result of LOG operator

    (a) (b) (c)

    Fig.27

    Result analysis: the figure above shows the result of second order derivative edge detection. The smaller the threshold, the clearer the marginal treatment effect of the image and the more coherent the significant edge points. The LOG algorithm is sensitive to noise because it relies on the zero crossing points of the second order derivative of the image intensity. Therefore, denoising should be performed before enhancing the image.

    Detecting the pseudo-edges caused by noise is necessary because it improves detection accuracy; however, in order to improve noise immunity, position deviation takes place. An actual image contains noise, and the noise distribution, variance, and other information are unknown to us. A smoothing filter operation eliminates noise along with high-frequency signal content, but the detected edges shift.

    Due to physical and lighting factors, the edges of an actual image often have different scales, and the scale of each edge pixel is unknown. It is not possible to detect edges very well by using a single fixed-scale edge detection operator.


    Classical edge detection methods are extremely sensitive to noise due to the introduction of

    various forms of differential operation. Under the interference of noise, noise points are detected as edge points instead of the real edges. Thus a good edge detection method should have good

    noise immunity and outstanding property of restraining noises which are the advantages of

    Canny operator.

    Advantages and Disadvantages of Edge Detector

    Advantages

    Simplicity

    Detection of edges and their orientations

    Having fixed characteristics in all directions.

    Testing wider area around the pixel.

    Better detection specially in noise conditions.

    Disadvantages

    Inaccurate

    Malfunctioning at corners, curves and where the gray-level intensity function varies.

    Complex Computations


    Table 2


    Conclusion

    The one-dimensional operators Roberts, Sobel and Prewitt are able to handle images with larger gray-scale gradients and more noise. The Sobel operator is more sensitive to diagonal edges than to horizontal and vertical edges. On the contrary, the Prewitt operator is more sensitive to horizontal and vertical edges.

    LOG often produces edges that are two pixels wide; therefore, the LOG operator is rarely used directly for edge detection. It is mainly used to determine whether the pixels of the image are in the dark area or the bright area of a known edge.

    The Canny operator is based on three criteria. The basic idea is to first smooth the image with a Gaussian function. The maximum of the first derivative then corresponds to a zero crossing of the second derivative. In other words, both points with dramatic change of gray scale (strong edges) and points with slight change of gray scale (weak edges) correspond to zero-crossing points of the second derivative, and two thresholds are used to detect the strong edges and the weak edges. The fact that the Canny algorithm is not susceptible to noise interference enables it to detect true weak edges. It is an optimal edge detection algorithm.


    Reference:

    1. R. C. Gonzalez, Digital Image Processing.

    2. L.P. Han and W.B. Yin. An Effective Adaptive Filter Scale Adjustment Edge Detection

    Method (China, Tsinghua university, 1997).

    3. D. Marr and E. Hildreth, Theory of Edge Detection (London, 1980).

    4. Q.H Zhang, S Gao, and T.D Bui, Edge detection models, Lecture Notes in Computer

    Science, 32(4), 2005, 133-140.

    5. D.H Lim, Robust Edge Detection In Noisy Images, Computational Statistics & Data

    Analysis, 96(3), 2006, 803-812.

    6. Abbasi TA, Abbasi MU, A novel FPGA-based architecture for Sobel edge detection

    operator, International Journal of Electronics, 13(9), 2007, 889-896.

    7. J. Canny, A Computational Approach to Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6), 1986, 679-698.

    8. X.L Xu, Application of Matlab in Digital Image Processing, Modern Computer, 43(5),

    2008, 35-37.

    9. Y. Q. Lv and G. Y. Zeng, Detection Algorithm of Picture Edge, Taiyuan Science & Technology, 27(2), 2009, 34-35.

    10. D. F. Zhang, MATLAB Digital Image Processing (Beijing, Mechanical Industry, 2009).
