Final MC-0086 New_2


    Summer 2013

    Master of Computer Application (MCA) Semester 6

    MC0086 Digital Image Processing 4 Credits

    (Book ID: B1007)

Question 1.- Explain the process of image formation in the human eye.

Answer.- Image Formation in the Eye:- The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to become relatively flattened; similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye. The distance between the center of the lens and the retina (called the focal length) varies from approximately 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power; when the eye focuses on a nearby object, the lens is most strongly refractive. This information makes it easy to calculate the size of the retinal image of any object.

Figure 1.1: Graphical representation of the eye looking at a palm tree. Point c is the optical center of the lens.

In Fig. 1.1, for example, the observer is looking at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in the retinal image, the geometry of Fig. 1.1 yields 15/100 = h/17, or h = 2.55 mm. The retinal image is focused primarily in the area of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
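The similar-triangles relation above is easy to check numerically. A minimal sketch is given below (Python; the helper name retinal_image_height and the unit conventions are illustrative, not from the text):

def retinal_image_height(object_height_m, object_distance_m, focal_length_mm=17.0):
    """Similar-triangles estimate of retinal image size.

    object_height / object_distance = image_height / focal_length, so
    image_height = focal_length * object_height / object_distance.
    Heights and distances are in metres; the focal length and result are in mm.
    """
    return focal_length_mm * object_height_m / object_distance_m

# The example of Fig. 1.1: a 15 m tree viewed from 100 m.
print(retinal_image_height(15.0, 100.0))  # -> 2.55 (mm)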

Brightness Adaptation and Discrimination - Since digital images are displayed as a discrete set of intensities, the eye's ability to discriminate between different intensity levels is an important consideration in presenting image-processing results. The range of light intensity levels to which the human visual system can adapt is on the order of 10^10, from the scotopic threshold to the glare limit. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye. A plot of light intensity versus subjective brightness, illustrating this characteristic, is shown in Fig. 1.2.


    Figure 1.2: Range of subjective brightness sensations showing a particular adaptation level.

The long solid curve represents the range of intensities to which the visual system can adapt. In photopic vision alone, the range is about 10^6. The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (-3 to -1 mL in the log scale), as the double branches of the adaptation curve in this range show.

    The End !

Question 2.- Explain different linear methods for noise cleaning.

Answer.- Noise reduction is the process of removing noise from a signal. Noise reduction techniques are conceptually very similar regardless of the signal being processed; however, a priori knowledge of the characteristics of an expected signal can mean that the implementations of these techniques vary greatly depending on the type of signal.

All recording devices, both analogue and digital, have traits that make them susceptible to noise. Noise can be random or white noise with no coherence, or coherent noise introduced by the device's mechanism or processing algorithms.

    Noise Cleaning - An image may be subject to noise and interference from several sources, including

    electrical sensor noise, photographic grain noise and channel errors. Image noise arising from a noisy

    sensor or channel transmission errors usually appears as discrete isolated pixel variations that are not

    spatially correlated. Pixels that are in error often appear visually to be markedly different from their

    neighbors.

Linear Noise Cleaning - Noise added to an image generally has a higher-spatial-frequency spectrum than the normal image components because of its spatial decorrelatedness. Hence, simple low-pass filtering can be effective for noise cleaning. We will now discuss the convolution method of noise cleaning. A spatially filtered output image G(j,k) can be formed by discrete convolution of an input image F(m,n) with an L x L impulse response array H according to the relation

G(j,k) = Σ_m Σ_n F(m,n) H(m - j + C, n - k + C), where C = (L + 1)/2.

For noise cleaning, H should be of low-pass form, with all positive elements.


Several common pixel impulse response arrays of low-pass form are used; two representative forms are illustrated in the sketch below. These arrays, called noise-cleaning masks, are normalized to unit weighting so that the noise-cleaning process does not introduce an amplitude bias in the processed image.
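The specific mask arrays shown in the original figure are not reproduced here. The sketch below (Python) uses two standard unit-weighted low-pass masks of the kind described, a uniform 3 x 3 average and a centre-weighted average, and applies one of them through the discrete convolution relation given above; the mask values and helper names are assumptions chosen for illustration.

import numpy as np
from scipy.ndimage import convolve

# Two typical unit-weighted (elements sum to one) low-pass noise-cleaning masks.
# These are standard examples; the arrays in the original figure may differ.
H_box = np.full((3, 3), 1.0 / 9.0)                      # uniform 3 x 3 average
H_weighted = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float) / 16.0  # centre-weighted average

def clean(F, H):
    """Spatially filter image F with the impulse response array H."""
    return convolve(F, H, mode='nearest')

# Demo: a constant patch corrupted by a single isolated noise pixel.
F = np.full((7, 7), 100.0)
F[3, 3] = 255.0
G = clean(F, H_weighted)
print(F[3, 3], "->", G[3, 3])   # the impulse is strongly attenuated toward 100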

Another linear noise-cleaning technique is homomorphic filtering. Homomorphic filtering (16) is a useful technique for image enhancement when an image is subject to multiplicative noise or interference. Fig. 4.9 describes the process.

Figure 4.9: Homomorphic filtering.

The input image F(j,k) is assumed to be modeled as the product of a noise-free image S(j,k) and an illumination interference array I(j,k). Thus, F(j,k) = S(j,k) I(j,k). Taking the logarithm yields the additive linear result

log{F(j,k)} = log{I(j,k)} + log{S(j,k)}.

Conventional linear filtering techniques can now be applied to reduce the log interference component. Exponentiation after filtering completes the enhancement process.
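A minimal sketch of the log / linear filter / exponentiation pipeline is given below, assuming the interference I(j,k) varies slowly across the image, so that a local-mean (low-pass) estimate in the log domain captures mostly log(I) and subtracting it acts as the linear filtering step. The function name, filter size and test data are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def homomorphic_clean(F, size=15, eps=1e-6):
    """Suppress multiplicative interference I(j,k) in F(j,k) = S(j,k) I(j,k).

    1. the logarithm turns the multiplicative model into an additive one,
    2. a linear (local-mean) filter estimates the slowly varying log-interference,
       and subtracting it acts as a high-pass linear filter in the log domain,
    3. exponentiation undoes the logarithm.
    """
    logF = np.log(F + eps)
    log_illum = uniform_filter(logF, size=size)   # slowly varying part ~ log(I)
    log_signal = logF - log_illum                 # remaining part ~ log(S)
    return np.exp(log_signal)

# Usage sketch: a flat scene multiplied by a smooth illumination ramp.
S = np.full((64, 64), 0.5)
I = np.linspace(0.5, 2.0, 64)[None, :] * np.ones((64, 64))
restored = homomorphic_clean(S * I)
print(restored.std() < (S * I).std())   # variation caused by I is largely removed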

    The End !

Question 3.- What are the two quantitative approaches used for the evaluation of image features? Explain.

    Answer. - There are two quantitative approaches to the evaluation of image features: prototype

    performance and figure of merit. In the prototype performance approach for image classification, a

    prototype image with regions (segments) that have been independently categorized is classified by a

    classification procedure using various image features to be evaluated. The classification error is then

    measured for each feature set. The best set of features is, of course, that which results in the least

    classification error. The prototype performance approach for image segmentation is similar in nature. A

    prototype image with independently identified regions is segmented by a segmentation procedure using a

    test set of features. Then, the detected segments are compared to the known segments, and the

    segmentation error is evaluated. The problems associated with the prototype performance methods of

    feature evaluation are the integrity of the prototype data and the fact that the performance indication is

    dependent not only on the quality of the features but also on the classification or segmentation ability of

    the classifier or segmenter. The figure-of-merit approach to feature evaluation involves the establishment

    of some functional distance measurements between sets of image features such that a large distance


implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure-of-merit for texture feature evaluation. The method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes, defined as

B(S1, S2) = -ln ∫ [ p(x | S1) p(x | S2) ]^(1/2) dx,

where x denotes a vector containing individual image feature measurements with conditional density p(x | S1) for class S1 and p(x | S2) for class S2.
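As a rough illustration, the sketch below estimates the B-distance from sampled feature values of two hypothetical classes by discretising the densities with histograms. The feature data, bin choice and function name are assumptions made only for this example.

import numpy as np

def bhattacharyya_distance(p1, p2, eps=1e-12):
    """B(S1, S2) = -ln sum_x sqrt(p(x|S1) p(x|S2)) for discretised densities.

    p1 and p2 are histograms (over the same bins) of one feature for the two
    classes; they are normalised to sum to one before the distance is computed.
    A large B-distance suggests the feature separates the classes well.
    """
    p1 = np.asarray(p1, dtype=float); p1 = p1 / p1.sum()
    p2 = np.asarray(p2, dtype=float); p2 = p2 / p2.sum()
    return -np.log(np.sum(np.sqrt(p1 * p2)) + eps)

# Usage sketch: feature values drawn for two hypothetical texture classes.
rng = np.random.default_rng(0)
f1 = rng.normal(0.0, 1.0, 5000)        # feature samples for class S1
f2 = rng.normal(2.0, 1.0, 5000)        # feature samples for class S2
bins = np.linspace(-5.0, 7.0, 60)
h1, _ = np.histogram(f1, bins=bins)
h2, _ = np.histogram(f2, bins=bins)
print(bhattacharyya_distance(h1, h2))  # better-separated classes give a larger B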

    The End !

Question 4.- Explain, with a diagram, the digital image restoration model.

Answer.- In order to effectively design a digital image restoration system, it is necessary to quantitatively characterize the image degradation effects of the physical imaging system, the image digitizer and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digitizer and display to determine their response to an arbitrary image field. In some instances, it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a stochastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored.

    Figure 4.1: Digital image restoration model.


Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation. Fig. 4.1 shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution C(x, y, t, λ), dependent on spatial coordinates (x, y), time (t) and spectral wavelength (λ), is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlinearities, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging system produces a set of output image fields F_O^(i)(x, y, t_j) at time instants t_j described by the general relation

F_O^(i)(x, y, t_j) = O_P{ C(x, y, t, λ) },

where O_P{ . } represents a general operator that is dependent on the space coordinates (x, y), the time history (t), the wavelength (λ) and the amplitude of the light distribution (C). For a monochrome imaging system, there will be only a single output field, while for a natural color imaging system, F_O^(i)(x, y, t_j) may denote the red, green and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery will also involve several output bands of data. In the general model of Fig. 4.1, each observed image field F_O^(i)(x, y, t_j) is digitized to produce an array of image samples F_S^(i)(m1, m2, t_j) at each time instant t_j. The output samples of the digitizer are related to the input observed field by a general operator that models the sampling and quantization performed by the digitizer.
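To make the "model the degradation, then undo it" procedure concrete, the sketch below simulates one very simple a priori model, a known blur PSF plus additive sensor noise, and restores the result with a regularised inverse filter. This is only an illustrative special case of the general operator model of Fig. 4.1; the PSF, noise level, filter constant and all names are assumptions.

import numpy as np

def gaussian_psf(shape, sigma):
    """Full-size Gaussian blur PSF, centred at (0, 0) so fft2 gives its transfer function."""
    ys = np.fft.fftfreq(shape[0]) * shape[0]
    xs = np.fft.fftfreq(shape[1]) * shape[1]
    yy, xx = np.meshgrid(ys, xs, indexing='ij')
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def degrade(F, psf, noise_sigma, rng):
    """A priori degradation model: circular convolution with a known PSF plus sensor noise."""
    H = np.fft.fft2(psf)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(F) * H))
    return blurred + rng.normal(0.0, noise_sigma, F.shape), H

def restore(G, H, k=1e-2):
    """Undo the modelled blur with a regularised (Wiener-style) inverse filter."""
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(G) * W))

# Usage sketch on a synthetic test image.
rng = np.random.default_rng(1)
F = np.zeros((64, 64)); F[24:40, 24:40] = 1.0          # ideal image
psf = gaussian_psf(F.shape, sigma=2.0)
G, H = degrade(F, psf, noise_sigma=0.001, rng=rng)     # observed (degraded) image
F_hat = restore(G, H)
print("degraded error:", np.abs(F - G).mean())         # restoration typically reduces
print("restored error:", np.abs(F - F_hat).mean())     # the error toward the ideal image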

    The End !

Question 5.- Discuss orthogonal gradient generation for first-order derivative edge detection.

    Answer.-

First-Order Derivative Edge Detection - There are two fundamental methods for generating first-order derivative edge gradients. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives. We will be discussing the first method.

Orthogonal Gradient Generation - An edge in a continuous-domain image F(x,y) can be detected by forming the continuous one-dimensional gradient G(x,y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along the orthogonal axes according to

G(x,y) = (∂F(x,y)/∂x) cos θ + (∂F(x,y)/∂y) sin θ.


The figure above describes the generation of an edge gradient in the discrete domain in terms of a row gradient G_R(j,k) and a column gradient G_C(j,k). The spatial gradient amplitude is given by

G(j,k) = [ G_R(j,k)^2 + G_C(j,k)^2 ]^(1/2).

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(j,k) = |G_R(j,k)| + |G_C(j,k)|.

The orientation of the spatial gradient with respect to the row axis is

θ(j,k) = arctan{ G_C(j,k) / G_R(j,k) }.

The remaining issue for discrete-domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of the equation above.

The simplest method of discrete gradient generation is to form the running difference of pixels along the rows and columns of the image. The row gradient is defined as

G_R(j,k) = F(j,k) - F(j,k-1)

and the column gradient is

G_C(j,k) = F(j,k) - F(j+1,k).

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts cross-difference operator, which is defined in magnitude form as

G(j,k) = |F(j,k) - F(j+1,k+1)| + |F(j,k+1) - F(j+1,k)|

and in square-root form as

G(j,k) = { [F(j,k) - F(j+1,k+1)]^2 + [F(j,k+1) - F(j+1,k)]^2 }^(1/2).
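The running-difference gradients, the two amplitude combinations and the square-root form of the Roberts operator can be sketched directly; in the code below the border handling (gradients set to zero where a neighbour is missing) and the function names are assumptions.

import numpy as np

def pixel_difference_gradients(F):
    """Running-difference row and column gradients (zero where a neighbour is missing)."""
    F = F.astype(float)
    GR = np.zeros_like(F); GC = np.zeros_like(F)
    GR[:, 1:] = F[:, 1:] - F[:, :-1]     # G_R(j,k) = F(j,k) - F(j,k-1)
    GC[:-1, :] = F[:-1, :] - F[1:, :]    # G_C(j,k) = F(j,k) - F(j+1,k)
    return GR, GC

def gradient_amplitude(GR, GC):
    """Exact spatial gradient amplitude and its cheaper magnitude combination."""
    exact = np.hypot(GR, GC)             # [G_R^2 + G_C^2]^(1/2)
    approx = np.abs(GR) + np.abs(GC)     # |G_R| + |G_C|
    return exact, approx

def roberts_cross(F):
    """Roberts cross-difference operator, square-root form."""
    F = F.astype(float)
    G = np.zeros_like(F)
    d1 = F[:-1, :-1] - F[1:, 1:]         # F(j,k)   - F(j+1,k+1)
    d2 = F[:-1, 1:] - F[1:, :-1]         # F(j,k+1) - F(j+1,k)
    G[:-1, :-1] = np.hypot(d1, d2)
    return G

# Usage sketch: a vertical step edge.
F = np.zeros((5, 8)); F[:, 4:] = 10.0
GR, GC = pixel_difference_gradients(F)
exact, approx = gradient_amplitude(GR, GC)
print(exact[2])                          # responds at the column of the step
print(roberts_cross(F)[2])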


Prewitt has introduced a pixel edge gradient operator described by a numbering of the pixels in the 3 x 3 neighborhood of the point under consideration. The Prewitt operator square-root edge gradient is defined as

G(j,k) = [ G_R(j,k)^2 + G_C(j,k)^2 ]^(1/2),

where the row and column gradients are weighted pixel differences across the point with weighting factor K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position.

The Sobel operator edge detector differs from the Prewitt edge detector in that the values of the north, south, east and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient. The figure above shows examples of the Prewitt and Sobel gradients of the peppers image.

The row and column gradients for all the edge detectors mentioned previously in this subsection involve a linear combination of pixels within a small neighborhood. Consequently, the row and column gradients can be computed by the convolution relationships

G_R(j,k) = F(j,k) * H_R(j,k) and G_C(j,k) = F(j,k) * H_C(j,k),

where H_R and H_C are the row and column gradient impulse response arrays and * denotes convolution.


Prewitt has suggested an eight-neighbor Laplacian defined by the gain-normalized impulse response array

H = (1/8) * [ -1 -1 -1
              -1  8 -1
              -1 -1 -1 ]

    The End !

Question 6.- Explain region splitting and merging with an example.

Answer.- Region Splitting and Merging:- Subdivide an image into a set of disjoint regions and then merge and/or split the regions in an attempt to satisfy the conditions stated in section 10.3.1.

Let R represent the entire image region and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any resulting region R_i, P(R_i) = TRUE. We start with the entire region R. If P(R) = FALSE, the image is divided into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quad tree, that is, a tree in which each node has exactly four descendants. The root of the tree corresponds to the entire image, and each node corresponds to a subdivision (in the original example, only one of the quadrants was subdivided further).

If only splitting were used, the final partition would likely contain adjacent regions with identical properties. This drawback may be remedied by allowing merging as well as splitting. Satisfying the constraints of section 10.3.1 requires merging only adjacent regions whose combined pixels satisfy the predicate P; that is, two adjacent regions R_j and R_k are merged only if P(R_j ∪ R_k) = TRUE. An example is given in the sketch below.
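A compact, purely illustrative sketch of split-and-merge follows. It uses a simple predicate P (the intensity range of a region must not exceed a threshold) and a greedy merge pass over horizontally adjacent regions; the predicate, threshold, adjacency test and names are all assumptions rather than part of the original text.

import numpy as np

def P(region, threshold=10.0):
    """Predicate: TRUE if the region's intensity range does not exceed the threshold."""
    return region.max() - region.min() <= threshold

def split(F, y=0, x=0, regions=None):
    """Recursively split F into quadrants until P() is TRUE for every leaf region."""
    if regions is None:
        regions = []
    h, w = F.shape
    if P(F) or h <= 1 or w <= 1:
        regions.append((y, x, h, w))            # accept this leaf of the quad tree
        return regions
    hh, hw = h // 2, w // 2                     # subdivide into four quadrants
    split(F[:hh, :hw], y, x, regions)
    split(F[:hh, hw:], y, x + hw, regions)
    split(F[hh:, :hw], y + hh, x, regions)
    split(F[hh:, hw:], y + hh, x + hw, regions)
    return regions

def merge(F, regions):
    """Greedily merge horizontally adjacent regions whose combined pixels satisfy P."""
    labels = np.zeros(F.shape, dtype=int)
    for i, (y, x, h, w) in enumerate(regions, start=1):
        labels[y:y + h, x:x + w] = i
    merged = True
    while merged:
        merged = False
        for i, (y, x, h, w) in enumerate(regions, start=1):
            if not np.any(labels == i) or x + w >= F.shape[1]:
                continue
            j = labels[y, x + w]                # label of the region to the right
            if j != i and P(F[(labels == i) | (labels == j)]):
                labels[labels == j] = i         # merge: combined pixels satisfy P
                merged = True
    return labels

# Usage sketch: a bright square on a dark background.
F = np.zeros((16, 16)); F[4:12, 4:12] = 100.0
regions = split(F)
labels = merge(F, regions)
print(len(regions), "leaf regions ->", len(np.unique(labels)), "regions after merging")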

    The End !