Assignment Image Processing

  • 8/3/2019 Assignment Image Processing

    1/17

#1: What do you understand by Image Processing? Discuss its real-time applications.

In electrical engineering and computer engineering, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
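As an illustration of treating an image as a two-dimensional signal, the following sketch applies a 3×3 mean (averaging) filter to a small image stored as a plain 2-D list. This is illustrative only and not part of the assignment; the function name and the use of a mean filter are assumptions.

```python
def mean_filter(img):
    """Apply a 3x3 mean filter: each output pixel is the average of the
    pixel and its in-bounds neighbors, a basic 2-D signal-processing step."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 window, clipped at the image borders.
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

# A uniform image passes through unchanged; a noisy one is smoothed.
print(mean_filter([[10, 10, 10], [10, 10, 10], [10, 10, 10]]))
```

Smoothing like this is one of the simplest real-time operations built on the two-dimensional-signal view of an image.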

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. The acquisition of images (producing the input image in the first place) is referred to as imaging.

    A few applications of image processing:

Medical Imaging: Medical imaging is the technique and process used to create images of the human body (or parts and function thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and physiology). Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are not usually referred to as medical imaging, but rather are a part of pathology.

Face Detection: Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images. It detects facial features and ignores anything else, such as buildings, trees and bodies.

Computer Vision: Computer vision (or machine vision) is the science and technology of machines that see. Here "see" means the machine is able to extract information from an image, to solve some task, or perhaps "understand" the scene in either a broad or limited sense.

Applications range from (relatively) simple tasks, such as industrial machine vision systems which, say, count bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them.

Exploration geophysics: Exploration geophysics is the applied branch of geophysics which uses surface methods to measure the physical properties of the subsurface Earth, in order to detect or infer the presence and position of ore minerals, hydrocarbons, geothermal reservoirs, groundwater reservoirs, and other geological structures. Exploration geophysics is the practical application of physical methods (such as seismic, gravitational, magnetic, electrical and electromagnetic) to measure the physical properties of rocks and, in particular, to detect the measurable physical differences between rocks that contain ore deposits or hydrocarbons and those that do not.

Remote Sensing: Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object. In modern usage, the term generally refers to the use of aerial sensor technologies to detect and classify objects on Earth (both on the surface, and in the atmosphere and oceans) by means of propagating signals (e.g. electromagnetic radiation emitted from aircraft or satellites).

Augmented Reality: Augmented reality (AR) is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality. By contrast, virtual reality replaces the real world with a simulated one.

    #2: Define visual perception and its elements.

Visual perception is the ability to interpret information and surroundings from the effects of visible light reaching the eye. The resulting perception is also known as eyesight, sight, or vision. The various physiological components involved in vision are referred to collectively as the visual system, and are the focus of much research in psychology, cognitive science, neuroscience, and molecular biology.

The visual system in humans allows individuals to assimilate information from the environment. The act of seeing starts when the lens of the eye focuses an image of its surroundings onto a light-sensitive membrane in the back of the eye, called the retina. The retina is actually part of the brain that is isolated to serve as a transducer for the conversion of patterns of light into neuronal signals. The lens of the eye focuses light on the photoreceptive cells of the retina, which detect the photons of light and respond by producing neural impulses. These signals are processed in a hierarchical fashion by different parts of the brain, from the retina upstream to central ganglia in the brain.

    Elements of visual perception are:

    Structure of human eye

    Image formation in the human eye

    Brightness adaptation and discrimination

    The human eye is an organ which reacts to light for several purposes.

As a conscious sense organ, the eye allows vision. Rod and cone cells in the retina allow conscious light perception and vision, including color differentiation and the perception of depth. The human eye can distinguish about 10 million colors.

In common with the eyes of other mammals, the human eye's non-image-forming photosensitive ganglion cells in the retina receive the light signals which affect adjustment of the size of the pupil, regulation and suppression of the hormone melatonin, and entrainment of the body clock.

The image formed on the retina is inverted, but the human brain perceives the image as erect.

    #3: Explain image formation model and representation of digital image.

    The two parts of the image formation process:

The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.

The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.

    A simple model

    - The scene is illuminated by a single source.

- The scene reflects radiation towards the camera.

- The camera senses it via chemicals on film.

    Digital Image Representation:

A digital image is a numeric representation (normally binary) of a two-dimensional image. Depending on whether or not the image resolution is fixed, it may be of vector or raster type. Without qualifications, the term "digital image" usually refers to raster images, also called bitmap images.

Raster images have a finite set of digital values, called picture elements or pixels. The digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given color at any specific point.

Typically, the pixels are stored in computer memory as a raster image or raster map, a two-dimensional array of small integers. These values are often transmitted or stored in a compressed form.

Raster images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more. They can also be synthesized from arbitrary non-image data, such as mathematical functions or three-dimensional geometric models; the latter is a major sub-area of computer graphics. The field of digital image processing is the study of algorithms for their transformation.

Vector graphics is the use of geometrical primitives such as points, lines, curves, and shapes or polygons, which are all based on mathematical equations, to represent images in computer graphics.

Vector graphics formats are complementary to raster graphics, which is the representation of images as an array of pixels, as is typically used for the representation of photographic images. Vector graphics are stored as mathematical expressions, as opposed to bitmapped graphics, which are stored as a series of mapped 'dots', also known as pixels (picture cells).

#4: Define: Gray level, Monochromatic light, Luminance, Scotopic and Glare limit.

Gray level: A shade of gray assigned to a pixel. The shades are usually positive integer values taken from the gray scale. In an 8-bit image, a gray level can have a value from 0 to 255. In photography and computing, a grayscale or greyscale digital image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest.
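As an illustrative sketch (not from the assignment) of how a color sample can be mapped onto the 0-255 gray scale, the function below uses the common luminosity weights for red, green and blue; the function name and the choice of weights are assumptions.

```python
def gray_level(r, g, b):
    """Map an RGB triple (0-255 per channel) to a single 8-bit gray level
    using the common luminosity weights (green dominates perceived brightness)."""
    level = round(0.299 * r + 0.587 * g + 0.114 * b)
    return max(0, min(255, level))  # clamp to the 0-255 gray scale

print(gray_level(0, 0, 0))        # pure black
print(gray_level(255, 255, 255))  # pure white
```

Each output is a single sample carrying only intensity information, exactly as described above.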

Monochromatic light: It is an electromagnetic wave of one specific and strictly constant frequency in the frequency range directly perceivable by the human eye. The term monochromatic light originated because a person perceives a difference in the frequency of light waves as a difference in color. However, the electromagnetic waves of the visible region do not differ in physical nature from those of other regions (such as the infrared, ultraviolet, and X-ray regions). The term monochromatic is also applied to the other regions, although such waves do not produce any perception of color.

Scotopic vision and Glare: Scotopic vision is the vision of the eye under low-light conditions. The term comes from Greek skotos, meaning darkness, and -opia, meaning a condition of sight. In the human eye, cone cells are non-functional in low light; scotopic vision is produced exclusively through rod cells, which are most sensitive to wavelengths of light around 498 nm (green-blue) and are insensitive to wavelengths longer than about 640 nm (red). Scotopic vision occurs at luminance levels of about 10⁻² to 10⁻⁶ cd/m². In other species, such as the elephant hawk-moth, advanced color discrimination is displayed even in dim light. Night-vision goggles and similar devices take advantage of the fact that human eyesight is most sensitive to light with a wavelength of 540 nm (slightly lime green).

Glare is difficulty seeing in the presence of bright light, such as direct or reflected sunlight, or artificial light such as car headlamps at night. Because of this, some cars include mirrors with automatic anti-glare functions.

Glare is caused by a significant ratio of luminance between the task (that which is being looked at) and the glare source. Factors such as the angle between the task and the glare source and eye adaptation have significant impacts on the experience of glare. Glare can generally be divided into two types: discomfort glare and disability glare. Discomfort glare results in an instinctive desire to look away from a bright light source, or difficulty in seeing a task.

Disability glare renders the task impossible to view, such as when driving westward at sunset. Disability glare is often caused by the inter-reflection of light within the eyeball, reducing the contrast between task and glare source to the point where the task cannot be distinguished. When glare is so intense that vision is completely impaired, it is sometimes called dazzle.

Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through or is emitted from a particular area, and falls within a given solid angle. The SI unit for luminance is candela per square metre (cd/m²). A non-SI term for the same unit is the "nit". The CGS unit of luminance is the stilb, which is equal to one candela per square centimetre, or 10 kcd/m².

Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. The luminance indicates how much luminous power will be perceived by an eye looking at the surface from a particular angle of view. Luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil. Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m². The sun has a luminance of about 1.6×10⁹ cd/m² at noon.

Luminance is invariant in geometric optics. This means that for an ideal optical system, the luminance at the output is the same as the input luminance. For real, passive optical systems, the output luminance is at most equal to the input. As an example, if you form a demagnified image with a lens, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the luminance comes out to be the same, assuming there is no loss at the lens. The image can never be "brighter" than the source.

#5: Define Image Sensing and different types of image acquisition techniques.

    Image Acquisition:

Images are typically generated by illuminating a scene and absorbing the energy reflected by the objects in that scene.

The notions of "illumination" and "scene" can be far broader than in everyday photography:

    X-rays of a skeleton

    Ultrasound of an unborn baby

    Electro-microscopic images of molecules

    Image Sensing:

Incoming energy lands on a sensor material responsive to that type of energy, and this generates a voltage.

Collections of sensors are arranged to capture images.

Different types of sensing elements are used for varied applications. For example, a line of image sensors may be used in bar-code scanners, while an array of CCD or CMOS sensors is used in digital cameras.

    #6: Explain: Spatial Resolution, Image Interpolation.

    Spatial Resolution:

The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on the properties of the system creating the image, not just the pixel resolution in pixels per inch (ppi). For practical purposes, the clarity of the image is decided by its spatial resolution, not the number of pixels in an image. In effect, spatial resolution refers to the number of independent pixel values per unit length.

    The spatial resolution of computer monitors is generally 72 to 100 lines per inch, corresponding to pixel resolutions of 72 to 100 ppi. With scanners, optical resolution is sometimes used to distinguish spatial resolution from the number of pixels per inch.

In geographic information systems (GIS), spatial resolution is measured by the ground sample distance (GSD) of an image, the pixel spacing on the Earth's surface.
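The ground sample distance mentioned above can be sketched numerically for a simple pinhole-camera model, where similar triangles give GSD = altitude × pixel pitch / focal length. This is an illustrative assumption, not part of the assignment; the function name and the example values are made up.

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Pixel spacing on the ground for a nadir-looking pinhole camera:
    GSD = altitude * pixel pitch / focal length (similar triangles)."""
    return altitude_m * pixel_pitch_m / focal_length_m

# e.g. a 5 micrometre pixel pitch behind a 50 mm lens, flown at 1000 m
print(round(ground_sample_distance(1000.0, 5e-6, 0.05), 6))  # metres per pixel
```

A smaller GSD means finer spatial resolution on the Earth's surface.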

In astronomy, one often measures spatial resolution in data points per arcsecond subtended at the point of observation, since the physical distance between objects in the image depends on their distance away, and this varies widely with the object of interest. In electron microscopy, on the other hand, line or fringe resolution refers to the minimum separation detectable between adjacent parallel lines (e.g. between planes of atoms), while point resolution refers to the minimum separation between adjacent points that can be both detected and interpreted, e.g. as adjacent columns of atoms. The former often helps one detect periodicity in specimens, while the latter (although more difficult to achieve) is key to visualizing how individual atoms interact.

In stereoscopic 3D images, spatial resolution could be defined as the spatial information recorded or captured by the two viewpoints of a stereo camera (left and right cameras). The effects of this on the overall perceived resolution of an image are not yet fully documented. It could be argued that such "spatial resolution" adds information to an image, so that overall resolution would not depend solely on pixel count or dots per inch when classifying and interpreting a given photographic image or video frame.

    Image Interpolation:

Interpolation (sometimes called resampling) is an imaging method to increase (or decrease) the number of pixels in a digital image. Some digital cameras use interpolation to produce a larger image than the sensor captured, or to create digital zoom. Virtually all image editing software supports one or more methods of interpolation. How smoothly images are enlarged without introducing aliasing effects depends on the sophistication of the algorithm.

The simplest and most common method is the nearest-neighbour algorithm.

    Some other interpolation algorithms are:

    Bilinear interpolation

    Bicubic Interpolation

    Fractal Interpolation
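The nearest-neighbour method can be sketched as follows on a plain 2-D list of pixel values. This is an illustrative implementation, not taken from the assignment; the function name is made up.

```python
def nearest_neighbour_resize(img, new_h, new_w):
    """Resize a 2-D list of pixel values by nearest-neighbour sampling:
    each output pixel copies the closest source pixel, with no blending."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        src_y = min(old_h - 1, int(y * old_h / new_h))  # nearest source row
        row = []
        for x in range(new_w):
            src_x = min(old_w - 1, int(x * old_w / new_w))  # nearest source column
            row.append(img[src_y][src_x])
        out.append(row)
    return out

# Enlarging a 2x2 checker pattern to 4x4 just duplicates each pixel.
small = [[0, 255],
         [255, 0]]
print(nearest_neighbour_resize(small, 4, 4))
```

Because pixels are copied rather than blended, enlargements look blocky; the bilinear and bicubic methods listed above weight neighbouring pixels to produce smoother results.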

    #7: Discuss important relationships between pixels in a digital image.

    There are five basic parameters which govern the relationship between pixels:

    Neighborhood

    Adjacency

    Connectivity

    Paths

    Regions and boundaries

    Neighborhood

    Any pixel p(x, y) has two vertical and two horizontal neighbors, given by

    (x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels is called the 4-neighbors of P, and is denoted by N4(P).

    The four diagonal neighbors of p(x,y) are given by,

    (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1 ,y-1)

This set is denoted by ND(P).

Each of them is at a Euclidean distance of √2 ≈ 1.414 from P.
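The two neighborhoods above can be written out directly; the helper names n4 and nd below are made up to mirror the N4(P) and ND(P) notation, and the sketch is illustrative rather than part of the assignment.

```python
def n4(x, y):
    """The 4-neighbors N4(P) of pixel p(x, y): two horizontal, two vertical."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """The diagonal neighbors ND(P) of p(x, y), each at distance sqrt(2)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

print(n4(2, 3))
print(nd(2, 3))
```

The union of N4(P) and ND(P) gives the eight 8-neighbors of P.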

    Adjacency

Two pixels are connected if they are neighbors and their gray levels satisfy some specified criterion of similarity.

For example, in a binary image, two pixels are connected if they are 4-neighbors and have the same value (0 or 1).
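That binary-image criterion can be sketched as a small check; four_adjacent is a made-up helper name, and the coordinates are treated as (row, column) indices into a 2-D list. This is illustrative only.

```python
def four_adjacent(img, p, q):
    """True if pixels p and q are 4-neighbors (Manhattan distance 1)
    with the same binary value, i.e. connected under 4-adjacency."""
    (px, py), (qx, qy) = p, q
    is_neighbor = abs(px - qx) + abs(py - qy) == 1
    return is_neighbor and img[px][py] == img[qx][qy]

binary = [[1, 1, 0],
          [0, 1, 0],
          [0, 1, 1]]
print(four_adjacent(binary, (0, 0), (0, 1)))  # 4-neighbors, both 1
print(four_adjacent(binary, (0, 0), (1, 1)))  # diagonal, so not 4-adjacent
```

Chaining such adjacency checks from pixel to pixel yields the paths, and hence the regions and boundaries, listed above.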

#8: When you enter a dark cinema-hall on a bright day, it takes an appreciable amount of time before you can see well enough to find an empty seat. What visual processes take place? Explain.

Adaptation is the ability of the eye to adjust to various levels of darkness and light.

The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signals that the eye can sense are a factor of roughly 1,000,000,000 apart. However, at any given moment, the eye can only sense a contrast ratio of one thousand. What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as "black" can be shifted across six orders of magnitude, a factor of one million.

The eye takes approximately 20 to 30 minutes to fully adapt from bright sunlight to complete darkness, becoming ten thousand to one million times more sensitive than in full daylight. In this process, the eye's perception of color changes as well. By contrast, it takes only approximately five minutes for the eye to adapt to bright sunlight from darkness. This is because the cones gain sensitivity during the first five minutes in the dark, but the rods take over after five or more minutes.

Due to the facts stated above, it takes a while before one can find an empty seat when entering a dark cinema-hall.

    ASSIGNMENT

    IMAGEPROCESSING

    SUBMITTED BY:

    AMIT MALHOTRA

    ROLL #883276

    B.TECH(ECE) SEMESTER 7
