Division of Bioengineering, School of Chemical and Biomedical Engineering 1
BG3104 Biomedical Imaging, Part I
Asst/Prof Poh Chueh Loo, Division of Bioengineering
Room: N1.3 B2-09; Tel: 6514 1088; [email protected]
Topics to be covered in Part I

Medical image processing techniques:
- Fundamentals of digital image representation
- Basic relationships between pixels
- Edge detection and enhancement
- Spatial filtering
- Image segmentation

Magnetic Resonance Imaging (MRI):
- Principles of MRI
- Image formation
- Spin echoes
- Contrast mechanisms
- Image quality
Books

Textbook:
Medical Imaging Signals and Systems. Jerry L. Prince and Jonathan M. Links. Pearson Prentice Hall, 2006.

Reference:
Digital Image Processing. Rafael C. Gonzalez and Richard E. Woods. International Edition, Prentice Hall, 2008.
OVERVIEW
Medical imaging allows us to see the inside of the human body without cutting it open, as would happen in surgery.
In this course, we will cover the most common imaging methods in radiology today:
- MRI
- X-ray / Computed Tomography (CT)
- Ultrasound imaging
- Optics

Each of these methods is a different imaging modality, and the signals that arise are fundamentally different.
Examples of biomedical images:
- Chest X-ray
- Head CT
- Magnetic Resonance (MR) images
- Multi-slice Magnetic Resonance (MR) images
- Ultrasound
Learning objectives: Medical image processing techniques
- Appreciate the importance of image processing and understand its various applications in medical imaging.
- Determine the relationships between pixels in terms of neighbours, adjacency, and paths.
- Understand the fundamental image processing techniques (namely, image enhancement and image segmentation) used in processing medical images.
- Perform spatial image filtering (convolution) with different filtering masks/kernels such as averaging, Sobel, Prewitt, and Laplacian.
Why do we need digital image processing?
- To improve pictorial information for human interpretation.
- To process image data for storage, transmission, and representation for machine perception.
Digital Image Processing
Definition of an image: An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates. The amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. An image is called a digital image when x, y, and the amplitude values of f are all finite, discrete quantities. Pixel is the term used to denote an element of a digital image (a digital image is composed of a finite number of elements, each of which has a particular location and value).
Fundamentals of image and signal processing
Coordinate convention used in the course
For example:
1. The coordinates at the origin are (x, y) = (0, 0).
2. The next coordinate along the first row of the image is (x, y) = (0, 1), signifying the second sample along the first row.
A digital image can be represented in a compact matrix form:

    f(x, y) = | f(0,0)      f(0,1)      ...  f(0,N-1)    |
              | f(1,0)      f(1,1)      ...  f(1,N-1)    |
              | ...         ...         ...  ...         |
              | f(M-1,0)    f(M-1,1)    ...  f(M-1,N-1)  |

The right side of this equation is by definition a digital image.
Each element of this matrix array is called a pixel.
Representing digital images in Matrix form
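As a concrete illustration, the matrix form can be mirrored with a nested Python list (an illustrative sketch with made-up pixel values, not part of the original notes), indexed as f[x][y]:

```python
# A tiny 4 x 4 digital image as a matrix (list of rows).
# The row index x runs down and the column index y runs across,
# matching the f(x, y) coordinate convention described above.
f = [
    [0,   64, 128, 255],
    [64,  64, 128, 255],
    [128, 128, 128, 255],
    [255, 255, 255, 255],
]

M = len(f)     # number of rows
N = len(f[0])  # number of columns

print(M, N)     # 4 4
print(f[0][0])  # gray level at the origin (x, y) = (0, 0) -> 0
print(f[0][1])  # second sample along the first row -> 64
```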
The digitization process requires decisions about the values of M and N, and about the number of discrete gray levels, L, allowed for each pixel.
M and N have to be positive integers.
Due to processing, storage, and sampling hardware considerations, the number of gray levels, L, is typically an integer power of 2:

    L = 2^k

Assumption: the discrete levels are equally spaced and are integers in the interval [0, L-1].
Representing digital images
The number of bits (b) required to store a digitized image is given by

    b = M x N x k

When M = N, this equation becomes

    b = N^2 k
The number of bits required to store square images can be tabulated for various values of N and k, with the number of gray levels corresponding to each value of k shown in parentheses.
When an image can have 2^k gray levels, it is common practice to refer to the image as a k-bit image. For example, an image with 256 possible gray-level values is called an 8-bit image.
Image Storage Sizes
Digital images use up disk space and system bandwidth.
A simple 8-bit gray scale image containing 256 x 256 pixels requires 65536 bytes of computer storage.
For example, a 256 x 256 x 128 MR volume at 12 bits/pixel requires 12 MB of storage!
The rapid increase of storage requirements with dimensionality is one aspect of a general issue with processing N-dimensional data known as the curse of dimensionality.
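The storage formula above can be checked with a short Python sketch (illustrative only, not from the notes):

```python
def storage_bits(M, N, k):
    """Bits needed to store an M x N digital image at k bits per pixel
    (b = M * N * k)."""
    return M * N * k

# An 8-bit 256 x 256 gray scale image: 65536 bytes.
print(storage_bits(256, 256, 8) // 8)

# A 256 x 256 x 128 MR volume at 12 bits/pixel: 12 MB.
volume_bytes = storage_bits(256, 256, 12) * 128 // 8
print(volume_bytes // 2**20)
```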
Medical Image File Formats
In medical imaging, there is a special format known as DICOM 3.0. DICOM stands for Digital Imaging and Communications in Medicine.
DICOM 3.0 represents a standard for medical image data.
For all DICOM 3.0 images, the header is very long and contains information on the patient that has been scanned, the scan orientation, the name of the physician/radiographer/sonographer, other patient details, scan parameters, pixel resolution, etc.
All of this information is vital in correctly interpreting a medical image.
A widely used definition of spatial resolution is simply the smallest number of discernible line pairs per unit distance.
Gray-level resolution similarly refers to the smallest discernible change in gray level.
The number of gray levels is usually an integer power of 2. The most common number is 8 bits, with 16 bits being used in some applications where enhancement of specific gray level ranges is necessary, for example in medical images.
Spatial and Gray-level Resolution
What is a binary image? A binary image is one where each pixel can take one of two values, typically represented as 0 or 1.
Binary images are often produced as the result of some decision process on pixels of an input image - used to specify regions in the image where some property or condition is true.
Binary Images
Pixels in the output image are set to 1 if the corresponding input image pixel satisfies some criterion, and are set to 0 otherwise.
Binary images are very widely used in image processing.
For example, if one needs to write an algorithm to identify and count cells in a histology image, at some stage in the algorithm it is quite usual to produce an intermediate binary image whose pixels contain ones where there is a cell present, and zeros where there is no cell present. Counting up the "blobs" of ones can then yield an estimate for the number of cells. These cell masks can be used to keep a record of where the cells are in the original image for further processing.
Figure. An example of counting rice grains: original image (left) and its binary version (right).
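The blob-counting idea can be sketched in Python as a flood fill over 8-connected components of 1-pixels (a minimal illustration with a made-up binary image, not code from the notes):

```python
def count_blobs(binary):
    """Count 8-connected components ("blobs") of 1-pixels by flood fill."""
    M, N = len(binary), len(binary[0])
    seen = set()
    blobs = 0
    for x in range(M):
        for y in range(N):
            if binary[x][y] == 1 and (x, y) not in seen:
                blobs += 1
                stack = [(x, y)]
                seen.add((x, y))
                while stack:
                    cx, cy = stack.pop()
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nx, ny = cx + dx, cy + dy
                            if (0 <= nx < M and 0 <= ny < N
                                    and binary[nx][ny] == 1
                                    and (nx, ny) not in seen):
                                seen.add((nx, ny))
                                stack.append((nx, ny))
    return blobs

b = [[1, 1, 0, 0],
     [0, 1, 0, 1],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]
print(count_blobs(b))  # 3
```

Each blob found this way can also serve as a mask recording where the corresponding object lies in the original image.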
Neighbours of a Pixel
A pixel p at coordinates (x, y) has four horizontal and vertical neighbours whose coordinates are given by

    (x+1, y), (x-1, y), (x, y+1), (x, y-1)

This set of pixels, called the 4-neighbours of p, is denoted by N4(p). Each pixel is a unit distance from (x, y).

Basic Relationships between pixels

N4(p):
              (x-1,y)
    (x,y-1)   (x,y)    (x,y+1)
              (x+1,y)
Neighbours of a Pixel
The four diagonal neighbours of p, ND(p), have coordinates

    (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)

These points, together with the 4-neighbours, are called the 8-neighbours of p, denoted by N8(p).

ND(p):                            N8(p):
(x-1,y-1)          (x-1,y+1)      (x-1,y-1)  (x-1,y)  (x-1,y+1)
            (x,y)                 (x,y-1)    (x,y)    (x,y+1)
(x+1,y-1)          (x+1,y+1)      (x+1,y-1)  (x+1,y)  (x+1,y+1)
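These neighbour sets can be written directly as small Python helpers (an illustrative sketch, not code from the notes):

```python
def n4(p):
    """4-neighbours of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbours of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbours: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

print(sorted(n4((1, 1))))  # [(0, 1), (1, 0), (1, 2), (2, 1)]
```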
Connectivity
Connectivity between pixels is important in the determination of boundaries.
To establish whether two pixels are connected, it must be determined:
- if they are neighbours, and
- if their gray levels satisfy a specified criterion of similarity (for example, if their gray levels are equal).
For example, consider a binary image with values 0 and 1. Two pixels may be 4-neighbours, but they are said to be connected only if they have the same value.
Adjacency (Connectivity)
Let V be the set of gray-level values used to define adjacency.
In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
In a gray scale image, the set V typically contains more elements. For example, for adjacency of pixels with possible gray-level values 0 to 255, the set V could be any subset of these 256 values.
Consider 3 types of adjacency:
1. 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
2. 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
3. m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if
   (i) q is in N4(p), or
   (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
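The m-adjacency rule can be sketched in Python (an illustration with a made-up pixel arrangement, not code from the notes; the image is a dict mapping (x, y) to gray level):

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(p, q, img, V=frozenset({1})):
    """m-adjacency: q in N4(p), or q in ND(p) with no shared 4-neighbour
    whose value is in V."""
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        # the set N4(p) ∩ N4(q) must have no pixels with values from V
        return not any(img.get(t) in V for t in n4(p) & n4(q))
    return False

# A small arrangement for V = {1}:
#   0 1 1
#   0 1 0
#   0 0 1
img = {(0, 0): 0, (0, 1): 1, (0, 2): 1,
       (1, 0): 0, (1, 1): 1, (1, 2): 0,
       (2, 0): 0, (2, 1): 0, (2, 2): 1}

print(m_adjacent((1, 1), (0, 1), img))  # True: q is in N4(p)
print(m_adjacent((1, 1), (0, 2), img))  # False: they already share (0, 1)
print(m_adjacent((1, 1), (2, 2), img))  # True: diagonal, no shared 4-neighbour in V
```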
Adjacency
Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
For example, consider the pixel arrangement shown in the Figure for V = {1}. The three pixels at the top of the Figure show multiple (ambiguous) 8-adjacency. This ambiguity is removed by using m-adjacency.

Figure. (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed) to the center pixel; (c) m-adjacency.
Adjacency
Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. It is understood here and in the following definitions that adjacent means 4-, 8-, or m-adjacent.
Example Problem
Consider the two image subsets, S1 and S2, shown in the following figure. For V = {1}, determine whether these two subsets are (a) 4-adjacent, (b) 8-adjacent, or (c) m-adjacent.
Solution
Let p and q be as shown in the Figure below.
(a) S1 and S2 are not 4-adjacent because q is not in the set N4(p).
(b) S1 and S2 are 8-adjacent because q is in the set N8(p).
(c) S1 and S2 are m-adjacent because (i) q is in ND(p) and (ii) the set N4(p) ∩ N4(q) is empty.
A digital path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

    (x0, y0), (x1, y1), ..., (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 <= i <= n.
Here n is the length of the path.
If (x0, y0) = (xn, yn), the path is a closed path.
We can define 4-, 8-, or m-paths depending on the type of adjacency specified.
Referring to figure (b) below, the paths between the northeast and southeast points are 8-paths, and the path in figure (c) is an m-path.

Figure. (a) Arrangement of pixels; (b) pixels that are 8-adjacent (shown dashed) to the center pixel; (c) m-adjacency.
Example Problem
Consider the image segment shown. Let V = {0, 1} and compute the lengths of the shortest 4-, 8-, and m-paths between p and q. If a particular path does not exist between these two points, explain why.
Solution
When V = {0, 1}, a 4-path does not exist between p and q because it is impossible to get from p to q by travelling along points that are both 4-adjacent and have values from V. Figure (a) shows that it is not possible to get to q. The shortest 8-path is shown in Figure (b); its length is 4. The length of the shortest m-path (shown dashed) is 5. Both of these shortest paths are unique in this case.
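Shortest 4- and 8-path lengths can be found by breadth-first search. The following Python sketch uses a small hypothetical 3 x 3 segment (not the one from the problem above, whose figure is not reproduced here):

```python
from collections import deque

def n4(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def n8(p):
    x, y = p
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def shortest_path_len(img, p, q, neighbours, V):
    """BFS for the shortest path from p to q through pixels whose values
    are in V; returns None if no such path exists."""
    if img.get(p) not in V or img.get(q) not in V:
        return None
    seen, queue = {p}, deque([(p, 0)])
    while queue:
        cur, d = queue.popleft()
        if cur == q:
            return d
        for nb in neighbours(cur):
            if nb not in seen and img.get(nb) in V:
                seen.add(nb)
                queue.append((nb, d + 1))
    return None

# Hypothetical segment:
#   1 0 1
#   0 1 0
#   1 0 1
img = {(x, y): v
       for x, row in enumerate([[1, 0, 1], [0, 1, 0], [1, 0, 1]])
       for y, v in enumerate(row)}

print(shortest_path_len(img, (0, 0), (2, 2), n4, {1}))  # None: no 4-path
print(shortest_path_len(img, (0, 0), (2, 2), n8, {1}))  # 2: via (1, 1)
```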
For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function if
(a) D(p, q) >= 0, with D(p, q) = 0 only if p = q,
(b) D(p, q) = D(q, p), and
(c) D(p, z) <= D(p, q) + D(q, z).

The Euclidean distance between p and q is defined as

    De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)

Distance Measures
The D4 distance (also called city-block distance) between p and q is defined as

    D4(p, q) = |x - s| + |y - t|

For example, the pixels with D4 distance <= 2 from the center point form the diamond:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The D8 distance (also called chessboard distance) between p and q is defined as

    D8(p, q) = max(|x - s|, |y - t|)

The pixels with D8 distance <= 2 from the center point form the square:

    2 2 2 2 2
    2 1 1 1 2
    2 1 0 1 2
    2 1 1 1 2
    2 2 2 2 2
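The three distance measures translate directly into Python (an illustrative sketch, not code from the notes):

```python
import math

def d_e(p, q):
    """Euclidean distance De(p, q)."""
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d_4(p, q):
    """City-block distance D4(p, q)."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d_8(p, q):
    """Chessboard distance D8(p, q)."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(d_e(p, q))  # 5.0
print(d_4(p, q))  # 7
print(d_8(p, q))  # 4
```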
Image Enhancement in the spatial domain
Objective of enhancement
To process an image so that the result is more suitable than the original image for a specific application.
Image enhancement techniques are very much problem oriented. For example, a method that is quite useful for enhancing X-ray images may not necessarily be the best approach for enhancing pictures of Mars transmitted by a space probe.
Image enhancement approaches fall into two broad categories:
- Spatial domain approaches are based on direct manipulation of pixels in an image.
- Frequency domain techniques are based on modifying the Fourier transform of an image.
We will only cover spatial domain methods in this course.
Spatial domain refers to the pixels composing an image. Procedures operate directly on these pixels.
Spatial domain processes will be denoted by the expression

    g(x, y) = T[f(x, y)]

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined over some neighborhood of (x, y).
Principal Approach
A square or rectangular subimage area centered at (x, y) is used to define a neighborhood about the point (x, y).
The center of the subimage is moved from pixel to pixel, starting, say, at the top left corner.
Figure. A 3 x 3 neighborhood about a point (x, y) in an image
The operator T is applied at each location (x, y) to produce the output, g, at that location. The process utilizes only the pixels in the area of the image spanned by the neighborhood.
The simplest form of T is when the neighborhood is of size 1 x 1. In this case, g depends only on the value of f at (x, y), and T becomes a gray-level transformation function of the form

    s = T(r)

where r denotes the gray level of f(x, y) and s denotes the gray level of g(x, y) at any point (x, y).
For example, if T(r) has the form shown, the transformation would produce an image of higher contrast than the original by darkening the levels below m and brightening the levels above m in the original image.
In this technique, known as contrast stretching, the values of r below m are compressed by the transformation function into a narrow range of s, toward black. The opposite effect takes place for values of r above m.
Figure. Gray level transformation functions for contrast enhancement
In the limiting case shown in the Figure, T(r) produces a two-level (binary) image. A mapping of this form is called a thresholding function.
Figure. Gray level transformation functions for contrast enhancement
Example of Thresholding
Original MR image Thresholding using m = 150
The general approach is to use a function of the values of f in a predefined neighborhood of (x,y) to determine the value of g at (x,y).
One of the principal approaches in this formulation is based on the use of so-called masks (also referred to as filters)
What is a mask? A mask is basically a small 2-D array in which the values of the mask coefficients determine the nature of the process. This approach is often referred to as mask processing or filtering.
Some examples of basic gray level transformations:
1. Image negatives
2. Log transformations (more on these with the Fourier transform)
3. Power-law transformations
Image negatives
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, given by the expression

    s = L - 1 - r

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative.
This type of processing is particularly suited for enhancing white or gray detail embedded in dark regions of an image.
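Applied pixel by pixel, the negative transformation looks like this in Python (an illustrative sketch with made-up pixel values, not code from the notes):

```python
L = 256  # an 8-bit image: gray levels 0..255

def negative(img):
    """Apply s = L - 1 - r to every pixel of a list-of-lists image."""
    return [[L - 1 - r for r in row] for row in img]

img = [[0, 64, 128],
       [200, 255, 10]]
print(negative(img))  # [[255, 191, 127], [55, 0, 245]]
```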
Example

Figure. Original digital mammogram (left); negative image obtained using the negative transformation (right).
The original image is a digital mammogram showing a small lesion. In spite of the fact that the visual content is the same in both images, note how much easier it is to analyze the breast tissue in the negative image in this particular case.
Log Transformations
The general form of the log transformation is

    s = c log(1 + r)

where c is a constant, and it is assumed that r >= 0.
The shape of the log curve in Figure shows that this transformation maps a narrow range of low gray-level values in the input image into a wider range of output levels.
The log function has the important characteristic that it compresses the dynamic range of images with large variations in pixel values.
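A short Python sketch of the transformation (illustrative only; the scaling of c so that the maximum input maps to the top of the output range is a common choice, not something stated in the notes):

```python
import math

def log_transform(img, c=1.0):
    """Apply s = c * log(1 + r) pixel-wise; assumes r >= 0."""
    return [[c * math.log(1 + r) for r in row] for row in img]

# Choose c so the maximum input (255) maps to 255 at the output.
c = 255 / math.log(256)
out = log_transform([[0, 63, 255]], c)
print([round(s) for s in out[0]])  # [0, 191, 255]
```

Note how the mid-range input 63 is pushed up to 191: low gray levels are spread over a wider output range, compressing the dynamic range at the top.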
Arithmetic/logic operations are carried out on a pixel-by-pixel basis between two or more images. The exception is the logic operation NOT, which is performed on a single image.
For example, subtraction of two images results in a new image whose pixel at coordinates (x, y) is the difference between the pixels at that same location in the two images being subtracted.
In logic operations on gray-scale images, pixel values are processed as strings of binary numbers.
The main logic operators are AND, OR, and NOT.

Enhancement using Arithmetic/Logic Operation
NOT Logic Operator
The NOT logic operator performs the same function as the negative transformation.

Figure. Original digital mammogram (left); negative image obtained using the negative transformation (right).
AND and OR operations
Used for selecting subimages in an image (i.e. masking). Masking is referred to as region-of-interest processing and is used primarily to isolate an area for processing.

Figure. (a) Original image; (b) AND image mask; (c) result of the AND operation on images (a) and (b); (d) original image; (e) OR image mask; (f) result of the OR operation on images (d) and (e).
Subtraction and addition are the most useful arithmetic operations.

Image Subtraction
The difference between two images f(x, y) and h(x, y) is expressed as

    g(x, y) = f(x, y) - h(x, y)

It is obtained by computing the difference between all pairs of corresponding pixels from f and h.
The key usefulness of subtraction is the enhancement of differences between images.

Enhancement using Arithmetic/Logic Operation
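Pixel-wise subtraction in Python (an illustrative sketch with made-up values, not code from the notes; f and h are assumed to have the same dimensions):

```python
def subtract(f, h):
    """g(x, y) = f(x, y) - h(x, y), computed pixel by pixel."""
    return [[fv - hv for fv, hv in zip(frow, hrow)]
            for frow, hrow in zip(f, h)]

f = [[100, 120], [130, 140]]
h = [[100, 100], [100, 100]]
print(subtract(f, h))  # [[0, 20], [30, 40]]
```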
One of the most commercially successful and beneficial uses of image subtraction is in the area of medical imaging called mask mode radiography.

Figure. Enhancement by image subtraction. (a) Mask image. (b) An image (taken after injection of a contrast medium into the bloodstream) with the mask subtracted out.
Spatial filtering involves moving the filter mask from point to point in an image.
At each point (x, y), the response of the filter at that point is calculated using a predefined relationship.
For linear spatial filtering, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.
Spatial filtering is used in smoothing, edge detection, noise removal, etc.
Basics of Spatial Filtering
For example, for the 3 x 3 mask shown, the result (or response), R, of linear filtering with the filter mask at a point (x, y) in the image is

    R = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + ... + w(0,0) f(x, y)
        + ... + w(1,0) f(x+1, y) + w(1,1) f(x+1, y+1)

which is the sum of products of the mask coefficients with the corresponding pixels directly under the mask.
Note that the coefficient w(0,0) coincides with the image value f(x, y), indicating that the mask is centered at (x, y) when the computation of the sum of products takes place.
Figure. Mask coefficients, showing the coordinate arrangement (left); pixels of the image section under the mask (right).
In general, linear filtering of an image f of size M x N with a filter mask of size m x n is given by the expression:

    g(x, y) = Σ_{s=-a..a} Σ_{t=-b..b} w(s, t) f(x + s, y + t)

where a = (m - 1)/2, b = (n - 1)/2, and the expression is evaluated for x = 0, 1, 2, ..., M - 1 and y = 0, 1, 2, ..., N - 1.

The process of linear filtering is similar to a frequency domain concept called convolution.
The response, R, of an m x n mask at any point (x, y) is given by the expression:

    R = w1 z1 + w2 z2 + ... + w_mn z_mn = Σ_{i=1..mn} w_i z_i

where the w_i are the mask coefficients, the z_i are the values of the image gray levels corresponding to those coefficients, and mn is the total number of coefficients in the mask.
For the 3 x 3 general mask, the response at any point (x, y) in the image is given by

    R = w1 z1 + w2 z2 + ... + w9 z9 = Σ_{i=1..9} w_i z_i
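The sum-of-products operation can be sketched in Python (an illustrative sketch, not code from the notes; here border pixels are simply copied through, which is one of several possible border-handling choices):

```python
def filter2d(img, mask):
    """Linear spatial filtering of a list-of-lists image with an
    odd-sized mask: at each interior point, the response is the sum of
    products of the mask coefficients and the pixels under the mask."""
    M, N = len(img), len(img[0])
    m, n = len(mask), len(mask[0])
    a, b = (m - 1) // 2, (n - 1) // 2
    out = [row[:] for row in img]          # border pixels copied through
    for x in range(a, M - a):
        for y in range(b, N - b):
            out[x][y] = sum(mask[s + a][t + b] * img[x + s][y + t]
                            for s in range(-a, a + 1)
                            for t in range(-b, b + 1))
    return out

# A 3 x 3 mask of ones sums the neighborhood; dividing by 9 gives the average.
ones = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(filter2d(img, ones)[1][1])      # 45
print(filter2d(img, ones)[1][1] / 9)  # 5.0
```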
Smoothing Spatial Filters are used for blurring and for noise reduction.
Blurring is used in preprocessing steps, for example, removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves.
The output of a smoothing, linear spatial filter is the average of the pixels contained in the neighborhood of the filter mask.
These filters are also called averaging filters or low-pass filters.
Smoothing Spatial Filters
Basic idea
The value of every pixel in an image is replaced by the average of the gray levels in the neighborhood defined by the filter mask.
As a result, the resulting image has reduced sharp transitions in gray levels.
Because random noise typically consists of sharp transitions in gray levels, smoothing is applied to reduce noise.
However, edges (which are desirable) are also characterized by sharp transitions in gray levels, so averaging filters have the undesirable side effect of blurring edges.
Example of two 3 x 3 smoothing filters
Example of 3 x 3 smoothing filters (1)
This filter produces the standard average of the pixels under the mask. Substituting the coefficients of the mask into

    R = w1 z1 + w2 z2 + ... + w9 z9 = Σ_{i=1..9} w_i z_i

gives

    R = (1/9) Σ_{i=1..9} z_i

which is the average of the gray levels of the pixels in the 3 x 3 neighborhood defined by the mask.

Smoothing Spatial Filters
Example of 3 x 3 smoothing filters (2)
This filter produces a weighted average. It gives more importance (weight) to some pixels at the expense of others.
The center point has the highest weight, and the weights decrease as a function of increasing distance from the origin.
This reduces blurring in the smoothing process.
What is the effect of smoothing when the size of the filter increases?

Figure. Original image and results of smoothing with square averaging filters of sizes n = 3, 5, 9, 15, and 35.
The response of nonlinear spatial filters is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.
An example is the median filter. The median filter replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
For certain types of random noise (e.g. impulse noise), median filters provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.
Order Statistics filters
What is the median? Recall: the median, ξ, of a set of values is such that half the values in the set are less than or equal to ξ, and half are greater than or equal to ξ.
Steps in performing median filtering at a point in an image:
1. Sort the values of the pixel in question and its neighbors.
2. Determine their median.
3. Assign the median value to that pixel.

Median filters
For example, suppose that a 3 x 3 neighborhood has values (10, 20, 20, 20, 15, 20, 20, 25, 100).
These values are sorted as (10, 15, 20, 20, 20, 20, 20, 25, 100).
In a 3 x 3 neighborhood the median is the 5th largest value, so the median here is 20.
The principal function of the median filters is to force points with distinct gray levels to be more like their neighbours.
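The three steps above can be sketched in Python, reusing the neighborhood values from the worked example (border handling here simply copies the border pixels through, one of several possible choices; this is an illustration, not code from the notes):

```python
def median_filter_3x3(img):
    """3 x 3 median filtering of a list-of-lists image:
    sort the pixel and its 8 neighbours, take the 5th sorted value."""
    M, N = len(img), len(img[0])
    out = [row[:] for row in img]          # border pixels copied through
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            window = sorted(img[x + s][y + t]
                            for s in (-1, 0, 1) for t in (-1, 0, 1))
            out[x][y] = window[4]          # median of 9 sorted values
    return out

# The impulse value 100 is replaced by the neighborhood median, 20.
img = [[10, 20, 20],
       [20, 100, 20],
       [15, 20, 25]]
print(median_filter_3x3(img)[1][1])  # 20
```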
Figure 1. Application of the median filter
Other types of order statistics filters:
1. Max filter, useful for finding the brightest points in an image.
2. Min filter, useful for finding the darkest points in an image.
3. Mean filter, which simply smoothes local variations in an image.
Image Segmentation
Segmentation subdivides an image into its constituent regions or objects.
Segmentation accuracy determines the eventual success or failure of computerized analysis procedures.
Image segmentation algorithms are generally based on one of two basic properties of intensity values: discontinuity and similarity.
- Discontinuity: the approach is to partition an image based on abrupt changes in intensity, such as edges in an image.
- Similarity: based on partitioning an image into regions that are similar according to a set of predefined criteria, e.g. thresholding.
Application of Segmentation: quantifying a pathology
The strength of medical imaging is that it provides very visual information that often gives very important localisation information.
The downside is that quantifying a pathology from an image (which is normally interpreted by a radiologist) is not easy. Indeed, diagnoses sometimes lack repeatability, and standardizing the diagnostic process, particularly in uncertain cases, becomes difficult.
Because many images are now digital by nature, automated or computer-assisted diagnosis is theoretically possible.
Application of Segmentation: rendering 3D data

Figure. (a) Single MR T1-weighted image slice. (b) Rendered by segmenting structures from each slice of the volume, placing these together, constructing surfaces, and finally rendering in 3D. The difficult part is the segmentation!
Thresholding can be defined as a labelling operation on an image.
For gray-scale images, thresholding is a means of distinguishing pixels of higher intensity from those of lower intensity.
If g(m, n) represents a binary version of the gray-scale image f(m, n), then we can express the thresholding operation as

g(m, n) = 1 if f(m, n) > T, and g(m, n) = 0 otherwise.

This is a simple example of a decision process upon an input (gray-scale) image which yields a binary image.
Thresholding is a well-known way of re-quantizing an image into just two levels, and a good way of reducing data bandwidth.
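The thresholding rule above is a one-liner in NumPy; the function name below is illustrative:

```python
import numpy as np

def threshold(f, T):
    """Binary thresholding: g = 1 where f > T, else 0."""
    return (f > T).astype(np.uint8)

f = np.array([[ 10, 200],
              [150,  90]])
print(threshold(f, 140))  # [[0 1]
                          #  [1 0]]
```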
Thresholding
For example, consider applying a thresholding operation to a gray-scale image containing pixel values in the range 0–255.
An image (as shown) contains pixels which correspond to an object of interest, and also pixels which correspond to the background. We would like to be able to label which pixels belong to the object, and which to the background. This represents the simplest example of image segmentation.
Thresholding
Figure: (a) Leaf on Grass, grey scale image of object and background. (b) Thresholded at a gray level of 140
Techniques to detect three basic types of gray-level discontinuities in a digital image:
Points Lines Edges
The common way to look for discontinuities is to run a mask through
the image in the manner described in the previous lecture.
For a 3 x 3 mask, this procedure involves computing the sum of products of the coefficients with the gray levels contained in the region encompassed by the mask.
Detection of Discontinuities
Recall: the response of the mask at any point in the image is given by

R = w1 z1 + w2 z2 + … + w9 z9 = Σ (i = 1 to 9) wi zi

where zi is the gray level of the pixel associated with mask coefficient wi. The response of the mask is defined with respect to its center location.
Detection of Discontinuities
Point Detection: Using the mask shown, a point is detected at the location on which the mask is centered if

|R| ≥ T

where T is a pre-defined non-negative threshold and R = w1 z1 + w2 z2 + … + w9 z9 = Σ (i = 1 to 9) wi zi. This formulation measures the weighted differences between the center point and its neighbors.

Point detection mask
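A minimal sketch of point detection with the standard 3 x 3 mask (8 at the center, −1 elsewhere), flagging pixels where |R| ≥ T. The helper name and test image are illustrative:

```python
import numpy as np

# Standard point-detection mask: coefficients sum to zero.
POINT_MASK = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])

def detect_points(img, T):
    """Flag interior pixels where the mask response satisfies |R| >= T."""
    h, w = img.shape
    points = np.zeros((h, w), dtype=bool)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            R = np.sum(POINT_MASK * img[i - 1:i + 2, j - 1:j + 2])
            points[i, j] = abs(R) >= T
    return points

img = np.full((5, 5), 50.0)
img[2, 2] = 120.0                        # an isolated bright point
print(detect_points(img, T=400)[2, 2])   # True (R = 8*120 - 8*50 = 560)
```

In a flat region the response is zero, so only the isolated discontinuity exceeds the threshold.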
Detection of Discontinuities
Point Detection (For example)
Single pixel discontinuities
Detection of Discontinuities
Line Detection Consider the mask shown
If the first mask was moved around an image, it would respond more strongly to lines (one pixel thick) oriented horizontally.
With a constant background, the maximum response would result when the line passed through the middle row of the mask.
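The behaviour described above can be sketched with the standard horizontal line-detection mask; the loop simply evaluates the mask response R = Σ wi zi at every interior pixel (function name and test image are illustrative):

```python
import numpy as np

# Standard 3x3 horizontal line-detection mask (coefficients sum to zero).
HORIZONTAL = np.array([[-1, -1, -1],
                       [ 2,  2,  2],
                       [-1, -1, -1]])

def mask_response(img, mask):
    """Correlate a 3x3 mask with the image; response R at each interior pixel."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(mask * img[i - 1:i + 2, j - 1:j + 2])
    return out

# A one-pixel-thick horizontal line on a constant (zero) background.
img = np.zeros((5, 5))
img[2, :] = 10.0
R = mask_response(img, HORIZONTAL)
print(R[2, 2])  # 60.0 -- maximum response, line through the middle row
print(R[1, 2])  # -30.0 -- line under the mask's bottom row
```

The response is maximal exactly when the line passes through the middle row of the mask, as stated above.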
Detection of Discontinuities
Line Detection (Example): A digitized (binary) portion of a wire-bond mask for an electronic circuit. Suppose we are interested in finding all the lines that are one pixel thick and oriented at −45°. For this purpose, we choose the following mask.
Detection of Discontinuities
Line Detection (Example)
In order to determine which lines best fit the mask, we simply threshold this image.
Detection of Discontinuities
Edge Detection An ideal edge is a set of connected
pixels (in the vertical direction here).
The edge is located at an orthogonal step transition in gray level (as shown)
Detection of Discontinuities
Edge Detection In practice, edges are more closely modeled as
having a ramplike profile.
This is caused by imperfections introduced by factors such as the quality of the image acquisition system, the sampling rate, etc.
The slope of the ramp is inversely proportional to the degree of blurring in the edge.
The edge is no longer a thin (one pixel thick) path. The thickness of the edge is determined by the length of the ramp, as it transits from an initial to a final gray level.
Detection of Discontinuities
First- and second-order digital derivatives can be used for the detection of edges in an image.
Edge Detection
Two regions separated by a vertical edge. Detail near the edge, showing a gray level profile, and the first and second derivatives of the profile.
First derivative Moving from left to right along
the profile, the first derivative is positive at the points of transition into and out of the ramp.
It is constant for points in the ramp
It is zero in areas of constant gray level. Two regions separated by a vertical edge.
Edge Detection - First derivative
Second Derivative is positive at the transition
associated with the dark side of the edge
Is negative at the transition associated with the light side of the edge
Is zero along the ramp and in areas of constant gray level.
The sign of the derivatives would be reversed for an edge that transitions from light to dark.
Two regions separated by a vertical edge.
Edge Detection - Second derivative
Observations The magnitude of the first derivative can be used to detect the
presence of an edge at a point in an image (i.e., to determine if a point is on a ramp)
The sign of the second derivative can be used to determine whether an edge pixel lies on the dark or light side of an edge.
Edge Detection
For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the two-dimensional column vector

∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T

The magnitude of this vector is given by

∇f = mag(∇f) = [Gx^2 + Gy^2]^(1/2)

It is common practice to refer to the magnitude of the gradient simply as the gradient, ∇f.
Edge Detection
The direction of the gradient vector is also an important quantity. Let α(x, y) represent the direction angle of the vector ∇f at (x, y). Then, from vector analysis,

α(x, y) = tan⁻¹(Gy / Gx)

where the angle is measured with respect to the x-axis. The direction of an edge at (x, y) is perpendicular to the direction of the gradient vector at that point.
Edge Detection
Computation of the gradient of an image is based on obtaining the partial derivatives ∂f/∂x and ∂f/∂y at every pixel location. An approach using masks of size 3 x 3 is given by

Gx = (z7 + z8 + z9) − (z1 + z2 + z3)
Gy = (z3 + z6 + z9) − (z1 + z4 + z7)
Edge Detection
Example of two most commonly used 3 x 3 masks for computing the gradient
These masks are used to obtain the gradient component Gx and Gy.
The prewitt masks are simpler to implement than the sobel masks
But sobel have slightly superior noise-suppression characteristics, an important issue when dealing with derivatives.
Note that the coefficients in all the masks show sum to 0!
Edge Detection
The computational burden of implementing the magnitude equation

∇f = mag(∇f) = [Gx^2 + Gy^2]^(1/2)

over an entire image is not trivial, so it is common practice to approximate the magnitude of the gradient by using absolute values instead of squares and square roots:

∇f ≈ |Gx| + |Gy|

This equation is simpler to compute, and it still preserves relative changes in gray levels. However, the resulting filters will not be isotropic (invariant to rotation) in general; the Prewitt and Sobel masks give isotropic results only for vertical and horizontal edges.
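A sketch of gradient-based edge detection with the Sobel masks, using the |Gx| + |Gy| approximation. Border pixels are simply left at zero here, and the function name and test image are illustrative:

```python
import numpy as np

# Sobel masks: Gx = bottom row minus top row, Gy = right column minus left.
SOBEL_X = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])
SOBEL_Y = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def gradient_magnitude(img):
    """Approximate |grad f| ~ |Gx| + |Gy| at interior pixels with Sobel masks."""
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(SOBEL_X * win)
            gy = np.sum(SOBEL_Y * win)
            out[i, j] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the gradient is large only along the edge.
img = np.zeros((5, 6))
img[:, 3:] = 100.0
g = gradient_magnitude(img)
print(g[2, 2])  # 400.0 (Gy = 100 + 200 + 100, Gx = 0)
print(g[2, 1])  # 0.0 (flat region)
```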
Edge Detection
Examples
Edge Detection
Original image |Gx| component of the gradient in the x-direction.
|Gy| component of the gradient in the y-direction.
Gradient image, |Gx| + |Gy|
The Laplacian of a 2-D function f(x, y) is a second-order derivative defined as

∇²f = ∂²f/∂x² + ∂²f/∂y²

For a 3 x 3 region, one of the two digital approximations to the Laplacian encountered most frequently in practice is

∇²f = 4z5 − (z2 + z4 + z6 + z8)

A digital approximation including the diagonal neighbors is given by

∇²f = 8z5 − (z1 + z2 + z3 + z4 + z6 + z7 + z8 + z9)
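Both digital Laplacian approximations can be coded directly from these formulas; the function name, border handling, and test image below are illustrative:

```python
import numpy as np

def laplacian(img, diagonals=False):
    """Digital Laplacian at interior pixels.

    diagonals=False: 4*z5 - (z2 + z4 + z6 + z8)
    diagonals=True:  8*z5 - (sum of all eight neighbors)
    """
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            z5 = img[i, j]
            four = img[i - 1, j] + img[i + 1, j] + img[i, j - 1] + img[i, j + 1]
            if diagonals:
                diag = (img[i - 1, j - 1] + img[i - 1, j + 1]
                        + img[i + 1, j - 1] + img[i + 1, j + 1])
                out[i, j] = 8 * z5 - (four + diag)
            else:
                out[i, j] = 4 * z5 - four
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 20.0                 # an isolated bump on a flat background
print(laplacian(img)[2, 2])      # 40.0 (4*20 - 4*10)
print(laplacian(img, diagonals=True)[2, 2])  # 80.0 (8*20 - 8*10)
```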
Edge Detection
Masks for implementing the two equations

∇²f = 4z5 − (z2 + z4 + z6 + z8)    and    ∇²f = 8z5 − (z1 + z2 + z3 + z4 + z6 + z7 + z8 + z9)

These masks are isotropic for rotation increments of 90° and 45°, respectively.
Edge Detection
There are problems using Laplacian in its original form for edge detection
1. Sensitive to noise
2. The magnitude of the Laplacian produces double edges, which complicates segmentation
3. Unable to detect edge direction
Role of Laplacian 1. Using its zero-crossing property for
edge location 2. Complementary purpose of
establishing whether a pixel is on the dark or light side of an edge
Edge Detection
Example
Laplacian Zero Crossing
Gradient Operator Sobel Original
Edges in the zero crossing image are thinner than the gradient edges.
Edge Detection
What is the histogram of an image? The histogram of a digital image with gray levels in the range [0, L − 1] is a discrete function given by
Histogram
h(rk) = nk
where rk is the kth gray level and nk is the number of pixels in the image having gray level rk.
Histogram
A histogram can be normalized by dividing each of its values by the total number of pixels in the images, denoted by n.
Thus a normalized histogram is given by p(rk) = nk / n, for k = 0, 1, …, L − 1.
p(rk) gives an estimate of the probability of occurrence of gray level rk.
The sum of all components of a normalized histogram is equal to 1.
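The normalized histogram defined above takes a few lines of NumPy; the function name and toy image are illustrative:

```python
import numpy as np

def normalized_histogram(img, L=256):
    """p(r_k) = n_k / n for gray levels 0..L-1; components sum to 1."""
    n_k = np.bincount(img.ravel(), minlength=L)  # pixel counts per gray level
    return n_k / img.size

img = np.array([[0, 1, 1],
                [3, 1, 0]], dtype=np.uint8)
p = normalized_histogram(img, L=4)
print(p.sum())  # 1.0 -- all components of a normalized histogram sum to 1
```

Here p[1] = 3/6 = 0.5, since gray level 1 occurs in half of the six pixels, matching p(rk) = nk / n.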
Histograms are the basis for numerous spatial domain processing techniques. Image enhancement Image compression Image segmentation
The pollen image is shown with four basic gray-level characteristics: dark, light, low contrast and high contrast. The histogram plots are simply plots of h(rk) = nk versus rk, or p(rk) = nk/n versus rk.
The horizontal axis of each histogram plot corresponds to gray-level values rk.
The vertical axis corresponds to values of h(rk) = nk, or p(rk) = nk/n if the values are normalized.
Histogram
The components of the histogram of the dark image are concentrated on the low (dark) side of the gray scale.
The components of the histogram of the bright image are concentrated on the high (bright) side of the gray scale.
Histogram
An image with low contrast has a narrow histogram centered toward the middle of the gray scale.
An image with high contrast has a histogram that covers a broad range of the gray scale, with a distribution of pixels that is not too far from uniform. (This is what we like to see!)
Histogram