Image and related concepts
Aditya Tatu
What is an Image
An image is a representation of some property of a physical entity.
The property can be represented as a function f(x, y, z) of 3 variables.
A 2D image is obtained by:
perspective projection through a pin-hole camera, assuming that the objects are very far away from the imaging
system (e.g. z → ∞), thereby giving f′(x, y) = f(x, y, z).
When the independent variables x, y and the function value f are discretized, we get a Digital Image.
IT472 - DIP: Lecture 2 2/23
Image formation model
IT472 - DIP: Lecture 2 3/23
Let i(x, y) be the illumination at a point (x, y) and r(x, y) be the reflectance at the same point; then the image f(x, y) at that point is given by f(x, y) = i(x, y) r(x, y).
From physics, we get 0 < f(x, y), i(x, y) < ∞ and 0 < r(x, y) < 1.
The image-capturing device is matched to the illumination source used, e.g. infrared source with infrared detector, X-ray source with X-ray film, visible light with CCD array detectors.
Summary
In the end, we get a mathematical object f(x, y) to work with that represents the aspect of the real object we are interested in.
IT472 - DIP: Lecture 2 4/23
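The formation model above can be sketched numerically. This is a minimal, illustrative example (all array names and the particular illumination/reflectance patterns are ours, not from the slides): a smooth illumination field i is multiplied pointwise by a reflectance field r in (0, 1) to produce the image f.

```python
import numpy as np

# Illustrative sketch of the formation model f(x, y) = i(x, y) * r(x, y).
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
i_field = 100.0 + 50.0 * np.sin(2 * np.pi * x / w)        # illumination: 0 < i < inf
r_field = np.clip(np.random.default_rng(0).random((h, w)),
                  1e-6, 1 - 1e-6)                          # reflectance: 0 < r < 1
f = i_field * r_field                                      # observed image: 0 < f < inf
```

By construction f inherits the physical bounds: it is strictly positive and finite everywhere.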
What sort of objects are images?
Since we want to process, operate on, and play with images, we should first characterize what sort of objects images are and what it should be possible to do with them.
Should it be possible to apply filters on images (say, using convolution)?
If yes, then what operations should be allowed on images?
Addition and scalar multiplication → Vector spaces!
What sort of vector space? Differentiable functions? Continuous functions? Finite bandwidth?
NO!
IT472 - DIP: Lecture 2 5/23
Vector space of images
Images are defined on a set with finite area, i.e., images are functions with compact support.
The image values must be finite at all points,
→ the energy ||f||² = ∫_supp(f) f²(x, y) dx dy has to be finite.
Vector space of images
Images are part of a vector space of 2D functions with compact support Ω that are square integrable. This vector space is denoted L²(Ω).
IT472 - DIP: Lecture 2 6/23
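For a sampled image, the energy integral becomes a finite sum. A small sketch (the function name and pixel-spacing parameters are illustrative) approximating ||f||² by a Riemann sum over the pixel grid:

```python
import numpy as np

def energy(f, dx=1.0, dy=1.0):
    """Discrete approximation of ||f||^2 = integral of f^2 over its support."""
    return float(np.sum(f ** 2) * dx * dy)

f = np.ones((4, 4))     # constant image on a 4x4 support
print(energy(f))        # 16.0: finite, so this f belongs to L2(Ω)
```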
Image sensors
Figure: Single Sensor
Figure: Array of sensors
Figure: Line of sensors
Figure: Circular Sensor
IT472 - DIP: Lecture 2 7/23
Sampling & Quantization
Although theoretically 0 < f(x, y) < ∞, in practice Lmin ≤ f(x, y) ≤ Lmax, where Lmin > 0 and Lmax < ∞ depend on the sensor ratings.
For gray-scale digital images, we typically use Lmin = 0, representing black, and Lmax = L − 1, representing white.
Sampling and quantizing the image gives a digital image, which can be represented as an m × n matrix, say A, each element of which is called a pixel (picture element).
IT472 - DIP: Lecture 2 8/23
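Uniform quantization to L = 2^k levels can be sketched as follows (a minimal illustration; the function name and the min/max normalization choice are ours): the continuous range [Lmin, Lmax] is mapped onto the integers 0 .. L − 1.

```python
import numpy as np

def quantize(f, L=256):
    """Map a sampled image onto L uniform gray levels 0 .. L-1 (a sketch)."""
    fmin, fmax = f.min(), f.max()
    q = np.floor((f - fmin) / (fmax - fmin) * (L - 1) + 0.5)   # round to nearest level
    return q.astype(np.uint8)

f = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # a sampled "analog" image
A = quantize(f)                                # the m x n matrix of pixels
print(A.min(), A.max())                        # 0 255
```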
L is typically a power of 2, L = 2^k. L levels require k bits of memory per pixel.
For an image of size 1024 × 1024 pixels with L = 256 (k = 8), we need 1024 × 1024 × 8 bits = 8 Mb, i.e., approximately 1 MB of memory.
Compare this with the file size of an image on your computer.
IT472 - DIP: Lecture 2 9/23
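The memory count is a one-line computation; a small sketch (the helper name is ours, and it assumes L is a power of 2 as the slide states):

```python
def image_bits(m, n, L):
    """Bits needed for an m x n image with L = 2**k gray levels: m * n * k."""
    k = L.bit_length() - 1          # k such that L = 2**k (assumes a power of 2)
    return m * n * k

bits = image_bits(1024, 1024, 256)
print(bits, bits // (8 * 1024 * 1024))   # 8388608 bits, i.e. 1 MB (8 Mb)
```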
Spatial Resolution
The resolution of an imaging system determines the smallest discernible detail possible; technically, it is defined as the largest number of discernible lines per unit distance.
(1) Is the number of pixels enough to define resolution?
Not always! It also depends on (2) pixel size. Commonly found sensors have an individual pixel length/width of roughly 2-8 microns.
Are smaller sensors always better?
NO! Since an image is produced from the number of photons (a discrete random variable with a Poisson distribution) incident on each sensor, bigger sensors are found to be more reliable, i.e., to have a higher SNR than smaller sensors.
IT472 - DIP: Lecture 2 10/23
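The SNR claim can be checked with a small simulation (an illustrative sketch; the particular photon counts and sample size are ours). For a Poisson count with mean N, the standard deviation is √N, so SNR = N/√N = √N: a pixel that collects 100× more photons gains 10× in SNR.

```python
import numpy as np

# Simulated photon counts for a "small" and a "large" pixel (illustrative means).
rng = np.random.default_rng(1)
snrs = {}
for mean_photons in (100, 10000):
    counts = rng.poisson(mean_photons, size=100_000)
    snrs[mean_photons] = counts.mean() / counts.std()   # empirical SNR
print(snrs)   # SNR grows roughly as sqrt(mean): about 10 vs. about 100
```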
For color images, sensors are arranged in a (3) mosaic pattern.
Resolution also depends on the (4) spatial resolution of the lens.
To summarize, a camera with 10 megapixels is said to have better resolution than a 3-megapixel camera, assuming similar lenses and sensors and that the images are taken at the same distance.
IT472 - DIP: Lecture 2 11/23
Imaging system
We can assume that the imaging system is linear and position invariant (shift invariant).
A meaningful conclusion about the spatial resolution can be obtained by looking at the impulse response of the imaging system.
What is an impulse, and what is the impulse response, for a camera?
IT472 - DIP: Lecture 2 12/23
Spatial resolution
Print technology: dots per inch (dpi); computer screens: pixels per inch (ppi).
Difference: a collection of dots forms one pixel.
IT472 - DIP: Lecture 2 13/23
Intensity resolution
Smallest discernible change in the intensity level.
IT472 - DIP: Lecture 2 14/23
Topological concepts
Neighbors of a pixel p = (x, y)
4-Neighborhood: N4(p) = {(x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1)}.
Diagonal Neighborhood: ND(p) = {(x + 1, y + 1), (x − 1, y + 1), (x + 1, y − 1), (x − 1, y − 1)}.
8-Neighborhood: N8(p) = N4(p) ∪ ND(p).
IT472 - DIP: Lecture 2 16/23
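The three neighborhoods translate directly into code; a small sketch (function names mirror the slide's notation):

```python
# Neighborhoods of a pixel p = (x, y), written out from the definitions.
def N4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(p):
    x, y = p
    return {(x + 1, y + 1), (x - 1, y + 1), (x + 1, y - 1), (x - 1, y - 1)}

def N8(p):
    return N4(p) | ND(p)

print(len(N4((5, 5))), len(N8((5, 5))))   # 4 8
```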
Topological concepts
Adjacency: used to define relations between pixels of an image.
Let V be the set of gray levels used to define the relation. Examples: V = {0, . . . , 10}, V = {0}.
4-adjacency: two pixels p and q with values in V are 4-adjacent if q ∈ N4(p).
8-adjacency: two pixels p and q with values in V are 8-adjacent if q ∈ N8(p).
m-adjacency: two pixels p and q with values in V are m-adjacent if:
q ∈ N4(p), or
q ∈ ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are in V.
IT472 - DIP: Lecture 2 17/23
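A sketch of the m-adjacency test on a small binary image with V = {1} (the helper names and the example image are ours). The point of m-adjacency is that a diagonal neighbor q counts only when p and q have no common 4-neighbor whose value is in V, which removes the path ambiguity of plain 8-adjacency.

```python
def m_adjacent(img, p, q, V={1}):
    """m-adjacency check on a 2D list img (a sketch; V defaults to {1})."""
    N4 = lambda r: {(r[0] + 1, r[1]), (r[0] - 1, r[1]),
                    (r[0], r[1] + 1), (r[0], r[1] - 1)}
    ND = lambda r: {(r[0] + 1, r[1] + 1), (r[0] - 1, r[1] + 1),
                    (r[0] + 1, r[1] - 1), (r[0] - 1, r[1] - 1)}
    # True iff r is inside the image and its value lies in V.
    val = lambda r: (0 <= r[0] < len(img) and 0 <= r[1] < len(img[0])
                     and img[r[0]][r[1]] in V)
    if not (val(p) and val(q)):
        return False
    if q in N4(p):
        return True
    return q in ND(p) and not any(val(r) for r in N4(p) & N4(q))

img = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
print(m_adjacent(img, (0, 1), (1, 1)))   # True  (4-adjacent)
print(m_adjacent(img, (1, 1), (2, 2)))   # True  (diagonal, no shared 4-neighbor in V)
print(m_adjacent(img, (0, 2), (1, 1)))   # False (shared 4-neighbor (0, 1) is in V)
```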
Topological concepts
Path: a path from pixel p = (x, y) to pixel q = (s, t) is a sequence of distinct pixels with coordinates (x0 = x, y0 = y), (x1, y1), . . . , (xn = s, yn = t) such that pixels (xi−1, yi−1) and (xi, yi) are adjacent for all 1 ≤ i ≤ n. If the first and last pixels are the same, we have a closed path.
Connectedness: for a given subset S of pixels in an image, p, q ∈ S are said to be connected in S if there exists a path connecting the two, consisting of pixels only from S.
Connected component: for p ∈ S, the set of all pixels connected to p is a connected component of S.
Connected set: if S has only one connected component, it is called a connected set. A connected set in an image is often called a region.
IT472 - DIP: Lecture 2 18/23
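Counting connected components follows directly from these definitions. A minimal sketch using breadth-first search under 4-adjacency with V = {1} (the function name and example image are ours):

```python
from collections import deque

def connected_components(img):
    """Count 4-connected components of value-1 pixels in a 2D list (a sketch)."""
    h, w = len(img), len(img[0])
    seen, count = set(), 0
    for sx in range(h):
        for sy in range(w):
            if img[sx][sy] != 1 or (sx, sy) in seen:
                continue
            count += 1                       # found an unvisited component
            queue = deque([(sx, sy)])
            seen.add((sx, sy))
            while queue:                     # BFS over 4-neighbors in the set
                x, y = queue.popleft()
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if (0 <= nx < h and 0 <= ny < w
                            and img[nx][ny] == 1 and (nx, ny) not in seen):
                        seen.add((nx, ny))
                        queue.append((nx, ny))
    return count

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
print(connected_components(img))   # 3
```

Switching the 4-neighbor tuple to all eight offsets gives 8-connectivity instead.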
Application
Figure: Count the number of components in the image
IT472 - DIP: Lecture 2 19/23
Application
Figure: Convert it into a binary image
IT472 - DIP: Lecture 2 20/23
Application
Figure: Do some morphological processing on the image. Let V = {1}. Find the connected sets in the image
IT472 - DIP: Lecture 2 21/23
Application
Figure: 11 components!
IT472 - DIP: Lecture 2 22/23
Neighborhood using distances
We may define the neighborhood of a pixel using distances: N(p) = {p1 = (x1, y1) | d(p, p1) ≤ a}.
Euclidean distance: d(p, p1) = √((x − x1)² + (y − y1)²).
In general, a distance function (metric) should satisfy:
Positive definiteness: d(p, p1) ≥ 0, with equality iff p = p1.
Symmetry: d(p, p1) = d(p1, p).
Triangle inequality: d(p, p1) ≤ d(p, q) + d(q, p1).
Examples:
City-block distance: d4(p, p1) = |x − x1| + |y − y1|.
Chessboard distance: d8(p, p1) = max{|x − x1|, |y − y1|}.
IT472 - DIP: Lecture 2 23/23
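The three metrics are one-liners; a small sketch following the slide's d4/d8 notation:

```python
import math

def d_euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):                      # city-block (Manhattan) distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):                      # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclid(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```

Note how the metrics recover the neighborhoods: {p1 | d4(p, p1) ≤ 1} is exactly N4(p), and {p1 | d8(p, p1) ≤ 1} is exactly N8(p) (plus p itself in each case).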