PARTICULATE TEXTURE IMAGE ANALYSIS WITH
APPLICATIONS
Ronald T. Elunai
BEng(Hons)
SUBMITTED AS A REQUIREMENT OF
THE DEGREE OF
DOCTOR OF PHILOSOPHY
AT
QUEENSLAND UNIVERSITY OF TECHNOLOGY
BRISBANE, AUSTRALIA
MARCH 2011
Keywords
Texture
Particulate
Granulometry
Coarseness
Regularity
Particle size
Aggregate
Micro-texture
Macro-texture
Texture depth
Edge detection
Abstract
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness and line-likeness. In this dissertation a class of textures known as particulate textures is defined, comprising textures that are predominantly coarse or blob-like. The set of features that characterise particulate textures differs from the set that characterises classical textures; these features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified, and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions.
Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for the computation of particulate shape is proposed.
A granulometric approach to particle size estimation based on edge detection is developed, which can be adapted to the gray levels of the images by varying its parameters. This study binds visual texture analysis and road surface macro-texture in a theoretical framework, making it possible to apply monocular imaging techniques to road surface texture analysis. Results from the application of the developed algorithm to road surface macro-texture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique.
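The core of the edge-detection granulometry can be conveyed with a deliberately simplified sketch: detect edges, then treat the lengths of pixel runs between successive edge pixels along each scan line as a proxy for grain size. The function names below are illustrative only; the actual method (Chapter 4) additionally involves edge linking, dilation, skeletonization and calibration.

```python
def run_lengths(edge_row):
    """Lengths of pixel runs between successive edge pixels in one scan line."""
    edges = [i for i, v in enumerate(edge_row) if v]
    return [b - a for a, b in zip(edges, edges[1:])]

def mean_particle_size(edge_map):
    """Average run length over all scan lines of a binary edge map:
    a simple proxy for average grain size (the pixel-run idea, sketched)."""
    runs = []
    for row in edge_map:
        runs.extend(run_lengths(row))
    return sum(runs) / len(runs)

# Toy edge map: particle boundaries every 4 pixels along each scan line,
# so runs between edges have length 4.
row = [1 if i % 4 == 0 else 0 for i in range(16)]
edge_map = [row, row]
size = mean_particle_size(edge_map)  # -> 4.0
```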
The influence of image acquisition conditions, such as illumination and camera angle, on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination (R2) exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
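The agreement reported above is measured by the coefficient of determination of a least-squares linear fit between the two measurement series. The sketch below, using synthetic numbers rather than the thesis data, shows how R2 between an image-based texture-depth estimate and profilometer SMTD readings would be computed:

```python
import random

def coefficient_of_determination(x, y):
    """R^2 of a least-squares linear fit y ~ a*x + b, i.e. the fraction
    of the variance in y explained by the fitted line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                 # slope of the fit
    b = my - a * mx               # intercept of the fit
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Synthetic example (illustrative values only, not the thesis data):
random.seed(0)
etd = [random.uniform(0.3, 1.2) for _ in range(50)]          # image-based estimates (mm)
smtd = [0.9 * v + 0.1 + random.gauss(0, 0.02) for v in etd]  # profilometer readings (mm)
r2 = coefficient_of_determination(etd, smtd)
```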
Contents
Keywords i
Abstract iii
List of Tables xii
List of Figures xv
List of Abbreviations and Notations xxi
Authorship xxv
Acknowledgments xxvii
1 Introduction 1
1.1 Motivation and Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Research questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Main objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Major research contributions . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 List of Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Background 7
2.1 Texture analysis overview . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Particulate Textures (General) . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 Applications of particulate texture analysis . . . . . . . . . . . . 10
2.3 Features of Particulate Textures . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1 Micro-texture and Macro-texture . . . . . . . . . . . . . . . . . . 11
2.3.2 Compaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Levels of complexity of particulate textures . . . . . . . . . . . . . . . . 14
2.5 2D and 3D particulates . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.6 Application of Standard Texture Analysis Techniques to Particulate Textures . . . . . . . . . . . . . . . . . . . . 16
2.6.1 Standard Tasks in Texture Analysis . . . . . . . . . . . . . . . . 17
2.6.2 Standard Methods of Texture Analysis . . . . . . . . . . . . . . . 17
2.6.3 Applicability to Particulate textures . . . . . . . . . . . . . . . . 21
2.6.4 Coarseness vs. Particle size . . . . . . . . . . . . . . . . . . . . . 21
2.6.5 Placement regularity vs. Particulate shape . . . . . . . . . . . . 23
2.7 Morphological Texture Analysis . . . . . . . . . . . . . . . . . . . . . . . 25
2.7.1 Granulometry and Size Distribution . . . . . . . . . . . . . . . . 26
2.7.2 Limitations of mathematical morphology . . . . . . . . . . . . . 27
2.8 Segmentation of particulate textures . . . . . . . . . . . . . . . . . . . . 28
2.8.1 Region-based segmentation . . . . . . . . . . . . . . . . . . . . . 28
2.8.2 Boundary-based segmentation . . . . . . . . . . . . . . . . . . . . 31
2.9 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3 Particulate Shapes 35
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2 A Brief review of the Geometrical Shape Features . . . . . . . . . . . . . 37
3.2.1 Circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.2 Elongation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.2.3 Convexity and Concavity . . . . . . . . . . . . . . . . . . . . . . 38
3.2.4 Roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Analysis techniques of geometrical shape features . . . . . . . . . . . . . 38
3.3.1 Classical techniques . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.2 Imaging techniques . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 A Novel framework for image analysis of geometrical particulate shapes 40
3.4.1 The Euclidean Jordan Curve . . . . . . . . . . . . . . . . . . . . 40
3.4.2 The digitized Jordan curve . . . . . . . . . . . . . . . . . . . . . 41
3.4.3 Boundary Pixel trajectories and Internal Angles . . . . . . . . . 42
3.4.4 Smoothing of a Digital Jordan Curve . . . . . . . . . . . . . . . . 44
3.4.5 The Equivalent Circle . . . . . . . . . . . . . . . . . . . . . . . . 44
3.5 Extraction of Shape parameters . . . . . . . . . . . . . . . . . . . . . . . 44
3.5.1 Circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.5.2 Elongation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.5.3 Roundness from Edges, Corners and their curvature . . . . . . . 46
3.5.4 Convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.6 Impact of particle shape on analysis of particulate textures . . . . . . . 47
3.7 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4 Granulometry by Edge Detection 51
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Development of Edge Detectors . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 The EDPC Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.1 Edge Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.2 Edge Linking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3.3 Pixel Run Statistics . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.3.4 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.4 Sources of Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.4.1 Effects of Gaps and occlusions . . . . . . . . . . . . . . . . . . . 61
4.4.2 Boundary Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.5 Preliminary Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.5.1 Consistency of the EDPC estimator . . . . . . . . . . . . . . . . 63
4.6 Adaptive techniques for edge detection . . . . . . . . . . . . . . . . . . . 67
4.6.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . 68
4.6.2 Test Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.7 Extracting σ from Image spectra . . . . . . . . . . . . . . . . . . . . . . 71
4.8 Data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.9 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.10 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5 Application of Particulate Texture Analysis to Road Surfaces 77
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.1.1 Direct friction measurements . . . . . . . . . . . . . . . . . . . . 77
5.1.2 Indirect methods - Surface texture depth measurement . . . . . . 78
5.2 State of the art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2.1 The Sand Patch Test . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2.2 Laser Profile Technique . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.3 The Circular Track Meter . . . . . . . . . . . . . . . . . . . . . . 82
5.3 Units of Texture Depth Measurement . . . . . . . . . . . . . . . . . . . 82
5.3.1 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.4 Image-based methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.4.1 Stereoscopic techniques . . . . . . . . . . . . . . . . . . . . . . . 84
5.4.2 2D still Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.5 The Proposed technique . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
5.6 Data Acquisition and Experiment Set-up . . . . . . . . . . . . . . . . . 94
5.6.1 Camera Set-up and SMTD data . . . . . . . . . . . . . . . . . . 94
5.6.2 Nudgee Road . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.6.3 Colburn Avenue . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.6.4 Experiment Setup . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.7 Performance Comparison of The Imaging techniques . . . . . . . . . . . 97
5.7.1 The selection of adaptive σ for road surfaces . . . . . . . . . . . 100
5.8 Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.9 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6 Conclusion and Future Work 111
6.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
A Standard texture analysis techniques 115
A.1 Autocorrelation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
A.2 Co-occurrence Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
A.3 Fourier Domain Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
A.4 Gabor Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
A.5 Wavelets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
A.6 Random Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
A.7 Structural techniques and Mathematical morphology . . . . . . . . . . . 119
A.7.1 The basic morphological operators . . . . . . . . . . . . . . . . . 119
A.7.2 Convexity of structural elements . . . . . . . . . . . . . . . . . . 123
A.7.3 Gray-scale morphological operations . . . . . . . . . . . . . . . . 123
B Mathematical descriptions of selected edge detectors 125
B.1 Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
B.2 Prewitt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
B.3 Sobel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
B.4 Roberts Cross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
B.5 The Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
B.6 The Canny detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Bibliography 130
List of Tables
2.1 Particle/Background interaction parameters for the binary images in
Fig.2.6. Average particle diameter is 32 pixels and M = N = 512 . . . . 14
2.2 HVS inspired features and the corresponding Particulate features . . . . 21
2.3 Shape regularity analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Circularity, elongation, roundness and convexity measures of the shapes
shown in Fig.3.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2 The Roundness Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1 Confusion matrix for autocorrelation method showing accuracy in percentage . . . . . . . . . . . . . . . . 63
4.2 Confusion matrix for EDPC method showing accuracy in percentage . . 64
4.3 Image statistics for three image dimensions N. . . . . . . . . . . . . . . . 65
4.4 Mean and Variance of particle size as a function of number of averaged
independent samples (Image dimension is fixed at N=512) . . . . . . . 66
4.5 Performance results of non-adaptive and adaptive σ, showing rank correlation κ and discrimination D, averaged over 10,000 experiments . . . . 76
4.6 The EDPC Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.1 Values of κ (percentage of edge pixels) as a function of (σ,Θ) for the
profiles in Fig.5.13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2 Coefficient of determination (R2) values, for the four imaging methods . 103
5.3 Image acquisition details showing date, starting and ending time of acquisition. Also shown are correlation results with SMTD (R2) and the average RMSE values for all the sites when the EDPC technique is applied with the parameters: σ = 0.75, Θ = 0.05, α = 0.11 and β = 0.36. . . . . 107
5.4 Parameters for Lane-1 Nudgee road (φ = 60o). Optimal parameters are
shown in bold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.5 Parameters for Lane-1 Nudgee road (φ = 90o). Optimal parameters are
shown in bold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.6 Parameters for Lane-2 Nudgee road (φ = 60o). Optimal parameters are
shown in bold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.7 Parameters for Lane-2 Nudgee road (φ = 90o). Optimal parameters are
shown in bold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.8 Parameters for Colburn avenue (φ = 60o). Optimal parameters are
shown in bold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.9 Parameters for Colburn avenue (φ = 90o). Optimal parameters are
shown in bold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
A.1 Co-occurrence matrices for the image in Fig.A.1 for different displacements . . . . 116
A.2 Common texture features from co-occurrence matrices . . . . . . . . . . 117
List of Figures
1.1 Examples of particulate and non-particulate textures. (a) and (c) are
particulate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 An image of (a) almonds , and its boundary showing (b)microtexture
details and (c) macrotexture . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Relationship between coarseness and particle size at various packing densities P2 . . . . . . . . . . . . . . . . 14
2.3 Examples of particulate texture in the Brodatz album [1], row1:D03 D05
D10 D22, row2:D23 D27 D28 D30, row3:D48 D62 D66 D67, row4:D74
D75 D98 D99 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Perceptual texture dimensions proposed by Laws[2] . . . . . . . . . . . . 18
2.5 Textures and their normalised log-magnitude (base10) Fourier spectra.
The axes are spatial frequency units in cycles/pixel). . . . . . . . . . . 19
2.6 Three binary images (particle = 1, background = 0) showing aggregates
of same particle size but different spacing . . . . . . . . . . . . . . . . . 22
2.7 Autocorrelation sequence for the images in Fig.2.6. The lag at 1/e determines the coarseness as per Kaizer [3]. . . . . . . . . . 23
2.8 Two patterns tested for global regularity and particulate regularity . . . 24
2.9 Circularity distribution of the images in Fig.2.8 . . . . . . . . . . . . . . 25
2.10 Synthetic binary images showing random particles of mean radius 24
pixels and 56 pixels with the 56 pixel having (a) 10% (b)50% and (c)
90% of the mixture by area . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.11 Size distribution for the binary images in Fig.2.10 (a) Density Function
(b) Cumulative Distribution Function . . . . . . . . . . . . . . . . . . . 27
2.12 Some particulate textures from the Brodatz album [1] suitable for region
based segmentation of objects from background. left to right and top to
bottom: D3,D22,D48,D62,D66,D67,D75 and D98 . . . . . . . . . . . . . 29
2.13 Gray level histograms for the images shown in Fig.2.12 in corresponding
order. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.14 Segmentation of the D75 image (see Fig.2.12) from the Brodatz album
using simple threshold operations (left) and watershed algorithm (right).
The watershed algorithm is prone to over-segmentation . . . . . . . . . 31
2.15 A particulate image with particles of varying gray levels and its edge
profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1 Krumbein’s visual estimation chart [4] for roundness and sphericity . . 36
3.2 Digitisation of a simple closed curve (Jordan curve) . . . . . . . . . . . . 42
3.3 Labeling convention used in the development of shape characterisation
algorithm. (a) An m×n image indexing and (b) Angular directions and
vector translations w.r.t to the pixel (i, j) of its 8-neighbourhood. . . . . 43
3.4 Set of shapes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.5 Plots of internal angles of the shapes shown in Fig.3.4 against the internal
angles of the equivalent circle . . . . . . . . . . . . . . . . . . . . . . . . 48
3.6 A star shape showing (a) its inscribed circle and (b) its inscribing circle 49
4.1 Taxonomy of edge detectors . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 (a) A particulate image and its edge profiles using adaptive thresholding and the operators (b) Prewitt (c) Sobel (d) Roberts (e) Laplacian of Gaussian and (f) Canny. The Canny detector has an additional parameter σ = 1 . . . . . . . . . . 54
4.3 Sediment data used in the experiments. For these images, mechanically
sieved sand was used and the average particle size is thus known: row 1:
180µm, 210µm, 250µm; row 2: 300µm, 355µm, 425µm; row 3: 600µm,
710µm, 850µm; row 4: 1000µm, 1190µm, 1400µm. . . . . . . . . . . . . 55
4.4 (a)Pavement sample (b) Edge profile of the sample (c) The dilation of the
edge profile (d) Skeletonization (size:128× 128 pixels). The skeletonized
image is used for determining average grain size using pixel run statistics
from lineal intercepts [5]. . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.5 A sediment image with average grain size of 1mm (left), and its edge map
(right). The dotted lines are used in Fig.4.6 to demonstrate run-lengths 59
4.6 Pixel run-lengths from the scan lines shown in Fig.4.5 (left), and the
resulting run-lengths distribution for the entire image (right). . . . . . . 59
4.7 Sand particles of sizes: (a) 180 microns, (b) 600 microns (c) 1190 microns
and (d) illustration of Particle widths (P) and Gaps (G) in a 1400 microns
sand image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.8 Errors in measuring the average diameter of aggregates in an image when
the dimensions of the image N vary in relation to the average aggregate size c. 62
4.9 Estimation of particle sizes using EDPC for the twelve size categories . 65
4.10 Pixel run-lengths for the images in Fig.2.10 . . . . . . . . . . . . . . . . 66
4.11 (a) A sediment image showing mixtures (250µm and 850µm ) at 127
pixels/mm and (b) its autocorrelation sequence and (c) The pixel run-
length distribution from its edge map . . . . . . . . . . . . . . . . . . . 67
4.12 Log-magnitude Fourier spectra of some sieved sediments from Fig.4.3 —
row 1: 180µm, 600µm and row2: 1000µm , 1400µm. The axes are spatial
frequency units in cycles/pixel . . . . . . . . . . . . . . . . . . . . . . . 72
4.13 Edge maps for 180µm sediment image (above) and 1400µm (below) used
for grain size distribution computation . . . . . . . . . . . . . . . . . . . 73
4.14 Plots showing the performance of each grain size estimator's ranking and discrimination capacity as σ varies. Note that on the y-axis the lowest φ corresponds to the largest size and vice versa (see Eqn.(4.6.9)) . . . . 74
4.15 Boundary detection using the Canny edge detector for D48, D66 and
D75 (see Fig.2.12) from the Brodatz Album. Top: σ = 1, Upper middle:
σ = 2 , Lower middle:σ = 5 Bottom: adaptive σ . . . . . . . . . . . . . . 75
5.1 Effects of texture wavelength on pavement/vehicle interactions (adapted from [6]) . . . . . . . . . . . . . . . 80
5.2 SPTD and SMTD as obtained from a calibration site . . . . . . . . . . . 83
5.3 (a) Fine (left) and Coarse (right) pavement samples, where the coarse
sample is considered to have more texture depth (size:512 × 512 pixels) 86
5.4 Normalised log-magnitude (log base 10) Fourier spectra of the images
in Fig.5.3 in corresponding order (size:512 × 512 pixels). The axes are
spatial frequency units in cycles/pixel . . . . . . . . . . . . . . . . . . . 86
5.5 Autocorrelation sequence of the fine and coarse textures in Fig.5.3 . . . 87
5.6 (a) Level 1 and (b) Level 2, wavelet transforms for the image in Fig.5.3b
using wavelets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.7 Edge profiles of the images in Fig.5.3 in corresponding order (size:512×
512 pixels) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.8 Images of a road surface captured at different projection angles (size:512×
512 pixels). The patch in the middle highlights the distortional impact
of viewing angle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.9 Edge profiles of the images in Fig.5.8, showing the edge density κ of
each, (size:512× 512 pixels). The distortion of the central patch in each
image indicates the relative distortion of average grain size for each φ . 90
5.10 Images of a road surface at various lux levels. Approximate lux levels, left to right and top to bottom: 100, 300, 2000, 7000, 20,000 and above 30,000 (size: 512 × 512 pixels) . . . . . . 91
5.11 Edge profiles of the images in Fig.5.10 (scaled down for visibility (size:256×
256 pixels)). The labels show the lux levels and the percent of edge pixels κ 92
5.12 Autocorrelation sequence for each of the images in Fig.5.10. . . . . . . . 92
5.13 Effect of σ and Θ on edge contour of a pavement sample (central 200×200
pixels of Fig.5.3a). Visual impact along σ is more noticeable than along
Θ but both effects are also described in terms of edge densities κ as
shown in Table 5.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.14 Dimensions used in road surface image acquisition . . . . . . . . . . . . 95
5.15 SMTD values for Outer Wheel Path (OWP) and Between Wheel Path
(BWP) for Lane-1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.16 Maps showing (a) Nudgee Road and (b) Colburn Avenue. Images were
captured roughly from the highlighted sections pointed to by the arrow 97
5.17 Plots showing (a) FFT method and Laser profilometer SMTD side by
side and (b) their coefficient of determination . . . . . . . . . . . . . . . 99
5.18 Plots showing (a) Autocorrelation method and Laser profilometer SMTD
side by side and (b) their coefficient of determination . . . . . . . . . . . 100
5.19 Plots showing (a) Wavelet transform method and Laser profilometer
SMTD side by side and (b) their coefficient of determination . . . . . . 101
5.20 Plots showing (a) EDPC method and Laser profilometer SMTD side by
side and (b) their coefficient of determination . . . . . . . . . . . . . . . 102
5.21 (a) Correlation between EDPC and SMTD for Lane-1 on Nudgee Beach
Rd (b) EDPC and SMTD in mm both scaled to match the Sand Patch
equivalent and (c) the standard error as a function of chainage, of using
EDPC as compared to SMTD. Θ = 0.05, σ = 0.75 and φ = 60o . . . . 104
5.22 (a) Correlation between EDPC and SMTD for Lane-1 on Colburn Ave
(b) EDPC and SMTD in mm both scaled to match the Sand Patch
equivalent and (c) the standard error as a function of chainage, of using
EDPC as compared to SMTD. Θ = 0.05, σ = 0.75 and φ = 60o . . . . 105
A.1 A 5× 5 image with three gray levels . . . . . . . . . . . . . . . . . . . . 116
A.2 Frequency domain representation of Gabor filter banks used in texture
browsing [7], having four scales and six orientations. The axes are spatial
frequency units in cycles/pixel) . . . . . . . . . . . . . . . . . . . . . . . 117
A.3 Wavelet decomposition of an image into four subsampled images . . . . 119
A.4 A binary image A and a structuring element B. . . . . . . . . . . . . . . 120
A.5 (a) Erosion and (b) Dilation of the binary image A by structural element
B (see Fig.A.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
A.6 (a) Opening and (b) Closing of the binary image A by structural element
B (see Fig.A.4) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
List of Abbreviations and
Notations
A Binary image or 2D Set
A(R) Area of a region R
B Structural element
BWP Between Wheel Path
C Closed curve in R2
C Morphological Closing
CTM Circular Texture Meter (or Circular Track Meter)
D Digital Curve
D Morphological Dilation
E Edge Profile of an Image
E Morphological Erosion
EDPC Edge Detection & Pixel Counts
ETD Estimated Texture Depth
g 2D-Gaussian function
G Gaussian filter in Frequency domain
HVS Human Visual System
I Binary or gray-scale Image
J Flat Gray-scale structuring element
JN Non-flat Gray-scale structuring element
K Co-occurrence Matrix
l Perimeter of a Jordan curve
L Perimeter of a Digital Jordan curve
L(x) Cardinality of a set x
M [i, j] Markov Random field Model
MPD Mean Profile Depth
O Roundness
O Morphological Opening
OWP Outer Wheel Path
P1 Porosity
P2 Packing
Q Canny Edge detection Operator
Qprewitt Prewitt Edge detection Operator
Qrobert Roberts Cross Edge detection Operator
Qsobel Sobel Edge detection Operator
R Placement rule
Ri Region internal to a Jordan curve
Ro Region external to a Jordan curve
Rn The n-dimensional real Euclidean space
S Morphological skeletonization
SMTD Sensor Measured Texture Depth
SPTD Sand Patch Texture Depth
U Area of a particle
Zi Region internal to a Digital Jordan curve
Zo Region external to a Digital Jordan curve
γ Gabor channel variance of scale
Γ perceptual based regularity
Π An array of lattice points
σ Gaussian width parameter
τ Trajectory angles vector of a closed curve
υ Convexity
Υ Particulate regularity
φ Gabor channel variance of directionality
ΨB(A) Granulometry of A generated by structuring element B
χ Elongation
Ω Circularity
Authorship
The work contained in this thesis has not been previously submitted for a degree or
diploma at any other higher educational institution. To the best of my knowledge and
belief, the thesis contains no material previously published or written by another person
except where due reference is made.
Signed:
Date:
Acknowledgments
This dissertation would not have been possible without the support of many people. Firstly, I wish to thank my principal supervisor, Prof. Vinod Chandran, whose expertise, understanding and patience added considerably to my graduate experience, and who was a great source of support and guidance throughout this work. Sincere gratitude is also due to my associate supervisor, Prof. Sridha Sridharan, for his continual support.
I would also like to convey thanks to Mr. Hari Krishnan from the Queensland Department of Transport and Main Roads and Dr. Edith Gallagher from Franklin and Marshall College, Pennsylvania. They not only provided data used in this research; discussions with them also made it possible to apply the concepts developed in this dissertation to specific problems.
Special thanks also to all my friends who stood by me in many ways throughout this journey with their best wishes and their constant reminders of the need to balance work and play.
I would also like to thank my parents, Abebech and Elunai Solomon, for the support and unwavering encouragement they have provided me throughout my entire life. I especially acknowledge my wife and best friend, Hannah, for her love, patience, encouragement and support.
Ebenezer!
RONALD T. ELUNAI
Queensland University of Technology
March 2011
Chapter 1
Introduction
1.1 Motivation and Overview
The study of visual textures spans at least half a century, with the earliest known exposition attributed to psychologists interested in the human perception of textures. In his seminal work, Gibson [8] listed eight properties suggested as being essential in characterising determinate visual surfaces. In that context, the term “determinate” was understood to denote visual surfaces with definite spatial qualities; an example of an indeterminate visual surface, according to Gibson, was a cloudless sky. Of the eight suggested qualities of determinate visual surfaces, texture was considered the most fundamental, owing to its visual and tactile nature and its relation to the other suggested visual qualities such as colour, contour and distance impression (depth).
Psychological studies of texture usually involved experiments in which human subjects were asked to describe visual textured targets. These descriptions were then correlated with the prevailing theory of texture perception. The sixties witnessed the nexus between texture as a psychological discipline and texture as a discipline of computational pattern recognition. Julesz [9, 10] studied the relationship between the arrangement of pixels in images and the ability of human subjects to pre-attentively discriminate them. He generated pairs of images with predetermined statistical and/or structural attributes, presented them to human subjects for discrimination, and concluded that the human visual system is unable to pre-attentively distinguish between textures that share identical second-order statistics. This conclusion was refuted and revised a few years later by several other researchers, including Julesz himself, who demonstrated cases in which human subjects discriminated textures identical even in third-order statistics [11, 12, 13, 14]. Nevertheless, the work by Julesz became a pioneering concept linking human vision of texture to statistical pattern recognition.
The next significant step in the study of textures was the complete shift into the realm of machine vision, using lessons from the human visual system and the tools of statistical and spectral analysis. Efforts to automate textural descriptors for machine vision commenced in the early seventies with several researchers [15, 16, 17], but the works by Tamura et al. [18] and later Laws [2] were perhaps amongst the few that translated the psychological perceptual features of texture into computational features for machine vision.
Around the same era of the transition from perceptual analysis of texture to machine
analysis of texture, a group of scientists were also independently developing an abstract
approach to texture analysis using the concept of mathematical morphology. Indeed
the first texture analyser was developed from the concepts of random set theory and
the hit-or-miss transformation based on mathematical morphology, by Klein and Serra
[19]. The morphological approach is the basis of granulometry, and since it originated
from mathematical concepts, it is inherently a machine-based system.
Thus, two approaches for computational analysis of texture were identified. One is
based on the perceptual model of the human visual system and therefore adopts a sta-
tistical view of texture analysis, whereas the other is based on the analysis of the basic
elements that constitute the textures themselves and therefore adopts a structural view
of texture analysis.
Several factors influence the development of texture analysis algorithms. One of these
factors is whether the algorithm is based on biological systems such as the Human Visual
System, or based on mathematical/mechanical principles of machine vision. Another
factor is the “type” of texture being analysed, resulting in algorithms that are applica-
tion dependent. This thesis defines and examines a class of textures that are particulate
in appearance and some of their applications. As an illustration, Fig.1.1 shows examples
of what constitute particulate textures ((a) and (c)), and non-particulate textures ((b)
and (d)) from the Brodatz album [1]. A more technical review and definition of partic-
ulate textures is provided in chapter 2.
1.2 Research questions
Following a literature review on texture analysis, this research attempts to revisit a
number of questions. Below are some of those questions:
(i) What are particulate textures?
(ii) How should textured images be analysed in order to determine size distribution
of particles statistically or structurally?
Figure 1.1: Examples of particulate and non-particulate textures. (a) and (c) are particulate.
(iii) Does what we look for in textures influence the method of analysis we adopt?
Do the different methods of analysis result in conflicting outcomes regarding a
desired output? Why?
(iv) How does a statistical method such as autocorrelation, compare to a structural
method based on edge detection? How do these methods work for applications in
grain size analysis?
(v) Can 3D texture depth of particulate textures be adequately represented by their
2D coarseness obtained from still images?
1.3 Main objectives
The general objectives of this thesis are:
(i) Identify the limitations of global state-of-the-art texture analysis techniques
in estimating local parameters associated with particulate textures.
(ii) Adapt existing texture analysis and image processing techniques to the analysis
of particulate textures.
(iii) Provide an understanding of particulate textures through a thorough study
of all their descriptors.
(iv) Characterise particle shape.
(v) Study texture descriptors from the viewpoint of textons and particle geometry.
Specific objectives of this research are to:
(i) Develop a grain size analysis technique based on edge detection and pixel counts.
(ii) Develop an adaptive granulometric technique suited for classification of particu-
late textured images based on particle size.
(iii) Provide a framework for the analysis of particulate textures.
(iv) Apply particulate texture analysis techniques to real-life problems including
sedimentology and road surface analysis.
1.4 Major research contributions
(i) In chapter 2 a definition of particulate textures and a formal development of a
framework for analysing them is presented. This work advances the frontier of
texture analysis to include the class of particulate textures which have a significant
array of applications. Important features of particulate textures, namely, particle
size, particle shape, compaction, micro-texture and macro-texture are defined.
Deficiencies in classical texture analysis techniques in capturing these features are
identified.
(ii) In chapter 3 a new image processing framework for shape analysis and specifically
particle roundness and convexity is developed based on the geometrical proper-
ties of the digital Jordan curve. This technique captures the classical definition
of particle roundness in algorithmic form.
(iii) In chapter 4 a new method for computing particle size using edge detection and
pixel-run statistics is developed for complex particulate textures (sediments). The
technique captures the edge profiles even in low contrast images by implementing
edge linking. Sediment images of known sieved particle sizes are used to demon-
strate the efficiency of the technique. Parameters of the developed techniques are
easily related to the imaging conditions, such as illumination and viewing angle,
and therefore can be tuned and calibrated to outperform existing texture analysis
techniques.
(iv) Using an extensive road image database, the developed edge detection and pixel count (EDPC) technique is applied
for macrotexture evaluation of road surfaces. In doing so, this work combines the
disciplines of road surface texture analysis and visual texture analysis
in a unified theoretical framework and reinforces the place of digital imaging in
pavement management systems.
1.5 List of Publications
The research has resulted in the following fully refereed publications.
1. Elunai R, Chandran V & Sridharan S (2005), Texture classification using Gabor
filters and higher order statistical features: a comparative study, Proceedings of
the Eighth International Symposium on Signal Processing and its Applications
(ISSPA), Volume 2, August 28-31, 2005, Sydney, Australia, pages 659-662.

2. Elunai R & Chandran V (2006), Particle size determination in granular images
using edge detection, with application to agricultural products, Proceedings of
the Thirteenth Annual Conference on Mechatronics and Machine Vision in
Practice, December 5-7, 2006, Toowoomba, Australia, pages 1-6.

3. Elunai R, Chandran V & Mabukwa P (2010), Digital image processing techniques
for pavement macro-texture analysis, Proceedings of the 24th ARRB Conference
- Building on 50 Years of Road and Transport Research, Melbourne, Australia.

4. Elunai R, Chandran V & Gallagher E (2010), Asphalt concrete surfaces macro-texture
determination from still images, IEEE Transactions on Intelligent
Transportation Systems, accepted for publication.
Chapter 2
Background
2.1 Texture analysis overview
Texture plays an important role in many machine vision tasks such as remote sensing,
surface inspection, medical image processing and document processing. In its span of
about six decades, the study of texture has expanded tremendously, supporting a num-
ber of applications and their accompanying analysis techniques. Attempts to arrive at
a universal definition of what constitutes texture have proven elusive. Consequently,
the definition of texture is formulated by different researchers in connection with a
specific application focus. Applications are in turn motivated by two major underlying
perspectives, namely, perceptual-based (the psychophysiology of the human visual
system, the HVS) or machine-vision based. Although human vision in general has been a
subject of interest from time immemorial, digital image processing has largely evolved
in the last 50 years. However, even after the advent of digital image processing, the
two approaches to texture, one based on the HVS and the other purely machine based,
maintained their parallel stride with occasional merger of concepts. For example, com-
puter analysis was applied to the understanding of the HVS and also for automated
vision independent of the HVS. A comprehensive review highlighting the similarities
and differences in the objectives and manners of vision processing in man and machine,
which are also applicable to texture processing can be found in [20].
A notable milestone in the paradigm shift from using machine vision as a tool for
understanding HVS interpretation of texture, to one that uses machine vision as an
abstract tool of texture analysis, was the pioneering work by Matheron [21] on random
sets and integral geometry, which is the basis for the morphological approach to image
and texture analysis.
Due to the above major threads in texture analysis, and the applications based on
them, several definitions of texture have been formulated by various researchers. A
number of definitions are compiled in [22, 23, 24]. The following are a few examples:
• “We may regard texture as what constitutes a macroscopic region. Its structure
is simply attributed to the repetitive patterns in which elements or primitives are
arranged according to a placement rule.” [18]
• “A region in an image has a constant texture if a set of local statistics or other local
properties of the picture function are constant, slowly varying, or approximately
periodic.” [25]
• “The image texture we consider is non figurative and cellular... An image texture
is described by the number and types of its (tonal) primitives and the spatial
organization or layout of its (tonal) primitives... A fundamental characteristic of
texture: it cannot be analyzed without a frame of reference of tonal primitive
being stated or implied. For any smooth gray-tone surface, there exists a scale
such that when the surface is examined, it has no texture. Then as resolution
increases, it takes on a fine texture and then a coarse texture.” [26]
• “Texture is defined for our purposes as an attribute of a field having no com-
ponents that appear enumerable. The phase relations between the components
are thus not apparent. Nor should the field contain an obvious gradient. The
intent of this definition is to direct attention of the observer to the global proper-
ties of the display i.e., its overall coarseness, bumpiness, or fineness. Physically,
non enumerable (aperiodic) patterns are generated by stochastic as opposed to
deterministic processes. Perceptually, however, the set of all patterns without ob-
vious enumerable components will include many deterministic (and even periodic)
textures.” [27]
• “Texture is the characteristic physical structure given to an object by the size,
shape, arrangement, and proportions of its parts. The goal of texture analysis in
image processing is to map the image of a textured object into a set of quantitative
measurements revealing its very nature.” [28]
It can be deduced from the above definitions of texture that there are two overrid-
ing natures of texture namely, statistical and structural and this categorization is made
explicit in [20]. Other authors, for example [23], suggested four major categories for
texture identification, namely, statistical, geometrical, model-based, and signal processing
(or filtering) techniques. However, upon close examination of the major techniques
outlined in [23], it is apparent that the categories still fit within the framework of statistical,
structural or a combination of both. This is the view adopted here. Therefore it follows
from this understanding that texture is a composite of a placement rule R applied to
primitives p [18], and consequently a texture T could be defined by:
T = R(p) (2.1.1)
Accordingly, if the placement rule R or the primitives p or both, are deterministic, a rel-
atively simple algorithm could be developed to extract the structure of the texture; such
textures lend themselves to structural analysis. If on the other hand both the placement
rule and the primitives are random, as is the case in many naturally occurring textures,
then the texture parameters could be extracted only through statistical analysis. Where
either R or p, but not both, are deterministic, a hybrid statistical/structural technique
may be used to analyse the texture.
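The dichotomy above can be made concrete with a small synthetic sketch. The following Python fragment (illustrative only; the disc primitive, image sizes and grid spacing are arbitrary choices, not taken from the thesis) builds one texture with a deterministic placement rule R and one with a random R, using the same primitive p:

```python
import numpy as np

def disc(radius):
    """A binary disc primitive p of the given radius."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (x**2 + y**2 <= radius**2).astype(np.uint8)

def place(canvas, p, centres):
    """Placement rule R: stamp primitive p at each (row, col) position."""
    h, w = p.shape
    for r, c in centres:
        canvas[r:r + h, c:c + w] |= p
    return canvas

rng = np.random.default_rng(0)
p = disc(8)

# Deterministic R: primitives on a regular grid -> structural texture.
grid = [(r, c) for r in range(0, 224, 32) for c in range(0, 224, 32)]
structural = place(np.zeros((256, 256), np.uint8), p, grid)

# Random R: primitives at random positions -> statistical texture.
rand = [(int(r), int(c)) for r, c in rng.integers(0, 239, size=(49, 2))]
statistical = place(np.zeros((256, 256), np.uint8), p, rand)
```

With a deterministic R the texture can be recovered structurally (e.g. by locating the grid), whereas the random arrangement can only be summarised statistically.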
2.2 Particulate Textures (General)
The various definitions proposed for texture in the previous section are a testament to
the complexity of texture analysis. One possible measure to address this complexity
arising from the myriad of descriptions of what constitutes texture, is to identify a sub-
class of textures with similar properties and analysis methods and develop a framework
around them. This in part, is the motivation behind the study of the class of particulate
textures.
2.2.1 Definition
The definition of what constitutes particulate textures is best explained in the
context of an overall definition of texture. Haralick [26], wrote:
When a small-area patch of an image has little variation of tonal primitives, the dom-
inant property of that area is tone. When a small-area patch of an image has wide
variation of tonal primitives, the dominant property of that area is texture.
He then goes on to suggest the methods for analysis of texture:
...to characterise texture we must characterise the tonal primitive properties as well as
the spatial interrelationships between them.
The above definition of texture is in agreement with the empirical formulation in [18]
described by equation (2.1.1) where the placement ruleR is synonymous with the spatial
relationship, and p represents the tonal primitives. What distinguishes particulate
textures from other textures is the nature of the tonal primitives p. There are certain
properties that p must satisfy to become an element of a particulate texture. Thus p
has size and shape, where the shape boundaries form a closed contour or are blob-like.
The primitive p also displays properties of roughness or smoothness. In other words,
the primitive p itself displays properties of texture. This recursive property can be
expressed mathematically by extending equation (2.1.1) as follows:
T = R(p)
p = Ro(po) (2.2.1)
where T is the particulate texture, R the placement rule of primitives p, where p also
displays texture with smaller primitives po arranged with placement rule Ro. An ex-
ample of such arrangement is shown in Fig.2.1, where the almonds collectively display
texture, but individually each almond displays its own texture. These are respectively
defined as macro-texture and micro-texture discussed in section 2.3.1.
Figure 2.1: An image of (a) almonds, and its boundary showing (b) micro-texture details and (c) macro-texture.
Particulate textures can therefore be defined as: textures with tonal primitives,
where the tonal primitives individually display the properties of size, shape,
closed boundaries, and roughness and collectively have a spatial arrange-
ment based on statistical or deterministic rules.
Examples of particulate textures are shown in Fig.1.1(a and c) and Fig.2.3. Examples
of textures that do not qualify under this definition include Fig.1.1(b and d) and D15
in Fig.2.5 since the blob-like property of closed boundaries is missing.
2.2.2 Applications of particulate texture analysis
Another motivation for studying particulate textures, is the range of potential applica-
tions that could be addressed. In general, applications include extraction of quantities
such as particle size, shape, particle surface texture, aggregate compaction, or texture
depth from textural images. The following are a few specific examples.
The sedimentologist examining the sedimentary particles is interested in the size, shape
and texture of the particles as each of these provides clues regarding the nature or his-
tory of the sediment [29]. For example, size is related to both the medium and the
velocity of transportation of the sediment sample. Surface texture is related
to methods of transportation or changes due to solution. The shape is related to the
rigours of the transportation and possibly the distance traveled [30]. Collectively these
features furnish the sedimentologist with information regarding the environment. Par-
ticle shape analysis is also applied in the study of aggregates and their impact on hot
mix asphalt pavements [31]. Photographic images of textured particles make it possible
to analyse these features through image processing.
Applications of image processing techniques to measure compaction include the
quantification of volumetric shrinkage strains of compacted particulates [32] and the
study of soil compaction, which has long been associated with crop yields and water
transport [33].
The concept of texture scale is applicable to the study of pavements, where micro-texture,
macro-texture and megatexture are defined in the context of depth, to determine
the level of skid resistance of a road surface. This is discussed in detail in chapter 5.
2.3 Features of Particulate Textures
In studying particulate textures the main focus is on the properties of the primitives p
described by Eqn.(2.2.1) even though their spatial arrangement R is also studied. These
primitives are more like aggregates with size, shape and even texture. Another term
used to describe the tonal primitives is texels [34], except that the intended meaning
in [34] did not include closed boundaries or the blob-like property for the texels, in which
case they were not necessarily particulate but a general description of texture elements.
In any case, the texture of these individual aggregates then becomes a micro-texture
(or subtexture) from the vantage point of the overall image. The spatial arrangement
of the aggregates constitutes macro-texture which is simply a description of how the
aggregates are arranged in a compact or sparse manner.
The 3D analysis of particulate textures adds the feature of depth. Therefore in sum-
ming up the features it can be said that the goal of particulate texture analysis is to
map the textured images into a set of quantitative measurements revealing their very
nature in terms of shape, size, micro-texture, macro-texture, texture depth and
compaction. These features are the topic of this thesis and each will be considered in turn.
Particulate shape will be discussed in chapter 3, followed by particulate size in chapter 4.
For convenience, macro-texture, micro-texture and compaction are discussed in this
section, for their importance in clarifying concepts regarding the fundamental
difference between standard textures and particulate textures.
2.3.1 Micro-texture and Macro-texture
The fine details of an aggregate's surface, independent of its shape, size or composition,
are termed the surface texture of the aggregate [29]. The surface texture thus defined is
referred to as micro-texture. If on the other hand the textured aggregates are arrayed
in a mixture so that the collections of aggregates form a texture of larger scale, the
resulting texture is termed macro-texture. Fig.2.1 illustrates this point, where a single
almond displays details (micro-texture), and a collection of almonds display macro-
texture. In general, whether a texture is considered micro-texture or macro-texture is
dependent on the application and the resolution by which a textured image is acquired.
Micro-texture analysis requires higher resolution images.
There are several applications where texture is analysed at either of the scales. An ap-
plication where digital image processing is used for microtexture analysis is detailed in
[35] where high resolution images of samples with aggregates intended for road surfaces
were used to determine microtexture using a method that involves variations in pixel
intensity with thresholding. One example for the analysis of macro-texture is the appli-
cation for extracting shape from texture using texel boundaries [34], where the texels
themselves had internal texture (termed sub-texture therein), but only macro-texture
(supertexture) was used. The importance of both micro-texture and macro-texture in
road surfaces is discussed in chapter 5.
2.3.2 Compaction
In the context of digital image processing techniques, compaction is the quantification
of the spatial interaction between particles with each other and with their background.
A similar concept has been defined in the context of mass properties of aggregates in
sediments by Krumbein in [4] who suggested three measures, Porosity, Permeability
and Packing defined as follows:
Porosity is defined as the total percentage of void space in-between aggregates, whereas
effective porosity is defined as the total percentage of connected void space. In the
context of 2D image processing, for an M ×N binary image I where the particles are
binary 1 and the background is binary 0 (e.g. Fig.2.6), porosity P1 is defined as:

P1 = 100 (MN − L(I = 1)) / MN    (2.3.1)
where L(x) denotes the number of elements (pixels) for which the condition x holds.
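As a minimal illustration, Eqn.(2.3.1) can be computed directly from a binary array. The following Python/NumPy sketch is illustrative only; the 4 × 4 test image is an invented example:

```python
import numpy as np

def porosity(binary_img):
    """P1 of Eqn (2.3.1): percentage of background (zero) pixels."""
    m, n = binary_img.shape
    return 100.0 * (m * n - np.count_nonzero(binary_img)) / (m * n)

# A 4 x 4 image with a 2 x 2 particle: 12 of 16 pixels are background.
img = np.zeros((4, 4), np.uint8)
img[1:3, 1:3] = 1
print(porosity(img))  # 75.0
```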
Permeability is defined as a measure of the ease of fluid flow through the aggregates.
This property is quantified by physical means whereby fluids are injected and the
resulting pressure required for fluid flow is measured. Image processing techniques
cannot directly quantify permeability except in cases when it is known a priori that
permeability and porosity are monotonically related. In this case Eqn.2.3.1 is used as
a measure of permeability as well.
Packing or the degree of packing has two operational definitions in the context of
studying rocks. The first is packing proximity, defined as the total percentage of grain
to grain contacts along a traverse measured on a thin section of rocks. The second,
known as packing density, is defined as the cumulated grain intercept length along a
traverse in a thin section. We shall designate the packing density as P2 measured in
percentage. Both porosity and packing density lend themselves to image processing
techniques and shall be used here to synthesize a working definition for compaction in
particulate textures, but first we seek a mathematical description of packing density
from the above definition as follows: The cumulative grain intercept length along a row
i (or column j) of an M × N (rows × columns) binary image I is given by

P2(i) = 100 L(I(i, :) = 1) / N
P2(j) = 100 L(I(:, j) = 1) / M    (2.3.2)
It is apparent from Eqns.2.3.1 and 2.3.2 that porosity is a property of the entire image
or matrix, whereas packing is a property of a line or vector. Another important
observation is that porosity is a measure of the total area of pores, which are represented
by the background pixels, whereas packing is a measure of particle density. Therefore they
are conceptually complements of each other. It can be shown, for a given binary
image I of size M × N where particles are represented by binary 1 and backgrounds
by binary 0, that:
P1 = 100 − (1/M) Σ_{i=1}^{M} P2(i) = 100 − (1/N) Σ_{j=1}^{N} P2(j)    (2.3.3)
Since porosity is a property of the background pixels, we define compaction κ by simple
complementation of Eqn.2.3.1, thus:

κ = 100 − P1    (2.3.4)
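The quantities above, and the complementary relationship between them, can be checked numerically. The sketch below is illustrative only; it assumes P2 is taken per row in percent and reads Eqn.(2.3.3) as porosity equalling 100 minus the mean row packing density, and it repeats the porosity helper so the fragment is self-contained:

```python
import numpy as np

def porosity(I):
    """P1, Eqn (2.3.1): percentage of background pixels."""
    m, n = I.shape
    return 100.0 * (m * n - np.count_nonzero(I)) / (m * n)

def packing_rows(I):
    """P2(i), Eqn (2.3.2): percent of particle pixels along each row."""
    return 100.0 * np.count_nonzero(I, axis=1) / I.shape[1]

# A random binary "particulate" image (synthetic, for checking only).
I = (np.random.default_rng(1).random((64, 64)) < 0.3).astype(np.uint8)

p1 = porosity(I)
p2 = packing_rows(I)
kappa = 100.0 - p1                 # compaction, Eqn (2.3.4)

# Eqn (2.3.3): porosity complements the mean packing density.
assert abs(p1 - (100.0 - p2.mean())) < 1e-9
assert abs(kappa - p2.mean()) < 1e-9
```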
Table 2.1 shows the relationship of packing density, porosity and compaction in refer-
ence to the images in Fig.2.6. Note that for packing density, the maximum packing is
used. Maximum packing is 100% for touching particles.
Although the packing density P2 is not used in the analysis of particulate images, it is
useful in the synthesis of particulate binary images for the purpose of study, as those
shown in Fig.2.6. The images are synthesized by setting a constraint on the spacing
between the particles. This constraint is defined by the number of background pixels
permitted between the particles at the intercept point that passes through the diameters
of the particles. With the packing density defined in such manner, the relationship
between coarseness and particle size can be studied. Consequently the equivalence
Image   Maximum packing density (P2 %)   Porosity (P1 %)   Compaction (κ %)
2.6a    100                              43                57
2.6b    75                               68                32
2.6c    57                               79                21

Table 2.1: Particle/background interaction parameters for the binary images in Fig.2.6. Average particle diameter is 32 pixels and M = N = 512.
between coarseness and particle size is demonstrated by a monotonic relationship in
Fig.2.2. Note that with greater packing densities the relationship between particle
size and coarseness becomes more linear. This indicates that the ease with which imaging
techniques measure grain size improves with packing density. A further study
of the relation between coarseness and particle size is presented in section 2.6.4.
Figure 2.2: Relationship between coarseness and particle size at various packing densities P2 (30%, 50%, 75% and 100%). Axes: particle size (pixels) versus coarseness.
2.4 Levels of complexity of particulate textures
In general, particulate textures which lend themselves to simple analysis are those with
a reasonable level of contrast between particle and background. As the contrast drops,
the level of complexity rises. However, there are also other factors that determine how
simple a particulate texture is. The ease with which features of particulate objects in a
texture can be quantified is another reasonable indicator of the level of complexity
of the texture. Particle shape is perhaps the fundamental benchmark that can
reveal how simple a particulate texture is. If particle shapes are uniform and could
be isolated by the simplest segmentation techniques, e.g. binary thresholding, then the
texture is considered simple enough. Some examples of simple particulate textures
include D48, D66, D67 and D75 from the Brodatz album in Fig.2.3. It is also apparent
that if shape could be extracted easily, then the level of packing (or compaction) is also
possible to determine. Compaction was discussed in section 2.3.2 and determination of
particle shape is discussed in section 3.1.
Although D62 has contrast similar to that of D48, D66, D67 and D75, it is relatively
more complex in that it has a diverse set of particle shapes. The complexity increases
further with textures such as D05 and D28.
The level of complexity of the particulate texture determines the approach or method
used for analysis. Segmentation is one of the earliest stages of the analysis process of
particulate texture. Region-based and boundary-based approaches are both useful as
the case might be. Methods of segmentation are discussed in section 2.8.
2.5 2D and 3D particulates
Particles and aggregates in nature are three-dimensional objects and therefore appear
as objects in a 3-D scene. One specific application where 3-D features are sought from
monocular images, is the extraction of surface texture depth from a 2D image of a road
surface. A monocular image of a particulate surface is a 2D projection of the 3-D scene,
and therefore, to recover the 3-D scene from 2-D projections certain cues in the images
are used. These include shading, boundary configurations and type of junctions that
permit one to infer 3-D shape from line drawings of objects.
Stevens [36] outlined six means available to the HVS for determining the shape of a
surface. These are stereopsis, motion, shading, texture gradients, boundary contours
and surface contours. A number of researchers have used at least one of the approaches
for extraction of shape from texture. For the several techniques that do not employ
stereoscopic vision systems, other techniques involving correction for projectional dis-
tortion are applied. Witkin [37] developed a method to recover surface orientation
from still monocular images affected by projection distortion. Bajcsy and Lieberman
[38] used texture gradients on natural outdoor scenes, by classifying local texture ac-
cording to characteristics of the Fourier transform of local image windows. Texture
gradients were found by comparing quantitative and qualitative values of adjacent win-
dows. The gradients are then interpreted as a depth cue for longitudinal (receding)
surfaces. Ulupinar [39] used parallel and skew symmetries to infer 3D characteristics
from monocular images. Blostein and Ahuja [34] applied an integrated approach of
texel extraction and surface orientation estimation for the task of shape inference from
natural texture. By their functional definition, these texels were the tonal primitives
(or aggregates) of the textures analysed, and therefore, in effect texel extraction was
fundamentally aggregate characterization.
The method they applied for texel extraction is edge detection. The basic difficulty in
applying edge detection techniques for texel extraction and inference of 3D
characteristics, as outlined in [34], is the distinction between boundaries of texture and
subtexture. A method that applies adaptive edge detection is presented in chapter 4.
For convenience, the term micro-texture shall be used here to imply what in [34] was
referred to as subtexture.
Figure 2.3: Examples of particulate texture in the Brodatz album [1]. Row 1: D03, D05, D10, D22. Row 2: D23, D27, D28, D30. Row 3: D48, D62, D66, D67. Row 4: D74, D75, D98, D99.
2.6 Application of Standard Texture Analysis Techniques
to Particulate Textures
This section briefly reviews standard texture analysis tasks and techniques and
subsequently discusses their application to features of particulate textures, especially
size and shape features.
2.6.1 Standard Tasks in Texture Analysis
Unlike the ambiguity inherent in the definition of texture (section 2.1), there appears
to be a consensus amongst texture researchers regarding the major tasks involved in
texture analysis and the methods used to achieve these tasks. The most common of the
texture analysis problems are texture classification [16, 23], texture segmentation [25],
feature extraction [2, 18, 40, 41] and the extraction of shape from texture [20, 23, 24, 28,
37, 39, 40, 42]. Texture synthesis [23, 24, 43, 44], and quantitative measurements (which
is the main task in this thesis) [19, 28, 40] are less common. Quantitative measurements
refer to more definitive features than those obtained using feature extraction techniques.
Examples of features ordinarily used to describe texture, developed by Laws [2], are
shown in Fig.2.4. Examples of quantitative measurements in texture analysis include
obtaining the average grain size in a granular image. Grain size is a measure that
is in many cases synonymous with granularity or coarseness, but the concept of size is
more specific than the feature of coarseness. Other HVS concepts such as regularity
and density could also have a more definite meaning in the context of particulate
textures. Section 2.6.3 explores grain size, coarseness and packing in further detail.
The quantification of texture depth, and its application to road surface is presented in
chapter 5. Quantitative measurements could also be used to categorise texture, based
on the average grain size, or shape measure, and in this sense they can be considered
as a texture classifier.
2.6.2 Standard Methods of Texture Analysis
Statistical methods for texture analysis could be applied to structured textures but the
reverse (namely, applying structural methods of texture analysis to statistical textures)
is usually a difficult problem. Consequently, of the two approaches to texture analysis,
statistical analysis is predominant. In what follows, standard techniques that are
statistical, structural or hybrid are briefly described. Detailed descriptions of these
techniques can be found in appendix A.
Statistical Techniques
An important feature of many images is the repetitive pattern of their texture elements.
This feature is captured by the autocorrelation function, which exhibits periodic
behaviour with a period equal to the spacing between adjacent texture primitives. The
function was first applied in [3] for quantifying texture coarseness. It was also used
for grain size analysis of sediments [45], where the rate of decay of the autocorrelation
function was taken as a representative of average grain size. The application of the au-
tocorrelation function to particulate textures and its limitation, are discussed in detail
in section 2.6.4.
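Kaizer's 1/e criterion mentioned above can be sketched in a few lines of NumPy. This is an illustration only, not the implementation used in this thesis: the autocorrelation is obtained from the power spectrum via the Wiener-Khinchin relation, and the synthetic fine/coarse test images are invented for the demonstration.

```python
import numpy as np

def autocorr_1e_lag(img):
    """Coarseness as the first lag at which the normalised horizontal
    autocorrelation drops below 1/e (after Kaizer [3])."""
    img = np.asarray(img, dtype=float)
    img = img - img.mean()
    # Wiener-Khinchin: autocorrelation from the power spectrum.
    spec = np.abs(np.fft.fft2(img)) ** 2
    ac = np.fft.ifft2(spec).real
    ac = ac / ac[0, 0]                      # normalise so ac[0, 0] == 1
    profile = ac[0, : img.shape[1] // 2]    # horizontal lags only
    below = np.where(profile < 1.0 / np.e)[0]
    return int(below[0]) if below.size else len(profile)

rng = np.random.default_rng(0)
fine = rng.random((128, 128)) > 0.5                              # pixel-scale noise
coarse = np.kron(rng.random((16, 16)) > 0.5, np.ones((8, 8)))    # 8x8 blobs
lag_fine = autocorr_1e_lag(fine)
lag_coarse = autocorr_1e_lag(coarse)
print(lag_fine, lag_coarse)
```

On these synthetic inputs the coarse, blocky texture decays more slowly and therefore yields a noticeably larger 1/e lag than the pixel-scale noise.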
18 Chapter 2. Background
Figure 2.4: Perceptual texture dimensions proposed by Laws [2]
2.6. Application of Standard Texture Analysis Techniques to Particulate Textures 19
Co-occurrence matrices are also used for texture analysis [24, 26, 42]. Features
obtained from co-occurrence matrices could be used for classification of texture and
therefore suited for detecting patterns in texture. Two different texture units having
the same shapes but different in scale, cannot be captured by the same co-occurrence
matrix unless the co-occurrence matrix is scaled as well. This makes co-occurrence
matrices not suited for quantification of grain size.
Frequency domain filtering: One of the fundamental uses of image analysis in the
frequency domain (or Fourier domain) is for filtering [20, 46]. However, the mere rep-
resentation of images in the Fourier domain reveals characteristics that may not be
immediately obvious in the spatial domain. This property of the Fourier domain to
represent textures (and images in general) is a powerful tool and is the basis of all
filtering methods for texture analysis. Fourier domain analysis could also be used for
determining grain size in compactly arranged particles. In this case the spread of the
Fourier spectrum is inversely or inverse-log (base 10) proportional to the grain size.
Normalised magnitude spectra of sample textures are shown in Fig.2.5. The axes in
Fig.2.5 are normalised spatial frequency units in cycles per image.
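The inverse relation between spectral spread and grain size can be illustrated with a short NumPy sketch. The spread measure used here, the radial standard deviation of the normalised magnitude spectrum, is one plausible choice assumed for the example, not necessarily the measure used elsewhere in this thesis; the test textures are invented.

```python
import numpy as np

def spectral_spread(img):
    """Radial spread (std of spatial frequency, cycles/pixel) of the
    centred magnitude spectrum; a larger spread implies finer grains."""
    img = np.asarray(img, dtype=float)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    r = np.hypot(fy, fx)            # radial frequency of each spectral bin
    p = spec / spec.sum()           # normalise to a distribution
    return float(np.sqrt((p * r ** 2).sum()))

rng = np.random.default_rng(1)
fine = np.kron(rng.random((64, 64)) > 0.5, np.ones((2, 2)))      # 2-px grains
coarse = np.kron(rng.random((16, 16)) > 0.5, np.ones((8, 8)))    # 8-px grains
print(spectral_spread(fine), spectral_spread(coarse))
```

The finer texture spreads its energy to higher spatial frequencies, so its radial spread is larger, consistent with the inverse relation described above.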
[Figure panels: textures D15 and D20 alongside their magnitude spectra; spectral axes span −0.5 to 0.5.]
Figure 2.5: Textures and their normalised log-magnitude (base 10) Fourier spectra. The axes are in spatial frequency units (cycles/pixel).
The family of two-dimensional Gabor functions as they are used in computer vision
was proposed by Daugman [47] and subsequently employed successfully in texture analysis
tasks to characterize features including coarseness, directionality, regularity and
homogeneity [7, 41]. The applicability of Gabor filters for grain size measurements is
similar to that of Fourier spectrum features, where the spread of the spectrum is re-
lated to the grain size. Detailed descriptions of Gabor filters are found in appendix A.4.
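As an illustrative sketch (not the filter bank of appendix A.4), a single real Gabor channel can be built directly in NumPy; the kernel parameters and the test grating below are invented for the demonstration.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma, size=31):
    """Real (even) Gabor kernel: a Gaussian-windowed cosine grating with
    spatial frequency `freq` (cycles/pixel) at orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinate
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * freq * xr)

def gabor_energy(img, kernel):
    """Mean squared filter response, computed by circular FFT convolution."""
    H = np.fft.fft2(kernel, s=img.shape)
    resp = np.fft.ifft2(np.fft.fft2(img) * H).real
    return float((resp ** 2).mean())

# A grating varying along x responds strongly to the matching channel only.
x = np.arange(64)
img = np.cos(2 * np.pi * 0.125 * x)[None, :] * np.ones((64, 1))
e0 = gabor_energy(img, gabor_kernel(0.125, 0.0, sigma=4))
e90 = gabor_energy(img, gabor_kernel(0.125, np.pi / 2, sigma=4))
print(e0, e90)
```

The orientation- and frequency-selective channel energies are exactly the quantities that the HVS-inspired measures of directionality and coarseness are built from.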
Wavelet transforms are based on limited duration signals known as wavelets. These
time limited signals also have varying frequency, thus enabling the capture of both spa-
tial and frequency information simultaneously, subject to constraints governed by the
uncertainty principle [47]. Wavelets were found to be an important element of signal
processing when the multiresolution analysis (MRA) theory was formulated by Mallat
[48]. Wavelet energy features have been used for texture classification in [49]. These
features to a large extent capture the coarseness of the textures, and therefore
wavelets are useful for classification of textures based on aggregate size
in the images. The application of wavelets to road surface macrotexture is discussed
in chapter 5. Appendix A.5 details the algorithmic development of wavelet transforms
for image and texture processing.
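The wavelet-energy idea can be illustrated with a hand-rolled Haar decomposition. This is a minimal sketch, not the machinery of appendix A.5 or the method of [49]; the per-level detail energies act as a simple coarseness signature, and the synthetic textures are invented for the example.

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform: approximation plus three
    detail subbands (horizontal, vertical, diagonal)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4
    LH = (a - b + c - d) / 4      # horizontal detail
    HL = (a + b - c - d) / 4      # vertical detail
    HH = (a - b - c + d) / 4      # diagonal detail
    return LL, (LH, HL, HH)

def wavelet_energies(img, levels=3):
    """Total detail energy at each level; coarse textures concentrate
    their energy in the deeper (lower-frequency) levels."""
    img = np.asarray(img, dtype=float)
    feats = []
    for _ in range(levels):
        img, details = haar_level(img)
        feats.append(sum(float((d ** 2).mean()) for d in details))
    return feats

rng = np.random.default_rng(2)
fine = (rng.random((64, 64)) > 0.5).astype(float)                 # 1-px grains
coarse = np.kron((rng.random((16, 16)) > 0.5).astype(float),
                 np.ones((4, 4)))                                 # 4-px grains
ef = wavelet_energies(fine)
ec = wavelet_energies(coarse)
print(ef, ec)
```

The fine texture puts most of its detail energy in the first level, while the blocky texture is smooth at that scale and only produces detail energy deeper in the decomposition.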
Application of Structural Techniques to texture analysis is a difficult undertaking
unless the textures under analysis are also uniform and structured. For this reason,
structural techniques have limited applications for the analysis of naturally occurring
textures. Examples of structural techniques include the Voronoi tessellation [50] and
Structuring Elements as applied in mathematical morphology.
The structural element approach to texture analysis was proposed by Matheron [21].
The basic idea behind texture analysis using structuring elements is to choose a pre-
determined set of shapes such as lines, squares or circles and then generate a new binary
image by eroding the image using the elements. The response of the image to this
operation is then used as a feature for a specific application. Consequently, the
structuring element is chosen with the end application in mind. Structuring elements
are essential in several granulometric applications and are discussed separately in sec-
tion 2.7 in the framework of mathematical morphology. The Markov Random Field
(MRF) is a hybrid of statistical and structural techniques and has been used exten-
sively as a model for texture [43, 51, 52]. In the discretised Gauss-Markov random field
model, the gray level at any pixel of an image is modeled as a linear combination of
the gray levels of the neighbouring pixels and additive noise. The intention is to extract
the model parameters, using the least squares method or any other approach. The
technique is useful for classification of texture based on the relationships
between pixels in a given neighbourhood. Applications for particle analysis or grain
size are non-existent.
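The least-squares estimation of GMRF parameters mentioned above can be sketched for a first-order (four-neighbour) model; this is an illustrative toy, with an invented test texture, rather than any model used in the cited works.

```python
import numpy as np

def gmrf_params(img):
    """Least-squares estimate of first-order GMRF parameters, modelling
    each interior pixel as th_h*(left+right) + th_v*(up+down) + noise."""
    img = np.asarray(img, dtype=float)
    img = img - img.mean()
    s = img[1:-1, 1:-1].ravel()                    # interior pixels
    h = (img[1:-1, :-2] + img[1:-1, 2:]).ravel()   # horizontal neighbours
    v = (img[:-2, 1:-1] + img[2:, 1:-1]).ravel()   # vertical neighbours
    A = np.stack([h, v], axis=1)
    theta, *_ = np.linalg.lstsq(A, s, rcond=None)
    return theta                                    # [th_h, th_v]

# A texture correlated along rows yields a larger horizontal parameter.
rng = np.random.default_rng(3)
rowtex = np.cumsum(rng.standard_normal((128, 128)), axis=1)
th_h, th_v = gmrf_params(rowtex)
print(float(th_h), float(th_v))
```

The estimated parameter vector captures directional pixel dependencies, which is exactly the kind of neighbourhood relationship the MRF model exploits for texture classification.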
2.6.3 Applicability to Particulate textures
The texture methods and their resulting interpretation described in sections 2.6.1 and
2.6.2 have limited use when applied to particulate textures. This limitation is princi-
pally owing to the fact that the majority of the techniques are predominantly motivated
by the need to understand texture from the HVS viewpoint and in that context, this
meant that applications such as classification and segmentation were sufficient. In the
study of particulate textures, however, there are a new set of applications not neces-
sarily connected to the HVS. A sedimentologist is not merely interested in the level
of coarseness of the magnified image of sand particles, but is more interested in the
distribution of sand particles’ size in the image. The soil scientist is not just interested
in the contrast of the image of a porous sample of soil, but rather in the compaction,
porosity and permeability of the same. Highway engineers looking at the surface of a
motorway are interested in more than just surface roughness in a general sense; they
are interested in a quantitative measure of texture depth, which translates directly to
a measure of skid resistance. Clearly, such outcomes are not in the domain of the
HVS as they require more precise descriptions. In retrospect, however, the techniques
used for texture analysis could also be applied to quantify particulate images provided
some conditions are satisfied. Some of these parallels are discussed here with a few
examples.
Table 2.2 shows some of the HVS features and their corresponding particulate features.
In general, the features on the left column are more qualitative descriptions, whereas
those in the right column are more quantitative. In the next sub-sections some of the
conceptual definitions of textural features as perceived by HVS-inspired techniques,
are compared with textural features arising from the viewpoint of particulate texture
analysis.
HVS features | Particulate features
Coarseness/Roughness | Particle size / Texture depth
Directionality | Particle orientation
Density | Porosity/Compaction
Homogeneity/Regularity/Uniformity | Particle shape
Scale/Multiresolution | Micro-texture and Macro-texture
Table 2.2: HVS inspired features and the corresponding Particulate features
2.6.4 Coarseness vs. Particle size
Generally, a coarse texture implies a more granular texture and therefore in the con-
text of particulate texture analysis, texture coarseness and particle size appear to cor-
respond. However, there are some subtle differences between the two, as verified by
experimentation. The difference between the coarseness and grain size distribution can
be demonstrated using the synthesized binary images in Fig.2.6, having the same par-
ticle size but different spacing. Fig.2.6a shows particles touching each other. Figs.2.6b
and 2.6c show the aggregates with progressively wider spacing. The coarseness of the
binary images is measured using the 1/e point of the autocorrelation function as shown
in Fig.2.7 (Kaizer [3] was the first to measure coarseness this way). Clearly the images
show different coarseness levels as depicted in Fig.2.7, yet the particles themselves are
of the same size. This is just one simple example of the subtle differences between
the features inspired by the HVS and the particulate measures inspired by machine
vision and mathematical morphology. This apparent discrepancy in the measures of
coarseness and size is largely due to the measure of compaction (explored in section
2.3.2). We next determine the conditions for the equivalence of coarseness and particle
size.
(a) (b)
(c)
Figure 2.6: Three binary images (particle = 1, background = 0) showing aggregates of the same particle size but different spacing
Equivalence of coarseness and particle size
In reference to Fig.2.2, an important finding is the relationship between the measure
of coarseness and that of particle size. The linearity of a plot in Fig.2.2
is an indicator of the correspondence between coarseness and particle size. The overall
slope of a plot is an indicator of discriminative power, where a larger slope implies better
discrimination. Thus it is apparent that at maximum packing, texture coarseness can
be used as a reliable measure of particle size as can be seen in the linear relationship
between coarseness and particle size. As the packing levels reduce, the reliability of
coarseness measure to represent particle size also reduces as seen in the zig-zag nature
of the plots at lower compaction, for example in the packing density of 30%, where
a particle of 35 pixels in radius is shown to be of less coarseness than a 30 pixel
[Plot: autocorrelation against lag for images (a), (b) and (c), with the 1/e level marked; the slower the decay, the coarser the image.]
Figure 2.7: Autocorrelation sequences for the images in Fig.2.6. The lag at 1/e determines the coarseness, as per Kaizer [3].
radius particle. However, it can also be argued that at lower packing, the measure of
coarseness displays better overall discrimination between particle sizes, and could be
used as a more reliable measure for discriminating well-spaced particle sizes. In either
case there is a limit to how a coarseness measure can represent grain size and more
reliable techniques for the task of grain size exist which are based on mathematical
morphology discussed in section 2.7.
2.6.5 Placement regularity vs. Particulate shape
The HVS inspired measure of texture regularity uses the global property of the image
(the placement rule) and is therefore at odds with particulate approaches to texture
regularity, where textures are viewed from the vantage point of the particles forming
the texture and are therefore based on particle shape. It is generally agreed that the shape
of the primitives (or particles) and their placement gives the particulate texture its
appearance of regularity [2, 18]. Thus, in relation to Eqn.2.1.1, where R is a placement
rule and p the particle unit, a texture where R is structured and p less uniform (showing
more than one shape) is shown in Fig.2.8a. This is what would be described
as regular by most HVS-inspired computational measures of regularity. Similarly, in
Fig.2.8b, R is less structured whereas p is relatively uniform (showing only trapezia).
Therefore Fig.2.8b is less regular, or more erratic, in comparison to Fig.2.8a if judgment is
based on placement rule. It is left for the reader to judge which picture looks more
regular. The regularity of both textures is also analysed computationally using a bank
of Gabor filters shown in appendix A.4. Gabor filters are widely accepted as a rep-
resentation of vision in biological systems following the pioneering work by Daugman
[47]. According to [7, 18], a pattern is considered regular if it has a well defined coarse-
ness and directionality. In order to quantify this, the image energy response from each
channel shown in appendix A is calculated. Each channel outputs a measure of direc-
tionality and coarseness (or scale) and the variances φ and γ of directionality and
coarseness respectively of the channel outputs could be used to characterize regularity.
Since regularity diminishes with an increase in φ and γ, the reciprocals of these variances
can be used as regularity measures. Thus, the regularity due to directionality is given by
Γφ = 1/φ and that due to scale by Γγ = 1/γ.
(a) (b)
Figure 2.8: Two patterns tested for global regularity and particulate regularity
Another measure of regularity based on particle shape, which we call particulate regu-
larity Υ is defined here
Υ = 1 / √( ∑_{i=1}^{n} (Ωi − Ωµ)² )    (2.6.1)

Ωi = 4πUi / Li² ,    Ωµ = (1/n) ∑_{i=1}^{n} Ωi    (2.6.2)
where for each of the n shapes in the image, the ith particle has circularity Ωi which
is the ratio between the area Ui enclosed by the shape and the area of a circle having
the same perimeter Li as the shape [53, 54, 55]. The circularity of a shape is unique
to that shape and is invariant to rotation. Note that the denominator in Eqn.2.6.1 is a
measure of circularity variance thus the proposed mathematical definition of particulate
regularity is the reciprocal of the variance of circularity of the aggregates. A perfect
circle would have a circularity of 1, and all other shapes have values between 0 and
1. It is impossible to implement a perfect disc digitally, and therefore discrete circles
(or discs) have a circularity sometimes less than a square with an equivalent perimeter.
This problem was addressed by [54, 55], who developed a discrete version of circularity,
based on mathematical morphology. We shall discuss this further in section 3.1. The
simple measure of circularity used here employs the extraction of shape perimeters
using boundary detection, and uses the radius of an equivalent theoretical circle as a
standard.
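For analytically defined particle boundaries, Eqns.2.6.1 and 2.6.2 can be computed directly. The sketch below assumes polygonal particles, using the shoelace formula for the area and summed edge lengths for the perimeter, as a stand-in for the boundary-detection step used on real images; the example shapes are invented.

```python
import math

def circularity(vertices):
    """Circularity Omega = 4*pi*U / L**2 of a polygon: area U by the
    shoelace formula, perimeter L as the sum of edge lengths (Eqn.2.6.2)."""
    n = len(vertices)
    U = abs(sum(vertices[i][0] * vertices[(i + 1) % n][1]
                - vertices[(i + 1) % n][0] * vertices[i][1]
                for i in range(n))) / 2
    L = sum(math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n))
    return 4 * math.pi * U / L ** 2

def particulate_regularity(shapes):
    """Upsilon: reciprocal of the root summed squared deviation of the
    particle circularities from their mean (Eqn.2.6.1)."""
    omegas = [circularity(s) for s in shapes]
    mu = sum(omegas) / len(omegas)
    return 1 / math.sqrt(sum((o - mu) ** 2 for o in omegas))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
near_square = [(0, 0), (1.1, 0), (1.1, 1), (0, 1)]
thin = [(0, 0), (4, 0), (4, 0.25), (0, 0.25)]
similar = [square, near_square]   # alike shapes -> high regularity
mixed = [square, thin]            # dissimilar shapes -> low regularity
print(particulate_regularity(similar), particulate_regularity(mixed))
```

A square has circularity π/4, and an elongated rectangle a much smaller value, so the population of dissimilar shapes produces a larger circularity variance and hence a smaller Υ.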
2.7. Morphological Texture Analysis 25
Figure 2.9: Circularity distribution of the images in Fig.2.8
Fig.2.9 shows the circularity distributions for the textured images shown in Fig.2.8. It
is clear from the plots that Fig.2.8b has less variance in particle shape compared
to Fig.2.8a and therefore has more particulate regularity, yet by a global measure of
regularity Fig.2.8a is deemed more regular. Calculated values for regularity comparing
both approaches are shown in table 2.3.
Measure | Fig.2.8a | Fig.2.8b
Scale regularity Γγ | 17.31 | 1.64
Directional regularity Γφ | 0.86 | 0.34
Particulate regularity Υ | 7.05 | 16.25
Table 2.3: Shape regularity analysis
2.7 Morphological Texture Analysis
Morphological image processing stemmed from the discipline of mathematical morphol-
ogy in the mid-sixties as a result of a study by Matheron [56] who set out to investigate
the geometry and permeability of porous media, and to quantify the petrography of
iron ores to predict their characteristics. Morphological image processing is now an
established technique, with applications including image binarisation, filtering,
segmentation, texture analysis and granulometry. At around the same time the theory was being
developed, the ideas were also used to construct specialised hardware which became
the first texture analyser, developed by Klein and Serra [19].
The basic operations of mathematical morphology are Erosion, Dilation, Opening and
Closing. Detailed descriptions are found in appendix A.7. The structuring element is
a standard function applied in morphological operations to probe textured images in
order to extract information regarding the size of the particles of interest. An essential
characteristic of structural elements that facilitates this operation, is their convexity.
Matheron [21] was the first to characterise convexity in the context of mathematical
morphology, and did so with proofs from set theory and geometry. Two of the prop-
erties of convex sets, that Matheron developed, and which are relevant to subsequent
works on size, are presented in appendix A. Structural elements form the basis for the
Granulometric axioms presented in the following section.
2.7.1 Granulometry and Size Distribution
Granulometry as a branch of mathematical morphology is the formalisation of the con-
cept of size. In image analysis it provides an approach to compute a size distribution of
grains in binary images, using a series of morphological opening operations. Some re-
searchers associate, in addition to size, the concept of shape with granulometry. Shape
analysis is explored in the next chapter.
Granulometric quantification of size is inspired by the technique of sieving. If a sample
of grain is worked through a series of sieves arranged from top to bottom with decreasing
hole sizes, the smallest grains will fall all the way to the lower sieves, while the larger
ones are retained in the sieves with larger holes. This operation is described mathematically by
the notation ΨB(A), where A is the image containing the grains and B is a set
of structuring elements representing the sieves, satisfying the convexity conditions
mentioned in section A.7.2. The following axioms pertain to ΨB(A) [57]:
1. Anti-extensivity: ΨB(A) ⊂ A

2. Increasing property: Y ⊂ A ⇒ ΨB(Y) ⊂ ΨB(A)

3. Importance of the stronger sieve:

ΨB[ΨW(A)] = ΨW[ΨB(A)] = ΨSup(B,W)(A)    (2.7.1)
Fig.2.10 shows three binary granular images containing grains of average radii 24 and
56 pixels, on a 512 × 512 pixel grid. The proportions of the grain sizes are varied in the
three images (Fig.2.10a, b and c) so that the larger grains constitute 10%, 50% and 90%
of the area respectively. Structuring elements B of radii 4 pixels to 60 pixels, in
increments of 4 pixels, were used to probe the images.
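The sieving operation ΨB(A) can be sketched with a series of morphological openings over disc-shaped structuring elements of increasing radius. This is an illustrative SciPy sketch, not the probing experiment of Fig.2.10; the toy image and radii are invented for the example.

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Binary disc structuring element of radius r."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return x ** 2 + y ** 2 <= r ** 2

def granulometry(binary, radii):
    """Surviving foreground area after opening with discs of increasing
    radius; drops in this curve locate the dominant grain sizes."""
    return [int(ndimage.binary_opening(binary, structure=disk(r)).sum())
            for r in radii]

# Two square "grains": a small one (erased by r = 4) and a large one.
img = np.zeros((64, 64), dtype=bool)
img[5:11, 5:11] = True        # 6x6 grain
img[20:50, 20:50] = True      # 30x30 grain
areas = granulometry(img, radii=[1, 4, 8])
print(areas)
```

As the disc radius passes the size of the small grain, that grain is sieved out entirely, which is the falling-edge behaviour visible in the probing plots discussed next.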
The intensity distributions and cumulative distributions resulting from the probing are
shown in Fig.2.11. The plots shown in Fig.2.11a describe the relative response of each
of the images in Fig.2.10 when probed by successive structural elements. Notice the
peaks just before the 24 and 56 pixel points and the falling edges at exactly these points.
The falling edges in the plots simply indicate the composition by radii of grains in the
(a) (b) (c)
Figure 2.10: Synthetic binary images showing random particles of mean radii 24 pixels and 56 pixels, with the 56 pixel particles forming (a) 10%, (b) 50% and (c) 90% of the mixture by area
image and the associated peaks indicate the proportion of these components in the
overall mixture of grains. The axioms in Eqn.2.7.1 are a formalisation of the probing
operation. The technique is useful for analysis of grain size in particulate textures but
its drawback is that it is effectively limited to binary images.
[Plots: intensity of particles greater than B, and cumulative distribution ΨB(A), each against the radius of structuring element B (pixels), for the 10%, 50% and 90% large-particle mixtures.]
Figure 2.11: Size distributions for the binary images in Fig.2.10: (a) density function, (b) cumulative distribution function
2.7.2 Limitations of mathematical morphology
Generally, particulate texture analysis using mathematical morphology requires a high
contrast image that easily segregates particles from background as shown in the exam-
ples above. For gray-level images, there is difficulty in applying gray-level structural
elements owing to the fact that non-flat structural elements are not bounded by the
values of the image, and also due to the difficulty associated in choosing meaningful
structural elements. There are few examples of gray-scale morphological operations
applied to granulometry [57, 58, 59] and algorithmic details are provided in appendix
A. In those cases, structuring elements were applied to images of such high contrast
between particle and background that no binarisation was necessary. Therefore, rather
than using gray-scale structuring elements, we make an effort to binarise images to suit
the binary structuring elements. Successful morphological operations depend on
images that have clear object-background segregation, which is achievable through
effective segmentation techniques.
2.8 Segmentation of particulate textures
Analysis of particulate textures requires as a first step the isolation of individual par-
ticles or aggregates in order for the particulate features, such as size, shape, packing,
macro-texture, micro-texture and texture depth to be computed. This isolation can be
straightforward or complex depending on the textured image in question. For high con-
trast images, simple isolation of foreground particles from the background is feasible.
In the extreme case, where the number of particles is large, their size is relatively
small in comparison to the dimensions of the image, and the contrast is poor, such iso-
lation becomes difficult. However, in all cases the isolation of particles for analysis will
necessarily result in a binary image. The resulting binary image depends on the chosen
image segmentation method in preprocessing. Two principal methods of segmentation
are used for the binarisation step, namely, region-based segmentation and boundary-
based segmentation [59]. What follows is a discussion of the two approaches and
their potential applications to particulate textures.
2.8.1 Region-based segmentation
Region-based segmentation partitions an image into regions based on similarity. It is effective with
particulate images that have a good contrast between particles representing foreground,
and background. Examples of particulate images that are specially suitable for region
based segmentation are shown in Fig.2.12. A successful region-based segmentation
makes it possible to measure compaction, shape profiles and size distributions. The
micro-texture of individual particles could also be analysed following the extraction of
the objects. For analysis of macro-texture, some of the methods described in Chapter
2 are well suited to the images in Fig.2.12. The three most common region-based
segmentation methods are thresholding, region growth and morphological watersheds [59].
Thresholding is the simplest regional segmentation implementation. A common auto-
matic choice of threshold is the Otsu algorithm [60]. The method is especially effective
in images with gray level histograms showing dual peaks (Fig.2.13), namely, D48, D62,
D66, D67 and D75. For slightly more complex images, e.g. D3, D22 and D98, dual
thresholding or adaptive thresholding could be used.
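Otsu's criterion, choosing the threshold that maximises the between-class variance of the gray-level histogram, can be sketched directly in NumPy; the bimodal test image below is invented for the demonstration.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximising the between-class
    variance of the gray-level histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    m = np.cumsum(p * centers)        # cumulative mean
    mT = m[-1]                        # global mean
    # Between-class variance for every candidate threshold.
    with np.errstate(divide="ignore", invalid="ignore"):
        sb = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    sb[~np.isfinite(sb)] = 0
    return centers[np.argmax(sb)]

# Bimodal image: dark background around 50, bright particles around 200.
rng = np.random.default_rng(4)
img = rng.normal(50, 10, (64, 64))
img[16:48, 16:48] = rng.normal(200, 10, (32, 32))
t = otsu_threshold(img)
binary = img > t
print(round(float(t), 1), int(binary.sum()))
```

On this dual-peaked histogram the threshold lands in the valley between the modes, cleanly binarising the bright square from the background.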
The second method, region growth, is based on grouping pixels or subregions into
2.8. Segmentation of particulate textures 29
Figure 2.12: Some particulate textures from the Brodatz album [1] suitable for region-based segmentation of objects from background. Left to right and top to bottom: D3, D22, D48, D62, D66, D67, D75 and D98
Figure 2.13: Gray level histograms for the images shown in Fig.2.12, in corresponding order.
larger regions based on predetermined criteria for growth. In its simplest form a seed
pixel is chosen and then neighbouring pixels are appended to the seed pixel based on
predetermined criteria of gray level proximity. The selection of the seed point and
the appended pixels in most cases is done by applying thresholding. Consequently,
region growth is most effective in the same images mentioned above for thresholding
and shown in Fig.2.13, where there is a distinct object background demarcation, and
in colour aerial images where colour information is used for the region growth process.
Color gradients were used in region growth segmentation in [61]. Gray level or color
images with complex object-background intensity levels pose a difficulty for
the method. A complex object-background arrangement is one in which there are no
distinct intensity values for either. Therefore the main problem in applying the method
to particulate textures is the selection of seed points. This is especially true for images
with various particles packed together, each having a distinct color or gray level which
might sometimes be confused with the background, as shown for example in Fig.2.15.
In such cases, selection of seed points based on thresholding becomes difficult.
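The seeded growth procedure described above can be sketched in its simplest form: a breadth-first expansion that appends 4-connected neighbours whose gray level is close to the running region mean. The tolerance criterion and the toy image are illustrative assumptions, not a specific published algorithm.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed` by appending 4-connected neighbours whose
    gray level is within `tol` of the running region mean."""
    img = np.asarray(img, dtype=float)
    grown = np.zeros(img.shape, dtype=bool)
    grown[seed] = True
    total, count = img[seed], 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not grown[ny, nx]
                    and abs(img[ny, nx] - total / count) <= tol):
                grown[ny, nx] = True
                total += img[ny, nx]
                count += 1
                queue.append((ny, nx))
    return grown

# A bright particle on a dark background, recovered from one seed pixel.
img = np.full((32, 32), 30.0)
img[8:20, 8:20] = 180.0
mask = region_grow(img, seed=(10, 10), tol=25)
print(int(mask.sum()))
```

The whole procedure hinges on the seed landing inside the particle, which is precisely why seed selection becomes the bottleneck for the complex particulate images discussed above.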
A third region-based segmentation method with applications in image analysis is that of
morphological watersheds [59, 62]. The method treats an image as a topographic
surface with two spatial co-ordinates and the pixel intensity value orthogonal to the
spatial co-ordinates. In this view, there are three sets of points of interest. The first set,
Rm, is the set of pixels belonging to a regional minimum. The second set comprises the
pixels Rw from which a drop of water would settle at a single minimum Rm. The third
set of points, Re, are those from which a drop of water would be equally likely to settle
into more than one minimum. The regions Rw are called the watersheds or catchment
basins. Re is called the watershed line or divide line, which is synonymous with an edge
between two boundaries. A direct application of the watershed algorithm fails drastically even
in segmenting figures with obvious object-background distinction, as shown in Fig.2.14,
which compares simple thresholding to the simple watershed approach. This is known as
over-segmentation, and one practical step to address it is the use of markers. The
use of markers, however, is difficult as it depends on the image under analysis and
in most cases requires a priori knowledge of the image. In [63] a classification-driven
watershed segmentation was proposed that used an eroded version of the original image.
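Marker-controlled flooding can be sketched with a simple priority-queue implementation; this is a simplified illustration of the marker idea, not the algorithm of [63], and the touching-disc image, the seed locations and the distance-transform elevation are invented for the example.

```python
import heapq
import numpy as np
from scipy import ndimage

def marker_watershed(elevation, markers):
    """Marker-controlled watershed by priority flooding: labels grow out
    from the markers, always expanding the lowest-elevation frontier."""
    labels = markers.copy()
    heap = []
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (elevation[y, x], y, x))
    while heap:
        e, y, x = heapq.heappop(heap)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]
                    and labels[ny, nx] == 0):
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (elevation[ny, nx], ny, nx))
    return labels

# Two touching discs: one marker per particle keeps the result at exactly
# two regions instead of an over-segmented outcome.
yy, xx = np.mgrid[0:40, 0:60]
blobs = (((yy - 20) ** 2 + (xx - 18) ** 2 < 144)
         | ((yy - 20) ** 2 + (xx - 40) ** 2 < 144))
dist = ndimage.distance_transform_edt(blobs)
markers = np.zeros(blobs.shape, dtype=int)
markers[20, 18], markers[20, 40] = 1, 2        # one seed per particle
labels = marker_watershed(-dist, markers)      # flood the inverted distance
labels[~blobs] = 0                             # keep the particles only
n_regions = len(np.unique(labels)) - 1
print(n_regions)
```

Flooding the negated distance transform from one seed per particle splits the touching pair along the saddle between them, which is the usual way markers tame over-segmentation.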
Since the goal of isolating particles and background in particulate textures is to make it
possible to compute particulate features, the effectiveness of an algorithm depends on
the image in question, the type of features required and the computa-
tional cost involved. For example, watershed-based segmentation with markers might
be suitable for size estimation, as applied in [63], but not for quantifying shape
or packing. In some cases, none of the region-based segmentation methods may apply,
in which case boundary-based segmentation approaches become the choice.
Figure 2.14: Segmentation of the D75 image (see Fig.2.12) from the Brodatz album using simple threshold operations (left) and the watershed algorithm (right). The watershed algorithm is prone to over-segmentation
(a) (b)
Figure 2.15: A particulate image with particles of varying gray levels, and its edge profile
2.8.2 Boundary-based segmentation
In the context of particulate images, boundary-based segmentation is generally applied
to images whose regional boundaries are sufficiently different from each other and from
the background, yet too complex to be segmented using region-based techniques. The
method involves edge detection and can therefore be used for tracing particle
outlines and using the boundary trace to compute quantities of interest. An example
application where region-based segmentation would fail is shown in Fig.2.15, where the
edge detector captures the boundaries of particles in the image.
Usually boundary based segmentation is accompanied by edge linking algorithms to
ensure that the particle sizes computed via the linear intercept method [5] do not yield
misleading results due to discontinuities in edges. One common approach for linking
edges from resulting edge profiles of particulate textures is dilation with a suitably
chosen structuring element, followed by skeletonization or thinning. This approach was
used in [64] for calculating the grain size of sediments from high resolution digital
elevation model (DEM) images, and produced grain size estimates that corresponded
with true sizes. However, the edge detector used in [64] was not optimised to account
for variations in grain size. An adaptive method to address this problem is presented
in chapter 4. A method for edge-linking using morphological dilation with elliptic
structuring elements that tracks the edge directions was developed in [65]. In [66] a dy-
namic morphological operation known as roll dilation was applied to object boundary
detection. A structuring element rolls along the boundaries filling the gaps on object
boundaries. In both cases the algorithms were conceived with a countable number of
objects in mind and demand a fair level of distinction between object and background.
Edges in particulate textures are substantially more numerous and more
random than those in other commonly encountered images, such as those used in scene
analysis or biometrics, and therefore edge linking is computationally complex for par-
ticulate textures. Several edge detectors exist that provide different options. However,
the best option is to select an optimal edge detector that will accurately capture the
highest possible proportion of the particle edges, such that simple dila-
tion and skeletonization are sufficient to produce the edge map of the actual image. A
method for adaptively selecting Canny edge detector parameters for the purpose
of ranking sediment sizes is discussed in chapter 4.
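The dilation-based edge-linking step described above can be sketched with SciPy on a toy broken contour; a thinning/skeletonization pass would normally follow the dilation to restore one-pixel-wide edges, but even the dilation alone shows the gaps being bridged. The image and gap positions are invented for the example.

```python
import numpy as np
from scipy import ndimage

def link_edges(edge_map, gap=1):
    """Bridge discontinuities of up to `gap` pixels in a binary edge map
    by dilating with a (2*gap+1)-square structuring element."""
    structure = np.ones((2 * gap + 1, 2 * gap + 1), dtype=bool)
    return ndimage.binary_dilation(edge_map, structure=structure)

# A particle boundary with two one-pixel breaks: fragmented before
# linking, a single connected contour afterwards.
edges = np.zeros((20, 20), dtype=bool)
edges[5, 5:15] = edges[14, 5:15] = True      # top and bottom sides
edges[5:15, 5] = edges[5:15, 14] = True      # left and right sides
edges[5, 9] = edges[14, 10] = False          # break the contour twice
n_before = ndimage.label(edges)[1]
n_after = ndimage.label(link_edges(edges))[1]
print(n_before, n_after)
```

Counting connected components before and after linking is a convenient check that the discontinuities which would mislead the linear intercept method have been closed.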
2.9 Chapter Summary
The main contribution of this chapter is the characterisation of particulate textures.
Examples of what does and does not constitute a particulate texture were provided with
illustrations. Important features that describe particulate textures were presented.
These include size, shape, compaction, micro-texture and macro-texture.
The complexity levels of particulate textures were discussed and methods of pre-
processing particulate textures presented, including boundary-based and region-based
segmentation techniques.
The limitations of classical texture analysis techniques such as the autocorrelation func-
tion and Gabor filters in determining certain aspects of particulate textures have been
identified. The conditions under which texture coarseness can be used as a measure of
particle size have been determined. The condition is that the particles in the given
2.9. Chapter Summary 33
textures have to be compactly placed, as the relation between coarseness and particle
size is more linear for images with relatively compact particles.
Morphological texture analysis, using structuring elements for the computation of grain
size in granular images, was discussed. These morphological techniques are suited to
binary images and can compute certain features of textures better than statistical
techniques, but their application to gray-level images is less common owing to the difficulty
of applying structural elements to gray-level images. Thus there is a gap in analysing
gray-level granular texture for querying size, shape and other features of interest that
are difficult to capture using statistical methods of texture analysis or morphological
techniques.
Chapter 3
Particulate Shapes
3.1 Introduction
One of the earliest methods for characterisation of particle shapes was due to Sorby [67],
whose classification of shape and surface characteristics was descriptive and generic
rather than quantitative. The first quantitative system of measurement for shapes
was due to Wentworth [68, 69], who used a custom-made device to compute roundness
and flatness ratios. However, the earliest mathematical expressions for roundness
measurements were due to Pentland [70] and Wadell [71]. Cox [53] measured the
degree of sphericity in particles by using the 2D projections of aggregates and measuring
the circularity in each projection. Krumbein [4] developed a chart that would enable
a quick classification of shapes based on roundness and sphericity. His chart
superseded the then-existing laborious methods. A subset of this chart is shown in Fig.3.1.
In order to place the various descriptors of shape into a comprehensive framework,
the shape of a particle was identified to have three independent properties [72]. These
are form (sphericity, convexity, concavity, elongation, etc.), roundness (large-scale
smoothness and angularity) and surface texture. The first two (form and roundness)
are geometrical properties, whereas texture is a statistical/structural property per-
taining to the micro-components of the aggregates’ surface. The geometrical shape
characteristics in particles determine, to a large extent, their physical characteristics
such as strength or the amount of void present when the particles are mixed. Certain
properties of sediments, such as porosity and permeability (see section 2.3.2), are related to the
shape of the component grains of the sediment [73]. From the vantage point of texture
analysis, particle shapes provide two scales for texture. These geometrical properties
of individual aggregates are directly related to the macro-texture of the image surface
formed from the aggregates. The textural properties of individual particles constitute
the microtexture which was briefly discussed in section 2.3.1 and the tools of analysis
are similar to that of classical texture analysis. The focus here is on the geometrical
Figure 3.1: Krumbein’s visual estimation chart [4] for roundness and sphericity
properties of shape and how to quantify them using image processing.
3.1.1 Applications
There are a number of applications that require characterisation of particle shape.
Geologists use information regarding sediment shape to determine the likely source of
the sediment on the grounds that sand grains and pebbles become progressively more
round as they are transported, and therefore theoretically the direction from which a
sediment came could be determined if a progressive increase in roundness, or roundness
gradient were detectable [29]. This information was used in [74] to characterise the
source-shape relationship in glacial quartz sand grains. The effect of sharp silicate
sediment particles on the stress and mortality of marine fish of the salmonid family
was investigated in [75] using the parameter known as angularity. Pavement engineers
have identified that shape geometry in aggregates does influence the performance and
maintenance of hot-mix asphalts and asphalt concrete [76, 77]. Therefore the proper
characterisation of aggregate shape is important for high quality pavements to meet
increased traffic loads. A detailed description of these geometrical properties follows.
3.2 A Brief review of the Geometrical Shape Features
3.2.1 Circularity
Circularity is a measure of how circular an object is. The first measurements were
performed by Cox [53], who used 2D projections of aggregates and measured the circularity
in each projection. Mathematical descriptions of the features of circularity were also
developed and refined in [54] and led to the development of a circularity metric based on
a distance measure from a hypothetical discrete disc, where a customised digital circle
is designed for every shape. In [55] the distance from the centre of gravity of the shapes
to their borders was used (taking 8 or 16 cardinal directions), and then the variance of
the distances was used as a measure of circularity. The algorithms in [54, 55] are the
basis for image analysis of circularity and are discussed further in section 3.5.
3.2.2 Elongation
The elongation χ of a particle [78] is expressed as the ratio between the longest axis
dmax and the shortest axis dmin:

χ = dmax / dmin,    (3.2.1)

The elongation of a particle is close to 1 for circular objects or objects whose shape is
close to a regular polygon, and will be greater than one for flat or elongated aggregates.
3.2.3 Convexity and Concavity
A convex polygon is defined as a polygon that is curved out or bulging outward. Con-
cave polygons are polygons that are not convex. A novel mathematical description for
the computation of convexity, based on the parameters of the Jordan curve, is developed
in section 3.5.4.
3.2.4 Roundness
The roundness of a particle is a description of the level of smoothness in its outline. In
his earliest work on shape analysis, Wadell [71] defined the roundness O of an object as
the ratio of the average radius of its corners and edges Λ to the radius R of the largest
inscribed circle:

O = ( (1/N) Σ_{i=1}^{N} Λi ) / R    (3.2.2)
The measurement of roundness using Wadell's algorithm was conducted using manual
instruments and is a time-consuming and laborious task; therefore the Krumbein
chart shown in Fig.3.1 was developed to overcome this problem and provide a quick
way of determining roundness and sphericity. The difficulty with implementing (3.2.2)
is mainly due to the characterisation of what constitutes an edge or corner. A review of
existing imaging techniques for computing roundness, and a proposed novel technique
are discussed in section 3.5.3.
3.3 Analysis techniques of geometrical shape features
3.3.1 Classical techniques
One of the earliest works on the sphericity of a particle was due to Cox [53] who used the
projection of shapes on a screen and computed their geometrical circularity. Similarly,
the fundamental studies on the nature of roundness by Wadell and Wentworth [68, 69, 71]
emphasised the curvature of edges and corners. These were the basis for the
development of modern techniques for aggregate shape analysis. Wadell's algorithm defines
roundness as the ratio between the curvature of “edges and corners” of an aggregate
and that of the maximum inscribed circle.
In [79] Schwarcz et al. described a procedure for quantifying the shape of two-dimensional
closed curves from projections or sections of particles based on the earlier works by
Wadell and Wentworth. The curves were plotted in polar coordinates, displaying
the radius and orientation. Harmonic analysis using Fourier techniques was then
implemented on the resulting polar representation at equally spaced intervals. Quantities
corresponding to sphericity and roundness were derived from the Fourier coefficients. In
[80] a similar procedure was implemented utilising the coordinates of peripheral points
and specifying the centre of the particle to be the centre of mass. Diepenbroek et al.
[81] further developed the Fourier analysis methods by recognizing the conceptual dis-
crepancies between the method and the original definition of roundness as proposed by
Wadell. The shape of the particle was compared to a hypothetical ellipsoid which was
assumed to be the ultimate abraded shape of the particle. Fourier analysis was also
used in [82] to measure elongation, triangularity, squareness and asymmetry.
Techniques of geometrical shape analysis eventually culminated in several ASTM stan-
dards including ASTM D4791: Standard Test Method for Flat Particles, Elongated
Particles, or Flat and Elongated Particles in Coarse Aggregate, ASTM C1252: Stan-
dard Test Methods for Uncompacted Void Content of Fine Aggregate (as Influenced by
Particle Shape, Surface Texture, and Grading), and ASTM D5821: Standard Test
Method for Determining the Percentage of Fractured Particles in Coarse Aggregate.
3.3.2 Imaging techniques
In [83], Kuo et al. used the image morphological characteristics of shapes to quantify
flatness and elongation of aggregates used for asphalt concrete mixture. They concluded
that the image analysis method was more time efficient and provided more information
than the ASTM D4791 prescribed method. In a similar experiment, Kuo and Freeman
[78] related uncompacted void content to surface roughness and aggregate angularity,
showing the superiority of image processing techniques over the ASTM C1252 prescription. In
[84], image processing methods were correlated against existing methods for angularity
measure, showing further promise of image processing techniques. Chetana et al. [85]
developed a new image-based angularity index, invariant to rotation and calibrated for
round gravel and crushed stone, to supplement ASTM D5821. Polygonal approximation
of shapes was used to determine angularity, and therefore the accuracy depended
on the dimension of the polygon used.
In [86] the inscribed circle is modeled using a disc, which is used to morphologically
open a given shape. The reduction in resulting area of the shape is then computed. This
is done for several discs of increasing size. The authors measured their method against
a digitized version of the extended Krumbein chart (Fig.3.1 is a subset of that chart).
They concluded that the Krumbein roundness is simply a measure of the relative area of
a particle that is removed through opening with a disk of radius approximately 40-50%
of the radius of the maximum inscribed circle. One drawback of the method is that it
ignored edges and corners and therefore reduced the definition of roundness to the ratio
of area between the shape and its inscribed circle. Further, the approach was limited
because it is possible to construct two shapes with equal maximum inscribed circle
and equal peripheral areas, yet with different configurations of edges and corners. An
example is the X-cross and the star in Fig.3.4: it is quite possible to construct equal
inscribed circles and equal peripheral areas for the two shapes, and so obtain
a similar ratio, yet the shapes are starkly different.
3.4 A Novel framework for image analysis of geometrical
particulate shapes
The proposed approach to shape analysis of aggregates springs from the established
concepts of both form and roundness analysis. The computed features therefore are
those discussed in section 3.2, namely circularity, elongation, roundness and convexity.
These features will be measured using the framework developed here. Circularity
is a measure of how circular a particle is; the original methods are those developed
for image processing in [54, 55], and here a technique using a slight variant of
the digitised methods for circularity is applied. The framework is based on the
mathematical description of the Jordan curve and its implications in the digital analysis of closed
curves. One issue that this covers is the digital disc, which is a distorted version of the
circle as defined in Euclidean space. The effects of digitisation of straight lines and
curves are detailed in [17] and form part of this framework.
As a consequence of this framework, an algorithm is developed which is based on vectors
of transition angles. Transition angles are simply the trajectories between subsequent
neighbouring pixels that form the outline of a particle or aggregate in the clockwise
direction. Angular properties useful for the roundness measure are inferred from the
transition angles of the shape. The method is based on the idea that the turning angles
of the boundaries of the 2-D projections of an aggregate hold essential information
regarding the projected shape in R^2 and also capture the nature of edges and corners
which are essential in the definition of roundness. In geometrical terms, the boundaries
of the 2D projection of a particle trace what is known as the Jordan curve. Therefore
in many aspects, the properties of a Jordan curve apply to the characterisation of
aggregate shapes. In digital image processing, the 2D boundary projection of the shape
is a digital Jordan curve.
3.4.1 The Euclidean Jordan Curve
The projection of the outline of any particle on a plane results in a simple closed
curve, formally known as the Jordan curve [87, 88]. A simple closed curve is a space
homeomorphic to the sphere S^n in the (n+1)-dimensional space. Formally,

S^n = { x ∈ R^(n+1) : ||x|| = 1 }    (3.4.1)

Thus, in R^2 we have the circle S^1, and a Jordan curve in R^2 is simply a closed curve
homeomorphic to S^1. Similarly, in R^3 we get the sphere S^2, to which all solid 3-D
shapes are homeomorphic.
Therefore, we define the projection of the boundaries of any solid shapes into R2 to be a
Jordan curve. For a full characterisation of 3D shapes, therefore, several 2D projections
are necessary. An important property of the Jordan curve that will be useful in shape
analysis is the Jordan curve theorem [87, 88], which we here present as a definition:
Definition 1. Let C be a simple closed curve in R^2. Then the complement of C in R^2
consists of two components Ri and Ro, each of which has C as its boundary.
Ri and Ro are the internal region and external region to the Jordan curve C respec-
tively. The above definition is a useful 2D characterisation of the boundaries of solid
objects or aggregates, for two reasons. First, it defines the boundary to be closed.
Secondly, it precludes the possibility of the curve crossing or intersecting itself. These
two properties are sufficient for the class of objects whose shape
we seek to characterize.
3.4.2 The digitized Jordan curve
Properties of Euclidean geometrical shapes become greatly simplified in their discrete
form, albeit distorted. The continuous points x ∈ R^2 become pixelated discrete points
z ∈ Z^2. As a result of the discretisation, the set of rules used to characterise a Jordan
curve in R^2 also becomes slightly modified in Z^2.
Digital equivalents of the Jordan curve theorem have been studied for the last two
decades [89, 90]. However, the geometric aspects of digital image processing [91] and
the idea that we can track the borders of given shapes using digital image processing
and use the resulting sequences for encoding region shape were suggested by Rosenfeld
[92]. Of relevance to our application is the definition of the digital closed curve (referred
to in [92] as a curve).
Definition 2. Let Π be an array of lattice points having coordinates i, j ∈ N, i, j ≠ 0.
D ⊆ Π is a digital closed curve if it is connected and each of its points has exactly two
neighbours in D.
To the above definition, we add the definition of a perimeter of a curve.
Definition 3. The Perimeter L of a digital closed curve D ⊆ Π is the total number of
connected points of D. This is simply the total number of boundary pixels.
In a similar fashion to the Euclidean Jordan curve, the digital Jordan curve also has
internal and external regions, which shall be designated Zi and Zo, respectively.
If we number the points of a closed curve z1, ..., zL then zm is a neighbour of zn if and
only if m = n ± 1 (modulo L). The point (i, j) could be connected to its neighbours
in one of two ways, namely, 4-neighbourhoods or 8-neighbourhoods. The 4-neighbours
of (i, j) are its four horizontal and vertical neighbours (i± 1, j) and (i, j ± 1), whereas
its 8-neighbours consist of the 4-neighbours together with its 4 diagonal neighbours
(i ± 1, j − 1) and (i ± 1, j + 1).
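The two connectivity schemes can be sketched as a small helper (a Python illustration of the definitions above; the function name is ours):

```python
def neighbours(i, j, connectivity=8):
    """Return the 4- or 8-neighbourhood of pixel (i, j).

    The 4-neighbours are the horizontal and vertical neighbours
    (i +/- 1, j) and (i, j +/- 1); the 8-neighbours add the diagonals."""
    n4 = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    if connectivity == 4:
        return n4
    diagonals = [(i - 1, j - 1), (i - 1, j + 1), (i + 1, j - 1), (i + 1, j + 1)]
    return n4 + diagonals
```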
Thus, the smallest digital curve possible with 4-neighbour connectedness has 8 points,
whereas the smallest digital curve with 8-neighbour connections has 4 points. Clearly
the 8-neighbourhood scheme is more economical and provides more directional variations,
and it shall be used in our algorithm for shape characterisation. In [54] it is
established that 8-connected boundaries result in an improved measure of circularity,
which is a parameter of shape. Fig.3.2 shows a continuous Jordan curve digitized into
a digital closed curve. Note the 8-neighbourhood connection. For the digital curve in
Fig.3.2, L = 43.
Figure 3.2: Digitisation of a simple closed curve (Jordan curve)
3.4.3 Boundary Pixel trajectories and Internal Angles
In the analysis of shapes using digital closed curves, a convention regarding the handling
of the associated parameters is necessary for a consistent development of concepts. It
is important to be clear about how the pixels are indexed in any given image and how
we intend to manipulate the arrangement in the ensuing analysis. Thus, the top-most
left pixel in an image is the starting pixel of the image lattice as shown in Fig.3.3-
a. In a digital closed curve, a point z1 is designated to be the starting point and
subsequent pixels z2, ..., zL labeled clockwise. Thus z1 is also a neighbour of zL. With
this convention, the trajectories and internal angles can now be defined.
Trajectories
The trajectory τn of a point zn is the slope of the line joining two neighbouring points
zn−1 and zn in the clockwise direction. Let zm = (im, jm) and zn = (in, jn), with
m = n − 1 (modulo L), be two neighbouring points on an arbitrary shape boundary (a
Jordan curve) of perimeter L, and let τn be the trajectory of zn. Define δin = in − im,
δjn = jn − jm and θ = tan−1(|δin|/|δjn|). The trajectory τn is given by:

τn |δin≤0 = π − θ, if δjn < 0;  θ, if δjn ≥ 0    (3.4.2)

τn |δin>0 = π + θ, if δjn < 0;  2π − θ, if δjn ≥ 0    (3.4.3)

For an 8-neighbourhood connection, there are only 8 possible trajectories. The table
shown in Fig.3.3-b can be used as a guide for obtaining these trajectories, where in each
of the 8 directions the angles in degrees, in radians and (δin, δjn) are shown.
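Eqns 3.4.2 and 3.4.3 translate directly into code; the following Python sketch (our illustration, with image rows indexed downwards as in Fig.3.3) maps a single pixel step (δin, δjn) to its trajectory:

```python
import math

def trajectory(di, dj):
    """Trajectory angle tau for one pixel step (Eqns 3.4.2-3.4.3).

    di is the change in row index (positive downwards), dj the change
    in column index. theta = atan(|di|/|dj|) as in the text."""
    theta = math.atan2(abs(di), abs(dj))
    if di <= 0:
        return math.pi - theta if dj < 0 else theta
    return math.pi + theta if dj < 0 else 2 * math.pi - theta
```

For the 8-neighbourhood steps this reproduces the eight directions of Fig.3.3-b, e.g. a step straight up gives π/2 and a step down-right gives 7π/4.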
As an example, the vector τ (in radians) of the boundary pixel trajectories of the
digitised closed curve in Fig.3.2, starting at the bottom left pixel (pointed to by the arrow)
and moving in the clockwise direction, is as follows:

τ = [π/2, π/2, π/2, π/2, π/2, 0, π/2, 0, π/2, π/2, π/2, π/4, π/2, 0, 0, 3π/2, 0, 7π/4, 0,
3π/2, 0, 3π/2, 0, 7π/4, 7π/4, 0, 3π/2, 3π/2, 3π/2, π, 3π/4, π, π, 3π/2, π, π, π, 5π/4,
π, π, π, 5π/4, π]
Figure 3.3: Labeling convention used in the development of the shape characterisation
algorithm. (a) An m×n image indexing and (b) angular directions and vector translations
w.r.t. the pixel (i, j) of its 8-neighbourhood.
Internal Angles
The vector listing of all the trajectories of the closed curve in Fig.3.2, shown in the
previous section, does not in itself provide the information required for shape analysis,
but it can be manipulated to do so. The difference vector of the trajectories,
which we call the internal angle, can be used to extract shape parameters irrespective
of their boundary orientation. We designate this internal transition angle α. For two
consecutive trajectories τi and τj in the clockwise direction, the internal angle α is
given by:

αij = π + (τj − τi),   0 ≤ α ≤ 2π    (3.4.4)
Based on Eqn.3.4.4, the vector of internal angles of the closed curve in Fig.3.2 is as
follows:
α = [π, π, π, π, π/2, 3π/2, π/2, 3π/2, π, π, 3π/4, 5π/4, π/2, π, π/2, 3π/2, 3π/4, 5π/4,
π/2, 3π/2, π/2, 3π/2, 3π/4, π, 5π/4, π/2, π, π, π/2, 3π/4, 5π/4, π, 3π/2, π/2, π, π,
5π/4, 3π/4, π, π, 5π/4, 3π/4, π/2]
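Applying Eqn 3.4.4 around the whole closed curve produces such an α vector; a minimal Python sketch (our naming), with the result wrapped into [0, 2π):

```python
import math

def internal_angles(tau):
    """Internal angles (Eqn 3.4.4) from the trajectory vector of a
    closed curve; indices are taken modulo L since the curve is closed."""
    L = len(tau)
    return [(math.pi + tau[(n + 1) % L] - tau[n]) % (2 * math.pi)
            for n in range(L)]
```

A run of identical trajectories (a straight boundary segment) yields flat angles of π, while each change of direction shows up as a deviation from π, exactly as in the vector above.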
3.4.4 Smoothing of a Digital Jordan Curve
The Jordan curve example shown in Fig.3.2 has boundaries displayed on a standard
image grid. If we translate the pixel coordinates into Euclidean space, then it is
possible to smooth the shape using moving averages. If moving averages of order p are
used, then the nth smoothed pixel ζn is given by:

ζn = (1/p) Σ_{k=0}^{p−1} z_{n−k}    (3.4.5)

The above operation is done (modulo L), since the curve is closed and therefore the
last point zL is in fact the previous pixel w.r.t. z1.
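A direct Python sketch of Eqn 3.4.5 (our naming; boundary points given as (i, j) pairs, indices wrapping modulo L):

```python
def smooth_closed_curve(z, p):
    """Order-p moving average of a closed digital curve (Eqn 3.4.5).

    Each smoothed point averages p consecutive boundary points; indices
    wrap modulo L because the curve is closed."""
    L = len(z)
    return [(sum(z[(n - k) % L][0] for k in range(p)) / p,
             sum(z[(n - k) % L][1] for k in range(p)) / p)
            for n in range(L)]
```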
Smoothing affects the original form in two ways. On the one hand, it smooths sharp
pixel transitions which might be potential edges or corners of interest; this is clearly
undesirable. On the other hand, it removes noise, especially that associated with the
pixelation of curves and diagonal lines, which otherwise gives a false impression of
sharpness. Experiments conducted on synthetic shapes reveal that true edges or corners
that get smoothed can still be detected using custom-designed algorithms, whereas
the false edges and corners resulting from noise are hard to characterise. However,
a major advantage of smoothing a digital Jordan curve is that it provides a means of
abstraction to analyse the shape parameters. Once the smoothed curve is obtained, the
shape parameters are computed by matching the smoothed curve with the equivalent
circle. The equivalent circle is discussed next.
3.4.5 The Equivalent Circle
Definition 4. Let D ⊆ Π be a digital closed curve of perimeter L. The equivalent
circle to the curve D is a circle with perimeter L.
The idea of the equivalent circle is used to determine the classical measure of circularity
of a shape. In the subsequent sections it shall be used to determine all the geometric
shape features discussed in section 3.2.
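For computation, the internal-angle vector of the equivalent circle can be modelled as that of a regular L-gon (a modelling assumption on our part, consistent with the observation in section 3.5.3 that a circle's internal angles approach 180° as the perimeter grows):

```python
import math

def equivalent_circle_angles(L):
    """Internal angles of the equivalent circle of perimeter L, modelled
    as a regular L-gon: the total turning along a closed convex curve is
    2*pi, so each of the L internal angles is pi - 2*pi/L."""
    return [math.pi - 2 * math.pi / L] * L
```

For L = 43, the perimeter of the digital curve in Fig.3.2, each angle is about 171.6°, and the angles approach 180° as L grows.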
3.5 Extraction of Shape parameters
Some of the important shape parameters include the circularity, elongation, roundness
and convexity. These are defined below using illustrations.
3.5.1 Circularity
The classical definition for the circularity Ω of a shape is:

Ω = 4πU / l² = 4πA(Ri) / l²    (3.5.1)

where U = A(Ri) is the area of the shape and l its perimeter. The equivalent definition
in the discrete domain is therefore:

Ω = 4πU / L² = 4πA(Zi) / L²    (3.5.2)
Ω is a measure of how circular an object is. A major issue with the implementation of
Eqn.(3.5.2) is that the measure becomes more a measure of octagonality than circularity,
especially for shapes with smaller perimeters. This situation was addressed in [54]
and led to the development of a circularity metric based on a distance measure from a
hypothetical discrete disc, where a customised digital circle is designed for every shape.
In [55] the distance from the centre of gravity of the shapes to their borders was used
(taking 8 or 16 cardinal directions), and then the variance of the distances was used as a
measure of circularity. We adopt a similar concept to that in [55] in our algorithm, but
rather than a few border pixels, we shall utilise the totality of the pixels forming the
boundary for more accurate results. Let z1, ..., zL be the points forming the perimeter L
of the digital shape D and let di be the distance from each zi to the centre of gravity
z0 of the shape D. Then the circularity Ω2 is described in terms of the mean distance
zµ = (1/L) Σ_{i=1}^{L} di, and the absolute deviation from the mean ∆i = |di − zµ|, as:

Ω2 = 1 − (1/L) Σ_{i=1}^{L} (∆i / zµ),    (3.5.3)
The above definition of circularity can be applied to both convex and concave shapes.
Table 3.1 shows both the measure of circularity of Eqn.(3.5.2) and that of Eqn.(3.5.3) for
the shapes shown in Fig.3.4.
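Eqn (3.5.3) translates directly into code; in this Python sketch (our naming) the centre of gravity is approximated by the centroid of the boundary pixels:

```python
import math

def circularity_omega2(boundary):
    """Circularity Omega_2 (Eqn 3.5.3): one minus the mean absolute
    deviation of the centre-to-boundary distances, relative to their mean."""
    L = len(boundary)
    ci = sum(i for i, _ in boundary) / L  # centroid of the boundary pixels
    cj = sum(j for _, j in boundary) / L
    d = [math.hypot(i - ci, j - cj) for i, j in boundary]
    mu = sum(d) / L
    return 1.0 - sum(abs(x - mu) for x in d) / (L * mu)
```

An exact circle scores 1, and increasingly irregular outlines score lower, as in the Ω2 column of Table 3.1.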
3.5.2 Elongation
Elongation is the simplest feature of shape and is given by the expression in Eqn.3.2.1.
Table 3.1 shows elongation values for the shapes shown in Fig.3.4. The elongation
is close to 1 for circular objects or objects whose shape is close to a regular polygon,
and will be greater than one for flat or elongated aggregates.
Figure 3.4: Set of shapes
Shape 3.5.2(Ω) 3.5.3(Ω2) Elongation(χ) Roundness(On) Convexity(υ)
Isosceles triangle 0.571 0.755 3.125 0.125 1.000
Equilateral Triangle 0.627 0.792 2.654 0.138 1.000
Square 0.635 0.905 1.419 0.142 1.000
Pentagon 0.859 0.937 1.373 0.16 1.000
Hexagon 0.885 0.962 1.156 0.159 1.000
Heptagon 0.644 0.972 1.147 0.133 0.958
Octagon 1.048 0.979 1.09 0.169 1.000
Nonagon 0.842 0.981 1.109 0.405 0.995
Ellipse 0.691 0.734 2.489 0.184 1.000
Circle 0.991 0.997 1.027 1.000 1.000
X-cross 0.331 0.706 4.269 0.064 0.668
Star 0.28 0.756 1.645 0.072 0.552
Table 3.1: Circularity, elongation, roundness and convexity measures of the shapes shown in Fig.3.4.
3.5.3 Roundness from Edges, Corners and their curvature
The classical definition of roundness is shown in Eqn.3.2.2. It incorporates measuring
the relative curvature of edges and corners of a shape as compared to the curvature of
a base shape. This was first attempted in [81], where the curvatures of a particle
were compared to a hypothetical ellipsoid. The vector of the internal angles of a closed
curve is used here as an indicator of shape curvature or shape angularity. For a circle,
the internal angles are close to 180°, especially so for circles with larger perimeters.
Therefore, for any given shape, a deviation from π radians (180°) is an indicator of
deviation from roundness.
Fig.3.5 shows plots of all the shapes in Fig.3.4 in relation to their equivalent circle.
The number of edges or corners in each shape is precisely half the number of crossings
between the internal angle vector of the shape and that of its equivalent circle. For
example, the internal angle vector of the square crosses that of its equivalent circle 8
times, indicating 4 corners.
Let α be the vector of internal angles of any given closed curve, θ the vector of
internal angles of its equivalent circle, and n the number of edges in the curve. A measure
of roundness O1 is given by:
J = GetOutline(Particle);          % Jordan curve from Particle
EqC = GetEquivalentCircle(J);      % Equivalent circle of resulting Jordan curve
[a1, a2] = GetInternalAngle(J);    % Min. and max. of internal angles for Jordan curve
[t1, t2] = GetInternalAngle(EqC);  % Min. and max. of internal angles of Eq. circle
n = GetEdges(J);                   % Number of edges and corners
R = n*(a2 - a1)/(t2 - t1);         % Eqn 3.5.4
Roundness = exp(-log10(ceil(R)));  % Eqn 3.5.5
Table 3.2: The Roundness Algorithm
O1 = n (αmax − αmin) / (θmax − θmin)    (3.5.4)
Judging by the vectors shown in Fig.3.5, it is apparent that in Eqn.3.5.4 the denominator
is much smaller than the numerator, and therefore O1 is a large number for shapes with
sharp edges. For the circle, the numerator and denominator are roughly equal and n is
unity, thus O1 is typically less than 1. Note also that the α vectors for the shapes do not
display the actual angles. For example, a square should ideally show 90° four times, and
an equilateral triangle should show 60° three times. The large angles are a consequence
of smoothing, but they do still capture those edges and corners. Lack of smoothing
results in noise that would give misleading information regarding the quantity of edges.
The values of roundness displayed in Table 3.1 are the normalised values of roundness
On.
On = e^(−log10 ⌈O1⌉)    (3.5.5)

where ⌈x⌉ is the smallest integer greater than or equal to x. Table 3.2 outlines the
roundness algorithm using Matlab® coding.
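For readers working outside Matlab, the last three steps of Table 3.2 (Eqns 3.5.4 and 3.5.5) reduce to a few lines; this Python sketch (our naming) assumes the α and θ vectors and the edge count n are already available:

```python
import math

def normalised_roundness(alpha, theta, n_edges):
    """Roundness O1 (Eqn 3.5.4) normalised to On (Eqn 3.5.5)."""
    o1 = n_edges * (max(alpha) - min(alpha)) / (max(theta) - min(theta))
    return math.exp(-math.log10(math.ceil(o1)))
```

A shape whose internal-angle spread matches that of its equivalent circle, with n = 1, gives O1 = 1 and hence On = 1; sharper shapes with larger spreads and more edges score lower.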
3.5.4 Convexity
A convex polygon is defined as a polygon with all its interior angles less than 180°.
Mathematically, we define convexity υ as:

υ = L(α ≤ 180°) / L(α),    (3.5.6)
where L(α) is the length of the vector α.
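Given the α vector, Eqn (3.5.6) is a one-liner (Python, our naming; angles in radians, so the threshold is π):

```python
import math

def convexity(alpha):
    """Convexity (Eqn 3.5.6): the fraction of internal angles that do
    not exceed pi (180 degrees)."""
    return sum(1 for a in alpha if a <= math.pi) / len(alpha)
```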
3.6 Impact of particle shape on analysis of particulate tex-
tures
An important aspect of particulate texture analysis is the determination of average
grain size. Whenever the concept of size is applied to aggregates or particles, a con-
ventional definition of how we intend to measure size is necessary if consistency is to
Figure 3.5: Plots of internal angles of the shapes shown in Fig.3.4 against the internal
angles of the equivalent circle
3.7. Chapter Summary 49
be attained. The importance of a conventional definition is highlighted by two differ-
ing approaches shown in Fig.3.6. The question that arises is whether the size of the
star is determined by the inscribing circle or the inscribed circle. Both answers are
valid if consistency is applied throughout. In Fig.3.6a the inscribed circle can be easily
computed using the opening operation described in section 2.7.1. The inscribing circle
convention (Fig.3.6b) for size measurement can be implemented by assigning to any
given shape the largest diameter between its edges. Other conventions include the area
enclosed by the shape boundaries, or the perimeter of the boundaries. However, the
implementation of any of these is dependent on the application and also on the complexity
or quality of the image being analysed. For complex images such as sediments (chapter
4) and road surfaces (chapter 5) requiring measurements of grain size, a method of
pixel runs obtained from edge profiles is developed.
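As an illustration of the inscribed-circle convention of Fig.3.6a, the radius of the largest inscribed circle of a binary shape can be estimated as the greatest distance from any foreground pixel to the background (a brute-force Python stand-in, our construction, for the morphological opening of section 2.7.1; production code would use a distance transform):

```python
import math

def max_inscribed_radius(mask):
    """Approximate radius of the largest inscribed circle of a binary
    shape: the greatest distance from a foreground pixel to the nearest
    background pixel. The mask must contain some background."""
    h, w = len(mask), len(mask[0])
    background = [(i, j) for i in range(h) for j in range(w) if not mask[i][j]]
    best = 0.0
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                r = min(math.hypot(i - bi, j - bj) for bi, bj in background)
                best = max(best, r)
    return best
```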
Another aspect of particulate textures that requires analysis is the distribution of the
particulate shapes within the textured images. A corresponding parameter in classical
texture analysis is regularity. The comparison between the concept of texture regularity
and that of shape distribution of their composing particles was discussed in section 2.6.5.
The characterisation of individual particle shapes contributes, at a fundamental level,
to the understanding of particulate textures.
Figure 3.6: A star shape showing (a) its inscribed circle and (b) its inscribing circle
3.7 Chapter Summary
The main contribution of this chapter is the development of a novel framework for
particle shape based on the Jordan curve. A literature review of existing techniques for
characterising aggregate shape is also presented. Experiments with synthetic shapes
show the robustness of the technique. The impact of particle shape on the computation
of grain size is briefly discussed.
Chapter 4
Granulometry by Edge Detection
4.1 Introduction
In the context of image processing, granulometry is an approach to compute the size
and shape distribution of grains in granular images. It has been defined and briefly dis-
cussed in its formal mathematical setting in section 2.7.1 with accompanying examples
of binary images shown in Fig.2.10 and their size distribution in Fig.2.11. There are
several applications that require quantification of grain size. A few examples include
the study of grain size in metallography to characterize the microstructure of metals
[93, 94] and the grading of sediments by size for an understanding of sediment transportation
and deposition [4, 45, 64, 95].
Grain size determination methods are application-specific, and the level of required
precision varies. However, all image processing techniques for particle size determination
undergo a binarisation step to isolate grain boundaries, using some form of
segmentation.
Both region-based and boundary-based segmentation methods [59] are applicable to
particulate image segmentation. Both segmentation techniques are discussed in section
2.8. Region based methods are best suited to images with relatively distinct parti-
cle and background pixels whereas boundary-based techniques segment the particles
based on the abrupt changes in intensity. These changes in intensity occur at particle
boundaries. An example of a particulate image that is difficult to segment using region-based methods but works favorably with boundary-based edge detection techniques is shown in Fig.2.15. The studied images in this chapter are those that lend themselves to
boundary-based segmentation in which edge detection features strongly. Methods em-
ploying edge detection for grain size analysis were implemented by several researchers
[64, 93, 94, 95]. Generally, edge detectors could be tuned for optimal performance
depending on the image in question and the type of edge detection operator used. One
important aspect in using edge detectors for measuring granular size, is the effect of
the parameters of the detector on the measurement. For example, the choice of the
Gaussian width parameter of the Canny detector may influence the results of grain size
distributions in images. The choice of thresholding parameter may show bias in the
selection of foreground or background and therefore affect the boundaries of some of
the objects of interest for granulometry. Overall, the purpose of using edge detection
techniques is to overcome the complexity associated with images found to be difficult
for region-based segmentation approaches, while keeping the boundary distortions to
a minimum. A discussion on the complexity of particulate images with examples from
the Brodatz album [1] is presented in section 2.4.
4.2 Development of Edge Detectors
In general, edge detectors fall into three categories, as displayed in Fig.4.1. The earliest detectors were the gradient-based (or first derivative) operators, followed by the second-derivative operators and then the Gaussian-based operators.
Figure 4.1: Taxonomy of edge detectors
The first derivative operators are the simplest of all the edge operators. These in-
clude the Sobel operator, the Prewitt operator and the Roberts cross operator [42, 59].
A mathematical description of these operators and their accompanying masks can be
found in appendix B.
Compared to first derivative operators, second derivative operators are an improved set of operators in that they use the local maxima of the gray-level gradient to determine the edge, rather than just a simple gradient. Second derivative operators include
the Laplacian (appendix B) and the Marr-Hildreth operator also known as Laplacian of
Gaussian (LoG) [96]. The LoG operator suffers from two main limitations. It generates
responses that do not correspond to edges, or false edges, and is prone to localization
error at curved edges. These limitations were addressed by Canny in his development of
the Canny detector [97]. The Canny detector satisfies the desirable properties of edge
detectors including the capacity to suppress noise and optimize the trade-off between
detection and localisation, minimizing the possibility of multiple edges where a single
edge point exists. This is useful in our case with images of particulate textures from
which we seek to obtain size distribution of the aggregates.
Two parameters of interest in the Canny detector are the Gaussian width σ used for
smoothing the image, and the threshold pair Θ and ρΘ, used to reduce the false edge
points following non-maximum suppression. Canny [97] suggested the ratio between
the upper and lower threshold be 2:1 or 3:1. In our algorithms a ratio of 2.5:1 is used.
Thus Θ denotes the upper threshold and ρΘ the lower threshold where ρ = 0.4.
Fig.4.2 shows the edge maps resulting from the different detectors discussed in this
section. Note the Canny detector’s superior performance in comparison to the rest,
regarding the handling of noise from missed edges. The Canny algorithm is mathemat-
ically explained in appendix B.
4.3 The EDPC Method
The method proposed here is referred to as EDPC, signifying the Edge Detection and Pixel Counting steps it employs. Let I(x, y) be a gray level image of a particulate texture, from which the average size is sought. An example set of sand sediment
images is shown in Fig.4.3.
We obtain an edge map V (x, y) of the image I(x, y) as follows:
Q(I(x, y)) ↦ E(x, y) (4.3.1)

S(E(x, y) ⊕ W) ↦ V(x, y) (4.3.2)

where Q is the Canny edge detection operator, E(x, y) the resulting edge map, W the structuring element used for dilation of the edge map, ⊕ the dilation operator and S the skeletonization operator. The steps of Eqn.(4.3.1) and (4.3.2) are discussed in further detail below.
Figure 4.2: (a) A particulate image and its edge profiles using adaptive thresholding and operators (b) Prewitt, (c) Sobel, (d) Roberts, (e) Laplacian of Gaussian and (f) Canny detector. The Canny detector has an additional parameter σ = 1.
Figure 4.3: Sediment data used in the experiments. For these images, mechanically sieved sand was used and the average particle size is thus known: row 1: 180µm, 210µm, 250µm; row 2: 300µm, 355µm, 425µm; row 3: 600µm, 710µm, 850µm; row 4: 1000µm, 1190µm, 1400µm.
4.3.1 Edge Detection
The edge detection operation Q(I(x, y)) on the image I(x, y) results in a binary version
of the image with distinct edges at the boundary of the objects in the image [42, 59].
Here we applied the Canny detector [97] as it satisfies the desirable properties of edge detectors: noise suppression, an optimised trade-off between detection and localisation, and a minimal incidence of multiple responses where a single edge point exists. This is useful in our case with textured images from which we seek to obtain the size distribution of the aggregates. Fig.4.2 shows the performance of the Canny edge detector in comparison to the rest of the detectors.
It was shown [97] using numerical optimization, that the ideal edge detector of a signal
corrupted by noise, is the first derivative of the Gaussian. The detector is applied to
digital images by first convolving the image with a 2-D Gaussian function and then
computing the magnitude and direction of the intensity gradient. In most cases this
results in ridges around local maxima. Potential edge points are further isolated
using non-maximal suppression. Once potential edge points are located, the actual
edges must be generated using hysteresis with application of thresholds. Here two
thresholds are used such that the edge points of gradient magnitudes above an upper
threshold are traced until the magnitude falls below the lower hysteresis threshold.
This reduces the number of false edges while maintaining edge connectedness. Thus
the two parameters of interest in the Canny detector are the Gaussian width σ and the
threshold Θ.
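The hysteresis step above can be sketched in one dimension; this is an illustrative sketch with made-up gradient values, not the thesis implementation (the 2-D detector traces along edge contours instead):

```python
def hysteresis_1d(gradient, upper, lower):
    """1-D sketch of dual-threshold hysteresis: points at or above `upper`
    seed edges, and neighbours are absorbed while the gradient magnitude
    stays at or above `lower`."""
    n = len(gradient)
    edge = [g >= upper for g in gradient]      # strong edge points
    changed = True
    while changed:                             # grow edges into weak points
        changed = False
        for i in range(n):
            if not edge[i] and gradient[i] >= lower and (
                    (i > 0 and edge[i - 1]) or (i + 1 < n and edge[i + 1])):
                edge[i] = True
                changed = True
    return edge

# Weak points (3) survive only when connected to a strong point (5).
print(hysteresis_1d([0, 5, 3, 1, 3, 5, 0], upper=4, lower=2))
# → [False, True, True, False, True, True, False]
```

The isolated weak point (1) is discarded, which is how false edges are reduced while edge connectedness is maintained.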
The Gaussian width σ
The Gaussian width σ of the Canny detector controls the trade off between detection
and localisation in edge maps. Therefore the choice of σ may affect the size distribution of aggregate textured images. Thus σ of the Gaussian function of the edge detector is one of the parameters that need to be controlled in our proposed
method. The discrete Gaussian function G(n) is a sampled version of the continuous
function Gc(x), thus
Gc(x) = (1/(σ√(2π))) exp(−x²/(2σ²)),   G(n) = Σ_{n=−N}^{N} Gc(n) δ(x − n) (4.3.3)
where 2N + 1 is the length of the Gaussian filter G(n) and δ is the impulse function.
It can be shown that for every discrete Gaussian filter with central maximum value A and width σ, the length L of the filter such that the end points are εA is given by

L = 1 + 2σ√(2 ln(1/ε)) (4.3.4)

where in our algorithms we use ε = 10⁻⁴ (0.01% of A).
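Eqn.(4.3.4) can be evaluated directly; rounding up to the next odd integer, so that the filter has a centre tap, is an implementation choice of this sketch:

```python
import math

def gaussian_filter_length(sigma, eps=1e-4):
    """Filter length L = 1 + 2*sigma*sqrt(2*ln(1/eps)) from Eqn.(4.3.4),
    rounded up to the next odd integer so the filter has a centre tap."""
    L = 1 + 2 * sigma * math.sqrt(2 * math.log(1 / eps))
    n = math.ceil(L)
    return n if n % 2 == 1 else n + 1

print(gaussian_filter_length(6))   # σ = 6 as used in section 4.5 → 53
```

So the σ = 6 detector used later in this chapter implies a filter of roughly 53 taps at ε = 10⁻⁴.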
Application of thresholds
Application of thresholds has been discussed in section 2.8.1 in the context of region
based segmentation. It has a slightly different application in the field of edge detection
though the concept is fundamentally similar. Canny suggested the ratio between the
upper and lower threshold be 2:1 or 3:1. In our algorithms we use a ratio of 2.5:1. The
question then becomes of what value to select for the upper threshold to begin with.
In general, pixels below a given threshold are not considered as edge pixels. Thus,
depending on the selection of the thresholding parameter Θ, the resulting edge pixels
in an edge map may be too many or too few resulting in false edges or missed edges,
respectively. Therefore, in order to determine the value of Θ automatically, a target ν
must be set for a given set of images.
This target varies from image to image. For example, images used for analysis of large object shape outlines in scene analysis require fewer edges than particulate textures. Thus the threshold is set at the point in the histogram where the gray level is ν percentage
points below the maximum gray level. In our applications we use a ν of 0.3 so that
all pixels with gray level values within 30% of the maximum pixel in the edge map,
are considered edge pixels. This will fix the threshold Θ (which is also the upper
threshold for dual threshold cases). The lower threshold is a constant fraction of the
upper threshold, and any pixels between the lower and upper thresholds are considered edge pixels if they are connected to pixels above the upper threshold. Methods of selecting thresholds using
histograms were originally developed by Otsu [60], and dual thresholds were applied in
the Canny detector [97].
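A minimal sketch of this threshold rule, assuming the maximum gradient magnitude of the edge map is known (the function name and values are hypothetical):

```python
def canny_thresholds(max_magnitude, nu=0.3, rho=0.4):
    """Upper threshold Θ: nu (here 30%) below the maximum gradient
    magnitude; lower threshold ρΘ: the fixed fraction rho of Θ,
    giving the 2.5:1 ratio used in this chapter."""
    upper = (1 - nu) * max_magnitude
    return upper, rho * upper

print(canny_thresholds(200.0))   # upper ≈ 140, lower ≈ 56
```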
4.3.2 Edge Linking
Edge detection should ideally yield a set of pixels lying only on edges, but in practice
this is not always the case due to noise or breaks in edges. The type of discontinuities
encountered from sediment images are as shown in Figs.4.2. Edge linking is an impor-
tant step for joining broken edge points. We use a method of edge linking described
in [64] where it was applied to high resolution DEM (Digital Elevation Models). The
connected regions are first formed by morphologically dilating the edge image with a
circular mask of a known radius followed by skeletonization. The dilation operator
[42, 59] is applied to the edge profiles in order to achieve connectivity between loose
edge pixels. In Eqn.(4.3.2) dilation is denoted by the ⊕ operator with a structuring
element W. The size of the structuring element must be large enough to close the majority of regions, while not being so large as to remove the smallest regions. This requires a priori knowledge of the dataset under investigation. For the experiment with sediments (Fig.4.3) we use a circular disc of radius 11 pixels for W, whose diameter is about the smallest grain size (180µm at 127 pixels/mm = 22.86 pixels). The S(.) operator denotes skeletonization. The technique is illustrated in Fig.4.4.
Figure 4.4: (a) Pavement sample (b) Edge profile of the sample (c) The dilation of the edge profile (d) Skeletonization (size: 128 × 128 pixels). The skeletonized image is used for determining average grain size using pixel run statistics from lineal intercepts [5].
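The choice of the structuring-element radius described above reduces to a small calculation; a sketch, with a hypothetical function name:

```python
def dilation_disc_radius(smallest_grain_mm, px_per_mm):
    """Radius (in pixels) of the circular structuring element W: about
    half the smallest grain diameter, so dilation closes broken edges
    without erasing the smallest grains."""
    diameter_px = smallest_grain_mm * px_per_mm
    return round(diameter_px / 2)

# 180 µm grains imaged at 127 pixels/mm → 22.86 px diameter → radius 11
print(dilation_disc_radius(0.180, 127))   # → 11
```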
4.3.3 Pixel Run Statistics
The edge maps so obtained, as demonstrated in Fig.4.4d, are used for calculating the
aggregate size distribution. The method used to compute this distribution is similar to
the lineal intercept method suggested in [5] for determining chord lengths in grains.
Let Π(n) be a probability distribution of pixel counts n ∈ N between boundaries in V(x, y), obtained by measuring the chord length between successive edges. We seek to obtain the average pixel count between successive edges by taking horizontal and vertical scans. This is demonstrated in Figs.4.5 and 4.6. Thus Π(n) is a distribution of particle (and gap) widths, and therefore:
Σ_{n=1}^{max(x,y)} Π(n) = 1 (4.3.5)
Let Πm(n) be the run-lengths distribution of the sediment image whose known average
particle size after conversion into pixel units is m pixels, and Zm, the estimate of the
mean of the particle size as obtained by the proposed method. Thus
Figure 4.5: A sediment image with average grain size of 1mm (left), and its edge map (right). The dotted lines (row 52, row 315, column 102) are used in Fig.4.6 to demonstrate run-lengths.
Figure 4.6: Pixel run-lengths from the scan lines shown in Fig.4.5 (left), and the resulting run-lengths distribution for the entire image (right).
Zm = Σ_{n=1}^{max(x,y)} n Πm(n) (4.3.6)
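The pixel-run computation of Eqns.(4.3.5) and (4.3.6) can be sketched in pure Python; discarding single-pixel runs (which arise when a scan line travels along an edge) is an assumption of this sketch rather than part of the thesis method:

```python
def mean_run_length(edge_map, min_run=2):
    """Mean chord length Z = Σ n·Π(n) from horizontal and vertical pixel
    runs between successive edge pixels in a binary edge map (a list of
    rows of 0/1).  Runs shorter than `min_run` are discarded — an
    assumption of this sketch to suppress runs along an edge line."""
    rows = [list(r) for r in edge_map]
    cols = [list(c) for c in zip(*edge_map)]
    runs = []
    for line in rows + cols:
        edges = [i for i, v in enumerate(line) if v]        # edge positions
        runs += [b - a for a, b in zip(edges, edges[1:]) if b - a >= min_run]
    return sum(runs) / len(runs) if runs else 0.0           # Z = Σ n·Π(n)

# Synthetic edge map: vertical edge lines every 10 pixels → mean run of 10
edge_map = [[1 if x % 10 == 0 else 0 for x in range(41)] for _ in range(20)]
print(mean_run_length(edge_map))   # → 10.0
```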
4.3.4 Calibration
The estimated value of Zm in pixels is then used to label the actual size m, also expressed in pixels. In general there will be an offset and possibly a proportionality factor between Zm and m for each known grain size, and the relationship may be non-linear beyond a linearly related range. The determination of this relationship is an important
step of the calibration stage and is used to determine an estimate of the actual sizes in
unseen images.
Therefore if the average grain size of sediments in an image is sought, the value of the
obtained Zm is translated into the actual size by using the calibrated values and max-
imum likelihoods. The accuracy therefore depends on the number of samples used for
calibration. The more samples used in calibration, the more reliable the estimates resulting from the association of calculated Zm and actual size m. In general, there
are two approaches for gathering the ground truth data for conducting the calibration
step, as follows:
(a) Gather size categories of particles using techniques such as mechanical sieving as
shown in Fig.4.3.
(b) Gather more accurate size categories with finer size intervals. This requires spe-
cialised equipment such as laser profilometers used for estimating road surface
macrotexture from particulate fluctuations of the road surface.
The first type of calibration data (a) results in coarse categorisation or bins, and is useful for classification of particulate textures by size. The second type (b) results in a finer distribution of particles and is useful for estimation of particle size.
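A minimal calibration sketch, assuming a linear relation m ≈ aZ + b between the sieved size and the EDPC estimate (function names and numbers are hypothetical):

```python
def fit_calibration(z_estimates, true_sizes):
    """Least-squares fit of true size m against EDPC estimate Z: m ≈ a*Z + b."""
    n = len(z_estimates)
    mz = sum(z_estimates) / n
    mm = sum(true_sizes) / n
    a = (sum((z - mz) * (m - mm) for z, m in zip(z_estimates, true_sizes))
         / sum((z - mz) ** 2 for z in z_estimates))
    return a, mm - a * mz                      # slope, offset

def classify(z, a, b, classes):
    """Map an unseen estimate Z to the nearest calibrated size class."""
    m = a * z + b
    return min(classes, key=lambda c: abs(c - m))

# Hypothetical calibration pairs (Z in pixels, m in µm)
a, b = fit_calibration([23, 27, 32], [180, 210, 250])
print(classify(26.5, a, b, [180, 210, 250, 300]))   # → 210
```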
The dataset used in developing the EDPC technique consists of the 12 sediment sizes shown in Fig.4.3, and the accuracy can therefore also be tested using classification rate. Classification by granulometry has been applied in [98]. Preliminary results and discussions
of applying the technique both for classification and estimation of grain size are dis-
cussed in section 4.5.
4.4 Sources of Error
Ideally, in the absence of gaps and occlusions, Π(n) is a distribution from which the
average particle size is easily computed. However, the presence of gaps, occlusions and
the dimension of the image in relation to the particles, affect the distribution.
4.4.1 Effects of Gaps and occlusions
The images in Fig.4.7(a-d), all of dimension 512×512 pixels, are of sediments of various
sizes showing both particles and gaps that would affect the computation of grain size.
Ideally the particle size distribution reflects the true size when there are fewer gaps.
This has been illustrated in section 2.3.2 and Fig.2.2. It is difficult to separate the
gap pixels from the particle pixels for images such as shown in Fig.4.7 using region-
based segmentation. This is in part owing to the different colours of particles leading to
unclear demarcation between particle and background, and in other cases, the similarity
of colours leading to the merging of distinct particles into a single blob. One way to
address this problem is to isolate as best as possible the boundaries of each particle
and measure the estimated average distance between boundaries. The method of edge detection is effective if some a priori information regarding the size of gaps is known, or if the mean size obtained from an image's distribution is calibrated
to the known sieved size. Fig.2.7 also shows that the autocorrelation technique can
lead to misleading results when aggregates in an image of the same size but different
spacing are queried for average size.
Figure 4.7: Sand particles of sizes: (a) 180 microns, (b) 600 microns, (c) 1190 microns and (d) illustration of particle widths (P) and gaps (G) in a 1400 microns sand image.
4.4.2 Boundary Effects
For a particulate image to result in a reliable particle size distribution, the dimension
of the image should be large enough in comparison to the dimension of the particles
to contain a sufficient number of particles. Inadequate image dimensions may result
in fewer particles or fractions of particles and therefore the pixel runs are less reliable
for computing the mean particle size. An experiment to characterise the relationship
between the dimensions of a square image N and average diameter of particles c, where
(100 ≤ N ≤ 2300) was conducted using synthetic images similar to those shown in Fig.2.6a, but with particles of diameter c = 100 pixels. The dimension of the square
image was varied from 100 to 2300 in steps of 50 and therefore each image contained
progressively more particles. In each case the mean size was calculated (including gaps).
The error in computing the true average size of the particles is plotted in Fig.4.8 as a
function of N/c. For example, a ratio N/c greater than or equal to 4 guarantees an error of less than 25%. Therefore as a rule of thumb, the image dimension N and the
particle diameter c are related by:
N/c ≥ 4 (4.4.1)
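Applied to the sediment data, the rule of thumb gives the minimum usable image dimension; a sketch with a hypothetical function name:

```python
def min_image_dim(grain_mm, px_per_mm, ratio=4):
    """Smallest square-image side N satisfying N/c >= ratio, where
    c = grain_mm * px_per_mm is the grain diameter in pixels."""
    return grain_mm * px_per_mm * ratio

# Largest class, 1400 µm at 127 px/mm: c ≈ 177.8 px, so N must be at
# least ~711 px, which the 768 × 768 images of section 4.5 satisfy.
print(min_image_dim(1.4, 127))
```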
Figure 4.8: Errors in measuring the average diameter of aggregates in an image when the dimensions of the image N vary in relation to the average aggregate size c.
4.5 Preliminary Results
Fig.4.3 shows the dataset used for computing the average particle size. The resolution
of each image is 127 pixels per mm, and therefore the average particles size in each
image can be expressed in pixels. The method developed in section 4.3 was applied
to the images in Fig.4.3. The Canny detector parameters were adaptive thresholding
(section 4.3.1) and σ = 6, which was empirically found to be suitable for a small subset
of the range of grain sizes considered.
120 images from each size class in Fig.4.3 were used (1440 in total). Each image was of size 768 × 768, the smallest dimension that satisfies the boundary condition in Eqn.4.4.1. However, estimation errors are still larger for the larger grain sizes. Of these, 50 images per class were selected for calibration; the remaining were used for classification. The method was compared with the autocorrelation method
(see section 2.6.2 and Fig.2.7). The autocorrelation method was used in [45] for grain
size measurement in sediments. For consistency, the same data for calibrating and
testing were applied to both the autocorrelation method and the proposed method.
Tables 4.1 and 4.2 show the accuracy rates of the autocorrelation method and the
proposed method respectively. The values are read row-wise (and the values in a row
add to 100%). For instance, the autocorrelation method mis-classifies a 250 sample as a 210 sample 1.45% of the time, whereas the EDPC method mis-classifies a 250 sample as a 300 sample 1.27% of the time.
Size(µm)
180 210 250 300 355 425 600 710 850 1000 1190 1400
180 100 0 0 0 0 0 0 0 0 0 0 0
210 0 97.94 2.06 0 0 0 0 0 0 0 0 0
250 0 1.45 98.55 0 0 0 0 0 0 0 0 0
300 0 0 0 99.2 0.8 0 0 0 0 0 0 0
355 0 0 0 0.34 98.28 1.38 0 0 0 0 0 0
425 0 0 0 0 3.11 96.89 0 0 0 0 0 0
600 0 0 0 0 0 0 95.24 4.76 0 0 0 0
710 0 0 0 0 0 0 7.79 88.12 3.81 0.28 0 0
850 0 0 0 0 0 0 0 1.68 69.56 28.68 0.08 0
1000 0 0 0 0 0 0 0 0 33.55 55.94 10.51 0
1190 0 0 0 0 0 0 0 0 0.47 10.02 64.4 25.11
1400 0 0 0 0 0 0 0 0 0 0.05 24.86 75.09
Table 4.1: Confusion matrix for autocorrelation method showing accuracy in percentage
4.5.1 Consistency of the EDPC estimator
The box plot in Fig.4.9 shows the statistical properties of the EDPC particle size estimator for a fixed N=512. Standard deviations are within 2% of the sizes when the images are of size 512 × 512 pixels. For larger images the deviation could be reduced to within 1.2% as shown in table 4.3. The standard deviation of the estimate is seen to increase with particle size (Fig.4.9 and table 4.3). Table 4.3 shows the statistics of the EDPC estimator for the sediment classes for varying image dimension N (assumed square), giving the estimated mean and the standard deviation of that estimate.
Size(µm)
180 210 250 300 355 425 600 710 850 1000 1190 1400
180 100 0 0 0 0 0 0 0 0 0 0 0
210 0 100 0 0 0 0 0 0 0 0 0 0
250 0 0 98.73 1.27 0 0 0 0 0 0 0 0
300 0 0 4.8 95.2 0 0 0 0 0 0 0 0
355 0 0 0 0.08 92.19 7.73 0 0 0 0 0 0
425 0 0 0 0 5.02 94.98 0 0 0 0 0 0
600 0 0 0 0 0 0 95.97 4.03 0 0 0 0
710 0 0 0 0 0 0 6.45 93.55 0 0 0 0
850 0 0 0 0 0 0 0 0.03 89.11 10.86 0 0
1000 0 0 0 0 0 0 0 0 17.08 78.79 0.07 4.06
1190 0 0 0 0 0 0 0 0 0 0 53.84 46.16
1400 0 0 0 0 0 0 0 0 0.01 14.01 26.6 59.38
Table 4.2: Confusion matrix for EDPC method showing accuracy in percentage
The ratio N/c for the 12 classes of sediment images is given for N=512, 1000 and 1500 (with a resolution of 127 pixels per mm for each image). There is evidence that for large N there is a reduction in the variance of the mean for the EDPC estimator, which makes it a consistent estimator. There is a significant reduction in variance between N=512 and N=1000, and only a fluctuating variance between N=1000 and N=1500.
There are two fundamental reasons for the reduced precision (larger variance) with
increasing particle size. Firstly the manual sieves which were used to produce the sedi-
ments in their different sizes, are less precise for larger particle sizes. There is apparent
reduction in discrimination ability in sieves with increased particle size (The phi scale,
upon which calibrated sieves are based, is an exponential scale and larger size classes
have a larger separation than smaller size classes. These result in a broader distribu-
tion of sand being trapped in a larger sieve and progressively narrower distributions
are trapped in smaller sieves). The other reason which was explained in section 4.4.2
and depicted in Fig.4.8 is the boundary effect. This is demonstrated by the trends of standard deviations with increased N in table 4.3, which imply that further increases in N/c would yield only minor improvements in the error.
There is also a consistent and significant reduction in the variance of the estimate as the number of averaged images is increased. To demonstrate this, 20 images from each class were used for calibration and, from the remaining 100, subsets were chosen in increasing sizes from n=10 to n=90 in steps of 20. The experiment for each n was run 1000 times (i.e., 1000 random subsets of size n were drawn and their means averaged) and the resulting average values for the mean and standard deviation recorded. Table 4.4 shows the progression of the mean and standard deviation as a function of the number of images averaged.
Figure 4.9: Estimation of particle sizes using EDPC for the twelve size categories
                 N=512                       N=1000                      N=1500
Class    N/c   Est. mean   std. dev    N/c   Est. mean   std. dev    N/c   Est. mean   std. dev
180 22.4 179.31 1.91 43.74 179.91 1.16 65.62 180.82 0.87
210 19.2 208.32 2.45 37.5 209.4 1.24 56.24 210.4 1.56
250 16.13 248.37 4.42 31.5 249.21 2.28 47.24 251.94 2.93
300 13.44 301.43 4.85 26.25 300.17 2.67 39.37 300.64 2.95
355 11.36 355.41 6.01 22.18 353.77 3.33 33.27 354.59 4.28
425 9.49 422.06 5.87 18.53 424.39 4.07 27.79 424.91 3.65
600 6.72 602.55 10.79 13.12 599.07 5.86 19.69 599.53 6.88
710 5.68 710.92 9.89 11.09 711.73 5.54 16.64 717.46 4.92
850 4.74 849.16 11.89 9.26 849.69 6.11 13.9 845.5 6.18
1000 4.03 999.6 14.16 7.87 1000.8 7.97 11.81 998.23 5.8
1190 3.39 1194.67 16.58 6.62 1185.66 7.32 9.93 1191.41 9.56
1400 2.88 1396.55 19.94 5.62 1400.16 10.41 8.44 1404.51 7.36
Table 4.3: Image statistics for three image dimensions N.
n=10 n=30 n=50 n=70 n=90
Class mean std. dev mean std. dev mean std. dev mean std. dev mean std. dev
180 180.03 2.99 180.03 1.53 180.03 1 180.03 0.66 180.03 0.34
210 210.47 3.85 210.48 1.97 210.48 1.28 210.48 0.84 210.48 0.43
250 249.96 5.79 249.99 2.96 249.97 1.93 249.97 1.26 249.97 0.64
300 299.9 6.82 299.93 3.48 299.93 2.27 299.93 1.49 299.92 0.76
355 356.61 9.53 356.55 4.87 356.57 3.18 356.56 2.08 356.54 1.06
425 427.08 10.81 427.15 5.51 427.12 3.6 427.13 2.36 427.12 1.2
600 599.07 15.62 599.06 7.95 599.05 5.2 599.06 3.4 599.07 1.74
710 709.32 16.42 709.37 8.35 709.34 5.48 709.32 3.57 709.34 1.82
850 852.09 18.72 852.06 9.57 852.1 6.26 852.11 4.09 852.08 2.08
1000 1002.93 22.57 1002.91 11.49 1002.9 7.51 1002.89 4.89 1002.91 2.5
1190 1193.77 24.05 1193.72 12.22 1193.77 8.01 1193.79 5.25 1193.78 2.66
1400 1400.41 27.2 1400.41 13.87 1400.35 9.07 1400.38 5.96 1400.37 3.03
Table 4.4: Mean and variance of particle size as a function of the number of averaged independent samples (image dimension is fixed at N=512)
Another advantage of using the EDPC technique over the autocorrelation method is for
the analysis of mixtures of sizes. For example the images shown in Fig.2.10 which have
two distinct sizes, are hard to characterise by the single-slope autocorrelation method,
but are easily discriminated using the developed technique as shown in Fig.4.10. This
advantage becomes all the more significant with natural images. A comparison of both techniques on mixtures of sediment particles is shown in Fig.4.11.
The value of σ influences the computation of grain size from images. This is discussed
in section 4.6.
Figure 4.10: Pixel run-length distributions for the images in Fig.2.10 (10%, 50% and 90% large particles)
Figure 4.11: (a) A sediment image showing mixtures (250µm and 850µm) at 127 pixels/mm, (b) its autocorrelation sequence and (c) the pixel run-length distribution from its edge map.
4.6 Adaptive techniques for edge detection
The width of the Gaussian filter σ, used for smoothing and also in the Canny detector kernel, is an important parameter that influences the edge map of the different sizes of
particles, and ultimately their size estimation, ranking or classification.
The Gaussian width σ is one of two variables in the Canny detector. The other variable, the threshold Θ, determines which pixels become edge candidates and which ones remain background. Threshold selection is done using the technique
of hysteresis. Adaptive thresholding using information from the image histogram is a
technique that already exists [60, 97] and shall be used in our Canny detector implementation. The purpose here is to develop an adaptive σ for granular images using information from the images. This is done by first studying the results of using several σ values on granular images of differing particle sizes and observing their performance.
The concept of using several Gaussian widths for individual images was first examined by Rosenfeld [99] and later formalised in a scale-space framework by Witkin [100],
who reaffirmed that edge maps resulting from various filter scales could be used to syn-
thesize the final edge map using some pre-defined rules. The probing of these concepts
led researchers to the study of adaptive selection of σ.
In [101] a technique was developed to determine the optimal σ of a Gaussian function
for each pixel location in an image by minimising an energy function. The algorithm
showed improved performance compared to other non-adaptive techniques but had the
disadvantage of being computationally expensive and also, as pointed out in [102], of
reduced performance in detecting straight lines in vertical and horizontal directions.
The ability to capture horizontal and vertical edges is especially important for the tech-
nique developed here as demonstrated in Fig.4.6 for vertical and horizontal pixel runs.
In [103, 104] adaptive filters tuned to the local signal and noise variance were developed.
The methods generate different σ for different regions within the image depending on
the local image and noise statistics. These approaches may be suitable for granular
images with mixtures of sizes. However, for granulometric applications on images with
uniform sizes as those in Fig.4.3 in our experiment, a single σ for the entire image will
suffice. The method developed here is also based on adapting a filter width to the signal
and noise statistics of granular images with known average particle sizes. A Gaussian
filter that captures a pre-determined portion of the overall image energy is designed
in the frequency domain and then fed back to the Canny detector in space domain
by utilising the shape preservation characteristic of the Gaussian filter under Fourier
transformation. Therefore a single σ is developed for each image. A comprehensive
review regarding the Gaussian function in image processing and in the Canny detector
can be found in [105].
In general, an image-based grain size estimator works by extracting the grain dimensions in pixels and then relating them to the traditional sieved (or caliper-measured) estimate, using some internal calibration. Here we examine the EDPC grain size estimator discussed in 4.3, which utilises boundary-based segmentation with the Canny detector, and establish that the use of an adaptive σ results in the best grain size estimator when compared to selected fixed values of σ. Different fixed values of σ are used to investigate whether they can discriminate between the twelve different sediments of known ground truth sizes, ranging from 180µm to 1.4mm, shown in Fig.4.3, and their performance is compared with that of an adaptively selected σ. The level of discrimination determines whether a given σ can be used for calibrating the procedure. To that
end it is necessary to examine the variation of σ and its effects on grain size. All other
parameters discussed in 4.3 are either constant (W and S) or adaptive (Θ).
4.6.1 Problem Formulation
Let Zσ be a set of sizes extracted from the edge map E resulting from I through the
Canny operator with Gaussian width σ as described in Eqn.(4.3.1) to (4.3.6). Zσ is
obtained by taking vertical and horizontal chord lengths of the particles in E using
pixel runs (section 4.3.3). Fig.4.13 shows an example of two different sizes of sediment
images and their edge maps, from which the mean grain size is obtained as outlined in section 4.3. We define Ψλ(Zσ) to mean the subset of Zσ with sizes greater than λ. The following axioms pertaining to Ψλ(Zσ) were discussed in section 2.7.1, but are presented here for context.
1. Anti-extensivity: Ψλ(Zσ) ⊂ Zσ

2. Increase: Y ⊂ Zσ ⇒ Ψλ(Y) ⊂ Ψλ(Zσ)

3. Importance of the stronger sieve:

Ψλ[Ψµ(Zσ)] = Ψµ[Ψλ(Zσ)] = Ψsup(λ,µ)(Zσ) (4.6.1)
On the basis of the above axioms, the grain size distribution of Zσ is given by the cumulative distribution

FZσ(λ) = 1 − A(Ψλ(Zσ)) / A(Ψ0(Zσ)) (4.6.2)
where A is area (for continuous set), or discrete count (for discrete set). The density
function fZσ(λ) is:
fZσ(λ) = − d
dλ
A(Ψλ(Zσ))
A(Ψ0(Zσ))(4.6.3)
The mean grain size pσ is therefore given by:

pσ = ∫₀^∞ λ fZσ(λ) dλ     (4.6.4)
The average grain size thus obtained is a function of the Gaussian parameter σ, used
in the Canny detector.
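As a concrete illustration, the discrete form of Eqns (4.6.2) to (4.6.4) can be sketched in a few lines of NumPy; the function name and the reduction to a finite sum are ours, not part of the thesis.

```python
import numpy as np

def mean_grain_size(sizes):
    """Discrete sketch of Eqns (4.6.2)-(4.6.4) for a chord-length set Z_sigma.

    Psi_lambda(Z) keeps the sizes greater than lambda, and A(.) is a
    discrete count, so F(lambda) is the fraction of chords not exceeding
    lambda, and the mean is the expectation of the implied density.
    """
    z = np.asarray(sizes, dtype=float)
    lam = np.sort(np.unique(z))
    # A(Psi_lambda(Z)) / A(Psi_0(Z)) for each candidate lambda
    survival = np.array([(z > l).sum() for l in lam]) / z.size
    F = 1.0 - survival                       # Eqn (4.6.2), discrete form
    f = np.diff(np.concatenate(([0.0], F)))  # probability mass at each lambda
    return float((lam * f).sum())            # Eqn (4.6.4) as a finite sum
```

For a discrete set this expectation collapses to the sample mean of the chord lengths, which is why an implementation can simply average the pixel runs.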
4.6.2 Test Criteria
Eqns.(4.3.1) to (4.6.4) describe the process of obtaining the mean grain size pσ from an image I(x, y). For notational convenience let us denote the process described by Eqns.(4.3.1) to (4.6.4) by the mapping:

Oσ(Id) ↦ (pσ)d     (4.6.5)

where Id is the grey level image containing sand particles whose known average diameter is d, and (pσ)d its estimated mean grain size obtained from Oσ. We define two performance measures for image-based grain size estimators Oσ:

1. The estimator shall rank the images in the set by size in accordance with their true values as obtained by sieving:
d1 > d2 ⇒ (pσ)d1 > (pσ)d2 (4.6.6)
The condition described by Eqn.4.6.6 could be tested using Spearman’s rank
correlation as follows:
Let Πσ be the set of estimated mean sizes obtained from the n images as a result of using σ:

Πσ = {(pσ)180, (pσ)210, ..., (pσ)1400}     (4.6.7)

Let Λ(Πσ) be the ranking of Πσ in ascending order, with Λi(Πσ) its i-th element. The optimal Oσ is that which maximizes κ, given by:

κ(σ) = 1 − 6∑_{i=1}^{n} [i − Λi(Πσ)]² / [n(n² − 1)]     (4.6.8)

The second performance measure is defined as follows:
2. The estimator shall maximally discriminate between images in the set that are
adjacent in sizes.
This means that for the set shown in Fig.4.3, the operator shall discriminate maximally
between, say, (pσ)180 and (pσ)210 or (pσ)1190 and (pσ)1400. In addition, owing to the relative difficulty of discriminating smaller grain sizes compared with larger ones, the discrimination of smaller diameters should be given more weight. In [29] the phi-scale (or φ-scale) was introduced for this same purpose of favouring discrimination of smaller particles. The φ-scale is logarithmic and is related to d (in µm) as follows:
φn = −log2(dn/1000) (4.6.9)
Thus 1 mm corresponds to 0 on the φ-scale and diameters larger than 1 mm have
negative values whereas diameters in the µm range are positive. Thus for n different
sizes ranked in ascending order from φ1 to φn, the average discrimination power D of
the estimator is given by:
D = (1/(n − 1)) ∑_{i=0}^{n−2} [(pσ)φ(n−i) − (pσ)φ(n−i−1)] / |φ(n−i) − φ(n−i−1)|     (4.6.10)
Results for grain size estimators, using the performance measures in Eqn.4.6.6 and
4.6.10, were obtained for values of σ from 1 to 20 and compared with results from the
adaptive technique, which is developed in the following section.
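The two performance measures can be sketched as follows; the helper names are ours, and Λi(Πσ) here denotes the i-th element of the ranking in Eqn (4.6.8).

```python
import numpy as np

def phi_scale(d_um):
    """Eqn (4.6.9): phi = -log2(d/1000), with d in micrometres."""
    return -np.log2(np.asarray(d_um, dtype=float) / 1000.0)

def rank_correlation(estimates):
    """Spearman kappa, Eqn (4.6.8), against the ideal ranking 1..n.

    `estimates` holds the mean sizes (p_sigma)_d ordered by increasing
    true sieve size, so a perfect estimator is already ascending.
    """
    n = len(estimates)
    ranks = np.argsort(np.argsort(estimates)) + 1   # Lambda(Pi_sigma)
    ideal = np.arange(1, n + 1)
    return 1.0 - 6.0 * ((ideal - ranks) ** 2).sum() / (n * (n * n - 1))

def discrimination(estimates, d_um):
    """Average discrimination power D, Eqn (4.6.10), on the phi-scale."""
    p = np.asarray(estimates, dtype=float)
    # |phi differences| shrink between large sizes, weighting small sizes more
    gaps = np.abs(np.diff(phi_scale(d_um)))
    return float(np.mean(np.diff(p) / gaps))
```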
4.7 Extracting σ from Image spectra
The frequency domain representation of the sediment images as depicted in Fig.4.12 forms the basis for the selection of the adaptive σ. For an image I(m,n) of size M×N, the Fourier transform and the accompanying energy (or variance) EI obtained from the centered transform are given by:

I(k,l) = ∑_{n=0}^{N−1} ∑_{m=0}^{M−1} I(m,n) e^{−j(ωk m + ωl n)}     (4.7.1)

EI = ∑_{l=−N/2}^{N/2−1} ∑_{k=−M/2}^{M/2−1} I²(k,l)     (4.7.2)
In general smaller grain sizes have a larger spectral spread than larger grain sizes (Fig.4.12). The intention is to design a Gaussian filter in the frequency domain that captures a fraction α : 0 ≤ α < 1 of the overall energy EI from all the images. The problem then becomes that of choosing u and v such that:

∑_{l=−v}^{v−1} ∑_{k=−u}^{u−1} I²(k,l) ≈ αEI     (4.7.3)
The lengths of the Gaussian filter in the frequency domain are U = 2u and V = 2v.
The respective Gaussian widths σk and σl are related to the filter lengths U and V by:

σk = (U − 1) / (2√(2 ln(1/ε))),   σl = (V − 1) / (2√(2 ln(1/ε)))     (4.7.4)
where ε is the ratio between the smallest and largest taps in the Gaussian filter (ε = 10⁻⁴ in our algorithms). The resulting discrete Gaussian filter in the frequency domain, G(k,l), is then converted to the spatial domain Gaussian filter g(m,n) using the inverse Fourier transform:

G(k,l) = A e^{−(ωk²/(2σk²) + ωl²/(2σl²))}     (4.7.5)

g(m,n) = ∑_{l=−N/2}^{N/2−1} ∑_{k=−M/2}^{M/2−1} G(k,l) e^{j(ωkm/M + ωln/N)} = B e^{−(m²/(2σm²) + n²/(2σn²))}     (4.7.6)
Figure 4.12: Log-magnitude Fourier spectra of some sieved sediments from Fig.4.3 (row 1: 180µm, 600µm; row 2: 1000µm, 1400µm). The axes are spatial frequency units in cycles/pixel.
where A and B are constants, and for a circular Gaussian kernel,

M = N,   σk = σl,   σm = σn,   σmσk = √(MN) / (2π)     (4.7.7)
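The selection of the adaptive σ can be sketched as below, assuming a square search window (u = v) grown until Eqn (4.7.3) is met; the function and parameter names are ours, not from the thesis.

```python
import numpy as np

def adaptive_sigma(image, alpha=0.9, eps=1e-4):
    """Sketch of Section 4.7: derive the Canny sigma from the image spectrum."""
    # Total spectral energy E_I, Eqn (4.7.2), from the centred transform
    spec = np.fft.fftshift(np.fft.fft2(image))
    energy = np.abs(spec) ** 2
    total = energy.sum()
    M, N = image.shape
    c0, c1 = M // 2, N // 2
    # Grow a centred square window until it captures alpha * E_I, Eqn (4.7.3)
    for u in range(1, min(c0, c1) + 1):
        if energy[c0 - u:c0 + u, c1 - u:c1 + u].sum() >= alpha * total:
            break
    U = 2 * u                                 # frequency-domain filter length
    sigma_k = (U - 1) / (2.0 * np.sqrt(2.0 * np.log(1.0 / eps)))  # Eqn (4.7.4)
    return np.sqrt(M * N) / (2.0 * np.pi * sigma_k)               # Eqn (4.7.7)
```

A flat (low-spread) spectrum concentrated near d.c. gives a small window and hence a large spatial σ, while a noise-like spectrum with wide spread gives a small σ, matching the behaviour described for fine versus coarse sediments.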
4.8 Data set
The images shown in Fig.4.3, of 12 different sediment sizes ranging from 180 µm to 1400 µm, were obtained with a 6-megapixel Nikon camera, with a zoom resulting in 127 pixels/mm. The same dataset was used in the results presented in section 4.5.
4.9 Results
The tabulated results in table 4.5 were obtained using 40 non-overlapping images of size 768×768 from each size class (480 in total), with the ranking performed 10,000 times by permuting the images and the average recorded for σ = 1 to 20. The results from the adaptive σ are also shown, with the α term in brackets used to control the Gaussian width in the frequency domain. Values of α ranging from 0.9 to 0.95 seem to produce the best discrimination for this dataset. The results shown in bold in the table are the best performances from both the adaptive and non-adaptive σ, and some of them were chosen for display in Fig.4.14, along with results for σ = 1, to give a visual appreciation of the superior performance when using adaptive σ. The adaptive technique spans a wider range of estimated sizes (in pixels), making it a better grain size estimator.
Figure 4.13: Edge maps for the 180µm sediment image (above) and 1400µm (below) used for grain size distribution computation
Figure 4.14: Performance of the grain size estimators (σ = 1, σ = 8, σ = 10 and adaptive σA with α = 0.9), plotting estimated size in pixels against true size on the φ-scale to show each estimator's ranking and discrimination capacity. Note that on the y-axis the lowest φ corresponds to the largest size and vice versa (see Eqn.(4.6.9))
Fig.4.15 shows examples of boundary detection algorithms applied to three particulate
images from the Brodatz album (see Fig.2.12). Different selections of σ from the Canny
detector result in different representations of the images. The last representation shows
adaptive σ applied to all images. This works well for the particulate images, except
D66 (in the middle), which is better represented using region-based segmentation owing
to its distinct object-background contrast.
4.10 Chapter Summary
The chapter commenced with a brief review of edge detection algorithms and their performance when applied to particulate images. A detailed mathematical presentation of the detectors is given in appendix B. The Canny detector was found to be the best of the tested detectors for particulate textures.
The main contribution of this chapter is the presentation of a technique based on the Canny detector and pixel runs for grain size estimation and classification by size. A comparison of the method with the autocorrelation method was presented. Potential sources of error in applying the developed technique have been identified and discussed, with a view to mitigating them. These include gaps and occlusions amongst aggregates, and boundary effects resulting from the size of the image relative to the aggregates. The consistency of the technique was tested on a set of sediment images with promising results.
The technique was further tuned by adapting the σ parameter of the Canny detector
to the image data being analysed. Results from adaptively selected σ parameter were
Figure 4.15: Boundary detection using the Canny edge detector for D48, D66 and D75 (see Fig.2.12) from the Brodatz Album. Top: σ = 1; upper middle: σ = 2; lower middle: σ = 5; bottom: adaptive σ
Width κ D Width κ D
σ = 1 0.928 5.042 σ = 16 0.966 15.478
σ = 2 0.909 4.500 σ = 17 0.961 14.936
σ = 3 0.920 5.433 σ = 18 0.960 14.991
σ = 4 0.948 6.814 σ = 18 0.960 15.042
σ = 5 0.959 8.292 σ = 19 0.960 15.181
σ = 6 0.964 9.862 σ = 20 0.957 14.896
σ = 7 0.969 11.341 σA(α = 0.875) 0.968 24.037
σ = 8 0.971 12.584 σA(α = 0.9) 0.974 23.808
σ = 9 0.969 13.429 σA(α = 0.915) 0.968 22.792
σ = 10 0.971 14.486 σA(α = 0.925) 0.970 20.776
σ = 11 0.967 14.703 σA(α = 0.9375) 0.971 19.226
σ = 12 0.966 15.410 σA(α = 0.95) 0.968 16.874
σ = 14 0.965 15.199 σA(α = 0.975) 0.954 10.047
σ = 15 0.965 15.251 σA(α = 0.99) 0.911 4.537
Table 4.5: Performance results of non-adaptive and adaptive σ, showing rank correlation κ and discrimination D, averaged over 10,000 experiments
sigma = SelectSigma(Image); % Adaptive selection of sigma
Theta = SelectTheta(Image,nu); % Adaptive selection of Theta
E = CannyEdge(Image, sigma, Theta); % Canny edge detection
D = Dilate(E,w); % Morphological dilation with structuring element w
S = Skeletonize(D, s); % Morphological skeletonization with structuring element s
P = MeanPixelCounts(S); % Mean pixel count represents grain size in pixels
Table 4.6: The EDPC Algorithm
compared with results from various σ ranging from 1 to 20. Two performance parameters were chosen, namely rank correlation and discrimination power. The adaptive selection of σ outperformed the rest of the manually chosen σ. The algorithm for the EDPC method, showing the main operations, is summarised in the code shown in table 4.6. The EDPC technique developed in this chapter is applied to a more diverse set of image data captured from road surfaces extending more than 5 km. This is the topic of chapter 5.
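The pipeline of Table 4.6 can be sketched using scikit-image equivalents of the named operations; the structuring-element size `w` and the run-length helper below are our assumptions, not values from the thesis.

```python
import numpy as np
from skimage.feature import canny
from skimage.morphology import binary_dilation, skeletonize

def mean_pixel_run(mask):
    """Mean horizontal spacing between edge pixels (pixel runs, section 4.3.3)."""
    runs = []
    for row in np.asarray(mask, dtype=bool):
        idx = np.flatnonzero(row)
        if idx.size > 1:
            runs.extend(np.diff(idx))
    return float(np.mean(runs)) if runs else 0.0

def edpc_grain_size(image, sigma, w=3):
    """Sketch of the EDPC algorithm (Table 4.6): grain size in pixels."""
    edges = canny(image, sigma=sigma)                       # E = CannyEdge(...)
    closed = binary_dilation(edges, np.ones((w, w), bool))  # D = Dilate(E, w)
    skel = skeletonize(closed)                              # S = Skeletonize(D, s)
    return mean_pixel_run(skel)                             # P = MeanPixelCounts(S)
```

The dilation/skeletonization pair closes small gaps in the Canny edge map before the pixel runs are measured; adding the vertical scan direction is a straightforward extension.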
Chapter 5

Application of Particulate Texture Analysis to Road Surfaces
5.1 Introduction
Road surface macro-texture is an important element of “skid resistance”, the term used to describe a road surface's contribution to the overall friction between the surface and a tyre. Investigation into the relationship between skid resistance and car accidents is thought to date back to as early as 1906 [106]. It is now established by transport and road authorities and practitioners that improved skid resistance reduces crash rates, particularly in wet weather conditions [6, 107, 108]. There are a number of methods employed to determine the skid resistance of road surfaces, and they fall into two broad categories, namely direct and indirect. Direct methods simulate the vehicle-surface interaction, whereas indirect methods here refer to the inference of skid resistance from micro-texture and macro-texture. First, we present a brief description of the methods and tools used in the direct methods, followed by a description of the indirect methods. In [109] an inventory of tools and methods used for road surface analysis is detailed.
5.1.1 Direct friction measurements
The direct method involves friction measuring devices which simulate a car on the road surface. Typically a predetermined vertical load is applied to a rubber slider or tyre forced to slide across the road surface, and the traction force is measured. The ratio between the traction force and the load determines the friction. This direct method is sensitive to other factors such as rainfall, and to vehicle parameters such as speed and tyre treads. In fact, for a full measurement of friction using the direct method, it is advisable to apply water jets to the wheel paths to simulate wet weather and also to vary the speed
of the simulator. Some of the instruments used for measuring friction directly include
the British pendulum, ASTM locked-wheel trailer, SCRIM and Griptester. Each of the
devices outputs a number that indicates the level of skid resistance in its own native
unit, so that it is not readily possible to convert between devices without harmonization
trials involving the devices. In [108, 109, 110] these devices are explained in detail.
5.1.2 Indirect methods - Surface texture depth measurement
Indirect measurement of skid resistance is obtained inferentially from the physical char-
acteristics of the road surface. Pavement surface characteristics have been classified by
the World Road Association (PIARC)[111] based on the fluctuations of the pavement
surfaces from a reference planar surface. These fluctuations, characterized by wave-
length λ in mm, occur as per the details in Fig.5.1. Surface friction is determined by microtexture (λ < 0.5mm) and, to a larger extent, by macrotexture (λ = 0.5 to 50mm).
In one of the most comprehensive studies on the subject [112], three different highways
in Australia were examined for association between macrotexture and crash rates. It
was confirmed in nearly all cases that there is increased risk of road crash with low
macrotexture, even in dry weather conditions. The study also investigated the macrotexture value in mm below which the risk becomes critical. On average across the three highways, the critical macrotexture was 0.5mm.
In wet weather, the influence of both microtexture and macrotexture on aquaplaning
(the loss of vehicle traction as a result of build-up of water between vehicle tyre and
road surface) is described empirically in [113]. The speed at which aquaplaning occurs
is directly proportional to tyre pressure and therefore at lower speeds, the microtexture
zone dominates wet and dry friction levels. At higher speeds macrotexture becomes
significant as it facilitates water drainage such that the adhesive component of friction
provided by the microtexture is maintained by staying above the water level. Macrotexture is also essential in developing hysteretic friction with tyre treads, a friction force described by a displacement-dependent hysteresis function arising from an imposed displacement trajectory.
A study on the relation between macrotexture (measured by SMTD) and surface friction (measured using SCRIM) was found to be inconclusive [112, 114], even though both macrotexture and surface friction were independently found to correlate with crash rates. The mismatch was attributed to other information that was not readily available in the correlation exercise but which would explain the relationship.
Both microtexture and macrotexture also depend on the type of material or asphalt
mix used [114, 115]. Microtexture levels are generally examined before road surfacing, by studying aggregate material properties such as surface polish, and therefore microtexture assessment is intended as a once-off test valid for the life of the road surface.
A tool used to examine the properties in aggregates, the Wehner-Schulze machine is
described in [116]. Macrotexture, on the other hand, can be measured easily from the
road surface in a number of ways. Currently, the most common methods of macro-
texture measurement are the SPTD obtained from the Sand Patch test (ASTM E965)
[117] and the SMTD obtained from the laser profilometer (ASTM E1845) [118, 119].
The use of imaging technology for the visual inspection of pavements is not common in comparison to other transport applications of imaging such as ANPR and CCTV, but it is a growing trend. Some road authorities have highly developed Pavement Management Systems (PMS) that include photographic images to aid in remote visual inspection [120], but these images are only suited to analysing the more visible pavement defects such as cracks and potholes, and are not of the quality that would be suited to automated characterisation of surface macrotexture. Both macrotexture and microtexture require a higher image quality than other transportation applications. Image resolution is an important factor. If p is the average number of pixels adequate to capture a wavelength λ of texture fluctuation, the resolution required is p/λ pixels/mm. Therefore for microtexture (where the average λ = 0.25mm) the resolution must be at least 4p pixels/mm. For macrotexture (0.5mm ≤ λ ≤ 50mm) a much lower resolution will suffice.
Consequently, microtexture analysis requires more image storage space and therefore more processing power for analysis. The validation of imaging methods for microtexture requires benchmarking equipment (e.g. the Wehner-Schulze machine) which can only be used in labs, thus imposing a controlled testing environment. In [35] high resolution images of samples with aggregates were used to determine microtexture using a method that involves variations in pixel intensity with the application of thresholds described in [121]. This resulted in a correlation of R2 = 0.985 with results obtained from the Wehner-Schulze machine. Thus micro-texture is determined from the aggregates prior to laying of the concrete and is relatively difficult to determine outright from an existing road surface, whereas macro-texture can be easily determined from road surfaces using a number of techniques including image processing. In what follows, only macrotexture is considered, and therefore all references to texture shall mean macro-texture.
A few image processing techniques have been suggested for characterising road surface macrotexture, and here these methods are explained briefly. Two methods of texture analysis, namely the autocorrelation method and the wavelet analysis method (section
Figure 5.1: Effects of texture wavelength on pavement/vehicle interactions (adapted from [6])
2.6.2), which were first applied to road surfaces in [122], are briefly presented, followed by the method proposed in section 4.3 for grain size determination, which uses the equivalence of coarseness and texture depth in particulate textures. The method is compared with the other imaging techniques and with the laser profilometer or SMTD (Sensor Measured Texture Depth) technique, using data obtained for the same roads.
5.2 State of the art
5.2.1 The Sand Patch Test
A known volume V of fine material (typically sand) is spread over a circular area on
the road surface using a spreading tool. The spreading is done so that the sand is level
with the tops of the aggregates. The method is prescribed by the ASTM E965 standard
[117]. The average diameter d of the resulting patch is measured from which the sand
patch texture depth (SPTD) is calculated as follows:
SPTD = 4V / (πd²)     (5.2.1)
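A worked example of Eqn (5.2.1), with illustrative rather than measured numbers:

```python
import math

def sand_patch_texture_depth(volume_mm3, diameter_mm):
    """Eqn (5.2.1): SPTD = 4V / (pi * d^2), with V in mm^3 and d in mm."""
    return 4.0 * volume_mm3 / (math.pi * diameter_mm ** 2)

# Illustrative numbers only: 45,000 mm^3 of sand spread to a patch of
# average diameter 260 mm gives a texture depth of roughly 0.85 mm.
```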
The method is low cost, simple to implement and reliable, but it is manual and therefore labour intensive, especially for longer roads where repetition is required several times at every segment. In order to obtain the texture depth of a segment of road, a few areas within the segment are chosen as representative. The reliability improves with the number of patch areas chosen for every road segment. The manual nature of the task and the precision required also demand that traffic control be implemented, thus adding to the cost of conducting the sand patch test. Owing to its reliability, however, the SPTD has become the baseline measure for the subsequent texture depth measures that were developed. In the report in [123], average texture depths of 0.8mm, and minimum depths of 0.5mm for individual cases, are deemed acceptable. In [124]
average texture depths of 0.74mm and minimum depths of up to 0.64mm have been
specified for state highways in New Zealand.
Therefore if a texture depth measurement method M is developed, its efficiency can be determined by the extent to which the method M correlates with SPTD (or its surrogate ETD (Estimated Texture Depth) [125], which is an empirically determined 'approximated' form of SPTD). The relation of any given method M that correlates with ETD with some correlation R2 shall in our deliberations be expressed as:
ETD = αM + β (5.2.2)
where α and β are constants that characterise the linear relation between the method
M and ETD. The values of α and β that result in the best R2 are chosen as the optimal
calibration parameters for the method M .
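Finding the optimal α and β of Eqn (5.2.2) is an ordinary least-squares fit; the sketch below uses our own naming, and any data fed to it here would be hypothetical.

```python
import numpy as np

def fit_calibration(method_values, etd_values):
    """Least-squares fit of ETD = alpha * M + beta, Eqn (5.2.2).

    Returns (alpha, beta, r_squared) for the given paired readings.
    """
    m = np.asarray(method_values, dtype=float)
    etd = np.asarray(etd_values, dtype=float)
    alpha, beta = np.polyfit(m, etd, 1)         # slope and intercept
    residuals = etd - (alpha * m + beta)
    r2 = 1.0 - (residuals ** 2).sum() / ((etd - etd.mean()) ** 2).sum()
    return float(alpha), float(beta), float(r2)
```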
5.2.2 Laser Profile Technique
The laser profilometer method was developed as an alternative to the SPTD, motivated by the need for an automated approach to texture depth measurement. The vehicle mounted laser profilometer [125] has demonstrated a sound cost-benefit ratio in comparison to the sand patch test. A focused laser beam directed towards the road surface results in scattered light, which is then detected with an array of light-sensitive cells. These detected points indicating depth are then analysed to obtain the Mean Profile
Depth (MPD). Stationary laser profilometer techniques are slow and require traffic con-
trol or complete lane closure. However, vehicle mounted laser profilers can operate at
highway speeds of up to 90 km/h. Details regarding the MPD measure obtained by the
profilometer can be found in [118, 119], where the relation between ETD and MPD is
established as per (5.2.2), with α = 0.79 and β = 0.23.
Obtaining the MPD from a laser profilometer involves sophisticated algorithms. As such, a simpler measure known as the Sensor Measured Texture Depth (SMTD) is widely used. To obtain the SMTD, the laser profilometer first takes a sequence of depth measurements from a segment of road, typically 100mm in length, from which a surface profile is obtained and fitted with a polynomial of best fit. The polynomial of best fit also removes the effects of vehicle bounce. The root mean square of the displacements of the detected points from the best-fit curve is then calculated to determine the SMTD. In [125]
it is demonstrated that SMTD strongly correlates with ETD (in mm) with R2 values
of up to 0.97 and with α = 0.82 and β = 0.12. The data used here are based on a
profilometer calibrated at α = 2.5 and β = 0.
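The SMTD computation described above can be sketched as follows; the polynomial order used here is an assumption, not a value given in the text.

```python
import numpy as np

def smtd(profile, order=2):
    """RMS displacement from a polynomial of best fit (vehicle bounce removed)."""
    y = np.asarray(profile, dtype=float)
    x = np.arange(y.size)
    trend = np.polyval(np.polyfit(x, y, order), x)  # polynomial of best fit
    return float(np.sqrt(np.mean((y - trend) ** 2)))
```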
82 Chapter 5. Application of Particulate Texture Analysis to Road Surfaces
5.2.3 The Circular Track Meter
The Circular Track Meter or Circular Texture Meter (CTM) [126] uses a laser to mea-
sure a profile of a circle 284mm in diameter. The profile is then divided into eight equal
portions. The CTM-derived MPD is reported as the average of all eight portions. The
relation between the CTM-derived MPD and ETD is as per (5.2.2), with α = 0.947 and
β = 0.069. An evaluation of the CTM method in comparison with the sand patch texture depth (SPTD), conducted in [127], resulted in a coefficient of determination R2 = 0.95 after the removal of several outliers.
5.3 Units of Texture Depth Measurement
Texture depth in the macro-texture range is measured in millimeters. However, with
the exception of the sand-patch test (SPTD) all the methods mentioned above are
dimensionless. For example, the units for the SMTD data are simply termed “SMTD
units of texture depth”. To convert each of the dimensionless methods, say SMTD
texture units, into mm requires transformations of the sort shown in Eqn.5.2.2, where
α and β are constants, M represents the dimensionless SMTD, and ETD represents SPTD in mm. In our comparison with imaging techniques, the SMTD data is used as a benchmark for our proposed method because it is closely correlated with the sand-patch test as per Eqn.5.2.2. The other reason is that SMTD is more widely available than the SPTD, owing to the proliferation of laser profilometer devices as the devices of choice for surface texture measurement. Fig.5.2 shows graphically the
relation between the SPTD and SMTD as obtained from a calibration site at Nudgee
Beach (see section 5.6.2). In this case, as per Eqn.5.2.2 the calibration values are
α = 2.5 and β = 0. Thus for subsequent references to SMTD values, the texture depth
in mm is simply 2.5 × SMTD.
5.3.1 Calibration
In [109] a number of devices for measuring road surface conditions are described, includ-
ing direct and indirect methods of skid resistance measurements. Calibration enables
conversions from one device to another. However, calibration requires that there be
an absolute reference measurement of skid resistance. Currently, there is no reference
device that itself has been calibrated against surfaces with known, stable, reproducible
friction properties. Therefore the best attempt is to harmonise devices of the same
type [128]. Harmonisation results in a device such that at any given time, the correct
result can only be estimated and that the best estimate for any particular type of mea-
surement device would be the average value given by all machines of that type. An
example provided by the authors in [128] is that of the SCRIM (Sideway-Force Coef-
ficient Routine Investigation Machine) device, where the best estimate would be the
average of all acceptable SCRIM machines in a given approved fleet, where acceptance
test are conducted on a regular basis.
Figure 5.2: SPTD (mm) and SMTD (dimensionless) against chainage (km) as obtained from a calibration site
The absence of a gold standard for measuring friction directly does not impact the indirect methods of surface texture depth measurement. Texture depth measurement devices rely on the SPTD as the working equivalent of a gold standard. In our experiment we do not have SPTD data for the entire road, and so we use the SMTD data, which is obtained from a device tested and calibrated to correlate with SPTD. Therefore, our approach is to examine whether imaging techniques measure as closely as practicable to the existing SMTD methods, establish detailed correlations between the imaging technique and the SMTD, and examine whether a consistent calibration factor can be obtained.
In section 5.2, values of α and β that relate different devices to SPTD are cited. These calibration parameters are not always consistent, even amongst devices of the same technology. This discrepancy in calibration has been identified for the SMTD values in [112], and it was therefore recommended to calibrate devices regularly, as deviations from normal operation are likely to be encountered with time.
5.4 Image-based methods
Image based methods employ cameras to extract texture depth from visual cues, which are either monocular or stereoscopic. Texture feature characterisation using monocular cues requires 2D still images, whereas stereoscopic techniques require multiple cameras. Both methods have been tested on road surfaces, with varying results.
5.4.1 Stereoscopic techniques
One of the earliest uses of images in the computation of road surface macrotexture involved stereo imaging [129]. The method involved measuring six parameters: height, width, angularity, distribution, harshness of projections above the matrix, and harshness of the matrix itself. A drawback of the method was that both acquisition and analysis were done manually, which was extremely time consuming, mainly owing to the construction of the device. In [130, 131] surface texture was measured using the response from a reconstructed 3D surface model of road samples. The reconstruction in both cases was done using custom made devices and the results were correlated with MPD. The results indicate correlation with MPD, even though details regarding the data used in the findings were absent [130] or very limited [131].
5.4.2 2D still Images
Unlike the sand patch and laser profilometer methods, which capture texture depth directly from the fluctuations of the road surface, still cameras capture 2D images that are prone to distortions owing to gray-level variations, shadowing, illuminance and viewing angle. Stereo imaging resolves some of these issues. However, they can also be addressed in the context of a single still image, where depth can still be inferred if these distortion factors are adequately addressed. For road surfaces, a greater texture depth implies a rougher surface. In the context of still images, surface texture roughness (which is generally quantified as coarseness) is the parameter that most closely describes texture depth. Texture coarseness from still images is defined and quantified in [2, 18, 20]. A spectral method for the determination of texture depth was first implemented in [132] and later developed in [133]. The method used the Fourier transforms of surface images to characterize texture energy in different bands. The energy values from each band were correlated with the sand-patch method, achieving a best correlation value of R = 0.65 (R2 = 0.42) with the Sand Patch Test. After removing 40% of outliers from the data and using a second order polynomial, R2 = 0.94 was achieved. One of the recommendations resulting from the study in [133] was the need to strengthen the correlation with the Sand Patch Test by significantly increasing the total size of the data used in the experiments. The dataset used in our experiments is described in 5.6.
In [122], the Fourier spectrum, autocorrelation and wavelet techniques were applied to road surface macrotexture. 2D coarseness is the single most important factor in using 2D image analysis techniques for macro-texture analysis, and therefore the application of these techniques was based on the assumption of equivalence between road surface macro-texture (depth) and 2D surface coarseness. The question, therefore, is one of finding the best algorithm that consistently maps 2D coarseness values onto 3D texture depth. In Fig.4.12 it was demonstrated that the Fourier spectrum of a
particulate image spreads in proportion to the fineness of the aggregates in the texture. This principle is applied here to road surfaces. The ability to measure texture coarseness and aggregate size from particulate images was shown for the autocorrelation function in Fig.2.7 and table 4.1 respectively. Wavelets have been used widely in texture analysis [24, 49]. The parameter used to characterise texture coarseness with wavelets is the energy captured in the decomposed images. A wavelet decomposition schematic is shown in Fig.A.3 of appendix A, and a decomposition of the road surface image shown in Fig.5.3b is displayed in Fig.5.6.
In the implementation of the proposed EDPC technique, as described in section 4.3, the assumption is that the average size of texels represented in 2D is proportional to the average texture depth. The use of texels as basic texture elements was proposed by Blostein and Ahuja [34], who showed that the correct measurement of texture gradients requires explicit identification of image texels, particularly when textures show three-dimensional relief. They used the fact that the variation in texel sizes along a texture gradient is related to the image orientation, and estimated shape from texture using this relationship. Here we use the reverse process: we know the texture gradient and orientation (represented in the camera angle) and we seek the texel size. Fig.5.3 shows pavement samples that could be described as fine and coarse, where the coarser texture (b) has more macro-texture depth than (a).
Fig.5.4 shows the magnitude spectra of the road surface images of Fig.5.3. Note that these are centered Fourier transforms, meaning the centre of the spectrum is the d.c. component, and all regions in the proximity of this value are the low frequency components. The highest frequency (0.5 cycles/pixel) is a consequence of the Nyquist criterion, which stipulates that a signal cannot have frequency components exceeding
half the sampling frequency. The frequency values along the x-axis and y-axis of Fig.5.4
are multiples of the sampling frequency.
The autocorrelation function drops off slowly when the texture is coarse, and rapidly for
fine textures. The method has been used for estimating sediment grain size in [45] with
promising results, and can potentially be used to determine the relative coarseness
of road surface texture. To examine the decay of the autocorrelation function
with lag we use the representation of Fig.5.5, which shows, for each of the
images in Fig.5.3, the autocorrelation values squared (to remove negative values).
The slope of each curve is simply an indicator of how fine or how coarse a texture is.
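A minimal sketch of this measurement, assuming NumPy: the autocorrelation of each row is computed via the FFT (Wiener-Khinchin), averaged over rows, normalised and squared, so that a coarse (smoothed) texture decays more slowly with lag than a fine one. The helper `smooth_rows` is purely illustrative.

```python
import numpy as np

def smooth_rows(image, width=5):
    """Illustrative helper: box-filter each row to synthesise a coarser texture."""
    k = np.ones(width) / width
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, image)

def acf_squared(image, max_lag=20):
    """Row-wise normalised autocorrelation, averaged over rows and squared
    (squaring removes the negative values, as in Fig.5.5)."""
    x = image - image.mean()
    n = x.shape[1]
    F = np.fft.fft(x, n=2 * n, axis=1)          # zero-pad to avoid wrap-around
    ac = np.fft.ifft(F * np.conj(F), axis=1).real[:, :n].mean(axis=0)
    return (ac / ac[0])[:max_lag + 1] ** 2      # unit value at lag 0

rng = np.random.default_rng(1)
fine = rng.standard_normal((64, 256))   # white noise: immediate decay
coarse = smooth_rows(fine)              # smoothed: slower decay at small lags
```

Fitting a line to the first few lags of the squared sequence then gives the slope used as the coarseness indicator.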
The edge profiles of the images in Fig.5.3 are shown in Fig.5.7. The average distance
between two adjacent edge pixels in the horizontal scan and vertical scan of the images
in Fig.5.7 shall be used as a measure to determine texture depth. The usage of pixel
run statistics to measure coarseness is discussed in section 4.3 and illustrated in Fig.4.6.
86 Chapter 5. Application of Particulate Texture Analysis to Road Surfaces
In this section, some of the factors that directly impact imaging techniques
are considered: aggregate shape, camera viewing angle, and illumination.
These factors are addressed experimentally wherever possible, and in order to
quantify the effect of each factor on the resulting image, a measure of
edge density κ is used. This is the percentage of edge pixels in the edge profile image;
the coarser an image, the smaller the κ resulting from its edge profile.
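κ itself is straightforward to compute from a binary edge profile; a minimal sketch, assuming NumPy (the function name is ours):

```python
import numpy as np

def edge_density(edge_map):
    """Edge density κ: the percentage of edge pixels in a binary edge
    profile image. Coarser surfaces yield fewer, sparser contours and
    therefore a smaller κ."""
    edge_map = np.asarray(edge_map, dtype=bool)
    return 100.0 * edge_map.sum() / edge_map.size

kappa = edge_density(np.eye(10))  # 10 edge pixels out of 100
```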
Figure 5.3: Fine (left) and coarse (right) pavement samples, where the coarse sample is considered to have more texture depth (size: 512 × 512 pixels)
Figure 5.4: Normalised log-magnitude (log base 10) Fourier spectra of the images in Fig.5.3 in corresponding order (size: 512 × 512 pixels). The axes are spatial frequency units in cycles/pixel
5.4. Image-based methods 87
Figure 5.5: Autocorrelation sequence of the fine and coarse textures in Fig.5.3
Figure 5.6: (a) Level 1 and (b) Level 2 wavelet decompositions of the image in Fig.5.3b using coif2 wavelets
Figure 5.7: Edge profiles of the images in Fig.5.3 in corresponding order (size: 512 × 512 pixels)
Aggregate shape
Aggregate shape has an impact on the stability of asphalt mix; it was concluded in [31]
that cubical shapes have the best particle index, a measure of the combined contribution
of particle shape, angularity and surface texture to the stability of an aggregate.
However, there is no known relationship between aggregate shape and macrotexture
measurements for any of the above mentioned devices and techniques. This
is also the case for image analysis techniques. The assumption of circular or spherical
shapes, common in granulometric approaches to image analysis, is made more for
computational convenience than as a physical assumption. For example, in the analysis
of granular textures [21, 57], grain size distributions are obtained using circular
structuring elements to probe the aggregates. Such morphological techniques apply
both to binarised images with distinct aggregate-background pixels and to grey-level
images [98].
In [34] texture elements were defined to be circular regions of uniform intensity,
extracted by convolving the image with the ∇²G (Laplacian of Gaussian) and
(∂/∂σ)∇²G operators and matching the response to an ideal disc (σ is the size of
the Gaussian). This technique was used to simplify the problem of extracting regions
of arbitrary shapes and sizes, and was successful in extracting surface information
of texture.
In our proposed method, edge profiles of pavement surfaces such as those shown in
Fig.5.7 do not suggest a definite shape for the aggregates. As a description of aggregate
size, we use the average diameter of closed curves derived from edge profiles of the
sort shown in Fig.5.7. Thus in our method, the average diameter of an aggregate is
an abstract description of the aggregate size; it may give the reader the impression
of a spherical shape, but this is assumed only for computational purposes.
Camera angle
Still images obtained from surfaces contain information regarding the surface projection
angle and the relief of the texture. The surface projection angle becomes evident from
the decreasing texture gradient for textures further away along the direction of the
optical axis of the camera. This gives rise to what is known as projective distortion
[8]. Several techniques for extracting 3D characteristics from 2D images using cues
from projective distortion are discussed in [36, 37, 38, 39, 134]. Here we define the
projection angle to be the angle between the road surface and the optical axis of the
camera lens, represented by φ in Fig.5.14. The projection angle affects the texture
relief, which in turn impacts the perceived aggregate size.
The presence of relief also means that the resulting images depend on other aspects
of projected texture such as mean illuminance. Mean illuminance can vary with
viewing angle, owing to variations in shading or cast shadows, and can alter the
perceived coarseness of the texture [135]. This complex relationship between
orientation and illuminance has been studied in [136], where a difference in the
spectrum of images captured at different angles was observed and attributed to the
change in the azimuthal angle of the illumination direction, which causes a change
in the shadowing direction and hence a change in the dominant orientation of the
spectrum.
To study the effect of projection angle on the perceived aggregate size, a road surface
was photographed at five different angles φ = [30°, 45°, 60°, 75°, 90°], with the camera
set up as shown in Fig.5.14. The effective distance between the camera lens and a chosen
reference point on the road surface was kept constant. All five images were captured
within a minute of each other so that the variations in illuminance and in the direction
of solar radiation were as small as possible. The resulting images are shown in Fig.5.8.
Notice the compression in the images acquired at smaller φ. Fig.5.9 shows the edge
density of each image as φ varies. In general, there is a perceived increase in
coarseness as φ increases.
Illuminance
To study the effects of road surface brightness on the image analysis technique, an
image of a road surface was taken at six different times of the day, corresponding to
six illuminance ranges measured with a lux-meter. Fig.5.10 shows the road surface
images at the six lux levels, and Fig.5.11 shows the edge profiles of the six images,
highlighting the intensity of edges captured at each lux level. To quantify the effect
of illuminance on the edge profiles we again use the edge density percentage κ. Based
on Fig.5.11, imaging at lower lighting levels results in higher perceived coarseness.
Figure 5.8: Images of a road surface captured at projection angles φ = 30°, 45°, 60°, 75° and 90° (size: 512 × 512 pixels). The patch in the middle highlights the distortional impact of viewing angle.
Figure 5.9: Edge profiles of the images in Fig.5.8, showing the edge density of each: κ = 66.92 (φ = 30°), 56.87 (φ = 45°), 56.85 (φ = 60°), 57.26 (φ = 75°) and 55.27 (φ = 90°) (size: 512 × 512 pixels). The distortion of the central patch in each image indicates the relative distortion of average grain size for each φ
Fig.5.12, which shows the autocorrelation sequence for the images in Fig.5.10, confirms
that lower lux levels result in the coarsening of the textures. From Fig.5.12 it is
evident that at the lowest lux levels (the 100 and 300 lux points), the trend of
increasing coarseness with lower illuminance starts to break. This might be due to
a range of factors, including a minimum tolerance in the camera parameters below
which linear behaviour is compromised. The value of κ starts to stabilise beyond
20,000 lux, roughly indicating a minimum recommended illuminance for an imaging
technique to be effective.
Figure 5.10: Images of a road surface at various lux levels. Approximate lux levels, left to right and top to bottom: 100, 300, 2000, 7000, 20,000 and above 30,000 (size: 512 × 512 pixels)
5.5 The Proposed technique
The method proposed here uses estimates of the aggregate sizes of the road surface
as obtained from road surface edge profiles. The size of the aggregates in the surface
image can be obtained once their boundaries are determined. The average chord length
of each closed boundary determines the mean aggregate size of the entire image. To
implement this, each image is processed through four stages: edge detection, dilation,
skeletonisation and pixel run statistics from the resulting distribution. The various
parameters used in the processing stages are optimised in a final calibration stage.
Details of the technique are explained in section 4.3; here, the adjustments of the
technique to suit road surfaces are briefly discussed. The adjustments encompass
edge detection, edge linking and pixel run statistics.
The edge detection of road surfaces is implemented with a view to extracting the
optimal values of the Gaussian width σ and the threshold Θ that would best correlate
Figure 5.11: Edge profiles of the images in Fig.5.10 (scaled down for visibility; size: 256 × 256 pixels). The edge densities κ at each lux level are: 10.67 (100 lux), 11.96 (300 lux), 13.57 (2000 lux), 14.66 (7000 lux), 15.35 (20,000 lux) and 15.65 (30,000 lux)
Figure 5.12: Autocorrelation sequence for each of the images in Fig.5.10.
with the SMTD method. Fig.5.13 depicts the edge profiles of the pavement surface
image shown in Fig.5.3a for various σ and Θ. The depicted range of values of σ and
Θ was found to be optimal for the pavement surface images obtained.
Figure 5.13: Effect of σ and Θ on the edge contour of a pavement sample (central 200 × 200 pixels of Fig.5.3a). Visual impact along σ is more noticeable than along Θ, but both effects are also described in terms of edge densities κ, as shown in Table 5.1.
           σ = 0.5   σ = 1.0   σ = 2.0
Θ = 0.05   29.14     22.52     13.25
Θ = 0.10   28.71     21.98     12.73
Θ = 0.20   26.71     19.68     10.92

Table 5.1: Values of κ (percentage of edge pixels) as a function of (σ, Θ) for the profiles in Fig.5.13
The edge linking step is implemented using a disc of radius 2 pixels as the structuring
element. This choice is compatible with the resolution of the acquired images (3-4
pixels/mm), so that the low range of macrotexture (0.5-2 mm) can be adequately
captured. Thus the structuring element used in the dilation operator applied to the
edge profile images is the 5 × 5 matrix ∆ given by (5.5.1).
        ⎡ 0 0 1 0 0 ⎤
        ⎢ 0 1 1 1 0 ⎥
    ∆ = ⎢ 1 1 1 1 1 ⎥                                            (5.5.1)
        ⎢ 0 1 1 1 0 ⎥
        ⎣ 0 0 1 0 0 ⎦
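The 5 × 5 matrix ∆ is exactly the digital disc of radius 2. A short sketch, assuming NumPy, generates it from the radius, which makes it easy to experiment with other structuring-element sizes:

```python
import numpy as np

def disc_structuring_element(radius):
    """Binary disc: all pixels whose centre lies within `radius` of the origin."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (x ** 2 + y ** 2 <= radius ** 2).astype(int)

delta = disc_structuring_element(2)  # reproduces the matrix in (5.5.1)
```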
An illustration of the sequential process of edge detection, dilation and skeletonisation
resulting in the desired edge map is shown in Fig.4.4. This resulting edge map is used for
calculating the aggregate size distribution. The mean of the distribution (here termed
EDPC) from each image is arrayed in a vector representing the mean aggregate size
along the chainage. The mean is subtracted from each vector and the vectors are offset
to clear any negative values. This removal of mean values neutralises the undesirable
effect of variations in illuminance on the results. The resulting vector is then correlated
with the corresponding SMTD values.
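The pixel-run statistic of the final stage can be sketched as follows, under the assumption that the mean gap between adjacent edge pixels along horizontal and vertical scan lines is used as the size proxy (the helper name is ours, not from the thesis):

```python
import numpy as np

def mean_run_length(edge_map):
    """Average gap, in pixels, between adjacent edge pixels over all
    horizontal and vertical scan lines of a binary edge map; used here
    as a proxy for the mean aggregate diameter (EDPC)."""
    runs = []
    for array in (edge_map, edge_map.T):   # horizontal, then vertical scans
        for line in array:
            idx = np.flatnonzero(line)
            if idx.size > 1:
                runs.extend(np.diff(idx))  # gaps between successive edges
    return float(np.mean(runs))

# Edges every 4 pixels along a single scan line give a mean run of 4.
row = np.zeros((1, 9))
row[0, [0, 4, 8]] = 1
```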
The next step is parameter calibration, where the σ and Θ that result in the highest R²
are selected following training on a subset of the data with corresponding known SMTD.
These values of σ and Θ result in EDPC values that are predictive of SMTD. A
relation similar to that in Eqn.5.2.2 can be formulated as follows:
SMTD = αEDPC + β (5.5.2)
EDPC values could also be predictive of the estimated texture depth (ETD) in mm
as follows:
ETD = 2.5(αEDPC + β) (5.5.3)
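The calibration of Eqns. 5.5.2 and 5.5.3 amounts to a least-squares line fit on training data; a hedged sketch assuming NumPy (the function names are ours):

```python
import numpy as np

def calibrate(edpc, smtd):
    """Least-squares fit of SMTD = alpha * EDPC + beta (Eqn. 5.5.2)."""
    alpha, beta = np.polyfit(edpc, smtd, 1)
    return alpha, beta

def estimated_texture_depth(edpc, alpha, beta):
    """ETD in mm = 2.5 * (alpha * EDPC + beta) (Eqn. 5.5.3)."""
    return 2.5 * (alpha * np.asarray(edpc) + beta)

# Synthetic training data consistent with the mean calibration factors
# reported later in the chapter (alpha = 0.11, beta = 0.36); the fit
# recovers them exactly.
edpc = np.array([1.0, 2.0, 3.0, 4.0])
smtd = 0.11 * edpc + 0.36
alpha, beta = calibrate(edpc, smtd)
```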
These parameters and their relations determine the robustness of the technique and
are further discussed in section 5.8.
5.6 Data Acquisition and Experiment Set-up
The method proposed here is tested against the laser-profilometer-measured SMTD
across three lanes from two different road segments with a combined length of
3.06 km. A total of 1392 images were acquired from the road surfaces at two different
camera angles of 60° and 90°.
5.6.1 Camera Set-up and SMTD data
The images were obtained using a 7.2 Mega-Pixel Panasonic Lumix DMC-FZ8 camera.
For all the lanes under study, the images were typically captured at intervals of roughly
5-10m. Therefore the images were not acquired at the precise chainage points at which
the SMTD were given. The images were obtained using a tripod as shown in Fig.5.14.
For each lane the images were acquired at φ = 60° and at φ = 90° to study the effect
of viewing angle.

Figure 5.14: Dimensions used in road surface image acquisition
The SMTD values were recorded at 20 m intervals along each lane, and were
available for both the OWP (Outer Wheel Path) and the BWP (Between Wheel Path).
For each road segment the gap between the SMTD data acquisition and the image
acquisition was at least three months. For this reason, the images were acquired
from the BWP region, which is less likely than the OWP to be exposed to road-tyre
friction dynamics and is therefore more stable over time. Fig.5.15 depicts the SMTD
values for both OWP and BWP for a single lane. For each point on the road surface,
the texture depth in mm is 2.5 × SMTD.
The resulting images, after cropping to remove spurious shadows cast by the
tripod, were each 1024 × 1024 pixels in size. Acquisition details for the sites,
including date, time and solar exposure1, are shown in table 5.3.
5.6.2 Nudgee Road
The segment of road used in this study is part of Nudgee Road in Brisbane, Australia,
a two-lane road extending 2.26 km. The road is managed by the Brisbane City Council
but is also used as a calibration site by the regional Queensland Department of
Transport and Main Roads; it is where friction testing devices are tested before being
deployed throughout the region. The road was initially constructed in 1979, followed
by a reseal in 1994 and further repair, including cold planing and an AC overlay for
the segment of road under study, in 2007. The surface was designed with a projected
20-year design volume of 8.8 × 10^6 ESA, which includes commercial vehicles entering
and exiting a
1Source: Australian Bureau of Meteorology. This is the total exposure for each cited day in MJ/m². Usually the maximum exposure is at midday. Images were captured in clear conditions with minimal cloud cover.
Figure 5.15: SMTD values for Outer Wheel Path (OWP) and Between Wheel Path (BWP) for Lane-1
recycling plant. The authors were not able to locate a crash history for the site. SMTD
data were acquired in Feb 2009 from a laser profilometer at 20 m intervals for each
lane, resulting in 113 data points per lane. The SMTD texture data ranged from
0.37 (0.925 mm) to 1.13 (2.825 mm). The mean deviation (see section 5.2.2) of the
SMTD data was 0.138. The image data were acquired in July 2009. For each lane,
image data were captured twice, once at φ = 60° and once at φ = 90°, resulting in
1006 images for the Nudgee Road site. Owing to the length of the segment at Nudgee
Road (2.26 km) and incoming traffic, each image acquisition exercise was done over 4
days. For convenience, the lanes shall be called Lane-1 and Lane-2.
5.6.3 Colburn Avenue
The examined segment at Colburn Avenue is part of a road constructed in 1969,
followed by a reseal in 1989 and an AC overlay in 2003. The examined segment is
0.7 km long and only one lane was examined. SMTD data were acquired in Feb 2010
from a laser profilometer at 20 m intervals for the lane, resulting in 40 data points.
The images were acquired in May 2010. The SMTD texture data ranged from
0.35 (0.875 mm) to 0.46 (1.15 mm) with a mean deviation of 0.14. The images were
acquired at intervals of 5-10 m, resulting in 386 images in total for φ = 60° and
φ = 90°. The crash history of the road averages a little less than 1 crash per year,
with 0.18 crashes per year (or 3 in 16 years) resulting in hospitalisation; the rest
are minor injuries or property damage. The image data for the segment at Colburn
Avenue (700 m) were captured in a single morning for both camera viewing angles.
The estimated AADT (Annual Average Daily Traffic) of the road is 8500.
Figure 5.16: Maps showing (a) Nudgee Road and (b) Colburn Avenue. Images were captured roughly from the highlighted sections pointed to by the arrow
5.6.4 Experiment Setup
We aim to arrive at the optimal set of parameters that correlates with the existing
SMTD values, and to investigate the performance of the resulting parameter set on
other, unseen data. The Canny edge detector is characterised by the parameters σ and
Θ, as discussed in section 5.5. The experiment involved a range of values of σ and Θ
from which the optimal (σ, Θ) is selected. The selection of the (σ, Θ) pair is based
on the EDPC values that result in the best R² when compared with the SMTD method
for φ = 60° and φ = 90°. This experiment is repeated on Lane-2 of Nudgee Road and
on Colburn Avenue to test the robustness of the selected parameters.
The SMTD data were acquired at 20 m intervals along the chainage, and at least two
images were acquired within each such interval; there are therefore at least twice as
many image data points as SMTD data points. Before correlating, we smooth both
data sets using a Gaussian moving average filter to minimise data spikes. Note also
that the SMTD data shown in Fig.5.15 are in SMTD units; multiplying the values
by 2.5 gives the Sand Patch equivalent in mm. In the displays of the results we shall
use the Sand Patch equivalent in mm, and therefore also express the EDPC values
in mm. This is done by first converting the EDPC values into SMTD using the linear
equations relating them and then multiplying the result by 2.5.
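The Gaussian moving-average smoothing step can be sketched as follows; the kernel width and radius are illustrative choices, not values specified in the text:

```python
import numpy as np

def gaussian_smooth(values, sigma=2.0, radius=4):
    """Gaussian moving-average filter for a 1D sequence of chainage
    measurements; edge-padding keeps the output the same length as
    the input."""
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                      # unit gain: constants pass through
    padded = np.pad(values, radius, mode='edge')
    return np.convolve(padded, kernel, mode='valid')
```

Applying the same filter to both the EDPC and SMTD sequences before correlating suppresses isolated spikes without shifting either trace.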
5.7 Performance Comparison of The Imaging techniques
Imaging data from the 2.26 km Lane-1 of Nudgee Road were used to compare the
imaging techniques presented here, namely the FFT, autocorrelation, wavelet and
EDPC techniques (see table 5.2). The merit of any of these methods lies first in how
closely it correlates with the SMTD data, and second in whether the high correlation
is reproducible on a different set of data. In this preliminary result we investigate
the closeness of each of the methods in approximating the SMTD data. Calibration
and reproducibility of the results on other roads will be applied to the method showing
the highest correlation.
A few notes follow on the algorithm used in each method. The FFT method [46] was
applied by first computing the road surface spectrum as shown in Fig.5.4. The spectrum
was then divided into regions or bands formed by concentric circles, so that the inner
circle is band 1, the following ring is band 2 and so on. The energy in each band was
correlated with SMTD, and the energy in combinations of bands was also compared
with SMTD. It was found that there is a band preference that results in the highest
correlation with SMTD; in our computation this was band 4. This is consistent with
the findings in [133], which also identified a band preference. However, the band
preference is related to the resolution of the image, since images captured at different
resolutions are subsampled versions of each other and therefore have frequencies
related to their respective resolutions. The results for the FFT method are shown in
Fig.5.17.
The autocorrelation technique was applied to the images as shown in Fig.5.5. Several
lags were investigated to arrive at the optimal lag yielding slopes that correlate
with the SMTD data. A lag of 2 registered the best correlation with SMTD and was
used for the comparison. The results for the autocorrelation method are shown in
Fig.5.18.
There are many wavelet filter families, and a comparison of all of them would stray
from the scope of this study; the results for the wavelet method are shown in Fig.5.19.
Although wavelets are suited to capturing textural properties, it is not yet clear which
wavelet type is most suitable for capturing the coarseness of road surface macrotexture.
Therefore three of the wavelets commonly used in texture analysis (db4, coif2 and
bior3.5) [49] were tested. This is done as follows. Each image is decomposed into its
4 components (see Fig.A.3 in appendix A for a schematic and Fig.5.6 for the
implementation), and the energy (grey level variance) in the LL component is computed.
The LL component is then further decomposed, resulting in another LL at level 2,
whose variance is also computed, and so on. This is done for each image, after which
the effectiveness with which the energy at each decomposition level approximates
texture depth is established by correlating the energy results from each decomposition
level with the SMTD values. Fig.5.6 is an example of a two-level decomposition of a
road surface image using coif2 filters. The energy is obtained from the LL approximation
images shown in the figure, whereas the detail images (LH, HL and HH), three in each
decomposition level, are discarded. Of the wavelet types tested, the best performing
was bior3.5, which was therefore selected.
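The thesis applies db4, coif2 and bior3.5 filters from a standard wavelet toolbox; as a self-contained illustration of the LL-energy-per-level idea only, the sketch below substitutes the simplest (Haar) approximation band, which reduces to 2 × 2 block averaging:

```python
import numpy as np

def haar_ll(image):
    """One-level Haar approximation (LL) band: 2x2 block averages."""
    h, w = image.shape
    image = image[:h - h % 2, :w - w % 2]       # trim to even dimensions
    return 0.25 * (image[0::2, 0::2] + image[0::2, 1::2]
                   + image[1::2, 0::2] + image[1::2, 1::2])

def ll_energies(image, levels=2):
    """Grey-level variance of the LL band at each decomposition level;
    the detail bands (LH, HL, HH) are discarded, as in the text."""
    energies = []
    for _ in range(levels):
        image = haar_ll(image)
        energies.append(float(image.var()))
    return energies

# Block averaging attenuates white noise, so the LL energy shrinks
# with each level; coarse textures retain more energy per level.
rng = np.random.default_rng(0)
energies = ll_energies(rng.standard_normal((128, 128)))
```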
The EDPC technique was applied with σ = 0.75 and Θ = 0.05, which were found to be
the optimal set of parameters for the data. The EDPC method outperforms the other
imaging techniques, as shown by the high R² in table 5.2. Further results, including
the choice of the parameters of the method, the further refinement of the technique
and its application to more road data, are presented and discussed in section 5.8.
One condition kept constant in figures 5.17 to 5.20 is that no outliers were removed.
All methods improve as outliers are removed, in which case the SMTD traces in the
figures would vary as each method selects the outliers it seeks to remove. Table 5.2
shows the R² resulting from removing the indicated percentage of outliers for each
method.
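The effect of outlier removal on R² can be reproduced in principle by refitting after discarding the worst-fitting points; a sketch assuming NumPy, where the trimming rule (largest absolute residuals) is our assumption rather than a procedure the thesis specifies:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least-squares line of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

def r_squared_trimmed(x, y, percent):
    """R-squared after discarding `percent`% of the points with the
    largest absolute residuals."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = np.abs(y - (slope * x + intercept))
    n_keep = int(round(len(x) * (1.0 - percent / 100.0)))
    keep = residuals.argsort()[:n_keep]
    return r_squared(x[keep], y[keep])

x = np.arange(10.0)
y = 2.0 * x + 1.0
y_spiked = y.copy()
y_spiked[5] += 50.0   # one gross outlier drags R-squared down
```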
Figure 5.17: Plots showing (a) the FFT band-4 method and the laser profilometer SMTD side by side along the chainage and (b) their coefficient of determination (fitted line y = 0.22x - 0.19, R² = 0.70)
Figure 5.18: Plots showing (a) the autocorrelation method and the laser profilometer SMTD side by side along the chainage and (b) their coefficient of determination (fitted line y = 0.36x - 0.34, R² = 0.64)
5.7.1 The selection of adaptive σ for road surfaces
It was demonstrated in section 4.6 that an adaptive σ results in improved discrimination
between grain sizes of sediments. Adaptive σ has also been applied to road surfaces
and the results correlated with SMTD. The adaptive selection of σ in EDPC resulted
in a high correlation with SMTD (R² = 0.685) when no outliers are removed (compare
with table 5.2). This is better than any of the FFT, ACF and wavelet techniques, but
falls short of the optimal set (σ = 0.75, Θ = 0.05) in the EDPC technique, which
resulted in R² = 0.785. There are a number of reasons why an adaptive σ would
Figure 5.19: Plots showing (a) the wavelet transform (bior3.5) method and the laser profilometer SMTD side by side along the chainage and (b) their coefficient of determination (fitted line y = 0.04x + 0.17, R² = 0.65)
Figure 5.20: Plots showing (a) the EDPC method and the laser profilometer SMTD side by side along the chainage and (b) their coefficient of determination (fitted line y = 0.13x + 0.34, R² = 0.79)
Outliers removed (%)   FFT Band-4   Autocorrelation (1/slope @ lag=2)   Wavelet bior3.5   EDPC (σ = 0.75, Θ = 0.05)
 0                     0.7          0.638                               0.652             0.785
 1                     0.744        0.697                               0.736             0.847
 2                     0.756        0.735                               0.761             0.86
 3                     0.755        0.767                               0.791             0.872
 4                     0.767        0.782                               0.805             0.879
 5                     0.773        0.787                               0.821             0.88
 6                     0.78         0.8                                 0.84              0.887
 7                     0.786        0.799                               0.847             0.895
 8                     0.797        0.804                               0.858             0.904
 9                     0.801        0.817                               0.862             0.907
10                     0.808        0.823                               0.866             0.911

Table 5.2: Coefficient of determination (R²) values for the four imaging methods
perform this way for the road surface data, in contrast to the sediment data in chapter 4.
The selection of adaptive σ in section 4.6 required that Θ also be adaptive. However,
the adaptive Θ is selected in accordance with rules (see the variable ν in section 4.3.1)
that may not hold in all cases, especially for image data with relatively high variation.
For the limited set of sediment data with known mean size estimates, the adaptive
algorithm for Θ had minimal impact; this is not so for a diverse set of road surface
images sensitive to lighting and viewing angle. For this reason, an adaptive selection
of σ could potentially perform better if accompanied by a custom-designed adaptive
Θ for images obtained at different illuminance and viewing angles. This complicates
the algorithm further, and therefore a set of σ and Θ parameters chosen experimentally
and tuned to a range of viewing angles and lighting conditions was the approach
adopted for road surface texture analysis.
5.8 Results and Discussions
The results presented in this section are for the segments of Nudgee Road Lane-1 and
Lane-2 and of Colburn Avenue. First, a plot correlating EDPC with SMTD is presented
alongside a plot showing EDPC with the sand patch equivalent, which is 2.5 × SMTD.
The plots are shown for Lane-1 of Nudgee Road and Lane-1 of Colburn Avenue (both at
φ = 60°). Tables 5.4 - 5.9 show the values of R² for each (σ, Θ, φ) combination. The
calibration factors α and β for the method are also presented in the tables.
Figure 5.21: (a) Correlation between EDPC and SMTD for Lane-1 on Nudgee Beach Rd (fitted line y = 0.14x + 0.35, R² = 0.94), (b) EDPC and SMTD in mm, both scaled to match the Sand Patch equivalent, and (c) the standard error, as a function of chainage, of using EDPC as compared to SMTD. Θ = 0.05, σ = 0.75 and φ = 60°
Figure 5.22: (a) Correlation between EDPC and SMTD for Lane-1 on Colburn Ave (fitted line y = 0.1x + 0.32, R² = 0.83), (b) EDPC and SMTD in mm, both scaled to match the Sand Patch equivalent, and (c) the standard error, as a function of chainage, of using EDPC as compared to SMTD. Θ = 0.05, σ = 0.75 and φ = 60°
There is evidence from tables 5.4 - 5.9 that a lower Canny edge detector threshold
value yields a better correlation with the SMTD data. The figures shown in bold
in the tables are based on the criterion of admitting any R² value within 75%
of the maximum R² in each table. This could be made stricter (say 90%) in order to
restrict the usable (σ, Θ) region. The identified optimal region resulting in high R²
suited to all the sites is (0.5 ≤ σ ≤ 2, 0.05 ≤ Θ ≤ 0.10). The operating
point (σ = 0.75, Θ = 0.05) is optimal for all the sites and consistently results in high R².
The representations in Figs.5.21a and 5.22a suggest that imaging techniques mimic the
laser profilometer data in capturing road surface macrotexture. The relation between
the SMTD data and the proposed technique also shows a stable linear relationship.
The gradient (specified by α) and intercept (β) of the lines in Figs.5.21a and 5.22a
are indicators of the robustness of the method and its applicability to other similar
surfaces. Several values of α and β are also shown in tables 5.4 - 5.9.
By applying the mean values α = 0.11 and β = 0.36 to all the sites and using
(σ = 0.75, Θ = 0.05), the variation along the chainage of the RMSE (root mean
square error) of using EDPC as a predictor of texture depth is shown for two selected
sites in Figs.5.21c and 5.22c. The mean RMSE for these and the remaining sites is
shown in table 5.3. The highest RMSE (0.195 mm) is still less than 20% of the
corresponding average macrotexture (1 mm) for Colburn Avenue. This is an encouraging
result and shows the potential of imaging techniques as viable methods for macrotexture
determination. Note that these RMSE values are only a relative comparison between
the EDPC technique and the SMTD laser profilometer technique, neither of which is
the gold standard. More reliable RMSE values could be obtained by comparison against
the sand patch test, but obtaining sand patch data on a similar scale to the laser
profilometer data used here is practically restrictive. Consequently, the RMSE values
in table 5.3 indicate the usefulness of the EDPC method if these values can be
maintained on similar surfaces under the same lighting conditions. The difference in
illuminance amongst the sites, and the position of the lighting source, is a major factor
that would result in a gradient mismatch. An acquisition device with fixed lighting
and a mechanism to block stray light would potentially boost the robustness of the
technique.
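The RMSE-along-chainage comparison of Figs.5.21c and 5.22c can be sketched as a windowed error between the calibrated EDPC prediction and the 2.5 × SMTD reference; the defaults use the mean calibration factors quoted above, while the window length is an illustrative choice:

```python
import numpy as np

def rmse_along_chainage(edpc, smtd, alpha=0.11, beta=0.36, window=5):
    """Windowed RMSE (mm) of the calibrated EDPC texture-depth prediction
    against the 2.5 x SMTD sand-patch equivalent. Edge-padding keeps the
    output aligned with the chainage samples."""
    predicted = 2.5 * (alpha * np.asarray(edpc) + beta)
    reference = 2.5 * np.asarray(smtd)
    squared_error = (predicted - reference) ** 2
    kernel = np.ones(window) / window
    padded = np.pad(squared_error, window // 2, mode='edge')
    return np.sqrt(np.convolve(padded, kernel, mode='valid'))

# Perfectly calibrated synthetic data gives zero error everywhere.
edpc = np.linspace(0.5, 3.0, 20)
smtd = 0.11 * edpc + 0.36
errors = rmse_along_chainage(edpc, smtd)
```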
5.9 Chapter Summary
In this chapter the frictional characteristics of road surfaces were presented. These
characteristics are broadly classified by the WRA (World Road Association) into four
categories, namely, micro-texture, macro-texture, mega-texture and roughness. The
relationship between road surface macro-texture and road crashes has been presented
Site                         φ     Acquisition date   Solar exposure (MJ/m²)   Duration        R²     RMSE (mm)
Nudgee Rd Lane 1 (2.26 km)   60°   25/7/2009                                   10:59 - 12:26   0.94   0.118
Nudgee Rd Lane 1 (2.26 km)   90°   23/10/2009                                  07:08 - 08:10   0.80   0.159
Nudgee Rd Lane 2 (2.26 km)   60°   10/8/2009          20-22                    13:35 - 14:23   0.91   0.188
Nudgee Rd Lane 2 (2.26 km)   90°   9/10/2009                                   13:10 - 14:16   0.91   0.090
Colburn Ave. (0.7 km)        60°   6/6/2010                                    07:13 - 08:00   0.83   0.142
Colburn Ave. (0.7 km)        90°   6/6/2010                                    08:06 - 08:57   0.69   0.195

Table 5.3: Image acquisition details showing date, starting and ending time of acquisition. Also shown are correlation results with SMTD (R²) and the average RMSE values for all the sites when the EDPC technique is applied with the parameters σ = 0.75, Θ = 0.05, α = 0.11 and β = 0.36.
Θ 0.03 0.05 0.10 0.20
σ R2 α β R2 α β R2 α β R2 α β
0.5 0.59 0.12 0.36 0.93 0.14 0.36 0.76 0.13 0.36 0.06 0.08 0.42
0.75 0.58 0.12 0.37 0.94 0.14 0.35 0.74 0.13 0.34 0.11 0.11 0.39
1 0.55 0.11 0.37 0.93 0.14 0.33 0.74 0.14 0.32 0.18 0.13 0.39
2 0.51 0.11 0.32 0.88 0.14 0.26 0.71 0.14 0.24 0.32 0.16 0.26
3 0.51 0.11 0.33 0.87 0.15 0.26 0.65 0.18 0.11 0.24 0.12 0.32
4 0.51 0.12 0.30 0.82 0.15 0.21 0.39 0.18 0.16 0.19 0.09 0.36
5 0.48 0.13 0.28 0.71 0.16 0.20 0.07 0.08 0.32 0.11 0.07 0.39
Table 5.4: Parameters for Lane-1 Nudgee road (φ = 60°). Optimal parameters are shown in bold.
Θ 0.03 0.05 0.10 0.20
σ R2 α β R2 α β R2 α β R2 α β
0.5 0.69 0.12 0.30 0.75 0.12 0.35 0.79 0.12 0.39 0.46 0.09 0.42
0.75 0.80 0.13 0.35 0.80 0.12 0.38 0.85 0.13 0.39 0.41 0.09 0.44
1 0.85 0.13 0.38 0.84 0.13 0.38 0.83 0.13 0.39 0.36 0.09 0.44
2 0.86 0.13 0.36 0.84 0.12 0.37 0.75 0.12 0.38 0.24 0.07 0.45
3 0.83 0.12 0.37 0.82 0.12 0.37 0.67 0.11 0.38 0.09 0.05 0.46
4 0.81 0.12 0.37 0.77 0.12 0.37 0.53 0.11 0.38 0.01 0.02 0.49
Table 5.5: Parameters for Lane-1 Nudgee road (φ = 90°). Optimal parameters are shown in bold.
Θ 0.03 0.05 0.10 0.20
σ R2 α β R2 α β R2 α β R2 α β
0.5 0.85 0.12 0.29 0.89 0.11 0.37 0.92 0.12 0.42 0.76 0.15 0.42
0.75 0.89 0.11 0.34 0.91 0.11 0.39 0.93 0.12 0.42 0.72 0.14 0.43
1 0.91 0.11 0.36 0.93 0.12 0.40 0.92 0.12 0.42 0.66 0.14 0.42
2 0.91 0.11 0.33 0.91 0.11 0.36 0.89 0.13 0.38 0.37 0.10 0.43
3 0.89 0.11 0.36 0.88 0.11 0.35 0.77 0.12 0.35 0.05 0.04 0.49
4 0.85 0.12 0.37 0.82 0.12 0.37 0.43 0.10 0.38 0.01 -0.02 0.56
5 0.75 0.11 0.36 0.69 0.11 0.35 0.19 0.07 0.41 0.06 -0.05 0.61
Table 5.6: Parameters for Lane-2 Nudgee road (φ = 60°). Optimal parameters are shown in bold.
Θ 0.03 0.05 0.10 0.20
σ R2 α β R2 α β R2 α β R2 α β
0.5 0.88 0.11 0.39 0.89 0.11 0.38 0.86 0.11 0.38 0.69 0.12 0.39
0.75 0.89 0.11 0.39 0.91 0.11 0.38 0.86 0.11 0.37 0.66 0.11 0.39
1 0.89 0.10 0.40 0.90 0.10 0.39 0.85 0.11 0.37 0.61 0.11 0.39
2 0.88 0.11 0.39 0.89 0.11 0.40 0.87 0.11 0.37 0.33 0.08 0.38
3 0.86 0.11 0.37 0.88 0.11 0.38 0.79 0.11 0.35 0.01 0.01 0.51
4 0.82 0.11 0.35 0.82 0.12 0.35 0.55 0.11 0.34 0.19 -0.07 0.68
5 0.71 0.11 0.32 0.68 0.12 0.30 0.09 0.05 0.41 0.54 -0.12 0.77
Table 5.7: Parameters for Lane-2 Nudgee road (φ = 90°). Optimal parameters are shown in bold.
Θ 0.03 0.05 0.10 0.20
σ R2 α β R2 α β R2 α β R2 α β
0.5 0.77 0.17 0.31 0.79 0.14 0.31 0.85 0.08 0.33 0.85 0.05 0.34
0.75 0.82 0.10 0.31 0.83 0.10 0.32 0.85 0.07 0.33 0.84 0.06 0.34
1 0.84 0.08 0.32 0.84 0.08 0.33 0.85 0.07 0.33 0.84 0.07 0.35
2 0.77 0.08 0.34 0.77 0.08 0.34 0.74 0.08 0.34 0.52 0.10 0.34
3 0.51 0.08 0.33 0.55 0.08 0.49 0.21 0.07 0.34 0.23 -0.04 0.42
4 0.10 0.04 0.36 0.11 0.04 0.06 0.01 -0.01 0.39 0.45 -0.03 0.44
5 0.01 0.01 0.38 0.01 0.01 0.00 0.12 -0.03 0.42 0.42 -0.02 0.43
Table 5.8: Parameters for Colburn avenue (φ = 60°). Optimal parameters are shown in bold.
Θ 0.03 0.05 0.10 0.20
σ R2 α β R2 α β R2 α β R2 α β
0.5 0.61 0.21 0.33 0.65 0.17 0.34 0.69 0.09 0.34 0.67 0.05 0.36
0.75 0.67 0.13 0.34 0.69 0.11 0.34 0.71 0.08 0.35 0.59 0.04 0.37
1 0.69 0.11 0.34 0.70 0.10 0.34 0.69 0.08 0.35 0.52 0.04 0.37
2 0.53 0.08 0.34 0.53 0.08 0.34 0.50 0.06 0.36 0.35 0.02 0.37
3 0.23 0.06 0.36 0.23 0.06 0.36 0.20 0.05 0.36 0.11 0.02 0.38
4 0.00 -0.01 0.39 0.01 -0.01 0.39 0.05 -0.03 0.41 0.18 -0.02 0.43
5 0.07 -0.03 0.41 0.09 -0.03 0.41 0.14 -0.03 0.42 0.22 -0.02 0.43
Table 5.9: Parameters for Colburn avenue (φ = 90°). Optimal parameters are shown in bold.
and discussed, followed by the methods by which macro-texture is analysed.
In this chapter it is postulated that road surface macro-texture and texture coarseness
are related, based on the observation that rougher surfaces produce coarser images
than smoother surfaces. Since coarseness and particle size were established to be
equivalent for packed aggregates (see section 2.6.4), particle size techniques can also
be used for measuring road surface macro-texture, or texture depth, in road surfaces.
The benchmark for the imaging techniques was chosen to be the level of correlation
with the state-of-the-art SMTD technique. Thus, in preliminary comparisons, a higher
correlation of a texture depth algorithm with SMTD implied relatively good
performance.
The algorithm for particle size determination developed in chapter 4 was applied to
images of road surfaces obtained from two different regions. The results were compared
with those of classical texture analysis algorithms for coarseness measurement, specifically
Fourier, autocorrelation and wavelet features. The proposed particle size determination
algorithm showed superior performance.
To the best of the author's knowledge, the amount of data used in this experiment
is the most comprehensive of its kind to date. Fourier transforms have previously been
applied to road surface macrotexture on smaller-scale data; however, autocorrelation
functions and wavelets are applied here for the first time.
The image acquisition conditions that affect the imaging techniques have also been
presented. These include camera viewing angle and illuminance. It was established that
surface coarseness appears to increase with lower road surface illuminance. Thus imaging
techniques are suited for road surface macro-texture analysis, subject to intelligent
control of the acquisition conditions.
Chapter 6
Conclusion and Future Work
Texture analysis literature, including the granulometric analysis of textures, has been
surveyed; particulate textures have been defined; the difference between the analysis
of texture using HVS-inspired techniques and techniques based on particulate features
has been highlighted; algorithms for granulometric analysis of particulate textures have
been developed; and the developed techniques and algorithms have been implemented on
sediments and road surfaces. By revisiting the objectives outlined in section 1.3,
this chapter concludes the thesis with a brief response to each objective, including
a summary of the work presented, the major contributions, challenges encountered,
and possible research avenues.
6.1 Summary
The concepts and techniques of existing state-of-the-art methods for texture analysis
were studied and their application to particulate textures reviewed. The conditions
under which these classical techniques apply to particulate textures have been identified.
The conditions under which texture coarseness and particle size are equivalent have been
determined, and the relationship between particle shape and texture regularity has been
characterized.
Techniques such as wavelet analysis, the autocorrelation function and Fourier analysis
were also applied to particulate texture analysis, reinforcing the idea that particulate
textures are indeed a special class of textures. The conditions for these classical
techniques to apply have also been reviewed, leading to the proposal of techniques based
on edge detection. The discussion in Chapters 2 and 3 of existing techniques, and their
limitations in analysing particulate texture, was the main motivation for developing a
framework for particulate texture analysis.
Particulate textures are distinguished from general textures through the descriptors
that identify them. These include particle size, particle shape, density, microtexture
and macrotexture. These descriptors call for new tools of analysis and also have a wider
significance in terms of applications. They were identified in Chapters 2 and 3.
A framework for particle shape analysis based on the Jordan curve has been developed
in Chapter 3. This resulted in a novel approach to the roundness and
convexity/concavity of particles. Results from synthetic images indicate the robustness
of the technique. It also resulted in a characterisation of particle shapes using
geometrical techniques in the digital domain, as opposed to earlier characterisations
in the Euclidean domain.
The thesis also resulted in the development of a set of procedures for grain size
analysis using edge detection and pixel counts. The core of the algorithm is discussed in
Chapter 4. The main challenge in developing the technique was the problem of contrast
in particulate images: contrast is essential if edge detection is to capture the particle
outlines in an image. To overcome this problem, an intermediate step of edge linking
has been proposed. The technique was successfully applied to size determination and
classification of sediment and road surface images with encouraging results. The
application of the technique to road surfaces raised a host of issues, such as the
effects of lighting level and acquisition angle on the acquired images and on the
accuracy of measurement. Experiments were conducted to characterise these issues
and propose solutions based on the parameters of the edge detector.
6.2 Future Work
A number of possible research avenues have been identified as continuations of this
work.
i Adaptation could be extended to the morphological structuring elements, as was
done for the Gaussian width parameter σ in chapter 4.
ii The manner in which an image is acquired affects the way it is perceived or quantified.
Therefore, in addition to the acquisition parameters considered in chapter 5,
such as viewing angle and lighting levels, further investigation is required into other
parameters (e.g. direction of the light source). Relating these acquisition
parameters will result in a more complete understanding and lead to standardised
parameters that could be incorporated into a device.
iii Fusion of features from a combination of the methods discussed in chapter 5, such
as the FFT, autocorrelation, wavelets and EDPC, into a single technique might
yield an improved outcome compared to any single technique.
iv Grey level images generally provide the baseline performance level; other techniques
for improved granulometry may therefore involve the use of colour images, which,
by virtue of their richer information content, have the potential to improve the
results. If this is pursued, however, the performance improvement of using colour
images will need to be weighed against the computational and storage costs
involved, especially for a dataset as large as the one used here.
Appendix A
Standard texture analysis
techniques
A.1 Autocorrelation
The autocorrelation function a[u, v] for an M × N image I is defined as:

a[u, v] = \frac{\frac{1}{(M-u)(N-v)} \sum_{i=1}^{M-u} \sum_{j=1}^{N-v} I[i, j]\, I[i+u, j+v]}{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} I^2[i, j]}    (A.1.1)

where 0 ≤ u ≤ M − 1 and 0 ≤ v ≤ N − 1.
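As an illustrative sketch (the function and variable names are ours, not from the thesis), Eqn. A.1.1 can be computed directly with numpy array slicing:

```python
import numpy as np

def autocorrelation(I, u, v):
    """Normalised autocorrelation a[u, v] of an M x N image (Eqn. A.1.1)."""
    I = np.asarray(I, dtype=float)
    M, N = I.shape
    # numerator: mean product of overlapping pixels of I and its (u, v)-shift
    num = np.mean(I[:M - u, :N - v] * I[u:, v:])
    # denominator: mean squared intensity over the whole image
    den = np.mean(I ** 2)
    return num / den

img = np.array([[1.0, 2.0], [3.0, 4.0]])
a00 = autocorrelation(img, 0, 0)   # equals 1 by construction at zero lag
```

Coarser textures typically decay more slowly in a[u, v] with increasing lag, which is what makes the autocorrelation function useful as a coarseness cue.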
A.2 Co-occurrence Matrices
A gray level co-occurrence matrix K[i, j] is characterised by first specifying a displace-
ment vector d=(dx, dy) and counting all pairs of pixels separated by d. Thus if we
define d = (dx, dy) to mean: “a displacement of dx pixels to the right and dy pixels
below”, the resulting gray-level co-occurrence matrix K[i, j] is computed by counting
all pairs of pixels in which the first pixel has a value of i and the matching pixel as a
result of displacement d has the value of j. Formally,
K[i, j] = L\big(\{((p, q), (r, s)) : I(p, q) = i,\; I(r, s) = j\}\big)    (A.2.1)

where (p, q), (r, s) ∈ M × N, (r, s) = (p + dx, q + dy), and L(x) is the cardinality of
the set x. For the 5 × 5 image depicted in Fig. A.1, the results in Table A.1 apply:
As an illustration, the count for K[0, 1] is 5 when d = (0, 1), implying that there
are 5 instances in Fig. A.1 where a pixel has value 0 and the pixel directly below it
has value 1. Table A.2 shows a number of features that can be extracted from the
co-occurrence matrix K[i, j].
Figure A.1: A 5 × 5 image with three gray levels
d = (dx, dy)    K[i, j]

(0, 1)    (1/20) ×  | 0 5 1 |
                    | 1 1 4 |
                    | 4 2 2 |

(1, 0)    (1/20) ×  | 2 3 0 |
                    | 0 1 6 |
                    | 4 3 1 |

(1, 1)    (1/16) ×  | 0 2 2 |
                    | 2 1 2 |
                    | 2 3 2 |

Table A.1: Co-occurrence matrices for the image in Fig. A.1 for different displacements
Energy         \sum_i \sum_j K^2[i, j]
Entropy        -\sum_i \sum_j K[i, j] \log K[i, j]
Contrast       \sum_i \sum_j (i - j)^2 K[i, j]
Homogeneity    \sum_i \sum_j \frac{K[i, j]}{1 + |i - j|}

Table A.2: Common texture features from co-occurrence matrices
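These features translate directly into array operations; a minimal sketch (names are illustrative; the count matrix is normalised to probabilities, and zero entries are excluded from the entropy term to avoid log 0):

```python
import numpy as np

def glcm(I, d, levels):
    """Count-based co-occurrence matrix for displacement d = (dx, dy):
    dx pixels to the right, dy pixels below."""
    dx, dy = d
    M, N = I.shape
    K = np.zeros((levels, levels))
    for p in range(M - dy):
        for q in range(N - dx):
            K[I[p, q], I[p + dy, q + dx]] += 1
    return K

def glcm_features(K):
    """Energy, entropy, contrast and homogeneity of Table A.2."""
    P = K / K.sum()              # normalise counts to probabilities
    i, j = np.indices(P.shape)
    nz = P > 0                   # avoid log(0) in the entropy term
    return {
        "energy": np.sum(P ** 2),
        "entropy": -np.sum(P[nz] * np.log(P[nz])),
        "contrast": np.sum((i - j) ** 2 * P),
        "homogeneity": np.sum(P / (1 + np.abs(i - j))),
    }
```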
A.3 Fourier Domain Filters
For an image I(i, j) of size M × N, the Fourier transform I(k, l) is given by:

I(k, l) = \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} I(m, n)\, e^{-j(\omega_k m + \omega_l n)}    (A.3.1)

where the sinusoids e^{-j(\omega_k m + \omega_l n)} are the Fourier basis functions.
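Texture features are commonly taken as the spectral energy inside rings (scale/coarseness) or wedges (directionality) of the power spectrum. A hedged numpy sketch of a ring-energy feature (the function name and radii are illustrative):

```python
import numpy as np

def ring_energy(I, r_lo, r_hi):
    """Fraction of spectral energy in the annulus r_lo <= r < r_hi."""
    F = np.fft.fftshift(np.fft.fft2(I))      # centre the DC component
    P = np.abs(F) ** 2                       # power spectrum
    M, N = I.shape
    y, x = np.indices((M, N))
    r = np.hypot(y - M // 2, x - N // 2)     # radial frequency index
    mask = (r >= r_lo) & (r < r_hi)
    return P[mask].sum() / P.sum()
```

Coarse textures concentrate energy in low-frequency (small-radius) rings; fine textures push energy outward.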
A.4 Gabor Filters
Figure A.2: Frequency domain representation of the Gabor filter banks used in texture browsing [7], having four scales and six orientations. The axes u and v are spatial frequency in cycles/pixel.
Formally, a 2D Gabor function g(x, y) is given by

g(x, y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\!\left(-\frac{1}{2}\left(\frac{(x - x_o)^2}{\sigma_x^2} + \frac{(y - y_o)^2}{\sigma_y^2}\right) + 2j\pi(u_o x + v_o y)\right)    (A.4.1)

where σ_x and σ_y denote the widths of the Gaussian envelope along the x and y axes, and
(u_o, v_o) are the spatial frequencies in the x and y directions respectively.
The Fourier transform of the Gabor function in A.4.1 is given by:

G(u, v) = \exp\!\left(-\frac{1}{2}\left(\frac{(u - u_o)^2}{\sigma_u^2} + \frac{(v - v_o)^2}{\sigma_v^2}\right)\right)    (A.4.2)

where u and v are the frequency domain axis variables.
Fig. A.2 displays the frequency domain representation of a bank of Gabor filters.
Owing to their symmetry, only the upper half of the filters need be applied to images.
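A sketch of Eqn. A.4.2: the filter is built directly in the frequency domain and applied by spectral multiplication. Function names and parameter values are illustrative, not the filter bank of [7]:

```python
import numpy as np

def gabor_freq(M, N, u0, v0, su, sv):
    """Frequency-domain Gabor filter G(u, v) of Eqn. A.4.2 on an M x N grid."""
    u = np.fft.fftfreq(M)[:, None]   # cycles/pixel along rows
    v = np.fft.fftfreq(N)[None, :]   # cycles/pixel along columns
    return np.exp(-0.5 * (((u - u0) / su) ** 2 + ((v - v0) / sv) ** 2))

def gabor_response(I, u0, v0, su, sv):
    """Filter an image by multiplying its spectrum with G(u, v)."""
    G = gabor_freq(*I.shape, u0, v0, su, sv)
    return np.fft.ifft2(np.fft.fft2(I) * G)
```

A bank such as the one in Fig. A.2 is obtained by placing (u0, v0) at several radii (scales) and orientations and collecting the response energy of each filter as a feature.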
A.5 Wavelets
Formally, the decomposition of the scaling function φ(x) at resolution r and translation
t is given by the MRA equation:

\phi_{r,t}(x) = \phi(2^r x - t) = \sum_n L_\phi(n)\, \sqrt{2}\, \phi(2(2^r x - t) - n)    (A.5.1)
where Lφ(n) is a vector of low-pass filter coefficients, used to capture the approximations
of the signal at various resolutions. The difference between the original signal and its
approximation is captured by the high pass filter Hψ(n) used in the decomposition of
the wavelet function ψ(x):
\psi_{r,t}(x) = \psi(2^r x - t) = \sum_n H_\psi(n)\, \sqrt{2}\, \psi(2(2^r x - t) - n)    (A.5.2)
One condition imposed on filters is that they span the orthogonal space. This results
in a relationship between the decomposition filters, given by:
Hψ(n) = (−1)nLφ(1− n) (A.5.3)
Fig.A.3 shows a schematic for the decomposition of an image into its approximation
and three details using the multiresolution approach.
Figure A.3: Wavelet decomposition of an image into four subsampled images
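One level of the decomposition in Fig. A.3 can be sketched with Haar filters, where L_φ = (1/√2, 1/√2) and H_ψ follows from Eqn. A.5.3. A minimal illustration, assuming even image dimensions:

```python
import numpy as np

def haar_dwt2(I):
    """One-level 2-D Haar decomposition: approximation LL and details LH, HL, HH."""
    I = np.asarray(I, dtype=float)
    a = (I[0::2, :] + I[1::2, :]) / np.sqrt(2)   # low-pass along rows
    d = (I[0::2, :] - I[1::2, :]) / np.sqrt(2)   # high-pass along rows
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)  # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)  # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)  # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)  # diagonal detail
    return LL, LH, HL, HH
```

Because the Haar filters are orthonormal, the total energy of the four subimages equals that of the input; the energies of the detail subbands are the usual wavelet texture features.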
A.6 Random Fields
In the discretised Gauss-Markov random field model, the gray level at any pixel of the
image I(i, j) is modeled as a linear combination of the gray levels of the neighbouring
pixels and additive noise. The intention is to extract the parameters h[k, l], using the
least squares method or any other approach. This is mathematically given by:

I[i, j] = \sum_{[k, l]} h[k, l]\, I[i - k, j - l] + n[i, j]    (A.6.1)
A.7 Structural techniques and Mathematical morphology
A.7.1 The basic morphological operators
Any morphological operation is, by definition, the composition of a mapping or
transformation Ψ of one set into another, followed by a measure µ. The fundamental
operations are erosion and dilation, from which all other operations derive. In defining
the morphological operators, the illustration in Fig. A.4 showing a binary image
A and a structuring element B is used.
Erosion and Dilation
The erosion of a set A by a structuring element B, denoted E_B(A), is defined as the
locus of points t such that B translated by t is included in A; (B)_t denotes the
translation of the set B by t. Formally,

E_B(A) = A \ominus B = \{t : (B)_t \subseteq A\} = \{t : (B)_t \cap A^c = \emptyset\}    (A.7.1)
Figure A.4: A binary image A and a structuring element B.
The dilation of a set A by a structuring element B, denoted D_B(A), is defined as the
locus of points t such that B hits A when its origin coincides with t. Formally,

D_B(A) = A \oplus B = \{t : (\check{B})_t \cap A \neq \emptyset\}    (A.7.2)

where B̌ is the reflection of the set B, such that points (x, y) in B become (−x, −y)
in B̌. Fig. A.5 shows the erosion and dilation of the set A using the structuring element
B (both shown in Fig. A.4). We can see from the examples that erosion thins or shrinks
objects in a binary image, whereas dilation grows or thickens them.
Figure A.5: (a) Erosion and (b) Dilation of the binary image A by the structuring element B (see Fig. A.4)
Duality
There exists a relationship of duality between erosion and dilation with respect to set
complementation and reflection. If Ac is the complement of the set A, and B is the
reflection of the set B, we have
(EB(A))c = DB(Ac) (A.7.3)
Thus, the complement of the erosion of A by B is equivalent to the dilation of the
complement of A by B. Similarly, the complement of the dilation of A by B is equivalent
to the erosion of the complement of A by B.
(DB(A))c = E
B(Ac) (A.7.4)
Having eroded A by B, it is not possible, in general, to recover A by dilating the eroded
set A ⊖ B by the same B; such a dilation will only reconstitute a part of A. However,
this operation of erosion followed by dilation, known as the opening of A, was studied by
Matheron [21] and found to be rich in morphological properties, especially the
notion of size distribution.
Opening and closing
The opening of a set A by a structuring element B is denoted by O_B(A) or A ∘ B, and
the closing of A by B is denoted by C_B(A) or A • B. Both opening and closing derive
from erosion and dilation, and are defined as follows:

The opening of A by B is the erosion of A by B followed by a dilation by B. Mathematically,

O_B(A) = A \circ B = D_B(E_B(A)) = (A \ominus B) \oplus B    (A.7.5)

The closing of A by B is the dilation of A by B followed by an erosion by B. Mathematically,

C_B(A) = A \bullet B = E_B(D_B(A)) = (A \oplus B) \ominus B    (A.7.6)
The opening and closing of the set A by structural element B is shown in Fig.A.6.
Figure A.6: (a) Opening and (b) Closing of the binary image A by the structuring element B (see Fig. A.4)
A geometrical interpretation of opening is as follows: the boundary of A ∘ B is
established by the points of B that reach farthest into the boundary of A, provided
all translates of B are inscribed within A (i.e. provided (B)_t ∩ A = (B)_t).
Closing is interpreted geometrically as follows: the boundary of A • B is established
by the points of B that come closest to the boundary of A, provided all translates of
B do not intersect A (i.e. provided (B)_t ∩ A = ∅).
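Eqns. A.7.1-A.7.6 translate directly into array operations on 0/1 images. A slow but explicit numpy sketch (the function names are ours; B is assumed odd-sized with a centred origin):

```python
import numpy as np

def erode(A, B):
    """Binary erosion: keep t where B translated to t fits inside A (Eqn. A.7.1)."""
    M, N = A.shape
    m, n = B.shape
    P = np.pad(A, ((m // 2, m // 2), (n // 2, n // 2)), constant_values=0)
    out = np.zeros_like(A)
    for i in range(M):
        for j in range(N):
            # every 1 in B must sit on a 1 in A
            out[i, j] = np.all(P[i:i + m, j:j + n] >= B)
    return out

def dilate(A, B):
    """Binary dilation: keep t where the reflected B hits A (Eqn. A.7.2)."""
    M, N = A.shape
    m, n = B.shape
    P = np.pad(A, ((m // 2, m // 2), (n // 2, n // 2)), constant_values=0)
    Br = B[::-1, ::-1]                       # reflection of B
    out = np.zeros_like(A)
    for i in range(M):
        for j in range(N):
            out[i, j] = np.any(P[i:i + m, j:j + n] & Br)
    return out

def opening(A, B):
    return dilate(erode(A, B), B)            # Eqn. A.7.5

def closing(A, B):
    return erode(dilate(A, B), B)            # Eqn. A.7.6
```

Opening a 3 × 3 square with a 3 × 3 structuring element returns the square unchanged, since the square is exactly B-open; smaller features would be removed.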
A.7.2 Convexity of structural elements
Definition: Given a 2D object B, a reference point or center c = (x_0, y_0), and a
real number λ, the homothetic λ_c B maps every point (x, y) in B into the point (x′, y′)
such that (x′ − x_0, y′ − y_0) = λ(x − x_0, y − y_0).
In other words, λ_c B is the scaling or magnification of B about c by the factor λ;
λ_c B preserves the shape of B.
Proposition 1: A compact set B is infinitely divisible w.r.t. dilation iff it is convex.
This means that if one takes a positive integer k and dilates the homothetic (1/k)B of a
convex set B with itself k times, the result is identical to B:

B = \underbrace{\tfrac{1}{k}B \oplus \tfrac{1}{k}B \oplus \cdots \oplus \tfrac{1}{k}B}_{k\ \text{times}} = \left(\tfrac{1}{k}B\right)^{\oplus k}    (A.7.7)
Proposition 2: For a compact set B in R^n, the homothetic λB is open w.r.t. B for
every λ ≥ 1, iff B is convex.
In other words, for a convex B and for p ≥ q ≥ 0, pB is qB-open, meaning pB ∘ qB = pB.
As a result, A ∘ pB ⊂ A ∘ qB. The sieving analogy demonstrates this idea: if
an image A (representing the aggregates to be sieved) is passed through the sieves or
filters pB and qB (opened by them), in any order, the resulting image will always be
A ∘ pB, thus:

(A \circ pB) \circ qB = (A \circ qB) \circ pB = A \circ pB    (A.7.8)
Eqn.A.7.8 is an axiomatic relation in granulometry known as the importance of the
stronger sieve. Granulometric axioms are presented in the following section.
A.7.3 Gray-scale morphological operations
Structuring elements in gray-scale morphology perform a similar function to their binary
counterparts, i.e. they are used as probes to examine properties of interest in the image.
The difference is that structuring elements in gray-scale morphology belong to one of
two groups: flat and non-flat [59]. Flat structuring elements have all their gray-scale
values equal; non-flat structuring elements have gray-scale values that vary.
For a gray-scale image I(i, j) and structuring element J(i, j), the erosion of the image
at (i, j) is given by:

E_J(I)(i, j) = (I \ominus J)(i, j) = \min_{(s,t) \in J} I(i + s, j + t)    (A.7.9)

To find the erosion of I by J, we place the origin of J at every pixel location in I;
the erosion is the minimum value of I within the region covered by J. This, however,
only applies to a flat structuring element. For a non-flat structuring element J_N, the
values of J_N at each overlap point are subtracted from the image to obtain the erosion.
Thus

E_{J_N}(I)(i, j) = (I \ominus J_N)(i, j) = \min_{(s,t) \in J_N} \{ I(i + s, j + t) - J_N(s, t) \}    (A.7.10)
A similar approach applies to dilation in gray-scale images, except that we take the
maximum value over the overlap region in the flat case, and add the values of the
structuring element J_N in the non-flat case. Thus

D_J(I)(i, j) = (I \oplus J)(i, j) = \max_{(s,t) \in J} I(i - s, j - t)    (A.7.11)

D_{J_N}(I)(i, j) = (I \oplus J_N)(i, j) = \max_{(s,t) \in J_N} \{ I(i - s, j - t) + J_N(s, t) \}    (A.7.12)
Appendix B
Mathematical descriptions of
selected edge detectors
In what follows, E(x, y) (or E) is an edge map which captures the edges in the image
I(x, y) as a result of a first derivative operator Q_name, where the subscript name is
the type of operator used (Gradient, Prewitt, Sobel or Roberts Cross). Unlike the first
derivative detectors, the Canny detector has no known closed form and is derived
in the last section of this appendix.

E_h(x, y) and E_v(x, y) (or E_h and E_v for short) denote the horizontal
and vertical components of the edge image E. The magnitude |E| of the combined
vertical and horizontal operation is given by:

|E| = \sqrt{E_h^2 + E_v^2}    (B.0.1)

The orientation θ of the edges at each point is given by:

\theta = \tan^{-1}\frac{E_v}{E_h}    (B.0.2)
B.1 Gradient
\begin{aligned}
E_h(x, y) &= \sum_{\alpha=0}^{1} (2\alpha - 1)\, I(x, y + \alpha) \\
E_v(x, y) &= \sum_{\alpha=0}^{1} (2\alpha - 1)\, I(x + \alpha, y)
\end{aligned}
\qquad (1, 1) \le (x, y) \le (M - 1, N - 1), \quad \alpha, x, y \in \mathbb{Z}    (B.1.1)
B.2 Prewitt
The horizontal and vertical components Eh and Ev, respectively, of an edge profile
E resulting from the operation Qprewitt(I(x, y)) of the Prewitt operator on the image
I(x, y) are given by:
\begin{aligned}
E_h(x, y) &= \sum_{\alpha=-1}^{1} \sum_{\beta=-1}^{1} \alpha\, I(x + \beta, y + \alpha) \\
E_v(x, y) &= \sum_{\beta=-1}^{1} \sum_{\alpha=-1}^{1} -\beta\, I(x + \beta, y + \alpha)
\end{aligned}
\qquad (2, 2) \le (x, y) \le (M - 1, N - 1), \quad \alpha, \beta, x, y \in \mathbb{Z}    (B.2.1)
This can be implemented with the following convolution masks:

prewitt_h = \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix}
\qquad
prewitt_v = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{pmatrix}    (B.2.2)
The magnitude and orientation of the edges in E resulting from the Prewitt operator
are as described by Eqn.B.0.1 to Eqn.B.0.2.
B.3 Sobel
The Sobel operator is slightly more complex than the Prewitt, owing to the greater
weight it gives to pixels closer to the reference pixel. The horizontal
and vertical components E_h and E_v, respectively, of an edge profile E resulting from
the operation Q_sobel(I(x, y)) of the Sobel operator on the image I(x, y) are given by:
\begin{aligned}
E_h(x, y) &= \sum_{\alpha=-1}^{1} \sum_{\beta=-1}^{1} \alpha(2 - |\beta|)\, I(x + \beta, y + \alpha) \\
E_v(x, y) &= \sum_{\beta=-1}^{1} \sum_{\alpha=-1}^{1} -\beta(2 - |\alpha|)\, I(x + \beta, y + \alpha)
\end{aligned}
\qquad (2, 2) \le (x, y) \le (M - 1, N - 1), \quad \alpha, \beta, x, y \in \mathbb{Z}    (B.3.1)
The accompanying masks are given by:

sobel_h = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}
\qquad
sobel_v = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}    (B.3.2)
(B.3.2)
The magnitude and orientation of the edges in E resulting from the Sobel operator are
as described by Eqn.B.0.1 to Eqn.B.0.2.
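The Sobel components, magnitude and orientation of Eqns. B.3.1-B.3.2 and B.0.1-B.0.2 can be sketched by explicit correlation over valid pixels (function names are illustrative):

```python
import numpy as np

SOBEL_H = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_V = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def correlate2d_valid(I, K):
    """Slide K over I and sum elementwise products (valid region only)."""
    M, N = I.shape
    m, n = K.shape
    out = np.empty((M - m + 1, N - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(I[i:i + m, j:j + n] * K)
    return out

def sobel(I):
    Eh = correlate2d_valid(I, SOBEL_H)
    Ev = correlate2d_valid(I, SOBEL_V)
    mag = np.hypot(Eh, Ev)                 # Eqn. B.0.1
    theta = np.arctan2(Ev, Eh)             # Eqn. B.0.2 (quadrant-aware)
    return mag, theta
```

On a vertical step edge the vertical component vanishes and the orientation is 0, which is a quick consistency check on the mask signs.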
B.4 Roberts Cross
The Roberts Cross operator is so called because the pixels involved in obtaining the edge
information are paired diagonally. The horizontal and vertical components E_h
and E_v, respectively, of an edge profile E resulting from the operation Q_robert(I(x, y))
of the Roberts Cross operator on the image I(x, y) are given by:

\begin{aligned}
E_h(x, y) &= \sum_{\alpha=0}^{1} (1 - 2\alpha)\, I(x + \alpha, y + \alpha) \\
E_v(x, y) &= \sum_{\beta=0}^{1} \sum_{\alpha=0}^{1} (\alpha - \beta)\, I(x + \beta, y + \alpha)
\end{aligned}
\qquad (1, 1) \le (x, y) \le (M, N), \quad \alpha, \beta, x, y \in \mathbb{Z}    (B.4.1)
The accompanying masks are given by:

robert_h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\qquad
robert_v = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}    (B.4.2)
(B.4.2)
The magnitude and orientation of the edges in E resulting from the Roberts cross
operator are as described by Eqn.B.0.1 to Eqn.B.0.2.
B.5 The Laplacian
The Laplacian is simply the two-dimensional equivalent of the second derivative, or
in other words, the difference image of the difference image. Its improvement over
the first derivative operators is that a local maximum in a first derivative
operator corresponds to a zero crossing in the Laplacian, which means fewer false edges.
An approximation to the Laplacian masks is given by:

Laplacian_h = \begin{pmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 0 & 0 & 0 \end{pmatrix}
\qquad
Laplacian_v = \begin{pmatrix} 0 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 1 & 0 \end{pmatrix}    (B.5.1)
One criticism of the Laplacian is that the zero crossings are very sensitive to
noise, and it is therefore desirable to remove noise prior to edge enhancement.
B.6 The Canny detector
The edge map E(x, y) for the Canny detector is obtained from the image I(x, y) as
follows. The first derivative of the Gaussian function g(x, y) is calculated first:

g(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}    (B.6.1)

and

g'(x, y) = \frac{\partial g(x, y)}{\partial x} + \frac{\partial g(x, y)}{\partial y} = -\frac{1}{2\pi\sigma^4}\left[ x\, e^{-\frac{x^2 + y^2}{2\sigma^2}} + y\, e^{-\frac{x^2 + y^2}{2\sigma^2}} \right]    (B.6.2)
The image I(x, y) is convolved with g'(x, y), thus

E(x, y) = g'(x, y) \star I(x, y)    (B.6.3)

where ⋆ denotes convolution.
The magnitude and direction |E(x, y)| and θ of the edge map E(x, y) are given by B.0.1
and B.0.2 respectively.
Non-maximum suppression
The gradient |E(x, y)| typically contains ridges around the local maxima and the follow-
ing is a scheme to suppress the non-maximum ridges, leaving a thin ridge representing
the maximum.
First, the principal edge directions (orientations) are decided. For a 3 × 3 region these
are typically horizontal (α_h), vertical (α_v), +45° and −45°. We denote these directions
generically as α.
Secondly, the non-maximum suppression scheme for a 3 × 3 region centered at every
point (x, y) of |E(x, y)|, resulting in E_s(x, y), is computed as follows:

(1) For each (x, y), find α from amongst the four directions such that |θ(x, y) − α| is
the minimum.

(2) If |E(x, y)| is less than any of its neighbours along the selected α, then E_s(x, y) = 0;
otherwise E_s(x, y) = |E(x, y)|.
Application of Thresholds
Two thresholds, Θ1 and Θ2 (Θ2 > Θ1) are applied to Es(x, y), resulting in , thus
T1(x, y) = 1 if Es(x, y) ≥ Θ1 else 0T2(x, y) = 1 if Es(x, y) ≥ Θ2 else 0 (B.6.4)
T_2 contains fewer false edges but many false negatives; in contrast, T_1 contains
more false positives. The double thresholding algorithm starts by linking the edges in
T_2 and, on reaching a disconnected point in the contour, looks for edges in the
8-neighbourhood of T_1 to restore connectivity. This continues until all edges are connected.
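This double-thresholding step can be sketched as a flood fill from strong edges (≥ Θ_2) through weak ones (≥ Θ_1); a hedged illustration, not the exact implementation used in the thesis:

```python
import numpy as np
from collections import deque

def hysteresis(Es, theta1, theta2):
    """Keep weak edges (>= theta1) only if 8-connected to a strong edge (>= theta2)."""
    strong = Es >= theta2
    weak = Es >= theta1
    out = np.zeros_like(Es, dtype=bool)
    out[strong] = True
    q = deque(zip(*np.nonzero(strong)))      # seed the fill with strong pixels
    M, N = Es.shape
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):                # visit the 8-neighbourhood
            for dj in (-1, 0, 1):
                a, b = i + di, j + dj
                if 0 <= a < M and 0 <= b < N and weak[a, b] and not out[a, b]:
                    out[a, b] = True
                    q.append((a, b))
    return out
```

Weak pixels with no chain of weak neighbours leading back to a strong pixel are discarded, which is exactly the behaviour described above.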
Bibliography
[1] P. Brodatz, Textures: A photographic album for artists and designers. Dover,
1966.
[2] K. Laws, “Textured image segmentation,” Ph.D. dissertation, University of
Southern California, Jan. 1980.
[3] H. Kaizer, “A quantification of textures on aerial photographs,” Master’s thesis,
Boston University, 1955.
[4] W. Krumbein and L. Sloss, Stratigraphy and Sedimentation, 2nd ed. Freeman,
San Francisco, 1963.
[5] ASTM E112-96, Standard Test Methods for Determining Average Grain Size,
ASTM Std., 2004.
[6] “Guide for pavement friction,” American Association of State
Highway and Transportation Officials, 2008. [Online]. Available:
http://onlinepubs.trb.org/onlinepubs/nchrp/nchrp w108.pdf
[7] B. Manjunath, J.-R. Ohm, V. Vasudevan, and A. Yamada, “Color and texture
descriptors,” IEEE Transactions on Circuits and Systems for Video Technology,
vol. 11, no. 6, pp. 703–715, Jun 2001.
[8] J. J. Gibson, “The perception of visual surfaces,” The American Journal of Psy-
chology, vol. 63, no. 3, pp. 367–384, July 1950.
[9] B. Julesz, “Visual pattern discrimination,” IRE Transactions on Information
Theory, vol. 8, no. 2, pp. 84–92, Feb 1962.
[10] B. Julesz, E. Gilbert, L. Shepp, and H. Frisch, “Inability of humans to discrim-
inate between visual textures that agree in second-order statistics – revisited,”
Perception, vol. 2, no. 4, pp. 391–405, 1973.
[11] B. Julesz, E. Gilbert, and J. Victor, “Visual discrimination of textures with iden-
tical third-order statistics,” Biological Cybernetics, vol. 31, no. 3, pp. 137–140,
1978.
[12] B. Julesz, “Nonlinear and cooperative processes in texture perception,” Scientific
American, vol. 232, pp. 34–43, 1981.
[13] ——, “Textons, the elements of texture perception, and their interactions,” Na-
ture, vol. 290, pp. 91–97, 1981.
[14] ——, “A theory of preattentive texture discrimination based on first-order statis-
tics of textons,” Biological Cybernetics, vol. 41, pp. 131–138, 1981.
[15] A. Rosenfeld and E. Troy, “Visual texture analysis tr-116,” Computer Science
Center, University of Maryland, College Park, Tech. Rep., June 1970.
[16] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image clas-
sification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6,
pp. 610–621, Nov. 1973.
[17] K. Hayes, A. Shah, and A. Rosenfeld, “Texture coarseness: Further experiments,”
IEEE Transactions on Systems Man and Cybernetics, vol. SMC-4, pp. 467–472,
Sept. 1974.
[18] H. Tamura, S. Mori, and T. Yamawaki, “Textural features corresponding to visual
perception,” IEEE Transactions on Systems Man and Cybernetics, vol. smc-8,
no. 6, pp. 460–472, June 1978.
[19] J. Klein and J. Serra, “The texture analyser,” Journal of microscopy, no. 95, pp.
349–356, Apr 1972.
[20] M. P. Levine, Vision in Man and Machine. McGraw Hill, 1985.
[21] G. Matheron, Random sets and integral geometry. Wiley, 1975.
[22] J. Coggins, “A framework for texture analysis based on spatial filtering,” Ph.D.
dissertation, Michigan State University, 1982.
[23] M. Tuceryan and A. Jain, The Handbook of Pattern Recognition and Computer
Vision, 2nd ed. Singapore: World Scientific, 1998, ch. 2. Texture Analysis, pp.
207 –248.
[24] A. W. Busch, "Wavelet transform for texture analysis with applications to document
analysis," Ph.D. dissertation, Queensland University of Technology, School
of Electrical and Electronic Systems Engineering, 2004.
[25] J. Sklansky, “Image segmentation and feature extraction,” IEEE Transactions
on Systems, Man and Cybernetics, vol. SMC-8, pp. 237–247, Feb 1978.
[26] R. Haralick, “Statistical and structural approaches to texture,” Proceedings of
the IEEE, vol. 67, pp. 786–804, 1979.
[27] W. Richards and A. Polit, “Texture matching,” Kybernetics, vol. 16, pp. 155–162,
1974.
[28] P. Soille, Morphological Image Analysis - Principles and Applications. Springer,
2004.
[29] W. Krumbein and F. Pettijohn, Manual of Sedimentary Petrography. New York,
Appleton, 1938.
[30] R. Oakey, M. Green, P. Carling, M. Lee, D. Sear, and J. Warburton, “Grain-
shape analysis: A new method for determining representative particle shapes for
populations of natural grains,” Journal of Sedimentary Research, vol. 74, no. 6,
pp. 1065–1073, 2005.
[31] J.-S. Chen, M. Chang, and K. Lin, “Influence of coarse aggregate shape on the
strength of asphalt concrete mix,” Journal of the Eastern Asia Society for Trans-
portation Studies, vol. 6, pp. 1062–1075, 2005.
[32] A. Oren, O. Onal, G. Ozden, and A. Kaya, “Nondestructive evaluation of
volumetric shrinkage of compacted mixtures using digital image analysis,” Engi-
neering Geology, vol. 85, no. 3-4, pp. 239–250, Jan. 2006.
[33] S. Mooney and W. Nipattasuk, “Quantification of the effects of soil compaction
on water flow using dye tracers and image analysis,” Soil Use and Management,
vol. 19, no. 4, pp. 356–363, Aug 2003.
[34] D. Blostein and N. Ahuja, “Shape from texture: integrating texture-element
extraction and surface estimation,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 11, no. 12, pp. 1233–1251, Dec 1989.
[35] A. Dunford and L. Caudwell, “The future of skid resistance? New methods for
specification and measurement,” Asphalt Professional, no. 37, Mar. 2009.
[36] K. A. Stevens, “The visual interpretation of surface contours,” Artificial Intelli-
gence, vol. 17, no. 1-3, pp. 47 – 73, 1981.
[37] A. P. Witkin, “Recovering surface shape and orientation from texture,” Artificial
Intelligence, vol. 17, no. 1-3, pp. 17 – 45, 1981.
[38] R. Bajcsy and L. Lieberman, “Texture gradient as a depth cue,” Computer Graph-
ics and Image Processing, vol. 5, no. 1, pp. 52 – 67, 1976.
[39] F. Ulupinar and R. Nevatia, “Perception of 3-D surfaces from 2-D contours,”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 1,
pp. 3–18, Jan 1993.
[40] A. Materka and M. Strzelecki, “Texture analysis methods - a review,” Institute
of Electronics, Technical University of Lodz, Brussels, Tech. Rep., 1998.
[41] B. Manjunath and W. Ma, “Texture features for browsing and retrieval of image
data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18,
no. 8, pp. 837–842, Aug 1996.
[42] R. Jain, R. Kasturi, and B. Schunck, Machine Vision. McGraw-Hill Series in
Computer Science, 1995.
[43] R. Chellappa and S. Chatterjee, “Classification of textures using Gaussian Markov
random fields,” IEEE Transactions on Acoustics, Speech and Signal Processing,
vol. 33, no. 4, pp. 959–963, Aug. 1985.
[44] P. Campisi, E. Maiorana, A. Neri, and G. Scarano, “Video texture modelling and
synthesis using fractal processes,” IET Image Processing, vol. 2, no. 1, pp. 1–17,
Feb 2008.
[45] D. Rubin, “A simple autocorrelation algorithm for determining grain size from
digital images of sediment,” Journal of Sedimentary Research, vol. 74, no. 1, pp.
160–165, Jan. 2004.
[46] K. Rao, D.-N. Kim, and J. Hwang, Fast Fourier Transform: Algorithms and
Applications. Heidelberg, Germany: Springer, 2010.
[47] J. Daugman, “Uncertainty relations for resolution in space, spatial frequency, and
orientation optimized by two-dimensional visual cortical filters,” Journal of the
Optical Society of America A, vol. 2, no. 7, pp. 1160–1169, Jul 1985.
[48] S. Mallat, “A theory for multiresolution signal decomposition: the wavelet rep-
resentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 11, no. 7, pp. 674 –693, Jul 1989.
[49] M. Unser, “Texture classification and segmentation using wavelet frames,” IEEE
Transactions on Image Processing, vol. 4, no. 11, pp. 1549–1560, Nov. 1995.
[50] M. Tuceryan and A. Jain, “Texture segmentation using Voronoi polygons,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 2, pp.
211–216, Feb 1990.
[51] G. Cross and A. Jain, “Markov random field texture models,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. PAMI-5, no. 1, pp. 25–39,
Jan 1983.
[52] C. Kervrann and F. Heitz, “A Markov random field model-based approach to
unsupervised texture segmentation using local and global spatial statistics,” IEEE
Transactions on Image Processing, vol. 4, no. 6, pp. 856–862, Jun 1995.
[53] E. Cox, “A method of assigning numerical percentage values to the degree of
roundness,” Journal of Paleontology, vol. 1, pp. 179–183, 1927.
[54] M. Bottema, “Circularity of objects in images,” in IEEE International Conference
on Acoustics, Speech, and Signal Processing, 2000. ICASSP ’00., vol. 6, 2000, pp.
2247–2250.
[55] C. Di Ruberto and A. Dempster, “Circularity measures based on mathematical
morphology,” Electronics Letters, vol. 36, no. 20, pp. 1691–1693, Sep 2000.
[56] G. Matheron, Eléments pour une théorie des milieux poreux. Paris, France:
Masson, 1967.
[57] J. Serra, Image Analysis and Mathematical Morphology. Academic Press, 1982.
[58] E. Dougherty, An Introduction to Morphological Image Processing. SPIE Optical
Engineering Press, Washington USA, 1992.
[59] R. Gonzalez and R. Woods, Digital Image Processing. Pearson Prentice Hall,
2008.
[60] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans-
actions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, Jan. 1979.
[61] L. Garcia-Ugarriza, E. Saber, V. Amuso, M. Shaw, and R. Bhaskar, “Automatic
color image segmentation by dynamic region growth and multimodal merging of
color and texture information,” in IEEE International Conference on Acoustics,
Speech and Signal Processing, ICASSP 2008., Apr 2008, pp. 961–964.
[62] F. Meyer, “Flooding and segmentation,” in Mathematical Morphology and its
Applications to Image and Signal Processing. Kluwer Academic Publishers, 2000.
[63] I. Levner and H. Zhang, “Classification-driven watershed segmentation,” IEEE
Transactions on Image Processing, vol. 16, no. 5, pp. 1437–1445, May 2007.
[64] I. McEwan, T. Sheen, G. Cunningham, and A. Allen, “Estimating the size compo-
sition of sediment surfaces through image analysis,” Proceedings of the Institution
of Civil Engineers: Water, Maritime and Energy, vol. 142, no. 4, pp. 189–195,
2000.
[65] F. Y. Shih and S. Cheng, “Adaptive mathematical morphology for edge linking,”
Information Sciences, vol. 167, no. 1-4, pp. 9 – 21, 2004.
[66] J. Yang and X. Li, “Boundary detection using mathematical morphology,” Pat-
tern Recognition Letters, vol. 16, no. 12, pp. 1277 – 1286, 1995.
[67] H. Sorby, “On the structure and origin of non-calcareous stratified rocks,” Quar-
terly Journal of the Geological Society of London, vol. 36, pp. 46–92, 1880.
[68] C. Wentworth, “The shapes of pebbles,” U.S. Geological Survey Bulletin, vol. 27,
pp. 91–114, 1922.
[69] ——, “The shapes of beach pebbles,” U.S. Geological Survey Professional Paper,
vol. 131-C, pp. 75–83, 1922.
[70] A. Pentland, “A method for measuring the angularity of sands,” Proceedings and
Transactions of the Royal Society of Canada - Titles and Abstracts, vol. 21, 1927.
[71] H. Wadell, “Volume, shape and roundness of rock particles,” Journal of Geology,
vol. 40, pp. 443–451, 1932.
[72] P. Barrett, “The shape of rock particles, a critical review,” Sedimentology, vol. 27,
no. 3, pp. 291–303, 1980.
[73] J. Fraser, “Experimental study of the porosity and permeability of clastic sedi-
ments,” Journal of Geology, vol. 43, pp. 934–938; 962–964, 1935.
[74] J. Mazzulo and C. Ritter, “Influence of sediment source on the shape and surface
texture of glacial quartz sand grains,” Geology, vol. 19, no. 4, pp. 384–388, Apr
1991.
[75] R. Lake and S. Hinch, “Acute effects of suspended sediment angularity on juvenile
coho salmon (Oncorhynchus kisutch),” Canadian Journal of Fisheries and Aquatic
Sciences, vol. 56, no. 5, pp. 862–867, 1999.
[76] M. Herrin and W. Goetz, “Effect of aggregate shape on stability of bituminous
mixes,” vol. 33, pp. 293–308, 1954.
[77] G. Maupin, “Effect of particle shape and surface texture on the fatigue behaviour
of asphalt concrete,” vol. 313, pp. 55–62, 1970.
[78] C.-Y. Kuo and R. Freeman, “Imaging indices for quantification of shape, angu-
larity, and surface texture of aggregates,” Journal of the Transportation Research
Board, vol. 1721, no. 00-0686, pp. 57–65, 2000.
[79] H. Schwarcz and K. Shane, “Measurement of particle shape by Fourier analysis,”
Sedimentology, vol. 13, no. 3-4, pp. 213–231, 1969.
[80] R. Ehrlich and B. Weinberg, “An exact method for characterization of grain
shape,” Journal of sedimentary research, vol. 40, no. 1, pp. 205–212, 1970.
[81] M. Diepenbroek, A. Bartholoma, and H. Ibbeken, “How round is round? A new
approach to the topic ‘roundness’ by Fourier grain shape analysis,” Sedimentology,
vol. 39, no. 3, pp. 411–422, 1992.
[82] E. Bowman, K. Soga, and W. Drummond, “Particle shape characterisation using
Fourier descriptor analysis,” Geotechnique, vol. 51, no. 6, pp. 545–554, 2001.
[83] C.-Y. Kuo and R. Freeman, “Image analysis evaluation of aggregates for asphalt
concrete mixtures,” Journal of the Transportation Research Board, vol. 1721, no.
98-0134, pp. 65–71, 1998.
[84] C. F. Mora and A. K. H. Kwan, “Sphericity, shape factor, and convexity mea-
surement of coarse aggregate for concrete using digital image processing,” Cement
and Concrete Research, vol. 30, no. 3, pp. 351 – 358, 2000.
[85] C. Rao, E. Tutumluer, and I. Kim, “Quantification of coarse aggregate an-
gularity based on image analysis,” Journal of the Transportation Research Board,
vol. 1787, no. 02-3124, pp. 117–124, 2002.
[86] G. Drevin and L. Vincent, “Granulometric determination of sedimentary rock
particle roundness,” in Proceedings of the International Symposium on Mathe-
matical Morphology, Sydney, Australia, Apr 2002, pp. 315–325.
[87] L. Narens, “A nonstandard proof of the Jordan curve theorem,” Pacific Journal
of Mathematics, vol. 36, no. 1, pp. 219–229, 1971.
[88] T. Hales, “The Jordan curve theorem, formally and informally,” The American
Mathematical Monthly, vol. 114, no. 10, pp. 882–894, 2007.
[89] O. Kiselman, “Digital Jordan curve theorems,” Discrete Geometry for Computer
Imagery, pp. 46–56, 2000.
[90] E. Melin, “Digital surfaces and boundaries in Khalimsky spaces,” Journal of Math-
ematical Imaging and Vision, vol. 28, no. 2, pp. 169–177, Jun 2007.
[91] A. Rosenfeld, Picture Processing by Computer. New York: Academic Press,
1969.
[92] A. Rosenfeld, “Digital topology,” The American Mathematical Monthly, vol. 86,
no. 8, pp. 621–630, Oct 1979.
[93] W. Benesova, A. Rinnhofer, and G. Jakob, “Determining the average grain size of
super-alloy micrographs,” in IEEE International Conference on Image Process-
ing, Oct 2006, pp. 2749–2752.
[94] M. Brieseck, W. Lengauer, B. Gnei, K. Wagner, and S. Wagner, “A straightfor-
ward method for analysing the grain-size distribution in tungsten carbide - cobalt
hardmetals,” Microchimica Acta, pp. 252–260, Feb 2010.
[95] L. C. Sime and R. Ferguson, “Information on grain sizes in gravel-bed rivers by
automated image analysis,” Journal of Sedimentary Research, vol. 73, no. 4, pp.
630–636, Jul 2003.
[96] D. Marr and E. Hildreth, “Theory of edge detection,” Proceedings of the Royal
Society of London. Series B, Biological Sciences, vol. 207, no. 1167, pp. 187–217,
Feb 1980.
[97] J. Canny, “A computational approach to edge detection,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679–697, Nov.
1986.
[98] Y. Chen and E. Dougherty, “Gray-scale morphological granulometric texture clas-
sification,” Optical Engineering, vol. 33, no. 8, pp. 2713–2722, Aug 1994.
[99] A. Rosenfeld and M. Thurston, “Edge and curve detection for visual scene anal-
ysis,” IEEE Transactions on Computers, vol. C-20, no. 5, pp. 562 – 569, May
1971.
[100] A. Witkin, “Scale-space filtering,” in Proceedings of the 8th International Joint
Conference on Artificial Intelligence, Karlsruhe, Germany, 1983, pp. 1019–1022.
[101] H. Jeong and C. Kim, “Adaptive determination of filter scales for edge detection,”
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 5,
pp. 579–585, May 1992.
[102] M. Bennamoun, B. Boashash, and J. Koo, “Optimal parameters for edge de-
tection,” in IEEE International Conference on Systems, Man and Cybernetics,
1995. Intelligent Systems for the 21st Century., vol. 2, Oct 1995, pp. 1482–1488.
[103] G. Deng and L. Cahill, “An adaptive gaussian filter for noise reduction and
edge detection,” in Nuclear Science Symposium and Medical Imaging Conference,
vol. 3, Oct 1993, pp. 1615–1619.
[104] S. Coleman, B. Scotney, and M. Herron, “Content-adaptive feature extraction
using image variance,” Pattern Recognition, vol. 38, pp. 2426–2436, 2005.
[105] M. Basu, “Gaussian-based edge-detection methods - a survey,” IEEE Transactions
on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 32,
no. 3, pp. 252–260, Aug 2002.
[106] D. Croney and P. Croney, Design and Performance of Road Pavements, 3rd ed.
McGraw-Hill, 1998.
[107] P. Cairney, “AP-T96/08 - Road surface characteristics and crash occurrence: A
literature review,” Austroads, Tech. Rep., 2008.
[108] AP-G83/05 - Guidelines for the Management of Road Surface Skid Resistance,
Austroads, 2005.
[109] “Inventory of road surface characteristics measuring equipment,” PIARC Tech-
nical Committee on surface characteristics, Tech. Rep., 1995.
[110] P. Roe and P. Hillier, “AP-T72/06 - Report on skid resistance test device trials,”
Austroads, Tech. Rep., 2006.
[111] Report of the Committee on Surface Characteristics, Proceedings, Brussels, 1987.
[112] P. Cairney and E. Styles, “A pilot study of the relationship between macrotexture
and crash occurrence,” ARRB Transport Research, Tech. Rep., Feb 2005.
[113] W. Horne and R. Dreher, “Phenomena of pneumatic tyre hydroplaning,” National
Aeronautics and Space Administration, NASA, USA, Tech. Rep., Nov. 1963.
[114] M. Gardiner, B. Studdard, and C. Wagner, “Influence of hot mix asphalt macro-
texture on skid resistance,” Auburn University, Civil Engineering Department,
Tech. Rep., Oct. 2001.
[115] Technical Advisory - Surface Texture for Asphalt and Concrete Pavements,
Federal Highway Administration, June 2005. [Online]. Available:
http://www.fhwa.dot.gov/legsregs/directives/techadvs/t504036.htm
[116] M. Woodbridge, A. Dunford, and P. Roe, “PPR114 - Wehner-Schulze machine:
First UK experience with a new test for polishing resistance in aggregates,”
Crowthorne: TRL, 2006.
[117] ASTM E965-96, Standard Test Method for Measuring Pavement Macrotexture
Depth Using a Volumetric Technique, ASTM Std., 2006.
[118] ASTM E1845-01, Standard Practice for Calculating Pavement Macrotexture
Mean Profile Depth, ASTM Std., 2005.
[119] ISO 13473-1:1997 : Characterization of pavement texture by use of surface pro-
files - Part 1: Determination of Mean Profile Depth, International Organization
for Standardization Std., 1997.
[120] K. DeGray and D. Hancock, “Ground-based image and data acquisition systems
for roadway inventories in New England: A synthesis of highway practice,” The
New England Transportation Consortium, Tech. Rep., Aug. 2002.
[121] A. Dunford, A. Ruff, and R. Whiteoak, “Measuring skid resistance without con-
tact,” Transport Research Laboratory, UK, Tech. Rep., 2008.
[122] R. Elunai, V. Chandran, and P. Mabukwa, “Digital image processing techniques
for pavement macro-texture analysis,” in Proceedings of the 24th ARRB Confer-
ence, Melbourne, Australia, Oct 2010.
[123] “Tire pavement noise and safety performance,” Nov. 1996. [Online]. Available:
http://www.fhwa.dot.gov/legsregs/directives/policy/sa 96 06.htm
[124] “Bituminous sealing manual,” Transit New Zealand, Tech. Rep., 1993.
[125] Y. Choi, “AP-T112/08 - Review of surface texture measurement method for seal
design input,” Austroads, Tech. Rep., 2008.
[126] ASTM E2157-01, Standard Test Method for Measuring Pavement Macrotexture
Properties Using the Circular Track Meter, ASTM Std., 2005.
[127] D. Hanson and B. Prowell, “Evaluation of circular texture meter for measuring
surface texture of pavements,” Transportation Research Record: Journal of the
Transportation Research Board, pp. 88–96, Dec. 2006.
[128] P. Roe and R. Sinhal, “How do you compare? Correlation and calibration of skid
resistance and road surface friction measurement devices,” in Managing Road and
Runway Surfaces to Improve Safety, May 2005.
[129] R. Schonfeld, “Photo-interpretation of pavement skid resistance in practice,”
Transportation Research Record, no. 523, pp. 65–75, 1974.
[130] A. El Gendy and A. Shalaby, “Image requirements for three-dimensional measure-
ment of pavement macrotexture,” Journal of the Transportation Research Board,
vol. 2068, pp. 126–134, 2008.
[131] J. Vilaca, J. Fonseca, A. Pinho, and E. Freitas, “A new machine for acquiring
pavement texture,” in IEEE International Conference on Computational Cyber-
netics, 2009, pp. 97–102.
[132] D. Gransberg, I. Karaca, and W. Burkett, “Quantifying seal coat surface condi-
tion using digital image processing based on information theory,” International
Journal of Pavement Engineering, vol. 3, no. 4, pp. 197–205, Dec. 2002.
[133] B. Pidwerbesky, J. Waters, D. Gransberg, and R. Stemprok, “Road surface tex-
ture measurement using digital image processing and information theory,” Land
Transport New Zealand, Tech. Rep., 2006.
[134] K. A. Stevens, “Slant-tilt: The visual encoding of surface orientation,” Biological
Cybernetics, vol. 46, no. 3, pp. 183–195, 1983.
[135] J. Saunders, “The effect of texture relief on perception of slant from texture,”
Perception, vol. 32, no. 2, pp. 211–233, Feb 2003.
[136] K. Dana, B. van Ginneken, S. Nayar, and J. Koenderink, “Reflectance and texture
of real-world surfaces,” ACM Transactions on Graphics, vol. 18, no. 1, pp. 1–34,
1999.