RETROFITTING MACHINE VISION SYSTEM IN THE PROFILE PROJECTOR
A PROJECT REPORT
Submitted by
M. MAHESH [Reg. No. 1021110383]
THOMAS MATHEW [Reg. No. 1021110436]
NIROSH.Y [Reg. No. 1021110437]
Under the guidance of
Mrs. A. VIJAYA M.E (Ph.D) (Assistant Professor/Sr.G, School of Mechanical Engineering)
in partial fulfillment for the award of the degree
of
BACHELOR OF TECHNOLOGY
in
MECHANICAL ENGINEERING
of
FACULTY OF ENGINEERING & TECHNOLOGY
S.R.M. Nagar, Kattankulathur, Kancheepuram District, Tamil Nadu
MAY 2015
BONAFIDE CERTIFICATE
Certified that this project report titled “RETROFITTING
MACHINE VISION SYSTEM IN THE PROFILE PROJECTOR” is
the bonafide work of “M.MAHESH [1021110383], THOMAS
MATHEW [1021110436], NIROSH.Y [1021110437]” who carried out
the project work under my supervision. Certified further, that to the best
of my knowledge the work reported herein does not form part of any other
project report or dissertation on the basis of which a degree or award was
conferred on an earlier occasion on this or any other candidate.
SIGNATURE SIGNATURE
Mrs.A.VIJAYA M.E(Ph.D) Dr. S. PRABHU, M.E (Ph.D)
GUIDE HEAD OF THE DEPARTMENT
Assistant Professor (Sr.G)               Department of Mechanical Engineering
Department of Mechanical Engineering     SRM University, Kattankulathur
SRM University, Kattankulathur
Signature of the Internal Examiner Signature of the External Examiner.
ABSTRACT
This project aims at retrofitting a vision system to an existing profile
projector. The profile projector is a machine which measures the
dimensions of a given specimen by magnifying it, measuring the distance
between chosen coordinates, and transferring these coordinates into
AutoCAD software. This is a time-consuming process and can easily lead to
errors, so this project retrofits machine vision to the profile projector
to capture the magnified image directly and find the dimensions of the
specimen through image processing in MATLAB software. The pixel size of
the captured image is obtained and multiplied by the reduction factor to
get the real dimensions. These dimensions are compared with the true
values to perform error analysis and to show that the dimensions obtained
as a result of our project are more accurate than those of the
traditional method, and that the process is less time consuming.
i
ACKNOWLEDGEMENT
We would like to express immense gratitude with pleasure to those individuals
who have either directly or indirectly contributed to our needs at the right
time for the development and successful completion of this project. A special
and warm expression of gratitude to Dr. S. PRABHU, Ph.D (Sr.G), Professor and
Head, Department of Mechanical Engineering, SRM University, Kattankulathur,
for generously extending his help and valuable suggestions. We would also
like to express our sincere thanks to Mr. R. SENTHIL, M.E, Assistant
Professor (S.G), Department of Mechanical Engineering.
We would like to express our deepest sense of gratitude to our guide
Mrs. A. VIJAYA M.E (Ph.D), Assistant Professor, Department of Mechanical
Engineering, SRM University, Kattankulathur, for her valuable guidance and
encouragement throughout the entire work. She has been a constant source of
inspiration and has provided timely help and suggestions throughout our work.
We are deeply grateful to her.
We also wish to place on record our deep sense of gratitude and thanks to all
the lab assistants for their unstinted support and help during the entire
period of our project.
Finally we wish to thank our beloved parents and friends for their enduring
patience, support, love and affection, and for constantly standing by us
during the entire period of our project.
ii
TABLE OF CONTENTS
CHAPTER TITLE PAGE NO
ABSTRACT i
ACKNOWLEDGEMENT ii
LIST OF TABLES v
LIST OF FIGURES vi
1 INTRODUCTION 1
1.1 PROFILE PROJECTOR 1
1.2 COORDINATE METROLOGY 2
1.3 MACHINE VISION SYSTEM 4
2 LITERATURE REVIEW 5
2.1 NEED FOR OUR PROJECT 6
3 RETROFITTING PROFILE PROJECTOR 7
3.1 OBJECTIVE 7
3.2 EQUIPMENTS USED 7
3.3 METHODOLOGY 7
3.4 EXPERIMENTAL SETUP 8
iii
4 IMAGE PROCESSING IN MATLAB 13
4.1 INTRODUCTION 13
4.2 THRESHOLDING 13
4.3 CONNECTED COMPONENTS 15
4.4 DETECTION OF CIRCLE USING
HOUGH TRANSFORMATION 16
4.5 ALGORITHM 17
5 RESULT AND DISCUSSION 21
6 ERROR ANALYSIS 26
7 CONCLUSION 31
APPENDICES 32
REFERENCE 37
iv
LIST OF TABLES
TABLE NAME PAGE
3.5.5 Specification of profile projector 12
6.2 Percentage error for horizontal lines 28
6.3 Percentage error for vertical lines 29
6.4 Percentage error for circles 30
v
LIST OF FIGURES
FIGURES PAGE
1.3.2 Machine vision system 4
3.4.1 Experimental setup 8
3.4.2 Adjustable tripod stand 9
3.4.3 Foldable stand 10
3.4.4 Layout of setup 11
4.2.1 Original image 14
4.2.2 Image after thresholding 14
4.4.1 Center point of circle 16
5.1 Original image with numbered circles 21
5.2 Thresholding and edge detection 22
5.3 Image after plotting the edges 23
6.1 Output from profile projector 26
vi
CHAPTER 1
INTRODUCTION
1.1.PROFILE PROJECTOR
A profile projector, often simply called an optical comparator, is a device that
applies the principles of optics to the inspection of manufactured parts. In a
profile projector, the magnified silhouette of a part is projected onto the
screen, and the dimensions and geometry of the part are measured against
prescribed limits.
The profile projector, also known as a shadowgraph, is a useful instrument for
the quality control inspection team in a small-parts machine shop or on a
production line. It works on the principle of coordinate metrology. The
projector magnifies the profile of the specimen about ten times and displays it
on the built-in projection screen, which carries a grid. The specimen is kept
on a movable table, which can be moved along the X, Y and Z directions, and
this movement is recorded precisely by the computer. The projection screen
displays the magnified profile of the specimen, which makes linear measurements
easier to take. An edge of the specimen to be examined may be lined up with the
grid on the screen, and from there simple measurements may be taken of the
distances to other points.
Even though this is done on a magnified profile of the specimen, which reduces
the amount of error considerably, the process is still not perfectly accurate
and is time consuming. Our project focuses on retrofitting a vision system to a
profile projector in order to overcome these drawbacks of the equipment.
Applications
To find the dimensions of small parts accurately.
To inspect and compare very small and complex parts that play a significant
role in a system's structure, as a quality-control application.
1
Advantages
A profile projector can reveal imperfections such as scratches, indentations or
undesirable chamfers that neither micrometers nor calipers can reveal.
The profile projector measures in 2-D space. Unlike micrometers and calipers,
which measure one dimension at a time, profile projectors measure length and
width simultaneously. Profile projectors save time. Ease-of-use factors and
ergonomic designs reduce inspection time, retraining costs and operator
fatigue.
Most measuring instruments are subject to wear and need frequent
recertification, which takes them out of service and adds an additional cost.
Drawbacks Of Computerized Profile Projector:
In a computerized horizontal profile projector, the end points are measured
manually and the coordinates of each dimensional element have to be selected in
this manner, which makes it a time-consuming process. The manual selection of
points can also lead to errors, as the point of view changes from person to
person.
1.2 COORDINATE METROLOGY:
Coordinate metrology is a field of metrology that is becoming increasingly
popular in the manufacturing industry. It enables three-dimensional
measurements to be carried out on complex objects in a single setup. The
instrument used for this purpose is known as the coordinate measuring machine,
or CMM. In general, the CMM comprises three frames that move along three
orthogonal axes, i.e. the X-, Y- and Z-axes. The displacement along each axis
is measured by a linear measurement system and the readings are sent to an
electronic controller. The electronic controller is connected to a computer
that also enables various types of data processing to be performed. These
machines are made in various sizes, and the methods of operation are based on
the position of each axis from an origin/reference point. The basic CMM
consists of three axes, each provided with a guide way that enables precise
movement along a straight line. Each guide way has a carrier that moves along
it; each carrier supports the next, so motion along one guide way is referenced
to the previous one. Each axis is fitted with a precision scale that records
the position of the carrier measured from a reference point.
ADVANTAGES:
1.2.1.Flexibility:
CMMs are essentially universal measuring machines and need not be dedicated to
any particular task. They can measure almost any dimensional characteristic of
a part configuration, including cams, gears and warped surfaces.
1.2.2.Reduced Setup Time:
Part alignment and establishing appropriate reference points are very time consuming
with conventional surface plate inspection techniques. Software allows the operator to
define the orientation of the part on the CMM, and all subsequent data are corrected
for misalignment between the parts-reference system and the machine coordinates.
1.2.3.Single Setup:
Most parts can be inspected in a single setup, thus eliminating the need to reorient the
parts for access to all features.
1.2.4.Improved Accuracy:
All measurements in a CMM are taken from a common geometrically fixed
measuring system, eliminating the introduction and the accumulation of errors that
can result with hand-gage inspection methods and transfer techniques.
1.2.5.Reduced Operator Influence
The specimen is kept in one location and all operations are performed by
changing the coordinates, which means the operator does not have to touch the
specimen directly during the process.
1.2.6.Improved Productivity:
The above-mentioned advantages help make CMMs more productive than
conventional inspection techniques. Furthermore, productivity is realized through the
computational and analytical capabilities of associated data-handling systems,
including calculators and all levels of computers.
3
1.3.MACHINE VISION SYSTEM:
Machine vision refers to the industrial application of vision technology. It
describes the understanding and interpretation of technically obtained images
for controlling production processes, and it has evolved into one of the key
technologies in industrial automation. Machine vision means applying computer
vision in industry. Using machine vision, it is possible to improve coordinate
measuring systems such as profile projectors, as machine vision is more
accurate than the human eye and rarely causes errors. The flow diagram of a
machine vision system is shown in Figure 1.3.2.
1.3.1.APPLICATIONS
Machine vision systems are widely used in the manufacture of semiconductors,
where they carry out inspection of silicon wafers, microchips and components
such as resistors, capacitors and lead frames. In the automotive industry,
machine vision systems are used in control systems for industrial robots,
inspection of painted surfaces, welding quality control, rapid prototyping,
checking engine blocks and detecting defects in various components, as well as
in general product checking and quality control processes.
Figure 1.3.2: Machine vision system
4
CHAPTER 2
LITERATURE REVIEW
A computer-vision-based solution to retrofit existing flat displays into
interactive surfaces, Priyadharsana and L. L. De Silva Lokuge (Information
Institute of Technology, Colombo)
This paper outlines Interactive Display, a novel and cost effective solution to create
vision-based interactive surface systems by retrofitting existing regular displays. The
proposed solution uses a regular off-the-shelf web camera as the main input device,
and the raw image data captured by the web camera are processed using several image
processing algorithms such as, background subtraction and skin color detection, to
identify foreground objects. Interactive Display's configuration addresses complexity
and cost related issues with currently available computer-vision-based interactive
surfaces. The proposed system provides an opening for more people to experience a
new level of interactions with computing systems using the existing and commonly
available technologies. The presented system is capable of responding in real-time for
user interactions and provides a cost-effective configuration that requires minimum
engineering efforts to set-up.
A flexible new technique for camera calibration, Zhengyou Zhang (Senior
Member, IEEE)
This paper proposes a flexible new technique to easily calibrate a camera. It
only requires the camera to observe a planar pattern shown at a few different
orientations. Either the camera or the planar pattern can be freely moved.
Radial lens distortion is modeled. The proposed procedure consists of a closed-form
solution, followed by a nonlinear refinement based on the maximum likelihood
criterion. Both computer simulation and real data have been used to test the proposed
technique and very good results have been obtained. Compared with classical
techniques which use expensive equipment such as two or three orthogonal planes,
the proposed technique is easy to use and flexible.
5
Introductory computational science using MATLAB and image processing, D. Brian
Larkins and William Harvey (Dept. of Computer Science and Engineering, The Ohio
State University, Columbus)
This is an exercise that uses edge detection and basic image processing to
introduce MATLAB programming in a non-trivial scientific application. MATLAB
has strong support for operating on image data, which allows practical
engineering problems to be balanced with core computer science concepts such as
functional abstraction, conditional execution and iteration.
2.1.NEED FOR OUR PROJECT
Measuring the dimensions of a specimen on a profile projector is a
time-consuming and difficult process. Each dimensional entity has to be
measured by probing different points on the projected image of the specimen in
order to obtain its coordinates; the data then has to be transferred from the
R-COSMOS software to AutoCAD and manually edited to connect the lines. This is
a time-consuming process and can lead to errors.
In this project, once the image is captured by the CCD camera it can be
transferred to a computer using EasyCAP, and the work becomes much simpler as
the rest of the process is done by the MATLAB software. The time taken is
reduced considerably, and human error is also reduced considerably.
6
CHAPTER 3
RETROFITTING PROFILE PROJECTOR
3.1.OBJECTIVE:
To retrofit a vision system into the existing profile projector and measure the
dimensions of the object through image acquisition and processing with MATLAB
programs, in order to obtain the dimensional features.
3.2.EQUIPMENT USED:
1. An existing profile projector.
2. A CCD camera for capturing clear images of the object or specimen.
3. A removable tripod stand for holding the camera.
4. A wooden foldable stand which acts as a base for the tripod stand.
5. C-clamp for fixing the camera on top of the tripod.
6. EasyCAP USB cable and software.
7. MATLAB 2014
3.3.METHODOLOGY:
1. Place the given specimen on the profile projector and adjust the X, Y and Z
axes to get a sharp image in the middle of the screen.
2. Find an approximate focal point of the camera that gives a sharp image of the
screen and fix the camera at this location.
3. Focus the lens in the camera on the screen and change the light intensity on
the camera manually until a satisfactory image is obtained.
4. Use EasyCAP software and capture the image.
5. Run the image through the MATLAB code and find out the dimensions of the
specimen.
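The project's measurement step is carried out in MATLAB; purely as an illustrative sketch of the same pipeline (and not the project's code), the steps above can be mirrored in Python on a toy image. The toy specimen, the threshold level and the use of the magnification factor 0.040711 mm/pixel (reported by the program output in Chapter 5) are assumptions made for this example.

```python
# Sketch of the measurement pipeline: threshold the image, locate the
# specimen's silhouette, and scale pixel extents to millimetres.

MM_PER_PIXEL = 0.040711  # magnification factor from the program output

def threshold(image, level):
    """Binarize: pixels brighter than `level` become 1, the rest 0."""
    return [[1 if p > level else 0 for p in row] for row in image]

def bounding_box(binary):
    """Return (min_row, min_col, max_row, max_col) of the foreground."""
    coords = [(r, c) for r, row in enumerate(binary)
              for c, v in enumerate(row) if v == 1]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

# Toy specimen profile: a bright 4x3-pixel rectangle on a dark background.
image = [[0] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(3, 6):
        image[r][c] = 255

binary = threshold(image, 128)
r0, c0, r1, c1 = bounding_box(binary)
width_mm = (c1 - c0 + 1) * MM_PER_PIXEL
height_mm = (r1 - r0 + 1) * MM_PER_PIXEL
print(round(width_mm, 6), round(height_mm, 6))
```

In the actual setup, the capture, thresholding and scaling are handled by the MATLAB code listed in the Appendix.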
7
3.4. EXPERIMENTAL SETUP:
The profile projector uses a back-lighting mechanism: light from the source
passes the specimen, which is kept in front of it, travels through a series of
lenses and falls on the screen after being magnified 10 times. Light will be
lacking in those regions that form part of the specimen (as shown in Figure
3.4.1).
Figure 3.4.1: Experimental setup.
8
This gives us a profile of the object. To capture this profile, a CCD camera is
fixed to a tripod stand with the help of a C-clamp. In order to counter the
problem of the magnification factor changing each time the tripod is moved
around, the tripod is kept on top of a fixture which has slots to hold the
tripod's legs (as shown in Figure 3.4.2). The fixed location allows the program
to use the same magnification factor for each run of the code, since a change
in the position of the camera would cause the magnification value to change.
The height adjustment feature of the tripod allows us to correctly focus on the
screen and change the position if the need arises.
Figure 3.4.2: Adjustable tripod stand
9
The fixture is attached to one of the walls of the lab (as shown in Figure
3.4.3) at a distance of about 8 feet from the profile projector and at a height
of 95 cm; this is because the focal length of the camera is very long and it is
not possible to fix the fixture to the profile projector machine directly. The
fixture contains a wooden board supported by two hinges attached to the wall,
which allows the fixture to be folded. The foldable mechanism makes the fixture
less space consuming.
Figure 3.4.3: Foldable stand
10
The CCD camera is linked to the computer using a device called EasyCAP, and the
computer is installed with software that supports EasyCAP. The image is focused
by changing the settings on the camera, and when a bright image is formed on
the screen it is captured and saved in the MATLAB folder. The proper file path
is given in the code to read the image, and the rest of the work is done by the
MATLAB code. The results are displayed at the end. Refer to Figure 3.4.4 for
the layout of the full setup.
Figure 3.4.4: Layout of setup
11
Table 3.5.5 Specifications of profile projector:
Model No PH 3500
Make Mitutoyo South Asia Pvt Ltd.
Type Horizontal path
Resolution 0.001mm/0.005mm
Projection lens 10x
Magnification accuracy ±0.1% under contour illumination, ±0.15% under surface illumination
Screen effective diameter 356mm
Screen material Fine-grain ground glass
Reference line Staggered crosses and crosshairs
Protractor display range ±370º
Resolution 1'/0.01º switchable
Functions Absolute/Incremental mode switch, zero setting
Contour light source Halogen bulb (24V,150W)
Surface light source Halogen bulb (24V,200W) Twin fibre
Power supply 240V AC, 60Hz
Work table XY range X-axis (horizontal travel): 254mm, Y-axis (vertical travel): 152mm
Digital angle measurement 1' or 0.01º
CHAPTER 4
IMAGE PROCESSING IN MATLAB
4.1. INTRODUCTION
An image is represented by a matrix which stores the image as pixel
information. A digital image can be considered as a large array of discrete
dots, each of which has a brightness associated with it. These dots are called
picture elements, or more simply pixels. Image processing involves manipulating
this pixel information in order to make changes to the original image.
4.2.THRESHOLDING
In MATLAB a single variable is a 1 x 1 matrix and a string is a 1 x n matrix of
chars; an image is an n x m matrix of pixels. An image is a rectangular array
of values (pixels), where each pixel represents the measurement of some
property of a scene over a finite area. The property could be many things; here
it is the average brightness. Thresholding is a simple method of
differentiating between an object and the background, and it works provided
they are of different intensities. Thresholding is the simplest method of image
segmentation. From a grayscale image, thresholding can be used to create binary
images. The simplest thresholding methods replace each pixel in an image with a
black pixel if the image intensity is less than some fixed constant, or a white
pixel if the image intensity is greater than that constant.
Example code for thresholding:
level = graythresh(x);   % global threshold level (Otsu's method)
I = im2bw(x, level);     % convert to a binary image
Here 'level' is a global threshold computed from the image 'x' (by Otsu's
method), and its value varies from zero to one. Every pixel in the image 'x'
which has an intensity less than the threshold level is converted to black, and
every pixel with a higher intensity is converted to white.
The Figure 4.2.1 is the original image and after thresholding it becomes as it is shown
in Figure 4.2.2
13
Figure 4.2.1 Original image
Figure 4.2.2 Image after thresholding
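MATLAB's graythresh implements Otsu's method. As an illustrative sketch of that idea, and not the toolbox's actual implementation, the threshold that maximizes the between-class variance can be found from the intensity histogram; the two-level toy pixel list below is an assumption for this example.

```python
# Pure-Python sketch of Otsu's method: try every threshold, keep the one
# that maximizes the variance between background and foreground classes.

def otsu_threshold(pixels):
    """Return the threshold (0-255) maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]                      # background pixel count
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (total_sum - sum_b) / (total - w_b)  # foreground mean
        var = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

pixels = [10] * 50 + [200] * 50  # two clearly separated intensity groups
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
print(t, sum(binary))
```

For a bimodal histogram like the back-lit profile images here, this threshold cleanly separates the dark silhouette from the bright screen.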
4.3.CONNECTED COMPONENTS
Connected components labeling scans an image and groups its pixels into
components based on pixel connectivity, i.e. all pixels in a connected
component share similar pixel intensity values and are in some way
connected with each other. Once all groups have been determined, each
pixel is labeled with a graylevel. Connected component labeling works by
scanning an image, pixel-by-pixel (from top to bottom and left to right) in
order to identify connected pixel regions, i.e. regions of adjacent pixels
which share the same set of intensity values V. For our project the input
image used is a binary image with connectivity 8. The connected
components labeling operator scans the image by moving along a row until
it comes to a point p (where p denotes the pixel to be labeled at any stage in
the scanning process) for which V={1}. When this is true, it examines the
four neighbors of p which have already been encountered in the scan
(i.e. the neighbors to the left of p, above it, the two upper diagonal terms).
Based on this information, the labeling of p occurs as follows:
If all four neighbors are 0, assign a new label to p, else
if only one neighbor has V={1}, assign its label to p, else
if more than one of the neighbors have V={1}, assign one of the labels
to p and make a note of the equivalences.
After completing the scan, the equivalent label pairs are sorted into
equivalence classes and a unique label is assigned to each class. As a final
step, a second scan is made through the image, during which each label is
replaced by the label assigned to its equivalence classes.
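The two-pass labeling procedure described above can be sketched in Python; this is an illustrative version of the general algorithm (the project uses MATLAB's bwlabel), and the tiny test image is an assumption for this example.

```python
# Two-pass connected-component labeling with 8-connectivity: the first
# pass assigns provisional labels from the four already-scanned neighbors
# and records label equivalences; the second pass replaces each label by
# its equivalence-class representative.

def label_components(binary):
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                          # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] != 1:
                continue
            # neighbors already scanned: left, upper-left, up, upper-right
            nbrs = []
            for dr, dc in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and labels[rr][cc]:
                    nbrs.append(labels[rr][cc])
            if not nbrs:
                labels[r][c] = next_label  # new component
                parent[next_label] = next_label
                next_label += 1
            else:
                labels[r][c] = min(nbrs)
                for n in nbrs:             # note the equivalences
                    parent[find(n)] = find(min(nbrs))
    # second pass: collapse each label to its class representative
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
out = label_components(img)
n_components = len({v for row in out for v in row if v})
print(n_components)
```

In the project itself this grouping is done by bwlabel(BW, 8) in the Appendix code.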
4.4.DETECTION OF CIRCLES USING HOUGH TRANSFORMATION
The Hough transform can be used to determine the parameters of a circle when a
number of points that fall on the perimeter are known. A circle with radius R and
center (a,b) can be described with the parametric equations
x = a + R cos(θ)
y = b + R sin(θ)
When the angle θ sweeps through the full 360 degree range the points (x, y) trace the
perimeter of a circle.
If an image contains many points, some of which fall on the perimeters of
circles, then the job of the search program is to find the parameter triplets
(a,b,R) that describe each circle. The fact that the parameter space is 3D
makes a direct implementation of the Hough technique more expensive in computer
memory and time.
4.4.1.SEARCHING FOR CIRCLES WITH A FIXED RADIUS
If the circles in an image are of known radius R, then the search can be reduced to 2D.
The objective is to find the (a,b) coordinates of the centers.
x = a + Rcos(θ)
y = b + Rsin(θ)
The locus of (a,b) points in the parameter space fall on a circle of radius R centered at
(x,y). The true center point will be common to all parameter circles, and can be found
with a Hough accumulation array.
Figure 4.4.1: Center point of circle
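The fixed-radius accumulation described above can be sketched in Python; this is a simplified illustration of the general technique (the project relies on MATLAB's imfindcircles), and the synthetic perimeter points are an assumption for this example.

```python
# Fixed-radius Hough search: every edge point votes for the candidate
# centers lying on a circle of radius R around it; the accumulator cell
# with the most votes is taken as the detected center.

import math

def hough_circle_fixed_r(points, R, size):
    acc = [[0] * size for _ in range(size)]
    for x, y in points:
        for deg in range(360):
            th = math.radians(deg)
            a = int(round(x - R * math.cos(th)))
            b = int(round(y - R * math.sin(th)))
            if 0 <= a < size and 0 <= b < size:
                acc[a][b] += 1       # one vote for candidate center (a, b)
    best = max((acc[a][b], (a, b)) for a in range(size) for b in range(size))
    return best[1]

# Synthetic perimeter points of a circle centered at (20, 20) with R = 10.
R = 10
pts = [(20 + round(R * math.cos(math.radians(d))),
        20 + round(R * math.sin(math.radians(d)))) for d in range(0, 360, 15)]
center = hough_circle_fixed_r(pts, R, 40)
print(center)
```

The true center accumulates a vote from every perimeter point, so it dominates the accumulator even with rounding noise.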
4.5.ALGORITHM:
1. Read the image.
2. Convert the image to black and white using thresholding.
3. Find the edges using edge detection.
4. Using the 'imfindcircles' function, find the coordinates and radii of the
circles.
5. Using the 'insertText' function, insert the number of each circle into the
image.
6. Display the coordinates and radius of each circle.
7. Create a label matrix 'L' of the same size as the image and find the number
of connected components 'num'.
8. Find the row and column values of all the non-zero entries of the label
matrix.
9. Store these row and column values in a new matrix 'cel_num'.
10. Plot the boundary in blue using the coordinates saved in each cell of the
matrix 'cel_num'.
11. Set 'sort_rows' as the boundary sorted in ascending order by row, and
'sort_col' as the boundary sorted in ascending order by column.
12. Make a matrix 'hori_line' of zeros with the same size as 'sort_rows'.
13. Set the values:
HoriLine_count=1
HoriLine_start=1
HoriLine_end=0
14. Make a NaN matrix 'hori_lines_index' with length equal to that of the
boundary and 3 columns.
15. For i=1 to length of boundary-1:
Check if the difference of the adjacent pixels in the 1st column of 'sort_rows'
is equal to zero and at the same time the difference in the 2nd column is one.
Then {
hori_line(i,:) = sort_rows(i,:)
}
Else {
not_hline(i,:)=sort_rows(i,:);
HoriLine_end=i-1;
hori_lines_index(i,:,:)=[HoriLine_start,HoriLine_end,HoriLine_end-HoriLine_start];
HoriLine_count=HoriLine_count+1;
HoriLine_start=i+1;
}
16. If 'HoriLine_start' is not equal to the length of boundary
Then {
HoriLine_end=length(boundary)-1;
hori_lines_index(end,:,:)=[HoriLine_start,HoriLine_end,HoriLine_end-HoriLine_start];
}
17. Set the value:
vv=hori_lines_index(find(hori_lines_index(:,3)>=10),:);
18. For jj=1 to length(vv) {
hlnedet(jj,:)=[jj,hori_line(vv(jj,1),:),hori_line(vv(jj,2),:),vv(jj,3)];
Plot the horizontal line using the function:
plot(hori_line(vv(jj,1):vv(jj,2),2),hori_line(vv(jj,1):vv(jj,2),1),'*r');
hold on;
}
19. Display the result in tabular form with the line number, the starting and
ending coordinates of the line, and its size in mm.
20. Set the values:
vertLine_count=1;
vertLine_start=1;
vertLine_end=0;
21. Make a matrix 'vert_line' of zeros with the same size as 'sort_col'.
22. Make a NaN matrix 'vert_lines_index' with length equal to that of the
boundary and 3 columns.
23. For i=1 to length of boundary-1:
Check if the difference of the adjacent pixels in the 1st column of 'sort_col'
is equal to one and at the same time the difference in the 2nd column is zero.
Then {
vert_line(i,:)=sort_col(i,:);
}
Else {
not_vline(i,:)=sort_col(i,:);
vertLine_end=i-1;
vert_lines_index(i,:,:)=[vertLine_start,vertLine_end,vertLine_end-vertLine_start];
vertLine_count=vertLine_count+1;
vertLine_start=i+1;
}
24. If 'vertLine_start' is not equal to the length of boundary {
vertLine_end=length(boundary)-1;
vert_lines_index(end,:,:)=[vertLine_start,vertLine_end,vertLine_end-vertLine_start];
}
25. Set the value:
vv1=vert_lines_index(find(vert_lines_index(:,3)>=20),:); hold on;
26. For jj=1 to length(vv1) {
vlnedet(jj,:)=[jj,vert_line(vv1(jj,1),:),vert_line(vv1(jj,2),:),vv1(jj,3)];
Plot the vertical line along the image in green using the command:
plot(vert_line(vv1(jj,1):vv1(jj,2),2),vert_line(vv1(jj,1):vv1(jj,2),1),'*g');
hold on;
}
27. Display the result in tabular form with the line number, the starting and
ending coordinates of the line, and its size in mm.
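The horizontal-line detection at the heart of the algorithm can be sketched compactly in Python; this mirrors the run-grouping logic of the MATLAB code above but is an illustrative equivalent, not the project's code. The toy boundary, the minimum run length and the magnification factor 0.040711 mm/pixel (from the program output in Chapter 5) are assumptions for this example.

```python
# Sort the boundary pixels row-wise, walk adjacent pairs, and collect
# maximal runs where the row stays constant and the column advances by
# one pixel; runs at least `min_len` pixels long are reported as
# horizontal lines and scaled to mm.

MM_PER_PIXEL = 0.040711

def horizontal_runs(boundary, min_len=10):
    pts = sorted(boundary)               # sort by (row, col)
    runs, start = [], 0
    for i in range(1, len(pts) + 1):
        contiguous = (i < len(pts)
                      and pts[i][0] == pts[i - 1][0]
                      and pts[i][1] - pts[i - 1][1] == 1)
        if not contiguous:               # run ends here
            length = i - start
            if length >= min_len:
                runs.append((pts[start], pts[i - 1], length * MM_PER_PIXEL))
            start = i
    return runs

# Toy boundary: a 12-pixel horizontal segment on row 5 plus stray pixels.
boundary = [(5, c) for c in range(100, 112)] + [(7, 40), (9, 41)]
runs = horizontal_runs(boundary)
for s, e, mm in runs:
    print(s, e, round(mm, 4))
```

The vertical-line detection works the same way with the roles of row and column swapped and a larger minimum run length.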
CHAPTER 5
RESULT AND DISCUSSION
5.1. Numbering Each Circle
Figure 5.1 is displayed after the 'imread' function reads the image into
MATLAB. The numbers are plotted at the centers of the circles so that each
circle can be identified easily.
Figure 5.1: Original image with numbered circles
5.2. Edge Detection
Figure 5.2 is obtained after thresholding and edge detection of the input
image. The blue line in the diagram is obtained by calling the 'imdistline'
function, which draws a draggable, resizable line superimposed on the figure
and measures the distance between its two endpoints. The Distance tool displays
the distance in a text label superimposed over the line. This tool helps us to
manually find the pixel coordinates of two points or lines.
Figure 5.2: Thresholding and edge detection
5.3. Plotting The Edges
Figure 5.3 is obtained after plotting all the boundaries of the input image in
blue, after which the horizontal lines are plotted in red and the vertical
lines in green, superimposed on the blue boundary.
Figure 5.3: Image after plotting the edges
Original values are:
HORIZONTAL =
Hori_LineNo Start End Size_in_mm
___________ __________ __________ __________
1 123 279 123 448 6.8802
2 123 506 123 549 1.7506
3 124 604 124 647 1.7506
4 178 451 178 503 2.117
5 179 552 179 601 1.9949
6 447 155 447 647 20.03
VERTICAL =
Verti_LineNo Start End Size_in_mm
____________ __________ __________ __________
1 202 154 445 154 9.8929
2 124 450 176 450 2.117
3 123 506 175 506 2.117
4 124 551 177 551 2.1577
5 124 604 176 604 2.117
6 125 649 225 649 4.0711
7 343 650 444 650 4.1118
CIRCLE =
Circle_No Center_Coordinates Radius_in_mm
_________ __________________ ____________
1 201 284 1.0585
2 200 393 1.0585
3 300 174 1.0585
4 301 394 1.0585
5 503 393 1.0585
6 402 394 1.0992
7 401 284 1.0992
8 301 283 2.1577
9 653 284 2.2798
The magnification factor is 0.040711
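As a consistency check of the reported magnification factor, the pixel distance of line 6 in the HORIZONTAL table can be converted back to mm; this small illustrative calculation uses only values printed above.

```python
# Line 6 runs along row 447 from column 155 to column 647, i.e. 492
# pixels; multiplying by the magnification factor reproduces the
# tabulated 20.03 mm size.

MAG = 0.040711  # mm per pixel, from the program output above

start, end = (447, 155), (447, 647)
pixels = end[1] - start[1]
size_mm = pixels * MAG
print(pixels, round(size_mm, 2))
```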
CHAPTER 6
ERROR ANALYSIS
The error analysis of this project has been done by comparing the results
obtained from the MATLAB software with the results obtained by the conventional
method of using a profile projector.
Figure 6.1 shows the result obtained from the conventional method of using a profile
projector, where the coordinates are found and the results are transferred into
AutoCAD software to get the dimensions.
Figure 6.1: Output from the profile projector.
The formula used for error analysis
%Error=[|Actual value –True value|/True value]*100
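The formula above can be checked against line 1 of Table 6.2 (actual value 6.8699 mm, true value 6.7800 mm), which reproduces the tabulated 1.3 % error:

```python
# Percentage error as defined above: |actual - true| / true * 100.

def percent_error(actual, true):
    return abs(actual - true) / true * 100

err = percent_error(6.8699, 6.7800)
print(round(err, 1))
```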
Reasons For Error
1. A change in the focal length of the camera.
2. An over-lighting effect.
3. Errors in the code.
How To Rectify It
1. Do not change the focal length of the camera once it is fixed.
2. Refine the code.
Percentage Errors:
Line no  Start value (in pixels)  End value (in pixels)  Actual value (in mm)  True value (in mm)  %Error
1 (123,279) (123,448) 6.8699 6.7800 1.3
2 (123,506) (123,549) 1.748 1.980 1.3
3 (124,604) (124,647) 1.748 1.980 1.1
4 (178,451) (178,503) 2.1138 2.0151 0.48
5 (179,552) (179,601) 1.9919 2.0252 0.16
6 (447,155) (447,647) 20 20.0833 0.41
Table 6.2: Percentage error for horizontal lines
Table 6.3: Percentage error for vertical lines
Line no  Start value (in pixels)  End value (in pixels)  Actual value (in mm)  True value (in mm)  %Error
1 (202,154) (445,154) 9.878 8.9899 0.98
2 (124,450) (176,450) 2.1138 2.040 1.6
3 (123,506) (175,506) 2.1138 2.040 1.6
4 (124,551) (177,551) 2.1545 2.040 1.9
5 (124,604) (176,604) 2.1138 2.040 1.6
6 (125,649) (225,649) 4.065 3.9986 1.6
7 (343,650) (444,650) 4.1057 4.0953 0.25
Circle no  Centre coordinates (in pixels)  Actual radius (in mm)  True radius (in mm)  %Error
1 (201,284) 1.0569 1.0065 5
2 (200,393) 1.0569 1.004 5.1
3 (300,174) 1.0569 1.0115 4.4
4 (301,394) 1.0569 1.005 5.1
5 (503,393) 1.0569 1.0115 4.4
6 (402,394) 1.0976 1.009 5.2
7 (401,284) 1.0976 1.0165 4.4
8 (301,283) 2.1545 2.019 5.3
9 (653,284) 2.2764 2.025 5.5
Table 6.4: Percentage error for circles
30
CHAPTER 7
CONCLUSION
From this experiment it was concluded that the readings of the profile
projector can be reproduced by this method with a negligible percentage of
error. The time consumed is reduced by more than an hour, and a larger number
of miniature specimens can be measured in succession simply by changing the
file destination path.
Human error is also reduced considerably, because once the camera is fixed even
an untrained operator can carry out the rest of the work easily, since this
method is much simpler than the traditional method of using a profile
projector.
Advantages:
Work time is reduced.
Human error is reduced.
Disadvantages:
Small errors in dimension might occur.
The code has to be modified for more complicated specimens.
Future scope:
The code can be improved in the future to identify and measure more complex
specimens such as screws and gears.
APPENDICES
CODE:
close all;
clc;
% Read the captured image and binarise it with Otsu's threshold
x = imread('edited.jpg');
org = x;                                % keep a copy of the original image
lev = graythresh(x);                    % Otsu threshold level
I = im2bw(x,lev);                       % binarised image
BW = edge(I);                           % edge map of the specimen outline
% Detect circles with radii between 20 and 60 pixels
[center, radius] = imfindcircles(I,[20 60],'ObjectPolarity','bright');
n = numel(radius);
% Label each detected circle with its index on the image
for j = 1:n
    position = [center(j,1)-5, center(j,2)-10];
    RGB = insertText(x,position,j);
    x = RGB;
end
Area_circle = pi*(radius).^2;           % circle areas in square pixels
figure,imshow(RGB);
impixelinfo;
figure,imshow(BW);
impixelinfo;
imdistline;
figure,imshow(org);
impixelinfo;
% Label the 8-connected components of the edge image
[L, num] = bwlabel(BW, 8);
% Collect the pixel coordinates of each labelled component
for i=1:num
    [row, col] = find(L==i);
    cel_num{1,i} = [row col];
end
hold on;
for i=1:num
    plot(cel_num{1,i}(:,2),cel_num{1,i}(:,1),'*b'); hold on;
end
boundary = cel_num{1,1};                % boundary pixels of the first component
sort_rows = sortrows(boundary);         % sorted by row, then column
sort_col = sortrows(boundary,2);        % sorted by column, then row
% Scan the row-sorted pixels for runs of horizontally adjacent pixels
hori_line = zeros(size(sort_rows));
HoriLine_count = 1;
HoriLine_start = 1;
HoriLine_end = 0;
hori_lines_index = NaN(length(boundary),3);
for i=1:length(boundary)-1
    if (sort_rows(i+1,1)-sort_rows(i,1)==0) && (sort_rows(i+1,2)-sort_rows(i,2)==1)
        hori_line(i,:) = sort_rows(i,:);
    else
        % the current run ends here; record its start, end and length
        not_hline(i,:) = sort_rows(i,:);
        HoriLine_end = i-1;
        hori_lines_index(i,:) = [HoriLine_start,HoriLine_end,HoriLine_end-HoriLine_start];
        HoriLine_count = HoriLine_count+1;
        HoriLine_start = i+1;
    end
end
% Close off a run that reaches the last boundary pixel
if HoriLine_start ~= length(boundary)
    HoriLine_end = length(boundary)-1;
    hori_lines_index(end,:) = [HoriLine_start,HoriLine_end,HoriLine_end-HoriLine_start];
end
% Keep only runs at least 10 pixels long and highlight them in red
vv = hori_lines_index(hori_lines_index(:,3)>=10,:); hold on;
for jj=1:length(vv)
    hlnedet(jj,:) = [jj,hori_line(vv(jj,1),:),hori_line(vv(jj,2),:),vv(jj,3)];
    plot(hori_line(vv(jj,1):vv(jj,2),2),hori_line(vv(jj,1):vv(jj,2),1),'*r'); hold on;
end
% Repeat the scan on the column-sorted pixels for vertical runs
vert_line = zeros(size(sort_col));
vertLine_count = 1;
vertLine_start = 1;
vertLine_end = 0;
vert_lines_index = NaN(length(boundary),3);
for i=1:length(boundary)-1
    if (sort_col(i+1,1)-sort_col(i,1)==1) && (sort_col(i+1,2)-sort_col(i,2)==0)
        vert_line(i,:) = sort_col(i,:);
    else
        not_vline(i,:) = sort_col(i,:);
        vertLine_end = i-1;
        vert_lines_index(i,:) = [vertLine_start,vertLine_end,vertLine_end-vertLine_start];
        vertLine_count = vertLine_count+1;
        vertLine_start = i+1;
    end
end
if vertLine_start ~= length(boundary)
    vertLine_end = length(boundary)-1;
    vert_lines_index(end,:) = [vertLine_start,vertLine_end,vertLine_end-vertLine_start];
end
% Keep only runs at least 20 pixels long and highlight them in green
vv1 = vert_lines_index(vert_lines_index(:,3)>=20,:); hold on;
for jj=1:length(vv1)
    vlnedet(jj,:) = [jj,vert_line(vv1(jj,1),:),vert_line(vv1(jj,2),:),vv1(jj,3)];
    plot(vert_line(vv1(jj,1):vv1(jj,2),2),vert_line(vv1(jj,1):vv1(jj,2),1),'*g'); hold on;
end
M = 20/492;                             % magnification factor: mm per pixel
fprintf('\n\nOriginal values are:\n\n')
% Tabulate the detected horizontal lines
Hori_LineNo = hlnedet(:,1);
Start = [hlnedet(:,2) hlnedet(:,3)];
End = [hlnedet(:,4) hlnedet(:,5)];
Size_in_mm = hlnedet(:,6)*M;
HORIZONTAL = table(Hori_LineNo,Start,End,Size_in_mm)
% Tabulate the detected vertical lines
Verti_LineNo = vlnedet(:,1);
Start = [vlnedet(:,2) vlnedet(:,3)];
End = [vlnedet(:,4) vlnedet(:,5)];
Size_in_mm = vlnedet(:,6)*M;
VERTICAL = table(Verti_LineNo,Start,End,Size_in_mm)
% Tabulate the detected circles
Circle_No = (1:n)';
Radius_in_mm = round(radius(:,1))*M;
Center_Coordinates = [round(center(:,1)) round(center(:,2))];
CIRCLE = table(Circle_No,Center_Coordinates,Radius_in_mm)
fprintf('\nThe magnification factor is %f \n',M)
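The line-detection loops above amount to a run-length scan: edge pixels are sorted by (row, column), and maximal runs of horizontally adjacent pixels longer than a threshold (10 pixels in the listing) are reported as lines. A standalone Python sketch of the same idea, on synthetic pixel data chosen for illustration:

```python
def find_horizontal_runs(pixels, min_len=10):
    """Return (start, end, length) for each horizontal run of edge pixels.

    Length is counted as end-index minus start-index, matching the
    HoriLine_end - HoriLine_start measure used in the MATLAB listing.
    """
    runs = []
    pts = sorted(pixels)   # sort by row, then column
    start = 0
    for i in range(1, len(pts) + 1):
        # a run ends when the row changes or the column is not adjacent
        if i == len(pts) or pts[i][0] != pts[i-1][0] or pts[i][1] != pts[i-1][1] + 1:
            length = (i - 1) - start
            if length >= min_len:
                runs.append((pts[start], pts[i-1], length))
            start = i
    return runs

# a 15-pixel horizontal segment on row 5, plus two scattered noise pixels
edge = [(5, c) for c in range(100, 115)] + [(7, 3), (9, 40)]
print(find_horizontal_runs(edge))  # → [((5, 100), (5, 114), 14)]
```

The vertical scan in the listing is the same procedure applied to the column-sorted pixels with a 20-pixel threshold.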
REFERENCES
1. Wang Xiaoyong (2011), “Research on Technologies of Stability and Calibration Precision of Mapping Camera”, International Conference on Physics Science and Technology (ICPST 2011).
2. Xia Zhao, Bin Liu and Chunhui Yang (2010), “A New Calibration Method for Linear CCD Camera”, 2010 Symposium on Security Detection and Information Processing.
3. Zhen Chen, Liang Zhuo, Kaiqiong Sun and Congxuan Zhang (2012), “Extrinsic Calibration of a Camera and a Laser Range Finder Using Point to Line Constraint”, 2012 International Workshop on Information and Electronics Engineering (IWIEE).
4. Zhengyou Zhang (2000), “A Flexible New Technique for Camera Calibration”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, November 2000.
5. Priyadharsana, L.L. and De Silva, Lokuge (2011), “A Computer-Vision-Based Solution to Retrofit Existing Flat Displays into Interactive Surfaces”, Computers & Informatics (ISCI), 2011 IEEE Symposium.
6. D. Brian Larkins and William Harvey (2011), “Introductory Computational Science Using MATLAB and Image Processing”, Dept. of Computer Science and Engineering, The Ohio State University, Columbus.