Tomato Classification using Computer Vision


TOMATO CLASSIFICATION

Presented By: Raman Pandey (CSJMA12001390146), Neha Chowdhary (CSJMA12001390136)
B.Tech, 4th Year (Information Technology)

Introduction

The ability to identify tomatoes based on quality is a key technology in realizing an automatic tomato sorting machine for the food industry, reducing both human labor and processing time.

Automation of quality control is highly significant because saving time and expenses is always a necessity in industrial applications.

An automated system has to be developed that acquires images of the tomatoes and extracts various features using image processing. These features are then used to train a machine learning algorithm, the data is classified and analyzed, and the accuracy and quality of the test data are determined.

APPLICATIONS USED

MATLAB: MATLAB (MATrix LABoratory) is a multi-paradigm numerical computing environment and fourth-generation programming language.

R: R is a language and environment for statistical computing and graphics. The e1071 and rgl libraries are used for SVM classification and visualization.

WEKA: Waikato Environment for Knowledge Analysis. Weka is a collection of machine learning algorithms for data mining tasks. The LibSVM package is used to classify data.

Proposed Methodology

ALGORITHM

- Images are acquired by a camera positioned 20 cm above the surface, with a standard scale for calibration.
- The contrast of the acquired image is adjusted.
- Segmentation is performed using Otsu's method. The image histograms are bimodal, so the pixel counts of the two histogram halves are calculated and subtracted, and the resulting pixel set is used for thresholding.
- A mask is generated from the threshold level, and a median filter is applied.
- Morphological erosion and hole-filling operations are performed on the mask to generate the final binary segmented image.
- regionprops is used to calculate features such as the centroid and major axis length; the major axis length gives the radius, which is used to find volume and area.

Algorithm Continued…

- Gradient weight is used to find the image weight.
- An RGB-to-HSV transformation is applied to the acquired image, threshold levels are calculated for each color band, and these thresholds are used to find the maximum number of pixels of each color.

- A maximum of red pixels indicates a very good quality tomato, a maximum of yellow pixels a good quality tomato, and a maximum of green pixels a poor quality tomato.

- These feature data are acquired and stored in the database; feature data for a total of 145 samples are stored.

- These samples are fed into WEKA for classification.
- Multi-class SVM classification with the Sequential Minimal Optimization (SMO) algorithm is used to classify the data.
- 70% of the data is used for training and 30% for testing, which yields an accuracy of 74%.
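As a quick sanity check, the split sizes and the reported accuracy can be reproduced (a Python sketch; the exact rounding of the percentage split is an assumption here, chosen to match the 43 test instances in the confusion matrix in the Results section):

```python
import math

# Illustrative check of the 70/30 split over the 145 stored samples.
# Assumption: the test partition is floored, as the results suggest.
n_samples = 145
n_test = math.floor(0.3 * n_samples)   # 43 test instances
n_train = n_samples - n_test           # 102 training instances

# 32 of the 43 test instances are correctly classified.
accuracy = 32 / n_test
print(n_train, n_test, round(100 * accuracy, 2))   # 102 43 74.42
```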

Algorithm in Code

% Clear the workspace
clc;
clear;

% Read and load the image
I = imread('t56.jpg');
figure; imshow(I); title('Original Image');

% Adjust the contrast of the blue channel of the original image
IL = imadjust(I(:,:,3));

% Get the size and the histogram (pixel counts) of the image
[rows, columns, numberOfColorBands] = size(IL);
[pixelCount, grayLevels] = imhist(IL, 256);

% Divide the image into two halves and get the pixel count of the left half
middleColumn = floor(columns/2);
leftHalfImage = IL(:, 1:middleColumn);
[pixelCountL, grayLevelsL] = imhist(leftHalfImage, 256);

% Get the pixel count of the right half
rightHalfImage = IL(:, middleColumn+1:end);
[pixelCountR, grayLevelsR] = imhist(rightHalfImage, 256);

% Subtract the left and right pixel counts to get the difference histogram
diffHistogram = int16(pixelCountL - pixelCountR);

% Find the Otsu threshold level from the subtracted histogram
thresholdLevel = 255 * graythresh(diffHistogram);

% Create a mask from the threshold level
mask1 = IL > thresholdLevel;

% Apply a median filter to the mask
mask2 = medfilt2(mask1);

% Apply morphological operations on the mask to generate the segmented image
SE = strel('disk', 2);
mask3 = imerode(mask2, SE);
mask4 = ~imfill(~mask3, 'holes');
figure; imshow(mask4); title('Segmented Image');

% Get the separate channels of the original image
red = I(:, :, 1);
green = I(:, :, 2);
blue = I(:, :, 3);

% Get the gradient weight of the image
weight = mean2(gradientweight(IL));

% Get the major axis length (used as the radius)
radii = regionprops(mask3, 'MajorAxisLength');
radii2 = mean2(cat(1, radii.MajorAxisLength));

% Calculate volume (sphere model)
volume = (4.0/3.0) * pi * (radii2^3);

% Calculate area (sphere surface area)
area = 4.0 * pi * (radii2^2);
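The two formulas above model the tomato as a sphere of radius radii2: volume (4/3)πr³ and surface area 4πr². A quick Python check with an illustrative radius (the value is not from the data set):

```python
import math

# Sphere-model volume and surface area, mirroring the MATLAB lines above.
radius = 2.0   # illustrative stand-in for radii2
volume = (4.0 / 3.0) * math.pi * radius ** 3   # (4/3)*pi*r^3
area = 4.0 * math.pi * radius ** 2             # 4*pi*r^2
print(round(volume, 2), round(area, 2))        # 33.51 50.27
```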

% Convert to HSV image
hsvImage = rgb2hsv(I);

% Extract the H, S, and V images individually
hImage = hsvImage(:,:,1);
sImage = hsvImage(:,:,2);
vImage = hsvImage(:,:,3);

% Thresholds for yellow color
YhueThresholdLow = 0.10;
YhueThresholdHigh = 0.14;
YsaturationThresholdLow = 0.4;
YsaturationThresholdHigh = 1;
YvalueThresholdLow = 0.8;
YvalueThresholdHigh = 1.0;

% Now apply each color band's thresholds for yellow
YhueMask = (hImage >= YhueThresholdLow) & (hImage <= YhueThresholdHigh);
YsaturationMask = (sImage >= YsaturationThresholdLow) & (sImage <= YsaturationThresholdHigh);
YvalueMask = (vImage >= YvalueThresholdLow) & (vImage <= YvalueThresholdHigh);
% Combine the band masks (this line mirrors the red and green sections below;
% it is needed before YcoloredObjectsMask is used)
YcoloredObjectsMask = uint8(YhueMask & YsaturationMask & YvalueMask);
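The same band-pass logic can be checked on a single pixel with Python's standard colorsys module (a minimal sketch; the helper name is hypothetical, and the thresholds are the yellow values from the code above):

```python
import colorsys

# Yellow thresholds, as in the MATLAB code above
H_LO, H_HI = 0.10, 0.14
S_LO, S_HI = 0.4, 1.0
V_LO, V_HI = 0.8, 1.0

def is_yellow(r, g, b):
    """r, g, b in [0, 1]; True if the pixel passes all three HSV bands."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return H_LO <= h <= H_HI and S_LO <= s <= S_HI and V_LO <= v <= V_HI

print(is_yellow(1.0, 0.8, 0.2))   # yellowish pixel -> True
print(is_yellow(0.1, 0.6, 0.1))   # greenish pixel -> False
```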

% Smooth the border using a morphological closing operation, imclose()
YstructuringElement = strel('disk', 4);
YcoloredObjectsMask = imclose(YcoloredObjectsMask, YstructuringElement);

% Fill in any holes in the regions, since they are most likely yellow also
YcoloredObjectsMask = imfill(logical(YcoloredObjectsMask), 'holes');
YcoloredObjectsMask = cast(YcoloredObjectsMask, 'like', I);

% Use the colored object mask to mask out the colored-only portions of the RGB image
YmaskedImageR = YcoloredObjectsMask .* red;
YmaskedImageG = YcoloredObjectsMask .* green;
YmaskedImageB = YcoloredObjectsMask .* blue;

yellowImage = cat(3, YmaskedImageR, YmaskedImageG, YmaskedImageB);

% Yellow pixel measure (mean of the non-zero yellow pixels)
yel = mean2(yellowImage(find(yellowImage)));

% Thresholds for red color
RhueThresholdLow = 0.03;
RhueThresholdHigh = 1.5;
RsaturationThresholdLow = 0.18;
RsaturationThresholdHigh = 1.5;
RvalueThresholdLow = 0.05;
RvalueThresholdHigh = 1.8;

% Now apply each color band's thresholds for red
RhueMask = (hImage >= RhueThresholdLow) & (hImage <= RhueThresholdHigh);
RsaturationMask = (sImage >= RsaturationThresholdLow) & (sImage <= RsaturationThresholdHigh);
RvalueMask = (vImage >= RvalueThresholdLow) & (vImage <= RvalueThresholdHigh);
RcoloredObjectsMask = uint8(RhueMask & RsaturationMask & RvalueMask);

% Smooth the border using a morphological closing operation
RstructuringElement = strel('disk', 4);
RcoloredObjectsMask = imclose(RcoloredObjectsMask, RstructuringElement);

% Fill in any holes in the regions, since they are most likely red also
RcoloredObjectsMask = imfill(logical(RcoloredObjectsMask), 'holes');
RcoloredObjectsMask = cast(RcoloredObjectsMask, 'like', I);

% Use the colored object mask to mask out the colored-only portions of the RGB image
RmaskedImageR = RcoloredObjectsMask .* red;
RmaskedImageG = RcoloredObjectsMask .* green;
RmaskedImageB = RcoloredObjectsMask .* blue;

redImage = cat(3, RmaskedImageR, RmaskedImageG, RmaskedImageB);

% Red pixel measure (mean of the non-zero red pixels)
rel = mean2(redImage(find(redImage)));

% Thresholds for green color
GhueThresholdLow = 0.15;
GhueThresholdHigh = 0.60;
GsaturationThresholdLow = 0.36;
GsaturationThresholdHigh = 1;
GvalueThresholdLow = 0;
GvalueThresholdHigh = 0.8;

% Now apply each color band's thresholds for green
GhueMask = (hImage >= GhueThresholdLow) & (hImage <= GhueThresholdHigh);
GsaturationMask = (sImage >= GsaturationThresholdLow) & (sImage <= GsaturationThresholdHigh);
GvalueMask = (vImage >= GvalueThresholdLow) & (vImage <= GvalueThresholdHigh);

GcoloredObjectsMask = uint8(GhueMask & GsaturationMask & GvalueMask);

% Smooth the border using a morphological closing operation
GstructuringElement = strel('disk', 4);
GcoloredObjectsMask = imclose(GcoloredObjectsMask, GstructuringElement);

% Fill in any holes in the regions, since they are most likely green also
GcoloredObjectsMask = imfill(logical(GcoloredObjectsMask), 'holes');
GcoloredObjectsMask = cast(GcoloredObjectsMask, 'like', I);

% Use the colored object mask to mask out the colored-only portions of the RGB image
GmaskedImageR = GcoloredObjectsMask .* red;
GmaskedImageG = GcoloredObjectsMask .* green;
GmaskedImageB = GcoloredObjectsMask .* blue;

greenImage = cat(3, GmaskedImageR, GmaskedImageG, GmaskedImageB);

% Green pixel measure (mean of the non-zero green pixels)
gel = mean2(greenImage(find(greenImage)));

% Find the maximum value to determine the ripeness
ripe = max([yel rel gel]);
if ripe == yel
    ripeVal = 'Yellow';
elseif ripe == rel
    ripeVal = 'Red';
elseif ripe == gel
    ripeVal = 'Green';
else
    ripeVal = 'Undefined';
end

% Store values in the local database
data = [radii2, area, volume, weight, centroidxavg, centroidyavg, rel, gel, yel, ripeVal];
dlmwrite('test.csv', data, 'delimiter', ',', '-append');
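The max-of-three ripeness decision above can be sketched as a small Python function (the function name and the sample values are hypothetical, not from the project code):

```python
# Quality grade from the dominant color measure, as in the MATLAB
# block above: red -> very good, yellow -> good, green -> poor.
def grade_tomato(rel, yel, gel):
    ripe = max([yel, rel, gel])
    if ripe == yel:
        return 'Yellow'   # good quality
    elif ripe == rel:
        return 'Red'      # very good quality
    elif ripe == gel:
        return 'Green'    # poor quality
    return 'Undefined'

print(grade_tomato(180.0, 40.0, 12.5))   # Red
```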

R Code

# Load required libraries
require(e1071)  # For SVM
require(rgl)    # For 3-D plotting

# Load the data set
FeaturesTrainingData <- read.csv("G:/8th sem/BTP Tomato Complete Sample/FinalBTPGUI/FeaturesTrainingData.csv")
View(FeaturesTrainingData)

# Create a data frame from the training data
ftd <- data.frame(R = FeaturesTrainingData$Radius,
                  A = FeaturesTrainingData$Area,
                  V = FeaturesTrainingData$Volume,
                  W = FeaturesTrainingData$Weight,
                  C = FeaturesTrainingData$Class)
View(ftd)

# Create the SVM model
svm_model <- svm(C ~ ., ftd, type = 'C-classification', kernel = 'linear', scale = FALSE)
w <- t(svm_model$coefs) %*% svm_model$SV

# Visualize the hyperplane and support vectors
detalization <- 100
grid <- expand.grid(seq(from = min(ftd$R), to = max(ftd$R), length.out = detalization),
                    seq(from = min(ftd$A), to = max(ftd$A), length.out = detalization))
z <- (svm_model$rho - w[1,1]*grid[,1] - w[1,2]*grid[,2]) / w[1,3]
plot3d(grid[,1], grid[,2], z, xlab = "PC1 (72%)", ylab = "PC2 (19%)", zlab = "PC3 (7%)", col = "pink")
spheres3d(ftd$R[which(ftd$C == 'A')], ftd$A[which(ftd$C == 'A')], ftd$V[which(ftd$C == 'A')], col = 'red', type = "s", radius = 0.01)
spheres3d(ftd$R[which(ftd$C == 'B')], ftd$A[which(ftd$C == 'B')], ftd$V[which(ftd$C == 'B')], col = 'blue', type = "s", radius = 0.01)

Result

Confusion Matrix (rows: actual class; columns: predicted class)

     A    B    C
A   14    0    2
B    5    0    0
C    4    0   18

Accuracy: 74.42%

Summary

Correctly Classified Instances 32

Incorrectly Classified Instances 11

Kappa statistic 0.5456

Mean Absolute Error 0.2946

Root Mean Squared Error 0.3827

Relative Absolute Error 72.7541 %

Root relative squared Error 84.8467 %
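The accuracy and kappa statistic above follow directly from the confusion matrix (a quick Python check; the actual/predicted orientation of the matrix rows and columns is assumed from WEKA's usual output):

```python
# Confusion matrix from the Results slide (rows: actual A, B, C).
cm = [[14, 0, 2],
      [5,  0, 0],
      [4,  0, 18]]

total = sum(sum(row) for row in cm)        # 43 test instances
correct = sum(cm[i][i] for i in range(3))  # 32 correctly classified
accuracy = correct / total

# Cohen's kappa: observed agreement corrected for chance agreement.
row_sums = [sum(row) for row in cm]
col_sums = [sum(cm[i][j] for i in range(3)) for j in range(3)]
p_e = sum(r * c for r, c in zip(row_sums, col_sums)) / total ** 2
kappa = (accuracy - p_e) / (1 - p_e)

print(round(100 * accuracy, 2), round(kappa, 4))   # 74.42 0.5456
```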

Application GUI (screenshots): Main Application GUI, Loading the Image, Segmentation of Image, Feature Extraction, Data Sets, Loading Data for Classification, Performing Classification, 3-D SVM Hyperplane.

CONCLUSION AND RESULTS

In this automated system, we have developed a methodology that identifies and detects tomato ripeness and quality using an image processing algorithm followed by a classification process. In the algorithm, tomato images are acquired from different perspectives, preprocessed, and segmented. After segmentation, features such as area, volume, weight, radius, perimeter, and ripeness are extracted. The real values of the tomato features are also measured for reference, such as volume using Archimedes' principle and weight using a weighing machine.

In the classification process, multi-class SVM is used in WEKA. We provided the training data to WEKA, which reported an SVM accuracy of 74.42% and classified the samples into three classes named A, B, and C, representing very good, good, and poor quality respectively.

Future Scope

- The future scope of this automated application lies in harvest engineering and automated agriculture.
- This automated application will help ease supply chain management.
- It can be used to detect bad-quality tomatoes using computer vision.
- It can also be used to analyze remote sensing data for farming and large-scale production control.
- Image processing has proved to be an effective machine vision technique for the agriculture domain. We conclude that image processing is a non-invasive and effective tool that can be applied in agriculture with great accuracy for the analysis of agronomic parameters.


THANK YOU!!!
