Research Article

Exudate Detection for Diabetic Retinopathy Using Pretrained Convolutional Neural Networks

Muhammad Mateen,1 Junhao Wen,1 Nasrullah Nasrullah,2 Song Sun,1 and Shaukat Hayat3

1 School of Big Data & Software Engineering, Chongqing University, Chongqing 401331, China
2 Department of Software Engineering, Foundation University, Islamabad 44000, Pakistan
3 School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China

Correspondence should be addressed to Junhao Wen; [email protected]

Received 11 November 2019; Revised 25 February 2020; Accepted 13 March 2020; Published 10 April 2020

Academic Editor: Matilde Santos

Copyright © 2020 Muhammad Mateen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In the field of ophthalmology, diabetic retinopathy (DR) is a major cause of blindness. DR is diagnosed from retinal lesions, including exudates. Exudates are among the earliest signs of serious DR anomalies, so these lesions should be detected and treated promptly to prevent loss of vision. In this paper, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of exudates. Until recently, deep CNNs were trained individually to solve specific problems, but pretrained CNN models with transfer learning can reuse previously acquired knowledge to solve related problems. In the proposed approach, data preprocessing is first performed to standardize the exudate patches. Region of interest (ROI) localization is then used to localize exudate features, and transfer learning is performed for feature extraction using pretrained CNN models (Inception-v3, Residual Network-50, and Visual Geometry Group Network-19). Finally, the fused features from the fully connected (FC) layers are fed into a softmax classifier for exudate classification. The performance of the proposed framework has been analyzed on two well-known publicly available databases, e-Ophtha and DIARETDB1. The experimental results demonstrate that the proposed pretrained CNN-based framework outperforms existing techniques for the detection of exudates.

1. Introduction

In the area of ophthalmology, deep learning plays a vital role in diagnosing serious diseases, including diabetic retinopathy (DR). DR is a severe disease that is common all over the world and is diagnosed in diabetic patients. The World Health Organization (WHO) has projected that by 2030 diabetes will be the 7th leading cause of death in the world [1]. From this perspective, it is most important to protect human lives from the effects of diabetes. In diabetic retinopathy, abnormalities including lesions develop in the retina, which later lead to irreversible blindness and vision impairment, but early detection and treatment of these lesions can reduce blindness significantly. The retinal abnormalities in DR include hemorrhages, cotton wool spots, microaneurysms (MAs), retinal neovascularization, and exudates, which are shown clearly in Figure 1. Soft exudates (cotton wool spots) appear as light yellow or white areas with diffuse edges, whereas hard exudates appear as yellow waxy patches in the retina. The existence of exudates in retinal fundus photographs is one of the most serious indicators of diabetic retinopathy [3]. Manual identification of hard exudates depends on the analyst and is a time-consuming task. In contrast, an automatic exudate identification technique makes it possible to detect hard exudates accurately and in a timely manner. It is also difficult to handle factors such as the shape, texture, color, size, and poor contrast of the exudates.

Hindawi Complexity, Volume 2020, Article ID 5801870, 11 pages. https://doi.org/10.1155/2020/5801870



For the diagnosis of diabetic retinopathy, image processing techniques including optic disk localization, adaptive thresholding, image boundary tracing, and morphological preprocessing are widely used for feature extraction from retinal fundus images. According to [4], early detection of exudates in the retina may assist ophthalmologists in the timely and proper treatment of the affected person. A U-Net-based technique was applied for the segmentation and detection of exudates on 107 retinal images. The reported network was composed of expansive and shrinking streams, where the shrinking stream has a structure similar to CNNs. An unsupervised segmentation technique can detect hard exudates on the basis of ant colony optimization. The experimental results were compared with a traditional segmentation technique, the Kirsch filter, and the unsupervised approach was found to perform better than the traditional approach [5].

Deep convolutional neural networks have also played an important role in the segmentation and detection of exudates using digital fundus images. Tan et al. [6] developed a convolutional neural network to automatically discriminate and segment microaneurysms, hemorrhages, and exudates. The reported method shows that a single CNN can be used for the segmentation of retinal features from a huge amount of retinal data with appropriate accuracy. Furthermore, García et al. [7] investigated three classifiers, multilayer perceptron (MLP), radial basis function (RBF), and support vector machine (SVM), to detect hard exudates. In this report, 117 retinal fundus images were used with different variables including quality, brightness, and color. Xiao et al. presented a review of exudate detection in diabetic retinopathy on the basis of a large-scale assessment of the related published articles. In the reported paper, the authors focused on recent and emerging techniques, including deep learning, to detect and classify diabetic retinopathy in retinal fundus images [8].

In the segmentation and detection of exudates, it is necessary to localize the specified features. A location-to-segmentation approach for exudate segmentation using digital fundus images was reported [9], composed of three steps: noise removal, hard exudate localization in the retinal fundus images, and hard exudate segmentation for diabetic retinopathy. Noise removal was performed with matched filters for vessel segmentation, and optic disc segmentation was performed on the basis of a saliency technique. Furthermore, the locations of exudates were identified using a random forest classifier to categorize the patches into exudate and nonexudate classes. Finally, the local contrast and exudate regions were identified for the segmentation of exudates and were further classified as exudate and nonexudate patches. Asiri [10] presented a review highlighting recent developments in the field of diabetic retinopathy. The automatic detection of diabetic retinopathy and macular degeneration has become one of the hottest topics of recent deep learning-based research work.

In addition, enormous work has been done to automatically identify exudates on the basis of features including texture, shape, and size. Well-known exudate detection techniques can be separated into four basic types: (1) machine learning-based techniques, (2) threshold-based techniques, (3) mathematical morphological techniques, and (4) region growing approaches.

Machine learning-based algorithms comprise supervised and unsupervised learning approaches. A. R. Chowdhury et al. [11] applied a random forest classifier for the detection of retinal abnormalities. The technique was based on k-means segmentation of fundus photographs, with preprocessing performed by machine learning approaches based on statistical and low-level features. Moreover, a novel approach was introduced by Perdomo et al. [12] for the detection of diabetic macular edema on the basis of exudates' locations using machine learning techniques. Furthermore, Carson Lam et al. [13] applied pretrained models, namely, AlexNet and GoogLeNet, for the detection of diabetic retinopathy. The reported article recognized different stages of diabetic retinopathy using convolutional neural networks. The authors highlighted multinomial classification models and discussed some issues concerning misclassification of the disease and the limitations of CNNs.

Threshold techniques utilize variations in color strength among different image regions. In this context, an iterative thresholding technique based on particle and firefly swarm optimization was presented to diagnose exudates and hemorrhages [14]. The threshold technique consisted of image enhancement using preprocessing techniques and vessel segmentation using Top-hat and Gabor transformations. The detection of hemorrhages was performed on the basis of linear regression and a support vector machine classifier. Additionally, Kaur and Mittal [15] reported an exudate segmentation technique developed to help eye specialists with effective planning and timely treatment in the detection of DR. The authors applied a dynamic decision thresholding approach to find the faint and bright edges, which helps to segment the hard exudates efficiently and to pick the threshold values in the retinal fundus images dynamically. Furthermore, Das and Puhan [16] presented a Tsallis entropy thresholding technique to enhance the visibility of exudates in diabetic retinopathy. The obtained exudate features are further filtered to remove false positives by sparse-based dictionary learning and categorization. The Tsallis technique was analyzed on public datasets including DIARETDB1 and e-Ophtha and obtained better results, with 95% accuracy.

Figure 1: (a) Normal retina and (b) diabetic retinopathy [2]. The labeled structures include hemorrhages, retinal neovascularization, aneurysms, cotton wool spots, hard exudates, the optic disc, the central retinal vein and artery, retinal venules and arterioles, the fovea, and the macula.

A huge amount of work has been done to detect abnormalities in fundus images using mathematical morphological approaches. Morphological techniques utilize several mathematical operators with different structuring elements. Jaafar et al. [17] reported an automated technique for the identification of exudates in fundus photographs. In this work, a new method for pure splitting of colored fundus images was applied: in the first stage, a segmentation process was performed on the basis of the variation of pixels in the fundus images, and then a morphological technique was applied to filter the adaptive thresholding outcomes on the basis of the segmentation results. Additionally, a random forest technique was applied for the detection of hard exudates in the given fundus images. For diabetic retinopathy, an ensemble classifier has been applied for multiclass segmentation and localization of hard exudates [18]. The features of exudates were extracted at coarse-grain and fine-grain levels with the use of a Gabor filter and morphological reconstruction, respectively. The candidate regions were used to train an ensemble classifier to classify exudate and nonexudate boundaries. Four publicly available datasets, Messidor, HEI-MED, e-Ophtha EX, and DIARETDB1, were used for the experiments. Harangi and Hajdu [19] also reported a novel approach to detect exudates in three steps: candidate extraction by a greyscale morphological technique, precise boundary segmentation by a contour-based technique, and exudate classification by a region-wise classifier. Harangi et al. [20] presented an exudate detection approach using greyscale morphology and active contour techniques to recognize potential exudate sites and to extract the exact boundaries of the candidates, respectively.

Region growing approaches examine the neighbourhoods of starting positions and decide whether they can be members of a particular region. Lim et al. [21] introduced a modification of previous research work. The classification of diabetic and normal macular edema was performed with the help of extracted exudates. The detection of exudates was performed on the basis of marked macular regions to distinguish diabetic retinopathy in the retinal fundus images. On the basis of contour identification, Harangi and Hajdu [22] introduced an exudate detection technique with additional region-wise categorization. In this technique, morphological approaches including greyscale morphology were applied to extract the exudate features, with the proper shape obtained by a Markovian segmentation system. A novel approach for the detection of diabetic macular edema was developed by Giancardo et al. [23] on the basis of features including exudate segmentation, wavelet decomposition, and color. The experiments were performed on publicly available datasets and obtained 88% to 94% accuracy, depending on the dataset.

The goal of the proposed technique in this research work is to detect exudates in diabetic retinopathy using transfer learning. The main contribution of the proposed work is to apply the transfer learning concept for feature extraction using well-known pretrained deep convolutional neural networks, including Inception-v3, ResNet-50, and VGG-19. Additionally, fusion is performed on the extracted features, which are further classified by softmax for the final decision.

The rest of the article is organized as follows: the proposed method is explained in Section 2, the experimental results and discussion are covered in Section 3, and finally, the findings are concluded in Section 4.

2. The Proposed Technique

In this section, the proposed framework based on pretrained convolutional neural network architectures is described for retinal exudate detection and classification in fundus images. In the proposed framework, three well-reputed pretrained network architectures are combined to perform feature fusion, as different architectures can capture different features. If only a single architecture had been adopted instead of combining multiple architectures, the probability of missing some useful features would have been high, which might ultimately have affected the performance of the proposed framework.

Initially, data preprocessing is performed on both datasets to standardize the exudate patches, and then a Gaussian mixture technique is applied to localize the candidate exudates before feature extraction. The framework extracts low-level features individually with three reputed pretrained convolutional neural network architectures, Inception-v3, VGG-19, and ResNet-50. Moreover, the collective features are fed into the fully connected (FC) layers for further processing, including classification performed by softmax to separate the retinal exudate and nonexudate patches, as shown in Figure 2.

2.1. Dataset. Data gathering is an essential part of the experiments for the analysis of the proposed technique. In this approach, two publicly available retinal datasets are used for the experiments: (i) e-Ophtha and (ii) DIARETDB1. The e-Ophtha dataset contains 47 retinal fundus images examined by four expert ophthalmologists for manual annotation of exudates [24]. The size of the retinal images varies from a resolution of 1400 × 960 to 2544 × 1696 pixels. The DIARETDB1 dataset contains 89 retinal fundus photographs with a resolution of 1500 × 1152 [25]. All the retinal images were captured by a specified digital fundus camera with a 50-degree field of view. The examination of exudates for diabetic retinopathy was performed manually and evaluated by five authorized ophthalmologists. Soft and hard exudates were labelled with "exudates" as a single class. All images were resized to the standard size of the DIARETDB1 images, with a resolution of 1500 × 1152 pixels, and the estimated image scale was decided based on the standard size of the retinal optic disc. Samples of affected and healthy retinal images from e-Ophtha and DIARETDB1 are shown in Figure 3.

2.2. Data Preprocessing. In this phase, the input data are prepared for standardization because of variations in the size of the retinal exudates. Figure 4 demonstrates the distribution of patch sizes among all the extracted retinal exudate patches; the length and width of the extracted patches correspond to the X and Y axes, respectively. It shows that, ignoring outliers, the retinal exudate patches vary from a size of 25 × 25 to a size of 286 × 487 pixels. In this case, the analysis of the retinal images requires a standard patch size for better understanding of the data labelling. As a solution, the smallest patch size that still allows identification of the pathological sign by experts was selected [26].

In the proposed model, a 25 × 25 patch size of colored patch images is used with two groups, nonexudate and exudate. Manual exudate patch extraction was performed, yielding 36,500 and 75,600 exudate patches from the e-Ophtha and DIARETDB1 datasets, respectively. Similarly, to balance the dataset, 35,000 and 60,000 nonexudate patches were extracted from regions of the e-Ophtha and DIARETDB1 databases. The retinal nonexudate patch group contains various retinal structures, including optic nerve heads, background tissues, and retinal blood vessels. In the proposed technique, all the patches were extracted without any kind of overlap; examples of the nonexudate and exudate patch classes can be seen in Figures 5(a) and 5(b), respectively.
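The patch extraction step above can be sketched as follows. This is a minimal illustration, not the authors' code: it tiles an image into non-overlapping 25 × 25 patches and labels each tile by whether the annotation mask marks any exudate pixels inside it. The function name `extract_patches` and the `min_lesion_pixels` threshold are assumptions for illustration.

```python
import numpy as np

PATCH = 25  # standard patch size used in the paper

def extract_patches(image, mask, patch=PATCH, min_lesion_pixels=1):
    """Split `image` into non-overlapping patch×patch tiles and label a
    tile 'exudate' if `mask` marks at least `min_lesion_pixels` pixels
    inside it (the threshold is an illustrative assumption)."""
    h, w = mask.shape
    exudate, nonexudate = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = image[y:y + patch, x:x + patch]
            if mask[y:y + patch, x:x + patch].sum() >= min_lesion_pixels:
                exudate.append(tile)
            else:
                nonexudate.append(tile)
    return np.array(exudate), np.array(nonexudate)

# toy example: a 100×100 "fundus image" with one annotated lesion region
rng = np.random.default_rng(0)
img = rng.random((100, 100))
gt = np.zeros((100, 100), dtype=np.uint8)
gt[10:20, 30:40] = 1                     # annotated exudate pixels
ex, nonex = extract_patches(img, gt)
print(ex.shape, nonex.shape)
```

Because the tiles do not overlap, each annotated region contributes a bounded number of exudate patches, which matches the paper's statement that patches were obtained without any kind of overlap.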

2.3. Region of Interest Localization. Exudates can be described as bright lesions, highlighted as bright patches and spots in diabetic retinopathy with full contrast in the yellow plane of the color fundus image. Exudate segmentation is applied before feature extraction using region of interest (ROI) localization. In this step, exudate segmentation is performed to detect the ROI in the retinal fundus images. Numerous approaches have been used for this purpose, including neural networks, fuzzy models, edge-based segmentation, and ROI-based segmentation. In the proposed technique, a Gaussian mixture approach is used for exudate localization. Stauffer and Grimson [27] used Gaussian sorting to attain a background subtraction technique. In this paper, a hybrid technique is applied, integrating a Gaussian mixture model (GMM) with an adaptive learning rate (ALR) to attain a significant outcome in the form of candidate exudate detection. The region of interest (ROI) acquired from the hybrid approach is shown in Figure 6. The ROI is fed into the pretrained convolutional neural network models for feature extraction to obtain a compact feature vector.

Figure 2: The proposed pretrained CNN-based framework. The pipeline proceeds from the fundus image datasets (input) through data preprocessing, region of interest localization (GMM + ALR), transfer learning with the pretrained Inception-v3, ResNet-50, and VGG-19 architectures for deep feature extraction, fusion of the features into a single feature vector, and a softmax classifier that outputs exudate and nonexudate patches.

The following equation calculates the region of interest (ROI) by the Gaussian mixture model:

m(x) = Σ_{q=1}^{Q} r_q w(x, μ_q, σ_q),   (1)

where r_q denotes a weight factor and w(x, μ_q, σ_q) represents the normalized Gaussian with mean μ_q. The adaptive learning rate is used to revise μ_q frequently, applying the probability constraint w(x, μ_q, σ_q) to recognize whether a pixel is part of the qth Gaussian distribution or not.
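Equation (1) and the mean update can be sketched numerically as below. This is a hedged one-dimensional illustration, not the paper's implementation: the exact ALR schedule is not specified in the text, so the update here scales a base rate by each component's membership probability, which is one common reading.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normalized 1-D Gaussian density w(x, mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_score(x, weights, mus, sigmas):
    """Equation (1): m(x) = sum_q r_q * w(x, mu_q, sigma_q)."""
    return sum(r * gaussian(x, m, s) for r, m, s in zip(weights, mus, sigmas))

def adaptive_update(x, mus, sigmas, base_rate=0.05):
    """Revise each mean with a learning rate scaled by the membership
    probability of the pixel (the exact ALR schedule is an assumption)."""
    probs = np.array([gaussian(x, m, s) for m, s in zip(mus, sigmas)])
    probs = probs / probs.sum()          # responsibility of each component
    return [m + base_rate * p * (x - m) for m, p in zip(mus, probs)]

# two components: dark background vs. bright candidate-exudate intensities
weights, mus, sigmas = [0.7, 0.3], [0.2, 0.9], [0.1, 0.05]
bright_pixel = 0.88
score = mixture_score(bright_pixel, weights, mus, sigmas)
new_mus = adaptive_update(bright_pixel, mus, sigmas)
print(round(score, 3), [round(m, 3) for m in new_mus])
```

A bright pixel yields a high mixture score from the "bright" component, and only that component's mean moves toward the pixel, which is the behaviour the ALR-based revision of μ_q describes.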

2.4. Pretrained Deep Convolutional Neural Network Models for Feature Extraction. At the start, individual deep convolutional neural network models are applied to extract the features, and the adopted models are later combined at the FC layer for the categorization of fundus images. In this scenario of feature combination, multiple types of features, including compactness, roundness, and circularity, could be extracted by a single shape descriptor. In the proposed technique, three up-to-date deep convolutional neural network architectures, Inception-v3 [28], Residual Network (ResNet)-50 [29], and Visual Geometry Group Network (VGGNet)-19 [30], are applied for feature extraction and for the further classification of exudate and nonexudate diabetic retinopathy. These CNN models have already been trained on numerous standard image descriptors, supported by the significant features extracted from small images on the basis of transfer learning [31]. In the following subsections, the adopted deep convolutional neural network architectures are briefly described.

Figure 3: Retinal image samples from the e-Ophtha ((a) affected and (b) healthy) and DIARETDB1 ((c) affected and (d) healthy) databases. In (a) and (c), exudates are encircled.

Figure 4: The class of sample exudate patches (patch width and height, in pixels, on the X and Y axes, each ranging from 0 to about 500).

2.4.1. Inception-v3 Architecture. Inception-v3 is a convolutional network built from convolutional layers, pooling layers, rectified linear activation layers, and fully connected layers, designed for image recognition and classification. The proposed model is based on the Inception-v3 architecture, which pools several convolutional filters of various sizes into a single combined filter. This combined filter not only decreases the computational complexity but also reduces the number of parameters. Inception-v3 also attains better accuracy through the combination of heterogeneous-sized filters and low-dimensional embeddings. The basic architecture of Inception-v3 is shown in Figure 7.
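The core Inception idea, running several filter branches in parallel on the same input and concatenating their outputs along the channel axis, can be sketched as below. This is a simplified illustration: each branch here is a stand-in channel projection rather than a real 1×1/3×3/5×5 convolution or pooling path, and all names are hypothetical.

```python
import numpy as np

def branch(x, out_channels):
    """Stand-in for one Inception branch (1×1, 3×3, 5×5, or pooling + 1×1):
    here it simply projects the channel dimension to `out_channels`."""
    h, w, c = x.shape
    rng = np.random.default_rng(out_channels)
    weights = rng.standard_normal((c, out_channels)) / np.sqrt(c)
    return np.maximum(x @ weights, 0.0)        # ReLU activation

def inception_module(x):
    """Run parallel branches on the same input and concatenate their
    feature maps along the channel axis (the filter concatenation step)."""
    branches = [branch(x, 16), branch(x, 24), branch(x, 8), branch(x, 8)]
    return np.concatenate(branches, axis=-1)

x = np.ones((25, 25, 3))                       # one 25×25 RGB patch
y = inception_module(x)
print(y.shape)                                 # channels: 16 + 24 + 8 + 8 = 56
```

The concatenation explains why heterogeneous-sized filters can coexist in one layer: each branch contributes its own slice of channels to the combined output.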

2.4.2. ResNet-50 Architecture. Residual Network-50 is a deep convolutional neural network that achieves significant results in classification on the ImageNet database [32]. ResNet-50 is composed of convolutional filters of numerous sizes to reduce the training time and to manage the degradation issue that arises in deep structures. In this work, ResNet-50 is applied as already trained on the standard ImageNet database [33], except for the fully connected softmax layer associated with this model. The basic architecture of ResNet-50 is shown in Figure 8.
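The residual mechanism that lets ResNet-50 avoid the degradation issue is the identity shortcut, y = F(x) + x. The following is a minimal numpy sketch of one block under simplifying assumptions (per-pixel 1×1 layers instead of full convolutions); it is not the authors' implementation.

```python
import numpy as np

def conv1x1(x, weights):
    """Stand-in 1×1 convolution: a per-pixel linear map over channels."""
    return x @ weights

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: y = ReLU(F(x) + x), where F is two
    stacked layers with a ReLU in between (a simplified bottleneck)."""
    fx = np.maximum(conv1x1(x, w1), 0.0)   # first layer + ReLU
    fx = conv1x1(fx, w2)                   # second layer
    return np.maximum(fx + x, 0.0)         # add the skip connection, then ReLU

rng = np.random.default_rng(1)
x = rng.standard_normal((7, 7, 64))
w1 = rng.standard_normal((64, 64)) * 0.01
w2 = rng.standard_normal((64, 64)) * 0.01
y = residual_block(x, w1, w2)
print(y.shape)
```

Because the block learns only the residual F(x), gradients can flow through the skip connection even when F contributes little, which is what keeps very deep stacks trainable.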

2.4.3. VGG-19 Architecture. The Visual Geometry Group Network model is a deep neural network based on multilayered operations. It is comparable with the AlexNet model, except for additional convolutional layers. The VGGNet architecture is built by replacing large kernel-sized filters with stacked filters of window size 3 × 3, followed by 2 × 2 pooling layers. The general VGG-19 architecture contains 3 × 3 convolutional layers, rectification layers, and pooling layers, plus three fully connected layers (two with 4096 neurons and a final one with 1000) [30]. The VGGNet-19 neural network performs better than the AlexNet architecture due to its simplicity. The basic architecture of VGG-19 is shown in Figure 9.
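A quick way to see how the 3 × 3 convolution plus 2 × 2 pooling design shapes the network is to trace the spatial size through the five pooling stages, as sketched below (padded 3 × 3 convolutions preserve size; only pooling halves it).

```python
def vgg_spatial_sizes(input_size=224, pool_stages=5):
    """Each 3×3 convolution in VGG uses padding 1, so it preserves the
    spatial size; only the five 2×2 max-pooling layers halve it."""
    sizes = [input_size]
    for _ in range(pool_stages):
        sizes.append(sizes[-1] // 2)
    return sizes

print(vgg_spatial_sizes())   # [224, 112, 56, 28, 14, 7]
```

The resulting sequence 224 → 112 → 56 → 28 → 14 → 7 matches the stage sizes shown in Figure 9.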

2.5. Transfer Learning and Feature Fusion. In the field of machine learning, transfer learning is recognized as a most useful method, in which contextual knowledge learned while solving one problem is applied to new, related problems. Primarily, in the transfer learning approach, a network is trained for a particular job on a related dataset, and after that, the transfer to the objective job is trained on the objective dataset [34]. In this work, the objective of the proposed technique is to experiment with the well-known CNN models in both the transfer learning context and feature-level fusion for retinal exudate classification, and to validate the achieved results on the e-Ophtha and DIARETDB1 retinal datasets. The fusion approach combines features extracted from the fully connected layers of three different DCNNs; the features of all three DCNNs are merged together into a single feature vector. Suppose the three different CNN architectures and their respective FC layers are represented as

Figure 5: (a) Nonexudate patches; (b) exudate patches.

Figure 6: The region of interest is encircled with a blue line.

X = {x | x ∈ {1, 2, 3}},   (2)

Y = {y | y ∈ {1, 2, 3}},   (3)

where equation (2) represents the three CNN models and equation (3) illustrates the number of FC layers. Therefore, the extracted features are combined in the feature vector space FV⊕, having dimension d, which can be described as

FV⊕ = X + Y.   (4)
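The fusion step of equation (4) can be sketched as feature concatenation. This is one common reading of FC-level feature fusion, not necessarily the paper's exact operator; the feature dimensions (2048, 2048, 4096) are assumptions chosen to match typical pooled outputs of the three backbones.

```python
import numpy as np

def fuse_features(*feature_vectors):
    """Concatenate per-model FC-layer features into one fused vector FV⊕.
    (Concatenation is an assumed realization of the fusion in eq. (4).)"""
    return np.concatenate(feature_vectors)

# illustrative FC-layer outputs from the three pretrained backbones
rng = np.random.default_rng(42)
f_inception = rng.standard_normal(2048)   # Inception-v3 features (assumed)
f_resnet    = rng.standard_normal(2048)   # ResNet-50 features (assumed)
f_vgg       = rng.standard_normal(4096)   # VGG-19 features (assumed)
fused = fuse_features(f_inception, f_resnet, f_vgg)
print(fused.shape)                        # (8192,)
```

The fused vector then becomes the single input to the softmax classifier, so no individual backbone's features are discarded.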

Transfer learning-based techniques are implemented withthe pretrained Inception-v3 ResNet-50 and VGGNet-19

1 times 1convolutions

3 times 3convolutions

Filterconcatenation

Previous layer

5 times 5convolutions

1 times 1convolutions

3 times 3 maximumpooling

1 times 1convolutions

1 times 1convolutions

Figure 7 e basic Inception-v3 architecture [28]

7 times

7 co

nv 6

42

1 times

1 co

nv 6

43

times 3

conv

64

1 times

1 co

nv 2

56

1 times

1 co

nv 6

43

times 3

conv

64

1 times

1 co

nv 2

56

1 times

1 co

nv 6

43

times 3

conv

64

1 times

1 co

nv 2

56

1 times

1 co

nv 1

282

3 times

3 co

nv 1

281

times 1

conv

512

1 times

1 co

nv 1

283

times 3

conv

128

1 times

1 co

nv 5

12

1 times

1 co

nv 1

283

times 3

conv

128

1 times

1 co

nv 5

12

1 times

1 co

nv 2

562

3 times

3 co

nv 2

561

times 1

conv

102

4

1 times

1 co

nv 2

563

times 3

conv

256

1 times

1 co

nv 1

024

1 times

1 co

nv 2

563

times 3

conv

256

1 times

1 co

nv 1

024

1 times

1 co

nv 5

122

3 times

3 co

nv 5

121

times 1

conv

204

8

1 times

1 co

nv 5

123

times 3

conv

512

1 times

1 co

nv 2

048

1 times

1 co

nv 5

123

times 3

conv

512

1 times

1 co

nv 2

048

fc 1

000

cfg[0] blocks cfg[1] blocks cfg[3] blockscfg[2] blocksSize

112

Size

56

Size

28

Size

14

Size

7

50 layers cfg = [3 4 6 3]101 layers cfg = [3 4 23 8]152 layers cfg = [3 8 36 3]

avg p

ool2

max

poo

l2

Figure 8 e basic Residual Network-50 architecture [29]

Conv

64K

3 times

3

Conv

64K

3 times

3

Conv

128K

3 times

3

Conv

128K

3 times

3

Conv

256K

3 times

3

Conv

256K

3 times

3

Conv

512K

3 times

3

Conv

512K

3 times

3

Conv

512K

3 times

3

Conv

512K

3 times

3

Pool

Fully connected layers

K = kernals N = neurons

Pool

Pool

Pool

Pool

FC40

96N

FC40

96N

FC10

00N

Convolutional + maximum pooling layers

Laye

r 1

Laye

r 2

Laye

r 3

Laye

r 4

Laye

r 5

Size

7La

yer 6

Laye

r 7

Laye

r 8

Size

224

Size

112

Size

56

Size

28

Size

14

Figure 9 e basic VGGNet-19 architecture [30]

Complexity 7

architectures from ImageNet. The transfer learning setup is arranged by treating the remaining neural network modules as a fixed feature extractor for the new datasets. Generally, transfer learning retains the primary pretrained model weights and extracts image-based features through the concluding network layer. A large amount of data is usually required to train a convolutional neural network from scratch; however, it is sometimes hard to assemble a large database for a related problem. Contrary to this ideal circumstance, in most real-world applications it is difficult, or it rarely happens, to obtain similar training and testing data. In this scenario, the transfer learning approach is applied, and it has also proved to be a fruitful technique. There are two main steps in the transfer learning approach: first, the selection of the pretrained architecture; second, the assessment of problem similarity and dataset size. In the selection phase, the choice of pretrained architecture is based on the relevant problem, which is associated with the target problem. Regarding the similarity and size of the dataset, if the target database is small (for example, fewer than one thousand images) yet relevant to the source database (for example, vehicle, handwritten-character, and medical datasets), then there is a higher chance of overfitting. In another case, if the target database is sufficiently large and relevant to the source training database, then there is little chance of overfitting, and only fine-tuning of the pretrained architecture is needed. The deep convolutional neural network (DCNN) models including Inception-v3, ResNet-50, and VGG-19 are applied in the proposed framework to utilize their features through fine-tuning and transfer learning. In the beginning, the training of the selected convolutional neural network models is performed using sample images taken from the standard, publicly available "ImageNet" database; moreover, the idea of transfer learning for fused feature extraction has been implemented. In this case, the proposed technique assists the architecture in learning common features from the new dataset without any requirement of further training. The independently extracted features of all the selected convolutional neural network models are joined into the FC layer for further action, including the classification of nonexudate and exudate patch classes, performed by softmax.
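The fusion step described above can be sketched in a few lines: the FC-layer feature vectors from the three backbones are concatenated into one descriptor, which a softmax layer maps to the two patch classes. This is a minimal NumPy illustration only; the random vectors stand in for real extracted features, and the feature dimensions and untrained weight matrix `W` are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the FC-layer feature vectors that Inception-v3,
# ResNet-50, and VGG-19 would produce for one input patch
# (dimensions are illustrative assumptions).
f_inception = rng.standard_normal(2048)
f_resnet = rng.standard_normal(2048)
f_vgg = rng.standard_normal(4096)

# Feature fusion: concatenate the three vectors into a single descriptor.
fused = np.concatenate([f_inception, f_resnet, f_vgg])

# A softmax layer over two classes (exudate / nonexudate).
# W and b would normally be learned during fine-tuning.
W = rng.standard_normal((2, fused.size)) * 0.01
b = np.zeros(2)

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(W @ fused + b)
label = "exudate" if probs[0] > probs[1] else "nonexudate"
print(fused.shape, probs.sum(), label)
```

The design point is that fusion happens at the feature level, before the classifier, so the softmax sees one joint descriptor rather than three separate predictions.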

3. Results and Discussion

The experiments are performed on "Google Colab" using graphics processing units (GPUs). For the performance evaluation, two publicly available standard datasets are selected for the experiments. The training phase is divided into 2 sessions, and each session took 6 hours to complete the experimental task. The designed framework of the proposed technique is trained on 3 types of convolutional neural network architectures, including Inception-v3, ResNet-50, and VGG-19, individually, and after that transfer learning is performed to transfer the learned knowledge into the fused extracted features. The experimental results attained from the individual convolutional neural networks are compared and analyzed against the set of fused features alongside various existing approaches. A 10-fold cross-validation approach is applied for performance evaluation. Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k = 10 becoming 10-fold cross-validation [35]. The input data are divided into different ratios of training and testing datasets used in the experiments of the proposed methodology. The data splitting is performed in three different ways: 70% for training and 30% for testing; similarly, 80% for training and 20% for testing; and 90% for training and 10% for testing the CNN architectures. Table 1 shows the comparative analysis of the three individual CNN architectures with the proposed technique on the basis of data splitting using the e-Ophtha dataset.
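The 10-fold protocol cited from [35] reduces to simple index bookkeeping: shuffle the sample indices, split them into k groups, and let each group serve once as the held-out test set. A minimal sketch (the `kfold_indices` helper and the toy sample count are illustrative, not from the paper):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Split sample indices into k folds; each fold serves once as the
    test set while the remaining k-1 folds form the training set."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# 10-fold cross-validation over a toy set of 100 patches.
splits = list(kfold_indices(100, k=10))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 10 folds, 90/10 split
```

The fixed 70/30, 80/20, and 90/10 splits used in Tables 1 and 2 are the same idea with a single partition instead of k rotating ones.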

Similarly, Table 2 illustrates the individual architectures' and the proposed technique's results in terms of classification accuracy using the DIARETDB1 dataset.

In the context of classification performance, a true positive (TP) is an outcome where the model correctly predicts the positive class. Similarly, a true negative (TN) is an outcome where the model correctly predicts the negative class. However, false negatives (FN) and false positives (FP) represent the samples which are misclassified by the model. The following equations can be applied for the performance assessment.

Accuracy: it is a measure used to evaluate the model's effectiveness in identifying correct class labels and can be calculated by the following equation:

accuracy = (TN + TP) / (TN + TP + FN + FP) × 100.  (5)

F-measure: it averages out the precision and recall of a classifier, with a range between 0 and 1, where "0" and "1" represent the worst and best scores, respectively. It is computed as follows:

precision = TP / (FP + TP),

recall = TP / (FN + TP),

F1 score = (2 × recall × precision) / (recall + precision).  (6)
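Equations (5) and (6) translate directly into code. A small sketch with toy confusion counts (the counts below are illustrative only, not results from the paper):

```python
def accuracy(tp, tn, fp, fn):
    """Equation (5): percentage of correctly classified samples."""
    return (tp + tn) / (tp + tn + fp + fn) * 100

def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that were detected."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Equation (6): harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * r * p / (r + p)

# Toy confusion counts for an exudate-vs-nonexudate classifier.
tp, tn, fp, fn = 90, 85, 10, 5
print(accuracy(tp, tn, fp, fn), precision(tp, fp), recall(tp, fn))
```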

Table 1 also illustrates the output on exudate patches with the respective F1 score, recall, precision, and accuracy. The highest classification accuracy of the individual CNN architectures and the proposed technique is achieved with the help of the data-splitting approach. Overall, the proposed approach achieves higher classification accuracy for retinal exudate detection than the individual CNN architectures. Using the e-Ophtha dataset, Table 1 shows that the highest classification accuracy of the individual CNN architectures, including Inception-v3, ResNet-50, and VGG-19, is 93.67%, 97.80%, and 95.80%, respectively, but the proposed approach attained a classification accuracy of 98.43%. Using the DIARETDB1 dataset, Table 2 illustrates that the highest classification accuracy of the individual CNN architectures, including Inception-v3, ResNet-50, and VGG-19, is 93.57%, 97.90%, and 95.50%, respectively, but the proposed approach attained a classification accuracy of 98.91%.

8 Complexity

To provide a better understanding of the classification accuracy results, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 demonstrates the comparative results obtained by the proposed framework and the existing familiar approaches for the detection of retinal exudates. Table 3 illustrates the classification accuracies of [18] as 87%, [36] as 92%, and [37] as 97.60% and 98.20%. The proposed framework achieved higher accuracy than the abovementioned techniques using both the e-Ophtha and DIARETDB1 datasets.

The comparative classification performance of the proposed framework against [37] is only slightly higher, but the extracted features achieved by the proposed framework can support the final results and be especially meaningful

Table 1: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the e-Ophtha dataset.

CNN model        Training (%)   Testing (%)   F1 score   Recall   Precision   Classification accuracy (%)
Inception-v3     70             30            0.95       0.98     0.92        92.50
Inception-v3     80             20            0.93       0.94     0.92        92.90
Inception-v3     90             10            0.94       0.92     0.96        93.67
ResNet-50        70             30            0.94       0.98     0.91        90.67
ResNet-50        80             20            0.94       0.98     0.90        95.70
ResNet-50        90             10            0.98       0.97     0.99        97.80
VGG-19           70             30            0.94       0.99     0.90        92.33
VGG-19           80             20            0.94       0.93     0.95        95.80
VGG-19           90             10            0.94       0.90     0.89        93.50
Proposed model   70             30            0.96       0.96     0.97        97.98
Proposed model   80             20            0.96       0.97     0.95        98.43
Proposed model   90             10            0.95       0.96     0.95        97.90

Table 2: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the DIARETDB1 dataset.

CNN model        Training (%)   Testing (%)   F1 score   Recall   Precision   Classification accuracy (%)
Inception-v3     70             30            0.95       0.98     0.93        93.10
Inception-v3     80             20            0.93       0.94     0.92        93.30
Inception-v3     90             10            0.95       0.94     0.97        93.57
ResNet-50        70             30            0.95       0.99     0.92        90.57
ResNet-50        80             20            0.94       0.98     0.90        96.10
ResNet-50        90             10            0.98       0.98     0.98        97.90
VGG-19           70             30            0.94       0.98     0.90        93.12
VGG-19           80             20            0.93       0.92     0.95        95.50
VGG-19           90             10            0.91       0.93     0.90        93.76
Proposed model   70             30            0.96       0.97     0.96        98.72
Proposed model   80             20            0.95       0.96     0.95        98.91
Proposed model   90             10            0.96       0.96     0.96        97.92

Figure 10: The comparative classification accuracy results using the e-Ophtha dataset (models: Inception-v3, ResNet-50, VGG-19, and the proposed model; Y-axis: classification accuracy, 0–100%).


in clinical practice. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models, namely, Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19), are used to extract features from fundus images based on transfer learning for the improvement of classification accuracy. Finally, the classification accuracy of the proposed model is compared with that of the various DCNN models separately, as well as with the existing techniques. The proposed transfer learning-based framework has been evaluated, and outstanding results in terms of accuracy are obtained compared with training from scratch. Hence, the accuracy of the proposed approach outperforms the other existing techniques for the detection of retinal exudates. In future work, the proposed framework can be modified to discriminate hard and soft exudates. Moreover, the proposed framework can also be extended to diagnose hemorrhages and microaneurysms in diabetic retinopathy.

Data Availability

In the experiment, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.

[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.

[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.

[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, San Diego, CA, USA, January 2019.

[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14–24, 2015.

[6] J. H. Tan, H. Fujita, S. Sivaprasad et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66–76, 2017.

[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9–19, 2009.

[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423–446, 2019.

[9] Q. Liu, B. Zou, J. Chen et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.

[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.

[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193–203, 2019.

Figure 11: The comparative classification accuracy results using the DIARETDB1 dataset (models: Inception-v3, ResNet-50, VGG-19, and the proposed model; Y-axis: classification accuracy, 0–100%).

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

Year   Methodology             Database    Accuracy (%)
2017   Fraz et al. [18]        DIARETDB1   87.00
2018   Mo et al. [36]          e-Ophtha    92.00
2019   Khojasteh et al. [37]   e-Ophtha    97.60
2019   Khojasteh et al. [37]   DIARETDB1   98.20
       Proposed framework      e-Ophtha    98.43
       Proposed framework      DIARETDB1   98.91


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.

[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.

[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499–515, 2019.

[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27–53, 2018.

[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.

[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745–6748, September 2010.

[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.

[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.

[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951–5954, August 2012.

[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265–269, December 2011.

[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130–133, November 2014.

[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.

[24] E. Decencière, G. Cazuguel, X. Zhang et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196–203, 2013.

[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The DIARETDB1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1–10, September 2007.

[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas City, MO, USA, pp. 325–331, November 2017.

[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, Colorado, pp. 246–252, June 1999.

[28] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1–9, June 2015.

[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770–778, June 2016.

[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[31] J. Yosinski, "How transferable are features in deep neural networks?," Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.

[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403–5413, 2015.

[33] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.

[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161–189, 2013.

[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.

[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161–171, 2018.

[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62–69, 2019.



For the diagnosis of diabetic retinopathy, image processing techniques including optic disk localization, adaptive thresholding, image boundary tracing, and morphological preprocessing are widely used for feature extraction from retinal fundus images. According to [4], early detection of exudates in the retina may assist ophthalmologists in the timely and proper treatment of the affected person. The U-Net-based technique was applied for the segmentation and detection of exudates on 107 retinal images. The reported network was composed of expansive and shrinking streams, where the shrinking stream has a structure similar to CNNs. The unsupervised segmentation technique can detect hard exudates on the basis of ant colony optimization. The experimental results were compared with a traditional segmentation technique, the Kirsch filter, and it was found that the unsupervised approach performed better than the traditional approach [5].

Deep convolutional neural networks have also performed an important role in the segmentation and detection of exudates using digital fundus images. Tan et al. [6] developed a convolutional neural network to automatically discriminate and segment microaneurysms, hemorrhages, and exudates. The reported method describes that only one CNN can be used for the segmentation of retinal features using a huge amount of retinal data with appropriate accuracy. Furthermore, García et al. [7] investigated three classifiers, multilayer perceptron (MLP), radial basis function (RBF), and support vector machine (SVM), to detect hard exudates. In this report, 117 retinal fundus images were used with different variables including quality, brightness, and color. Xiao et al. presented a review of exudate detection in diabetic retinopathy on the basis of a large-scale assessment of the related published articles. In the reported paper, the authors focused on the recent and emerging techniques, including deep learning, to detect and classify diabetic retinopathy in retinal fundus images [8].

In the segmentation and detection of exudates, it is necessary to localize the specified features. The location-to-segmentation approach for exudate segmentation using digital fundus images was reported in [9] and composed of three steps: noise removal, hard exudate localization in the retinal fundus images, and hard exudate segmentation for diabetic retinopathy. The noise removal was performed with matched filters for vessel segmentation, and optic disc segmentation was performed on the basis of a saliency technique. Furthermore, the location of exudates was identified using a random forest classifier to categorize the patches into exudate and nonexudate classes. Finally, the local contrast and exudate regions were identified for the segmentation of exudates and were further classified as exudate and nonexudate patches. Asiri [10] presented a review to highlight the recent developments in the field of diabetic retinopathy. The automatic detection of diabetic retinopathy and macular degeneration has become one of the hottest topics of recent deep learning-based research work.

In addition, enormous work has been done to automatically identify exudates on the basis of their features, including texture, shape, and size. The well-known exudate detection techniques can be separated into 4 basic types: (1) machine learning-based techniques, (2) threshold-based techniques, (3) mathematical morphological techniques, and (4) region growing approaches.

Machine learning-based algorithms contain supervised and unsupervised learning approaches. A. R. Chowdhury et al. [11] applied a random forest classifier for the detection of retinal abnormalities. The technique was based on k-means segmentation of fundus photographs, and preprocessing was performed by machine learning approaches based on statistical and low-level features. Moreover, a novel approach was introduced by Perdomo et al. [12] for the detection of diabetic macular edema on the basis of exudates' locations using machine learning techniques. Furthermore, Carson Lam et al. [13] applied pretrained models, namely AlexNet and GoogleNet, for the detection of diabetic retinopathy. The reported article recognized different stages of diabetic retinopathy using convolutional neural networks. The authors highlighted multinomial classification models and discussed some issues concerning the misclassification of disease and the limitations of CNNs in the article.

Threshold techniques utilize variations in color strength among different image regions. In this context, an iterative thresholding technique was presented on the basis of particle and firefly swarm optimization to diagnose exudates and hemorrhages [14]. The threshold technique consisted of image enhancement using preprocessing techniques and vessel segmentation using top-hat and Gabor transformations. The detection of hemorrhages is performed on the basis of linear regression and a support vector machine classifier. Additionally, Kaur and Mittal [15] reported an exudate segmentation technique developed to help eye specialists with effective planning and timely treatment in the detection of DR. The authors applied a dynamic decision thresholding approach to find the faint and bright edges, which helps to segment the hard exudates efficiently and to pick the threshold values in the retinal fundus images dynamically. Furthermore, Das and Puhan [16] presented the Tsallis entropy thresholding technique to enhance the visibility of exudates in diabetic retinopathy. The obtained features of exudates are further filtered to remove the false-positive values by sparse-based dictionary learning and categorization. The Tsallis technique was analyzed on the public datasets DIARETDB1 and e-Ophtha and obtained better accuracy results, with 95% accuracy.

Figure 1: (a) Normal retina and (b) diabetic retinopathy [2]. Labeled structures include hemorrhages, retinal neovascularization, aneurysm, cotton wool spots, hard exudates, the optic disc, the central retinal vein and artery, retinal venules and arterioles, the fovea, and the macula.

A huge amount of work has been contributed to detect the abnormalities in fundus images using mathematical morphological approaches. Morphological techniques utilize several mathematical operators having different structures of elements. Jaafar et al. [17] reported an automated technique for the identification of exudates in fundus photographs. In this work, a new method for pure splitting of colored fundus images was applied: in the first stage, a segmentation process was performed on the basis of variation calculation of pixels in fundus images, and then the morphological technique was applied to filter out the adaptive thresholding outcomes on the basis of the segmentation results. Additionally, a random forest technique was applied for the detection of hard exudates in the given fundus images. In diabetic retinopathy, an ensemble classifier is applied for multiclass segmentation and localization of hard exudates [18]. The features of exudates were extracted at coarse-grain and fine-grain levels with the use of a Gabor filter and morphological reconstruction, respectively. The candidate regions were trained on an ensemble classifier to classify the exudate and nonexudate boundaries. Four types of publicly available datasets, including Messidor, HEI-MED, e-Ophtha Ex, and DIARETDB1, were used for the experiments. Harangi and Hajdu [19] also reported a novel approach to detect exudates in three steps, including candidate extraction by a greyscale morphological technique, precise boundary segmentation by a contour-based technique, and exudate classification by a regionwise classifier. Harangi et al. [20] presented an exudate detection approach using greyscale morphology and active contour techniques to recognize potential exudate states and to extract exact boundaries of the candidates, respectively.

Region growing approaches observe the neighbourhoods of start positions and decide whether they can be a member of a particular region. Lim et al. [21] introduced a modified technique of previous research work. The classification of diabetic and normal macular edema was performed with the help of extracted exudates. The detection of exudates was performed on the basis of signed macular regions to distinguish diabetic retinopathy in the retinal fundus images. On the basis of contour identification, Harangi and Hajdu [22] introduced an exudate detection technique with additional regionwise categorization. In this technique, morphological approaches were applied, including greyscale morphology to extract the exudate features, and the proper shape was obtained by a Markovian segmentation system. A novel approach for the detection of diabetic macular edema was developed by Giancardo et al. [23] on the basis of features including exudate segmentation, wavelet decomposition, and color. The experiments were performed on publicly available datasets and obtained 88% to 94% accuracy, depending on the dataset.

In this research work, the goal of the proposed technique is to detect exudates in diabetic retinopathy using transfer learning. The main contribution of the proposed work is to apply the transfer learning concept for feature extraction using well-known pretrained deep convolutional neural networks, including Inception-v3, ResNet-50, and VGG-19. Additionally, fusion is performed on the extracted features, which are further classified by softmax for the final decision.

The rest of the article is organized as follows: the proposed method is explained in Section 2, the experimental results and discussion are covered in Section 3, and finally, the findings are concluded in Section 4.

2. The Proposed Technique

In this section, the proposed framework based on pretrained convolutional neural network architectures is described for retinal exudate detection and classification in fundus images. In the proposed framework, three well-reputed pretrained network architectures are combined to perform feature fusion, as different architectures can capture different features; if only a single architecture had been adopted instead of combining multiple architectures, then the probability of missing some useful features would have been high, which ultimately might have affected the performance of the proposed framework.

Initially, data preprocessing is performed on both datasets to standardize the exudate patches, and then the Gaussian mixture technique is applied to localize the candidate exudates before feature extraction. The novel framework supports low-level feature extraction performed individually by 3 reputed pretrained convolutional neural network architectures, including Inception-v3, VGG-19, and ResNet-50. Moreover, the collective features are treated as input into the fully connected (FC) layers for further action, including classification performed by softmax to classify the retinal exudate and nonexudate patches, as shown in Figure 2.

2.1. Dataset. Data gathering is an essential part of the experiments for the analysis of the proposed technique. In this proposed approach, two publicly available retinal datasets are used for the experiments: (i) e-Ophtha and (ii) DIARETDB1. The e-Ophtha dataset contains 47 retinal fundus images examined by four expert ophthalmologists for manual annotation of exudates [24]. The size of the retinal images varies from a resolution of 1400 × 960 to 2544 × 1696 pixels. The DIARETDB1 dataset contains 89 retinal fundus photographs with a resolution of 1500 × 1152 [25]. All the retinal images were captured by a specified digital fundus camera having a 50-degree field of view. The examination of exudates in diabetic retinopathy was performed manually and evaluated by five authorized ophthalmologists. Soft and hard exudates were labelled with "exudates" as a single class. All images were resized to the standard size of DIARETDB1 images, having a resolution of 1500 × 1152 pixels, and the estimated image scale was decided based on the standard size of the retinal optic disc. Samples including affected and healthy retinal images from e-Ophtha and DIARETDB1 are shown in Figure 3.

2.2. Data Preprocessing. In this phase, the input data are prepared for standardization because of variations in the size of the retinal exudates. Figure 4 demonstrates the distinction in patch sizes among all the extracted retinal exudate patches, with the length and width of the extracted patches corresponding to the X and Y axes, respectively. It also shows that, ignoring outliers, the collected retinal exudate patches vary from a size of 25 × 25 to a size of 286 × 487 pixels. In this case, the analysis of the retinal images requires a standard patch size for a better understanding of data labelling. As a solution, the smallest patch size at which the pathological sign can still be identified by experts was selected [26].

In the proposed model, a 25 × 25 patch size of colored patch images is used with two groups: nonexudate and exudate. Manual exudate patch extraction was performed, obtaining 36,500 and 75,600 exudate patches from the e-Ophtha and DIARETDB1 datasets, respectively. Similarly, to balance the dataset, 35,000 and 60,000 nonexudate patches were extracted from regions of the e-Ophtha and DIARETDB1 databases. The retinal nonexudate patch group contains various retinal structures, including optic nerve heads, background tissues, and retinal blood vessels. In the proposed technique, all the patches were obtained and extracted without any kind of overlap and can be seen as nonexudate and exudate patch classes in Figures 5(a) and 5(b), respectively.
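The balanced patch set described above amounts to stacking labelled 25 × 25 crops into one array. A minimal sketch, where random arrays stand in for the real cropped patches and the tiny counts are illustrative rather than the paper's 36,500/75,600 figures:

```python
import numpy as np

PATCH = 25  # standard patch size used for both classes

rng = np.random.default_rng(1)

# Stand-ins for cropped RGB patches: a few exudate and nonexudate
# examples; real patches would be cut from the fundus images.
exudate = rng.integers(0, 256, size=(6, PATCH, PATCH, 3), dtype=np.uint8)
nonexudate = rng.integers(0, 256, size=(6, PATCH, PATCH, 3), dtype=np.uint8)

# Stack into one dataset with binary labels (1 = exudate, 0 = nonexudate).
X = np.concatenate([exudate, nonexudate])
y = np.concatenate([np.ones(len(exudate), dtype=int),
                    np.zeros(len(nonexudate), dtype=int)])

print(X.shape, int(y.sum()), int(len(y) - y.sum()))  # (12, 25, 25, 3) 6 6
```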

2.3. Region of Interest Localization. Exudates can be described as bright lesions, highlighted as bright patches and spots in diabetic retinopathy, with full contrast in the yellow plane of the color fundus image. Exudate segmentation is applied before feature extraction using region of interest (ROI) localization. In this step, exudate segmentation is performed to detect the ROI in the retinal fundus images. Numerous approaches have been used for this purpose, including neural networks, fuzzy models, edge-based segmentation, and ROI-based segmentation. In the proposed technique, the Gaussian mixture approach is used for exudate localization. Stauffer and Grimson [27] used Gaussian sorting to attain a background subtraction technique. In this paper, a hybrid technique is applied with the integration of a Gaussian mixture model (GMM) on the

Figure 2: The proposed pretrained CNN-based framework: fundus image datasets (input) undergo data preprocessing and region of interest localization (GMM + ALR); transfer learning with pretrained Inception-v3, ResNet-50, and VGG-19 provides deep feature extraction (F1, F2, F3, ..., Fn per model); the features are fused into a single feature vector; and a softmax classifier outputs exudate and nonexudate patches.


basis of an adaptive learning rate (ALR) to attain a significant outcome in the form of candidate exudate detection. The region of interest (ROI) acquired from the hybrid approach is shown in Figure 6. The ROI is fed into the pretrained convolutional neural network models for feature extraction to obtain a compact feature vector.

The following equation calculates the region of interest (ROI) by the Gaussian mixture model:

m(x) = Σ_{q=1}^{Q} r_q · w(x, μ_q, σ_q),  (1)

where r_q denotes a weight factor and w(x, μ_q, σ_q) represents the normalized Gaussian with mean μ_q. The adaptive learning rate is used to update μ_q frequently, applying the probability constraint w(x, μ_q, σ_q) to recognize whether a pixel is part of the qth Gaussian distribution or not.
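Equation (1) and the membership test can be sketched in one dimension for pixel intensities. The component weights, means, and deviations below are illustrative assumptions, as is the 2.5σ membership threshold (a common choice in Stauffer–Grimson-style models, not a value stated in the paper):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normalised 1-D Gaussian density w(x, mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture(x, weights, mus, sigmas):
    """Equation (1): m(x) = sum over q of r_q * w(x, mu_q, sigma_q)."""
    return sum(r * gaussian(x, m, s) for r, m, s in zip(weights, mus, sigmas))

# Two illustrative components: darker background vs bright exudate pixels.
weights = [0.7, 0.3]
mus, sigmas = [60.0, 220.0], [20.0, 15.0]

def belongs(x, q):
    """Probability-constraint test: pixel x is assigned to component q
    if it lies within 2.5 standard deviations of mu_q."""
    return abs(x - mus[q]) <= 2.5 * sigmas[q]

# A bright pixel (intensity 230) matches the exudate component.
print(mixture(230.0, weights, mus, sigmas), belongs(230.0, 1))
```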

2.4. Pretrained Deep Convolutional Neural Network Models for Feature Extraction. Initially, individual deep convolutional neural network models are applied to extract the features, and the adopted models are later combined with an FC layer for the categorization of fundus images. In this feature-combination scenario, multiple types of features, including compactness, roundness, and circularity, could be extracted by a single shape descriptor. In the proposed technique, three recent deep convolutional neural network architectures, Inception-v3 [28], Residual Network (ResNet)-50 [29], and Visual Geometry Group Network (VGGNet)-19 [30], are applied for feature extraction and the subsequent classification of exudate and nonexudate diabetic retinopathy. These CNN models are already trained on numerous standard image descriptors, monitored by the significant features extracted from the tiny images on the basis of transfer

Figure 3: Retinal image samples from the e-Ophtha ((a) affected and (b) healthy) and DIARETDB1 ((c) affected and (d) healthy) databases. In (a) and (c), exudates are encircled.

[Figure 4 is a scatter plot of patch width (x-axis, 0 to 550) against patch height (y-axis, 0 to 500).]

Figure 4: The class of sample exudate patches.

learning [31]. In the following subsections, the adopted deep convolutional neural network architectures are briefly described.

2.4.1. Inception-v3 Architecture. Inception-v3 is a convolutional network built from convolutional layers, pooling layers, rectified linear unit layers, and fully connected layers, and it is designed for image recognition and classification. The proposed model is also based on the Inception-v3 architecture, which pools several convolutional filters of various sizes into a single novel filter. This filter not only decreases the computational complexity but also reduces the number of parameters. Inception-v3 likewise attains better accuracy through the combination of heterogeneous-sized filters and low-dimensional embeddings. The basic architecture of Inception-v3 is shown in Figure 7.
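The parameter saving claimed above can be checked with simple arithmetic. The sketch below compares one 5 × 5 convolution against two stacked 3 × 3 convolutions covering the same receptive field, which is the kind of factorization Inception-v3 uses; the channel count of 192 is assumed purely for illustration.

```python
def conv_params(k, c_in, c_out, bias=True):
    # Parameter count of a single k x k convolution layer
    return k * k * c_in * c_out + (c_out if bias else 0)

c = 192  # assumed channel count, for illustration only

# One 5 x 5 convolution versus two stacked 3 x 3 convolutions that
# cover the same 5 x 5 receptive field.
direct = conv_params(5, c, c)
factored = 2 * conv_params(3, c, c)

# The factorized form needs roughly 28% fewer parameters here.
assert factored < direct
```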

2.4.2. ResNet-50 Architecture. Residual Network-50 is a deep convolutional neural network that achieves significant results in the classification of the ImageNet database [32]. ResNet-50 is composed of convolutional filters of numerous sizes to reduce the training time and manage the degradation issue that arises in deep structures. In this work, ResNet-50 is applied as already trained on the standard ImageNet database [33], except for the fully connected softmax layer associated with this model. The basic architecture of ResNet-50 is shown in Figure 8.
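The degradation fix mentioned above comes from ResNet's identity shortcuts: each block learns a residual F(x) and outputs F(x) + x, so a block that learns nothing still passes its input through unchanged. A toy numpy sketch, with dense layers standing in for the actual convolutional bottleneck blocks and the weights assumed:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Two-layer residual branch F(x) with an identity shortcut:
    # y = ReLU(F(x) + x). If F collapses to zero, the block still
    # behaves as the identity, which counters the degradation problem.
    f = relu(x @ w1) @ w2
    return relu(f + x)
```

With zero weights the block reduces to ReLU of the input, showing that added depth cannot make the mapping worse than the identity.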

2.4.3. VGG-19 Architecture. The Visual Geometry Group Network model is a deep neural network based on multilayered operations. It is comparable with the AlexNet model except for its additional convolutional layers. The VGGNet architecture is expanded by replacing large kernel-sized filters with consecutive 3 × 3 filters and 2 × 2 pooling layers. The general VGG-19 architecture contains 3 × 3 convolutional layers, rectification layers, pooling layers, and three fully connected layers with 4096 neurons [30]. The VGGNet-19 network performs better than the AlexNet architecture due to its simplicity. The basic architecture of VGG-19 is shown in Figure 9.

2.5. Transfer Learning and Features Fusion. In the field of machine learning, transfer learning is recognized as a most useful method, which takes the contextual knowledge learned while solving one problem and applies it to new, related problems. Primarily, in the transfer learning approach, a network is trained for a particular job on a related dataset, and after that, the transferred network is trained for the objective job on the objective dataset [34]. In this work, the objective of the proposed technique is to evaluate the well-known CNN models in both the transfer learning context and feature-level fusion for retinal exudate classification and to validate the achieved results on the e-Ophtha and DIARETDB1 retinal datasets. The fusion approach combines features extracted from the fully connected layers of three different DCNNs. The features of all three DCNNs are merged together into a single feature vector. Suppose three different CNN architectures with respective FC layers are represented as

Figure 5: (a) Nonexudate patches; (b) exudate patches.

Figure 6: The region of interest is encircled with a blue line.


X = {x | x ∈ {1, 2, 3}},  (2)

Y = {y | y ∈ {1, 2, 3}},  (3)

where equation (2) represents the three CNN models and equation (3) the number of FC layers. Therefore, the extracted features are combined in the feature vector space FV⊕, having dimension "d", which can be described as

FV⊕ = X + Y.  (4)
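Concretely, the fusion of equation (4) amounts to concatenating the per-model FC-layer feature vectors into one vector whose dimension d is the sum of the individual dimensions. A minimal numpy sketch follows; the feature dimensions are illustrative stand-ins, not values taken from the paper.

```python
import numpy as np

# Hypothetical FC-layer feature vectors from the three backbones;
# the dimensions are assumed for illustration only.
f_inception = np.random.rand(2048)
f_resnet = np.random.rand(2048)
f_vgg = np.random.rand(4096)

# Fuse the per-model features into a single vector FV of
# dimension d = 2048 + 2048 + 4096.
fv = np.concatenate([f_inception, f_resnet, f_vgg])
assert fv.shape == (2048 + 2048 + 4096,)
```

The fused vector `fv` is what would then be passed to the softmax classifier.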

Transfer learning-based techniques are implemented with the pretrained Inception-v3, ResNet-50, and VGGNet-19

[Figure 7 diagram: the previous layer feeds parallel branches of 1 × 1 convolutions, 1 × 1 followed by 3 × 3 convolutions, 1 × 1 followed by 5 × 5 convolutions, and 3 × 3 maximum pooling followed by 1 × 1 convolutions, joined by filter concatenation.]

Figure 7: The basic Inception-v3 architecture [28].

[Figure 8 diagram: a 7 × 7, 64 convolution and max pooling, followed by four stages of bottleneck blocks (1 × 1, 3 × 3, 1 × 1 convolutions at 64/256, 128/512, 256/1024, and 512/2048 channels; feature map sizes 112, 56, 28, 14, 7), average pooling, and a 1000-way FC layer; for 50 layers, cfg = [3, 4, 6, 3] blocks per stage.]

Figure 8: The basic Residual Network-50 architecture [29].

[Figure 9 diagram: convolutional + maximum pooling layers in groups of 3 × 3 convolutions with 64, 128, 256, and 512 kernels (K = kernels), feature map sizes 224, 112, 56, 28, 14, 7 across layers 1 to 8, followed by fully connected layers FC 4096N, FC 4096N, and FC 1000N (N = neurons).]

Figure 9: The basic VGGNet-19 architecture [30].


architectures from ImageNet. The transfer learning setup is handled by treating the retained neural network modules as a fixed feature extractor for the different datasets. Generally, transfer learning keeps the primary pretrained model weights and extracts image-based features through the concluding network layer. Usually, a huge amount of data is required to train a convolutional neural network from scratch; however, it is sometimes hard to organize a large database for a related problem. Contrary to the ideal circumstance, in most real-world applications it is difficult or rare to obtain similar training and testing data. In this scenario, the transfer learning approach is applied and has proved to be a fruitful technique. There are two main considerations in the transfer learning approach: first, the selection of the pretrained architecture; second, the problem similarity and its size. In the selection phase, the choice of pretrained architecture is based on how relevant the source problem is to the objective problem. Regarding the similarity and size of the dataset, if the target database is small (for example, smaller than one thousand images) yet relevant to the source database (for example, vehicle datasets, hand-written character datasets, and medical datasets), then there are greater chances of data overfitting. In another case, if the target database is sufficiently large and relevant to the source training database, then there is little chance of overfitting and only fine-tuning of the pretrained architecture is needed. The deep convolutional neural network (DCNN) models, including Inception-v3, ResNet-50, and VGG-19, are applied in the proposed framework to utilize their features through fine-tuning and transfer learning. In the beginning, the training of the selected convolutional neural network models is performed using sample images taken from the standard publicly available "ImageNet" database; moreover, the idea of transfer learning for fused feature extraction has been implemented. In this case, the proposed technique helps the architecture learn the common features from the new dataset without any requirement of further training. The independently extracted features of all the selected convolutional neural network models are joined into the FC layer for further actions, including the classification of nonexudate and exudate patch classes performed by softmax.

3. Results and Discussion

The experiments are performed on "Google Colab" using graphics processing units (GPUs). For the performance evaluation, two publicly available standard datasets are selected. The training phase is divided into 2 sessions, and each session took 6 hours to complete the experimental task. The designed framework of the proposed technique is trained on 3 types of convolutional neural network architectures, Inception-v3, ResNet-50, and VGG-19, individually, and after that, transfer learning is performed to transfer the knowledge into the fused extracted features. The experimental results attained from the individual convolutional neural networks are compared and analyzed against the set of fused features, along with various existing approaches. A 10-fold cross-validation approach is applied for performance evaluation. Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k = 10 becoming 10-fold cross-validation [35]. The input data are divided into different ratios of training and testing sets for the experiments of the proposed methodology. The data splitting is performed in three different ways: 70% for training and 30% for testing; 80% for training and 20% for testing; and 90% for training and 10% for testing the CNN architectures. Table 1 shows the comparative analysis of the three individual CNN architectures and the proposed technique on the basis of data splitting using the e-Ophtha dataset.
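The two splitting procedures described above, k-fold cross-validation and ratio-based hold-out splits, can be sketched as follows (a minimal sketch, not the authors' code):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    # Split n sample indices into k disjoint folds for k-fold
    # cross-validation; each fold serves once as the test set.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def holdout_split(n, train_ratio, seed=0):
    # Simple train/test split, e.g. train_ratio=0.7 for the 70/30 setting.
    idx = np.random.default_rng(seed).permutation(n)
    cut = int(n * train_ratio)
    return idx[:cut], idx[cut:]
```

Each fold (or hold-out partition) is disjoint, and together they cover every sample exactly once.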

Similarly, Table 2 illustrates the results of the individual architectures and the proposed technique in terms of classification accuracy using the DIARETDB1 dataset.

In the context of classification performance, a true positive (TP) is an outcome where the model correctly predicts the positive class. Similarly, a true negative (TN) is an outcome where the model correctly predicts the negative class. However, false negatives (FN) and false positives (FP) represent the samples that are misclassified by the model. The following equations can be applied for the performance assessment.

Accuracy: it is a measure used to evaluate the model's effectiveness at identifying correct class labels and can be calculated by the following equation:

accuracy = (TN + TP) / (TN + TP + FN + FP) × 100.  (5)

F-measure: it averages out the precision and recall of a classifier, having a range between 0 and 1, where "1" is the best score and "0" the worst. It is computed as follows:

precision = TP / (FP + TP),

recall = TP / (FN + TP),

F1 score = (2 × recall × precision) / (recall + precision).  (6)
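Equations (5) and (6) translate directly into a small helper; the confusion-matrix counts in the usage note below are made up for illustration.

```python
def metrics(tp, tn, fp, fn):
    # Equations (5) and (6): accuracy (%), precision, recall, F1 score.
    accuracy = (tn + tp) / (tn + tp + fn + fp) * 100
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f1
```

For example, `metrics(40, 50, 5, 5)` yields an accuracy of 90.0%, with precision and recall both equal to 40/45.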

Table 1 also reports the output for exudate patches with the respective F1 score, recall, precision, and accuracy. The highest classification accuracy of the individual CNN architectures and the proposed technique is achieved with the help of the data splitting approach. Overall, the proposed approach achieves higher classification accuracy for retinal exudate detection than the individual CNN architectures. Using the e-Ophtha dataset, Table 1 shows that the highest classification accuracy of the individual CNN architectures, Inception-v3, ResNet-50, and VGG-19, is 93.67%, 97.80%, and 95.80%, respectively, but the proposed approach attained a classification accuracy of 98.43%. Using the DIARETDB1 dataset, Table 2 illustrates


that the highest classification accuracy of the individual CNN architectures, Inception-v3, ResNet-50, and VGG-19, is 93.57%, 97.90%, and 95.50%, respectively, but the proposed approach attained a classification accuracy of 98.91%.

For a better understanding of the classification accuracy results, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 demonstrates the comparative results obtained by the proposed framework and existing familiar approaches for the detection of retinal exudates. Table 3 lists the classification accuracies of [18] as 87%, [36] as 92%, and [37] as 97.60% and 98.20%. The proposed framework achieved higher accuracy than the abovementioned techniques on both the e-Ophtha and DIARETDB1 datasets.

The classification performance of the proposed framework is only slightly higher than that of [37], but the features extracted by the proposed framework can support the final results and be especially meaningful

Table 1: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the e-Ophtha dataset.

CNN model       Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3    70            30           0.95      0.98    0.92       92.50
Inception-v3    80            20           0.93      0.94    0.92       92.90
Inception-v3    90            10           0.94      0.92    0.96       93.67
ResNet-50       70            30           0.94      0.98    0.91       90.67
ResNet-50       80            20           0.94      0.98    0.90       95.70
ResNet-50       90            10           0.98      0.97    0.99       97.80
VGG-19          70            30           0.94      0.99    0.90       92.33
VGG-19          80            20           0.94      0.93    0.95       95.80
VGG-19          90            10           0.94      0.90    0.89       93.50
Proposed model  70            30           0.96      0.96    0.97       97.98
Proposed model  80            20           0.96      0.97    0.95       98.43
Proposed model  90            10           0.95      0.96    0.95       97.90

Table 2: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the DIARETDB1 dataset.

CNN model       Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3    70            30           0.95      0.98    0.93       93.10
Inception-v3    80            20           0.93      0.94    0.92       93.30
Inception-v3    90            10           0.95      0.94    0.97       93.57
ResNet-50       70            30           0.95      0.99    0.92       90.57
ResNet-50       80            20           0.94      0.98    0.90       96.10
ResNet-50       90            10           0.98      0.98    0.98       97.90
VGG-19          70            30           0.94      0.98    0.90       93.12
VGG-19          80            20           0.93      0.92    0.95       95.50
VGG-19          90            10           0.91      0.93    0.90       93.76
Proposed model  70            30           0.96      0.97    0.96       98.72
Proposed model  80            20           0.95      0.96    0.95       98.91
Proposed model  90            10           0.96      0.96    0.96       97.92

[Figure 10 is a bar chart of classification accuracy (%) for Inception-v3, ResNet-50, VGG-19, and the proposed model.]

Figure 10: The comparative classification accuracy results using the e-Ophtha dataset.


in clinical practices. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, pretrained models, namely, Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19), are used to extract the features from fundus images based on transfer learning to improve the classification accuracy. Finally, the classification accuracy of the proposed model is compared with that of the various DCNN models separately as well as with existing techniques. The proposed transfer learning-based framework has been evaluated, and outstanding results in terms of accuracy are obtained without training from scratch. Hence, the accuracy of the proposed approach surpasses the other existing techniques for the detection of retinal exudates. In future work, the proposed framework can be modified to discriminate hard and soft exudates. Moreover, the proposed framework can also be extended to diagnose hemorrhages and microaneurysms in diabetic retinopathy.

Data Availability

In the experiment, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.
[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.
[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.
[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of the Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, Springer, San Diego, CA, USA, January 2019.
[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14–24, 2015.
[6] J. H. Tan, H. Fujita, S. Sivaprasad, et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66–76, 2017.
[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9–19, 2009.
[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423–446, 2019.
[9] Q. Liu, B. Zou, J. Chen, et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.
[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.
[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193–203, 2019.

[Figure 11 is a bar chart of classification accuracy (%) for Inception-v3, ResNet-50, VGG-19, and the proposed model.]

Figure 11: The comparative classification accuracy results using the DIARETDB1 dataset.

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

Year  Methodology            Database    Accuracy (%)
2017  Fraz et al. [18]       DIARETDB1   87.00
2018  Mo et al. [36]         e-Ophtha    92.00
2019  Khojasteh et al. [37]  e-Ophtha    97.60
2019  Khojasteh et al. [37]  DIARETDB1   98.20
      Proposed framework     e-Ophtha    98.43
      Proposed framework     DIARETDB1   98.91


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.
[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.
[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499–515, 2019.
[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27–53, 2018.
[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.
[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745–6748, September 2010.
[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.
[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.
[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951–5954, August 2012.
[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265–269, December 2011.
[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130–133, November 2014.
[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski, et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.
[24] E. Decencière, G. Cazuguel, X. Zhang, et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196–203, 2013.
[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The diaretdb1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1–10, September 2007.
[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas City, MO, USA, pp. 325–331, November 2017.
[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, CO, USA, pp. 246–252, June 1999.
[28] C. Szegedy, W. Liu, and Y. Jia, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1–9, June 2015.
[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770–778, June 2016.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[31] J. Yosinski, "How transferable are features in deep neural networks?," Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.
[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403–5413, 2015.
[33] O. Russakovsky, J. Deng, H. Su, et al., "Imagenet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161–189, 2013.
[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.
[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161–171, 2018.
[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho, et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62–69, 2019.



developed. The authors applied a dynamic decision thresholding approach to find the faint and bright edges, which helps to segment the hard exudates efficiently and to pick the threshold values in the retinal fundus images dynamically. Furthermore, Das and Puhan [16] presented a Tsallis entropy thresholding technique to enhance the visibility of exudates in diabetic retinopathy. The obtained exudate features are further filtered to remove false positives by sparse-based dictionary learning and categorization. The Tsallis technique was analyzed on the public DIARETDB1 and e-Ophtha datasets and obtained better results, with 95% accuracy.

A huge amount of work has contributed to detecting abnormalities in fundus images using mathematical morphological approaches. Morphological techniques utilize several mathematical operators having different structuring elements. Jaafar et al. [17] reported an automated technique for the identification of exudates in fundus photographs. In this work, a new method for pure splitting of colored fundus images was applied: in the first stage, a segmentation process was performed on the basis of the variation of pixels in fundus images, and then a morphological technique was applied to filter the adaptive thresholding outcomes on the basis of the segmentation results. Additionally, a random forest technique was applied for the detection of hard exudates in the given fundus images. For diabetic retinopathy, an ensemble classifier is applied for multiclass segmentation and localization of hard exudates [18]. The features of exudates were extracted at coarse-grain and fine-grain levels with the use of a Gabor filter and morphological reconstruction, respectively. The candidate regions were trained on an ensemble classifier to classify the exudate and nonexudate boundaries. Four publicly available datasets, Messidor, HEI-MED, e-Ophtha Ex, and DIARETDB1, were used for the experiments. Harangi and Hajdu [19] also reported a novel approach to detect exudates in three steps: candidate extraction by a greyscale morphological technique, precise boundary segmentation by a contour-based technique, and exudate classification by a regionwise classifier. Harangi et al. [20] presented an exudate detection approach using greyscale morphology and active contour techniques to recognize potential exudate states and to extract the exact boundaries of the candidates, respectively.

Region growing approaches observe the neighbourhoods of start positions and decide whether they can be a member of a particular region. Lim et al. [21] introduced a modified technique based on previous research work. The classification of diabetic and normal macular edema was performed with the help of the extracted exudates. The detection of exudates was performed on the basis of assigned macular regions to distinguish diabetic retinopathy in the retinal fundus images. On the basis of contour identification, Harangi and Hajdu [22] introduced an exudate detection technique with additional regionwise categorization. In this technique, morphological approaches were applied, including greyscale morphology, to extract the exudate features, with the proper shape extracted by a Markovian segmentation system. A novel approach for the detection of diabetic macular edema was developed by Giancardo et al. [23] on the basis of features including exudate segmentation, wavelet decomposition, and color. The experiments were performed on publicly available datasets and obtained 88% to 94% accuracy, depending on the dataset.

In this research work, the goal of the proposed technique is to detect exudates in diabetic retinopathy using transfer learning. The main contribution of the proposed work is to apply the transfer learning concept for feature extraction using well-known pretrained deep convolutional neural networks, including Inception-v3, ResNet-50, and VGG-19. Additionally, fusion is performed on the extracted features, which are further classified by softmax for the final decision.

The rest of the article is organized as follows: the proposed method is explained in Section 2, the experimental results and discussion are covered in Section 3, and finally, the findings are concluded in Section 4.

2. The Proposed Technique

In this section, the proposed framework based on pretrained convolutional neural network architectures is described for retinal exudate detection and classification in fundus images. In the proposed framework, three well-reputed pretrained network architectures are combined to perform feature fusion, as different architectures can capture different features. If only a single architecture had been adopted instead of combining multiple architectures, the probability of missing some useful features would have been high, which might ultimately have affected the performance of the proposed framework.

Initially, data preprocessing is performed on both datasets to standardize the exudate patches, and then the Gaussian mixture technique is applied to localize the candidate exudates before feature extraction. The novel framework supports low-level feature extraction performed individually by 3 reputed pretrained convolutional neural network architectures: Inception-v3, VGG-19, and ResNet-50. Moreover, the collected features are fed as input into the fully connected (FC) layers for further actions, including classification performed by softmax to separate the retinal exudate and nonexudate patches, as shown in Figure 2.

2.1. Dataset. Data gathering is an essential part of the experiments for the analysis of the proposed technique. In this approach, two publicly available retinal datasets are used for the experiments: (i) e-Ophtha and (ii) DIARETDB1. The e-Ophtha dataset contains 47 retinal fundus images examined by four expert ophthalmologists for manual annotation of exudates [24]. The size of the retinal images varies from a resolution of 1400 × 960 to 2544 × 1696 pixels. The DIARETDB1 dataset contains 89 retinal fundus photographs with a resolution of 1500 × 1152 [25]. All the retinal images were captured by a specified digital fundus camera having a 50-degree field of view. The examination of exudates in diabetic retinopathy was


performed manually and evaluated by five authorized ophthalmologists. Soft and hard exudates were labelled with "exudates" as a single class. All images were resized to the standard size of DIARETDB1 images, having a resolution of 1500 × 1152 pixels, and the estimated image scale was decided based on the standard size of the retinal optic disc. Samples of affected and healthy retinal images from e-Ophtha and DIARETDB1 are shown in Figure 3.

2.2. Data Preprocessing. In this phase, the input data are standardized because of variations in the size of the retinal exudates. Figure 4 shows the distribution of patch sizes among all the extracted retinal exudate patches; the length and width of the extracted patches correspond to the X and Y axes, respectively. Ignoring outliers, the retinal exudate patches vary from 25 × 25 to 286 × 487 pixels in size. Analysis of the retinal images requires a standard patch size for consistent data labelling; therefore, the smallest patch size at which experts can still identify the pathological sign was selected [26].

In the proposed model, colored patches of size 25 × 25 are used with two groups: nonexudate and exudate. Manual exudate patch extraction yielded 36,500 and 75,600 exudate patches from the e-Ophtha and DIARETDB1 datasets, respectively. Similarly, to balance the datasets, 35,000 and 60,000 nonexudate patches were extracted from regions of the e-Ophtha and DIARETDB1 databases, respectively. The retinal nonexudate patch group covers various retinal structures, including optic nerve heads, background tissue, and retinal blood vessels. All patches were extracted without any overlap; the nonexudate and exudate patch classes can be seen in Figures 5(a) and 5(b), respectively.
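The non-overlapping patch extraction described above can be sketched as follows. The paper publishes no code, so `extract_patches` is a hypothetical helper; it assumes a binary ground-truth mask is available per image, with a tile labelled exudate if the mask marks any pixel inside it:

```python
import numpy as np

def extract_patches(image, mask, patch=25):
    """Cut an image into non-overlapping patch x patch tiles and label each
    tile 1 (exudate) if the ground-truth mask marks any pixel inside it,
    else 0 (nonexudate)."""
    h, w = image.shape[:2]
    tiles, labels = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(image[y:y + patch, x:x + patch])
            labels.append(int(mask[y:y + patch, x:x + patch].any()))
    return np.stack(tiles), np.array(labels)
```

Stepping in strides of `patch` guarantees the no-overlap property stated above; a border strip narrower than one patch is simply discarded.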

2.3. Region of Interest Localization. Exudates appear in diabetic retinopathy as bright lesions, highlighted as bright patches and spots, with full contrast in the yellow plane of the color fundus image. Exudate segmentation is applied before feature extraction using region of interest (ROI) localization. In this step, exudate segmentation is performed to detect the ROI in the retinal fundus images. Numerous approaches have been used for this purpose, including neural networks, fuzzy models, edge-based segmentation, and ROI-based segmentation. In the proposed technique, a Gaussian mixture approach is used for exudate localization. Stauffer and Grimson [27] used Gaussian sorting to attain background subtraction. In this paper, a hybrid technique is applied with the integration of a Gaussian mixture model (GMM) on the

Figure 2: The proposed pretrained CNN-based framework. The input fundus image datasets undergo data preprocessing and region of interest localization (GMM + ALR); deep features (F1, F2, F3, ..., Fn) are extracted by the pretrained Inception-v3, ResNet-50, and VGG-19 architectures via transfer learning, fused into a single feature vector, and classified by softmax into exudate and nonexudate patches.

basis of an adaptive learning rate (ALR) to attain a significant outcome in the form of candidate exudate detection. The region of interest (ROI) acquired from the hybrid approach is shown in Figure 6. The ROI is fed into the pretrained convolutional neural network models for feature extraction to obtain a compact feature vector.

The Gaussian mixture model computes the region of interest (ROI) as

m(x) = \sum_{q=1}^{p} r_q \, w(x, \mu_q, \sigma_q),  (1)

where r_q denotes a weight factor and w(x, \mu_q, \sigma_q) represents the normalized Gaussian with mean \mu_q. The adaptive learning rate revises \mu_q frequently, applying the probability constraint w(x, \mu_q, \sigma_q) to recognize whether a pixel is part of the qth Gaussian distribution or not.
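A minimal NumPy sketch of equation (1) together with a Stauffer-Grimson-style adaptive update is given below. The 2.5σ matching rule and the responsibility-scaled learning rate are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def gmm_score(x, weights, means, sigmas):
    """Evaluate m(x) = sum_q r_q * w(x, mu_q, sigma_q) for a scalar pixel value."""
    w = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return float(np.sum(weights * w))

def update_gmm(x, weights, means, sigmas, base_lr=0.05):
    """Adaptive-learning-rate update: components matching x (within 2.5 sigma)
    are pulled toward it at a rate scaled by their likelihood of x."""
    matched = np.abs(x - means) < 2.5 * sigmas
    for q in np.where(matched)[0]:
        w_q = np.exp(-0.5 * ((x - means[q]) / sigmas[q]) ** 2) / (sigmas[q] * np.sqrt(2 * np.pi))
        rho = base_lr * w_q  # adaptive rate for this component
        means[q] = (1 - rho) * means[q] + rho * x
        sigmas[q] = np.sqrt((1 - rho) * sigmas[q] ** 2 + rho * (x - means[q]) ** 2)
    # weights move toward the matched indicator and are renormalized
    weights[:] = (1 - base_lr) * weights + base_lr * matched
    weights /= weights.sum()
    return weights, means, sigmas
```

A pixel poorly explained by every high-weight component would then be flagged as a candidate exudate (foreground), following the background-subtraction idea of [27].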

2.4. Pretrained Deep Convolutional Neural Network Models for Feature Extraction. At the start, individual deep convolutional neural network models are applied to extract the features, and the adopted models are later combined at the FC layer for the categorization of fundus images. In this scenario of feature combination, there can be multiple types of features, including compactness, roundness, and circularity, that a single shape descriptor would extract. In the proposed technique, three recent deep convolutional neural network architectures, Inception-v3 [28], Residual Network (ResNet)-50 [29], and Visual Geometry Group Network (VGGNet)-19 [30], are applied for feature extraction and for the subsequent classification of exudate and nonexudate diabetic retinopathy. These CNN models are already trained on numerous standard image descriptors, guided by the significant features extracted from small images, on the basis of transfer

Figure 3: Retinal image samples from the e-Ophtha ((a) affected and (b) healthy) and DIARETDB1 ((c) affected and (d) healthy) databases. In (a) and (c), exudates are encircled.

Figure 4: The class of sample exudate patches; patch width and height (in pixels) are plotted on the X and Y axes, respectively.

learning [31]. The adopted deep convolutional neural network architectures are briefly described in the following subsections.

2.4.1. Inception-v3 Architecture. Inception-v3 is a convolutional network built from convolutional layers, pooling layers, rectified linear activation layers, and fully connected layers, designed for image recognition and classification. The proposed model builds on the Inception-v3 architecture, which pools several convolutional filters of various sizes into a single innovative filter module. This module not only decreases the computational complexity but also reduces the number of parameters. Inception-v3 also attains better accuracy through the combination of heterogeneous-sized filters and low-dimensional embeddings. The basic architecture of Inception-v3 is shown in Figure 7.
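The idea of applying heterogeneous-sized filters in parallel and concatenating their outputs can be illustrated with a toy one-dimensional sketch (not the real 2-D module; `conv1d` and `inception_block` are illustrative names):

```python
import numpy as np

def conv1d(x, k):
    """'Same'-padded 1-D convolution, standing in for a conv layer."""
    pad = len(k) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])

def inception_block(x, kernels):
    """Apply filters of different sizes in parallel and concatenate the
    resulting feature maps, as an Inception module does."""
    return np.concatenate([conv1d(x, k) for k in kernels])
```

Because every branch uses 'same' padding, each branch output keeps the input length, and the concatenation simply widens the feature map, mirroring the filter-concatenation node in Figure 7.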

2.4.2. ResNet-50 Architecture. Residual Network-50 is a deep convolutional neural network that achieves significant results in classification on the ImageNet database [32]. ResNet-50 is composed of convolutional filters of numerous sizes to reduce the training time and manage the degradation issue that arises in very deep structures. In this work, the applied ResNet-50 is already trained on the standard ImageNet database [33], except for the fully connected softmax layer associated with the model. The basic architecture of ResNet-50 is shown in Figure 8.
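The degradation fix is the shortcut connection: the stacked layers learn only a residual F(x) that is added back to the input, so an all-zero residual reduces the block to the identity. A minimal dense sketch (the weight matrices `W1`, `W2` are placeholders, not trained values):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = relu(W2 @ relu(W1 @ x) + x): the shortcut adds the input back,
    so the stacked layers only need to learn a residual F(x)."""
    return relu(W2 @ relu(W1 @ x) + x)
```

This is why very deep residual stacks train well: a block that learns nothing still passes its input through unchanged instead of degrading it.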

2.4.3. VGG-19 Architecture. The Visual Geometry Group Network is a deep neural network model based on multilayered operations. It is comparable with the AlexNet model except for its additional convolutional layers. The VGGNet architecture replaces larger kernel-sized filters with consecutive 3 × 3 filters and 2 × 2 pooling layers. The general VGG-19 architecture contains 3 × 3 convolutional layers, rectification layers, pooling layers, and three fully connected layers (4096, 4096, and 1000 neurons) [30]. VGGNet-19 performs better than the AlexNet architecture due to its simplicity. The basic architecture of VGG-19 is shown in Figure 9.

2.5. Transfer Learning and Features Fusion. In the field of machine learning, transfer learning is recognized as a most useful method: it learns contextual knowledge while solving one problem and applies it to new, related problems. Primarily, in the transfer learning approach, a network is trained for a particular job on a related dataset and is afterwards transferred to the objective job and trained on the objective dataset [34]. In this work, the objective is to evaluate the well-known CNN models in both the transfer learning context and feature-level fusion for retinal exudate classification and to validate the achieved results on the e-Ophtha and DIARETDB1 retinal datasets. The fusion approach combines features extracted from the fully connected layers of three different DCNNs; the features of all three DCNNs are merged into a single feature vector. Suppose the three different CNN architectures with their respective FC layers are represented as

Figure 5: (a) Nonexudate patches; (b) exudate patches.

Figure 6: The region of interest is encircled with a blue line.

X = \{x \mid x \in \{1, 2, 3\}\},  (2)

Y = \{y \mid y \in \{1, 2, 3\}\},  (3)

where equation (2) represents the three CNN models and equation (3) the number of FC layers. Therefore, the extracted features are combined in the feature vector space FV_{\oplus}, having dimension d, which can be described as

FV_{\oplus} = X + Y.  (4)
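In practice, the feature-level fusion of equation (4) amounts to concatenating the per-model FC-layer feature vectors; a sketch (array sizes below are illustrative, not the paper's exact dimensions):

```python
import numpy as np

def fuse_features(feature_vectors):
    """Concatenate FC-layer features from several CNNs into the single
    fused vector FV; its dimension d is the sum of the parts."""
    return np.concatenate([np.ravel(f) for f in feature_vectors])
```

For example, fusing a 2048-dimensional ResNet-50 vector, a 2048-dimensional Inception-v3 vector, and a 4096-dimensional VGG-19 vector would yield an 8192-dimensional fused vector.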

Transfer learning-based techniques are implemented with the pretrained Inception-v3, ResNet-50, and VGGNet-19

Figure 7: The basic Inception-v3 architecture [28]: 1 × 1, 3 × 3, and 5 × 5 convolutions and 3 × 3 maximum pooling are applied to the previous layer in parallel, and their outputs are concatenated.

Figure 8: The basic Residual Network-50 architecture [29], built from bottleneck blocks of stacked 1 × 1, 3 × 3, and 1 × 1 convolutions, ending in average pooling and a 1000-way fully connected layer.

Figure 9: The basic VGGNet-19 architecture [30]: stacked 3 × 3 convolutional layers with maximum pooling (64, 128, 256, and 512 kernels), followed by three fully connected layers (4096, 4096, and 1000 neurons).

architectures from ImageNet. The transfer learning setup treats the retained neural network modules as a fixed feature extractor for the new datasets. Generally, transfer learning keeps the primary pretrained model weights and extracts image-based features through the concluding network layer. A huge amount of data is usually required to train a convolutional neural network from scratch; however, it is sometimes hard to organize a large database for related problems. Contrary to the ideal circumstance, in most real-world applications it is difficult or rare to achieve similar training and testing data. In this scenario, the transfer learning approach is applied and has proved to be a fruitful technique. There are two main considerations in transfer learning: first, the selection of the pretrained architecture; second, the similarity of the problem and the size of the dataset. In the selection phase, the choice of pretrained architecture is based on a relevant problem associated with the objective problem. Regarding similarity and size, if the target database is small (for example, fewer than one thousand images) and relevant to the source database (for example, vehicle, handwritten-character, or medical datasets), then the chance of overfitting is higher. In the other case, if the target database is sufficiently large and relevant to the source training database, then there is little chance of overfitting, and only fine-tuning of the pretrained architecture is needed. The deep convolutional neural network (DCNN) models Inception-v3, ResNet-50, and VGG-19 are applied in the proposed framework to utilize their features for fine-tuning and transfer learning. First, the selected convolutional neural network models are trained using sample images taken from the standard, publicly available "ImageNet" database; then, the idea of transfer learning for fused feature extraction is implemented. The proposed technique thus lets the architecture learn common features from the new dataset without requiring any other training. The independently extracted features of all the selected convolutional neural network models are joined at the FC layer, and classification of the nonexudate and exudate patch classes is performed by softmax.
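The final step can be sketched as a linear FC layer over the fused feature vector followed by softmax; `W` and `b` stand for learned parameters (illustrative placeholders, not trained values):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the class logits."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def classify_patch(fused, W, b):
    """FC layer on the fused feature vector, then softmax; returns class
    probabilities and the predicted index (e.g., 0 = nonexudate, 1 = exudate)."""
    probs = softmax(W @ fused + b)
    return probs, int(np.argmax(probs))
```

Subtracting the maximum logit before exponentiating leaves the probabilities unchanged but avoids overflow for large logits.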

3. Results and Discussion

The experiments are performed on "Google Colab" using graphics processing units (GPUs). For the performance evaluation, the two publicly available standard datasets are used. The training phase is divided into 2 sessions, and each session took 6 hours to complete the experimental task. The designed framework is trained on the three convolutional neural network architectures, Inception-v3, ResNet-50, and VGG-19, individually, after which transfer learning is performed to transfer the knowledge into the fused extracted features. The experimental results attained by the individual convolutional neural networks are compared and analyzed against the set of fused features alongside various existing approaches. A 10-fold cross-validation approach is applied for performance evaluation. Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups into which a given data sample is to be split; as such, the procedure is often called k-fold cross-validation, and when a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k = 10 becoming 10-fold cross-validation [35]. The input data are divided into different ratios of training and testing sets for the experiments; the splits are performed in three ways: 70% for training and 30% for testing, 80% for training and 20% for testing, and 90% for training and 10% for testing the CNN architectures. Table 1 shows the comparative analysis of the three individual CNN architectures and the proposed technique on the basis of data splitting using the e-Ophtha dataset.
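The k-fold protocol described above can be sketched in a few lines; `kfold_indices` is an illustrative helper (the paper does not specify its splitting code), using a simple round-robin assignment of indices to folds:

```python
def kfold_indices(n_samples, k=10):
    """Partition sample indices into k folds; each fold serves once as the
    test set while the remaining k-1 folds form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits
```

Each sample appears in exactly one test fold, so every data point is used for both training and testing across the k rounds.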

Similarly, Table 2 reports the classification accuracy of each individual architecture and of the proposed technique on the DIARETDB1 dataset.

In the context of classification performance, a true positive (TP) is an outcome where the model correctly predicts the positive class; similarly, a true negative (TN) is an outcome where the model correctly predicts the negative class. A false negative (FN) or false positive (FP) represents a sample misclassified by the model. The following equations are applied for the performance assessment.

Accuracy measures the model's effectiveness at identifying the correct class labels and is calculated as

\text{accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \times 100.  (5)

The F-measure averages the precision and recall of a classifier and ranges between 0 and 1, with worst and best scores represented by 0 and 1, respectively; it is computed as

\text{precision} = \frac{TP}{FP + TP}, \quad \text{recall} = \frac{TP}{FN + TP}, \quad \text{F1 score} = \frac{2 \times \text{recall} \times \text{precision}}{\text{recall} + \text{precision}}.  (6)
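Equations (5) and (6) translate directly into code; a small helper computing all four metrics from the confusion counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy (%), precision, recall, and F1 score from confusion counts,
    following equations (5) and (6)."""
    accuracy = (tn + tp) / (tn + tp + fn + fp) * 100
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f1
```

For instance, 9 true positives, 9 true negatives, 1 false positive, and 1 false negative give 90% accuracy and precision, recall, and F1 all equal to 0.9.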

Table 1 also reports the F1 score, recall, precision, and accuracy for the exudate patches. The highest classification accuracies of the individual CNN architectures and of the proposed technique are obtained through the data-splitting approach. Overall, the proposed approach achieves higher classification accuracy for retinal exudate detection than the individual CNN architectures. On the e-Ophtha dataset, Table 1 shows that the highest classification accuracies of the individual CNN architectures Inception-v3, ResNet-50, and VGG-19 are 93.67%, 97.80%, and 95.80%, respectively, whereas the proposed approach attained a classification accuracy of 98.43%. Using the DIARETDB1 dataset, Table 2 illustrates


that the highest classification accuracies of the individual CNN architectures Inception-v3, ResNet-50, and VGG-19 are 93.57%, 97.90%, and 95.50%, respectively, whereas the proposed approach attained a classification accuracy of 98.91%.

To make the classification accuracy results easier to compare, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 compares the results obtained by the proposed framework with well-known existing approaches for the detection of retinal exudates: the classification accuracies of [18], [36], and [37] are 87%, 92%, and 97.60%/98.20%, respectively. The proposed framework achieved higher accuracy than the abovementioned techniques on both the e-Ophtha and DIARETDB1 datasets.

The classification performance of the proposed framework is only slightly higher than that of [37], but the features extracted by the proposed framework support the final results and can be especially meaningful

Table 1: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the e-Ophtha dataset.

| CNN model      | Training (%) | Testing (%) | F1 score | Recall | Precision | Classification accuracy (%) |
|----------------|--------------|-------------|----------|--------|-----------|-----------------------------|
| Inception-v3   | 70           | 30          | 0.95     | 0.98   | 0.92      | 92.50                       |
| Inception-v3   | 80           | 20          | 0.93     | 0.94   | 0.92      | 92.90                       |
| Inception-v3   | 90           | 10          | 0.94     | 0.92   | 0.96      | 93.67                       |
| ResNet-50      | 70           | 30          | 0.94     | 0.98   | 0.91      | 90.67                       |
| ResNet-50      | 80           | 20          | 0.94     | 0.98   | 0.90      | 95.70                       |
| ResNet-50      | 90           | 10          | 0.98     | 0.97   | 0.99      | 97.80                       |
| VGG-19         | 70           | 30          | 0.94     | 0.99   | 0.90      | 92.33                       |
| VGG-19         | 80           | 20          | 0.94     | 0.93   | 0.95      | 95.80                       |
| VGG-19         | 90           | 10          | 0.94     | 0.90   | 0.89      | 93.50                       |
| Proposed model | 70           | 30          | 0.96     | 0.96   | 0.97      | 97.98                       |
| Proposed model | 80           | 20          | 0.96     | 0.97   | 0.95      | 98.43                       |
| Proposed model | 90           | 10          | 0.95     | 0.96   | 0.95      | 97.90                       |

Table 2: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the DIARETDB1 dataset.

| CNN model      | Training (%) | Testing (%) | F1 score | Recall | Precision | Classification accuracy (%) |
|----------------|--------------|-------------|----------|--------|-----------|-----------------------------|
| Inception-v3   | 70           | 30          | 0.95     | 0.98   | 0.93      | 93.10                       |
| Inception-v3   | 80           | 20          | 0.93     | 0.94   | 0.92      | 93.30                       |
| Inception-v3   | 90           | 10          | 0.95     | 0.94   | 0.97      | 93.57                       |
| ResNet-50      | 70           | 30          | 0.95     | 0.99   | 0.92      | 90.57                       |
| ResNet-50      | 80           | 20          | 0.94     | 0.98   | 0.90      | 96.10                       |
| ResNet-50      | 90           | 10          | 0.98     | 0.98   | 0.98      | 97.90                       |
| VGG-19         | 70           | 30          | 0.94     | 0.98   | 0.90      | 93.12                       |
| VGG-19         | 80           | 20          | 0.93     | 0.92   | 0.95      | 95.50                       |
| VGG-19         | 90           | 10          | 0.91     | 0.93   | 0.90      | 93.76                       |
| Proposed model | 70           | 30          | 0.96     | 0.97   | 0.96      | 98.72                       |
| Proposed model | 80           | 20          | 0.95     | 0.96   | 0.95      | 98.91                       |
| Proposed model | 90           | 10          | 0.96     | 0.96   | 0.96      | 97.92                       |

Figure 10: The comparative classification accuracy results (%) of Inception-v3, ResNet-50, VGG-19, and the proposed model using the e-Ophtha dataset.

in clinical practice. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19) are used to extract features from fundus images based on transfer learning to improve classification accuracy. Finally, the classification accuracy of the proposed model is compared with that of the various DCNN models separately, as well as with the existing techniques. The proposed transfer learning-based framework has been evaluated, and outstanding accuracy is obtained without training from scratch. Hence, the proposed approach outperforms the other existing techniques for the detection of retinal exudates. In future work, the framework can be modified to discriminate hard and soft exudates; it can also be extended to diagnose hemorrhages and microaneurysms in diabetic retinopathy.

Data Availability

The data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.
[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.
[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.
[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, Springer, San Diego, CA, USA, January 2019.
[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14-24, 2015.
[6] J. H. Tan, H. Fujita, S. Sivaprasad et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66-76, 2017.
[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9-19, 2009.
[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423-446, 2019.
[9] Q. Liu, B. Zou, J. Chen et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78-86, 2017.
[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.
[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193-203, 2019.

Figure 11: The comparative classification accuracy results (%) of Inception-v3, ResNet-50, VGG-19, and the proposed model using the DIARETDB1 dataset.

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

| Year | Methodology           | Database  | Accuracy (%) |
|------|-----------------------|-----------|--------------|
| 2017 | Fraz et al. [18]      | DIARETDB1 | 87.00        |
| 2018 | Mo et al. [36]        | e-Ophtha  | 92.00        |
| 2019 | Khojasteh et al. [37] | e-Ophtha  | 97.60        |
| 2019 | Khojasteh et al. [37] | DIARETDB1 | 98.20        |
|      | Proposed framework    | e-Ophtha  | 98.43        |
|      | Proposed framework    | DIARETDB1 | 98.91        |


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.
[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.
[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499-515, 2019.
[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27-53, 2018.
[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.
[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745-6748, September 2010.
[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50-62, 2017.
[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156-171, 2014.
[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951-5954, August 2012.
[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265-269, December 2011.
[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130-133, November 2014.
[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216-226, 2012.
[24] E. Decencière, G. Cazuguel, X. Zhang et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196-203, 2013.
[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The diaretdb1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1-10, September 2007.
[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas City, MO, USA, pp. 325-331, November 2017.
[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, CO, USA, pp. 246-252, June 1999.
[28] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1-9, June 2015.
[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770-778, June 2016.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[31] J. Yosinski, "How transferable are features in deep neural networks?" Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.
[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403-5413, 2015.
[33] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161-189, 2013.
[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.
[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161-171, 2018.
[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62-69, 2019.


Page 4: Exudate Detection for Diabetic Retinopathy Using …downloads.hindawi.com/journals/complexity/2020/5801870.pdfdiabetic retinopathy. e automatic detection of diabetic retinopathy and

performed manually and evaluated by five authorizedophthalmologists Soft and hard exudates were labelled withldquoexudatesrdquo as a single class e total images were resized tothe standard size of DIARETDB1 images having a resolutionof 1500times1152 pixels and the estimated image scale size wasdecided based on the standard size of the retinal optic disce samples including affected and healthy retinal images ofthe e-Ophtha and DIARETDB1 are shown in Figure 3

22 Data Preprocessing In this phase the input data areprepared for standardization because of variations in the sizeof the retinal exudates Figure 4 demonstrates the distinctionbetween patch sizes among all the extracted retinal exudatepatches e length and the width of the extracted patchesare corresponding to the X and Y axis respectively It alsodetermines that with the ignorance of outliers the collectionof retinal exudate patch differs from the size of 25times 25 to thesize of 286times 487 resolution In this case the analysis of theretinal images requires the standard size of the patch forbetter understanding of data labelling For this solution thesmallest patch size was selected for the identification of thepathological sign by the experts [26]

In the proposed model 25times 25 patch size of coloredpatch images is used with two types of groups includingnonexudate and exudate e manual exudate patch ex-traction is performed and obtained 36500 and 75600

exudates from e-Ophtha and DIARETDB1 datasets re-spectively Similarly for the balance dataset 35000 and60000 are extracted nonexudate patches and obtained by theregions of e-Ophtha and DIARETDB1 databases In theretinal nonexudate patch group there are various retinaldiseases including optic nerve heads background tissuesand retinal blood vessels In the proposed technique all thepatches were obtained and extracted without any kind ofoverlap and can be seen as nonexudate and exudate patchclasses in Figures 5(a) and 5(b) respectively

23 Region of Interest Localization Exudates can be de-scribed as bright lesions highlighted as bright patches andspots in diabetic retinopathy with full contrast in the yellowplane of the color fundus image Exudate segmentation isapplied before the application of feature extraction using theregion of interest (ROI) localization In this step exudatesegmentation is performed to detect the ROI into the retinalfundus images In this case numerous approaches have beenused including neural network fuzzy models edge-basedsegmentation and ROI-based segmentation In the pro-posed technique Gaussian mixture approach is used forexudate localization Stauffer and Grimson [27] usedGaussian sorting to attain the background subtractiontechnique In this paper a hybrid technique is applied withthe integration of Gaussian mixture model (GMM) on the

Figure 2: The proposed pretrained CNN-based framework. Fundus image datasets (input) pass through data preprocessing and region of interest localization (GMM + ALR); transfer learning is then applied with the pretrained Inception-v3, ResNet-50, and VGG-19 architectures for deep feature extraction (F1, F2, F3, ..., Fn); the features are fused into a single feature vector and classified by the softmax classifier into exudate and nonexudate patches (output).

4 Complexity

basis of adaptive learning rate (ALR) to attain a significant outcome in the form of candidate exudate detection. The region of interest (ROI) acquired from the hybrid approach is shown in Figure 6. The ROI is fed into the pretrained convolutional neural network models for feature extraction to obtain a compact feature vector.

The following equation calculates the region of interest (ROI) by the Gaussian mixture model:

m(x) = \sum_{q=1}^{p} r_q \, w(x, \mu_q, \sigma_q),  (1)

where r_q denotes a weight factor and w(x, μ_q, σ_q) represents the normalized Gaussian with mean μ_q. The adaptive learning rate is used to revise μ_q frequently through the probability constraint w(x, μ_q, σ_q), which recognizes whether or not a pixel is part of the qth Gaussian distribution.
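The mixture score of equation (1) and the adaptive mean update can be sketched in a few lines. This is an interpretive sketch, not the paper's implementation: the learning rate `alpha` and the responsibility-weighted update rule are assumptions consistent with the Stauffer–Grimson style of adaptive background modelling cited above.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normalized 1-D Gaussian density w(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_score(x, weights, mus, sigmas):
    """Equation (1): m(x) = sum over q of r_q * w(x; mu_q, sigma_q)."""
    return sum(r * gaussian(x, m, s) for r, m, s in zip(weights, mus, sigmas))

def update_means(x, weights, mus, sigmas, alpha=0.05):
    """Adaptive-learning-rate style update: move each mu_q toward the pixel
    value x, weighted by the probability that x belongs to that Gaussian."""
    resp = np.array([r * gaussian(x, m, s) for r, m, s in zip(weights, mus, sigmas)])
    resp = resp / resp.sum()  # posterior membership of x per component
    return [m + alpha * p * (x - m) for m, p in zip(mus, resp)]

# Two components: dark background vs. bright (candidate exudate) intensities
weights, mus, sigmas = [0.7, 0.3], [60.0, 200.0], [15.0, 15.0]
score = mixture_score(190.0, weights, mus, sigmas)
updated_mus = update_means(190.0, weights, mus, sigmas)
```

A bright pixel near 190 is almost entirely attributed to the high-intensity component, so only that component's mean drifts toward it; the background mean stays put.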

2.4. Pretrained Deep Convolutional Neural Network Models for Feature Extraction. At the start, individual deep convolutional neural network models are applied to extract the features, and later the adopted models are combined at the FC layer for the categorization of fundus images. In this feature-combination scenario, there can be multiple types of features, including compactness, roundness, and circularity, extracted by a single shape descriptor. In the proposed technique, three up-to-date deep convolutional neural network architectures, Inception-v3 [28], Residual Network (ResNet)-50 [29], and Visual Geometry Group Network (VGGNet)-19 [30], are applied for feature extraction and the subsequent classification of exudate and nonexudate diabetic retinopathy. These CNN models are already trained on numerous standard image descriptors, guided by the significant features extracted from small images on the basis of transfer

Figure 3: Retinal image samples from the e-Ophtha ((a) affected and (b) healthy) and DIARETDB1 ((c) affected and (d) healthy) databases. In (a) and (c), exudates are encircled.

Figure 4: The size distribution of the sample exudate patches (patch width on the X-axis, height on the Y-axis, in pixels).

learning [31]. In the following subsections, the adopted deep convolutional neural network architectures are briefly described.

2.4.1. Inception-v3 Architecture. Inception-v3 is a convolutional network built from convolutional layers, pooling layers, rectified linear operation layers, and fully connected layers, designed for image recognition and classification. The proposed model is also based on the Inception-v3 architecture, which pools several convolutional filters of various sizes into a new single filter. This combined filter not only decreases the computational complexity but also reduces the number of parameters. Inception-v3 also attains better accuracy through the combination of heterogeneously sized filters and low-dimensional embeddings. The basic architecture of Inception-v3 is shown in Figure 7.

2.4.2. ResNet-50 Architecture. Residual Network-50 is a deep convolutional neural network that achieves significant results in the classification of the ImageNet database [32]. ResNet-50 is composed of convolutional filters of numerous sizes to reduce the training time and manage the degradation issue that occurs in deep structures. In this work, ResNet-50 is applied as already trained on the standard ImageNet database [33], except for the fully connected softmax layer associated with this model. The basic architecture of ResNet-50 is shown in Figure 8.

2.4.3. VGG-19 Architecture. The Visual Geometry Group Network model is a deep neural network based on multilayered operations. It is comparable to the AlexNet model except for its additional convolutional layers. The VGGNet architecture is built by stacking kernel filters with a 3 × 3 window size and 2 × 2 pooling layers consecutively. The general VGG-19 architecture contains 3 × 3 convolutional layers, rectification layers, pooling layers, and three fully connected layers with 4096 neurons [30]. The VGGNet-19 network performs better than the AlexNet architecture due to its simplicity. The basic architecture of VGG-19 is shown in Figure 9.

2.5. Transfer Learning and Features Fusion. In the field of machine learning, transfer learning is recognized as a most useful method, which takes the contextual knowledge learned while solving one problem and applies it to new, related problems. Primarily, the network is trained for a particular job on a related dataset, and after that the transfer to the objective job is trained on the objective dataset [34]. In this work, the objective of the proposed technique is to examine the well-known CNN models in both the transfer learning context and feature-level fusion for retinal exudate classification, and to validate the achieved results on the e-Ophtha and DIARETDB1 retinal datasets. The fusion approach combines features extracted from the fully connected layer of three different DCNNs; the features of all three DCNNs are merged into a single feature vector. Suppose the three different CNN architectures with their respective FC layers are represented as

Figure 5: (a) Nonexudate patches; (b) exudate patches.

Figure 6: The region of interest is encircled with a blue line.


X = {x : x ∈ 1, 2, 3},  (2)

Y = {y : y ∈ 1, 2, 3},  (3)

where equation (2) represents the three CNN models and equation (3) illustrates the number of FC layers. Therefore, the extracted features are combined in the feature vector space FV⊕, having dimension "d", which can be described as

FV⊕ = X + Y.  (4)
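The feature-level fusion of equation (4) amounts to concatenating the FC-layer outputs of the three backbones into one d-dimensional vector. The sketch below uses random vectors as stand-ins; the per-backbone dimensions (2048 for Inception-v3 and ResNet-50 pooled features, 4096 for a VGG-19 FC layer) are typical values assumed for illustration, since the paper does not state d explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in FC-layer feature vectors from the three backbones
# (dimensions are assumptions typical for these architectures)
f_inception = rng.standard_normal(2048)  # Inception-v3 pooled features
f_resnet    = rng.standard_normal(2048)  # ResNet-50 pooled features
f_vgg       = rng.standard_normal(4096)  # VGG-19 FC features

# Feature-level fusion: concatenate into a single d-dimensional vector FV
fv = np.concatenate([f_inception, f_resnet, f_vgg])
print(fv.shape[0])  # d = 2048 + 2048 + 4096 = 8192
```

Concatenation (rather than averaging) preserves each backbone's features intact, letting the downstream softmax classifier weight them independently.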

Transfer learning-based techniques are implemented with the pretrained Inception-v3, ResNet-50, and VGGNet-19

Figure 7: The basic Inception-v3 architecture [28]: parallel 1 × 1, 3 × 3, and 5 × 5 convolutions and 3 × 3 maximum pooling applied to the previous layer, followed by filter concatenation.

Figure 8: The basic Residual Network-50 architecture [29].

Figure 9: The basic VGGNet-19 architecture [30].


architectures from ImageNet. The transfer learning setup treats the retained neural network modules as a fixed feature extractor for the new datasets. Generally, transfer learning keeps the original pretrained model weights and extracts image-based features through the final network layers. A large amount of data is normally required to train a convolutional neural network from scratch; however, it is often hard to organize a large database for a given problem. Contrary to the ideal circumstance, in most real-world applications it is difficult or rare to obtain similar training and testing data. In this scenario, the transfer learning approach is applied and has proved to be a fruitful technique. There are two main considerations in transfer learning: first, the selection of the pretrained architecture; second, the similarity and size of the problem. In the selection phase, the choice of pretrained architecture is based on how relevant the source problem is to the target problem. Regarding the similarity and size of the dataset, if the target database is small (for example, fewer than one thousand images) but relevant to the source database (for example, vehicle, handwritten-character, or medical datasets), then there is a higher chance of overfitting. In another case, if the target database is sufficiently large and relevant to the source training database, then there is little chance of overfitting, and only fine-tuning of the pretrained architecture is needed. The deep convolutional neural network (DCNN) models, including Inception-v3, ResNet-50, and VGG-19, are applied in the proposed framework to utilize their features through fine-tuning and transfer learning. Initially, the selected convolutional neural network models are trained using sample images taken from the standard, publicly available "ImageNet" database; moreover, the idea of transfer learning for fused feature extraction has been implemented. In this case, the proposed technique helps the architecture learn the common features from the new dataset without requiring additional training from scratch. The independently extracted features of all the selected convolutional neural network models are joined into the FC layer for further processing, including the classification of the nonexudate and exudate patch classes, performed by softmax.
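The final softmax classification over the fused features can be sketched as a single linear layer followed by a softmax, as below. The weights here are random placeholders (in the paper they would be learned during fine-tuning), and the fused dimension of 8192 is an illustrative assumption.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over class scores."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
d, n_classes = 8192, 2                       # fused feature size; {nonexudate, exudate}
W = rng.standard_normal((n_classes, d)) * 0.01  # placeholder learned weights
b = np.zeros(n_classes)                      # placeholder learned biases

fused = rng.standard_normal(d)               # fused feature vector for one patch
probs = softmax(W @ fused + b)               # class probabilities, sums to 1
pred = int(np.argmax(probs))                 # 0 = nonexudate, 1 = exudate
```

Subtracting the maximum score before exponentiating is the standard trick to avoid overflow without changing the resulting probabilities.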

3. Results and Discussion

The experiments are performed on "Google Colab" using graphics processing units (GPUs). For the performance evaluation, two publicly available standard datasets are selected. The training phase is divided into 2 sessions, and each session took 6 hours to complete the experimental task. The designed framework is trained on 3 convolutional neural network architectures, Inception-v3, ResNet-50, and VGG-19, individually, and after that transfer learning is performed to transfer the knowledge into the fused extracted features. The experimental results attained from the individual convolutional neural networks are compared and analyzed against the set of fused features, alongside various existing approaches. A 10-fold cross-validation approach is applied for performance evaluation. Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k = 10 becoming 10-fold cross-validation [35]. The input data are divided into different ratios of training and testing datasets in the experiments. The data splitting is performed in three different ways: 70% for training and 30% for testing; 80% for training and 20% for testing; and 90% for training and 10% for testing the CNN architectures. Table 1 shows the comparative analysis of the three individual CNN architectures and the proposed technique on the basis of data splitting using the e-Ophtha dataset.
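The k-fold splitting described above can be sketched in a few lines of plain Python (a minimal sketch of the idea, not the paper's evaluation code; `k_fold_indices` is a hypothetical helper name):

```python
def k_fold_indices(n_samples, k=10):
    """Split sample indices into k roughly equal, non-overlapping folds,
    as in 10-fold cross-validation."""
    folds, start = [], 0
    for i in range(k):
        # Distribute any remainder across the first n_samples % k folds
        size = n_samples // k + (1 if i < n_samples % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(100, k=10)
# Each fold serves once as the test set while the other k-1 folds train the model
test_fold = folds[0]
train_idx = [i for f in folds[1:] for i in f]
```

Rotating the held-out fold through all k positions means every sample is tested exactly once, which is what makes the accuracy estimate less sensitive to any single split.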

Similarly, Table 2 illustrates the results of the individual architectures and the proposed technique in terms of classification accuracy using the DIARETDB1 dataset.

In the context of classification performance, a true positive (TP) is an outcome where the model correctly predicts the positive class. Similarly, a true negative (TN) is an outcome where the model correctly predicts the negative class. However, false negatives (FN) and false positives (FP) represent the samples that are misclassified by the model. The following equations can be applied for the performance assessment.

Accuracy: a measure used to evaluate the model's effectiveness in identifying correct class labels, calculated by the following equation:

accuracy = (TN + TP) / (TN + TP + FN + FP) × 100.  (5)

F-measure: it averages the precision and recall of a classifier and ranges between 0 and 1, with "0" the worst and "1" the best score. It is computed as follows:

precision = TP / (FP + TP),

recall = TP / (FN + TP),

F1 score = (2 × recall × precision) / (recall + precision).  (6)
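Equations (5) and (6) translate directly into code; the sketch below computes all four metrics from the confusion counts (the toy counts are illustrative, not the paper's results):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy per equation (5); precision, recall, and F1 per equation (6)."""
    accuracy = (tn + tp) / (tn + tp + fn + fp) * 100.0
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f1

# Toy example: 96 true positives, 96 true negatives, 4 FP, 4 FN
acc, prec, rec, f1 = classification_metrics(96, 96, 4, 4)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))  # 96.0 0.96 0.96 0.96
```

Note that when precision and recall are equal, as in this symmetric example, the F1 score equals them both, since it is their harmonic mean.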

Table 1 also reports, for the exudate patches, the respective F1 score, recall, precision, and accuracy. The highest classification accuracy of the individual CNN architectures and the proposed technique is obtained under the data-splitting approach. Overall, the proposed approach achieves higher classification accuracy for retinal exudate detection than the individual CNN architectures. Using the e-Ophtha dataset, Table 1 shows that the highest classification accuracy of the individual CNN architectures, Inception-v3, ResNet-50, and VGG-19, is 93.67%, 97.80%, and 95.80%, respectively, but the proposed approach attained a classification accuracy of 98.43%. Using the DIARETDB1 dataset, Table 2 illustrates


that the highest classification accuracy of the individual CNN architectures, Inception-v3, ResNet-50, and VGG-19, is 93.57%, 97.90%, and 95.50%, respectively, but the proposed approach attained a classification accuracy of 98.91%.

For a better understanding of the classification accuracy results, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 presents the comparative results obtained by the proposed framework and the existing well-known approaches for the detection of retinal exudates. Table 3 lists the classification accuracies of [18] as 87%, [36] as 92%, and [37] as 97.60% and 98.20%. The proposed framework achieved higher accuracy than the abovementioned techniques on both the e-Ophtha and DIARETDB1 datasets.

The classification performance of the proposed framework is only slightly higher than that of [37], but the features extracted by the proposed framework can support the final results and be especially meaningful

Table 1: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the e-Ophtha dataset.

CNN model       Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3    70            30           0.95      0.98    0.92       92.50
Inception-v3    80            20           0.93      0.94    0.92       92.90
Inception-v3    90            10           0.94      0.92    0.96       93.67
ResNet-50       70            30           0.94      0.98    0.91       90.67
ResNet-50       80            20           0.94      0.98    0.90       95.70
ResNet-50       90            10           0.98      0.97    0.99       97.80
VGG-19          70            30           0.94      0.99    0.90       92.33
VGG-19          80            20           0.94      0.93    0.95       95.80
VGG-19          90            10           0.94      0.90    0.89       93.50
Proposed model  70            30           0.96      0.96    0.97       97.98
Proposed model  80            20           0.96      0.97    0.95       98.43
Proposed model  90            10           0.95      0.96    0.95       97.90

Table 2: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the DIARETDB1 dataset.

CNN model       Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3    70            30           0.95      0.98    0.93       93.10
Inception-v3    80            20           0.93      0.94    0.92       93.30
Inception-v3    90            10           0.95      0.94    0.97       93.57
ResNet-50       70            30           0.95      0.99    0.92       90.57
ResNet-50       80            20           0.94      0.98    0.90       96.10
ResNet-50       90            10           0.98      0.98    0.98       97.90
VGG-19          70            30           0.94      0.98    0.90       93.12
VGG-19          80            20           0.93      0.92    0.95       95.50
VGG-19          90            10           0.91      0.93    0.90       93.76
Proposed model  70            30           0.96      0.97    0.96       98.72
Proposed model  80            20           0.95      0.96    0.95       98.91
Proposed model  90            10           0.96      0.96    0.96       97.92

Figure 10: The comparative classification accuracy results using the e-Ophtha dataset.

in clinical practice. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models, namely, Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19), are used to extract features from fundus images based on transfer learning to improve classification accuracy. Finally, the classification accuracy of the proposed model is compared with the various DCNN models separately, as well as with existing techniques. The proposed transfer learning-based framework has been evaluated and obtains outstanding results in terms of accuracy without training from scratch. Hence, the proposed approach outperforms the other existing techniques for the detection of retinal exudates. In future work, the proposed framework can be modified to discriminate hard and soft exudates. Moreover, the proposed framework can also be extended to diagnose hemorrhages and microaneurysms for diabetic retinopathy.

Data Availability

In the experiment, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.
[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.
[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, USA, December 2001.
[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, San Diego, CA, USA, January 2019.
[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14–24, 2015.
[6] J. H. Tan, H. Fujita, S. Sivaprasad et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66–76, 2017.
[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9–19, 2009.
[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423–446, 2019.
[9] Q. Liu, B. Zou, J. Chen et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.
[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.
[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193–203, 2019.

Figure 11: The comparative classification accuracy results using the DIARETDB1 dataset.

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

Year  Methodology            Database   Accuracy (%)
2017  Fraz et al. [18]       DIARETDB1  87.00
2018  Mo et al. [36]         e-Ophtha   92.00
2019  Khojasteh et al. [37]  e-Ophtha   97.60
2019  Khojasteh et al. [37]  DIARETDB1  98.20
      Proposed framework     e-Ophtha   98.43
      Proposed framework     DIARETDB1  98.91


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.
[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.
[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499–515, 2019.
[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27–53, 2018.
[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.
[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, pp. 6745–6748, September 2010.
[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.
[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.
[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, San Diego, CA, USA, pp. 5951–5954, August 2012.
[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, Penang, Malaysia, pp. 265–269, December 2011.
[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, pp. 130–133, November 2014.
[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.
[24] E. Decencière, G. Cazuguel, X. Zhang et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196–203, 2013.
[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The diaretdb1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, University of Warwick, UK, pp. 1–10, September 2007.
[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Kansas City, MO, USA, pp. 325–331, November 2017.
[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, pp. 246–252, June 1999.
[28] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, pp. 1–9, June 2015.
[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 770–778, June 2016.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[31] J. Yosinski, "How transferable are features in deep neural networks?" Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.
[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403–5413, 2015.
[33] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161–189, 2013.
[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.
[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161–171, 2018.
[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62–69, 2019.


Page 5: Exudate Detection for Diabetic Retinopathy Using …downloads.hindawi.com/journals/complexity/2020/5801870.pdfdiabetic retinopathy. e automatic detection of diabetic retinopathy and

basis of adaptive learning rate (ALR) to attain the significantoutcome in the form of candidate exudate detection eregion of interest (ROI) is acquired from hybrid approach asshown in Figure 6 e ROI is fed into the pretrainedconvolutional neural network models for feature extractionto obtain compact feature vector

e following equation calculates the region of interest(ROI) by Gaussian mixture model

m(x) 1113944

q

p1rqw x μq σq1113872 1113873 (1)

where rq is denoted as a weight factor and w(x μq σq)

represents the normalized form of the average μq eadaptive learning rate is described to revise μq frequentlywith the application of probability constraint w(x μq σq) torecognize that a pixel is a part of qth Gaussian distribution ornot

24 Pretrained Deep Convolutional Neural Network Modelsfor Feature Extraction In the start individual deep con-volutional neural network models are applied to extract thefeatures and later adopted models are further combinedwith FC layer for the categorization of fundus images In thisscenario of feature combination there could be multipletypes of features including compactness roundness andcircularity extracted by the single shape descriptor In theproposed technique three up-to-date and the most recentdeep convolutional neural network architectures includingInception-v3 [28] Residual Network (ResNet)-50 [29] andVisual Geometry Group Network (VGGNet)-19 [30] areapplied for feature extraction and for further classification ofexudate and nonexudate diabetic retinopathy e aboveCNN models are already trained for numerous standardimage descriptors monitored by the significant extractedfeatures from the tiny images on the basis of transfer

Exudates

(a) (b)

Exudates

(c) (d)

Figure 3 Retinal image samples from e-Ophtha ((a) affected and (b) healthy) and DIARETDB1 ((c) affected and (d) healthy) databases In(a) and (c) exudates are encircled

0

50

100

150

200

250

300

350

400

450

500

0 50 100 150 200 250 300 400350 450 500 550X-axis

Y-ax

is

Figure 4 e class of sample exudate patches

Complexity 5

learning [31] In the following subsections the adopted deepconvolutional neural network architectures are brieflydefined

241 Inception-v3 Architecture Inception-v3 architecture isa convolutional network based on convolutional layers in-cluding pooling layers rectified linear operation layers andfully connected layers Inception-v3 architecture is designedfor image recognition and classification e proposedmodel is also based on the Inception-v3 architecture whichpools several convolutional filters of various sizes towards aninnovative single filter Furthermore the innovative filternot only decreases the computational complexity but alsoabates the number of parameters Inception-v3 also attainsbetter accuracy with the combination of heterogeneous-sized filters and low-dimensional embeddings e basicarchitecture of the Inception-v3 is shown in Figure 7

242 ResNet-50 Architecture Residual Network-50 is adeep convolutional neural network to achieve significant

results in the classification of ImageNet database [32]ResNet-50 is composed of numerous sizes of convolutionalfilters to reduce the training time and manage the degra-dation issue that happens because of deep structures In thiswork ResNet-50 is applied which is already trained on thestandard ImageNet database [33] except fully connectedsoftmax layer associated with this model e basic archi-tecture of the ResNet-50 is shown in Figure 8

243 VGG-19 Architecture e Visual Geometry GroupNetwork model is based on multilayered operations called adeep neural network model It is comparable with theAlexNet model except additional convolutional layers eexpansion of VGGNet architecture is based on the re-placement of kernel-sized filters with the window size 3times 3filters and with 2times 2 pooling layers consecutively egeneral VGG-19 architecture contains 3times 3 convolutionslayers ratification layers pooling layers and three fullyconnected layers with 4096 neurons [30] e performanceof VGGNet-19 neural network is better than AlexNet ar-chitecture due to its simplicity e basic architecture of theVGG-19 is shown in Figure 9

25 Transfer Learning and Features Fusion In the field ofmachine learning transfer learning is recognized as a mostuseful method which learns the contextual knowledgeused for solving one problem and applying it to the newrelated problems Primarily the transfer learning ap-proach network is trained for a particular job on the re-lated dataset and after that transfer to the objective job istrained by the objective dataset [34] In this work theobjective of the proposed technique is to experiment thewell-known CCN models in both transfer learning contextand feature-level fusion concerning retinal exudateclassification and to validate the achieved results on thee-Ophtha and DIARETDB1 retinal datasets e fusionapproach combines features extracted from fully con-nected layer using three different DCNNs e features ofall three DCNNs are merged together in single featurevector Suppose three different CNN architectures withrespective FC layers are represented as

Figure 5: (a) Nonexudate patches; (b) exudate patches.

Figure 6: The region of interest is encircled with a blue line.

6 Complexity

X = {x}, x ∈ {1, 2, 3}, (2)

Y = {y}, y ∈ {1, 2, 3}, (3)

where equation (2) represents the three CNN models and equation (3) illustrates the number of FC layers. Therefore, the extracted features are combined in the feature vector space FV⊕, having dimension "d", which can be described as

FV⊕ = X + Y. (4)
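The fusion step expressed by equation (4) amounts to concatenating the per-model FC-layer feature vectors into one fused vector of dimension d. A minimal sketch with toy vectors (the vector contents and lengths are illustrative only, not the real high-dimensional FC features):

```python
def fuse_features(feature_vectors):
    """Concatenate the FC-layer features of several CNNs into one fused vector FV."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

# Hypothetical FC-layer outputs of Inception-v3, ResNet-50, and VGG-19.
fc_inception = [0.1, 0.4]
fc_resnet = [0.7, 0.2]
fc_vgg = [0.9, 0.3]

fv = fuse_features([fc_inception, fc_resnet, fc_vgg])
d = len(fv)  # fused dimension d = sum of the individual FC dimensions (6 here)
```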

Transfer learning-based techniques are implemented with the pretrained Inception-v3, ResNet-50, and VGGNet-19

Figure 7: The basic Inception-v3 architecture [28] (an Inception module: 1 × 1, 3 × 3, and 5 × 5 convolution branches and a 3 × 3 maximum-pooling branch, preceded by 1 × 1 convolutions and merged by filter concatenation over the previous layer).

Figure 8: The basic Residual Network-50 architecture [29] (a 7 × 7 convolution followed by four stages of bottleneck blocks, with cfg = [3, 4, 6, 3] blocks per stage for the 50-layer variant; feature-map sizes shrink from 112 to 7, with max pooling after the first stage and average pooling before the final fc-1000 layer).

Figure 9: The basic VGGNet-19 architecture [30] (stacked 3 × 3 convolutional layers with maximum pooling, feature-map sizes shrinking from 224 to 7, followed by two fully connected layers of 4096 neurons and a final fully connected layer of 1000 neurons).


architectures from ImageNet. The transfer learning setup treats the retained neural network modules as a fixed feature extractor for the new datasets. Generally, transfer learning keeps the primary pretrained model weights and extracts image-based features through the concluding network layer. A huge amount of data is usually mandatory to train a convolutional neural network from scratch; however, it is sometimes hard to organize a large database for a related problem. Contrary to the ideal circumstance, in most real-world applications it is difficult or rare to obtain similar training and testing data. In this scenario, the transfer learning approach is applied and has proved a fruitful technique. There are two main steps in the transfer learning approach: firstly, the selection of a pretrained architecture; secondly, the assessment of problem similarity and dataset size. In the selection phase, the choice of pretrained architecture is based on how relevant the source problem is to the objective problem. Regarding similarity and size of the dataset, if the target database is small (for example, fewer than one thousand images) yet relevant to the source database (for example, vehicle, handwritten-character, or medical datasets), then there is a higher chance of overfitting. In the other case, if the target database is sufficiently large and relevant to the source training database, there is little chance of overfitting, and only fine-tuning of the pretrained architecture is needed. The deep convolutional neural network (DCNN) models including Inception-v3, ResNet-50, and VGG-19 are applied in the proposed framework to utilize their features through fine-tuning and transfer learning. In the beginning, the selected convolutional neural network models are trained using sample images from the standard, publicly available ImageNet database; moreover, the idea of transfer learning for fused feature extraction has been implemented. In this case, the proposed technique helps the architecture learn common features from the new dataset without requiring additional training. The independently extracted features of all the selected convolutional neural network models are joined into the FC layer for further action, including the classification of nonexudate and exudate patch classes performed by softmax.
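The fixed-feature-extractor setup described above can be sketched as a frozen feature stage followed by a trainable linear layer with softmax over the two patch classes. The softmax below is a plain Python implementation, and the feature vector and classifier parameters are hypothetical stand-ins, not the paper's trained values:

```python
import math

def softmax(logits):
    """Convert raw class scores into probabilities (nonexudate vs. exudate)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, weights, biases):
    """Linear layer + softmax over the fused features from the frozen extractor."""
    logits = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)

# Hypothetical fused feature vector and trained classifier parameters.
features = [0.2, 0.8, 0.5]
weights = [[0.1, -0.3, 0.2], [0.4, 0.9, -0.1]]  # one weight row per class
biases = [0.0, 0.1]
probs = classify(features, weights, biases)  # two probabilities summing to 1
```

Only `weights` and `biases` would be updated during training; the pretrained convolutional weights stay fixed, which is what lets the small retinal datasets be used without overfitting the full networks.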

3. Results and Discussion

The experiments are performed on Google Colab using graphics processing units (GPUs). For the performance evaluation, two publicly available standard datasets are selected. The training phase is divided into 2 sessions, and each session took 6 hours to complete the experimental task. The designed framework of the proposed technique is trained on 3 convolutional neural network architectures, including Inception-v3, ResNet-50, and VGG-19, individually, and after that transfer learning is performed to transfer the learned knowledge into the fused extracted features. The attained experimental results from the individual convolutional neural networks are compared and analyzed against the set of fused features alongside various existing approaches. A 10-fold cross-validation approach is applied for performance evaluation. Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups into which a given data sample is to be split; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k = 10 becoming 10-fold cross-validation [35]. The input data are divided into different ratios of training and testing datasets used in the experiments of the proposed methodology. The data splitting is performed in three different ways: 70% for training and 30% for testing; similarly, 80% for training and 20% for testing; and 90% for training and 10% for testing the CNN architectures. Table 1 shows the comparative analysis of the three individual CNN architectures with the proposed technique on the basis of data splitting using the e-Ophtha dataset.
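The k-fold splitting described above can be sketched with a plain index-partitioning routine (a simplified stand-in for a library implementation such as scikit-learn's KFold):

```python
def k_fold_splits(n_samples, k=10):
    """Return k (train_indices, test_indices) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one sample.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        folds.append((train, test))
        start += size
    return folds

splits = k_fold_splits(100, k=10)
# Each of the 10 folds holds out 10 samples for testing and trains on the other 90.
```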

Similarly, Table 2 illustrates the results of the individual architectures and the proposed technique in terms of classification accuracy using the DIARETDB1 dataset.

In the context of classification performance, a true positive (TP) is an outcome where the model correctly predicts the positive class. Similarly, a true negative (TN) is an outcome where the model correctly predicts the negative class. However, false negatives (FN) and false positives (FP) represent the samples that are misclassified by the model. The following equations can be applied for the performance assessment.

Accuracy: a measure used to evaluate the model's effectiveness in identifying correct class labels, calculated by the following equation:

accuracy = (TN + TP) / (TN + TP + FN + FP) × 100. (5)

F-measure: the harmonic mean of the precision and recall of a classifier, with a range between 0 and 1; the worst and best scores are represented by "0" and "1", respectively. It is computed as follows:

precision = TP / (FP + TP),

recall = TP / (FN + TP),

F1 score = (2 × recall × precision) / (recall + precision). (6)
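Equations (5) and (6) can be computed directly from the four confusion-matrix counts; a small sketch (the counts below are illustrative, not taken from the paper's experiments):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy (as a percentage), precision, recall, and F1 from equations (5)-(6)."""
    accuracy = (tn + tp) / (tn + tp + fn + fp) * 100
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f1

# Hypothetical counts for exudate-patch classification.
acc, prec, rec, f1 = classification_metrics(tp=90, tn=95, fp=5, fn=10)
# acc == 92.5, rec == 0.9
```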

Table 1 also illustrates the output for exudate patches with the respective F1 score, recall, precision, and accuracy. The highest classification accuracy of the individual CNN architectures and the proposed technique is achieved with the help of the data splitting approach. Overall, the proposed approach achieves higher classification accuracy for retinal exudate detection than the individual CNN architectures. Using the e-Ophtha dataset, Table 1 shows that the highest classification accuracy of the individual CNN architectures, including Inception-v3, ResNet-50, and VGG-19, is 93.67%, 97.80%, and 95.80%, respectively, but the proposed approach attained a classification accuracy of 98.43%. Using the DIARETDB1 dataset, Table 2 illustrates


that the highest classification accuracy of the individual CNN architectures, including Inception-v3, ResNet-50, and VGG-19, is 93.57%, 97.90%, and 95.50%, respectively, but the proposed approach attained a classification accuracy of 98.91%.

To give a better understanding of the classification accuracy results, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 demonstrates the comparative results obtained by the proposed framework and the existing familiar approaches for the detection of retinal exudates. Table 3 lists the classification accuracies of [18] as 87%, of [36] as 92%, and of [37] as 97.60% and 98.20%. The proposed framework achieved higher accuracy than the abovementioned techniques using both the e-Ophtha and DIARETDB1 datasets.

The comparative classification performance of the proposed framework against [37] is only slightly higher, but the extracted features achieved by the proposed framework can support the final results and can specifically be very meaningful

Table 1: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the e-Ophtha dataset.

CNN model       Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3    70            30           0.95      0.98    0.92       92.50
Inception-v3    80            20           0.93      0.94    0.92       92.90
Inception-v3    90            10           0.94      0.92    0.96       93.67
ResNet-50       70            30           0.94      0.98    0.91       90.67
ResNet-50       80            20           0.94      0.98    0.90       95.70
ResNet-50       90            10           0.98      0.97    0.99       97.80
VGG-19          70            30           0.94      0.99    0.90       92.33
VGG-19          80            20           0.94      0.93    0.95       95.80
VGG-19          90            10           0.94      0.90    0.89       93.50
Proposed model  70            30           0.96      0.96    0.97       97.98
Proposed model  80            20           0.96      0.97    0.95       98.43
Proposed model  90            10           0.95      0.96    0.95       97.90

Table 2: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the DIARETDB1 dataset.

CNN model       Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3    70            30           0.95      0.98    0.93       93.10
Inception-v3    80            20           0.93      0.94    0.92       93.30
Inception-v3    90            10           0.95      0.94    0.97       93.57
ResNet-50       70            30           0.95      0.99    0.92       90.57
ResNet-50       80            20           0.94      0.98    0.90       96.10
ResNet-50       90            10           0.98      0.98    0.98       97.90
VGG-19          70            30           0.94      0.98    0.90       93.12
VGG-19          80            20           0.93      0.92    0.95       95.50
VGG-19          90            10           0.91      0.93    0.90       93.76
Proposed model  70            30           0.96      0.97    0.96       98.72
Proposed model  80            20           0.95      0.96    0.95       98.91
Proposed model  90            10           0.96      0.96    0.96       97.92

Figure 10: The comparative classification accuracy results (%) of Inception-v3, ResNet-50, VGG-19, and the proposed model using the e-Ophtha dataset.


in clinical practices. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models, namely, Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19), are used to extract the features from fundus images based on transfer learning for the improvement of classification accuracy. Finally, the classification accuracy of the proposed model is compared with that of the various DCNN models separately as well as with the existing techniques. The proposed transfer learning-based framework has been evaluated, and outstanding results in terms of accuracy are obtained without training from scratch. Hence, the proposed approach outperforms the other existing techniques for the detection of retinal exudates. In future work, the proposed framework can be modified to discriminate hard and soft exudates. Moreover, the proposed framework can also be extended to diagnose hemorrhages and microaneurysms for diabetic retinopathy.

Data Availability

In the experiments, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.

[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.

[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.

[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, Springer, San Diego, CA, USA, January 2019.

[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14–24, 2015.

[6] J. H. Tan, H. Fujita, S. Sivaprasad et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66–76, 2017.

[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9–19, 2009.

[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423–446, 2019.

[9] Q. Liu, B. Zou, J. Chen et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.

[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.

[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193–203, 2019.

Figure 11: The comparative classification accuracy results (%) of Inception-v3, ResNet-50, VGG-19, and the proposed model using the DIARETDB1 dataset.

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

Year  Methodology            Database   Accuracy (%)
2017  Fraz et al. [18]       DIARETDB1  87.00
2018  Mo et al. [36]         e-Ophtha   92.00
2019  Khojasteh et al. [37]  e-Ophtha   97.60
2019  Khojasteh et al. [37]  DIARETDB1  98.20
      Proposed framework     e-Ophtha   98.43
      Proposed framework     DIARETDB1  98.91


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.

[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.

[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499–515, 2019.

[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27–53, 2018.

[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.

[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745–6748, September 2010.

[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.

[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.

[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951–5954, August 2012.

[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265–269, December 2011.

[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130–133, November 2014.

[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.

[24] E. Decencière, G. Cazuguel, X. Zhang et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196–203, 2013.

[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The DIARETDB1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1–10, September 2007.

[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas City, MO, USA, pp. 325–331, November 2017.

[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, Colorado, pp. 246–252, June 1999.

[28] C. Szegedy, W. Liu, and Y. Jia, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1–9, June 2015.

[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770–778, June 2016.

[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[31] J. Yosinski, "How transferable are features in deep neural networks?," Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.

[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403–5413, 2015.

[33] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.

[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161–189, 2013.

[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.

[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161–171, 2018.

[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62–69, 2019.


Page 6: Exudate Detection for Diabetic Retinopathy Using …downloads.hindawi.com/journals/complexity/2020/5801870.pdfdiabetic retinopathy. e automatic detection of diabetic retinopathy and

learning [31] In the following subsections the adopted deepconvolutional neural network architectures are brieflydefined

241 Inception-v3 Architecture Inception-v3 architecture isa convolutional network based on convolutional layers in-cluding pooling layers rectified linear operation layers andfully connected layers Inception-v3 architecture is designedfor image recognition and classification e proposedmodel is also based on the Inception-v3 architecture whichpools several convolutional filters of various sizes towards aninnovative single filter Furthermore the innovative filternot only decreases the computational complexity but alsoabates the number of parameters Inception-v3 also attainsbetter accuracy with the combination of heterogeneous-sized filters and low-dimensional embeddings e basicarchitecture of the Inception-v3 is shown in Figure 7

242 ResNet-50 Architecture Residual Network-50 is adeep convolutional neural network to achieve significant

results in the classification of ImageNet database [32]ResNet-50 is composed of numerous sizes of convolutionalfilters to reduce the training time and manage the degra-dation issue that happens because of deep structures In thiswork ResNet-50 is applied which is already trained on thestandard ImageNet database [33] except fully connectedsoftmax layer associated with this model e basic archi-tecture of the ResNet-50 is shown in Figure 8

243 VGG-19 Architecture e Visual Geometry GroupNetwork model is based on multilayered operations called adeep neural network model It is comparable with theAlexNet model except additional convolutional layers eexpansion of VGGNet architecture is based on the re-placement of kernel-sized filters with the window size 3times 3filters and with 2times 2 pooling layers consecutively egeneral VGG-19 architecture contains 3times 3 convolutionslayers ratification layers pooling layers and three fullyconnected layers with 4096 neurons [30] e performanceof VGGNet-19 neural network is better than AlexNet ar-chitecture due to its simplicity e basic architecture of theVGG-19 is shown in Figure 9

25 Transfer Learning and Features Fusion In the field ofmachine learning transfer learning is recognized as a mostuseful method which learns the contextual knowledgeused for solving one problem and applying it to the newrelated problems Primarily the transfer learning ap-proach network is trained for a particular job on the re-lated dataset and after that transfer to the objective job istrained by the objective dataset [34] In this work theobjective of the proposed technique is to experiment thewell-known CCN models in both transfer learning contextand feature-level fusion concerning retinal exudateclassification and to validate the achieved results on thee-Ophtha and DIARETDB1 retinal datasets e fusionapproach combines features extracted from fully con-nected layer using three different DCNNs e features ofall three DCNNs are merged together in single featurevector Suppose three different CNN architectures withrespective FC layers are represented as

(a)

(b)

Figure 5 (a) Nonexudate patches (b) exudate patches

Figure 6 e region of interest is encircled with blue line

6 Complexity

X x isin 1 2 3 (2)

Y y isin 1 2 3 (3)

where equation (2) represents three CNN models andequation (3) illustrates a number of FC layers erefore the

extracted features are combined in the feature vector spaceFVoplus having dimensions ldquodldquo and can be described as

FVoplus X + Y (4)

Transfer learning-based techniques are implemented withthe pretrained Inception-v3 ResNet-50 and VGGNet-19

1 times 1convolutions

3 times 3convolutions

Filterconcatenation

Previous layer

5 times 5convolutions

1 times 1convolutions

3 times 3 maximumpooling

1 times 1convolutions

1 times 1convolutions

Figure 7 e basic Inception-v3 architecture [28]

7 times

7 co

nv 6

42

1 times

1 co

nv 6

43

times 3

conv

64

1 times

1 co

nv 2

56

1 times

1 co

nv 6

43

times 3

conv

64

1 times

1 co

nv 2

56

1 times

1 co

nv 6

43

times 3

conv

64

1 times

1 co

nv 2

56

1 times

1 co

nv 1

282

3 times

3 co

nv 1

281

times 1

conv

512

1 times

1 co

nv 1

283

times 3

conv

128

1 times

1 co

nv 5

12

1 times

1 co

nv 1

283

times 3

conv

128

1 times

1 co

nv 5

12

1 times

1 co

nv 2

562

3 times

3 co

nv 2

561

times 1

conv

102

4

1 times

1 co

nv 2

563

times 3

conv

256

1 times

1 co

nv 1

024

1 times

1 co

nv 2

563

times 3

conv

256

1 times

1 co

nv 1

024

1 times

1 co

nv 5

122

3 times

3 co

nv 5

121

times 1

conv

204

8

1 times

1 co

nv 5

123

times 3

conv

512

1 times

1 co

nv 2

048

1 times

1 co

nv 5

123

times 3

conv

512

1 times

1 co

nv 2

048

fc 1

000

cfg[0] blocks cfg[1] blocks cfg[3] blockscfg[2] blocksSize

112

Size

56

Size

28

Size

14

Size

7

50 layers cfg = [3 4 6 3]101 layers cfg = [3 4 23 8]152 layers cfg = [3 8 36 3]

avg p

ool2

max

poo

l2

Figure 8 e basic Residual Network-50 architecture [29]

Conv

64K

3 times

3

Conv

64K

3 times

3

Conv

128K

3 times

3

Conv

128K

3 times

3

Conv

256K

3 times

3

Conv

256K

3 times

3

Conv

512K

3 times

3

Conv

512K

3 times

3

Conv

512K

3 times

3

Conv

512K

3 times

3

Pool

Fully connected layers

K = kernals N = neurons

Pool

Pool

Pool

Pool

FC40

96N

FC40

96N

FC10

00N

Convolutional + maximum pooling layers

Laye

r 1

Laye

r 2

Laye

r 3

Laye

r 4

Laye

r 5

Size

7La

yer 6

Laye

r 7

Laye

r 8

Size

224

Size

112

Size

56

Size

28

Size

14

Figure 9 e basic VGGNet-19 architecture [30]

Complexity 7

architectures from ImageNet e transfer learning setup istracked by handling the continuing neural networkmodules asthe fixed feature extractor for the different datasets Generallythe transfer learning holds the primary pretrained prototypicalweights and extracts image-based features through the con-cluding network layer Mostly a huge amount of data aremandatory to train a convolutional neural network fromscrape however sometime it is hard to organize a large amountof database of related problems Opposite to an ultimatecircumstance in the case ofmost real-world applications it is adifficult job or it rarely happens to achieve similar training andtesting data In this scenario the transfer learning approach ispresented and is also proved a fruitful techniqueere are twomain steps of transfer learning approach firstly the selectionof pretrained architecture secondly the problem similarityand its size In the selection phase the choice of pretrainedarchitecture is based on the relevant problem which is asso-ciated with the objective problem In the case of similarity andsize of the dataset if the amount of the target database is lesser(for example smaller than one thousand images) and alsorelevant to the source database (for example vehicles datasethand-written character dataset and medical datasets) thenthere will be more chances of data over fitting In another caseif the amount of the target database is sufficient and relevant tothe source training database then there will be a little chance ofover fitting and it just needs fine tuning of the pretrainedarchitecture e deep convolutional neural network (DCNN)models including Inception-v3 ResNet-50 and VGG-19 areapplied in the proposed framework to utilize their features onfine-tuning and transfer learning In the beginning thetraining of the selective convolutional neural network modelsis performed using sample images taken by the standardpublicly available ldquoImageNetrdquo database moreover the idea oftransfer learning 
for fused feature extraction has beenimplemented In this case the proposed technique assists thearchitecture to learn the common features from the newdataset without any requirement of other training e in-dependently extracted features of all the selective convolu-tional neural network models are joined into the FC layer forfurther action including the classification of nonexudate andexudate patch classes performed by softmax

3 Results and Discussion

e experiments are performed on ldquoGoogle Colabrdquo usinggraphics processing units (GPUs) For the performanceevaluation two publicly available standard datasets are se-lected for experiments e training phase is divided into 2sessions and each session took 6 hours to complete theexperimental task e designed framework of the proposedtechnique is trained on 3 types of convolutional neuralnetwork architectures including Inception-v3 ResNet-50and VGG-19 individually and after that transfer learning isperformed to transfer the knowledge data into the fusedextracted features e attained experimental results fromthe individual convolutional neural network is comparedand analyzed with the set of fused features accompanied byvarious existing approaches 10-fold cross-validation ap-proach is applied for performance evaluation Cross-

validation is a resampling procedure used to evaluate ma-chine learning models on a limited data sample e pro-cedure has a single parameter called k that refers to thenumber of groups that a given data sample is to be split intoAs such the procedure is often called k-fold cross-validationWhen a specific value for k is chosen it may be used in placeof k in the reference to the model such as k 10 becoming10-fold cross-validation [35]e input data are divided intodifferent ratios of training and testing datasets used in theexperiments of the proposed methodology e splittingdata are performed in three different ways with the ratio of70 for training and 30 for testing similarly 80 fortraining and 20 for testing and 90 for training and 10for testing the CNN architectures Table 1 shows thecomparative analysis of three individual CNN architectureswith the proposed technique on the basis of data splittingusing e-Ophtha dataset

Similarly Table 2 illustrates individual architecture andproposed technique results in terms of classification accu-racy using DIARETDB1 dataset

In the context of classification performance a true positiveis an outcome where the model correctly predicts the positiveclass Similarly a true negative is an outcome where the modelcorrectly predicts the negative class However false negative(FN) and false positive (FP) represent the samples which aremisclassified by the model e following equations can beapplied for the performance assessment

Accuracy: a measure used to evaluate the model's effectiveness at identifying correct class labels, calculated by the following equation:

accuracy = (TN + TP) / (TN + TP + FN + FP) × 100. (5)

F-measure: the harmonic mean of the precision and recall of a classifier, with a range between 0 and 1; the worst and best scores are represented by "0" and "1", respectively. It is computed as follows:

precision = TP / (FP + TP),

recall = TP / (FN + TP),

F1 score = 2 × (recall × precision) / (recall + precision). (6)
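Equations (5) and (6) translate directly into code. The following minimal sketch uses made-up confusion-matrix counts purely for demonstration:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy (%), precision, recall, and F1 score from
    confusion-matrix counts, following equations (5) and (6)."""
    accuracy = (tn + tp) / (tn + tp + fn + fp) * 100
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f1

# Illustrative counts (not from the paper): 80 TP, 80 TN, 20 FP, 20 FN.
acc, prec, rec, f1 = classification_metrics(tp=80, tn=80, fp=20, fn=20)
assert abs(acc - 80.0) < 1e-9
assert abs(prec - 0.8) < 1e-9 and abs(rec - 0.8) < 1e-9
assert abs(f1 - 0.8) < 1e-9  # when precision == recall, F1 equals both
```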

Table 1 also illustrates the output as exudate patches with the respective F1 score, recall, precision value, and accuracy. The highest classification accuracy of the individual CNN architectures and the proposed technique is achieved with the help of the data splitting approach. Overall, the proposed approach achieves higher classification accuracy for retinal exudate detection than the individual CNN architectures. Using the e-Ophtha dataset, Table 1 shows that the highest classification accuracy of the individual CNN architectures, including Inception-v3, ResNet-50, and VGG-19, is 93.67%, 97.80%, and 95.80%, respectively, but the proposed approach attained a classification accuracy of 98.43%. Using the DIARETDB1 dataset, Table 2 illustrates

8 Complexity

that the highest classification accuracy of the individual CNN architectures, including Inception-v3, ResNet-50, and VGG-19, is 93.57%, 97.90%, and 95.50%, respectively, but the proposed approach attained a classification accuracy of 98.91%.

To give a better understanding of the classification accuracy results, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 demonstrates the comparative results obtained by the proposed framework and existing well-known approaches for the detection of retinal exudates. Table 3 lists the classification accuracies of [18] as 87%, [36] as 92%, and [37] as 97.60% and 98.20%. The proposed framework achieved higher accuracy than the abovementioned techniques using both the e-Ophtha and DIARETDB1 datasets.

The classification performance of the proposed framework is only slightly higher than that of [37], but the features extracted by the proposed framework can support the final results and be especially meaningful

Table 1: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the e-Ophtha dataset.

| CNN model | Training (%) | Testing (%) | F1 score | Recall | Precision | Classification accuracy (%) |
|---|---|---|---|---|---|---|
| Inception-v3 | 70 | 30 | 0.95 | 0.98 | 0.92 | 92.50 |
| Inception-v3 | 80 | 20 | 0.93 | 0.94 | 0.92 | 92.90 |
| Inception-v3 | 90 | 10 | 0.94 | 0.92 | 0.96 | 93.67 |
| ResNet-50 | 70 | 30 | 0.94 | 0.98 | 0.91 | 90.67 |
| ResNet-50 | 80 | 20 | 0.94 | 0.98 | 0.90 | 95.70 |
| ResNet-50 | 90 | 10 | 0.98 | 0.97 | 0.99 | 97.80 |
| VGG-19 | 70 | 30 | 0.94 | 0.99 | 0.90 | 92.33 |
| VGG-19 | 80 | 20 | 0.94 | 0.93 | 0.95 | 95.80 |
| VGG-19 | 90 | 10 | 0.94 | 0.90 | 0.89 | 93.50 |
| Proposed model | 70 | 30 | 0.96 | 0.96 | 0.97 | 97.98 |
| Proposed model | 80 | 20 | 0.96 | 0.97 | 0.95 | 98.43 |
| Proposed model | 90 | 10 | 0.95 | 0.96 | 0.95 | 97.90 |

Table 2: Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using the DIARETDB1 dataset.

| CNN model | Training (%) | Testing (%) | F1 score | Recall | Precision | Classification accuracy (%) |
|---|---|---|---|---|---|---|
| Inception-v3 | 70 | 30 | 0.95 | 0.98 | 0.93 | 93.10 |
| Inception-v3 | 80 | 20 | 0.93 | 0.94 | 0.92 | 93.30 |
| Inception-v3 | 90 | 10 | 0.95 | 0.94 | 0.97 | 93.57 |
| ResNet-50 | 70 | 30 | 0.95 | 0.99 | 0.92 | 90.57 |
| ResNet-50 | 80 | 20 | 0.94 | 0.98 | 0.90 | 96.10 |
| ResNet-50 | 90 | 10 | 0.98 | 0.98 | 0.98 | 97.90 |
| VGG-19 | 70 | 30 | 0.94 | 0.98 | 0.90 | 93.12 |
| VGG-19 | 80 | 20 | 0.93 | 0.92 | 0.95 | 95.50 |
| VGG-19 | 90 | 10 | 0.91 | 0.93 | 0.90 | 93.76 |
| Proposed model | 70 | 30 | 0.96 | 0.97 | 0.96 | 98.72 |
| Proposed model | 80 | 20 | 0.95 | 0.96 | 0.95 | 98.91 |
| Proposed model | 90 | 10 | 0.96 | 0.96 | 0.96 | 97.92 |

Figure 10: The comparative classification accuracy results using the e-Ophtha dataset (bar chart of classification accuracy (%), 0–100, for Inception-v3, ResNet-50, VGG-19, and the proposed model).


in clinical practice. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models, namely, Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19), are used to extract features from fundus images based on transfer learning to improve classification accuracy. Finally, the classification accuracy of the proposed model is compared with that of various DCNN models separately, as well as with the existing techniques. The proposed transfer learning-based framework has been evaluated, and outstanding accuracy is obtained without training from scratch. Hence, the proposed approach outperforms the other existing techniques in accuracy for the detection of retinal exudates. In future work, the proposed framework can be modified to discriminate hard and soft exudates. Moreover, the proposed framework can also be extended to diagnose hemorrhages and microaneurysms for diabetic retinopathy.
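The fusion step at the heart of the framework, concatenating per-backbone FC-layer features and classifying the fused vector with softmax, can be illustrated with a rough NumPy sketch. The feature dimensions and classifier weights below are random stand-ins invented for demonstration, not real CNN activations or trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in feature vectors for one image patch, as if extracted from the
# FC layers of the three pretrained backbones (dimensions are illustrative).
inception_feat = rng.standard_normal(2048)
resnet_feat = rng.standard_normal(2048)
vgg_feat = rng.standard_normal(4096)

# Feature fusion: concatenate the per-model vectors into one fused vector.
fused = np.concatenate([inception_feat, resnet_feat, vgg_feat])

# A softmax classifier over the fused features for the two patch classes
# (exudate vs. non-exudate); the weights here are random, untrained stand-ins.
W = rng.standard_normal((2, fused.size)) * 0.01
logits = W @ fused
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()

assert fused.shape == (2048 + 2048 + 4096,)
assert abs(probs.sum() - 1.0) < 1e-9
```

In the actual framework, the backbone features would come from ImageNet-pretrained networks and the softmax weights would be learned on the fused vectors.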

Data Availability

In the experiment, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.

[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.

[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.

[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, Springer, San Diego, CA, USA, January 2019.

[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14–24, 2015.

[6] J. H. Tan, H. Fujita, S. Sivaprasad et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66–76, 2017.

[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9–19, 2009.

[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423–446, 2019.

[9] Q. Liu, B. Zou, J. Chen et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.

[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.

[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193–203, 2019.

Figure 11: The comparative classification accuracy results using the DIARETDB1 dataset (bar chart of classification accuracy (%), 0–100, for Inception-v3, ResNet-50, VGG-19, and the proposed model).

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

| Year | Methodology | Database | Accuracy (%) |
|---|---|---|---|
| 2017 | Fraz et al. [18] | DIARETDB1 | 87.00 |
| 2018 | Mo et al. [36] | e-Ophtha | 92.00 |
| 2019 | Khojasteh et al. [37] | e-Ophtha | 97.60 |
| 2019 | Khojasteh et al. [37] | DIARETDB1 | 98.20 |
| — | Proposed framework | e-Ophtha | 98.43 |
| — | Proposed framework | DIARETDB1 | 98.91 |


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.

[13] C. Lam, D. Yi, M. Guo, and T. Lindsey, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, pp. 147–155, 2018.

[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499–515, 2019.

[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27–53, 2018.

[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.

[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745–6748, September 2010.

[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.

[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.

[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951–5954, August 2012.

[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265–269, December 2011.

[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130–133, November 2014.

[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.

[24] E. Decencière, G. Cazuguel, X. Zhang et al., "TeleOphta: machine learning and image processing methods for tele-ophthalmology," IRBM, vol. 34, no. 2, pp. 196–203, 2013.

[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The diaretdb1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1–10, September 2007.

[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas City, MO, USA, pp. 325–331, November 2017.

[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, CO, USA, pp. 246–252, June 1999.

[28] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1–9, June 2015.

[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770–778, June 2016.

[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[31] J. Yosinski, "How transferable are features in deep neural networks?," Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.

[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403–5413, 2015.

[33] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.

[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161–189, 2013.

[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.

[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161–171, 2018.

[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62–69, 2019.


[36] J Mo L Zhang and Y Feng ldquoExudate-based diabeticmacular edema recognition in retinal images using cascadeddeep residual networksrdquo Neurocomputing vol 290 pp 161ndash171 2018

[37] P Khojasteh L A Passos Junior T Carvalho et al ldquoExudatedetection in fundus images using deeply-learnable featuresrdquoComputers in Biology and Medicine vol 104 pp 62ndash69 2019

Complexity 11

Page 8: Exudate Detection for Diabetic Retinopathy Using …downloads.hindawi.com/journals/complexity/2020/5801870.pdfdiabetic retinopathy. e automatic detection of diabetic retinopathy and

architectures from ImageNet. The transfer learning setup treats the retained neural network modules as a fixed feature extractor for the new dataset. In general, transfer learning keeps the original pretrained model weights and extracts image-based features through the final network layers. A large amount of data is normally required to train a convolutional neural network from scratch; however, it is often difficult to assemble a large database for a given problem. Contrary to the ideal situation, in most real-world applications it is difficult, and rarely possible, to obtain training and testing data drawn from the same distribution. In this scenario, the transfer learning approach is applicable and has proved to be a fruitful technique. Transfer learning involves two main considerations: first, the selection of the pretrained architecture; second, the similarity and size of the target problem. In the selection phase, the choice of pretrained architecture is based on how closely the source problem is related to the target problem. Regarding similarity and size, if the target database is small (for example, fewer than one thousand images) even though it is relevant to the source database (for example, vehicle, handwritten-character, or medical datasets), there is a higher risk of overfitting. If, instead, the target database is sufficiently large and relevant to the source training database, there is little risk of overfitting, and only fine-tuning of the pretrained architecture is needed. The deep convolutional neural network (DCNN) models Inception-v3, ResNet-50, and VGG-19 are applied in the proposed framework to utilize their features through fine-tuning and transfer learning. Initially, the selected convolutional neural network models are trained on sample images from the standard, publicly available ImageNet database; the idea of transfer learning for fused feature extraction is then implemented. In this way, the proposed technique allows the architecture to learn common features from the new dataset without requiring additional training from scratch. The independently extracted features of all the selected convolutional neural network models are joined in the FC layer, and the classification into nonexudate and exudate patch classes is then performed by softmax.
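The fusion pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: fixed random projections stand in for the frozen pretrained backbones (Inception-v3, ResNet-50, and VGG-19, whose pooled outputs would normally supply the features), while the concatenation into a single FC input and the softmax classifier on top follow the text. All names and dimensions are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the three pretrained backbones: in the real pipeline each
# would be a frozen CNN whose last-layer activations serve as features.
# Here a fixed (never-updated) random projection plays that role so the
# sketch stays self-contained.
def make_extractor(in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim)) / np.sqrt(in_dim)
    return lambda x: np.tanh(x @ W)          # frozen: W is never trained

extractors = [make_extractor(256, d) for d in (64, 64, 64)]

def fused_features(x):
    # Concatenate the independently extracted features into one FC input.
    return np.concatenate([f(x) for f in extractors], axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)     # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, classes=2, epochs=200, lr=0.5):
    # Softmax classifier on the fused features (exudate vs. nonexudate).
    F = fused_features(X)
    W = np.zeros((F.shape[1], classes))
    Y = np.eye(classes)[y]
    for _ in range(epochs):
        P = softmax(F @ W)
        W -= lr * F.T @ (P - Y) / len(y)     # cross-entropy gradient step
    return W

# Toy patch data: two Gaussian blobs standing in for patch feature vectors.
X = np.vstack([rng.normal(-0.1, 1, (80, 256)), rng.normal(0.1, 1, (80, 256))])
y = np.array([0] * 80 + [1] * 80)
W = train(X, y)
acc = (softmax(fused_features(X) @ W).argmax(1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Only the final softmax layer is trained here; the "backbones" stay frozen, which mirrors the fixed-feature-extractor use of transfer learning described above.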

3. Results and Discussion

The experiments are performed on "Google Colab" using graphics processing units (GPUs). For performance evaluation, two publicly available standard datasets are selected. The training phase is divided into two sessions, each of which took six hours to complete. The proposed framework is trained on three convolutional neural network architectures, Inception-v3, ResNet-50, and VGG-19, individually, and transfer learning is then performed to transfer the learned knowledge into the fused extracted features. The experimental results obtained from the individual convolutional neural networks are compared and analyzed against the set of fused features and several existing approaches. A 10-fold cross-validation approach is applied for performance evaluation. Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups into which a given data sample is split; accordingly, the procedure is often called k-fold cross-validation. When a specific value is chosen for k, it may be used in place of k in references to the model, so that k = 10 becomes 10-fold cross-validation [35]. The input data are also divided into different ratios of training and testing sets for the experiments: 70% for training and 30% for testing, 80% for training and 20% for testing, and 90% for training and 10% for testing. Table 1 shows the comparative analysis of the three individual CNN architectures and the proposed technique under these data splits using the e-Ophtha dataset.
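The evaluation protocol, k-fold cross-validation plus fixed hold-out splits, can be sketched with plain NumPy. This is an illustrative reimplementation under assumed names (`kfold_indices`, `holdout_split`), not the authors' code; a simple nearest-centroid classifier on toy data stands in for the CNN models.

```python
import numpy as np

rng = np.random.default_rng(0)

def kfold_indices(n, k=10, seed=0):
    # Shuffle the n sample indices once, then split them into k groups,
    # as in the 10-fold cross-validation used for performance evaluation.
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_validate(X, y, fit, score, k=10):
    folds = kfold_indices(len(y), k)
    scores = []
    for i in range(k):
        test = folds[i]                      # fold i is held out for testing
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append(score(fit(X[train], y[train]), X[test], y[test]))
    return float(np.mean(scores))            # average accuracy over k folds

def holdout_split(n, train_frac, seed=0):
    # Fixed-ratio splits: train_frac = 0.7, 0.8, or 0.9 as in the experiments.
    idx = np.random.default_rng(seed).permutation(n)
    cut = int(round(train_frac * n))
    return idx[:cut], idx[cut:]

# Toy stand-in classifier: nearest class centroid.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def score_accuracy(model, X, y):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes],
                     axis=1)
    pred = np.array(classes)[dists.argmin(axis=1)]
    return float((pred == y).mean())

X = np.vstack([rng.normal(-1, 1, (50, 3)), rng.normal(1, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
cv = cross_validate(X, y, fit_centroids, score_accuracy, k=10)
tr, te = holdout_split(len(y), 0.7)
print(f"10-fold CV accuracy: {cv:.2f}, 70/30 split sizes: {len(tr)}/{len(te)}")
```

In practice one would pass the CNN training and evaluation routines as `fit` and `score`; the folding and splitting logic is unchanged.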

Similarly, Table 2 reports the classification accuracy of each individual architecture and of the proposed technique using the DIARETDB1 dataset.

In the context of classification performance, a true positive (TP) is an outcome where the model correctly predicts the positive class. Similarly, a true negative (TN) is an outcome where the model correctly predicts the negative class. A false negative (FN) and a false positive (FP) represent samples that are misclassified by the model. The following equations are applied for the performance assessment.

Accuracy: a measure used to evaluate the model's effectiveness at identifying the correct class labels; it is calculated by the following equation:

accuracy = (TN + TP) / (TN + TP + FN + FP) × 100. (5)

F-measure: the harmonic mean of a classifier's precision and recall, with a range between 0 and 1; the worst and best scores are represented by 0 and 1, respectively. It is computed as follows:

precision = TP / (FP + TP),
recall = TP / (FN + TP),
F1 score = 2 × (recall × precision) / (recall + precision). (6)
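Equations (5) and (6) translate directly into code. A minimal sketch with hypothetical helper names, assuming binary labels where 1 marks an exudate patch:

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    # TP, TN, FP, FN for a binary problem (1 = exudate, 0 = nonexudate).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tn + tp) / (tn + tp + fn + fp) * 100          # equation (5)
    precision = tp / (fp + tp)
    recall = tp / (fn + tp)
    f1 = 2 * recall * precision / (recall + precision)        # equation (6)
    return accuracy, precision, recall, f1

# Tiny worked example: 5 exudate and 5 nonexudate patches.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
acc, prec, rec, f1 = metrics(y_true, y_pred)
print(acc, prec, rec, f1)   # 80.0 0.8 0.8 0.8
```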

Table 1 also reports, for each configuration, the F1 score, recall, precision, and classification accuracy of the detected exudate patches. The highest classification accuracy of the individual CNN architectures and of the proposed technique is obtained under the data-splitting experiments. Overall, the proposed approach achieves significantly higher classification accuracy for retinal exudate detection than the individual CNN architectures. On the e-Ophtha dataset, Table 1 shows that the highest classification accuracies of the individual CNN architectures Inception-v3, ResNet-50, and VGG-19 are 93.67%, 97.80%, and 95.80%, respectively, whereas the proposed approach attained a classification accuracy of 98.43%. For the DIARETDB1 dataset, Table 2 shows

8 Complexity

that the highest classification accuracies of the individual CNN architectures Inception-v3, ResNet-50, and VGG-19 are 93.57%, 97.90%, and 95.50%, respectively, whereas the proposed approach attained a classification accuracy of 98.91%.

To give a clearer picture of the classification accuracy results, Figures 10 and 11 show the comparative classification accuracies of the proposed model against the individual models using the e-Ophtha and DIARETDB1 datasets, respectively.

Additionally, Table 3 compares the results obtained by the proposed framework with those of existing well-known approaches for the detection of retinal exudates. Table 3 lists the classification accuracies of [18] as 87%, of [36] as 92%, and of [37] as 97.60% and 98.20%. The proposed framework achieved higher accuracy than the abovementioned techniques on both the e-Ophtha and DIARETDB1 datasets.

The classification performance of the proposed framework is only slightly higher than that of [37], but the features extracted by the proposed framework can support the final results and be particularly meaningful

Table 1: Comparative classification accuracy results of the proposed model and the individual CNN models for exudate detection using the e-Ophtha dataset.

CNN model        Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3     70            30           0.95      0.98    0.92       92.50
                 80            20           0.93      0.94    0.92       92.90
                 90            10           0.94      0.92    0.96       93.67
ResNet-50        70            30           0.94      0.98    0.91       90.67
                 80            20           0.94      0.98    0.90       95.70
                 90            10           0.98      0.97    0.99       97.80
VGG-19           70            30           0.94      0.99    0.90       92.33
                 80            20           0.94      0.93    0.95       95.80
                 90            10           0.94      0.90    0.89       93.50
Proposed model   70            30           0.96      0.96    0.97       97.98
                 80            20           0.96      0.97    0.95       98.43
                 90            10           0.95      0.96    0.95       97.90

Table 2: Comparative classification accuracy results of the proposed model and the individual CNN models for exudate detection using the DIARETDB1 dataset.

CNN model        Training (%)  Testing (%)  F1 score  Recall  Precision  Classification accuracy (%)
Inception-v3     70            30           0.95      0.98    0.93       93.10
                 80            20           0.93      0.94    0.92       93.30
                 90            10           0.95      0.94    0.97       93.57
ResNet-50        70            30           0.95      0.99    0.92       90.57
                 80            20           0.94      0.98    0.90       96.10
                 90            10           0.98      0.98    0.98       97.90
VGG-19           70            30           0.94      0.98    0.90       93.12
                 80            20           0.93      0.92    0.95       95.50
                 90            10           0.91      0.93    0.90       93.76
Proposed model   70            30           0.96      0.97    0.96       98.72
                 80            20           0.95      0.96    0.95       98.91
                 90            10           0.96      0.96    0.96       97.92

Figure 10: The comparative classification accuracy results using the e-Ophtha dataset. [Bar chart comparing Inception-v3, ResNet-50, VGG-19, and the proposed model; y-axis: classification accuracy (%), 0-100.]


in clinical practice. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19) are used to extract features from fundus images based on transfer learning in order to improve classification accuracy. Finally, the classification accuracy of the proposed model is compared with that of the individual DCNN models as well as with existing techniques. The proposed transfer learning-based framework has been evaluated and yields outstanding accuracy without requiring training from scratch. Hence, the proposed approach outperforms the other existing techniques for the detection of retinal exudates. In future work, the proposed framework can be modified to discriminate between hard and soft exudates. Moreover, it can also be extended to diagnose hemorrhages and microaneurysms for diabetic retinopathy.

Data Availability

In the experiment, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] World Health Organization, "Diabetes," https://www.who.int/news-room/fact-sheets/detail/diabetes.
[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.
[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.
[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, San Diego, CA, USA, January 2019.
[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14-24, 2015.
[6] J. H. Tan, H. Fujita, S. Sivaprasad, et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66-76, 2017.
[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9-19, 2009.
[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423-446, 2019.
[9] Q. Liu, B. Zou, J. Chen, et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78-86, 2017.
[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.
[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193-203, 2019.

Figure 11: The comparative classification accuracy results using the DIARETDB1 dataset. [Bar chart comparing Inception-v3, ResNet-50, VGG-19, and the proposed model; y-axis: classification accuracy (%), 0-100.]

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

Year  Methodology            Database   Accuracy (%)
2017  Fraz et al. [18]       DIARETDB1  87.00
2018  Mo et al. [36]         e-Ophtha   92.00
2019  Khojasteh et al. [37]  e-Ophtha   97.60
                             DIARETDB1  98.20
      Proposed framework     e-Ophtha   98.43
                             DIARETDB1  98.91


[12] O. Perdomo, S. Otálora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.
[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.
[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499-515, 2019.
[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27-53, 2018.
[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.
[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745-6748, September 2010.
[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50-62, 2017.
[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156-171, 2014.
[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951-5954, August 2012.
[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265-269, December 2011.
[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130-133, November 2014.
[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski, et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216-226, 2012.
[24] E. Decencière, G. Cazuguel, X. Zhang, et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196-203, 2013.
[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The DIARETDB1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1-10, September 2007.
[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas City, MO, USA, pp. 325-331, November 2017.
[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, Colorado, pp. 246-252, June 1999.
[28] C. Szegedy, W. Liu, and Y. Jia, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1-9, June 2015.
[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770-778, June 2016.
[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[31] J. Yosinski, "How transferable are features in deep neural networks?" Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.
[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403-5413, 2015.
[33] O. Russakovsky, J. Deng, H. Su, et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161-189, 2013.
[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," Machine Learning Mastery, 2018.
[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161-171, 2018.
[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho, et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62-69, 2019.


Page 9: Exudate Detection for Diabetic Retinopathy Using …downloads.hindawi.com/journals/complexity/2020/5801870.pdfdiabetic retinopathy. e automatic detection of diabetic retinopathy and

that the highest classification accuracy of individual CNNarchitectures including Inception-v3 ResNet-50 and VGG-19 is 9357 9790 and 9550 respectively but theproposed approach attained a classification accuracy of9891

In order to make better understanding of classificationaccuracy results Figure 10 and Figure 11 show the com-parative classification accuracies of the proposed modelagainst the individual models using e-Ophtha and DIA-RETDB1 datasets respectively

Additionally Table 3 demonstrates the comparativeresults obtained by proposed framework and the existingfamiliar approaches for the detection of retinal exudatesTable 3 illustrates the classification accuracies of [18] as 87[36] as 92 and [37] as 9760 and 9820 But the pro-posed framework achieved higher accuracy than theabovementioned techniques using both e-Ophtha andDIARETDB1 datasets

e comparative classification performance of theproposed framework against [37] is a little bit high but theextracted features achieved by the proposed framework cansupport the final results and specifically be very meaningful

Table 1 Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection using e-Ophtha dataset

CNN modelsData splitting

Training () Testing () F1 score Recall Precision Classification accuracy ()

Inception-v370 30 095 098 092 925080 20 093 094 092 929090 10 094 092 096 9367

ResNet-5070 30 094 098 091 906780 20 094 098 090 957090 10 098 097 099 9780

VGG-1970 30 094 099 090 923380 20 094 093 095 958090 10 094 090 089 9350

Proposed model70 30 096 096 097 979880 20 096 097 095 984390 10 095 096 095 9790

Table 2 Comparative classification accuracy results of the proposed model with individual CNN models for exudate detection usingDIARETDB1 dataset

CNN modelsData splitting

Training () Testing () F1 score Recall Precision Classification accuracy ()

Inception-v370 30 095 098 093 931080 20 093 094 092 933090 10 095 094 097 9357

ResNet-5070 30 095 099 092 905780 20 094 098 090 961090 10 098 098 098 9790

VGG-1970 30 094 098 090 931280 20 093 092 095 955090 10 091 093 090 9376

Proposed model70 30 096 097 096 987280 20 095 096 095 989190 10 096 096 096 9792

Inception-v3 ResNet-50 VGG-19 Proposedmodel

0102030405060708090

100

Models

Clas

sifica

tion

accu

racy

()

Inception-v3

ResNet-50

VGG-19

Proposed model

Figure 10 e comparative classification accuracy results using e-Ophtha dataset

Complexity 9

in clinical practices e comparative analysis shows that theproposed pretrained CNN-based transfer learning techniqueoutperforms the existing individual methods in terms ofaccuracy against both datasets for the detection of retinalexudates

4 Conclusions

In this article a pretrained convolutional neural network-(CNN-) based framework is proposed for the detection ofretinal exudates in fundus images using transfer learning Inthe proposed framework pretrained models namely Incep-tion-v3 Residual Network-50 (ResNet-50) and Visual Ge-ometry Group Network-19 (VGG-19) are used to extract thefeatures from fundus images based on transfer learning for theimprovement of classification accuracy Finally the classifi-cation accuracy of the proposed model is compared withvarious DCNN models separately as well as compared withthe existing techniques e proposed transfer learning-basedframework has been evaluated and outstanding results interms of accuracy are obtained instead of training fromscratch Hence the accuracy of the proposed approach out-performs the other existing techniques for the detection ofretinal exudates In future work the proposed framework canbe modified to discriminate hard and soft exudates Moreoverthe proposed framework can also be extended to diagnosehemorrhages and microaneurysms for diabetic retinopathy

Data Availability

In the experiment the data used to support the findings ofthe proposed framework are available at httpwwwadcisnetenDownload-ird-PartyE-Ophthahtml and httpwwwitlutfiprojectimageretdiaretdb1indexhtml

Conflicts of Interest

e authors declare that there are no conflicts of interest

Acknowledgments

e authors are thankful to Dr Somal Sana (MBBS) whoprovided knowledge about diabetic retinopathy is workwas supported in part by the National Key RampD Program ofChina under Grant 2018YFF0214700

References

[1] httpswwwwhointnews-roomfact-sheetsdetaildiabetes[2] M Mateen J Wen Nasrullah S Song and Z Huang

ldquoFundus image classification using VGG-19 architecture withPCA and SVDrdquo Symmetry vol 11 no 1 2019

[3] W Hsu P M D S Pallawala M L Lee and K G A Eongldquoe role of domain knowledge in the detection of retinal hardexudatesrdquo in Proceedings of the 2001 IEEE Computer SocietyConference on Computer Vision and Pattern RecognitionCVPR IEEE Kauai HI USA December 2001

[4] F Zabihollahy A Lochbihler and E Ukwatta ldquoDeep learningbased approach for fully automated detection and segmen-tation of hard exudate from retinal imagesrdquo in Proceedings ofthe Medical Imaging 2019 Biomedical Applications in Mo-lecular Structural and Functional Imaging Springer SanDiego CA USA January 2019

[5] C Pereira L Gonccedilalves and M Ferreira ldquoExudate seg-mentation in fundus images using an ant colony optimizationapproachrdquo Information Sciences vol 296 pp 14ndash24 2015

[6] J H Tan H Fujita S Sivaprasad et al ldquoAutomated seg-mentation of exudates haemorrhages microaneurysms usingsingle convolutional neural networkrdquo Information Sciencesvol 420 pp 66ndash76 2017

[7] M Garcıa C I Sanchez M I Lopez D Abasolo andR Hornero ldquoNeural network based detection of hard exu-dates in retinal imagesrdquo Computer Methods and Programs inBiomedicine vol 93 no 1 pp 9ndash19 2009

[8] D Xiao A Bhuiyan S Frost J Vignarajan M L T Kearneyand Y Kanagasingam ldquoMajor automatic diabetic retinopathyscreening systems and related core algorithms a reviewrdquoMachine Vision and Applications vol 30 no 3 pp 423ndash4462019

[9] Q Liu B Zou J Chen et al ldquoA location-to-segmentationstrategy for automatic exudate segmentation in colour retinalfundus imagesrdquoComputerizedMedical Imaging and Graphicsvol 55 pp 78ndash86 2017

[10] N Asiri ldquoDeep learning based computer-aided diagnosissystems for diabetic retinopathy a surveyrdquo arXiv preprintarXiv181101238 2018

[11] A R Chowdhury T Chatterjee and S Banerjee ldquoA RandomForest classifier-based approach in the detection of abnor-malities in the retinardquo Medical amp Biological Engineering ampComputing vol 57 no 1 pp 193ndash203 2019

Inception-v3 ResNet-50 VGG-19 Proposedmodel

0102030405060708090

100

Models

Clas

sifica

tion

accu

racy

()

Inception-v3ResNet-50

VGG-19Proposed model

Figure 11 e comparative classification accuracy results usingDIARETDB1 dataset

Table 3 e comparative classification accuracy of the proposedmethodology and the existing methods

Year Methodology Database Accuracy ()2017 Fraz et al [18] DIARETDB1 87002018 Mo et al [36] e-Ophtha 9200

2019Khojasteh et al [37] e-Ophtha 9760

DIARETDB1 9820

Proposed framework e-Ophtha 9843DIARETDB1 9891

10 Complexity

[12] O Perdomo S Otalora F Rodrıguez J Arevalo andF A Gonzalez ldquoA novel machine learning model based onexudate localization to detect diabetic macular edemardquo inProceedings of the Ophthalmic Medical Image Analysis =irdInternational Workshop Athens Greece October 2016

[13] D Y Carson Lam ldquoAutomated detection of diabetic reti-nopathy using deep learningrdquo AMIA Summits on Transla-tional Science Proceedings vol 2018147 pages 2018

[14] K Adem M Hekim and S Demir ldquoDetection of hemorrhagein retinal images using linear classifiers and iterativethresholding approaches based on firefly and particle swarmoptimization algorithmsrdquo Turkish Journal of Electrical Engi-neering amp Computer Sciences vol 27 no 1 pp 499ndash515 2019

[15] J Kaur and D Mittal ldquoA generalized method for the seg-mentation of exudates from pathological retinal fundus im-agesrdquo Biocybernetics and Biomedical Engineering vol 38no 1 pp 27ndash53 2018

[16] V Das and N B Puhan ldquoTsallis entropy and sparse recon-structive dictionary learning for exudate detection in diabeticretinopathyrdquo Journal of Medical Imaging vol 4 Article ID024002 2017

[17] H F Jaafar A K Nandi and W A Nuaimy ldquoDetection ofexudates in retinal images using a pure splitting techniquerdquo in2010 Annual International Conference of the IEEE Engineeringin Medicine and Biology IEEE Buenos Aires Argentinapp 6745ndash6748 September 2010

Complexity 11

in clinical practice. The comparative analysis shows that the proposed pretrained CNN-based transfer learning technique outperforms the existing individual methods in terms of accuracy on both datasets for the detection of retinal exudates.

4. Conclusions

In this article, a pretrained convolutional neural network- (CNN-) based framework is proposed for the detection of retinal exudates in fundus images using transfer learning. In the proposed framework, the pretrained models Inception-v3, Residual Network-50 (ResNet-50), and Visual Geometry Group Network-19 (VGG-19) are used to extract features from fundus images through transfer learning, improving classification accuracy. The classification accuracy of the proposed model is compared with each DCNN model individually as well as with existing techniques. The evaluation shows that the transfer learning-based framework achieves better accuracy than training from scratch, and the proposed approach outperforms the other existing techniques for the detection of retinal exudates. In future work, the framework can be modified to discriminate hard and soft exudates, and it can be extended to diagnose hemorrhages and microaneurysms for diabetic retinopathy.
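As a rough illustration of the feature-fusion and softmax-classification step summarized above, the sketch below stubs the three pretrained backbones as fixed random projections. The names, feature dimensions, and two-class setup are illustrative assumptions for the sketch, not the authors' implementation; in the real framework the feature vectors would come from the fully connected layers of Inception-v3, ResNet-50, and VGG-19.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Stand-ins for the three pretrained CNN feature extractors.
# Each maps its input representation to a 64-dimensional FC-layer output.
backbones = {
    "inception_v3": rng.standard_normal((2048, 64)),
    "resnet50":     rng.standard_normal((2048, 64)),
    "vgg19":        rng.standard_normal((4096, 64)),
}

def extract_and_fuse(patch_features):
    """Project each backbone's input features and concatenate the results."""
    fused = [patch_features[name] @ W for name, W in backbones.items()]
    return np.concatenate(fused)  # fused feature vector of length 3 * 64

# A fake preprocessed exudate patch: one input vector per backbone.
patch = {
    "inception_v3": rng.standard_normal(2048),
    "resnet50":     rng.standard_normal(2048),
    "vgg19":        rng.standard_normal(4096),
}

features = extract_and_fuse(patch)        # shape (192,)
W_cls = rng.standard_normal((192, 2))     # softmax classifier weights
probs = softmax(features @ W_cls)         # [P(exudate), P(non-exudate)]
assert probs.shape == (2,) and abs(probs.sum() - 1.0) < 1e-9
```

Under transfer learning, only the final classifier weights (here `W_cls`) would be trained on the fundus data, while the backbone projections stay frozen at their ImageNet-pretrained values.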

Data Availability

In the experiment, the data used to support the findings of the proposed framework are available at http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html and http://www.it.lut.fi/project/imageret/diaretdb1/index.html.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

The authors are thankful to Dr. Somal Sana (MBBS), who provided knowledge about diabetic retinopathy. This work was supported in part by the National Key R&D Program of China under Grant 2018YFF0214700.

References

[1] https://www.who.int/news-room/fact-sheets/detail/diabetes.

[2] M. Mateen, J. Wen, Nasrullah, S. Song, and Z. Huang, "Fundus image classification using VGG-19 architecture with PCA and SVD," Symmetry, vol. 11, no. 1, 2019.

[3] W. Hsu, P. M. D. S. Pallawala, M. L. Lee, and K. G. A. Eong, "The role of domain knowledge in the detection of retinal hard exudates," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, Kauai, HI, USA, December 2001.

[4] F. Zabihollahy, A. Lochbihler, and E. Ukwatta, "Deep learning based approach for fully automated detection and segmentation of hard exudate from retinal images," in Proceedings of Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, Springer, San Diego, CA, USA, January 2019.

[5] C. Pereira, L. Gonçalves, and M. Ferreira, "Exudate segmentation in fundus images using an ant colony optimization approach," Information Sciences, vol. 296, pp. 14–24, 2015.

[6] J. H. Tan, H. Fujita, S. Sivaprasad et al., "Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network," Information Sciences, vol. 420, pp. 66–76, 2017.

[7] M. García, C. I. Sánchez, M. I. López, D. Abásolo, and R. Hornero, "Neural network based detection of hard exudates in retinal images," Computer Methods and Programs in Biomedicine, vol. 93, no. 1, pp. 9–19, 2009.

[8] D. Xiao, A. Bhuiyan, S. Frost, J. Vignarajan, M. L. T. Kearney, and Y. Kanagasingam, "Major automatic diabetic retinopathy screening systems and related core algorithms: a review," Machine Vision and Applications, vol. 30, no. 3, pp. 423–446, 2019.

[9] Q. Liu, B. Zou, J. Chen et al., "A location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images," Computerized Medical Imaging and Graphics, vol. 55, pp. 78–86, 2017.

[10] N. Asiri, "Deep learning based computer-aided diagnosis systems for diabetic retinopathy: a survey," arXiv preprint arXiv:1811.01238, 2018.

[11] A. R. Chowdhury, T. Chatterjee, and S. Banerjee, "A Random Forest classifier-based approach in the detection of abnormalities in the retina," Medical & Biological Engineering & Computing, vol. 57, no. 1, pp. 193–203, 2019.

[Figure 11: The comparative classification accuracy results using the DIARETDB1 dataset. Bar chart of classification accuracy (%) for Inception-v3, ResNet-50, VGG-19, and the proposed model.]

Table 3: The comparative classification accuracy of the proposed methodology and the existing methods.

Year   Methodology             Database    Accuracy (%)
2017   Fraz et al. [18]        DIARETDB1   87.00
2018   Mo et al. [36]          e-Ophtha    92.00
2019   Khojasteh et al. [37]   e-Ophtha    97.60
                               DIARETDB1   98.20
2019   Proposed framework      e-Ophtha    98.43
                               DIARETDB1   98.91


[12] O. Perdomo, S. Otalora, F. Rodríguez, J. Arevalo, and F. A. González, "A novel machine learning model based on exudate localization to detect diabetic macular edema," in Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, Athens, Greece, October 2016.

[13] D. Y. Carson Lam, "Automated detection of diabetic retinopathy using deep learning," AMIA Summits on Translational Science Proceedings, vol. 2018, p. 147, 2018.

[14] K. Adem, M. Hekim, and S. Demir, "Detection of hemorrhage in retinal images using linear classifiers and iterative thresholding approaches based on firefly and particle swarm optimization algorithms," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 27, no. 1, pp. 499–515, 2019.

[15] J. Kaur and D. Mittal, "A generalized method for the segmentation of exudates from pathological retinal fundus images," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 27–53, 2018.

[16] V. Das and N. B. Puhan, "Tsallis entropy and sparse reconstructive dictionary learning for exudate detection in diabetic retinopathy," Journal of Medical Imaging, vol. 4, Article ID 024002, 2017.

[17] H. F. Jaafar, A. K. Nandi, and W. A. Nuaimy, "Detection of exudates in retinal images using a pure splitting technique," in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, Buenos Aires, Argentina, pp. 6745–6748, September 2010.

[18] M. M. Fraz, W. Jahangir, S. Zahid, M. M. Hamayun, and S. A. Barman, "Multiscale segmentation of exudates in retinal images using contextual cues and ensemble classification," Biomedical Signal Processing and Control, vol. 35, pp. 50–62, 2017.

[19] B. Harangi and A. Hajdu, "Automatic exudate detection by fusing multiple active contours and regionwise classification," Computers in Biology and Medicine, vol. 54, pp. 156–171, 2014.

[20] B. Harangi, I. Lazar, and A. Hajdu, "Automatic exudate detection using active contour model and regionwise classification," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology, IEEE, San Diego, CA, USA, pp. 5951–5954, August 2012.

[21] S. Lim, W. M. D. W. Zaki, A. Hussain, S. L. Lim, and S. Kusalavan, "Automatic classification of diabetic macular edema in digital fundus images," in 2011 IEEE Colloquium on Humanities, Science and Engineering, IEEE, Penang, Malaysia, pp. 265–269, December 2011.

[22] B. Harangi and A. Hajdu, "Detection of exudates in fundus images using a Markovian segmentation model," in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, IEEE, Chicago, IL, USA, pp. 130–133, November 2014.

[23] L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., "Exudate-based diabetic macular edema detection in fundus images using publicly available datasets," Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.

[24] E. Decencière, G. Cazuguel, X. Zhang et al., "TeleOphta: machine learning and image processing methods for teleophthalmology," IRBM, vol. 34, no. 2, pp. 196–203, 2013.

[25] T. Kauppi, V. Kalesnykiene, and J.-K. Kamarainen, "The diaretdb1 diabetic retinopathy database and evaluation protocol," in Proceedings of the British Machine Vision Conference 2007, British Machine Vision Association, University of Warwick, UK, pp. 1–10, September 2007.

[26] W. Cao, "Microaneurysm detection in fundus images using small image patches and machine learning methods," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Kansas, MO, USA, pp. 325–331, November 2017.

[27] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proceedings 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, Fort Collins, CO, USA, pp. 246–252, June 1999.

[28] C. Szegedy, W. Liu, and Y. Jia, "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Boston, MA, USA, pp. 1–9, June 2015.

[29] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, NV, USA, pp. 770–778, June 2016.

[30] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[31] J. Yosinski, "How transferable are features in deep neural networks?" Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.

[32] Y. Yu, "Modality classification for medical images using multiple deep convolutional neural networks," Journal of Computer Information Systems, vol. 11, pp. 5403–5413, 2015.

[33] O. Russakovsky, J. Deng, H. Su et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.

[34] L. Yang, S. Hanneke, and J. Carbonell, "A theory of transfer learning with applications to active learning," Machine Learning, vol. 90, no. 2, pp. 161–189, 2013.

[35] J. Brownlee, "A gentle introduction to k-fold cross-validation," 2018.

[36] J. Mo, L. Zhang, and Y. Feng, "Exudate-based diabetic macular edema recognition in retinal images using cascaded deep residual networks," Neurocomputing, vol. 290, pp. 161–171, 2018.

[37] P. Khojasteh, L. A. Passos Junior, T. Carvalho et al., "Exudate detection in fundus images using deeply-learnable features," Computers in Biology and Medicine, vol. 104, pp. 62–69, 2019.
