
Projection Based Algorithm for Detecting Exudates in Color Fundus Images

C. Eswaran, Marwan D. Saleh, and Junaidi Abdullah Centre for Visual Computing,

Faculty of Computing and Informatics, Multimedia University Cyberjaya, Selangor, Malaysia

[eswaran, marwan, junaidi]@mmu.edu.my

Abstract—The detection and analysis of spot lesions associated with retinal diseases, such as exudates, microaneurysms, and hemorrhages, play an important role in the screening of retinal diseases. This paper presents an algorithm for the automated segmentation of exudates from color fundus images. The proposed algorithm comprises two major stages, namely, pre-processing and segmentation. A novel pre-processing method is employed for background removal through contrast enhancement and noise removal. In the second stage, the pre-processed image is sliced horizontally and vertically into a number of slices, and the corresponding projection values are obtained in order to select an appropriate threshold value for each of the image slices. Finally, the optic disc is removed to facilitate the correct identification of exudates and to decrease the number of false positives. The DIARETDB1 database is used to measure the accuracy of the proposed method. Based on experiments conducted on a pixel basis, it is found that the proposed algorithm achieves better results compared to known algorithms. With the proposed algorithm, average values of 71.2%, 72.77%, 99.98%, 97.72%, 99.74%, and 83.28% are obtained in terms of overlap, sensitivity, specificity, PPV, accuracy, and kappa coefficient, respectively.

I. INTRODUCTION

Diabetic Retinopathy (DR) is one of the most common eye diseases affecting patients with diabetes mellitus. DR is considered a major cause of blindness in adults between 20 and 60 years of age, accounting for 45% of the legal blindness in patients with diabetes mellitus [1]. Moreover, DR has become a serious threat in our society, as the number of patients with DR is increasing considerably with the growing number of people affected by diabetes mellitus. According to Lee et al. [2], blindness due to diabetic eye disease costs about 500 million dollars a year in the United States [3]. As DR is a progressive disease, the longer a patient has untreated diabetes, the higher the chances of progressing towards blindness [4]. For this reason, early detection as well as periodic screening of DR helps in reducing the progression of this disease and in preventing the subsequent loss of visual capability.

The focus of this paper is on the detection of exudates. There exist two types of exudates, namely, soft and hard exudates. Soft exudates (SE) can be considered one of the signs indicating the presence of DR and hypertensive retinopathy. Hard exudates (HE) are among the most prevalent lesions during the early stages of DR and other disorders such as Coats' disease. The identification of the presence of HE together with red lesions is useful for grading DR [5-6]. Exudates (EXs), in general, are caused by patches of vascular damage with leakage and typically manifest as spatially random yellow/white patches of varying size, depth, color and shape, as can be seen in Fig. 1 [7].

The detection of EXs tends to be a difficult task due to the poor contrast, uneven illumination, and color variation of retinal images. In this work, a powerful automated algorithm for EX detection and segmentation is presented. The proposed algorithm comprises two stages, namely, a novel pre-processing scheme and segmentation. The proposed pre-processing scheme makes use of a combination of different color channels along with techniques such as decorrelation stretch, top-hat and bottom-hat transforms, and median filtering. In the second stage, the pre-processed image is sliced horizontally and vertically into a number of blocks, and then a projection is performed in order to select an appropriate threshold value. Finally, a recently proposed algorithm [8] is used to remove the optic disc (OD) in order to improve the accuracy of identification of EXs and to decrease the possibility of false positives.

The remainder of the paper is organized as follows: Section 2 presents a detailed description of the proposed method. Results and discussion are presented in Section 3. Finally, Section 4 provides conclusions.

Figure 1. RGB retinal image acquired from one of the DR patients, showing exudates (EX)


II. METHODOLOGY

A. Pre-processing

The input RGB image shown in Fig. 1 is first pre-processed using the decorrelation stretch transform, which was first proposed by Taylor et al. [9]. The decorrelation stretch transform is advantageous for retinal images as it highlights the differences among retinal features. Fig. 2(a) shows the resulting image IRGB(DS) after applying the decorrelation stretch transform to the input RGB image of Fig. 1. The green channel of the decorrelated image, IG(DS), is first extracted since it provides the maximum local contrast among the three image channels (red, green, and blue). Simultaneously, the decorrelated image IRGB(DS) is also converted into the YCbCr color space. By combining these two color space components as shown in Eq. 1, a resulting image IRes is obtained, as in Fig. 2(b).

$I_{Res} = I_{G(DS)} - Cb + Y$    (1)

where IRes represents the resulting image. Such a combination maximizes the difference between the bright objects (i.e., OD and EXs) and the other features against the background. As illustrated in Eq. 1, this combination involves a subtraction of the Cb channel from the green channel, and an addition of the Y channel to the result of the subtraction. Subsequently, both morphological top-hat and bottom-hat transforms are applied to IRes to make the tiny exudates prominent with minimal effect on the background intensity levels. The top-hat operation involves subtracting the result of a morphological opening of the image IRes from the image itself. In contrast, the bottom-hat operation subtracts the image IRes from the result of a morphological closing of the image. The effects of the top-hat and bottom-hat operations depend on a predefined neighborhood or structuring element SE, as illustrated in Eqs. (2) and (3) respectively:

$T_{hat}(I_{Res}) = I_{Res} - (I_{Res} \circ SE_1)$    (2)

$B_{hat}(I_{Res}) = (I_{Res} \bullet SE_2) - I_{Res}$    (3)

Using flat structuring elements (SE1 = 10×10 pixels and SE2 = 100×100 pixels), the proposed algorithm enhances the contrast of the image IRes based on the following formula:

$I_{TB} = T_{hat} - B_{hat} + I_{Res}$    (4)

The resulting image ITB is shown in Fig. 2(c). As a result of this contrast enhancement process, some noise may appear in the image. Therefore, median filtering is performed on the contrast-enhanced image ITB using a window of 9×9 pixels to remove the noise, yielding the image IM. The median filter replaces the value of a pixel by the median of all pixels in its 9×9 neighborhood, as in Eq. 5.

$I_M(x, y) = \mathrm{median}_{(s,t)\in W_{xy}} \{ I_{TB}(s, t) \}$    (5)

where Wxy represents a neighborhood centered around location (x, y) in the image. Eventually, another process is performed to remove the background of the image IM based on the top-hat and bottom-hat transforms with different values of SE, using the following formula:

$I_F = B_{hat} - I_M + T_{hat}$    (6)

The SE values used in the top-hat and bottom-hat transforms are 100×100 pixels and 10×10 pixels, respectively. Fig. 2(d) shows the final pre-processed image IF.
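For illustration, a minimal sketch of this pre-processing stage (Eqs. 1-6) is given below in Python with OpenCV and NumPy (assumed tools; the paper's implementation is in Matlab). The decorr_stretch helper is a generic whitening-based approximation of the decorrelation stretch of [9], since the exact variant used by the authors is not specified, and Eqs. (4) and (6) are applied as reconstructed above.

```python
import cv2
import numpy as np

def decorr_stretch(bgr):
    """Whitening-based approximation of the decorrelation stretch of [9]."""
    x = bgr.reshape(-1, 3).astype(np.float64)
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # decorrelate (whiten) the channels, then restore the per-channel spread
    whiten = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-8)) @ evecs.T
    restore = np.diag(np.sqrt(np.diag(cov)))
    y = (x - mean) @ whiten @ restore + mean
    y = cv2.normalize(y, None, 0, 255, cv2.NORM_MINMAX)
    return y.reshape(bgr.shape).astype(np.uint8)

def preprocess(bgr):
    """Pre-processing stage of Sec. II-A; input is a BGR image from cv2.imread."""
    ds = decorr_stretch(bgr)                                   # I_RGB(DS), Fig. 2(a)
    g = ds[:, :, 1].astype(np.float64)                         # green channel I_G(DS)
    ycrcb = cv2.cvtColor(ds, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    y_ch, cb = ycrcb[:, :, 0], ycrcb[:, :, 2]
    i_res = g - cb + y_ch                                      # Eq. (1)
    i_res = cv2.normalize(i_res, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 10))
    se2 = cv2.getStructuringElement(cv2.MORPH_RECT, (100, 100))
    t_hat = cv2.morphologyEx(i_res, cv2.MORPH_TOPHAT, se1)     # Eq. (2)
    b_hat = cv2.morphologyEx(i_res, cv2.MORPH_BLACKHAT, se2)   # Eq. (3)
    i_tb = np.clip(t_hat.astype(np.int32) - b_hat + i_res, 0, 255).astype(np.uint8)  # Eq. (4)

    i_m = cv2.medianBlur(i_tb, 9)                              # Eq. (5), 9x9 median filter

    t_hat2 = cv2.morphologyEx(i_m, cv2.MORPH_TOPHAT, se2)      # SE = 100x100
    b_hat2 = cv2.morphologyEx(i_m, cv2.MORPH_BLACKHAT, se1)    # SE = 10x10
    # Eq. (6) as reconstructed above (background removal step)
    i_f = np.clip(b_hat2.astype(np.int32) - i_m + t_hat2, 0, 255).astype(np.uint8)
    return i_f
```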

Figure 2. (a) The decorrelation-stretched image, (b) the resulting image obtained using Eq. 1, (c) the resulting image obtained using Eq. 4, and (d) the resulting image of the pre-processing stage

B. Exudates Segmentation

The challenging problem associated with the detection of EXs is the complexity involved in calculating an appropriate threshold value and in removing the OD. In the proposed algorithm, two new techniques, called image slicing and projection, are used for detecting the bright objects (i.e., EXs and OD) based on adaptive and automatic threshold values.

1) Image Slicing and Projection

The aim of this step is to localize the bright objects in the image. Image projection plays a vital role in detecting bright objects, which correspond to high values in the projection. However, the detection must be controlled by an appropriate threshold value, as a peak in the projection does not necessarily represent a bright object. The presence of a bright object can be sensed by sudden changes in the projection values. Moreover, the projection alone cannot detect tiny exudates accurately due to the large size of the image. To overcome this problem, the image is divided into a number of smaller slices (horizontally and vertically), the projection values of each slice are calculated separately, and based on these projection values a suitable threshold value is calculated for each slice. The value of a projection represents the sum of the intensities of all the pixels lying in that projection. In the proposed algorithm, the pre-processed image of size 1200×1200 pixels is sliced equally into 48 horizontal and 48 vertical slices of size 25×1200 pixels and 1200×25 pixels, respectively, as shown in Figs. 3(a)-(b). For example, for a horizontal (vertical) slice of size 25×1200 (1200×25), there are 1200 vertical (horizontal) projections, with each projection comprising 25 pixels. The value of a projection is determined by summing up the intensity values of the 25 pixels lying in that projection. Each horizontal and vertical slice thus has 1200 projection values. The reason for slicing the image in both directions (not just one) is discussed in the next section. Figs. 3(c) and 3(d) show samples of vertical and horizontal projections, respectively.
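As an illustration, the slicing and projection step can be sketched as follows (Python/NumPy, assuming the 1200×1200 pre-processed image and the 25-pixel slice thickness stated above):

```python
import numpy as np

SLICE = 25  # slice thickness in pixels (48 slices for a 1200x1200 image)

def slice_projections(i_f):
    """Return per-slice projection profiles of the pre-processed image I_F."""
    img = i_f.astype(np.float64)
    h, w = img.shape
    # horizontal slices (25 x 1200): one vertical projection value per column
    horiz = [img[r:r + SLICE, :].sum(axis=0) for r in range(0, h, SLICE)]
    # vertical slices (1200 x 25): one horizontal projection value per row
    vert = [img[:, c:c + SLICE].sum(axis=1) for c in range(0, w, SLICE)]
    return horiz, vert  # two lists of 48 arrays, each of length 1200
```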

Figure 3. Image slicing and projection: (a) horizontal slicing, (b) vertical slicing, (c) vertical projection values for horizontal slice 6, and (d) horizontal projection values for vertical slice 12

2) Bright Object Segmentation

An empirical formula is proposed to calculate a threshold value automatically for each slice based on its projection values. We assume that the empirical formula for calculating the threshold of an image slice must depend on the significant statistics of the projection values, namely, the maximum (Max) and minimum (Min) values, the standard deviation (σ), and the mean value (µ). Based on several experiments, it is found that the empirical formula given in Eq. 7 for the threshold value (T) yields the best result.

$T = \left( Max - \frac{Min}{10} \right) - \left( \sigma \times \frac{\sigma}{\mu} \right)$    (7)

Based on Eq. 7, each slice of IF is thresholded to obtain a black-and-white (binary) image in which the value of a pixel is 1 if its value in IF is greater than T (representing EX or OD), and 0 if its value in IF is less than or equal to T (representing other retinal features). Let IBWH-i and IBWV-i represent the horizontal and vertical binary sliced images, respectively. By combining all the 48 horizontal (vertical) binary sliced images, we obtain the horizontal (vertical) binary image, denoted IH (IV). The resulting binary image IBW is obtained by adding the horizontal and vertical binary images, as shown in Eq. 8.

$I_{BW} = I_H + I_V$    (8)

Fig. 4(a) shows the resulting binary image (IBW) obtained for IF. The pre-processed image (IF) is sliced in both directions in the previous step since some objects (especially tiny objects) might appear only in the horizontal or vertical slices.
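A minimal sketch of the per-slice thresholding and the combination of Eq. (8) is given below. The slice_threshold function implements Eq. (7) as reconstructed above, and the pixels of IF are compared directly against T as described in the text; both points should be treated as assumptions rather than the authors' exact implementation.

```python
import numpy as np

def slice_threshold(proj):
    """Empirical threshold of Eq. (7), computed from one slice's projection values."""
    mx, mn = proj.max(), proj.min()
    sigma, mu = proj.std(), proj.mean()
    return (mx - mn / 10.0) - sigma * (sigma / (mu + 1e-8))

def segment_bright_objects(i_f, slice_px=25):
    """Binarize every horizontal and vertical slice and combine them (Eq. 8)."""
    img = i_f.astype(np.float64)
    h, w = img.shape
    i_h = np.zeros((h, w), dtype=bool)   # union of horizontal-slice binarizations
    i_v = np.zeros((h, w), dtype=bool)   # union of vertical-slice binarizations
    for r in range(0, h, slice_px):
        band = img[r:r + slice_px, :]
        t = slice_threshold(band.sum(axis=0))     # vertical projections of the slice
        i_h[r:r + slice_px, :] = band > t
    for c in range(0, w, slice_px):
        band = img[:, c:c + slice_px]
        t = slice_threshold(band.sum(axis=1))     # horizontal projections of the slice
        i_v[:, c:c + slice_px] = band > t
    return i_h | i_v                              # Eq. (8): I_BW = I_H + I_V (logical OR)
```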

3) Bright Object Classification

OD removal is essential for the correct identification of EX regions and for decreasing false positives, since the OD may incorrectly be classified as an EX lesion due to their similarity in terms of brightness, yellowish color, depth, shape, etc. [10-11]. The OD can be localized efficiently using a template matching technique with the fast Fourier transform [8]. First, a binary mask of size 250×250 pixels (approximately representing the size of the OD) is generated. Once the OD is localized using the algorithm in [8], the centre point of the OD is matched with the centre point of the mask. Then a cross-correlation of the OD with the mask is performed in order to remove the OD and keep the EXs only. The final resulting image after the removal of the OD is shown in Fig. 4(b).
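The OD localization of [8] is not reproduced here; the sketch below only illustrates the removal step, assuming the OD centre (od_center) has already been obtained from that algorithm and approximating the 250×250 mask by a disc of the same diameter blanked out of the binary image.

```python
import numpy as np

def remove_optic_disc(i_bw, od_center, mask_size=250):
    """Blank a disc of diameter mask_size around the localized OD centre."""
    cy, cx = od_center                  # centre (row, col) supplied by the method of [8]
    yy, xx = np.ogrid[:i_bw.shape[0], :i_bw.shape[1]]
    disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= (mask_size / 2.0) ** 2
    out = i_bw.copy()
    out[disc] = 0                       # keep exudate pixels only
    return out
```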

Figure 4. (a) The resulting binary image (IBW), and (b) the resulting image of the proposed algorithm


III. RESULTS AND DISCUSSION

The proposed algorithm was implemented using Matlab (V. 7.7) on a Dell workstation PC (2.40 GHz Intel Core 2 Duo and 2 GB RAM). The DIARETDB1 database [12] was used to evaluate the performance of the proposed algorithm. This database contains 89 fundus images, of which 38 images contain hard exudates and 20 contain soft exudates. The images were randomly split into two sets, i.e., training and test sets. The training set comprised 50 images (35 images containing EXs and 15 images of healthy retinas), and the test set comprised 39 images (23 images containing EXs and 16 images of healthy retinas). Images manually marked by a specialist were used as the reference (gold standard) for measuring the performance of the proposed exudates segmentation algorithm. The training set images were used to develop the empirical formulae for the parameters. The performance parameters were calculated on a pixel-by-pixel basis using the confusion matrix. The pixel-based evaluation considers four values, namely true positive (TP), false positive (FP), false negative (FN), and true negative (TN). TP refers to positive pixels correctly labeled as positive. FP refers to negative pixels incorrectly labeled as positive. FN refers to positive pixels incorrectly labeled as negative. Finally, TN refers to negative pixels correctly labeled as negative. Based on the confusion matrix, several performance parameters were calculated, namely overlap (Over.) [13], sensitivity (Sens.) [13], specificity (Spec.) [13], positive predictive value (PPV) [13], accuracy (Acc.) [13], and kappa coefficient (k) [14]. These parameters were calculated using Eqs. (12)-(19), respectively.

$Overlap = \frac{TP}{TP + FP + FN}$    (12)

$Sensitivity = \frac{TP}{TP + FN}$    (13)

$Specificity = \frac{TN}{TN + FP}$    (14)

$PPV = \frac{TP}{TP + FP}$    (15)

$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$    (16)

$kappa = \frac{\Pr(o) - \Pr(e)}{1 - \Pr(e)}$    (17)

$\Pr(o) = \frac{TP + TN}{TP + TN + FP + FN}$    (18)

$\Pr(e) = \frac{[(TP + FP) \times (TP + FN)] + [(FN + TN) \times (FP + TN)]}{(TP + TN + FP + FN)^2}$    (19)
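For completeness, the pixel-level measures of Eqs. (12)-(19) can be computed from a predicted binary mask and the manually marked ground truth as in the following sketch:

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Confusion-matrix counts and the six reported measures (Eqs. 12-19)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    n = tp + fp + fn + tn
    overlap = tp / (tp + fp + fn)                                    # Eq. (12)
    sens = tp / (tp + fn)                                            # Eq. (13)
    spec = tn / (tn + fp)                                            # Eq. (14)
    ppv = tp / (tp + fp)                                             # Eq. (15)
    acc = (tp + tn) / n                                              # Eq. (16)
    pr_o = acc                                                       # Eq. (18)
    pr_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # Eq. (19)
    kappa = (pr_o - pr_e) / (1 - pr_e)                               # Eq. (17)
    return {"overlap": overlap, "sensitivity": sens, "specificity": spec,
            "ppv": ppv, "accuracy": acc, "kappa": kappa}
```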

Table I shows a comparison of the results obtained by various exudates segmentation algorithms. Based on the experiments conducted on the DIARETDB1 database, the proposed algorithm achieved values of 69.08, 70.53, 99.98, 97.11, 99.75 and 81.59 in terms of overlap, sensitivity, specificity, positive predictive value, accuracy, and kappa coefficient, respectively. Compared to other known algorithms, the proposed algorithm achieves better results in terms of sensitivity and specificity. Though the results obtained by the proposed algorithm are not significantly higher than those reported in [17], the computational complexity of the proposed algorithm is much lower than that of [17].

TABLE I. COMPARISON OF EXUDATES SEGMENTATION ALGORITHMS (DIARETDB1 DATABASE)

Method                    Sens. (%)    Spec. (%)
Sopharak et al. [15]      43.48        99.31
Walter et al. [16]        66.00        98.64
Welfer [17]               70.48        98.84
Proposed Algorithm        70.53        99.98

IV. CONCLUSION

In this paper, an efficient automated algorithm for exudates detection using image slicing and projection techniques has been presented. The proposed algorithm comprises two major stages, namely, pre-processing and segmentation. A novel pre-processing scheme using a color space combination technique along with techniques such as decorrelation stretch, top-hat and bottom-hat transforms, and median filtering has been employed. In the second stage, the pre-processed image is sliced horizontally and vertically into a number of blocks, and then vertical and horizontal projections are performed in order to select an appropriate threshold value for each image slice. The significant contribution of this paper is the proposal of an empirical formula for adaptive and automatic thresholding of the image slices. Finally, optic disc removal is performed using a recently proposed algorithm in order to facilitate the correct identification of exudates and to decrease the number of false positives. The performance of the proposed algorithm has been evaluated on the DIARETDB1 database using Matlab software. The experimental results show that the proposed algorithm achieves promising results for the accurate detection of exudates.

REFERENCES

[1] R. Klein, B. E. Klein, S. C. Jensen, et al., "The relation of socioeconomic factors to the incidence of early age-related maculopathy: The Beaver Dam Eye Study," Am. J. Ophthalmology, 132: 128-131, 2001.

[2] S. C. Lee, E. T. Lee, R. M. Kingsley, Y. Wang, D. Russell, R. Klein, A. Warn, "Comparison of diagnosis of early retinal lesions of diabetic retinopathy between a computer system and human experts," Archives of Ophthalmology, 119: 509-515, 2001.

[3] T. Walter, P. Massin, A. Erginay, R. Ordonez, C. Jeulin, J. C. Klein, "Automatic detection of microaneurysms in color fundus images," Medical Image Analysis, 11 (6): 555-566, 2007.

[4] T. A. Ciulla, A. G. Amador, B. Zinman, "Diabetic retinopathy and diabetic macular edema: pathophysiology, screening, and novel therapies - Review Article," Diabetes Care, 2003.

[5] C. I. Sánchez, R. Hornero, M. I. López, M. Aboy, J. Poza, D. Abásolo, "A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis," Medical Engineering & Physics, 30 (2008) 350-357.

[6] M. D. Abràmoff, M. K. Garvin, M. Sonka, "Retinal Imaging and Image Analysis," IEEE Reviews in Biomedical Engineering, vol. 3, 2010.

[7] A. Osareh, B. Shadgar, and R. Markham, “A Computational-Intelligence-Based Approach for Detection of Exudates in Diabetic Retinopathy Images,” IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 4, July 2009.

[8] Marwan D. Saleh, N. D. Salih, C. Eswaran and Junaidi Abdullah, “Automated Segmentation of Optic Disc in Fundus Images,” in Proc. IEEE 10th International Colloquium on Signal Processing & its Applications (CSPA2014), pp: 153-158, Malaysia, 2014.

[9] M. Taylor, “Principal components colour display of ERTS imagery,” 3rd Earth Resources Technology Satellite-1 Symposium, NASA, pp. 1877-1897, 1973.

[10] A. A. A. Youssif, A. Z. Ghalwash, A. A. S. A. Ghoneim, “Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels’ Direction Matched Filter”, IEEE Trans. Med. Imag., 27(1):11-18, 2008

[11] A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, “Automated Identification of Diabetic Retinal Exudates in Digital Colour Images,” Br. J. Ophthalmol., vol. 87, pp. 1220–1223, 2003.

[12] Kauppi, T., Kalesnykiene, V., Kamarainen, J.-K., Lensu, L., Sorri, I., Raninen A., Voutilainen R., Uusitalo, H., Kälviäinen, H., Pietilä, J “DIARETDB1 diabetic retinopathy database and evaluation protocol”, Technical report.

[13] L. Costaridou, “Medical Image Analysis Methods,” The Electrical Engineering and Applied Signal Processing Series. CRC Press, 2005. ISBN: 0-8493-2089-5.

[14] J. Cohen, "A coefficient of agreement for nominal scales," Educational and Psychological Measurement, 20(1): 37-46, 1960.

[15] A. Sopharak, B. Uyyanonvara, S. Barman, T. H. Williamson, "Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods," Computerized Medical Imaging and Graphics, 2008;32:720-7.

[16] T. Walter, J.-C. Klein, P. Massin, A. Erginay, "A contribution of image processing to the diagnosis of diabetic retinopathy - detection of exudates in color fundus images of the human retina," IEEE Transactions on Medical Imaging, 2002;21(10):1236-43.

[17] D. Welfer, J. Scharcanski, D. R. Marinho, “A coarse-to-fine strategy for automatically detecting exudates in color eye fundus images,” Computerized Medical Imaging and Graphics 34 (2010) 228–235.
