
Research Article Impact Factor: 0.621 ISSN: 2319-507X Vikul Pawar, IJPRET, 2014; Volume 2 (8): 71-82 IJPRET

Available Online at www.ijpret.com


INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

A PATH FOR HORIZING YOUR INNOVATIVE WORK

AN IMPROVED BINARIZATION TECHNIQUE FOR DEGRADED DOCUMENT IMAGES USING LOCAL THRESHOLDING METHOD

VIKUL J. PAWAR

Research Scholar, Computer Science and Engineering Department, Government College of

Engineering, Aurangabad [Autonomous] Station Road, Aurangabad, Maharashtra, India.

Accepted Date: 15/02/2014 ; Published Date: 01/04/2014


Abstract: This paper presents a document image binarization technique for recovering text from poorly degraded document images with good efficiency and accuracy. Binarization is a demanding task today: the world is moving toward digitization, and hard-copy documents often show high variation between the document background and the foreground text. Degraded document images suffer from various kinds of noise and uneven illumination caused by age and many other factors. The proposed method addresses these issues by using an adaptive image contrast, a combination of the local image contrast and the local image gradient. The technique first constructs an adaptive contrast map for the input degraded document image. The binarized contrast map is then combined with Canny's edge map to identify the text stroke edge pixels. The text is further segmented by a local thresholding method, whose threshold is estimated from the intensities of the detected text stroke edge pixels within a local window. The proposed method is simple, robust, and requires minimal parameter tuning. It has been tested on the datasets used in the recent Document Image Binarization Contests (DIBCO) 2009, 2010, and 2011. Keywords: Local Thresholding, Segmentation, Canny Edge Detection, Image Contrast, Pixel Classification, Illumination, Degradation, Noise.

Corresponding Author: Mr. VIKUL J. PAWAR

Access Online On:

www.ijpret.com

How to Cite This Article:

Vikul Pawar, IJPRET, 2014; Volume 2 (8): 71-82



I. INTRODUCTION

Document image binarization focuses on the conversion of a grayscale image into a binary image. In general, it separates text areas from background areas, and it plays a key role in document processing: binarization is performed in the preprocessing stage of many document image processing applications such as optical character recognition (OCR) and document image retrieval [1], [5].

Though document image binarization has been studied for many years, thresholding of degraded document images is still an unsolved problem. This can be explained by the difficulty of modeling the different types of document degradation, such as uneven illumination, changes in image contrast, aging, smear, and bleed-through, that exist in many DIBCO document images, as illustrated in Fig. 1 [2], [3], [4].




Fig. 1 (a), (b), (c): Degraded document images from the DIBCO dataset.

The literature contains a large number of document image thresholding techniques [6], [7], [8], [9].

The binarization techniques for grayscale documents can be categorized into two groups: global binarization and local binarization. Global binarization methods try to find a single threshold value for the whole document image; each pixel is then assigned to the page foreground or background based on its gray value. Global binarization methods give good results for typical scanned documents. However, if the illumination over the document is not uniform, for instance in scanned book pages or camera-captured documents, global binarization methods tend to produce marginal noise along the page borders [10]. In other documents, such as historical documents, image intensities can change significantly within a single page. Local binarization methods [11], [12], [13] were introduced to overcome these problems of global thresholding: by computing a threshold individually for each pixel using information from its local neighborhood, they are usually capable of producing much better binarization results. Taking all of the above into consideration, this paper proposes a locally adaptive thresholding technique which binarizes and improves poor-quality, degraded document images while preserving the originality of the document.
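To make the global/local distinction concrete, the sketch below contrasts a global threshold (Otsu [6]) with a per-pixel local threshold (Niblack [13]) in plain NumPy. This is an illustrative sketch, not the paper's implementation; the function names are ours, and an 8-bit grayscale input is assumed.

```python
import numpy as np

def otsu_threshold(img):
    """Global threshold maximizing between-class variance (Otsu [6])."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    mu_total = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum_w = cum_mu = 0.0
    for t in range(256):
        cum_w += hist[t]
        cum_mu += t * hist[t]
        w0 = cum_w / total
        if w0 in (0.0, 1.0):
            continue
        mu0 = cum_mu / cum_w                              # class-0 mean
        mu1 = (mu_total * total - cum_mu) / (total - cum_w)  # class-1 mean
        var = w0 * (1 - w0) * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def niblack_threshold(img, w=15, k=-0.2):
    """Local threshold surface T = mean + k*std over a w x w window
    (Niblack [13]); one threshold value per pixel."""
    img = img.astype(float)
    pad = w // 2
    p = np.pad(img, pad, mode='reflect')
    T = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + w, j:j + w]
            T[i, j] = win.mean() + k * win.std()
    return T
```

For an unevenly lit page, the Niblack surface T varies across the image and can follow the illumination, whereas Otsu returns a single value for the whole page.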

The proposed technique goes through several steps. The first step is preprocessing, in which noise is removed from the document image using a mean filter. In the second step, pixel information is collected by enhancing the contrast. The third step applies the Canny edge detector to construct an edge map. In the fourth step, the final binarization combines information from the estimated background surface and the original image: a pixel is classified as text if the distance between the original image and the estimated background exceeds a threshold. The last step is post-processing, which eliminates noise, improves the quality of text regions, and preserves stroke connectivity. In particular, the proposed technique addresses the over-normalization problem of the local maximum-minimum algorithm [14]. At the same time, the parameters used in the algorithm can be estimated adaptively.

II. RELATED WORK

Several adaptive document thresholding methods [4], [5], [15] have been reported. One typical adaptive thresholding approach is window based [16]: it estimates the local threshold from the image pixels within a neighborhood window. However, the performance of window-based methods depends heavily on the window size, which cannot be determined properly without prior knowledge of the text strokes. Similarly, some window-based methods, such as Niblack's [13], often introduce a large amount of noise, while others, such as Sauvola's [11], are more sensitive to the variation of image contrast between the document text and the document background. Another family of adaptive thresholding approaches first estimates a document background surface and then derives a thresholding surface from it; Gatos et al. [5], for example, estimate the document background surface. Su et al. [14] instead find the text stroke edges by using an image contrast that is evaluated from the local maximum and minimum.

Other approaches have also been reported, including background subtraction [5], texture analysis [17], recursive methods [18], decomposition methods [19], contour completion [20], Markov random fields [21], cross-section sequence graph analysis [15], self-learning, Laplacian energy, user assistance, and combinations of binarization techniques [22]. These methods combine different types of image information and domain knowledge and are often complex. The local image contrast and the local image gradient are very useful features for segmenting text from the document background, because document text usually has a certain contrast with its neighboring background. They are very effective and have been used in many document image binarization techniques [11], [14].

III. PROPOSED METHOD

The proposed document image binarization technique consists of the following stages:

A. Preprocessing.

B. Contrast Map Construction.

C. Canny Edge Map.

D. Segmentation using Local Thresholding.


E. Post Processing.

F. Performance Analysis.

The proposed method is implemented as follows: first, the image contrast of the degraded document image is enhanced; next, the text stroke edge pixels are detected by Canny's edge detection method [23]; then the text is segmented from the image by a local thresholding method; and finally, a post-processing step improves the quality of the document image. The block diagram of the proposed system is shown in Fig. 2.

A. Preprocessing

Pre-processing methods use a small neighborhood of a pixel in the input image to compute a new brightness value in the output image; such operations are also called filtering. Local pre-processing methods can be divided into two groups according to the goal of the processing. Smoothing suppresses noise and other small fluctuations in the image, which is equivalent to suppressing high frequencies in the frequency domain; however, smoothing may also blur sharp edges that carry important information about the image. Here, preprocessing is applied to remove noise from the document image using a mean filter. The mean filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether it is representative of its surroundings; it then replaces the pixel with the mean value of the surrounding pixels.

Fig. 2 Block Diagram of Proposed System.
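The mean-filtering step described above can be sketched as follows; this is a minimal illustration (not the author's code), assuming a 2-D grayscale array.

```python
import numpy as np

def mean_filter(img, size=3):
    """Replace each pixel with the mean of its size x size neighborhood."""
    img = img.astype(float)
    pad = size // 2
    p = np.pad(img, pad, mode='edge')  # replicate borders
    out = np.zeros_like(img)
    # Sum shifted copies of the padded image instead of looping per pixel.
    for di in range(size):
        for dj in range(size):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (size * size)
```

An isolated noise spike of value 9 in a zero image becomes 1.0 after a 3 x 3 mean filter, i.e. the spike's energy is spread over its neighborhood rather than removed outright, which is why the median filter is preferred later for post-processing.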


B. Contrast Map Construction

Contrast is the difference in luminance and/or color that makes an object distinguishable; it is determined by the difference in color and brightness between an object and the other objects within the same field of view. An image gradient is a directional change in the intensity or color of an image, and image gradients can be used to extract information from images. The contrast map of the image is constructed by combining the image contrast and the image gradient, and it helps to reveal the contents of the document.
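One possible reading of this construction, combining a normalized local contrast with a local (max - min) gradient in the spirit of [14], is sketched below. The weighting alpha and the function name are our assumptions for illustration, not values given in the paper.

```python
import numpy as np

def adaptive_contrast_map(img, w=3, alpha=0.5, eps=1e-8):
    """Combine normalized local contrast with the local image gradient.
    Local max/min are taken over a w x w window around each pixel."""
    img = img.astype(float)
    pad = w // 2
    p = np.pad(img, pad, mode='edge')
    H, W = img.shape
    mx = np.full((H, W), -np.inf)
    mn = np.full((H, W), np.inf)
    for di in range(w):
        for dj in range(w):
            shifted = p[di:di + H, dj:dj + W]
            mx = np.maximum(mx, shifted)
            mn = np.minimum(mn, shifted)
    local_contrast = (mx - mn) / (mx + mn + eps)  # illumination-insensitive
    local_gradient = (mx - mn) / 255.0            # plain intensity range
    return alpha * local_contrast + (1 - alpha) * local_gradient
```

On a flat region the map is near zero; across a text stroke boundary both terms are large, so stroke edges stand out regardless of the local brightness level.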

Histogram equalization is a technique used in image processing for contrast adjustment: it normalizes the image's histogram. In its basic form it is applied to grayscale images. Histogram equalization can be categorized into two methods: global and local. Global histogram equalization uses the histogram of the entire input image; its transformation function stretches the contrast of high-histogram regions and compresses the contrast of low-histogram regions.

The global method is simple, but it cannot adapt to local brightness features of the input image because it uses only global histogram information. To overcome this limitation, a local histogram equalization method can be used: a rectangular sub-block of the input image is defined, its histogram is computed, and its histogram equalization transform is determined. The center of the rectangular region is then moved to the adjacent pixel and the histogram equalization is repeated. This procedure is applied pixel by pixel for all input pixels. It allows each pixel to adapt to its neighboring region, so that high contrast can be obtained at all locations in the image [29].
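The sliding-window procedure described above can be sketched as follows. This brute-force version (our illustration, not an optimized implementation) evaluates the local cumulative distribution at each center pixel, which is equivalent to equalizing the window's histogram and reading off the transformed center value.

```python
import numpy as np

def local_hist_equalize(img, w=15):
    """Per-pixel histogram equalization over a w x w neighborhood:
    each output value is the local CDF evaluated at the center pixel,
    scaled to [0, 255]."""
    pad = w // 2
    p = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = p[i:i + w, j:j + w]
            # rank of the center pixel within its neighborhood -> local CDF
            rank = np.count_nonzero(win <= img[i, j])
            out[i, j] = int(255 * rank / win.size)
    return out
```

The per-pixel window makes this O(H * W * w^2); practical implementations reuse overlapping histograms or partially overlapped sub-blocks [29] to cut that cost.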

C. Canny Edge Map

Edges are detected with the Canny edge detection algorithm [23]. The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images: first, smoothing removes noise from the image; then the algorithm searches for high magnitudes of the image gradient and marks the corresponding edges, keeping only local maxima of the gradient so that each edge is marked once.

Since the local image contrast and the local image gradient are evaluated from the difference between the maximum and minimum intensity in a local window, the pixels on both sides of a text stroke are selected as high-contrast pixels. The resulting binary map can be further improved by combining it with the edge map produced by Canny's edge detector [23]; through this combination, the text stroke edges are identified in the input image.
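As a simplified stand-in for the Canny step, the sketch below computes Sobel gradient magnitudes with the double thresholding Canny uses, but omits Canny's non-maximum suppression and hysteresis edge tracking; it is our illustration, not the detector from [23] in full.

```python
import numpy as np

def sobel_edge_map(img, low=30, high=90):
    """Gradient-magnitude edge map with double thresholding.
    Returns (strong, weak) boolean masks; full Canny would additionally
    thin edges (non-maximum suppression) and link weak to strong pixels."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # Sobel kernel for the vertical direction
    p = np.pad(img, 1, mode='edge')
    H, W = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            win = p[di:di + H, dj:dj + W]
            gx += kx[di, dj] * win
            gy += ky[di, dj] * win
    mag = np.hypot(gx, gy)
    strong = mag >= high
    weak = (mag >= low) & ~strong
    return strong, weak
```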

D. Segmentation using Local Thresholding

Once the text stroke edges are detected, the document text can be extracted based on the observation that it is surrounded by text stroke edges and also has a lower intensity level than the detected stroke edge pixels [27]. The document text is extracted from the detected text stroke edges as follows:

g(x, y) = 1, if f(x, y) > T
g(x, y) = 0, if f(x, y) ≤ T          (3)

where f(x, y) ∈ [0, 255] is the intensity of the grayscale document image at location (x, y), g(x, y) is the resulting binary image, and T is the threshold computed by the local thresholding technique. If f(x, y) is larger than T, the pixel is classified as a text pixel; if f(x, y) is smaller than or equal to T, it is classified as a background pixel [30].

As described earlier, the performance of the proposed binarization using text stroke edges depends on one parameter: the minimum number of text stroke edge pixels within the neighborhood window. The identified edge pixels are then used to segment the image so that the text becomes clear and readable. For segmentation we use the local threshold method, which segments the image based on seed point values. The segmentation process renders the text in the document image clearly; in this way, the process helps to restore degraded historical and other important documents.
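The window-based classification of Eq. (3) might be sketched as below, with the threshold T for each pixel taken as the mean intensity of the detected stroke-edge pixels in its window. The fallback for edge-free windows is our assumption, as the paper does not specify one.

```python
import numpy as np

def segment_with_edge_threshold(img, edge_mask, w=15):
    """Binary map of Eq. (3): the per-pixel threshold T is the mean
    intensity of detected stroke-edge pixels inside a w x w window
    (global edge mean as a fallback, our assumption);
    g = 1 where f > T, else 0."""
    img = img.astype(float)
    pad = w // 2
    fallback = img[edge_mask].mean() if edge_mask.any() else img.mean()
    p = np.pad(img, pad, mode='edge')
    pe = np.pad(edge_mask, pad, mode='constant')  # pad with False
    H, W = img.shape
    out = np.empty((H, W), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            ewin = pe[i:i + w, j:j + w]
            if ewin.any():
                T = p[i:i + w, j:j + w][ewin].mean()
            else:
                T = fallback
            out[i, j] = 1 if img[i, j] > T else 0
    return out
```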

E. Post-processing

A certain amount of error and noise is often introduced into the binarized document image; it can be corrected through a series of post-processing operations based on the estimated document background surface and some document domain knowledge. In particular, noise reduction in the post-processing step is done with a median filter. The median filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether it is representative of its surroundings; it then replaces outlier pixels with the median of the neighboring values. The median is calculated by sorting all pixel values from the surrounding neighborhood into numerical order and replacing the pixel under consideration with the middle value. Next, falsely detected text components of relatively large size are removed. They are identified based on the observation that they are usually much brighter than the surrounding real text strokes; this observation is captured by the image difference between the labeled text component and the corresponding patch of the estimated document background surface.
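The median-filtering part of this step can be sketched as follows (a minimal illustration, not the author's code):

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighborhood;
    this removes isolated speckle noise from the binarized output."""
    pad = size // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(p[i:i + size, j:j + size])
    return out
```

Unlike the mean filter used in preprocessing, the median filter removes an isolated speckle entirely while leaving solid text regions intact, which is why it suits post-processing of a binary map.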

F. Performance analysis

Experiments were designed to show the usefulness and robustness of the proposed method. The proposed technique was tested and compared on three datasets: the DIBCO 2009 dataset [2], the H-DIBCO 2010 dataset [3], and the DIBCO 2011 dataset [4]. Its performance is evaluated using the F-measure, peak signal-to-noise ratio (PSNR), and rank score adopted from DIBCO 2009, H-DIBCO 2010, and DIBCO 2011 [2]–[4]. Note that not all of the metrics are applied to every image.
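For reference, the two reported metrics can be computed as follows for binary text masks. The function names are ours, and the peak value used in the PSNR is a convention choice (commonly 1 for binary maps).

```python
import numpy as np

def f_measure(pred, gt):
    """F-measure (in percent) between predicted and ground-truth
    text masks, where 1 marks a text pixel."""
    tp = np.logical_and(pred == 1, gt == 1).sum()
    fp = np.logical_and(pred == 1, gt == 0).sum()
    fn = np.logical_and(pred == 0, gt == 1).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 100.0 * 2 * precision * recall / (precision + recall)

def psnr(pred, gt, c=1.0):
    """Peak signal-to-noise ratio; c is the peak value (1 for binary maps)."""
    mse = np.mean((pred.astype(float) - gt.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(c * c / mse)
```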

Fig. 3 shows the comparison of the proposed method with other methods over several document images from the DIBCO dataset.

Method            F-measure   PSNR
OTSU              78.72       15.34
LMM               91.06       18.5
BE                91.24       18.6
Proposed System   92.05       20.6

Fig. 3 Evaluation results of the proposed system on the DIBCO dataset.

Compared with the other methods, the performance of the proposed system is found to be further improved; Fig. 4 shows the analytical study of the proposed system.


Fig. 4 Analytical study of the proposed system using the DIBCO dataset.

IV. DISCUSSION

As described in the previous sections, the proposed method applies various techniques to remove noise and degradation from the input document image. Then, by applying the local thresholding method, the background and the foreground text are separated. This makes the proposed technique stable and easy to use for document images with different kinds of degradation.

The performance of the proposed method is better for several reasons. First, the method applies preprocessing to remove noise and refine images degraded by illumination and age, and it combines the local image contrast with the local image gradient, which helps to suppress the background. Second, the combination with the edge map helps to produce a precise text stroke edge map. Third, the method makes use of the text stroke edges, which help to extract the foreground text from the document background precisely.

V. CONCLUSION

The proposed method follows several distinct steps. First, a pre-processing procedure collects the document image information and makes use of the local image contrast, which is estimated from the local maximum and minimum. The stroke edges are then detected through the Canny edge detection method, based on the local image variation. Next, a local threshold is computed from the detected stroke edge pixels within a local neighborhood window, through which the exact text can be extracted from the document image. Lastly, a post-processing step uses a median filter to improve the quality of the output document image while accurately preserving the text regions and stroke connectivity. In extensive experiments, the proposed method exhibits improved performance.

ACKNOWLEDGMENT

I, Vikul J. Pawar, am thankful to Prof. Vivek Kshirsagar, Assistant Professor and Head, Computer Science and Engineering Department, Government College of Engineering, Aurangabad [Autonomous], for his guidance, motivation, and continuous support throughout the dissertation work. I am also thankful to the Hon. Principal, Government College of Engineering, Aurangabad [Autonomous], for being a constant source of inspiration.

REFERENCES

1. B. Su, S. Lu, and C. L. Tan, "Robust document image binarization technique for degraded document images," IEEE Trans. Image Process., vol. 22, no. 4, Apr. 2013.

2. B. Gatos, K. Ntirogiannis, and I. Pratikakis, “ICDAR 2009 document image binarization contest (DIBCO 2009),” in Proc. Int. Conf. Document Anal. Recognit., Jul. 2009, pp. 1375–1382.

3. I. Pratikakis, B. Gatos, and K. Ntirogiannis, "H-DIBCO 2010 handwritten document image binarization competition," in Proc. Int. Conf. Frontiers Handwrit. Recognit., Nov. 2010, pp. 727–732.

4. I. Pratikakis, B. Gatos, and K. Ntirogiannis, "ICDAR 2011 document image binarization contest (DIBCO 2011)," in Proc. Int. Conf. Document Anal. Recognit., Sep. 2011, pp. 1506–1510.

5. B. Gatos, I. Pratikakis, and S. Perantonis, “Adaptive degraded document image binarization,” Pattern Recognit., vol. 39, no. 3, pp. 317–327, 2006.

6. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern., vol. 9, no. 1, pp. 62–66, 1979.

7. Brink, A.: Thresholding of digital images using two-dimensional entropies. Pattern Recogn. 25(8), 803–808 (1992).


8. Kittler, J., Illingworth, J.: On threshold selection using clustering criteria. IEEE Trans. Syst. Man Cybern. 15, 652–655 (1985).

9. Solihin, Y., Leedham, C.: Integral ratio: a new class of global thresholding techniques for handwriting images. IEEE Trans. Pattern Anal. Mach. Intell. 21, 761–768 (1999).

10. F. Shafait, J. van Beusekom, D. Keysers, and T. M. Breuel, "Page frame detection for marginal noise removal from scanned documents," in Proc. 15th Scandinavian Conf. Image Analysis, Aalborg, Denmark, Jun. 2007, pp. 651–660.

11. J. Sauvola and M. Pietikainen, “Adaptive document image binarization,” Pattern Recognition 33(2), pp. 225–236, 2000.

12. J. Bernsen, “Dynamic thresholding of gray level images,” in Proc. Intl. Conf. on Pattern Recognition, pp. 1251–1255, 1986.

13. W. Niblack, “An Introduction to Image Processing”, Prentice-Hall, Englewood Cliffs, NJ, 1986.

14. B. Su, S. Lu, and C. L. Tan, “Binarization of historical handwritten document images using local maximum and minimum filter,” in Proc. Int. Workshop Document Anal. Syst., Jun. 2010, pp. 159–166.

15. Dawoud, A.: Iterative cross section sequence graph for handwritten character segmentation. IEEE Trans. Image Process. 16, 2150–2154(2007).

16. M. Sezgin and B. Sankur, “Survey over image thresholding techniques and quantitative performance evaluation,” J. Electron. Imag., vol. 13, no. 1, pp. 146–165, Jan. 2004.

17. Y. Liu and S. Srihari, “Document image binarization based on texture features,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 5, pp. 540–544, May 1997.

18. M. Cheriet, J. N. Said, and C. Y. Suen, "A recursive thresholding technique for image segmentation," IEEE Trans. Image Process., vol. 7, no. 6, pp. 918–921, Jun. 1998.

19. Y. Chen and G. Leedham, "Decompose algorithm for thresholding degraded historical document images," IEE Proc. Vis., Image Signal Process., vol. 152, no. 6, pp. 702–714, Dec. 2005.

20. Q. Chen, Q. Sun, H. Pheng Ann, and D. Xia, "A double-threshold image binarization method based on edge detector," Pattern Recognit., vol. 41, no. 4, pp. 1254–1267, 2008.

21. T. Lelore and F. Bouchara, "Document image binarisation using Markov field model," in Proc. Int. Conf. Document Anal. Recognit., Jul. 2009, pp. 551–555.

22. E. Badekas and N. Papamarkos, "Optimal combination of document binarization techniques using a self-organizing map neural network," Eng. Appl. Artif. Intell., vol. 20, no. 1, pp. 11–24, Feb. 2007.

23. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 8, no. 6, pp. 679–698, Nov. 1986.

24. D. Ziou and S. Tabbone, "Edge detection techniques—An overview," Int. J. Pattern Recognit. Image Anal., vol. 8, no. 4, pp. 537–559, 1998.

25. M. van Herk, "A fast algorithm for local minimum and maximum filters on rectangular and octagonal kernels," Pattern Recognit. Lett., vol. 13, no. 7, pp. 517–521, Jul. 1992.

26. J. Bernsen, "Dynamic thresholding of gray-level images," in Proc. Int. Conf. Pattern Recognit., Oct. 1986, pp. 1251–1255.

27. S. Lu, B. Su, and C. L. Tan, "Document image binarization using background estimation and stroke edges," Int. J. Document Anal. Recognit., vol. 13, no. 4, pp. 303–314, Dec. 2010.

28. S. Lu, B. Su, and C. L. Tan, "Document image binarization using background estimation and stroke edges," Int. J. Document Anal. Recognit., vol. 13, no. 4, pp. 303–314, Dec. 2010.

29. J.-Y. Kim, L.-S. Kim, and S.-H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization," IEEE Trans. Circuits Syst. Video Technol., vol. 11, no. 4, Apr. 2001.

30. F. Shafait, D. Keysers, and T. M. Breuel, "Efficient implementation of local adaptive thresholding techniques using integral images," in Proc. SPIE Document Recognition and Retrieval XV, 2008.