
WACV 2016 Submission #232. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.

A Deep Convolutional Neural Network Trained on Representative Samples for Circulating Tumor Cell Detection

Anonymous WACV submission

Paper ID 232

Abstract

The number of Circulating Tumor Cells (CTCs) in blood indicates the tumor response to chemotherapeutic agents and disease progression. The detection and enumeration of CTCs in clinical blood samples therefore have significant applications in early cancer diagnosis and routine treatment monitoring. In this paper, we design a Deep Convolutional Neural Network (DCNN) with automatically learned features for image-based CTC detection. We also present an effective training methodology which finds the most representative training samples to define the classification boundary between positive and negative samples. In the experiments, we compare the performance of features automatically learned by the DCNN against hand-crafted features, and the DCNN outperforms the hand-crafted features. We also show that the proposed training methodology is effective in improving the performance of DCNN classifiers.

1. Introduction

Malignant cells may break away from the primary tumor during the process of cancer metastasis. They may enter the blood or lymphatic system and then form a secondary tumor at a distant organ site. Several studies have shown that malignant cells circulate in the blood in the early stages of solid tumor progression, before overt clinical signs [1, 2, 5, 6]. The number of Circulating Tumor Cells (CTCs) in blood can predict disease progression and indicates the tumor response to chemotherapeutic agents [5, 6]. Thus, the development of routine techniques for the detection and enumeration of CTCs in clinical blood samples is needed. An automated technique for CTC detection may help in the early diagnosis of cancer, even before tumors are visible using traditional imaging approaches. It may also help avoid more invasive diagnostic techniques such as tumor biopsy.

1.1. Related Work

Because CTCs in blood are infrequent (1 per 1 billion normal cells found in the blood [1]), routine detection of CTCs poses a significant challenge. Several methods have been developed to quantify and capture CTCs from human blood. Many of these approaches depend on surface markers expressed on tumor cells, which can be exploited for positive selection. One widely used marker for the detection of carcinoma cells in the blood is the epithelial cell adhesion molecule (EpCAM) [7]. In some enrichment techniques, magnetic beads with immobilized anti-EpCAM [8], and other anti-tumor antibodies [9], are used for immunomagnetic separation of malignant cells from the normal blood cell population. Immunomagnetic selection of CTCs is attractive because of its simplicity and the availability of the needed tools and reagents. Commercially available systems based on EpCAM-positive selection have been successfully applied to CTC evaluation in certain types of carcinoma. Other approaches exploit differences in tumor cell physical properties such as size, density or adhesiveness [10]. To the best of our knowledge, little work has been done on image-based CTC detection.

Figure 1. Visualization of Circulating Tumor Cells (CTCs) using phase contrast microscopy imaging. (a) An image containing CTCs. (b) CTC samples with different shapes and sizes, some of which cluster together. (c) Samples from the non-CTC background, which are similar to CTCs in intensity and shape.


Figure 2. Overview of our proposed framework.

1.2. Motivation and Contributions

Methods that use antibodies against tumor cells require prior knowledge of the markers, which vary widely across different types and stages of cancer. Antibody-based systems that efficiently detect carcinomas will miss many other types of malignancies, including leukemia, lymphoma and non-epithelial tumors. Svensson et al. [16] use a Bayesian classifier based on a probabilistic generative mixture model to detect CTCs. However, their system is based on fluorescence microscopy images, which is an invasive approach. There is great interest in the CTC community in developing a noninvasive method that does not depend on tumor cell markers and is capable of detection across a wide range of cancer types.

A phase contrast microscopy image containing some CTCs is shown in Fig. 1(a). The CTCs exhibit large variations in shape and size, and they overlap with each other (Fig. 1(b)). Some of the non-CTC background has a similar appearance to the CTCs (Fig. 1(c)). It is very hard to distinguish CTCs from the background by simple intensity thresholding or morphological operations.

Therefore, reliable image features are needed to detect CTCs. Deep Convolutional Neural Networks (DCNNs) have shown their effectiveness in object detection and classification in recent years [3, 4], and have proved to be effective tools in several biomedical applications such as mitotic cell detection [13, 14]. A DCNN architecture consists of several layers of convolutional filters, each followed by a max- or mean-pooling operation, which produce abstract and useful representations of the input object. The kernel parameters are learned automatically, without any human effort, and the learned convolutional kernels are more effective feature extractors than human-designed feature descriptors.

The balance between the number of positive and negative samples is quite important in DCNN training. However, CTCs in blood are so infrequent that it is hard to acquire a large number of CTC image patches for DCNN training. Meanwhile, there are many more negative samples from the background, with redundancies, so it is infeasible to include every possible negative sample in training. Thus, a training methodology that collects the most representative training samples from limited training images is needed to avoid the class imbalance problem.

The above three needs (noninvasive microscopy imaging, image feature extraction, and training with representative samples) motivate us to develop a CTC detection system with the following contributions:

• We propose an image-based CTC detection system based on a DCNN. The proposed system is non-invasive, without staining markers that damage the viability of CTCs.

• An effective training methodology is proposed. It finds the most representative samples to better define the classification boundary between positive and negative samples.

2. Methodology

2.1. Overview

The diagram of our framework is shown in Fig. 2. In the i-th iteration, DCNN detector D_i is trained on the positive and negative training datasets plus the false positives generated by detectors D_1 to D_{i-1}, and the i-th detector D_i generates a set of false positives FP_i. FP_i is added to the


training dataset to train the detector D_{i+1} in the (i+1)-th iteration. The iterations stop when the performance converges. This sequence of iterations is defined as one ROUND of training in this paper. Then, we apply all D_i's (i ∈ [1, N]) from one ROUND of N iterations to all the false positive samples collected during the N iterations. The confidence scores S_i are the output values of D_i when labeling a false positive sample as positive. Since we have N iterations in one ROUND, each false positive has an N×1 feature vector. The k-means clustering method is applied to classify these false positives, based on their feature vectors, into two groups: easy samples and hard samples. Only the hard samples are added to the original negative training dataset to start another ROUND of iterative training. We eventually obtain one DCNN detector after multiple ROUNDs of training (each ROUND with multiple iterations), i.e., the final trained detector is the DCNN from the last iteration of the last ROUND.
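The ROUND structure described above can be sketched as a short Python skeleton. This is a minimal sketch, not the paper's implementation: `train_fn` and `mine_fp_fn` are hypothetical placeholders for DCNN training and false-positive mining, and a detector is modeled as a callable returning a confidence score.

```python
import numpy as np

def train_round(pos, neg, n_iters, train_fn, mine_fp_fn):
    """One ROUND of iterative training.

    train_fn(pos, neg)   -> detector (callable: patch -> confidence score)
    mine_fp_fn(detector) -> list of false-positive patches
    Both callables are hypothetical stand-ins for the DCNN machinery.
    """
    detectors, fp_pool, cur_neg = [], [], list(neg)
    for _ in range(n_iters):
        det = train_fn(pos, cur_neg)   # D_i: trained on negatives + earlier FPs
        fps = mine_fp_fn(det)          # FP_i: false positives produced by D_i
        detectors.append(det)
        fp_pool.extend(fps)
        cur_neg.extend(fps)            # FP_i joins the training set for D_{i+1}
    # score every collected FP with every detector -> one N-vector per sample
    scores = np.array([[det(fp) for det in detectors] for fp in fp_pool])
    return detectors, fp_pool, scores
```

The returned `scores` matrix holds the N×1 confidence-score feature vector of each collected false positive, ready for the k-means easy/hard split.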

2.2. Data Acquisition

Figure 3. Staining and fluorescence imaging are used to obtain the ground truth. (a) A phase contrast microscopy image containing three CTCs; (b) the corresponding fluorescence image shows the locations of the CTCs.

We labeled MCF-7 breast cancer cells with a red fluorescent cell-tracker dye for 30 minutes. The MCF-7 cells were mixed with purified sheep red blood cells at a ratio of 1:10,000 (note that, in reality, CTCs are very rare and the ratio of CTCs to red blood cells is as low as 1:10^9; we increase the ratio to have more positive samples for training and validation). The CTC samples were then mounted onto glass slides using an 18x18 mm coverslip. Fluorescence and phase contrast image sets were acquired using a Leica DMIRE2 epifluorescence microscope equipped with a 10X objective and a 12-bit monochrome CCD camera, as shown in Fig. 3. The bright regions in the fluorescence image indicate where the CTC cells are. Note that the invasive staining process and fluorescence imaging are used only to obtain ground truth for training and evaluating our computational algorithms; in our non-invasive CTC detection, the CTCs are not stained and fluorescence imaging is not used.

2.3. DCNN Architecture

We design a DCNN as shown in Fig. 4. The input image patch to the DCNN is normalized to 40×40 pixels.

Figure 4. The architecture of the DCNN for CTC detection.

In the first layer, 6 different convolutional filters of size 5×5 are applied to the input images. The convolution operation is formulated as

    y_j = sigm(b_j + Σ_i k_ij ∗ x_i)    (1)

where x_i and y_j are the i-th input map and the j-th output map, respectively, b_j is the bias term, and k_ij is the convolutional kernel between x_i and y_j. The sigmoid function, sigm, maps output values to the range [-1, 1].

The first layer is followed by a max-pooling layer, which extracts the local maximum in every 2×2 region. The max-pooling function is expressed as

    z_j(p, q) = max_{m,n ∈ {0,1}} y_j(2p+m, 2q+n)    (2)

where the pixel at (p, q) of the output map z_j pools over a 2×2 region in y_j.
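The 2×2 max-pooling of Eq. (2) can be sketched in a few lines of NumPy (a reshape-based sketch; odd trailing rows or columns are simply dropped):

```python
import numpy as np

def max_pool_2x2(y):
    """Eq. (2): z(p, q) = max over the 2x2 block y(2p+m, 2q+n), m, n in {0, 1}."""
    H, W = y.shape
    H2, W2 = H // 2, W // 2
    # group pixels into non-overlapping 2x2 blocks, then take the block maximum
    blocks = y[:2*H2, :2*W2].reshape(H2, 2, W2, 2)
    return blocks.max(axis=(1, 3))
```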

The third layer is a convolutional layer with 12 kernels. The fourth layer is a max-pooling layer. The last layer is fully connected to the output layer by performing the dot product between the weight vector and the input vector. The weighted sum is then passed to a sigm function.

All parameters in the kernels, bias terms and weight vectors are automatically learned by back-propagation with a learning rate of 0.1.

2.4. Training Methodology

The locations of positive training samples are automatically obtained around the bright regions in the fluorescence images. To enhance tolerance to variations in intensity and rotation, we rotate the phase contrast microscopy images every 30 degrees and automatically crop positive samples from them.
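The 30-degree rotation augmentation can be sketched as below. The nearest-neighbour resampling and zero padding outside the image are assumptions for illustration; any standard image-rotation routine would serve equally well.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation of a 2-D patch about its centre (minimal sketch)."""
    th = np.deg2rad(deg)
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    # inverse-map each output pixel back into the source image
    sy = cy + (ys - cy) * np.cos(th) - (xs - cx) * np.sin(th)
    sx = cx + (ys - cy) * np.sin(th) + (xs - cx) * np.cos(th)
    sy, sx = np.rint(sy).astype(int), np.rint(sx).astype(int)
    ok = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
    out = np.zeros_like(img)           # zero padding outside the source
    out[ok] = img[sy[ok], sx[ok]]
    return out

def augment(patch):
    """One positive patch -> 12 rotated copies (every 30 degrees)."""
    return [rotate_nn(patch, a) for a in range(0, 360, 30)]
```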

It is important to build a comprehensive negative training dataset in order to precisely define the classification boundary between positive and negative samples. But collecting negative samples covering every possible location in the background may introduce many repetitive samples and cause a large class imbalance between positive and negative samples. Thus, how to collect a representative set of negative samples becomes crucial.

We propose a bootstrapping method to collect representative negative samples from a limited set of training images.


Figure 5. Iteratively training a detector by adding the false positives from the previous iteration to the training dataset for the next iteration. (a) ROC of each iteration; (b) the 1st iteration; (c) the 3rd iteration; (d) the 5th iteration. Red: false positive; Green: true positive; Blue: missed detection.

Figure 6. Confidence scores of false positive samples in each ROUND. The confidence score is the output value of the classifier that classifies a false positive sample as positive.

Unlike other training methodologies, which train classifiers with all the found false positives until the performance converges, our approach continues to refine the classification boundary by training with the most representative samples among the false positives.

The traditional boosting method trains the detector iteratively: after one iteration, the detector collects false positives and adds them to the negative training dataset, and then starts a new training iteration. As shown in Fig. 5(a), the traditional iterative training ends when the performance converges. However, the detector obtained after this iterative training still produces quite a few false positives (Fig. 5(d)).

When we apply the trained detector of each iteration to all the false positives, a large number of the false positives generate low responses, as shown in the first ROUND of training in Fig. 6, which means they can be classified as negative samples relatively easily. As shown in Fig. 7, these relatively easy false positive samples are close to the classification boundary. But the remaining small number of false positive samples with relatively high responses are hard samples, far away from the classification boundary. To train a classifier to better classify those hard samples, they should gain more weight in the training.

Suppose we have N iterations in one ROUND of iterative training; we then apply these N detectors to all the false positives collected from all iterations. Each false positive sample thus has an N×1 confidence score feature vector. The confidence score is the output of a classifier, indicating how likely a false positive sample is to be classified as positive: the higher the confidence score, the more likely the sample is classified as positive. We simply apply the k-means clustering method to classify


Figure 7. Illustration of easy and hard false positive samples. Easy false positive samples (yellow circles) are close to the decision boundary, while hard false positive samples (red circles) are far from the decision boundary and are falsely classified as positives.

these false positives, based on their confidence score feature vectors, into two groups: easy samples, which have low confidence scores, and hard samples, which have high confidence scores. To enhance the influence of the hard samples on the training, we start another ROUND of iterative training by adding only the hard samples to the previous negative training dataset. With this iterative training, only a small number of false positives are collected: as shown in Fig. 6, the number of false positive samples drops from 11000 in the first ROUND to 2500 in the second ROUND. The proportion of samples with relatively high scores in the second ROUND is larger than that in the first ROUND, so hard false positive samples gain more weight in the new training ROUND.
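The easy/hard split can be sketched with a small 2-means clustering on the per-sample score vectors. The extremal-point initialization and the fixed iteration count are assumptions for illustration; any standard k-means routine would do.

```python
import numpy as np

def split_hard(score_vecs, n_steps=25):
    """2-means on N x 1 confidence-score vectors; the high-score cluster is 'hard'."""
    v = np.asarray(score_vecs, dtype=float)
    # initialize the two centroids at the lowest- and highest-scoring samples
    c = np.stack([v[v.mean(1).argmin()], v[v.mean(1).argmax()]])
    for _ in range(n_steps):
        d = ((v[:, None, :] - c[None, :, :]) ** 2).sum(-1)  # squared distances
        label = d.argmin(1)                                  # nearest centroid
        for k in (0, 1):
            if (label == k).any():
                c[k] = v[label == k].mean(0)                 # recompute centroids
    return label == 1   # True for hard (high-confidence) false positives
```

Only samples where the returned mask is True would be appended to the negative training set for the next ROUND.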

3. Experimental Results

3.1. Evaluation Metric

We acquired 45 phase contrast microscopy images, each with a corresponding fluorescence image as ground truth. We randomly select 35 images for training and the remaining 10 for testing. To avoid bias, we repeat this random split 5 times, and the evaluation result is the average performance over the 5 trials. As defined in PASCAL [15], a detection is a True Positive (TP) if the area of the intersection between the detection window and the ground truth exceeds 50 percent of their union area; otherwise it is a False Positive (FP). If a cell is not detected, it is missed (False Negative, FN). We define precision as P = |TP|/(|TP| + |FP|), recall as R = |TP|/(|TP| + |FN|), and the F score as the harmonic mean of precision and recall.
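The evaluation metric above can be sketched as follows. The greedy one-to-one matching of detections to ground-truth windows is an assumption, since the matching order is not specified; boxes are (x1, y1, x2, y2).

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def prf(detections, truths, thr=0.5):
    """Greedy matching: a detection is a TP if IoU with an unmatched truth > thr."""
    matched, tp = set(), 0
    for d in detections:
        j = max((j for j in range(len(truths)) if j not in matched),
                key=lambda j: iou(d, truths[j]), default=None)
        if j is not None and iou(d, truths[j]) > thr:
            matched.add(j)
            tp += 1
    fp, fn = len(detections) - tp, len(truths) - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```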

3.2. Comparison of Hand-Crafted Features and Features Learned by DCNN

The Histogram-of-Gradient (HoG, [11]) feature can be used to extract regional gradient information, capturing the shape of objects. The Histogram of Color (HoC) of image patches may be considered as a feature to separate cells from the background; we distribute the color of image patches into 32 bins. In the experiment we feed the HoG + HoC features to a Support Vector Machine (SVM, [12]) to compare with the DCNN.

Figure 8. The convergence of DCNN and SVM.

Table 1. Evaluation of the proposed training method.
                          F Score
1st ROUND HoG + HoC       75.4%
1st ROUND DCNN            91.2%
2nd ROUND HoG + HoC       78.4%
2nd ROUND DCNN            97.0%
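A minimal sketch of the 32-bin HoC feature described above. Since the phase contrast images here are monochrome, this treats "color" as intensity; the bin range and L1 normalization are assumptions for illustration.

```python
import numpy as np

def hoc(patch, n_bins=32, lo=0.0, hi=1.0):
    """32-bin intensity histogram of a patch, L1-normalized (details assumed)."""
    h, _ = np.histogram(patch, bins=n_bins, range=(lo, hi))
    return h / max(h.sum(), 1)   # normalize so the feature sums to 1
```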

The average number of positive training samples over the five trials is 1400. Training the DCNN classifier takes 2 hours and training the SVM takes around 0.5 hour. Note that we only use the fluorescence images as ground truth; no information from the fluorescence images is extracted as image features for CTC detection.

As shown in Tab. 1, the F score of SVM + HoG + HoC is 78.4% and that of the DCNN is 97%. The F score of the DCNN is larger than that of SVM + HoG + HoC by 18.6 percentage points in the second ROUND. This result indicates that the DCNN learns better features than HoG + HoC for detecting CTCs. Some detection examples of the DCNN are shown in Fig. 9.

3.3. Validation of the Proposed Training Method

We evaluate our training methodology for both the SVM and DCNN classifiers. Fig. 8 shows the F score at every iteration of 2 ROUNDs. Both the SVM and DCNN classifiers converge within 5 iterations in each ROUND. The performance of both the DCNN and the SVM with human-designed features improves after the first ROUND, which shows that our training method is effective in finding representative samples. Without our training method, the DCNN achieves an F score of 91.2% and the SVM + HoG + HoC achieves 75.4%. With our training method, the F scores increase to 97% and 78.4% respectively, as summarized in Table 1.


Figure 9. Samples of CTC detection.

4. Conclusion and Discussion

In this paper, we proposed an image-based CTC detection system based on a DCNN, together with an effective training method that targets finding the most representative training samples. Comparison of the DCNN and SVM classifiers shows that our DCNN classifier works better, and that the proposed training method improves classifier performance by reducing the redundancy in negative samples. The high performance of the DCNN on a challenging dataset shows that it is a promising approach to automated CTC detection in a non-invasive way. Our image-based CTC detection does not depend on cell marker expression and is not limited to any particular cancer type. Our detection approach could be adapted to microfluidic devices for accurate and rapid enumeration of CTCs in clinical blood samples for early diagnosis and treatment monitoring.

References

[1] Emilian Racila et al., "Detection and characterization of carcinoma cells in the blood," Proceedings of the National Academy of Sciences, vol. 95, no. 8, pp. 4589, 1998.

[2] Tanja Fehm et al., "Cytogenetic evidence that circulating epithelial cells in patients with carcinoma are malignant," Clinical Cancer Research, vol. 8, no. 7, pp. 2073-2084, 2002.

[3] Alex Krizhevsky et al., "ImageNet classification with deep convolutional neural networks," The Twenty-sixth Annual Conference on Neural Information Processing Systems (NIPS), 2012.

[4] Koray Kavukcuoglu et al., "Learning convolutional feature hierarchies for visual recognition," The Twenty-fourth Annual Conference on Neural Information Processing Systems (NIPS), 2010.

[5] Jeffrey Allard et al., "Tumor cells circulate in the peripheral blood of all major carcinomas but not in healthy subjects or patients with nonmalignant diseases," Clinical Cancer Research, vol. 10, no. 20, pp. 6897-6904, 2004.

[6] Klaus Pantel et al., "Detection, clinical relevance and specific biological properties of disseminating tumour cells," Nature Reviews Cancer, vol. 8, no. 5, pp. 329-340, 2008.

[7] Philip T. H. Went et al., "Frequent EpCAM protein expression in human carcinomas," Human Pathology, vol. 35, no. 1, pp. 122-128, 2004.

[8] Robert Konigsberg et al., "Detection of EpCAM positive and negative circulating tumor cells in metastatic breast cancer patients," Human Pathology, vol. 50, no. 5, pp. 700-710, 2011.

[9] Udo Bilkenroth et al., "Detection and enrichment of disseminated renal carcinoma cells from peripheral blood by immunomagnetic cell separation," International Journal of Cancer, vol. 92, no. 4, pp. 577-582, 2001.

[10] Hadi Esmaeilsabzali et al., "Detection and isolation of circulating tumor cells: principles and methods," Biotechnology Advances, vol. 31, no. 7, pp. 1063-1084, 2013.

[11] Navneet Dalal and Bill Triggs, "Histograms of oriented gradients for human detection," IEEE CVPR, vol. 1, pp. 886-893, 2005.

[12] Corinna Cortes and Vladimir Vapnik, "Support-vector networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.

[13] Dan C. Ciresan et al., "Mitosis detection in breast cancer histology images with deep neural networks," The 16th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 411-418, 2013.

[14] Mehdi Habibzadeh et al., "White blood cell differential counts using convolutional neural networks for low resolution images," Artificial Intelligence and Soft Computing, pp. 263-274, 2013.

[15] Mark Everingham et al., "The PASCAL visual object classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303-338, 2010.

[16] Carl Magnus Svensson et al., "Automated detection of circulating tumor cells with naive Bayesian classifiers," Cytometry Part A, vol. 85, no. 6, pp. 501-511, 2014.
