Computed Tomography Scan Skull Segmentation Using Deep Learning
John Joshua, Ward Melville High School

Hongyi Duanmu, Dr. Fusheng Wang, Dr. Jinkoo Kim
Department of Biomedical Informatics, Stony Brook University

Objective

For years, doctors have manually annotated CT images to separate the different structures in a scan. This process is long and tedious. To address it, we have developed, trained, and tested a model that segments the skull from the other parts of a CT scan. Our goal is a fully automated system that annotates these scans as accurately as a doctor. The need for this technology in the medical field is immense.

Figure 1: Original raw CT scan at slice 247 (CT-03 in dataset).

Figure 2: Doctor's annotated CT scan at slice 247, the target for our automated annotation (CT-03 in dataset).

Dataset

The dataset consists of 9 raw CT images (CT-03 – CT-11) and 9 corresponding annotated CT images (seg-03 – seg-11). Through trial and error, we successfully trained a model on the raw data to match the annotated data.

Figure: raw CT data (CT-03 – CT-11) shown beside the corresponding annotated data (seg-03 – seg-11).
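Since the Challenges section mentions MATLAB file conversion, the volumes may have been exported as .mat files. The following is a minimal sketch of pairing each raw scan with its annotation under that assumption; the file names and the 'vol' variable key are hypothetical placeholders, not the project's actual layout.

```python
# Minimal sketch: pair each raw CT volume with its doctor-annotated segmentation.
# Assumes MATLAB .mat exports; 'vol' key and file names are illustrative only.
import scipy.io as sio

def load_pair(index):
    """Load one raw CT volume and its annotation (index 3..11)."""
    raw = sio.loadmat('CT-%02d.mat' % index)['vol']
    seg = sio.loadmat('seg-%02d.mat' % index)['vol']
    assert raw.shape == seg.shape, 'raw scan and annotation must align voxel-for-voxel'
    return raw, seg

# Volumes CT-03 ... CT-11 paired with seg-03 ... seg-11
dataset = [load_pair(i) for i in range(3, 12)]
```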

Methodology

- We are using fully convolutional neural networks to train our model (a minimal illustrative sketch follows this list).
- Software used to run training: SSH Secure Shell
- GPU used for training: Tesla K80
- Programming language: Python 2.7 (Theano)
- Repository: LiviaNET
- Number of epochs used in training: 10
- Approximate time of successful training: 20 hours
- Approximate time of successful testing: 3 hours
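The actual training used the Theano-based LiviaNET repository. The sketch below is only an illustration of what a small 3D fully convolutional segmentation network and one training step look like; it is written in PyTorch rather than the Theano stack used here, and the layer widths and patch size are arbitrary assumptions, not LiviaNET's architecture.

```python
# Illustrative 3D fully convolutional network for voxel-wise skull/background
# segmentation. PyTorch is used for brevity; the project itself used Theano/LiviaNET.
import torch
import torch.nn as nn

class TinyFCN3D(nn.Module):
    def __init__(self, in_channels=1, n_classes=2):
        super(TinyFCN3D, self).__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1x1 convolution produces a per-voxel class score; there are no
        # fully connected layers, which is what makes the network fully convolutional.
        self.classifier = nn.Conv3d(32, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# One training step on a random patch, just to show the shape of the loop.
model = TinyFCN3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patch = torch.randn(1, 1, 32, 32, 32)           # (batch, channel, depth, height, width)
labels = torch.randint(0, 2, (1, 32, 32, 32))   # voxel-wise skull / background labels

optimizer.zero_grad()
loss = loss_fn(model(patch), labels)
loss.backward()
optimizer.step()
```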

Challenges

- Separating connected components such as the teeth and any outside interference (see the sketch after this list).
- MATLAB file conversion.
- Various training errors, such as the sub-epoch cost not decreasing consistently or the cost becoming NaN (Not a Number).
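One straightforward way to drop outside interference is to keep only the largest connected component of the predicted mask. This is a minimal sketch using scipy.ndimage, included as an illustration of the idea rather than the exact post-processing used in this project.

```python
# Keep only the largest connected component of a binary skull mask, discarding
# small disconnected pieces (e.g. stray fragments or scanner artifacts).
# Illustrative only; not necessarily the post-processing used in this work.
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """mask: 3D boolean/0-1 array. Returns a mask containing only the biggest blob."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1   # component labels start at 1
    return labeled == biggest
```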


Results

All outer connected components were excluded, but some parts of the skull were cut off and the spine remained visible in the sagittal view.

Figures 3 & 4: (Skull-03) Final automated segmentation at slice 247, originally CT-03 in the dataset (axial view on left, sagittal view on right).
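The results above are judged visually. A standard way to quantify how closely the automated mask matches the doctor's annotation is the Dice overlap; it was not reported in the original work and is sketched here only as an illustration of how agreement could be measured.

```python
# Dice overlap between the automated mask and the doctor's annotation.
# Not part of the original evaluation; shown only to illustrate how the goal of
# matching a doctor's annotation could be quantified (1.0 = perfect agreement).
import numpy as np

def dice(pred, truth):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * overlap / total
```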

Conclusions

In the future, to obtain better results, we would need access to a larger dataset for training. Once that larger dataset has been obtained, we would most likely need to train on the data for a longer period of time, possibly for 20 epochs.

Acknowledgements and References

I would like to thank my mentor, PhD student Hongyi Duanmu, for his guidance throughout this project. I would also like to thank Dr. Fusheng Wang for his support and Dr. Jinkoo Kim for our dataset.

The repository used (LiviaNET) was developed by Jose Dolz, a post-doctoral researcher at the LIVIA laboratory of ÉTS in Montreal.