
Article Details

Journal of Biomedical Engineering Research : the official journal of the Korean Society of Medical & Biological Engineering, vol. 39, no. 5, 2018, pp. 213-219

합성곱 신경망을 활용한 위내시경 이미지 분류에서 전이학습의 효용성 평가
Evaluation of Transfer Learning in Gastroscopy Image Classification using Convolutional Neural Network

Sung Jin Park (Department of Biomedical Engineering, Gachon University College of Medicine); Young Jae Kim (Department of Biomedical Engineering, Gachon University College of Medicine); Dong Kyun Park (Department of Gastroenterology, Gachon University Gil Medical Center); Jun Won Chung (Department of Gastroenterology, Gachon University Gil Medical Center); Kwang Gi Kim (Department of Biomedical Engineering, Gachon University College of Medicine)
  • Abstract

    Stomach cancer is the most commonly diagnosed cancer in Korea. When gastric cancer is detected early, the 5-year survival rate is as high as 90%. Gastroscopy is a very useful method for early diagnosis, but the false-negative rate for gastric cancer in gastroscopy has been reported at 4.6~25.8% due to the subjective judgment of the physician. Recently, image classification performance in the image recognition field has been greatly advanced by convolutional neural networks. Convolutional neural networks perform well when diverse and sufficiently large amounts of data are available. However, medical data are not easy to access, and it is difficult to gather enough high-quality data with expert annotations. Therefore, this paper evaluates the efficacy of transfer learning in gastroscopy image classification and diagnosis. We obtained 787 gastroscopy images from Gil Medical Center, Gachon University: 200 normal images and 587 abnormal images. The images were resized and their intensity values normalized. For the ResNet50 architecture, the classification accuracy improved from 0.9 before transfer learning to 0.947 after, and the AUC improved from 0.94 to 0.98. For the InceptionV3 architecture, the accuracy improved from 0.862 to 0.924, and the AUC from 0.89 to 0.97. For the VGG16 architecture, the accuracy improved from 0.87 to 0.938, and the AUC from 0.89 to 0.98. The difference in performance of each CNN model before and after transfer learning was statistically significant when confirmed by a t-test (p
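    Below is a minimal sketch, in Keras/TensorFlow, of the kind of comparison the abstract describes: the same CNN backbone (here ResNet50; InceptionV3 and VGG16 follow the same pattern) built once with random initialization and once from ImageNet weights (transfer learning) for binary normal/abnormal classification. The input size, optimizer, learning rate, and the build_model helper are illustrative assumptions, not details taken from the paper.

    # A hypothetical sketch, not the authors' code. Assumed: 224x224 RGB inputs,
    # Adam optimizer with learning rate 1e-4, sigmoid output for normal/abnormal.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    IMG_SIZE = (224, 224)  # assumed size after the images are resized and normalized

    def build_model(transfer_learning: bool) -> tf.keras.Model:
        # ResNet50 backbone: ImageNet weights when transfer_learning=True,
        # random initialization otherwise. InceptionV3/VGG16 work the same way
        # via tf.keras.applications.InceptionV3 / tf.keras.applications.VGG16.
        base = tf.keras.applications.ResNet50(
            include_top=False,
            weights="imagenet" if transfer_learning else None,
            input_shape=IMG_SIZE + (3,),
            pooling="avg",
        )
        output = layers.Dense(1, activation="sigmoid")(base.output)  # P(abnormal)
        model = models.Model(base.input, output)
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
            loss="binary_crossentropy",
            metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
        )
        return model

    # Train both variants on identical data splits and compare accuracy/AUC;
    # repeated runs can be compared with a paired t-test (e.g. scipy.stats.ttest_rel).
    scratch_model = build_model(transfer_learning=False)
    pretrained_model = build_model(transfer_learning=True)

    Freezing the lower pretrained layers and fine-tuning only the upper layers is a common variant of this setup; the abstract does not specify which layers were retrained, so the sketch simply fine-tunes the whole network.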


  • Keywords

    Gastroscope; Convolutional Neural Network; Transfer Learning; ResNet; Inception; VGGNet

  • References (25)

    1. http://www.ncc.re.kr, accessed on Apr. 24, 2018.
    2. H. Katai et al., "Five-year survival analysis of surgically resected gastric cancer cases in Japan: a retrospective analysis of more than 100,000 patients from the nationwide registry of the Japanese Gastric Cancer Association (2001-2007)," Gastric Cancer, vol. 21, no. 1, pp. 144-154, 2018.
    3. H. A. Park et al., "The Korean guideline for gastric cancer screening," J. Korean Med. Assoc., vol. 58, no. 5, pp. 373-384, 2015.
    4. T. Hirasawa et al., "Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images," Gastric Cancer, pp. 1-8, 2018.
    5. S. Menon and N. Trudgill, "How commonly is upper gastrointestinal cancer missed at endoscopy? A meta-analysis," Endosc. Int. Open, vol. 2, no. 2, pp. E46-E50, 2014.
    6. Y. Shimodate et al., "Gastric superficial neoplasia: high miss rate but slow progression," pp. 722-726, 2017.
    7. O. Hosokawa, M. Hattori, K. Douden, H. Hayashi, K. Ohta, and Y. Kaizaki, "Difference in accuracy between gastroscopy and colonoscopy for detection of cancer," Hepatogastroenterology, vol. 54, pp. 442-444, 2007.
    8. M. Hafner, A. Gangl, M. Liedlgruber, A. Uhl, A. Vecsei, and F. Wrba, "Combining Gaussian Markov random fields with the discrete wavelet transform for endoscopic image classification," in Proc. 16th Int. Conf. Digital Signal Processing (DSP 2009), pp. 1-6, 2009.
    9. P. Wang, S. M. Krishnan, C. Kugean, and M. P. Tjoa, "Classification of endoscopic images based on texture and neural network," vol. 4, pp. 3691-3695, 2001.
    10. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, 2015.
    11. M. I. Razzak, S. Naz, and A. Zaib, "Deep Learning for Medical Image Processing: Overview, Challenges and Future," arXiv:1704.06825, pp. 1-30, 2017.
    12. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Adv. Neural Inf. Process. Syst., pp. 1-9, 2012.
    13. K. He, X. Zhang, S. Ren, and J. Sun, "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015.
    14. V. Gulshan et al., "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs," JAMA, vol. 316, no. 22, pp. 2402-2410, 2016.
    15. A. Esteva et al., "Dermatologist-level classification of skin cancer with deep neural networks," Nature, vol. 542, no. 7639, pp. 115-118, 2017.
    16. H. C. Shin et al., "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning," IEEE Trans. Med. Imaging, vol. 35, no. 5, pp. 1285-1298, 2016.
    17. G. Wimmer, A. Vecsei, and A. Uhl, "CNN Transfer Learning for the Automated Diagnosis of Celiac Disease," 2016.
    18. F. Zhang, X. Xu, and Y. Qiao, "Deep classification of vehicle makers and models: The effectiveness of pre-training and data enhancement," in Proc. 2015 IEEE Int. Conf. Robotics and Biomimetics (ROBIO), pp. 231-236, 2015.
    19. K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," pp. 1-14, 2014.
    20. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," 2015.
    21. K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
    22. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet Large Scale Visual Recognition Challenge," Int. J. Comput. Vis., vol. 115, pp. 211-252, 2015.
    23. A. Y. Ng, "Preventing 'Overfitting' of Cross-Validation Data," CEUR Workshop Proc., vol. 1542, pp. 33-36, 2015.
    24. R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, and D. Batra, "Grad-CAM: Why Did You Say That?," pp. 1-4, 2016.
    25. M. Lin, Q. Chen, and S. Yan, "Network In Network," pp. 1-10, 2013.
