Application of U-shaped convolutional neural network in auto segmentation and reconstruction of 3D prostate model in laparoscopic prostatectomy navigation

  • Ye YAN ,
  • Hai-zhui XIA ,
  • Xu-sheng LI ,
  • Wei HE ,
  • Xue-hua ZHU ,
  • Zhi-ying ZHANG ,
  • Chun-lei XIAO ,
  • Yu-qing LIU ,
  • Hua HUANG ,
  • Liang-hua HE ,
  • Jian LU
Jian LU is a chief physician, associate professor and doctoral supervisor in the Department of Urology, Peking University Third Hospital. His main academic appointments include: member of the Urologic and Andrologic Engineering Committee of the Urology Branch of the Chinese Medical Association; member of the Medical Device Classification Technical Committee and the Expert Advisory Committee for Review of the National Medical Products Administration; member of the Ultrasound Molecular Imaging and Artificial Intelligence Committee of the Chinese Medical Doctor Association; member of the Sexual Medicine Committee of the China Sexology Association; member of the Enhanced Recovery After Surgery Branch of the China Medical Education Association; and member of the urology groups of the Oncology Branch and the Rare Disease Branch of the Beijing Medical Association. His main research interests are tumor pathogenesis and medical image processing. He has long been engaged in clinical and basic research in urology, with extensive clinical experience and a strong interdisciplinary grounding in medicine and information science. He has led a number of national and provincial/ministerial research projects, including a digital diagnosis and treatment project of the National Key Research and Development Program, and grants from the National Natural Science Foundation of China and the Beijing Natural Science Foundation. He has received three provincial/ministerial awards, including a second prize of the Science and Technology Progress Award of the Ministry of Education for Outstanding Scientific Research Achievements in Higher Education Institutions and a Chinese Medical Science and Technology Award. He has published 16 SCI papers in journals such as The Journal of Urology, published 110 papers in Chinese core journals such as the Journal of Peking University (Health Sciences) and the Chinese Journal of Urology, and co-authored four professional monographs.

Received date: 2019-03-18

Online published: 2019-06-26

Supported by

Supported by the Beijing Natural Science Foundation (L172012), the National Natural Science Foundation of China (61871004), the Fundamental Research Funds for the Central Universities: Peking University Medicine Fund of Fostering Young Scholars' Scientific & Technological Innovation (BMU2018ZHYL012), and the Fundamental Research Funds for the Central Universities: Tongji University (kx0080020173428)



Cite this article

Ye YAN, Hai-zhui XIA, Xu-sheng LI, Wei HE, Xue-hua ZHU, Zhi-ying ZHANG, Chun-lei XIAO, Yu-qing LIU, Hua HUANG, Liang-hua HE, Jian LU. Application of U-shaped convolutional neural network in auto segmentation and reconstruction of 3D prostate model in laparoscopic prostatectomy navigation[J]. Journal of Peking University (Health Sciences), 2019, 51(3): 596-601. DOI: 10.19723/j.issn.1671-167X.2019.03.033

Abstract

Objective: To investigate the efficacy of intraoperative cognitive navigation in laparoscopic radical prostatectomy using 3D prostatic models created by a U-shaped convolutional neural network (U-net) and reconstructed through the Medical Image Interaction Tool Kit (MITK) platform. Methods: A total of 5 000 prostate cancer magnetic resonance (MR) images with manual annotations were used to train a modified U-net, and a clinically demand-oriented, stable and efficient fully convolutional neural network algorithm was constructed. The MR images were cropped and segmented automatically by the modified U-net, and the segmentation data were automatically reconstructed on the MITK platform according to our own protocols. The modeling data were output in STL format, and the prostate models were displayed on an Android tablet during the operation to help achieve cognitive navigation. Results: Based on the original U-net architecture, we established a modified U-net from a 201-case MR imaging training set. The network performance was tested and compared with human segmentations and other segmentation networks on the same testing data set. Automatic segmentation of multiple structures (such as the prostate, prostate tumors, seminal vesicles, rectum, neurovascular bundles and dorsal venous complex) was successfully achieved. Secondary automatic 3D reconstruction was then carried out on the MITK platform. During surgery, 3D models of the prostatic area were displayed on an Android tablet, and cognitive navigation was successfully achieved. Intraoperative organ visualization demonstrated the structural relationships among the key structures in great detail, and the degree of tumor invasion was visualized directly. Conclusion: The modified U-net was able to achieve automatic segmentation of important structures of the prostatic area. Secondary 3D model reconstruction and demonstration could provide intraoperative visualization of vital structures of the prostatic area, helping surgeons achieve cognitive fusion navigation. The application of these techniques could reduce positive surgical margin rates and may improve the efficacy and oncological outcomes of laparoscopic prostatectomy.
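The U-shaped data flow named in the abstract (a contracting encoder path, an expanding decoder path, and skip connections that concatenate encoder features into the decoder) can be sketched at the level of tensor shapes as follows. This is an illustrative NumPy mock-up of the plumbing only, not the authors' trained network: the learned convolutions are omitted, and `depth=3` and the 64 x 64 input size are arbitrary assumptions for the example.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling: halves spatial resolution (contracting path)."""
    h, w = x.shape[-2] // 2, x.shape[-1] // 2
    return x.reshape(*x.shape[:-2], h, 2, w, 2).mean(axis=(-3, -1))

def upsample(x):
    """Nearest-neighbour 2x upsampling (expanding path)."""
    return x.repeat(2, axis=-2).repeat(2, axis=-1)

def unet_forward(image, depth=3):
    """Shape-level sketch of one U-net pass over a (C, H, W) image."""
    skips = []
    x = image
    for _ in range(depth):           # encoder: save skip feature, then downsample
        skips.append(x)
        x = downsample(x)
    for skip in reversed(skips):     # decoder: upsample, then concatenate the skip
        x = upsample(x)
        x = np.concatenate([x, skip], axis=0)  # channel-wise skip connection
    return x

out = unet_forward(np.zeros((1, 64, 64)))
print(out.shape)  # → (4, 64, 64): original resolution restored, channels accumulated
```

The point of the sketch is the shape bookkeeping that makes U-net suitable for segmentation: the decoder returns to the full input resolution, so a final 1x1 convolution (omitted here) can emit a per-pixel mask for each target structure.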
