Application of U-shaped convolutional neural network in auto segmentation and reconstruction of 3D prostate model in laparoscopic prostatectomy navigation

  • Ye YAN,
  • Hai-zhui XIA,
  • Xu-sheng LI,
  • Wei HE,
  • Xue-hua ZHU,
  • Zhi-ying ZHANG,
  • Chun-lei XIAO,
  • Yu-qing LIU,
  • Hua HUANG,
  • Liang-hua HE,
  • Jian LU

Received date: 2019-03-18

Online published: 2019-06-26

Supported by

Beijing Natural Science Foundation (L172012), the National Natural Science Foundation of China (61871004), the Fundamental Research Funds for the Central Universities: Peking University Medicine Fund of Fostering Young Scholars' Scientific & Technological Innovation (BMU2018ZHYL012), and the Fundamental Research Funds for the Central Universities: Tongji University (kx0080020173428)

Abstract

Objective: To investigate the efficacy of intraoperative cognitive navigation in laparoscopic radical prostatectomy using 3D prostatic models segmented by a U-shaped convolutional neural network (U-net) and reconstructed on the Medical Imaging Interaction Toolkit (MITK) platform. Methods: A total of 5 000 manually annotated prostate cancer magnetic resonance (MR) images were used to train a modified U-net, yielding a clinically oriented, stable and efficient fully convolutional neural network for segmentation. The MR images were cropped and segmented automatically by the modified U-net, and the segmentation data were reconstructed automatically on the MITK platform according to our own protocols. The resulting models were exported in STL format, and the 3D prostate models were displayed on an Android tablet during the operation to support cognitive navigation. Results: Based on the original U-net architecture, we established a modified U-net from a 201-case MR imaging training set. Its performance was evaluated on a fixed testing data set and compared with manual segmentations and with other segmentation networks. Automatic segmentation of multiple structures (prostate, prostate tumors, seminal vesicles, rectum, neurovascular bundles and dorsal venous complex) was successfully achieved. Automatic 3D reconstruction was then carried out on the MITK platform. During surgery, the 3D models of the prostatic region were displayed on an Android tablet, and cognitive navigation was successfully achieved. Intraoperative organ visualization demonstrated the relationships among the key structures in great detail, and the degree of tumor invasion could be observed directly. Conclusion: The modified U-net achieved automatic segmentation of the important structures of the prostatic region. Subsequent 3D reconstruction and display provided intraoperative visualization of these vital structures, helping surgeons achieve cognitive fusion navigation. The application of these techniques could ultimately reduce positive surgical margin rates and may improve the efficacy and oncological outcomes of laparoscopic prostatectomy.
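The abstract outlines a segmentation pipeline built on a modified U-net, but the modifications (depth, channel widths, loss function, pre- and post-processing) are not specified here. The sketch below is a minimal, generic 2D U-net in PyTorch, assuming single-channel MR slices and a seven-class label map (background plus the six structures listed in the Results); it illustrates only the encoder-decoder-with-skip-connections design and should not be read as the authors' network.

```python
# Minimal, generic 2D U-net sketch (PyTorch). All architectural choices below
# (three encoder levels, 64/128/256/512 channels, 7 output classes) are
# illustrative assumptions, not the authors' modified U-net.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=7):
        # num_classes=7 is a placeholder: background plus prostate, tumor,
        # seminal vesicles, rectum, neurovascular bundles and dorsal venous complex.
        super().__init__()
        self.enc1 = double_conv(in_channels, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(256, 512)
        self.up3 = nn.ConvTranspose2d(512, 256, kernel_size=2, stride=2)
        self.dec3 = double_conv(512, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                      # skip connection 1
        e2 = self.enc2(self.pool(e1))          # skip connection 2
        e3 = self.enc3(self.pool(e2))          # skip connection 3
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # per-pixel class logits

# Example: segment a single 256x256 MR slice (batch of 1, one channel).
if __name__ == "__main__":
    model = UNet()
    logits = model(torch.randn(1, 1, 256, 256))
    labels = logits.argmax(dim=1)              # predicted label map, shape (1, 256, 256)
    print(labels.shape)
```

In the workflow described in the abstract, the per-slice label maps are subsequently reconstructed into 3D surface models on the MITK platform and exported as STL for display on the tablet; the sketch above covers only the segmentation step.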

Cite this article

Ye YAN, Hai-zhui XIA, Xu-sheng LI, Wei HE, Xue-hua ZHU, Zhi-ying ZHANG, Chun-lei XIAO, Yu-qing LIU, Hua HUANG, Liang-hua HE, Jian LU. Application of U-shaped convolutional neural network in auto segmentation and reconstruction of 3D prostate model in laparoscopic prostatectomy navigation[J]. Journal of Peking University (Health Sciences), 2019, 51(3): 596-601. DOI: 10.19723/j.issn.1671-167X.2019.03.033
