Original Article

Tooth segmentation and identification on cone-beam computed tomography with convolutional neural network based on spatial embedding information

  • Shishi BO,
  • Chengzhi GAO
  • 1. Department of General Dentistry Ⅱ, Peking University School and Hospital of Stomatology & National Center for Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China
    2. Department of Dentistry, Peking University People's Hospital, Beijing 100044, China

Received date: 2021-02-09

  Online published: 2024-07-23


How to cite this article

Shishi BO, Chengzhi GAO. Tooth segmentation and identification on cone-beam computed tomography with convolutional neural network based on spatial embedding information[J]. Journal of Peking University (Health Sciences), 2024, 56(4): 735-740. DOI: 10.19723/j.issn.1671-167X.2024.04.030

Abstract

Objective: To propose a novel neural network framework for tooth instance segmentation and tooth position identification based on cone-beam computed tomography (CBCT) voxel data. Methods: The proposed method comprised three convolutional neural network models built from Resnet modules and organized as "encoder-decoder" and U-Net architectures. The CBCT image was first down-sampled and a fixed-size region of interest (ROI) containing all the teeth was determined. The ROI was then passed through a two-branch encoder-decoder network that predicted a spatial embedding for each voxel of the input data, and a post-processing algorithm clustered these predicted embeddings to obtain the tooth instance segmentation. Tooth position identification was performed by another U-Net model trained for a multi-class segmentation task; for each tooth instance, the post-processing algorithm assigned the tooth number by voting over the class predictions of that instance's voxels. Finally, at the original spatial resolution, a U-Net model for fine tooth segmentation was trained with the region corresponding to each tooth as input; guided by the instance segmentation and tooth position results, this model processed the corresponding regions of the high-resolution CBCT images to obtain high-resolution tooth segmentation results. In this study, CBCT data of 59 cases with simple crown prostheses and implants were collected and manually labeled as the database, and statistical indicators were evaluated for the prediction results of the algorithm. To assess the performance of tooth segmentation and classification, the instance Dice similarity coefficient (IDSC) and the average Dice similarity coefficient (ADSC) were calculated. Results: The experimental results showed that the IDSC was 89.35% and the ADSC was 84.74%. After eliminating the data with prosthesis artifacts, a database of 43 samples was generated, on which the trained network performed better, with an IDSC of 90.34% and an ADSC of 87.88%. The framework achieved excellent performance on tooth segmentation and identification; voxels near intercuspation surfaces and fuzzy boundaries could be assigned to the correct instances by this framework. Conclusions: The results show that this method can not only successfully achieve 3D tooth instance segmentation but also identify all tooth notation numbers accurately, which has clinical practicability.
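For readers who want a concrete picture of the post-processing and evaluation steps described above, the sketch below illustrates how majority voting can assign a tooth number to each predicted instance and how a Dice-based score in the spirit of the ADSC can be computed. It is a minimal illustrative example in Python operating on hypothetical NumPy label volumes (pred_inst, pred_class, gt_class); it is not the authors' implementation, and the exact definitions of IDSC and ADSC used in the paper may differ in detail.

    # Minimal illustrative sketch (not the authors' code): majority-vote tooth
    # numbering and Dice-based evaluation on 3D label volumes. All inputs are
    # hypothetical integer NumPy arrays of identical shape; 0 denotes background.
    import numpy as np

    def dice(a, b):
        # Dice similarity coefficient between two binary masks.
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom > 0 else 1.0

    def vote_tooth_numbers(pred_inst, pred_class):
        # Assign each predicted instance the tooth number that the multi-class
        # network predicted for the majority of that instance's voxels.
        numbering = {}
        for inst_id in np.unique(pred_inst):
            if inst_id == 0:
                continue
            votes = pred_class[pred_inst == inst_id]
            votes = votes[votes > 0]  # ignore background votes
            if votes.size:
                numbering[int(inst_id)] = int(np.bincount(votes.astype(np.int64)).argmax())
        return numbering

    def average_dice(pred_inst, numbering, gt_class):
        # ADSC-style score: mean Dice over the tooth numbers present in the
        # ground truth, comparing each ground-truth tooth with the predicted
        # instance(s) voted to that number.
        scores = []
        for tooth in np.unique(gt_class):
            if tooth == 0:
                continue
            gt_mask = gt_class == tooth
            inst_ids = [i for i, n in numbering.items() if n == tooth]
            pred_mask = np.isin(pred_inst, inst_ids)
            scores.append(dice(pred_mask, gt_mask))
        return float(np.mean(scores)) if scores else 0.0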
