北京大学学报(医学版) ›› 2020, Vol. 52 ›› Issue (6): 1107-1111. doi: 10.19723/j.issn.1671-167X.2020.06.020

• 论著 •

基于三维动态照相机的正常人面部表情可重复性研究

邱天成,刘筱菁,薛竹林,李自力   

  1. 北京大学口腔医学院·口腔医院,口腔颌面外科 国家口腔疾病临床医学研究中心 口腔数字化医疗技术和材料国家工程实验室 口腔数字医学北京市重点实验室,北京 100081
  • 收稿日期:2018-10-10 出版日期:2020-12-18 发布日期:2020-12-13
  • 通讯作者: 李自力 E-mail:kqlzl@sina.com

Evaluation of the reproducibility of non-verbal facial expressions in normal persons using a dynamic stereophotogrammetric system

Tian-cheng QIU, Xiao-jing LIU, Zhu-lin XUE, Zi-li LI   

  1. Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
  • Received:2018-10-10 Online:2020-12-18 Published:2020-12-13
  • Contact: Zi-li LI E-mail:kqlzl@sina.com

摘要:

目的:测量正常人群表情运动的可重复性,为患者手术等干预措施的效果评价提供参照数据。方法:征集面部结构大致对称、无面部运动及感觉神经障碍病史的志愿者共15名(男性7名,女性8名,中位年龄25岁)。使用三维动态照相机记录研究对象的面部表情运动(闭唇笑、露齿笑、噘嘴、鼓腮),采集频率为60帧/s,挑选每个面部表情中最有特征的6帧图像,分别为静止状态时图像(T0)、从静止状态至最大运动状态时的中间图像(T1)、刚达到最大运动状态时的图像(T2)、最大运动状态将结束时的图像(T3)、最大运动状态至静止状态时的中间图像(T4)及动作结束时的静止图像(T5)。采集两次面部表情三维图像数据,间隔1周以上。以静止图像(T0)为参照,将运动状态系列图像(T1~T5)与之进行图像配准融合,采用区域分析法量化分析前后两次同一表情相同关键帧图像与对应静止状态三维图像的三维形貌差异,以均方根(root mean square, RMS)表示。结果:闭唇笑、露齿笑以及鼓腮表情中,前后两次的对应时刻(T1~T5)图像与相应T0时刻的静止图像配准融合,计算得出的RMS值差异无统计学意义。噘嘴动作过程中,前后两次T2时刻对应面部三维图像与相应T0时刻静止图像配准融合,得出的RMS值差异有统计学意义(P<0.05),其余时刻的图像差异无统计学意义。结论:正常人的面部表情具有一定的可重复性,但是噘嘴动作的可重复性较差,三维动态照相机能够量化记录及分析面部表情动作的三维特征。
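The abstract quantifies the 3D morphological difference between each registered key frame and the rest frame as a root mean square (RMS) value. For reference, a standard formulation is sketched below; the exact sampling of the analysis region and the sign convention depend on the analysis software, which is not specified here, so this is a generic definition rather than necessarily the authors' exact computation:

$$\mathrm{RMS}=\sqrt{\frac{1}{n}\sum_{i=1}^{n} d_i^{2}}$$

where $d_i$ is the closest-point distance between the two registered surfaces at the $i$-th sampled point of the analysis region and $n$ is the number of sampled points.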

关键词: 三维成像, 面部表情, 结果可重复性

Abstract:

Objective: To assess the reproducibility of non-verbal facial expressions (smile with lips closed, smile with lips open, lip purse, and cheek puff) in normal persons using a dynamic three-dimensional (3D) imaging system and to provide reference data for future research. Methods: In this study, 15 adults (7 males and 8 females) without obvious facial asymmetry or a history of facial nerve dysfunction were recruited. Each participant was seated upright in front of the 3D imaging system in natural head position so that the whole face was captured by all six cameras. The dynamic 3D system captured 60 3D images per second. Four facial expressions were recorded: smile with lips closed, smile with lips open, lip purse, and cheek puff. Before recording, the subjects were instructed to practice the facial expressions to develop muscle memory. During recording, each facial expression took about 3 to 4 seconds. At least 1 week later, the procedure was repeated. The rest position (T0) was taken as the base frame, and five key frames were selected: the first quartile of the expression (T1), the frame just after reaching the maximum state of the expression (T2), the frame just before the end of the maximum state (T3), the third quartile of the expression (T4), and the end of the motion (T5). Using a stable part of the face, such as the forehead, each key frame (T1-T5) of each expression was aligned to the corresponding frame at rest (T0), and the root mean square (RMS) deviation between each key frame and its corresponding rest frame was calculated. The Wilcoxon signed-rank test was applied to assess statistical differences between the corresponding frames of the repeated facial expressions. Results: Smile with lips closed, smile with lips open, and cheek puff were reproducible, whereas lip purse was not: a statistically significant difference was found at the T2 frame of the repeated lip purse movement. Conclusion: Dynamic 3D imaging can be used to evaluate the reproducibility of facial expressions. Compared with qualitative and two-dimensional analyses, dynamic 3D images represent facial expressions more faithfully, which makes such research more reliable.
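As a minimal illustration of the measurement pipeline described above (not the authors' actual software, which the abstract does not name), the sketch below assumes two facial surfaces that are already aligned on a stable region such as the forehead, computes the RMS closest-point deviation, and outlines how per-subject RMS values from the two sessions would be compared with a paired Wilcoxon signed-rank test. All function and variable names are illustrative.

```python
# Minimal sketch under stated assumptions: the two point clouds are already
# registered in a common coordinate system (alignment on the forehead region
# is assumed to have been done beforehand).
import numpy as np
from scipy.spatial import cKDTree

def rms_deviation(expression_pts, rest_pts):
    """RMS of closest-point distances from an expression frame to the rest frame.

    Both arguments are (N, 3) arrays of 3D surface points.
    """
    tree = cKDTree(rest_pts)                  # nearest-neighbour search structure
    dists, _ = tree.query(expression_pts)     # unsigned closest-point distances
    return float(np.sqrt(np.mean(dists ** 2)))

# Hypothetical usage: one RMS value per subject for a given expression and key
# frame (e.g. T3 of the smile with lips closed), computed for each session and
# then compared with scipy.stats.wilcoxon(rms_session1, rms_session2).
# rms_session1 = [rms_deviation(expr, rest) for expr, rest in session1_pairs]
# rms_session2 = [rms_deviation(expr, rest) for expr, rest in session2_pairs]
```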

Key words: Three-dimensional images, Facial expression, Reproducibility of results

中图分类号: R782.2

图1

三维动态录像所采用的4个动作表情

图2

面部表情整体运动过程中6帧较重要的图像

图3

闭唇笑动作T2时刻与T0时刻图像配准融合后,所有对应采集点的均方根值

表1

闭唇笑表情的T3时刻所有研究对象前后两次测量对应的RMS值

No. of subject    RMS (first time)/mm    RMS (second time)/mm
Participant 1 1.80 1.13
Participant 2 1.04 1.29
Participant 3 1.11 1.10
Participant 4 1.71 1.23
Participant 5 1.13 2.06
Participant 6 1.09 1.48
Participant 7 1.17 1.18
Participant 8 0.88 0.82
Participant 9 1.55 1.80
Participant 10 3.14 2.68
Participant 11 2.69 1.34
Participant 12 0.79 1.03
Participant 13 1.18 0.80
Participant 14 2.33 1.93
Participant 15 1.43 1.82

表2

面部表情的可重复性

Items    P value (T1)    P value (T2)    P value (T3)    P value (T4)    P value (T5)
Standard smile 0.173 0.256 0.477 0.394 0.140
Maximum smile 0.132 0.069 0.124 0.460 0.776
Lip purse 0.079 0.027 0.932 0.513 0.093
Cheek puff 0.570 0.691 0.118 0.691 0.233
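
As a rough cross-check of Tables 1 and 2, the paired RMS values from Table 1 (smile with lips closed, T3) can be fed directly to a Wilcoxon signed-rank test; this should give a P value close to the 0.477 reported for the standard smile at T3 in Table 2, with small differences possible depending on how ties are handled and whether an exact or approximate P value is computed.

```python
# Wilcoxon signed-rank test on the paired RMS values listed in Table 1
# (smile with lips closed, T3 frame, first vs. second session).
from scipy.stats import wilcoxon

first  = [1.80, 1.04, 1.11, 1.71, 1.13, 1.09, 1.17, 0.88,
          1.55, 3.14, 2.69, 0.79, 1.18, 2.33, 1.43]
second = [1.13, 1.29, 1.10, 1.23, 2.06, 1.48, 1.18, 0.82,
          1.80, 2.68, 1.34, 1.03, 0.80, 1.93, 1.82]

stat, p = wilcoxon(first, second)
print(stat, p)   # P should be roughly 0.48, close to the tabulated 0.477
```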