
Building anatomically realistic jaw kinematics model from data

  • Original Article
  • Published in: The Visual Computer

Abstract

Recent work on anatomical face modeling focuses mainly on facial muscles and their activation. This paper considers a different aspect of anatomical face modeling: kinematic modeling of the jaw, i.e., the temporomandibular joint (TMJ). Previous work often relies on simple models of jaw kinematics, even though the actual physiological behavior of the TMJ is quite complex, allowing not only mouth opening but also some amount of sideways (lateral) and front-to-back (protrusion) motion. Fortuitously, the TMJ is the only joint whose kinematics can be accurately measured with optical methods, because the bones of the lower and upper jaw are rigidly connected to the lower and upper teeth. We construct a person-specific jaw kinematic model by asking an actor to exercise the entire range of motion of the jaw while keeping the lips open so that the teeth are at least partially visible. This performance is recorded with three calibrated cameras. We obtain highly accurate 3D models of the teeth with a standard dental scanner and use these models to reconstruct the rigid-body trajectories of the teeth from the videos (markerless tracking). The sampled relative rigid transformations between the lower and upper teeth are mapped to the Lie algebra of rigid body motions in order to linearize the rotational motion. Our main contribution is fitting these samples with a three-dimensional nonlinear model that parameterizes the entire range of motion of the TMJ. We show that standard principal component analysis (PCA) fails to capture the nonlinear trajectories of the moving mandible; however, these nonlinearities can be captured with a modification of autoencoder neural networks known as nonlinear PCA. By mapping back to the Lie group of rigid transformations, we obtain a parametrization of the jaw kinematics that provides an intuitive interface, allowing animators to explore realistic jaw motions in a user-friendly way.
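The mapping described above, from rigid transformations SE(3) to their Lie algebra se(3) and back, can be sketched as follows. This is a minimal illustration (not the authors' code, which is not included here), assuming NumPy and SciPy; the logarithm turns each relative pose of the lower teeth into a 6-vector "twist," a flat representation in which rotational motion is linearized and PCA-style models can be fit, and the exponential maps model output back to a rigid transformation.

```python
# Sketch of the SE(3) <-> se(3) maps used to linearize rigid jaw motion.
import numpy as np
from scipy.spatial.transform import Rotation


def _left_jacobian(omega):
    """Left Jacobian V of SO(3), so that translation t = V @ v in exp."""
    theta = np.linalg.norm(omega)
    K = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])  # cross-product matrix
    if theta < 1e-8:  # small-angle series to avoid division by ~0
        return np.eye(3) + 0.5 * K
    return (np.eye(3)
            + (1.0 - np.cos(theta)) / theta**2 * K
            + (theta - np.sin(theta)) / theta**3 * (K @ K))


def se3_log(T):
    """Map a 4x4 rigid transform to a 6-vector twist (omega, v)."""
    omega = Rotation.from_matrix(T[:3, :3]).as_rotvec()  # log of rotation
    v = np.linalg.solve(_left_jacobian(omega), T[:3, 3])
    return np.concatenate([omega, v])


def se3_exp(xi):
    """Inverse map: 6-vector twist back to a 4x4 rigid transform."""
    omega, v = xi[:3], xi[3:]
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(omega).as_matrix()
    T[:3, 3] = _left_jacobian(omega) @ v
    return T
```

In this flat twist space, each tracked frame becomes one 6-vector sample; a dimensionality-reduction model (PCA, or the nonlinear PCA autoencoder the paper adopts) is fit to these samples, and its low-dimensional parameters are decoded back to rigid transformations via `se3_exp`.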





Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant Numbers IIS-1617172, IIS-1622360 and IIS-1764071. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. Wenwu Yang was partially funded by the NSF of China (U1609215 and 61472363). Daniel Sýkora was funded by the Fulbright Commission in the Czech Republic, the Technology Agency of the Czech Republic under research program TE01020415 (V3C—Visual Computing Competence Center), and the Grant Agency of the Czech Technical University in Prague (No. SGS17/215/OHK3/3T/18). We also gratefully acknowledge the support of Research Center for Informatics (No. CZ.02.1.01/0.0/0.0/16_019/0000765), Activision, Adobe, and Mitsubishi Electric Research Labs (MERL) as well as hardware donation from NVIDIA Corporation.

Funding

This study was funded by the National Science Foundation (IIS-1617172, IIS-1622360 and IIS-1764071), NSF of China (U1609215 and 61472363), the Fulbright Commission in the Czech Republic, the Technology Agency of the Czech Republic under research program TE01020415 (V3C-Visual Computing Competence Center), the Grant Agency of the Czech Technical University in Prague (No. SGS17/215/OHK3/3T/18), and Research Center for Informatics (No. CZ.02.1.01/0.0/0.0/16_019/0000765).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Wenwu Yang.

Ethics declarations

Conflict of interest

Daniel Sýkora has received research grants from the Fulbright Commission in the Czech Republic. Ladislav Kavan has received a hardware donation from NVIDIA Corporation. Wenwu Yang declares that he has no conflict of interest. Nathan Marshak declares that he has no conflict of interest. Srikumar Ramalingam declares that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was done when Wenwu Yang was a visiting scholar at the University of Utah.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 15837 KB)


About this article


Cite this article

Yang, W., Marshak, N., Sýkora, D. et al. Building anatomically realistic jaw kinematics model from data. Vis Comput 35, 1105–1118 (2019). https://doi.org/10.1007/s00371-019-01677-8

