Facial action unit detection methodology with application in Brazilian sign language recognition

Cited by: 0
Authors
Emely Pujólli da Silva
Paula Dornhofer Paro Costa
Kate Mamhy Oliveira Kumada
José Mario De Martino
Affiliations
[1] University of Campinas (Unicamp), School of Electrical and Computer Engineering (FEEC)
[2] Federal University of ABC (UFABC), Center for Natural and Human Sciences (CCNH)
Keywords
Facial expression; Non-manual markers; AU detection; Libras; Brazilian sign language
DOI
Not available
Abstract
Sign language is the linguistic system adopted by the Deaf to communicate. The lack of fully fledged Automatic Sign Language Recognition (ASLR) technologies contributes to the numerous difficulties that deaf individuals face in the absence of an interpreter, such as in private health appointments or in emergency situations. A challenging problem in the development of reliable ASLR systems is that sign languages do not rely only on manual gestures but also on facial expressions and other non-manual markers. This paper proposes to adopt the Facial Action Coding System (FACS) to encode sign language facial expressions. However, state-of-the-art Action Unit (AU) recognition models are mostly targeted at classifying about two dozen AUs, typically related to the expression of emotions. We adopted Brazilian Sign Language (Libras) as our case study and identified more than one hundred AUs (with substantial overlap with other sign languages). We then implemented and evaluated a novel AU recognition model architecture that combines SqueezeNet and geometric-based features. Our model obtained 88% accuracy over 119 classes. Combined with state-of-the-art gesture recognition, our model is ready to improve sign disambiguation and to advance ASLR.
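The fusion idea described in the abstract can be illustrated with a minimal sketch: an appearance embedding (a stand-in for pooled SqueezeNet output) is concatenated with simple geometric features derived from facial landmarks, and the fused vector is scored by a linear classifier over 119 AU classes. All names, feature sizes, and the choice of pairwise landmark distances as the geometric descriptor are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a hybrid appearance + geometry AU classifier.
# Sizes and feature definitions are assumptions, not the published model.
import numpy as np

NUM_CLASSES = 119  # AU classes reported in the abstract

def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between 2D facial landmarks."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)
    return dists[iu]  # upper triangle: each landmark pair once

def classify(cnn_embedding: np.ndarray, landmarks: np.ndarray,
             weights: np.ndarray, bias: np.ndarray) -> int:
    """Fuse appearance and geometry, return the argmax AU class index."""
    fused = np.concatenate([cnn_embedding, geometric_features(landmarks)])
    logits = weights @ fused + bias
    return int(np.argmax(logits))

# Toy usage with random stand-ins for a real pipeline.
rng = np.random.default_rng(0)
embedding = rng.standard_normal(512)      # e.g. pooled CNN features
landmarks = rng.standard_normal((68, 2))  # e.g. a 68-point landmark set
fused_dim = 512 + 68 * 67 // 2            # embedding + pairwise distances
W = rng.standard_normal((NUM_CLASSES, fused_dim))
b = np.zeros(NUM_CLASSES)
pred = classify(embedding, landmarks, W, b)
assert 0 <= pred < NUM_CLASSES
```

Concatenation is the simplest late-fusion choice; the geometric branch is what lets the classifier see landmark displacements that purely appearance-based features can miss.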
Pages: 549–565 (16 pages)
Related Papers (50 total)
  • [1] Facial action unit detection methodology with application in Brazilian sign language recognition
    da Silva, Emely Pujolli
    Paro Costa, Paula Dornhofer
    Oliveira Kumada, Kate Mamhy
    De Martino, Jose Mario
    [J]. PATTERN ANALYSIS AND APPLICATIONS, 2022, 25 (03) : 549 - 565
  • [2] Action unit detection in 3D facial videos with application in facial expression retrieval and recognition
    Danelakis, Antonios
    Theoharis, Theoharis
    Pratikakis, Ioannis
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (19) : 24813 - 24841
  • [4] Detection and tracking of face and facial features for Recognition of Turkish Sign Language
    Guevensan, M. Amac
    Haberdar, Hakan
    [J]. 2007 IEEE 15TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS, VOLS 1-3, 2007, : 957 - 960
  • [5] Sign Language Recognition using Facial Expression
    Das, Siddhartha Pratim
    Talukdar, Anjan Kumar
    Sarma, Kandarpa Kumar
    [J]. SECOND INTERNATIONAL SYMPOSIUM ON COMPUTER VISION AND THE INTERNET (VISIONNET'15), 2015, 58 : 210 - 216
  • [6] Brazilian Sign Language Recognition Using Kinect
    Yauri Vidalon, Jose Elias
    De Martino, Jose Mario
    [J]. COMPUTER VISION - ECCV 2016 WORKSHOPS, PT II, 2016, 9914 : 391 - 402
  • [7] Upper Facial Action Unit Recognition
    Zor, Cemre
    Windeatt, Terry
    [J]. ADVANCES IN BIOMETRICS, 2009, 5558 : 239 - 248
  • [8] Dual Learning for Joint Facial Landmark Detection and Action Unit Recognition
    Wang, Shangfei
    Chang, Yanan
    Wang, Can
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (02) : 1404 - 1416
  • [9] Facial Expression Recognition Based on Facial Action Unit
    Yang, Jiannan
    Zhang, Fan
    Chen, Bike
    Khan, Samee U.
    [J]. 2019 TENTH INTERNATIONAL GREEN AND SUSTAINABLE COMPUTING CONFERENCE (IGSC), 2019,
  • [10] The Significance of Facial Features for Automatic Sign Language Recognition
    von Agris, Ulrich
    Knorr, Moritz
    Kraiss, Karl-Friedrich
    [J]. 2008 8TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE & GESTURE RECOGNITION (FG 2008), VOLS 1 AND 2, 2008, : 286 - 291