Deriving Minimal Features for Human-Like Facial Expressions in Robotic Faces

Cited by: 0
Authors
Casey C. Bennett
Selma Šabanović
Affiliations
[1] Indiana University, School of Informatics and Computing
[2] Centerstone Research Institute, Department of Informatics
Keywords
Human–robot interaction; Facial expression; Emotion; Minimalist design; Robot design;
DOI
Not available
Abstract
This study explores deriving minimal features for a robotic face to convey information (via facial expressions) that people can perceive and understand. Recent research in computer vision has shown that a small number of moving points/lines can be used to capture the majority of information (~95%) in human facial expressions. Here, we apply such findings to a minimalist robot face design, which was run through a series of experiments with human subjects (n = 75) exploring the effect of various factors, including added neck motion and degree of expression. Facial expression identification rates were similar to those of more complex robots. In addition, added neck motion significantly improved facial expression identification rates to 100% for all expressions (except Fear). The Negative Attitudes towards Robots Scale (NARS) and Godspeed scales were also collected to examine user perceptions, e.g., perceived animacy and intelligence. The project aims to answer a number of fundamental questions about robotic face design, as well as to develop inexpensive and replicable robotic faces for experimental purposes.
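To make the minimalist idea in the abstract concrete, the following Python sketch is a rough, hypothetical illustration only (not the authors' implementation, and the parameter names and preset values are invented, not taken from the paper): each basic expression is encoded as small offsets on a handful of control parameters (brows, eyelids, mouth), and a scaled() helper mimics varying the degree of expression between neutral and full.

# Illustrative sketch only: a hypothetical minimal-face parameterization.
# All parameter names and presets are invented for illustration; they are
# not the control values used in the paper's robot.
from dataclasses import dataclass, fields

@dataclass
class FacePose:
    brow_raise: float = 0.0   # vertical eyebrow displacement (-1..1)
    brow_tilt: float = 0.0    # inner-brow rotation (-1..1)
    eye_open: float = 0.5     # eyelid aperture (0..1)
    mouth_curve: float = 0.0  # smile (+) / frown (-) curvature (-1..1)
    mouth_open: float = 0.0   # jaw drop (0..1)

# Hypothetical presets for five basic expressions.
EXPRESSIONS = {
    "happiness": FacePose(brow_raise=0.2, eye_open=0.6, mouth_curve=0.8),
    "sadness":   FacePose(brow_raise=-0.3, brow_tilt=0.6, eye_open=0.3, mouth_curve=-0.6),
    "anger":     FacePose(brow_raise=-0.6, brow_tilt=-0.7, eye_open=0.7, mouth_curve=-0.4),
    "surprise":  FacePose(brow_raise=0.9, eye_open=1.0, mouth_open=0.8),
    "fear":      FacePose(brow_raise=0.7, brow_tilt=0.5, eye_open=0.9, mouth_open=0.4),
}

def scaled(pose: FacePose, degree: float) -> FacePose:
    """Interpolate between a neutral face and a full expression (degree in 0..1),
    loosely analogous to varying the 'degree of expression'."""
    neutral = FacePose()
    return FacePose(**{
        f.name: getattr(neutral, f.name)
                + degree * (getattr(pose, f.name) - getattr(neutral, f.name))
        for f in fields(FacePose)
    })

if __name__ == "__main__":
    for name, pose in EXPRESSIONS.items():
        print(f"{name:>9} @ 50%: {scaled(pose, 0.5)}")

A real controller would still have to map such abstract parameters onto the robot's actuated points/lines (and, per the study, onto neck motion); the sketch only illustrates that a handful of scalar parameters can span the basic expressions.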
Pages: 367-381
Number of pages: 14
Related Papers
(50 in total)
  • [41] Human-Like Posture Correction for Seven-Degree-of-Freedom Robotic Arm
    Deng, Yu-Heng
    Chang, Jen-Yuan
    JOURNAL OF MECHANISMS AND ROBOTICS-TRANSACTIONS OF THE ASME, 2022, 14 (02)
  • [42] A machine learning predictor of facial attractiveness revealing human-like psychophysical biases
    Kagian, Amit
    Dror, Gideon
    Leyvand, Tommer
    Meilijson, Isaac
    Cohen-Or, Daniel
    Ruppin, Eytan
    VISION RESEARCH, 2008, 48 (02) : 235 - 243
  • [43] Human-Like Robotic Handwriting with Inverse Kinematic Using a Mobile Telescopic Arm
    Troconis, Luigi Gabriel
    Genova, Oleg
    Freddi, Alessandro
    Monteriu, Andrea
    2024 20TH IEEE/ASME INTERNATIONAL CONFERENCE ON MECHATRONIC AND EMBEDDED SYSTEMS AND APPLICATIONS, MESA 2024, 2024
  • [44] Human-Like Endtip Stiffness Modulation Inspires Dexterous Manipulation With Robotic Hands
    Shafer, Anna
    Deshpande, Ashish D.
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2022, 30 : 1138 - 1146
  • [45] In vivo human-like robotic phenotyping of leaf traits in maize and sorghum in greenhouse
    Atefi, Abbas
    Ge, Yufeng
    Pitla, Santosh
    Schnable, James
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2019, 163
  • [46] Emerged human-like facial expression representation in a deep convolutional neural network
    Zhou, Liqin
    Yang, Anmin
    Meng, Ming
    Zhou, Ke
    SCIENCE ADVANCES, 2022, 8 (12)
  • [47] Facial Expressions Reconstruction of 3D Faces based on Real Human Data
    Minoi, Jacey-Lynn
    Jupit, Amelia Jati Robert
    Gillies, Duncan Fyfe
    Arnab, Sylvester
    2012 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND CYBERNETICS (CYBERNETICSCOM), 2012, : 185 - 189
  • [48] Virtual reality based programming of human-like torch operation for robotic welding
    Hu, Yijie
    Xiao, Jun
    Chen, Shujun
    Gai, Shengnan
    WELDING IN THE WORLD, 2025, : 1447 - 1458
  • [49] A Human-like Motion Control Method for Robotic Joint Actuated by Antagonistic Pneumatic Muscles
    Gong D.
    He R.
    Yu J.
    Zuo G.
    Jiqiren/Robot, 2019, 41 (06): 803 - 812
  • [50] Seeing Minds in Others - Can Agents with Robotic Appearance Have Human-Like Preferences?
    Martini, Molly C.
    Gonzalez, Christian A.
    Wiese, Eva
    PLOS ONE, 2016, 11 (01)