Face vs. empathy: the social foundation of Maithili verb agreement

Cited by: 22

Authors
Bickel, B
Bisang, W
Yadava, YP
Affiliations
[1] Johannes Gutenberg Univ Mainz, Inst Allgemeine & Vergleichende Sprachwissensch, D-55099 Mainz, Germany
[2] Univ Calif Berkeley, Berkeley, CA 94720 USA
[3] Univ Zurich, CH-8006 Zurich, Switzerland
[4] Tribhuvan Univ, Kathmandu, Nepal
[5] Royal Nepal Academy, Kathmandu, Nepal
DOI
10.1515/ling.37.3.481
CLC classification
H0 [Linguistics]
Subject classification codes
030303; 0501; 050102
Abstract
Maithili features one of the most complex agreement systems of any Indo-Aryan language. Not only nominative and non-nominative subjects, but also objects, other core arguments, and even nonarguments are cross-referenced, allowing for a maximum of three participants encoded by the verb desinences. The categories reflected in the morphology are person, honorific degree, and, in the case of third persons, gender, spatial distance, and focus. However, not all combinations of category choices are equally represented, and there are many cases of neutralization. We demonstrate that the paradigm structure of Maithili verb agreement is not arbitrary but can be predicted by two general principles of interaction in Maithil society: a principle of social hierarchy underlying the evaluation of people's "face" (Brown and Levinson 1987[1978]), and a principle of social solidarity defining degrees of "empathy" (Kuno 1987) with which people identify with others. Maithili verb agreement not only reflects a specific style of social cognition but also constitutes a prime means of maintaining this style by requiring constant attention to its defining parameters. In line with this, we find that the system is partly reduced by uneducated, so-called lower-caste speakers, who are least interested in maintaining this style, especially its emphasis on hierarchy.
Pages: 481-518 (38 pages)