The identification game: deepfakes and the epistemic limits of identity

Cited by: 3
Authors
Öhman, Carl [1]
Affiliations
[1] Uppsala Univ, Dept Govt, Box 514, S-75120 Uppsala, Sweden
Keywords
Synthetic media; Deepfakes; AI; Imitation game; Information ethics
DOI
10.1007/s11229-022-03798-5
Chinese Library Classification (CLC)
N09 [History of Natural Science]; B [Philosophy and Religion]
Subject Classification Codes
01; 0101; 010108; 060207; 060305; 0712
Abstract
The fast development of synthetic media, commonly known as deepfakes, has cast new light on an old problem, namely, to what extent do people have a moral claim to their likeness, including personally distinguishing features such as their voice or face? That people have at least some such claim seems uncontroversial. In fact, several jurisdictions already combat deepfakes by appealing to a "right to identity." Yet an individual's disapproval of appearing in a piece of synthetic media is sensible only insofar as the replication is successful. There has to be some form of (qualitative) identity between the content and the natural person. The question, therefore, is how this identity can be established. How can we know whether the face or voice featured in a piece of synthetic content belongs to a person who lays claim to it? On a trivial level, this may seem an easy task: the person in the video is A insofar as he or she is recognised as being A. Providing more rigorous criteria, however, poses a serious challenge. In this paper, I draw on Turing's imitation game and Floridi's method of levels of abstraction to propose a heuristic to this end. I call it the identification game. Using this heuristic, I show that identity cannot be established independently of the purpose of the inquiry. More specifically, I argue that whether a person has a moral claim to content that allegedly uses their identity depends on the type of harm under consideration.
Pages: 19