STRATEGIES FOR DETECTING INSINCERE RESPONDENTS IN ONLINE POLLING

Cited by: 12
Authors
Kennedy, Courtney [1 ]
Hatley, Nicholas [2 ]
Lau, Arnold [2 ]
Mercer, Andrew [2 ]
Keeter, Scott [2 ]
Ferno, Joshua [2 ]
Asare-Marfo, Dorene [2 ]
Affiliations
[1] Pew Res Ctr, Survey Res, Washington, DC 20036 USA
[2] Pew Res Ctr, 1615 L St NW,Suite 800, Washington, DC 20036 USA
Keywords
DOI
10.1093/poq/nfab057
Chinese Library Classification (CLC): G2 [Information and knowledge dissemination]
Discipline classification codes: 05; 0503
Abstract
While the migration of public opinion surveys to online platforms has often lowered costs and improved timeliness, it has also created new vulnerabilities. Respondents completing the same survey multiple times from different IP addresses, overseas workers posing as Americans, and algorithms designed to complete surveys are among the threats that have emerged. This paper measures the prevalence of such respondents and their effect on survey data quality, and demonstrates methodological approaches for doing so. Prior studies typically examine a single platform and rely on closed-ended questions and/or paradata (e.g., IP addresses) to identify untrustworthy interviews, which is problematic because such data are relatively easy for bad actors to fake. To overcome these limitations, we carried out a large-scale study of insincere respondents using samples from six online sources: three opt-in survey panels, two address-recruited survey panels, and a crowdsourced sample. Rather than relying solely on closed-ended responses, we analyzed 375,834 open-ended answers; by their nature, open-ended questions offer a more sensitive indicator of whether a respondent is genuine. The incidence of insincere respondents varied significantly by the type of online sample. Critically, insincere respondents did not simply answer at random; they tended to select positive answer choices, introducing a small, systematic bias into estimates such as presidential approval. Two common data-quality checks failed to detect most insincere respondents.
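The abstract describes using open-ended answers as a more sensitive signal of insincerity but does not specify a detection algorithm. The sketch below is a minimal illustration of that general idea, not the authors' procedure: it flags open-ends that are near-empty, contain no alphabetic content, or are duplicated verbatim across respondents (a common sign of copy-paste or bot behavior). The function name, thresholds, and flag criteria are assumptions made for illustration only.

```python
import re
from collections import Counter


def flag_suspicious_open_ends(responses, min_length=3, dup_threshold=5):
    """Flag open-ended answers that look insincere.

    `responses` is a list of (respondent_id, answer_text) pairs. An answer is
    flagged if it is extremely short, contains no alphabetic characters, or is
    shared verbatim by `dup_threshold` or more respondents. Thresholds are
    illustrative, not taken from the paper.
    """
    # Normalize whitespace and case so trivially different copies still match.
    normalized = [(rid, " ".join(text.lower().split())) for rid, text in responses]
    dup_counts = Counter(text for _, text in normalized if text)

    flags = {}
    for rid, text in normalized:
        reasons = []
        if len(text) < min_length:
            reasons.append("too short")
        elif not re.search(r"[a-z]", text):
            reasons.append("no alphabetic content")
        if text and dup_counts[text] >= dup_threshold:
            reasons.append("duplicated verbatim across respondents")
        if reasons:
            flags[rid] = reasons
    return flags


# Example: two respondents paste the identical canned answer, one types only digits.
sample = [
    ("r1", "I mostly worry about the economy and health care costs."),
    ("r2", "good"),
    ("r3", "good"),
    ("r4", "123"),
]
print(flag_suspicious_open_ends(sample, dup_threshold=2))
```

In practice such rule-based flags would be combined with other checks (speeding, attention checks, paradata); the paper's finding that common checks miss most insincere respondents suggests open-ended screening adds information the others do not.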
Pages: 1050-1075
Page count: 26