Toward Addressing Collusion Among Human Adversaries in Security Games

Cited by: 0
Authors:
Gholami, Shahrzad [1 ]
Wilder, Bryan [1 ]
Brown, Matthew [1 ]
Thomas, Dana [1 ]
Sintov, Nicole [1 ]
Tambe, Milind [1 ]
Institution:
[1] Univ Southern Calif, Los Angeles, CA 90089 USA
DOI:
10.3233/978-1-61499-672-9-1750
CLC Number: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract:
Security agencies such as the US Coast Guard, the Federal Air Marshal Service, and the Los Angeles Airport Police have deployed Stackelberg security games and related algorithms to protect strategically against a single adversary or multiple independent adversaries. However, in a variety of real-world security domains, adversaries may benefit from colluding in their actions against the defender. Given the potential negative effect of such collusion, the defender has an incentive to break it up by playing off the self-interest of the individual adversaries. This paper addresses the problem of collusive security games with both rational and boundedly rational adversaries. The theoretical results, validated with human subject experiments, show that the behavioral model that optimizes against boundedly rational adversaries yields demonstrably better-performing defender strategies against human subjects.
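To make the setting concrete, the following is a minimal sketch of how a defender's expected utility can be evaluated against a boundedly rational adversary in a Stackelberg security game. It assumes a quantal response (QR) model of bounded rationality; the coverage vector, payoff values, and the rationality parameter `lam` are illustrative placeholders, not the actual model or data used in the paper.

```python
import numpy as np

# Sketch: defender's expected utility against a quantal-responding adversary.
# All numbers below are illustrative assumptions, not the paper's model.

def adversary_attack_probs(coverage, adv_reward, adv_penalty, lam=1.0):
    """Quantal response: attack probabilities proportional to exp(lam * EU)."""
    # Adversary's expected utility for attacking each target.
    adv_eu = coverage * adv_penalty + (1 - coverage) * adv_reward
    weights = np.exp(lam * adv_eu)
    return weights / weights.sum()

def defender_expected_utility(coverage, def_reward, def_penalty,
                              adv_reward, adv_penalty, lam=1.0):
    """Defender's expected utility under a mixed coverage strategy."""
    attack_probs = adversary_attack_probs(coverage, adv_reward, adv_penalty, lam)
    def_eu = coverage * def_reward + (1 - coverage) * def_penalty
    return float(attack_probs @ def_eu)

if __name__ == "__main__":
    # Three targets; coverage is the defender's marginal protection probability.
    coverage = np.array([0.5, 0.3, 0.2])
    def_reward = np.array([0.0, 0.0, 0.0])      # defender payoff if the attack is caught
    def_penalty = np.array([-5.0, -3.0, -8.0])  # defender payoff if the attack succeeds
    adv_reward = np.array([5.0, 3.0, 8.0])      # adversary payoff if the attack succeeds
    adv_penalty = np.array([-1.0, -1.0, -1.0])  # adversary payoff if caught
    print(defender_expected_utility(coverage, def_reward, def_penalty,
                                    adv_reward, adv_penalty, lam=0.8))
```

A defender optimizing against such a behavioral model would choose the coverage vector maximizing this quantity, rather than assuming a perfectly rational attacker; collusion would further couple the adversaries' utilities, which this single-adversary sketch does not capture.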
Pages: 1750-1751
Page count: 2