On the Development of an Autonomous Agent for a 3D First-Person Shooter Game Using Deep Reinforcement Learning

Cited by: 2
Authors
Serafim, Paulo Bruno S. [1 ]
Nogueira, Yuri Lenon B. [1 ]
Vidal, Creto A. [1 ]
Cavalcante Neto, Joaquim B. [1 ]
Affiliations
[1] Univ Fed Ceara, Dept Comp, Fortaleza, Ceara, Brazil
Keywords
3D first-person shooter; autonomous agent; reinforcement learning; deep neural networks;
DOI
10.1109/SBGames.2017.00025
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
First-Person Shooter games have always been very popular. One challenge in their development is creating game agents controlled by Artificial Intelligence that can learn to handle the very distinct situations presented to them. In this work, we construct an autonomous agent that plays different scenarios in a 3D First-Person Shooter game using a Deep Neural Network model. The agent receives only the screen pixels as input and must learn by itself how to interact with the environments. To achieve this goal, the agent is trained with a Deep Reinforcement Learning model through an adaptation of the Q-Learning technique for Deep Networks. We evaluate our agent in three distinct scenarios: a basic environment against one static enemy, a more complex environment against multiple different enemies, and a custom medikit-gathering scenario. The agent achieves good results and learns complex behaviors in all tested environments, showing that the presented model is suitable for creating 3D First-Person Shooter autonomous agents capable of playing different scenarios.
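The core of the Q-Learning adaptation the abstract refers to (DQN-style bootstrapped targets against a function approximator) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's implementation: a linear approximator replaces the deep network, and the feature sizes, learning rate, and discount factor are hypothetical values chosen for the example.

```python
import numpy as np

N_FEATURES, N_ACTIONS = 4, 3   # illustrative sizes, not from the paper
GAMMA, LR = 0.99, 0.01         # hypothetical discount and learning rate

# A linear Q-function stands in for the paper's deep network
# (one weight row per action).
W = np.zeros((N_ACTIONS, N_FEATURES))

def q_values(state):
    """Q(s, .) for all actions under the current weights."""
    return W @ state

def q_learning_step(state, action, reward, next_state, done):
    """One Q-Learning update against a parametric approximator:
    move Q(s, a) toward the bootstrapped target
    r + gamma * max_a' Q(s', a')."""
    target = reward if done else reward + GAMMA * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    # Gradient step on the squared TD error, touching only the
    # chosen action's weights.
    W[action] += LR * td_error * state
    return td_error

# Repeatedly replaying one terminal transition drives Q(s, a)
# toward the observed reward.
s = np.array([1.0, 0.0, -1.0, 0.5])
for _ in range(500):
    q_learning_step(s, action=0, reward=1.0, next_state=s, done=True)
```

In the paper's setting the state would be raw screen pixels fed through a convolutional network rather than a hand-chosen feature vector, but the target computation and TD update have the same shape.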
Pages: 155-163 (9 pages)