Playing a FPS Doom Video Game with Deep Visual Reinforcement Learning



Abstract

Owing to advances in deep visual reinforcement learning, autonomous game agents can now perform well, often surpassing human players, using only raw screen pixels to make their decisions. In this paper, we propose Deep Q-Network (DQN) and Deep Recurrent Q-Network (DRQN) implementations for playing the Doom video game. Our work builds on a publication by Lample and Chaplot (2016). We first present deep Q-learning in its two applied variants (DQN and DRQN), and then describe how we built a testbed implementation for these algorithms. We present our results on simplified game scenarios, showing the predicted enemy positions (game features) and the difference in performance between DQN and DRQN. Finally, unlike other existing works, we show that our proposed architecture performs better, predicting enemy positions with an accuracy of almost 72%.
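The abstract describes the DRQN-with-game-features approach of Lample and Chaplot (2016): a convolutional encoder over raw screen pixels, a recurrent layer for memory under partial observability, a Q-value head for action selection, and an auxiliary head that predicts game features such as enemy presence. The following is a minimal sketch of that idea, not the authors' code; the framework (PyTorch), layer sizes, and the assumed 3×60×108 frame shape are illustrative assumptions only.

```python
# Minimal sketch (assumptions: PyTorch, 3x60x108 frames, illustrative layer sizes).
import torch
import torch.nn as nn


class DRQNWithGameFeatures(nn.Module):
    def __init__(self, n_actions: int, n_game_features: int, hidden: int = 512):
        super().__init__()
        # Convolutional encoder over a single game frame (raw screen pixels).
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        conv_out = self.conv(torch.zeros(1, 3, 60, 108)).shape[1]
        # Recurrent layer: the memory that distinguishes DRQN from plain DQN.
        self.lstm = nn.LSTM(conv_out, hidden, batch_first=True)
        # Q-value head: one value per available in-game action.
        self.q_head = nn.Linear(hidden, n_actions)
        # Auxiliary head: probabilities of game features (e.g. "enemy visible").
        self.feature_head = nn.Linear(conv_out, n_game_features)

    def forward(self, frames, hidden_state=None):
        # frames: (batch, time, 3, 60, 108)
        b, t = frames.shape[:2]
        z = self.conv(frames.reshape(b * t, *frames.shape[2:]))
        features = torch.sigmoid(self.feature_head(z)).reshape(b, t, -1)
        out, hidden_state = self.lstm(z.reshape(b, t, -1), hidden_state)
        q_values = self.q_head(out)
        return q_values, features, hidden_state


# Tiny usage example: a 4-frame sequence, 8 actions, 1 game feature (enemy present).
net = DRQNWithGameFeatures(n_actions=8, n_game_features=1)
q, feats, _ = net(torch.rand(2, 4, 3, 60, 108))
print(q.shape, feats.shape)  # torch.Size([2, 4, 8]) torch.Size([2, 4, 1])
```

A plain DQN variant would simply drop the LSTM and feed the convolutional features directly into the Q-value head; the auxiliary game-feature loss is trained jointly with the Q-learning loss in either case.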

About the authors

Adil Khan

Harbin Institute of Technology, School of Computer Science and Technology; Higher Education Department

Author for correspondence.
Email: DrAdil@hit.edu.cn
China, Harbin, Heilongjiang, 150001; KPK

Feng Jiang

Harbin Institute of Technology, School of Computer Science and Technology

Author for correspondence.
Email: fjiang@hit.edu.cn
China, Harbin, Heilongjiang, 150001

Shaohui Liu

Harbin Institute of Technology, School of Computer Science and Technology

Author for correspondence.
Email: shliu@hit.edu.cn
China, Harbin, Heilongjiang, 150001

Ibrahim Omara

Department of Mathematics, Faculty of Science, Menoufia University

Author for correspondence.
Email: I_omara84@hit.edu.cn
Egypt, Al Minufya


Copyright (c) 2019 Allerton Press, Inc.
