Please use this identifier to cite or link to this item:
http://dspace.cityu.edu.hk/handle/2031/9493
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cheung, Yiu Chung Jeffrey | en_US |
dc.date.accessioned | 2021-11-17T04:08:44Z | - |
dc.date.available | 2021-11-17T04:08:44Z | - |
dc.date.issued | 2021 | en_US |
dc.identifier.other | 2021eecycj115 | en_US |
dc.identifier.uri | http://dspace.cityu.edu.hk/handle/2031/9493 | - |
dc.description.abstract | Playing against other players in games is often considered better than playing against AI for various reasons. The repetitive, predictable actions of AI opponents are among the reasons that people tend to play against other players instead. The core of the problem is that traditional game AI follows fixed rules defined in its code; it does not learn from mistakes and try other approaches as a real player would. However, choosing to play against AI should not be looked down upon; everyone has the right to play as they wish. It is therefore worth investigating this problem. This study aims to address the repetitive, predictable behaviour of traditional AI and to humanize it by applying machine learning algorithms to game AI. The study uses Q-learning and a neural network to train a robot in Robocode, a programming game whose goal is to develop a robot battle tank to fight against other tanks. The tanks in this study were trained against SpinBot, one of the sample tanks included with Robocode. The study used data such as the player's coordinates, the distance to the enemy tank, and the angle to the enemy tank to form a Q-table, then applied an epsilon-greedy Q-learning algorithm to train the tank against SpinBot. The task of the Q-table is to find the best action at any given moment, given sufficient data. However, because a Q-table must store every possible state, the number of inputs is limited. To avoid storing every possible state combination, the study also replaced the Q-table with a neural network to train an agent against SpinBot. The neural network takes the same inputs as the Q-table; its output layer contains a single node, which indicates the Q-value. After training the tanks with Q-learning and the neural network, an increase in score can be observed when comparing the untrained agents with the trained agents. The trained agents are also harder to predict, which is difficult to achieve with traditional hard-coded AI. This project can serve as a starting point for further studies of machine learning AI in games. (Illustrative sketches of the two approaches appear after this metadata record.) | en_US |
dc.rights | This work is protected by copyright. Reproduction or distribution of the work in any format is prohibited without written permission of the copyright owner. | en_US |
dc.rights | Access is restricted to CityU users. | en_US |
dc.title | Machine Learning AI in Computer Games | en_US |
dc.contributor.department | Department of Electrical Engineering | en_US |
dc.description.supervisor | Supervisor: Prof. Leung, Andrew C S; Assessor: Dr. Yuen, Kelvin S Y | en_US |
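The abstract above outlines two approaches. As a concrete illustration of the first, below is a minimal sketch of an epsilon-greedy Q-learning agent of the kind described, written in Java (the language Robocode robots are written in). The state discretisation, action set, reward signal, and hyperparameter values here are assumptions for illustration, not the project's actual code.

```java
import java.util.Random;

// Hypothetical sketch of tabular epsilon-greedy Q-learning, assuming the
// state (x, y, distance, angle to enemy) has been discretised into bins.
public class QLearningSketch {
    static final int NUM_STATES = 64;   // assumed size of the discretised state space
    static final int NUM_ACTIONS = 4;   // e.g. forward, back, turn left, turn right (assumed)
    static final double ALPHA = 0.1;    // learning rate (assumed)
    static final double GAMMA = 0.9;    // discount factor (assumed)
    static final double EPSILON = 0.1;  // exploration rate (assumed)

    final double[][] qTable = new double[NUM_STATES][NUM_ACTIONS];
    final Random rng = new Random();

    // Epsilon-greedy selection: explore with probability EPSILON,
    // otherwise exploit the best known action for this state.
    int chooseAction(int state) {
        if (rng.nextDouble() < EPSILON) {
            return rng.nextInt(NUM_ACTIONS);
        }
        int best = 0;
        for (int a = 1; a < NUM_ACTIONS; a++) {
            if (qTable[state][a] > qTable[state][best]) best = a;
        }
        return best;
    }

    // Standard one-step Q-learning update:
    // Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    void update(int state, int action, double reward, int nextState) {
        double maxNext = qTable[nextState][0];
        for (int a = 1; a < NUM_ACTIONS; a++) {
            maxNext = Math.max(maxNext, qTable[nextState][a]);
        }
        qTable[state][action] += ALPHA * (reward + GAMMA * maxNext - qTable[state][action]);
    }
}
```

And a minimal sketch of the second approach: replacing the Q-table with a small feed-forward network whose single output node estimates the Q-value for a state-action pair, as the abstract describes. Layer sizes, the feature layout, and the tanh activation are assumptions; the backpropagation step that trains the weights towards the temporal-difference target is omitted for brevity.

```java
// Hypothetical sketch of a Q-value network with one hidden layer and a
// single linear output node. Features are assumed to be the four state
// values (x, y, distance, angle) plus an action id.
public class QNetworkSketch {
    static final int NUM_INPUTS = 5;   // 4 state features + 1 action id (assumed)
    static final int NUM_HIDDEN = 16;  // assumed hidden-layer size

    final double[][] w1 = new double[NUM_HIDDEN][NUM_INPUTS];
    final double[] b1 = new double[NUM_HIDDEN];
    final double[] w2 = new double[NUM_HIDDEN];
    double b2;

    // Forward pass: tanh hidden layer, then one linear output node whose
    // activation is the estimated Q-value for the given (state, action).
    double qValue(double[] features) {
        double out = b2;
        for (int h = 0; h < NUM_HIDDEN; h++) {
            double sum = b1[h];
            for (int i = 0; i < NUM_INPUTS; i++) {
                sum += w1[h][i] * features[i];
            }
            out += w2[h] * Math.tanh(sum);
        }
        return out;
    }

    // Greedy selection: evaluate the network once per candidate action and
    // pick the action with the highest predicted Q-value. This removes the
    // need to store a table entry for every possible state.
    int bestAction(double[] state, int numActions) {
        int best = 0;
        double bestQ = Double.NEGATIVE_INFINITY;
        for (int a = 0; a < numActions; a++) {
            double[] features = new double[NUM_INPUTS];
            System.arraycopy(state, 0, features, 0, state.length); // assumes 4 state features
            features[NUM_INPUTS - 1] = a;  // append the action id as a feature
            double q = qValue(features);
            if (q > bestQ) { bestQ = q; best = a; }
        }
        return best;
    }
}
```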
Appears in Collections: Electrical Engineering - Undergraduate Final Year Projects
Files in This Item:
File | Size | Format |
---|---|---|
fulltext.html | 149 B | HTML |
Items in Digital CityU Collections are protected by copyright, with all rights reserved, unless otherwise indicated.