
Reinforcment_Learning_Car_Racing

EC500 Deep Learning Final Project

Deep_Reinforcement_Learning_on_Car_Racing_Game

Task

Our team's aim was to explore three DQN-style reinforcement learning networks. First, we studied and implemented the Deep Q-Network (DQN). Then, to improve the Q-value estimation, we implemented the Double Deep Q-Network (DDQN). Finally, to improve the network structure, we implemented the Dueling Deep Q-Network (Dueling DQN). In the experiments, we trained each model for more than 10 hours and recorded and plotted the average reward and average Q value of each model. Comparing these results showed that DDQN performed best, followed by DQN, while the dueling network had the worst performance.
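
For intuition, the three variants differ mainly in how the learning target and the Q values are formed. The following is a minimal NumPy sketch of those differences, not our actual training code; all function and argument names are illustrative.

```python
import numpy as np

def dqn_target(reward, q_target_next, gamma=0.99):
    # DQN: the target network both selects and evaluates the next action,
    # which tends to overestimate Q values.
    return reward + gamma * np.max(q_target_next)

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network selects the action and the target
    # network evaluates it, reducing the overestimation bias.
    best_action = np.argmax(q_online_next)
    return reward + gamma * q_target_next[best_action]

def dueling_q(value, advantages):
    # Dueling DQN: combine a scalar state value V(s) and per-action
    # advantages A(s, a) into Q(s, a), subtracting the mean advantage.
    return value + (advantages - np.mean(advantages))
```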

Related Work

Playing Atari with Deep Reinforcement Learning

Human-level control through deep reinforcement learning

Deep Reinforcement Learning with Double Q-learning

Dueling Network Architectures for Deep Reinforcement Learning

Game Environment

OpenAI Car Racing-v0
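
For reference, interaction with the environment goes through the standard gym API; the sketch below is a minimal fixed-action loop, not our training code. Note that the native action space is continuous ([steering, gas, brake]), so DQN-style agents work on a discretized subset of actions.

```python
import gym
import numpy as np

# CarRacing-v0 returns 96x96x3 RGB frames; the action is
# [steering in [-1, 1], gas in [0, 1], brake in [0, 1]].
env = gym.make('CarRacing-v0')
state = env.reset()

done = False
while not done:
    action = np.array([0.0, 0.1, 0.0])  # steer straight, light gas, no brake
    state, reward, done, info = env.step(action)
    env.render()

env.close()
```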

Algorithm

We implemented DQN, Double DQN (DDQN), and Dueling DQN. Please refer to the presentation for a detailed explanation of the algorithms; some basic knowledge of reinforcement learning and Q-learning is assumed.
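
As a quick refresher, the tabular Q-learning update that DQN builds on looks like the sketch below (illustrative names, not part of this repository); DQN replaces the table with a convolutional network trained on the squared TD error.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Move Q(s, a) toward the TD target r + gamma * max_a' Q(s', a').
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```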

Installation

  1. pip install -r requirements.txt

    Required packages: tensorflow, pygame, gym, Box2D, VC++ 14.0 ...

  2. In the DQN, DDQN, or dueling DQN folder, run python car_racing.py

  3. To use the trained model, set load_model = True in car_racing.py (see the sketch after this list).

  4. On CPU, it takes about 8 hours to get a well-trained model.
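
Step 3 refers to a load_model flag. As a rough illustration only, this is how such a toggle is commonly wired up with a TensorFlow 1.x Saver; the variable names and checkpoint path here are assumptions, not the repository's actual code.

```python
import tensorflow as tf

load_model = True          # flag from step 3 (illustrative)
checkpoint_dir = './save'  # assumed checkpoint directory

w = tf.Variable(tf.zeros([1]), name='w')  # stand-in for the network weights
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    if load_model:
        ckpt = tf.train.latest_checkpoint(checkpoint_dir)
        if ckpt is not None:
            saver.restore(sess, ckpt)  # resume from the saved checkpoint
```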

Introduction

The DQN, DDQN, and dueling DQN implementations have similar structures. Take DQN for example:

DQN/car_racing.py - main entry point, the executable file

DQN/dqn/agent.py - DQN model

DQN/dqn/experience_replay.py - experience replay buffer (a minimal sketch is shown after this file list)

data/plot.py - plot figures
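
Experience replay stores past transitions and samples random minibatches for training, which breaks the correlation between consecutive frames. Below is a minimal illustrative sketch of such a buffer, not the exact implementation in DQN/dqn/experience_replay.py.

```python
import random
from collections import deque

class ReplayBuffer:
    """A minimal uniform experience replay buffer (illustrative sketch)."""

    def __init__(self, capacity=100000):
        self.buffer = deque(maxlen=capacity)  # old transitions are dropped

    def add(self, state, action, reward, next_state, done):
        # Store one transition for later sampling.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        # Uniformly sample a minibatch of stored transitions.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```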

Result

Training Result

[Figures: training curves of average reward and average Q value for each model]

Test Result


DQN      DDQN     Dueling DQN   Human
755      784.95   737.35        216.35


The dueling DQN doesn't perform as well as we expected. Some speculated reasons are in the presentation.

Timeline

Oct. 15, 2018 - Project Proposal

Nov. 19, 2018 - Project Progress

Dec. 12, 2018 - Presentation

Dec. 14, 2018 - Final Report

References

  1. car_racing.py is our game environment, adapted from the gym library.
  2. The three DQN networks were implemented by us, using the same hyperparameters as in the three main reference papers.
  3. The loss functions of the three DQN networks were modified by us.
  4. The experience_replay component is adapted from diegoalejogm's GitHub repository.
