# AI Plays Snake Game
This project features an AI agent that learns to play the classic Snake game using a Deep Q-Network (DQN). The agent is built with PyTorch and interacts with a game environment created using Pygame.
## Table of Contents

- [About the Project](#about-the-project)
- [Getting Started](#getting-started)
- [Usage](#usage)
- [The AI Model](#the-ai-model)
- [File Structure](#file-structure)
- [Contributing](#contributing)
- [License](#license)
- [Acknowledgments](#acknowledgments)
## About the Project

This repository contains the code for an AI that learns to play Snake through reinforcement learning. The core of the project is a Deep Q-learning agent that takes the game's state as input and outputs the best move to make. The goal is to maximize the score by eating the food and avoiding collisions.
## Getting Started

To get a local copy up and running, follow these simple steps.
### Prerequisites

You will need Python 3 and pip installed on your machine.
### Installation

1. Clone the repo:
   ```sh
   git clone https://github.com/arnavkul-byte/AI-Plays-Snake-Game.git
   ```
2. Navigate to the project directory:
   ```sh
   cd AI-Plays-Snake-Game
   ```
3. Install the required packages:
   ```sh
   pip install -r requirements.txt
   ```
   Alternatively, install the dependencies directly:
   ```sh
   pip install pygame torch numpy matplotlib
   ```
## Usage

To start the training process and see the AI in action, run `agent.py`:

```sh
python agent.py
```

A Pygame window will open, showing the Snake game. A plot will also be displayed, showing the agent's score and mean score over time. The model is saved as `model.pth` in the `model` directory whenever a new high score is achieved.
## The AI Model

The AI agent uses a Deep Q-Network (DQN) to learn the optimal policy for playing Snake.
### State Representation

The state is an 11-dimensional vector that gives the agent information about its immediate surroundings, its direction of movement, and the location of the food. It consists of the following boolean values (a sketch of how the vector might be built follows the list):
- Danger Ahead: True if there is an immediate danger of collision in the current direction.
- Danger to the Right: True if there is an immediate danger to the right of the current direction.
- Danger to the Left: True if there is an immediate danger to the left of the current direction.
- Current Direction: Four boolean values representing the current direction of the snake (Left, Right, Up, Down).
- Food Location: Four boolean values indicating the relative position of the food (Food Left, Food Right, Food Up, Food Down).
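To make this concrete, here is a minimal sketch of how such a state vector might be assembled. The names used here (`game.head`, `game.is_collision`, `BLOCK_SIZE`, the string directions) are assumptions for illustration and may not match `snake_game.py` exactly:

```python
import numpy as np

BLOCK_SIZE = 20  # illustrative grid step; must match the game's block size

def get_state(game):
    """Assemble the 11-value boolean state vector described above."""
    head = game.head
    point_l = (head.x - BLOCK_SIZE, head.y)
    point_r = (head.x + BLOCK_SIZE, head.y)
    point_u = (head.x, head.y - BLOCK_SIZE)
    point_d = (head.x, head.y + BLOCK_SIZE)

    dir_l = game.direction == "LEFT"
    dir_r = game.direction == "RIGHT"
    dir_u = game.direction == "UP"
    dir_d = game.direction == "DOWN"

    state = [
        # Danger ahead (collision one step along the current heading)
        (dir_r and game.is_collision(point_r)) or
        (dir_l and game.is_collision(point_l)) or
        (dir_u and game.is_collision(point_u)) or
        (dir_d and game.is_collision(point_d)),
        # Danger to the right of the current heading
        (dir_u and game.is_collision(point_r)) or
        (dir_d and game.is_collision(point_l)) or
        (dir_l and game.is_collision(point_u)) or
        (dir_r and game.is_collision(point_d)),
        # Danger to the left of the current heading
        (dir_d and game.is_collision(point_r)) or
        (dir_u and game.is_collision(point_l)) or
        (dir_r and game.is_collision(point_u)) or
        (dir_l and game.is_collision(point_d)),
        # Current direction (one-hot)
        dir_l, dir_r, dir_u, dir_d,
        # Food location relative to the head
        game.food.x < head.x,  # food left
        game.food.x > head.x,  # food right
        game.food.y < head.y,  # food up
        game.food.y > head.y,  # food down
    ]
    return np.array(state, dtype=int)
```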
### Network Architecture

The agent's brain is a simple feed-forward neural network implemented in PyTorch (a sketch follows the list). It consists of:
- An input layer with 11 nodes (corresponding to the state vector).
- A hidden layer with 256 nodes and a ReLU activation function.
- An output layer with 3 nodes, representing the three possible actions (move straight, turn right, turn left).
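A sketch of what this architecture looks like in PyTorch; the class name `LinearQNet` is illustrative, and the actual `model.py` may differ in detail:

```python
import torch.nn as nn
import torch.nn.functional as F

class LinearQNet(nn.Module):
    """Feed-forward DQN: 11 state inputs -> 256 hidden units -> 3 Q-values."""

    def __init__(self, input_size=11, hidden_size=256, output_size=3):
        super().__init__()
        self.linear1 = nn.Linear(input_size, hidden_size)
        self.linear2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.relu(self.linear1(x))  # hidden layer with ReLU activation
        return self.linear2(x)       # one Q-value per action: straight, right, left
```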
### Training

The agent is trained using a Deep Q-learning algorithm with the following key components, each sketched after the list:

- Experience Replay: The agent stores its experiences (state, action, reward, next state, done) in a memory buffer. During long-term memory training, it samples a random batch of these experiences to train the network. This breaks the correlation between consecutive experiences and stabilizes training.
- Epsilon-Greedy Policy: To balance exploration and exploitation, the agent uses an epsilon-greedy strategy. With probability epsilon it chooses a random action (exploration); otherwise it chooses the action with the highest predicted Q-value (exploitation). Epsilon decreases as the number of games increases, so the agent makes more informed decisions over time.
- Q-Trainer: The `QTrainer` class handles the training steps. It computes the loss between the predicted Q-values and the target Q-values (given by the Bellman equation) and updates the network's weights using the Adam optimizer.
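To make the first two components concrete, here is a hedged sketch of the agent's replay memory and epsilon-greedy action selection. `MAX_MEMORY`, `BATCH_SIZE`, and the epsilon schedule are illustrative constants, not necessarily those used in `agent.py`:

```python
import random
from collections import deque
import torch

MAX_MEMORY = 100_000  # illustrative replay-buffer capacity
BATCH_SIZE = 1000     # illustrative batch size for long-term training

class Agent:
    def __init__(self, model, trainer):
        self.model = model      # the network sketched above
        self.trainer = trainer  # the trainer sketched below
        self.n_games = 0
        self.memory = deque(maxlen=MAX_MEMORY)  # oldest experiences drop off

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def train_long_memory(self):
        # Sample a random batch to break correlation between consecutive steps
        if len(self.memory) > BATCH_SIZE:
            mini_sample = random.sample(self.memory, BATCH_SIZE)
        else:
            mini_sample = list(self.memory)
        states, actions, rewards, next_states, dones = zip(*mini_sample)
        self.trainer.train_step(states, actions, rewards, next_states, dones)

    def get_action(self, state):
        # Epsilon shrinks as more games are played (linear schedule is illustrative)
        epsilon = max(0, 80 - self.n_games)
        final_move = [0, 0, 0]                    # one-hot: [straight, right, left]
        if random.randint(0, 200) < epsilon:
            final_move[random.randint(0, 2)] = 1  # explore: random action
        else:
            state0 = torch.tensor(state, dtype=torch.float)
            prediction = self.model(state0)       # predicted Q-value per action
            final_move[torch.argmax(prediction).item()] = 1  # exploit: best Q
        return final_move
```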
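And a minimal sketch of the Q-learning update performed by the trainer, assuming a batch sampled from the replay memory; shapes and details are simplified relative to the real `model.py`:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class QTrainer:
    def __init__(self, model, lr=0.001, gamma=0.9):
        self.model = model
        self.gamma = gamma  # discount factor for future rewards
        self.optimizer = optim.Adam(model.parameters(), lr=lr)
        self.criterion = nn.MSELoss()

    def train_step(self, state, action, reward, next_state, done):
        # Each argument is a batch sampled from the replay memory
        state = torch.tensor(state, dtype=torch.float)
        next_state = torch.tensor(next_state, dtype=torch.float)
        action = torch.tensor(action, dtype=torch.long)   # one-hot actions
        reward = torch.tensor(reward, dtype=torch.float)

        pred = self.model(state)  # predicted Q-values for each state in the batch
        target = pred.clone()
        for i in range(len(done)):
            q_new = reward[i]
            if not done[i]:
                # Bellman target: r + gamma * max_a' Q(s', a')
                q_new = reward[i] + self.gamma * torch.max(self.model(next_state[i]))
            target[i][torch.argmax(action[i]).item()] = q_new

        self.optimizer.zero_grad()
        loss = self.criterion(target, pred)  # MSE between target and predicted Q
        loss.backward()
        self.optimizer.step()
```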
## File Structure

```
├── agent.py        # Main script to run the training process
├── snake_game.py   # The Snake game environment
├── model.py        # The Deep Q-Network model and Q-Trainer
├── helper.py       # Helper function for plotting the training progress
├── arial.ttf       # Font file for displaying the score
└── README.md       # This file
```
## Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License

Distributed under the MIT License. See `LICENSE` for more information.
## Acknowledgments

- This project was inspired by the classic Snake game and the power of reinforcement learning.
- Thanks to the developers of PyTorch and Pygame for their amazing libraries.