Neural Network Checkers is an experimental project to learn about the use of neural networks in games and other areas.
A screenshot of Neural Network Checkers is shown below.
It uses a Multilayer Perceptron (MLP) neural network. The basic idea is that you train the neural network to evaluate checkers positions and then, after training, you play against the AI.
AI is the big topic at the moment, with systems like Google Gemini being based on a transformer neural network architecture. Gemini has been trained on a massive corpus of multilingual and multimodal data and, when you think about it, is an incredible piece of technology that can solve different types of problems across different application domains.
This project explores how to train a Multilayer Perceptron (MLP) neural network to play checkers. Once it has learnt how to play the game, you can then play against the AI. In a twist, I actually used Gemini to help code the application. An AI helping to create another AI! Skynet comes to mind.
The core of the checkers AI's neural network component is a Multilayer Perceptron. An MLP is a class of feedforward artificial neural network, characterised by multiple layers of neurons, typically organised into an input layer, one or more hidden layers, and an output layer. An MLP neural network with a single hidden layer is shown in the diagram below.
In the checkers game, the input layer consists of 64 neurons, corresponding to the 64 squares on the checkers board. Each input neuron represents a square, and its value indicates the presence and type of piece on that square.
- 1.0 for a White piece (man or king).
- -1.0 for a Black piece (man or king).
- 0.0 for an empty square.
This numerical representation allows the network to process the board state.
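As an illustration, the encoding could be implemented as in the sketch below. The board representation (positive values for White pieces, negative for Black, zero for empty) and the function name are assumptions for this example, not the project's actual code.

```c
/* Map the 8x8 board to the 64 network inputs. Assumes a hypothetical
 * representation: board[r][c] > 0 for White, < 0 for Black, 0 for empty. */
void encode_board(int board[8][8], double inputs[64])
{
    for (int r = 0; r < 8; r++) {
        for (int c = 0; c < 8; c++) {
            if (board[r][c] > 0)
                inputs[r * 8 + c] = 1.0;    /* White man or king */
            else if (board[r][c] < 0)
                inputs[r * 8 + c] = -1.0;   /* Black man or king */
            else
                inputs[r * 8 + c] = 0.0;    /* empty square */
        }
    }
}
```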
The MLP uses a single hidden layer with 128 neurons. Each neuron in the hidden layer takes inputs from all neurons in the preceding layer, applies a weighted sum, and then passes the result through an activation function. The hidden layer allows the network to learn complex patterns and relationships within the input data.
The output layer has a single neuron. This neuron produces a single numerical value, which represents the neural network's evaluation of the given board state. This value is normalized between -1 and 1, where a higher value indicates a more favorable board state for the White player and a lower value indicates a more favorable state for the Black player.
Connections between neurons have associated "weights," which determine the strength and direction of each connection. Each neuron also has a "bias," an additional input that shifts the point at which the neuron activates. These weights and biases are the parameters that the neural network learns during training.
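To make this concrete, the parameters could be held in a C struct like the sketch below, using the layer sizes described above (64 inputs, 128 hidden neurons, one output). The names and layout are illustrative and are not taken from the project source.

```c
#define NN_INPUTS 64   /* one input neuron per board square */
#define NN_HIDDEN 128  /* single hidden layer */

typedef struct {
    double w_hidden[NN_HIDDEN][NN_INPUTS]; /* input-to-hidden weights */
    double b_hidden[NN_HIDDEN];            /* hidden-layer biases */
    double w_output[NN_HIDDEN];            /* hidden-to-output weights */
    double b_output;                       /* output-neuron bias */
} MLP;
```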
For both the hidden and output layers, the sigmoid activation function is used. The sigmoid squashes the output of each neuron to a value between 0 and 1. This non-linearity is vital for MLPs to learn complex, non-linear relationships in the data.
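A sketch of the resulting feed-forward pass is shown below, reusing the hypothetical struct above. Note that the sigmoid produces values between 0 and 1, so the raw output has to be mapped onto the -1 to 1 evaluation range described earlier; the linear rescaling used here is one plausible way to do that and is an assumption, as are the function names.

```c
#include <math.h>

/* Standard logistic sigmoid: squashes any real value into (0, 1). */
static double sigmoid(double x)
{
    return 1.0 / (1.0 + exp(-x));
}

/* Evaluate a board position: weighted sums through the hidden layer,
 * then the output neuron, with sigmoid activations at both stages.
 * Hidden activations are returned via hidden_out so a training
 * routine can reuse them. */
double mlp_feed_forward(const MLP *net, const double inputs[NN_INPUTS],
                        double hidden_out[NN_HIDDEN])
{
    for (int h = 0; h < NN_HIDDEN; h++) {
        double sum = net->b_hidden[h];
        for (int i = 0; i < NN_INPUTS; i++)
            sum += net->w_hidden[h][i] * inputs[i];
        hidden_out[h] = sigmoid(sum);
    }

    double sum = net->b_output;
    for (int h = 0; h < NN_HIDDEN; h++)
        sum += net->w_output[h] * hidden_out[h];

    /* Map sigmoid's (0, 1) output onto the (-1, 1) evaluation range;
     * this rescaling is an assumption about the project's normalisation. */
    return 2.0 * sigmoid(sum) - 1.0;
}
```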
Before you can use the neural network it must be trained. The MLP learns to evaluate board states through a process called backpropagation, which is driven by a target evaluation. During training, the AI plays a large number of games (e.g. 1000 cycles) against a rule-based AI, in this case the Minimax AI with the evaluate_board_simple function. The Minimax AI is used by my Gtk4 Checkers game found here. After each game the board states are recorded and evaluated. I found that a minimum of 1000 training cycles is needed, but the higher the better.
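To illustrate the training step, the sketch below applies one backpropagation update for a single recorded board state, nudging the network's evaluation towards the target using a squared-error loss. It reuses the hypothetical struct and feed-forward function above; the learning-rate handling and the exact update rule used by the project are assumptions.

```c
/* One gradient-descent step: adjust weights and biases so the network's
 * evaluation of this position moves towards the target evaluation. */
void mlp_train_step(MLP *net, const double inputs[NN_INPUTS],
                    double target, double learning_rate)
{
    double hidden_out[NN_HIDDEN];
    double predicted = mlp_feed_forward(net, inputs, hidden_out);

    /* Output error signal. With output = 2*sigmoid(z) - 1, the sigmoid
     * value is (predicted + 1) / 2 and d(output)/dz = 2 * s * (1 - s). */
    double s = (predicted + 1.0) / 2.0;
    double delta_out = (target - predicted) * 2.0 * s * (1.0 - s);

    for (int h = 0; h < NN_HIDDEN; h++) {
        /* Hidden-layer error signal (sigmoid derivative: a * (1 - a)). */
        double delta_h = delta_out * net->w_output[h]
                         * hidden_out[h] * (1.0 - hidden_out[h]);

        /* Update input-to-hidden weights and bias. */
        for (int i = 0; i < NN_INPUTS; i++)
            net->w_hidden[h][i] += learning_rate * delta_h * inputs[i];
        net->b_hidden[h] += learning_rate * delta_h;

        /* Update hidden-to-output weight. */
        net->w_output[h] += learning_rate * delta_out * hidden_out[h];
    }
    net->b_output += learning_rate * delta_out;
}
```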
Once trained, the MLP is used as the evaluation function within a Minimax algorithm for the neural network AI player. When it is the neural network AI's turn, the Minimax algorithm explores possible future game states up to a certain depth (e.g. 3 moves ahead). At each leaf node (a final board state at the maximum search depth), instead of using a hand-coded evaluation function, the Minimax algorithm passes the board state to the trained MLP's feed-forward function. The scores returned for the leaf nodes are then used to select a move. In this way the MLP neural network is integrated into the Minimax algorithm.
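A simplified view of this integration is sketched below. The Move type and the move-generation helpers (generate_moves, apply_move) are hypothetical stand-ins for the project's own routines; only the shape of the search is intended to match the description above.

```c
#define MAX_MOVES 48
typedef struct { int from_r, from_c, to_r, to_c; } Move; /* illustrative */

/* Hypothetical helpers standing in for the project's own move logic. */
int  generate_moves(int board[8][8], int white_to_move, Move moves[MAX_MOVES]);
void apply_move(int board[8][8], const Move *mv, int next[8][8]);

/* Minimax search using the trained MLP as the leaf evaluator. */
double minimax(const MLP *net, int board[8][8],
               int depth, int white_to_move)
{
    Move moves[MAX_MOVES];
    int n_moves = generate_moves(board, white_to_move, moves);

    /* Leaf node (or no legal moves): score the position with the MLP. */
    if (depth == 0 || n_moves == 0) {
        double inputs[NN_INPUTS], hidden[NN_HIDDEN];
        encode_board(board, inputs);
        return mlp_feed_forward(net, inputs, hidden);
    }

    /* White maximises the evaluation, Black minimises it. */
    double best = white_to_move ? -2.0 : 2.0; /* outside the [-1, 1] range */
    for (int m = 0; m < n_moves; m++) {
        int next[8][8];
        apply_move(board, &moves[m], next);
        double score = minimax(net, next, depth - 1, !white_to_move);
        if (white_to_move ? (score > best) : (score < best))
            best = score;
    }
    return best;
}
```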
You can play against the artificial intelligence using the "Human vs AI" button.
A pre-built executable of the Neural Network Checkers game is available for Intel x86 Linux computers and can be downloaded from the binary folder. It has been tested on Debian 13 with Gtk 4.18.
The C source code for Neural Network Checkers is provided in the src directory.
You need to install the following packages.
sudo apt-get update
sudo apt install build-essential
sudo apt install libgtk-4-dev
Use the Makefile to compile.
make
To run Neural Network Checkers from the terminal use
./checkers
Make clean is also supported.
make clean
Semantic Versioning (SemVer) is used for version numbering. The version number has the form major.minor.patch (e.g. 0.0.0), representing major, minor and bug fix changes.
The code will be updated as and when I find bugs or make improvements to the code base.
- Alan Crispin Github
Active.
- Geany is a lightweight source-code editor (GPL v2 license).
- MIT OpenCourseWare Lecture. Search: Games, Minimax and Alpha-Beta.

