**Artificial Neural Networks** are some of the most fascinating products of the Machine Learning field. In this article we are going to build a Neural Network that will watch the gameplay of a simple board game and then try to learn how to play it. We are then going to have the Neural Network play the game and evaluate the results.

All the code for this project is available on GitHub, so you can check it out there (if you do, please don't forget to star the repository so I know you've enjoyed this project). Also, feel free to follow me on Twitter at @b_dmarius; I'll post every new article there.

# Motivation for building a Neural Network to play a simple game

I believe simple games are a nice objective for building a neural network, because sometimes we need a more fun approach to learning about a subject. In an earlier article, I built a Neural Network that learns to play Tic-Tac-Toe. Both that game and this one are simple, solved games for which Neural Networks are a bit overkill. Nonetheless, these games offer simple data with small search spaces, and gathering that data is much easier, so we can concentrate on building the model. Not to mention that, at the end of the project, evaluating our results is more entertaining, because we just let the Neural Network play the game.

# Approach for building a Keras Neural Network

First we are going to build a basic, text-based game of Connect Four (I'll give you more details about the game in the following section). Then we are going to let two random players play against each other for a few thousand games and record the history of each game and its eventual winner.

Then we are going to build a **Keras Neural Network** model that will observe that history and try to learn the game from it. Finally, we will let the Neural Network play against the random players and draw our conclusions from the results.

# Building Connect Four in Python

Connect Four is a simple 2-player board game. Each player is assigned a color (we will use red for player 1 and yellow for player 2) and the players take turns dropping a colored disc into a board of 6 rows and 7 columns. When a disc is dropped into a column, it takes the lowest available space in that column, and then it's the other player's turn.

A player wins the game by building a line of 4 or more connected discs of their own color (horizontally, vertically or diagonally). The game ends in a draw when all slots are filled and no player has managed to build such a line. For more details on the game, please check the Connect Four Wikipedia page.

## Project setup

We need to install 3 dependencies for this project. They will be used to build our neural network.

```
pip3 install keras
pip3 install tensorflow
pip3 install scikit-learn
```

For the purpose of the project we will use 4 classes: Game, GameController, ConnectFourModel and Player. You can pretty much figure out yourself what these 4 classes are used for: Game will store the board data, GameController will control the flow of the game, ConnectFourModel will hold our Neural Network model, and Player will have 2 strategies: one based on random moves and one based on our Neural Network.

All we need for the setup is to establish a few notations for holding the data of our project.

```
RED_PLAYER = 'R'
YELLOW_PLAYER = 'Y'
RED_PLAYER_VAL = -1
YELLOW_PLAYER_VAL = 1
EMPTY = ' '
EMPTY_VAL = 0
HORIZONTAL_SEPARATOR = ' | '
GAME_STATE_X = -1        # red player wins
GAME_STATE_O = 1         # yellow player wins
GAME_STATE_DRAW = 0
GAME_STATE_NOT_ENDED = 2
VERTICAL_SEPARATOR = '__'
NUM_ROWS = 6
NUM_COLUMNS = 7
REQUIRED_SEQUENCE = 4
```

## The Game class

As we said, this class will contain the board of the game and methods for evaluating the game state and the results.

```
import copy
import os


class Game:

    def __init__(self):
        self.resetBoard()

    def resetBoard(self):
        # 6 rows x 7 columns, all slots empty
        self.board = [[EMPTY_VAL] * NUM_COLUMNS for _ in range(NUM_ROWS)]
        self.boardHistory = []

    def printBoard(self):
        for i in range(len(self.board)):
            for j in range(len(self.board[i])):
                print(VERTICAL_SEPARATOR, end='')
            print(os.linesep)
            for j in range(len(self.board[i])):
                if RED_PLAYER_VAL == self.board[i][j]:
                    print(RED_PLAYER, end='')
                elif YELLOW_PLAYER_VAL == self.board[i][j]:
                    print(YELLOW_PLAYER, end='')
                elif EMPTY_VAL == self.board[i][j]:
                    print(EMPTY, end='')
                print(HORIZONTAL_SEPARATOR, end='')
            print(os.linesep)
        # Bottom border of the board
        for j in range(len(self.board[0])):
            print(VERTICAL_SEPARATOR, end='')
        print(os.linesep)
```

The Game class will also offer the list of all available moves given a particular state of the game. I chose this approach so that we don't waste any time on checking whether a given move is legal or not. Instead, we will just offer all the available moves and then the Player will only be able to choose something from this list.

```
    def getAvailableMoves(self):
        availableMoves = []
        for j in range(NUM_COLUMNS):
            if self.board[NUM_ROWS - 1][j] == EMPTY_VAL:
                # Bottom row is free: the disc falls all the way down
                availableMoves.append([NUM_ROWS - 1, j])
            else:
                for i in range(NUM_ROWS - 1):
                    # Lowest empty slot sitting on top of a filled one
                    if self.board[i][j] == EMPTY_VAL and self.board[i + 1][j] != EMPTY_VAL:
                        availableMoves.append([i, j])
        return availableMoves
```

We also have to check the current state of the game. After each turn, we check whether anybody has won the game, then we check whether we have a draw. If none of these cases is true, the game has not ended.

The code for this method is quite long and not the focus of this article, but if you want to see it, you can check it here.
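The full getGameResult method scans rows, columns and both diagonals. As a rough idea of how it works, here is a minimal sketch of the horizontal check alone (the helper name horizontalWinner is mine, for illustration only; it is not the method from the repository):

```python
# Constants from the project setup section
NUM_ROWS, NUM_COLUMNS, REQUIRED_SEQUENCE = 6, 7, 4
RED_PLAYER_VAL, YELLOW_PLAYER_VAL = -1, 1
GAME_STATE_X, GAME_STATE_O, GAME_STATE_NOT_ENDED = -1, 1, 2


def horizontalWinner(board):
    """Illustrative horizontal check only; the real getGameResult also
    scans columns and both diagonals, and detects draws."""
    for i in range(NUM_ROWS):
        for j in range(NUM_COLUMNS - REQUIRED_SEQUENCE + 1):
            # Slide a window of 4 cells across each row
            window = board[i][j:j + REQUIRED_SEQUENCE]
            if window == [RED_PLAYER_VAL] * REQUIRED_SEQUENCE:
                return GAME_STATE_X  # red has four in a row
            if window == [YELLOW_PLAYER_VAL] * REQUIRED_SEQUENCE:
                return GAME_STATE_O  # yellow has four in a row
    return GAME_STATE_NOT_ENDED
```

The vertical and diagonal checks follow the same sliding-window pattern, just stepping through the board in a different direction.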

The Game class also contains a board history. After each move of a player, the Game records a copy of the current board. This will be provided later to the model to train on it. We reset the board history at the end of each game.

```
    def move(self, move, player):
        self.board[move[0]][move[1]] = player
        self.boardHistory.append(copy.deepcopy(self.board))
```

## The Player Class

As discussed earlier, the player can have two strategies. The first one is a random strategy, where the player randomly chooses a move from the list of available moves. The second one is based on our Keras Neural Network model, meaning we pass the decision on which move to take to our Neural Network and act accordingly.

```
import copy
import random


class Player:

    def __init__(self, value, strategy='random', model=None):
        self.value = value
        self.strategy = strategy
        self.model = model

    def getMove(self, availableMoves, board):
        if self.strategy == 'random':
            return availableMoves[random.randrange(0, len(availableMoves))]
        else:
            maxValue = 0
            bestMove = availableMoves[0]
            for availableMove in availableMoves:
                # Simulate each move and ask the model how likely we are to win
                boardCopy = copy.deepcopy(board)
                boardCopy[availableMove[0]][availableMove[1]] = self.value
                if self.value == RED_PLAYER_VAL:
                    value = self.model.predict(boardCopy, 2)
                else:
                    value = self.model.predict(boardCopy, 0)
                if value > maxValue:
                    maxValue = value
                    bestMove = availableMove
            return bestMove

    def getPlayer(self):
        return self.value
```

## The Game Controller

Our GameController class controls the flow of the game. We will use it to simulate a large number of games. It also collects the board history from the Game class and builds a training history: a mapping between the board positions of a game and its eventual winner, which will be used for **training a Keras Neural Network**.

```
class GameController:

    def __init__(self, game, redPlayer, yellowPlayer):
        self.game = game
        self.redPlayer = redPlayer
        self.yellowPlayer = yellowPlayer
        self.trainingHistory = []
```

The two players can either be random or based on the Neural Network. From the perspective of the Game Controller, it does not really matter.

The flow of a game is simple: the controller will take the available moves from the board and let the player choose the move. Then it will pass that move back to the board. At the end of the game, it will take the board history and the winner of that game and add them to the training history.

```
    def playGame(self):
        playerToMove = self.redPlayer
        while self.game.getGameResult() == GAME_STATE_NOT_ENDED:
            availableMoves = self.game.getAvailableMoves()
            move = playerToMove.getMove(availableMoves, self.game.getBoard())
            self.game.move(move, playerToMove.getPlayer())
            if playerToMove == self.redPlayer:
                playerToMove = self.yellowPlayer
            else:
                playerToMove = self.redPlayer
        # Label every position of this game with the final result
        for historyItem in self.game.getBoardHistory():
            self.trainingHistory.append((self.game.getGameResult(), copy.deepcopy(historyItem)))
```

This method will actually be called thousands of times. We will simulate batches of games to gather the training data, and we will also use it for the actual game play of the Neural Network.

```
    def simulateManyGames(self, numberOfGames):
        redPlayerWins = 0
        yellowPlayerWins = 0
        draws = 0
        for i in range(numberOfGames):
            self.game.resetBoard()
            self.playGame()
            if self.game.getGameResult() == RED_PLAYER_VAL:
                redPlayerWins = redPlayerWins + 1
            elif self.game.getGameResult() == YELLOW_PLAYER_VAL:
                yellowPlayerWins = yellowPlayerWins + 1
            else:
                draws = draws + 1
        totalWins = redPlayerWins + yellowPlayerWins + draws
        print('Red Wins: ' + str(int(redPlayerWins * 100 / totalWins)) + '%')
        print('Yellow Wins: ' + str(int(yellowPlayerWins * 100 / totalWins)) + '%')
        print('Draws: ' + str(int(draws * 100 / totalWins)) + '%')
```

# Keras Neural Network

Now it's time to build our Neural Network model. The input for this network is a given state of the game, so the 6x7 board gives us 42 inputs. The output has 3 values: the probability that the first player wins, the probability of a draw, and the probability that the second player wins. The output with the highest value is the one we consider to be the expected outcome of the given game.
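To make the 42-input idea concrete, here is a quick sketch (with numpy, which the train method also relies on) of how one 6x7 board becomes the flat vector the network consumes; the sample discs are made up for illustration:

```python
import numpy as np

# A 6x7 board: -1 for red, 1 for yellow, 0 for empty
board = [[0] * 7 for _ in range(6)]
board[5][3] = -1  # a red disc at the bottom of column 3
board[5][4] = 1   # a yellow disc at the bottom of column 4

# Flatten the board into a single row of 42 values
x = np.array(board).reshape((-1, 42))
print(x.shape)  # (1, 42)
```

Every board position recorded in the training history goes through exactly this kind of reshape before being fed to the network.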

In between the input and the output layers we have 2 Dense layers of size 42. I've played with these values for a little bit and these are the best ones I could find. But please do try better values and let me know on Twitter what you've achieved.

```
import numpy as np
from keras.layers import Dense
from keras.models import Sequential
from keras.utils import to_categorical


class ConnectFourModel:

    def __init__(self, numberOfInputs, numberOfOutputs, batchSize, epochs):
        self.numberOfInputs = numberOfInputs
        self.numberOfOutputs = numberOfOutputs
        self.batchSize = batchSize
        self.epochs = epochs
        self.model = Sequential()
        self.model.add(Dense(42, activation='relu', input_shape=(numberOfInputs,)))
        self.model.add(Dense(42, activation='relu'))
        self.model.add(Dense(numberOfOutputs, activation='softmax'))
        self.model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
```

For **training our Neural Network**, we take the data provided by the GameController and pass it through our model. We split the data into 80% for training and 20% for testing.

```
    def train(self, dataset):
        input = []
        output = []
        for data in dataset:
            input.append(data[1])
            output.append(data[0])
        X = np.array(input).reshape((-1, self.numberOfInputs))
        # Winner labels (-1, 0, 1) become one-hot vectors of 3 classes
        y = to_categorical(output, num_classes=3)
        limit = int(0.8 * len(X))
        X_train = X[:limit]
        X_test = X[limit:]
        y_train = y[:limit]
        y_test = y[limit:]
        self.model.fit(X_train, y_train, validation_data=(X_test, y_test),
                       epochs=self.epochs, batch_size=self.batchSize)
```

The only other method of this class is the one used to predict the outcome. Given a new board state, we pass it once through our Neural Network and choose the probability that corresponds to the player index (whether it's the red or the yellow player).

```
    def predict(self, data, index):
        return self.model.predict(np.array(data).reshape(-1, self.numberOfInputs))[0][index]
```
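The index argument deserves a quick note. to_categorical builds the one-hot targets with numpy-style indexing, so a winner label of -1 wraps around to the last class. We can replicate that mapping with plain numpy to see which class index each label ends up in (this is my own illustration, not code from the project):

```python
import numpy as np

# The winner labels used in training: red win, draw, yellow win
labels = np.array([-1, 0, 1])

# to_categorical effectively does this: numpy fancy indexing,
# where a label of -1 wraps around to the last column
onehot = np.zeros((len(labels), 3))
onehot[np.arange(len(labels)), labels] = 1
print(onehot)
# Red (-1) lands in class index 2, which is why the red player
# reads its winning probability from index 2 in getMove.
```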

# Testing our Keras Neural Network

Now it's time to play the game and see our results. First, we are going to build two players with random strategies and let them play 1000 games so that we can gather our training data.

```
firstGame = Game()
redPlayer = Player(RED_PLAYER_VAL, strategy='random')
yellowPlayer = Player(YELLOW_PLAYER_VAL, strategy='random')
gameController = GameController(firstGame, redPlayer, yellowPlayer)
print ("Playing with both players with random strategies")
gameController.simulateManyGames(1000)
```

We get a pretty balanced game. The red and the yellow players win roughly the same number of games, while the rest of the games (only 15%) end in a draw.

Now it's time to train our **Keras Neural Network**.

```
# 42 inputs
# 3 outputs
# 50 batch size
# 100 epochs
model = ConnectFourModel(42, 3, 50, 100)
model.train(gameController.getTrainingHistory())
```

The training will take only a few minutes, as we don't have that much data to go through.

Now it's time to let the Neural Network play as the red player.

```
redNeuralPlayer = Player(RED_PLAYER_VAL, strategy='model', model=model)
thirdGame = Game()
gameController = GameController(thirdGame, redNeuralPlayer, yellowPlayer)
print("Playing with red player as Neural Network")
gameController.simulateManyGames(1000)
```

Awesome! The Neural Network wins most of the games - 73% wins. There are some yellow wins as well and some draws, but most of the games are indeed won by the Neural Network.

Now it's time to let the Neural Network play as the yellow player.

```
yellowNeuralPlayer = Player(YELLOW_PLAYER_VAL, strategy='model', model=model)
secondGame = Game()
gameController = GameController(secondGame, redPlayer, yellowNeuralPlayer)
print ("Playing with yellow player as Neural Network")
gameController.simulateManyGames(1000)
```

After a few seconds we get the results.

Well, this time it's not so good. Our Neural Network barely outperforms the random player, which means it hasn't really managed to learn the game from yellow's side. Playing as the yellow player is also more difficult in this game, but that alone does not fully account for this result.

# Conclusions and lessons learned

In this article we've built a Keras Neural Network that tried to learn how to play Connect Four. We saw very good results when the Neural Network played as the red player, but playing as the yellow player was, honestly, a little bit disappointing. Our Tic-Tac-Toe Neural Network from the other article got much better results, but that's also a much simpler game.

Still, in my opinion this was a nice experience and an enjoyable way to learn about how Neural Networks learn and act. It was a fun project and I hope you liked it too. As mentioned earlier, the entire code for the project can be found on GitHub. Feel free to star the project if you enjoyed working with it.

*Thank you so much for reading this article! Interested in more? Follow me on Twitter at @b_dmarius and I'll post there every new article.*