Conway's Game of Life is a cellular automaton that produces complex behavior from a few simple rules. This repository compares the performance of different neural network architectures on the task of predicting the next state in Conway's Game of Life.
The CNN architecture consists of two convolutional layers with 8 and 1 feature maps. A ReLU activation function is used between the layers and a Sigmoid activation function after the second layer.
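A minimal PyTorch sketch of this architecture (the kernel size and padding are assumptions, since they are not specified above):

```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    """Two convolutional layers with 8 and 1 feature maps (3x3 kernels and padding assumed)."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # 1 input channel -> 8 feature maps
        self.conv2 = nn.Conv2d(8, 1, kernel_size=3, padding=1)  # 8 feature maps -> 1 output channel

    def forward(self, x):
        # x: (batch, 1, 16, 16) board with values in {0, 1}
        x = torch.relu(self.conv1(x))
        return torch.sigmoid(self.conv2(x))  # predicted probability of each cell being alive
```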
The residual CNN architecture consists of two convolutional layers with 8 and 1 feature maps. A ReLU activation function is used between the layers and a Tanh activation function after the second layer. The output of the network is added to the previous state to obtain the next state.
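A corresponding sketch of the residual variant (again assuming 3x3 kernels with padding):

```python
import torch
import torch.nn as nn

class ResidualCNN(nn.Module):
    """Same two-layer CNN, but the Tanh output is added to the previous state."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(8, 1, kernel_size=3, padding=1)

    def forward(self, x):
        out = torch.relu(self.conv1(x))
        out = torch.tanh(self.conv2(out))  # predicted change per cell, in [-1, 1]
        return x + out                     # next state = previous state + predicted change
```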
The MLP architecture consists of two fully connected layers with 256 neurons each. A ReLU activation function is used between the layers and a Sigmoid activation function after the second layer.
The residual MLP architecture consists of two fully connected layers with 256 neurons each. A ReLU activation function is used between the layers and a Tanh activation function after the second layer. The output of the network is added to the previous state to obtain the next state.
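A sketch covering both MLP variants; the 16x16 boards are flattened to 256-dimensional vectors, and the `residual` flag is an illustrative way to switch between the plain and residual versions:

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two fully connected layers with 256 neurons each; residual=True gives the residual variant."""

    def __init__(self, residual=False):
        super().__init__()
        self.fc1 = nn.Linear(256, 256)  # 16x16 board flattened to a 256-dimensional vector
        self.fc2 = nn.Linear(256, 256)
        self.residual = residual

    def forward(self, x):
        # x: (batch, 1, 16, 16)
        flat = x.flatten(start_dim=1)             # (batch, 256)
        h = torch.relu(self.fc1(flat))
        if self.residual:
            out = flat + torch.tanh(self.fc2(h))  # add predicted change to the previous state
        else:
            out = torch.sigmoid(self.fc2(h))      # predicted probability of each cell being alive
        return out.view(x.shape)                  # reshape back to (batch, 1, 16, 16)
```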
Each frame is 16x16 pixels, where the value 1 represents a living cell and 0 represents a dead cell. The training set consists of 5000 series of 30 frames, the validation set of 500 series of 30 frames. The first state of each series was initialized with a 50% chance for each cell to be alive or dead. To evaluate the predictive power of the neural networks, the mean squared error (MSE) between the true next state and the predicted next state is calculated.
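A sketch of how such data can be generated with NumPy; the function names are illustrative and not necessarily those used in createData.py, and toroidal (wrap-around) boundaries are an assumption:

```python
import numpy as np

def gol_step(board):
    """Advance a Game of Life board by one step (toroidal wrap-around assumed here)."""
    # Count the 8 neighbours of every cell by summing shifted copies of the board.
    neighbours = sum(
        np.roll(np.roll(board, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    # A cell lives if it has exactly 3 neighbours, or 2 neighbours and is currently alive.
    return ((neighbours == 3) | ((neighbours == 2) & (board == 1))).astype(np.float32)

def make_series(num_series, num_frames=30, size=16, p_alive=0.5, seed=None):
    """Generate series of consecutive frames, each starting from a random 50% alive/dead board."""
    rng = np.random.default_rng(seed)
    data = np.empty((num_series, num_frames, size, size), dtype=np.float32)
    for s in range(num_series):
        board = (rng.random((size, size)) < p_alive).astype(np.float32)
        for t in range(num_frames):
            data[s, t] = board
            board = gol_step(board)
    return data

train_data = make_series(5000)  # training set: 5000 series of 30 frames
val_data = make_series(500)     # validation set: 500 series of 30 frames
```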
Comparison of the models during training. The loss shown for each model is the average over 10 full training runs.
- Install dependencies
- Create the data: `python3 createData.py`
- Select the desired model in `train.py`
- Run the training: `python3 train.py` (a minimal sketch of such a training loop is shown below)
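The training script itself is not reproduced here; the following is only a rough sketch of what such a loop might look like. The optimizer, learning rate, batch size, and epoch count are assumptions, and `CNN` and `make_series` refer to the sketches above rather than to the code in this repository:

```python
import torch
import torch.nn as nn

# Data from the generation sketch above; in the repository this comes from createData.py.
frames = torch.from_numpy(make_series(5000))    # (5000, 30, 16, 16)
states = frames[:, :-1].reshape(-1, 1, 16, 16)  # current states
targets = frames[:, 1:].reshape(-1, 1, 16, 16)  # corresponding next states

model = CNN()  # or ResidualCNN(), MLP(), MLP(residual=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()  # MSE between predicted and true next state

for epoch in range(10):
    perm = torch.randperm(states.shape[0])
    for i in range(0, len(perm), 256):  # mini-batches of 256 state pairs
        idx = perm[i:i + 256]
        optimizer.zero_grad()
        loss = criterion(model(states[idx]), targets[idx])
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch MSE = {loss.item():.4f}")
```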
- Python 3.6
- PyTorch 1.8.0
- NumPy
- Matplotlib