Sequential neural network decoder for convolutional codes with large block sizes
Due to the curse of dimensionality, the training complexity of neural-network-based channel decoders grows exponentially with the codeword length. Despite significant advances in computing power, decoding codewords with long block lengths remains difficult for such decoders. In this thesis, we propose a neural-network-based decoder termed the Sequential Neural Network Decoder (SNND). The SNND consists of multiple sub-models, where each sub-model passes its final state to the following sub-model as that model's initial state. The bit error rate (BER) performance of the SNND remains unchanged as the number of sub-models increases, and it approaches the performance of soft-decision Viterbi decoding over the additive white Gaussian noise (AWGN) channel. However, the SNND's performance is found to degrade as the modulation order increases.
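The state hand-off between sub-models described above can be sketched in a few lines. This is a hypothetical illustration, not the thesis's actual implementation: `make_toy_submodel` stands in for a trained recurrent sub-model, and the decoding rule is a toy hard decision.

```python
def make_toy_submodel(weight):
    """Hypothetical stand-in for one trained sub-model: decodes a chunk
    of received symbols and returns (decoded bits, final state)."""
    def submodel(chunk, init_state):
        state = init_state
        bits = []
        for symbol in chunk:
            # Toy recurrent update: the carried state influences each decision.
            state = 0.5 * state + weight * symbol
            bits.append(1 if state > 0 else 0)
        return bits, state
    return submodel


def sequential_decode(sub_models, chunks, init_state=0.0):
    """Chain the sub-models: the final state of each sub-model seeds the
    next one, mirroring the SNND's state passing between sub-models."""
    state = init_state
    decoded = []
    for model, chunk in zip(sub_models, chunks):
        bits, state = model(chunk, state)
        decoded.extend(bits)
    return decoded


# Split a long received sequence into chunks, one per sub-model.
models = [make_toy_submodel(1.0) for _ in range(3)]
chunks = [[1.2, -0.4], [0.3, -2.0], [1.5, 0.1]]
print(sequential_decode(models, chunks))
```

Because each sub-model only sees its own chunk plus a single carried state, the scheme keeps per-model training complexity bounded while still covering a long block.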