The material presented in this paper is the foundation for neural network architectures that solve linear equations using matrix splitting in iterative discrete-time methods. As noted, a neural network consists of many interconnected processing elements (neurons, or nodes), and the performance of a particular network depends on the training phase, specifically on the training data used. Matrix splitting can be applied at several preprocessing stages. It is often necessary to preprocess the training data so that important features extracted from the data, rather than the "raw" data, are used to train the network; such preprocessing can therefore improve the performance of the neural network. Convergence was then examined using the Richardson, Gauss-Seidel, and SOR iterative methods. The same termination criterion was used for all of these methods so that the results could be compared fairly. Comparing all the results, we see that the SOR iterative method gives the best results, that is, the fastest convergence; it is about 10 times faster than the next-best method (Gauss-Seidel).
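To illustrate the comparison described above, the following is a minimal sketch of Gauss-Seidel and SOR applied to a small linear system with a shared termination criterion. The test matrix, tolerance, and relaxation factor are illustrative assumptions, not values from the paper; SOR with a well-chosen relaxation factor converges in far fewer iterations than Gauss-Seidel, consistent with the reported results.

```python
def iterate(A, b, omega, tol=1e-10, max_iter=10_000):
    """Relaxation iteration for Ax = b based on the matrix splitting
    A = D + L + U. omega = 1.0 gives Gauss-Seidel; 1 < omega < 2 gives SOR.
    Returns the approximate solution and the number of iterations used."""
    n = len(b)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        max_change = 0.0
        for i in range(n):
            # Gauss-Seidel sweep using the latest values, then relax by omega
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(x_new - x[i]))
            x[i] = x_new
        if max_change < tol:  # same termination criterion for both methods
            return x, k
    return x, max_iter

# Illustrative test problem: a diagonally dominant tridiagonal system
# (the kind arising from a 1-D discretization), chosen here for the example.
n = 20
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.0
    if i > 0:
        A[i][i - 1] = -1.0
    if i < n - 1:
        A[i][i + 1] = -1.0
b = [1.0] * n

_, k_gs = iterate(A, b, omega=1.0)   # Gauss-Seidel
_, k_sor = iterate(A, b, omega=1.7)  # SOR with an assumed relaxation factor
print(f"Gauss-Seidel: {k_gs} iterations, SOR: {k_sor} iterations")
```

The relaxation factor of 1.7 is close to the optimal value for this particular matrix; in general the speedup of SOR over Gauss-Seidel depends on how well omega is chosen for the system at hand.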