Abstract
When a person learns, they observe and interact with their surroundings and monitor the outcomes of those interactions. Throughout this process, the brain processes only the current snapshot of information; it does not need to continuously revisit past moments in time to retain what it has learned. Supervised neural networks, for all their resemblance to the human brain, do not learn well incrementally. The standard multilayer perceptron, the radial basis function network, and others share a common problem: they lose learned information when additional data points are presented. In this paper, we develop a supervised learning neural network that converges on new information without overwriting previous knowledge. The proposed Neural Field can operate in real-time environments where a data point is seen once and never revisited, or track a function that changes over time without repeatedly retraining on the full dataset. A dataset can be presented in portions, and the network will accumulate knowledge incrementally. With a static ordered grid of radial basis function neurons, the Neural Field both learns incrementally and generalizes effectively.
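As a rough illustration of the idea the abstract describes (not the paper's exact Neural Field architecture), the sketch below places Gaussian radial basis function neurons on a static one-dimensional grid and updates the output weights one sample at a time with a simple delta rule. Because each Gaussian responds only near its center, an update driven by a new sample barely touches weights far from it, so knowledge stored elsewhere on the grid survives; all class names, parameters, and values here are illustrative assumptions.

```python
import numpy as np

class RBFGrid:
    """Sketch of incremental learning on a static grid of Gaussian RBF
    neurons. Each sample is seen once per update and never stored;
    locality of the Gaussians limits interference with old knowledge."""

    def __init__(self, n_neurons=50, lo=0.0, hi=1.0, width=0.05, lr=0.5):
        self.centers = np.linspace(lo, hi, n_neurons)  # static ordered grid
        self.width = width                             # shared Gaussian width
        self.lr = lr                                   # learning rate
        self.w = np.zeros(n_neurons)                   # output weights

    def _phi(self, x):
        # Activation of every neuron for input x; near-zero far from x.
        return np.exp(-((x - self.centers) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        phi = self._phi(x)
        # Normalized RBF readout (small constant guards against 0/0).
        return float(self.w @ phi / (phi.sum() + 1e-12))

    def learn(self, x, y):
        # Single-sample delta rule: only weights with appreciable
        # activation at x receive a meaningful update.
        err = y - self.predict(x)
        phi = self._phi(x)
        self.w += self.lr * err * phi / (phi.sum() + 1e-12)

net = RBFGrid()
for _ in range(200):          # learn a target value at x = 0.2 first...
    net.learn(0.2, 1.0)
for _ in range(200):          # ...then a different value at x = 0.8
    net.learn(0.8, -1.0)
# The second phase leaves the first region's prediction intact,
# because the Gaussians at 0.8 are essentially zero at 0.2.
```

In a global model such as a multilayer perceptron, the second training phase would drag the prediction at 0.2 toward the new target; here the two regions of the grid are effectively independent.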