Two fast and accurate heuristic RBF learning rules for data classification

Neural Netw. 2016 Mar:75:150-61. doi: 10.1016/j.neunet.2015.12.011. Epub 2016 Jan 4.

Abstract

This paper presents new Radial Basis Function (RBF) learning methods for classification problems. The proposed methods use heuristics to determine the spreads, the centers, and the number of hidden neurons of the network such that high accuracy is achieved with fewer neurons, while the learning algorithm remains fast and simple. To keep the network size limited, neurons are added to the network recursively until a termination condition is met. Each neuron covers a subset of the training data, and the process terminates when all training data are covered or the maximum number of neurons is reached. In each step, the center and spread of the new neuron are selected to maximize its coverage. Maximizing the coverage of the neurons leads to a network with fewer neurons, and hence a lower VC dimension and better generalization. Using the power exponential distribution function as the activation function of the hidden neurons, and in light of the new learning approaches, it is proved that all data become linearly separable in the space of hidden-layer outputs, which implies that there exist linear output-layer weights with zero training error. The proposed methods are applied to several well-known datasets, and the simulation results, compared with SVM and other leading RBF learning methods, show satisfactory and comparable performance.
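The greedy construction sketched in the abstract can be illustrated in code. The following is a minimal sketch, not the authors' algorithm: the spread rule (distance to the nearest opposite-class point), the power-exponential exponent `beta`, and the tie-breaking are assumptions, since the abstract does not specify them.

```python
import numpy as np

def pe_activation(dist, sigma, beta=4.0):
    """Power-exponential kernel exp(-(d/sigma)^beta); beta is an assumed value."""
    return np.exp(-(dist / sigma) ** beta)

def build_rbf(X, y, max_neurons=20):
    """Greedily add hidden neurons until all training points are covered
    or max_neurons is reached, each time picking the candidate center
    (a training point) whose coverage of same-class points is largest."""
    centers, spreads, labels = [], [], []
    uncovered = np.ones(len(X), dtype=bool)
    while uncovered.any() and len(centers) < max_neurons:
        best = None
        for i in np.where(uncovered)[0]:
            same = y == y[i]
            # Assumed spread rule: largest radius that excludes every
            # opposite-class point.
            d_other = np.linalg.norm(X[~same] - X[i], axis=1)
            sigma = d_other.min() if d_other.size else np.inf
            d_same = np.linalg.norm(X[uncovered & same] - X[i], axis=1)
            coverage = int((d_same < sigma).sum())
            if best is None or coverage > best[0]:
                best = (coverage, i, sigma)
        _, i, sigma = best
        centers.append(X[i]); spreads.append(sigma); labels.append(y[i])
        d = np.linalg.norm(X - X[i], axis=1)
        uncovered &= ~((d < sigma) & (y == y[i]))  # mark covered points
    return np.array(centers), np.array(spreads), np.array(labels)
```

Classification can then be done, for instance, by assigning the label of the neuron with the strongest activation, or by fitting linear output weights on the hidden-layer outputs, which the paper proves can achieve zero training error.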

Keywords: Classification problem; Heuristic learning method; Linear separability in feature space; Neural network learning; Radial Basis Function.

MeSH terms

  • Algorithms
  • Computer Heuristics*
  • Humans
  • Machine Learning*
  • Neural Networks, Computer*
  • Neurons / physiology
  • Statistics as Topic / classification*
  • Time Factors