General regression neural network architecture

Another type of architecture is the general regression neural network (GRNN), which is known for its ability to train quickly on sparse data sets. In numerous tests, GRNNs have been found to respond much better than back-propagation networks to many types of problems, although this is not a rule. They are especially useful for continuous function approximation. A GRNN can have multidimensional input, and it will fit multidimensional surfaces through data. GRNNs work by measuring how far a given sample pattern is from the patterns in the training set in N-dimensional space, where N is the number of inputs of the problem. The Euclidean distance is usually adopted.
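As a concrete illustration of this distance measurement, the short sketch below (Python/NumPy; the data values and names are invented for illustration) computes the Euclidean distance from one sample pattern to every pattern in a small training set:

```python
import numpy as np

# Hypothetical training set: 5 patterns, N = 3 inputs each.
X_train = np.array([[0.1, 0.5, 0.9],
                    [0.4, 0.2, 0.7],
                    [0.8, 0.8, 0.1],
                    [0.3, 0.9, 0.4],
                    [0.6, 0.1, 0.5]])

x = np.array([0.2, 0.6, 0.8])  # one sample pattern with N inputs

# Euclidean distance from x to each training pattern in N-dimensional space
distances = np.linalg.norm(X_train - x, axis=1)
print(distances)
```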

A GRNN is a four-layer feed-forward neural network based on nonlinear regression theory, consisting of the input layer, the pattern layer, the summation layer, and the output layer (see Figure 11.20). There are no training parameters, such as learning rate and momentum, as in back-propagation networks, but a smoothing factor is applied after the network is trained. The smoothing factor determines how tightly the network matches its predictions to the data in the training patterns. Although the neurons in the first three layers are fully connected, each output neuron is connected only to some processing units in the summation layer. The summation layer has two types of processing units: summation units and a single division unit. The number of summation units is always the same as the number of GRNN output units. The division unit only sums the weighted activations of the pattern units of the hidden layer, without using any activation function.

FIGURE 11.20 General regression neural network architecture (the total number of neurons in the hidden pattern layer should at least equal the number of training patterns).

Each GRNN output unit is connected only to its corresponding summation unit and the division unit (there are no weights in these connections). The function of the output units consists of a simple division of the signal coming from the summation unit by the signal coming from the division unit. The summation and output layers together basically perform a normalization of the output vector, making a GRNN much less sensitive to the proper choice of the number of pattern units. More details on GRNNs can be found in Tsoukalas and Uhrig (1997) and Ripley (1996).
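To make the layer-by-layer description concrete, the following is a minimal sketch of a GRNN in Python/NumPy. It assumes Gaussian pattern-unit activations of the form exp(-D^2 / 2*sigma^2), following Specht's original formulation; the class name, data, and smoothing value are illustrative assumptions, not details from the text:

```python
import numpy as np

class GRNN:
    """Minimal GRNN sketch: one pattern unit per training pattern."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma  # smoothing factor, applied after training

    def fit(self, X, y):
        # "Training" is a single pass: simply store the patterns.
        self.X = np.asarray(X, dtype=float)  # pattern-layer centers
        self.y = np.asarray(y, dtype=float)  # summation-unit weights
        return self

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Pattern layer: Euclidean distance to every stored pattern,
        # passed through a Gaussian controlled by the smoothing factor.
        d2 = np.sum((self.X - x) ** 2, axis=1)
        a = np.exp(-d2 / (2.0 * self.sigma ** 2))
        # Summation layer: one summation unit (activations weighted by
        # the targets) and one division unit (sum of the activations).
        num = a @ self.y
        den = np.sum(a)
        # Output layer: a simple division normalizes the estimate.
        return num / den

# Usage: approximate a continuous function from sparse samples.
X = np.linspace(0, 2 * np.pi, 20).reshape(-1, 1)
y = np.sin(X).ravel()
net = GRNN(sigma=0.3).fit(X, y)
print(net.predict([1.0]))  # close to sin(1.0) = 0.841
```

Note that fit stores the training patterns in a single pass, which is why the pattern layer needs one neuron per training pattern, as discussed next.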

For GRNNs, the number of neurons in the hidden pattern layer is usually equal to the number of patterns in the training set, because the hidden layer consists of one neuron for each training pattern. This number can be made larger if one wants to add more patterns, but it cannot be made smaller.

The training of a GRNN is quite different from the training used in other neural networks. It is completed after each input-output vector pair from the training data set has been presented to the GRNN input layer only once.

The GRNN may be trained using a genetic algorithm (see Section 11.6.2). A genetic algorithm works by selectively breeding a population of "individuals," each of which is a potential solution to the problem; in this case, a potential solution is a set of smoothing factors. Genetic algorithms use a "fitness" measure to determine which individuals in the population survive and reproduce, so survival of the fittest causes good solutions to progress. Here, the genetic algorithm is used to find the appropriate individual smoothing factor for each input as well as an overall smoothing factor, and it seeks to breed an individual that minimizes the mean squared error of the test set, which can be calculated by

E = (1/p) Σ (t_i − o_i)²

where

E = mean squared error,
t_i = network output for pattern i,
o_i = desired (target) output for pattern i, and
the sum runs over all p patterns of the test set.

The larger the breeding pool, the greater its potential to produce a better individual. However, the network produced by each individual must be applied to the test set on every reproductive cycle, so larger breeding pools take more time. After all the individuals in the pool have been tested, a new "generation" of individuals is produced for testing. Unlike the back-propagation algorithm, which propagates the error through the network many times in search of a lower mean squared error between the network's output and the actual output or answer, GRNN training patterns are presented to the network only once.
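The sketch below shows one way such a genetic algorithm could evolve per-input smoothing factors for a GRNN, with fitness defined as the test-set mean squared error E above. The population size, mutation scheme, and toy data are assumptions for illustration, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def grnn_predict(x, X_train, y_train, s):
    """GRNN output with per-input smoothing factors s (one per input).

    Each input dimension is scaled by its factor before the distance is
    computed, so a larger factor makes that input more influential.
    """
    d2 = np.sum((s * (X_train - x)) ** 2, axis=1)
    a = np.exp(-0.5 * d2)
    return a @ y_train / np.sum(a)

def mse(s, X_train, y_train, X_test, y_test):
    """E = (1/p) * sum (t_i - o_i)^2 over the p test patterns."""
    preds = np.array([grnn_predict(x, X_train, y_train, s) for x in X_test])
    return np.mean((preds - y_test) ** 2)

# Toy data: noisy sine of input 0; input 1 is deliberately irrelevant.
X = rng.uniform(0, 2 * np.pi, size=(40, 2))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=40)
X_train, X_test = X[:30], X[30:]
y_train, y_test = y[:30], y[30:]

POP, GENS, N_IN = 20, 30, X.shape[1]
pop = rng.uniform(0.05, 2.0, size=(POP, N_IN))  # breeding pool

for gen in range(GENS):
    # Fitness: every individual's network is applied to the test set.
    fitness = np.array([mse(ind, X_train, y_train, X_test, y_test)
                        for ind in pop])
    order = np.argsort(fitness)              # lower MSE = fitter
    survivors = pop[order[:POP // 2]]        # survival of the fittest
    # Breed a new generation: crossover of two parents plus mutation.
    children = []
    while len(children) < POP - len(survivors):
        p1, p2 = survivors[rng.integers(len(survivors), size=2)]
        mask = rng.random(N_IN) < 0.5
        child = np.where(mask, p1, p2)
        child *= rng.normal(1.0, 0.1, size=N_IN)  # small mutation
        children.append(np.abs(child) + 1e-6)
    pop = np.vstack([survivors, np.array(children)])

best = pop[np.argmin([mse(ind, X_train, y_train, X_test, y_test)
                      for ind in pop])]
print("evolved smoothing factors:", best)
```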

The input smoothing factor is an adjustment applied to the overall smoothing factor to provide a new value for each input. At the end of training, the individual smoothing factors may be used as a sensitivity analysis tool: the larger the factor for a given input, the more important that input is to the model, at least as far as the test set is concerned. Inputs with low smoothing factors are candidates for removal in a later trial.
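Continuing the sketch above (with the assumed variable best holding the evolved factors), such a sensitivity readout is a few lines:

```python
# Larger evolved factor -> the input contributes more to the distance,
# hence is more important; small factors flag removal candidates.
for i in np.argsort(best)[::-1]:
    print(f"input {i}: smoothing factor {best[i]:.3f}")
```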

Individual smoothing factors are unique to each network. The numbers are relative to each other within a given network, and they cannot be used to compare inputs from different networks.

If the number of input, output, or hidden neurons is changed, however, the network must be retrained. This may occur when more training patterns are added, because GRNNs require one hidden neuron for each training pattern.
