The nonparametric representation is indirect in the sense that any input datum is expressed through a collection of receptive fields (RBFs). The way in which these fields are matched with the data is not unique; it includes possibility, necessity, and compatibility computations. The nonparametric representation keeps the size of the network's input layer under control, as this size depends directly on the number of receptive fields partitioning the input space. Some illustrative simulation studies concern the classification of nonnumeric patterns situated in a two-dimensional feature space. Our objective is to analyze the behavior of the neural classifier for data (patterns) of different levels of granularity, as well as to investigate various topologies of the preprocessing layer and examine their ability to cope with uncertainty. The nonlinear classification boundary is given as a sine wave, x2 = 5 sin(x1). Both x1 and x2 are distributed in the [0, 5] interval. A pattern is assigned to class ω1 if x2 < 5 sin(x1) and to the second class, ω2, if x2 > 5 sin(x1). The elements of the partition space are defined as Gaussian-like membership functions (fuzzy relations) with a series of modal values distributed over the input space.
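The setting above can be sketched in a few lines of code. This is a minimal illustration, not the book's implementation: the number of receptive fields (3) and their spread (sigma = 1.0) are assumptions, since the text only states that the modal values form a series over [0, 5].

```python
import numpy as np

def gaussian_rf(x, modal, sigma=1.0):
    """Gaussian-like receptive field centred at the modal value."""
    return np.exp(-((x - modal) ** 2) / (2 * sigma ** 2))

# Assumed series of modal values partitioning the [0, 5] interval.
modal_values = np.linspace(0.0, 5.0, 3)   # 0.0, 2.5, 5.0

def boundary(x1):
    """Nonlinear classification boundary x2 = 5 sin(x1)."""
    return 5.0 * np.sin(x1)

def classify_point(x1, x2):
    """Crisp class of a numeric pattern: w1 below the sine wave, w2 above."""
    return "w1" if x2 < boundary(x1) else "w2"
```

Each receptive field attains membership 1 at its modal value and decays smoothly away from it, so a numeric input activates neighbouring fields to partial degrees.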
The nonnumeric character of the patterns is admitted in the form of sets, namely squares of width 2d. The parameter d is referred to as the granularity of the data: the higher the value of d, the lower the granularity of the patterns. The class membership of such a nonnumeric pattern is defined by computing the regions of the square located on the corresponding side of the classification boundary. More specifically, for a square A of width 2d, the class membership in ω1 is computed as

μω1 = (1 / (2d)²) ∫∫_A χ(x2 < 5 sin(x1)) dx1 dx2

while the class membership in ω2 is governed by the expression

μω2 = (1 / (2d)²) ∫∫_A χ(x2 > 5 sin(x1)) dx1 dx2

where χ denotes a characteristic function of the binary relation. In the series of experiments we consider the same training set as far as the centers of the patterns are concerned; only the granularity of the patterns is modified. An example of the nonnumeric patterns is shown in Fig. 6.5.
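The two area integrals above can be approximated numerically. The sketch below evaluates the characteristic function on a grid over the square; the grid resolution n is an assumption made purely for illustration.

```python
import numpy as np

def square_class_membership(c1, c2, d, n=200):
    """Membership (mu_w1, mu_w2) of a square pattern of width 2d
    centred at (c1, c2), as area fractions on each side of the
    boundary x2 = 5 sin(x1)."""
    xs = np.linspace(c1 - d, c1 + d, n)
    ys = np.linspace(c2 - d, c2 + d, n)
    X1, X2 = np.meshgrid(xs, ys)
    below = X2 < 5.0 * np.sin(X1)   # characteristic function chi
    mu_w1 = below.mean()            # fraction of the square's area in w1
    return mu_w1, 1.0 - mu_w1
```

A square lying entirely below the sine wave yields (1, 0); one straddling the boundary yields partial memberships in both classes, which is exactly how granular patterns acquire fuzzy class labels.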
Two architectures of feedforward neural networks with a single hidden layer are studied; the difference between them arises at the preprocessing layer: one network uses possibility encoding alone, while the other uses both possibility and necessity encoding.
Depending on the encoding, the input layer generates either 9 or 18 signals that are fed into the hidden layer. Similarly, the hidden layer, depending on the preprocessing layer, comprises 9 or 18 neurons. The average difference between the possibility and necessity values, computed in the context of the above Gaussian membership functions for patterns of different granularity, is an increasing function of d, Fig. 6.6. As d goes up, the gap between the corresponding possibility and necessity values widens.
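The widening gap can be verified directly. For a crisp interval-valued input [c - d, c + d], the possibility with respect to a receptive field is the supremum of the field over the interval and the necessity is its infimum; the sketch below assumes a Gaussian field with sigma = 1.0 and approximates the supremum/infimum on a grid.

```python
import numpy as np

def possibility_necessity(c, d, modal, sigma=1.0, n=500):
    """Possibility (sup) and necessity (inf) of the interval
    [c - d, c + d] matched against a Gaussian receptive field."""
    xs = np.linspace(c - d, c + d, n)
    a = np.exp(-((xs - modal) ** 2) / (2 * sigma ** 2))
    return a.max(), a.min()

# The possibility-necessity gap grows with the width parameter d.
for d in (0.0, 0.4, 1.0):
    poss, nec = possibility_necessity(c=2.0, d=d, modal=2.5)
    print(d, poss - nec)
```

At d = 0 the input is pointwise and the two measures coincide; as the square widens, possibility rises toward 1 while necessity drops, reproducing the trend of Fig. 6.6.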
The training of the networks with these two forms of the preprocessing layers is carried out for the data with d = 0.4. The performance of learning is monitored by the standard sum of squared errors (performance index) between the target class membership values and those produced by the network, Fig. 6.7.
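A minimal training loop of this kind can be sketched as follows. The one-hidden-layer network, the random toy data, and plain gradient descent are illustrative assumptions; the text specifies only that learning is monitored through the sum-of-squared-errors performance index.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy preprocessed inputs (9 signals, as in the possibility encoding)
# and target class memberships in w1; both are placeholders.
X = rng.uniform(size=(40, 9))
t = rng.uniform(size=(40, 1))

W1 = rng.normal(scale=0.1, size=(9, 9)); b1 = np.zeros(9)
W2 = rng.normal(scale=0.1, size=(9, 1)); b2 = np.zeros(1)

lr = 0.01
sse_history = []
for epoch in range(200):
    h = sigmoid(X @ W1 + b1)          # hidden layer (9 neurons)
    y = sigmoid(h @ W2 + b2)          # class membership output
    e = y - t
    sse_history.append(float((e ** 2).sum()))   # performance index
    # gradient descent on the sum of squared errors
    gy = 2 * e * y * (1 - y)
    gh = (gy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ gy; b2 -= lr * gy.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
```

The recorded `sse_history` plays the role of the learning curve reported in Fig. 6.7: a steadily decreasing performance index indicates successful training.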
The obtained connections between the input and output layer are summarized in Fig. 6.8.
We visualize the results of this training of the network in terms of the membership values, Figs. 6.9 and 6.10.
The testing of the neural network is carried out for the same training data - these patterns are the same in terms of their modal values but now exhibit a variable granularity level (d). Based on the results in Figs. 6.11 and 6.12, several observations are worth making:
The performance of the network with possibility encoding alone is noticeably weaker than that of the network with possibility and necessity encoding. This underlines the importance of the preprocessing layer.
6.6. Neural calibration of membership functions

Linguistic terms play an instrumental role in encoding both numerical and nonnumerical information prior to its further processing. It is obvious that linguistic terms (fuzzy sets) are not universal. When speaking about comfortable speed, we confine ourselves to a certain context and interpret this term accordingly. When the context changes, so does the meaning of the term. Nevertheless, the order of the terms forming the frame of cognition is retained. For instance, in such frames the order of the basic terms is preserved no matter how much the meaning attached to the terms tends to vary. The membership functions of the elements of
Copyright © CRC Press LLC