More specifically, A1(x) = G(x; 0, 0.02), A2(x) = G(x; 0.25, 0.02), A3(x) = G(x; 0.50, 0.02), A4(x) = G(x; 0.75, 0.02), and A5(x) = G(x; 1.00, 0.02). First, the experimental data are associated with the corresponding elements of the unit interval so that the performance index is minimized; these results are given in tabular form. Learning of the mapping is completed using the standard gradient-based technique. The confidence factors attached to the original data set are not involved in the learning procedure, and all data are treated uniformly. The vector of network parameters to be adjusted, param, comprises the connections between the hidden and output layers as well as the parameters of the sigmoid functions (spreads and modal values). As the learning was highly sensitive to changes in the latter, the learning rate used in training was kept at a low level; the experiments were completed for α = 0.0005. The values of Q for several sizes of the hidden layer are visualized in Fig. 6.16.
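As a point of reference, the five contexts are easy to reproduce numerically. The short sketch below assumes the common Gaussian form G(x; m, σ) = exp(−(x − m)²/(2σ²)) with the second argument taken as the spread σ; the text may parameterize the width differently (e.g., as a variance), and the function name and evaluation grid are ours, introduced only for illustration.

```python
import numpy as np

def gaussian_mf(x, m, sigma):
    """Gaussian membership function G(x; m, sigma): modal value m, spread sigma."""
    return np.exp(-((x - m) ** 2) / (2.0 * sigma ** 2))

# The five contexts A1..A5 positioned over the unit interval
modal_values = [0.0, 0.25, 0.50, 0.75, 1.00]
x = np.linspace(0.0, 1.0, 101)                                   # evaluation grid (illustrative)
A = np.stack([gaussian_mf(x, m, 0.02) for m in modal_values])    # shape (5, 101)
```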
As is clearly visible, a significant improvement occurs at n = 5; this case is subsequently discussed in detail. To visualize the character of the learning, Fig. 6.17 shows the changes of Q over the first 400 learning epochs.
The form of the nonlinear mapping produced by the network is illustrated in Fig. 6.18, shown against the experimental data. Subsequently, Fig. 6.19 displays the membership functions resulting from the process of context adaptation; to ease comparison, the original membership functions defined in the unit interval are included as well, Fig. 6.19(ii).
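Although the text only summarizes the gradient-based learning, its overall shape can be sketched as follows. The sketch assumes a single-input network whose hidden sigmoid units are parameterized by modal values and spreads, y = Σ_i w_i σ((x − m_i)/s_i), trained to minimize Q = Σ_k (y_k − t_k)² with the learning rate reported above (α = 0.0005); the function names, the initialization, and the exact parameterization of the sigmoids are our assumptions, not the book's.

```python
import numpy as np

def sigmoid(x, m, s):
    # Sigmoid hidden unit with modal value m and spread s (assumed parameterization)
    return 1.0 / (1.0 + np.exp(-(x - m) / s))

def train(x, t, n_hidden=5, alpha=0.0005, epochs=400, seed=0):
    """Plain gradient descent on Q = sum_k (y_k - t_k)^2 for
    y = sum_i w_i * sigmoid(x; m_i, s_i); all settings are illustrative."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=n_hidden)          # hidden-to-output connections
    m = np.linspace(x.min(), x.max(), n_hidden)       # modal values of the sigmoids
    s = np.full(n_hidden, 0.1)                        # spreads of the sigmoids
    for _ in range(epochs):
        h = sigmoid(x[:, None], m, s)                 # hidden activations, shape (N, n_hidden)
        y = h @ w                                     # network outputs, shape (N,)
        e = y - t                                     # residuals
        dh = h * (1.0 - h)                            # sigmoid derivative factor
        grad_w = 2.0 * e @ h
        grad_m = 2.0 * (e[:, None] * w * dh * (-1.0 / s)).sum(axis=0)
        grad_s = 2.0 * (e[:, None] * w * dh * (-(x[:, None] - m) / s ** 2)).sum(axis=0)
        w -= alpha * grad_w
        m -= alpha * grad_m
        s -= alpha * grad_s
    return w, m, s

# Usage (with x_data and targets as one-dimensional arrays):
# w, m, s = train(x_data, targets, n_hidden=5)
```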
6.7. Knowledge-based learning schemes

6.7.1. Metalearning and fuzzy sets

Even though guided by detailed gradient-based formulas, the learning of neural networks can be enhanced by making use of domain knowledge acquired through intense experimentation (learning). By running a series of successful and unsuccessful learning sessions, one can gain qualitative knowledge of what an efficient learning scenario should look like. In particular, essential qualitative associations can be established by linking the performance of the learning process with the parameters of the scheme being utilized. Two detailed examples illustrate this point:
The pertinent rules are included in Table 6.1. We may also refer to them as metalearning rules, as they capture the way in which learning should be carried out.
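Table 6.1 itself is not reproduced here, but the monotonic character of such metalearning rules can be caricatured by a simple crisp update of the learning rate driven by the most recent change of the performance index Q. The function name and the numeric factors below are purely illustrative assumptions; a genuine implementation would use the fuzzy linguistic terms of the table rather than two crisp branches.

```python
def adapt_learning_rate(alpha, delta_q, shrink=0.7, grow=1.05):
    """Crisp caricature of the metalearning rules: when Q has just increased,
    cut the learning rate; when Q has decreased, increase it only conservatively."""
    if delta_q > 0:      # performance index went up: back off
        return alpha * shrink
    if delta_q < 0:      # performance index went down: cautious increase
        return alpha * grow
    return alpha         # no change in Q: leave the rate untouched
```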
The rules of Table 6.1 are fairly monotonic (yet not symmetric) and fully comply with our intuitive observations. In general, any increase in Q calls for some decrease of α; when Q decreases, the increases of α are made more conservative. The linguistic terms in the corresponding rules are defined in the space (universe) of changes of Q (antecedents) and a certain subset of [0, 1] (conclusions). Similarly, the BP learning scheme can be augmented by taking into account a so-called momentum term; the primary intent of this expansion is to avoid eventual oscillations, or reduce their amplitude, in the values of the minimized performance index. This makes the learning more stable yet adds one extra adjustable learning parameter to the update rule. Likewise, we can propose more detailed control rules governing the changes of the learning rate with respect to learning time and learning error; a sample of such rules is included below. The learning metarules can also be formulated at the level of some parameters of the network. The essence of the following approach is to modify the activation functions of the neurons in the network. Consider the sigmoid nonlinearity commonly encountered in many neural architectures. We assume that the steepness factor of the sigmoid function (γ) is modifiable. As the changes of the connections are evidently affected by this component (γ), we can easily set up the following metarules:
Summing up, two design issues should be underlined: