7.4.3. Interpretation of fuzzy neural networks

In contrast to the way standard neural networks are interpreted, the interpretation of FNNs is far easier and can be carried out in a more comprehensive and exhaustive fashion. This stems from the fact that FNNs are highly heterogeneous structures in which each processing element exhibits well-defined functional characteristics. In spite of the flexibility of the network at the parametric level, each element retains its underlying characteristics. More importantly, the interpretation of the network is not carried out in the standard input-output, black-box fashion but rather by gaining direct insight into the structure of the network. As an example, consider a so-called logic processor, an example of a three-layer FNN. The hidden layer is composed of AND neurons while the output layer is built up with the aid of OR neurons. The connections of the neurons obtained after learning are visualized in Fig. 7.16.
The following formulas are inferred directly from the structure (we have ignored the values of the connections; more explanation is given below) and can be regarded as two rules induced (conveyed) by the network. The black-box approach is not capable of revealing this type of dependency; in particular, the internal relationships are not tackled at all. The interpretation can be made even more transparent by dropping the least relevant connections of the neurons. Depending upon the form of the neuron, two reduction schemes apply: connections of AND neurons with values close to 1 can be dropped, as can connections of OR neurons with values close to 0 (see the sketch below).
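To make the two neuron types and the pruning rule concrete, here is a minimal sketch (not from the original text) realizing the t-norm as the product and the s-norm as the probabilistic sum, as done later in Section 7.5.1. The function names and the threshold values lam and mu are illustrative assumptions.

```python
import numpy as np

def t_norm(a, b):          # product t-norm
    return a * b

def s_norm(a, b):          # probabilistic sum s-norm
    return a + b - a * b

def and_neuron(x, w):
    # AND neuron: y = T_i (w_i s x_i); a connection w_i = 1 gives
    # (1 s x_i) = 1, the neutral element of the t-norm, so that
    # input drops out of the aggregation.
    y = 1.0
    for xi, wi in zip(x, w):
        y = t_norm(y, s_norm(wi, xi))
    return y

def or_neuron(x, w):
    # OR neuron: y = S_i (w_i t x_i); a connection w_i = 0 gives
    # (0 t x_i) = 0, the neutral element of the s-norm.
    y = 0.0
    for xi, wi in zip(x, w):
        y = s_norm(y, t_norm(wi, xi))
    return y

def relevant_inputs(w, neuron, lam=0.9, mu=0.1):
    # Pruning: drop AND connections above lam and OR connections
    # below mu (lam and mu are assumed threshold values).
    w = np.asarray(w)
    keep = w < lam if neuron == "AND" else w > mu
    return np.flatnonzero(keep)

# An AND neuron with w = [0.05, 0.95] essentially passes x1 through:
print(and_neuron([0.7, 0.3], [0.05, 0.95]))   # ~0.69, close to x1
print(relevant_inputs([0.05, 0.95], "AND"))   # [0] -- x2 is pruned
```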
These two simple guidelines originate from the boundary conditions of the t- and s-norms. We get

1 s x = 1 and 0 t x = 0,
meaning that the value of the second argument (x) becomes completely irrelevant. The reduction procedure requires two threshold values, say μ and λ; depending on the form of the neuron, the connections below or above these thresholds are pruned. In general, by changing the thresholds the pruning can be made more radical. For detailed studies on the selection of these parameters, refer to Pedrycz (1995).

7.5. Case studies

In this section we highlight a limited number of applications of FNNs. In particular, we elaborate on the role of the transparency of the network's architecture with respect to its ability to capture domain knowledge as well as carry out all necessary learning activities. It is also important to underline that owing to the prudent representation of domain knowledge, the learning itself does not start from scratch.

7.5.1. Logic filtering

One of the most popular methods of signal filtering relies on some form of averaging of the signal, Fig. 7.17.
Considering that x(k) is situated in the unit interval, we propose a discrete-time, logic-based filter of the form

y(k+1) = [(1 - w) t x(k+1)] s [w t y(k)]

where w ∈ [0, 1] is a weight (filtering) factor.
One can rewrite the filter expression in a vector format

y(k+1) = OR(z; v)

where now z = [x(k+1), y(k)] and v = [1-w, w]. The weight factor w is used to achieve the required filtering properties. Note that high values of w promote strong filtering (y(k+1) depends heavily on the previous output of the filter). For w ≈ 0 we get y(k+1) ≈ x(k+1) and the filtering effect is very limited or vanishes entirely. Let us realize the t-norm and s-norm as the product and the probabilistic sum, respectively. Consider that the input x(k) is a mixture of a constant signal with a superimposed noise component z(k), where z(k) comes from a random variable with a uniform distribution over [0, 1]. The role of the filtering parameter w is clearly visible in Fig. 7.18: higher values of w produce a more profound averaging effect.
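A minimal simulation sketch of this filter follows. The constant level (0.5), the noise amplitude, the initial condition, and the sequence length are assumptions made for illustration, since the original mixture formula is not reproduced above.

```python
import numpy as np

def t_norm(a, b):          # product t-norm
    return a * b

def s_norm(a, b):          # probabilistic sum s-norm
    return a + b - a * b

def logic_filter(x, w):
    # y(k+1) = [(1-w) t x(k+1)] s [w t y(k)], all signals in [0, 1]
    y = np.empty_like(x)
    y[0] = x[0]            # assumed initial condition
    for k in range(len(x) - 1):
        y[k + 1] = s_norm(t_norm(1 - w, x[k + 1]), t_norm(w, y[k]))
    return y

rng = np.random.default_rng(1)
n = 200
z = rng.uniform(0.0, 1.0, n)                      # uniform noise in [0, 1]
x = np.clip(0.5 + 0.2 * (z - 0.5), 0.0, 1.0)      # assumed noisy constant signal

for w in (0.1, 0.5, 0.9):
    y = logic_filter(x, w)
    print(f"w = {w}: std(x) = {x.std():.3f} -> std(y) = {y.std():.3f}")
```

Running the sketch shows the output variance shrinking as w grows, mirroring the averaging effect reported for Fig. 7.18.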
7.5.2. Minimization of multiple-output two-valued combinational systems

The problem of minimization of Boolean functions has been extensively studied in the literature on digital systems. It constitutes, in fact, a cornerstone of the design process of combinational as well as sequential systems. For a small number of independent variables (say, up to 5-6), this optimization can be carried out manually, usually with Karnaugh maps (K-maps). The problem becomes more challenging when one wants to minimize several Boolean functions at the same time. The minimization should lead to the most compact circuit, one that uses minimal hardware. To illustrate the way in which this task can be handled through computing with fuzzy neural networks, let us consider several functions given in the canonical format of a sum of minterms. We start with a simple problem with a single output only.

A. We consider the well-known exclusive-OR (XOR) problem, commonly viewed as a testbed for analyzing various learning algorithms for neural networks. In a simple scenario the training set consists of four two-dimensional patterns located at the vertices of the unit square, Fig. 7.19.
The logical expression describing these patterns reads as

y = (x1 t x̄2) s (x̄1 t x2)

The training is completed using a network with two hidden AND units, see Fig. 7.20.
The learning rate is 0.15. The standard MSE performance index recorded over the course of learning is shown in Fig. 7.21.
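The sketch below reproduces this experiment under stated assumptions: it uses the product/probabilistic-sum realization of the connectives and the learning rate of 0.15 from the text, but substitutes a simple finite-difference gradient of the MSE for the original analytic learning scheme, feeds the network both direct and complemented inputs (a common logic-processor convention), and picks the initialization and iteration count arbitrarily. As with any XOR learning task, some initializations may settle in a local minimum.

```python
import numpy as np

def t_norm(a, b): return a * b               # product t-norm
def s_norm(a, b): return a + b - a * b       # probabilistic sum s-norm

def forward(x, W, v):
    # Two hidden AND neurons, one OR output neuron; inputs are
    # extended with their complements: [x1, x2, 1-x1, 1-x2].
    z = np.array([x[0], x[1], 1 - x[0], 1 - x[1]])
    h = np.array([np.prod(s_norm(W[j], z)) for j in range(2)])  # AND units
    return 1 - np.prod(1 - v * h)                               # OR unit

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([0., 1., 1., 0.])               # XOR targets

def mse(p):
    W, v = p[:8].reshape(2, 4), p[8:]
    return np.mean([(forward(x, W, v) - t) ** 2 for x, t in zip(X, d)])

rng = np.random.default_rng(2)
p = rng.uniform(0.3, 0.7, 10)                # 8 AND + 2 OR connections
lr, eps = 0.15, 1e-5                         # learning rate from the text
for _ in range(3000):
    g = np.zeros_like(p)
    for i in range(len(p)):                  # finite-difference gradient
        dp = np.zeros_like(p); dp[i] = eps
        g[i] = (mse(p + dp) - mse(p - dp)) / (2 * eps)
    p = np.clip(p - lr * g, 0.0, 1.0)        # keep connections in [0, 1]

W, v = p[:8].reshape(2, 4), p[8:]
print("final MSE:", round(mse(p), 4))
print("outputs:", [round(forward(x, W, v), 3) for x in X])
```

After training, AND connections near 1 and OR connections near 0 can be pruned as in Section 7.4.3, ideally leaving the two minterms of the XOR expression.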