6.4.2. Variable processing resolution - fuzzy receptive fields

By defining the linguistic terms (modelling landmarks) and specifying their distribution along the universe of discourse, we can orient (focus) the main learning effort of the network. To clarify this idea, let us refer to Figure 6.1. The partition of the variable through these linguistic terms gives rise to the fuzzy receptive fields of the network.
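As a rough illustration of such a partition, consider the following minimal sketch in Python; the normalized range [0, 1], the number of terms, and the function name triangular are illustrative choices, not taken from the text. Shifting or densifying the modal values is precisely what focuses the network's learning effort on a region of interest.

```python
import numpy as np

def triangular(x, a, m, b):
    """Triangular membership function with support [a, b] and modal value m."""
    return np.maximum(0.0, np.minimum((x - a) / (m - a), (b - x) / (b - m)))

# universe of discourse [0, 1] covered by five linguistic terms;
# a nonuniform spacing of the modal values would focus learning
# on a chosen region of the variable
x = np.linspace(0.0, 1.0, 201)
modes = np.linspace(0.0, 1.0, 5)     # modal values (landmarks) of the terms
spread = modes[1] - modes[0]         # overlap with the neighbouring terms
terms = [triangular(x, m - spread, m, m + spread) for m in modes]
```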
6.5. Uncertainty representation in neural networks

The factor of uncertainty or imprecision can be quantified by exploiting uncertainty measures commonly used in the theory of fuzzy sets. The underlying rationale is to equip the internal format of information available to the network with some indicators describing how uncertain the given datum is. Considering possibility and necessity measures, this quantification is straightforward: once Poss(X, Ak) ≠ Nec(X, Ak), then X is regarded as uncertain (the notion of uncertainty is also context-sensitive and depends on Ak) (Dubois and Prade, 1988). For numerical data one always arrives at the equality of the two measures, which clearly points at the certainty of X. In general, the higher the gap δ between the possibility and necessity measures, Poss(X, Ak) = Nec(X, Ak) + δ, the higher the uncertainty level associated with X. The uncertainty gap attains its maximum for δ = 1. One can also consider the compatibility measure instead of the two used above; this provides us with more flexibility and discriminatory power, yet becomes computationally demanding.

The possibility and necessity measures reveal interesting relationships between uncertainty conveyed by X when processed in terms of A. Denote by λ the possibility computed with respect to A,

λ = Poss(X, A).

Additionally, μ describes the complement of the necessity measure,

μ = 1 - Nec(X, A).

Here we are in a position to introduce the notions of conflict and ignorance as they emerge when X is studied in the context of A. By looking at the values of λ and μ, we identify three characteristic cases:

- λ + μ = 1: X behaves like precise (pointwise) information with respect to A; neither conflict nor ignorance occurs.
- λ + μ > 1: both A and its complement are substantially possible; X is too imprecise, and the excess reflects ignorance.
- λ + μ < 1: X matches neither A nor its complement to a sufficient degree; this shortage reflects conflict.
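To make the computation of the two measures concrete, here is a minimal sketch over a discretized universe of discourse, using the standard sup-min and inf-max definitions of possibility and necessity; the particular datum X and reference term A below are illustrative.

```python
import numpy as np

def triangular(x, a, m, b):
    return np.maximum(0.0, np.minimum((x - a) / (m - a), (b - x) / (b - m)))

x = np.linspace(0.0, 1.0, 201)
A = triangular(x, 0.25, 0.50, 0.75)     # reference term A
X = triangular(x, 0.20, 0.50, 0.80)     # an imprecise datum X

poss = np.max(np.minimum(X, A))         # Poss(X, A) = sup_x min(X(x), A(x))
nec = np.min(np.maximum(1.0 - X, A))    # Nec(X, A)  = inf_x max(1 - X(x), A(x))
lam, mu = poss, 1.0 - nec               # λ and the complement μ = 1 - Nec(X, A)

# a numeric datum (a singleton) makes both measures coincide
X_num = (np.abs(x - 0.5) < 1e-9).astype(float)
assert np.max(np.minimum(X_num, A)) == np.min(np.maximum(1.0 - X_num, A))
```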
Let us quantify the above statements through the relationships

λ + μ = 1 - α (conflict) and λ + μ = 1 + β (ignorance),

or equivalently

α = max(0, 1 - (λ + μ)), β = max(0, (λ + μ) - 1),

where α and β, quantified in [0, 1], are used in the evaluation of the level of conflict or ignorance, respectively. A convenient graphical illustration can be formed in terms of the so-called ignorance-conflict plane, Fig. 6.2. The higher the values of these indices (α or β), the higher the uncertainty (conflict or ignorance) associated with X. If X is a singleton (or it is perceived as such when processed in the specific context of A), the corresponding element in the plane moves along its diagonal.
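One possible reading of these relationships in code, a sketch consistent with the diagonal property above rather than the book's own listing:

```python
def conflict_ignorance(lam, mu):
    """Place a datum on the ignorance-conflict plane (Fig. 6.2)."""
    alpha = max(0.0, 1.0 - (lam + mu))   # conflict level:  λ + μ < 1
    beta = max(0.0, (lam + mu) - 1.0)    # ignorance level: λ + μ > 1
    return alpha, beta

print(conflict_ignorance(0.7, 0.3))   # singleton-like: (0.0, 0.0), on the diagonal
print(conflict_ignorance(0.9, 0.8))   # broad datum: ignorance β ≈ 0.7
print(conflict_ignorance(0.1, 0.2))   # poorly matching datum: conflict α ≈ 0.7
```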
The way of treating the linguistic terms makes a real difference between the architecture enhanced by the uncertainty representation layer, Fig. 6.3, and RBF neural networks. The latter have no provisions to deal with and quantify uncertainty. The forms of the membership functions (RBFs) are very much a secondary issue. In general, one can expect that the fuzzy sets used therein can exhibit a variety of forms (triangular, Gaussian, etc.), while RBFs are usually more homogeneous (e.g., all Gaussian). Furthermore, there are no specific restrictions on the number of RBFs used or on their distribution within the universe of discourse. For fuzzy sets, one usually confines this number to a maximum of 9 terms (more exactly, 7 ± 2); additionally, we should make sure that the fuzzy sets are kept distinct and thus retain their semantic identity.
In general, when processing data in the fuzzy set environment, we distinguish two main approaches: a parametric and a nonparametric data representation. The possibility-necessity mechanism quantifies uncertainty in a nonparametric way. The complementary parametric approach to uncertainty quantification concerns a direct representation of a fuzzy datum (Hathaway et al., 1996). This representation depends upon the form of the nonnumeric information one has to deal with. A list of commonly envisioned scenarios includes a number of interesting classes of membership functions; for details refer to Fig. 6.4.
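For instance, if all fuzzy data happen to be triangular, the parametric representation reduces each datum to three numbers. The sketch below shows the idea; the class name and field names are ours, not the notation of Hathaway et al.

```python
from dataclasses import dataclass

@dataclass
class TriangularDatum:
    """Parametric stand-in for a triangular fuzzy datum."""
    a: float   # lower bound of the support
    m: float   # modal value, where the membership equals 1
    b: float   # upper bound of the support

    def features(self):
        # three network inputs per variable suffice for this class
        return [self.a, self.m, self.b]

x1 = TriangularDatum(a=0.2, m=0.5, b=0.8)
print(x1.features())   # [0.2, 0.5, 0.8]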
The parametric characterization suitable for one form of fuzzy sets may not be suitable to describe fuzzy sets coming from some other class. To cope with all types of fuzzy sets, one could go for a direct quantization of the input space. This, however, is not feasible. Assume that we have admitted 100 discrete points, distributed uniformly across the universe of discourse, to complete its quantization. Then any fuzzy set, no matter what its membership function looks like, becomes represented as a 100-element vector. This gives rise to 100 nodes in the input layer of the network. Even for a few input variables, this leads to unacceptably large architectures of neural networks.
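The size argument can be checked in a few lines; the 100-point count is the one used in the text, while the example membership function and the number of variables are illustrative.

```python
import numpy as np

n_points = 100                        # uniform quantization of [0, 1]
grid = np.linspace(0.0, 1.0, n_points)

# any membership function, whatever its shape, collapses to 100 numbers
X_vec = np.maximum(0.0, 1.0 - np.abs(grid - 0.5) / 0.2)

n_variables = 4                       # even a few input variables...
print(n_variables * n_points)         # ...already yield 400 input nodes
```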