7.1.3.1. Representing inhibitory information

As the coding range commonly encountered in fuzzy sets is the unit interval, the inhibitory effect to be conveyed by some variables can be achieved by including their complements rather than the direct variables themselves, say

OR neuron: y = OR(x̃; w) = S_{i=1}^{n} (w_i t x̃_i)

AND neuron: y = AND(x̃; w) = T_{i=1}^{n} (w_i s x̃_i)

where each x̃_i stands for either the direct input x_i or its complement x̄_i = 1 − x_i, while t and s denote a t-norm and an s-norm, respectively. The reader familiar with two-valued digital systems and their design can easily recognize that any OR neuron acts as a generalized maxterm (Schneeweiss, 1989) summarizing the x_i and their complements, whereas the AND neurons can be viewed as generalizations of the minterms (product terms) encountered in digital circuits. Symbolically, the complemented variable (input) is denoted by a small dot, as visualized in Fig. 7.4 (i).
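To make the constructs concrete, the following is a minimal sketch in Python, assuming the product t-norm and the probabilistic sum s-norm (the pair also used in Figs. 7.6 and 7.7); the function names and numerical values are illustrative, not taken from the text.

```python
def t_norm(a, b):
    """Product t-norm."""
    return a * b

def s_norm(a, b):
    """Probabilistic sum s-norm: a s b = a + b - ab."""
    return a + b - a * b

def or_neuron(x, w):
    """OR neuron: y = S_{i=1..n} (w_i t x_i)."""
    y = 0.0                                  # neutral element of the s-norm
    for xi, wi in zip(x, w):
        y = s_norm(y, t_norm(wi, xi))
    return y

def and_neuron(x, w):
    """AND neuron: y = T_{i=1..n} (w_i s x_i)."""
    y = 1.0                                  # neutral element of the t-norm
    for xi, wi in zip(x, w):
        y = t_norm(y, s_norm(wi, xi))
    return y

# Inhibitory variables enter through their complements, 1 - x_i.
x = [0.7, 0.2, 0.9]
x_tilde = [x[0], 1.0 - x[1], x[2]]           # the second input acts inhibitorily
w = [0.8, 0.5, 1.0]
print(or_neuron(x_tilde, w))
print(and_neuron(x_tilde, w))
```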
There is also another way of representing inhibitory information in the neuron (or the network). Instead of combining all the inputs (both inhibitory and excitatory) in a single neuron, we split these inputs and expand the architecture by adding a few extra neurons. The concept is illustrated in Fig. 7.4 (ii). The inhibitory inputs are first aggregated (in complemented form) by the second neuron. The outputs of the two neurons are combined AND-wise by the third neuron situated in the output layer. Consider that all the inhibitory inputs are equal to 1. Then the output z2 is equal to 0. Assuming that the connections of the AND neuron are equal to zero, this draws the value of the output (y) down to zero.

7.1.3.2. Computational enhancements of the neurons

Despite the well-defined semantics of these neurons, the main concern one may eventually raise about these constructs is of a numerical nature. Once the connections (weights) are set (after learning), each neuron realizes a mapping into (rather than onto) the unit hypercube; that is, the values of the output y taken over all possible inputs cover a subset of the unit interval but not necessarily the entire [0,1]. More specifically, for the OR neuron the values of y are included in the interval

[0, S_{i=1}^{n} w_i]

which falls short of [0,1] unless at least one connection w_i is equal to 1. This observed shortcoming can be alleviated by augmenting the neuron with a nonlinear element placed in series with the logical component, Fig. 7.5; see the numerical sketch below.
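The "into" character of the mapping can be checked numerically. The sketch below, again assuming the product/probabilistic-sum pair of triangular norms, sweeps a dense grid of inputs and confirms that the OR neuron's output never exceeds the s-norm of its connections.

```python
from itertools import product as cartesian

def t_norm(a, b):
    return a * b

def s_norm(a, b):
    return a + b - a * b

def or_neuron(x, w):
    y = 0.0
    for xi, wi in zip(x, w):
        y = s_norm(y, t_norm(wi, xi))
    return y

w = [0.6, 0.4]
grid = [i / 20.0 for i in range(21)]
outputs = [or_neuron(x, w) for x in cartesian(grid, repeat=2)]

bound = 0.0
for wi in w:
    bound = s_norm(bound, wi)    # s-norm of all the connections

print(max(outputs), bound)       # both equal 0.76: the range is [0, 0.76], not [0, 1]
```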
The neurons obtained in this manner are formalized accordingly as

y = Ψ(OR(x; w)) or y = Ψ(AND(x; w))

where Ψ: [0,1] → [0,1] is a nonlinear monotonic mapping. In general, we can even introduce mappings whose monotonicity is restricted to some regions of the unit interval. A useful two-parametric family of sigmoidal nonlinearities is specified in the form

Ψ(u) = 1 / (1 + exp(−σ(u − m))), u, m ∈ [0,1], σ ∈ R.

By adjusting the parameters of the function (that is, m and σ), various forms of the nonlinear characteristics of the element can easily be obtained. In particular, the sign of σ determines either an increasing or a decreasing type of characteristics of the obtained neuron, while the second parameter (m) shifts the entire characteristics along the unit interval. The incorporation of this nonlinearity changes the numerical characteristics of the neuron; however, its essential logical behavior is sustained; refer to Figs. 7.6 and 7.7, which summarize some of the static input-output relationships encountered there (with the triangular norms set up as the product and probabilistic sum).
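A minimal sketch of this nonlinear element follows; the logistic parameterization used here is an assumption consistent with the stated roles of m (shift along the unit interval) and σ (increasing versus decreasing type).

```python
import math

def psi(u, m, sigma):
    """Nonlinear element placed in series with the logical part of the neuron."""
    return 1.0 / (1.0 + math.exp(-sigma * (u - m)))

u = 0.35                           # output of the logical (OR/AND) part
print(psi(u, m=0.5, sigma=8.0))    # sigma > 0: increasing characteristics
print(psi(u, m=0.5, sigma=-8.0))   # sigma < 0: decreasing characteristics
print(psi(u, m=0.2, sigma=8.0))    # m shifts the characteristics along [0, 1]
```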
It is also of interest to mention that the nonlinearities of the logic neurons can be treated as efficient models of linguistic modifiers (Yager, 1983). Since its very inception, the issue of linguistic modifiers (hedges) in fuzzy computations has been studied in various contexts (IEEE Trans. on Neural Networks, 1992; Kacprzyk, 1983, 1985, 1986). The calculus of these objects has also been developed in different ways, cf. Yager (1983). In the setting of fuzzy neurocomputations, the nonlinear element used in the discussed neurons can be viewed as a linguistic modifier defined over [0,1]. Depending upon the specific value of the parameter σ, we are looking at modifiers of the type at least (realized as a monotonically increasing function, σ > 0) or at most (a monotonically decreasing function, σ < 0). The necessary calibration of the modifier based on the provided data is accomplished through the learning of the neuron, as clarified before; this, in particular, pertains to the parameters of this nonlinearity (m and σ). One can model some other linguistic modifiers in the same way; Fig. 7.8 includes some additional examples of such quantifiers. A sketch of such a calibration follows.
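The following is a minimal sketch of calibrating the modifier parameters (m, σ) from input-output data; the gradient-descent rule, the learning rate, and the sample data are assumptions for illustration, not the learning scheme of the text.

```python
import math

def psi(u, m, sigma):
    return 1.0 / (1.0 + math.exp(-sigma * (u - m)))

def fit_modifier(data, m=0.5, sigma=1.0, lr=0.5, epochs=2000):
    """Fit (m, sigma) to (u, target) pairs by minimizing squared error."""
    for _ in range(epochs):
        for u, t in data:
            y = psi(u, m, sigma)
            e = y - t                    # error on this sample
            dy = y * (1.0 - y)           # derivative of the logistic
            g_sigma = e * dy * (u - m)   # gradient w.r.t. sigma
            g_m = e * dy * (-sigma)      # gradient w.r.t. m
            sigma -= lr * g_sigma
            m -= lr * g_m
            m = min(max(m, 0.0), 1.0)    # keep the shift in the unit interval
    return m, sigma

# Targets shaped like an "at least 0.6" modifier; the fit yields sigma > 0.
data = [(u / 10.0, 1.0 if u >= 6 else 0.0) for u in range(11)]
print(fit_modifier(data))
```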
In a nutshell, the use of the modifier produces a weightless neuron in which all the connections are equal, w_i = w, i = 1, 2, …, n. Hence the flexibility of the neuron resides, to a significant extent, within the parameters of the modifier.