Interestingly enough, these two operations were introduced by J. Łukasiewicz in his model of three-valued and many-valued logics; while being evident extensions of the model of two-valued logic, they were still somewhat conservative in retaining the law of the excluded middle and the law of contradiction, the two fundamental principles of two-valued logic.

As far as the remaining set-theoretic properties are concerned, both commutativity and associativity of the operators defined through triangular norms hold owing to the definition itself. The set requirement of idempotency, expressed in terms of t- and s-norms, states that

$$A(x) \, \mathrm{t} \, A(x) = A(x)$$

and

$$A(x) \, \mathrm{s} \, A(x) = A(x)$$

As associativity holds, we get

$$A(x) \, \mathrm{t} \, A(x) \, \mathrm{t} \, A(x) = A(x), \qquad A(x) \, \mathrm{s} \, A(x) \, \mathrm{s} \, A(x) = A(x)$$

In general,

$$\underbrace{A(x) \, \mathrm{t} \, A(x) \, \mathrm{t} \cdots \mathrm{t} \, A(x)}_{n\ \text{times}} = A(x), \qquad \underbrace{A(x) \, \mathrm{s} \, A(x) \, \mathrm{s} \cdots \mathrm{s} \, A(x)}_{n\ \text{times}} = A(x)$$

In other words, no matter how many identical fuzzy sets are combined, the result is not affected by this number. The idempotency property, as outlined above, does not hold for all triangular norms. On the contrary, the only idempotent triangular norms are the minimum (t-norm) and maximum (s-norm). It should also be noted that iterating a t-norm yields a decreasing sequence of membership values as the number of arguments grows,

$$\underbrace{A(x) \, \mathrm{t} \cdots \mathrm{t} \, A(x)}_{n+1\ \text{times}} \le \underbrace{A(x) \, \mathrm{t} \cdots \mathrm{t} \, A(x)}_{n\ \text{times}} \le A(x)$$

The s-norms give rise to an increasing sequence of membership values, meaning that

$$\underbrace{A(x) \, \mathrm{s} \cdots \mathrm{s} \, A(x)}_{n+1\ \text{times}} \ge \underbrace{A(x) \, \mathrm{s} \cdots \mathrm{s} \, A(x)}_{n\ \text{times}} \ge A(x)$$
It should be observed that, in general, the distributivity property is not satisfied by triangular norms. An exception arises for the pair of min and max norms used to model intersection and union, respectively; see the sketch below.
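To make these properties tangible, here is a minimal Python sketch (the helper names `t_min`, `t_prod`, `s_prob`, and `iterate` are ours, introduced purely for illustration). Iterating the minimum leaves the membership value unchanged (idempotency), while the product t-norm produces a decreasing sequence and the probabilistic-sum s-norm an increasing one; a final assertion checks distributivity for the min/max pair.

```python
from functools import reduce

# Illustrative norms (names are ours, not from the text)
t_min  = lambda a, b: min(a, b)      # idempotent t-norm
t_prod = lambda a, b: a * b          # product t-norm (not idempotent)
s_prob = lambda a, b: a + b - a * b  # probabilistic-sum s-norm (not idempotent)

def iterate(norm, u, n):
    """Combine n identical membership values u with the given norm."""
    return reduce(norm, [u] * n)

u = 0.8
for n in (1, 2, 3, 5, 10):
    print(n,
          iterate(t_min, u, n),              # stays at 0.8 for every n
          round(iterate(t_prod, u, n), 4),   # decreases: 0.8, 0.64, 0.512, ...
          round(iterate(s_prob, u, n), 4))   # increases toward 1

# Distributivity holds for the min/max pair:
a, b, c = 0.3, 0.6, 0.9
assert max(a, min(b, c)) == min(max(a, b), max(a, c))
```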

3.10. Information-based characteristics of fuzzy sets

Owing to their continuous membership functions, fuzzy sets require a more prudent characterization than their Boolean counterparts. In this section we discuss the classic information-oriented measures describing the information content of fuzzy sets; here the entropy and energy measures of fuzziness are of primary importance. We also discuss the issue of fuzzy set specificity.

3.10.1. Entropy measure of fuzziness

To briefly recall the notion of entropy and explain its meaning, let us consider an experiment with a finite number of outcomes x1, x2, …, xn, which occur with some probabilities p1, p2, …, pn; obviously one has

$$\sum_{i=1}^{n} p_i = 1, \qquad p_i \ge 0$$

The notion of entropy, as originally introduced by Shannon and Weaver (1949), reads as

$$H(x_1, x_2, \ldots, x_n) = -\sum_{i=1}^{n} p_i \log_2 p_i$$
To get to the very essence of this notion, let us analyze three highly illustrative situations:

a.  n=2. The probabilities of the two outcomes are equal to p and 1-p, respectively. Hence

$$H = -p \log_2 p - (1-p) \log_2 (1-p)$$

Now let p=1/2, meaning that x1 and x2 are equiprobable. This implies that H(x1, x2) = -1/2 log2(1/2) - 1/2 log2(1/2) = 1 [bit]. Moreover, the entropy attains its maximum at this particular value of probability.

b.  As a straightforward generalization of the two-outcome experiment, one can verify that the expression H(x1, x2, …, xn) attains its maximum when equal probabilities are assigned to the results of the experiment,

$$H_{\max} = \log_2 n \quad \text{for } p_1 = p_2 = \cdots = p_n = \frac{1}{n}$$
c.  If one of the probabilities equals one, then the entropy equals zero.

Here we define $p_i \log_2 p_i = 0$ if $p_i = 0$, extending $x \log_2 x$ to the origin by continuity. As clearly revealed by the above observations, entropy quantifies the uncertainty that stems from the lack of predictability of the results of the experiment, caused by its probabilistic nature. This uncertainty vanishes if a single outcome occurs (with probability one) and attains its maximal value if all the outcomes are equiprobable (occur with the same probability).
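The three situations are easy to reproduce numerically. Below is a minimal Python sketch (the function name `entropy` is ours) implementing the Shannon entropy with the 0·log 0 = 0 convention:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a finite probability vector p,
    using the convention p_i * log2(p_i) = 0 when p_i = 0."""
    assert abs(sum(p) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(entropy([0.5, 0.5]))       # case (a): 1.0 bit, the maximum for n = 2
print(entropy([0.25] * 4))       # case (b): log2(4) = 2.0 bits for n = 4
print(entropy([1.0, 0.0, 0.0]))  # case (c): 0.0, no uncertainty at all
```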

Referring to the original entropy definition, we can express the entropy as the expected value of the function log2(1/pi), say

$$H = \sum_{i=1}^{n} p_i \log_2 \frac{1}{p_i} = E\left[\log_2 \frac{1}{p}\right]$$
The definition easily generalizes to the continuous case; here the sum is replaced by an integral, with p being the probability density function,

$$H = -\int_{\mathbf{X}} p(x) \log_2 p(x) \, dx$$

where X is a random variable defined over the universe X and described by the probability density function p(x).
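As a quick numerical sanity check (a sketch only; the helper name `diff_entropy` is ours), a crude Riemann sum recovers the well-known differential entropy of the uniform density on [a, b], namely log2(b - a):

```python
import math

def diff_entropy(p, a, b, steps=100_000):
    """Differential entropy (in bits) of a density p on [a, b], via a Riemann sum."""
    dx = (b - a) / steps
    h = 0.0
    for k in range(steps):
        x = a + (k + 0.5) * dx
        px = p(x)
        if px > 0:
            h -= px * math.log2(px) * dx
    return h

# Uniform density on [0, 4]: the differential entropy is log2(4) = 2 bits.
print(diff_entropy(lambda x: 0.25, 0.0, 4.0))  # ~2.0
```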

Another useful generalization of the entropy function pertains to the so-called weighted entropy

$$H_w = -\sum_{i=1}^{n} w_i p_i \log_2 p_i$$

where all the weight factors are greater than zero, wi > 0. In particular, assuming that

$$w_i = \frac{p_i}{\log_2 (1/p_i)}$$

the weighted entropy is expressed as a sum of squared probabilities,

$$H_w = \sum_{i=1}^{n} p_i^2$$
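The reduction is immediate to verify in code. Here is a short sketch under the definition above (again, all names are ours); the special weights collapse the weighted entropy to the sum of squared probabilities:

```python
import math

def weighted_entropy(p, w):
    """Weighted entropy: -sum(w_i * p_i * log2(p_i)), skipping zero probabilities."""
    return -sum(wi * pi * math.log2(pi) for pi, wi in zip(p, w) if pi > 0)

p = [0.5, 0.3, 0.2]
print(weighted_entropy(p, [1.0] * len(p)))  # unit weights: plain Shannon entropy

# Special weights w_i = p_i / log2(1/p_i) reduce H_w to sum of p_i^2.
w = [pi / math.log2(1 / pi) for pi in p]
print(weighted_entropy(p, w), sum(pi ** 2 for pi in p))  # both ~0.38
```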

In what follows, we confine ourselves to a finite universe of discourse X = {x1, x2, …, xn}. The notion of entropy measures of fuzziness was introduced by De Luca and Termini (1972, 1974); further generalizations and refinements can be found in Knopfmacher (1975), Trillas and Riera (1978), and Czogala et al. (1982). We start by defining a functional h: [0,1] → [0,1] with the following properties (Ebanks, 1983):

(i)  sharpness: h(A(xi)) equals zero if and only if the membership value A(xi) takes on the value 0 or 1 (complete exclusion or complete membership)
(ii)  maximality: h(A(xi)) attains its maximal value if and only if A(xi) equals 1/2
(iii)  resolution: h(A(xi)) is greater than or equal to h(A*(xi)), where A* denotes any sharpened version of A, meaning that

$$A^*(x_i) \le A(x_i) \quad \text{if } A(x_i) \le 1/2$$

and

$$A^*(x_i) \ge A(x_i) \quad \text{if } A(x_i) \ge 1/2$$

(iv)  symmetry: h(A(xi)) = h(1 - A(xi)); h(u) is monotonically increasing over [0, 1/2] and decreasing over [1/2, 1]; moreover, h(1/2) = 1
(v)  valuation: h(max(A(xi), A(xj))) + h(min(A(xi), A(xj))) = h(A(xi)) + h(A(xj))

Several commonly encountered examples of the above functional include, see also Fig. 3.6,

  Shannon functional

$$h(u) = -u \log_2 u - (1-u) \log_2 (1-u)$$

  quadratic functional

$$h(u) = 4u(1-u)$$

  piecewise linear functional

$$h(u) = \begin{cases} 2u, & u \le 1/2 \\ 2(1-u), & u > 1/2 \end{cases}$$

Figure 3.6  Examples of entropy functionals h(u): (a) quadratic functional (b) piecewise linear functional

Then the entropy of A is defined as the sum of the values of the functional taken over the membership grades of A,

$$H(A) = \sum_{i=1}^{n} h(A(x_i))$$

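Putting the pieces together, here is a compact Python sketch (all function names are ours) implementing the three functionals listed above and the resulting entropy of a fuzzy set; it also illustrates the resolution property, namely that a sharpened version of A carries a lower entropy:

```python
import math

def shannon_h(u):
    """Shannon functional; equals 0 at u in {0, 1} by the 0*log 0 convention."""
    if u in (0.0, 1.0):
        return 0.0
    return -u * math.log2(u) - (1 - u) * math.log2(1 - u)

def quadratic_h(u):
    return 4 * u * (1 - u)

def piecewise_h(u):
    return 2 * u if u <= 0.5 else 2 * (1 - u)

def fuzziness_entropy(A, h=shannon_h):
    """Entropy of a fuzzy set: the sum of h over its membership grades."""
    return sum(h(u) for u in A)

A       = [0.1, 0.4, 0.5, 0.8, 1.0]  # membership grades over a finite universe
A_sharp = [0.0, 0.2, 0.5, 0.9, 1.0]  # a sharpened version of A

for h in (shannon_h, quadratic_h, piecewise_h):
    print(h.__name__, fuzziness_entropy(A, h), ">=", fuzziness_entropy(A_sharp, h))
# Each functional yields H(A) >= H(A_sharp), in line with the resolution
# property; a Boolean set (all grades 0 or 1) has zero entropy (sharpness).
```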
