

Let us consider a collection of generic fuzzy sets (linguistic terms) A1, A2, …, Ac defined in [0, 1]. As usual, we require that this collection satisfies some obvious requirements of semantic integrity. Essentially, we insist on unimodality and normality of the membership functions of the generic fuzzy sets. Moreover, we require that the Ai's do not fully overlap.

Given a data set of experimental outcomes (arising, e.g., as a result of expert polling), they can be arranged in the form of (c+1)-tuples, namely

(dk, μk1, μk2, …, μkc),   k = 1, 2, …, N,

where dk denotes a given element of the universe of discourse whose membership grades in the linguistic categories under discussion are equal to μk1, μk2, …, μkc, respectively. Our intent is to accommodate these data to the highest extent by adapting the context of the generic fuzzy sets. The essence of this process is to nonlinearly map the unit interval of the generic universe of discourse onto the current one, an interval [a, b] in R whose bounds are fixed by the extreme values of the data,

a = min(d1, d2, …, dN),   b = max(d1, d2, …, dN).

This adjusts the generic membership functions to the current situation conveyed by the available data; thus the context in which the frame of cognition was originally developed becomes modified (adapted) to the new environment. Owing to the inherently nonlinear character of this mapping (see Fig. 6.14), context adaptation expands some subregions of the unit interval while contracting others; this feature is simply not available through a straightforward linear mapping (linear scaling).


Figure 6.14  Examples of nonlinear mappings
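
To make the expand/contract effect concrete, the sketch below compares a plain linear scaling of [0, 1] onto [a, b] with a hand-picked monotone nonlinear map (a smoothstep polynomial). The interval [2, 18.1] is borrowed from the numerical example in Section 6.6.2; the particular nonlinear function is purely illustrative and is not the mapping constructed by the algorithm.

import numpy as np

# Illustration only: a hypothetical monotone nonlinear map of [0, 1] onto [a, b];
# the method learns its mapping from data, this one is hand-picked to show the effect.
a, b = 2.0, 18.1

def linear_map(x):
    # straightforward linear scaling of [0, 1] onto [a, b]
    return a + (b - a) * x

def nonlinear_map(x):
    # smoothstep distortion of [0, 1]: nondecreasing, with 0 -> a and 1 -> b
    return a + (b - a) * (3 * x**2 - 2 * x**3)

for lo, hi in [(0.0, 0.25), (0.25, 0.75), (0.75, 1.0)]:
    lin = linear_map(hi) - linear_map(lo)
    non = nonlinear_map(hi) - nonlinear_map(lo)
    print(f"[{lo:.2f}, {hi:.2f}] -> width {lin:.2f} (linear) vs {non:.2f} (nonlinear)")

Under the linear scaling every subinterval is stretched by the same factor (b − a), whereas the nonlinear map compresses the outer subregions and expands the middle one.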

6.6.1. The Optimization Algorithm

The calibration of the universe of discourse is carried out in two main steps:

(i)  identification of a position of the collected membership values in the unit interval by locating the available membership vectors with respect to the linguistic labels of the original frame of cognition.
(ii)  construction of a nonlinear mapping involving the locations derived in (i)

The first step concerns the specification of an element x of the unit interval such that the given vector μ1, μ2, …, μc (the first index pertaining to the data point has been suppressed) matches, to the highest extent, the vector of the membership values of the generic fuzzy sets taken at x. This leads to the optimization task of the form

minimize ||A(x) − μ|| over x ∈ [0, 1],

where

A(x) = [A1(x), A2(x), …, Ac(x)]

and

μ = [μ1, μ2, …, μc],

while ||.|| is a certain normalized distance function computed between the corresponding membership values.
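
A minimal sketch of this matching step is given below. It assumes Gaussian generic terms on [0, 1] (as in the numerical example later in the section), a simple grid search over the unit interval, and a normalized Euclidean distance; the text leaves the distance function and the solver open, so these, together with the modal values and spreads, are illustrative choices only.

import numpy as np

def gaussian(x, m, sigma):
    return np.exp(-((x - m) ** 2) / (2 * sigma ** 2))

# hypothetical frame of cognition: c = 5 generic terms on the unit interval
modal_values = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
spreads = np.full(5, 0.1)

def match_position(mu, grid_size=1001):
    """Locate x in [0, 1] whose membership vector A(x) best matches mu."""
    xs = np.linspace(0.0, 1.0, grid_size)
    A = gaussian(xs[:, None], modal_values[None, :], spreads[None, :])
    # normalized Euclidean distance between A(x) and the given vector mu
    dist = np.sqrt(((A - mu) ** 2).mean(axis=1))
    k = dist.argmin()
    return xs[k], dist[k]

# usage: one polled membership vector (grades over the five linguistic terms)
mu = np.array([0.0, 0.1, 0.9, 0.2, 0.0])
x_best, q_min = match_position(mu)
print(x_best, q_min)

The attained minimum also gives a sense of how faithfully a data point is captured by the generic terms, which is the kind of information the relevance factor fk discussed below is intended to convey.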

The result of this processing phase is concisely summarized in the form of pairs of corresponding elements defined in [0, 1] and [a, b], respectively,

(xk, dk),   k = 1, 2, …, N.

Each of these discrete associations, (xk, dk), comes equipped with a relevance factor (coefficient) fk reflecting how closely the membership vector of the k-th data point matches the generic fuzzy sets at xk. If fk ≈ 1 then the associated correspondence is regarded as highly essential.

The second step of the optimization algorithm departs from the pairs of data summarized in the above form and constructs a nonlinear mapping of [0, 1] onto [a, b] that sends each xk to (an approximation of) the corresponding dk.

To properly address the core issue of context adaptation, we impose several straightforward requirements on this mapping:

  continuity
  monotonicity. We require that the mapping is nondecreasing (we allow it to remain constant over some regions of the universe of discourse). This requirement assures us that the meaning of the mapped linguistic terms is not changed (the semantics are retained). Alternatively, we may request that the mapping be nonincreasing; imposing this requirement consistently reverses the meaning of the linguistic terms of the original frame of cognition.
  boundary conditions. We require that 0 be mapped onto a and 1 onto b; these conditions allow us to fully accommodate the currently available experimental data.

In light of the above properties, the mapping is one-to-one.
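
Writing φ for the mapping (a working symbol introduced here only for compactness), the requirements collect into

\[
\varphi : [0,1] \to [a,b], \qquad \varphi \ \text{continuous and nondecreasing}, \qquad \varphi(0) = a, \qquad \varphi(1) = b,
\]

and, being one-to-one as noted above, φ assigns to every d in [a, b] a unique preimage φ⁻¹(d), which the relocation step below can exploit.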

Finally, once this nonlinear transformation has been constructed, we locate the original fuzzy sets of the generic frame of cognition in the actual universe of discourse ([a, b]) by transporting them through the mapping: the membership of the i-th relocated fuzzy set at an element d of [a, b] equals the membership of Ai at the (unique) point of [0, 1] that the mapping sends onto d, i = 1, 2, …, c. When collected together, these new fuzzy sets form the required (context-adapted) frame of cognition.
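
A sketch of this relocation step is given below. For concreteness it reuses the illustrative smoothstep mapping and the Gaussian generic terms from the earlier sketches; in the method proper the mapping would be the one fitted to the (xk, dk) pairs, so everything here apart from the inversion-by-interpolation idea is an assumption.

import numpy as np

# Relocate a generic term from [0, 1] onto [a, b] through a monotone mapping.
# The mapping below is the illustrative smoothstep from the earlier sketch,
# standing in for the fitted neural mapping.
a, b = 2.0, 18.1
xs = np.linspace(0.0, 1.0, 1001)
phi = a + (b - a) * (3 * xs**2 - 2 * xs**3)      # increasing map of [0, 1] onto [a, b]

def relocated_membership(d, m, sigma):
    """Membership at d in [a, b] of the term with modal value m and spread sigma."""
    x = np.interp(d, phi, xs)                    # numerical inverse: x such that phi(x) = d
    return np.exp(-((x - m) ** 2) / (2 * sigma ** 2))

# e.g., the middle generic term (modal value 0.5, spread 0.1) sampled across [a, b]
for d in np.linspace(a, b, 5):
    print(round(float(d), 2), round(float(relocated_membership(d, 0.5, 0.1)), 3))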

6.6.2. Neural network realization of the nonlinear mapping

The nonlinear mapping is realized through a neural network with its structure shown in Fig. 6.15.


Figure 6.15  Neural network in the realization of nonlinear mapping

The network is composed of “n” nodes situated in the hidden layer and a single node placed at the output layer. The neurons in the hidden layer implement a series of local receptive fields realized as two-parameter sigmoid nonlinearities, characterized by modal values mi ∈ [0, 1] and spreads αi > 0, i = 1, 2, …, n; the input connections of these elements are fixed and equal to 1. The single neuron forming the output layer aggregates the activations of the hidden nodes through its adjustable connections, so that, concisely, the network realizes a single-input single-output mapping of [0, 1] onto [a, b].

The learning of the network is supervised and guided by a gradient-based optimization of a specified performance index. As the training method is largely standard, the details are not discussed here. Moreover, the proposed method generalizes readily to the multidimensional case.
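
One consistent reading of this architecture and training scheme is sketched below: n two-parameter sigmoid receptive fields feed a single output node whose weights are kept nonnegative so that the fitted map stays nondecreasing, and the parameters are adjusted by plain gradient descent on a squared-error performance index over the (xk, dk) pairs. The exact formulas, the performance index, and the output bias used here are assumptions, not the book's own equations.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, m, alpha, w, w0):
    # hidden layer: two-parameter sigmoids (modal value m_i, spread alpha_i);
    # output layer: nonnegative-weighted sum plus a bias (the bias is an assumption)
    z = sigmoid((x[:, None] - m[None, :]) / alpha[None, :])
    return z @ w + w0, z

def train(xk, dk, n=6, lr=0.02, epochs=30000):
    m = np.linspace(0.0, 1.0, n)                 # modal values spread over [0, 1]
    alpha = np.full(n, 0.15)                     # spreads
    w = np.full(n, (dk.max() - dk.min()) / n)    # nonnegative output weights
    w0 = dk.min()
    N = xk.size
    for _ in range(epochs):
        y, z = forward(xk, m, alpha, w, w0)
        err = (y - dk) / N                       # gradient factor of the mean squared error / 2
        s = z * (1.0 - z) * w[None, :]           # backprop through the sigmoids
        grad_w = z.T @ err
        grad_w0 = err.sum()
        grad_m = (err[:, None] * s * (-1.0 / alpha[None, :])).sum(axis=0)
        grad_alpha = (err[:, None] * s * (-(xk[:, None] - m[None, :]) / alpha[None, :] ** 2)).sum(axis=0)
        w = np.maximum(w - lr * grad_w, 0.0)     # keep weights >= 0 -> nondecreasing mapping
        w0 -= lr * grad_w0
        m = np.clip(m - lr * grad_m, 0.0, 1.0)   # keep modal values inside [0, 1]
        alpha = np.maximum(alpha - lr * grad_alpha, 0.02)
    return m, alpha, w, w0

# usage with synthetic (x_k, d_k) pairs; a monotone target over [2, 18.1] for illustration
xk = np.linspace(0.0, 1.0, 20)
dk = 2.0 + 16.1 * xk**2
params = train(xk, dk)
y, _ = forward(xk, *params)
print(np.round(y - dk, 2))

The nonnegativity constraint on the output weights is one simple way to honor the monotonicity requirement of Section 6.6.1; the printed residuals give a rough idea of how well the synthetic pairs are reproduced.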

As a numerical illustration of the neural calibration of linguistic terms, we consider a data set whose elements are situated in a segment of real numbers, [2, 18.1], and are assigned to five linguistic categories. The generic fuzzy sets defined in [0, 1] are described by Gaussian membership functions,

A(x) = exp(−(x − m)²/(2σ²)),

with m and σ denoting the modal value and spread of the corresponding term.
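
The snippet below sets up five such Gaussian terms on [0, 1] for this example. The book's actual data set and term parameters are not shown above, so the evenly spaced modal values and common spread are assumptions, chosen only so that the terms are normal, unimodal, and only partially overlapping, in line with the semantic-integrity requirements stated at the beginning of the section.

import numpy as np

# Illustrative generic frame for the example: five Gaussian terms on [0, 1].
modal_values = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
spreads = np.full(5, 0.1)

def A(i, x):
    """Membership of the i-th generic term at x in [0, 1]."""
    return np.exp(-((x - modal_values[i]) ** 2) / (2 * spreads[i] ** 2))

xs = np.linspace(0.0, 1.0, 11)
for i in range(5):
    print(f"A{i+1}:", np.round(A(i, xs), 2))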

