For illustrative reasons, we consider a neural network with a single output. Instead of treating the input variables independently, we define a number of fuzzy relations over the Cartesian product of the input variables, denoted A1, A2, ..., Ac. The fuzzy sets defined in the output space of the neural network are denoted C1, C2, ..., Cp, respectively. The objective is to reveal meaningful rules between the Ai and the Cj. For this purpose, we randomly generate a number of data points in the input space, namely x(1), x(2), ..., x(M). With each x(k) comes y(k), the output computed by the neural network in response to this input. These pairs are then cast in the setting of the fuzzy sets and fuzzy relations; the resulting membership values lead to the computation of the associations between the corresponding fuzzy sets. In the simplest form, one can take the minimum of Ai(x(k)) and Cj(y(k)) as a measure reflecting the cohesion between Ai and Cj manifested by the k-th pair of input-output data. Averaging over the M data points gives a complete picture of the strength of the association over all the elements. It is convenient to collect all these partial results in a single matrix having p rows and c columns, whose entry for the pair (Ai, Cj) serves as the confidence level of the rule "if x is Ai then y is Cj". The rules are discovered by admitting a minimal level of association: statements with weak confidence levels are effortlessly weeded out. An analysis of the individual rows or columns of the association table sheds light on the relationships between the rules.
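The association computation above can be sketched numerically. In this minimal sketch, the single-output "network" is a simple tanh mapping and the triangular membership functions are assumptions introduced purely for illustration; only the min-and-average construction of the association matrix follows the text.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical single-output network: y = f(x) on a 1-D input for simplicity.
f = lambda x: np.tanh(2.0 * x)

# Input fuzzy relations A1..Ac and output fuzzy sets C1..Cp (here c = p = 3).
A = [lambda x: tri(x, -2, -1, 0),
     lambda x: tri(x, -1, 0, 1),
     lambda x: tri(x, 0, 1, 2)]
C = [lambda y: tri(y, -2, -1, 0),
     lambda y: tri(y, -1, 0, 1),
     lambda y: tri(y, 0, 1, 2)]

M = 1000
rng = np.random.default_rng(0)
x = rng.uniform(-1.5, 1.5, M)   # randomly generated inputs x(1)..x(M)
y = f(x)                        # network responses y(1)..y(M)

# Association matrix: p rows (output sets) by c columns (input relations);
# each entry averages min(Ai(x(k)), Cj(y(k))) over the M data pairs.
R = np.array([[np.mean(np.minimum(Ai(x), Cj(y))) for Ai in A] for Cj in C])

# Admit only rules "if x is Ai then y is Cj" whose confidence exceeds
# a minimal level of association.
threshold = 0.15
rules = [(i, j) for j in range(len(C)) for i in range(len(A))
         if R[j, i] > threshold]
```

Each entry of R lies in [0, 1]; scanning a row shows which input relations support a given output set, mirroring the row/column analysis of the association table mentioned above.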
6.8.2. Linguistic Interpretation of Self-Organizing Maps

This style of usage of fuzzy sets is advantageous in some topologies of neural networks, especially those having a substantial number of outputs and whose learning is carried out in unsupervised form. Fuzzy sets are aimed at interpreting the results produced by such architectures and facilitate the process of data mining. To illustrate the idea, we confine ourselves to self-organizing maps. These maps allow us to organize multidimensional patterns in such a way that their vicinity (neighborhood) in the original space is retained when the patterns are distributed in a low-dimensional space; in this way the map attempts to preserve the main topological properties of the data set. Quite often, the maps take the form of two-dimensional arrays of regularly distributed processing elements. The mechanism of self-organization is established via competitive learning: the unit that is closest to the current pattern is given an opportunity to modify its connections and follow the pattern. These modifications are also allowed to affect the neurons situated in the nearest neighborhood of the winning neuron (node) of the map. Once training has been completed, the map can locate any multidimensional pattern by identifying the most active processing unit. Subsequently, linguistic labels become essential components of data mining by placing the activities of the network in a certain linguistic context. This concept is visualized in Fig. 6.23. Let us assume that for each variable we have specified a number of linguistic terms (contexts) defined as fuzzy sets in the corresponding space, namely A1, A2, ..., An1 for x1; B1, B2, ..., Bn2 for x2; and so on.
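The competitive-learning mechanism described above can be sketched as follows. The grid size, learning rate, and Gaussian neighborhood function are illustrative assumptions, not taken from the text; only the winner-takes-most update scheme reflects the description.

```python
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 6, 6, 3                  # 2-D array of processing elements
W = rng.uniform(0, 1, (grid_h, grid_w, dim))   # connection weights of each node

def winner(W, x):
    """Grid index of the node closest to pattern x (the most active unit)."""
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def update(W, x, lr=0.1, radius=1.5):
    """The winning node and its nearest neighbors move toward the pattern."""
    wi, wj = winner(W, x)
    ii, jj = np.indices((W.shape[0], W.shape[1]))
    dist2 = (ii - wi) ** 2 + (jj - wj) ** 2
    h = np.exp(-dist2 / (2 * radius ** 2))     # neighborhood function
    W += lr * h[..., None] * (x - W)
    return W

X = rng.uniform(0, 1, (500, dim))              # multidimensional training patterns
for x in X:
    update(W, x)

# Once trained, any pattern is located on the map via its winning node.
pos = winner(W, X[0])
```

After training, nearby patterns in the input space tend to activate nearby nodes on the grid, which is the topology-preservation property the text refers to.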
When exposed to an input pattern, the map responds with activation levels computed at each node in the grid. The logical context leads to an extra two-dimensional grid whose elements are activated based on the corresponding activation levels of the nodes located in the lower layer as well as on the levels of the contexts assumed for the individual variables. These combinations are of the AND form; the upper grid is constructed as a series of AND neurons, cf. Chapter 7. The activation region obtained in this way indicates how strongly the linguistic description (descriptors) covers (activates) the data space. The higher the activation level of the region (higher values of F), the more visible the imposed linguistic pattern within the data set. Fig. 6.24 summarizes some possible patterns of activation; note the diversity in the size of these regions, their intensity, and their compactness.
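The upper AND-neuron grid can be sketched as below. The lower-grid activations and the context membership degrees are synthetic assumptions, and the minimum is used as one common realization of the AND combination; the choice of t-norm and the 0.5 cutoff are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Activation levels of the lower grid in response to an input pattern
# (synthetic here; in practice these come from the trained map).
activation = rng.uniform(0, 1, (6, 6))

# Membership degrees of the currently imposed linguistic contexts,
# e.g. A_i(x1) and B_j(x2) for the hypothetical contexts assumed above.
context_levels = [0.8, 0.6]

# AND combination (minimum t-norm) of each node's activation with the
# contexts: the upper grid of AND neurons.
F = np.minimum(activation, min(context_levels))

# Higher values of F mark the region where the linguistic description
# covers (activates) the data space.
region = F > 0.5
```

Varying the contexts and re-reading F reproduces the diverse activation regions of Fig. 6.24: their size, intensity, and compactness reflect how well the imposed linguistic pattern matches the data.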
Copyright © CRC Press LLC