**CHAPTER 4 ANFIS BASED DATA RATE PREDICTION**

**4.2.4 Membership Function and Rules Selection for ANFIS**

In a conventional fuzzy inference system, the number of rules is decided by an expert who is familiar with the target system to be modeled. In ANFIS simulation, however, no expert is available and the number of membership functions (MFs) assigned to each input variable is chosen empirically, that is, by plotting the data sets and examining them visually, or simply by trial and error. For data sets with more than three inputs, visualization techniques are not very effective and most of the time we have to rely on trial and error. This situation is similar to that of neural networks; there is just no simple way to determine in advance the minimal number of hidden units needed to achieve a desired performance level.

There are several other techniques for determining the numbers of MFs and rules, such as CART and clustering methods. In a fuzzy inference system, there are basically three types of input space partitioning:

- Grid partitioning method
- Scatter partitioning method, which includes:
  - Fuzzy C-means clustering method
  - Subtractive clustering method
- Tree partitioning method

In this thesis, the FIS is generated using grid partitioning and scatter partitioning. Both methods are explained in the following subsections.

**Grid partitioning**

Grid partitioning is an approach for initializing the structure in a fuzzy inference system.

This method generates rules by enumerating all possible combinations of the membership functions of all inputs, so the number of MFs on each input variable uniquely determines the number of rules. The initial values of the premise parameters are set in such a way that the centers of the MFs are equally spaced along the range of each input variable. Figure 4.4 shows an example of grid partitioning.

**Figure 4.4 Grid partitioning of the input space for a two-input Sugeno fuzzy model with nine rules.**
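The equal spacing of the initial MF centers described above can be sketched in a few lines of Python. The function name `mf_centers` is ours, for illustration only; it simply places `n_mf` centers uniformly over the range of one input variable:

```python
def mf_centers(lo, hi, n_mf):
    """Equally spaced membership-function centers over the range [lo, hi]."""
    if n_mf == 1:
        return [(lo + hi) / 2.0]      # single MF sits at the midpoint
    step = (hi - lo) / (n_mf - 1)
    return [lo + k * step for k in range(n_mf)]

# Three MFs on an input ranging over [0, 1]: centers at 0.0, 0.5, 1.0.
print(mf_centers(0.0, 1.0, 3))
```

These centers become the initial premise parameters, which ANFIS then tunes during training.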

The grid-partitioning approach to fuzzy systems has the serious disadvantage that the very regular partition of the input space may be unable to produce a rule set of acceptable size that handles a given data set well. If, for example, the data contains regions with several small clusters of different classes, then small rule patches have to be created to classify the data in those regions correctly. This problem becomes even more serious as the dimension of the input data increases, leading to an exponential explosion. For instance, for a fuzzy inference system with 10 inputs, each with two membership functions, grid partitioning leads to 2^10 = 1024 rules, which is prohibitively large for any practical learning method. The "curse of dimensionality" refers to this situation, in which the number of fuzzy rules under grid partitioning increases exponentially with the number of input variables. The result is longer simulation times and poor results for high-dimensional problems. Scatter partitioning methods are used to overcome this.
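The rule-count explosion is easy to verify numerically: grid partitioning enumerates every combination of MFs, so the rule count is the product of the MF counts per input. A minimal sketch (the function name `grid_rule_count` is ours, not from the thesis):

```python
from math import prod

def grid_rule_count(mfs_per_input):
    """Number of rules generated by grid partitioning: one rule per
    combination of membership functions, so the product of the counts."""
    return prod(mfs_per_input)

# The example from the text: 10 inputs, 2 MFs each -> 2**10 = 1024 rules.
print(grid_rule_count([2] * 10))
# A modest 4-input system with 3 MFs each already needs 3**4 = 81 rules.
print(grid_rule_count([3, 3, 3, 3]))
```

This is why grid partitioning is practical only for systems with few inputs.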

**Scatter partitioning **

To eliminate the problems associated with grid partitioning, other ways of dividing the input space into rule patches have been proposed. This approach, known as *scatter partitioning* [**24**], allows the IF-parts of the fuzzy rules to be positioned at arbitrary locations in the input space. If the rules are represented by *n*-dimensional Gaussians or normalized Gaussians, the centers of the Gaussians are no longer confined to the corners of a rectangular grid; rather, they can be chosen freely, e.g., by a clustering algorithm working on the training data. Two clustering algorithms have therefore been used, as mentioned previously: 1) fuzzy C-means clustering and 2) subtractive clustering.

**Fuzzy C-means clustering (FCM)**

Clustering partitions a data set into several groups such that the similarity within a group is larger than the similarity among groups. Achieving such a partitioning requires a similarity metric that takes two input vectors and returns a value reflecting their similarity. Since most similarity metrics are sensitive to the ranges of elements in the input vectors, each input variable must be normalized to within, say, the unit interval [0, 1].
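The normalization step mentioned above is plain min-max scaling, applied to each input variable (column) independently. A minimal sketch (the helper name `minmax_normalize` and the toy data are ours):

```python
def minmax_normalize(column):
    """Rescale a list of values linearly onto the unit interval [0, 1]."""
    lo, hi = min(column), max(column)
    span = hi - lo
    if span == 0:                       # constant column: map everything to 0
        return [0.0 for _ in column]
    return [(v - lo) / span for v in column]

# Each input variable (column) is normalized independently before clustering.
data = [[2.0, 100.0], [4.0, 300.0], [6.0, 200.0]]
columns = list(zip(*data))
norm = list(zip(*[minmax_normalize(list(col)) for col in columns]))
print(norm)  # [(0.0, 0.0), (0.5, 1.0), (1.0, 0.5)]
```

Without this step, a variable measured in hundreds would dominate the distance metric over one measured in single digits.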

Fuzzy C-means clustering (FCM), also known as fuzzy ISODATA, is a data clustering algorithm in which each data point belongs to a cluster to a degree specified by a membership grade. Bezdek proposed this algorithm in 1973. FCM partitions a collection of n vectors x_i, i = 1, ..., n into c fuzzy groups and finds a cluster center in each group such that a cost function based on a dissimilarity measure is minimized. FCM employs fuzzy partitioning, so a given data point can belong to several groups with degrees of belongingness specified by membership grades between 0 and 1. The number of cluster centers corresponds to the number of rules, so the number of rules can be fixed in advance.
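As a rough illustration of the alternating updates FCM performs, here is a pure-Python sketch for 1-D data with fuzziness exponent m = 2 (the function name `fcm`, the toy data, and the fixed iteration count are ours; actual experiments would use a library implementation):

```python
import random

def fcm(points, c, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy C-means sketch: returns (centers, memberships)."""
    rng = random.Random(seed)
    n = len(points)
    # Random initial membership grades; each row sums to 1 across clusters.
    u = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    centers = [0.0] * c
    for _ in range(iters):
        # Each center is the mean of the data weighted by u^m.
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            centers[j] = sum(wi * x for wi, x in zip(w, points)) / sum(w)
        # Memberships are recomputed from relative distances to the centers.
        for i in range(n):
            d = [abs(points[i] - cj) or 1e-12 for cj in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
    return centers, u

# Two obvious groups; FCM should place one center near each.
data = [0.10, 0.15, 0.20, 0.90, 0.95, 1.00]
centers, u = fcm(data, c=2)
print(sorted(centers))
```

Each resulting center seeds one Gaussian MF, i.e., one fuzzy rule, which is how fixing c fixes the rule count.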

**Subtractive clustering (SC)**

When there is no clear idea of how many clusters there should be for a given set of data, *subtractive clustering* [**25**] is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers. Subtractive clustering operates by finding the optimal data point to define as a cluster center, based on the density of the surrounding data points. All data points within the radius distance of this point are then removed from consideration in order to determine the next cluster and its center. This process is repeated until all of the data lies within the radius distance of a cluster center. This method is used for rule generation when the number of inputs is large; it yields an optimized rule set based on the specified radii. In this work, all three FIS generation methods are used and compared for data rate prediction.
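The density-then-subtract loop described above can be sketched for 1-D data as follows. This is a simplified Chiu-style potential formulation, not the thesis's exact implementation; the function name, the squash radius of 1.5·ra, the stopping ratio, and the toy data are all our assumptions:

```python
import math

def subtractive_clustering(points, ra=0.5, stop_ratio=0.15):
    """Sketch of subtractive clustering on 1-D data.

    ra         : neighbourhood radius (the 'radii' parameter in the text)
    stop_ratio : stop when the best remaining potential falls below this
                 fraction of the first center's potential.
    """
    alpha = 4.0 / ra ** 2
    rb = 1.5 * ra                     # squash radius, conventionally 1.5 * ra
    beta = 4.0 / rb ** 2
    # Initial potential of each point = density of surrounding data points.
    pot = [sum(math.exp(-alpha * (x - y) ** 2) for y in points)
           for x in points]
    centers = []
    first = max(pot)
    for _ in range(len(points)):      # at most one center per data point
        p_best = max(pot)
        if p_best < stop_ratio * first:
            break
        c = points[pot.index(p_best)]
        centers.append(c)
        # Subtract the chosen center's influence so the next center
        # lands in a different dense region.
        pot = [p - p_best * math.exp(-beta * (x - c) ** 2)
               for x, p in zip(points, pot)]
    return centers

# Two dense regions -> the algorithm finds two centers on its own,
# without being told the cluster count in advance.
data = [0.10, 0.12, 0.15, 0.90, 0.92, 0.95]
centers = subtractive_clustering(data, ra=0.3)
print(centers)
```

Note the contrast with FCM: here the number of clusters (and hence rules) emerges from the data and the chosen radius, rather than being fixed beforehand.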