
Feature reduction fuzzy C-Means algorithm leveraging the marginal kurtosis measure

Pan, Xingguang ; Wang, Shitong
In: Journal of Intelligent & Fuzzy Systems, Jg. 39 (2020-11-19), S. 7259-7279


The feature reduction fuzzy c-means (FRFCM) algorithm has been proven to be effective for clustering data with redundant/unimportant feature(s). However, the FRFCM algorithm still has the following disadvantages. 1) The FRFCM uses the mean-to-variance-ratio (MVR) index to measure the feature importance of a dataset, but this index is affected by data normalization, i.e., a large MVR value of original feature(s) may become small if the data are normalized, and vice versa. Moreover, the MVR value(s) of the important feature(s) of a dataset may not necessarily be large. 2) The feature weights obtained by the FRFCM are sensitive to the initial cluster centers and initial feature weights. 3) The FRFCM algorithm may be unable to assign the proper weights to the features of a dataset. Thus, in the feature reduction learning process, important features may be discarded, but unimportant features may be retained. These disadvantages can cause the FRFCM algorithm to discard important feature components. In addition, the threshold for the selection of the important feature(s) of the FRFCM may not be easy to determine. To mitigate the disadvantages of the FRFCM algorithm, we first devise a new index, named the marginal kurtosis measure (MKM), to measure the importance of each feature in a dataset. Then, a novel and robust feature reduction fuzzy c-means clustering algorithm called the FRFCM-MKM, which incorporates the marginal kurtosis measure into the FRFCM, is proposed. Furthermore, an accurate threshold is introduced to select important feature(s) and discard unimportant feature(s). Experiments on synthetic and real-world datasets demonstrate that the FRFCM-MKM is effective and efficient.

Keywords: Fuzzy c-means; feature reduction learning; marginal kurtosis measure; mean-to-variance ratio

1 Introduction

As a data-driven technique for data analysis, clustering has been widely used in research fields such as statistics, pattern recognition and machine learning. Given a dataset X consisting of data points x_i, i = 1, 2, ..., n, clustering algorithms aim to group X into k clusters S = {S_1, S_2, ..., S_k} such that data points within the same cluster have the highest similarity and data points in different clusters have the highest dissimilarity. Perhaps two of the most well-known partition-based clustering algorithms are the k-means [[1]] and the fuzzy c-means (FCM) [[2]], the latter being a generalization of the former. The k-means is known for its simplicity, and the FCM remains popular due to its capability to express data points that belong to more than one cluster. They have been widely studied and applied in many areas [[4], [6]]. However, the k-means and FCM do not take feature importance into account: they cluster data using equal feature weights without considering the structure of the data. That is, important and less important features make the same contribution to the clustering. As a result, their clustering performance is affected by the existence of unimportant or redundant features, which may cause the k-means and FCM to produce incorrect clustering results and significantly degrade the clustering performance in some applications [[7]].

To overcome these shortcomings, feature weighting and feature selection techniques are used in the traditional k-means and FCM. Feature selection techniques assume that each of the selected features has the same degree of importance. The feature weighting method is an extension of the former and makes feature weights take values in the interval of [0,1]; the more important a feature is, the larger the weight should be.

Feature weighting is an important technique that has been studied extensively in the literature [[8]]. In clustering analysis, several variants of the k-means and FCM have been presented, such as the weighted k-means (WKM) [[9]]; the entropy-weighted k-means (EWKM) [[10]]; the Minkowski weighted k-means (MWKM) [[11]]; the Minkowski metric fuzzy weighted c-means (MWFCM) [[12]]; the weighted FCM using feature-weight learning (WFCM) [[13]]; the two versions of feature-weighted FCM known as simultaneous clustering and attribute discrimination (SCAD1 & SCAD2) [[14]]; and the enhanced soft subspace clustering (ESSC) [[15]] algorithm, which can be regarded as an extension of conventional feature-weighted clustering. By using a feature weighting technique in the k-means or FCM, the performance may be improved. However, these algorithms have no mechanism to remove unimportant features during the clustering process, except for the FRFCM [[16]], which uses a feature reduction technique. As a pioneering work on feature reduction methods, the FRFCM algorithm uses the mean-to-variance ratio (MVR) to measure feature importance and integrates it into the objective function to control the within-cluster dispersion term and the entropy term. The feature reduction technique has also been introduced to the multi-view k-means clustering [[17]] algorithm. Subsequently, this feature reduction learning method was extended to feature-weighted possibilistic c-means clustering [[18]].

The main contributions of this paper are as follows: (1) an MKM index is proposed to measure feature importance, (2) a new and robust feature reduction fuzzy clustering algorithm is proposed to cluster high-dimensional data, and (3) a threshold with a theoretical basis is designed to determine which features need to be retained and which need to be deleted in the clustering process.

The rest of the paper is organized as follows. Section 2 first introduces the feature-weighted k-means and feature-weighted FCM clustering algorithms used in the present work, and then introduces feature reduction fuzzy c-means clustering in detail. Section 3 introduces the techniques of the proposed algorithm, its convergence proof, and complexity analysis. Section 4 presents the results of the experiment, and, finally, Section 5 presents the conclusion and ideas for future work.

2 Related works

In this section, we provide a review of the literature on feature weighted clustering methods including the FRFCM. The commonalities and characteristics of each algorithm are discussed.

A. Feature Weighting using K-Means

Huang et al. [[9]] proposed the WKM algorithm which has the following objective function:

(1) $J(U,V,w)=\sum_{i=1}^{n}\sum_{k=1}^{c}\mu_{ik}\sum_{j=1}^{d}w_{j}^{\beta}(x_{ij}-v_{kj})^{2}\quad \text{s.t.}\ \sum_{j=1}^{d}w_{j}=1,\ \sum_{k=1}^{c}\mu_{ik}=1,\ \mu_{ik}\in\{0,1\}.$

In its implementation, it adds an extra step to the basic k-means to determine the feature weights in each iteration of the clustering process. The weight of each feature is inversely related to the sum of the within-cluster variances of that feature. Therefore, unimportant features can be identified, and their influence on the clustering result is significantly reduced, which can improve the clustering accuracy and performance. However, the WKM requires the user to subjectively specify an additional parameter, namely the exponent β of the feature weights, and it is difficult for users to determine an appropriate value for this parameter to obtain a clustering result of the desired quality. In addition, the feature weights generated by this method may not represent the feature importance well [[19]].
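For illustration, the following minimal sketch (in Python/NumPy, not the MATLAB environment used for the paper's experiments) computes feature weights that minimize the objective in Equation (1) for fixed memberships and centers; the closed form is the standard result of applying a Lagrange multiplier to the sum-to-one constraint and is shown here as a hedged sketch rather than a transcription of [[9]].

```python
import numpy as np

def wkm_weights(X, labels, centers, beta=2.0):
    """Weights minimizing Eq. (1) for fixed memberships/centers:
    w_j = 1 / sum_t (D_j / D_t)^(1/(beta-1)), where D_j is the
    within-cluster dispersion of feature j (larger D_j -> smaller w_j)."""
    D = np.zeros(X.shape[1])
    for k, v in enumerate(centers):
        D += ((X[labels == k] - v) ** 2).sum(axis=0)   # per-feature within-cluster dispersion
    D = np.maximum(D, 1e-12)                           # guard features with zero dispersion
    ratio = (D[:, None] / D[None, :]) ** (1.0 / (beta - 1.0))
    return 1.0 / ratio.sum(axis=1)                     # weights sum to one by construction
```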

Jing et al. [[10]] proposed the entropy-weighted k-means (EWKM) method, which was developed as a subspace clustering technique and is particularly useful for high-dimensional sparse data since it uses feature weighting. The EWKM minimizes the within-cluster differences and maximizes the negative entropy of the weights. The reason behind this is that it uses more dimensions to identify clusters in high-dimensional sparse data, avoiding the problems caused by identifying such clusters using only a few dimensions. The EWKM has the following objective function:

(2) $J(U,V,W)=\sum_{k=1}^{c}\Big[\sum_{i=1}^{n}\sum_{j=1}^{d}\mu_{ik}w_{kj}(x_{ij}-v_{kj})^{2}+\gamma\sum_{j=1}^{d}w_{kj}\log w_{kj}\Big]\quad \text{s.t.}\ \sum_{j=1}^{d}w_{kj}=1,\ 0\le w_{kj}\le 1.$

In its implementation, it extends the standard k-means algorithm with an additional step to calculate the feature weight of each cluster in each iteration of the clustering process. The weight is inversely proportional to the sum of the intra-cluster variances of the variables in the cluster. Because the EWKM requires many calculations, it is very time-consuming.

B. Feature Weighting in Fuzzy clustering

WFCM was proposed by Wang et al. [[13]] to improve the performance of the FCM. The WFCM attempts to minimize the following objective function:

(3) $J(U,V,w)=\sum_{i=1}^{n}\sum_{k=1}^{c}\mu_{ik}^{m}\sum_{j=1}^{d}w_{j}^{2}(x_{ij}-v_{kj})^{2}$

where wj is a feature weight calculated by the algorithm proposed by Yeung and Wang [[21]].

SCAD2 was proposed by Frigui and Nasraoui [[14]] and uses a cluster-dependent feature weighting scheme. The objective function of SCAD2 is as follows.

(4) $J(U,V,w)=\sum_{j=1}^{k}\sum_{i=1}^{n}\sum_{l=1}^{p}\mu_{ij}^{m}w_{jl}^{q}(x_{il}-v_{jl})^{2}$

However, the feature weights produced by SCAD2 do not properly represent the importance of the features in the clusters in some situations [[22]].

By using the between-cluster information of the dataset, Deng et al. [[15]] proposed the ESSC algorithm, which uses the following objective function:

(5) $J(U,V,W)=\sum_{k=1}^{c}\sum_{i=1}^{n}\mu_{ik}^{m}\sum_{j=1}^{d}w_{kj}(x_{ij}-v_{kj})^{2}+\gamma\sum_{k=1}^{c}\sum_{j=1}^{d}w_{kj}\ln w_{kj}-\eta\sum_{k=1}^{c}\sum_{i=1}^{n}\mu_{ik}^{m}\sum_{j=1}^{d}w_{kj}(v_{kj}-v_{0j})^{2}$

where $v_{0j}$ denotes the jth feature of the center of the whole dataset and $\sum_{k=1}^{c}\sum_{i=1}^{n}\mu_{ik}^{m}\sum_{j=1}^{d}w_{kj}(x_{ij}-v_{kj})^{2}$ is the within-cluster dispersion. The ESSC algorithm minimizes the within-cluster dispersion, maximizes the negative entropy of the weights, and meanwhile minimizes the weighted distance between the cluster centers and the data center. For some datasets, however, the obtained feature weights still do not represent the feature importance of the clusters well.

C. Fuzzy Clustering With a Feature Reduction Scheme

As a pioneering work on feature reduction learning algorithms, Yang et al. [[16]] developed a fuzzy clustering algorithm named the FRFCM, which uses a feature-weighted entropy together with a feature reduction scheme to cluster data. The FRFCM uses the mean-to-variance ratio (MVR) index to measure the feature importance of a dataset, and the MVR index is integrated into the objective function to control the within-cluster weighted feature dispersion term and the entropy term. After computing the feature weights with an updating equation, the FRFCM applies a threshold and discards the features whose weights fall below it. By removing the unimportant features from the data, the FRFCM can improve the accuracy of the clustering algorithm and speed up the clustering process. The objective function of the FRFCM algorithm is given as follows:

(6) $\min_{U,V,w} J(U,V,w)=\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\sum_{j=1}^{d}\delta_{j}w_{j}(x_{ij}-v_{kj})^{2}+\frac{n}{c}\sum_{j=1}^{d}w_{j}\log(\delta_{j}w_{j})$

subject to

$\sum_{k=1}^{c}u_{ik}=1,\ i=1,\dots,n,\qquad \sum_{j=1}^{d}w_{j}=1,$

where $\delta_{j}=\left(\dfrac{\mathrm{mean}(x)}{\mathrm{var}(x)}\right)_{j},\ j=1,2,\dots,d.$

The iteration formula of wj is as follows:

(7) $w_{j}=\dfrac{\frac{1}{\delta_{j}}\exp\!\Big(-\frac{c}{n}\sum_{k=1}^{c}\sum_{i=1}^{n}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}\Big)}{\sum_{t=1}^{d}\frac{1}{\delta_{t}}\exp\!\Big(-\frac{c}{n}\sum_{k=1}^{c}\sum_{i=1}^{n}u_{ik}^{m}\delta_{t}(x_{it}-v_{kt})^{2}\Big)}$

In Equation (7), since δj does not depend on the subscripts i and k, it can be moved outside the inner sums, so Equation (7) can be rewritten as follows:

(8) $w_{j}=\dfrac{\frac{1}{\delta_{j}}\exp\!\Big(-\frac{c}{n}\delta_{j}\sum_{k=1}^{c}\sum_{i=1}^{n}u_{ik}^{m}(x_{ij}-v_{kj})^{2}\Big)}{\sum_{t=1}^{d}\frac{1}{\delta_{t}}\exp\!\Big(-\frac{c}{n}\sum_{k=1}^{c}\sum_{i=1}^{n}u_{ik}^{m}\delta_{t}(x_{it}-v_{kt})^{2}\Big)}$

Let $S_{j}=\sum_{k=1}^{c}\sum_{i=1}^{n}u_{ik}^{m}(x_{ij}-v_{kj})^{2}$, where $S_j$ is the within-cluster dispersion of the jth feature. Because the denominator is a positive constant, if we denote this constant by Const, Equation (7) can be simplified as follows:

(9) $w_{j}=\dfrac{1}{\delta_{j}\cdot \mathrm{Const}}\exp\!\Big(-\frac{c}{n}\delta_{j}S_{j}\Big)$

Since δj is used to measure feature importance, the FRFCM should produce a large weight for a feature with a large δj, and the algorithm should retain the features with large weights in the feature reduction clustering process. However, as can be seen from Equation (9), the update equation of wj is a function of both δj and Sj: it produces a large weight for the jth feature only if both δj and Sj are small, and it cannot produce a large weight for a large δj unless Sj is small enough. As a consequence, important features may not be identified by the FRFCM, which can make the algorithm incorrectly discard important features from the dataset in its first iteration. This is one of the shortcomings of the FRFCM algorithm. In addition, the FRFCM has two other disadvantages. 1) The MVR values of the important features of some datasets are not necessarily large. For example, the MVR index of the iris dataset is δ = [8.522, 16.244, 1.207, 2.058], indicating that the 1st and 2nd features are the two important features, yet most feature-weighting clustering algorithms consider the 3rd and 4th features to be the two important features [[16]]. Moreover, the MVR index changes when the data are normalized: a large MVR value may become small, and a small MVR value may become large, after normalization. 2) The feature weights obtained by the FRFCM are very sensitive to the initialization. These disadvantages can degrade the performance of the FRFCM.

Next, we propose a novel feature reduction fuzzy c-means clustering algorithm that leverages the marginal kurtosis measure to mitigate the above disadvantages of the FRFCM algorithm.

3 The feature reduction fuzzy c-means clustering leveraging the marginal kurtosis measure

A. Problem Definition

In data mining and data analysis tasks, if a feature or variable possesses only one mode (Fig. 1(a)), it is a less important feature or variable. However, if a feature or variable has two distinct modes (Fig. 1(b)), it remains important for the learning algorithm. Because the MVR index in [[16]] is unable to measure the importance of features with this kind of distribution, in this paper we devise a new index to measure feature importance.

Graph: Fig. 1 Features of two modal types. (a) Unimodal distribution; (b) bimodal distribution.

Definition 1. Given a dataset X, let $\eta=(X-E(X))^{2}$. The marginal kurtosis measure of X is defined as follows:

(10) $\delta_{j}=\left(\dfrac{E\eta}{\sqrt{\operatorname{var}(\eta)}}\right)_{j}=\left(\dfrac{E\big((X-E(X))^{2}\big)}{\sqrt{E\Big[\big((X-E(X))^{2}-E((X-E(X))^{2})\big)^{2}\Big]}}\right)_{j}=\left(\dfrac{E\big((X-E(X))^{2}\big)}{\sqrt{E\big((X-E(X))^{4}\big)-\big(E((X-E(X))^{2})\big)^{2}}}\right)_{j}=\left(\dfrac{1}{\sqrt{E\big((X-E(X))^{4}\big)\big/\big(E((X-E(X))^{2})\big)^{2}-1}}\right)_{j}=\left(\dfrac{1}{\sqrt{\operatorname{kurtosis}(X)-1}}\right)_{j}$

In Equation (10), δj is the mean-to-standard-deviation ratio of η, and it is in fact a function of the kurtosis of X. Thus, we name it the marginal kurtosis measure and denote it as MKM. Compared with the MVR of X, the MKM of X has the property of being unaffected by data normalization.
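A minimal sketch of the MKM computation (Python/NumPy, whereas the paper's experiments are in MATLAB). Since the kurtosis of each feature is invariant to shifting and rescaling, the MKM is unchanged by per-feature normalization; the check at the end illustrates this on random data.

```python
import numpy as np

def mkm(X):
    """Marginal kurtosis measure (Eq. 10): delta_j = 1 / sqrt(kurtosis_j - 1),
    with the (non-excess) kurtosis E[(x - mu)^4] / (E[(x - mu)^2])^2 of each column."""
    Xc = X - X.mean(axis=0)
    m2 = (Xc ** 2).mean(axis=0)          # second central moment of each feature
    m4 = (Xc ** 4).mean(axis=0)          # fourth central moment of each feature
    return 1.0 / np.sqrt(m4 / m2 ** 2 - 1.0)

# Kurtosis is shift- and scale-invariant, so min-max normalization leaves the MKM unchanged.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4)) * [1.0, 0.5, 2.0, 3.0]
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(np.allclose(mkm(X), mkm(X_norm)))   # True
```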

Next, we use an example to illustrate the advantage of the MKM over the MVR to measure the feature importance.

Example 1. In this example, we produce a synthetic dataset with 400 data points. Components x2 and x3 of this dataset are generated from a Gaussian mixture model $\sum_{k=1}^{2}\alpha_{k}N(\mu_{k},\Sigma_{k})$ with the parameters $\alpha_{1}=\alpha_{2}=0.5$, $\Sigma_{1}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ and $\Sigma_{2}=\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix}$. Components x1 and x4 follow uniform distributions over the interval [0, 4] and the interval [0, 2], respectively. We denote this dataset as D1. The MVR value of the original dataset D1 is δMVR = [1.497, 0.981, 1.089, 3.085]. Since δ1 and δ4 are larger than δ2 and δ3, the 1st and 4th features are considered to be the two important features. However, if D1 is normalized, the MVR of D1 becomes δMVR = [5.944, 8.940, 8.906, 6.161], and then the 2nd and 3rd features become the important features (Fig. 3). Since feature importance is an attribute of a dataset, it should remain unchanged even if the data are normalized. However, when D1 is normalized, the MVR index of D1 changes. If the MVR index is used to measure the feature importance, the algorithm therefore cannot reliably differentiate the important features.

To present an intuitive understanding of this property, we further investigate the distributions of the features of dataset D1, shown in Fig. 4. We can clearly see that the 2nd and 3rd features are compact with a cluster-like structure; therefore, they should be more important than the other two features, which coincides with δMKM = [1.072, 1.360, 1.341, 1.143], indicating that the 2nd and 3rd features of dataset D1 are the two important features. If the MVR is used to measure the feature importance, the MVR of D1 is δMVR = [1.497, 0.981, 1.089, 3.085], indicating that the 1st and 4th features are the important features, whereas in fact the 2nd and 3rd features are the two important features of dataset D1. What further misleads the algorithm is that if D1 is normalized, the MVR of D1 becomes δMVR = [5.944, 8.940, 8.908, 6.161], and then the 2nd and 3rd features become the important features. Similarly, the 3rd and 4th features of the iris dataset are the two important features because these features have a cluster-like structure, as shown in Fig. 5, and the MKM of iris is δMKM = [0.834, 0.666, 1.282, 1.222]; however, the MVR of iris is δMVR = [8.103, 13.455, 5.228, 4.527]. Thus, in this paper, we use the MKM index instead of the MVR index to measure the feature importance.
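The following sketch generates a dataset like D1 and reproduces the effect described above: the MVR ranking flips after min-max normalization, while the MKM (computed with the mkm helper sketched earlier) does not. The mixture means are not stated for D1, so μ1 = (3, 3)^T and μ2 = (7, 7)^T are assumed here, borrowed from the definition of D2 in Section 4; the exact numbers will differ from the paper's because the sample is random.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# x2, x3: two-component Gaussian mixture (alpha1 = alpha2 = 0.5, Sigma1 = I, Sigma2 = 0.5*I);
# the means (3,3) and (7,7) are assumed, as in dataset D2.  x1 ~ U[0,4], x4 ~ U[0,2].
z = rng.integers(0, 2, size=n)
mu = np.array([[3.0, 3.0], [7.0, 7.0]])
var = np.array([1.0, 0.5])
x23 = mu[z] + rng.normal(size=(n, 2)) * np.sqrt(var[z])[:, None]
D1 = np.column_stack([rng.uniform(0, 4, n), x23, rng.uniform(0, 2, n)])

def mvr(X):  # mean-to-variance ratio, the index used by the FRFCM
    return X.mean(axis=0) / X.var(axis=0)

D1_norm = (D1 - D1.min(axis=0)) / (D1.max(axis=0) - D1.min(axis=0))
print(mvr(D1))       # roughly [1.5, 1.0, 1.1, 3.0]: features 1 and 4 look "important"
print(mvr(D1_norm))  # roughly [6, 9, 9, 6]: the ranking flips to features 2 and 3
# mkm(D1) and mkm(D1_norm) (see the previous sketch) both rank features 2 and 3 highest.
```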

B. The Formulation of Feature Reduction Fuzzy C-Means Clustering Leveraging the Marginal Kurtosis Measure

In this section, inspired by the work of Yang et al. [[16]], we propose a novel and robust feature reduction fuzzy clustering algorithm leveraging the marginal kurtosis measure, named the FRFCM-MKM, to mitigate the disadvantages of the FRFCM algorithm. In this method, the MKM index is used to measure the feature importance of a dataset, and each feature has an individual weight updating equation in each iteration. We develop a new threshold to determine which features are to be discarded: in this process, features whose weights fall below the threshold are removed from the dataset. Let X = {x_i | i = 1, 2, ..., n} be a dataset in d-dimensional space, and denote the feature weights as a weight vector w = [w_1, ..., w_d]^T. The objective function of the FRFCM-MKM algorithm to be minimized takes the following form:

(11) $J(U,V,w)=\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\sum_{j=1}^{d}\delta_{j}w_{j}(x_{ij}-v_{kj})^{2}+\tau\cdot n\cdot c\sum_{j=1}^{d}(w_{j}-\delta_{j})^{2}$

To obtain the optimal clustering results, the FRFCM-MKM is minimized subject to the following constraints:

(12) $\text{s.t.}\quad \sum_{k=1}^{c}u_{ik}=1,\qquad \sum_{j=1}^{d}w_{j}=1,\qquad w_{j}\ge 0$

where δj is computed by Equation (10). δj is the marginal kurtosis measure of X, and utilized to control parameter wj in the FRFCM-MKM algorithm. From the above, we see that the clustering problem of finding the optimal cluster assignments of objects is now formulated as a constrained optimization problem, i.e., to find the optimal values of U, V, and w subject to a set of constraints. Considering that U, V and w are continuous variables, we can use the method of Lagrange multipliers with the first order necessary condition to derive their solutions.

(13) $L=J+\sum_{i=1}^{n}\lambda_{i}\Big(1-\sum_{k=1}^{c}u_{ik}\Big)+\beta\Big(1-\sum_{j=1}^{d}w_{j}\Big)+\sum_{j=1}^{d}\gamma_{j}w_{j}$

(14) $\dfrac{\partial L}{\partial u_{ik}}=m\,u_{ik}^{m-1}\sum_{j=1}^{d}\delta_{j}w_{j}(x_{ij}-v_{kj})^{2}-\lambda_{i}=0$

(15) $\dfrac{\partial L}{\partial v_{kj}}=-2\,\delta_{j}w_{j}\sum_{i=1}^{n}u_{ik}^{m}(x_{ij}-v_{kj})=0$

(16) $\dfrac{\partial L}{\partial w_{j}}=\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+2\tau\cdot n\cdot c\,(w_{j}-\delta_{j})-\beta+\gamma_{j}=0$

(17) $\gamma_{j}\ge 0$

(18) $\gamma_{j}w_{j}=0$

According to Equation (14), the iteration formula of $u_{ik}$ can be obtained as follows:

(19) $u_{ik}=\Big[\sum_{j=1}^{d}\delta_{j}w_{j}(x_{ij}-v_{kj})^{2}\Big]^{\frac{1}{1-m}}\Big/\sum_{s=1}^{c}\Big[\sum_{j=1}^{d}\delta_{j}w_{j}(x_{ij}-v_{sj})^{2}\Big]^{\frac{1}{1-m}}$

From Equation (15), we obtain

(20) $v_{kj}=\sum_{i=1}^{n}u_{ik}^{m}x_{ij}\Big/\sum_{i=1}^{n}u_{ik}^{m}$

From Equation (16), it can be obtained that

(21) $w_{j}=\underbrace{\frac{1}{|I^{+}|}-\frac{1}{|I^{+}|}\sum_{s\in I^{+}}\delta_{s}+\delta_{j}}_{\text{I}}+\underbrace{\frac{1}{2\tau\cdot n\cdot c\cdot|I^{+}|}\sum_{s\in I^{+}}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{s}(x_{is}-v_{ks})^{2}-\frac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}}_{\text{II}}$

(22) $I^{-}=\{j:w_{j}=0\},\qquad I^{+}=\{j:w_{j}>0\}$

$I^{-}$ denotes the set of indexes of the zero-weight (redundant) features, while $I^{+}$ denotes the set of indexes of the positive-weight (important) features. $|I^{-}|$ and $|I^{+}|$ are the cardinalities of $I^{-}$ and $I^{+}$, respectively. The elements of $I^{-}$ and $I^{+}$ are computed by the same method as in [[23]], which is outlined as follows.

Procedure for determining $I^{-}$ and $I^{+}$

1. Initialize $I^{+}=\varnothing$, $I^{-}=\{1,2,\dots,d\}$, and set $s=0$;

2. $s\leftarrow s+1$, $I_{s}^{+}=I_{s-1}^{+}\cup\{p\}$, $I_{s}^{-}=I_{s-1}^{-}\setminus\{p\}$, where

$p=\arg\min_{j\in I_{s-1}^{-}}\Big\{\frac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\delta_{j}\Big\};$

3. Examine whether $w_{f}>0$ by computing Equation (21), where

$f=\arg\max_{j\in I_{s}^{+}}\Big\{\frac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\delta_{j}\Big\}.$

If yes, go to step 2; otherwise, set $I^{+}=I_{s-1}^{+}$, $I^{-}=I_{s-1}^{-}$ and terminate.
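For illustration, the following sketch (Python/NumPy; the function name is hypothetical) implements the procedure above for the weight subproblem of Equation (11). Because Equation (21) restricted to the support $I^{+}$ can be rewritten as $w_j=a_j+(1-\sum_{s\in I^{+}}a_s)/|I^{+}|$ with $a_j=\delta_j-\frac{1}{2\tau n c}\sum_i\sum_k u_{ik}^m\delta_j(x_{ij}-v_{kj})^2$, the procedure amounts to adding features in decreasing order of $a_j$ until the newest one would receive a non-positive weight.

```python
import numpy as np

def weights_by_feature_selection(S, delta, tau, n, c):
    """Weight update of Eq. (21) via the I+/I- procedure.
    S[j] = sum_i sum_k u_ik^m * delta_j * (x_ij - v_kj)^2 for feature j."""
    a = delta - S / (2.0 * tau * n * c)        # per-feature score; Eq. (21) is w_j = a_j + const
    order = np.argsort(-a)                     # step 2 always picks the largest remaining a_j
    I_plus = []
    for p in order:
        trial = I_plus + [p]                   # tentatively move p from I- to I+
        w_f = a[p] + (1.0 - a[trial].sum()) / len(trial)   # Eq. (21) for the worst member of I+
        if w_f > 0:                            # step 3: keep growing I+
            I_plus = trial
        else:                                  # otherwise roll back the last move and stop
            break
    w = np.zeros_like(delta)
    idx = np.array(I_plus)
    w[idx] = a[idx] + (1.0 - a[idx].sum()) / len(idx)
    return w
```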

Here we give the detailed derivation of the update equations of wj from the Lagrangian in Equation (13) and the Karush–Kuhn–Tucker conditions in Equations (16)–(18). Combining Equations (13) and (16) results in

(23) $w_{j}=\dfrac{\beta}{2\tau\cdot n\cdot c}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\gamma_{j}+\delta_{j}$

According to $\sum_{j=1}^{d}w_{j}=1$, it can be obtained that

(24) $\dfrac{d\,\beta}{2\tau\cdot n\cdot c}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{j=1}^{d}\gamma_{j}+\sum_{j=1}^{d}\delta_{j}=1$

$\beta=\dfrac{2\tau\cdot n\cdot c}{d}\Big(1-\sum_{j=1}^{d}\delta_{j}\Big)+\dfrac{1}{d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{1}{d}\sum_{j=1}^{d}\gamma_{j}$

By plugging Equation (24) into Equation (23), the closed-form solution for the feature weights is obtained as

(25) $w_{j}=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\gamma_{j}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\gamma_{j}+\delta_{j}$

Now we consider Equation (17) for two respective cases.

1) If γj = 0 for all j ∈ {1, ..., d}, Equation (25) becomes

$w_{j}=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\delta_{j}$

For each feature j, this set of solutions is valid only if

$\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\delta_{j}\ge 0$

Otherwise, we consider the second case.

2) If γj > 0 for at least one j, then according to Equation (18), wj = 0 for that j; therefore,

(26) $\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\gamma_{j}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\gamma_{j}+\delta_{j}=0$

From the above equation, we obtain that

(27) $\dfrac{1}{2\tau\cdot n\cdot c}\gamma_{j}=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\gamma_{j}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\delta_{j}$

By the constraint in Equation (12), obviously not all wj can be 0. Therefore, the set {j | j = 1, ..., d} is separated into two subsets denoted as I- and I+, where

(28) $I^{-}=\{j:w_{j}=0\},\qquad I^{+}=\{j:w_{j}>0\}\neq\varnothing$

Therefore, Equation (27) becomes

$\dfrac{1}{2\tau\cdot n\cdot c}\gamma_{j}=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\Big(\sum_{s\in I^{+}}\gamma_{s}+\sum_{s\in I^{-}}\gamma_{s}\Big)-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\delta_{j}$

$=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{|I^{-}|}{2\tau\cdot n\cdot c\cdot d}\gamma_{j}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\delta_{j}$

$\dfrac{|I^{+}|}{2\tau\cdot n\cdot c\cdot d}\gamma_{j}=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\delta_{j}$

From the above equation, it can be obtained that

(29) $\gamma_{j}=\dfrac{2\tau\cdot n\cdot c}{|I^{+}|}-\dfrac{2\tau\cdot n\cdot c}{|I^{+}|}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{|I^{+}|}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{d}{|I^{+}|}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{2\tau\cdot n\cdot c\cdot d}{|I^{+}|}\delta_{j}$

By replacing the γj in Equation (25) with Equation (29), after some simplification, we obtain the final form as in Equation (21).

How to choose the threshold is an important step of the proposed clustering algorithm. To develop a feature-reduction scheme for the FRFCM algorithm, Yang et al. [[16]] used $1/\sqrt{nd}$ as the threshold to determine which features are selected or discarded. Yet, $1/\sqrt{nd}$ decreases as n increases. Differently sized datasets generated from the same distribution have the same feature importance and should have similar feature weights in a feature-weighting clustering algorithm; consequently, their thresholds should be equal. However, the thresholds computed by $1/\sqrt{nd}$ range from 0.071 to 0.0071 (Table 1), even though the feature importance of the datasets is identical. Thus, $1/\sqrt{nd}$ is not a suitable threshold for some data in the feature reduction learning algorithm.

Table 1 Threshold for different sized datasets drawn from same distribution

| Size | n = 50 | n = 200 | n = 500 | n = 1000 | n = 5000 |
|---|---|---|---|---|---|
| $1/\sqrt{nd}$ | 0.071 | 0.050 | 0.022 | 0.0158 | 0.0071 |

Now, we look for a proper threshold to remove the features with small weights instead of using $1/\sqrt{nd}$. Since δj is a feature importance measure, we can construct a threshold from the information in the δj's. In Equation (11), if the δj's are normalized and τ → +∞, then wj ≈ δj when the FRFCM-MKM algorithm converges, which inspires us to use the δj's to construct the threshold. According to the properties of the Pythagorean means, the harmonic mean is the smallest of the three and tends to emphasize the impact of small δj's while minimizing the impact of large δj's. We therefore pick $d\big/\sum_{j=1}^{d}\frac{1}{\delta_{j}}$ as the threshold, since $d\big/\sum_{j=1}^{d}\frac{1}{\delta_{j}}\le\sqrt[d]{\prod_{j=1}^{d}\delta_{j}}\le\frac{1}{d}\sum_{j=1}^{d}\delta_{j}=\frac{1}{d}$ for normalized δj's. The proposed FRFCM-MKM algorithm can be summarized as follows.
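As a small numerical illustration (using the iris MKM values quoted in Section 3.A, and comparing the normalized δj's directly rather than the converged weights, which is only exact in the large-τ limit), the harmonic-mean threshold drops the 1st and 2nd features, consistent with the two retained features reported for iris in Table 15:

```python
import numpy as np

delta = np.array([0.834, 0.666, 1.282, 1.222])   # iris MKM values from Section 3.A
delta = delta / delta.sum()                      # normalize so the deltas sum to 1
threshold = len(delta) / (1.0 / delta).sum()     # harmonic mean, ~0.233 (< 1/d = 0.25)
print(threshold)
print(delta <= threshold)                        # [ True  True False False ]: drop features 1 and 2
```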

C. FRFCM-MKM Algorithm

Initialization: Fix ε > 0, give the number of clusters c, randomly initialize the cluster centers V(0), randomly initialize the feature weights w(0), and set t = 0.
Step 1: Compute the δj from the dataset X by Equation (10).
Step 2: Calculate the membership matrix U(t) using δj, V(t-1) and w(t-1) by Equation (19).
Step 3: Update the cluster centers V(t) using U(t) by Equation (20).
Step 4: Update w(t) using δj, U(t) and V(t) by Equation (21), and normalize w(t).
Step 5: Remove the r feature components j for which $w_{j}^{(t)}\le d\big/\sum_{j=1}^{d}\frac{1}{\delta_{j}}$, and set d(new) = d - r.
Step 6: Compute w(t) by Equation (21).
Step 7: If ∥U(t) - U(t-1)∥ < ε, then quit; else set t = t + 1, d = d(new) and go back to Step 1.
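Below is a compact sketch of this loop in Python/NumPy (the paper's experiments use MATLAB). Because the weight subproblem of Equation (11) under the constraints of Equation (12) is, up to the KKT bookkeeping of the I+/I- procedure, the Euclidean projection of $\delta_j-\frac{1}{2\tau nc}\sum_i\sum_k u_{ik}^m\delta_j(x_{ij}-v_{kj})^2$ onto the probability simplex, a standard sorting-based projection is used here as a stand-in for Steps 4 and 6. The value τ = 100 is an arbitrary placeholder rather than a recommendation from the paper, and the function names are hypothetical.

```python
import numpy as np

def mkm_index(X):
    """MKM of each feature (Eq. 10), normalized so the deltas sum to 1."""
    Xc = X - X.mean(axis=0)
    kurt = (Xc**4).mean(axis=0) / (Xc**2).mean(axis=0)**2
    delta = 1.0 / np.sqrt(kurt - 1.0)
    return delta / delta.sum()

def simplex_projection(a):
    """Euclidean projection of a onto {w : w >= 0, sum(w) = 1} (what Eq. (21) computes)."""
    u = np.sort(a)[::-1]
    cum = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - cum) / np.arange(1, a.size + 1) > 0)[0][-1]
    return np.maximum(a + (1.0 - cum[rho]) / (rho + 1), 0.0)

def frfcm_mkm(X, c, m=2.0, tau=100.0, eps=1e-5, max_iter=300, seed=0):
    rng = np.random.default_rng(seed)
    keep = np.arange(X.shape[1])                     # indices of the retained features
    V = X[rng.choice(len(X), c, replace=False)]      # random initial centers
    w = rng.random(X.shape[1]); w /= w.sum()         # random initial feature weights
    U_old = None
    for _ in range(max_iter):
        n, d = X.shape
        delta = mkm_index(X)                                             # Step 1
        dist = np.maximum(((X[:, None, :] - V)**2 * (delta * w)).sum(2), 1e-12)
        U = dist**(1.0 / (1.0 - m)); U /= U.sum(1, keepdims=True)        # Step 2 (Eq. 19)
        Um = U**m
        V = (Um.T @ X) / Um.sum(0)[:, None]                              # Step 3 (Eq. 20)
        S = np.einsum('ik,ikj->j', Um, (X[:, None, :] - V)**2) * delta
        w = simplex_projection(delta - S / (2.0 * tau * n * c))          # Step 4 (Eq. 21)
        retain = w > d / (1.0 / delta).sum()                             # Step 5: harmonic-mean threshold
        if 0 < retain.sum() < d:
            X, V, w, keep = X[:, retain], V[:, retain], w[retain], keep[retain]
            w /= w.sum()                                                 # Step 6: renormalize on I+
        if U_old is not None and np.abs(U - U_old).max() < eps:          # Step 7
            break
        U_old = U
    return U, V, w, keep
```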

Next, we provide a detailed proof of the convergence theorems for the FRFCM-MKM algorithm.

D. Convergence Theorems for FRFCM-MKM Algorithm

Theorem 1. If V and w are fixed, then U is a strict local minimum of J (U, V, w) if and only if U is calculated by (19).

Theorem 2. If U and w are fixed, then V is a strict local minimum of J (U, V, w) if and only if V is calculated by (20).

Theorem 3. If U and V are fixed, then w is a strict local minimum of J (U, V, w) if and only if w is calculated by (21).

Referring to [[24], [26]], Theorems 1 and 2 can be proved easily. Because the main difference between the FCM and the FRFCM-MKM lies in the feature weights, here we only give a detailed proof of Theorem 3.

Proof. We first prove the necessary condition for a minimum of J(U, V, w) by using the Lagrange multiplier method to transform the constrained minimization problem into an unconstrained one. We take the Lagrangian function as follows.

(30) $L_{1}(w,\beta)=\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\sum_{j=1}^{d}\delta_{j}w_{j}(x_{ij}-v_{kj})^{2}+\tau\cdot n\cdot c\sum_{j=1}^{d}(w_{j}-\delta_{j})^{2}+\beta\Big(1-\sum_{j=1}^{d}w_{j}\Big)$

where β is the Lagrange multiplier. By setting the gradients of L1 with respect to wj and β to zero, it can be obtained that:

(31) $\dfrac{\partial L_{1}}{\partial w_{j}}=\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+2\tau\cdot n\cdot c\,(w_{j}-\delta_{j})-\beta=0$

(32) $\dfrac{\partial L_{1}}{\partial\beta}=\sum_{j=1}^{d}w_{j}-1=0$

Then, following the same steps as in Equations (23)-(25), we have

(33) $w_{j}=\dfrac{1}{d}-\dfrac{1}{d}\sum_{j=1}^{d}\delta_{j}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+\dfrac{1}{2\tau\cdot n\cdot c\cdot d}\sum_{j=1}^{d}\gamma_{j}-\dfrac{1}{2\tau\cdot n\cdot c}\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}-\dfrac{1}{2\tau\cdot n\cdot c}\gamma_{j}+\delta_{j}$

The necessary condition is proved.

Secondly, we prove the sufficient condition for a minimum of J(U, V, w). To analyze the sufficiency of Theorem 3, we check the Hessian matrix of the Lagrangian function L1(w, β), denoted as HL1(w, β). Because

(34) $\dfrac{\partial L_{1}}{\partial w_{j}}=\sum_{i=1}^{n}\sum_{k=1}^{c}u_{ik}^{m}\delta_{j}(x_{ij}-v_{kj})^{2}+2\tau\cdot n\cdot c\,(w_{j}-\delta_{j})-\beta=0,\qquad \dfrac{\partial^{2}L_{1}}{\partial w_{j}\partial w_{l}}=2\tau\cdot n\cdot c\,\kappa_{jl}$

where $\kappa_{jl}$ is the Kronecker index with $\kappa_{jl}=\begin{cases}1,&\text{if } j=l\\0,&\text{if } j\neq l\end{cases}$, and

$\dfrac{\partial^{2}L_{1}}{\partial w_{l}\partial\beta}=\dfrac{\partial^{2}L_{1}}{\partial\beta\partial w_{l}}=-1.$

Thus, the bordered Hessian matrix with regard to w and β is

(35) $H_{L_{1}}(w,\beta)=\begin{bmatrix}2\tau\cdot n\cdot c&0&\cdots&0\\0&2\tau\cdot n\cdot c&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&2\tau\cdot n\cdot c\end{bmatrix}$

Since τ > 0 and n · c > 0, HL1(w, β) is a diagonal matrix with positive diagonal entries, so HL1(w, β) is positive definite. Hence J(U, V, w) has a minimum point, and Equation (25) is a sufficient condition for w to be a local minimum of J(U, V, w).

The case with feature reduction can be proved in the same way. Thus Theorem 3 is verified. □

4 Experimental study

In this section, to evaluate the performance of the proposed FRFCM-MKM algorithm, the WKM, EWKM, WFCM, SCAD2, ESSC and the FRFCM are chosen for comparison. A series of experiments is performed with various datasets. In most cases, running a clustering algorithm on raw data does not work well. Therefore, we normalize each feature $f_{j}=[x_{1j},x_{2j},\dots,x_{nj}]^{T}$ by subtracting its mean and dividing it by its range, as shown below:

$y_{j}=\dfrac{f_{j}-\frac{f_{j}^{T}\mathbf{1}}{\mathbf{1}^{T}\mathbf{1}}\mathbf{1}}{\max(f_{j})-\min(f_{j})}$

where $\mathbf{1}$ is the all-ones vector.
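A one-line sketch of this per-feature normalization (Python/NumPy rather than the MATLAB used in the paper):

```python
import numpy as np

def normalize(X):
    """Subtract each feature's mean and divide by its range (the y_j above)."""
    return (X - X.mean(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```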

We set the threshold value ε = 10^{-5} and the maximum number of iterations to t = 300 in all the algorithms. The fuzziness parameter m in all fuzzy-type clustering algorithms and in our algorithm is set to 2. The experiments are performed on a PC with an Intel Core i5-4590 and 4 GB of memory. All code is written in the MATLAB computing environment.

A. Performance Metrics

Three performance metrics including the accuracy (AC) [[27]], the normalized mutual information (NMI) [[28]], and the adjusted Rand index (ARI) [[29]] are used to evaluate the performance of clustering algorithms. They are averaged over thirty different runs with random initialization of the feature weights and with random initializations of the centroids using the initialization strategy in reference [[30]].

Accuracy: We evaluate the clustering results by comparing the labels obtained by the clustering algorithms with the provided ground truths. The AC is defined as $AC=\frac{1}{n}\sum_{k=1}^{c}d_{k}$, where dk is the number of data points that are correctly clustered in cluster k and n is the number of data points in the dataset. A larger AC means better performance of the clustering algorithm.
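Computing the AC requires matching the predicted cluster labels to the ground-truth classes; a common way to do this (an assumption here, since the paper does not spell out the matching step) is the Hungarian algorithm, available as scipy.optimize.linear_sum_assignment:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """AC = (1/n) * sum_k d_k after matching clusters to classes so that
    the number of correctly clustered points is maximized."""
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    agree = np.zeros((len(clusters), len(classes)), dtype=int)
    for a, k in enumerate(clusters):
        for b, cl in enumerate(classes):
            agree[a, b] = np.sum((y_pred == k) & (y_true == cl))
    row, col = linear_sum_assignment(-agree)      # maximize total agreement
    return agree[row, col].sum() / len(y_true)
```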

ARI: The Rand index (RI) proposed by Rand [[31]] measures the agreement between two crisp partitions of a set of data points. The ARI is the adjusted form of the Rand index. The ARI has a maximum value of 1, and its expected value is 0 in the case of random clusters. A larger ARI implies more agreement between two partitions, and, hence, the greater effectiveness of the algorithm since the two partitions are produced by a clustering algorithm and human experts, respectively. The adjusted Rand Index is computed as follows:

(36) $ARI=\dfrac{\sum_{ij}\binom{n_{ij}}{2}-\Big[\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}\Big]\Big/\binom{n}{2}}{\frac{1}{2}\Big[\sum_{i}\binom{a_{i}}{2}+\sum_{j}\binom{b_{j}}{2}\Big]-\Big[\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}\Big]\Big/\binom{n}{2}}$

where n is the total number of data points, c is the number of clusters, $n_{ij}=|S_{i}\cap S_{j}|$, $a_{i}=\sum_{j=1}^{c}|S_{i}\cap S_{j}|$ and $b_{j}=\sum_{i=1}^{c}|S_{i}\cap S_{j}|$.

NMI: The NMI [[28]] index is another popular index that is used to evaluate clustering results. This index measures the agreement of the clustering results produced by an algorithm and the ground truth. If we refer to a class as the ground truth and a cluster as the results of a clustering algorithm, the NMI is calculated as follows:

(37) $NMI=\dfrac{\sum_{ij}n_{ij}\log\Big(\dfrac{n\cdot n_{ij}}{a_{i}\cdot b_{j}}\Big)}{\sqrt{\Big(\sum_{i}a_{i}\log\dfrac{a_{i}}{n}\Big)\Big(\sum_{j}b_{j}\log\dfrac{b_{j}}{n}\Big)}}$

where n is the total number of data points, c is the number of clusters, $n_{ij}=|S_{i}\cap S_{j}|$, $a_{i}=\sum_{j=1}^{c}|S_{i}\cap S_{j}|$ and $b_{j}=\sum_{i=1}^{c}|S_{i}\cap S_{j}|$.
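As a convenience (and an assumption, since the paper implements its own metrics in MATLAB), the ARI of Equation (36) and the geometric-mean NMI of Equation (37) agree with the scikit-learn implementations:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([1, 1, 1, 0, 0, 2, 2, 2, 2])
print(adjusted_rand_score(y_true, y_pred))                                        # Eq. (36)
print(normalized_mutual_info_score(y_true, y_pred, average_method="geometric"))  # Eq. (37)
```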

B. Experiment

Experiment 1: the feature weight representation ability

In this experiment, we generate the same dataset, denoted by D2, as that used in Example 1 of reference [[16]]. This dataset has 400 data points, with components x2 and x3 generated from the Gaussian mixture model $\sum_{k=1}^{2}\alpha_{k}N(\mu_{k},\Sigma_{k})$, where $\alpha_{1}=\alpha_{2}=0.5$, $\mu_{1}=(3,3)^{T}$, $\mu_{2}=(7,7)^{T}$, $\Sigma_{1}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ and $\Sigma_{2}=\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix}$. Components x1 and x4 follow uniform distributions over the intervals [0, 10] and [0, 12], respectively.

To compare the feature weight representation abilities of the WKM, FRFCM and FRFCM-MKM, we apply the WKM, FRFCM and FRFCM-MKM algorithms to dataset D2 thirty times to examine whether the algorithms assign proper weights to features. Since the prototype-based clustering algorithms are sensitive to the initialization, to reduce the sensitivity of the initialization for clustering algorithms, we first choose five sets of centers V1, V2, V3, V4 and V5 from D2 as the initial cluster centers by using the initialization strategy in the k-means++ [[30]] and then implement the WKM, FRFCM and FRFCM-MKM with different initial feature weights. To save space, though the algorithms are conducted thirty times, we merely plot the bar graphs of the first ten runs to study the feature assignment problem of the three algorithms. For the WKM, we plot the bar graphs of the final weights, but for the FRFCM and FRFCM-MKM, the bar graphs are for the first iterations since the weights in the first iterations play a vital role in the feature reduction process. Each color of the bar graphs represents the weight assignment for a run of the algorithm. As shown in the bar graphs (Table 2), for the five sets of fixed initial clusters, the WKM and FRFCM do not properly assign weights to features using random initial weights because different initial weights may lead to different weight assignments. Especially, the FRFCM algorithm may assign small weights to important features (i.e., the 2nd and 3rd features) but large weights to unimportant features (i.e., the 1st and 4th features). Thus, in the feature reduction process, the 2nd and 3rd features may be discarded. The case of D2 being normalized is given in Table 3. V˜1 to V˜5 are five sets of fixed initial cluster centers. The bar graphs show that the weights produced by the FRFCM are less sensitive to different initial feature weights when the data are normalized. However, the FRFCM still does not properly assign weights to features since the 2nd and 3rd features obtain small weights. However, our method is always capable of properly assigning weights to features whether the data are normalized or not, and the 2nd and 3rd features always obtain large weights.

Table 2 Weight assignments of the WKM, FRFCM and FRFCM-MKM for unnormalized D2 with fixed cluster centers

Table 3 Weight assignments of the WKM, FRFCM and FRFCM-MKM for normalized D2 with fixed initial weights

We next run the WKM, FRFCM and FRFCM-MKM on the unnormalized and normalized D2 datasets with equal initial feature weights w = [1/4, 1/4, 1/4, 1/4]^T and with random initial cluster centers. The bar graphs of the weight assignments for these algorithms are shown in Tables 4 and 5, which demonstrate that the FRFCM-MKM represents the feature importance well. When the initial weights are fixed and the centers are randomly initialized, neither the WKM nor the FRFCM represents the feature importance well.

Table 4 Weight assignments of the WKM, FRFCM and FRFCM-MKM for unnormalized D2 when the initial feature weights are fixed as w = [1/4, 1/4, 1/4, 1/4] T

Table 5 Weight assignments of the WKM, FRFCM and FRFCM-MKM for normalized D2 when the initial feature weights are fixed as w = [1/4, 1/4, 1/4, 1/4] T

The reason that the FRFCM-MKM represents the features well is that the second term in the objective function is used to control the value of wj s; and the larger the value of τ is, the closer the value of wj is to that of δj, and the more robust the algorithm. Fig. 6 shows the weight assignments of the FRFCM-MKM under different parameters. As τ increases, the feature weights remain the same for each run, which indicates the robustness of the proposed method.

Now we compare the clustering results achieved by the WKM, EWKM, WFCM, SCAD2, ESSC, FRFCM and FRFCM-MKM algorithms. Table 8 lists their clustering results for unnormalized and normalized D2. The WKM and EWKM produce unsuitable feature weights, and their clustering results are worse. The SCAD2 and ESSC produce suitable feature weights on normalized D2 (Table 7), and their clustering results are desirable. The FRFCM does not produce suitable feature weights for dataset D2, and the important features are discarded from the data at the first iteration, which leads to an incorrect clustering result. When D2 is normalized, the FRFCM assigns comparatively large weights to all four features (Table 3). Meanwhile, the threshold calculated by $1/\sqrt{nd}$ is 0.025, and this value is used to determine which features will be retained or removed. However, this threshold is so small that the FRFCM fails to remove any feature in the clustering process. In this case, the FRFCM algorithm clusters the data well in the full space, but its feature reduction mechanism fails. The proposed algorithm outperforms the other clustering methods on dataset D2 with AC = 0.998, NMI = 0.977 and ARI = 0.990, and the main reason is that the FRFCM-MKM represents the feature weights properly. In the feature reduction learning process, the 1st and 4th features are discarded, so the FRFCM-MKM is actually implemented in a 2-D subspace of D2, as shown in Fig. 2(a); therefore, the desired result can be obtained. Moreover, the weights obtained by the FRFCM-MKM always represent the importance of the features well regardless of whether the data are normalized, because the weight update equation is dominated by part I (see Equation (21)), and part II is merely used to adjust the weight values. This characteristic makes the FRFCM-MKM produce almost equal weights for normalized and unnormalized data, and it makes the feature weights robust to the initialization. This experiment demonstrates that the FRFCM-MKM can properly assign the weights to the features, and it also shows that the weights produced by the FRFCM-MKM are robust to the initialization.

Graph: Fig. 2 (a) Two component data points generated from a Gaussian mixture model; (b)The Gaussian mixture model data points by adding component x3.

Graph: Fig. 3 (a) δMVR value of D1 (unnormalized); (b) δMVR value of D1 (normalized).

Graph: Fig. 4 Distribution of the different features of dataset D1. (a) 1st feature. (b) 2nd feature. (c) 3rd feature. (d) 4th feature.

Graph: Fig. 5 Distribution of the different features of the iris dataset. (a) 1st feature. (b) 2nd feature. (c) 3rd feature. (d) 4th feature.

Table 6 Weight assignments of the WKM, FRFCM and FRFCM-MKM for unnormalized D2 with the centers and weights randomly initialized

Table 7 The final weight assignments of the SCAD2 and ESSC on normalized D2

| Method | 1st feature | 2nd feature | 3rd feature | 4th feature |
|---|---|---|---|---|
| SCAD2 | 0.246 | 0.253 | 0.265 | 0.235 |
|  | 0.233 | 0.264 | 0.275 | 0.227 |
| ESSC | 0.198 | 0.321 | 0.329 | 0.152 |
|  | 0.133 | 0.367 | 0.380 | 0.120 |

Table 8 Comparing the clustering results achieved by seven algorithms on dataset D2

| Dataset | Performance metric | WKM | EWKM | WFCM | SCAD2 | ESSC | FRFCM | FRFCM-MKM |
|---|---|---|---|---|---|---|---|---|
| D2 (unnormalized) | AC | 0.834±0.221 | 0.836±0.209 | 0.988±0.067 | 0.935±0.000 | 0.910±0.098 | 0.822±0.218 | 0.998±0.000 |
|  | NMI | 0.625±0.466 | 0.597±0.436 | 0.958±0.138 | 0.660±0.000 | 0.664±0.319 | 0.619±0.479 | 0.977±0.000 |
|  | ARI | 0.635±0.473 | 0.620±0.450 | 0.970±0.140 | 0.756±0.000 | 0.714±0.303 | 0.627±0.485 | 0.990±0.000 |
| D2 (normalized) | AC | 0.839±0.213 | 0.864±0.199 | 0.932±0.165 | 0.998±0.000 | 0.998±0.000 | 0.998±0.000 | 0.998±0.000 |
|  | NMI | 0.621±0.459 | 0.712±0.431 | 0.841±0.342 | 0.977±0.000 | 0.977±0.000 | 0.977±0.000 | 0.977±0.000 |
|  | ARI | 0.635±0.467 | 0.683±0.452 | 0.852±0.347 | 0.990±0.000 | 0.990±0.000 | 0.990±0.000 | 0.990±0.000 |

Experiment 2: the effect of suppressing noisy/redundant feature(s)

In this experiment, we add h (h = 4, 8, 12) noisy features to the 4 real features of the iris dataset. The noisy features are Gaussian noise: we let them follow a Gaussian distribution, i.e., $\tilde{x}_{j}\sim N(0,h)$, where j = 1, ..., h, h = 4, 8, 12, and we normalize these datasets. Now we compare the noisy-feature suppression abilities of the WKM, EWKM, WFCM, SCAD2, ESSC, FRFCM and FRFCM-MKM on the noisy iris dataset. Because the feature weights produced by the compared algorithms are related to the dispersion of the features, the weights obtained for the noisy features are much larger. The FRFCM-MKM algorithm, in contrast, produces close-to-zero weights for the noisy features, because its weight update equation is mainly controlled by part I (see Equation (21)). Fig. 7(a)-(c) demonstrate that the WKM is able to produce large weights for important features and small weights for unimportant and noisy features, but the weights produced by the WKM are sensitive to the initialization; the WKM may produce large weights for noisy features when the initialization is improper. The FRFCM can assign roughly proper feature weights for the noisy iris dataset, but it decreases the weights of the important features and assigns comparatively large weights to the noisy features (Fig. 7(d)-(f)). Moreover, the thresholds computed by $1/\sqrt{nd}$ for the noisy iris datasets are 0.029, 0.024 and 0.020, respectively. These thresholds are so small that the FRFCM is not capable of removing any unimportant or noisy features from the dataset; hence, the FRFCM cannot suppress the noisy features well. The FRFCM-MKM algorithm suppresses the noisy features well: it produces large weights for the 3rd and 4th features since these are important features, small weights for the 1st and 2nd features, and close-to-zero weights for the noisy features (Fig. 7(g)-(i)). Thus, unimportant and noisy features can be identified and discarded. It is worth noting that our proposed method performs surprisingly well on the iris dataset with noisy features: it achieves the same results as on the original iris, with AC = 0.967, NMI = 0.885 and ARI = 0.904. This indicates that the FRFCM-MKM can suppress noisy features well and that the noisy features have no negative effect on our method. Compared with the FRFCM, the improvements of the FRFCM-MKM on the iris dataset with 4 and 8 noisy features are 5.11% for AC, 10.04% for NMI and 15.01% for ARI. Because the EWKM, SCAD2 and ESSC are unable to identify the noisy features and assign large weights to them (Fig. 8), their clustering performance is rather poor. The clustering results of the seven algorithms are listed in Table 9.
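A short sketch of how such a noisy variant of iris can be constructed (interpreting h as the variance of the added Gaussian features, as written above; the dataset loader is an assumption, since the paper does not state one):

```python
import numpy as np
from sklearn.datasets import load_iris

def add_noise_features(X, h, seed=0):
    """Append h Gaussian noise features ~ N(0, h) and normalize (mean removed, divided by range)."""
    rng = np.random.default_rng(seed)
    Xn = np.hstack([X, rng.normal(0.0, np.sqrt(h), size=(X.shape[0], h))])
    return (Xn - Xn.mean(axis=0)) / (Xn.max(axis=0) - Xn.min(axis=0))

X = load_iris().data
noisy_iris_4 = add_noise_features(X, 4)    # 4 real + 4 noisy features
noisy_iris_12 = add_noise_features(X, 12)  # 4 real + 12 noisy features
```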

Graph: Fig. 6 The weight assignments of the FRFCM-MKM under different parameters.

Graph: Fig. 7 (a)∼(c)Weight assignments of the WKM with 4, 8, and 12 noisy features added to iris, respectively. (d)∼(f) Weight assignments of the FRFCM with 4, 8, and 12 noisy features added to iris, respectively. (g)∼(i) Weight assignments of the FRFCM-MKM with 4, 8, and 12 noisy features added to iris, respectively.

Graph: Fig. 8 (a)∼(c) Weight assignments of the EWKM with 4, 8, and 12 noisy features added to iris, respectively. (d)∼(f) Weight assignments of SCAD2 with 4, 8, and 12 noisy features added to iris, respectively. (g)∼(i) Weight assignments of ESSC with 4, 8, and 12 noisy features added to iris, respectively.

Table 9 Comparing the clustering results achieved by the seven algorithms on the Iris data and noisy Iris data

| Dataset | Performance metric | WKM | EWKM | WFCM | SCAD2 | ESSC | FRFCM | FRFCM-MKM |
|---|---|---|---|---|---|---|---|---|
| 4 features | AC | 0.855±0.184 | 0.854±0.105 | 0.920±0.115 | 0.702±0.000 | 0.893±0.000 | 0.900±0.000 | 0.967±0.000 |
|  | NMI | 0.807±0.111 | 0.728±0.044 | 0.836±0.072 | 0.723±0.000 | 0.743±0.000 | 0.766±0.000 | 0.885±0.000 |
|  | ARI | 0.774±0.201 | 0.692±0.095 | 0.833±0.122 | 0.880±0.000 | 0.729±0.000 | 0.744±0.000 | 0.904±0.000 |
| +4 noise | AC | 0.884±0.165 | 0.807±0.148 | 0.953±0.000 | 0.893±0.000 | 0.913±0.000 | 0.920±0.000 | 0.967±0.000 |
|  | NMI | 0.809±0.144 | 0.711±0.064 | 0.850±0.000 | 0.739±0.000 | 0.767±0.000 | 0.777±0.000 | 0.885±0.000 |
|  | ARI | 0.797±0.199 | 0.655±0.133 | 0.868±0.000 | 0.728±0.000 | 0.771±0.000 | 0.786±0.000 | 0.904±0.000 |
| +8 noise | AC | 0.855±0.184 | 0.854±0.105 | 0.920±0.115 | 0.880±0.000 | 0.893±0.000 | 0.900±0.000 | 0.967±0.000 |
|  | NMI | 0.807±0.111 | 0.728±0.044 | 0.836±0.072 | 0.723±0.000 | 0.743±0.000 | 0.766±0.000 | 0.885±0.000 |
|  | ARI | 0.774±0.201 | 0.692±0.095 | 0.833±0.122 | 0.702±0.000 | 0.729±0.000 | 0.744±0.000 | 0.904±0.000 |
| +12 noise | AC | 0.881±0.178 | 0.795±0.101 | 0.933±0.000 | 0.833±0.000 | 0.887±0.000 | 0.893±0.000 | 0.967±0.000 |
|  | NMI | 0.789±0.210 | 0.664±0.052 | 0.823±0.000 | 0.665±0.000 | 0.705±0.000 | 0.723±0.000 | 0.885±0.000 |
|  | ARI | 0.781±0.240 | 0.604±0.091 | 0.818±0.000 | 0.619±0.000 | 0.711±0.000 | 0.726±0.000 | 0.904±0.000 |

Experiment 3: clustering results on synthetic datasets.

Now we perform clustering on six high-dimensional synthetic datasets to demonstrate the effectiveness of the proposed algorithm. Each dataset contains 1024 samples and 16 clusters, and the datasets used in this experiment can be downloaded from the webpage of the Speech and Image Processing Unit [[32]]. The detailed information on these six high dimensional synthetic datasets is summarized in Table 10.

Table 10 Brief information for the high dimensional synthetic datasets

| Dataset | #Instances | #Features | #Clusters | #Objects in each cluster |
|---|---|---|---|---|
| Dim032 | 1024 | 32 | 16 | 64 |
| Dim064 | 1024 | 64 | 16 | 64 |
| Dim128 | 1024 | 128 | 16 | 64 |
| Dim256 | 1024 | 256 | 16 | 64 |
| Dim512 | 1024 | 512 | 16 | 64 |
| Dim1024 | 1024 | 1024 | 16 | 64 |

Table 11 shows the clustering results of the seven different methods on these datasets, whose dimensionality ranges from 32 to 1,024. The clustering algorithms are applied to the six synthetic datasets Dim032 to Dim1024, which represent datasets of varying, higher dimensionality. Compared with the other approaches, the FRFCM-MKM algorithm achieves excellent clustering results in terms of AC, NMI and ARI on Dim256-Dim1024. Moreover, as shown in Table 12, the run-times of the FRFCM-MKM algorithm on these six datasets are the lowest among all the algorithms; its run-time ranges from 0.004 to 0.006 seconds on Dim256-Dim1024. The average run-times of the WKM, EWKM, FCM, WFCM, SCAD2, ESSC, FRFCM and FRFCM-MKM on Dim032 to Dim1024 are 0.159, 0.421, 0.068, 4.136, 0.832, 0.842, 0.058 and 0.005 seconds, respectively; the FRFCM-MKM is about 10 times faster than the FRFCM on average. The SCAD2, ESSC and FRFCM methods also achieve desirable results; however, their run-times are relatively high. Although the FRFCM uses a feature reduction scheme, when these data are normalized, the FRFCM fails to discard any unimportant features from Dim032-Dim256. The FRFCM removes 40 unimportant features and retains 472 features for Dim512, and it removes 523 unimportant features from Dim1024 and retains 501 features. The FRFCM-MKM, in contrast, retains 5, 5, 4, 6, 5 and 10 features for Dim032-Dim1024, respectively, and it only utilizes a small number of features in the clustering process (Table 12).

Table 11 Clustering performance on dim032-dim1024 using the WKM, EWKM, FCM, WFCM, SCAD2, ESSC, FRFCM and FRFCM-MKM

| Dataset | Performance metric | WKM | EWKM | WFCM | SCAD2 | ESSC | FRFCM | FRFCM-MKM |
|---|---|---|---|---|---|---|---|---|
| Dim032 | AC | 0.682±0.083 | 0.933±0.043 | 0.994±0.021 | 0.995±0.017 | 0.995±0.019 | 0.995±0.017 | 0.981±0.035 |
|  | NMI | 0.905±0.031 | 0.982±0.013 | 0.998±0.006 | 0.999±0.005 | 0.999±0.005 | 0.999±0.005 | 0.995±0.010 |
|  | ARI | 0.681±0.093 | 0.932±0.046 | 0.995±0.020 | 0.995±0.018 | 0.995±0.019 | 0.995±0.018 | 0.982±0.034 |
| Dim064 | AC | 0.640±0.083 | 0.938±0.035 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.986±0.032 |
|  | NMI | 0.890±0.032 | 0.983±0.010 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.996±0.009 |
|  | ARI | 0.639±0.100 | 0.938±0.034 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.987±0.030 |
| Dim128 | AC | 0.654±0.082 | 0.948±0.047 | 0.302±0.081 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.987±0.031 |
|  | NMI | 0.895±0.031 | 0.986±0.013 | 0.683±0.077 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.996±0.008 |
|  | ARI | 0.649±0.092 | 0.946±0.050 | 0.270±0.091 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 0.987±0.029 |
| Dim256 | AC | 0.606±0.101 | 0.985±0.028 | 0.263±0.072 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
|  | NMI | 0.876±0.041 | 0.996±0.007 | 0.657±0.063 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
|  | ARI | 0.605±0.108 | 0.985±0.028 | 0.237±0.070 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
| Dim512 | AC | 0.613±0.066 | 0.977±0.031 | 0.344±0.072 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
|  | NMI | 0.883±0.026 | 0.994±0.008 | 0.720±0.057 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
|  | ARI | 0.623±0.066 | 0.976±0.032 | 0.305±0.071 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
| Dim1024 | AC | 0.612±0.095 | 0.989±0.025 | 0.460±0.076 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
|  | NMI | 0.875±0.044 | 0.997±0.007 | 0.809±0.045 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
|  | ARI | 0.598±0.119 | 0.989±0.025 | 0.457±0.099 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |

Table 12 Numbers of original and final features obtained from the FRFCM and the FRFCM-MKM and the total run-times (in seconds) for the synthetic datasets for each algorithm

| Datasets | Original dimension | Final dimension by FRFCM | Final dimension by FRFCM-MKM | WKM (s) | EWKM (s) | FCM (s) | WFCM (s) | SCAD2 (s) | ESSC (s) | FRFCM (s) | FRFCM-MKM (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Dim032 | 32 | 32 | 5 | 0.012 | 0.029 | 0.006 | 0.138 | 0.140 | 0.130 | 0.010 | 0.004 |
| Dim064 | 64 | 64 | 5 | 0.021 | 0.064 | 0.013 | 0.317 | 0.154 | 0.123 | 0.011 | 0.004 |
| Dim128 | 128 | 128 | 4 | 0.059 | 0.186 | 0.028 | 2.514 | 0.380 | 0.415 | 0.031 | 0.005 |
| Dim256 | 256 | 256 | 6 | 0.123 | 0.295 | 0.050 | 3.254 | 0.564 | 0.571 | 0.063 | 0.004 |
| Dim512 | 512 | 472 | 5 | 0.232 | 0.618 | 0.114 | 6.409 | 1.302 | 1.205 | 0.109 | 0.004 |
| Dim1024 | 1024 | 501 | 10 | 0.508 | 1.334 | 0.195 | 12.183 | 2.452 | 2.608 | 0.122 | 0.006 |

Experiment 4: clustering results on real datasets

In this experiment, we investigate the performance of the proposed algorithm on several real-world datasets, whose properties are summarized in Table 13. We compare the AC, NMI and ARI of the FRFCM-MKM with those of the WKM, EWKM, WFCM, SCAD2, ESSC and FRFCM. The comparisons use different initial cluster centers with different initial feature weights.

Table 13 Summary of some real world Datasets

| Dataset | #Instances | #Features | #Clusters |
|---|---|---|---|
| iris | 150 | 4 | 3 |
| wdbc_all | 569 | 30 | 2 |
| wpbc | 198 | 32 | 2 |
| sonar | 208 | 60 | 2 |
| glass | 214 | 9 | 6 |
| Breast_cancel | 699 | 8 | 2 |
| Movement_libras | 360 | 90 | 15 |
| ionosphere | 351 | 33 | 2 |
| bupa | 345 | 6 | 2 |
| colon | 62 | 2000 | 2 |
| prostate_GE | 102 | 5966 | 2 |
| SMK_CAN_1987 | 187 | 19993 | 2 |
| ORL | 400 | 2576 | 40 |

Table 14 shows the clustering results of the seven clustering algorithms on the thirteen real datasets. Compared with the other approaches, the FRFCM-MKM achieves excellent clustering results on nine out of the thirteen datasets in terms of AC, NMI and ARI. The improvement of the FRFCM-MKM over the FRFCM is significant both in the performance metrics and in the feature reduction. On easily clustered datasets such as iris, wdbc_all and breast_cancer, the FRFCM-MKM is superior to the other six algorithms. The differences between the FRFCM-MKM and the FRFCM on the iris dataset are 6.67%, 11.62% and 21.51% for the AC, NMI and ARI metrics; on the wdbc_all dataset they are 0.76%, 3.94% and 3.45%, and on the breast_cancer dataset they are 1.16%, 14.15% and 2%. The numbers of final features obtained by the FRFCM-MKM on these datasets are 2, 10 and 5, respectively, compared with 4, 30 and 9 by the FRFCM. On harder datasets such as glass, movement_libras and SMK_CAN_1987, the FRFCM-MKM is still superior to the other six algorithms. The differences between the FRFCM-MKM and the FRFCM on glass are 7.55%, 7.64% and 18.29% for the AC, NMI and ARI metrics, and on the SMK_CAN_1987 dataset they are 1.49%, 12.90% and 5.26%. As shown in Table 15, the FRFCM-MKM retains 4, 26 and 21 features in the final step on these datasets, respectively, while the FRFCM retains 7, 90 and 19993 features. On the wpbc dataset, the proposed algorithm retains 11 important features in the clustering process with AC = 0.576, while the FRFCM retains 31 features with AC = 0.581. On the ORL dataset, our method uses 215 important features of the data to cluster with AC = 0.503. Although the EWKM algorithm performs best on this dataset with AC = 0.600, it uses all 2576 features for clustering; therefore, the EWKM takes more time to cluster the data. The clustering performance of the proposed algorithm is not the best on these datasets, but its performance is almost equivalent to that of the compared algorithms while using a small number of features. Table 16 shows the average performance on all datasets. The performance of the proposed algorithm is clearly better than that of the other six methods in terms of AC, NMI and ARI. According to the experimental results on these datasets, we can draw the following conclusions:

Table 14 Clustering performances on real datasets using the WKM, EWKM, SCAD2, FCM, FRFCM, and FRFCM-MKM

| Dataset | Performance metric | WKM | EWKM | WFCM | SCAD2 | ESSC | FRFCM | FRFCM-MKM |
|---|---|---|---|---|---|---|---|---|
| iris | AC | 0.857±0.178 | 0.852±0.104 | 0.937±0.083 | 0.880±0.000 | 0.893±0.000 | 0.900±0.000 | 0.967±0.000 |
|  | NMI | 0.786±0.161 | 0.727±0.047 | 0.847±0.052 | 0.723±0.000 | 0.743±0.000 | 0.766±0.000 | 0.885±0.000 |
|  | ARI | 0.759±0.230 | 0.689±0.095 | 0.850±0.088 | 0.702±0.000 | 0.729±0.000 | 0.744±0.000 | 0.904±0.000 |
| wdbc_all | AC | 0.852±0.000 | 0.926±0.006 | 0.921±0.000 | 0.928±0.000 | 0.930±0.000 | 0.926±0.000 | 0.933±0.000 |
|  | NMI | 0.477±0.000 | 0.627±0.030 | 0.615±0.000 | 0.615±0.000 | 0.627±0.000 | 0.609±0.000 | 0.633±0.000 |
|  | ARI | 0.486±0.000 | 0.722±0.021 | 0.706±0.000 | 0.730±0.000 | 0.736±0.000 | 0.724±0.000 | 0.749±0.000 |
| wpbc | AC | 0.571±0.029 | 0.590±0.021 | 0.601±0.000 | 0.581±0.000 | 0.581±0.000 | 0.581±0.000 | 0.576±0.000 |
|  | NMI | 0.013±0.011 | 0.024±0.006 | 0.027±0.000 | 0.015±0.000 | 0.015±0.000 | 0.021±0.000 | 0.013±0.000 |
|  | ARI | 0.015±0.018 | 0.027±0.016 | 0.035±0.000 | 0.020±0.000 | 0.020±0.000 | 0.022±0.000 | 0.018±0.000 |
| sonar | AC | 0.582±0.015 | 0.541±0.025 | 0.536±0.012 | 0.548±0.001 | 0.548±0.007 | 0.540±0.003 | 0.625±0.000 |
|  | NMI | 0.059±0.021 | 0.024±0.023 | 0.005±0.002 | 0.007±0.000 | 0.008±0.002 | 0.005±0.001 | 0.049±0.000 |
|  | ARI | 0.023±0.008 | 0.004±0.008 | 0.001±0.003 | 0.004±0.000 | 0.005±0.003 | 0.002±0.001 | 0.058±0.000 |
| glass | AC | 0.468±0.041 | 0.461±0.040 | 0.466±0.037 | 0.435±0.029 | 0.441±0.015 | 0.437±0.027 | 0.470±0.021 |
|  | NMI | 0.310±0.053 | 0.332±0.049 | 0.333±0.038 | 0.334±0.029 | 0.329±0.027 | 0.314±0.030 | 0.338±0.006 |
|  | ARI | 0.197±0.042 | 0.191±0.045 | 0.201±0.038 | 0.183±0.022 | 0.181±0.015 | 0.175±0.027 | 0.207±0.004 |
| breast cancel | AC | 0.946±0.043 | 0.659±0.016 | 0.951±0.000 | 0.951±0.000 | 0.957±0.000 | 0.948±0.000 | 0.959±0.000 |
|  | NMI | 0.693±0.100 | 0.026±0.039 | 0.706±0.000 | 0.704±0.000 | 0.729±0.000 | 0.692±0.000 | 0.718±0.000 |
|  | ARI | 0.798±0.117 | 0.009±0.029 | 0.812±0.000 | 0.813±0.000 | 0.834±0.000 | 0.802±0.000 | 0.818±0.000 |
| movement libras | AC | 0.410±0.053 | 0.436±0.029 | 0.447±0.025 | 0.245±0.023 | 0.266±0.026 | 0.248±0.020 | 0.342±0.015 |
|  | NMI | 0.533±0.081 | 0.580±0.020 | 0.575±0.017 | 0.358±0.020 | 0.376±0.025 | 0.365±0.022 | 0.451±0.010 |
|  | ARI | 0.256±0.063 | 0.295±0.026 | 0.299±0.022 | 0.148±0.017 | 0.162±0.017 | 0.152±0.015 | 0.183±0.008 |
| ionosphere | AC | 0.703±0.041 | 0.725±0.032 | 0.704±0.000 | 0.709±0.000 | 0.709±0.000 | 0.709±0.000 | 0.698±0.000 |
|  | NMI | 0.133±0.051 | 0.160±0.057 | 0.120±0.000 | 0.130±0.000 | 0.130±0.000 | 0.130±0.000 | 0.104±0.000 |
|  | ARI | 0.161±0.059 | 0.198±0.063 | 0.163±0.000 | 0.173±0.000 | 0.173±0.000 | 0.173±0.000 | 0.153±0.000 |
| bupa | AC | 0.548±0.005 | 0.548±0.003 | 0.548±0.000 | 0.507±0.000 | 0.522±0.000 | 0.507±0.000 | 0.551±0.000 |
|  | NMI | 0.503±0.002 | 0.503±0.000 | 0.503±0.000 | 0.499±0.000 | 0.499±0.000 | 0.499±0.000 | 0.504±0.000 |
|  | ARI | -0.006±0.004 | -0.006±0.001 | -0.007±0.000 | -0.007±0.000 | -0.008±0.000 | -0.007±0.000 | 0.006±0.000 |
| colon | AC | 0.557±0.017 | 0.556±0.018 | 0.548±0.000 | 0.548±0.000 | 0.581±0.018 | 0.565±0.000 | 0.600±0.000 |
|  | NMI | 0.006±0.004 | 0.005±0.003 | 0.002±0.000 | 0.006±0.000 | 0.006±0.003 | 0.010±0.000 | 0.008±0.000 |
|  | ARI | -0.003±0.010 | -0.004±0.006 | -0.008±0.000 | -0.006±0.000 | 0.006±0.009 | 0.001±0.000 | 0.008±0.000 |
| prostate_GE | AC | 0.563±0.017 | 0.576±0.009 | 0.570±0.000 | 0.570±0.000 | 0.572±0.000 | 0.568±0.000 | 0.580±0.000 |
|  | NMI | 0.016±0.021 | 0.018±0.008 | 0.017±0.000 | 0.018±0.000 | 0.017±0.000 | 0.016±0.000 | 0.020±0.000 |
|  | ARI | 0.015±0.013 | 0.016±0.007 | 0.017±0.000 | 0.017±0.000 | 0.017±0.000 | 0.016±0.000 | 0.017±0.000 |
| SMK_CAN_1987 | AC | 0.537±0.015 | 0.549±0.015 | 0.551±0.000 | 0.604±0.000 | 0.604±0.000 | 0.604±0.000 | 0.613±0.000 |
|  | NMI | 0.004±0.004 | 0.007±0.004 | 0.007±0.000 | 0.031±0.000 | 0.031±0.000 | 0.031±0.000 | 0.035±0.000 |
|  | ARI | 0.002±0.005 | 0.006±0.006 | 0.006±0.000 | 0.038±0.000 | 0.038±0.000 | 0.038±0.000 | 0.040±0.000 |
| ORL | AC | 0.490±0.006 | 0.600±0.046 | 0.367±0.016 | 0.038±0.002 | 0.112±0.033 | 0.088±0.623 | 0.503±0.192 |
|  | NMI | 0.775±0.030 | 0.819±0.018 | 0.656±0.010 | 0.361±0.012 | 0.426±0.025 | 0.354±0.041 | 0.709±0.008 |
|  | ARI | 0.385±0.056 | 0.597±0.044 | 0.227±0.013 | 0.054±0.008 | 0.048±0.009 | 0.040±0.010 | 0.343±0.015 |

Table 15 Numbers of original and final features obtained from the FRFCM and FRFCM–MKM and the total run-time (in seconds) for the real datasets for each algorithm

| Datasets | Original dimension d | Final dimension by FRFCM | Final dimension by FRFCM-MKM | WKM (s) | EWKM (s) | WFCM (s) | SCAD2 (s) | ESSC (s) | FRFCM (s) | FRFCM-MKM (s) |
|---|---|---|---|---|---|---|---|---|---|---|
| iris | 4 | 4 | 2 | 0.001 | 0.001 | 0.005 | 0.004 | 0.005 | 0.001 | 0.001 |
| wdbc_all | 30 | 30 | 10 | 0.002 | 0.003 | 0.045 | 0.036 | 0.034 | 0.003 | 0.003 |
| wpbc | 32 | 31 | 11 | 0.001 | 0.002 | 0.038 | 0.040 | 0.354 | 0.001 | 0.003 |
| sonar | 60 | 57 | 5 | 0.001 | 0.003 | 0.085 | 0.119 | 0.621 | 0.004 | 0.002 |
| glass | 9 | 7 | 4 | 0.001 | 0.001 | 0.022 | 0.021 | 0.020 | 0.001 | 0.002 |
| breast cancel | 10 | 9 | 5 | 0.001 | 0.003 | 0.020 | 0.014 | 0.028 | 0.002 | 0.003 |
| movement libras | 90 | 90 | 26 | 0.010 | 0.012 | 0.962 | 1.388 | 1.946 | 0.010 | 0.007 |
| ionosphere | 33 | 33 | 12 | 0.002 | 0.002 | 0.020 | 0.016 | 0.022 | 0.004 | 0.004 |
| bupa | 6 | 5 | 3 | 0.001 | 0.002 | 0.023 | 0.019 | 0.021 | 0.002 | 0.003 |
| colon | 2000 | 1700 | 13 | 0.005 | 0.007 | 0.133 | 0.166 | 2.493 | 0.017 | 0.003 |
| prostate_GE | 5966 | 5666 | 5 | 0.043 | 0.297 | 1.546 | 1.104 | 1.340 | 0.063 | 0.002 |
| SMK_CAN_1987 | 19993 | 19693 | 21 | 0.231 | 1.903 | 10.431 | 13.828 | 16.211 | 0.284 | 0.009 |
| ORL | 2576 | 2538 | 22 | 1.302 | 11.391 | 142.839 | 52.946 | 80.028 | 0.608 | 0.020 |

Table 16 The average value of the results of the FRFCM-MKM and other six methods on thirteen real datasets

| Performance metric | WKM | EWKM | WFCM | SCAD2 | ESSC | FRFCM | FRFCM-MKM |
|---|---|---|---|---|---|---|---|
| AC | 0.622 | 0.617 | 0.627 | 0.580 | 0.594 | 0.586 | 0.647 |
| NMI | 0.331 | 0.296 | 0.339 | 0.292 | 0.303 | 0.293 | 0.368 |
| ARI | 0.238 | 0.211 | 0.254 | 0.221 | 0.226 | 0.222 | 0.270 |

  • 1) The proposed algorithm is robust to the initialization, while the other algorithms lack such robustness.
  • 2) The proposed method represents the feature weights well, while the other methods do not produce proper weights for the features.
  • 3) Compared with the FRFCM, the FRFCM-MKM achieves better results overall.
  • 4) The MKM index proposed in this paper is better than the MVR index at measuring feature importance.
  • 5) The threshold based on the Pythagorean means, which uses the information in the feature importance values, is more reasonable than the threshold used in the FRFCM.

Next, we analyze the computational complexity of the FRFCM-MKM. The computational complexity of the FRFCM-MKM depends on three updating stages: (1) updating the membership matrix U, which needs $O(nc^{2}d)$; (2) updating the cluster centers $v_{k}$, which needs $O(nc)$; and (3) updating the weights $w_{j}$, which needs $O(ncd^{2})$. The overall computational complexity of the FRFCM-MKM is $O(nc^{2}d+nc+ncd^{2})$.

5 Conclusion

This study focuses on feature reduction learning in fuzzy clustering. We develop the MKM index to measure feature importance. Based on it, a novel objective function is proposed to simultaneously minimize the within-cluster dispersion and the discrepancy between the feature importance and the feature weights, and a feature reduction scheme is used in the clustering process. From the results of clustering both synthetic and real datasets, we observe that the proposed algorithm not only achieves superior performance in terms of accuracy, adjusted Rand index, and normalized mutual information, but also shows great stability with respect to the initialization.

This paper proposed an index named the MKM for measuring feature importance. Despite its advantages, this index only considers the marginal kurtosis of the features and does not take the feature dispersion into account; this is a shortcoming of our work. The MKM index works on most datasets in our experiments, but it does not work on some of them, such as the ionosphere and ORL datasets. For future research, one direction is to design a better index for measuring feature importance by taking the MKM and the feature dispersion into account together, based on which new feature reduction learning algorithms will be developed. Another possible direction is to extend our method to multi-view clustering and possibilistic clustering.

Footnotes
1 This work was supported in part by the National Natural Science Foundation of China under Grant 61572236 and Grant 61972181, and in part by the Natural Science Foundation of Jiangsu Province under Grant BK20191331.

By Xingguang Pan and Shitong Wang


Title:
Feature reduction fuzzy C-Means algorithm leveraging the marginal kurtosis measure
Author(s) / contributors: Pan, Xingguang; Wang, Shitong
Journal: Journal of Intelligent & Fuzzy Systems, Vol. 39 (2020-11-19), pp. 7259-7279
Publisher: IOS Press, 2020
Media type: unknown
ISSN: 1875-8967 (print); 1064-1246 (print)
DOI: 10.3233/jifs-200714
Keywords:
  • Statistics and Probability
  • 0209 industrial biotechnology
  • Computer science
  • General Engineering
  • 02 engineering and technology
  • Fuzzy logic
  • Measure (mathematics)
  • Reduction (complexity)
  • 020901 industrial engineering & automation
  • Artificial Intelligence
  • Feature (computer vision)
  • 0202 electrical engineering, electronic engineering, information engineering
  • Kurtosis
  • 020201 artificial intelligence & image processing
  • Algorithm
Other:
  • Indexed in: OpenAIRE
