Using Feature Weighting as a Tool for Clustering Applications
Abstract
The weighted variant of k-Means (Wk-Means), which assigns values to features based on their relevance, is a well-known approach to addressing the shortcomings of k-Means on data containing noisy and irrelevant features. This research aims, first, to explore how feature weighting can be used for feature selection; second, to investigate the performance of Minkowski Weighted k-Means (MWk-Means), and its intelligent variant, on datasets defined in different p-norms; and third, to address the problem of missing values with a weighted variant of k-Means. A partial distance approach is used to address the problem of missing values for weighted variants of k-Means.
Anomalous clustering has been successfully used to detect natural clusters and initialize centroids in k-Means-type algorithms. Similarly, extensive work has been carried out on using feature weights to rescale features under Minkowski Lp metrics for p ≥ 1. In this thesis, aspects from both of these approaches enable feature weights to be detected based on natural clusters present in the training data, without limiting the clusters to spherical shapes. Two methods, mean-FSFW and max-FSFW, are developed as further extensions of intelligent Minkowski Weighted k-Means (iMWk-Means), where feature weights are used as indices for feature selection with no requirement for user-specified parameters.
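As a rough illustration of the feature-weighting idea described above, the following Python sketch shows the standard MWk-Means-style weight update, in which each feature's weight is inversely related to its within-cluster dispersion under the Minkowski exponent p. The function name and interface are illustrative only, not taken from the thesis, and the sketch assumes p > 1.

```python
import numpy as np

def minkowski_weights(X, labels, centroids, p):
    """Sketch of a Minkowski-weighted feature-weight update:
    w_v = 1 / sum_u (D_v / D_u)^(1/(p-1)),
    where D_v is the within-cluster dispersion of feature v.
    Assumes p > 1 (the exponent 1/(p-1) is undefined at p = 1)."""
    n_features = X.shape[1]
    # Within-cluster dispersion of each feature: sum of |x_iv - c_kv|^p
    D = np.zeros(n_features)
    for k, c in enumerate(centroids):
        D += (np.abs(X[labels == k] - c) ** p).sum(axis=0)
    D += 1e-12  # guard against division by zero on constant features
    w = np.array([1.0 / ((D[v] / D) ** (1.0 / (p - 1))).sum()
                  for v in range(n_features)])
    return w  # weights sum to 1; noisier features get smaller weights
```

Under this update, a feature whose values are widely dispersed around the centroids (e.g. a noise feature) receives a small weight, which is what allows the weights to double as feature-selection indices.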
The proposed feature-selection methods are able to significantly reduce the number of noisy features. These methods are further extended to mean-FSFWextPD and max-FSFWextPD to handle missing values, and are found to be better alternatives to existing imputation methods.
The effect of feature weighting on the clustering of datasets defined in varying p-norms is explored further in the thesis. An algorithm that translates a dataset into different p-norms is proposed, and the capability of MWk-Means to recover the true shapes of clusters defined in different p-norms is examined.
To address the problem of missing feature values in weighted variants of k-Means, different missing-value imputation methods are tested. MWk-Means and its intelligent variant are further extended to incorporate the partial distance approach, specifically to address the problem of missing values.
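A minimal sketch of the partial distance idea, assuming NaN marks a missing value: the weighted Minkowski dissimilarity is computed over the observed features only, then rescaled by the ratio of total to observed features. The function name and rescaling convention follow the classic partial-distance strategy and are illustrative, not the thesis's exact formulation.

```python
import numpy as np

def partial_distance(x, c, w, p):
    """Weighted Minkowski dissimilarity with the partial distance
    approach: missing entries (NaN) in x are skipped, and the sum
    over observed features is rescaled by n_features / n_observed."""
    observed = ~np.isnan(x)
    if not observed.any():
        raise ValueError("point has no observed features")
    d = np.sum((w[observed] ** p) * np.abs(x[observed] - c[observed]) ** p)
    return (len(x) / observed.sum()) * d
```

Because the rescaling keeps distances comparable between complete and incomplete points, missing values can simply be ignored during assignment rather than imputed.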
All these methods are tested on both synthetic and real-world datasets against three models of noise, where applicable: noisy features added, feature blurring, and cluster-wise feature blurring. The noise is generated from Gaussian and uniform distributions at three strengths: no noise, half noise, and full noise.
Overall, the results demonstrate that feature weighting can improve feature selection, that the partial-distance approach, combined with feature weights, is effective at handling missing values, and that clusters can be retrieved effectively in various p-norm spaces.
Publication date
2021-10-24

Published version
https://doi.org/10.18745/th.25233
Other links
http://hdl.handle.net/2299/25233