Overview

There are several feature selection techniques that can be used for classification and clustering, including:

- Wrapper methods: these use a specific learning algorithm to evaluate the importance of each feature. Examples include forward selection and backward elimination.
- Filter methods: these use a statistical test to evaluate the importance of each feature. Examples include the chi-squared test and mutual information (see the filter-method sketch at the end of this overview).
- Embedded methods: these use a learning algorithm that has built-in feature selection capabilities. A classic example is Lasso regression in linear models, whose L1 penalty can drive the coefficients of irrelevant features to exactly zero (see the Lasso sketch below).
- Hybrid methods: these combine the strengths of wrapper and filter methods.
- Correlation-based feature selection (CFS): this method uses the correlation between features and the target variable to select the relevant features.
- Recursive Feature Elimination (RFE): this method recursively removes attributes and builds a model on those attributes that remain. It uses model accuracy to identify which attributes (and combinations of attributes) contribute most to predicting the target attribute (see the RFE sketch below).

Overall, the choice of feature selection technique will depend on the specific problem and dataset at hand.

Data mining tasks are often confronted with many challenges, the biggest being the high dimensionality of datasets. For successful data mining, the most important criterion is reducing the dimensionality of the dataset: high-dimensional data poses a serious challenge to the efficiency of data mining algorithms, which often cannot handle such data and find the mining task intractable. It therefore becomes necessary to reduce the dimensionality of the data.

There are two approaches to dimensionality reduction: feature selection and feature extraction (Bishop, 1995; Devijver and Kittler, 1982; Fukunaga, 1990). Feature selection reduces the dimensionality of the original feature space by selecting a subset of features without any transformation, so the selected features retain the physical interpretability they had in the original space. Feature extraction reduces dimensionality through a linear transformation of the input features into a completely different space. Because the transformation alters the features, they become difficult to interpret: features in the transformed space lose their physical interpretability, and their original contribution becomes difficult to ascertain (Bishop, 1995).

The choice of dimensionality reduction method is entirely application specific and depends on the nature of the data. Feature selection is especially advantageous because the features keep their original physical meaning, since no transformation of the data is made. This can be important for better problem understanding in applications such as text mining and genetic analysis, where only the relevant information is analysed.
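To make the filter methods above concrete, here is a minimal sketch. It assumes scikit-learn and the Iris dataset, neither of which is prescribed by the book: each feature is scored independently of any learning algorithm, and the k highest-scoring features are kept.

```python
# Minimal filter-method sketch (assumed tools: scikit-learn, Iris data).
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

X, y = load_iris(return_X_y=True)

# Chi-squared scoring: tests each feature's dependence on the class label.
# Requires non-negative feature values, which holds for these measurements.
chi2_selector = SelectKBest(score_func=chi2, k=2)
X_chi2 = chi2_selector.fit_transform(X, y)
print("chi2 scores:", chi2_selector.scores_)
print("selected indices:", chi2_selector.get_support(indices=True))

# Mutual information scoring works the same way and handles any feature scale.
mi_selector = SelectKBest(score_func=mutual_info_classif, k=2)
X_mi = mi_selector.fit_transform(X, y)
print("MI scores:", mi_selector.scores_)
```

Because the scores are computed per feature without training a classifier, filter methods are cheap and algorithm-agnostic, which is exactly the trade-off against wrapper methods described above.

For the embedded approach, the following sketch, again under the assumption of scikit-learn (the dataset and the penalty strength alpha are illustrative choices), shows how Lasso performs selection as a side effect of model fitting:

```python
# Minimal embedded-method sketch: Lasso's L1 penalty typically drives some
# coefficients to exactly zero, so selection falls out of the fit itself.
from sklearn.datasets import load_diabetes
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

X, y = load_diabetes(return_X_y=True)

# alpha is an illustrative value; larger alpha zeroes more coefficients.
lasso = Lasso(alpha=0.5).fit(X, y)
print("coefficients:", lasso.coef_)

# SelectFromModel keeps only the features with non-zero coefficients.
selector = SelectFromModel(Lasso(alpha=0.5))
X_selected = selector.fit_transform(X, y)
print("kept", X_selected.shape[1], "of", X.shape[1], "features")
```

Finally, a minimal RFE sketch. Note that scikit-learn's RFE ranks features by the estimator's coefficient or importance weights at each elimination round (its RFECV variant additionally uses cross-validated accuracy to choose how many features to keep); the base estimator here is an illustrative assumption.

```python
# Minimal RFE sketch: fit a model, drop the weakest feature (step=1),
# and repeat until the requested number of features remains.
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2, step=1)
rfe.fit(X, y)

print("ranking (1 = kept):", rfe.ranking_)
print("selected indices:", rfe.get_support(indices=True))
```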
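All three sketches share the same interface (fit, transform, get_support), so they can be swapped into a preprocessing pipeline interchangeably; which one is appropriate depends, as the overview notes, on the specific problem and dataset at hand.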
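In the same spirit, feature extraction can be contrasted in one line: a method such as PCA (an illustrative choice, not one singled out by the book) returns linear combinations of the original features, so the reduced columns no longer correspond to any single measurable quantity, which is precisely the loss of interpretability discussed above.

```python
# Minimal feature-extraction contrast: PCA transforms rather than selects.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
X_new = PCA(n_components=2).fit_transform(X)
# Each column of X_new mixes all four original measurements,
# so it has no direct physical interpretation.
print(X_new.shape)
```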
Full Product Details

- Author: Ananya Gupta
- Publisher: Vikatan Publishing Solutions
- Imprint: Vikatan Publishing Solutions
- Dimensions: width 15.20 cm, height 1.00 cm, length 22.90 cm
- Weight: 0.263 kg
- ISBN-13: 9788119665051
- ISBN-10: 8119665058
- Pages: 190
- Publication Date: 15 January 2023
- Audience: General/trade
- Format: Paperback
- Publisher's Status: Active
- Availability: Temporarily unavailable (will be placed on backorder and shipped when back in stock)