Cost-Sensitive Measures of Algorithm Similarity for Meta-Learning
Abstract
Knowledge about algorithm similarity is an important aspect of meta-learning, where information gathered from previous learning tasks can be used to guide the selection of algorithms for new datasets. Usually this is done by comparing global performance measures across different datasets or, alternatively, by comparing the performance of algorithms at the instance level. In both cases, previous similarity measures do not consider misclassification costs, and hence they neglect important information that can be exploited in different learning contexts. In this paper we present algorithm similarity measures that deal with cost proportions and different threshold choice methods for building crisp classifiers from learned models. Experiments were performed in a meta-learning study with 50 different learning tasks. The similarity measures were adopted to cluster algorithms according to their aggregated performance on the learning tasks. The clustering process revealed similarities between algorithms under different perspectives.
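To make the idea concrete, the sketch below illustrates one possible way to build such a cost-sensitive comparison: each algorithm's expected misclassification cost is computed over a grid of cost proportions per dataset (here with a fixed score threshold, only one of the threshold choice methods the paper considers), the per-dataset cost vectors are concatenated into a profile, and algorithms are clustered by the distance between their profiles. All function names (`expected_cost`, `algorithm_similarity_matrix`), the Euclidean distance, and the synthetic data are illustrative assumptions, not the measures defined in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def expected_cost(scores, labels, cost_proportions, threshold=0.5):
    """Expected misclassification cost of one model on one dataset for each
    cost proportion c in [0, 1]; c weights false negatives and (1 - c) false
    positives. A fixed score threshold is assumed here, which is only one of
    the threshold choice methods studied in the paper."""
    preds = (scores >= threshold).astype(int)
    fnr = np.mean((preds == 0) & (labels == 1))
    fpr = np.mean((preds == 1) & (labels == 0))
    return np.array([c * fnr + (1.0 - c) * fpr for c in cost_proportions])

def algorithm_similarity_matrix(cost_profiles):
    """cost_profiles: dict mapping algorithm name -> concatenated cost vectors
    over all datasets. Returns names and a condensed distance matrix."""
    names = sorted(cost_profiles)
    profile_matrix = np.vstack([cost_profiles[n] for n in names])
    return names, pdist(profile_matrix, metric="euclidean")

# Toy usage with synthetic data (illustration only, not the paper's setup).
rng = np.random.default_rng(0)
cost_props = np.linspace(0.0, 1.0, 11)
label_sets = [(rng.random(200) > 0.5).astype(int) for _ in range(5)]
profiles = {}
for algo, noise in [("A", 0.1), ("B", 0.1), ("C", 0.4)]:
    per_dataset = []
    for labels in label_sets:
        # Simulated scores: label signal plus algorithm-specific noise.
        scores = np.clip(labels + rng.normal(0.0, noise, labels.size), 0.0, 1.0)
        per_dataset.append(expected_cost(scores, labels, cost_props))
    profiles[algo] = np.concatenate(per_dataset)

names, dists = algorithm_similarity_matrix(profiles)
clusters = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
print(dict(zip(names, clusters)))  # algorithms A and B group together, C apart
```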
Article Details
How to Cite
MELO, Carlos Eduardo Castor de; PRUDÊNCIO, Ricardo Bastos Cavalcante.
Cost-Sensitive Measures of Algorithm Similarity for Meta-Learning.
BRACIS, [S.l.], jan. 2017.
Available at: <http://541213.vlyrfqsea.asia/index.php/bracis/article/view/534>. Date accessed: 28 nov. 2024.
Section
Artigos