A study of the behavior of several methods for balancing machine learning training data

Abstract
Several aspects may influence the performance achieved by existing learning systems. It has been reported that one of these aspects is class imbalance, in which the training examples of one class heavily outnumber the examples of the other class. In this situation, which is typical of real-world data describing an infrequent but important event, the learning system may have difficulty learning the concept related to the minority class. In this work we perform a broad experimental evaluation of ten methods, three of them proposed by the authors, for dealing with the class imbalance problem on thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods address these conditions directly, combining a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods in terms of the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is competitive with more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex than those induced from the original data. Among the investigated over-sampling methods, Random over-sampling usually produced the smallest increase in the mean number of induced rules, and Smote + ENN the smallest increase in the mean number of conditions per rule.
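To make the two families of over-sampling methods discussed above concrete, the following is a minimal illustrative sketch, not the authors' implementation: `random_oversample` duplicates randomly chosen minority examples until the class counts balance, and `smote_like` generates synthetic minority examples by interpolating between a minority point and one of its k nearest minority neighbors, in the spirit of SMOTE. Function names and parameters are our own; the published methods additionally combine SMOTE with Tomek-link or ENN data cleaning, which is omitted here.

```python
import numpy as np

def random_oversample(X, y, minority_label=1, rng=None):
    """Duplicate randomly chosen minority examples until classes balance."""
    rng = np.random.default_rng(rng)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    n_extra = len(majority) - len(minority)
    if n_extra <= 0:
        return X, y
    extra = rng.choice(minority, size=n_extra, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

def smote_like(X_min, n_synthetic, k=3, rng=None):
    """Create synthetic minority points by interpolating toward one of the
    k nearest minority neighbors (SMOTE-style), so new points lie on line
    segments between existing minority examples."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because the synthetic points are convex combinations of existing minority examples, they interpolate rather than merely replicate, which is why SMOTE-style methods can yield better-defined minority regions than plain duplication.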