A study of the behavior of several methods for balancing machine learning training data

Gustavo E. A. P. A. Batista, Ronaldo C. Prati, and Maria Carolina Monard

Published: 1 June 2004

Abstract

Several aspects can influence the performance achieved by existing learning systems. One of them is class imbalance, in which the training examples of one class heavily outnumber those of the other class. In this situation, common in real-world data describing an infrequent but important event, the learning system may have difficulty learning the concept associated with the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, combining a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is competitive with more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex than the ones induced from the original data. Among the investigated over-sampling methods, Random over-sampling usually produced the smallest increase in the mean number of induced rules, and Smote + ENN the smallest increase in the mean number of conditions per rule.
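The combined methods described in the abstract are available in open-source form; the sketch below is a minimal illustration, not the authors' original experiments. It assumes the third-party imbalanced-learn library, which implements the Smote + Tomek and Smote + ENN combinations, and uses a synthetic imbalanced data set as a stand-in for the UCI data. The AUC comparison and the leaf count (one rule per leaf, as a rough syntactic-complexity proxy) mirror the evaluation the abstract describes.

```python
# Minimal sketch (not the authors' original code): re-balance training data
# with Smote + Tomek and Smote + ENN via the third-party imbalanced-learn
# library, then compare decision trees by AUC and by number of leaves (rules).
from imblearn.combine import SMOTEENN, SMOTETomek
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced data standing in for a UCI data set (5% positives).
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           n_informative=4, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

samplers = {
    "original data": None,
    "Smote + Tomek": SMOTETomek(random_state=42),
    "Smote + ENN": SMOTEENN(random_state=42),
}

for name, sampler in samplers.items():
    # Re-balance only the training split; the test split keeps the
    # original class distribution so the AUC estimate stays honest.
    Xb, yb = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    tree = DecisionTreeClassifier(random_state=42).fit(Xb, yb)
    scores = tree.predict_proba(X_te)[:, 1]  # minority-class scores for the ROC
    print(f"{name:>14}: AUC = {roc_auc_score(y_te, scores):.3f}, "
          f"rules (leaves) = {tree.get_n_leaves()}")
```

Note that scikit-learn's unpruned CART trees stand in here for the paper's C4.5 learner, so absolute numbers will differ; the point is only to show where the re-balancing step slots in before tree induction.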

References

  1. Batista, G. E. A. P. A., Bazzan, A. L. C., and Monard, M. C. Balancing Training Data for Automated Annotation of Keywords: a Case Study. In WOB (2003), pp. 35--43.
  2. Bauer, E., and Kohavi, R. An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants. Machine Learning 36 (1999), 105--139.
  3. Blake, C., and Merz, C. UCI Repository of Machine Learning Databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.
  4. Chawla, N. V. C4.5 and Imbalanced Data Sets: Investigating the Effect of Sampling Method, Probabilistic Estimate, and Decision Tree Structure. In Workshop on Learning from Imbalanced Data Sets II (2003).
  5. Chawla, N. V., Bowyer, K. W., Hall, L. O., and Kegelmeyer, W. P. SMOTE: Synthetic Minority Over-sampling Technique. JAIR 16 (2002), 321--357.
  6. Ciaccia, P., Patella, M., and Zezula, P. M-tree: an Efficient Access Method for Similarity Search in Metric Spaces. In VLDB (1997), pp. 426--435.
  7. Domingos, P. MetaCost: A General Method for Making Classifiers Cost-Sensitive. In KDD (1999), pp. 155--164.
  8. Drummond, C., and Holte, R. C. C4.5, Class Imbalance, and Cost Sensitivity: Why Under-sampling beats Over-sampling. In Workshop on Learning from Imbalanced Data Sets II (2003).
  9. Ferri, C., Flach, P., and Hernández-Orallo, J. Learning Decision Trees Using the Area Under the ROC Curve. In ICML (2002), pp. 139--146.
  10. Hand, D. J. Construction and Assessment of Classification Rules. John Wiley and Sons, 1997.
  11. Hart, P. E. The Condensed Nearest Neighbor Rule. IEEE Transactions on Information Theory IT-14 (1968), 515--516.
  12. Japkowicz, N. Class Imbalances: Are We Focusing on the Right Issue? In Workshop on Learning from Imbalanced Data Sets II (2003).
  13. Japkowicz, N., and Stephen, S. The Class Imbalance Problem: A Systematic Study. IDA Journal 6, 5 (2002), 429--449.
  14. Kubat, M., and Matwin, S. Addressing the Curse of Imbalanced Training Sets: One-sided Selection. In ICML (1997), pp. 179--186.
  15. Laurikkala, J. Improving Identification of Difficult Small Classes by Balancing Class Distribution. Tech. Rep. A-2001-2, University of Tampere, 2001.
  16. Ling, C. X., and Li, C. Data Mining for Direct Marketing: Problems and Solutions. In KDD (1998), pp. 73--79.
  17. Mitchell, T. M. Machine Learning. McGraw-Hill, 1997.
  18. Prati, R. C., Batista, G. E. A. P. A., and Monard, M. C. Class Imbalances versus Class Overlapping: an Analysis of a Learning System Behavior. In MICAI (2004), pp. 312--321. LNAI 2972.
  19. Provost, F. J., and Fawcett, T. Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions. In KDD (1997), pp. 43--48.
  20. Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann, CA, 1993.
  21. Stanfill, C., and Waltz, D. Toward Memory-Based Reasoning. Communications of the ACM 29, 12 (1986), 1213--1228.
  22. Tomek, I. Two Modifications of CNN. IEEE Transactions on Systems, Man, and Cybernetics SMC-6 (1976), 769--772.
  23. Weiss, G. M., and Provost, F. Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction. JAIR 19 (2003), 315--354.
  24. Wilson, D. L. Asymptotic Properties of Nearest Neighbor Rules Using Edited Data. IEEE Transactions on Systems, Man, and Cybernetics 2, 3 (1972), 408--421.
  25. Wilson, D. R., and Martinez, T. R. Reduction Techniques for Exemplar-Based Learning Algorithms. Machine Learning 38, 3 (2000), 257--286.
  26. Zadrozny, B., and Elkan, C. Learning and Making Decisions When Costs and Probabilities are Both Unknown. In KDD (2001), pp. 204--213.


• Published in

  ACM SIGKDD Explorations Newsletter, Volume 6, Issue 1: Special issue on learning from imbalanced datasets (June 2004), 117 pages.
  ISSN: 1931-0145
  EISSN: 1931-0153
  DOI: 10.1145/1007730
  Copyright © 2004 Authors
  Publisher: Association for Computing Machinery, New York, NY, United States