DOI: 10.1145/2487575.2487629
KDD Conference Proceedings · poster

Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms

Published: 11 August 2013

ABSTRACT

Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.
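The core idea is to treat the choice of learning algorithm itself as one more (categorical) hyperparameter, so that a single optimizer searches jointly over algorithms and their settings, scored by cross-validation performance. The sketch below is a minimal, hedged illustration of that combined search space using WEKA's Java API on a toy space of two classifiers (J48 and SMO with an RBF kernel). It substitutes plain random sampling for the Bayesian optimization (SMAC) that Auto-WEKA actually uses, and the dataset path, evaluation budget, and hyperparameter ranges are illustrative assumptions rather than the paper's setup.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.classifiers.functions.supportVector.RBFKernel;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

/**
 * Toy sketch of the combined algorithm selection and hyperparameter
 * optimization (CASH) space: each candidate is a (classifier, hyperparameter
 * setting) pair, scored by cross-validation error. Auto-WEKA searches a far
 * larger space with Bayesian optimization (SMAC); plain random sampling
 * stands in for the optimizer here.
 */
public class CashSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical dataset path; any ARFF file with a nominal class attribute works.
    Instances data = new DataSource("data/iris.arff").getDataSet();
    data.setClassIndex(data.numAttributes() - 1);

    Random rng = new Random(42);
    double bestError = Double.MAX_VALUE;
    Classifier best = null;

    for (int i = 0; i < 20; i++) {  // illustrative budget of 20 evaluations
      Classifier candidate;
      if (rng.nextBoolean()) {
        // Branch 1: J48 decision tree with sampled pruning hyperparameters.
        J48 tree = new J48();
        tree.setConfidenceFactor(0.05f + rng.nextFloat() * 0.4f);
        tree.setMinNumObj(1 + rng.nextInt(10));
        candidate = tree;
      } else {
        // Branch 2: SMO (SVM) with an RBF kernel; C and gamma sampled on a log scale.
        SMO svm = new SMO();
        svm.setC(Math.pow(10, -2 + 4 * rng.nextDouble()));
        RBFKernel kernel = new RBFKernel();
        kernel.setGamma(Math.pow(10, -3 + 3 * rng.nextDouble()));
        svm.setKernel(kernel);
        candidate = svm;
      }

      // 10-fold cross-validation error is the objective being minimized.
      Evaluation eval = new Evaluation(data);
      eval.crossValidateModel(candidate, data, 10, new Random(1));
      if (eval.errorRate() < bestError) {
        bestError = eval.errorRate();
        best = candidate;
      }
    }
    System.out.println("Best configuration: " + best.getClass().getSimpleName()
        + ", CV error = " + bestError);
  }
}

Note that conditional hyperparameters (e.g., the RBF kernel's gamma only exists when SMO is chosen) make the combined space tree-structured; this is the kind of hierarchical space that SMAC's random-forest surrogate handles naturally and that Auto-WEKA exploits at full scale.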


Published in
          KDD '13: Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining
          August 2013
          1534 pages
ISBN: 9781450321747
DOI: 10.1145/2487575

          Copyright © 2013 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 11 August 2013


          Acceptance Rates

KDD '13 paper acceptance rate: 125 of 726 submissions (17%). Overall acceptance rate: 1,133 of 8,635 submissions (13%).
