ABSTRACT
Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.
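The problem the abstract calls "simultaneously selecting a learning algorithm and setting its hyperparameters" (combined algorithm selection and hyperparameter optimization, CASH) can be stated as a single minimization over a joint space. The notation below is our gloss, not quoted from the abstract: 𝒜 = {A^(1), …, A^(R)} is the set of candidate algorithms, Λ^(j) the hyperparameter space of A^(j), and ℒ the loss of a trained model on a validation fold under k-fold cross-validation.

```latex
A^{*}_{\lambda^{*}} \in
  \operatorname*{argmin}_{A^{(j)} \in \mathcal{A},\ \lambda \in \Lambda^{(j)}}
  \frac{1}{k} \sum_{i=1}^{k}
  \mathcal{L}\bigl(A^{(j)}_{\lambda},\
                   \mathcal{D}^{(i)}_{\mathrm{train}},\
                   \mathcal{D}^{(i)}_{\mathrm{valid}}\bigr)
```

To make the joint search space concrete, here is a minimal sketch against the standard WEKA API (DataSource, AbstractClassifier.forName, and Evaluation are real WEKA classes). It uses plain random search as a stand-in for the Bayesian optimization the paper leverages, and the handful of candidate (classifier, option) pairs below is an illustrative assumption, not Auto-WEKA's actual space of 39 classification methods and their full hyperparameter ranges:

```java
import java.util.Arrays;
import java.util.Random;
import weka.classifiers.AbstractClassifier;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CashRandomSearch {
    // Each candidate pairs a classifier class name with one hyperparameter
    // setting, so the search ranges over algorithms AND hyperparameters at
    // once. This tiny grid is a hypothetical illustration.
    static final String[][] CANDIDATES = {
        { "weka.classifiers.trees.J48",     "-C", "0.1" },
        { "weka.classifiers.trees.J48",     "-C", "0.4" },
        { "weka.classifiers.functions.SMO", "-C", "0.5" },
        { "weka.classifiers.functions.SMO", "-C", "2.0" },
        { "weka.classifiers.lazy.IBk",      "-K", "5"   },
    };

    public static void main(String[] args) throws Exception {
        // Usage: java CashRandomSearch some-dataset.arff
        Instances data = new DataSource(args[0]).getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        Random rng = new Random(42);
        double bestError = Double.MAX_VALUE;
        String bestDesc = null;

        for (int trial = 0; trial < 20; trial++) {
            String[] cand = CANDIDATES[rng.nextInt(CANDIDATES.length)];
            // Copy the options: forName/setOptions consumes the array.
            String[] opts = Arrays.copyOfRange(cand, 1, cand.length);
            String desc = cand[0] + " " + String.join(" ", opts);
            Classifier clf = AbstractClassifier.forName(cand[0], opts);

            // 10-fold cross-validation estimates the loss that the CASH
            // objective minimizes over (algorithm, hyperparameters).
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(clf, data, 10, new Random(trial));

            if (eval.errorRate() < bestError) {
                bestError = eval.errorRate();
                bestDesc = desc;
            }
        }
        System.out.printf("best: %s (CV error %.4f)%n", bestDesc, bestError);
    }
}
```

The point of the sketch is only that the algorithm choice and its hyperparameters are sampled together, so a single search procedure covers both decisions; Auto-WEKA replaces the random sampler with model-based Bayesian optimization over a hierarchical parameter space in which each classifier's hyperparameters are active only when that classifier is selected.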