DOI: 10.1145/2522848.2531739
research-article

Emotion recognition in the wild challenge 2013

Published: 09 December 2013

ABSTRACT

Emotion recognition is a very active field of research. The Emotion Recognition In The Wild Challenge and Workshop (EmotiW) 2013 Grand Challenge consists of an audio-video based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such laboratory-controlled data poorly represents the environment and conditions faced in real-world situations. The goal of this Grand Challenge is to define a common platform for the evaluation of emotion recognition methods in real-world conditions. The database used in the 2013 challenge is Acted Facial Expressions in the Wild (AFEW), which has been collected from movies depicting close-to-real-world conditions.
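The abstract describes an audio-video emotion classification challenge. As an illustration only (the paper itself does not specify the scoring code), a challenge of this kind is typically scored by overall classification accuracy, sometimes supplemented by per-class recall. The emotion labels and predictions below are hypothetical placeholders, not data from AFEW:

```python
from collections import Counter

# Hypothetical ground-truth and predicted labels for a handful of video clips.
# The emotion categories are illustrative, not taken from the paper.
true_labels = ["happy", "sad", "happy", "angry", "neutral", "fear"]
predictions = ["happy", "sad", "neutral", "angry", "neutral", "sad"]

def classification_accuracy(y_true, y_pred):
    """Fraction of clips whose predicted emotion matches the ground truth."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def per_class_recall(y_true, y_pred):
    """Recall for each emotion: correct predictions / clips with that true label."""
    totals = Counter(y_true)
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    return {c: correct[c] / totals[c] for c in totals}

print(classification_accuracy(true_labels, predictions))  # 4/6 ≈ 0.667
print(per_class_recall(true_labels, predictions))
```

Per-class recall is useful alongside plain accuracy because real-world emotion data tends to be class-imbalanced, so a classifier can score well overall while missing rare emotions entirely.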


Published in

ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction
December 2013
630 pages
ISBN: 9781450321297
DOI: 10.1145/2522848

Copyright © 2013 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ICMI '13 paper acceptance rate: 49 of 133 submissions (37%). Overall acceptance rate: 453 of 1,080 submissions (42%).
