%0 Book Section %B Computer Vision – ECCV 2014, Lecture Notes in Computer Science %D 2014 %T Seeing is worse than believing: Reading people’s minds better than computer-vision methods recognize actions %A Andrei Barbu %A Daniel Barrett %A Wei Chen %A N. Siddharth %A Caiming Xiong %A Jason J. Corso %A Christiane D. Fellbaum %A Catherine Hanson %A Stephen José Hanson %A Sebastien Helie %A Evguenia Malaia %A Barak A. Pearlmutter %A Jeffrey Mark Siskind %A Thomas Michael Talavage %A Ronnie B. Wilbur %X
We had human subjects perform a one-out-of-six class action recognition task from video stimuli while undergoing functional magnetic resonance imaging (fMRI). Support-vector machines (SVMs) were trained on the recovered brain scans to classify actions observed during imaging, yielding an average classification accuracy of 69.73% when tested on scans from the same subject and of 34.80% when tested on scans from different subjects. An apples-to-apples comparison was performed with all publicly available software that implements state-of-the-art action recognition, using the same video corpus, the same cross-validation regimen, and the same partitioning into training and test sets, yielding classification accuracies between 31.25% and 52.34%. This indicates that one can read people’s minds better than state-of-the-art computer-vision methods can perform action recognition.
%S 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V %I Springer International Publishing %C Zurich, Switzerland %V 8693 %P 612–627 %G eng %! Computer Vision – ECCV 2014 %R 10.1007/978-3-319-10602-1_40