Title: Individual Differences in Face Looking Behavior Generalize from the Lab to the World
Publication Type: Journal Article
Year of Publication: 2016
Authors: Peterson, MF, Lin, J, Zaun, I, Kanwisher, N
Journal: Journal of Vision
Recent laboratory studies have found large, stable individual differences in the location people first fixate when identifying faces, ranging from the brows to the mouth. Importantly, this variation is strongly associated with differences in fixation-specific identification performance, such that an individual’s recognition ability is maximized when looking at their preferred location (Mehoudar, Arizpe, Baker, & Yovel, 2014; Peterson & Eckstein, 2013). This finding suggests that face representations are retinotopic and that individuals adopt gaze strategies that optimize identification, yet the extent to which this behavior reflects real-world gaze behavior is unknown. Here, we used mobile eye-trackers to test whether individual differences in face gaze generalize from laboratory to real-world vision. In-lab fixations were measured with a speeded face identification task, while real-world behavior was measured as subjects freely walked around the MIT campus. We found a strong correlation between the patterns of individual differences in face gaze in the laboratory and real-world settings. Our findings support the hypothesis that individuals optimize real-world face identification by consistently fixating the same location, thus strongly constraining the space of retinotopic input. The methods developed for this study entailed collecting a large set of high-definition, wide field-of-view natural videos from head-mounted cameras, along with the viewer’s fixation position, allowing us to characterize subjects’ actually experienced real-world retinotopic images. These images enable us to ask how vision is optimized not just for the statistics of the “natural images” found in web databases, but for the truly natural, retinotopic images that have landed on actual human retinae during real-world experience.
- CBMM Funded