Decoding task and stimulus representation in face-responsive cortex

Publication Type: Conference Abstract
Year of Publication: 2015
Authors: Kliemann, D, Jacoby, N, Anzellotti, S, Saxe, R
Notes: To be presented


Introduction

Faces are a rich source of information, for example about others' identity, stable traits (e.g., age, gender, race), and fleeting states of mind (e.g., gaze, emotional expression). Some of these features may be processed automatically (Critchley et al., 2000), but observers can also deliberately attend to certain features while ignoring others. Inspired by evidence that attention dramatically shifts the representation of objects in the ventral visual stream (Harel et al., 2014), we investigated how shifting the focus of attention between facial features affects the representation of faces in face-responsive cortex.

Methods

We measured neural responses with functional magnetic resonance imaging (fMRI) while subjects (n = 28, 18 male, mean age = 26.6 years) watched 4s naturalistic movie clips of dynamic positive and negative facial expressions. Over 8 runs, subjects watched 192 unique movies. Subjects were instructed to judge either the person's age (age task: over vs. under 40 years old) or the valence of their emotional expression (emotion task: positive vs. negative). The delay between task prompt and stimulus was jittered (mean 8s). Reaction times (RTs) for each trial were included as a parametric nuisance regressor. In addition to the main task, subjects completed a functional localizer (faces vs. objects) to independently identify face-responsive regions of interest (ROIs) in bilateral posterior and anterior superior temporal sulcus (pSTS, aSTS), dorsal and ventral medial prefrontal cortex (d/vMPFC), and the right fusiform face area (rFFA).
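As a concrete illustration of the RT nuisance modeling, here is a minimal numpy/scipy sketch of building a trial-wise parametric regressor: RT-modulated stick functions placed at stimulus onsets and convolved with a canonical HRF. All timings, the TR, and the HRF parameters below are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t):
    """Rough canonical HRF: gamma-shaped peak minus a later undershoot."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Hypothetical example values -- not the study's actual timings.
tr = 2.0                                   # repetition time (s)
n_scans = 200
onsets = np.array([10.0, 30.0, 55.0])      # jittered stimulus onsets (s)
rts = np.array([0.9, 1.4, 1.1])            # per-trial reaction times (s)

# Stick function at stimulus onsets, amplitude-modulated by mean-centred
# RTs, convolved with the HRF to form the parametric nuisance regressor.
dt = 0.1
grid = np.arange(0.0, n_scans * tr, dt)
sticks = np.zeros_like(grid)
sticks[np.searchsorted(grid, onsets)] = rts - rts.mean()

hrf = double_gamma_hrf(np.arange(0.0, 32.0, dt))
regressor = np.convolve(sticks, hrf)[: grid.size]

# Downsample to scan resolution for inclusion in the GLM design matrix.
rt_regressor = regressor[:: int(round(tr / dt))]
print(rt_regressor.shape)                  # (200,)
```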

We used split-half multivariate pattern analyses (MVPA; Haxby et al., 2001) to test whether the pattern of BOLD response in each ROI (and in the whole brain, using a searchlight analysis) contained information about the valence of the emotional expression and/or the participant's task.
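For readers unfamiliar with the method, here is a minimal sketch of split-half correlation decoding in the style of Haxby et al. (2001), run on simulated ROI patterns; the variable names and data are illustrative, not the study's.

```python
import numpy as np

def split_half_decode(half1, half2):
    """Haxby-style split-half MVPA.

    half1, half2: dicts mapping condition -> 1-D voxel pattern (e.g. mean
    ROI betas from odd vs. even runs). The ROI is said to carry condition
    information if within-condition correlations across halves exceed
    between-condition correlations.
    """
    conds = list(half1)
    within = np.mean([np.corrcoef(half1[c], half2[c])[0, 1] for c in conds])
    between = np.mean([np.corrcoef(half1[a], half2[b])[0, 1]
                       for a in conds for b in conds if a != b])
    return within - between                  # > 0 means decodable

# Simulated 100-voxel ROI patterns for two valence conditions.
rng = np.random.default_rng(0)
signal = {c: rng.normal(size=100) for c in ("positive", "negative")}
half1 = {c: s + rng.normal(scale=0.5, size=100) for c, s in signal.items()}
half2 = {c: s + rng.normal(scale=0.5, size=100) for c, s in signal.items()}
print(split_half_decode(half1, half2))       # positive -> valence decodable
```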

Results

Subjects performing significantly below chance on the emotion task in two or more runs (n = 3) were excluded, leaving 25 subjects. RTs were slower for negative vs. positive emotional expressions (F(1,24) = 7.3, p = .013) and for the age vs. the emotion task (F(1,24) = 23.4, p < .001).
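The abstract does not state the exact RT model; one plausible reading, given the reported F(1,24) statistics, is a 2 (task) x 2 (valence) repeated-measures ANOVA over 25 subjects' cell-mean RTs. A sketch with simulated data, using statsmodels' AnovaRM:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data: one mean RT per subject x task x valence cell.
rng = np.random.default_rng(1)
rows = []
for subj in range(25):                     # 25 subjects, as implied by df = 24
    for task in ("age", "emotion"):
        for valence in ("positive", "negative"):
            rt = (1.0 + 0.15 * (task == "age")
                  + 0.05 * (valence == "negative")
                  + rng.normal(scale=0.1))
            rows.append({"subject": subj, "task": task,
                         "valence": valence, "rt": rt})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with task and valence as within-subject factors.
res = AnovaRM(df, depvar="rt", subject="subject",
              within=["task", "valence"]).fit()
print(res.anova_table)                     # F tests on (1, 24) df
```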

Consistent with prior literature (Adolphs et al., 2002), attending to emotional expressions led to increased neural responses in the STS and insula. Attending to the person's age led to increased responses in rFFA, intraparietal sulcus, and lateral prefrontal cortex; the latter is potentially related to the greater difficulty and ambiguity of the age task.

When subjects were attending to the emotional expression, the valence of the expression could be decoded from neural patterns in rpSTS and MPFC (Fig. 1, whole-brain searchlight, p < .005). However, when subjects were attending to the person's age, the valence of the emotional expression could no longer be decoded.

By contrast, subjects' task could be decoded in all regions of the face network (all p < .007). Further analyses showed that the patterns in these regions (except rFFA) could still decode the task even when (i) generalizing across emotional valence and (ii) controlling for task difficulty by analyzing a matched subset of trials. Importantly, these face-responsive regions contained information about the task only while subjects were viewing the video of the face, not at the time of the task prompt or during the delay between the prompt and the video (Fig. 2).
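The study's decoding used split-half correlations; as an illustrative stand-in, the sketch below shows the logic of the cross-valence generalization test with a linear classifier on simulated trial patterns: train on trials of one valence, test on trials of the other, so above-chance accuracy indicates a task code that is not tied to a specific valence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 100
task_axis = rng.normal(size=n_voxels)      # shared, valence-independent task code

def simulate_trials():
    """Simulate trial-by-voxel patterns with task labels (0 = age, 1 = emotion)."""
    labels = rng.integers(0, 2, n_trials)
    X = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels - 0.5, task_axis)
    return X, labels

X_pos, y_pos = simulate_trials()           # positive-valence trials
X_neg, y_neg = simulate_trials()           # negative-valence trials

# Train on one valence, test on the other (and vice versa): above-chance
# accuracy implies the task code generalizes across emotional valence.
acc_fwd = LogisticRegression(max_iter=1000).fit(X_pos, y_pos).score(X_neg, y_neg)
acc_bwd = LogisticRegression(max_iter=1000).fit(X_neg, y_neg).score(X_pos, y_pos)
print((acc_fwd + acc_bwd) / 2)
```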

Conclusions

Our results suggest that subjects' deliberate focus of attention dramatically shapes the information represented about faces in face-responsive cortical regions. While subjects watched the same videos of dynamic emotional facial expressions, information about the expression's valence could be decoded only when they were attending to emotion; we found no evidence of stimulus-driven representations that were unaffected by task. At the same time, subjects' task was strongly represented in all face regions, but only while a face was being viewed. These results reveal a powerful influence of top-down signals on cortical representations of faces.

CBMM Relationship: CBMM Related