%0 Journal Article %J Annual Review of Vision Science %D 2018 %T Invariant Recognition Shapes Neural Representations of Visual Input %A Andrea Tacchetti %A Leyla Isik %A Tomaso Poggio %K computational neuroscience %K Invariance %K neural decoding %K visual representations %X

Recognizing the people, objects, and actions in the world around us is a crucial aspect of human perception that allows us to plan and act in our environment. Remarkably, our proficiency in recognizing semantic categories from visual input is unhindered by transformations that substantially alter their appearance (e.g., changes in lighting or position). The ability to generalize across these complex transformations is a hallmark of human visual intelligence, which has been the focus of wide-ranging investigation in systems and computational neuroscience. However, while the neural machinery of human visual perception has been thoroughly described, the computational principles dictating its functioning remain unknown. Here, we review recent results in brain imaging, neurophysiology, and computational neuroscience in support of the hypothesis that the ability to support the invariant recognition of semantic entities in the visual world shapes which neural representations of sensory input are computed by human visual cortex.
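
One way to make the abstract's notion of invariance concrete is to test whether a readout trained on a representation under one transformation generalizes to another. The sketch below is a minimal, hypothetical illustration on synthetic features (none of the data, names, or parameters come from the review): if cross-condition accuracy stays high, the representation supports invariant recognition.

```python
# Hypothetical sketch: measure invariance of a representation as the ability of a
# linear readout trained under one transformation condition (e.g., one lighting)
# to generalize to another. Synthetic data; not from the review.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 50

class_means = rng.normal(size=(2, n_features))                   # two semantic classes
condition_offsets = rng.normal(scale=0.5, size=(2, n_features))  # two "transformations"

def sample(cls, cond):
    # Features = class identity + condition-specific shift + noise.
    return class_means[cls] + condition_offsets[cond] + rng.normal(size=(n_per_class, n_features))

X_train = np.vstack([sample(0, 0), sample(1, 0)])  # train on condition A only
X_test = np.vstack([sample(0, 1), sample(1, 1)])   # test on condition B only
y = np.repeat([0, 1], n_per_class)

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
print("cross-condition accuracy:", clf.score(X_test, y))  # high => invariant readout
```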

%B Annual Review of Vision Science %V 4 %P 403 - 422 %8 10/2018 %G eng %U https://www.annualreviews.org/doi/10.1146/annurev-vision-091517-034103 %N 1 %! Annu. Rev. Vis. Sci. %R 10.1146/annurev-vision-091517-034103

%0 Journal Article %J Cell Reports %D 2018 %T Real-Time Readout of Large-Scale Unsorted Neural Ensemble Place Codes %A Hu, Sile %A Ciliberti, Davide %A Grosmark, Andres D. %A Michon, Frédéric %A Ji, Daoyun %A Hector Penagos %A Buzsáki, György %A Matthew A. Wilson %A Kloosterman, Fabian %A Chen, Zhe %K GPU %K memory replay %K neural decoding %K place codes %K population decoding %K spatiotemporal patterns %X

Uncovering spatial representations from large-scale ensemble spike activity in specific brain circuits provides valuable feedback in closed-loop experiments. We develop a graphics processing unit (GPU)-powered population-decoding system for ultrafast reconstruction of spatial positions from rodents’ unsorted spatiotemporal spiking patterns during run behavior or sleep. In comparison with an optimized quad-core central processing unit (CPU) implementation, our approach achieves an ∼20- to 50-fold increase in speed in eight tested rat hippocampal, cortical, and thalamic ensemble recordings, with real-time decoding speed (a fraction of a millisecond per spike) and scalability up to thousands of channels. By accommodating parallel shuffling in real time (computation time <15 ms), our approach enables assessment of the statistical significance of online-decoded “memory replay” candidates during quiet wakefulness or sleep. This open-source software toolkit supports the decoding of spatial correlates or content-triggered experimental manipulation in closed-loop neuroscience experiments.
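
For a sense of the underlying computation, here is a minimal sketch of Bayesian position decoding from binned spike counts under a Poisson model. It is a simplified, sorted-unit stand-in with synthetic place fields; the paper's decoder works on unsorted spikes via kernel density estimates, and its open-source toolkit implements a far more optimized GPU pipeline.

```python
# Minimal sketch of Bayesian place decoding from binned spike counts (Poisson
# likelihood, flat prior). The paper decodes *unsorted* spikes with kernel
# density estimates on a GPU; this sorted-unit toy version only illustrates the
# idea. Most of this array math runs unchanged under CuPy for GPU execution.
import numpy as np

def decode_position(counts, tuning, dt):
    """counts: (n_units,) spike counts in one time bin.
    tuning: (n_units, n_pos) expected firing rates (Hz) at each position bin.
    dt: bin width in seconds. Returns a posterior over position bins."""
    lam = tuning * dt                          # expected counts per position
    log_post = counts @ np.log(lam + 1e-12) - lam.sum(axis=0)
    log_post -= log_post.max()                 # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy data: 30 units with Gaussian place fields over 100 position bins.
rng = np.random.default_rng(1)
pos = np.linspace(0, 1, 100)
centers = rng.uniform(0, 1, size=(30, 1))
tuning = 5 + 20 * np.exp(-((pos - centers) ** 2) / (2 * 0.05**2))

true_bin = 40
counts = rng.poisson(tuning[:, true_bin] * 0.25)   # one 250 ms bin
print("decoded bin:", decode_position(counts, tuning, 0.25).argmax())
```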

%B Cell Reports %V 25 %P 2635 - 2642.e5 %8 12/2018 %G eng %U https://www.sciencedirect.com/science/article/pii/S2211124718317960 %N 10 %! Cell Reports %R 10.1016/j.celrep.2018.11.033

%0 Journal Article %J NeuroImage %D 2018 %T What is changing when: decoding visual information in movies from human intracranial recordings %A Leyla Isik %A Jedediah Singer %A Nancy Kanwisher %A Madsen JR %A Anderson WS %A Gabriel Kreiman %K Electrocorticography (ECoG) %K Movies %K Natural vision %K neural decoding %K object recognition %K Ventral pathway %X

The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that change continuously, without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy for natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using only field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even from a single movie presentation, generalizing across the wide range of transformations present in a movie. These results provide a methodological framework for studying cognition during dynamic and natural vision.
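
As a simplified illustration of the detection step, the sketch below trains a linear classifier to tell apart short field-potential windows that do or do not contain a visual transition, using simulated per-electrode power features. The electrode count, features, and classifier here are stand-ins, not the study's pipeline.

```python
# Hypothetical sketch: detect visual transitions from field-potential windows by
# classifying per-electrode power features (simulated data, stand-in pipeline).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_windows, n_electrodes = 200, 32
labels = rng.integers(0, 2, n_windows)   # 1 = window contains a visual transition

power = rng.normal(size=(n_windows, n_electrodes))  # per-electrode broadband power
power[labels == 1] += 0.8                           # transitions evoke extra power

acc = cross_val_score(LinearSVC(dual=False), power, labels, cv=5)
print(f"change-detection accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```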

%B NeuroImage %V 180, Part A %P 147-159 %8 10/2018 %G eng %U https://www.sciencedirect.com/science/article/pii/S1053811917306742 %) Available online 18 August 2017 %R 10.1016/j.neuroimage.2017.08.027

%0 Journal Article %J Cerebral Cortex %D 2017 %T Differential Processing of Isolated Object and Multi-item Pop-Out Displays in LIP and PFC. %A Ethan Meyers %A Andy Liang %A Fumi Katsuki %A Christos Constantinidis %K Attention %K lateral intraparietal area %K neural decoding %K posterior parietal cortex %K prefrontal cortex %X

Objects that are highly distinct from their surroundings appear to visually "pop-out." This effect is present for displays in which (1) a single cue object is shown on a blank background, and (2) a single cue object is highly distinct from surrounding objects; it is generally assumed that these 2 display types are processed in the same way. To directly examine this, we applied a decoding analysis to neural activity recorded from the lateral intraparietal (LIP) area and the dorsolateral prefrontal cortex (dlPFC). Our analyses showed that for the single-object displays, cue location information appeared earlier in LIP than in dlPFC. However, for the displays with distractors, location information was substantially delayed in both brain regions, and information first appeared in dlPFC. Additionally, we found that the pattern of neural activity is similar for both types of displays and across different color transformations of the stimuli, indicating that location information is being coded in the same way regardless of display type. These results lead us to hypothesize that 2 different pathways are involved in processing these 2 types of pop-out displays.
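
The latency comparison in this abstract rests on time-resolved decoding: classify the cue location separately in each time bin and ask when accuracy first clears chance. Below is a minimal sketch on simulated firing rates; the study's actual classifier, cross-validation scheme, and significance criterion differ, and the 0.15 margin here is an arbitrary illustrative threshold.

```python
# Minimal sketch of latency estimation via time-resolved decoding: decode cue
# location in each time bin, report the first bin clearly above chance.
# Simulated data; threshold and classifier are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_neurons, n_bins, n_locations = 120, 40, 20, 4
y = rng.integers(0, n_locations, n_trials)

X = rng.normal(size=(n_trials, n_bins, n_neurons))   # trials x time x neurons
signal = rng.normal(size=(n_locations, n_neurons))   # location-selective pattern
X[:, 8:, :] += signal[y][:, None, :]                 # information appears at bin 8

acc = np.array([cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, t, :], y, cv=5).mean() for t in range(n_bins)])
chance = 1.0 / n_locations
print("information latency: bin", int(np.argmax(acc > chance + 0.15)))  # expect 8
```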

%B Cerebral Cortex %8 10/2017 %G eng %U https://academic.oup.com/cercor/advance-article/doi/10.1093/cercor/bhx243/4430784 %R 10.1093/cercor/bhx243

%0 Journal Article %J J Neurophysiol %D 2017 %T A fast, invariant representation for human action in the visual system. %A Leyla Isik %A Andrea Tacchetti %A Tomaso Poggio %K action recognition %K magnetoencephalography %K neural decoding %K vision %X

Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have identified the brain regions involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discriminates between different actions, and when it does so in a manner that is invariant to changes in 3D viewpoint. We measure the latency difference between invariant and non-invariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. We were unable to detect a difference in decoding latency or temporal profile between invariant and non-invariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time, and that both form and motion information are crucial for fast, invariant action recognition.
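
A hedged sketch of the invariant vs. non-invariant comparison: at each time point, train an action classifier on one viewpoint and test either on held-out trials from the same viewpoint (non-invariant decoding) or on the other viewpoint (invariant decoding), then compare when each accuracy trace rises above chance. Everything below is simulated and illustrative; in this toy data the action signal is built to be fully view-invariant, so the two latencies should coincide, mirroring the paper's full-video result.

```python
# Illustrative sketch: within-view vs. across-view decoding latency over time.
# Simulated MEG-like sensor patterns; the synthetic action signal is view-
# invariant by construction, so both latencies land at the same time bin.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_sensors, n_times, n_actions = 200, 60, 30, 5
acts = rng.integers(0, n_actions, n_trials)
views = rng.integers(0, 2, n_trials)               # two 3D viewpoints

X = rng.normal(size=(n_trials, n_times, n_sensors))
pattern = rng.normal(size=(n_actions, n_sensors))  # shared across viewpoints
X[:, 10:, :] += pattern[acts][:, None, :]          # signal onset at bin 10

def acc_at(t, train_idx, test_idx):
    clf = LinearSVC(dual=False).fit(X[train_idx, t, :], acts[train_idx])
    return clf.score(X[test_idx, t, :], acts[test_idx])

v0, v1 = np.flatnonzero(views == 0), np.flatnonzero(views == 1)
half = len(v0) // 2
within = np.array([acc_at(t, v0[:half], v0[half:]) for t in range(n_times)])
across = np.array([acc_at(t, v0, v1) for t in range(n_times)])

thr = 1.0 / n_actions + 0.15                       # arbitrary margin above chance
print("within-view latency bin:", int(np.argmax(within > thr)))
print("across-view latency bin:", int(np.argmax(across > thr)))
```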

%B J Neurophysiol %P jn.00642.2017 %8 11/2017 %G eng %R 10.1152/jn.00642.2017