Module 2 Research Update

Date Posted: February 17, 2021
Date Recorded: February 16, 2021
CBMM Speaker(s): Gabriel Kreiman, Mengmi Zhang, Will Xiao
Speaker(s): Jie Zheng

Abstracts:

Speaker: Mengmi Zhang
Title: The combination of eccentricity, bottom-up, and top-down cues explains conjunction and asymmetric visual search
Abstract: Visual search requires complex interactions between visual processing, eye movements, object recognition, memory, and decision making. Elegant psychophysics experiments have described the task characteristics and stimulus properties that facilitate or slow down visual search behavior. En route towards a quantitative framework that accounts for the mechanisms orchestrating visual search, here we propose an image-computable, biologically inspired computational model that takes a target and a search image as inputs and produces a sequence of eye movements. To compare the model against human behavior, we consider nine foundational experiments that demonstrate two intriguing principles of visual search: (i) asymmetric search costs when looking for a certain object A among distractors B versus the reverse situation of locating B among distractors A; and (ii) the increase in search costs associated with feature conjunctions. The proposed computational model has three main components: an eccentricity-dependent visual feature processor learnt from natural image statistics, bottom-up saliency, and target-dependent top-down cues. Without any prior exposure to visual search stimuli or any task-specific training, the model demonstrates the essential properties of search asymmetries and the slower reaction times associated with feature conjunction tasks. Furthermore, the model generalizes to real-world search tasks in complex natural environments. The proposed model unifies previous theoretical frameworks into an image-computable architecture that can be directly and quantitatively compared against psychophysics experiments and that provides a mechanistic basis that can be evaluated in terms of the underlying neuronal circuits.
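
The abstract's three-component architecture can be illustrated with a minimal sketch. Everything below (the function names, the multiplicative combination rule, the toy one-channel feature maps, and the inhibition-of-return mechanism) is an assumption for illustration, not the authors' implementation:

```python
# Minimal sketch of a three-component visual search loop:
# eccentricity-dependent processing x bottom-up saliency x top-down cues,
# with winner-take-all fixation selection and inhibition of return.
import numpy as np

def eccentricity_weights(h, w, fixation, slope=0.05):
    """Visual acuity falls off with distance from the current fixation."""
    ys, xs = np.mgrid[0:h, 0:w]
    return 1.0 / (1.0 + slope * np.hypot(ys - fixation[0], xs - fixation[1]))

def bottom_up_saliency(feature_map):
    """Toy saliency: deviation from the global mean feature value."""
    return np.abs(feature_map - feature_map.mean())

def top_down_similarity(feature_map, target_value):
    """Toy template match: closeness to the target's feature value."""
    return 1.0 / (1.0 + np.abs(feature_map - target_value))

def search(feature_map, target_value, max_fixations=20, ior_radius=3.0):
    """Greedy winner-take-all search with inhibition of return."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fixation = (h // 2, w // 2)               # start at the image center
    visited = np.zeros((h, w), dtype=bool)    # inhibition-of-return mask
    scanpath = [fixation]
    for _ in range(max_fixations):
        visited |= np.hypot(ys - fixation[0], xs - fixation[1]) <= ior_radius
        attention = (eccentricity_weights(h, w, fixation)
                     * bottom_up_saliency(feature_map)
                     * top_down_similarity(feature_map, target_value))
        attention[visited] = -np.inf          # do not revisit
        fixation = tuple(np.unravel_index(np.argmax(attention), (h, w)))
        scanpath.append(fixation)
        if feature_map[fixation] == target_value:   # target fixated: stop
            break
    return scanpath

# Toy example: one target value hidden among uniform distractors.
rng = np.random.default_rng(0)
scene = np.full((32, 32), 2.0)
scene[rng.integers(32), rng.integers(32)] = 5.0
print(search(scene, target_value=5.0))
```

Note that nothing in this loop is trained on search displays; as in the abstract, search behavior emerges from the combination of the three maps.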

Speaker: Jie Zheng
Title: Neurons detect cognitive boundaries to structure episodic memories in humans
Abstract: While experience is continuous, memories are organized as discrete events. Cognitive boundaries are thought to segment experience and structure memory, but how this process is implemented remains unclear. We recorded the activity of single neurons in the human medial temporal lobe during the formation and retrieval of memories with complex narratives. Neurons responded to abstract cognitive boundaries between different episodes. Boundary-induced neural state changes during encoding predicted subsequent recognition accuracy but impaired event order memory, mirroring a fundamental behavioral tradeoff between content and time memory. Furthermore, the neural state following boundaries was reinstated during both successful retrieval and false memories. These findings reveal a neuronal substrate for detecting cognitive boundaries that transform experience into mnemonic episodes and structure mental time travel during retrieval.
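
One way to make the "boundary-induced neural state change" concrete is to compare population firing-rate vectors just before and just after a boundary. The sketch below assumes cosine distance over fixed windows of binned spike counts; the metric, window size, and variable names are illustrative assumptions, not the authors' analysis:

```python
# Illustrative quantification of a neural state change across a boundary:
# cosine distance between pre- and post-boundary population rate vectors.
import numpy as np

def boundary_state_change(spike_counts, boundary_bin, window=5):
    """spike_counts: (n_neurons, n_time_bins) array of binned firing rates."""
    pre = spike_counts[:, boundary_bin - window:boundary_bin].mean(axis=1)
    post = spike_counts[:, boundary_bin:boundary_bin + window].mean(axis=1)
    cos = pre @ post / (np.linalg.norm(pre) * np.linalg.norm(post) + 1e-12)
    return 1.0 - cos          # 0 = identical states, larger = bigger change

# Toy example: 50 neurons, 100 time bins, half the population shifts at bin 50.
rng = np.random.default_rng(1)
rates = rng.poisson(3.0, size=(50, 100)).astype(float)
rates[:25, 50:] += 4.0
print(boundary_state_change(rates, boundary_bin=50))
```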

Speaker: Will Xiao
Title: Adversarial images for the primate brain
Abstract: Deep artificial neural networks have been proposed as a model of primate vision. However, these networks are vulnerable to adversarial attacks, whereby introducing minimal noise can fool networks into misclassifying images. Primate vision is thought to be robust to such adversarial images. We evaluated this assumption by designing adversarial images to fool primate vision. To do so, we first trained a model to predict responses of face-selective neurons in macaque inferior temporal cortex. Next, we modified images, such as human faces, to match their model-predicted neuronal responses to a target category, such as monkey faces, with a small budget for pixel value change. These adversarial images elicited neuronal responses similar to the target category. Remarkably, the same images fooled monkeys and humans at the behavioral level. These results call for closer inspection of the adversarial sensitivity of primate vision, and show that a model of visual neuron activity can be used to specifically direct primate behavior.
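
The image-modification step described above is, in spirit, a targeted adversarial attack constrained to a small pixel budget. Below is a minimal PGD-style sketch; the stand-in response predictor, the mean-squared-error objective, and all hyperparameters are illustrative assumptions rather than the authors' method:

```python
# Sketch of a targeted attack: nudge an image, within an L-infinity pixel
# budget, so a neural-response model predicts the target category's responses.
import torch

def targeted_attack(model, image, target_response, eps=8/255, steps=40, lr=1/255):
    """Return an adversarial image within an L-infinity ball of radius eps."""
    x0 = image.clone()
    x = image.clone().requires_grad_(True)
    for _ in range(steps):
        # Drive the model-predicted neuronal responses toward the target pattern.
        loss = torch.nn.functional.mse_loss(model(x), target_response)
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad.sign()                   # signed-gradient step
            x.copy_(x0 + (x - x0).clamp(-eps, eps))   # enforce the pixel budget
            x.clamp_(0.0, 1.0)                        # keep a valid image
        x.grad.zero_()
    return x.detach()

# Toy stand-in for a predictor trained on recorded face-selective neurons.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 16))
image = torch.rand(1, 3, 64, 64)         # e.g., a human face image
target = torch.randn(1, 16)              # e.g., mean responses to monkey faces
adv = targeted_attack(model, image, target)
print((adv - image).abs().max())         # perturbation stays within the budget
```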
