Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas. (2020).
Evidence that recurrent pathways between the prefrontal and inferior temporal cortex are critical during core object recognition. COSYNE (2020).
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Face selective patches in marmoset frontal cortex. Nature Communications 11, (2020).
Fast Recurrent Processing via Ventrolateral Prefrontal Cortex Is Needed by the Primate Ventral Stream for Robust Core Visual Object Recognition. Neuron (2020). doi:10.1016/j.neuron.2020.09.035
The fine structure of surprise in intuitive physics: when, why, and how much? Proceedings of the 42nd Annual Meeting of the Cognitive Science Society - Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020, virtual, July 29 - August 1, 2020 (2020). at <https://cogsci.mindmodeling.org/2020/papers/0761/index.html>
Function approximation by deep networks. Communications on Pure & Applied Analysis 19, 4085 - 4095 (2020).
Gross means Great. Progress in Neurobiology 195, 101924 (2020).
Hierarchical neural network models that more closely match primary visual cortex tend to better explain higher level visual cortical responses. COSYNE (2020).
Hierarchical structure is employed by humans during visual motion perception. Proceedings of the National Academy of Sciences 117, 24581 - 24589 (2020).
Hippocampal remapping as hidden state inference. eLife 9, (2020).
Incorporating intrinsic suppression in deep neural networks captures dynamics of adaptation in neurophysiology and perception. Science Advances 6, eabd4205 (2020).
Infants represent 'like-kin' affiliation. Budapest Conference on Cognitive Development (2020).
Infants’ sensitivity to shape changes in 2D visual forms. Infancy 25, 618 - 639 (2020).
The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys. Nature Communications 11, (2020).
Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence. Neuron 108, 413 - 423 (2020).
Learning a Natural-language to LTL Executable Semantic Parser for Grounded Robotics. (Proceedings of Conference on Robot Learning (CoRL-2020), 2020). at <https://corlconf.github.io/paper_385/>
Learning a natural-language to LTL executable semantic parser for grounded robotics. (2020). doi:10.48550/arXiv.2008.03277
Learning abstract structure for drawing by efficient motor program induction. Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) (2020). at <https://papers.nips.cc/paper/2020/hash/1c104b9c0accfca52ef21728eaf01453-Abstract.html>
Learning Compositional Rules via Neural Program Synthesis. Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) (2020). at <https://proceedings.neurips.cc/paper/2020/hash/7a685d9edd95508471a9d3d6fcace432-Abstract.html>
Learning from multiple informants: Children’s response to epistemic bases for consensus judgments. Journal of Experimental Child Psychology 192, 104759 (2020).