Publications
Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2020). doi:10.1109/IROS45743.2020.9341325
Evidence that recurrent pathways between the prefrontal and inferior temporal cortex is critical during core object recognition. COSYNE (2020).
An Exit Strategy from the Covid-19 Lockdown based on Risk-sensitive Resource Allocation. CBMM Memo 106 (2020).
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Fast Recurrent Processing via Ventrolateral Prefrontal Cortex Is Needed by the Primate Ventral Stream for Robust Core Visual Object Recognition. Neuron (2020). doi:10.1016/j.neuron.2020.09.035
The fine structure of surprise in intuitive physics: when, why, and how much?. Proceedings of the 42nd Annual Meeting of the Cognitive Science Society - Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020, virtual, July 29 - August 1, 2020 (2020). at <https://cogsci.mindmodeling.org/2020/papers/0761/index.html>
For interpolating kernel machines, the minimum norm ERM solution is the most stable. CBMM Memo 108 (2020).
Function approximation by deep networks. Communications on Pure & Applied Analysis 19, 4085 - 4095 (2020).
Hierarchical neural network models that more closely match primary visual cortex tend to better explain higher level visual cortical responses. COSYNE (2020).
Hierarchical structure is employed by humans during visual motion perception. Proceedings of the National Academy of Sciences 117, 24581 - 24589 (2020).
Hierarchically Local Tasks and Deep Convolutional Networks. CBMM Memo 109 (2020).
Implicit dynamic regularization in deep networks. (2020).
Incorporating intrinsic suppression in deep neural networks captures dynamics of adaptation in neurophysiology and perception. Science Advances 6, eabd4205 (2020).
Infants represent 'like-kin' affiliation. Budapest Conference on Cognitive Development (2020).
The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys. Nature Communications 11, (2020).
Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence. Neuron 108, 413 - 423 (2020).
Learning a natural-language to LTL executable semantic parser for grounded robotics. CBMM Memo 122 (2020). doi:10.48550/arXiv.2008.03277
Learning a Natural-language to LTL Executable Semantic Parser for Grounded Robotics. Proceedings of the Conference on Robot Learning (CoRL 2020) (2020). at <https://corlconf.github.io/paper_385/>
Learning abstract structure for drawing by efficient motor program induction. Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) (2020). at <https://papers.nips.cc/paper/2020/hash/1c104b9c0accfca52ef21728eaf01453-Abstract.html>
Learning Compositional Rules via Neural Program Synthesis. Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) (2020). at <https://proceedings.neurips.cc/paper/2020/hash/7a685d9edd95508471a9d3d6fcace432-Abstract.html>
Learning from multiple informants: Children’s response to epistemic bases for consensus judgments. Journal of Experimental Child Psychology 192, 104759 (2020).