Publications
Deep compositional robotic planners that follow natural language commands. International Conference on Robotics and Automation (ICRA) (2020).
Trajectory Prediction with Linguistic Representations. CBMM Memo No. 132 (2022).
Deep compositional robotic planners that follow natural language commands. CBMM Memo No. 124 (2020).
Compositional RL Agents That Follow Language Commands in Temporal Logic. Frontiers in Robotics and AI 8, (2021).
Deep sequential models for sampling-based planning. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018) (2018). doi:10.1109/IROS.2018.8593947
Compositional Networks Enable Systematic Generalization for Grounded Language Understanding. CBMM Memo No. 129 (2021).
Deep Compositional Robotic Planners that Follow Natural Language Commands. Workshop on Visually Grounded Interaction and Language (ViGIL) at the Thirty-third Annual Conference on Neural Information Processing Systems (NeurIPS) (2019). at <https://vigilworkshop.github.io/>
Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas. CBMM Memo No. 125 (2020).
Compositional RL Agents That Follow Language Commands in Temporal Logic. CBMM Memo No. 127 (2021).
Encoding formulas as deep networks: Reinforcement learning for zero-shot execution of LTL formulas. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2020). doi:10.1109/IROS45743.2020.9341325
Trajectory Prediction with Linguistic Representations. 2022 IEEE International Conference on Robotics and Automation (ICRA) (2022). doi:10.1109/ICRA46639.2022.9811928
Color-Biased Regions of the Ventral Visual Pathway Lie between Face- and Place-Selective Regions in Humans, as in Macaques. Journal of Neuroscience 36, 1682-1697 (2016).
Human-level concept learning through probabilistic program induction. Science 350, 1332-1338 (2015).
Building machines that learn and think like people. (2016).
Two areas for familiar face recognition in the primate brain. Science 357, 591-595 (2017).
A fast link between face perception and memory in the temporal pole. Science eabi6671 (2021). doi:10.1126/science.abi6671
NSF Science and Technology Centers – The Class of 2013. (2013).
High-frequency oscillations in human and monkey neocortex during the wake–sleep cycle. Proceedings of the National Academy of Sciences (2016). doi:10.1073/pnas.1523583113
Sustained Activity Encoding Working Memories: Not Fully Distributed. Trends in Neurosciences 40, 328-346 (2017).
An empirical assay of view-invariant object learning in humans and comparison with baseline image-computable models. bioRxiv (2023). at <https://www.biorxiv.org/content/10.1101/2022.12.31.522402v1>
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation. (2016).
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).