Spatial cognition across development. Society for Research in Child Development (2017).
Sustained Activity Encoding Working Memories: Not Fully Distributed. Trends in Neurosciences 40, 328-346 (2017).
Symmetry Regularization. (2017).
Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). doi:10.1109/CVPR.2017.269
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Ten-month-old infants infer the value of goals from the costs of actions. Science 358, 1038-1041 (2017).
Ten-month-old infants infer value from effort. Society for Research in Child Development (2017).
Thalamic contribution to CA1-mPFC interactions during sleep. Society for Neuroscience Annual Meeting (SfN) (2017).
Theoretical principles of multiscale spatiotemporal control of neuronal networks: a complex systems perspective. (2017). doi:10.1101/097618
Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks”. (2017).
Thinking fast or slow? A reinforcement-learning approach. Society for Personality and Social Psychology (2017).
Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNN. 34th International Conference on Machine Learning 70, 1733-1741 (2017).
Two areas for familiar face recognition in the primate brain. Science 357, 591-595 (2017).
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
What is changing when: Decoding visual information in movies from human intracranial recordings. NeuroImage (2017). doi:10.1016/j.neuroimage.2017.08.027
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
Why does deep and cheap learning work so well? Journal of Statistical Physics 168, 1223-1247 (2017).