Publications
Shape and Material from Sound. Advances in Neural Information Processing Systems 30, 1278–1288 (2017). at <http://papers.nips.cc/paper/6727-shape-and-material-from-sound.pdf>
Six-month-old infants expect agents to minimize the cost of their actions. Cognition 160, 35-42 (2017).
Size-Independent Sample Complexity of Neural Networks. (2017).
Spatial cognition across development. Society for Research in Child Development (2017).
Sustained Activity Encoding Working Memories: Not Fully Distributed. Trends in Neurosciences 40, 328-346 (2017).
Symmetry Regularization. (2017).
Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). doi:10.1109/CVPR.2017.269
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Ten-month-old infants infer the value of goals from the costs of actions. Science 358, 1038-1041 (2017).
Ten-month-old infants infer value from effort. Society for Research in Child Development (2017).
Thalamic contribution to CA1-mPFC interactions during sleep. Society for Neuroscience's Annual Meeting - SfN 2017 (2017).
Theoretical principles of multiscale spatiotemporal control of neuronal networks: a complex systems perspective. (2017). doi:10.1101/097618
Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks”. (2017).
Thinking fast or slow? A reinforcement-learning approach. Society for Personality and Social Psychology (2017).
Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs. 34th International Conference on Machine Learning 70, 1733-1741 (2017).
Two areas for familiar face recognition in the primate brain. Science 357, 591-595 (2017).
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
What is changing when: Decoding visual information in movies from human intracranial recordings. NeuroImage (2017). doi:10.1016/j.neuroimage.2017.08.027
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
Why does deep and cheap learning work so well? Journal of Statistical Physics 168, 1223–1247 (2017).