Export 653 results:
Paul, R., Barbu, A., Felshin, S., Katz, B. & Roy, N. Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Liu, S., Ullman, T. D., Tenenbaum, J. B. & Spelke, E. S. Ten-month-old infants infer the value of goals from the costs of actions. Science 358, 1038-1041 (2017).
Liu, S., Ullman, T., Tenenbaum, J. B. & Spelke, E. S. Ten-month-old infants infer value from effort. Society for Research in Child Development (2017).
Varela, C. & Wilson, M. A. Thalamic contribution to CA1-mPFC interactions during sleep. Society for Neuroscience's Annual Meeting - SfN 2017 (2017).
Dehghani, N. Theoretical principles of multiscale spatiotemporal control of neuronal networks: a complex systems perspective. (2017). doi:10.1101/097618
Poggio, T. & Liao, Q. Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
Zhang, C. et al. Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
Poggio, T. et al. Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
Cano-Córdoba, F., Sarma, S. & Subirana, B. Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks”. (2017).
Kool, W., Gershman, S. J. & Cushman, F. A. Thinking fast or slow? A reinforcement-learning approach. Society for Personality and Social Psychology (2017).
Jing, L. et al. Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNN. 34th International Conference on Machine Learning 70, 1733-1741 (2017).
Landi, S. M. & Freiwald, W. A. Two areas for familiar face recognition in the primate brain. Science 357, 591-595 (2017).
Leibo, J. Z., Liao, Q., Anselmi, F., Freiwald, W. A. & Poggio, T. View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
Isik, L., Singer, J., Madsen, J., Kanwisher, N. & Kreiman, G. What is changing when: Decoding visual information in movies from human intracranial recordings. NeuroImage (2017).
Mhaskar, H., Liao, Q. & Poggio, T. When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
Lin, H. & Tegmark, M. Why does deep and cheap learning work so well? Journal of Statistical Physics 168, 1223-1247 (2017).
Dillon, M. R. & Spelke, E. S. Young children's use of distance and angle information during map reading. Society for Research in Child Development (2017).