Publications

T
Kool, W., Gershman, S. J. & Cushman, F. A. Thinking fast or slow? A reinforcement-learning approach. Society for Personality and Social Psychology (2017).
Miconi, T., Groomes, L. & Kreiman, G. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task. Cerebral Cortex 26(7), 3064-3082 (2016).
Miconi, T., Groomes, L. & Kreiman, G. There’s Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task [dataset]. (2016).
Miconi, T., Groomes, L. & Kreiman, G. There’s Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task [code]. (2016).
Dasgupta, I., Schulz, E., Tenenbaum, J. B. & Gershman, S. J. A theory of learning to infer. Psychological Review 127, 412-441 (2020).
Cano-Córdoba, F., Sarma, S. & Subirana, B. Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using “Forgetting Neural Networks”. (2017).
Poggio, T. et al. Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
Zhang, C. et al. Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
Banburski, A. et al. Theory III: Dynamics and Generalization in Deep Networks. (2018).
Poggio, T. & Liao, Q. Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
Poggio, T. & Liao, Q. Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
Poggio, T. & Liao, Q. Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Liao, Q., Banburski, A. & Poggio, T. Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Dehghani, N. Theoretical principles of multiscale spatiotemporal control of neuronal networks: a complex systems perspective. (2017). doi:10.1101/097618
Poggio, T., Banburski, A. & Liao, Q. Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
Poggio, T., Banburski, A. & Liao, Q. Theoretical Issues in Deep Networks. (2019).
Varela, C. & Wilson, M. A. Thalamic contribution to CA1-mPFC interactions during sleep. Society for Neuroscience's Annual Meeting - SfN 2017 (2017).
Liu, S., Ullman, T., Tenenbaum, J. B. & Spelke, E. S. Ten-month-old infants infer value from effort. Society for Research in Child Development (2017).
Liu, S., Ullman, T. D., Tenenbaum, J. B. & Spelke, E. S. Ten-month-old infants infer the value of goals from the costs of actions. Science 358, 1038-1041 (2017).
Liu, Y. et al. Temporally delayed linear modelling (TDLM) measures replay in both animals and humans. eLife 10, (2021).
Schrimpf, M., Sato, F., Sanghavi, S. & DiCarlo, J. J. Temporal information for action recognition only needs to be integrated at a choice level in neural networks and primates. COSYNE (2020).
Paul, R., Barbu, A., Felshin, S., Katz, B. & Roy, N. Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Mao, J. et al. Temporal and Object Quantification Networks. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21) (Zhou, Z.-H., ed.) (2021). doi:10.24963/ijcai.2021/386