Publication
Sustained Activity Encoding Working Memories: Not Fully Distributed. Trends in Neurosciences 40, 328-346 (2017).
Symmetry Regularization. (2017).
CBMM-Memo-063.pdf (6.1 MB)
Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). doi:10.1109/CVPR.2017.269
Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes with Deep Generative Networks.pdf (2.86 MB)
Synthesizing theories of human language with Bayesian program induction. Nature Communications 13, (2022).
s41467-022-32012-w.pdf (2.19 MB)
System Identification of Neural Systems: If We Got It Right, Would We Know? Proceedings of the 40th International Conference on Machine Learning, PMLR 202, 12430-12444 (2023).
han23d.pdf (797.48 KB)
A task-optimized neural network replicates human auditory behavior, predicts brain responses, and reveals a cortical processing hierarchy. Neuron 98, (2018).
Task-specific neural processes underlying conflict resolution during cognitive control. BioRxiv (2022). doi:10.1101/2022.01.16.476535
2022.01.16.476535v1.full_.pdf (22.96 MB)
Teachers recruit mentalizing regions to represent learners’ beliefs. Proceedings of the National Academy of Sciences 120, (2023).
Temporal and Object Quantification Networks. Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21) (2021). doi:10.24963/ijcai.2021/386
0386.pdf (472.5 KB)
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Temporal information for action recognition only needs to be integrated at a choice level in neural networks and primates. COSYNE (2020).
Temporally delayed linear modelling (TDLM) measures replay in both animals and humans. eLife 10, (2021).
Ten-month-old infants infer the value of goals from the costs of actions. Science 358, 1038-1041 (2017).
ivc_full_preprint_withsm.pdf (1.6 MB)
Ten-month-old infants infer value from effort. Society for Research in Child Development (2017).
Thalamic contribution to CA1-mPFC interactions during sleep. Society for Neuroscience's Annual Meeting - SfN 2017 (2017).
AbstractSFNfinal.docx (13.14 KB)
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
PNASlast.pdf (915.3 KB)
Theoretical principles of multiscale spatiotemporal control of neuronal networks: a complex systems perspective. (2017). doi:10.1101/097618
StimComplexity.pdf (218.1 KB)
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)