Publications
For HyperBFs AGOP is a greedy approximation to gradient descent. CBMM Memo 148 (2024).
Formation of Representations in Neural Networks. CBMM Memo 150 (2024).
On Generalization Bounds for Neural Networks with Low Rank Layers. CBMM Memo 151 (2024).
On the Power of Decision Trees in Auto-Regressive Language Modeling. CBMM Memo 149 (2024).
Self-Assembly of a Biologically Plausible Learning Circuit. CBMM Memo 152 (2024).
On efficiently computable functions, deep networks and sparse compositionality. (2025).
Multiplicative Regularization Generalizes Better Than Additive Regularization. CBMM Memo 158 (2025).
Position: A Theory of Deep Learning Must Include Compositional Sparsity. CBMM Memo 159 (2025).
What if Eye...? Computationally Recreating Vision Evolution. arXiv (2025). <https://arxiv.org/abs/2501.15001>