Publications
Filters: Author is Hrushikesh Mhaskar
An analysis of training and generalization errors in shallow and deep networks. Neural Networks 121, 229-241 (2020).
Function approximation by deep networks. Communications on Pure & Applied Analysis 19, 4085-4095 (2020).
1534-0392_2020_8_4085.pdf (514.57 KB)

An analysis of training and generalization errors in shallow and deep networks. (2019).
CBMM-Memo-098.pdf (687.36 KB)
CBMM Memo 098 v4 (08/2019) (2.63 MB)

An analysis of training and generalization errors in shallow and deep networks. (2018).
CBMM-Memo-076.pdf (772.61 KB)
CBMM-Memo-076v2.pdf (2.67 MB)

Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
CBMM-Memo-073.pdf (2.65 MB)
CBMM Memo 073 v2 (revised 1/15/2018) (2.81 MB)
CBMM Memo 073 v3 (revised 1/30/2018) (2.72 MB)
CBMM Memo 073 v4 (revised 12/30/2018) (575.72 KB)

When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art:10.1007/s11633-017-1054-2.pdf (1.68 MB)

Deep vs. shallow networks: An approximation theory perspective. (2016).
Original submission; see the Analysis and Applications entry below for the updated version (960.27 KB)

Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications 14, 829-848 (2016).
Learning Functions: When Is Deep Better Than Shallow. (2016). Available at <https://arxiv.org/pdf/1603.00988v4.pdf>
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)