Publications
Learning mid-level codes for natural sounds. Computational and Systems Neuroscience (Cosyne) (2016). at <http://www.cosyne.org/c/index.php?title=Cosyne2016_posters_2>
Wiktor_COSYNE_2015_hierarchy_final.pdf (2.52 MB)
Learning Mid-Level Codes for Natural Sounds. Advances and Perspectives in Auditory Neuroscience (2016).
APAN_large_JHM kopia.pdf (19.74 MB)
Adaptive Compression of Statistically Homogeneous Sensory Signals. Computational and Systems Neuroscience (Cosyne) (2017).
Learning Mid-Level Auditory Codes from Natural Sound Statistics. (2017).
MlynarskiMcDermott_Memo060.pdf (7.11 MB)
Co-occurrence statistics of natural sound features predict perceptual grouping. Computational and Systems Neuroscience (Cosyne) (2018). at <http://www.cosyne.org/c/index.php?title=Cosyne_18>
Lossy Compression of Uninformative Stimuli in the Auditory System. Association for Research in Otolaryngology Mid-Winter Meeting (2017).
Adaptive Coding for Dynamic Sensory Inference. eLife (2018).
Learning Mid-Level Codes for Natural Sounds. Association for Research in Otolaryngology Mid-Winter Meeting (2017).
A normalization model of visual search predicts single trial human fixations in an object search task. (2014).
CBMM-Memo-008.pdf (854.51 KB)
There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task. Cerebral Cortex 26(7):3064-3082 (2016).
Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>
An analysis of training and generalization errors in shallow and deep networks. Neural Networks 121, 229-241 (2020).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
An analysis of training and generalization errors in shallow and deep networks. (2019).
CBMM-Memo-098.pdf (687.36 KB)
CBMM Memo 098 v4 (08/2019) (2.63 MB)
Function approximation by deep networks. Communications on Pure & Applied Analysis 19, 4085-4095 (2020).
1534-0392_2020_8_4085.pdf (514.57 KB)
Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications 14, 829-848 (2016).
An analysis of training and generalization errors in shallow and deep networks. (2018).
CBMM-Memo-076.pdf (772.61 KB)
CBMM-Memo-076v2.pdf (2.67 MB)
Deep vs. shallow networks: An approximation theory perspective. (2016).
Original submission, visit the link above for the updated version (960.27 KB)
How PFC and LIP process single and multiple-object ‘pop-out’ displays. Society for Neuroscience (2015). at <https://www.sfn.org/~/media/SfN/Documents/Annual%20Meeting/FinalProgram/NS2015/Full%20Abstract%20PDFs%202015/SfN15_Abstracts_PDF_Nanos.ashx>
New Data Science tools for analyzing neural data and computational models. Society for Neuroscience (2016).
Differential Processing of Isolated Object and Multi-item Pop-Out Displays in LIP and PFC. Cerebral Cortex (2017). doi:10.1093/cercor/bhx243