%0 Generic %D 2024 %T Compositional Sparsity of Learnable Functions %A Tomaso Poggio %A Maia Fraser %X

*This paper will appear in June/July 2024 in the Bulletin of the American Mathematical Society*

Neural networks have demonstrated impressive success in various domains, raising the question of what fundamental principles underlie the effectiveness of the best AI systems and quite possibly of human intelligence. This perspective argues that compositional sparsity - the property that a compositional function has "few" constituent functions, each depending on only a small subset of inputs - is a key principle underlying successful learning architectures. Surprisingly, all functions that are efficiently Turing computable have a compositionally sparse representation. Furthermore, deep networks that are also sparse can exploit this general property to avoid the "curse of dimensionality". This framework suggests interesting implications about the role that machine learning may play in mathematics.

%8 02/2024 %2

https://hdl.handle.net/1721.1/153475

%0 Book %D 2023 %T Cervelli menti algoritmi %A Tomaso Poggio %A Marco Magrini %X

[translated from the Italian]

Intelligence - that thing with which we understand the world - is still an open mystery. That we humans alone have a language, an alphabet, a science does not mean we hold a monopoly on intelligence. We share this existence with millions of other species, animal and plant, endowed with such a range of cognitive abilities as to compose an almost infinite gradation of intelligences. Suddenly, their number has begun to grow. Thanks to the joint arrival of more sophisticated algorithms, oceanic databases and enormous computing power, the ancient aspiration to replicate human intelligence mathematically has reached unexpected milestones. Though still far from that goal, a small zoo of artificial intelligences is already able to carry out numerous typically human tasks. In this book, a journalist and a pioneer of artificial intelligence recount (in the scientist's voice) the dawn of a new "general" technology that, like electricity or the computer, is destined to transform society, the economy and everyday life, with its load of risks and opportunities. What should we expect from this extraordinary evolution? What will we gain and what will we lose? There are no certain answers. But it is certainly an occasion for new, extraordinary scientific discoveries, beginning with the secrets of intelligence itself.

%I Sperling & Kupfer %P 272 %8 10/2023 %@ 9788820077761 %G eng %U https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini %0 Generic %D 2023 %T Feature learning in deep classifiers through Intermediate Neural Collapse %A Akshay Rangamani %A Marius Lindegaard %A Tomer Galanti %A Tomaso Poggio %X

In this paper, we conduct an empirical study of the feature learning process in deep classifiers. Recent research has identified a training phenomenon called Neural Collapse (NC), in which the top-layer feature embeddings of samples from the same class tend to concentrate around their means, and the top layer's weights align with those features. Our study aims to investigate whether these properties extend to intermediate layers. We empirically study the evolution of the covariance and mean of representations across different layers and show that as we move deeper into a trained neural network, the within-class covariance decreases relative to the between-class covariance. Additionally, we find that in the top layers, where the between-class covariance is dominant, the subspace spanned by the class means aligns with the subspace spanned by the most significant singular vector components of the weight matrix in the corresponding layer. Finally, we discuss the relationship between NC and Associative Memories (Willshaw et al., 1969).
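
A minimal numpy sketch of the quantity tracked layer by layer (an illustration on synthetic activations; the exact normalization is an assumption, not the memo's code):

```python
# Hypothetical sketch: ratio of within-class to between-class covariance of a
# layer's feature embeddings; under (intermediate) neural collapse this ratio
# is expected to shrink as one moves to deeper layers of a trained network.
import numpy as np

def within_between_ratio(feats, labels):
    """feats: (n, d) activations of one layer; labels: (n,) class ids."""
    mu = feats.mean(axis=0)
    d = feats.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        Fc = feats[labels == c]
        mc = Fc.mean(axis=0)
        Sw += (Fc - mc).T @ (Fc - mc)              # within-class scatter
        Sb += len(Fc) * np.outer(mc - mu, mc - mu) # between-class scatter
    return np.trace(Sw) / np.trace(Sb)

feats = np.random.default_rng(0).standard_normal((200, 16))
labels = np.repeat(np.arange(4), 50)
print(within_between_ratio(feats, labels))   # evaluate per layer, compare depths
```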

%8 02/2023 %2

https://hdl.handle.net/1721.1/148239

%0 Generic %D 2023 %T The Janus effects of SGD vs GD: high noise and low rank %A Mengjia Xu %A Tomer Galanti %A Akshay Rangamani %A Lorenzo Rosasco %A Andrea Pinto %A Tomaso Poggio %X

It has long been clear that SGD with small minibatch size yields, for neural networks, much higher asymptotic fluctuations in the updates of the weight matrices than GD. It has also often been reported that SGD in deep ReLU networks empirically shows a low-rank bias in the weight matrices. A recent theoretical analysis derived a bound on the rank and linked it to the size of the SGD fluctuations [25]. In this paper, we provide an empirical and theoretical analysis of the convergence of SGD vs GD, first for deep ReLU networks and then for the case of linear regression, where sharper estimates can be obtained and which is of independent interest. In the linear case, we prove that the component $W^\perp$ of the matrix $W$ corresponding to the null space of the data matrix $X$ converges to zero for both SGD and GD, provided the regularization term is non-zero. Because of the larger number of updates required to go through all the training data, the convergence rate {\it per epoch} of this component is much faster for SGD than for GD. In practice, SGD has a much stronger bias than GD towards solutions for weight matrices $W$ with high fluctuations -- even when the choice of minibatches is deterministic -- and low rank, provided the initialization is from a random matrix. Thus SGD with non-zero regularization shows the coupled phenomenon of asymptotic noise and a low-rank bias, unlike GD.
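
A minimal numpy sketch of the linear-regression claim (an illustration under assumed hyperparameters, not the paper's experiments): the data-term gradient lies in the row space of X, so the null-space component of the weights decays only through the regularization term, once per epoch for GD and many times per epoch for SGD.

```python
# Hypothetical sketch: linear regression y ~ Xw with d >> n; with non-zero L2
# regularization, the null-space component of w shrinks at a fixed rate per
# update, so SGD (n updates/epoch) drives it down much faster per epoch than GD.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                               # overparametrized: d > n
X, y = rng.standard_normal((n, d)), rng.standard_normal(n)
lam, lr, epochs = 0.1, 1e-3, 2000

P_null = np.eye(d) - np.linalg.pinv(X) @ X   # projector onto the null space of X

def train(batch_size):
    w = rng.standard_normal(d)               # random init: nonzero null-space part
    for _ in range(epochs):
        idx = rng.permutation(n)
        for s in range(0, n, batch_size):
            b = idx[s:s + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b) + lam * w
            w -= lr * grad
    return np.linalg.norm(P_null @ w)        # size of the null-space component

print("GD  |w_perp|:", train(n))             # one update per epoch
print("SGD |w_perp|:", train(1))             # n updates per epoch: much smaller
```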

%8 12/2024 %2

https://hdl.handle.net/1721.1/153227

%0 Generic %D 2023 %T Norm-Based Generalization Bounds for Compositionally Sparse Neural Networks %A Tomer Galanti %A Mengjia Xu %A Liane Galanti %A Tomaso Poggio %X

In this paper, we investigate the Rademacher complexity of deep sparse neural networks, where each neuron receives a small number of inputs. We prove generalization bounds for multilayered sparse ReLU neural networks, including convolutional neural networks. These bounds differ from previous ones, as they consider the norms of the convolutional filters instead of the norms of the associated Toeplitz matrices, independently of weight sharing between neurons.

As we show theoretically, these bounds may be orders of magnitude better than standard norm-based generalization bounds and empirically, they are almost non-vacuous in estimating generalization in various simple classification problems. Taken together, these results suggest that compositional sparsity of the underlying target function is critical to the success of deep neural networks.
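
As a toy illustration of why filter norms can be far smaller than the norms of the associated matrices, consider a circular 1D convolution (a sketch with assumed sizes; the paper's bounds are more general):

```python
# Hypothetical sketch: the Frobenius norm of the circulant (Toeplitz-like)
# matrix implementing a circular convolution is sqrt(m) times the norm of the
# filter, so bounds stated in terms of filter norms can be much smaller.
import numpy as np
from scipy.linalg import circulant

m, q = 256, 5                            # input length, filter size
k = np.random.default_rng(1).standard_normal(q)
col = np.zeros(m); col[:q] = k           # first column of the circulant matrix
T = circulant(col)                       # matrix form of the convolution

print("filter norm:", np.linalg.norm(k))           # ||k||_2
print("matrix norm:", np.linalg.norm(T, "fro"))    # = sqrt(m) * ||k||_2
```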

%2

https://hdl.handle.net/1721.1/145776

%0 Conference Paper %B NeurIPS 2023 %D 2023 %T Norm-based Generalization Bounds for Sparse Neural Networks %A Tomer Galanti %A Mengjia Xu %A Liane Galanti %A Tomaso Poggio %X

In this paper, we derive norm-based generalization bounds for sparse ReLU neural networks, including convolutional neural networks. These bounds differ from previous ones because they consider the sparse structure of the neural network architecture and the norms of the convolutional filters, rather than the norms of the (Toeplitz) matrices associated with the convolutional layers. Theoretically, we demonstrate that these bounds are significantly tighter than standard norm-based generalization bounds. Empirically, they offer relatively tight estimations of generalization for various simple classification problems. Collectively, these findings suggest that the sparsity of the underlying target function and the model's architecture play a crucial role in the success of deep learning.

%B NeurIPS 2023 %C New Orleans %8 12/2023 %G eng %U https://proceedings.neurips.cc/paper_files/paper/2023/file/8493e190ff1bbe3837eca821190b61ff-Paper-Conference.pdf %0 Generic %D 2023 %T SGD and Weight Decay Provably Induce a Low-Rank Bias in Deep Neural Networks %A Tomer Galanti %A Zachary Siegel %A Aparna Gupte %A Tomaso Poggio %X

In this paper, we study the bias of Stochastic Gradient Descent (SGD) to learn low-rank weight matrices when training deep ReLU neural networks. Our results show that training neural networks with mini-batch SGD and weight decay causes a bias towards rank minimization over the weight matrices. Specifically, we show, both theoretically and empirically, that this bias is more pronounced when using smaller batch sizes, higher learning rates, or increased weight decay. Additionally, we predict and observe empirically that weight decay is necessary to achieve this bias. Finally, we empirically investigate the connection between this bias and generalization, finding that it has a marginal effect on generalization. Our analysis is based on a minimal set of assumptions and applies to neural networks of any width or depth, including those with residual connections and convolutional layers.
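
A minimal PyTorch sketch of the kind of measurement involved (architecture, hyperparameters, and the stable-rank surrogate are assumptions, not the paper's setup): train the same small ReLU network with and without weight decay and compare the stable rank of a hidden weight matrix.

```python
# Hypothetical sketch: mini-batch SGD with vs without weight decay, comparing
# the stable rank ||W||_F^2 / ||W||_2^2 of a hidden layer's weight matrix.
import torch

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()                      # a toy binary task

def stable_rank(W):
    s = torch.linalg.svdvals(W.detach())      # singular values, descending
    return (s.pow(2).sum() / s[0].pow(2)).item()

def train(weight_decay):
    model = torch.nn.Sequential(
        torch.nn.Linear(20, 100), torch.nn.ReLU(),
        torch.nn.Linear(100, 100), torch.nn.ReLU(),
        torch.nn.Linear(100, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.05, weight_decay=weight_decay)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(2000):
        idx = torch.randint(0, 512, (8,))     # small batch size
        opt.zero_grad()
        loss_fn(model(X[idx]), y[idx]).backward()
        opt.step()
    return stable_rank(model[2].weight)       # the middle 100x100 matrix

print("stable rank, no weight decay:", train(0.0))
print("stable rank, weight decay   :", train(5e-3))   # expected to be lower
```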

%2

https://hdl.handle.net/1721.1/148230

%0 Generic %D 2022 %T PCA as a defense against some adversaries %A Aparna Gupte %A Andrzej Banburski %A Tomaso Poggio %X

Neural network classifiers are known to be highly vulnerable to adversarial perturbations in their inputs. Under the hypothesis that adversarial examples lie outside of the sub-manifold of natural images, previous work has investigated the impact of principal components in data on adversarial robustness. In this paper we show that there exists a very simple defense mechanism in the case where adversarial images are separable in a previously defined $(k,p)$ metric. This defense is very successful against the  popular Carlini-Wagner attack, but less so against some other common attacks like FGSM. It is interesting to note that the defense is still successful for relatively large perturbations.
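
For concreteness, here is the simplest PCA-projection variant of such a defense (a hedged sketch; the paper's mechanism is stated in terms of the previously defined $(k,p)$ metric, and the parameter k here is an assumption):

```python
# Hypothetical sketch of a generic PCA-based defense: project inputs onto the
# top-k principal components of the training data before classification, so
# off-manifold adversarial perturbations are partially removed.
import numpy as np

def fit_pca(X, k):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                          # mean and top-k directions

def project(x, mu, Vk):
    return mu + (x - mu) @ Vk.T @ Vk           # re-embedded in input space

rng = np.random.default_rng(0)
X_train = rng.standard_normal((100, 20))
mu, Vk = fit_pca(X_train, k=5)
x_adv = X_train[0] + 0.1 * rng.standard_normal(20)   # a perturbed input
x_defended = project(x_adv, mu, Vk)            # feed this to the classifier
```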

%2

https://hdl.handle.net/1721.1/141424

%0 Journal Article %J IEEE Access %D 2022 %T Representation Learning in Sensory Cortex: a theory %A Anselmi, Fabio %A Tomaso Poggio %K Artificial neural networks %K Hubel Wiesel model %K Invariance %K Sample Complexity %K Simple and Complex cells %K visual cortex %X

We review and apply a computational theory based on the hypothesis that the main function of the feedforward path of the ventral stream in visual cortex is the encoding of invariant representations of images. A key justification of the theory is provided by a result linking invariant representations to small sample complexity for image recognition - that is, invariant representations allow learning from very few labeled examples. The theory characterizes how an algorithm that can be implemented by a set of "simple" and "complex" cells - a "Hubel Wiesel module" - provides invariant and selective representations. The invariance can be learned in an unsupervised way from observed transformations. Our results show that an invariant representation implies several properties of the ventral stream organization, including the emergence of Gabor receptive fields and specialized areas. The theory requires two stages of processing: the first, consisting of retinotopic visual areas such as V1, V2 and V4 with generic neuronal tuning, leads to representations that are invariant to translation and scaling; the second, consisting of modules in IT (inferior temporal cortex), with class- and object-specific tuning, provides a representation for recognition with approximate invariance to class-specific transformations, such as pose (of a body, of a face) and expression. In summary, our theory is that the ventral stream's main function is to implement the unsupervised learning of "good" representations that reduce the sample complexity of the final supervised learning stage.

%B IEEE Access %P 1 - 1 %8 09/2022 %G eng %U https://ieeexplore.ieee.org/document/9899392/ %! IEEE Access %R 10.1109/ACCESS.2022.3208603 %0 Generic %D 2022 %T SGD Noise and Implicit Low-Rank Bias in Deep Neural Networks %A Tomer Galanti %A Tomaso Poggio %X

We analyze deep ReLU neural networks trained with mini-batch stochastic gradient descent and weight decay. We prove that the source of the SGD noise is an implicit low-rank constraint across all of the weight matrices within the network. Furthermore, we show, both theoretically and empirically, that when training a neural network using Stochastic Gradient Descent (SGD) with a small batch size, the resulting weight matrices are expected to be of small rank. Our analysis relies on a minimal set of assumptions and the neural networks may include convolutional layers, residual connections, as well as batch normalization layers.

%8 03/2022 %2

https://hdl.handle.net/1721.1/141380

%0 Generic %D 2022 %T System identification of neural systems: If we got it right, would we know? %A Yena Han %A Tomaso Poggio %A Brian Cheung %X

Various artificial neural networks developed by engineers have been evaluated as models of the brain, such as the ventral stream in the primate visual cortex. After being trained on large datasets, the network outputs are compared to recordings of biological neurons. Good performance in reproducing neural responses is taken as validation for the model. This system identification approach is different from the traditional ways to test theories and associated models in the natural sciences. Furthermore, it lacks a clear foundation in terms of theory and empirical validation. Here we begin characterizing some of these emerging approaches: what do they tell us? To address this question, we benchmark their ability to correctly identify a model by replacing the brain recordings with recordings from a known ground truth model. We evaluate commonly used identification techniques such as neural regression (linear regression on a population of model units) and centered kernel alignment (CKA). Even in the setting where the correct model is among the candidates, we find that the performance of these approaches at system identification is quite variable; it also depends significantly on factors independent of the ground truth architecture, such as scoring function and dataset.

%8 07/2022 %2

https://hdl.handle.net/1721.1/143617

%0 Journal Article %J IEEE Signal Processing Magazine %D 2021 %T Deep Learning for Seismic Inverse Problems: Toward the Acceleration of Geophysical Analysis Workflows %A Amir Adler %A Araya-Polo, Mauricio %A Tomaso Poggio %X

Seismic inversion is a fundamental tool in geophysical analysis, providing a window into Earth. In particular, it enables the reconstruction of large-scale subsurface Earth models for hydrocarbon exploration, mining, earthquake analysis, shallow hazard assessment, and other geophysical tasks.

%B IEEE Signal Processing Magazine %V 38 %P 89 - 119 %8 03/2021 %G eng %U https://ieeexplore.ieee.org/abstract/document/9363496 %N 2 %! IEEE Signal Process. Mag. %R 10.1109/MSP.2020.3037429 %0 Generic %D 2021 %T Distribution of Classification Margins: Are All Data Equal? %A Andrzej Banburski %A Fernanda De La Torre %A Nishka Pant %A Ishana Shastri %A Tomaso Poggio %X

Recent theoretical results show that gradient descent on deep neural networks under exponential loss functions locally maximizes classification margin, which is equivalent to minimizing the norm of the weight matrices under margin  constraints. This property of the solution however does not fully characterize the generalization performance. We motivate theoretically and show empirically that the area under the curve of the margin distribution on the training set is in fact a good measure of generalization. We then show that, after data separation is achieved, it is possible to dynamically reduce the training set by more than 99% without significant loss of performance. Interestingly, the resulting subset of “high capacity” features is not consistent across different training runs, which is consistent with the theoretical claim that all training points should converge to the same asymptotic margin under SGD and in the presence of both batch normalization and weight decay.
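
A small sketch of the margin-distribution statistic (the exact construction and normalization of the margins in the paper are not reproduced here; treat this as an assumed reading of "area under the curve of the margin distribution"):

```python
# Hypothetical sketch: area under the empirical CDF of the training margins,
# where margins for a binary classifier f are m_i = y_i * f(x_i), y_i in {-1,+1}.
import numpy as np

def margin_auc(margins):
    m = np.sort(np.asarray(margins, dtype=float))
    cdf = np.arange(1, len(m) + 1) / len(m)                       # empirical CDF
    return float(np.sum(np.diff(m) * (cdf[:-1] + cdf[1:]) / 2))   # trapezoid rule

print(margin_auc([0.1, 0.5, 0.9, 1.2]))
```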

%2

https://hdl.handle.net/1721.1/129744

%0 Generic %D 2021 %T Dynamics and Neural Collapse in Deep Classifiers trained with the Square Loss %A M. Xu %A Akshay Rangamani %A Andrzej Banburski %A Q. Liao %A Tomer Galanti %A Tomaso Poggio %X

We overview several properties -- old and new -- of training overparametrized deep networks under the square loss. We first consider a model of the dynamics of gradient flow under the square loss in deep homogeneous ReLU networks. We study the convergence to a solution with the absolute minimum $\rho$, which is the product of the Frobenius norms of each layer weight matrix, when normalization by Lagrange multipliers (LM) is used together with Weight Decay (WD) under different forms of gradient descent. A main property of the minimizers that bounds their expected error {\it for a specific network architecture} is $\rho$. In particular, we derive novel norm-based bounds for convolutional layers that are orders of magnitude better than classical bounds for dense networks. Next we prove that quasi-interpolating solutions obtained by Stochastic Gradient Descent (SGD) in the presence of WD have a bias towards low-rank weight matrices -- which, as we also explain, should improve generalization. The same analysis predicts the existence of an inherent SGD noise for deep networks. In both cases, we verify our predictions experimentally. We then predict Neural Collapse and its properties without any specific assumption -- unlike other published proofs. Our analysis supports the idea that the advantage of deep networks relative to other classifiers is greater for problems that are appropriate for sparse deep architectures such as CNNs. The deep reason is that compositionally sparse target functions can be approximated well by ``sparse'' deep networks without incurring the curse of dimensionality.
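
A minimal sketch of the complexity measure $\rho$ (assuming, as stated above, that it is the product of the per-layer Frobenius norms):

```python
# Hypothetical sketch: compute rho, the product of the Frobenius norms of the
# layer weight matrices, for a small PyTorch network; rho can be tracked
# during training as a norm-based complexity measure.
import torch

def rho(model):
    p = 1.0
    for m in model.modules():
        if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d)):
            p *= m.weight.norm(p="fro").item()
    return p

net = torch.nn.Sequential(torch.nn.Linear(10, 50), torch.nn.ReLU(),
                          torch.nn.Linear(50, 2))
print("rho =", rho(net))
```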

%0 Generic %D 2021 %T The Effects of Image Distribution and Task on Adversarial Robustness %A Owen Kunhardt %A Arturo Deza %A Tomaso Poggio %X

In this paper, we propose an adaptation of the area under the curve (AUC) metric to measure the adversarial robustness of a model over a particular ε-interval [ε0, ε1] (an interval of adversarial perturbation strengths) that facilitates unbiased comparisons across models when they have different initial ε0 performance. This can be used to determine how adversarially robust a model is to different image distributions or tasks (or some other variable), and/or to measure how robust a model is compared to other models. We used this adversarial robustness metric on models of an MNIST, CIFAR-10, and a Fusion dataset (CIFAR-10 + MNIST), where trained models performed either a digit or object recognition task using a LeNet, ResNet50, or a fully connected network (FullyConnectedNet) architecture, and found the following: 1) CIFAR-10 models are inherently less adversarially robust than MNIST models; 2) both the image distribution and task that a model is trained on can affect the adversarial robustness of the resultant model; 3) pretraining with a different image distribution and task sometimes carries over the adversarial robustness induced by that image distribution and task to the resultant model. Collectively, our results imply non-trivial differences of the learned representation space of one perceptual system over another given its exposure to different image statistics or tasks (mainly objects vs digits). Moreover, these results hold even when model systems are equalized to have the same level of performance, or when exposed to approximately matched image statistics of fusion images but with different tasks.
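
A minimal sketch of the proposed metric (the trapezoidal integration and normalization are assumptions about details not spelled out in the abstract):

```python
# Hypothetical sketch: area under the accuracy-vs-epsilon curve over an
# interval [eps0, eps1], normalized by the interval length, so models with
# different eps0 accuracy can be compared on a common scale.
import numpy as np

def robustness_auc(eps, acc, eps0, eps1):
    eps, acc = np.asarray(eps, float), np.asarray(acc, float)
    mask = (eps >= eps0) & (eps <= eps1)
    e, a = eps[mask], acc[mask]
    area = np.sum(np.diff(e) * (a[:-1] + a[1:]) / 2)   # trapezoid rule
    return area / (eps1 - eps0)

# accuracy measured under attacks of increasing strength eps:
print(robustness_auc([0, .02, .04, .06], [.99, .80, .45, .20], 0, .06))
```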

%8 02/2021 %2

https://hdl.handle.net/1721.1/129813

%0 Generic %D 2021 %T Evaluating the Adversarial Robustness of a Foveated Texture Transform Module in a CNN %A Jonathan Gant %A Andrzej Banburski %A Arturo Deza %A Tomaso Poggio %B NeurIPS 2021 %8 12/2021 %U https://nips.cc/Conferences/2021/Schedule?showEvent=21868 %0 Generic %D 2021 %T From Associative Memories to Powerful Machines %A Tomaso Poggio %X

Associative memories were implemented as simple networks of threshold neurons by Willshaw and Longuet-Higgins in the '60s. Today's deep networks are quite similar: they can be regarded as approximating look-up tables, similar to Gaussian RBF networks. Thinking about deep networks as large associative memories provides a more realistic and sober perspective on the promises of deep learning.
Such associative networks are not powerful enough to account for intelligent abilities such as language or logic. Could evolution have discovered how to go beyond simple reflexes and associative memories? I will discuss how inventions such as recurrence and hidden states can transform look-up tables into powerful computing machines. In a July 2022 update I outline a theory framework explaining how deep networks may work, including transformers. The framework is based on proven results plus a couple of conjectures -- still open.

 

%8 01/2021 %2

https://hdl.handle.net/1721.1/129402

%0 Generic %D 2021 %T From Marr’s Vision to the Problem of Human Intelligence %A Tomaso Poggio %8 09/2021 %2

https://hdl.handle.net/1721.1/131234

%0 Journal Article %J Neural Networks %D 2020 %T An analysis of training and generalization errors in shallow and deep networks %A Hrushikesh Mhaskar %A Tomaso Poggio %K deep learning %K generalization error %K interpolatory approximation %X

This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss, and sometimes as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error as well as a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and estimate how much error to expect at which test data.

%B Neural Networks %V 121 %P 229 - 241 %8 01/2020 %G eng %U https://www.sciencedirect.com/science/article/abs/pii/S0893608019302552 %! Neural Networks %R 10.1016/j.neunet.2019.08.028 %0 Generic %D 2020 %T Biologically Inspired Mechanisms for Adversarial Robustness %A Manish Vuyyuru Reddy %A Andrzej Banburski %A Nishka Pant %A Tomaso Poggio %X

A convolutional neural network strongly robust to adversarial perturbations at reasonable computational and performance cost has not yet been demonstrated. The primate visual ventral stream seems to be robust to small perturbations in visual stimuli but the underlying mechanisms that give rise to this robust perception are not understood. In this work, we investigate the role of two biologically plausible mechanisms in adversarial robustness. We demonstrate that the non-uniform sampling performed by the primate retina and the presence of multiple receptive fields with a range of receptive field sizes at each eccentricity improve the robustness of neural networks to small adversarial perturbations. We verify that these two mechanisms do not suffer from gradient obfuscation and study their contribution to adversarial robustness through ablation studies.

%8 06/2020 %2

https://hdl.handle.net/1721.1/125981

%0 Journal Article %J Nature Communications %D 2020 %T Complexity Control by Gradient Descent in Deep Networks %A Tomaso Poggio %A Qianli Liao %A Andrzej Banburski %X

Overparametrized deep networks predict well despite the lack of an explicit complexity control during training, such as an explicit regularization term. For exponential-type loss functions, we solve this puzzle by showing an effective regularization effect of gradient descent in terms of the normalized weights that are relevant for classification.

%B Nature Communications %V 11 %8 02/2020 %G eng %U https://www.nature.com/articles/s41467-020-14663-9 %R https://doi.org/10.1038/s41467-020-14663-9 %0 Conference Paper %B Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop at NeurIPS 2020 %D 2020 %T CUDA-Optimized real-time rendering of a Foveated Visual System %A Elian Malkin %A Arturo Deza %A Tomaso Poggio %X

The spatially-varying field of the human visual system has recently received a resurgence of interest with the development of virtual reality (VR) and neural networks. The computational demands of high resolution rendering desired for VR can be offset by savings in the periphery [16], while neural networks trained with foveated input have shown perceptual gains in i.i.d and o.o.d generalization [25, 6]. In this paper, we present a technique that exploits the CUDA GPU architecture to efficiently generate Gaussian-based foveated images at high definition (1920px × 1080px) in real-time (165 Hz), with a larger number of pooling regions than previous Gaussian-based foveation algorithms by several orders of magnitude [10, 25], producing a smoothly foveated image that requires no further blending or stitching, and that can be well fit for any contrast sensitivity function. The approach described can be adapted from Gaussian blurring to any eccentricity-dependent image processing and our algorithm can meet demand for experimentation to evaluate the role of spatially-varying processing across biological and artificial agents, so that foveation can be added easily on top of existing systems rather than forcing their redesign (“emulated foveated renderer” [22]). Altogether, this paper demonstrates how a GPU, with a CUDA block-wise architecture, can be employed for radially-variant rendering, with opportunities for more complex post-processing to ensure a metameric foveation scheme [33].
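
A much-simplified CPU sketch of Gaussian-based foveation (plain numpy/scipy rather than the paper's CUDA implementation; the blur levels and the linear sigma-vs-eccentricity rule are assumptions):

```python
# Hypothetical sketch: per-pixel blur that grows with eccentricity from the
# fixation point (fx, fy), approximated by blending a small stack of
# uniformly blurred copies of a grayscale image.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img, fx, fy, slope=0.02):
    img = img.astype(float)                    # (h, w) grayscale
    levels = np.array([0.0, 2.0, 4.0, 8.0])    # blur sigmas of the stack
    stack = [img] + [gaussian_filter(img, s) for s in levels[1:]]
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = np.clip(slope * np.hypot(xx - fx, yy - fy), 0, levels[-1])
    idx = np.clip(np.searchsorted(levels, sigma), 1, len(levels) - 1)
    t = (sigma - levels[idx - 1]) / (levels[idx] - levels[idx - 1])
    lo, hi = np.choose(idx - 1, stack), np.choose(idx, stack)
    return (1 - t) * lo + t * hi               # smooth blend, no visible seams

out = foveate(np.random.default_rng(0).random((128, 128)), fx=64, fy=64)
```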

%B Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop at NeurIPS 2020 %8 12/2020 %G eng %U https://arxiv.org/abs/2012.08655 %0 Generic %D 2020 %T Dreaming with ARC %A Andrzej Banburski %A Anshula Gandhi %A Simon Alford %A Sylee Dandekar %A Peter Chin %A Tomaso Poggio %X

Current machine learning algorithms are highly specialized to whatever it is they are meant to do - e.g. playing chess, picking up objects, or object recognition. How can we extend this to a system that could solve a wide range of problems? We argue that this can be achieved by a modular system - one that can adapt to solving different problems by changing only the modules chosen and the order in which those modules are applied to the problem. The recently introduced ARC (Abstraction and Reasoning Corpus) dataset serves as an excellent test of abstract reasoning. Suited to the modular approach, the tasks depend on a set of human Core Knowledge inbuilt priors. In this paper we implement these priors as the modules of our system. We combine these modules using neural-guided program synthesis.

%B Learning Meets Combinatorial Algorithms workshop at NeurIPS 2020 %8 11/2020 %2

https://hdl.handle.net/1721.1/128607

%0 Journal Article %J arXiv %D 2020 %T Explicit regularization and implicit bias in deep network classifiers trained with the square loss %A Tomaso Poggio %A Qianli Liao %X

Deep ReLU networks trained with the square loss have been observed to perform well in classification tasks. We provide here a theoretical justification based on analysis of the associated gradient flow. We show that convergence to a solution with the absolute minimum norm is expected when normalization techniques such as Batch Normalization (BN) or Weight Normalization (WN) are used together with Weight Decay (WD). The main property of the minimizers that bounds their expected error is the norm: we prove that among all the close-to-interpolating solutions, the ones associated with smaller Frobenius norms of the unnormalized weight matrices have better margin and better bounds on the expected classification error. With BN but in the absence of WD, the dynamical system is singular. Implicit dynamical regularization -- that is zero-initial conditions biasing the dynamics towards high margin solutions -- is also possible in the no-BN and no-WD case. The theory yields several predictions, including the role of BN and weight decay, aspects of Papyan, Han and Donoho's Neural Collapse and the constraints induced by BN on the network weights.

%B arXiv %8 12/2020 %G eng %U https://arxiv.org/abs/2101.00072 %0 Generic %D 2020 %T For interpolating kernel machines, the minimum norm ERM solution is the most stable %A Akshay Rangamani %A Lorenzo Rosasco %A Tomaso Poggio %X

We study the average CVloo stability of kernel ridge-less regression and derive corresponding risk bounds. We show that the interpolating solution with minimum norm has the best CVloo stability, which in turn is controlled by the condition number of the empirical kernel matrix. The latter can be characterized in the asymptotic regime where both the dimension and cardinality of the data go to infinity. Under the assumption of random kernel matrices, the corresponding test error follows a double descent curve.
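
A minimal numpy sketch of the object of study (assumed Gaussian kernel and toy data, not the memo's derivation): the minimum-norm interpolant of kernel ridgeless regression, obtained via the pseudoinverse of the kernel matrix, whose condition number controls the stability discussed above.

```python
# Hypothetical sketch: minimum-norm interpolating solution of kernel
# "ridgeless" regression, alpha = K^+ y, and the condition number of the
# empirical kernel matrix K that governs its CVloo stability.
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 10)), rng.standard_normal(50)

def gaussian_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gaussian_kernel(X, X)
alpha = np.linalg.pinv(K) @ y
print("max train residual:", np.abs(K @ alpha - y).max())   # ~0: interpolation
print("cond(K):", np.linalg.cond(K))                        # stability proxy
```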

%8 06/2020 %1

https://arxiv.org/abs/2006.15522

%2

https://hdl.handle.net/1721.1/125927

%0 Journal Article %J Communications on Pure & Applied Analysis %D 2020 %T Function approximation by deep networks %A Hrushikesh Mhaskar %A Tomaso Poggio %K approximation on the Euclidean sphere %K deep networks %K degree of approximation %X

We show that deep networks are better than shallow networks at approximating functions that can be expressed as a composition of functions described by a directed acyclic graph, because the deep networks can be designed to have the same compositional structure, while a shallow network cannot exploit this knowledge. Thus, the blessing of compositionality mitigates the curse of dimensionality. On the other hand, a theorem called good propagation of errors allows one to "lift" theorems about shallow networks to those about deep networks with an appropriate choice of norms, smoothness, etc. We illustrate this in three contexts where each channel in the deep network calculates a spherical polynomial, a non-smooth ReLU network, or another zonal function network related closely with the ReLU network.

%B Communications on Pure & Applied Analysis %V 19 %P 4085 - 4095 %8 08/2020 %G eng %U http://aimsciences.org//article/doi/10.3934/cpaa.2020181 %N 8 %R 10.3934/cpaa.2020181 %0 Generic %D 2020 %T Hierarchically Local Tasks and Deep Convolutional Networks %A Arturo Deza %A Qianli Liao %A Andrzej Banburski %A Tomaso Poggio %K Compositionality %K Inductive Bias %K perception %K Theory of Deep Learning %X

The main success stories of deep learning, starting with ImageNet, depend on convolutional networks, which on certain tasks perform significantly better than traditional shallow classifiers, such as support vector machines. Is there something special about deep convolutional networks that other learning machines do not possess? Recent results in approximation theory have shown that there is an exponential advantage of deep convolutional-like networks in approximating functions with hierarchical locality in their compositional structure. These mathematical results, however, do not say which tasks are expected to have input-output functions with hierarchical locality. Among all the possible hierarchically local tasks in vision, text and speech we explore a few of them experimentally by studying how they are affected by disrupting locality in the input images. We also discuss a taxonomy of tasks ranging from local, to hierarchically local, to global and make predictions about the type of networks required to perform  efficiently on these different types of tasks.

%8 06/2020 %1

https://arxiv.org/abs/2006.13915

%2

https://hdl.handle.net/1721.1/125980

%0 Generic %D 2020 %T Implicit dynamic regularization in deep networks %A Tomaso Poggio %A Qianli Liao %A Mengjia Xu %X

Square loss has been observed to perform well in classification tasks, at least as well as cross-entropy. However, a theoretical justification is lacking. Here we develop a theoretical analysis for the square loss that complements the existing asymptotic analysis for the exponential loss.

%8 08/2020 %2

https://hdl.handle.net/1721.1/126653

%0 Generic %D 2020 %T Loss landscape: SGD has a better view %A Tomaso Poggio %A Yaim Cooper %X

Consider a loss function ... where f(x) is a deep feedforward network with R layers, no bias terms and scalar output. Assume the network is overparametrized, that is, d >> n, where d is the number of parameters and n is the number of data points. The networks are assumed to interpolate the training data (i.e. the minimum of L is zero). If GD converges, it will converge to a critical point of L, namely a solution of ... There are two kinds of critical points - those for which each term of the above sum vanishes individually, and those for which the expression only vanishes when all the terms are summed. The main claim in this note is that while GD can converge to both types of critical points, SGD can only converge to the first kind, which includes all global minima.

See image below for full formulas.
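
The elided formulas are given in the memo as an image; a plausible reconstruction from the surrounding text (an assumption about notation, not a copy of the memo's formulas) is:

```latex
% Assumed reconstruction: the square loss over the n training points,
\[
  L(w) = \sum_{i=1}^{n} \big( f(x_i; w) - y_i \big)^2 ,
\]
% and the critical-point condition that GD converges to,
\[
  \nabla L(w) = 2 \sum_{i=1}^{n} \big( f(x_i; w) - y_i \big)\, \nabla_w f(x_i; w) = 0 .
\]
% First kind of critical point: every term vanishes, f(x_i; w) = y_i for all i
% (these include all global minima); second kind: only the sum vanishes.
```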

%8 07/2020 %2

https://hdl.handle.net/1721.1/126041

%0 Journal Article %J IEEJ Transactions on Electrical and Electronic Engineering %D 2020 %T An Overview of Some Issues in the Theory of Deep Networks %A Tomaso Poggio %A Andrzej Banburski %X

During the last few years, significant progress has been made in the theoretical understanding of deep networks. We review our contributions in the areas of approximation theory and optimization. We also introduce a new approach based on cross‐validation leave‐one‐out stability to estimate bounds on the expected error of overparametrized classifiers, such as deep networks.

%B IEEJ Transactions on Electrical and Electronic Engineering %V 15 %P 1560 - 1571 %8 10/2020 %G eng %U https://onlinelibrary.wiley.com/toc/19314981/15/11 %N 11 %! IEEJ Trans Elec Electron Eng %R 10.1002/tee.23243 %0 Journal Article %J Scientific Reports %D 2020 %T Scale and translation-invariance for novel objects in human vision %A Yena Han %A Gemma Roig %A Geiger, Gad %A Tomaso Poggio %X

Though the range of invariance in recognition of novel objects is a basic aspect of human vision, its characterization has remained surprisingly elusive. Here we report tolerance to scale and position changes in one-shot learning by measuring recognition accuracy of Korean letters presented in a flash to non-Korean subjects who had no previous experience with Korean letters. We found that humans have significant scale-invariance after only a single exposure to a novel object. The range of translation-invariance is limited, depending on the size and position of presented objects. To understand the underlying brain computation associated with the invariance properties, we compared experimental data with computational modeling results. Our results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance, by encoding different scale channels as well as eccentricity-dependent representations captured by neurons' receptive field sizes and sampling density that change with eccentricity. Our psychophysical experiments and related simulations strongly suggest that the human visual system uses a computational strategy that differs in some key aspects from current deep learning architectures, being more data efficient and relying more critically on eye-movements.

%B Scientific Reports %V 10 %8 01/2020 %G eng %U http://www.nature.com/articles/s41598-019-57261-6 %N 1411 %! Sci Rep %R 10.1038/s41598-019-57261-6 %0 Generic %D 2020 %T Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). %A Tomaso Poggio %X

We consider here the class of supervised learning algorithms known as Empirical Risk Minimization (ERM). The classical theory by Vapnik and others characterizes universal consistency of ERM in the classical regime, in which the architecture of the learning network is fixed and n, the number of training examples, goes to infinity. We do not have a similar general theory for the modern regime of interpolating regressors and overparameterized deep networks, in which d > n as n goes to infinity.

In this note I propose the outline of such a theory based on the specific notion of CVloo stability of the learning algorithm with respect to perturbations of the training set. The theory shows that for interpolating regressors and separating classifiers (either kernel machines or deep ReLU networks):

  1. minimizing CVloo stability minimizes the expected error;
  2. minimum norm solutions are the most stable solutions.

The hope is that this approach may lead to a unified theory encompassing both the modern regime and the classical one.

%8 03/2020 %2

https://hdl.handle.net/1721.1/124343

%0 Journal Article %J Proceedings of the National Academy of Sciences %D 2020 %T Theoretical issues in deep networks %A Tomaso Poggio %A Andrzej Banburski %A Qianli Liao %X

While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about the approximation power of deep networks, the dynamics of optimization, and good out-of-sample performance, despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory both shallow and deep networks are known to approximate any continuous function at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit norm constraint. In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum norm minimizers. This result is especially relevant because it has been recently shown that, for overparameterized models, selection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus our results imply that gradient descent in deep networks minimizes the expected error.

%B Proceedings of the National Academy of Sciences %P 201907369 %8 Sep-06-2020 %G eng %U https://www.pnas.org/content/early/2020/06/08/1907369117 %! Proc Natl Acad Sci USA %R 10.1073/pnas.1907369117 %0 Generic %D 2019 %T An analysis of training and generalization errors in shallow and deep networks %A Hrushikesh Mhaskar %A Tomaso Poggio %K deep learning %K generalization error %K interpolatory approximation %X

This paper is motivated by an open problem around deep networks, namely, the apparent absence of overfitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss, and sometimes as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error as well as a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and estimate how much error to expect at which test data.

%8 05/2019 %1

https://arxiv.org/abs/1802.06266

%2

https://hdl.handle.net/1721.1/121183

%0 Conference Paper %B International Conference on Learning Representations, (ICLR 2019) %D 2019 %T Biologically-plausible learning algorithms can scale to large datasets. %A Will Xiao %A Chen, Honglin %A Qianli Liao %A Tomaso Poggio %X

The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two biologically-plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) finds that although feedback alignment (FA) and some variants of target-propagation (TP) perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry (SS) algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights do not share magnitudes but share signs. We examined the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet; RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018) and establish a new benchmark for future biologically-plausible learning algorithms on more difficult datasets and more complex architectures.

%B International Conference on Learning Representations, (ICLR 2019) %G eng %0 Conference Paper %B 81st EAGE Conference and Exhibition 2019 %D 2019 %T Deep Recurrent Architectures for Seismic Tomography %A Amir Adler %A Mauricio Araya-Polo %A Tomaso Poggio %X

This paper introduces novel deep recurrent neural network architectures for Velocity Model Building (VMB), going beyond what Araya-Polo et al. (2018) pioneered with Machine Learning-based seismic tomography built on a convolutional, non-recurrent neural network. Our investigation includes the utilization of basic recurrent neural network (RNN) cells, as well as Long Short Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells. Performance evaluation reveals that salt bodies are consistently predicted more accurately by GRU- and LSTM-based architectures than by non-recurrent architectures. The results take us a step closer to the final goal of a reliable, fully Machine Learning-based tomography from pre-stack data, which when achieved will reduce the VMB turnaround from weeks to days.

%B 81st EAGE Conference and Exhibition 2019 %8 06/2019 %G eng %0 Generic %D 2019 %T Double descent in the condition number %A Tomaso Poggio %A Gil Kur %A Andrzej Banburski %X

In solving a system of n linear equations in d variables Ax=b, the condition number of the (n,d) matrix A measures how much errors in the data b affect the solution x. Bounds of this type are important in many inverse problems. An example is machine learning, where the key task is to estimate an underlying function from a set of measurements at random points in a high dimensional space and where low sensitivity to error in the data is a requirement for good predictive performance. Here we report the simple observation that when the columns of A are random vectors, the condition number of A is highest, that is worse, when d=n, that is when the inverse of A exists. An overdetermined system (n>d) and especially an underdetermined system (n<d), for which the pseudoinverse must be used instead of the inverse, typically have significantly better, that is lower, condition numbers. Thus the condition number of A plotted as function of d shows a double descent behavior with a peak at d=n.
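
The observation is easy to reproduce numerically (a minimal sketch with assumed sizes):

```python
# Hypothetical sketch: condition number of a random (n, d) Gaussian matrix as
# a function of d, with n fixed; the peak is expected at d = n, and both the
# overdetermined (n > d) and underdetermined (n < d) regimes behave better.
import numpy as np

rng = np.random.default_rng(0)
n = 50
for d in [10, 25, 45, 50, 55, 100, 200]:
    A = rng.standard_normal((n, d))
    s = np.linalg.svd(A, compute_uv=False)      # singular values, descending
    print(f"d={d:4d}  cond={s[0] / s[-1]:10.1f}")
```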

%8 12/2019 %2

https://hdl.handle.net/1721.1/123108

%0 Conference Paper %B NAS Sackler Colloquium on Science of Deep Learning %D 2019 %T Dynamics & Generalization in Deep Networks -Minimizing the Norm %A Andrzej Banburski %A Qianli Liao %A Brando Miranda %A Lorenzo Rosasco %A Jack Hidary %A Tomaso Poggio %B NAS Sackler Colloquium on Science of Deep Learning %C Washington D.C. %8 03/2019 %G eng %0 Conference Paper %B Vision Science Society %D 2019 %T Eccentricity Dependent Neural Network with Recurrent Attention for Scale, Translation and Clutter Invariance %A Jiaxuan Zhang %A Yena Han %A Tomaso Poggio %A Gemma Roig %B Vision Science Society %C Florida, USA %8 05/2019 %G eng %0 Conference Paper %B Vision Science Society %D 2019 %T Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks %A Yena Han %A Gemma Roig %A Geiger, Gad %A Tomaso Poggio %B Vision Science Society %C Florida, USA %8 05/2019 %G eng %0 Conference Paper %B Vision Science Society %D 2019 %T Properties of invariant object recognition in human oneshot learning suggests a hierarchical architecture different from deep convolutional neural networks %A Yena Han %A Gemma Roig %A Geiger, Gad %A Tomaso Poggio %B Vision Science Society %C St Pete Beach, FL, USA %8 05/2019 %G eng %U https://jov.arvojournals.org/article.aspx?articleid=2749961https://jov.arvojournals.org/article.aspx?articleid=2749961 %R 10.1167/19.10.28d %0 Generic %D 2019 %T Theoretical Issues in Deep Networks %A Tomaso Poggio %A Andrzej Banburski %A Qianli Liao %X

While deep learning is successful in a number of applications, it is not yet well understood theoretically.  A theoretical characterization of deep learning should answer questions about the approximation power of deep networks, the dynamics of optimization by gradient descent and good out-of-sample performance --- why the expected error does not suffer, despite the absence of explicit regularization, when the networks are overparametrized. We review our recent results towards this goal. In {\it approximation theory} both shallow and deep networks are known to approximate any continuous function on a bounded domain at a cost which is exponential (the number of parameters is exponential in the dimensionality of the function). However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can have a linear dependence on dimensionality, unlike shallow networks. In characterizing {\it minimization} of the empirical exponential loss we consider the gradient descent dynamics of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to the normalized network. The dynamics of the normalized weights implied by standard gradient descent turn out to be equivalent to the dynamics of the constrained problem of minimizing an exponential-type loss subject to a unit $L_2$ norm constraint. In particular, the dynamics of the typical, unconstrained gradient descent converge to the same critical points of the constrained problem. Thus, there is {\it implicit regularization} in training deep networks under exponential-type loss functions with gradient descent. The critical points of the flow are hyperbolic minima (for any long but finite time) and minimum norm minimizers (e.g. maxima of the margin). Though appropriately normalized networks can show a small generalization gap (difference between empirical and expected loss) even for finite $N$ (number of training examples) with respect to the exponential loss, they do not generalize in terms of the classification error. Bounds on it for finite $N$ remain an open problem. Nevertheless, our results, together with other recent papers, characterize an implicit vanishing regularization by gradient descent which is likely to be a key prerequisite -- in terms of complexity control -- for the good performance of deep overparametrized ReLU classifiers.

%8 08/2019 %2

https://hdl.handle.net/1721.1/122014

%0 Generic %D 2019 %T Theories of Deep Learning: Approximation, Optimization and Generalization %A Qianli Liao %A Andrzej Banburski %A Tomaso Poggio %B TECHCON 2019 %8 09/2019 %0 Conference Paper %B ICML %D 2019 %T Weight and Batch Normalization implement Classical Generalization Bounds %A Andrzej Banburski %A Qianli Liao %A Brando Miranda %A Lorenzo Rosasco %A Jack Hidary %A Tomaso Poggio %B ICML %C Long Beach/California %8 06/2019 %G eng %0 Generic %D 2018 %T An analysis of training and generalization errors in shallow and deep networks %A Hrushikesh Mhaskar %A Tomaso Poggio %K deep learning %K generalization error %K interpolatory approximation %X

An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit evaluates a trigonometric polynomial. It is well understood in the theory of function approximation that approximation by trigonometric polynomials is a “role model” for many other processes of approximation that have inspired many theoretical constructions also in the context of approximation by neural and RBF networks. In this paper, we argue that the maximum loss functional is necessary to measure the generalization error. We give estimates on exactly how many parameters ensure both zero training error as well as a good generalization error, and how much error to expect at which test data. An interesting feature of our new method is that the variance in the training data is no longer an insurmountable lower bound on the generalization error.

%8 02/2018 %1

arXiv:1802.06266

%2

http://hdl.handle.net/1721.1/113843

%0 Generic %D 2018 %T Biologically-plausible learning algorithms can scale to large datasets %A Will Xiao %A Honglin Chen %A Qianli Liao %A Tomaso Poggio %X

The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this "weight transport problem" (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP's weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluates variants of target-propagation (TP) and feedback alignment (FA) on MNIST, CIFAR, and ImageNet datasets, and finds that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018), and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures.
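
A toy numpy sketch of the sign-symmetry idea (a two-layer illustration with an assumed square loss, not the paper's ImageNet setup): the backward pass uses sign(W) in place of W^T, so feedback and feedforward weights share signs but not magnitudes.

```python
# Hypothetical sketch: one gradient step for a two-layer ReLU network where
# the error is propagated through sign(W2) instead of W2.T (sign-symmetry).
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((64, 10)), rng.standard_normal((10, 1))

def step(x, y, lr=1e-2):
    global W1, W2
    h = np.maximum(0, x @ W1)                     # forward: ReLU hidden layer
    d_out = h @ W2 - y                            # squared-loss error at output
    d_h = (d_out @ np.sign(W2).T) * (h > 0)       # feedback via sign(W2), not W2.T
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_h)

for _ in range(100):                              # toy regression task
    x = rng.standard_normal(64)
    step(x, np.array([np.tanh(x[0])]))
```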

%8 11/2018 %1

https://arxiv.org/abs/1811.03567

%2

https://hdl.handle.net/1721.1/121157

%0 Generic %D 2018 %T Can Deep Neural Networks Do Image Segmentation by Understanding Insideness? %A Kimberly M. Villalobos %A Jamel Dozier %A Vilim Stih %A Andrew Francl %A Frederico Azevedo %A Tomaso Poggio %A Tomotake Sasaki %A Xavier Boix %X

THIS MEMO IS REPLACED BY CBMM MEMO 105

A key component of visual cognition is the understanding of spatial relationships among objects. Albeit effortless for our visual system, state-of-the-art artificial neural networks struggle to distinguish basic spatial relationships among elements in an image. As shown here, deep neural networks (DNNs) trained with hundreds of thousands of labeled examples cannot accurately distinguish whether pixels lie inside or outside 2D shapes, a problem that seems much simpler than image segmentation. In this paper, we sought to analyze the capability of DNNs to solve such inside/outside problems using an analytical approach. We demonstrate that it is a mathematically tractable problem and that two previously proposed algorithms, namely the Ray-Intersection Method and the Coloring Method, achieve perfect accuracy when implemented in the form of DNNs.

%8 12/2018 %0 Generic %D 2018 %T Classical generalization bounds are surprisingly tight for Deep Networks %A Qianli Liao %A Brando Miranda %A Jack Hidary %A Tomaso Poggio %X

Deep networks are usually trained and tested in a regime in which the training classification error is not a good predictor of the test error. Thus the consensus has been that generalization, defined as convergence of the empirical to the expected error, does not hold for deep networks. Here we show that, when normalized appropriately after training, deep networks trained on exponential-type losses show a good linear dependence of test loss on training loss. The observation, motivated by a previous theoretical analysis of overparametrization and overfitting, not only demonstrates the validity of classical generalization bounds for deep learning but suggests that they are tight. In addition, we also show that the bound on the classification error by the normalized cross-entropy loss is empirically rather tight on the data sets we studied.

%8 07/2018 %1

arXiv:1807.09659

%2

http://hdl.handle.net/1721.1/116911

%0 Journal Article %J Journal of Neurophysiology %D 2018 %T A fast, invariant representation for human action in the visual system %A Leyla Isik %A Andrea Tacchetti %A Tomaso Poggio %X

Humans can effortlessly recognize others’ actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the regions in the brain involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography decoding and a data set of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discriminates between different actions, and when it does so in a manner that is invariant to changes in 3D viewpoint. We measure the latency difference between invariant and noninvariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. We were unable to detect a difference in decoding latency or temporal profile between invariant and noninvariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time and that both form and motion information are crucial for fast, invariant action recognition.

Associated Dataset: MEG action recognition data

%B Journal of Neurophysiology %G eng %U https://www.physiology.org/doi/10.1152/jn.00642.2017 %R https://doi.org/10.1152/jn.00642.2017 %0 Journal Article %J Annual Review of Vision Science %D 2018 %T Invariant Recognition Shapes Neural Representations of Visual Input %A Andrea Tacchetti %A Leyla Isik %A Tomaso Poggio %K computational neuroscience %K Invariance %K neural decoding %K visual representations %X

Recognizing the people, objects, and actions in the world around us is a crucial aspect of human perception that allows us to plan and act in our environment. Remarkably, our proficiency in recognizing semantic categories from visual input is unhindered by transformations that substantially alter their appearance (e.g., changes in lighting or position). The ability to generalize across these complex transformations is a hallmark of human visual intelligence, which has been the focus of wide-ranging investigation in systems and computational neuroscience. However, while the neural machinery of human visual perception has been thoroughly described, the computational principles dictating its functioning remain unknown. Here, we review recent results in brain imaging, neurophysiology, and computational neuroscience in support of the hypothesis that the ability to support the invariant recognition of semantic entities in the visual world shapes which neural representations of sensory input are computed by human visual cortex.

%B Annual Review of Vision Science %V 4 %P 403 - 422 %8 10/2018 %G eng %U https://www.annualreviews.org/doi/10.1146/annurev-vision-091517-034103 %N 1 %! Annu. Rev. Vis. Sci. %R 10.1146/annurev-vision-091517-034103 %0 Generic %D 2018 %T Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results %A Luke Arend %A Yena Han %A Martin Schrimpf %A Pouya Bashivan %A Kohitij Kar %A Tomaso Poggio %A James J. DiCarlo %A Xavier Boix %X

Deep neural networks have been shown to predict neural responses in higher visual cortex. The mapping from the model to a neuron in the brain occurs through a linear combination of many units in the model, leaving open the question of whether there also exists a correspondence at the level of individual neurons. Here we show that there exist many one-to-one mappings between single units in a deep neural network model and neurons in the brain. We show that this correspondence at the single-unit level is ubiquitous among state-of-the-art deep neural networks, and grows more pronounced for models with higher performance on a large-scale visual recognition task. Comparing matched populations—in the brain and in a model—we demonstrate a further correspondence at the level of the population code: stimulus category can be partially decoded from real neural responses using a classifier trained purely on a matched population of artificial units in a model. This provides a new point of investigation for phenomena which require fine-grained mappings between deep neural networks and the brain.
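
A minimal sketch of how such one-to-one mappings can be computed, assuming responses of model units and recorded neurons to a shared stimulus set: correlate every (unit, neuron) pair across stimuli, then solve a one-to-one assignment. The recipe below is an illustration; the paper's exact matching procedure may differ.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    units = rng.standard_normal((200, 50))    # stimuli x model units (stand-in data)
    neurons = rng.standard_normal((200, 30))  # stimuli x recorded neurons

    # Pearson correlation of every (unit, neuron) pair across stimuli.
    zu = (units - units.mean(0)) / units.std(0)
    zn = (neurons - neurons.mean(0)) / neurons.std(0)
    corr = zu.T @ zn / len(zu)

    # One-to-one assignment maximizing total correlation.
    rows, cols = linear_sum_assignment(-corr)
    for u, n in list(zip(rows, cols))[:5]:
        print(f"unit {u} <-> neuron {n}: r = {corr[u, n]:.2f}")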

%8 11/2018 %2

http://hdl.handle.net/1721.1/118847

%0 Journal Article %J Bulletin of the Polish Academy of Sciences: Technical Sciences %D 2018 %T Theory I: Deep networks and the curse of dimensionality %A Tomaso Poggio %A Qianli Liao %K convolutional neural networks %K deep and shallow networks %K deep learning %K function approximation %X

We review recent work characterizing the classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.

%B Bulletin of the Polish Academy of Sciences: Technical Sciences %V 66 %G eng %N 6 %0 Journal Article %J Bulletin of the Polish Academy of Sciences: Technical Sciences %D 2018 %T Theory II: Deep learning and optimization %A Tomaso Poggio %A Qianli Liao %X

The landscape of the empirical risk of overparametrized deep convolutional neural networks (DCNNs) is characterized with a mix of theory and experiments. In part A we show the existence of a large number of global minimizers with zero empirical error (modulo inconsistent equations). The argument, which relies on Bezout's theorem, is rigorous when the ReLUs are replaced by a polynomial nonlinearity. We show with simulations that the corresponding polynomial network is indistinguishable from the ReLU network. According to Bezout's theorem, the global minimizers are degenerate, unlike the local minima, which in general should be non-degenerate. Further, we experimentally analyzed and visualized the landscape of the empirical risk of DCNNs on the CIFAR-10 dataset. Based on the above theoretical and experimental observations, we propose a simple model of the landscape of the empirical risk. In part B, we characterize the optimization properties of stochastic gradient descent applied to deep networks. The main claim here consists of theoretical and experimental evidence for the following property of SGD: SGD concentrates in probability – like the classical Langevin equation – on large volume, “flat” minima, selecting with high probability degenerate minimizers which are typically global minimizers.
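
As a small illustration of the substitution behind the Bezout argument, one can fit a univariate polynomial to the ReLU so that every network output becomes polynomial in the weights; the degree and fitting interval below are arbitrary illustrative choices, not the paper's.

    import numpy as np

    xs = np.linspace(-3, 3, 400)
    relu = np.maximum(xs, 0.0)
    poly_relu = np.poly1d(np.polyfit(xs, relu, deg=8))  # polynomial surrogate for ReLU
    print(np.max(np.abs(poly_relu(xs) - relu)))         # small approximation error on [-3, 3]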

%B Bulletin of the Polish Academy of Sciences: Technical Sciences %V 66 %G eng %N 6 %R 10.24425/bpas.2018.125925 %0 Generic %D 2018 %T Theory III: Dynamics and Generalization in Deep Networks %A Andrzej Banburski %A Qianli Liao %A Brando Miranda %A Tomaso Poggio %A Lorenzo Rosasco %A Jack Hidary %A Fernanda De La Torre %X

The key to generalization is controlling the complexity of the network. However, there is no obvious control of complexity -- such as an explicit regularization term -- in the training of deep networks for classification. We will show that a classical form of norm control -- though a hidden one -- is present in deep networks trained with gradient descent techniques on exponential-type losses. In particular, gradient descent induces a dynamics of the normalized weights which converge for $t \to \infty$ to an equilibrium which corresponds to a minimum norm (or maximum margin) solution. For sufficiently large but finite $\rho$ -- and thus finite $t$ -- the dynamics converges to one of several margin maximizers, with the margin monotonically increasing towards a limit stationary point of the flow. In the usual case of stochastic gradient descent, most of the stationary points are likely to be convex minima corresponding to a regularized, constrained minimizer -- the network with normalized weights -- which is stable and has zero generalization gap asymptotically for $N \to \infty$, where $N$ is the number of training examples. For finite, fixed $N$ the generalization gap may not be zero, but the minimum norm property of the solution can provide, we conjecture, good expected performance for suitable data distributions. Our approach extends some of the results of Srebro from linear networks to deep networks and provides a new perspective on the implicit bias of gradient descent. We believe that the elusive complexity control we describe is responsible for the puzzling empirical finding of good predictive performance by deep networks, despite overparametrization.
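
The linear case gives a compact picture of these dynamics. The sketch below runs gradient descent on the exponential loss for a linear classifier on synthetic separable data (step size, sample sizes, and iteration counts are arbitrary choices): the norm of the weights grows without bound while the margin of the normalized weights increases toward its limit.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
    y = np.hstack([np.ones(50), -np.ones(50)])

    w = 0.01 * rng.standard_normal(2)
    for t in range(1, 200001):
        # Gradient of the exponential loss mean(exp(-y * <x, w>)).
        grad = -(y[:, None] * X * np.exp(-y * (X @ w))[:, None]).mean(0)
        w -= 0.01 * grad
        if t % 50000 == 0:
            w_hat = w / np.linalg.norm(w)  # the normalized weights
            print(f"t={t}: ||w|| = {np.linalg.norm(w):.2f}, "
                  f"margin = {(y * (X @ w_hat)).min():.4f}")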

%8 06/2018 %2

http://hdl.handle.net/1721.1/116692

%0 Journal Article %D 2017 %T Compression of Deep Neural Networks for Image Instance Retrieval %A Vijay Chandrasekhar %A Jie Lin %A Qianli Liao %A Olivier Morère %A Antoine Veillard %A Lingyu Duan %A Tomaso Poggio %X

Image instance retrieval is the problem of retrieving images from a database which contain the same object. Convolutional Neural Network (CNN) based descriptors are becoming the dominant approach for generating global image descriptors for the instance retrieval problem. One major drawback of CNN-based global descriptors is that uncompressed deep neural network models require hundreds of megabytes of storage, making them inconvenient to deploy in mobile applications or in custom hardware. In this work, we study the problem of neural network model compression focusing on the image instance retrieval task. We study quantization, coding, pruning and weight sharing techniques for reducing model size for the instance retrieval problem. We provide extensive experimental results on the trade-off between retrieval performance and model size for different types of networks on several data sets, providing the most comprehensive study on this topic. We compress models to the order of a few MBs: two orders of magnitude smaller than the uncompressed models while achieving negligible loss in retrieval performance.
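
Two of the techniques studied admit very short sketches: magnitude pruning (zeroing the smallest weights) and uniform scalar quantization. The keep fraction and bit width below are hypothetical settings, not the paper's.

    import numpy as np

    def prune(W, keep_fraction=0.1):
        # Keep only the largest-magnitude weights; zero the rest.
        threshold = np.quantile(np.abs(W), 1.0 - keep_fraction)
        return np.where(np.abs(W) >= threshold, W, 0.0)

    def quantize(W, bits=8):
        # Uniform scalar quantization to 2**bits levels, then dequantize.
        lo, hi = W.min(), W.max()
        levels = 2 ** bits - 1
        codes = np.round((W - lo) / (hi - lo) * levels)
        return codes * (hi - lo) / levels + lo

    W = np.random.default_rng(0).standard_normal((256, 256))
    W_pruned = prune(W)
    print("nonzero fraction after pruning:", np.count_nonzero(W_pruned) / W.size)
    W_quantized = quantize(W_pruned)  # further shrinks storage per surviving weight
    print("max quantization error:", np.abs(W_quantized - W_pruned).max())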

%8 01/2017 %G eng %U https://arxiv.org/abs/1701.04923 %0 Generic %D 2017 %T Do Deep Neural Networks Suffer from Crowding? %A Anna Volokitin %A Gemma Roig %A Tomaso Poggio %X

Crowding is a visual effect suffered by humans, in which an object that can be recognized in isolation can no longer be recognized when other objects, called flankers, are placed close to it. In this work, we study the effect of crowding in artificial Deep Neural Networks for object recognition. We analyze both standard deep convolutional neural networks (DCNNs) and a new version of DCNNs which is 1) multi-scale and 2) has convolution filters whose size changes with eccentricity relative to the center of fixation. Such networks, which we call eccentricity-dependent, are a computational model of the feedforward path of the primate visual cortex. Our results reveal that the eccentricity-dependent model, trained on target objects in isolation, can recognize such targets in the presence of flankers, if the targets are near the center of the image, whereas DCNNs cannot. Also, for all tested networks, when trained on targets in isolation, we find that recognition accuracy decreases the closer the flankers are to the target and the more flankers there are. We find that visual similarity between the target and flankers also plays a role and that pooling in early layers of the network leads to more crowding. Additionally, we show that incorporating the flankers into the images of the training set does not improve performance with crowding.

Associated code for this paper.

%8 06/2017 %1

arXiv:1706.08616

%2

http://hdl.handle.net/1721.1/110348

%0 Generic %D 2017 %T Eccentricity Dependent Deep Neural Networks for Modeling Human Vision %A Gemma Roig %A Francis Chen %A X Boix %A Tomaso Poggio %B Vision Sciences Society %0 Conference Paper %B AAAI Spring Symposium Series, Science of Intelligence %D 2017 %T Eccentricity Dependent Deep Neural Networks: Modeling Invariance in Human Vision %A Francis Chen %A Gemma Roig %A Leyla Isik %A X Boix %A Tomaso Poggio %X

Humans can recognize objects in a way that is invariant to scale, translation, and clutter. We use invariance theory as a conceptual basis, to computationally model this phenomenon. This theory discusses the role of eccentricity in human visual processing, and is a generalization of feedforward convolutional neural networks (CNNs). Our model explains some key psychophysical observations relating to invariant perception, while maintaining important similarities with biological neural architectures. To our knowledge, this work is the first to unify explanations of all three types of invariance, all while leveraging the power and neurological grounding of CNNs.

%B AAAI Spring Symposium Series, Science of Intelligence %G eng %U https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15360 %0 Journal Article %J J Neurophysiol %D 2017 %T A fast, invariant representation for human action in the visual system. %A Leyla Isik %A Andrea Tacchetti %A Tomaso Poggio %K action recognition %K magnetoencephalography %K neural decoding %K vision %X

Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the regions in the brain involved in invariant action recognition, however, the underlying neural computations remain poorly understood. We use magnetoencephalography (MEG) decoding and a dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discriminates between different actions, and when it does so in a manner that is invariant to changes in 3D viewpoint. We measure the latency difference between invariant and non-invariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. We were unable to detect a difference in decoding latency or temporal profile between invariant and non-invariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time, and that both form and motion information are crucial for fast, invariant action recognition.

%B J Neurophysiol %P jn.00642.2017 %8 11/2017 %G eng %R 10.1152/jn.00642.2017 %0 Report %D 2017 %T Fisher-Rao Metric, Geometry, and Complexity of Neural Networks %A Liang, Tengyuan %A Tomaso Poggio %A Alexander Rakhlin %A Stokes, James %K capacity control %K deep learning %K Fisher-Rao metric %K generalization error %K information geometry %K Invariance %K natural gradient %K ReLU activation %K statistical learning theory %X

We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint. We introduce a new notion of capacity — the Fisher-Rao norm — that possesses desirable invariance properties and is motivated by Information Geometry. We discover an analytical characterization of the new capacity measure, through which we establish norm-comparison inequalities and further show that the new measure serves as an umbrella for several existing norm-based complexity measures. We discuss upper bounds on the generalization error induced by the proposed measure. Extensive numerical experiments on CIFAR-10 support our theoretical findings. Our theoretical analysis rests on a key structural lemma about partial derivatives of multi-layer rectifier networks.
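
The structural lemma relates to the positive homogeneity of bias-free rectifier networks: scaling all parameters by c > 0 scales the output by c raised to the number of weight matrices, so by Euler's theorem the inner product of the parameters with the parameter gradient equals the depth times f(x). A quick numerical check under these assumptions (toy sizes, no biases):

    import numpy as np

    rng = np.random.default_rng(0)
    Ws = [rng.standard_normal((6, 6)) for _ in range(3)]  # L+1 = 3 weight matrices

    def f(x, weights):
        for W in weights[:-1]:
            x = np.maximum(W @ x, 0.0)  # ReLU, no biases
        return (weights[-1] @ x)[0]

    x = rng.standard_normal(6)
    c = 1.7
    print(f(x, [c * W for W in Ws]))  # equals c**3 * f(x, Ws) by homogeneity
    print(c ** 3 * f(x, Ws))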

%B arXiv.org %8 11/2017 %G eng %U https://arxiv.org/abs/1711.01530 %0 Generic %D 2017 %T On the Human Visual System Invariance to Translation and Scale %A Yena Han %A Gemma Roig %A Gadi Geiger %A Tomaso Poggio %B Vision Sciences Society %0 Conference Paper %B AAAI Spring Symposium Series, Science of Intelligence %D 2017 %T Is the Human Visual System Invariant to Translation and Scale? %A Yena Han %A Gemma Roig %A Gadi Geiger %A Tomaso Poggio %B AAAI Spring Symposium Series, Science of Intelligence %G eng %0 Generic %D 2017 %T Invariant action recognition dataset %A Andrea Tacchetti %A Leyla Isik %A Tomaso Poggio %X

To study the effect of changes in view and actor on action recognition, we filmed a dataset of five actors performing five different actions (drink, eat, jump, run and walk) on a treadmill from five different views (0, 45, 90, 135, and 180 degrees from the front of the actor/treadmill; the treadmill rather than the camera was rotated in place to acquire from different viewpoints). The dataset was filmed on a fixed, constant background. To avoid low-level object/action confounds (e.g. the action “drink” being classified as the only videos with water bottle in the scene) and guarantee that the main sources of variation of visual appearance are due to actions, actors and viewpoint, the actors held the same objects (an apple and a water bottle) in each video, regardless of the action they performed. This controlled design allows us to test hypotheses on the computational mechanisms underlying invariant recognition in the human visual system without having to settle for a synthetic dataset.

More information and the dataset files can be found here - https://doi.org/10.7910/DVN/DMT0PG

%8 11/2017 %U https://doi.org/10.7910/DVN/DMT0PG %0 Journal Article %J PLOS Computational Biology %D 2017 %T Invariant recognition drives neural representations of action sequences %A Andrea Tacchetti %A Leyla Isik %A Tomaso Poggio %E Berniker, Max %X

Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences.

Associated Dataset: MEG action recognition data

%B PLOS Computational Biology %V 13 %P e1005859 %8 12/2017 %G eng %U http://dx.plos.org/10.1371/journal.pcbi.1005859 %N 12 %R 10.1371/journal.pcbi.1005859 %0 Book Section %B Computational and Cognitive Neuroscience of Vision %D 2017 %T Invariant Recognition Predicts Tuning of Neurons in Sensory Cortex %A Jim Mutch %A F. Anselmi %A Andrea Tacchetti %A Lorenzo Rosasco %A JZ. Leibo %A Tomaso Poggio %B Computational and Cognitive Neuroscience of Vision %I Springer %P 85-104 %G eng %0 Generic %D 2017 %T Musings on Deep Learning: Properties of SGD %A Chiyuan Zhang %A Qianli Liao %A Alexander Rakhlin %A Karthik Sridharan %A Brando Miranda %A Noah Golowich %A Tomaso Poggio %X

[formerly titled "Theory of Deep Learning III: Generalization Properties of SGD"]

In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization.

%8 04/2017 %2

http://hdl.handle.net/1721.1/107841

%0 Generic %D 2017 %T Object-Oriented Deep Learning %A Qianli Liao %A Tomaso Poggio %X

We investigate an unconventional direction of research that aims at converting neural networks, a class of distributed, connectionist, sub-symbolic models, into a symbolic level with the ultimate goal of achieving AI interpretability and safety. To that end, we propose Object-Oriented Deep Learning, a novel computational paradigm of deep learning that adopts interpretable “objects/symbols” as a basic representational atom instead of N-dimensional tensors (as in traditional “feature-oriented” deep learning). For visual processing, each “object/symbol” can explicitly package common properties of visual objects like its position, pose, scale, probability of being an object, pointers to parts, etc., providing a full spectrum of interpretable visual knowledge throughout all layers. It achieves a form of “symbolic disentanglement”, offering one solution to the important problem of disentangled representations and invariance. Basic computations of the network include predicting high-level objects and their properties from low-level objects and binding/aggregating relevant objects together. These computations operate at a more fundamental level than convolutions, capturing convolution as a special case while being significantly more general. All operations are executed in an input-driven fashion, so sparsity and dynamic computation per sample are naturally supported, complementing recent popular ideas on dynamic networks and potentially enabling new types of hardware acceleration. We experimentally show on CIFAR-10 that it can perform flexible visual processing, rivaling the performance of ConvNet, but without using any convolution. Furthermore, it can generalize to novel rotations of images that it was not trained for.
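
A minimal sketch of what such a representational atom could look like in code; the field names below are illustrative guesses based on the properties listed above, not the paper's specification.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VisualObject:
        # Interpretable properties explicitly packaged with each "object/symbol".
        x: float
        y: float
        pose: float
        scale: float
        objectness: float  # probability of being an object
        parts: List["VisualObject"] = field(default_factory=list)  # pointers to parts

    eye = VisualObject(x=0.4, y=0.3, pose=0.0, scale=0.1, objectness=0.9)
    face = VisualObject(x=0.5, y=0.5, pose=0.1, scale=0.6, objectness=0.95, parts=[eye])
    print(len(face.parts))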

%8 10/2017 %2

http://hdl.handle.net/1721.1/112103

%0 Journal Article %D 2017 %T Pruning Convolutional Neural Networks for Image Instance Retrieval %A Gaurav Manek %A Jie Lin %A Vijay Chandrasekhar %A Lingyu Duan %A Sateesh Giduthuri %A Xiaoli Li %A Tomaso Poggio %K CNN %K Image Instance Retrieval %K Pooling %K Pruning %K Triplet Loss %X

In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN). The objective is to heavily prune convolutional edges while maintaining retrieval performance. To this end, we introduce both data-independent and data-dependent heuristics to prune convolutional edges, and evaluate their performance across various compression rates with different deep descriptors over several benchmark datasets. Further, we present an end-to-end framework to fine-tune the pruned network, with a triplet loss function specially designed for the retrieval task. We show that the combination of heuristic pruning and fine-tuning offers a 5x compression rate without considerable loss in retrieval performance.
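
The retrieval-specific fine-tuning relies on a triplet loss, which pulls an anchor descriptor toward a positive (same instance) and pushes it away from a negative up to a margin. A minimal sketch; the margin value is a hypothetical choice.

    import numpy as np

    def triplet_loss(anchor, positive, negative, margin=0.2):
        d_pos = np.sum((anchor - positive) ** 2)  # anchor-positive distance
        d_neg = np.sum((anchor - negative) ** 2)  # anchor-negative distance
        return max(0.0, d_pos - d_neg + margin)

    a, p, n = np.random.default_rng(0).standard_normal((3, 128))
    print(triplet_loss(a, p, n))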

%8 07/2017 %G eng %U https://arxiv.org/abs/1707.05455 %0 Conference Paper %B AAAI Spring Symposium Series, Science of Intelligence %D 2017 %T Representation Learning from Orbit Sets for One-shot Classification %A Andrea Tacchetti %A Stephen Voinea %A Georgios Evangelopoulos %A Tomaso Poggio %X

The sample complexity of a learning task is increased by transformations that do not change class identity. Visual object recognition, for example (the discrimination or categorization of distinct semantic classes), is affected by changes in viewpoint, scale, illumination, or planar transformations. We introduce a weakly-supervised framework for learning robust and selective representations from sets of transforming examples (orbit sets). We train deep encoders that explicitly account for the equivalence up to transformations of orbit sets and show that the resulting encodings contract the intra-orbit distance and preserve identity either by preserving reconstruction or by increasing the inter-orbit distance. We explore a loss function that combines a discriminative term and a reconstruction term that uses a decoder-encoder map to learn to rectify transformation-perturbed examples, and demonstrate the validity of the resulting embeddings for one-shot learning. Our results suggest that a suitable definition of orbit sets is a form of weak supervision that can be exploited to learn semantically relevant embeddings.

%B AAAI Spring Symposium Series, Science of Intelligence %C AAAI %G eng %U https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15357 %0 Generic %D 2017 %T Symmetry Regularization %A F. Anselmi %A Georgios Evangelopoulos %A Lorenzo Rosasco %A Tomaso Poggio %X

The properties of a representation, such as smoothness, adaptability, generality, equivariance/invariance, depend on restrictions imposed during learning. In this paper, we propose using data symmetries, in the sense of equivalences under transformations, as a means for learning symmetry-adapted representations, i.e., representations that are equivariant to transformations in the original space. We provide a sufficient condition to enforce the representation, for example the weights of a neural network layer or the atoms of a dictionary, to have a group structure and specifically the group structure in an unlabeled training set. By reducing the analysis of generic group symmetries to permutation symmetries, we devise an analytic expression for a regularization scheme and a permutation invariant metric on the representation space. Our work provides a proof of concept on why and how to learn equivariant representations, without explicit knowledge of the underlying symmetries in the data.

%8 05/2017 %2

http://hdl.handle.net/1721.1/109391

%0 Generic %D 2017 %T Theory II: Landscape of the Empirical Risk in Deep Learning %A Tomaso Poggio %A Qianli Liao %X

Previous theoretical work on deep learning and neural network optimization tends to focus on avoiding saddle points and local minima. However, the practical observation is that, at least for the most successful Deep Convolutional Neural Networks (DCNNs) for visual processing, practitioners can always increase the network size to fit the training data (an extreme example would be [1]). The most successful DCNNs, such as VGG and ResNets, are best used with a small degree of "overparametrization". In this work, we characterize with a mix of theory and experiments the landscape of the empirical risk of overparametrized DCNNs. We first prove the existence of a large number of degenerate global minimizers with zero empirical error (modulo inconsistent equations). The zero-minimizers -- in the case of classification -- have a non-zero margin. The same minimizers are degenerate and thus very likely to be found by SGD, which will furthermore select with higher probability the zero-minimizer with larger margin, as discussed in Theory III (to be released). We further experimentally explored and visualized the landscape of the empirical risk of a DCNN on CIFAR-10 during the entire training process and especially the global minima. Finally, based on our theoretical and experimental results, we propose an intuitive model of the landscape of the DCNN's empirical loss surface, which might not be as complicated as people commonly believe.

%8 03/2017 %1

arXiv:1703.09833

%2

http://hdl.handle.net/1721.1/107787

%0 Generic %D 2017 %T Theory of Deep Learning IIb: Optimization Properties of SGD %A Chiyuan Zhang %A Qianli Liao %A Alexander Rakhlin %A Brando Miranda %A Noah Golowich %A Tomaso Poggio %X

In Theory IIb we characterize with a mix of theory and experiments the optimization of deep convolutional networks by Stochastic Gradient Descent. The main new result in this paper is theoretical and experimental evidence for the following conjecture about SGD: SGD concentrates in probability – like the classical Langevin equation – on large volume, “flat” minima, selecting flat minimizers which are with very high probability also global minimizers.

%8 12/2017 %2

http://hdl.handle.net/1721.1/115407

%0 Generic %D 2017 %T Theory of Deep Learning III: explaining the non-overfitting puzzle %A Tomaso Poggio %A Keji Kawaguchi %A Qianli Liao %A Brando Miranda %A Lorenzo Rosasco %A Xavier Boix %A Jack Hidary %A Hrushikesh Mhaskar %X

THIS MEMO IS REPLACED BY CBMM MEMO 90

A main puzzle of deep networks revolves around the absence of overfitting despite overparametrization and despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamical systems associated with gradient descent minimization of nonlinear networks behave, near stable minima of the empirical error at zero, as a gradient system in a quadratic potential with a degenerate Hessian. The proposition is supported by theoretical and numerical results, under the assumption of stable minima of the gradient.

Our proposition provides the extension to deep networks of key properties of gradient descent methods for linear networks, which, as suggested in (1), can be the key to understanding generalization. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, asymptotically converging to the minimum norm solution. This implies that there is usually an optimum early stopping that avoids overfitting of the loss (this is relevant mainly for regression). For classification, the asymptotic convergence to the minimum norm solution implies convergence to the maximum margin solution, which guarantees good classification error for “low noise” datasets.

The implied robustness to overparametrization has suggestive implications for the robustness of deep hierarchically local networks to variations of the architecture with respect to the curse of dimensionality.
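
The linear intuition invoked here is easy to demonstrate: on an underdetermined least-squares problem, gradient descent initialized at zero converges to the minimum norm interpolant (the pseudoinverse solution) with no explicit penalty term. A small sketch with arbitrary problem sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 100))  # more unknowns than equations
    b = rng.standard_normal(20)

    w = np.zeros(100)                   # start at zero: iterates stay in the row space
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(20000):
        w -= lr * A.T @ (A @ w - b)

    w_min_norm = np.linalg.pinv(A) @ b     # minimum norm solution
    print(np.linalg.norm(w - w_min_norm))  # ~0: GD found the min-norm interpolant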

%8 12/2017 %1

arXiv:1801.00173

%2

http://hdl.handle.net/1721.1/113003

%0 Journal Article %J Current Biology %D 2017 %T View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation %A JZ. Leibo %A Qianli Liao %A F. Anselmi %A W. A. Freiwald %A Tomaso Poggio %X

The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to the operations of simple and complex cells. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside.

%B Current Biology %V 27 %P 1-6 %8 01/2017 %G eng %R http://dx.doi.org/10.1016/j.cub.2016.10.015 %0 Conference Proceedings %B AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence %D 2017 %T When and Why Are Deep Networks Better Than Shallow Ones? %A Hrushikesh Mhaskar %A Qianli Liao %A Tomaso Poggio %X
While the universal approximation property holds both for hierarchical and shallow networks, deep networks can approximate the class of compositional functions as well as shallow networks but with an exponentially lower number of training parameters and sample complexity. Compositional functions are obtained as a hierarchy of local constituent functions, where "local functions" are functions with low dimensionality. This theorem proves an old conjecture by Bengio on the role of depth in networks, characterizing precisely the conditions under which it holds. It also suggests possible answers to the puzzle of why high-dimensional deep networks trained on large training sets often do not seem to overfit.
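
For concreteness, here is a compositional function with binary-tree structure, in which every constituent depends on only two variables; the local function h below is an arbitrary illustrative choice. A deep network matching this hierarchy only ever approximates low-dimensional constituents, which is the source of the exponential gap.

    import math

    def h(a, b):
        # A generic two-argument "local" constituent function.
        return math.tanh(a + 2.0 * b)

    def f(x1, x2, x3, x4, x5, x6, x7, x8):
        # Binary-tree composition: every h sees only two inputs.
        return h(h(h(x1, x2), h(x3, x4)), h(h(x5, x6), h(x7, x8)))

    print(f(*[0.1 * i for i in range(8)]))
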
%B AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence %G eng %0 Journal Article %J International Journal of Automation and Computing %D 2017 %T Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review %A Tomaso Poggio %A Hrushikesh Mhaskar %A Lorenzo Rosasco %A Brando Miranda %A Qianli Liao %K convolutional neural networks %K deep and shallow networks %K deep learning %K function approximation %K Machine Learning %K Neural Networks %X

The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.

%B International Journal of Automation and Computing %P 1-17 %8 03/2017 %G eng %U http://link.springer.com/article/10.1007/s11633-017-1054-2?wt_mc=Internal.Event.1.SEM.ArticleAuthorOnlineFirst %R 10.1007/s11633-017-1054-2 %0 Generic %D 2016 %T Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex %A Qianli Liao %A Tomaso Poggio %X

We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such an RNN, although having orders of magnitude fewer parameters, leads to performance similar to that of the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the effectiveness of the architectures by testing them on the CIFAR-10 dataset.
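
The core observation is easy to state in code: a deep ResNet whose layers share weights computes exactly the same function as a single residual block iterated as an RNN. A minimal sketch with an arbitrary toy block:

    import numpy as np

    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((16, 16))
    x = rng.standard_normal(16)

    def resnet(x, layers):
        # "Unrolled" view: a stack of residual layers (here all sharing W).
        h = x
        for Wk in layers:
            h = h + np.maximum(Wk @ h, 0.0)
        return h

    def rnn(x, W, steps):
        # "Rolled" view: one residual block applied for several time steps.
        h = x
        for _ in range(steps):
            h = h + np.maximum(W @ h, 0.0)
        return h

    assert np.allclose(resnet(x, [W] * 5), rnn(x, W, steps=5))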

%8 04/2016 %1

arXiv:1604.03640

%2

http://hdl.handle.net/1721.1/102238

%0 Journal Article %J A Sponsored Supplement to Science %D 2016 %T Deep Learning: Mathematics and Neuroscience %A Tomaso Poggio %X

Understanding the nature of intelligence is one of the greatest challenges in science and technology today. Making significant progress toward this goal will require the interaction of several disciplines including neuroscience and cognitive science, as well as computer science, robotics, and machine learning. In this paper, I will discuss the implications of recent empirical successes in many applications, such as image categorization, face identification, localization, and action recognition, through a machine learning technique called "deep learning," which is based on multi-layer or hierarchical neural networks. Such neural networks have become a central tool in machine learning.

%B A Sponsored Supplement to Science %V Brain-Inspired intelligent robotics: The intersection of robotics and neuroscience %P 9-12 %8 12/2016 %G eng %U http://science.imirus.com/Mpowered/imirus.jsp?volume=scim16&issue=6&page=10 %& 9 %0 Generic %D 2016 %T Deep Learning: mathematics and neuroscience %A Tomaso Poggio %X

Science and Engineering of Intelligence

The problems of Intelligence are, together, the greatest problem in science and technology today. Making significant progress towards their solution will require the interaction of several disciplines involving neuroscience and cognitive science in addition to computer science, robotics and machine learning...

%8 04/2016 %0 Journal Article %J Analysis and Applications %D 2016 %T Deep vs. shallow networks: An approximation theory perspective %A Hrushikesh Mhaskar %A Tomaso Poggio %K blessed representation %K deep and shallow networks %K Gaussian networks %K ReLU networks %X
The paper briefly reviews several recent results on hierarchical architectures for learning from examples, that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden layer architectures. The paper announces new results for a non-smooth activation function — the ReLU function — used in present-day neural networks, as well as for the Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks but not by shallow ones to drastically reduce the complexity required for approximation and learning.
%B Analysis and Applications %V 14 %P 829 - 848 %8 01/2016 %G eng %U http://www.worldscientific.com/doi/abs/10.1142/S0219530516400042 %N 06 %! Anal. Appl. %R 10.1142/S0219530516400042 %0 Generic %D 2016 %T Deep vs. shallow networks : An approximation theory perspective %A Hrushikesh Mhaskar %A Tomaso Poggio %X

The paper briefly reviews several recent results on hierarchical architectures for learning from examples, that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden layer architectures. The paper announces new results for a non-smooth activation function – the ReLU function – used in present-day neural networks, as well as for the Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks but not by shallow ones to drastically reduce the complexity required for approximation and learning. 

Journal submitted version.

%8 08/2016 %1

arXiv:1608.03287

%2

http://hdl.handle.net/1721.1/103911

%0 Generic %D 2016 %T Fast, invariant representation for human action in the visual system %A Leyla Isik %A Andrea Tacchetti %A Tomaso Poggio %X

Isik, L*, Tacchetti, A*, and Poggio, T (* authors contributed equally to this work)

 

The ability to recognize the actions of others from visual input is essential to humans' daily lives. The neural computations underlying action recognition, however, are still poorly understood. We use magnetoencephalography (MEG) decoding and a computational model to study action recognition from a novel dataset of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by five actors at five viewpoints. We show for the first time that actor- and view-invariant representations for action arise in the human brain as early as 200 ms. We next extend a class of biologically inspired hierarchical computational models of object recognition to recognize actions from videos and explain the computations underlying our MEG findings. This model achieves 3D viewpoint-invariance by the same biologically inspired computational mechanism it uses to build invariance to position and scale. These results suggest that robustness to complex transformations, such as 3D viewpoint invariance, does not require special neural architectures, and further provide a mechanistic explanation of the computations driving invariant action recognition.

%8 01/2016 %U http://arxiv.org/abs/1601.01358 %1

arXiv:1601.01358v1

%2

http://hdl.handle.net/1721.1/100804

%0 Generic %D 2016 %T Foveation-based Mechanisms Alleviate Adversarial Examples %A Luo, Yan %A X Boix %A Gemma Roig %A Tomaso Poggio %A Qi Zhao %X

We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a foveation-based mechanism: applying the CNN to different image regions. To see this, we first report results on ImageNet that lead to a revision of the hypothesis that adversarial perturbations are a consequence of CNNs acting as a linear classifier: CNNs act locally linearly to changes in the image regions with objects recognized by the CNN, and in other regions the CNN may act non-linearly. Then, we corroborate that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation. This is because, hypothetically, the CNNs for ImageNet are robust to changes of scale and translation of the object produced by the foveation, but this property does not generalize to transformations of the perturbation. As a result, the accuracy after a foveation is almost the same as the accuracy of the CNN without the adversarial perturbation, even if the adversarial perturbation is calculated taking the foveation into account.

%8 01/2016 %G English %1

arXiv:1511.06292

%2

http://hdl.handle.net/1721.1/100981

%0 Generic %D 2016 %T Group Invariant Deep Representations for Image Instance Retrieval %A Olivier Morère %A Antoine Veillard %A Jie Lin %A Julie Petta %A Vijay Chandrasekhar %A Tomaso Poggio %X

Most image instance retrieval pipelines are based on comparison of vectors known as global image descriptors between a query image and the database images. Due to their success in large scale image classification, representations extracted from Convolutional Neural Networks (CNN) are quickly gaining ground on Fisher Vectors (FVs) as state-of-the-art global descriptors for image instance retrieval. While CNN-based descriptors are generally noted for good retrieval performance at lower bitrates, they nevertheless present a number of drawbacks, including a lack of robustness to common object transformations such as rotations compared with their interest-point-based FV counterparts.


In this paper, we propose a method for computing invariant global descriptors from CNNs. Our method implements a recently proposed mathematical theory for invariance in a sensory cortex modeled as a feedforward neural network. The resulting global descriptors can be made invariant to multiple arbitrary transformation groups while retaining good discriminativeness.


Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated. We also show that our method, which has few parameters, is not prone to overfitting: improvements generalize well across datasets with different properties with regard to invariances. Finally, we show that our descriptors compare favourably to other state-of-the-art compact descriptors at similar bitrates, exceeding the highest retrieval results reported in the literature on some datasets. A dedicated dimensionality reduction step (quantization or hashing) may be able to further improve the competitiveness of the descriptors.

%8 01/2016 %G English %1

arXiv:1601.02093v1

%2

http://hdl.handle.net/1721.1/100796

%0 Conference Paper %B Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) %D 2016 %T Holographic Embeddings of Knowledge Graphs %A Maximilian Nickel %A Lorenzo Rosasco %A Tomaso Poggio %X

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.
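
The compositional operator here is circular correlation, computable in O(d log d) via FFTs; a triple (subject, relation, object) is then scored through the relation embedding. A minimal sketch of the scoring function:

    import numpy as np

    def circular_correlation(a, b):
        # [a * b]_k = sum_i a_i * b_{(i + k) mod d}, via the FFT identity.
        return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

    def hole_score(r, e_s, e_o):
        # Probability that the triple holds: sigmoid of r dotted with the correlation.
        return 1.0 / (1.0 + np.exp(-r @ circular_correlation(e_s, e_o)))

    r, e_s, e_o = np.random.default_rng(0).standard_normal((3, 64))
    print(hole_score(r, e_s, e_o))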

%B Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) %C Phoenix, Arizona, USA %G eng %0 Conference Paper %B Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) %D 2016 %T How Important Is Weight Symmetry in Backpropagation? %A Qianli Liao %A JZ. Leibo %A Tomaso Poggio %X

Gradient backpropagation (BP) requires symmetric feedforward and feedback connections -- the same weights must be used for forward and backward passes. This "weight transport problem" (Grossberg 1987) is thought to be one of the main reasons to doubt BP's biological plausibility. Using 15 different classification datasets, we systematically investigate to what extent BP really depends on weight symmetry. In a study that turned out to be surprisingly similar in spirit to Lillicrap et al.'s demonstration (Lillicrap et al. 2014) but orthogonal in its results, our experiments indicate that: (1) the magnitudes of feedback weights do not matter to performance; (2) the signs of feedback weights do matter -- the more concordant the signs between feedforward and their corresponding feedback connections, the better; (3) with feedback weights having random magnitudes and 100% concordant signs, we were able to achieve the same or even better performance than SGD; and (4) some normalizations/stabilizations are indispensable for such asymmetric BP to work, namely Batch Normalization (BN) (Ioffe and Szegedy 2015) and/or a "Batch Manhattan" (BM) update rule.
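
A sketch of the best-performing asymmetric variant described in result (3): the backward pass uses feedback weights that share only the signs of the feedforward weights, with random (here fixed) magnitudes. The shapes and single-layer setting are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((32, 64))                      # feedforward weights
    B = np.sign(W) * np.abs(rng.standard_normal(W.shape))  # sign-concordant feedback

    def backpropagate_error(delta_out):
        # Standard BP would use W.T here; the asymmetric variant uses B.T instead.
        return B.T @ delta_out

    print(backpropagate_error(rng.standard_normal(32)).shape)  # (64,)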

%B Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) %I Association for the Advancement of Artificial Intelligence %C Phoenix, AZ. %G eng %U https://cbmm.mit.edu/sites/default/files/publications/liao-leibo-poggio.pdf %0 Journal Article %J Information and Inference %D 2016 %T Introduction Special issue: Deep learning %A Bach, Francis %A Tomaso Poggio %X

Faced with large amounts of data, the aim of machine learning is to make predictions. It applies to many types of data, such as images, sounds, biological data, etc. A key difficulty is to find relevant vectorial representations. While this problem had often been handled in an ad hoc way by domain experts, it has recently proved useful to learn these representations directly from large quantities of data, and Deep Learning Convolutional Networks (DLCN) with ReLU nonlinearities have been particularly successful. The representations are then based on compositions of simple parameterized processing units, the depth coming from the large number of such compositions.

 

The goal of this special issue was to explore some of the mathematical ideas and problems at the heart of deep learning. In particular, two key mathematical questions about deep learning are:

These questions are still open and a full theory of Deep Learning is still in the making. This special issue, however, begins with two papers that provide a useful contribution to several other theoretical questions surrounding supervised deep learning.

%B Information and Inference %V 5 %P 103-104 %G eng %U http://imaiai.oxfordjournals.org/content/5/2/103.short %R 10.1093/imaiai/iaw010 %0 Journal Article %J Information and Inference: A Journal of the IMA %D 2016 %T On invariance and selectivity in representation learning %A F. Anselmi %A Lorenzo Rosasco %A Tomaso Poggio %X

We study the problem of learning from data representations that are invariant to transformations, and at the same time selective, in the sense that two points have the same representation if one is the transformation of the other. The mathematical results here sharpen some of the key claims of i-theory—a recent theory of feedforward processing in sensory cortex (Anselmi et al., 2013, Theor. Comput. Sci. and arXiv:1311.4158; Anselmi et al., 2013, Magic materials: a theory of deep hierarchical architectures for learning sensory representations. CBCL Paper; Anselmi & Poggio, 2010, Representation learning in sensory cortex: a theory. CBMM Memo No. 26).

%B Information and Inference: A Journal of the IMA %P iaw009 %8 05/2016 %G eng %U http://imaiai.oxfordjournals.org/lookup/doi/10.1093/imaiai/iaw009 %! Information and Inference %R 10.1093/imaiai/iaw009 %0 Generic %D 2016 %T Learning Functions: When Is Deep Better Than Shallow %A Hrushikesh Mhaskar %A Qianli Liao %A Tomaso Poggio %X

While the universal approximation property holds both for hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with an exponentially lower number of training parameters and VC-dimension. This theorem settles an old conjecture by Bengio on the role of depth in networks. We then define a general class of scalable, shift-invariant algorithms to show a simple and natural set of requirements that justify deep convolutional networks.

%U https://arxiv.org/pdf/1603.00988v4.pdf %1

arXiv:1603.00988

%2

http://hdl.handle.net/1721.1/101635

%0 Journal Article %J arXiv.org %D 2016 %T Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval %A Olivier Morère %A Antoine Veillard %A Vijay Chandrasekhar %A Tomaso Poggio %K CNN %K Hashing %K Image Instance Retrieval %K Invariant Representation %K Regularization %K unsupervised learning %X

The goal of this work is the computation of very compact binary hashes for image instance retrieval. Our approach has two novel contributions. The first one is Nested Invariance Pooling (NIP), a method inspired by i-theory, a mathematical theory for computing group invariant transformations with feed-forward neural networks. NIP is able to produce compact and well-performing descriptors with visual representations extracted from convolutional neural networks. We specifically incorporate scale, translation and rotation invariances, but the scheme can be extended to any arbitrary set of transformations. We also show that using moments of increasing order throughout nesting is important. The NIP descriptors are then hashed to the target code size (32-256 bits) with a Restricted Boltzmann Machine with a novel batch-level regularization scheme specifically designed for the purpose of hashing (RBMH). A thorough empirical evaluation against the state-of-the-art shows that the results obtained both with the NIP descriptors and the NIP+RBMH hashes are consistently outstanding across a wide range of datasets.
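
A rough sketch of the pooling step, assuming CNN descriptors have already been computed for several transformed copies of an image: pool across the group with moments of increasing order, one group at a time. The function and orders below are illustrative stand-ins for the paper's scheme.

    import numpy as np

    def pool_moments(descriptors, orders=(1, 2, 3)):
        # descriptors: (n_transforms, d) responses to transformed copies of an image.
        # Each p-th order moment pooled over the group is invariant to it.
        return np.concatenate([(np.abs(descriptors) ** p).mean(0) ** (1.0 / p)
                               for p in orders])

    phi_rotations = np.random.default_rng(0).standard_normal((8, 32))  # 8 rotations
    invariant_descriptor = pool_moments(phi_rotations)
    print(invariant_descriptor.shape)  # (96,): d features per pooled moment order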

%B arXiv.org %8 03/2016 %G eng %U https://arxiv.org/abs/1603.04595 %0 Journal Article %J Public Library of Science | PLoS ONE %D 2016 %T Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing %A Cheston Tan %A Tomaso Poggio %X

Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled “holistic processing”, while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, “neural tuning size”, is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology.

%B Public Library of Science | PLoS ONE %V 1(3): e0150980 %8 03/2016 %G eng %U http://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0150980 %R 10.1371/journal.pone.0150980 %0 Book Section %B From Neuron to Cognition via Computational Neuroscience %D 2016 %T Object and Scene Perception %A Owen Lewis %A Tomaso Poggio %X

Overview

This textbook presents a wide range of subjects in neuroscience from a computational perspective. It offers a comprehensive, integrated introduction to core topics, using computational tools to trace a path from neurons and circuits to behavior and cognition. Moreover, the chapters show how computational neuroscience—methods for modeling the causal interactions underlying neural systems—complements empirical research in advancing the understanding of brain and behavior.

The chapters—all by leaders in the field, and carefully integrated by the editors—cover such subjects as action and motor control; neuroplasticity, neuromodulation, and reinforcement learning; vision; and language—the core of human cognition.

The book can be used for advanced undergraduate or graduate level courses. It presents all necessary background in neuroscience beyond basic facts about neurons and synapses and general ideas about the structure and function of the human brain. Students should be familiar with differential equations and probability theory, and be able to pick up the basics of programming in MATLAB and/or Python. Slides, exercises, and other ancillary materials are freely available online, and many of the models described in the chapters are documented in the brain operation database, BODB (which is also described in a book chapter).

Available now through MIT Press - https://mitpress.mit.edu/neuron-cognition

%B From Neuron to Cognition via Computational Neuroscience %I The MIT Press %C Cambridge, MA, USA %@ 9780262034968 %G eng %U https://mitpress.mit.edu/neuron-cognition %& 17 %0 Report %D 2016 %T Spatio-temporal convolutional networks explain neural representations of human actions %A Andrea Tacchetti %A Leyla Isik %A Tomaso Poggio %G eng %0 Generic %D 2016 %T Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning %A Qianli Liao %A Kenji Kawaguchi %A Tomaso Poggio %X

We systematically explore a spectrum of normalization algorithms related to Batch Normalization (BN) and propose a generalized formulation that simultaneously solves two major limitations of BN: (1) online learning and (2) recurrent learning. Our proposal is simpler and more biologically plausible. Unlike previous approaches, our technique can be applied out of the box to all learning scenarios (e.g., online learning, batch learning, fully-connected, convolutional, feedforward, recurrent, and mixed — recurrent and convolutional) and compares favorably with existing approaches. We also propose Lp Normalization for normalizing by different orders of statistical moments. In particular, L1 normalization is well-performing, simple to implement, fast to compute, and more biologically plausible, and is thus ideal for GPU or hardware implementations.
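
A minimal sketch of the Lp idea on a single vector of activations (our reading of the abstract, not the paper's streaming algorithm, which additionally maintains running statistics across time steps and samples):

```python
import numpy as np

def lp_normalize(x, p=1, eps=1e-5):
    """Normalize activations by the p-th order moment of their deviations
    from the mean: p=2 divides by the standard deviation (as in Batch
    Normalization), while p=1 divides by the mean absolute deviation,
    which avoids squares and square roots entirely."""
    centered = x - x.mean()
    scale = (np.abs(centered) ** p).mean() ** (1.0 / p)
    return centered / (scale + eps)
```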

%8 10/2016 %1

arXiv:1610.06160v1

%2

http://hdl.handle.net/1721.1/104906

%0 Generic %D 2016 %T Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? %A Tomaso Poggio %A Hrushikesh Mhaskar %A Lorenzo Rosasco %A Brando Miranda %A Qianli Liao %X

[formerly titled "Why and When Can Deep - but Not Shallow - Networks Avoid the Curse of Dimensionality: a Review"]

The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.

%8 11/2016 %1

https://arxiv.org/abs/1611.00740v5

%2

http://hdl.handle.net/1721.1/105443

%0 Journal Article %J AI Magazine %D 2016 %T Turing++ Questions: A Test for the Science of (Human) Intelligence. %A Tomaso Poggio %A Ethan Meyers %X

It is becoming increasingly clear that there is an infinite number of definitions of intelligence. Machines that are intelligent in different narrow ways have been built since the 1950s. We are now entering a golden age for the engineering of intelligence and the development of many different kinds of intelligent machines. At the same time there is widespread interest among scientists in understanding a specific and well-defined form of intelligence: human intelligence. For this reason we propose a stronger version of the original Turing test. In particular, we describe here an open-ended set of Turing++ Questions that we are developing at the Center for Brains, Minds and Machines at MIT — that is, questions about an image. Questions may range from what is there, to who is there, to what this person is doing, to what this girl is thinking about this boy, and so on. The plural in “questions” emphasizes that there are many different intelligent abilities in humans that have to be characterized, and possibly replicated in a machine, from basic visual recognition of objects, to the identification of faces, to the gauging of emotions, to social intelligence, to language and much more. The term Turing++ emphasizes that our goal is understanding human intelligence at all of Marr’s levels — from the level of the computations to the level of the underlying circuits. Answers to the Turing++ Questions should thus be given in terms of models that match human behavior and human physiology — the mind and the brain. These requirements go well beyond the original Turing test. A whole scientific field that we call the science of (human) intelligence is required to make progress in answering our Turing++ Questions. It is connected to neuroscience and to the engineering of intelligence, but also separate from both of them.

%B AI Magazine %V 37 %P 73-77 %8 03/2016 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/view/2641 %N 1 %R http://dx.doi.org/10.1609/aimag.v37i1.2641 %0 Generic %D 2016 %T View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation %A JZ. Leibo %A Qianli Liao %A W. A. Freiwald %A F. Anselmi %A Tomaso Poggio %X

The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving transformations like depth-rotations [33, 32, 23, 13]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple and complex cell operations [46, 8, 44, 29]. While simulations of these models recapitulate the ventral stream’s progression from early view-specific to late view-tolerant representations, they fail to generate the most salient property of the intermediate representation for faces found in the brain: mirror-symmetric tuning of the neural population to head orientation [16]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules can provide approximate invariance at the top level of the network. While most of the learning rules do not yield mirror-symmetry in the mid-level representations, we characterize a specific biologically-plausible Hebb-type learning rule that is guaranteed to generate mirror-symmetric tuning to faces at intermediate levels of the architecture.

%8 06/2016 %1

arXiv:1606.01552v1 [cs.NE]

%2

http://hdl.handle.net/1721.1/103394

%0 Book %D 2016 %T Visual Cortex and Deep Networks: Learning Invariant Representations %A Tomaso Poggio %A F. Anselmi %X

The ventral visual stream is believed to underlie object recognition in primates. Over the past fifty years, researchers have developed a series of quantitative models that are increasingly faithful to the biological architecture. Recently, deep learning convolutional networks—which do not reflect several important features of the ventral stream architecture and physiology—have been trained with extremely large datasets, resulting in model neurons that mimic object recognition but do not explain the nature of the computations carried out in the ventral stream. This book develops a mathematical framework that describes learning of invariant representations in the ventral stream and is particularly relevant to deep convolutional learning networks.

The authors propose a theory based on the hypothesis that the main computational goal of the ventral stream is to compute neural representations of images that are invariant to transformations commonly encountered in the visual environment and are learned from unsupervised experience. They describe a general theoretical framework of a computational theory of invariance (with details and proofs offered in appendixes) and then review the application of the theory to the feedforward path of the ventral stream in the primate visual cortex.

%I The MIT Press %C Cambridge, MA, USA %P 136 %8 09/2016 %@ Hardcover: 9780262034722 | eBook: 9780262336703 %G eng %U https://mitpress.mit.edu/books/visual-cortex-and-deep-networks %0 Generic %D 2015 %T Deep Convolutional Networks are Hierarchical Kernel Machines %A F. Anselmi %A Lorenzo Rosasco %A Cheston Tan %A Tomaso Poggio %X

We extend i-theory to incorporate not only pooling but also rectifying nonlinearities in an extended HW module (eHW) designed for supervised learning. The two operations roughly correspond to invariance and selectivity, respectively. Under the assumption of normalized inputs, we show that appropriate linear combinations of rectifying nonlinearities are equivalent to radial kernels. If pooling is present, an equivalent kernel also exists. Thus present-day DCNs (Deep Convolutional Networks) can be exactly equivalent to a hierarchy of kernel machines with pooling and non-pooling layers. Finally, we describe a conjecture for theoretically understanding hierarchies of such modules. A main consequence of the conjecture is that hierarchies of eHW modules minimize memory requirements while computing a selective and invariant representation.

%8 06/17/2015 %1

arXiv:1508.01084

%2

http://hdl.handle.net/1721.1/100200

%0 Conference Paper %B INTERSPEECH-2015 %D 2015 %T Discriminative Template Learning in Group-Convolutional Networks for Invariant Speech Representations %A Chiyuan Zhang %A Stephen Voinea %A Georgios Evangelopoulos %A Lorenzo Rosasco %A Tomaso Poggio %B INTERSPEECH-2015 %I International Speech Communication Association (ISCA) %C Dresden, Germany %8 09/2015 %G eng %U http://www.isca-speech.org/archive/interspeech_2015/i15_3229.html %0 Generic %D 2015 %T Holographic Embeddings of Knowledge Graphs %A Maximilian Nickel %A Lorenzo Rosasco %A Tomaso Poggio %K Associative Memory %K Knowledge Graph %K Machine Learning %X

Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator, HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.
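
The compositional operator at the core of HolE is easy to state. Below is a minimal sketch of circular correlation and the resulting triple score, assuming real-valued embeddings of a common dimension (variable names are ours, not the paper's):

```python
import numpy as np

def circular_correlation(a, b):
    # [a (star) b]_k = sum_i a_i * b_((i + k) mod d), computed in O(d log d)
    # via the FFT identity: a (star) b = ifft(conj(fft(a)) * fft(b)).
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def hole_score(e_head, e_tail, w_rel):
    """Plausibility of the triple (head, relation, tail): the sigmoid of
    the relation embedding dotted with the correlated entity pair."""
    return 1.0 / (1.0 + np.exp(-w_rel @ circular_correlation(e_head, e_tail)))
```

Unlike a tensor-product composition, correlation keeps the composed representation at the entity dimension d, which is what makes this kind of embedding both expressive and cheap to score.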

%8 11/16/2015 %G English %1

arXiv:1510.04935

%2

http://hdl.handle.net/1721.1/100203

%0 Generic %D 2015 %T How Important is Weight Symmetry in Backpropagation? %A Qianli Liao %A JZ. Leibo %A Tomaso Poggio %X

Gradient backpropagation (BP) requires symmetric feedforward and feedback connections—the same weights must be used for forward and backward passes. This “weight transport problem” [1] is thought to be one of the main reasons for BP’s biological implausibility. Using 15 different classification datasets, we systematically study to what extent BP really depends on weight symmetry. In a study that turned out to be surprisingly similar in spirit to Lillicrap et al.’s demonstration [2] but orthogonal in its results, our experiments indicate that: (1) the magnitudes of feedback weights do not matter to performance; (2) the signs of feedback weights do matter—the more concordant the signs between feedforward connections and their corresponding feedback connections, the better; (3) with feedback weights having random magnitudes and 100% concordant signs, we were able to achieve the same or even better performance than SGD; and (4) some normalizations/stabilizations are indispensable for such asymmetric BP to work, namely Batch Normalization (BN) [3] and/or a “Batch Manhattan” (BM) update rule.
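
A toy sketch of one asymmetric variant studied here—feedback weights with random magnitudes but signs copied from the feedforward weights—for a single training step of a two-layer network (our minimal setup, not the paper's experiments; in the sign-concordant setting the feedback signs would be re-synchronized whenever the forward weights change sign):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 20, 32, 5
W1 = rng.normal(size=(d_h, d_in)) * 0.1   # forward weights, layer 1
W2 = rng.normal(size=(d_out, d_h)) * 0.1  # forward weights, layer 2
# Feedback matrix: random magnitudes, signs concordant with W2.
B2 = np.abs(rng.normal(size=W2.shape)) * np.sign(W2)

x = rng.normal(size=d_in)
target = np.eye(d_out)[0]                 # one-hot toy target

h = np.maximum(0.0, W1 @ x)               # ReLU hidden layer
y = W2 @ h                                # linear readout
err = y - target                          # gradient of 0.5 * ||y - target||^2

# Backward pass uses B2 in place of W2.T: asymmetric backpropagation.
delta_h = (B2.T @ err) * (h > 0)
lr = 0.01
W2 -= lr * np.outer(err, h)
W1 -= lr * np.outer(delta_h, x)
```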

%8 11/29/2015 %1

http://arxiv.org/abs/1510.05067

%2

http://hdl.handle.net/1721.1/100797

%0 Generic %D 2015 %T On Invariance and Selectivity in Representation Learning %A F. Anselmi %A Lorenzo Rosasco %A Tomaso Poggio %X

We discuss data representations that can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other. The mathematical results here sharpen some of the key claims of i-theory, a recent theory of feedforward processing in sensory cortex.

%8 03/23/2015 %G English %1

arXiv:1503.05938v1

%2

http://hdl.handle.net/1721.1/100194

%0 Generic %D 2015 %T The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex %A JZ. Leibo %A Qianli Liao %A F. Anselmi %A Tomaso Poggio %8 07/2015 %0 Journal Article %J PLOS Computational Biology %D 2015 %T The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex %A JZ. Leibo %A Qianli Liao %A F. Anselmi %A Tomaso Poggio %X

Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions, in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions.

%B PLOS Computational Biology %V 11 %P e1004390 %8 10/23/2015 %G eng %U http://dx.plos.org/10.1371/journal.pcbi.1004390 %N 10 %! Invariance and Domain Specificity %R 10.1371/journal.pcbi.1004390 %0 Generic %D 2015 %T Invariant representations for action recognition in the visual system. %A Andrea Tacchetti %A Leyla Isik %A Tomaso Poggio %B Vision Sciences Society %C Journal of vision %V 15 %U http://jov.arvojournals.org/article.aspx?articleid=2433666 %N 12 %R 10.1167/15.12.558 %0 Generic %D 2015 %T Invariant representations for action recognition in the visual system %A Leyla Isik %A Andrea Tacchetti %A Tomaso Poggio %B Computational and Systems Neuroscience %0 Generic %D 2015 %T I-theory on depth vs width: hierarchical function composition %A Tomaso Poggio %A F. Anselmi %A Lorenzo Rosasco %X

Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can approximate functions of several variables, in particular those that are compositions of low-dimensional functions. We show that the power of a deep network architecture with respect to a shallow network is rather independent of the specific nonlinear operations in the network and depends instead on the behavior of the VC-dimension. A shallow network can approximate compositional functions with the same error as a deep network, but at the cost of a VC-dimension that is exponential instead of quadratic in the dimensionality of the function. To complete the argument, we argue that there exist visual computations that are intrinsically compositional. In particular, we prove that recognition invariant to translation cannot be computed by shallow networks in the presence of clutter. Finally, a general framework that includes the compositional case is sketched. The key condition that allows tall, thin networks to be nicer than short, fat networks is that the target input-output function must be sparse in a certain technical sense.
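
As a hedged illustration of the kind of compositional target at issue (our notation, not the paper's): a function of n = 8 variables assembled from two-input constituent functions arranged as a binary tree,

```latex
f(x_1,\dots,x_8) = h_3\Big(h_{21}\big(h_{11}(x_1,x_2),\,h_{12}(x_3,x_4)\big),\;
                           h_{22}\big(h_{13}(x_5,x_6),\,h_{14}(x_7,x_8)\big)\Big)
```

A deep network mirroring the tree uses on the order of n two-input modules, whereas, by the abstract's claim, a shallow network achieving the same approximation error needs a VC-dimension exponential, rather than quadratic, in n.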

%8 12/29/2015 %2

http://hdl.handle.net/1721.1/100559

%0 Conference Paper %B Advances in Neural Information Processing Systems (NIPS 2015) 28 %D 2015 %T Learning with a Wasserstein Loss %A Charlie Frogner %A Chiyuan Zhang %A Hossein Mobahi %A Mauricio Araya-Polo %A Tomaso Poggio %X

Learning to predict multi-label outputs is challenging, but in many problems there is a natural metric on the outputs that can be used to improve predictions. In this paper we develop a loss function for multi-label learning, based on the Wasserstein distance. The Wasserstein distance provides a natural notion of dissimilarity for probability measures. Although optimizing with respect to the exact Wasserstein distance is costly, recent work has described a regularized approximation that is efficiently computed. We describe an efficient learning algorithm based on this regularization, as well as a novel extension of the Wasserstein distance from probability measures to unnormalized measures. We also describe a statistical learning bound for the loss. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data tag prediction problem, using the Yahoo Flickr Creative Commons dataset, outperforming a baseline that doesn’t use the metric.
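
The regularized approximation referred to is, in the line of work this builds on, entropy-regularized optimal transport computed with Sinkhorn iterations; here is a minimal sketch for two histograms under a ground cost matrix (names and defaults are our choices):

```python
import numpy as np

def sinkhorn_distance(p, q, C, eps=0.1, n_iter=200):
    """Entropy-regularized Wasserstein distance between histograms p and q
    under ground cost C, via alternating Sinkhorn scaling."""
    K = np.exp(-C / eps)             # Gibbs kernel of the cost
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)            # scale to match the q marginal
        u = p / (K @ v)              # scale to match the p marginal
    T = u[:, None] * K * v[None, :]  # approximate optimal transport plan
    return float(np.sum(T * C))

# Example: distributions over 5 ordered label bins, ground cost = |i - j|.
p = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
q = np.array([0.05, 0.05, 0.1, 0.1, 0.7])
C = np.abs(np.subtract.outer(np.arange(5.0), np.arange(5.0)))
print(sinkhorn_distance(p, q, C))
```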

%B Advances in Neural Information Processing Systems (NIPS 2015) 28 %G eng %U http://arxiv.org/abs/1506.05439 %0 Conference Paper %B NIPS 2015 %D 2015 %T Learning with Group Invariant Features: A Kernel Perspective %A Youssef Mroueh %A Stephen Voinea %A Tomaso Poggio %X
We analyze in this paper a random feature map based on a theory of invariance (I-theory) introduced in Anselmi et al. (2013). More specifically, a group-invariant signal signature is obtained through cumulative distributions of group-transformed random projections. Our analysis bridges invariant feature learning with kernel methods, as we show that this feature map defines an expected Haar-integration kernel that is invariant to the specified group action. We show how this non-linear random feature map approximates the group-invariant kernel uniformly on a set of N points. Moreover, we show that it defines a function space that is dense in the equivalent Invariant Reproducing Kernel Hilbert Space. Finally, we quantify the error rates of convergence of the empirical risk minimization, as well as the reduction in the sample complexity of a learning algorithm using such an invariant representation for signal classification, in a classical supervised learning setting.
%B NIPS 2015 %G eng %U https://papers.nips.cc/paper/5798-learning-with-group-invariant-features-a-kernel-perspective %0 Generic %D 2015 %T Notes on Hierarchical Splines, DCLNs and i-theory %A Tomaso Poggio %A Lorenzo Rosasco %A Amnon Shashua %A Nadav Cohen %A F. Anselmi %X

We define an extension of classical additive splines for multivariate function approximation that we call hierarchical splines. We show that the case of hierarchical, additive, piece-wise linear splines includes present-day Deep Convolutional Learning Networks (DCLNs) with linear rectifiers and pooling (sum or max). We discuss how these observations together with i-theory may provide a framework for a general theory of deep networks.

%2

http://hdl.handle.net/1721.1/100201

%0 Generic %D 2015 %T A Science of Intelligence %A Christof Koch %A Tomaso Poggio %X

We are in the midst of a revolution in machine intelligence, the engineering of getting computers to perform tasks that, until recently, could only be done by people. You can speak to your smart phone and it answers back, software identifies faces at border-crossings and labels people and objects in pictures posted to social media. Algorithms can teach themselves to play Atari video games. A camera and chip embedded into the front view-mirror of top-of-the-line sedans lets the vehicle drive autonomously on the open road...

%8 07/2015 %0 Journal Article %J Theoretical Computer Science %D 2015 %T Unsupervised learning of invariant representations %A F. Anselmi %A JZ. Leibo %A Lorenzo Rosasco %A Jim Mutch %A Andrea Tacchetti %A Tomaso Poggio %K convolutional networks %K Cortex %K Hierarchy %K Invariance %X

The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n→∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n→1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a “good” representation for supervised learning, characterized by small sample complexity. We consider the case of visual object recognition, though the theory also applies to other domains like speech. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translation, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and selective signature can be computed for each image or image patch: the invariance can be exact in the case of group transformations and approximate under non-group transformations. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such a signature. The theory offers novel unsupervised learning algorithms for “deep” architectures for image and speech recognition. We conjecture that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and selective for recognition—and show how this representation may be continuously learned in an unsupervised way during development and visual experience.
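
A minimal sketch of the signature computation described here—projecting an image onto each stored template orbit and summarizing the projections by a one-dimensional empirical distribution—assuming unit-norm vectors so dot products fall in [-1, 1] (our simplification):

```python
import numpy as np

def invariant_signature(image_vec, template_orbits, bins=10):
    """Concatenated histograms of dot products between an image and each
    template orbit (the stored set of transformed versions of a template).
    Transforming the image by a group element only permutes the projections
    within an orbit, so each histogram -- and hence the signature -- is
    unchanged: invariant yet selective for the image's identity."""
    signature = []
    for orbit in template_orbits:            # orbit: array (n_transforms, d)
        projections = orbit @ image_vec
        hist, _ = np.histogram(projections, bins=bins, range=(-1.0, 1.0))
        signature.append(hist / len(projections))
    return np.concatenate(signature)
```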

%B Theoretical Computer Science %8 06/25/2015 %G eng %U http://www.sciencedirect.com/science/article/pii/S0304397515005587 %R 10.1016/j.tcs.2015.06.048 %0 Generic %D 2015 %T What if... %A Tomaso Poggio %X

The background: DCLNs (Deep Convolutional Learning Networks) are doing very well

Over the last 3 years, and increasingly so in the last few months, I have seen supervised DCLNs — feedforward and recurrent — do more and more of everything quite well. They seem to learn good representations for a growing number of speech and text problems (for a review by the pioneers in the field see LeCun, Bengio, Hinton, 2015). More interestingly, it is increasingly clear, as I will discuss later, that instead of being trained on millions of labeled examples they can be trained in implicitly supervised ways. This breakthrough in machine learning triggers a few dreams. What if we now have the basic answer to how to develop brain-like intelligence and its basic building blocks?...

%8 06/2015 %0 Generic %D 2014 %T Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? %A Qianli Liao %A JZ. Leibo %A Youssef Mroueh %A Tomaso Poggio %K Computer vision %K Face recognition %K Hierarchy %K Invariance %X

The standard approach to unconstrained face recognition in natural photographs is via a detection, alignment, recognition pipeline. While that approach has achieved impressive results, there are several reasons to be dissatisfied with it, among them its lack of biological plausibility. A recent theory of invariant recognition by feedforward hierarchical networks, like HMAX, other convolutional networks, or possibly the ventral stream, implies an alternative approach to unconstrained face recognition. This approach accomplishes detection and alignment implicitly, by storing transformations of training images (called templates) rather than explicitly detecting and aligning faces at test time. Here we propose a particular locality-sensitive-hashing-based voting scheme, which we call “consensus of collisions”, and show that it can be used to approximate the full 3-layer hierarchy implied by the theory. The resulting end-to-end system for unconstrained face recognition operates on photographs of faces taken under natural conditions, e.g., Labeled Faces in the Wild (LFW), without aligning or cropping them, as is normally done. It achieves a drastic improvement in the state of the art on this end-to-end task, reaching the same level of performance as the best systems operating on aligned, closely cropped images (no outside training data). It also performs well on two newer datasets, similar to LFW but more difficult: LFW-jittered (new here) and SUFR-W.
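
A hedged sketch of a locality-sensitive-hashing voting scheme in this spirit (a toy single-table version with random-hyperplane hashes; the actual system stores many transformed templates and would use multiple hash tables). `templates`, a dict mapping an identity to an array of its transformed template vectors, is a hypothetical stand-in for the stored training data:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
d, n_bits = 128, 16
H = rng.normal(size=(n_bits, d))   # random hyperplanes defining the hash

def lsh_key(v):
    # Random-hyperplane LSH: the sign pattern of projections is the bucket key.
    return tuple((H @ v > 0).astype(int))

def build_index(templates):
    """Hash every stored (transformed) template into a bucket labeled
    with its identity."""
    index = {}
    for identity, vectors in templates.items():
        for t in vectors:
            index.setdefault(lsh_key(t), []).append(identity)
    return index

def consensus_of_collisions(index, query_vec):
    # Every stored template colliding with the query in its bucket casts a
    # vote for its identity; the identity with the most collisions wins.
    votes = Counter(index.get(lsh_key(query_vec), []))
    return votes.most_common(1)[0][0] if votes else None
```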

%8 03/2014 %1

arXiv:1311.4082v3

%2

http://hdl.handle.net/1721.1/100164

%0 Generic %D 2014 %T Computational role of eccentricity dependent cortical magnification. %A Tomaso Poggio %A Jim Mutch %A Leyla Isik %K Invariance %K Theories for Intelligence %X

We develop a sampling extension of M-theory focused on invariance to scale and translation. Quite surprisingly, the theory predicts an architecture of early vision with increasing receptive field sizes and a high-resolution fovea — in agreement with data about the cortical magnification factor, V1 and the retina. From the slope of the inverse of the magnification factor, M-theory predicts a cortical “fovea” in V1 on the order of 40 by 40 basic units at each receptive field size — corresponding to a foveola of size around 26 minutes of arc at the highest resolution, ≈6 degrees at the lowest resolution. It also predicts uniform scale invariance over a fixed range of scales independently of eccentricity, while translation invariance should depend linearly on spatial frequency. Bouma’s law of crowding follows in the theory as an effect of cortical-area-by-cortical-area pooling; the Bouma constant is the value expected if the signature responsible for recognition in the crowding experiments originates in V2. From a broader perspective, the emerging picture suggests that visual recognition under natural conditions takes place by composing information from a set of fixations, with each fixation providing recognition from a space-scale image fragment — that is, an image patch represented at a set of increasing sizes and decreasing resolutions.

%8 06/2014 %1

arXiv:1406.1770v1

%2

http://hdl.handle.net/1721.1/100181

%0 Generic %D 2014 %T A Deep Representation for Invariance And Music Classification %A Chiyuan Zhang %A Georgios Evangelopoulos %A Stephen Voinea %A Lorenzo Rosasco %A Tomaso Poggio %K Audio Representation %K Hierarchy %K Invariance %K Machine Learning %K Theories for Intelligence %X

Representations in the auditory cortex might be based on mechanisms similar to those in the visual ventral stream: modules for building invariance to transformations, and multiple layers for compositionality and selectivity. In this paper we propose the use of such computational modules for extracting invariant and discriminative audio representations. Building on a theory of invariance in hierarchical architectures, we propose a novel, mid-level representation for acoustical signals, using the empirical distributions of projections on a set of templates and their transformations. Under the assumption that, by construction, this dictionary of templates is composed from similar classes, and samples the orbit of variance-inducing signal transformations (such as shift and scale), the resulting signature is theoretically guaranteed to be unique, invariant to transformations and stable to deformations. Modules of projection and pooling can then constitute layers of deep networks, for learning composite representations. We present the main theoretical and computational aspects of a framework for unsupervised learning of invariant audio representations, empirically evaluated on music genre classification.

%8 03/2014 %1

arXiv:1404.0400v1

%2

http://hdl.handle.net/1721.1/100163

%0 Conference Paper %B ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing %D 2014 %T A Deep Representation for Invariance and Music Classification %A Chiyuan Zhang %A Georgios Evangelopoulos %A Stephen Voinea %A Lorenzo Rosasco %A Tomaso Poggio %K acoustic signal processing %K signal representation %K unsupervised learning %B ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing %I IEEE %C Florence, Italy %8 05/04/2014 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6854954 %R 10.1109/ICASSP.2014.6854954 %0 Journal Article %J J Neurophysiol %D 2014 %T The dynamics of invariant object recognition in the human visual system. %A Leyla Isik %A Ethan Meyers %A JZ. Leibo %A Tomaso Poggio %K Adolescent %K Adult %K Evoked Potentials, Visual %K Female %K Humans %K Male %K Pattern Recognition, Visual %K Reaction Time %K visual cortex %X

The human visual system can rapidly recognize objects despite transformations that alter their appearance. The precise timing of when the brain computes neural representations that are invariant to particular transformations, however, has not been mapped in humans. Here we employ magnetoencephalography decoding analysis to measure the dynamics of size- and position-invariant visual information development in the ventral visual stream. With this method we can read out the identity of objects beginning as early as 60 ms. Size- and position-invariant visual information appear around 125 ms and 150 ms, respectively, and both develop in stages, with invariance to smaller transformations arising before invariance to larger transformations. Additionally, the magnetoencephalography sensor activity localizes to neural sources that are in the most posterior occipital regions at the early decoding times and then move temporally as invariant information develops. These results provide previously unknown latencies for key stages of human-invariant object recognition, as well as new and compelling evidence for a feed-forward hierarchical model of invariant object recognition where invariance increases at each successive visual area along the ventral stream.

Corresponding Dataset - The dynamics of invariant object recognition in the human visual system.

%B J Neurophysiol %V 111 %P 91-102 %8 01/2014 %G eng %U http://jn.physiology.org/content/early/2013/09/27/jn.00394.2013.abstract %N 1 %R 10.1152/jn.00394.2013 %0 Generic %D 2014 %T The dynamics of invariant object recognition in the human visual system. %A Leyla Isik %A Ethan Meyers %A JZ. Leibo %A Tomaso Poggio %X

This is the dataset for the corresponding Journal Article - The dynamics of invariant object recognition in the human visual system.

Dataset files can be downloaded here - http://dx.doi.org/10.7910/DVN/KRUPXZ

11 subjects’ MEG data from Isik et al., 2014. Data is available in raw .fif format or in Matlab raster format that is compatible with the neural decoding toolbox (readout.info).

For MATLAB code to preprocess this MEG data and run the decoding analyses, please visit

https://bitbucket.org/lisik/meg_decoding

%8 01/2014 %R http://dx.doi.org/10.7910/DVN/KRUPXZ %0 Generic %D 2014 %T The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex %A JZ. Leibo %A Qianli Liao %A F. Anselmi %A Tomaso Poggio %K Neuroscience %K Theories for Intelligence %X

Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions.

%8 04/2014 %G eng %U http://biorxiv.org/lookup/doi/10.1101/004473 %2

http://hdl.handle.net/1721.1/100168

%R 10.1101/004473 %0 Generic %D 2014 %T Learning An Invariant Speech Representation %A Georgios Evangelopoulos %A Stephen Voinea %A Chiyuan Zhang %A Lorenzo Rosasco %A Tomaso Poggio %K Theories for Intelligence %X

Recognition of speech, and in particular the ability to generalize and learn from small sets of labelled examples like humans do, depends on an appropriate representation of the acoustic input. We formulate the problem of finding robust speech features for supervised learning with small sample complexity as a problem of learning representations of the signal that are maximally invariant to intraclass transformations and deformations. We propose an extension of a theory for unsupervised learning of invariant visual representations to the auditory domain and empirically evaluate its validity for voiced speech sound classification. Our version of the theory requires the memory-based, unsupervised storage of acoustic templates — such as specific phones or words — together with all the transformations of each that normally occur. A quasi-invariant representation for a speech segment can be obtained by projecting it to each template orbit, i.e., the set of transformed signals, and computing the associated one-dimensional empirical probability distributions. The computations can be performed by modules of filtering and pooling, and extended to hierarchical architectures. In this paper, we apply a single-layer, multicomponent representation for phonemes and demonstrate improved accuracy and decreased sample complexity for vowel classification compared to standard spectral, cepstral and perceptual features.

%8 06/2014 %1

arXiv:1406.3884

%2

http://hdl.handle.net/1721.1/100186

%0 Conference Paper %B NIPS 2013 %D 2014 %T Learning invariant representations and applications to face verification %A Qianli Liao %A JZ. Leibo %A Tomaso Poggio %K Computer vision %X

One approach to computer object recognition and modeling the brain’s ventral stream involves unsupervised learning of representations that are invariant to common transformations. However, applications of these ideas have usually been limited to 2D affine transformations, e.g., translation and scaling, since they are easiest to solve via convolution. In accord with a recent theory of transformation-invariance [1], we propose a model that, while capturing other common convolutional networks as special cases, can also be used with arbitrary identity-preserving transformations. The model’s wiring can be learned from videos of transforming objects—or any other grouping of images into sets by their depicted object. Through a series of successively more complex empirical tests, we study the invariance/discriminability properties of this model with respect to different transformations. First, we empirically confirm theoretical predictions (from [1]) for the case of 2D affine transformations. Next, we apply the model to non-affine transformations; as expected, it performs well on face verification tasks requiring invariance to the relatively smooth transformations of 3D rotation-in-depth and changes in illumination direction. Surprisingly, it can also tolerate clutter “transformations” which map an image of a face on one background to an image of the same face on a different background. Motivated by these empirical findings, we tested the same model on face verification benchmark tasks from the computer vision literature: Labeled Faces in the Wild, PubFig [2, 3, 4] and a new dataset we gathered—achieving strong performance in these highly unconstrained cases as well.

%B NIPS 2013 %I Advances in Neural Information Processing Systems 26 %C Lake Tahoe, Nevada %8 02/2014 %G eng %U http://nips.cc/Conferences/2013/Program/event.php?ID=4074 %0 Generic %D 2014 %T Neural tuning size is a key factor underlying holistic face processing. %A Cheston Tan %A Tomaso Poggio %K Theories for Intelligence %X

Faces are a class of visual stimuli with unique significance, for a variety of reasons. They are ubiquitous throughout the course of a person’s life, and face recognition is crucial for daily social interaction. Faces are also unlike any other stimulus class in terms of certain physical stimulus characteristics. Furthermore, faces have been empirically found to elicit certain characteristic behavioral phenomena, which are widely held to be evidence of “holistic” processing of faces. However, little is known about the neural mechanisms underlying such holistic face processing. In other words, for the processing of faces by the primate visual system, the input and output characteristics are relatively well known, but the internal neural computations are not. The main aim of this work is to further the fundamental understanding of what causes the visual processing of faces to be different from that of objects. In this computational modeling work, we show that a single factor – “neural tuning size” – is able to account for three key phenomena that are characteristic of face processing, namely the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our computational proof-of-principle provides specific neural tuning properties that correspond to the poorly-understood notion of holistic face processing, and connects these neural properties to psychophysical behavior. Overall, our work provides a unified and parsimonious theoretical account for the disparate empirical data on face-specific processing, deepening the fundamental understanding of face processing.

%8 06/2014 %1

arXiv:1406.3793

%2

http://hdl.handle.net/1721.1/100185

%0 Conference Paper %B INTERSPEECH 2014 - 15th Annual Conf. of the International Speech Communication Association %D 2014 %T Phone Classification by a Hierarchy of Invariant Representation Layers %A Chiyuan Zhang %A Stephen Voinea %A Georgios Evangelopoulos %A Lorenzo Rosasco %A Tomaso Poggio %K Hierarchy %K Invariance %K Neural Networks %K Speech Representation %X

We propose a multi-layer feature extraction framework for speech, capable of providing invariant representations. A set of templates is generated by sampling the result of applying smooth, identity-preserving transformations (such as vocal tract length and tempo variations) to arbitrarily-selected speech signals. Templates are then stored as the weights of “neurons”. We use a cascade of such computational modules to factor out different types of transformation variability in a hierarchy, and show that it improves phone classification over baseline features. In addition, we describe empirical comparisons of a) different transformations which may be responsible for the variability in speech signals and of b) different ways of assembling template sets for training. The proposed layered system is an effort towards explaining the performance of recent deep learning networks and the principles by which the human auditory cortex might reduce the sample complexity of learning in speech recognition. Our theory and experiments suggest that invariant representations are crucial in learning from complex, real-world data like natural speech. Our model is built on basic computational primitives of cortical neurons, thus making an argument about how representations might be learned in the human auditory cortex.

%B INTERSPEECH 2014 - 15th Annual Conf. of the International Speech Communication Association %I International Speech Communication Association (ISCA) %C Singapore %G eng %U http://www.isca-speech.org/archive/interspeech_2014/i14_2346.html %0 Generic %D 2014 %T Representation Learning in Sensory Cortex: a theory. %A F. Anselmi %A Tomaso Poggio %X

We review and apply a computational theory of the feedforward path of the ventral stream in visual cortex, based on the hypothesis that its main function is the encoding of invariant representations of images. A key justification of the theory is provided by a theorem linking invariant representations to small sample complexity for recognition – that is, invariant representations allow learning from very few labeled examples. The theory characterizes how an algorithm that can be implemented by a set of “simple” and “complex” cells – an “HW module” – provides invariant and selective representations. The invariance can be learned in an unsupervised way from observed transformations. Theorems show that invariance implies several properties of the ventral stream organization, including the eccentricity-dependent lattice of units in the retina and in V1, and the tuning of its neurons. The theory requires two stages of processing: the first, consisting of retinotopic visual areas such as V1, V2 and V4 with generic neuronal tuning, leads to representations that are invariant to translation and scaling; the second, consisting of modules in IT, with class- and object-specific tuning, provides a representation for recognition with approximate invariance to class-specific transformations, such as pose (of a body, of a face) and expression. In the theory, the main function of the ventral stream is the unsupervised learning of “good” representations that reduce the sample complexity of the final supervised learning stage.

%8 11/2014 %2

http://hdl.handle.net/1721.1/100191

%0 Generic %D 2014 %T Is Research in Intelligence an Existential Risk? %A Tomaso Poggio %X

Recent months have seen an increasingly public debate taking form around the risks of AI (Artificial Intelligence). A letter signed by Nobel laureates and other physicists defined AI as the top existential risk to mankind. More recently, Tesla CEO Elon Musk has been quoted saying that it is “potentially more dangerous than nukes.” Physicist Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race”. And of course recent films such as Her and Transcendence have reinforced the message. Thoughtful comments by experts in the field such as Rod Brooks, Oren Etzioni and others have done little to settle the debate.

As the Director of a new multi-institution, NSF-funded and MIT-based Science and Technology Center — called the Center for Brains, Minds and Machines (CBMM) — I am arguing here, on behalf of my collaborators and many colleagues, that the terms of the debate should be fundamentally rephrased. Our vision of the Center’s research integrates cognitive science, neuroscience, computer science, and artificial intelligence. Our belief is that understanding intelligence and replicating it in machines goes hand in hand with understanding how the brain and the mind perform intelligent computations. The convergence of, and recent progress in, technology, mathematics, and neuroscience has created a new opportunity for synergy across fields. The dream of understanding intelligence is an old one. Yet, as the debate around AI shows, now is an exciting time to pursue this vision. Our mission at CBMM is thus to establish an emerging field, the Science and Engineering of Intelligence. This integrated effort should ultimately make fundamental progress with great value to science, technology, and society. We believe that we must push ahead with research, not pull back.

%8 12/2014 %0 Generic %D 2014 %T Speech Representations based on a Theory for Learning Invariances %A Stephen Voinea %A Chiyuan Zhang %A Georgios Evangelopoulos %A Lorenzo Rosasco %A Tomaso Poggio %X

Recognition of sounds and speech from a small number of labelled examples (as humans do) depends on the properties of the representation of the acoustic input. We formulate the problem of extracting robust speech features for supervised learning with small sample complexity as a problem of learning representations of the signal that are maximally invariant to intraclass transformations and deformations. We propose an extension of a theory for unsupervised learning of invariant visual representations to the auditory domain, which requires the memory-based, unsupervised storage of acoustic templates -- such as specific phones or words -- together with all the transformations of each that normally occur. A quasi-invariant representation for a speech signal can be obtained by projecting it onto a number of template orbits, i.e., each one a set of transformed template signals, and computing the associated one-dimensional empirical probability distributions. The computations are performed by modules of filtering and pooling that can be used for obtaining a mapping in single- or multilayer architectures. We consider several aspects of such representations, including different signal scales (word vs. frame), input domains (raw waveforms vs. frequency filterbank responses), structures (shallow vs. multilayer/hierarchical), and ways of sampling from template orbit sets given a set of observations (explicit vs. learned). Preliminary empirical evaluations for learning to separate speech phones and words are given on TIMIT and subsets of TI-DIGITS.

%C SANE 2014 - Speech and Audio in the Northeast %8 10/2014 %9 poster presentation %0 Generic %D 2014 %T Subtasks of Unconstrained Face Recognition %A JZ. Leibo %A Qianli Liao %A Tomaso Poggio %K Face identification %K Invariance %K Labeled Faces in the Wild %K Same-different matching %K Synthetic data %X

Unconstrained face recognition remains a challenging computer vision problem despite recent exceptionally high results (∼ 95% accuracy) on the current gold standard evaluation dataset: Labeled Faces in the Wild (LFW) (Huang et al., 2008; Chen et al., 2013). We offer a decomposition of the unconstrained problem into subtasks based on the idea that invariance to identity-preserving transformations is the crux of recognition. Each of the subtasks in the Subtasks of Unconstrained Face Recognition (SUFR) challenge consists of a same-different face-matching problem on a set of 400 individual synthetic faces rendered so as to isolate a specific transformation or set of transformations. We characterized the performance of 9 different models (8 previously published) on each of the subtasks. One notable finding was that the HMAX-C2 feature was not nearly as clutter-resistant as had been suggested by previous publications (Leibo et al., 2010; Pinto et al., 2011). Next we considered LFW and argued that it is too easy a task to continue to be regarded as a measure of progress on unconstrained face recognition. In particular, strong performance on LFW requires almost no invariance, yet it cannot be considered a fair approximation of the outcome of a detection→alignment pipeline since it does not contain the kinds of variability that realistic alignment systems produce when working on non-frontal faces. We offer a new, more difficult, natural-image dataset: SUFR-in-the-Wild (SUFR-W), which we created using a protocol similar to LFW’s, but with a few differences designed to produce more need for transformation invariance. We present baseline results for eight different face recognition systems on the new dataset and argue that it is time to retire LFW and move on to more difficult evaluations for unconstrained face recognition.

%I 9th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. (VISAPP). %C Lisbon, Portugal %8 01/2014 %0 Generic %D 2014 %T Subtasks of unconstrained face recognition %A JZ. Leibo %A Qianli Liao %A Tomaso Poggio %K Computer vision %X

This package contains:

1.  SUFR-W, a dataset of “in the wild” natural images of faces gathered from the internet. The protocol used to create the dataset is described in Leibo, Liao and Poggio (2014).

2.  The full set of SUFR synthetic datasets, called the “Subtasks of Unconstrained Face Recognition Challenge” in Leibo, Liao and Poggio (2014).

%8 01/2014 %0 Book Section %B The History of Neuroscience in Autobiography Volume 8 %D 2014 %T Tomaso A. Poggio %A Tomaso Poggio %A Larry R. Squire %X

Tomaso Poggio began his career in collaboration with Werner Reichardt, quantitatively characterizing the visuomotor control system in the fly. With David Marr, he introduced the seminal idea of levels of analysis in computational neuroscience. He introduced regularization as a mathematical framework to approach the ill-posed problems of vision and—more importantly—the key problem of learning from data. He has contributed to the early development of the theory of learning—in particular introducing the mathematics of radial basis functions (RBF)—and has worked on supervised learning in reproducing kernel Hilbert spaces (RKHSs) and on stability. In the last decade, he has developed an influential quantitative model of visual recognition in the visual cortex, recently extended into a theory of sensory perception. He is one of the most cited computational scientists, with contributions ranging from biophysical and behavioral studies of the visual system to computational analyses of vision and learning in humans and machines.

%B The History of Neuroscience in Autobiography Volume 8 %I Society for Neuroscience %V 8 %8 04/2014 %@ 978-0-615-94079-3 %G eng %U https://www.sfn.org/about/history-of-neuroscience/autobiographical-chapters %0 Generic %D 2014 %T Unsupervised learning of clutter-resistant visual representations from natural videos. %A Qianli Liao %A JZ. Leibo %A Tomaso Poggio %X

Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, viewing angle [1, 2, 3]. Though the learning rules are not known, recent results [4, 5, 6] suggest the operation of an unsupervised temporal-association-based method, e.g., Foldiak’s trace rule [7]. Such methods exploit the temporal continuity of the visual world by assuming that visual experience over short timescales will tend to have invariant identity content. Thus, by associating representations of frames from nearby times, a representation that tolerates whatever transformations occurred in the video may be achieved. Many previous studies verified that such rules can work in simple situations without background clutter, but the presence of visual clutter has remained problematic for this approach. Here we show that temporal association based on large class-specific filters (templates) avoids the problem of clutter. Our system learns in an unsupervised way from natural videos gathered from the internet, and is able to perform a difficult unconstrained face recognition task on natural images (Labeled Faces in the Wild [8]).
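
The temporal-association mechanism can be sketched with a Foldiak-style trace rule (a minimal toy version of the general idea, not the class-specific template system of the paper): each unit's weights drift toward the current frame in proportion to a low-pass-filtered trace of the unit's activity, so temporally adjacent frames become associated with the same units.

```python
import numpy as np

def trace_rule_learning(frames, n_units=50, lr=0.01, trace_decay=0.8, seed=0):
    """Foldiak-style trace rule on a (n_frames, d) array of video frames:
    delta_w = lr * trace * (x - w), where trace is a running average of each
    unit's activity. Frames close in time drive the same units, so the
    learned weights come to tolerate the transformations seen in the video."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, frames.shape[1])) * 0.1
    trace = np.zeros(n_units)
    for x in frames:
        y = W @ x                                        # unit activations
        trace = trace_decay * trace + (1 - trace_decay) * y
        W += lr * trace[:, None] * (x[None, :] - W)      # trace-gated update
    return W
```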

%8 09/2014 %1

arXiv:1409.3879v1

%2

http://hdl.handle.net/1721.1/100187

%0 Generic %D 2014 %T Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? %A F. Anselmi %A JZ. Leibo %A Lorenzo Rosasco %A Jim Mutch %A Andrea Tacchetti %A Tomaso Poggio %K Computer vision %K Pattern recognition %X

The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n→∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n→1), like humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on the unsupervised, automatic learning of a “good” representation for supervised learning, characterized by small sample complexity (n). We consider the case of visual object recognition, though the theory applies to other domains. The starting point is the conjecture, proved in specific cases, that image representations which are invariant to translations, scaling and other transformations can considerably reduce the sample complexity of learning. We prove that an invariant and unique (discriminative) signature can be computed for each image patch, I, in terms of empirical distributions of the dot-products between I and a set of templates stored during unsupervised learning. A module performing filtering and pooling, like the simple and complex cells described by Hubel and Wiesel, can compute such estimates. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts. The theory extends existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable, and discriminative for recognition—and that this representation may be continuously learned in an unsupervised way during development and visual experience.

%8 03/2014 %1

1311.4158v5

%2

http://hdl.handle.net/1721.1/90566

%0 Conference Paper %B INTERSPEECH 2014 - 15th Annual Conf. of the International Speech Communication Association %D 2014 %T Word-level Invariant Representations From Acoustic Waveforms %A Stephen Voinea %A Chiyuan Zhang %A Georgios Evangelopoulos %A Lorenzo Rosasco %A Tomaso Poggio %K Invariance %K Speech Representation %K Theories for Intelligence %X

Extracting discriminant, transformation-invariant features from raw audio signals remains a serious challenge for speech recognition. The issue of speaker variability is central to this problem, as changes in accent, dialect, gender, and age alter the sound waveform of speech units at multiple scales (phonemes, words, or phrases). Approaches for dealing with this variability have typically focused on analyzing the spectral properties of speech at the level of frames, on par with frame-level acoustic modeling usually applied to speech recognition systems. In this paper, we propose a framework for representing speech at the whole-word level and extracting features from the acoustic, temporal domain, without the need for spectral encoding or pre-processing. Leveraging recent work on unsupervised learning of invariant sensory representations, we extract a signature for a word by first projecting its raw waveform onto a set of templates and their transformations, and then forming empirical estimates of the resulting one-dimensional distributions via histograms. The representation and relevant parameters are evaluated for word classification on a series of datasets with increasing speaker-mismatch difficulty, and the results are compared to those of an MFCC-based representation.

%B INTERSPEECH 2014 - 15th Annual Conf. of the International Speech Communication Association %I International Speech Communication Association (ISCA) %C Singapore %G eng %U http://www.isca-speech.org/archive/interspeech_2014/i14_2385.html %0 Book Section %B Empirical Inference %D 2013 %T On Learnability, Complexity and Stability %A Silvia Villa %A Lorenzo Rosasco %A Tomaso Poggio %E Bernhard Schölkopf %E Zhiyuan Luo %E Vladimir Vovk %X

Empirical Inference, Chapter 7

Editors: Bernhard Schölkopf, Zhiyuan Luo and Vladimir Vovk

Abstract:

We consider the fundamental question of learnability of a hypothesis class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik. We survey classic results characterizing learnability in terms of suitable notions of complexity, as well as more recent results that establish the connection between learnability and stability of a learning algorithm.
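
One concrete instance of the stability results surveyed in the chapter, sketched here via Bousquet and Elisseeff's notion of uniform stability (the exact constants below are quoted from memory and only indicative): call an algorithm A β-uniformly stable if removing any single training point from the sample S changes its loss at any test point z by at most β. If in addition the loss is bounded by M, then with probability at least 1 - δ over the draw of the n training points,

    R(A_S) \;\le\; \hat{R}_n(A_S) \;+\; 2\beta \;+\; (4 n \beta + M)\sqrt{\frac{\ln(1/\delta)}{2n}},

so any algorithm with stability rate β = O(1/n), such as Tikhonov-regularized empirical risk minimization, generalizes; this is the sense in which stability of a learning algorithm yields learnability.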

%B Empirical Inference %I Springer Berlin Heidelberg %C Berlin, Heidelberg %P 59-69 %@ 978-3-642-41135-9 %G eng %U http://link.springer.com/10.1007/978-3-642-41136-6 %& 7 %R 10.1007/978-3-642-41136-6_7 %0 Generic %D 2013 %T NSF Science and Technology Centers – The Class of 2013 %A Eaton Lattman %A Tomaso Poggio %A Robert Westervelt %I North America Gender Summit %C Washington, D.C. %8 11/2013 %0 Conference Proceedings %D 2013 %T Unsupervised Learning of Invariant Representations in Hierarchical Architectures. %A F. Anselmi %A JZ. Leibo %A Lorenzo Rosasco %A Jim Mutch %A Andrea Tacchetti %A Tomaso Poggio %K convolutional networks %K Hierarchy %K Invariance %K visual cortex %X

Representations that are invariant to translation, scale and other transformations can considerably reduce the sample complexity of learning, allowing recognition of new object classes from very few examples – a hallmark of human recognition. Empirical estimates of one-dimensional projections of the distribution induced by a group of affine transformations are proven to represent a unique and invariant signature associated with an image. We show how projections yielding invariant signatures for future images can be learned automatically, and updated continuously, during unsupervised visual experience. A module performing filtering and pooling, like the simple and complex cells proposed by Hubel and Wiesel, can compute such estimates. Under this view, a pooling stage estimates a one-dimensional probability distribution. Invariance from observations through a restricted window is equivalent to a sparsity property w.r.t. a transformation, which yields templates that are a) Gabor for optimal simultaneous invariance to translation and scale or b) very specific for complex, class-dependent transformations such as rotation in depth of faces. Hierarchical architectures consisting of this basic Hubel-Wiesel module inherit its properties of invariance, stability, and discriminability while capturing the compositional organization of the visual world in terms of wholes and parts, and are invariant to complex transformations that may only be locally affine. The theory applies to several existing deep learning convolutional architectures for image and speech recognition. It also suggests that the main computational goal of the ventral stream of visual cortex is to provide a hierarchical representation of new objects which is invariant to transformations, stable, and discriminative for recognition – this representation may be learned in an unsupervised way from natural visual experience.
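
For prediction (a), a small sketch of the template family involved: a Gabor function is a sinusoidal carrier under a Gaussian envelope; the sizes, wavelengths, and orientations below are illustrative choices of ours, not values derived in the paper.

    import numpy as np

    def gabor(size, wavelength, theta, sigma, phase=0.0):
        # Sinusoidal carrier windowed by an isotropic Gaussian envelope:
        # the template shape predicted under (a) for optimal simultaneous
        # invariance to translation and scale.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)  # carrier axis after rotation
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        return envelope * np.cos(2.0 * np.pi * xr / wavelength + phase)

    # A small bank covering a few scales and orientations:
    bank = [gabor(21, wl, th, sigma=0.5 * wl)
            for wl in (4.0, 8.0, 16.0)
            for th in np.linspace(0.0, np.pi, 4, endpoint=False)]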

%8 11/2013 %G eng %0 Conference Paper %B Advances in Neural Information Processing Systems 25 (NIPS 2012) %D 2012 %T Learning manifolds with k-means and k-flats %A Guillermo D. Canas %A Tomaso Poggio %A Lorenzo Rosasco %X

We study the problem of estimating a manifold from random samples. In particular, we consider piecewise constant and piecewise linear estimators induced by k-means and k-flats, and analyze their performance. We extend previous results for k-means in two separate directions. First, we provide new results for k-means reconstruction on manifolds and, second, we prove reconstruction bounds for higher-order approximation (k-flats), for which no known results were previously available. While the results for k-means are novel, some of the technical tools are well-established in the literature. In the case of k-flats, both the results and the mathematical tools are new.
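
A toy sketch of the k-flats estimator analyzed in the paper, assuming a Lloyd-style alternation (assign each sample to its nearest flat, then refit each flat by PCA); the function name and initialization scheme are ours, for illustration only. With dim = 0 the projection term vanishes and the procedure reduces to ordinary k-means, i.e. the piecewise-constant case.

    import numpy as np

    def k_flats(X, k, dim, n_iter=50, seed=0):
        # Alternate between (1) assigning each point to the affine flat
        # with the smallest squared residual and (2) refitting each flat
        # as the best dim-dimensional PCA plane of its assigned points.
        rng = np.random.default_rng(seed)
        X = np.asarray(X, dtype=float)
        n, D = X.shape
        means = X[rng.choice(n, size=k, replace=False)].copy()
        bases = [np.linalg.qr(rng.standard_normal((D, dim)))[0] for _ in range(k)]

        for _ in range(n_iter):
            dists = np.empty((n, k))
            for j in range(k):
                R = X - means[j]                  # center on flat j
                proj = R @ bases[j] @ bases[j].T  # component lying in the flat
                dists[:, j] = ((R - proj) ** 2).sum(axis=1)
            labels = dists.argmin(axis=1)
            for j in range(k):
                pts = X[labels == j]
                if len(pts) > dim:                # enough points to fit a flat
                    means[j] = pts.mean(axis=0)
                    _, _, Vt = np.linalg.svd(pts - means[j], full_matrices=False)
                    bases[j] = Vt[:dim].T
        return means, bases, labels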

%B Advances in Neural Information Processing Systems 25 (NIPS 2012) %8 12/2012 %G eng %U https://papers.nips.cc/paper/2012/hash/b20bb95ab626d93fd976af958fbc61ba-Abstract.html %0 Generic %D 2011 %T A Large Video Database for Human Motion Recognition %A E. Garrote %A H. Jhuang %A H. Huehne %A Tomaso Poggio %A T. Serre %X

With nearly one billion online videos viewed every day, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large-scale static image datasets containing thousands of image categories, human action datasets lag far behind.

Here we introduce HMDB, collected from various sources, mostly from movies, and a small proportion from public databases such as the Prelinger archive, YouTube and Google videos. The dataset contains 6849 clips divided into 51 action categories, each containing a minimum of 101 clips.

The action categories can be grouped into five types:

  1. General facial actions: smile, laugh, chew, talk.
  2. Facial actions with object manipulation: smoke, eat, drink.
  3. General body movements: cartwheel, clap hands, climb, climb stairs, dive, fall on the floor, backhand flip, handstand, jump, pull up, push up, run, sit down, sit up, somersault, stand up, turn, walk, wave.
  4. Body movements with object interaction: brush hair, catch, draw sword, dribble, golf, hit something, kick ball, pick, pour, push something, ride bike, ride horse, shoot ball, shoot bow, shoot gun, swing baseball bat, sword exercise, throw.
  5. Body movements for human interaction: fencing, hug, kick someone, kiss, punch, shake hands, sword fight.

%8 01/2011 %0 Generic %D 2010 %T CNS (“Cortical Network Simulator”): a GPU-based framework for simulating cortically-organized networks %A Jim Mutch %A Ulf Knoblich %A Tomaso Poggio %X

A general GPU-based framework for the fast simulation of “cortically-organized” networks, defined as networks consisting of n-dimensional layers of similar cells.

This is a fairly broad class, including more than just “HMAX” models. We have developed specialized CNS packages for HMAX feature hierarchy models (hmax), convolutional networks (cnpkg), and networks of Hodgkin-Huxley spiking cells (hhpkg).

While CNS is designed for use with a GPU, it can run (much more slowly) without one. It does, however, require MATLAB.

%8 01/2010 %0 Generic %D 2010 %T System for Mouse Behavior Recognition %A E. Garrote %A H. Jhuang %A V. Khilnani %A Tomaso Poggio %A T. Serre %A X. Yu %X

Neurobehavioural analysis of mouse phenotypes requires the monitoring of mouse behaviour over long periods of time. In this study, we describe a trainable computer vision system enabling the automated analysis of complex mouse behaviours. We provide software and an extensive manually annotated video database used for training and testing the system. Our system performs on par with human scoring, as measured from ground-truth manual annotations of thousands of clips of freely behaving mice. As a validation of the system, we characterized the home-cage behaviours of two standard inbred and two non-standard mouse strains. From these data, we were able to predict in a blind test the strain identity of individual animals with high accuracy. Our video-based software will complement existing sensor-based automated approaches and enable an adaptable, comprehensive, high-throughput, fine-grained, automated analysis of mouse behaviour.

%8 01/2010