We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal, in the sense of having the best accuracy for a given complexity. It improves on the previous state-of-the-art by typically being orders of magnitude more robust toward noise and bad data, and also by discovering many formulas that stumped previous methods. We develop a method for discovering generalized symmetries (arbitrary modularity in the computational graph of a formula) from gradient properties of a neural network fit. We use normalizing flows to generalize our symbolic regression method to probability distributions from which we only have samples, and employ statistical hypothesis testing to accelerate robust brute-force search.

Readthedocs: https://ai-feynman.readthedocs.io/en/latest/
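To illustrate the Pareto-optimality criterion the abstract describes, here is a minimal sketch (our illustration, not the AI Feynman implementation) that filters candidate formulas, each scored by a complexity and an error, down to the Pareto front of best accuracy per complexity:

```python
def pareto_front(candidates):
    """candidates: list of (complexity, error) pairs; lower is better for both.

    Returns the Pareto-optimal subset: candidates not dominated by any
    other candidate that is at least as simple and strictly more accurate.
    """
    front = []
    for c, e in sorted(candidates):          # sort by complexity, then error
        if not front or e < front[-1][1]:    # keep only strict error improvements
            front.append((c, e))
    return front
```

For example, `pareto_front([(3, 0.2), (1, 0.5), (2, 0.1)])` drops the complexity-3 candidate, since a simpler formula (complexity 2) already achieves lower error.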

%B Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)
%8 12/2020
%G eng
%0 Conference Proceedings
%B 34th International Conference on Machine Learning
%D 2017
%T Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
%A Jing, Li
%A Shen, Yichen
%A Dubček, Tena
%A Peurifoy, John
%A Skirlo, Scott
%A LeCun, Yann
%A Tegmark, Max
%A Soljačić, Marin
%B 34th International Conference on Machine Learning
%V 70
%P 1733-1741
%8 08/2017
%G eng
%U https://arxiv.org/abs/1612.05231
%0 Journal Article
%J Journal of Statistical Physics
%D 2017
%T Why does deep and cheap learning work so well?
%A Lin, Henry
%A Tegmark, Max
%K Artificial neural networks
%K deep learning
%K Statistical physics
%X We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through “cheap learning” with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various “no-flattening theorems” showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that *n* variables cannot be multiplied using fewer than 2^*n* neurons in a single hidden layer.
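For intuition on why multiplication is cheap for *deep* networks, note that just four hidden neurons with any smooth nonlinearity \(\sigma\) satisfying \(\sigma''(0)\neq 0\) suffice to approximately multiply two small inputs, via a standard Taylor-expansion construction (notation ours):

```latex
xy \;\approx\; \frac{\sigma(x+y)+\sigma(-x-y)-\sigma(x-y)-\sigma(-x+y)}{4\,\sigma''(0)}
```

The constant, linear, and cubic Taylor terms cancel pairwise, leaving \(4\sigma''(0)\,xy\) plus a fourth-order error; composing such pairwise gates in a deep network multiplies *n* variables with linearly many neurons, whereas the theorem above shows a single hidden layer needs exponentially many.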

Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally infeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented in which each is characterized by its choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options), and choice of measure for comparing probability distributions (7 options). When the Φ-measures are required to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands.
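As a concrete toy instance of the taxonomy above (our illustration, not one of the paper's recommended measures): choosing the factorization into independent marginals and KL divergence as the comparison measure yields the mutual information, arguably the simplest Φ-style integration measure:

```python
import numpy as np

def phi_kl(joint):
    """joint: 2D array of probabilities p(x, y) summing to 1.

    Returns KL(p || p_x * p_y), i.e. the mutual information between the
    two subsystems: a minimal measure of how 'integrated' the joint
    distribution is relative to its fully factorized counterpart.
    """
    px = joint.sum(axis=1, keepdims=True)   # marginal over the first subsystem
    py = joint.sum(axis=0, keepdims=True)   # marginal over the second subsystem
    factorized = px * py                    # product-of-marginals distribution
    mask = joint > 0                        # 0 * log 0 contributes nothing
    return float(np.sum(joint[mask] * np.log(joint[mask] / factorized[mask])))
```

An independent joint distribution gives Φ = 0, while a perfectly correlated pair of binary variables (`[[0.5, 0], [0, 0.5]]`) gives Φ = log 2, the maximum for one bit.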

%B PLOS Computational Biology
%8 11/2016
%G eng
%U http://dx.plos.org/10.1371/journal.pcbi.1005123
%! PLoS Comput Biol
%R 10.1371/journal.pcbi.1005123