CBMM presenting at NeurIPS 2020

December 7, 2020

The Center for Brains, Minds and Machines is well-represented at the thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2020).


Below, you will find listings of the accepted papers, proceedings, and posters, along with accompanying coverage:

CBMM is proud to have hosted this year's Shared Visual Representations in Human & Machine Intelligence (SVRHM) workshop at NeurIPS 2020, held Dec. 12, 2020. Videos are now available.


"Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations"
Dapello, J, Marques, T, Schrimpf, M, Geiger, F, Cox, D, DiCarlo, JJ, Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)

a): A 1920 × 1080 (PNG) image divided into 32 × 32 pixel blocks, each colored according to its distance from the fixation point (image center). b): Foveated 1920 × 1080 image, computed with our foveated optimization scheme based on adaptive Gaussian blurring (generated at a rate of 165 Hz). c): Close-up of image a) at the transition from fovea into periphery. d): Corresponding close-up of image b). e): Original image. Notice that even without image blending such as cosine windowing functions, the final foveated output from b) looks smoothly generated in reference to e).

“CUDA-Optimized real-time rendering of a Foveated Visual System”
E. Malkin, Deza, A., and Poggio, T. A., Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop at NeurIPS 2020.
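The caption above describes blur strength that varies with distance from the fixation point. As a minimal Python sketch (not the authors' CUDA implementation), one can compute a per-block Gaussian sigma map for a 1920 × 1080 image divided into 32 × 32 blocks; the fovea radius, the linear ramp, and `max_sigma` are illustrative assumptions.

```python
import math

def block_sigmas(width, height, block=32, fovea_radius=100.0, max_sigma=8.0):
    """Per-block Gaussian blur strengths for a foveated image.

    Hypothetical sketch: sigma grows linearly with the distance of each
    block's centre from the fixation point (here, the image centre),
    clamped at max_sigma; blocks inside the fovea stay sharp (sigma = 0).
    """
    cx, cy = width / 2, height / 2
    max_dist = math.hypot(cx, cy)
    sigmas = []
    for by in range(0, height, block):
        row = []
        for bx in range(0, width, block):
            # Distance of the block centre from the fixation point.
            d = math.hypot(bx + block / 2 - cx, by + block / 2 - cy)
            # No blur inside the fovea; linear ramp outside it.
            row.append(0.0 if d <= fovea_radius
                       else max_sigma * (d - fovea_radius) / (max_dist - fovea_radius))
        sigmas.append(row)
    return sigmas

# Sigma map for a 1920 x 1080 frame, one entry per 32 x 32 block.
sig = block_sigmas(1920, 1080)
```

Each block would then be blurred with its own sigma; the paper's contribution is doing this at real-time rates (165 Hz) on the GPU.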

Left Top: Distributions of sampling points for 5 different retinal fixations (center, top right, top left, bottom right, bottom left). Red dots represent pixels that would be sampled from the original standard image to form the retina-sampled image. Left Bottom: Effect of retina sampling on an image of a flat checkerboard. Images presented were re-sampled at 5 different fixation points (same as above). Right Top: Shown in red, the centering of scale-space fragments on 5 different cortical fixations on the image (center, bottom right, top right, bottom left, top left). Scale-space fragments for ImageNet images were of the sizes 40x40, 80x80, 160x160, 240x240. Right Bottom: The resulting 4 scale-space fragments from 1 fixation point. For a single fixation point, the 4 scale-space fragments result from crops of varying sizes but are all Gaussian-downsampled to the size of the smallest (40x40).

“Biologically Inspired Mechanisms for Adversarial Robustness”
M. Vuyyuru Reddy, Banburski, A., Pant, N., and Poggio, T., NeurIPS 2020 Poster Session 6, Dec. 10 from 12:00-2:00pm.
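The scale-space fragments described in the caption above — crops of several sizes around one fixation, all reduced to the smallest size — can be sketched in plain Python. This is an illustrative stand-in only: the image is a 2-D list of grayscale values, and the paper's Gaussian downsampling is approximated here by simple box averaging.

```python
def scale_space_fragments(img, fx, fy, sizes=(40, 80, 160, 240)):
    """Crop squares of several sizes centred on fixation (fx, fy) and
    average-pool each down to the smallest size (40x40).

    Sketch under stated assumptions: img is a 2-D list of grayscale
    values large enough to contain every crop; box averaging stands in
    for Gaussian downsampling.
    """
    out_side = min(sizes)
    fragments = []
    for s in sizes:
        half = s // 2
        # Crop an s x s window around the fixation point.
        crop = [[img[fy - half + r][fx - half + c] for c in range(s)]
                for r in range(s)]
        k = s // out_side  # pooling factor: 1, 2, 4 or 6
        # Average each k x k cell down to one output pixel.
        frag = [[sum(crop[r * k + i][c * k + j]
                     for i in range(k) for j in range(k)) / (k * k)
                 for c in range(out_side)]
                for r in range(out_side)]
        fragments.append(frag)
    return fragments
```

The result is four 40x40 fragments per fixation: the smallest crop at full resolution and progressively larger, coarser views of the surround.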

One of the four-way mirroring tasks and the program discovered that solves it. The program was discovered only after four iterations of enumeration and compression.

“Dreaming with ARC”
A. Banburski, Gandhi, A., Alford, S., Dandekar, S., Chin, P., and Poggio, T., Learning Meets Combinatorial Algorithms workshop at NeurIPS 2020.

Illustration of our synthesis-based rule learner and comparison to previous work. A) Previous work [9]: Support examples are encoded into an external neural memory. A query output is predicted by conditioning on the query input sequence and interacting with the external memory via attention. B) Our model: Given a support set of input-output examples, our model produces a distribution over candidate grammars. We sample from this distribution and symbolically check the consistency of each sampled grammar against the support set until a grammar is found which satisfies the input-output examples in the support set. This approach allows much more effective search than selecting the maximum-likelihood grammar from the network.

"Learning Compositional Rules via Neural Program Synthesis"
Nye, M, Solar-Lezama, A, Tenenbaum, JB, Lake, BM, Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)
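The sample-and-check loop in the caption above — draw candidate grammars from a proposal distribution, keep the first one consistent with every support example — reduces to a few lines. In this hypothetical sketch, plain Python functions stand in for grammars and a fixed weighted distribution stands in for the neural proposal; all names are illustrative.

```python
import random

def sample_and_check(candidates, weights, support, max_tries=1000, seed=0):
    """Sample candidates from a weighted proposal distribution and return
    the first one that satisfies every (input, output) support example.

    Stand-in for the paper's neural proposal over grammars: here the
    candidates are ordinary functions and the weights are fixed.
    """
    rng = random.Random(seed)
    for _ in range(max_tries):
        g = rng.choices(candidates, weights=weights)[0]
        # Symbolic consistency check against the whole support set.
        if all(g(x) == y for x, y in support):
            return g
    return None  # no consistent candidate found within the budget

# Toy usage: only doubling is consistent with both support examples.
candidates = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
g = sample_and_check(candidates, [5, 3, 1], [(2, 4), (3, 6)])
```

Because inconsistent samples are rejected outright, the search can recover a correct rule even when the proposal assigns it less probability than an incorrect one.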

Examples of graph modularity that our algorithm can auto-discover. Lines denote real-valued variables and ovals denote functions, with larger ones being more complex.

“AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity”
S. - M. Udrescu, Tan, A., Feng, J., Neto, O., Wu, T., and Tegmark, M., in Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020).

Overview of the simulation and the hierarchical planner. (A) Key components of the simulation. (B) The hierarchical planner in our simulation. At each step, the planner searches for an action based on the agent's belief, represented by a set of particles.

“PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception”
A. Netanyahu, Shu, T., Katz, B., Barbu, A., and Tenenbaum, J. B., in Shared Visual Representations in Human and Machine Intelligence (SVRHM) workshop at NeurIPS 2020.
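The caption above describes a planner choosing actions under a particle-based belief. A minimal Python sketch of that single idea (not the PHASE hierarchical planner itself): score each action by its expected reward averaged over belief particles and take the best. The particle, action, and reward representations are illustrative assumptions.

```python
def plan_action(particles, actions, reward):
    """Pick the action with the highest expected reward under a
    particle-based belief.

    Sketch only: each particle is one hypothesised world state, and
    `reward(state, action)` is a user-supplied scoring function.
    """
    best, best_val = None, float("-inf")
    for a in actions:
        # Expected reward: average over all belief particles.
        val = sum(reward(s, a) for s in particles) / len(particles)
        if val > best_val:
            best, best_val = a, val
    return best

# Toy usage: "go" pays off in proportion to the state, "stay" pays nothing.
best = plan_action([1.0, 2.0, 3.0], ["stay", "go"],
                   lambda s, a: s if a == "go" else 0.0)
```

Averaging over particles is what lets the planner act sensibly even when the agent is uncertain which state it is actually in.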

“Learning abstract structure for drawing by efficient motor program induction”
L. Tian, Ellis, K., Kryven, M., and Tenenbaum, J. B., in Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020).