Most computations that people perform in everyday life are computationally expensive. Recent research highlights that humans make efficient use of their limited computational resources to tackle these problems. Memory is a crucial aspect of algorithmic efficiency: it permits the reuse of past computation through memoization. We review neural and behavioral evidence of humans reusing past computations across several domains, including mental imagery, arithmetic, planning, and probabilistic inference. Recent developments in neural networks expand the scope of computational reuse with a distributed form of memoization called amortization, opening many new avenues of research. Computer scientists have long recognized that naive implementations of algorithms often result in a paralyzing degree of redundant computation. More sophisticated implementations harness the power of memory by storing computational results and reusing them later. We review the application of these ideas to cognitive science in four case studies (mental arithmetic, mental imagery, planning, and probabilistic inference). Despite their superficial differences, these cognitive processes share a common reliance on memory that enables efficient computation.

}, keywords = {amortization, inference, memory, mental arithmetic, mental imagery, planning}, issn = {1364-6613}, doi = {10.1016/j.tics.2020.12.008}, url = {https://linkinghub.elsevier.com/retrieve/pii/S1364661320303053}, author = {Ishita Dasgupta and Samuel J Gershman} } @article {4501, title = {A theory of learning to infer.}, journal = {Psychological Review}, volume = {127}, year = {2020}, month = {04/2020}, pages = {412--441}, abstract = {Bayesian theories of cognition assume that people can integrate probabilities rationally. However, several empirical findings contradict this proposition: human probabilistic inferences are prone to systematic deviations from optimality. Puzzlingly, these deviations sometimes go in opposite directions. Whereas some studies suggest that people underreact to prior probabilities ({\em base rate neglect}), other studies find that people underreact to the likelihood of the data ({\em conservatism}). We argue that these deviations arise because the human brain does not rely solely on a general-purpose mechanism for approximating Bayesian inference that is invariant across queries. Instead, the brain is equipped with a recognition model that maps queries to probability distributions. The parameters of this recognition model are optimized to get the output as close as possible, on average, to the true posterior. Because its computational resources are limited, the recognition model allocates them so as to be more accurate for high-probability queries than for low-probability queries. By adapting to the query distribution, the recognition model learns to infer. We show that this theory can explain why and when people underreact to the data or the prior, and a new experiment demonstrates that these two forms of underreaction can be systematically controlled by manipulating the query distribution.
The theory also explains a range of related phenomena: memory effects, belief bias, and the structure of response variability in probabilistic reasoning. We also discuss how the theory can be integrated with prior sampling-based accounts of approximate inference.

Stochasticity is an essential part of explaining the world. Increasingly, neuroscientists and cognitive scientists are identifying mechanisms whereby the brain uses probabilistic reasoning in representational, predictive, and generative settings. But stochasticity is not always useful: robust perception and memory retrieval require representations that are immune to corruption by stochastic noise. In an effort to combine these robust representations with stochastic computation, we present an architecture that generalizes traditional recurrent attractor networks to follow probabilistic Markov dynamics between stable and noise-resistant fixed points.

}, author = {Ishita Dasgupta and Jeremy Bernstein and David Rolnick and Haim Sompolinsky} } @article {2257, title = {Where do hypotheses come from?}, year = {2016}, month = {10/2016}, abstract = {Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks where the candidate hypotheses are explicitly available result in close to rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes{\textquoteright} rule. Specifically, in our account, hypotheses are generated stochastically from a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinitely many samples, we assume that people draw only a small number of samples, since sampling is constrained by time pressure and limited cognitive resources. We show that this model recreates several well-documented experimental findings, such as anchoring and adjustment, subadditivity, superadditivity, the crowd-within effect, the self-generation effect, the weak evidence effect, and the dud alternative effect. Additionally, in two experiments, we confirm the model{\textquoteright}s prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses.

}, author = {Ishita Dasgupta and Eric Schulz and Samuel J Gershman} }