Learning and Reasoning in Symbolic Domains

Early AI researchers predicted that symbolic problems (theorem proving and game playing, for example) would be the first for which computers would match human performance. The reasons for this prediction remain compelling: symbolic problems are crisply expressible in formal language, they are largely free of the noise and dependence on world knowledge that complicate other domains, and human and animal performance on symbolic tasks does not appear to be supported by large swaths of specialized cortex. Unlike problems in areas like sensation and motor control, symbolic problems seem to exist on computers’ “home turf.” It is mysterious, therefore, that modern models of symbolic problem solving lag so far behind human performance.

The hypothesis of this project is that the human edge on symbolic problems is attributable to general cognitive skills more frequently associated with “core” cognitive domains such as perception and action. We are focusing on three such skills in particular.

Simulation. Evidence from a number of sources suggests that humans and animals reason about their environment by simulating it. This enables them, for instance, to predict future states of the world and to select actions designed to achieve a particular goal. Simulation is often presented as an alternative to symbolic reasoning, since it operates on a concrete world state rather than on first-order, symbolic representations. In the first thrust of this project, we investigate how the two strategies can operate together. We use experiments to characterize the situations in which humans deploy simulation, and models to show how simulation can support abstracted, explicitly symbolic problem solving.
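To make the idea of simulation-based action selection concrete, the sketch below rolls candidate action sequences forward through a transition model and returns the first sequence that reaches the goal. The names (step, is_goal) and the exhaustive rollout strategy are illustrative assumptions, not the model developed in this thrust.

```python
from itertools import product

def plan_by_simulation(state, actions, step, is_goal, horizon=5):
    """Roll candidate action sequences forward through a simulator and
    return the first sequence that reaches the goal within the horizon."""
    for depth in range(1, horizon + 1):
        for seq in product(actions, repeat=depth):
            s = state
            for a in seq:
                s = step(s, a)        # simulate one step of the world
            if is_goal(s):
                return list(seq)      # a goal-reaching plan was found
    return None                       # no plan within the horizon

# Toy usage: reach position 3 on a number line by stepping left or right.
step = lambda s, a: s + (1 if a == "right" else -1)
print(plan_by_simulation(0, ["left", "right"], step, lambda s: s == 3))
# -> ['right', 'right', 'right']
```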

Similarity. Many successful models in vision, audition, and language processing operate by mapping an input signal into a feature space with a similarity metric that approximates semantic similarity in the input space. An analogous feature space for symbolic problems would guide discrete search and support efficient gradient-based optimization. Our work will extend previous efforts in this direction by developing compositional vector space embeddings for first-order logical formulae.
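One simple way to picture such an embedding is to map formula trees to vectors recursively, composing the embeddings of subformulae with a learned operator per connective and comparing results by cosine similarity. The sketch below uses random, untrained parameters and a tuple encoding of formulae purely for illustration; it is not the embedding model this thrust will develop.

```python
import numpy as np

DIM = 16
rng = np.random.default_rng(0)
symbol_vec = {}     # one vector per atomic symbol (predicate, constant, variable)
compose_mat = {}    # one composition matrix per connective or predicate

def embed(formula):
    """Recursively map a formula tree, e.g. ("and", ("P", "x"), ("Q", "x")),
    to a fixed-length vector."""
    if isinstance(formula, str):                       # atomic symbol
        if formula not in symbol_vec:
            symbol_vec[formula] = rng.normal(size=DIM)
        return symbol_vec[formula]
    head, *args = formula
    if head not in compose_mat:
        compose_mat[head] = rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)
    child = np.mean([embed(a) for a in args], axis=0)  # pool argument embeddings
    return np.tanh(compose_mat[head] @ (child + embed(head)))

def similarity(f, g):
    u, v = embed(f), embed(g)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two formulae with the same structure differ only in their variable embeddings,
# so they land close together in the (untrained) embedding space.
print(similarity(("and", ("P", "x"), ("Q", "x")),
                 ("and", ("P", "y"), ("Q", "y"))))
```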

Information foraging. A large body of experimental and modeling work across a number of domains describes how humans structure their interactions with the environment to efficiently converge to correct hypotheses. Here, we apply similar ideas to symbolic learning, studying how people design experiments to speed up search in theory learning problems.
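A standard way to formalize this kind of experiment design is to score candidate experiments by their expected information gain over a hypothesis space, i.e., the expected reduction in posterior entropy. The sketch below computes this for a toy discrete case; the hypotheses, experiments, and likelihood function are illustrative assumptions rather than the stimuli used in our studies.

```python
import math

def entropy(posterior):
    """Shannon entropy (in nats) of a distribution over hypotheses."""
    return -sum(p * math.log(p) for p in posterior.values() if p > 0)

def expected_info_gain(prior, experiment, likelihood, outcomes):
    """Expected entropy reduction over hypotheses from running `experiment`,
    where likelihood(o, experiment, h) gives P(o | experiment, h)."""
    gain = 0.0
    for o in outcomes:
        # Marginal probability of this outcome under the prior.
        p_o = sum(prior[h] * likelihood(o, experiment, h) for h in prior)
        if p_o == 0:
            continue
        # Posterior over hypotheses after observing the outcome.
        post = {h: prior[h] * likelihood(o, experiment, h) / p_o for h in prior}
        gain += p_o * (entropy(prior) - entropy(post))
    return gain

# Toy usage: two candidate rules, and an experiment whose outcome fully
# discriminates them, so the expected gain equals the prior entropy (ln 2).
prior = {"rule_A": 0.5, "rule_B": 0.5}
lik = lambda o, e, h: 1.0 if (o == "yes") == (h == "rule_A") else 0.0
print(expected_info_gain(prior, "test_x", lik, ["yes", "no"]))
```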
