Image interpretation by iterative bottom-up top-down processing

Title: Image interpretation by iterative bottom-up top-down processing
Publication Type: CBMM Memos
Year of Publication: 2021
Authors: Ullman, S., Assif, L., Strugatski, A., Vatashsky, B.-Z., Levi, H., Netanyahu, A., Yaari, A.U.
Number: 120
Date Published: 11/2021
Abstract

Scene understanding requires the extraction and representation of scene components, such as objects and their parts, people, and places, together with their individual properties, as well as the relations and interactions between them. We describe a model in which meaningful scene structures are extracted from the image by an iterative process that combines bottom-up (BU) and top-down (TD) networks, interacting through symmetric bi-directional communication (a ‘counter-streams’ structure). The BU-TD model extracts and recognizes scene constituents with their selected properties and relations, and uses them to describe and understand the image.

The scene representation is constructed by the iterative use of three components. The first component is a bottom-up stream that extracts selected scene elements, properties and relations. The second component (‘cognitive augmentation’) augments the extracted visual representation with relevant stored non-visual representations. It also provides input to the third component, the top-down stream, in the form of a TD instruction that specifies the task the model should perform next. The top-down stream then guides the BU visual stream to perform the selected task in the next cycle. During this process, the visual representations extracted from the image can be combined with relevant non-visual representations, so that the final scene representation is based on both visual information extracted from the scene and relevant stored knowledge of the world.
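
The memo describes this cycle at the level of interacting networks. As a concrete illustration only, the following is a minimal, runnable Python sketch of the three-component loop, with toy dictionaries standing in for the BU network, the stored knowledge, and the TD guidance; all names and data here are hypothetical and are not the authors' implementation:

    # Hypothetical sketch of the iterative BU-TD cycle: a TD instruction
    # selects a task, the BU stream extracts the corresponding scene element,
    # and the cognitive component augments the result with stored knowledge.
    # Real BU/TD streams are neural networks; dicts are stand-ins here.

    def bottom_up_extract(image, instruction):
        """BU stream: extract the scene element selected by the TD instruction."""
        return {instruction: image.get(instruction)}

    def cognitive_augment(representation, knowledge):
        """Augment visual results with relevant stored (non-visual) knowledge."""
        augmented = dict(representation)
        for key in representation:
            if key in knowledge:
                augmented[key + "_knowledge"] = knowledge[key]
        return augmented

    def next_td_instruction(goal, scene):
        """Choose the next TD instruction: the first goal item not yet extracted."""
        for item in goal:
            if item not in scene:
                return item
        return None  # goal satisfied

    def interpret(image, goal, knowledge, max_cycles=10):
        scene = {}
        for _ in range(max_cycles):
            instruction = next_td_instruction(goal, scene)  # cognitive component
            if instruction is None:
                break
            extracted = bottom_up_extract(image, instruction)  # BU guided by TD
            scene.update(cognitive_augment(extracted, knowledge))
        return scene

    # Toy usage: a "scene" with two objects and stored knowledge about one.
    image = {"person": "standing", "dog": "sitting"}
    knowledge = {"dog": "dogs are pets"}
    print(interpret(image, goal=["person", "dog"], knowledge=knowledge))

The point of the sketch is the control structure: extraction is not a single feed-forward pass but a loop in which each cycle's TD instruction depends on what the previous cycles have already placed in the scene representation.
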
We show how the BU-TD model composes complex visual tasks from sequences of steps, each invoked by an individual TD instruction. In particular, we describe how a sequence of TD instructions is used to extract structures of interest from the scene, including an algorithm that automatically selects the next TD instruction in the sequence. The selection of the next TD instruction depends in general on the goal, the image, and the information already extracted from the image in previous steps. The TD-instruction sequence is therefore not a fixed sequence determined at the start, but an evolving program (or ‘visual routine’) that depends on the goal and the image.
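
To make the ‘evolving program’ idea concrete, here is a hypothetical sketch of a branching routine for a goal such as "what is the person holding?"; each instruction is chosen from what earlier cycles found, so the sequence unfolds differently per image. The instruction names and toy results are invented for illustration and do not reflect the memo's instruction set:

    # Hypothetical evolving TD-instruction sequence ("visual routine").
    # The next instruction depends on the goal AND on previously extracted
    # information, so the routine branches rather than following a fixed list.

    def select_next_instruction(goal, scene):
        """Branching selection rule for 'what is the person holding?'."""
        if "person" not in scene:
            return "detect_person"             # step 1: find the agent
        if "hands" not in scene:
            return "locate_part:hands"         # step 2: uses step 1's result
        if "held_object" not in scene:
            return "detect_object_near:hands"  # step 3: uses step 2's result
        return None                            # routine complete

    def run_routine(execute, goal):
        """Iterate: select an instruction, execute it, fold the result back in."""
        scene = {}
        while (instruction := select_next_instruction(goal, scene)) is not None:
            scene.update(execute(instruction))  # one BU-TD cycle per instruction
        return scene

    # Toy executor standing in for a full BU-TD cycle on one fixed image.
    results = {
        "detect_person": {"person": (40, 10)},
        "locate_part:hands": {"hands": (45, 30)},
        "detect_object_near:hands": {"held_object": "cup"},
    }
    print(run_routine(lambda ins: results[ins],
                      goal="what is the person holding?"))

On a different image, say one with no person, the same selection rule would terminate after the first step, which is what distinguishes an evolving routine from a fixed sequence of instructions.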

The extraction process is shown to have favourable properties in terms of combinatorial generalization, generalizing well to novel scene structures and to new combinations of objects, properties and relations not seen during training. Finally, we compare the model with relevant aspects of human vision, and suggest directions for using the BU-TD scheme to integrate visual and cognitive components in the process of scene understanding.


DSpace@MIT: https://hdl.handle.net/1721.1/139678

Download: CBMM-Memo-120.pdf
CBMM Memo No: 120

CBMM Relationship: 

  • CBMM Funded