Deep-learning neural networks have come a long way in the past several years; we now have systems capable of beating top human players at complex games such as shogi, Go and chess. But is the progress of such systems limited by their basic architecture? Shimon Ullman, of the Weizmann Institute of Science, addresses this question in a Perspective piece in the journal Science and suggests some ways computer scientists might reach beyond today's narrow AI systems to create artificial general intelligence (AGI) systems.
Deep learning networks are able to learn because they are built from layers of artificial neurons and the connections between them. As they encounter new data, the strengths of those connections are adjusted, loosely analogous to the way the human brain operates. But such systems require extensive training (and a feedback signal) before they are able to do anything useful, which stands in stark contrast to the way that humans learn. We do not need to watch thousands of people in action to learn to follow someone's gaze, for example, or to figure out that a smile is something positive.
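As a loose illustration of the training-and-feedback loop described above, here is a minimal sketch (not from the article, and vastly simpler than any real deep network): a single artificial neuron that learns the logical AND function only by repeatedly cycling through labeled examples and nudging its connection weights in response to its errors.

```python
# Minimal sketch: one artificial "neuron" learning AND from labeled
# examples via repeated feedback (error-driven weight updates).
import math

def train(examples, epochs=2000, lr=0.5):
    """Adjust two weights and a bias over many passes through the data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            z = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 / (1 + math.exp(-z))   # sigmoid activation
            err = pred - target             # feedback: how wrong were we?
            w[0] -= lr * err * x[0]         # nudge each connection strength
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Fire (True) if the neuron's activation exceeds 0.5."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND table
w, b = train(examples)
print([predict(w, b, x) for x, _ in examples])  # prints [False, False, False, True]
```

The point of the sketch is the contrast the article draws: even this trivial neuron needs thousands of weight updates to learn a rule a person grasps from a single explanation.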
Ullman suggests this is because humans are born with what he describes as preexisting network structures encoded into our neural circuitry. Such structures, he explains, provide growing infants with an understanding of the physical world in which they exist, a base upon which they can build the more complex structures that lead to general intelligence. If computers had similar innate structures, they, too, might develop physical and social skills without the need for thousands of examples.
Read the full article on Tech Xplore's website.