@article{4087, title = {Using neuroscience to develop artificial intelligence}, journal = {Science}, volume = {363}, year = {2019}, month = feb, pages = {692--693}, abstract = {

When the mathematician Alan Turing posed the question {\textquotedblleft}Can machines think?{\textquotedblright} in the first line of his seminal 1950 paper that ushered in the quest for artificial intelligence (AI) (1), the only known systems carrying out complex computations were biological nervous systems. It is not surprising, therefore, that scientists in the nascent field of AI turned to brain circuits as a source of guidance. One path, pursued since the early attempts to perform intelligent computation with brain-like circuits (2), and one that has recently led to remarkable successes, can be described as a highly reductionist approach to modeling cortical circuitry. In its basic current form, known as a {\textquotedblleft}deep network{\textquotedblright} (or deep net) architecture, this brain-inspired model is built from successive layers of neuron-like elements, connected by adjustable weights, called {\textquotedblleft}synapses{\textquotedblright} after their biological counterparts (3). The application of deep nets and related methods to AI systems has been transformative. They have proved superior to previously known methods in central areas of AI research, including computer vision, speech recognition and production, and the playing of complex games. Practical applications are already in broad use, in areas such as computer vision and speech and text translation, and large-scale efforts are under way in many other areas. Here, I discuss how additional aspects of brain circuitry could supply cues for guiding network models toward broader aspects of cognition and general AI.

}, issn = {0036-8075}, doi = {10.1126/science.aau6595}, url = {http://www.sciencemag.org/lookup/doi/10.1126/science.aau6595}, author = {Shimon Ullman} }