Deep Learning AI Explained: Neural Networks [California News Times]

November 4, 2021

The ballyhooed artificial intelligence technique known as “deep learning” revives ideas that are 70 years old.

Over the last decade, the best performing artificial intelligence systems, such as smartphone voice recognition and Google’s latest automatic translation capabilities, have come from a technique called “deep learning.”

Deep learning is, in fact, a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCullough and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what is sometimes called the first cognitive science department.

Neural networks were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Credit: Jose-Luis Olivares / MIT

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has come roaring back in the second, fueled largely by the increased processing power of graphics chips.

“There is this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu virus, and each one seems to come back with a period of around 25 years. People get infected and develop an immune response, so they are not infected for the next 25 years. And then there is a new generation ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized; they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural networks are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural network consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural networks are organized into layers of nodes, and they are “feedforward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node assigns a number known as a “weight.” When the network is active, the node receives a different data item (a different number) over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold, the node “fires,” which in today’s neural networks generally means sending the number (the sum of the weighted inputs) along all its outgoing connections.
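
To make the arithmetic concrete, here is a minimal Python sketch of the computation a single node performs, as described above; the input values, weights, and threshold are hypothetical numbers chosen purely for illustration.

    def node_output(inputs, weights, threshold):
        # Multiply each incoming data item by the weight on its connection,
        # then sum the products into a single number.
        total = sum(x * w for x, w in zip(inputs, weights))
        # Below the threshold the node passes nothing on; above it, the node
        # "fires" and sends the sum along its outgoing connections.
        return total if total > threshold else 0.0

    # One node with three incoming connections (all values are made up):
    print(node_output([0.5, 0.9, 0.1], [0.8, -0.2, 0.4], 0.1))  # about 0.26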

When a neural network is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer (the input layer), and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
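
As a rough sketch of that forward pass, the Python fragment below initializes a tiny network’s weights and thresholds at random and sends one example from the input layer through a hidden layer to the output. The layer sizes and input values are hypothetical, and the adjustment step itself is omitted (modern networks typically update the weights with backpropagation).

    import random

    random.seed(42)  # reproducible "random" starting values

    def make_layer(n_inputs, n_nodes):
        # Each node gets one random weight per incoming connection,
        # plus a random threshold.
        return [([random.uniform(-1, 1) for _ in range(n_inputs)],
                 random.uniform(-1, 1)) for _ in range(n_nodes)]

    def forward(layer, inputs):
        # Every node sums its weighted inputs and fires only over threshold.
        outputs = []
        for weights, threshold in layer:
            total = sum(x * w for x, w in zip(inputs, weights))
            outputs.append(total if total > threshold else 0.0)
        return outputs

    hidden = make_layer(3, 4)   # hidden layer: 4 nodes, 3 inputs each
    output = make_layer(4, 1)   # output layer: 1 node, 4 inputs

    x = [0.2, 0.7, 0.5]         # one training example entering the network
    print(forward(output, forward(hidden, x)))  # the transformed output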

Minds and machines

The neural networks described by McCullough and Pitts in 1944 had thresholds and weights, but they were not arranged into layers, and the researchers did not specify any training mechanism. What McCullough and Pitts showed was that a neural network could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: the point was to suggest that the human brain could be thought of as a computing device.

Neural networks continue to be a valuable tool for neuroscience research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural network, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.
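
Rosenblatt’s machine was built in hardware, but its learning procedure is simple enough to sketch in software. In the toy example below, a single adjustable layer learns logical AND; the data, learning rate, and iteration count are hypothetical, and the update shown is the classic error-driven Perceptron rule.

    # The node should fire only when both inputs are 1 (logical AND).
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

    weights, threshold, lr = [0.0, 0.0], 0.0, 0.1  # lr = learning rate

    for _ in range(10):  # several passes over the training examples
        for inputs, label in data:
            summed = sum(x * w for x, w in zip(inputs, weights))
            fired = 1 if summed > threshold else 0
            error = label - fired
            # On a mistake, nudge each weight (and the threshold) in the
            # direction that would have produced the correct output.
            weights = [w + lr * error * x for x, w in zip(inputs, weights)]
            threshold -= lr * error

    print(weights, threshold)  # settles on values that implement AND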

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book entitled “Perceptrons,” which demonstrated that executing certain fairly common computations on the Perceptron would be impractically time-consuming.

“Of course, using a slightly more complicated machine, such as one with two layers, would remove all of these limitations,” Poggio says. But at the time, the book had a chilling effect on neural network research.

“We need to put these things in a historical context,” Poggio says. “They were arguing for programming, for languages like Lisp. Not many years before, people were still using analog computers. It was not at all obvious at the time that programming was the way to go. I think they went a little overboard, but as always, it’s not black and white. If you think of this as a competition between analog and digital computing, they fought for what, at the time, was the right thing.”...

Read the full story on the California News Times website using the link below.
