AI Is Nothing Like a Brain, and That’s OK [Quanta Magazine]

April 30, 2025

The brain’s astounding cellular diversity and networked complexity could show how to make AI better.

By Yasemin Saplakoglu, Staff Writer

In 1943, a pair of neuroscientists were trying to describe how the human nervous system works when they accidentally laid the foundation for artificial intelligence. In their mathematical framework for how systems of cells can encode and process information, Warren McCulloch and Walter Pitts argued that each brain cell, or neuron, could be thought of as a logic device: It either turns on or it doesn’t. A network of such “all-or-none” neurons, they wrote, can perform simple calculations through true or false statements.
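The all-or-none idea can be sketched in a few lines of code. What follows is an illustrative reconstruction, not McCulloch and Pitts’ original notation: a unit that fires (outputs 1) only when the weighted sum of its binary inputs reaches a threshold, which is enough to compute simple logical statements.

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: fire (1) if the weighted sum
    of binary inputs meets the threshold, otherwise stay off (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With equal weights, the threshold alone picks the logic function:
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)  # true only if both fire
OR = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)   # true if either fires
```

Chaining such units into networks is what lets the model express more elaborate true-or-false computations.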

“They were actually, in a sense, describing the very first artificial neural network,” said Tomaso Poggio of the Massachusetts Institute of Technology, who is one of the founders of computational neuroscience.

McCulloch and Pitts’ framework laid the groundwork for many of the neural networks that underlie the most powerful AI systems. These algorithms, built to recognize patterns in data, have become so competent at complex tasks that their products can seem eerily human. ChatGPT’s text is so conversational and personal that some people are falling in love. Image generators can create pictures so realistic that it can be hard to tell when they’re fake. And deep learning algorithms are solving scientific problems that have stumped humans for decades. These systems’ abilities are part of the reason the AI vocabulary is so rich in language from human thought, such as intelligence, learning and hallucination.

But there is a problem: The initial McCulloch and Pitts framework is “complete rubbish,” said the science historian Matthew Cobb of the University of Manchester, who wrote the book The Idea of the Brain: The Past and Future of Neuroscience. “Nervous systems aren’t wired up like that at all.”


When you poke at even the most general comparison between biological and artificial intelligence — that both learn by processing information across layers of networked nodes — their similarities quickly crumble.

Artificial neural networks are “huge simplifications,” said Leo Kozachkov, a postdoctoral fellow at IBM Research who will soon lead a computational neuroscience lab at Brown University. “When you look at a picture of a real biological neuron, it’s this wicked complicated thing.” These wicked complicated things come in many flavors and form thousands of connections to one another, creating dense, thorny networks whose behaviors are controlled by a menagerie of molecules released on precise timescales.

The vast cellular complex that is our nervous system generates our feelings, thoughts, consciousness and intelligence — everything that makes us who we are. Many processes seem to unfold instantaneously and simultaneously, orchestrated by an organ that evolution molded over hundreds of millions of years from pieces it found in the ancient oceans, culminating in an information storage and processing system that can ask existential questions about itself.

“[The brain] is the most complex piece of active matter in the known universe,” said Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle. “Brains have always been compared to the most advanced piece of machinery.”

But no piece of machinery — from telephone switchboard or radio tube to supercomputer or neural network — ever measured up.

The brain’s neuronal diversity and networked complexity are lost in artificial neural networks. But computational neuroscientists — experts on both brains and computers — say that’s OK. Although the two systems have diverged along separate evolutionary paths, computer scientists and neuroscientists still have much to learn by comparing them. Infusing biological strategies could improve the efficiency and effectiveness of artificial neural networks. The latter could, in turn, be a model to understand the human brain.

With AI, “we are in the process not of re-creating human biology,” said Thomas Naselaris, a neuroscientist at the University of Minnesota, but “of discovering new routes to intelligence.” And in doing so, the hope is that we’ll understand more of our own.

An Electronic ‘Brain’

In 1958, two years after the term “artificial intelligence” was coined at a math and computer science workshop at Dartmouth College, the U.S. Navy unveiled what The New York Times called an “electronic ‘brain’” that teaches itself.

This computer, known as the “perceptron,” wasn’t particularly advanced. It could read a simple code — a series of holes punched into a card — and predict whether the next hole would appear on the right or the left, represented by a binary output: 0 or 1. The perceptron made these calculations through a series of nodes, also called neurons. But they were neurons only in spirit...
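The perceptron’s “teaching itself” amounted to a simple weight-update rule. Here is a minimal sketch of that idea — a hypothetical reconstruction with toy data standing in for the punch-card patterns, not the Navy machine’s actual code: a single node makes a 0-or-1 prediction, and its weights are nudged whenever the prediction is wrong.

```python
def predict(weights, bias, x):
    """The perceptron's 0/1 output: fire if the weighted sum crosses zero."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train(samples, n_inputs, epochs=10, lr=1.0):
    """Classic perceptron learning rule: adjust weights only on mistakes."""
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)  # -1, 0 or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy stand-in for punched-card patterns: predict 1 exactly when the
# second "hole" is punched (a linearly separable rule it can learn).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data, n_inputs=2)
```

A single such node can only learn patterns that are linearly separable — one reason the original perceptron, for all the fanfare, wasn’t particularly advanced.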

Read the full article on the Quanta Magazine website.