Bernard “Bernie” Widrow, a Stanford electrical engineering professor and foundational figure in adaptive signal processing and neural networks, died on September 30, 2025, at age 95. He did incredible things with neural networks in the mid-20th century. He was more deserving of a Nobel Prize for pioneering artificial intelligence than either of the 2024 recipients.
Working with his first doctoral student, Marcian “Ted” Hoff, Widrow developed the least mean squares (LMS) algorithm, also known as the Widrow-Hoff algorithm, which allowed systems to adapt by correcting their errors incrementally. This breakthrough became essential to high-speed communications, mobile phones, modems, and early internet technologies. His achievements earned major honors, including the IEEE Alexander Graham Bell Medal, the IEEE Neural Networks Pioneer Award, and election to the National Academy of Engineering.
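To see how simple the rule is, here is a minimal LMS sketch in Python. The function name, toy signals, and parameters are my own illustration rather than Widrow and Hoff's notation, but the one-line weight update at its heart is the algorithm itself: nudge each weight in proportion to the error times the input.

```python
import numpy as np

def lms_filter(x, d, num_taps=4, mu=0.01):
    """Least mean squares (Widrow-Hoff) adaptive filter sketch."""
    w = np.zeros(num_taps)                   # adaptive weights, start at zero
    e = np.zeros(len(x))                     # error at each step
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]  # current and recent inputs, newest first
        y = w @ u                            # filter's guess at the desired signal
        e[n] = d[n] - y                      # how wrong the guess was
        w += mu * e[n] * u                   # nudge weights to shrink the error
    return w, e

# Toy use: recover a clean sine wave from a noisy version of it.
t = np.arange(1000)
d = np.sin(0.05 * t)                                         # desired signal
x = d + 0.3 * np.random.default_rng(0).normal(size=t.size)   # noisy version
w, e = lms_filter(x, d)
print("final weights:", w)
```

That single incremental correction, applied over and over, is what lets the filter adapt to whatever signal it is fed.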
When I Met Bernie Widrow
It was the late ’80s. A crowd of admirers gathered at the Boeing auditorium in Seattle, all of us focused on a meek middle-aged man who was living neural-network royalty. Dr. Bernard Widrow was being recognized two-and-a-half decades after his remarkable work in pioneering artificial neural networks.
Widrow spoke about the theory of the ADALINE neural network he had developed back in the ’50s and ’60s. The feminine-sounding name was a contraction of “Adaptive Linear Neuron.” When more units were added, the network was later generalized to MADALINE, for “Many ADALINEs.”
Computer learning machines of the 1960s were peppered with then-new seductive semantic labels like neural networks and perceptrons. The Stanford MADALINE invented by Widrow was said to be a machine that “in some respects thinks like a man” (though Widrow said repeatedly that he didn’t like to use the term “thinks” because “we don’t really understand what thinking is about”).
These machines were impressive even by today’s standards. Widrow created a neural network that, like today’s AI, learned from repeated observations. It beat the local weatherman at forecasting weather and translated languages from spoken form to print. Around the same time, Claude Shannon at Bell Labs used relay switching to play chess and taught robotic mice how to master mazes; and Cornell’s Frank Rosenblatt was dazzling the world with his perceptron neural network. All this happened over sixty years ago.
Science in Action
In Seattle’s Boeing auditorium, Widrow played for us a clip from an old black-and-white TV program called Science in Action, a weekly show that ran from 1950 to 1966 and featured a variety of guest scientists. In this particular 1958 episode, a young Bernie Widrow appeared on screen before us, sporting a fresh buzz haircut fashionable during that era. The crowd chuckled.
This TV program provides an interesting glimpse into the public’s attitude toward computers and explains the leap Widrow made. The host, zoologist Earl Stannard Herald, gestures toward a wall-sized computer and says:
“By now we’re all fairly familiar with computers such as this one, and we can now remember with some amusement the fears that many of us expressed that machines such as these might someday take over the world. Today we recognize them for what they are… huge calculators, arithmetic machines, no more capable of acting for themselves than a desktop adding machine. However, a totally new class of computing machine has come into being. It’s called an adaptive computer, and it’s important because it can learn from its own experience.”
Widrow called his learning machine a neural network because it was loosely based on the 1943 McCulloch-Pitts model of the biological neuron. According to Hebbian learning, published in 1949, the strength of a connection between neurons is determined by how often the connected neurons fire simultaneously. Today we simulate such connection weights in a digital computer program. But during the early 1960s, when computers were still wet behind the ears, Widrow built his interconnection weights from thin pencil leads suspended in an electroplating solution.
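As a rough picture of what “simulating such weights” means, here is a toy Hebbian update in Python. The activity matrix and learning rate are invented for illustration; the point is only that a weight grows whenever the two units it connects are active on the same step.

```python
import numpy as np

# "Neurons that fire together wire together": each time two units are
# active on the same step, the weight between them is strengthened.
activity = np.array([[1, 1, 0],     # units 0 and 1 fire together...
                     [1, 1, 0],
                     [0, 0, 1],     # ...while unit 2 mostly fires alone
                     [1, 1, 1]])

eta = 0.1                           # learning rate
w = np.zeros((3, 3))                # connection strengths
for a in activity:
    w += eta * np.outer(a, a)       # Hebbian update: strengthen co-active pairs
np.fill_diagonal(w, 0)              # ignore self-connections
print(w)                            # w[0, 1] ends up the largest weight
```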
Memistor
Electroplating was a more familiar concept in the ’50s and ’60s than it is now. Odd though it may seem to today’s families, fond parents used to bronze their children’s baby shoes using electroplating. The shoes – the more wrinkled the better – were dipped in a copper-based solution and dried. Then the shoes, able now to conduct electricity, were submerged in a plating solution and voltage was applied. Atom by atom, the shoes were covered in a thin coat of hard metal.
In high school I did a science project in silver plating where silver nitrate in solution was supposed to deposit a thin silver coating on submerged metal objects when a voltage was applied. I should have used gloves. I learned later that silver nitrate can be used to remove warts, treat nosebleeds, and cure gonorrhea; but even in contact with skin for long periods of time, silver nitrate is toxic, corrosive, and can cause burning. My electroplating worked but was far from meeting my expectations, and my fingers turned black from prolonged handling of the silver nitrate. My blackened fingers and I won no blue ribbons at the high school science fair.
The silver nitrate used in silver plating is toxic, but it is nothing compared to the copper-cyanide solution used in copper plating. Copper cyanide can cause headaches, dizziness, pounding of the heart, and vomiting. Adolf Hitler famously committed suicide by biting into a cyanide capsule.
Copper plating is used for coins. Pennies these days are not copper all the way through. Why? Because copper is expensive. Many years ago, my uncle Ed Hersman, a NASA engineer and stock market watcher, saw that the value of the copper in a penny would soon exceed one cent. So he purchased ten thousand dollars’ worth of copper pennies that he stored in his basement in Doylestown, Ohio, in thirty-three-gallon drums. That’s a million pennies! And Uncle Ed was right. One cent today will not buy enough copper to make a penny out of copper. So the United States Treasury began to make pennies out of less expensive zinc, plating them with a thin veneer of copper. The United States Mint has more recently suspended the production of pennies following an order from President Trump.
The electroplating process can be reversed by changing the voltage polarity. The copper surface of a penny can thereby be removed. The copper simply goes back into the liquid solution.
Bernie Widrow used electroplating to make his neuron interconnects. The Hebbian model of learning says a synaptic connection between two neurons grows stronger the more often the connected neurons simultaneously fire. The catchphrase summarizing Hebb’s law for biological neurons is: “Neurons that fire together wire together.” Widrow used metal-plated pencil leads to simulate this effect. The more plating on a pencil lead, the thicker it became and the better it conducted. Widrow called the ever-changing pencil lead a “memistor,” short for “memory resistor.” Why? Because the lead’s conductance depended on its plating history. In that sense, the pencil lead had memory.

Widrow founded a company, Memistor, to promote and sell his ADALINE and MADALINE adaptive learning machines. The company ran from 1960 to 1980. I believe it was the first neural network company. Widrow’s memistor was analog and suffered from all the shortcomings of analog computing, including poor accuracy. Updating neural interconnects digitally in software is far more precise, and quicker.
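To make the memistor concrete, here is a toy software model in Python. It is entirely my own sketch with invented constants, not Widrow’s design: conductance is treated as proportional to accumulated plating charge, so the device “remembers” its history, and reversing the current strips metal back off.

```python
class Memistor:
    """Toy model of Widrow's memistor: a resistor whose conductance
    depends on its electroplating history. Constants are illustrative."""

    def __init__(self):
        self.g = 0.1                    # conductance of the bare pencil lead

    def plate(self, current, seconds):
        """Plating current thickens the metal coat; a negative current
        reverses the polarity and strips metal back off."""
        self.g = max(self.g + 0.05 * current * seconds, 0.01)

    def output(self, voltage):
        """Signal current through the lead (Ohm's law: I = G * V)."""
        return self.g * voltage

m = Memistor()
print(m.output(1.0))        # weak connection
m.plate(1.0, 4.0)           # strengthen by plating
print(m.output(1.0))        # stronger connection
m.plate(-1.0, 4.0)          # reverse polarity to weaken
print(m.output(1.0))        # weakened again
```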
Neural Network Applications
In the old black-and-white Science in Action episode, a buzz-cut Bernie walks his audience through a number of applications of his neural network that remain impressive today.
For one thing, the speech recognition we take for granted today is not new. Widrow’s 1960 neural network was able to process a phrase spoken into a microphone and immediately type out the English. The machine also translated other languages to English. A French phrase spoken into a microphone was translated and printed in English. Japanese too. Today’s voice recognition and language translation technologies are more sophisticated and powerful, but Widrow was first.
Widrow’s neural network was also trained to play the casino game of twenty-one (also called blackjack). The network played against the dealer. The dealer followed fixed rules, and ADALINE’s job was to adapt around that fixed set of rules. At each stage, the network decided either to take another hit or to stand.
ADALINE was trained by observing many blackjack games and seeing which betting strategies failed and which were successful. From this, ADALINE learned how to play blackjack and was able to nearly achieve the known optimal performance level. (Card counting, by the way, was a no-no.)
Then there’s physical balance. Balancing an upright broom handle on your index finger is called the “inverted pendulum” problem in control theory. The control of the Segway parallels that of the inverted pendulum. You are the broomstick being balanced by the Segway. Forty years before the 2001 introduction of the Segway, Widrow trained ADALINE to balance a broom placed on a movable cart. Give the broom a slight shove at the top to push it off equilibrium, and the balancing cart, mimicking the way a human regains the balance of a broom on a fingertip, moves back and forth in ever-decreasing steps until the broom is again stable and upright.
And then there’s weather forecasting. Using data provided by the San Francisco airport, Widrow trained his neural network to forecast weather. Training looked at pressure patterns for one day (say, Monday) and forecast whether it would rain on Tuesday. ADALINE beat the official forecast accuracy of the human weatherman: it was right 83 percent of the time versus 67 percent for the weatherman.
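It is easy to re-create the flavor of that experiment today. The sketch below is mine, with synthetic stand-in data rather than the San Francisco airport measurements: an ADALINE, a single linear unit trained with the Widrow-Hoff rule, learns to predict next-day rain from four pressure readings. Note the ADALINE signature: training uses the raw linear output, and the threshold is applied only when making a prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for pressure-pattern data: 4 readings per day,
# with rain (+1) tied to low overall pressure. Purely illustrative.
X = rng.normal(size=(200, 4))
y = np.where(X.sum(axis=1) < 0, 1.0, -1.0)

w, b, mu = np.zeros(4), 0.0, 0.01
for _ in range(20):                          # sweep the training data
    for xi, target in zip(X, y):
        err = target - (w @ xi + b)          # error on the LINEAR output
        w += mu * err * xi                   # Widrow-Hoff update
        b += mu * err

pred = np.where(X @ w + b > 0, 1.0, -1.0)    # threshold only when predicting
print("training accuracy:", (pred == y).mean())
```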
Many of us remember the early days of connecting to the internet over ordinary phone lines. At the start of each connection, the line would run through an equalization sequence — that unforgettable screeching and warbling that sounded like a duck choking on a kazoo. The very same kind of equalization occurred in fax machines operating over telephone lines. This strange symphony was the Widrow–Hoff least-mean-squares (LMS) algorithm at work in real time, adapting the line to minimize transmission errors.
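For the curious, here is a bare-bones picture of what that handshake accomplishes, sketched in Python with a toy echo channel of my own invention (no modem standard is being followed). During training, the receiver knows what symbols were sent, so the same Widrow-Hoff update shown earlier can tune an equalizing filter until its output matches them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known training symbols (the "screech"), smeared by a toy echo channel.
sent = rng.choice([-1.0, 1.0], size=2000)
rcvd = sent + 0.5 * np.concatenate(([0.0], sent[:-1]))   # echo of last symbol
rcvd += 0.05 * rng.normal(size=rcvd.size)                # line noise

taps, mu = 8, 0.01
w = np.zeros(taps)
for n in range(taps - 1, len(rcvd)):
    u = rcvd[n - taps + 1:n + 1][::-1]   # latest taps samples, newest first
    err = sent[n] - w @ u                # receiver knows the training symbol
    w += mu * err * u                    # LMS: adapt the equalizer taps

eq = np.convolve(rcvd, w)[:len(sent)]    # run the trained equalizer
bits = np.where(eq > 0, 1.0, -1.0)
print("symbol accuracy after equalization:", (bits == sent).mean())
```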
Transcription of voice to text, broom balancing, weather forecasting, and winning card games were all demonstrated using Widrow’s rudimentary neural network AI over sixty years ago. Today’s AI machines do a better job thanks to speed and sophistication, but the fundamental algorithms for these AI tasks were already in use in the 1960s.
Final Thoughts
Bernie Widrow’s passing reminds us that today’s “miraculous” AI rests on foundations laid by imaginative pioneers who were working with slide rules, pencil leads, and room-sized computers rather than GPUs and cloud clusters. Long before machine learning became fashionable or commercially lucrative, Widrow demonstrated that machines could adapt, learn from experience, and solve real problems in communication, control, and prediction. His humility about what machines truly “think,” combined with his boldness in building what no one had built before, marks him as both a great engineer and a wise scientist.
While history has not always granted him the recognition he deserved, every modern neural network quietly echoes his ideas. In that sense, Bernie Widrow has not really left us at all—his intellectual DNA continues to run through nearly every AI system we build today.
A portion of this article is repeated from:
Robert J. Marks II, Non-Computable You: What You Do That Artificial Intelligence Never Will, Discovery Institute Press (July 2022)
