Computer learns how to imagine the future

SANTA FE, N.M. — In many ways, the human brain is still the best computer around. For one, it’s highly efficient. Our largest supercomputers require millions of watts, enough to power a small town, but the human brain uses about as much power as a 20-watt light bulb. While teenagers may seem to take forever to learn what their parents regard as basic life skills, humans and other animals are also capable of learning very quickly. Most of all, the brain is truly great at sorting through torrents of data to find the relevant information to act on.

At an early age, humans can reliably perform feats such as distinguishing an ostrich from a school bus – an achievement that seems simple but illustrates the kind of task that even our most powerful computer vision systems can get wrong. We can also tell a moving car from the static background and predict where the car will be in the next half-second. Challenges like these, and far more complex ones, expose the limitations in our ability to make computers think like people do. But recent research at Los Alamos National Laboratory is changing all that.

Neuroscientists and computer scientists call this field neuromimetic computing – building computers inspired by how the cerebral cortex works. The cerebral cortex relies on billions of small biological “processors” called neurons. They store and process information in densely interconnected circuits called neural networks. At Los Alamos, researchers are simulating biological neural networks on supercomputers, enabling machines to learn about their surroundings, interpret data and make predictions much the way humans do.

This kind of machine learning is easy to grasp in principle, but hard to implement in a computer. Teaching neuromimetic machines to take on huge tasks like predicting weather and simulating nuclear physics is an enterprise requiring the latest in high-performance computing resources. With the laboratory’s world-class supercomputing center, Los Alamos researchers have become leaders in high-performance neural simulation and neuromimetic applications.

The lab has developed codes that run efficiently on supercomputers with millions of processing cores to crunch vast amounts of data and perform a mind-boggling number of calculations (over 10 quadrillion!) every second. Until recently, however, researchers attempting to simulate neural processing at anything close to the scale and complexity of the brain’s cortical circuits have been stymied by limitations on computer memory and computational power.

All that has changed with the new Trinity supercomputer at Los Alamos, which became fully operational in mid-2017. The fastest computer in the United States, Trinity has unique capabilities designed for the National Nuclear Security Administration’s stockpile stewardship mission, which includes highly complex nuclear simulations in the absence of testing nuclear weapons. All this capability means Trinity allows a fundamentally different approach to large-scale cortical simulations, enabling an unprecedented leap in the ability to model neural processing.

To test that capability on a limited-scale problem, computer scientists and neuroscientists at Los Alamos created a “sparse prediction machine” that executes a neural network on Trinity. A sparse prediction machine is designed to work like the brain: researchers expose it to data – in this case, thousands of video clips, each depicting a particular object, such as a horse running across a field or a car driving down a road.

Cognitive psychologists tell us that by the age of six to nine months, human infants can distinguish objects from background. Apparently, human infants learn about the visual world by training their neural networks on what they see while being toted around by their parents, well before the child can walk or talk.

Similarly, the neurons in a sparse prediction machine learn about the visual world simply by watching thousands of video sequences without using any of the associated human-provided labels – a major difference from other machine-learning approaches. A sparse prediction machine is simply exposed to a wide variety of video clips much the way a child accumulates visual experience.

When the sparse prediction machine on Trinity was exposed to thousands of eight-frame video sequences, each neuron eventually learned to represent a particular visual pattern. Whereas a human infant can have only a single visual experience at any given moment, the scale of Trinity meant the machine could train on 400 video clips simultaneously, greatly accelerating the learning process. The sparse prediction machine then uses the representations learned by the individual neurons to predict the eighth frame of a sequence from the preceding seven – for example, predicting how a car moves against a static background.
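For readers who want a concrete picture, the core idea – many candidate features, but only a few active at once – can be sketched in a few lines of Python. This is a generic sparse-coding toy using ISTA (iterative soft-thresholding), not the Lab’s actual model; every dimension and parameter here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy dimensions: each "frame" is a flattened 8x8 patch;
# the dictionary holds 128 candidate features (the "neurons").
n_pixels, n_neurons = 64, 128
D = rng.normal(size=(n_pixels, n_neurons))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary columns

def sparse_code(x, D, lam=0.1, steps=200):
    """Infer sparse activations a with D @ a ~ x via ISTA:
    a gradient step on reconstruction error, then a soft-threshold
    that silences weakly active neurons."""
    L = np.linalg.norm(D, 2) ** 2           # step-size bound (Lipschitz constant)
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a -= D.T @ (D @ a - x) / L          # reduce reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # enforce sparsity
    return a

x = rng.normal(size=n_pixels)               # stand-in for one frame patch
a = sparse_code(x, D)
print("active neurons:", np.count_nonzero(a), "of", n_neurons)
```

Run on a patch, typically only a fraction of the 128 dictionary elements end up with nonzero activations – the “sparse” behavior the Los Alamos team exploits at vastly larger scale.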

The Los Alamos sparse prediction machine consists of two neural networks executed in parallel: one, called the Oracle, can see the future; the other, called the Muggle, learns to imitate the Oracle’s representations of future video frames it can’t see directly. With Trinity’s power, the Los Alamos team more accurately simulates the way a brain handles information, activating only the few neurons needed at any given moment to explain the information at hand. That’s the “sparse” part, and it makes the brain very efficient and very powerful at making inferences about the world – and, hopefully, a computer more efficient and powerful, too.
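The Oracle-and-Muggle arrangement can be sketched schematically as well – here as an invented linear toy in Python rather than the actual Los Alamos networks (the sizes, data and learning rule are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy data: 200 sequences of 8 frames, each frame a 16-dim
# vector moving linearly, so frame 8 is predictable from frames 1..7.
n_seq, frame_dim, code_dim = 200, 16, 8
start = rng.normal(size=(n_seq, frame_dim))
step = 0.1 * rng.normal(size=(n_seq, frame_dim))
frames = np.stack([start + t * step for t in range(8)], axis=1)

# "Oracle": a fixed encoder that gets to see the true eighth frame.
W_oracle = 0.1 * rng.normal(size=(frame_dim, code_dim))
oracle_code = frames[:, 7] @ W_oracle       # representation of the future

# "Muggle": sees only frames 1..7 and learns to imitate the Oracle's code.
past = frames[:, :7].reshape(n_seq, -1)
W_muggle = np.zeros((7 * frame_dim, code_dim))
lr = 0.05
for _ in range(500):
    err = past @ W_muggle - oracle_code
    W_muggle -= lr * past.T @ err / n_seq   # gradient step on mean-squared error

mse = np.mean((past @ W_muggle - oracle_code) ** 2)
print(f"imitation error: {mse:.6f}")
```

Because the toy frames move linearly, the Muggle can learn to reproduce the Oracle’s representation of the unseen eighth frame almost exactly – the same imitation game, in miniature, that the real system plays with video.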

After being trained in this way, the sparse prediction machine was able to create a new video frame that would naturally follow from the previous, real-world video frames. It saw seven video frames and predicted the eighth. In one example, it was able to continue the motion of a car against a static background. The computer could imagine the future.

This ability to predict video frames based on machine learning is a meaningful achievement in neuromimetic computing, but the field still has a long way to go. As one of the principal scientific grand challenges of this century, understanding the computational capability of the human brain will transform such wide-ranging research and practical applications as weather forecasting and fusion energy research, cancer diagnosis and the advanced numerical simulations that support the stockpile stewardship program in lieu of real-world testing.

To support all those efforts, Los Alamos will continue experimenting with sparse prediction machines in neuromimetic computing, learning more about both the brain and computing, along with as-yet undiscovered applications on the wide, largely unexplored frontiers of quantum computing. We can’t predict where that exploration will lead, but like that made-up eighth video frame of the car, it’s bound to be the logical next step.

Garrett Kenyon is a computer scientist specializing in neurally inspired computing in the Information Sciences group at Los Alamos National Laboratory, where he studies the brain and models of neural networks on the Lab’s high-performance computers. Other members of the sparse prediction machine project were Boram Yoon of the Applied Computer Science group and Peter Schultz of the New Mexico Consortium.

 

