Scalable clusters make HP R&D easy as Raspberry Pi

Gary Grider

SANTA FE, N.M. — Computer scientists have just addressed a challenge for managing Trinity, one of the world’s most advanced, exclusive supercomputers, by using one of the world’s least expensive, most widely available computers, the Raspberry Pi.

It’s actually not a scientific challenge, but a practical problem.

These two computing systems are complete opposites in size, raw computing power, memory and even power usage. Housed at Los Alamos National Laboratory, Trinity is the third-fastest supercomputer in the world, by one measure. But the Department of Energy and the lab didn’t design it to top the speed list. They wanted it to solve specific – and huge – physics problems that would bring any other machine to its knees while sucking in megawatts of power from the electric grid.

Trinity came fully on line in 2017 as the latest in a string of world-class supercomputers supporting Los Alamos’s mission of ensuring the safety and reliability of the U.S. nuclear stockpile. Central to accomplishing that mission is running complex, computationally intensive computer simulations of nuclear physics and related science. To get that done, the lab has been at the sharp leading edge of high-performance computing for decades.

So Trinity is big. It occupies thousands of square feet in a climate-controlled building. It’s a petascale machine, which means it can perform more than a million billion (10^15) floating-point operations per second. It has two petabytes of memory. (If you had 1 petabyte of stored MP3 songs, you would need about 2,240 years to hear them all.) And it draws 12 megawatts of electricity.
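That 2,240-year figure checks out as rough arithmetic. A short back-of-the-envelope calculation (assuming 1 pebibyte is 2^50 bytes and MP3 audio at 128 kilobits per second, i.e. 16,000 bytes per second; both are assumptions, not figures from the article):

```python
# Sanity-check the "petabyte of MP3s" claim from the article.
# Assumptions (ours, not the author's): 1 PiB = 2**50 bytes,
# MP3 encoded at 128 kbps = 16,000 bytes per second of audio.
PIB = 2**50
BYTES_PER_SECOND = 128_000 // 8  # 16,000 bytes/s at 128 kbps

seconds_of_audio = PIB / BYTES_PER_SECOND
years_of_audio = seconds_of_audio / (60 * 60 * 24 * 365)

print(f"{years_of_audio:,.0f} years of continuous playback")
# → about 2,230 years, in line with the article's figure
```

A slightly different bitrate or a decimal petabyte (10^15 bytes) shifts the answer by a few hundred years either way, so the article's round number is a reasonable statement of the scale.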

Raspberry Pi, on the other hand, is a credit card-sized computer that runs on just a few watts and starts at $25. It was developed by the Raspberry Pi Foundation, a UK-based charity, to put the power of digital “making” into the hands of people all over the world to help them learn, solve problems and have fun.

The Raspberry Pi is a marvel of modern consumer technology. It has a single quad-core central processor; 1 gigabyte of RAM; USB ports; WiFi and Bluetooth Low Energy capabilities; stereo, composite video and camera ports; and the capability for touchscreen display, among other features. The little computer can be hooked up to a keyboard and TV screen, and can handle most everyday computing tasks, whether that’s word processing, streaming videos or writing computer programs.

And, as it turns out, the Raspberry Pi can also help solve a unique problem in the high-performance-computing world. In recent years, Los Alamos has been on a quest to help the systems software community work on very large supercomputers without actually testing on them.

The team needed to keep Trinity focused on its mission-critical work for the DOE/NNSA. They can’t routinely pull Trinity off mission science to test more mundane systems software across its tens of thousands of components — measuring time to boot, time to launch a parallel program, or the health monitoring of nodes. And it’s not as if you can keep an extra multimillion-dollar, megawatt-consuming petascale machine around just for R&D work in systems software.
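The "time to launch a parallel program" metric mentioned above can be sketched in a few lines. This is a hypothetical stand-in, not Los Alamos tooling: threads play the role of cluster nodes so the idea runs anywhere, and the function name `measure_launch_time` is our own invention.

```python
# Minimal sketch of a parallel-launch timing probe (hypothetical example,
# not the lab's actual systems software). On a real cluster each worker
# would be a node coming up; here threads stand in for nodes.
import time
from concurrent.futures import ThreadPoolExecutor

def worker_ready(rank: int) -> float:
    """Each 'node' reports the moment it became ready."""
    return time.monotonic()

def measure_launch_time(num_workers: int = 8) -> float:
    """Return the delay until the slowest of num_workers workers was up."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        ready_times = list(pool.map(worker_ready, range(num_workers)))
    # Launch time = how long until the last worker reported in.
    return max(ready_times) - start

print(f"launch delay for 8 workers: {measure_launch_time():.6f} s")
```

On a few-watt Pi cluster, probes like this can be run over and over against hundreds of real nodes — exactly the kind of repetitive measurement that would be a waste of Trinity's cycles.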

Looking around the marketplace, the lab’s team realized the Raspberry Pi just might hold the solution. Here was an inexpensive computer using just a few watts. It could be networked into a several-thousand-node system large enough to simulate a supercomputer at a fraction of the cost of using a machine like Trinity. Yet no one had created a suitable, densely packaged, professional-grade Raspberry Pi system. So the lab turned to SICORP of Albuquerque to collaborate on a solution, and together they worked with BitScope of Australia to develop easily scaled, rack-mounted units.

With a total of 750 CPUs, or 3,000 cores, working together, the BitScope Pi Cluster modules give developers exclusive time on an inexpensive but highly parallelized platform where they can test, validate and scale up systems software, and get it to work reliably. Once the bugs are worked out, they can launch it on the big Trinity machine.

The Pi Clusters have lots of potential applications far beyond the rarefied world of Energy Department supercomputers. They can be used in education to help students learn their way around a supercomputing environment, and they will surely find application in the internet of things, which includes everything from home management systems to all kinds of industrial sensors.

So while Trinity carries out its mission to solve global-scale problems, these small clusters can bring related technology to our everyday world. The supercomputers at Los Alamos have a long history of pioneering new advances in computing that find new applications in the devices sitting on our desktop or in our pockets. The Raspberry Pi clusters make a sweet addition to that list.

Gary Grider is the leader of the High Performance Computing Division at Los Alamos National Laboratory, home of the Trinity supercomputer.