Imagine a massive meteor exploding above Earth with the power of 30 nuclear bombs as the atmosphere tears it apart. It can be hard to picture, especially since there are only two such extreme examples to draw on in the past century, but it could happen. So, when planetary defense scientists around the world wanted to know if such a meteor airburst over an ocean would cause coastline-flooding tsunamis, they turned to simulation and visualization scientists.
First, the planetary defense researchers created a model, a mathematical representation of the meteor racing through Earth’s atmosphere toward deep ocean water. Then, to create a visualization, they loaded the results into ParaView, an open-source data visualization program developed at Los Alamos National Laboratory in 2002. In the tool, the visualization specialist mapped the meteor’s mass and speed from the model’s output. They did the same for the atmosphere, capturing the pressure and heating that built up in front of the meteor as it burned. Then, the specialist displayed the detailed grid of ocean water and land that was part of the mathematical model.
When run on a supercomputer, the simulation resembled a science fiction movie, the rock hurtling toward the deep ocean water of Earth. Although scientists had disagreed about the possible outcome, the result of the simulation was clear: As the extraterrestrial rock exploded, it sent a powerful shockwave into the sea that would have destroyed any boat within 100 miles. Yet, as some had predicted, there was no tsunami. It was a theory tested and reinforced by a simulation.
Scientists have long used models backed by rigorous study or fieldwork to explain processes that can’t be witnessed directly. However, in the past, the visualizations created from models were used mostly to communicate and complement scientific research. They helped explain the work, but they did not necessarily reveal new insights on their own. But with computing power constantly advancing, this model-based imagery created by such programs as ParaView is becoming increasingly vital to the scientific process itself.
Nowhere is this more apparent than in the international effort to model the warming atmosphere and its impact on Earth’s climate systems. One recent study from this effort, which includes scientists across the world using a suite of models, projected that thinning ice sheets could add up to 15 inches of global sea level rise this century. In this work, scientific visualizations are especially important because it’s difficult, expensive and time-consuming to conduct fieldwork in such places as Antarctica, and also because of the many complex systems at play.
In studying ice melt at the poles, Earth system researchers must understand how winds shape ocean currents and eddies, which can steer warm water beneath ice shelves – bodies of floating ice that connect the ice sheets to the ocean and act as a crucial barrier, holding back the land ice behind them. Melting ice shelves change ocean salinity, and therefore ocean density, which can draw in more warm water, creating an ice-shelf melt feedback loop. For the most part, climate researchers understand the ramifications of their data long before it reaches a visualization program. But sometimes there are surprises.
In one case, climate researchers looking over simulation output from Antarctica’s Filchner-Ronne ice shelf noticed an unusual amount of cold freshwater in their study area. From previous simulations and analyses, they knew the influx couldn’t be attributed to local factors alone. Still, the freshwater was coming from somewhere and, more importantly, it was destabilizing regional oceanographic conditions that limit ice shelf melt rates. So, as they reviewed their data, the climate researchers devised a theory.
In prior years, confirming such an idea might have required flying to Antarctica and dropping tracer dye in the ocean, then waiting for the dye plume to slowly float tens or hundreds of miles to the study area. But with the aid of a supercomputer, a visualization specialist transformed the model’s data into an intricate scientific animation that played out their answer. As they’d expected, freshwater from ice shelves melting in other parts of the continent was being directed by ocean currents to their study site – again, a theory backed by empirical evidence, tested and supported by scientific visualization.
Ultimately, the Earth system model that found the mysterious freshwater influx will be combined with dynamic land ice models to better understand how ocean currents interact with ice shelves and land ice to control future sea level rise, creating the most accurate predictions ever of Earth’s warming future. Once finished, this U.S. Department of Energy climate model, called the Energy Exascale Earth System Model (E3SM), will allow researchers to press play and watch a simulated video of how the warming environment, given current carbon dioxide levels, will reshape the world 200 years from now.
As always, visualizations still hold the power to relay visceral imagery to the public in a way that helps them understand science. But as supercomputers expand the ability to combine complex models, these scientific visualizations will play an increasingly crucial role in science itself. Because as the questions posed grow more complex – whether that’s a potential meteor strike or melting ice shelves – so must the tools used to find answers.
John Patchett is a staff scientist in the Information Sciences group at Los Alamos National Laboratory, where he does research in data science at scale, large-scale visualization and analysis, data-parallelism, and in-situ visualization and analysis.