The international nuclear arms control regime is approaching a critical juncture. If new nuclear weapons treaties are to be negotiated, ratified and enforced, they will need to be underpinned by strong technical monitoring capabilities. The Department of Energy’s National Nuclear Security Administration is leveraging its expertise and technology to meet this challenge, understanding that in nuclear nonproliferation, you can’t verify what you can’t see.
The United States is placing renewed urgency on developing the science and technology required to monitor our adversaries’ nuclear activity — specifically by harnessing the power of artificial intelligence and the unmatched, high-performance computing capabilities found at DOE’s national laboratories.
DOE houses four of the world’s top 10 fastest supercomputers, including the top two, and we are already developing three next-generation exascale machines, each able to perform a billion billion (10^18) calculations per second. Coupled with our advances in AI, those technologies will strengthen our nonproliferation efforts while helping to ensure that our own nuclear weapons stockpile remains safe, reliable and effective.
At Los Alamos National Laboratory, we are using AI to sift through data from an international network of sensors that look for underground seismic events that could indicate an illicit nuclear explosive test. With more than half a million seismic events worldwide each year, automated calculations are required to distinguish potential nuclear explosions from naturally occurring earthquakes.
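To make the discrimination task concrete, here is a deliberately simplified sketch of one classic seismological idea: explosions, being compressional sources, tend to radiate relatively more P-wave energy than earthquakes, which radiate strong shear (S-wave) energy. The feature, threshold and amplitudes below are invented for illustration and bear no relation to the actual systems at Los Alamos, which combine many features across a global sensor network.

```python
import math

def p_s_ratio_score(p_amplitude, s_amplitude):
    """Log10 of the P/S amplitude ratio; higher values suggest an
    explosion-like (compressional) source."""
    return math.log10(p_amplitude / s_amplitude)

def classify_event(p_amplitude, s_amplitude, threshold=0.3):
    """Toy single-feature discriminant. Operational pipelines fuse many
    such features with machine-learned models; threshold is invented."""
    if p_s_ratio_score(p_amplitude, s_amplitude) > threshold:
        return "possible explosion"
    return "likely earthquake"

# Earthquakes radiate strong shear energy, so the P/S ratio is low:
print(classify_event(p_amplitude=1.0, s_amplitude=4.0))  # likely earthquake
# A relatively P-rich signal scores above the toy threshold:
print(classify_event(p_amplitude=3.0, s_amplitude=1.0))  # possible explosion
```

In practice, AI earns its keep not in the final threshold but in screening half a million events a year so that analysts only review the handful that look anomalous.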
Similarly, a team at Los Alamos is using AI to pinpoint the source of an underground nuclear explosion using signatures from gases that seep to the surface through rock fractures. Those gases may be driven hundreds of yards from a detonation along a variety of paths, so determining where best to place sensors to pick up the signal has proved exceptionally challenging, especially in areas where the structure of the rock is uncertain.
That’s where AI comes in: it offers the potential to accelerate our physics-based models so we can account for the numerous combinations of seepage pathways and quantify the uncertainty in when a gas is most likely to reach the surface. That in turn will allow us to deploy sensors to the right locations for the best chance of detection.
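The kind of uncertainty quantification described above can be illustrated with a toy Monte Carlo ensemble: sample many plausible fracture pathways and transport speeds, then summarize the distribution of surface-arrival times. Every parameter and distribution here is invented for illustration; the real work uses physics-based transport models (which AI surrogates would accelerate), not these made-up numbers.

```python
import random
import statistics

def sample_arrival_time(rng):
    """Draw one plausible gas-seepage scenario (all values hypothetical)."""
    path_length_m = rng.uniform(100, 500)            # uncertain fracture path length
    velocity_m_per_day = rng.lognormvariate(0, 0.5)  # uncertain transport speed
    return path_length_m / velocity_m_per_day        # days to reach the surface

rng = random.Random(42)
arrivals = [sample_arrival_time(rng) for _ in range(10_000)]

# The ensemble yields a distribution, not a single answer, which is what
# tells you when and where a sensor has the best chance of a detection.
print(f"median arrival: {statistics.median(arrivals):.0f} days")
print(f"90th percentile: {sorted(arrivals)[9000]:.0f} days")
```

An AI surrogate's role in such a workflow would be to make each scenario cheap enough that thousands of pathway combinations can be evaluated instead of a handful.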
We are also using AI to fuse disparate data streams, such as social media posts, satellite imagery and weather data, to look for signs of nuclear proliferation. On its own, each of these sources might not tell us much; combined, they tell us quite a lot. Fusing this data and gleaning relevant information requires both advanced data analytics (sophisticated algorithms for signal detection, natural language processing and data fusion) and advanced computing infrastructure to process it. In short, DOE’s supercomputers can sift through these immense data sets and “learn” to recognize patterns of nuclear proliferation with unprecedented speed.
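The intuition that weak signals become strong in combination can be sketched with a naive-Bayes-style fusion: treat each indicator as independent evidence and multiply its likelihood ratio into the odds. The indicator names, likelihood ratios and prior below are entirely invented for illustration and are not drawn from any real analytic system.

```python
import math

def fuse_posterior(prior_odds, likelihood_ratios):
    """Combine independent evidence: posterior odds = prior odds x product
    of likelihood ratios; return the posterior probability."""
    log_odds = math.log(prior_odds) + sum(math.log(lr) for lr in likelihood_ratios)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical weak indicators; each alone barely moves a 1-in-100 prior.
indicators = {
    "satellite_change": 3.0,
    "procurement_chatter": 2.5,
    "thermal_anomaly": 2.0,
}

alone = fuse_posterior(prior_odds=0.01, likelihood_ratios=[3.0])
fused = fuse_posterior(prior_odds=0.01, likelihood_ratios=indicators.values())
print(f"one indicator alone: {alone:.3f}")   # still a long shot
print(f"all three fused:    {fused:.3f}")    # much more interesting
```

Real fusion pipelines must also model correlations between sources, which is one reason the algorithm development described above is genuinely hard.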
Our supercomputers, like “El Capitan,” the recently announced NNSA exascale machine to be built at Lawrence Livermore National Laboratory, will also use AI to analyze satellite images to look for changes in topography that could indicate an underground nuclear explosion, while other remote sensing satellites search for anomalies that could indicate a space-based nuclear detonation.
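At its simplest, the change detection described above amounts to differencing two co-registered views of the same terrain and flagging cells that moved beyond a threshold, for instance the subsidence crater an underground test can leave. The grids and threshold below are invented toy values; operational analysis works on real elevation models at vastly larger scale.

```python
def changed_cells(before, after, threshold_m=2.0):
    """Return (row, col) of cells whose elevation changed by more than
    threshold_m between two co-registered elevation grids (toy example)."""
    return [
        (r, c)
        for r, row in enumerate(before)
        for c, elev in enumerate(row)
        if abs(after[r][c] - elev) > threshold_m
    ]

# Hypothetical 2x2 elevation grids (meters), before and after an event:
before = [[100.0, 100.0],
          [100.0, 100.0]]
after  = [[100.0,  95.0],   # 5 m of subsidence at cell (0, 1)
          [100.0, 100.0]]

print(changed_cells(before, after))  # [(0, 1)]
```

The role of AI and exascale machines like El Capitan is to do this kind of comparison robustly, at continental scale, despite clouds, seasons and sensor noise.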
In many ways, AI is no longer an option but a necessity. We are awash in data from myriad sources, from dew points to gross domestic products, polling numbers to Twitter trends. Each day, an estimated 2.5 quintillion bytes of data are created, the equivalent of the contents of the entire Library of Congress being produced more than 166,000 times. This will only increase as we develop new, faster and less expensive ways to collect data, such as compact, high-resolution imagers to be carried aboard CubeSats, satellites about the size of a loaf of bread that can be launched by the dozens per year. To be able to act on those immense data sets in time to make a difference, we need to be able to analyze them quickly. The combination of DOE and NNSA’s unrivaled supercomputers and AI will make that possible.
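The scale comparison above checks out against the commonly cited figure of roughly 15 terabytes for the digitized Library of Congress, an assumed estimate used here only to make the arithmetic explicit:

```python
# Sanity-check the scale claim: 2.5 quintillion bytes per day versus an
# assumed ~15 TB estimate for the digitized Library of Congress.
daily_bytes = 2.5e18                  # "2.5 quintillion bytes" per day
library_of_congress_bytes = 15e12     # assumed ~15 terabytes

copies_per_day = daily_bytes / library_of_congress_bytes
print(f"{copies_per_day:,.0f} Library-of-Congress equivalents per day")
```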
While the nuclear threat is not new, the dynamic global environment has created a renewed sense of urgency related to nuclear materials and the need to monitor them, and AI is key to meeting it. At Los Alamos, we like to say that it takes a weapons lab to find a weapons lab, because our expertise gives us insight few others in the world possess. Continuing to push the boundaries of AI will give us the tools we need to keep our nation and the world safe.
Nancy Jo Nicholas is the associate laboratory director for Global Security at Los Alamos National Laboratory and an expert in nuclear nonproliferation. Thom Mason is the director of Los Alamos National Laboratory.