Being able to accurately detect changes to the Earth’s surface using satellite imagery can aid in everything from climate change research and farming to human migration patterns and nuclear nonproliferation. But until recently, it was not possible to flexibly integrate images from multiple types of sensors — for example, ones that show surface changes (such as new building construction) versus ones that show material changes (such as water to sand). Now, with a new capability, we can — and in doing so, we get a more frequent and complete picture of what’s happening on the ground.
At Los Alamos National Laboratory, we’ve developed a flexible mathematical approach that identifies changes in pairs of satellite images collected by sensors using different sensing technologies, allowing for faster, more complete analysis. It’s easy to assume all satellite images are the same and, thus, that comparing them is simple. But the reality is quite different. Hundreds of different imaging sensors are orbiting the Earth right now, and nearly all take pictures of the ground in a different way from the others.
Take, for example, imaging sensors that capture information from multiple spectral channels, or types of light. These are among the most common types of sensors and give us the images most of us think of when we hear “satellite imagery.” These imaging sensors are alike in that they can capture color information beyond what the human eye can see, making them extremely sensitive to material changes. For example, they can clearly capture a grassy field that, a few weeks later, is replaced by synthetic turf.
But how they capture those changes varies widely from one sensor to the next. One might measure four different colors of light, for instance, while another measures six. Each sensor might measure the color red differently.
Add to this the fact that these sensors aren’t the only type of satellite imaging. For example, there is also synthetic aperture radar, or SAR, which captures radar images of the Earth’s surface structure in fine detail. These SAR images are sensitive to surface changes or deformation and are commonly used for applications such as volcano monitoring and geothermal energy. So, once again, we have an imaging sensor that captures information in a completely different way from the others.
This is a real challenge when comparing these images. When signals come from two different remote sensing techniques, traditional approaches for detecting changes fail because the underlying math and physics no longer make sense. But there is valuable information to be had, because these sensors are all imaging the same scenes, just in different ways. So how can you look at all of these images captured by different methods in a way that automatically identifies changes over time?
Our mathematical approach makes this possible by creating a framework that not only compares images from different types of sensors, but also effectively “normalizes” the different types of imagery — all while maintaining the original signal information.
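To give a feel for the normalization idea, here is a deliberately simplified sketch in Python. This is a toy illustration, not the actual Los Alamos framework: it rescales each image to zero mean and unit variance so that two sensors with different gains, offsets and units become comparable, then flags pixels whose normalized difference is large. The sensor values and the changed pixel are invented for the example.

```python
# Toy sketch of cross-sensor change detection via normalization.
# NOT the actual Los Alamos algorithm -- just the general idea of
# putting dissimilar sensor readings on a common scale before comparing.
from statistics import mean, stdev

def normalize(pixels):
    """Rescale pixel values to zero mean and unit variance, so two
    sensors with different gains and offsets become comparable."""
    m, s = mean(pixels), stdev(pixels)
    return [(p - m) / s for p in pixels]

def change_mask(img_a, img_b, threshold=1.5):
    """Flag pixels whose normalized difference exceeds the threshold."""
    a, b = normalize(img_a), normalize(img_b)
    return [abs(x - y) > threshold for x, y in zip(a, b)]

# "Before" scene as seen by a hypothetical sensor A (arbitrary units).
before = [0.30, 0.31, 0.29, 0.32, 0.28, 0.30] * 4

# Same scene as seen by a hypothetical sensor B, which reports the same
# physics with a very different gain and offset...
after = [v * 180 + 10 for v in before]
after[8] += 30  # ...plus one genuinely changed pixel (e.g., a new roof)

mask = change_mask(before, after)
print([i for i, changed in enumerate(mask) if changed])  # [8]
```

Despite sensor B’s values being on a completely different numeric scale, only the genuinely changed pixel is flagged; the gain and offset differences are absorbed by the normalization. A real system must also handle different spatial resolutions, viewing geometries and sensing physics, which is where the harder mathematics comes in.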
But the most important benefit of this image integration is that we’re able to see changes as little as minutes apart. Previously, the time elapsed between images captured by the same sensor could be days or weeks. Being able to integrate various types of images means we can use data from more sensors faster, and thus see changes more quickly, which allows for more rigorous analysis.
To test our method, we looked at images of the construction of the new SoFi Stadium in Los Angeles starting in 2016. We began by comparing the different types of images over the same date range to see which ones picked up which changes. For example, in one case, the roof of a building beside the stadium was replaced — changing it from beige to white over the course of several months. The spectral imaging sensors detected this change, because it was related to color and material. SAR, however, did not, as we expected. However, SAR was highly sensitive to surface changes due to moving dirt piles, whereas the spectral imagery was not.
When we integrated the images using our new analysis capability, we were able to see both changes — the surface and the material — at a much faster rate than if we focused on any one individual satellite. This had never been done before at scale, and it signals a potential fundamental shift in how satellite imagery is analyzed.
We were also able to demonstrate how changes could be detected much faster than before. In one instance, we were able to compare different spectral images collected just 12 minutes apart. In fact, it was so fast, we were able to detect a plane flying through the scene.
As space-based remote sensing continues to become more accessible — particularly with the explosive growth of cubesats and smallsats in both government and commercial sectors — more satellite imagery will become available. That’s good news in theory, because it means more data to feed comprehensive analysis. In practice, however, this analysis is challenged by the overwhelming volume of data, the diversity of sensor designs and the stove-piped nature of image repositories for different satellite providers. Furthermore, as image analysts become deluged with this tidal wave of imagery, the development of automated detection algorithms that “know where to look” is paramount.
This new approach to change detection won’t solve all of those challenges, but it will help by combining the strengths of various satellite imagers — and give us more clarity about the changing landscape of our world in the process.