Researchers have developed a new way to see how flooding might impact a region using satellite imagery. The new method, developed by researchers at the Massachusetts Institute of Technology, combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird's-eye-view images of a region, indicating where flooding is likely to occur based on the strength of an oncoming storm.

The team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images produced without the physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more accurate than those from the AI-only method, which generated images of flooding in places where it is not physically possible.

The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can produce realistic, trustworthy content when paired with a physics-based model. To apply the method to other regions and depict flooding from future storms, the model will need to be trained on many more satellite images to learn how flooding would look in those regions.

“The idea is one day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

Dubbed the “Earth Intelligence Engine,” the method is available as an online resource for others to try.

The researchers reported their results in the journal IEEE Transactions on Geoscience and Remote Sensing.

The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

The authors used a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks.

The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between real satellite imagery and imagery synthesized by the first network.

Each network automatically improves its performance based on feedback from the other network.

The adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing, according to the researchers.
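The adversarial training loop described above can be sketched on a toy problem. This is purely illustrative, assuming a one-dimensional "image" (a scalar), a linear generator, and a logistic discriminator; the study's actual model is a conditional GAN over full satellite images, and none of the names or hyperparameters below come from the paper.

```python
# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic data drawn from
# N(4, 0.5); discriminator d(x) = sigmoid(w*x + c) tries to tell them apart.
# Each step alternates a discriminator update and a generator update,
# mirroring the adversarial push and pull described in the article.
import math
import random

random.seed(0)
REAL_MEAN, REAL_STD = 4.0, 0.5

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr, batch = 0.05, 32

for step in range(2000):
    z = [random.gauss(0, 1) for _ in range(batch)]
    fake = [a * zi + b for zi in z]
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (gradient ascent on log d(real) + log(1 - d(fake))).
    gw = gc = 0.0
    for x in real:
        e = 1.0 - sigmoid(w * x + c)
        gw += e * x; gc += e
    for x in fake:
        e = -sigmoid(w * x + c)
        gw += e * x; gc += e
    w += lr * gw / (2 * batch)
    c += lr * gc / (2 * batch)

    # Generator step (non-saturating loss): push d(fake) toward 1.
    ga = gb = 0.0
    for zi, x in zip(z, fake):
        e = (1.0 - sigmoid(w * x + c)) * w
        ga += e * zi; gb += e
    a += lr * ga / batch
    b += lr * gb / batch

# After training, generated samples should cluster near the real mean.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
```

The key structural point is the alternation: neither network is trained to completion on its own; each update uses the other network's current state as its training signal.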

GANs can still produce hallucinations: a known issue with generative AI in which factually incorrect information or features appear in otherwise realistic output.

“Hallucinations can mislead viewers,” says Lütjens, highlighting a top-of-mind concern. “How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

Flood hallucinations

The researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions of how to prepare and potentially evacuate people out of harm’s way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps.

The maps are the final product of a pipeline of physical models that usually begins with a hurricane track model. That model feeds into a wind model, which simulates the pattern and strength of winds over a local region, combined with a flood or storm surge model that forecasts how the wind might push a nearby body of water onto land.

A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region.
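The model pipeline described above is essentially a chain of functions, each consuming the previous model's output. The sketch below is a hypothetical illustration of that chaining with trivial stand-in formulas; the real track, wind, surge, and hydraulic models are far more complex, and every function name and constant here is made up.

```python
# Toy pipeline: hurricane track -> wind field -> storm surge -> flood depth.
# Each stub stands in for a real physical model in the chain.
import math

def hurricane_track(t):
    """Storm center position (km) at hour t: moving inland along x."""
    return (10.0 * t, 0.0)

def wind_speed(cell, center, peak=50.0, radius=30.0):
    """Toy wind model: speed (m/s) decays linearly away from the eye."""
    d = math.dist(cell, center)
    return max(0.0, peak * (1.0 - d / radius))

def storm_surge(wind):
    """Toy surge model: surge height (m) grows with wind speed."""
    return 0.05 * wind

def flood_depth(surge, elevation):
    """Toy hydraulic step: water above local ground elevation floods the cell."""
    return max(0.0, surge - elevation)

# Chain the models over a few grid cells (x_km, y_km) with elevations (m).
cells = [(0.0, 0.0), (5.0, 5.0), (20.0, 0.0), (40.0, 10.0)]
elevations = [0.5, 1.0, 0.2, 3.0]
center = hurricane_track(t=1)
depths = [flood_depth(storm_surge(wind_speed(c, center)), e)
          for c, e in zip(cells, elevations)]
```

The last cell sits outside the toy storm's wind radius, so it stays dry; that end-to-end flood-depth map per cell is what the color-coded visualization renders.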

“Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens asked.

The team first tested how generative AI alone would produce satellite images of future flooding.

They trained a GAN on actual satellite images of Houston taken before and after Hurricane Harvey.

When they tasked the generator with producing new flood images of the same regions, the images resembled typical satellite imagery. But a closer look revealed hallucinations in some images, in the form of floods in places where flooding should not be possible, such as locations at higher elevation.
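A simple physical-consistency check makes this kind of hallucination concrete: flag any pixel the generator renders as flooded even though it sits above the forecast water level. This sketch is not from the paper; the threshold and the tiny grids are invented for illustration.

```python
# Flag AI-"flooded" pixels that lie above the forecast water level,
# i.e. floods where flooding should not be physically possible.
WATER_LEVEL_M = 2.0   # hypothetical forecast flood elevation (m)

def hallucinated_pixels(flood_mask, elevation_m):
    """Return (row, col) of pixels marked flooded above the water level."""
    return [(r, c)
            for r, row in enumerate(flood_mask)
            for c, flooded in enumerate(row)
            if flooded and elevation_m[r][c] > WATER_LEVEL_M]

# 3x3 toy example: the generator floods a hilltop pixel at 5 m elevation.
flood_mask = [[1, 1, 0],
              [0, 1, 0],
              [0, 0, 0]]
elevation_m = [[0.5, 1.0, 4.0],
               [2.5, 5.0, 3.0],
               [1.0, 1.5, 2.0]]
bad = hallucinated_pixels(flood_mask, elevation_m)  # [(1, 1)]
```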

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge and flood patterns.

With the physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.
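That pixel-by-pixel claim suggests a natural way to measure consistency: compare the generated image's flood extent against the physics model's forecast mask and count agreeing pixels. The metric and the tiny masks below are illustrative assumptions, not the paper's evaluation.

```python
# Fraction of pixels where the generated flood extent matches the
# physics model's forecast flood mask (1 = flooded, 0 = dry).
def mask_agreement(generated_mask, physics_mask):
    """Return the fraction of pixels where the two binary masks agree."""
    total = agree = 0
    for grow, prow in zip(generated_mask, physics_mask):
        for g, p in zip(grow, prow):
            total += 1
            agree += (g == p)
    return agree / total

physics_mask   = [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
generated_mask = [[1, 1, 0], [1, 0, 0], [0, 0, 0]]
score = mask_agreement(generated_mask, physics_mask)  # 1.0
```

A score of 1.0 means the generated imagery floods exactly the cells the physics model forecasts, which is the behavior the physics-reinforced method is designed to enforce.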

“We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” said Newman. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.

Original article by Jennifer Chu | MIT News.

Featured image: A generative AI model visualizes what floods in Texas would look like in satellite imagery. The original photo is on the left, and the AI-generated image is on the right.

Photo Credit: Pre-flood images from Maxar Open Data Program via Gupta et al., CVPR Workshop Proceedings. Generated images from Lütjens et al., IEEE TGRS.