Learning Turbulent Flows with Generative Models: From operator-enhanced super-resolution and forecasting to sparse flow reconstructions

Brown University

Abstract

Neural operators are promising surrogates for dynamical systems, but when trained with standard L2 losses they tend to oversmooth fine-scale turbulent structures. Here, we show that combining operator learning with generative modeling overcomes this limitation. We consider three practical turbulent-flow challenges where conventional neural operators fail: spatio-temporal super-resolution, forecasting, and sparse flow reconstruction. For Schlieren jet super-resolution, an adversarially trained neural operator (adv-NO) reduces the energy-spectrum error by 15x while preserving sharp gradients at neural-operator-like inference cost. For 3D homogeneous isotropic turbulence, an adv-NO trained on only 160 timesteps from a single trajectory forecasts accurately for five eddy-turnover times and offers a 114x wall-clock inference speed-up over baseline diffusion-based forecasters, enabling near-real-time rollouts. For reconstructing cylinder wake flows from highly sparse Particle Tracking Velocimetry-like inputs, a conditional generative model infers full 3D velocity and pressure fields with correct phase alignment and statistics. These advances enable accurate reconstruction and forecasting at low compute cost, bringing near-real-time analysis and control within reach in experimental and computational fluid mechanics.

Outline of our Study


We address three turbulence-modeling challenges in this work: (1) super-resolution of low-resolution flow fields in both space and time, (2) forecasting turbulent flow evolution from limited training data, and (3) zero-shot full-field reconstruction from partial observations. We compare vanilla MSE-trained neural operators (NO) against generative models and physics-informed variants, highlighting how generative training mitigates spectral bias. For the 3D tasks, we train with limited data, acknowledging the difficulty of obtaining high-fidelity DNS or experimental datasets.
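Across all three tasks, spectral bias is diagnosed with the radially averaged kinetic energy spectrum. Below is a minimal sketch of how such a diagnostic can be computed for a square, periodic 2D velocity field; the normalization, shell binning, and the log-spectrum error metric at the end are illustrative assumptions, not the exact post-processing used in the paper.

# Minimal sketch (NumPy) of a radially averaged kinetic energy spectrum E(k);
# grid assumptions, normalization, and the error metric are illustrative.
import numpy as np

def energy_spectrum_2d(u, v):
    """Radially averaged energy spectrum of a square, periodic 2D velocity field."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2                     # normalized Fourier coefficients
    vh = np.fft.fft2(v) / n**2
    e_hat = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)  # spectral kinetic energy density

    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    kmag = np.sqrt(k[:, None]**2 + k[None, :]**2)

    bins = np.arange(0.5, n // 2 + 1.0)            # shells centered at k = 1, 2, ...
    E, _ = np.histogram(kmag, bins=bins, weights=e_hat)
    return 0.5 * (bins[1:] + bins[:-1]), E

# Example error metric between a reference (DNS) and a predicted spectrum:
# k, E_dns = energy_spectrum_2d(u_dns, v_dns)
# _, E_pred = energy_spectrum_2d(u_pred, v_pred)
# err = np.mean(np.abs(np.log(E_pred + 1e-12) - np.log(E_dns + 1e-12)))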

Task 1: Spatio-temporal Super-resolution

Case: Schlieren Visualizations of an Impinging Jet (M=1)


Spatio-temporal super-resolution. a) Our objective: learn the mapping from low-resolution, low-frame-rate (LRLF) inputs to high-resolution, high-frame-rate (HRHF) outputs. b) Schematic of the learning setups: a conventionally trained neural operator (NO), an adversarially trained NO (adv-NO), and NO combined with generative (Gen) models, namely a variational autoencoder (VAE), a generative adversarial network (GAN), and a diffusion model (DM). In the NO+Gen setups, the generative model super-resolves the NO-predicted states. c) Comparison of the energy spectra. d) Spectrum error versus per-sample inference cost. Adv-NO attains low error at orders-of-magnitude lower compute than NO+GAN and NO+DM, while outperforming NO and NO+VAE, which have similar costs.
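The adv-NO setup in panel (b) couples a neural operator with a discriminator so that the data-fitting loss is augmented by an adversarial term that sharpens fine scales. A minimal sketch of one such training step is given below, assuming a PyTorch generator G (the neural operator), a discriminator D operating on HRHF fields, and a placeholder weight lambda_adv; the architectures, optimizers, and weighting are assumptions, not the paper's exact configuration.

# Hedged sketch of one adversarial training step for a neural operator (adv-NO);
# G, D, the optimizers, and lambda_adv are placeholder assumptions.
import torch
import torch.nn.functional as F

def adv_no_step(G, D, opt_G, opt_D, lrlf, hrhf, lambda_adv=0.01):
    """One update on a batch of (LRLF input, HRHF target) pairs."""
    # Discriminator update: real HRHF snapshots vs. operator predictions.
    with torch.no_grad():
        fake = G(lrlf)
    d_real, d_fake = D(hrhf), D(fake)
    loss_D = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator (neural operator) update: data-fitting loss plus adversarial term.
    pred = G(lrlf)
    logits = D(pred)
    loss_G = F.mse_loss(pred, hrhf) + lambda_adv * \
             F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item(), loss_D.item()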

Task 2: Forecasting

Case: Homogeneous Isotropic Turbulence (\(Re_{\lambda} = 90\))


We forecast homogeneous isotropic turbulence for five eddy-turnover times (\(t_E\)). Only a single simulation trajectory with 160 snapshots was available for training. Physics-informed training reduces the field error but does not improve the energy spectrum. Adv-NO maintains good spectral fidelity up to 5\(t_E\).
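A forecast over several eddy-turnover times amounts to an autoregressive rollout of the learned one-step operator. The sketch below assumes a one-step map from u(t) to u(t + dt), with dt and t_E supplied by the user; the interface is an assumption for illustration.

# Minimal sketch of an autoregressive forecasting rollout; the model interface,
# dt, and t_E are illustrative assumptions.
import torch

@torch.no_grad()
def rollout(model, u0, dt, t_E, horizon_in_tE=5.0):
    """Advance the state u0 autoregressively over `horizon_in_tE` eddy-turnover times."""
    n_steps = int(round(horizon_in_tE * t_E / dt))
    traj, u = [u0], u0
    for _ in range(n_steps):
        u = model(u)              # learned one-step map: u(t) -> u(t + dt)
        traj.append(u)
    return torch.stack(traj)      # shape: (n_steps + 1, *u0.shape)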

Task 3: Flow Reconstruction (from PTV-like measurements)

Case: Turbulent Cylinder Wake (\(Re = 11,000\))


Here we address a practical problem in experimental fluid mechanics: reconstructing the full 3D velocity and pressure fields from sparse PIV-like velocity measurements. We compare a diffusion model (DM) and a generative adversarial network (GAN) in this study. Both models are trained on DNS of the system; the training dataset consists of a single simulation trajectory of 150 snapshots. Test Setup 1 mimics volumetric PIV, Test Setup 2 mimics planar PIV, Test Setup 3 represents volumetric PIV in a sub-domain, and Test Setup 4 targets pressure reconstruction. Under extremely sparse velocity measurements, where the GAN fails, the diffusion model achieves phase alignment near the observation region while matching the global statistics.
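One common way to condition a diffusion model on sparse measurements is inpainting-style sampling, where the observed voxels are re-imposed at every reverse step so the generated field stays consistent with the data while the model fills in the rest. The sketch below illustrates that idea with a standard DDPM reverse update; the noise schedule, the eps_model interface, and this particular conditioning strategy are assumptions for illustration, not necessarily the algorithm used in the paper.

# Hedged sketch of inpainting-style conditioning for a DDPM: observed voxels
# (mask == 1) are re-imposed at every reverse step. The schedule, model
# interface, and conditioning strategy are illustrative assumptions.
import torch

@torch.no_grad()
def sparse_conditioned_sample(eps_model, obs, mask, betas):
    """Reverse DDPM sampling constrained to sparse observations `obs` where mask == 1."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(obs)                        # start from pure noise
    for t in reversed(range(len(betas))):
        # Noise the observations to the current diffusion level and impose them.
        noised_obs = alpha_bar[t].sqrt() * obs + (1 - alpha_bar[t]).sqrt() * torch.randn_like(obs)
        x = mask * noised_obs + (1 - mask) * x

        # Standard DDPM reverse update on the full field.
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = eps_model(x, t_batch)
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = mean if t == 0 else mean + betas[t].sqrt() * torch.randn_like(x)
    return mask * obs + (1 - mask) * x               # hard constraint on observed voxels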

BibTeX

@article{oommen2025learning,
  author={Oommen, Vivek and Khodakarami, Siavash and Bora, Aniruddha and Wang, Zhicheng and Karniadakis, George Em},
  title={Learning Turbulent Flows with Generative Models: Super-resolution, Forecasting, and Sparse Flow Reconstruction},
  journal={arXiv preprint arXiv:2509.08752},
  year={2025}
}