CSE Community Seminar | February 6, 2026
Abstract
Turbulence is ubiquitous in engineering and the natural sciences, yet its prediction remains a major challenge. Although the governing equations are fully known, the intrinsic complexity of turbulent flows makes accurate prediction by direct solution of the Navier–Stokes equations computationally prohibitive. Consequently, practical applications rely on simplified models that trade accuracy for computational efficiency.
This compromise naturally raises a key question: to what extent can turbulence dynamics be simplified without sacrificing predictive capability? Equivalently, what is the minimum level of computational complexity required to obtain meaningful predictions?
In this seminar, we address these questions by defining model complexity and predictive skill using tools from information theory, enabling a systematic and principled application of Occam’s Razor to turbulence forecasting: models should be as simple as possible while retaining predictive power. We apply this framework to the prediction of extreme events in two-dimensional turbulence and demonstrate a fundamental limit on how simple a model can be while still successfully forecasting such events. In particular, we show that the model complexity required to attain a prescribed level of forecasting skill increases exponentially with the prediction horizon, at a rate comparable to the largest Lyapunov exponent of the flow, thereby imposing a severe computational constraint on forecasting.
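As an illustrative sketch of this scaling (the symbols below are notational assumptions for this announcement, not definitions taken from the talk itself), the statement can be written as

  C(\tau) \gtrsim C_0 \, e^{\lambda_1 \tau},

where C(\tau) denotes the model complexity needed to reach a fixed forecasting skill at prediction horizon \tau, \lambda_1 is the largest Lyapunov exponent of the flow, and C_0 sets the complexity scale at short horizons. The precise information-theoretic definitions of complexity and skill are given in the seminar.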
These results provide a general perspective on the computational limits of event prediction in turbulence and offer practical guidance for the development of reduced-order, data-driven, and physics-agnostic models in engineering applications. More broadly, they emphasize the importance of estimating turbulence complexity in order to determine how data-driven models should be scaled and what level of performance can be realistically expected.