Abstract

In the present work, we investigate the stability of turbulence closure predictions from neural network models and highlight the role of model-data inconsistency during inference. We quantify this inconsistency with the Mahalanobis distance and demonstrate that the instability of the model predictions in practical large eddy simulations (LES) correlates with the deviation of the input data encountered during the simulation from the training dataset. Moreover, the method of 'stability training' is applied to increase the robustness of recurrent artificial neural networks (ANN) against small perturbations of the input, which are typically unavoidable in any practical scenario. We show that this method can significantly increase the stability of simulations with ANN-based closure term predictions. The models also achieve good accuracy on the blind testing set compared to the baseline model trained without stability training. The work presented here can thus be seen as a building block towards long-term stable data-driven models for dynamical systems and highlights methods to detect and counter model-data inconsistencies.
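
For orientation, the Mahalanobis distance used to quantify this model-data inconsistency has a standard closed form. The notation below (training-set mean \mu and covariance \Sigma) is an assumption for illustration; the abstract does not fix the paper's exact conventions:

```latex
% Mahalanobis distance of an input sample x to the training distribution,
% with sample mean \mu and covariance \Sigma estimated on the training data:
d_M(\mathbf{x}) = \sqrt{\left(\mathbf{x} - \boldsymbol{\mu}\right)^{\top} \boldsymbol{\Sigma}^{-1} \left(\mathbf{x} - \boldsymbol{\mu}\right)}
```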
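Likewise, the following is a minimal sketch of the 'stability training' idea, written in PyTorch with a simple feedforward stand-in for the recurrent ANNs used in the paper: the model is penalized for producing different outputs on clean and slightly perturbed inputs. The architecture and the values of `alpha` and `sigma` are illustrative assumptions, not taken from the work:

```python
# Minimal sketch of stability training: augment the task loss with a term that
# penalizes output changes under small input perturbations.
# `alpha` (stability weight) and `sigma` (noise level) are assumed values.
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(  # stand-in model; the paper uses recurrent ANNs
    torch.nn.Linear(8, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha, sigma = 0.1, 0.01

def stability_step(x, y):
    """One training step on a batch (x, y) with a stability penalty."""
    out_clean = model(x)
    out_pert = model(x + sigma * torch.randn_like(x))     # perturbed forward pass
    task_loss = F.mse_loss(out_clean, y)                  # ordinary regression loss
    stab_loss = F.mse_loss(out_pert, out_clean.detach())  # consistency under noise
    loss = task_loss + alpha * stab_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the clean output in the stability term keeps the penalty from distorting the task prediction itself; only the perturbed response is pulled towards the clean one.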


Document information

Published on 10/03/21
Submitted on 10/03/21

Volume 1700 - Data Science and Machine Learning, 2021
DOI: 10.23967/wccm-eccomas.2020.115
Licence: CC BY-NC-SA
