== Summary ==
We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The aim is to obtain approximations on meshes that are very coarse, but that nevertheless resolve quantities of interest with striking accuracy. Our work is inspired by the machine learning framework of Mishra (2018), who considered the data-driven acceleration of finite-difference schemes. The essential idea is to optimize a numerical method for a given coarse mesh by minimizing a loss function consisting of the errors, with respect to the quantities of interest, committed on the training data. Our main contribution lies in the identification of a stable and consistent parametric family of finite element methods on a given mesh. In particular, we consider a general Petrov-Galerkin method in which the trial space is fixed, but the test space has trainable parameters that are determined in an offline training process. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored to the quantity of interest. The Petrov-Galerkin method is equivalent to a minimal-residual formulation, as commonly studied in the context of DPG and optimal Petrov-Galerkin methods. As is natural in deep learning, we use an artificial neural network to define the family of test spaces, whose parameters are learned from the data. Using numerical examples for the Laplacian and the advection equation, we demonstrate that the trained method approximates quantities of interest with superior accuracy, even on very coarse meshes.

[1] I. Brevis, I. Muga, and K. G. van der Zee, A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations, Computers & Mathematics with Applications, to appear, https://doi.org/10.1016/j.camwa.2020.08.012 (2020)
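In symbols, the setting described above can be sketched as follows; the notation is illustrative and not taken verbatim from [1]. Let <math>b(\cdot,\cdot)</math> be the bilinear form of the weak problem with operator <math>B</math>, let <math>U_h</math> be the fixed trial space, and let <math>V_h(\theta)</math> be the parametric test space defined by the network parameters <math>\theta</math>. The Petrov-Galerkin approximation and its equivalent minimal-residual characterization read

:<math>b\big(u_h(\theta), v_h\big) = f(v_h) \quad \forall\, v_h \in V_h(\theta) \qquad\Longleftrightarrow\qquad u_h(\theta) = \underset{w_h \in U_h}{\operatorname{arg\,min}}\; \big\| f - B w_h \big\|_{V'},</math>

and the offline training determines <math>\theta</math> by minimizing the loss over the training set,

:<math>\min_{\theta}\; \sum_{i} \big| q\big(u^{(i)}\big) - q\big(u_h^{(i)}(\theta)\big) \big|^2,</math>

where <math>q</math> is the quantity of interest and the <math>u^{(i)}</math> are the training solutions.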
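To make the offline training loop concrete, the following is a minimal sketch, not the authors' implementation: a 1D Poisson problem <math>-u''=f</math> on a coarse P1 trial space, with the test functions parametrized by a trainable coefficient matrix <code>W</code> over a fine-mesh basis (a simple stand-in for the neural-network parametrization of [1]), trained on a few manufactured loads so that the point value <math>u(1/2)</math> is matched. All names and the problem setup are hypothetical, and the plain Petrov-Galerkin solve below omits the stability safeguards that the actual minimal-residual formulation provides.

<syntaxhighlight lang="python">
import torch

torch.manual_seed(0)

# Fine and coarse 1D meshes on (0, 1); coarse nodes are a subset of fine nodes.
n_c, ratio = 4, 8                                  # interior coarse nodes, refinement factor
n_f = (n_c + 1) * ratio - 1                        # interior fine nodes
h_f, h_c = 1.0 / (n_f + 1), 1.0 / (n_c + 1)
x_f = torch.linspace(h_f, 1 - h_f, n_f)            # interior fine nodes
x_c = torch.linspace(h_c, n_c * h_c, n_c)          # interior coarse nodes

# Fine-mesh stiffness matrix A_f for -u'' with P1 hat functions chi_k.
A_f = (2 * torch.eye(n_f)
       - torch.diag(torch.ones(n_f - 1), 1)
       - torch.diag(torch.ones(n_f - 1), -1)) / h_f

# P[k, j] = phi_j(x_f[k]): coarse hat functions evaluated at the fine nodes,
# so that b(phi_j, chi_k) = (A_f @ P)[k, j].
P = torch.clamp(1 - torch.abs(x_f[:, None] - x_c[None, :]) / h_c, min=0.0)
B_raw = A_f @ P

# Training data: loads f_m = (m pi)^2 sin(m pi x) with exact u_m = sin(m pi x);
# load vectors via mass lumping, f(chi_k) ~ h_f * f(x_k).
modes = [1, 2, 3]
loads = [h_f * (m * torch.pi) ** 2 * torch.sin(m * torch.pi * x_f) for m in modes]
q_exact = [torch.sin(torch.tensor(m * torch.pi / 2)) for m in modes]

# Quantity of interest q(u_h) = u_h(1/2), evaluated through the coarse basis.
q_vec = torch.clamp(1 - torch.abs(torch.tensor(0.5) - x_c) / h_c, min=0.0)

# Trainable test-space coefficients: psi_i = sum_k W[i, k] chi_k,
# initialized with W = P^T, i.e. the plain Galerkin method.
W = (P.T).clone().requires_grad_(True)
opt = torch.optim.Adam([W], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = 0.0
    for F_f, q in zip(loads, q_exact):
        B = W @ B_raw                    # Petrov-Galerkin matrix B_ij = b(phi_j, psi_i)
        F = W @ F_f                      # Petrov-Galerkin load   F_i  = f(psi_i)
        u_h = torch.linalg.solve(B, F)   # differentiable coarse solve
        loss = loss + (q_vec @ u_h - q) ** 2
    loss.backward()
    opt.step()

print("final QoI loss:", float(loss))
</syntaxhighlight>

Because <code>torch.linalg.solve</code> is differentiable, the gradient of the quantity-of-interest loss with respect to the test-space parameters flows through the discrete solve itself, which is the mechanism that lets the test space be tailored to the goal functional during offline training.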
== Video ==
{{#evt:service=cloudfront|id=259001|alignment=center|filename=421.mp4}} | {{#evt:service=cloudfront|id=259001|alignment=center|filename=421.mp4}} |
Published on 28/06/21
Accepted on 28/06/21
Submitted on 28/06/21
Volume MS02 - Applications of Goal-Oriented Error Estimation and Adaptivity, 2021
DOI: 10.23967/admos.2021.032
Licence: CC BY-NC-SA