<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Zee_et_al_2021a</id>
		<title>Zee et al 2021a - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Zee_et_al_2021a"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;action=history"/>
		<updated>2026-04-23T14:32:56Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;diff=226379&amp;oldid=prev</id>
		<title>Scipediacontent at 13:42, 28 June 2021</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;diff=226379&amp;oldid=prev"/>
				<updated>2021-06-28T13:42:41Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:42, 28 June 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== &lt;del class=&quot;diffchange diffchange-inline&quot;&gt;Abstract &lt;/del&gt;==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;Summary &lt;/ins&gt;==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The aim is to obtain approximations on meshes that are very coarse, but nevertheless resolve quantities of interest with striking accuracy. Our work is inspired by the machine learning framework of Mishra (2018), who considered the data-driven acceleration of finite-difference schemes. The essential idea is to optimize a numerical method for a given coarse mesh, by minimizing a loss function consisting of errors with respect to the quantities of interest for obtained training data. Our main contribution lies in the identification of a stable and consistent parametric family of finite element methods on a given mesh. In particular, we consider a general Petrov-Galerkin method, where the trial space is fixed, but the test space has trainable parameters that are to be determined in the offline training process. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored for the quantity of interest. The Petrov-Galerkin method is equivalent to a Minimal-Residual formulation, as commonly studied in the context of DPG and optimal Petrov-Galerkin methods. As is natural in deep learning, we use an artificial neural network to define the family of test spaces, whose parameters are learned from the data. Using numerical examples for the Laplacian and advection equation, we demonstrate that the trained method has superior approximation of quantities of interest even on very coarse meshes. [1] I. Brevis, I. Muga, and K. G. 
van der Zee, A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations, Computers and Mathematics with Applications, to appear, https://doi.org/10.1016/j.camwa.2020.08.012 (2020)&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The aim is to obtain approximations on meshes that are very coarse, but nevertheless resolve quantities of interest with striking accuracy. Our work is inspired by the machine learning framework of Mishra (2018), who considered the data-driven acceleration of finite-difference schemes. The essential idea is to optimize a numerical method for a given coarse mesh, by minimizing a loss function consisting of errors with respect to the quantities of interest for obtained training data. Our main contribution lies in the identification of a stable and consistent parametric family of finite element methods on a given mesh. In particular, we consider a general Petrov-Galerkin method, where the trial space is fixed, but the test space has trainable parameters that are to be determined in the offline training process. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored for the quantity of interest. The Petrov-Galerkin method is equivalent to a Minimal-Residual formulation, as commonly studied in the context of DPG and optimal Petrov-Galerkin methods. As is natural in deep learning, we use an artificial neural network to define the family of test spaces, whose parameters are learned from the data. 
Using numerical examples for the Laplacian and advection equation, we demonstrate that the trained method has superior approximation of quantities of interest even on very coarse meshes. [1] I. Brevis, I. Muga, and K. G. van der Zee, A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations, Computers and Mathematics with Applications, to appear, https://doi.org/10.1016/j.camwa.2020.08.012 (2020)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del class=&quot;diffchange diffchange-inline&quot;&gt;&amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;#160;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Video ==&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Video ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{#evt:service=cloudfront|id=259001|alignment=center|filename=421.mp4}}&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{#evt:service=cloudfront|id=259001|alignment=center|filename=421.mp4}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;diff=226377&amp;oldid=prev</id>
		<title>Scipediacontent: Scipediacontent moved page Draft Content 886172881 to Zee et al 2021a</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;diff=226377&amp;oldid=prev"/>
				<updated>2021-06-28T13:40:56Z</updated>
		
		<summary type="html">&lt;p&gt;Scipediacontent moved page &lt;a href=&quot;/public/Draft_Content_886172881&quot; class=&quot;mw-redirect&quot; title=&quot;Draft Content 886172881&quot;&gt;Draft Content 886172881&lt;/a&gt; to &lt;a href=&quot;/public/Zee_et_al_2021a&quot; title=&quot;Zee et al 2021a&quot;&gt;Zee et al 2021a&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 13:40, 28 June 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan='2' style='text-align: center;' lang='en'&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;diff=226376&amp;oldid=prev</id>
		<title>Scipediacontent: Created page with &quot;== Abstract ==  We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The a...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Zee_et_al_2021a&amp;diff=226376&amp;oldid=prev"/>
				<updated>2021-06-28T13:40:53Z</updated>
		
		<summary type="html">&lt;p&gt;Created page with &amp;quot;== Abstract ==  We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The a...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== Abstract ==&lt;br /&gt;
&lt;br /&gt;
We consider the data-driven acceleration of Galerkin-based finite element discretizations for the approximation of partial differential equations (PDEs). The aim is to obtain approximations on meshes that are very coarse, but nevertheless resolve quantities of interest with striking accuracy. Our work is inspired by the machine learning framework of Mishra (2018), who considered the data-driven acceleration of finite-difference schemes. The essential idea is to optimize a numerical method for a given coarse mesh, by minimizing a loss function consisting of errors with respect to the quantities of interest for obtained training data. Our main contribution lies in the identification of a stable and consistent parametric family of finite element methods on a given mesh. In particular, we consider a general Petrov-Galerkin method, where the trial space is fixed, but the test space has trainable parameters that are to be determined in the offline training process. Finding the optimal test space therefore amounts to obtaining a goal-oriented discretization that is completely tailored for the quantity of interest. The Petrov-Galerkin method is equivalent to a Minimal-Residual formulation, as commonly studied in the context of DPG and optimal Petrov-Galerkin methods. As is natural in deep learning, we use an artificial neural network to define the family of test spaces, whose parameters are learned from the data. Using numerical examples for the Laplacian and advection equation, we demonstrate that the trained method has superior approximation of quantities of interest even on very coarse meshes. [1] I. Brevis, I. Muga, and K. G. van der Zee, A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations, Computers and Mathematics with Applications, to appear, https://doi.org/10.1016/j.camwa.2020.08.012 (2020)&lt;br /&gt;
                                                                                                &lt;br /&gt;
== Video ==&lt;br /&gt;
{{#evt:service=cloudfront|id=259001|alignment=center|filename=421.mp4}}&lt;br /&gt;
                                                &lt;br /&gt;
== Document ==&lt;br /&gt;
&amp;lt;pdf&amp;gt;Media:Draft_Content_886172881A_IDC6_421.pdf&amp;lt;/pdf&amp;gt;&lt;/div&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	</feed>