<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Mang_et_al_2024a</id>
		<title>Mang et al 2024a - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Mang_et_al_2024a"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;action=history"/>
		<updated>2026-05-06T08:33:17Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305774&amp;oldid=prev</id>
		<title>JSanchez: JSanchez moved page Draft Sanchez Pinedo 274870176 to Mang et al 2024a</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305774&amp;oldid=prev"/>
				<updated>2024-07-01T12:43:11Z</updated>
		
		<summary type="html">&lt;p&gt;JSanchez moved page &lt;a href=&quot;/public/Draft_Sanchez_Pinedo_274870176&quot; class=&quot;mw-redirect&quot; title=&quot;Draft Sanchez Pinedo 274870176&quot;&gt;Draft Sanchez Pinedo 274870176&lt;/a&gt; to &lt;a href=&quot;/public/Mang_et_al_2024a&quot; title=&quot;Mang et al 2024a&quot;&gt;Mang et al 2024a&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:43, 1 July 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan='2' style='text-align: center;' lang='en'&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>JSanchez</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305773&amp;oldid=prev</id>
		<title>JSanchez at 12:43, 1 July 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305773&amp;oldid=prev"/>
				<updated>2024-07-01T12:43:05Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:43, 1 July 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l3&quot; &gt;Line 3:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 3:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the machine learning process, hyperparameters are chosen to decrease the prediction error and improve convergence. However, optimized hyperparameters have a limit in terms of enhancing the performance of neural networks. In this work, the datasets used for the numerical experiments arise from the resolution of partial differential equations (PDEs) defined on a spatial domain. We propose a DYNAmic WEIghted Loss (DYNAWEIL) function-based approach for neural networks used to learn the solutions of these PDEs. This is a two-step process: we first train for a few epochs in the classical way; then the dynamic weighted loss function replaces the classical loss function by leveraging information from past training error histories. To validate this method, we carry out numerical experiments with different neural networks on datasets arising from two different physics: the Goldstein equation [1] and the radiative transfer equation [2]. To demonstrate the relevance of this approach, we compare a neural network model trained with the classical loss function against one trained with the dynamic weighted loss function, with and without hyperparameter optimization in both cases.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the machine learning process, hyperparameters are chosen to decrease the prediction error and improve convergence. However, optimized hyperparameters have a limit in terms of enhancing the performance of neural networks. In this work, the datasets used for the numerical experiments arise from the resolution of partial differential equations (PDEs) defined on a spatial domain. We propose a DYNAmic WEIghted Loss (DYNAWEIL) function-based approach for neural networks used to learn the solutions of these PDEs. This is a two-step process: we first train for a few epochs in the classical way; then the dynamic weighted loss function replaces the classical loss function by leveraging information from past training error histories. To validate this method, we carry out numerical experiments with different neural networks on datasets arising from two different physics: the Goldstein equation [1] and the radiative transfer equation [2]. To demonstrate the relevance of this approach, we compare a neural network model trained with the classical loss function against one trained with the dynamic weighted loss function, with and without hyperparameter optimization in both cases.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;== Full Paper ==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;pdf&amp;gt;Media:Draft_Sanchez Pinedo_274870176125.pdf&amp;lt;/pdf&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>JSanchez</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305771&amp;oldid=prev</id>
		<title>JSanchez at 12:43, 1 July 2024</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305771&amp;oldid=prev"/>
				<updated>2024-07-01T12:43:03Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 12:43, 1 July 2024&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot; &gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &amp;#160; &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;==Abstract==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;In the machine learning process, hyperparameters are chosen to decrease the prediction error and improve convergence. However, optimized hyperparameters have a limit in terms of enhancing the performance of neural networks. In this work, the datasets used for the numerical experiments arise from the resolution of partial differential equations (PDEs) defined on a spatial domain. We propose a DYNAmic WEIghted Loss (DYNAWEIL) function-based approach for neural networks used to learn the solutions of these PDEs. This is a two-step process: we first train for a few epochs in the classical way; then the dynamic weighted loss function replaces the classical loss function by leveraging information from past training error histories. To validate this method, we carry out numerical experiments with different neural networks on datasets arising from two different physics: the Goldstein equation [1] and the radiative transfer equation [2]. To demonstrate the relevance of this approach, we compare a neural network model trained with the classical loss function against one trained with the dynamic weighted loss function, with and without hyperparameter optimization in both cases.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>JSanchez</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305770&amp;oldid=prev</id>
		<title>JSanchez: Created blank page</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Mang_et_al_2024a&amp;diff=305770&amp;oldid=prev"/>
				<updated>2024-07-01T12:43:01Z</updated>
		
		<summary type="html">&lt;p&gt;Created blank page&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>JSanchez</name></author>	</entry>

	</feed>