<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Pham_et_al_2019a</id>
		<title>Pham et al 2019a - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Pham_et_al_2019a"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Pham_et_al_2019a&amp;action=history"/>
		<updated>2026-04-21T20:43:55Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Pham_et_al_2019a&amp;diff=194468&amp;oldid=prev</id>
		<title>Scipediacontent: Scipediacontent moved page Draft Content 289484533 to Pham et al 2019a</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Pham_et_al_2019a&amp;diff=194468&amp;oldid=prev"/>
		<updated>2021-01-28T21:02:33Z</updated>
		<summary type="html">&lt;p&gt;Scipediacontent moved page &lt;a href=&quot;/public/Draft_Content_289484533&quot; class=&quot;mw-redirect&quot; title=&quot;Draft Content 289484533&quot;&gt;Draft Content 289484533&lt;/a&gt; to &lt;a href=&quot;/public/Pham_et_al_2019a&quot; title=&quot;Pham et al 2019a&quot;&gt;Pham et al 2019a&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 21:02, 28 January 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan='2' style='text-align: center;' lang='en'&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Pham_et_al_2019a&amp;diff=194467&amp;oldid=prev</id>
		<title>Scipediacontent: Created page with &quot; == Abstract ==  International audience; With the continuous growth in the air transportation demand, air traffic controllers will have to handle increased traffic and consequ...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Pham_et_al_2019a&amp;diff=194467&amp;oldid=prev"/>
		<updated>2021-01-28T21:02:28Z</updated>
		<summary type="html">&lt;p&gt;Created page with &amp;quot; == Abstract ==  International audience; With the continuous growth in the air transportation demand, air traffic controllers will have to handle increased traffic and consequ...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
International audience; With the continuous growth in air transportation demand, air traffic controllers will have to handle increased traffic and consequently more potential conflicts. This gives rise to the need for conflict resolution tools that can perform well in high-density traffic scenarios in a noisy environment. Unlike model-based approaches, learning-based or machine learning approaches can take advantage of historical traffic data and flexibly encapsulate environmental uncertainty. In this study, we propose an artificial intelligence agent that is capable of resolving conflicts, in the presence of traffic and given uncertainties in conflict resolution maneuvers, without the need for prior knowledge of a set of rules mapping conflict scenarios to expected actions. The conflict resolution task is formulated as a decision-making problem in a large and complex action space, which makes it well suited to reinforcement learning algorithms. Our work includes the development of a learning environment, a scenario state representation, a reward function, and a learning algorithm. As a result, the proposed method, inspired by the Deep Q-learning and Deep Deterministic Policy Gradient algorithms, can resolve conflicts, with a success rate of over 81%, in the presence of traffic and varying degrees of uncertainty.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Original document ==&lt;br /&gt;
&lt;br /&gt;
The different versions of the original document can be found in:&lt;br /&gt;
&lt;br /&gt;
* [https://hal-enac.archives-ouvertes.fr/hal-02138135 https://hal-enac.archives-ouvertes.fr/hal-02138135],&lt;br /&gt;
: [https://hal-enac.archives-ouvertes.fr/hal-02138135/document https://hal-enac.archives-ouvertes.fr/hal-02138135/document],&lt;br /&gt;
: [https://hal-enac.archives-ouvertes.fr/hal-02138135/file/ATM_Seminar_2019_paper_18.pdf https://hal-enac.archives-ouvertes.fr/hal-02138135/file/ATM_Seminar_2019_paper_18.pdf]&lt;/div&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	</feed>