<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Xu_et_al_2018f</id>
		<title>Xu et al 2018f - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Xu_et_al_2018f"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Xu_et_al_2018f&amp;action=history"/>
		<updated>2026-04-15T01:21:26Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Xu_et_al_2018f&amp;diff=208868&amp;oldid=prev</id>
		<title>Scipediacontent: Scipediacontent moved page Draft Content 552372298 to Xu et al 2018f</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Xu_et_al_2018f&amp;diff=208868&amp;oldid=prev"/>
		<updated>2021-02-03T20:30:15Z</updated>
		<summary type="html">&lt;p&gt;Scipediacontent moved page &lt;a href=&quot;/public/Draft_Content_552372298&quot; class=&quot;mw-redirect&quot; title=&quot;Draft Content 552372298&quot;&gt;Draft Content 552372298&lt;/a&gt; to &lt;a href=&quot;/public/Xu_et_al_2018f&quot; title=&quot;Xu et al 2018f&quot;&gt;Xu et al 2018f&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 20:30, 3 February 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan='2' style='text-align: center;' lang='en'&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Xu_et_al_2018f&amp;diff=208867&amp;oldid=prev</id>
		<title>Scipediacontent: Created page with &quot; == Abstract ==  Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Xu_et_al_2018f&amp;diff=208867&amp;oldid=prev"/>
		<updated>2021-02-03T20:30:11Z</updated>
		<summary type="html">&lt;p&gt;Created page with &amp;quot; == Abstract ==  Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict and control. In this paper, we develop a novel experience-driven approach that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, just as a human learns a new skill (such as driving or swimming). Specifically, we, for the first time, propose to leverage emerging Deep Reinforcement Learning (DRL) for enabling model-free control in communication networks, and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely used utility function by jointly learning the network environment and its dynamics, and making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3, and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-of-the-art DRL method (for continuous control), Deep Deterministic Policy Gradient (DDPG), which does not offer satisfactory performance.&lt;br /&gt;
&lt;br /&gt;
Comment: 9 pages, 12 figures; the paper was accepted as a conference paper at IEEE INFOCOM 2018&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Original document ==&lt;br /&gt;
&lt;br /&gt;
The different versions of the original document can be found in:&lt;br /&gt;
&lt;br /&gt;
* [http://arxiv.org/abs/1801.05757 http://arxiv.org/abs/1801.05757]&lt;br /&gt;
&lt;br /&gt;
* [http://arxiv.org/pdf/1801.05757 http://arxiv.org/pdf/1801.05757]&lt;br /&gt;
&lt;br /&gt;
* [http://xplorestaging.ieee.org/ielx7/8464035/8485803/08485853.pdf?arnumber=8485853 http://xplorestaging.ieee.org/ielx7/8464035/8485803/08485853.pdf?arnumber=8485853],&lt;br /&gt;
: [http://dx.doi.org/10.1109/infocom.2018.8485853 http://dx.doi.org/10.1109/infocom.2018.8485853]&lt;br /&gt;
&lt;br /&gt;
* [https://dblp.uni-trier.de/db/journals/corr/corr1801.html#abs-1801-05757 https://dblp.uni-trier.de/db/journals/corr/corr1801.html#abs-1801-05757],&lt;br /&gt;
: [https://ieeexplore.ieee.org/document/8485853 https://ieeexplore.ieee.org/document/8485853],&lt;br /&gt;
: [https://arxiv.org/abs/1801.05757 https://arxiv.org/abs/1801.05757],&lt;br /&gt;
: [https://ui.adsabs.harvard.edu/abs/2018arXiv180105757X/abstract https://ui.adsabs.harvard.edu/abs/2018arXiv180105757X/abstract],&lt;br /&gt;
: [https://doi.org/10.1109/INFOCOM.2018.8485853 https://doi.org/10.1109/INFOCOM.2018.8485853],&lt;br /&gt;
: [https://experts.syr.edu/en/publications/experience-driven-networking-a-deep-reinforcement-learning-based- https://experts.syr.edu/en/publications/experience-driven-networking-a-deep-reinforcement-learning-based-],&lt;br /&gt;
: [https://academic.microsoft.com/#/detail/2963549123 https://academic.microsoft.com/#/detail/2963549123]&lt;/div&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	</feed>