<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Xiong_et_al_2018a</id>
		<title>Xiong et al 2018a - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://www.scipedia.com/wd/index.php?action=history&amp;feed=atom&amp;title=Xiong_et_al_2018a"/>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Xiong_et_al_2018a&amp;action=history"/>
		<updated>2026-04-21T17:52:17Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.27.0-wmf.10</generator>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Xiong_et_al_2018a&amp;diff=204885&amp;oldid=prev</id>
		<title>Scipediacontent: Scipediacontent moved page Draft Content 851561381 to Xiong et al 2018a</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Xiong_et_al_2018a&amp;diff=204885&amp;oldid=prev"/>
				<updated>2021-02-03T15:41:46Z</updated>
		
		<summary type="html">&lt;p&gt;Scipediacontent moved page &lt;a href=&quot;/public/Draft_Content_851561381&quot; class=&quot;mw-redirect&quot; title=&quot;Draft Content 851561381&quot;&gt;Draft Content 851561381&lt;/a&gt; to &lt;a href=&quot;/public/Xiong_et_al_2018a&quot; title=&quot;Xiong et al 2018a&quot;&gt;Xiong et al 2018a&lt;/a&gt;&lt;/p&gt;
&lt;table class=&quot;diff diff-contentalign-left&quot; data-mw=&quot;interface&quot;&gt;
				&lt;tr style='vertical-align: top;' lang='en'&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='1' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 15:41, 3 February 2021&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan='2' style='text-align: center;' lang='en'&gt;&lt;div class=&quot;mw-diff-empty&quot;&gt;(No difference)&lt;/div&gt;
&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	<entry>
		<id>https://www.scipedia.com/wd/index.php?title=Xiong_et_al_2018a&amp;diff=204884&amp;oldid=prev</id>
		<title>Scipediacontent: Created page with &quot; == Abstract ==  High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.scipedia.com/wd/index.php?title=Xiong_et_al_2018a&amp;diff=204884&amp;oldid=prev"/>
				<updated>2021-02-03T15:41:43Z</updated>
		
		<summary type="html">&lt;p&gt;Created page with &amp;quot; == Abstract ==  High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain an improved tradeoff. On the ImageNet VID dataset, the proposed method can achieve a competitive mAP of 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.&lt;br /&gt;
&lt;br /&gt;
Comment: Accepted to CVPR 2018. Project page: http://mmlab.ie.cuhk.edu.hk/projects/ST-Lattice&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Original document ==&lt;br /&gt;
&lt;br /&gt;
The different versions of the original document can be found in:&lt;br /&gt;
&lt;br /&gt;
* [http://arxiv.org/abs/1804.05472 http://arxiv.org/abs/1804.05472]&lt;br /&gt;
&lt;br /&gt;
* [http://arxiv.org/pdf/1804.05472 http://arxiv.org/pdf/1804.05472]&lt;br /&gt;
&lt;br /&gt;
* [http://xplorestaging.ieee.org/ielx7/8576498/8578098/08578913.pdf?arnumber=8578913 http://xplorestaging.ieee.org/ielx7/8576498/8578098/08578913.pdf?arnumber=8578913],&lt;br /&gt;
: [http://dx.doi.org/10.1109/cvpr.2018.00815 http://dx.doi.org/10.1109/cvpr.2018.00815]&lt;br /&gt;
&lt;br /&gt;
* [https://dblp.uni-trier.de/db/conf/cvpr/cvpr2018.html#ChenWYZXLL18 https://dblp.uni-trier.de/db/conf/cvpr/cvpr2018.html#ChenWYZXLL18],&lt;br /&gt;
: [http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Optimizing_Video_Object_CVPR_2018_paper.pdf http://openaccess.thecvf.com/content_cvpr_2018/papers/Chen_Optimizing_Video_Object_CVPR_2018_paper.pdf],&lt;br /&gt;
: [http://openaccess.thecvf.com/content_cvpr_2018/html/Chen_Optimizing_Video_Object_CVPR_2018_paper.html http://openaccess.thecvf.com/content_cvpr_2018/html/Chen_Optimizing_Video_Object_CVPR_2018_paper.html],&lt;br /&gt;
: [https://ui.adsabs.harvard.edu/abs/2018arXiv180405472C/abstract https://ui.adsabs.harvard.edu/abs/2018arXiv180405472C/abstract],&lt;br /&gt;
: [https://academic.microsoft.com/#/detail/2963585656 https://academic.microsoft.com/#/detail/2963585656]&lt;/div&gt;</summary>
		<author><name>Scipediacontent</name></author>	</entry>

	</feed>