
Abstract

The rise of virtual and augmented reality fuels an increased need for content suited to these new technologies, including 3D content obtained from real scenes. In this paper we consider the problem of 3D shape reconstruction from multi-view RGB images. We investigate the ability of learning-based strategies to effectively benefit the reconstruction of arbitrary shapes with improved precision and robustness. We especially target real-life performance capture, which contains complex surface details that are difficult to recover with existing approaches. A key step in the multi-view reconstruction pipeline is the search for matching features between viewpoints in order to infer depth information. We propose to cast the matching over a 3D receptive field along viewing lines, and to learn a multi-view photoconsistency measure for that purpose. The intuition is that deep networks can learn local photometric configurations in a broad way, even across different orientations along the various viewing lines of the same surface point. Our results demonstrate this ability, showing that a CNN trained on a standard static dataset can help recover surface details in dynamic scenes that traditional 2D feature-based methods do not perceive. Our evaluation also shows that our solution performs on par with state-of-the-art reconstruction pipelines on standard evaluation datasets, while yielding significantly better results and generalization on realistic performance-capture data.
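The depth-inference step the abstract describes can be pictured as a sweep along each viewing ray: sample candidate surface points at increasing depths, gather one appearance sample per camera at each candidate, score the samples with a photoconsistency measure, and keep the depth that scores best. The following is a minimal illustrative sketch of that loop only; the `photoconsistency` function here is a classic variance-based placeholder, not the learned CNN measure of the paper, and the `views` callables are hypothetical stand-ins for calibrated camera sampling.

```python
import numpy as np

def photoconsistency(samples):
    """Placeholder score: negative variance of per-view samples.

    Stands in for the learned CNN measure; identical samples across
    views give the maximal score of 0.
    """
    return -np.var(samples, axis=0).mean()

def sweep_ray(origin, direction, views, depths):
    """Sweep candidate depths along one viewing ray.

    `views` is a list of callables mapping a 3D point to a color
    sample (a stand-in for projecting into each calibrated camera).
    Returns the depth whose multi-view samples are most consistent.
    """
    best_depth, best_score = None, -np.inf
    for d in depths:
        point = origin + d * direction          # candidate surface point
        samples = np.stack([v(point) for v in views])  # one sample per view
        score = photoconsistency(samples)
        if score > best_score:
            best_depth, best_score = d, score
    return best_depth, best_score
```

The paper's contribution replaces the hand-crafted score above with a network evaluated on a 3D receptive field around each candidate point, which is what lets it tolerate viewpoint-dependent appearance changes.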


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1007/978-3-030-01240-3_48 (under the license http://www.springer.com/tdm)
https://dblp.uni-trier.de/db/conf/eccv/eccv2018-9.html#LeroyFB18
https://link.springer.com/chapter/10.1007/978-3-030-01240-3_48
https://hal.archives-ouvertes.fr/hal-01849286v2
http://openaccess.thecvf.com/content_ECCV_2018/html/Vincent_Leroy_Shape_Reconstruction_Using_ECCV_2018_paper.html
https://www.scipedia.com/public/Leroy_et_al_2018a
https://hal.archives-ouvertes.fr/hal-01849286/document
https://eccv2018.org/openaccess/content_ECCV_2018/html/Vincent_Leroy_Shape_Reconstruction_Using_ECCV_2018_paper.html
https://rd.springer.com/chapter/10.1007/978-3-030-01240-3_48
https://academic.microsoft.com/#/detail/2884556139
https://hal.archives-ouvertes.fr/hal-01849286/file/1217.pdf
https://hal.archives-ouvertes.fr/hal-01849286v2/document
https://hal.archives-ouvertes.fr/hal-01849286v2/file/1217.pdf

Document information

Published on 01/01/2018

Volume 2018, 2018
DOI: 10.1007/978-3-030-01240-3_48
Licence: CC BY-NC-SA
