Abstract

Given the wide use of machine learning approaches based on opaque prediction models, understanding the reasons behind the decisions of black box decision systems is nowadays a crucial topic. We address the problem of providing meaningful explanations for the widely applied task of image classification. In particular, we explore the impact of changing the neighborhood generation function of a local interpretable model-agnostic explainer by proposing four different variants. All the proposed methods rely on a grid-based segmentation of the image, but each adopts a different strategy for generating the neighborhood of the image for which an explanation is required. An extensive experimental evaluation highlights both the improvements and the weaknesses of each proposed approach.
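To make the idea concrete, the following sketch illustrates one plausible way a grid-based neighborhood generator for a LIME-style local explainer could work: the image is divided into a regular grid of cells, and perturbed neighbors are produced by "switching off" random subsets of cells (replacing them with a baseline color). This is only an illustration under stated assumptions, not the authors' exact method; all function names, parameters, and defaults below are hypothetical.

import numpy as np


def grid_segments(height, width, n_rows, n_cols):
    # Assign each pixel to a grid cell; returns an (height, width) label map.
    row_ids = np.minimum(np.arange(height) * n_rows // height, n_rows - 1)
    col_ids = np.minimum(np.arange(width) * n_cols // width, n_cols - 1)
    return row_ids[:, None] * n_cols + col_ids[None, :]


def generate_neighbourhood(image, n_rows=4, n_cols=4, n_samples=100,
                           baseline=0.0, rng=None):
    # Create perturbed copies of `image` by masking random grid cells.
    # Returns:
    #   masks:   (n_samples, n_rows*n_cols) binary matrix, 1 = cell kept
    #   samples: (n_samples, H, W, C) perturbed images
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    segments = grid_segments(h, w, n_rows, n_cols)
    n_cells = n_rows * n_cols

    masks = rng.integers(0, 2, size=(n_samples, n_cells))
    samples = np.empty((n_samples,) + image.shape, dtype=image.dtype)
    for i, mask in enumerate(masks):
        keep = mask[segments].astype(bool)   # per-pixel keep/off map
        perturbed = image.copy()
        perturbed[~keep] = baseline          # "switch off" masked cells
        samples[i] = perturbed
    return masks, samples


if __name__ == "__main__":
    # Toy usage: a random 32x32 RGB image, a 4x4 grid, 10 neighbors.
    img = np.random.rand(32, 32, 3)
    masks, neighbours = generate_neighbourhood(img, n_rows=4, n_cols=4,
                                               n_samples=10, rng=0)
    print(masks.shape, neighbours.shape)  # (10, 16) (10, 32, 32, 3)

The binary masks serve as the interpretable representation on which a local surrogate model is fitted, while the perturbed images are fed to the black box classifier; the four variants studied in the paper differ in how such neighbors are generated.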


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1007/978-3-030-16148-4_5 (under the license http://www.springer.com/tdm)
https://link.springer.com/chapter/10.1007/978-3-030-16148-4_5
https://rd.springer.com/chapter/10.1007/978-3-030-16148-4_5
https://academic.microsoft.com/#/detail/2934229219

Document information

Published on 31/12/18
Accepted on 31/12/18
Submitted on 31/12/18

Volume 2019 (2019)
DOI: 10.1007/978-3-030-16148-4_5
Licence: Other

