This paper surveys existing environments used to enact data science pipelines applied to graphs. Data science pipelines are a new form of query that combines classic graph operations with artificial intelligence graph analytics operations. A pipeline defines a data flow consisting of tasks for querying, exploring and analysing graphs. Different environments and systems can be used to enact pipelines, ranging from graph NoSQL stores and programming languages extended with graph processing and analytics libraries to full machine learning and artificial intelligence studios. The paper describes these environments and the design principles they promote for enacting data science pipelines intended to query, process and explore data collections, and particularly graphs.
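As a concrete illustration of the kind of pipeline the paper studies, the sketch below chains the three task families it names, querying, exploring and analysing a graph. It uses NetworkX as a stand-in graph library; the stage names and the toy graph are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of a graph data science pipeline with NetworkX.
# Stage names and the example graph are illustrative only.
import networkx as nx

def build_graph():
    # Ingest: load edges into a graph store (here, in memory).
    g = nx.Graph()
    g.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
    return g

def query(g):
    # Classic graph operation: a neighbourhood query.
    return sorted(g.neighbors("c"))

def analyse(g):
    # Analytics operation: rank nodes by PageRank and return the top one.
    ranks = nx.pagerank(g)
    return max(ranks, key=ranks.get)

g = build_graph()
print(query(g))    # neighbours of node "c"
print(analyse(g))  # highest-ranked node
```

The point of the sketch is the data-flow shape: each stage consumes the output of the previous one, which is the pipeline model the surveyed environments support in different ways.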

Original document

The different versions of the original document can be found at:

http://dx.doi.org/10.1007/978-3-030-55814-7_23 under the license http://www.springer.com/tdm

Document information

Published on 01/01/2020

Volume 2020, 2020
DOI: 10.1007/978-3-030-55814-7_23
Licence: Other

