Replies: 3 comments 2 replies
-
About the parametric queries: this is achieved via the CLI options. Maybe the best example is in this showcase project:
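The exact CLI options aren't shown above, but as a generic illustration of what "parametric" means here: bindings from a previous query's result set get injected into a query template. A minimal sketch, with made-up IRIs and variable names (the `VALUES` block stands in for the injected result set):

```sparql
# Hypothetical parametric query: ?city would be supplied from an
# earlier query's result set; VALUES inlines it here for illustration.
SELECT ?person WHERE {
  VALUES ?city { <http://example.org/city/Berlin> }
  ?person <http://example.org/livesIn> ?city .
}
```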
-
Aha, I see! Well, too bad you can't do that. What I also find useful is splitting the mapping into multiple queries, you know, for the cognitive load.
-
You could do something like the Unix pipe by running INSERT queries instead of SELECTs/CONSTRUCTs, then more INSERTs, and eventually a SELECT or CONSTRUCT. You can think of the pipeline as incrementally building named graphs.
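A minimal sketch of one such pipeline step, with hypothetical graph names and an example vocabulary (none of these IRIs come from the project itself): each INSERT reads from one named graph and accumulates results into the next.

```sparql
# Pipeline step (hypothetical): normalise raw data into a staging graph.
# <urn:pipeline:raw> and <urn:pipeline:staging> are made-up graph names.
INSERT {
  GRAPH <urn:pipeline:staging> {
    ?person a <http://xmlns.com/foaf/0.1/Person> ;
            <http://xmlns.com/foaf/0.1/name> ?name .
  }
}
WHERE {
  GRAPH <urn:pipeline:raw> {
    ?person <http://example.org/fullName> ?name .
  }
}
```

Later steps would be further INSERTs reading from `<urn:pipeline:staging>`, and the final SELECT or CONSTRUCT reads the accumulated data out of the last graph in the chain.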
-
The website says:

> Pipelines
> Use SPARQL Results Sets as input for parametric queries. Decompose your data integration workflow.

That sounds interesting! But what do you mean exactly, and how does it work? I currently use Apache Airflow to organise a decomposed mapping process, so anything in that direction is helpful.