Replies: 2 comments 1 reply
As far as I can tell, you don't need the middle task, as you suspected. I believe this would work fine:

```yaml
- name: check-artifacts
  inputs:
    parameters:
      - name: path_objects
    artifacts:
      - name: path_objects
        path: /app/objects/{{=sprig.base(inputs.parameters.path_objects)}}
        s3:
          key: '{{inputs.parameters.path_objects}}'
  container:
    image: alpine:latest
    command: [sh, -c]
    args: ["ls -la /app/objects"]
    volumeMounts:
      - name: workdir
        mountPath: /app/objects
```

Pretty much just merging them, nothing fancy. Argo's …
The problem with this approach is that …
I have a question regarding sharing volumes in a multi-step workflow. My use case is the following: in a workflow, I have an initial step that generates a JSON output with paths into an S3 bucket, for example

```json
["path/object1", "path/object2", "path/object3"]
```

(the number of paths is variable). Once I have this info, the next task needs to download all of these objects into a directory, say `/app/objects`, so its contents will be `/app/objects/object*`. The last task then reads from that directory and uses the files for further processing. What I have for now is the following:

The result of the last task `check-artifacts` is what I expect:

It does what it needs to do, but here comes my question: how can I improve this workflow, given that I really only need two main tasks, `generate-paths` and `check-artifacts`? I feel that the middle task `get-from-s3` creates overhead in the pipeline. It does nothing on its own; it is only used to spin up a pod that mounts the volume where the S3 files are stored and later used by the last task. Thank you, folks!
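P.S. In case it helps, the initial step boils down to something like this. The real `generate-paths` builds the list from the bucket, so the hard-coded echo below is just a stand-in matching the example output above:

```yaml
- name: generate-paths
  container:
    image: alpine:latest
    command: [sh, -c]
    # Whatever this prints to stdout becomes the task's
    # outputs.result, which downstream tasks can consume.
    args: ['echo ''["path/object1", "path/object2", "path/object3"]''']
```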