Replies: 1 comment
You can do this with any remote path, not only S3. Take a look at the Kubeflow quickstart documentation: the dataset is downloaded from a website, and then it is managed by the internal artifact-tracking subcomponent.
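As a minimal sketch of the idea above: the fetch step is just an ordinary download, and in a KFP v2 pipeline you would wrap it in a `@dsl.component` whose output path is an `Output[Dataset]` artifact, so the tracking subcomponent records it automatically. The function name `fetch_dataset` and the URL are hypothetical, not from the quickstart.

```python
import urllib.request


def fetch_dataset(url: str, dest: str) -> str:
    """Download a dataset from any remote path (http, https, file, ...).

    In a Kubeflow component, `dest` would be the path of an output
    Dataset artifact, which the pipeline's artifact tracking manages.
    """
    urllib.request.urlretrieve(url, dest)
    return dest
```

Inside a component the body stays the same; only the signature changes to take an artifact parameter instead of a plain string path.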
I have Kubeflow running on an on-prem cluster, where I have a Jupyter notebook server with a data volume `/data` that contains a file called `sample.csv`. I want to be able to read the CSV in my Kubeflow pipeline. Here is what my pipeline looks like; I'm not sure how to integrate the CSV from my notebook server. Any help would be appreciated.
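One common approach for this setup (assuming the `/data` volume is backed by a PVC): mount the same PVC into the pipeline task, e.g. with `kubernetes.mount_pvc` from the `kfp-kubernetes` extension, and then read the file inside the component with an ordinary `open()`. The component body below is a plain function so it can be tested standalone; the PVC name and the `read_sample` helper are hypothetical.

```python
import csv


def read_sample(path: str = "/data/sample.csv") -> list:
    """Read rows from the CSV on the mounted data volume.

    Inside a KFP v2 pipeline this would run in a @dsl.component, with
    the notebook's PVC mounted at /data, for example:

        task = read_sample_op()
        kubernetes.mount_pvc(            # from the kfp-kubernetes package
            task,
            pvc_name="my-notebook-data", # hypothetical PVC name
            mount_path="/data",
        )
    """
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Note that the notebook server and the pipeline pod cannot both hold a `ReadWriteOnce` volume at the same time; the PVC needs `ReadWriteMany` access mode (or the notebook must release it) for this to work.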