Currently, if we want to run a series of jobs where the input of one job is the output of the previous one, we must materialize the output on disk so the next job can read it. People using Spark or Hadoop usually run a series of jobs rather than a single big job, so it is important to support caching the results of one job for consumption by the next.
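A minimal sketch of the two approaches, in plain Python. All names here (`chain_via_disk`, `chain_via_cache`, the toy jobs) are hypothetical illustrations, not part of any real API:

```python
import json
import os
import tempfile


def job_square(data):
    """First job: square each element."""
    return [x * x for x in data]


def job_sum(data):
    """Second job: sum the elements."""
    return sum(data)


def chain_via_disk(data):
    """Today: job 1 materializes its output on disk; job 2 reads it back."""
    path = os.path.join(tempfile.mkdtemp(), "job1_output.json")
    with open(path, "w") as f:
        json.dump(job_square(data), f)   # write intermediate result to disk
    with open(path) as f:
        return job_sum(json.load(f))     # re-read it for the next job


def chain_via_cache(data):
    """Proposed: keep the intermediate result cached in memory instead."""
    cache = {}
    cache["job1"] = job_square(data)     # result never touches disk
    return job_sum(cache["job1"])


print(chain_via_disk([1, 2, 3]))   # 14
print(chain_via_cache([1, 2, 3]))  # 14
```

Both paths produce the same result; the cached variant avoids the serialization, disk I/O, and deserialization round trip between jobs, which is the cost this issue proposes to eliminate.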