Support for Spark DL notebooks with PyTriton on Databricks/Dataproc #483
base: branch-25.02
Conversation
Looks good overall. A few comments.
In a future optimization we could look at something like https://github.com/triton-inference-server/client/blob/main/src/python/examples/simple_http_cudashm_client.py (or the regular shm equivalent) to reduce data copies, if I'm interpreting these correctly.
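For reference, a condensed sketch along the lines of that example (the model name, tensor names, and shapes below are placeholders, not code from this PR):

```python
import numpy as np
import tritonclient.http as httpclient
import tritonclient.utils.cuda_shared_memory as cudashm

client = httpclient.InferenceServerClient(url="localhost:8000")

# Place the input in a CUDA shared-memory region registered with the server,
# so the request only carries a handle instead of the raw tensor bytes.
input_data = np.arange(16, dtype=np.int32)
byte_size = input_data.size * input_data.itemsize
shm_handle = cudashm.create_shared_memory_region("input_shm", byte_size, 0)
cudashm.set_shared_memory_region(shm_handle, [input_data])
client.register_cuda_shared_memory(
    "input_shm", cudashm.get_raw_handle(shm_handle), 0, byte_size
)

# Tell the server to read INPUT0 from the registered region.
infer_input = httpclient.InferInput("INPUT0", [1, 16], "INT32")
infer_input.set_shared_memory("input_shm", byte_size)
result = client.infer(model_name="simple", inputs=[infer_input])

# Clean up the region once done.
client.unregister_cuda_shared_memory("input_shm")
cudashm.destroy_shared_memory_region(shm_handle)
```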
sudo /databricks/python3/bin/pip3 install --upgrade --force-reinstall -r temp_requirements.txt
rm temp_requirements.txt

set +x
Add a carriage return at the end of the last line of every file where this symbol appears.
Deleted, also merged the tf/torch scripts into one for convenience.
"df = spark.read.parquet(\"imdb_test\").limit(100).cache()" | ||
"def _use_stage_level_scheduling(spark, rdd):\n", | ||
"\n", | ||
" if spark.version < \"3.4.0\":\n", |
This check is probably not needed, since predict_batch_udf is also not available in Spark < 3.4.
Done
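A minimal sketch of what the helper looks like with the version check dropped (assuming the standard PySpark stage-level scheduling APIs; this is illustrative, not the notebook's exact code):

```python
from pyspark.resource import ResourceProfileBuilder, TaskResourceRequests

def _use_stage_level_scheduling(spark, rdd):
    # Request all executor cores plus one GPU per task, so only one task
    # (and therefore one Triton server) runs per executor for this stage.
    executor_cores = int(spark.conf.get("spark.executor.cores"))
    task_requests = TaskResourceRequests().cpus(executor_cores).resource("gpu", 1.0)
    profile = ResourceProfileBuilder().require(task_requests).build
    return rdd.withResources(profile)
```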
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"df = spark.read.parquet(data_path).limit(256).repartition(8)" |
Are limit and repartition needed? Is this the right order? And why these numbers? A comment might be in order. Propagate any changes to the other notebooks.
This was intended to test the minimal scenario of one batch per task; especially with TensorFlow, too high a row count can be really slow (>1 min). (In previous versions we were limiting to 100 rows: https://github.com/NVIDIA/spark-rapids-examples/blob/branch-23.06/examples/ML%2BDL-Examples/Spark-DL/dl_inference/huggingface/conditional_generation.ipynb?short_path=d3949f8#L1208)
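Per the request for a comment, the cell could read something like this (illustrative wording; the exact comment in the notebooks may differ):

```python
# Sample a small test set: 256 rows split across 8 partitions gives one
# ~32-row batch per task, which keeps TF inference fast while still
# exercising the distributed path.
df = spark.read.parquet(data_path).limit(256).repartition(8)
```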
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 56, | ||
"execution_count": null, |
FYI, spark.stop() below might be bad for Databricks: it puts the cluster in a bad state (at least in older runtimes like 13.3, from what I've seen).
Yup, the issue persists on the latest runtime; addressed.
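One way this could be handled is to guard the call so it only runs outside Databricks (the DATABRICKS_RUNTIME_VERSION environment variable is set on Databricks runtimes; this is a sketch, not necessarily the change made in the PR):

```python
import os

# Only stop the session outside Databricks; on Databricks the cluster-managed
# session should be left running, since stopping it can leave the cluster in
# a bad state.
if "DATABRICKS_RUNTIME_VERSION" not in os.environ:
    spark.stop()
```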
"def stop_triton(it):\n", | ||
" import docker\n", | ||
" import time\n", | ||
"def stop_triton(pids):\n", |
Can this, along with all the other Triton-related code that is common across the notebooks, be moved to a single Python file (triton_utils.py) that gets shipped via --py-files with each Spark job and then imported in the notebooks? It would avoid a lot of repetition.
Done
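A rough sketch of the refactor: ship triton_utils.py to the executors (via sc.addPyFile here, or --py-files on spark-submit) and import it inside the partition function. The helper names follow the review comment; their signatures and the cluster size below are assumptions.

```python
spark.sparkContext.addPyFile("triton_utils.py")

def _start(it):
    from triton_utils import start_triton  # import resolves on the executor
    return [start_triton(model_name="hf_pipeline")]  # hypothetical signature

num_nodes = 2  # hypothetical: one Triton server per node
node_rdd = spark.sparkContext.parallelize(range(num_nodes), num_nodes)
pids = node_rdd.barrier().mapPartitions(_start).collect()
```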
Good idea, will definitely follow up with this improvement. Note, per the PyTriton team: with shm there will still be an additional inter-process data copy (until the Triton 3 release).
Support for running the DL Inference notebooks in CSP environments. Notebook outputs are saved from local runs, but all notebooks were tested on Databricks/Dataproc.