Batch_size issue #752
With PR #753, you can get all the inputs of your batch sent to your model function at once.
Issue not resolved:
It's not possible that your single query of 100K inputs (using …
I have just now installed from your branch and checked it, and it is still not working. I sent 130000 rows with batch_size=50000, and when I checked the logs the input_rows lengths were 2357, 4866, etc., so it is still automatically batching the input dataset. Even for 10K input rows and batch_size=1K it is still automatically batching. Please have a look.
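For context on the batch sizes above: Clipper's documentation describes batch_size as the maximum number of inputs a replica will process per call, and the adaptive batching logic may deliver smaller batches to stay within the latency objective, which would be consistent with seeing 2357 or 4866 inputs under a 50000 cap. A minimal sketch of a deployment where the cap at least covers the largest request; the values are illustrative, and a query frontend built with the PR #753 change is still assumed to be required for a single batch:

from clipper_admin import ClipperConnection, DockerContainerManager
from clipper_admin.deployers import keras as keras_deployer

clipper_conn = ClipperConnection(DockerContainerManager())
clipper_conn.connect()

# batch_size only raises the cap; it does not by itself force a single
# batch unless the running query frontend includes the PR #753 change.
keras_deployer.create_endpoint(
    clipper_conn=clipper_conn,
    name="keras",
    version="1",
    input_type="doubles",
    func=predict,  # the model function, defined as elsewhere in this thread
    model_path_or_object="/home/ubuntu/keras_experiments/model.h5",
    batch_size=130000,      # cap >= the largest request you plan to send
    slo_micros=300000000,   # generous 300 s objective so batches are not cut short
    pkgs_to_install=["pandas"],
)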
Did you build the query frontend Docker image with my branch, and specify that image when starting Clipper?
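For reference, a minimal sketch of starting Clipper against a custom-built query frontend image; this assumes your clipper_admin version accepts a query_frontend_image argument on start_clipper, and the image tag below is a placeholder for whatever you built from the branch:

from clipper_admin import ClipperConnection, DockerContainerManager

clipper_conn = ClipperConnection(DockerContainerManager())
# "myrepo/query_frontend:batch-fix" is hypothetical; substitute the tag of
# the image built from the PR branch. Without it, Clipper starts its default
# released query frontend image, which lacks the fix.
clipper_conn.start_clipper(query_frontend_image="myrepo/query_frontend:batch-fix")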
The issue is: I want to see how much time my predict() function is taking. Although I set batch_size to the number of rows of data I am sending, Clipper is still splitting the input into smaller batches and then calling the predict function. (We can see this with print(len(inputs)).)
Is there any way to disable the automatic batching that Clipper is doing, so that the data is sent in one shot? I also tried increasing the slo_micros value, but no luck there.
Here is the create_endpoint() code:
keras_deployer.create_endpoint(clipper_conn=clipper_conn, name="keras", version="1", input_type="doubles", func=predict, model_path_or_object="/home/ubuntu/keras_experiments/model.h5", batch_size=1000, slo_micros=300000000, pkgs_to_install=['pandas'])
Thanks in advance.
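One way to measure the model time directly, independent of how the frontend batches, is to time the call inside the deployed function itself. A minimal sketch, assuming the keras deployer invokes the function as func(model, inputs) with a list of double vectors:

import time
import numpy as np

def predict(model, inputs):
    # Log the batch size the query frontend actually delivered and the
    # wall-clock time spent inside the model for that batch.
    start = time.time()
    preds = model.predict(np.array(inputs))
    elapsed = time.time() - start
    print("predict() received {} inputs, took {:.3f}s".format(len(inputs), elapsed))
    # Clipper expects one string per input as the prediction output.
    return [str(p) for p in preds]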