Currently the model pipeline on OpenShift uses a GPU and has to be turned on and off manually. It would be better to have a process that triggers the creation of the pod only when it is needed. Here is a possible solution for the current process that we aligned on today:
corporate_data_extraction/data_extractor/code/coordinator/Dockerfile
RUN apt-get install -y kubectl
cp ... (a manifest may need to be added here as well)
-- the coordinator has to be set up so that it can create the model pod (model-pipeline-server-docker); see the Dockerfile sketch below
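One thing to keep in mind: kubectl is usually not available from the default Debian/Ubuntu apt repositories, so the image would either need the Kubernetes apt repository added first or the static binary downloaded directly. A minimal sketch of the latter, where the pinned version, the manifest file name and the target path are all assumptions:

```dockerfile
# Sketch only: install kubectl as a static binary instead of via apt
RUN apt-get update && apt-get install -y curl && \
    curl -LO "https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl" && \
    install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Hypothetical manifest describing the GPU model pod; name and location are placeholders
COPY model_pipeline_pod.yaml /app/model_pipeline_pod.yaml
```

In addition, the coordinator pod's service account would need RBAC permissions to create, watch and delete pods in the namespace, otherwise the kubectl calls will be rejected.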
corporate_data_extraction/data_extractor/code/model_pipeline/Dockerfile
change the sleep infinity entrypoint so that the container runs the pipeline and terminates with a meaningful exit code instead of idling forever (see the sketch below)
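A rough idea of what that change could look like; the entry script name is a placeholder, not the real command:

```dockerfile
# Before: the pod never terminates on its own and has to be stopped manually
# CMD ["sleep", "infinity"]

# After (sketch): run the pipeline and let the container exit with the script's
# return code, so the coordinator can detect success or failure
CMD ["python", "run_model_pipeline.py"]
```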
corporate_data_extraction/data_extractor/code/infer_on_pdf.py, line 172: something along these lines (possibly via a Python client package for Kubernetes instead of shelling out to kubectl):
import os
from flask import Response  # assumed: the surrounding function is a Flask view, as the 500 response suggests

cmd = 'kubectl ...'
# os.system does not raise on failure; it only returns the command's exit status
if os.system(cmd) != 0:
    msg = "Error during kubectl"
    return Response(msg, status=500)
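A slightly fuller sketch of the trigger, using subprocess instead of os.system (subprocess.run with check=True raises on a non-zero exit code, which matches the try/except structure above) and waiting for the pod to become ready before inference requests are sent. The manifest path, pod name and namespace are assumptions, not the real values:

```python
import subprocess

from flask import Response

MANIFEST = "/app/model_pipeline_pod.yaml"   # assumed path, matches the coordinator Dockerfile sketch
POD_NAME = "model-pipeline-server"          # assumed pod name
NAMESPACE = "corporate-data-extraction"     # assumed namespace


def start_model_pod():
    """Create the GPU pod and block until it is ready.

    Returns a Flask error response on failure and None on success, mirroring
    the snippet above. Sketch only, not the final implementation.
    """
    try:
        # Unlike os.system, subprocess.run(check=True) raises CalledProcessError
        # when kubectl exits with a non-zero status.
        subprocess.run(["kubectl", "apply", "-f", MANIFEST, "-n", NAMESPACE], check=True)
        subprocess.run(
            ["kubectl", "wait", "--for=condition=Ready", f"pod/{POD_NAME}",
             "-n", NAMESPACE, "--timeout=600s"],
            check=True,
        )
    except subprocess.CalledProcessError as e:
        return Response(f"Error during kubectl: {e}", status=500)
    return None
```

A matching kubectl delete -f call once the results have been fetched would release the GPU again, which is the manual on/off step this issue is about.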