
Add forward to training servicer #227

Open · wants to merge 15 commits into base: main
Conversation

@thodkatz (Collaborator) commented Dec 12, 2024

This builds on top of #225.

I have implemented the forward method: training is paused, we do the forward pass, and then training resumes.

- Supported operations: start, resume, pause, shutdown
- pytorch-3dunet package is used as the framework to create the models

I caught an edge case where events are blocked: once we have exited training, the tasks left in the queue would remain unprocessed.
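The fix for that edge case can be sketched as follows — a minimal Python sketch, assuming a task queue whose entries carry a `done` event that blocked callers wait on; `drain_on_shutdown` and the task-dict shape are illustrative assumptions, not tiktorch's actual API:

```python
import queue
import threading


def drain_on_shutdown(task_queue: "queue.Queue", shutdown_event: threading.Event) -> None:
    """After leaving the training loop, unblock any callers still waiting
    on queued tasks by completing everything left in the queue."""
    shutdown_event.set()
    while True:
        try:
            task = task_queue.get_nowait()
        except queue.Empty:
            break
        # Signal the waiting caller instead of silently dropping the task,
        # so its event is never left blocked forever.
        done_event = task.get("done")
        if done_event is not None:
            done_event.set()
        task_queue.task_done()
```

Without a drain step like this, a caller blocked on `done_event.wait()` hangs indefinitely after shutdown.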
Creating and closing processes and threads can be quite time-consuming, resulting in test timeouts if a test performs a lot of actions.
Applying monkeypatch to a parent process won't propagate to a child process if the start method is spawn (macOS) instead of fork (Linux).
- To fix tests on Windows, convert label data to float64
The should-stop callbacks return booleans, so we need to aggregate their return values. Previously the return values weren't taken into account, and the callbacks returned None.
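The aggregation can be sketched like this — a hedged Python sketch, where `should_stop` and `ShouldStopCallback` are illustrative names rather than tiktorch's actual identifiers:

```python
from typing import Callable, List

# Each callback decides, from its own criterion (max epochs, early
# stopping, user request, ...), whether training should stop.
ShouldStopCallback = Callable[[], bool]


def should_stop(callbacks: List[ShouldStopCallback]) -> bool:
    """Aggregate boolean should-stop callbacks: stop if any callback asks to."""
    # any() short-circuits on the first True; if every callback must run
    # (e.g. to update internal state), collect results in a list first.
    return any(cb() for cb in callbacks)
```

The key point is that each callback's boolean is actually consumed; a callback that implicitly returns `None` would be treated as falsy and silently ignored.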
The enum is used as a validation check before triggering one of the actions. Previously I was checking whether the queue was alive, but that isn't enough: for example, resume shouldn't be a valid action while the session is already resumed, even though the queue is operational.
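One way to express that validation is a state/action table — a minimal sketch with hypothetical names (`TrainerState`, `TrainerAction`, `VALID_ACTIONS`), not tiktorch's actual enums:

```python
from enum import Enum, auto


class TrainerAction(Enum):
    START = auto()
    PAUSE = auto()
    RESUME = auto()
    SHUTDOWN = auto()


class TrainerState(Enum):
    IDLE = auto()
    RUNNING = auto()
    PAUSED = auto()
    STOPPED = auto()


# Which actions are valid from each state. Note RESUME is invalid while
# RUNNING, even though the command queue is alive and operational.
VALID_ACTIONS = {
    TrainerState.IDLE: {TrainerAction.START},
    TrainerState.RUNNING: {TrainerAction.PAUSE, TrainerAction.SHUTDOWN},
    TrainerState.PAUSED: {TrainerAction.RESUME, TrainerAction.SHUTDOWN},
    TrainerState.STOPPED: set(),
}


def validate(state: TrainerState, action: TrainerAction) -> None:
    """Reject an action up front instead of only checking queue liveness."""
    if action not in VALID_ACTIONS[state]:
        raise ValueError(f"invalid action {action.name} in state {state.name}")
```

Checking the state table before enqueueing a command catches cases like resume-while-running that a queue-liveness check misses.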
@thodkatz thodkatz force-pushed the add-forward-to-training-servicer branch 3 times, most recently from d482661 to 32cd26d Compare December 12, 2024 23:52
@thodkatz thodkatz force-pushed the add-forward-to-training-servicer branch from 619ef5f to 035a8b3 Compare December 19, 2024 10:50
Move the NamedInt and Tensor protos to a separate file so the training proto can use them as well.
- The inference servicer had a procedure to list the available devices.
  This is needed for the training servicer as well, so list devices was
  decoupled to be shared.
If the training is running or paused, the forward pass will retain that state
after completion. But it requires pausing, so we can release memory and
do the forward pass.
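The pause-forward-restore pattern could be captured by a context manager — a sketch under the assumption of a `trainer` object with `pause()`, `resume()`, and an `is_running` flag; these names are illustrative, not tiktorch's API:

```python
from contextlib import contextmanager


@contextmanager
def paused_for_forward(trainer):
    """Pause training so memory can be released for the forward pass,
    then restore the previous state (running or paused) afterwards."""
    was_running = trainer.is_running
    trainer.pause()
    try:
        yield trainer
    finally:
        # Only resume if training was actually running before; a session
        # that was already paused stays paused after the forward pass.
        if was_running:
            trainer.resume()
```

Usage would be `with paused_for_forward(trainer): result = model(tensor)`, with the `finally` clause guaranteeing state restoration even if the forward pass raises.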
@thodkatz thodkatz force-pushed the add-forward-to-training-servicer branch 2 times, most recently from 3dc2864 to 7744f91 Compare December 20, 2024 21:19

codecov bot commented Dec 20, 2024

Codecov Report

Attention: Patch coverage is 65.33490% with 295 lines in your changes missing coverage. Please review.

Project coverage is 63.93%. Comparing base (5ea5d3a) to head (4798dbe).

Files with missing lines Patch % Lines
tiktorch/trainer.py 51.53% 79 Missing ⚠️
tiktorch/proto/training_pb2_grpc.py 55.55% 44 Missing ⚠️
tiktorch/server/session/backend/supervisor.py 73.50% 40 Missing ⚠️
tiktorch/proto/training_pb2.py 30.30% 23 Missing ⚠️
tiktorch/proto/utils_pb2.py 30.00% 21 Missing ⚠️
tiktorch/proto/inference_pb2.py 20.00% 20 Missing ⚠️
tiktorch/server/session/process.py 67.44% 14 Missing ⚠️
tiktorch/server/session/backend/commands.py 82.66% 13 Missing ⚠️
tiktorch/server/session/backend/base.py 72.72% 12 Missing ⚠️
tiktorch/server/grpc/training_servicer.py 89.28% 9 Missing ⚠️
... and 5 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #227      +/-   ##
==========================================
- Coverage   64.60%   63.93%   -0.67%     
==========================================
  Files          40       47       +7     
  Lines        2195     2745     +550     
==========================================
+ Hits         1418     1755     +337     
- Misses        777      990     +213     


@thodkatz thodkatz force-pushed the add-forward-to-training-servicer branch from 7744f91 to 50b0944 Compare December 20, 2024 21:27
Since both the inference and training servicers share the concept of an
id, the training session id was replaced with the model session id used
for inference. This model session protobuf interface was moved to a
separate utils proto file.

Since PredictRequest is common to both, it can be leveraged for abstraction.