feat: add support for train on windows #37
base: master
Conversation
dl_lib/engine/defaults.py
Outdated
@@ -66,7 +66,7 @@ def default_argument_parser():
 # PyTorch still may leave orphan processes in multi-gpu training.
 # Therefore we use a deterministic way to obtain port,
 # so that users are aware of orphan processes by seeing the port occupied.
-port = 2 ** 15 + 2 ** 14 + hash(os.getuid()) % 2 ** 14
+port = 2 ** 15 + 2 ** 14 + hash("User_name") % 2 ** 14
hash("User_name") is a fixed value, please don't do that.
I know it's a fixed value, but I think it is impossible to train on an 8-GPU Windows machine anyway. I will find a way to get the uid on Windows.
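A portable way to get a per-user value (a sketch, not the PR's final code): os.getuid() is POSIX-only, while getpass.getuser() works on both Windows and Linux.

```python
import getpass

def portable_user() -> str:
    """Return a login name on POSIX and Windows alike.

    getpass.getuser() reads the LOGNAME/USER/LNAME/USERNAME environment
    variables and falls back to the password database on POSIX, so it does
    not depend on os.getuid(), which does not exist on Windows.
    """
    try:
        return getpass.getuser()
    except (OSError, KeyError):
        # No user information available (rare, e.g. bare containers).
        return "unknown"
```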
@@ -334,7 +338,7 @@ at::Tensor ROIAlign_forward_cuda(
 auto output_size = num_rois * pooled_height * pooled_width * channels;
 cudaStream_t stream = at::cuda::getCurrentCUDAStream();

-dim3 grid(std::min(at::cuda::ATenCeilDiv(output_size, 512L), 4096L));
+dim3 grid(std::min(ceil_div((int)output_size, 512), 4096));
at::cuda::ATenCeilDiv works on all platforms; the real reason this doesn't compile on Windows is the 'L' suffix.
I will change it and try to recompile.
If I remove the "L", will this function still run correctly on Linux? Can I simply drop the "L"?
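For context on why only the literal matters: both at::cuda::ATenCeilDiv and a hand-rolled ceil_div compute plain integer ceiling division; the 'L' suffix only sets the type of the literal (long is 64-bit on Linux but 32-bit under MSVC, so 512L no longer matches the int64_t argument when the template deduces its type). A small Python sketch of the arithmetic both versions perform:

```python
def ceil_div(n: int, d: int) -> int:
    # Integer ceiling division: the smallest k such that k * d >= n.
    return (n + d - 1) // d

def grid_blocks(output_size: int) -> int:
    # Grid size used by the kernel launch: at most 4096 blocks,
    # each covering 512 elements.
    return min(ceil_div(output_size, 512), 4096)
```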
@@ -390,7 +394,7 @@ at::Tensor ROIAlign_backward_cuda(

 cudaStream_t stream = at::cuda::getCurrentCUDAStream();

-dim3 grid(std::min(at::cuda::ATenCeilDiv(grad.numel(), 512L), 4096L));
+dim3 grid(std::min(ceil_div((int)grad.numel(), 512), 4096));
ditto
Same as the last one.
@@ -52,7 +52,7 @@
 SOLVER=dict(
     OPTIMIZER=dict(
         NAME="SGD",
-        BASE_LR=0.02,
+        BASE_LR=0.002,
Please do not change this, thanks.
0.02 is too big for one GPU; I will change it back.
@@ -0,0 +1,126 @@
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
Such a file duplicates tools/train_net.py; you should consider combining them.
OK, I will try to use the same training flow as on Linux.
PTAL @Wang-zipeng
I just searched what PTAL means on Google.
dl_lib/engine/defaults.py
Outdated
@@ -66,7 +67,7 @@ def default_argument_parser():
 # PyTorch still may leave orphan processes in multi-gpu training.
 # Therefore we use a deterministic way to obtain port,
 # so that users are aware of orphan processes by seeing the port occupied.
-port = 2 ** 15 + 2 ** 14 + hash(os.getuid()) % 2 ** 14
+port = 2 ** 15 + 2 ** 14 + hash(getuser()) % 2 ** 14
Suggested change:
-port = 2 ** 15 + 2 ** 14 + hash(getuser()) % 2 ** 14
+port = 2 ** 15 + 2 ** 14 + hash(os.getuid() if sys.platform != "win32" else 1) % 2 ** 14
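One caveat with any hash()-based variant (a sketch, not the code this PR settles on): Python 3 randomizes hash() for strings per process, so hash(getuser()) is not actually stable across runs. A fixed checksum such as zlib.crc32 keeps the port deterministic, which is the stated goal of the comment in the diff:

```python
import zlib
from getpass import getuser  # portable; os.getuid() is POSIX-only

def default_dist_port(user: str) -> int:
    # Deterministic per-user port in [2**15 + 2**14, 2**16), mirroring the
    # original formula, but with a checksum that is stable across
    # interpreter runs (str hash() is salted per process in Python 3).
    return 2 ** 15 + 2 ** 14 + zlib.crc32(user.encode("utf-8")) % 2 ** 14

port = default_dist_port(getuser())
```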
@@ -334,7 +338,7 @@ at::Tensor ROIAlign_forward_cuda(
 auto output_size = num_rois * pooled_height * pooled_width * channels;
 cudaStream_t stream = at::cuda::getCurrentCUDAStream();

-dim3 grid(std::min(at::cuda::ATenCeilDiv(output_size, 512L), 4096L));
+dim3 grid(std::min(at::cuda::ATenCeilDiv(static_cast<int64_t>(output_size), static_cast<int64_t>(512)), static_cast<int64_t>(4096)));
It's better to break this long line of code.
@@ -390,7 +394,7 @@ at::Tensor ROIAlign_backward_cuda(

 cudaStream_t stream = at::cuda::getCurrentCUDAStream();

-dim3 grid(std::min(at::cuda::ATenCeilDiv(grad.numel(), 512L), 4096L));
+dim3 grid(std::min(at::cuda::ATenCeilDiv(static_cast<int64_t>(grad.numel()), static_cast<int64_t>(512)), static_cast<int64_t>(4096)));
ditto.
setup.py
Outdated
@@ -39,6 +41,8 @@ def get_extensions():
     "-D__CUDA_NO_HALF_CONVERSIONS__",
     "-D__CUDA_NO_HALF2_OPERATORS__",
 ]
+if "Windows" == os_name:
Is sys.platform suitable for your case?
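A sketch of what a sys.platform-keyed branch in get_extensions() could look like. The Windows-only flag below is illustrative, not the PR's actual addition; "-Xcompiler=/wd4819" just forwards a common MSVC warning suppression (C4819, non-ASCII source characters) through nvcc:

```python
import sys

def cuda_compile_args(plat: str = sys.platform) -> list:
    args = [
        "-D__CUDA_NO_HALF_OPERATORS__",
        "-D__CUDA_NO_HALF_CONVERSIONS__",
        "-D__CUDA_NO_HALF2_OPERATORS__",
    ]
    # sys.platform is "win32" on Windows (even on 64-bit builds),
    # so no platform.system() call is needed.
    if plat == "win32":
        args.append("-Xcompiler=/wd4819")  # illustrative MSVC-only flag
    return args
```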
tools/train_net.py
Outdated
 if eval_space_Gb > free_space_Gb:
     logger.warning(f"{Fore.RED}Remaining space({free_space_Gb}GB) "
                    f"is less than ({eval_space_Gb}GB){Style.RESET_ALL}")
+if "Linux" == platform.system():
Suggested change:
-if "Linux" == platform.system():
+if sys.platform == "linux":
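An alternative that removes the platform branch entirely (a sketch, assuming the guarded code only queries free disk space): shutil.disk_usage works on Windows, Linux, and macOS, unlike os.statvfs, which is POSIX-only.

```python
import shutil

def free_space_gb(path: str = ".") -> float:
    # shutil.disk_usage is cross-platform, so no
    # platform.system() / sys.platform check is required here.
    return shutil.disk_usage(path).free / 1024 ** 3
```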
Remember that Python is not C++; code like "if a = 1" is a syntax error, so Yoda-style conditions such as '"Linux" == platform.system()' are unnecessary.
Implement training on Windows.
Compile steps (requires Visual Studio 2017):
1. Execute "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat" in the Windows cmd to establish a compile environment.
2. Enter the code folder and run "python setup.py develop".
Train steps:
Others: