Has anyone successfully built on Windows 10? #44
I seem to have succeeded, but there were many problems. The core issue lies in the `build.ninja` parameters:

```ninja
cflags = -DTORCH_EXTENSION_NAME=slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0 -DTORCH_API_INCLUDE_EXTENSION_H -IC:\ProgramData\Anaconda3\envs\py310torch\lib\site-packages\torch\include -IC:\ProgramData\Anaconda3\envs\py310torch\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\Anaconda3\envs\py310torch\lib\site-packages\torch\include\TH -IC:\ProgramData\Anaconda3\envs\py310torch\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\include" -IC:\ProgramData\Anaconda3\envs\py310torch\Include -D_GLIBCXX_USE_CXX11_ABI=0 /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /std:c++17 -DSLSTM_HIDDEN_SIZE=128 -DSLSTM_BATCH_SIZE=8 -DSLSTM_NUM_HEADS=4 -DSLSTM_NUM_STATES=4 -DSLSTM_DTYPE_B=float -DSLSTM_DTYPE_R=nv_bfloat16 -DSLSTM_DTYPE_W=nv_bfloat16 -DSLSTM_DTYPE_G=nv_bfloat16 -DSLSTM_DTYPE_S=nv_bfloat16 -DSLSTM_DTYPE_A=float -DSLSTM_NUM_GATES=4 -DSLSTM_SIMPLE_AGG=true -DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false -DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0 -DSLSTM_FORWARD_CLIPVAL_VALID=false -DSLSTM_FORWARD_CLIPVAL=0.0 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__

rule compile
rule cuda_compile
rule link

build slstm.o: compile C$:\ProgramData\Anaconda3\envs\py310torch\lib\site-packages\xlstm\blocks\slstm\src\cuda\slstm.cc
build slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0.pyd: link slstm.o slstm_forward.cuda.o slstm_backward.cuda.o slstm_backward_cut.cuda.o slstm_pointwise.cuda.o blas.cuda.o cuda_error.cuda.o

default slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0.pyd
```
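Before fighting the full sLSTM build, it can help to confirm the toolchain can compile *any* PyTorch extension. A minimal smoke test, using the standard `torch.utils.cpp_extension.load_inline` API; `add_one` is just an illustrative function, not part of xlstm:

```python
# Toolchain smoke test (not part of xlstm): if this tiny extension builds,
# ninja, cl.exe and the PyTorch headers are wired up correctly.
import torch
from torch.utils.cpp_extension import load_inline

cpp_src = "torch::Tensor add_one(torch::Tensor x) { return x + 1; }"

mod = load_inline(
    name="win_toolchain_smoke_test",  # arbitrary module name
    cpp_sources=cpp_src,
    functions="add_one",  # auto-generates the pybind11 binding
    verbose=True,         # prints the generated build commands
)
print(mod.add_one(torch.zeros(3)))  # expected: tensor([1., 1., 1.])
```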
Have you successfully built on Windows 10? Did you succeed?
I still get an error even though I made every change you posted. Do you mind taking a look? Here is my `build.ninja`:

```ninja
cflags = -DTORCH_EXTENSION_NAME=slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0 -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda\envs\xlstm\Lib\site-packages\torch\include -ID:\Anaconda\envs\xlstm\Lib\site-packages\torch\include\torch\csrc\api\include -ID:\Anaconda\envs\xlstm\Lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\include" -ID:\Anaconda\envs\xlstm\include -D_GLIBCXX_USE_CXX11_ABI=0 /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /std:c++17 -DSLSTM_HIDDEN_SIZE=128 -DSLSTM_BATCH_SIZE=8 -DSLSTM_NUM_HEADS=4 -DSLSTM_NUM_STATES=4 -DSLSTM_DTYPE_B=float -DSLSTM_DTYPE_R=nv_bfloat16 -DSLSTM_DTYPE_W=nv_bfloat16 -DSLSTM_DTYPE_G=nv_bfloat16 -DSLSTM_DTYPE_S=nv_bfloat16 -DSLSTM_DTYPE_A=float -DSLSTM_NUM_GATES=4 -DSLSTM_SIMPLE_AGG=true -DSLSTM_GRADIENT_RECURRENT_CLIPVAL_VALID=false -DSLSTM_GRADIENT_RECURRENT_CLIPVAL=0.0 -DSLSTM_FORWARD_CLIPVAL_VALID=false -DSLSTM_FORWARD_CLIPVAL=0.0 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_BFLOAT16_OPERATORS__ -U__CUDA_NO_BFLOAT16_CONVERSIONS__ -U__CUDA_NO_BFLOAT162_OPERATORS__ -U__CUDA_NO_BFLOAT162_CONVERSIONS__

rule compile
rule cuda_compile
rule link

build slstm.o: compile D$:\Anaconda3\envs\xlstm\lib\site-packages\xlstm\blocks\slstm\src\cuda\slstm.cc
build slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0.pyd: link slstm.o slstm_forward.cuda.o slstm_backward.cuda.o slstm_backward_cut.cuda.o slstm_pointwise.cuda.o blas.cuda.o cuda_error.cuda.o

default slstm_HS128BS8NH4NS4DBfDRbDWbDGbDSbDAfNG4SA1GRCV0GRC0d0FCV0FC0d0.pyd
```
@vanclouds7 Please ensure that ninja, CUDA, and cuDNN are installed, and add the MSVC include files and dynamic libraries:

```
D:\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include
D:\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64
```

You can run `ninja -v` to get more verbose output.
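A sketch of one way to expose those directories without editing any build files, relying on the standard MSVC behavior that `cl.exe` and `link.exe` read the `INCLUDE` and `LIB` environment variables (the path below is the one from this thread; adjust it to your own install):

```python
# Prepend the MSVC header and library directories to the environment
# before anything triggers the extension build. The path is an example.
import os

MSVC = r"D:\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133"
os.environ["INCLUDE"] = MSVC + r"\include;" + os.environ.get("INCLUDE", "")
os.environ["LIB"] = MSVC + r"\lib\x64;" + os.environ.get("LIB", "")

import xlstm  # import only after the environment is prepared
```

Alternatively, launching Python from the "x64 Native Tools Command Prompt for VS 2019" sets these variables for you.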
@vanclouds7 I forgot one thing.
@wrench1997
@vanclouds7 Go to the slstm `build.ninja` directory and look at the error output there. Also, I noticed that you did not specify your CUDA version in the Python variable (`CUDA_HOME` / `CUDA_PATH`).
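For reference, a quick check of those environment variables; I am assuming the "Python variable" above means `CUDA_HOME`/`CUDA_PATH`, as used in the snippet later in this thread:

```python
# Quick sanity check: both of these should point at your CUDA install.
import os
print("CUDA_HOME:", os.environ.get("CUDA_HOME"))
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))
# expected: something like C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1
```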
@vanclouds7

```python
# TORCH_HOME is derived from the installed torch package:
# TORCH_HOME = os.path.abspath(torch.__file__).replace('\\__init__.py', '')

# edit: add to `extra_cflags`
f"-I{TORCH_HOME}/include",
f"-I{TORCH_HOME}/include/torch/csrc/api/include",

# CUDA_HOME comes from the environment:
# CUDA_HOME = os.environ.get('CUDA_HOME') or os.environ.get('CUDA_PATH')

# edit: add to `extra_ldflags`
f"/LIBPATH:{TORCH_HOME}/lib",
f"/LIBPATH:{CUDA_HOME}/lib/x64",
"cublas.lib",
```

Lastly, rerun the build.
Could you share your code and environment with me? Thank you a lot!
Hello, could I add you on WeChat? Liz18326042653. Thank you very much!
@Adapter525 I have uploaded my solution; you can give it a try.
Could you please indicate where to include these files and libraries? I feel like I am following all the instructions, but the build still fails.
Hello, what problem are you running into? Can you send me the error message?
Hi, yes, I attached the error log to the message, along with the `build.ninja` and `cuda_init.py` files. The `.ninja_log` is not very informative.
@kristinaste
Here are my relevant changes. Make sure both link.exe and cl.exe can run directly.
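To confirm that, here is a small check you can run from the same shell you build in (plain standard library, nothing xlstm-specific):

```python
# Verify the required build tools are on PATH; None means the tool cannot
# be found from this shell and the extension build will fail.
import shutil

for tool in ("cl", "link", "ninja", "nvcc"):
    print(f"{tool}: {shutil.which(tool)}")
```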
I have been trying for a few days: reinstalling CUDA 12.1 and cuDNN, and building ninja from scratch. Windows 10 still reports an error. The Windows compatibility is just too poor.