libtorch 1.8.0 precompiled has no CUDA backend linked (Adding "-INCLUDE:?warp_size@cuda@at@@YAHXZ" no longer helps) #54131
Comments
Hi @lablabla, the official way of using LibTorch is through CMake, since it handles the linking details for you. Can you give CMake a try? Also, can you share a minimal TorchScript repro? I'll see if I can reproduce this locally.
@lablabla, could you please refer to this example to check and run your program: https://github.com/pytorch/builder/blob/master/test_example_code/check-torch-cuda.cpp
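For reference, a minimal standalone check in the same spirit as the linked example (a sketch, not the exact contents of that file) could look like this; it simply asks LibTorch whether the CUDA backend was linked in:

```cpp
// Minimal CUDA-availability check for LibTorch (sketch; see the linked
// check-torch-cuda.cpp for the official test example).
#include <torch/torch.h>
#include <iostream>

int main() {
  std::cout << "CUDA available: " << std::boolalpha
            << torch::cuda::is_available() << std::endl;
  std::cout << "CUDA device count: "
            << torch::cuda::device_count() << std::endl;
  // If the CUDA backend was not linked (the problem reported in this issue),
  // is_available() returns false even on a machine with a working GPU driver.
  return torch::cuda::is_available() ? 0 : 1;
}
```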
@mszhanyi, following your suggestion, I added @skyline75489
@skyline75489 Using CMake properly and looking at the linker input, it adds the calls, so it works great now. Thanks!
@lablabla @yoshihingis The demo video is https://ossci-windows.s3.us-east-1.amazonaws.com/vsextension/demo.mp4. Any comments are welcome.
Worked with: libtorch 1.12. Quoted from: "Although the code compiles fine without it, it throws"
🐛 Bug
Trying to load a JIT TorchScript model on Windows 10 using precompiled libtorch 1.8.0 with CUDA 11.1 results in a runtime error.
This was previously solved by adding "-INCLUDE:?warp_size@cuda@at@@YAHXZ" to the MSVC linker options, as suggested here. When moving to 1.8.0, I added torch_cuda_cpp.lib and torch_cuda_cu.lib to the linked libraries. The same TorchScript model works well in 1.7.1.
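For illustration, here is a hedged sketch of the workaround described above: forcing MSVC to keep a symbol from the CUDA backend at link time (equivalent to passing "-INCLUDE:?warp_size@cuda@at@@YAHXZ" on the linker command line), then loading a TorchScript module onto the GPU. Note that, per this issue's title, the forced include alone no longer helps with the 1.8.0 binaries; the path "model.pt" is a placeholder.

```cpp
#include <torch/script.h>
#include <iostream>

// Source-level equivalent of adding "-INCLUDE:?warp_size@cuda@at@@YAHXZ" to
// the MSVC linker options: force the linker to keep a CUDA-backend symbol so
// that the torch_cuda libraries are not dropped as "unreferenced".
#ifdef _MSC_VER
#pragma comment(linker, "/INCLUDE:?warp_size@cuda@at@@YAHXZ")
#endif

int main() {
  try {
    // "model.pt" is a placeholder for the exported TorchScript model.
    torch::jit::script::Module module = torch::jit::load("model.pt");
    module.to(torch::kCUDA);  // fails if the CUDA backend was not linked
    std::cout << "Model loaded on CUDA" << std::endl;
  } catch (const c10::Error& e) {
    std::cerr << "Error: " << e.what() << std::endl;
    return 1;
  }
  return 0;
}
```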
Environment
How you installed PyTorch / LibTorch (conda, pip, source): Precompiled binaries

cc @peterjc123 @maxluk @nbcsm @guyang3532 @gunandrose4u @mszhanyi @skyline75489