
[SYCL][CUDA] 7/14 accessor tests fail when compiled into a single executable #2118

Closed
againull opened this issue Jul 15, 2020 · 2 comments
Labels: cuda CUDA back-end

@againull (Contributor)

Build compiler
git clone https://github.com/intel/llvm
Hash: b00fb7c

Includes: #1990, #1977

python /localdisk2/ws/againull/sycl/llvm/buildbot/configure.py --cuda -o /localdisk2/ws/againull/sycl/build
python /localdisk2/ws/againull/sycl/llvm/buildbot/compile.py -o /localdisk2/ws/againull/sycl/build

Build accessor CTS tests
git clone https://github.com/KhronosGroup/SYCL-CTS.git
Hash: 9cbe1a719b25c269ef78a2ee08f2e5ed12a1cc6d

Applied: KhronosGroup/SYCL-CTS#52

cmake -G Ninja -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_COMPILER=clang -DINTEL_SYCL_ROOT=/localdisk2/ws/againull/sycl/build -DINTEL_SYCL_TRIPLE=nvptx64-nvidia-cuda-sycldevice -DSYCL_IMPLEMENTATION=Intel_SYCL -DSYCL_CTS_ENABLE_OPENCL_INTEROP_TESTS=Off -DSYCL_CTS_ENABLE_DOUBLE_TESTS=On -DSYCL_CTS_ENABLE_HALF_TESTS=On -DINTEL_SYCL_FLAGS="-Xsycl-target-backend;--cuda-gpu-arch=sm_50" -DOpenCL_INCLUDE_DIR=/localdisk2/ws/againull/sycl/build/include/sycl -DOpenCL_LIBRARY=/localdisk2/ws/againull/sycl/build/lib/libOpenCL.so ..

ninja test_accessor -j 12

Run CTS tests
./bin/test_accessor -p nvidia -d opencl_gpu
7/14 tests are failing:
test_accessor.log

If each test is compiled into a separate binary (rather than a single binary), 13/14 tests pass; only accessor_api_image still fails, which is a separate issue. For example, if all cpp files except accessor_api_buffer.cpp are removed from tests/accessor and cmake/ninja are re-run as described above, the accessor_api_buffer test passes (a sketch of this isolation step is shown below).
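
One possible way to do that isolation, sketched under the assumption that the build directory sits at SYCL-CTS/build (i.e. where the cmake command above was invoked with ".."); the cpp_backup directory name is just an example:

cd SYCL-CTS/tests/accessor
mkdir cpp_backup
# park every accessor source except accessor_api_buffer.cpp
mv $(ls *.cpp | grep -v '^accessor_api_buffer\.cpp$') cpp_backup/
cd ../../build
# re-run the same cmake invocation as above, then rebuild and run the single test
ninja test_accessor -j 12
./bin/test_accessor -p nvidia -d opencl_gpu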

So the problem only shows up when all of the accessor tests are compiled into a single executable; a hypothetical minimal sketch of that single-binary pattern follows.
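
For context, the single-binary layout is roughly this pattern: several independent accessor kernels linked into one program. The sketch below is hypothetical, not CTS code and not a confirmed standalone reproducer; the kernel names and buffer sizes are made up.

#include <CL/sycl.hpp>
#include <cassert>

int main() {
  cl::sycl::queue q;
  int a = 0, b = 0;
  {
    // first "test": write through a buffer accessor in kernel_a
    cl::sycl::buffer<int, 1> buf_a(&a, cl::sycl::range<1>(1));
    q.submit([&](cl::sycl::handler &cgh) {
      auto acc = buf_a.get_access<cl::sycl::access::mode::write>(cgh);
      cgh.single_task<class kernel_a>([=]() { acc[0] = 1; });
    });
  }
  {
    // second "test" in the same executable: another accessor kernel
    cl::sycl::buffer<int, 1> buf_b(&b, cl::sycl::range<1>(1));
    q.submit([&](cl::sycl::handler &cgh) {
      auto acc = buf_b.get_access<cl::sycl::access::mode::write>(cgh);
      cgh.single_task<class kernel_b>([=]() { acc[0] = 2; });
    });
  }
  // buffer destructors above block and copy results back to the host
  assert(a == 1 && b == 2);
  return 0;
}

Built roughly as clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda-sycldevice repro.cpp (the same triple as INTEL_SYCL_TRIPLE above); the exact flags may differ.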

@bader bader added the cuda CUDA back-end label Jul 15, 2020
@bader (Contributor) commented Jul 22, 2020

@againull, @pvchupin, I think we figured out that this was caused by a regression in the driver. Can we close this one?

@againull (Contributor, Author)

Yes, the problem doesn't reproduce with the 435.21 NVIDIA driver.
