CLBlast: The tuned OpenCL BLAS library

| Platform     | Build status |
|--------------|--------------|
| Windows      | Build Status |
| Linux/macOS  | Build Status |

| Test machine (thanks to ArrayFire) | Test status |
|------------------------------------|-------------|
| clblast-linux-nvidia-a100          | Test Status |
| clblast-linux-nvidia-k80           | Test Status |
| clblast-linux-nvidia-p100          | Test Status |
| clblast-linux-nvidia-t4            | Test Status |
| clblast-linux-nvidia-v100          | Test Status |
| clblast-windows-amd-r9             | Test Status |
| clblast-windows-nvidia-m6000       | Test Status |

CLBlast is a lightweight, performant and tunable OpenCL BLAS library written in C++11. It is designed to leverage the full performance potential of a wide variety of OpenCL devices from different vendors, including desktop and laptop GPUs, embedded GPUs, and other accelerators. CLBlast implements BLAS routines: basic linear algebra subprograms operating on vectors and matrices. See the CLBlast website for performance reports on some devices.

The library is not tuned for all possible OpenCL devices: if out-of-the-box performance is poor, please run the tuners first. See the docs for a list of already tuned devices and instructions on how to tune yourself and contribute to future releases of the CLBlast library.

Why CLBlast and not clBLAS or cuBLAS?

Use CLBlast instead of clBLAS:

  • When you care about achieving maximum performance.
  • When you want to be able to inspect the BLAS kernels or easily customize them to your needs.
  • When you run on exotic OpenCL devices for which you need to tune yourself.
  • When you are still running on OpenCL 1.1 hardware.
  • When you prefer a C++ API over a C API (C API also available in CLBlast).
  • When you value an organized and modern C++ codebase.
  • When you target Intel CPUs and GPUs or embedded devices.
  • When you can benefit from the increased performance of half-precision fp16 data-types.

Use CLBlast instead of cuBLAS:

  • When you want your code to run on devices other than NVIDIA CUDA-enabled GPUs.
  • When you want to tune for a specific configuration (e.g. rectangular matrix-sizes).
  • When you sleep better if you know that the library you use is open-source.
  • When you are using OpenCL rather than CUDA.

When not to use CLBlast:

  • When you run on NVIDIA's CUDA-enabled GPUs only and can benefit from cuBLAS's assembly-level tuned kernels.

Getting started

CLBlast can be compiled with minimal dependencies (apart from OpenCL) in the usual CMake-way, e.g.:

mkdir build && cd build
cmake ..
make

Detailed instructions for various platforms can be found here.
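Once installed, CLBlast can also be consumed from another project's CMake build. A minimal sketch, assuming CLBlast was installed to a prefix that CMake can find (the package and target names below follow CLBlast's exported CMake config; verify them against your installed version):

```cmake
cmake_minimum_required(VERSION 3.5)
project(myapp)

# Locate the installed CLBlast package (provides the 'clblast' target)
find_package(CLBlast REQUIRED)

add_executable(myapp main.cpp)
target_link_libraries(myapp clblast)
```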

Like clBLAS and cuBLAS, CLBlast also requires OpenCL device buffers as arguments to its routines. This means you'll have full control over the OpenCL buffers and the host-device memory transfers. CLBlast's API is designed to resemble clBLAS's C API as much as possible, requiring little integration effort in case clBLAS was previously used. Using CLBlast starts by including the C++ header:

#include <clblast.h>

Or alternatively the plain C version:

#include <clblast_c.h>

Afterwards, any of CLBlast's routines can be called directly: there is no need to initialize the library. The available routines and the required arguments are described in the above mentioned include files and the included API documentation. The API is kept as close as possible to the Netlib BLAS and the cuBLAS/clBLAS APIs. For an overview of the supported routines, see here.

To get started quickly, a couple of stand-alone example programs are included in the samples subfolder. They can optionally be compiled using the CMake infrastructure of CLBlast by providing the -DSAMPLES=ON flag, for example as follows:

cmake -DSAMPLES=ON ..

Afterwards, you can optionally read more about running proper benchmarks and tuning the library.

Full documentation

More detailed documentation is available in separate files within the repository.

Known issues

Known performance related issues:

  • Severe performance issues with Beignet v1.3.0 due to missing support for local memory. Please downgrade to v1.2.1 or upgrade to v1.3.1 or newer.

Other known issues:

  • Routines returning an integer are currently not properly tested for half-precision FP16: IHAMAX/IHAMIN/IHMAX/IHMIN

  • Half-precision FP16 tests might sometimes fail based on the order of multiplication, since floating-point arithmetic is not associative, i.e. (a * b) * c != (c * b) * a

  • The AMD APP SDK has a bug causing a conflict with libstdc++, resulting in a segfault when initialising static variables. This has been reported to occur with the CLBlast tuners.

  • The AMD run-time compiler has a bug causing it to get stuck in an infinite loop. This is reported to happen occasionally when tuning the CLBlast GEMM routine.

  • AMD Southern Island GPUs might produce wrong results with the amdgpu-pro drivers. Configure CMake with the AMD_SI_EMPTY_KERNEL_WORKAROUND option to resolve the issue, see issue #301.

  • Tests might fail on an Intel IvyBridge GPU with the latest Beignet. Please downgrade Beignet to 1.2.1, see issue #231.

Contributing

Contributions are welcome in the form of tuning results for OpenCL devices previously untested or pull requests. See the contributing guidelines for more details.

The main contributing authors (code, pull requests, testing) can be found in the list of GitHub contributors.

Tuning and testing on a variety of OpenCL devices was made possible by:

Hardware/software for this project was contributed by:

More information

Further information on CLBlast is available through the following links:

  • A 20-minute presentation of CLBlast was given at the GPU Technology Conference in May 2017. A recording is available on the GTC on-demand website (note: poor audio quality), and a full slide set is also available as PDF. An updated version was presented at IWOCL in May 2018; that slide set is likewise available as PDF.
  • More in-depth information and experimental results are also available in a scientific paper titled CLBlast: A Tuned OpenCL BLAS Library (v1 May 2017, updated to v2 in April 2018). For CLTune, the inspiration for the included auto-tuner, see also the CLTune: A Generic Auto-Tuner for OpenCL Kernels paper.

How to cite this work:

Cedric Nugteren. CLBlast: A Tuned OpenCL BLAS Library. In IWOCL'18: International Workshop
on OpenCL. ACM, New York, NY, USA, 10 pages. 2018. https://doi.org/10.1145/3204919.3204924

Support us

This project started in March 2015 as an evenings and weekends free-time project next to a full-time job for Cedric Nugteren. You can find contact information on the website of the main author.