Optimizing CUDA code by kernel fusion: application on BLAS


Note

This publication is affiliated with the Faculty of Informatics, not the Institute of Computer Science. The official publication record is available on muni.cz.
Authors

FILIPOVIČ Jiří, MADZIN Matúš, FOUSEK Jan, MATYSKA Luděk

Year of publication 2015
Type Article in Periodical
Magazine / Source The Journal of Supercomputing
MU Faculty or unit

Faculty of Informatics

Citation
Web http://link.springer.com/article/10.1007/s11227-015-1483-z
DOI http://dx.doi.org/10.1007/s11227-015-1483-z
Field Informatics
Keywords GPU; CUDA; BLAS; Kernel fusion; Code generation
Description Contemporary GPUs have significantly higher arithmetic throughput than memory throughput. Hence, many GPU kernels are memory bound and cannot exploit the arithmetic power of the GPU. Examples of memory-bound kernels are BLAS-1 (vector–vector) and BLAS-2 (matrix–vector) operations. However, when kernels share data, kernel fusion can improve memory locality by placing shared data, originally passed via off-chip global memory, into faster, but distributed, on-chip memory. In this paper, we show how kernels performing map, reduce, or their nested combinations can be fused automatically by our source-to-source compiler. To demonstrate the usability of the compiler, we have implemented several BLAS-1 and BLAS-2 routines and show how the performance of sequences of these routines can be improved by fusion. Compared with equivalent sequences using CUBLAS, our compiler generates code that is up to 2.24x faster for the examples tested.
Related projects:
