Concurrent number cruncher - A GPU implementation of a general sparse linear solver

in: Proc. 28th Gocad Meeting, Nancy

Abstract

A wide class of numerical methods needs to solve a linear system where the matrix pattern of non-zero coefficients can be arbitrary. These problems can greatly benefit from the highly multithreaded computational power and large memory bandwidth available on GPUs, especially since dedicated general-purpose APIs such as CTM (AMD-ATI) and CUDA (NVIDIA) have appeared. CUDA even provides a BLAS implementation, but only for dense matrices (CuBLAS). Other existing linear solvers for the GPU are also limited by their internal matrix representation. This paper describes how to combine recent GPU programming techniques and new GPU-dedicated APIs with high-performance computing strategies (namely block compressed row storage, register blocking and vectorization) to implement a general-purpose sparse linear solver. Our implementation of the Jacobi-preconditioned Conjugate Gradient algorithm outperforms leading-edge CPU counterparts by up to a factor of 11.5, making it attractive for applications that are content with single precision.
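To make the abstract's ingredients concrete, the following is a minimal CPU sketch (not the paper's GPU implementation) of the two techniques it names: a sparse matrix-vector product over block compressed row storage (BCRS) with small dense 2x2 blocks — the blocking that enables register blocking and vectorization on the GPU — and a Jacobi-preconditioned Conjugate Gradient loop built on top of it. All names and the 2x2 block size are illustrative assumptions, not from the paper.

```python
import numpy as np

def bcrs_spmv(bvals, bcols, brow_ptr, x, bs=2):
    """y = A @ x for a matrix in block compressed row storage (BCRS).

    bvals    : (nnzb, bs, bs) array of dense blocks
    bcols    : block-column index of each block
    brow_ptr : offset of each block row's first block in bvals/bcols
    """
    nrows = (len(brow_ptr) - 1) * bs
    y = np.zeros(nrows)
    for i in range(len(brow_ptr) - 1):
        acc = np.zeros(bs)
        for k in range(brow_ptr[i], brow_ptr[i + 1]):
            j = bcols[k]
            # Small dense block product: this is the unit a GPU kernel
            # would keep in registers (register blocking / vectorization).
            acc += bvals[k] @ x[j * bs:(j + 1) * bs]
        y[i * bs:(i + 1) * bs] = acc
    return y

def jacobi_pcg(bvals, bcols, brow_ptr, b, bs=2, tol=1e-6, max_iter=200):
    """Jacobi-preconditioned Conjugate Gradient on a BCRS matrix."""
    n = len(b)
    # Jacobi preconditioner: elementwise inverse of the matrix diagonal,
    # gathered from the diagonal blocks.
    diag = np.zeros(n)
    for i in range(len(brow_ptr) - 1):
        for k in range(brow_ptr[i], brow_ptr[i + 1]):
            if bcols[k] == i:
                diag[i * bs:(i + 1) * bs] = np.diag(bvals[k])
    inv_diag = 1.0 / diag

    x = np.zeros(n)
    r = b - bcrs_spmv(bvals, bcols, brow_ptr, x, bs)
    z = inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = bcrs_spmv(bvals, bcols, brow_ptr, p, bs)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Demo: a small symmetric positive-definite matrix stored as 2x2 blocks.
bvals = np.array([[[4., 1.], [1., 3.]],   # block (0, 0)
                  [[0., 0.], [0., 1.]],   # block (0, 1)
                  [[0., 0.], [0., 1.]],   # block (1, 0)
                  [[2., 0.], [0., 5.]]])  # block (1, 1)
bcols = [0, 1, 0, 1]
brow_ptr = [0, 2, 4]
b = np.array([1., 2., 3., 4.])
x_sol = jacobi_pcg(bvals, bcols, brow_ptr, b)
```

On the GPU, each block row's products become independent threads and the 2x2 blocks map to vector registers; this CPU version only mirrors the data layout and the algorithmic structure.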

Download / Links

    BibTeX Reference

    @INPROCEEDINGS{131.buatois,
        author = { Buatois, Luc and Caumon, Guillaume and Levy, Bruno },
         title = { Concurrent number cruncher - A GPU implementation of a general sparse linear solver },
     booktitle = { Proc. 28th Gocad Meeting, Nancy },
          year = { 2008 },
      abstract = { A wide class of numerical methods needs to solve a linear system where the matrix pattern of non-zero
    coefficients can be arbitrary. These problems can greatly benefit from the highly multithreaded computational
    power and large memory bandwidth available on GPUs, especially since dedicated general-purpose APIs
    such as CTM (AMD-ATI) and CUDA (NVIDIA) have appeared. CUDA even provides a BLAS implementation,
    but only for dense matrices (CuBLAS). Other existing linear solvers for the GPU are also limited by
    their internal matrix representation.
    This paper describes how to combine recent GPU programming techniques and new GPU-dedicated
    APIs with high-performance computing strategies (namely block compressed row storage, register blocking
    and vectorization) to implement a general-purpose sparse linear solver. Our implementation of the
    Jacobi-preconditioned Conjugate Gradient algorithm outperforms leading-edge CPU counterparts by up to
    a factor of 11.5, making it attractive for applications that are content with single precision. }
    }