Today's computers are parallel-friendly, allowing us to reduce the amount of time we spend running computations. In this two-part introductory seminar, we will discuss two different "scales" of parallel computing: (1) parallel computing for small to medium problems that can be achieved on a personal computer, and (2) parallel computing for large problems that usually require a cluster.
The first part, parallel computing on a personal computer, will focus on general-purpose GPU (GPGPU) computing. GPGPU is a major part of modern parallel computing, both on personal computers and on clusters. We will present some basic examples to illustrate the speedups, in terms of execution time, that can be expected from GPGPU.
The second part deals with parallel computing on CPU clusters; it will focus on the use of the Message Passing Interface (MPI). MPI is a C/C++/Fortran library that nodes (a node can be seen as one computer) use to communicate with each other. If you use MPI, parts of your code will be executed at the same time by every node, which requires particular attention to how memory is distributed. We will use basic examples to illustrate these concepts.