Register for this seminar by January 27, 2019.
With distributed computing, multiple computers work together on the same problem, using hundreds or thousands of CPU cores. This can considerably reduce simulation times. The memory of the computers is also combined, making larger simulations possible.
The most common tool for distributed computing is the Message Passing Interface (MPI), a communication standard designed specifically for high-performance computing.
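To give a flavour of what an MPI program looks like, here is a minimal hello-world sketch in C (it is not one of the seminar's own examples). Each process learns its rank and the total number of processes, then prints a message. Compiling and running it requires an MPI installation, typically via the `mpicc` wrapper and the `mpirun` launcher.

```c
/* Minimal MPI sketch: every process reports its rank.
 * Build:  mpicc hello.c -o hello
 * Run:    mpirun -np 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut the runtime down cleanly */
    return 0;
}
```

Each of the four launched processes runs the same program but receives a different rank, which is the basic mechanism MPI programs use to divide work.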
In this seminar, Jarno van der Kolk will show how to set up MPI and share strategies for getting the best performance. You will learn how to compile your code, how to program with multiple processes, and what the limitations are.
We will also discuss how to run such code on the computational resources provided by Compute Canada, which give you access to systems with thousands of CPU cores.
The code examples for this seminar are available on BitBucket.