We hope that you have enjoyed the fourth, and final, week of Python in High Performance Computing!

This week, we have looked into parallel computing using MPI for Python. With MPI communication, one can distribute work across multiple CPU cores and handle the necessary data synchronisation while executing a parallel algorithm.
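As a minimal sketch of this idea (not part of the course materials), the following mpi4py program distributes a toy workload, summing disjoint ranges of integers, across all ranks and then synchronises the partial results with a reduction onto rank 0:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank works on its own slice of the problem (SPMD execution model):
# here, summing a disjoint range of integers as a toy workload.
n = 1000
chunk = n // size
start = rank * chunk
stop = n if rank == size - 1 else start + chunk
partial = sum(range(start, stop))

# Synchronise the partial results: combine them onto rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Total sum computed by {size} ranks: {total}")
```

Run with, for example, `mpiexec -n 4 python sum.py`; every process executes the same script, and only the rank number distinguishes their work.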

By now, you should know how to send and receive MPI messages, how to use collective communication, and how to create your own custom communicators. You should also be familiar with the key concepts of parallel computing and understand the execution and data model of MPI.
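The short sketch below recaps these three topics in one place (again a minimal illustration, assuming at least two MPI processes; the dictionaries passed around are just placeholder data):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Point-to-point: rank 0 sends a Python object to rank 1.
if rank == 0:
    comm.send({"step": 1}, dest=1, tag=0)
elif rank == 1:
    data = comm.recv(source=0, tag=0)

# Collective communication: every rank receives the same value from root.
params = comm.bcast({"dt": 0.1} if rank == 0 else None, root=0)

# Custom communicator: split the ranks into even/odd sub-groups.
subcomm = comm.Split(color=rank % 2, key=rank)
print(f"world rank {rank} -> sub-rank {subcomm.Get_rank()}")
```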

To learn more about MPI or parallel programming in general, consider the other training courses offered by PRACE: http://www.training.prace-ri.eu/