Abstract
High-performance architectures are becoming increasingly complex. These large-scale, heterogeneous, multi-core systems are difficult to program, and new programming models are needed to make the expression of parallelism easier while keeping developer productivity high.
Partitioned Global Address Space (PGAS) languages such as UPC emerged to improve developer productivity on distributed-memory systems. UPC provides a simpler, shared-memory-like model with user control over data layout, but it remains the developer's responsibility to preserve data locality by choosing appropriate data layouts.
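As an illustration, a minimal UPC sketch (array names and block size are hypothetical, not taken from the application studied here) of the kind of layout control involved: a blocked shared array plus an affinity-guided loop keep each thread computing on the data it owns.

    #include <upc.h>

    #define B 64                       /* block size per thread */

    /* Blocked layout: thread t owns elements [t*B, t*B+B) of each array. */
    shared [B] double a[B*THREADS], b[B*THREADS];

    int main(void)
    {
        int i;
        /* The affinity expression &a[i] makes each iteration execute on
         * the thread that owns a[i], so the loop touches local data. */
        upc_forall (i = 0; i < B*THREADS; i++; &a[i])
            b[i] = 2.0 * a[i];
        upc_barrier;
        return 0;
    }

With a poorly chosen block size or affinity expression, the same loop would generate remote accesses; this is the locality burden the paragraph above refers to.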
The SMPSs/StarSs programming model aims to simplify parallel programming on multi-core architectures. It offers task-level parallelism, where dependencies among tasks are detected at run time, and its runtime takes care of data locality when scheduling tasks. This yields a two-fold productivity improvement: the developer's time is saved because dependencies are detected automatically instead of being hard-coded, and cache-optimization effort is reduced because the runtime handles data locality.
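A corresponding SMPSs-style sketch (function, array names, and sizes are hypothetical): directionality clauses on a task declaration let the runtime build the dependency graph between calls and schedule them, rather than the developer encoding synchronization by hand.

    #define BS 64
    #define NB 4

    /* input/inout clauses tell the runtime how each task touches its
     * arguments; dependencies between tasks are derived from them. */
    #pragma css task input(a) inout(b)
    void scale_block(double a[BS], double b[BS])
    {
        for (int i = 0; i < BS; i++)
            b[i] += 2.0 * a[i];
    }

    int main(void)
    {
        static double a[NB][BS], b[NB][BS];

        #pragma css start
        for (int j = 0; j < NB; j++)
            scale_block(a[j], b[j]);   /* each call is spawned as a task */
        #pragma css barrier             /* wait for all spawned tasks */
        #pragma css finish
        return 0;
    }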
The purpose of this thesis is to combine a PGAS programming model (UPC) across nodes with a shared-memory, task-based parallelization model (SMPSs/StarSs) within each node, in order to take advantage of multi-core systems, and to contrast this approach with the established MPI + OpenMP combination. Both performance and programmability are considered in the evaluation.
The UPC + SMPSs combination achieves approximately the same execution time as MPI + OpenMP. However, UPC's current lack of features such as multi-dimensional data distribution and virtual topologies makes the hybrid UPC + SMPSs/StarSs programming model less programmable than MPI + OpenMP for the application studied in this thesis.