TurboBŁYSK: Scheduling for improved data-driven task performance with fast dependency resolution
Authors (all at KTH, School of Information and Communication Technology (ICT), Software and Computer systems, SCS): Podobas, Artur; Brorsson, Mats; Vlassov, Vladimir. ORCID iD (second author): 0000-0002-9637-2065
2014 (English). In: Using and Improving OpenMP for Devices, Tasks, and More: 10th International Workshop on OpenMP, IWOMP 2014, Salvador, Brazil, September 28-30, 2014. Proceedings, Springer, 2014, pp. 45-57. Conference paper (Refereed)
Abstract [en]

Data-driven task-parallelism is attracting growing interest and has now been added to OpenMP (4.0). This paradigm simplifies the writing of parallel applications, helps extract parallelism, and facilitates the use of distributed memory architectures. While the programming model itself is becoming mature, a problem with current run-time scheduler implementations is that they require a very large task granularity in order to scale. This limitation is at odds with the idea of task-parallel programming, where programmers should be able to concentrate on exposing parallelism with little regard to task granularity. To mitigate this limitation, we have designed and implemented TurboBŁYSK, a highly efficient run-time scheduler for tasks with explicit data-dependence annotations. We propose a novel mechanism based on pattern-saving that allows the scheduler to re-use previously resolved dependency patterns, based on programmer annotations, enabling programs to use even the smallest of tasks and still scale well. We show experimentally that our techniques in TurboBŁYSK achieve nearly twice the peak performance of other run-time schedulers. Our techniques are not OpenMP-specific and can be implemented in other task-parallel frameworks.

Place, publisher, year, edition, pages
Springer, 2014. pp. 45-57.
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 0302-9743; 8766
Keyword [en]
Application programming interfaces (API), Computer software reusability, Distributed memory architecture, Parallel application, Peak performance, Programming models, Scheduler implementation, Task granularity, Task parallelism, Task performance
National Category
Computer Science
URN: urn:nbn:se:kth:diva-161691
DOI: 10.1007/978-3-319-11454-5_4
ISI: 000360155400004
Scopus ID: 2-s2.0-84921513746
ISBN: 978-3-319-11453-8, 978-3-319-11454-5
OAI: diva2:798214
10th International Workshop on OpenMP, IWOMP 2014, Salvador, Brazil, September 28-30, 2014

QC 20150326

Available from: 2015-03-26. Created: 2015-03-13. Last updated: 2016-01-27. Bibliographically approved.
In thesis
1. Improving Performance and Quality-of-Service through the Task-Parallel Model: Optimizations and Future Directions for OpenMP
2015 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

With the end of Dennard scaling -- the observation that transistors could be shrunk while keeping power density, and thus power efficiency, constant -- computer hardware has today become very divergent. Initially the change concerned only the number of processors on a chip (multicores), but it has since escalated into complex heterogeneous systems with non-intuitive properties -- properties that can improve performance and power consumption but also strain the programmers expected to develop on them.

Answering these challenges is the OpenMP task-parallel model -- a programming model that simplifies the writing of parallel software. Our focus in this thesis has been to explore performance and quality-of-service directions of the OpenMP task-parallel model, particularly by taking architectural features into account.

The first question tackled is: what capabilities do existing state-of-the-art runtime systems have, and how do they perform? We empirically evaluated the performance of several modern task-parallel runtime systems. Performance and power consumption were measured using benchmarks, and we show that the two primary causes of bottlenecks in modern runtime systems lie either in task-management overheads or in how tasks are distributed across processors.

Next, we consider quality-of-service improvements in task-parallel runtime systems. Striving to improve execution performance, current state-of-the-art runtime systems seldom take dynamic architectural features such as temperature into account when deciding how work should be distributed across processors, which can lead to overheating. We developed and evaluated two strategies for thermal awareness in task-parallel runtime systems. The first improves performance when the computer system is constrained by temperature, while the second strives to reduce temperature while meeting soft real-time objectives.
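As a rough illustration of the flavour of thermal-aware scheduling, a runtime can fall back to the coolest core once any core crosses a thermal limit, and otherwise schedule normally. This is a toy heuristic of our own, not the thesis's algorithm; all names (`pick_core`, `limit`) are hypothetical:

```c
/* Pick a core for the next task. If no core is hot, use plain
   round-robin; if any core exceeds the thermal limit, steer work
   to the coolest core to let the hot ones cool down. */
int pick_core(const double *temp, int ncores, double limit, int next_rr) {
    int any_hot = 0;
    for (int i = 0; i < ncores; i++)
        if (temp[i] > limit) any_hot = 1;

    if (!any_hot)
        return next_rr % ncores;   /* normal, performance-oriented path */

    int coolest = 0;               /* thermally constrained path */
    for (int i = 1; i < ncores; i++)
        if (temp[i] < temp[coolest]) coolest = i;
    return coolest;
}
```

A real runtime would read temperatures from hardware sensors and combine this with load information; the sketch only shows the two-mode decision.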

We end the thesis by focusing on performance. Here we introduce our original contribution, BLYSK: a prototype OpenMP framework created exclusively for performance research.

We found that overheads in current runtime systems can be expensive, which often leads to performance degradation. We introduce a novel way of preserving task-graphs across application runs: task-graphs are recorded, identified, and optimized the first time an OpenMP application is executed, and are re-used in subsequent executions, removing unnecessary overheads. Our proposed solution can nearly double performance compared with other state-of-the-art runtime systems.
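The record-and-reuse idea can be sketched as: resolve the dependency graph once, memoize it under a key identifying the task region, and replay it on later executions. The cache below is our own simplified illustration (names such as `get_pattern` and `pattern_cache` are hypothetical, not BLYSK's API):

```c
#include <string.h>

#define MAX_TASKS 64

/* A resolved pattern: for each task, the index of the earlier task it
   must wait for (-1 = none). A real runtime would store a full DAG. */
typedef struct {
    char key[32];                 /* identifies the task region */
    int  predecessor[MAX_TASKS];
    int  ntasks;
    int  valid;
} pattern_t;

static pattern_t pattern_cache;   /* one-entry cache for the sketch */
int resolve_calls = 0;            /* counts expensive resolutions */

/* Slow path: scan the depend() addresses to build the graph
   (last writer of the same address becomes the predecessor). */
static void resolve_slow(const int *writes, int n, pattern_t *p) {
    resolve_calls++;
    p->ntasks = n;
    for (int i = 0; i < n; i++) {
        p->predecessor[i] = -1;
        for (int j = i - 1; j >= 0; j--)
            if (writes[j] == writes[i]) { p->predecessor[i] = j; break; }
    }
}

/* Fast path: replay the saved pattern when the key matches. */
const pattern_t *get_pattern(const char *key, const int *writes, int n) {
    if (pattern_cache.valid && strcmp(pattern_cache.key, key) == 0)
        return &pattern_cache;             /* reuse: no re-resolution */
    resolve_slow(writes, n, &pattern_cache);
    strcpy(pattern_cache.key, key);        /* assumes key fits the buffer */
    pattern_cache.valid = 1;
    return &pattern_cache;
}
```

Calling `get_pattern` twice with the same key performs the expensive resolution only once; subsequent executions pay only a lookup, which is the source of the overhead reduction described above.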

Performance can also be improved through heterogeneity. Today, manufacturers place processors with different capabilities on the same chip. Because the processors differ, so do their power-consumption characteristics and performance. Heterogeneity adds another dimension to the multiprocessing problem: how should work be distributed across the heterogeneous processors? We evaluated the performance of existing, homogeneous scheduling algorithms and found them to be a poor match for heterogeneous systems. We propose a novel scheduling algorithm that dynamically adjusts itself to the heterogeneous system in order to improve performance.
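One simple way a scheduler can adjust to heterogeneity is to hand out work in proportion to each core's measured throughput, rather than equally as a homogeneous scheduler would. The sketch below is a toy proportional-share heuristic of our own, not the algorithm proposed in the thesis:

```c
/* Split `total` tasks across `ncores` cores in proportion to their
   measured speeds; any remainder from rounding goes to core 0. */
void distribute(const double *speed, int ncores, int total, int *share) {
    double sum = 0.0;
    for (int i = 0; i < ncores; i++)
        sum += speed[i];

    int assigned = 0;
    for (int i = 0; i < ncores; i++) {
        share[i] = (int)(total * speed[i] / sum);  /* proportional slice */
        assigned += share[i];
    }
    share[0] += total - assigned;   /* hand leftovers to the first core */
}
```

A dynamic scheduler would refresh the `speed` estimates at run time from observed task completion rates, so the distribution tracks the actual behaviour of the heterogeneous cores.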

The thesis ends with a high-level synthesis approach to improving performance in task-parallel applications. Rather than limiting ourselves to off-the-shelf processors -- which often contain a large amount of unused logic -- our approach is to generate the processors ourselves, automatically deriving application-specific hardware from the OpenMP task-parallel source code. Evaluated on FPGAs, our System-on-Chip designs outperformed other soft cores such as the Nios II processor and were comparable in performance with modern state-of-the-art processors such as the Xeon Phi and the AMD Opteron.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2015. 64 p.
Series: TRITA-ICT; 2015:13
Keyword [en]
Task Parallel, OpenMP, Scheduling, OmpSs, multicore, manycore
National Category
Communication Systems
Research subject
Computer Science
URN: urn:nbn:se:kth:diva-175539
ISBN: 978-91-7595-711-1
Public defence
2015-11-10, Sal A, KTH Kista, Electrum Kistagången 16, Kista, 10:00 (English)

QC 20151016

Available from: 2015-10-16. Created: 2015-10-16. Last updated: 2015-10-16. Bibliographically approved.

Open Access in DiVA
fulltext: FULLTEXT01.pdf, 814 kB, application/pdf (checksum SHA-512)

Other links
Publisher's full text; Scopus

By author/editor
Podobas, Artur; Brorsson, Mats; Vlassov, Vladimir
By organisation
Software and Computer systems, SCS