Towards Scalable Performance Analysis of MPI Parallel Applications
KTH, School of Computer Science and Communication (CSC), High Performance Computing and Visualization (HPCViz). ORCID iD: 0000-0001-9693-6265
2015 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

A considerable fraction of scientific discovery nowadays relies on computer simulations. High Performance Computing (HPC) provides scientists with the means to simulate processes ranging from climate modeling to protein folding. However, achieving good application performance and making optimal use of HPC resources is a heroic task due to the complexity of parallel software. Therefore, performance tools and runtime systems that help users execute applications optimally are of utmost importance in the landscape of HPC.

In this thesis, we explore different techniques to tackle the challenges of collecting, storing, and using fine-grained performance data. First, we investigate the automatic use of real-time performance data in order to run applications in an optimal way. To that end, we present a prototype of an adaptive task-based runtime system that uses real-time performance data for task scheduling. This runtime system has a performance monitoring component that provides real-time access to the performance behavior of an application while it runs. The implementation of this monitoring component is presented and evaluated within this thesis.

Secondly, we explore lossless compression approaches for MPI monitoring. One of the main problems that performance tools face is the huge amount of fine-grained data that can be generated from an instrumented application. Collecting fine-grained data from a program is the best method to uncover the root causes of performance bottlenecks; however, it is infeasible for extremely parallel applications or applications with long execution times. On the other hand, collecting coarse-grained data is scalable but sometimes not enough to discern the root cause of a performance problem. Thus, we propose a new method for performance monitoring of MPI programs using event flow graphs. Event flow graphs incur very low overhead in terms of execution time and storage size, and can be used to reconstruct fine-grained trace files of application events ordered in time.
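The core idea, compressing a repetitive event stream into a graph whose nodes are event types and whose edges count transitions, can be illustrated with a minimal sketch (the event names and the plain edge-count model are illustrative assumptions, not the thesis's actual implementation):

```python
from collections import defaultdict

def build_event_flow_graph(events):
    """Compress an ordered event stream into an event flow graph:
    nodes are distinct event types, directed edges count transitions
    between consecutive events. Storage grows with the number of
    distinct transitions instead of with trace length."""
    edges = defaultdict(int)  # (src, dst) -> transition count
    for src, dst in zip(events, events[1:]):
        edges[(src, dst)] += 1
    return dict(edges)

# A repetitive MPI-style event stream: an iteration loop of sends and receives.
trace = ["Init"] + ["Send", "Recv", "Wait"] * 1000 + ["Finalize"]
graph = build_event_flow_graph(trace)
print(len(trace))  # 3002 events in the full trace
print(len(graph))  # only 5 distinct edges in the graph
```

For repetitive communication patterns, typical of iterative scientific codes, the graph size stays constant while the trace grows with every iteration, which is where the large compression ratios come from.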

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2015. viii, 39 p.
Series
TRITA-CSC-A, ISSN 1653-5723 ; 2015:05
Keyword [en]
parallel computing, performance monitoring, performance tools, event flow graphs
National Category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-165043
ISBN: 978-91-7595-518-6 (print)
OAI: oai:DiVA.org:kth-165043
DiVA: diva2:806809
Presentation
2015-05-20, The Visualization Studio, room 4451, Lindstedtsvägen 5, KTH, Stockholm, 10:00 (English)
Note

QC 20150508

Available from: 2015-05-08. Created: 2015-04-21. Last updated: 2015-05-08. Bibliographically approved.
List of papers
1. Scaling Dalton, a molecular electronic structure program
2011 (English) In: Seventh International Conference on e-Science, e-Science 2011, 5-8 December 2011, Stockholm, Sweden, IEEE conference proceedings, 2011, 256-262 p. Conference paper, Published paper (Refereed)
Abstract [en]

Dalton is a molecular electronic structure program featuring common methods of computational chemistry that are based on pure quantum mechanics (QM) as well as hybrid quantum mechanics/molecular mechanics (QM/MM). It is specialized in, and has a leading position in, the calculation of molecular properties, with a large world-wide user community (over 2000 licenses issued). In this paper, we present a characterization and performance optimization of Dalton that increases the scalability and parallel efficiency of the application. We also propose a solution that prevents the master/worker design of Dalton from becoming a performance bottleneck at larger process counts and further increases parallel efficiency.

Place, publisher, year, edition, pages
IEEE conference proceedings, 2011
Keyword
Chemistry, Image color analysis, Libraries, Measurement, Optimization, Quantum mechanics, Wave functions
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-50421 (URN)
10.1109/eScience.2011.43 (DOI)
2-s2.0-84856350618 (Scopus ID)
978-1-4577-2163-2 (ISBN)
Conference
Seventh International Conference on e-Science, e-Science 2011, 5-8 December 2011, Stockholm, Sweden
Funder
Swedish e‐Science Research Center, OpCoRe
EU, FP7, Seventh Framework Programme, INFSO RI-261523
Note
Copyright 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. QC 20120110. Available from: 2012-01-10. Created: 2011-12-05. Last updated: 2015-05-08. Bibliographically approved.
2. Design and Implementation of a Runtime System for Parallel Numerical Simulations on Large-Scale Clusters
2011 (English) In: Proceedings of the International Conference on Computational Science (ICCS) / [ed] Sato, M; Matsuoka, S; Sloot, PMA; VanAlbada, GD; Dongarra, J, Elsevier, 2011, Vol. 4, 2105-2114 p. Conference paper, Published paper (Refereed)
Abstract [en]

The execution of scientific codes will introduce a number of new challenges, and intensify some old ones, on new high-performance computing infrastructures. Petascale computers are large systems with complex designs using heterogeneous technologies, which makes programming and porting applications difficult, particularly if one wants to exploit the peak performance of the system. In this paper we present the design and first prototype of a runtime system for parallel numerical simulations on large-scale systems. The proposed runtime system addresses the challenges of performance, scalability, and programmability of large-scale HPC systems. We also present initial results of our prototype implementation using a molecular dynamics application kernel.

Place, publisher, year, edition, pages
Elsevier, 2011
Series
Procedia Computer Science, ISSN 1877-0509 ; 4
Keyword
Hybrid computational methods, Parallel computing, Advanced computing architectures, Runtime systems
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-38886 (URN)
10.1016/j.procs.2011.04.230 (DOI)
000299165200229 ()
2-s2.0-79958278307 (Scopus ID)
Conference
11th International Conference on Computational Science, ICCS 2011. Singapore. 1 June 2011 - 3 June 2011
Funder
Swedish e‐Science Research Center
Note
QC 20120110. Available from: 2011-09-02. Created: 2011-09-02. Last updated: 2015-05-08. Bibliographically approved.
3. Online Performance Data Introspection with IPM
2014 (English) In: Proceedings of the 15th IEEE International Conference on High Performance Computing and Communications (HPCC 2013), IEEE Computer Society, 2014, 728-734 p. Conference paper, Published paper (Refereed)
Abstract [en]

Exascale systems will be heterogeneous architectures with multiple levels of concurrency and energy constraints. In such a complex scenario, performance monitoring and runtime systems play a major role in obtaining good application performance and scalability. Furthermore, online access to performance data becomes a necessity when deciding how to schedule resources and orchestrate computational elements: processes, threads, tasks, etc. We present the Performance Introspection API, an extension of the IPM tool that provides online runtime access to performance data from an application while it runs. We describe its design and implementation and show its overhead on several test benchmarks. We also present a real test case using the Performance Introspection API in conjunction with processor frequency scaling to reduce power consumption.
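The notion of online introspection, an application querying its own performance data mid-run, can be sketched as follows. This is a hypothetical illustration, not IPM's actual API: the class name, sampling scheme, and query method are all assumptions made for the example.

```python
import random
import threading
import time

class IntrospectionMonitor:
    """Illustrative sketch of online performance introspection:
    a background thread samples a metric while the application runs,
    and query() exposes the collected values to the running program."""
    def __init__(self, sample_fn, interval=0.01):
        self._sample_fn = sample_fn
        self._interval = interval
        self._lock = threading.Lock()
        self._samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            value = self._sample_fn()  # e.g. a hardware counter readout
            with self._lock:
                self._samples.append(value)
            time.sleep(self._interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def query(self):
        # Online access: callable at any point while the application runs.
        with self._lock:
            return list(self._samples)

# The sampled "metric" here is random noise standing in for a real counter.
monitor = IntrospectionMonitor(lambda: random.random(), interval=0.001)
monitor.start()
time.sleep(0.05)           # simulated application work
samples = monitor.query()  # introspect while still running
monitor.stop()
print(len(samples) > 0)
```

A runtime system could poll such a query point periodically and, for instance, lower the processor frequency when the samples indicate the application is communication-bound.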

Place, publisher, year, edition, pages
IEEE Computer Society, 2014
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-136212 (URN)
10.1109/HPCC.and.EUC.2013.107 (DOI)
2-s2.0-84903964607 (Scopus ID)
978-076955088-6 (ISBN)
Conference
The 15th IEEE International Conference on High Performance Computing and Communications (HPCC 2013). Zhangjiajie, China, November 13-15, 2013.
Note

QC 20140602

Available from: 2013-12-04. Created: 2013-12-04. Last updated: 2015-05-08. Bibliographically approved.
4. MPI Trace Compression Using Event Flow Graphs
2014 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Understanding how parallel applications behave is crucial for using high-performance computing (HPC) resources efficiently. However, the task of performance analysis is becoming increasingly difficult due to the growing complexity of scientific codes and the size of machines. Even though many tools have been developed over the past years to help in this task, current approaches either offer only an overview of the application, discarding temporal information, or they generate huge trace files that are often difficult to handle.

In this paper we propose the use of event flow graphs for monitoring MPI applications, a new and different approach that balances the low overhead of profiling tools with the abundance of information available from tracers. Event flow graphs are captured with very low overhead, require orders of magnitude less storage than standard trace files, and can still recover the full sequence of events in the application. We test this new approach with the NERSC-8/Trinity Benchmark suite and achieve compression ratios up to 119x.
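A simplified sketch of how a graph can both compress a trace and recover the exact event order: here each node's ordered successors are run-length encoded. The papers' event flow graphs use a richer edge labeling, so this model and its event names are illustrative assumptions only.

```python
from collections import defaultdict

def compress(events):
    """Per node, store the ordered successor sequence run-length encoded.
    Repetitive traces collapse to a handful of runs per node."""
    succ = defaultdict(list)  # node -> [[next_event, repeat_count], ...]
    for src, dst in zip(events, events[1:]):
        runs = succ[src]
        if runs and runs[-1][0] == dst:
            runs[-1][1] += 1
        else:
            runs.append([dst, 1])
    return events[0], dict(succ)

def reconstruct(first, succ):
    """Replay the graph to recover the full trace; consumes the run
    counts in place, so compress again before a second replay."""
    cursor = {node: 0 for node in succ}  # next unfinished run per node
    out, node = [first], first
    while node in succ and cursor[node] < len(succ[node]):
        run = succ[node][cursor[node]]
        run[1] -= 1
        if run[1] == 0:
            cursor[node] += 1
        node = run[0]
        out.append(node)
    return out

trace = ["Init"] + ["Send", "Recv", "Wait"] * 1000 + ["Finalize"]
first, graph = compress(trace)
assert reconstruct(first, graph) == trace  # lossless: full order recovered
```

The 3002-event trace compresses to five runs in total yet replays exactly; large compression ratios such as the reported 119x arise when long iterative traces collapse this way.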

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8632
Keyword
MPI event flow graphs, trace compression, trace reconstruction, performance monitoring
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:kth:diva-165042 (URN)
2-s2.0-84958532986 (Scopus ID)
Conference
Euro-Par 2014 Parallel Processing
Note

QC 20150423. QC 20160314

Available from: 2015-04-21. Created: 2015-04-21. Last updated: 2017-04-28. Bibliographically approved.

Open Access in DiVA