Scheduling of parallel matrix computations and data layout conversion for HPC and Multi-Core Architectures
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
2011 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Dense linear algebra represents fundamental building blocks in many computational science and engineering applications. The dense linear algebra algorithms must be numerically stable, robust, and reliable in order to be usable as black-box solvers by expert as well as non-expert users. The algorithms also need to scale and run efficiently on massively parallel computers with multi-core nodes. Developing high-performance algorithms for dense matrix computations is a challenging task, especially since the widespread adoption of multi-core architectures. Cache reuse is an even more critical issue on multi-core processors than on uni-core processors due to their larger computational power and more complex memory hierarchies. Blocked matrix storage formats, in which blocks of the matrix are stored contiguously, and blocked algorithms, which exhibit large amounts of cache reuse, remain key techniques in the effort to approach the theoretical peak performance.
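
To make the cache-reuse argument concrete, the following sketch contrasts the standard column-major index mapping with a simple square-block layout in which every nb-by-nb block occupies a contiguous range of memory. It is a generic illustration written for this summary: the function names are ours, the block order is chosen arbitrarily as column-major, and ragged edge blocks are ignored, so it is not the specific packed format developed in the thesis.

```python
def col_major_index(i, j, m):
    """Linear index of element (i, j) in an m-by-n matrix stored column by column."""
    return i + j * m


def square_block_index(i, j, m, nb):
    """Linear index of element (i, j) in a square-block layout: the matrix is
    partitioned into nb-by-nb blocks, the blocks are laid out in column-major
    block order, and each block is stored contiguously in column-major order.
    Assumes the dimensions are multiples of nb (a real format also handles
    the ragged edge blocks)."""
    mb = m // nb                    # number of block rows
    bi, bj = i // nb, j // nb       # block coordinates of (i, j)
    ii, jj = i % nb, j % nb         # coordinates inside that block
    return (bi + bj * mb) * nb * nb + ii + jj * nb
```

Because the nb*nb elements of a block are consecutive in memory, a kernel that works on one block streams through a single contiguous region that fits in cache, which is the locality property exploited by both blocked storage formats and blocked algorithms.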

In Paper I, we present a packed and distributed Cholesky factorization algorithm based on a new blocked and packed matrix storage format. High performance node computations are obtained as a result of the blocked storage format, and the use of look-ahead leads to improved parallel efficiency. In Paper II and Paper III, we study the problem of in-place matrix transposition in general and in-place matrix storage format conversion in particular. We present and evaluate new high-performance parallel algorithms for in-place conversion between the standard column-major and row-major formats and the four standard blocked matrix storage formats. Another critical issue, besides cache reuse, is that of efficient scheduling of computational tasks. Many weakly scalable parallel algorithms are efficient only when the problem size per processor is relatively large. A current research trend focuses on developing parallel algorithms which are more strongly scalable and hence more efficient also for smaller problems.
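
As a point of reference for the blocked node computations in Paper I, here is a minimal right-looking blocked Cholesky factorization written with NumPy. It operates on an ordinary dense array rather than on the packed and distributed blocked format of the paper, the block size nb is an arbitrary illustrative parameter, and the comments only indicate where a look-ahead scheme would reorder the work; it is a sketch, not the algorithm of Paper I.

```python
import numpy as np


def blocked_cholesky(A, nb=64):
    """Right-looking blocked Cholesky factorization; returns the lower factor L.

    A is assumed symmetric positive definite. In a parallel implementation,
    look-ahead means factorizing the next diagonal block and panel as soon as
    their part of the trailing update is done, instead of waiting for the
    whole update to finish.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # Factorize the diagonal block.
        A[k:k+kb, k:k+kb] = np.linalg.cholesky(A[k:k+kb, k:k+kb])
        L_kk = A[k:k+kb, k:k+kb]
        if k + kb < n:
            # Panel: solve L_panel @ L_kk.T = A[k+kb:, k:k+kb] for L_panel.
            A[k+kb:, k:k+kb] = np.linalg.solve(L_kk, A[k+kb:, k:k+kb].T).T
            L_panel = A[k+kb:, k:k+kb]
            # Trailing update (the bulk of the work; this is what look-ahead
            # overlaps with the factorization of the next panel).
            A[k+kb:, k+kb:] -= L_panel @ L_panel.T
    return np.tril(A)
```

For a symmetric positive definite matrix A, L = blocked_cholesky(A) satisfies L @ L.T == A up to rounding errors.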

In Paper IV, we present a framework for dynamic node-scheduling of two-sided matrix computations and demonstrate that by using priority-based scheduling one can obtain an efficient scheduling of a QR sweep. In Paper V and Paper VI, we present a blocked implementation of two-stage Hessenberg reduction targeting multi-core architectures. The main contributions of Paper V are in the blocking and scheduling of the second stage. Specifically, we show that the concept of look-ahead can also be applied to this two-sided factorization, and we propose an adaptive load-balancing technique that allows us to schedule the operations effectively.
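
The scheduling idea behind Papers IV-VI can be illustrated with a toy task pool: a task becomes ready once all of its dependencies have completed, and among the ready tasks the one with the highest priority is always executed first (for example, tasks on the critical path of a QR sweep). The sketch below is a sequential simulation with invented task names; it is not the node-scheduling framework of Paper IV.

```python
import heapq


def run_prioritized(tasks, deps, priority):
    """Execute a task graph, always running the ready task with the smallest
    priority value first.

    tasks    : dict mapping task name -> callable that performs the work
    deps     : dict mapping task name -> iterable of task names it depends on
    priority : dict mapping task name -> number (smaller means more urgent)
    Returns the order in which the tasks were executed.
    """
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    successors = {t: [] for t in tasks}
    for t, ds in remaining.items():
        for d in ds:
            successors[d].append(t)
    ready = [(priority[t], t) for t in tasks if not remaining[t]]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        tasks[t]()                          # run the task body
        order.append(t)
        for s in successors[t]:             # release tasks whose last dependency finished
            remaining[s].discard(t)
            if not remaining[s]:
                heapq.heappush(ready, (priority[s], s))
    return order


# Hypothetical three-task example: the factorization task on the critical path is
# given the highest priority and therefore runs before the larger trailing update.
order = run_prioritized(
    tasks={"factor": lambda: None, "update": lambda: None, "next_factor": lambda: None},
    deps={"update": ["factor"], "next_factor": ["factor"]},
    priority={"factor": 0, "next_factor": 1, "update": 2},
)
# order == ["factor", "next_factor", "update"]
```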

Abstract [sv]

Matrix computations are fundamental building blocks in many computationally intensive applications in science and engineering. The algorithms must be numerically stable and robust so that users can rely on the computed results. They must also scale and run efficiently on massively parallel computers whose nodes consist of multi-core processors.

Developing high-performance algorithms for dense matrix computations is challenging, especially since the introduction of multi-core processors. Reusing data in the cache memories of a multi-core processor is even more important because of its high computational performance. Two central techniques in the pursuit of algorithms with optimal performance are blocked algorithms and blocked matrix storage formats. A blocked algorithm has a memory access pattern that matches the memory hierarchy well. A blocked matrix storage format places the matrix elements in memory so that the elements of a given matrix block are stored consecutively.

Paper I presents an algorithm for Cholesky factorization of a matrix stored compactly in a distributed memory. The new storage format is blocked and thereby enables high performance. Paper II and Paper III describe how a conventionally stored matrix can be converted to and from a blocked storage format using only a very small amount of additional storage. The solution builds on a new parallel algorithm for transposition of rectangular matrices.

When designing a scalable parallel algorithm, one must also consider how the various computational tasks are scheduled efficiently. Many so-called weakly scalable algorithms are efficient only for relatively large problems. A current research trend is to develop so-called strongly scalable algorithms, which are more efficient even for smaller problems.

Paper IV introduces a dynamic scheduling system for two-sided matrix computations. The computational tasks are distributed statically over the nodes and then scheduled dynamically within each node. The paper also shows how priority-based scheduling turns a previously inefficient algorithm for a so-called QR sweep into an efficient one.

Paper V and Paper VI present new parallel blocked algorithms, designed for multi-core processors, for a two-stage Hessenberg reduction. The central contributions of Paper V are a blocked algorithm for the second stage of the reduction and an adaptive load-balancing method.

Place, publisher, year, edition, pages
Umeå: Institutionen för datavetenskap, Umeå universitet, 2011.
Series
Report / UMINF, ISSN 0348-0542 ; 11.04
National Category
Information Science
Research subject
Computer and Information Science
Identifiers
URN: urn:nbn:se:umu:diva-41224
ISBN: 978-91-7459-214-6
OAI: oai:DiVA.org:umu-41224
DiVA: diva2:411700
Public defence
2011-05-19, KBC-huset, KB3B1, Umeå Universitet, Umeå, 13:00 (English)
Available from: 2011-04-28. Created: 2011-03-21. Last updated: 2012-01-20. Bibliographically approved.
List of papers
1. Distributed SBP Cholesky factorization algorithms with near-optimal scheduling
2009 (English). In: ACM Transactions on Mathematical Software, ISSN 0098-3500, Vol. 36, no. 2, pp. 11:1-11:25. Article in journal (Refereed). Published.
Identifiers
urn:nbn:se:umu:diva-24692 (URN)
10.1145/1499096.1499100 (DOI)
Available from: 2009-07-09. Created: 2009-07-09. Last updated: 2012-01-09. Bibliographically approved.
2. Blocked in-place transposition with application to storage format conversion
2009 (English). Report (Other academic)
Abstract [en]

We develop a prototype library for in-place (dense) matrix storage format conversion between the canonical row- and column-major formats and the four canonical block data layouts. Many of the fastest linear algebra routines operate on matrices in a block data layout. In-place storage format conversion enables support for input/output of large matrices in the canonical row- and column-major formats. The library uses algorithms associated with in-place transposition as building blocks. We investigate previous work on the subject of (in-place) transposition, and the most promising algorithms are implemented and evaluated. Our results indicate that the Three-Stage Algorithm, which requires only a small constant amount of additional memory, performs well and is easy to tune. Murray Dow's V5 algorithm, a two-stage semi-in-place algorithm that requires a small amount of additional memory, is sometimes a better choice. The write-allocate strategy of most cache-based computer architectures appears to be the cause of an observed performance problem for large matrices.
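
The transposition building block mentioned above can be made concrete with the classical cycle-following method: element (i, j) of an m-by-n column-major matrix sits at linear index q = i + j*m and must move to index j + i*n, which equals (q*n) mod (m*n - 1). The sketch below is that textbook algorithm with a naive smallest-index cycle test, shown only to expose the permutation structure; it is neither the Three-Stage Algorithm nor Dow's V5 algorithm evaluated in the report.

```python
def transpose_inplace(a, m, n):
    """Transpose an m-by-n matrix stored column-major in the flat list a, in place,
    by following the cycles of the permutation q -> (q * n) mod (m*n - 1)."""
    size = m * n
    if size <= 1:
        return
    for start in range(1, size - 1):
        # Process each cycle exactly once, starting from its smallest index.
        q = (start * n) % (size - 1)
        while q > start:
            q = (q * n) % (size - 1)
        if q < start:
            continue
        # Rotate the elements along the cycle.
        carry, q = a[start], start
        while True:
            dest = (q * n) % (size - 1)
            a[dest], carry = carry, a[dest]
            q = dest
            if q == start:
                break


# The 2-by-3 matrix [[1, 2, 3], [4, 5, 6]] in column-major order ...
a = [1, 4, 2, 5, 3, 6]
transpose_inplace(a, 2, 3)
# ... becomes its 3-by-2 transpose in column-major order: [1, 2, 3, 4, 5, 6]
```

The naive cycle test makes this quadratic in the worst case and completely cache-oblivious, which is precisely the kind of behaviour that blocked, multi-stage algorithms such as those studied in the report are designed to avoid.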

Place, publisher, year, edition, pages
Umeå: Institutionen för datavetenskap, Umeå universitet, 2009. 29 p.
Series
Report / UMINF, ISSN 0348-0542 ; 09.01
Identifiers
urn:nbn:se:umu:diva-41217 (URN)
Available from: 2011-03-21. Created: 2011-03-21. Last updated: 2012-01-04. Bibliographically approved.
3. A framework for dynamic node-scheduling of two-sided blocked matrix computations
2009 (English). In: Applied Parallel Computing - State of the Art in Parallel and Scientific Computing: 9th International Workshop, PARA 2008, 2009. Conference paper (Refereed).
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Identifiers
urn:nbn:se:umu:diva-24701 (URN)
Conference
9th International Workshop, PARA 2008
Note
Accepted for publication.
Available from: 2009-07-10. Created: 2009-07-10. Last updated: 2012-01-09. Bibliographically approved.
4. Efficient reduction from block Hessenberg form to Hessenberg form using shared memory
2010 (English). In: PARA 2010: State of the Art in Scientific and Parallel Computing, Reykjavik, June 6-9, 2010. Conference paper (Refereed).
Identifiers
urn:nbn:se:umu:diva-41219 (URN)
Conference
PARA 2010, Reykjavik, June 6-9, 2010
Note
Accepted for publication.
Available from: 2011-03-21. Created: 2011-03-21. Last updated: 2012-01-09. Bibliographically approved.
5. Parallel two-stage reduction to Hessenberg form using dynamic scheduling on shared-memory architectures
2011 (English). In: Parallel Computing, ISSN 0167-8191, Vol. 37, no. 12, pp. 771-782. Article in journal (Refereed). Published.
Abstract [en]

We consider parallel reduction of a real matrix to Hessenberg form using orthogonal transformations. Standard Hessenberg reduction algorithms reduce the columns of the matrix from left to right in either a blocked or unblocked fashion. However, the standard blocked variant performs 20% of the computations in terms of matrix-vector multiplications. We show that a two-stage approach consisting of an intermediate reduction to block Hessenberg form speeds up the reduction by avoiding matrix-vector multiplications. We describe and evaluate a new high-performance implementation of the two-stage approach that attains significant speedups over the one-stage approach. The key components are a dynamically scheduled implementation of Stage 1 and a blocked, adaptively load-balanced implementation of Stage 2.
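
To show what the intermediate block Hessenberg form looks like, the sketch below carries out only Stage 1 for an n-by-n matrix: the part of each p-column panel that lies below the p-th subdiagonal is annihilated by a full QR factorization applied as an orthogonal similarity transformation. The function name and the use of explicit Q factors are our simplifications; the implementation described in the paper uses compact blocked Householder representations and dynamic scheduling, and Stage 2, which removes the remaining p-1 subdiagonals, is not shown.

```python
import numpy as np


def to_block_hessenberg(A, p):
    """Stage 1 sketch: orthogonal similarity reduction of a square matrix to
    block Hessenberg form, i.e. zeros below the p-th subdiagonal."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(0, max(n - p - 1, 0), p):
        # Full QR of the panel below the p-th subdiagonal of columns j .. j+p-1.
        Q, _ = np.linalg.qr(A[j + p:, j:j + p], mode="complete")
        A[j + p:, j:] = Q.T @ A[j + p:, j:]   # left update introduces the zeros
        A[:, j + p:] = A[:, j + p:] @ Q       # right update preserves the eigenvalues
    return A


# Example: H = to_block_hessenberg(np.random.randn(8, 8), p=2)
# np.allclose(np.tril(H, -3), 0) is True, and H has the same eigenvalues as the input.
```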

Keyword
Hessenberg reduction, Blocked algorithm, Parallel computing, Dynamic scheduling, High performance, Multi-core, Memory hierarchies
National Category
Computer Science; Computational Mathematics
Identifiers
urn:nbn:se:umu:diva-41221 (URN)
10.1016/j.parco.2011.05.001 (DOI)
000298522200005 (ISI)
Available from: 2011-03-21. Created: 2011-03-21. Last updated: 2012-02-03. Bibliographically approved.

Open Access in DiVA

fulltext: FULLTEXT01.pdf, 689 kB, application/pdf
By author/editor
Karlsson, Lars
By organisation
Department of Computing Science; High Performance Computing Center North (HPC2N)