Asynchronous Algorithms for Large-Scale Optimization: Analysis and Implementation
Aytekin, Arda (KTH, School of Electrical Engineering (EES), Automatic Control). ORCID iD: 0000-0003-1725-2901
2017 (English). Licentiate thesis, monograph (Other academic).
Abstract [en]

This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation in shared and distributed memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks with stale information without jeopardizing convergence to the optimal solution.

The first part of the thesis focuses on shared memory architectures. We propose and analyze a family of algorithms to solve an unconstrained, smooth optimization problem consisting of a large number of component functions. Specifically, we investigate the effect of information delay, inherent in asynchronous implementations, on the convergence properties of the incremental prox-gradient descent method. Contrary to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step size can be selected independently of the delay bound.
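For intuition, here is a minimal serial sketch of an incremental prox-gradient update driven by stale iterates. It is a sketch under illustrative assumptions, not the thesis's pseudocode: components are visited cyclically, the information delay is drawn at random but never exceeds a bound tau_max, and prox stands in for the proximal operator of an optional regularizer (the identity when there is none). The function names and the delay model are hypothetical.

    import numpy as np

    def delayed_incremental_prox_gradient(grad_f, prox, x0, gamma,
                                          tau_max, n_iters, seed=0):
        """Cyclic incremental prox-gradient using gradients at stale iterates."""
        rng = np.random.default_rng(seed)
        history = [np.asarray(x0, dtype=float)]    # all iterates seen so far
        m = len(grad_f)
        for k in range(n_iters):
            i = k % m                              # cyclic component selection
            delay = int(rng.integers(0, min(tau_max, k) + 1))
            x_stale = history[-(delay + 1)]        # iterate from `delay` steps ago
            x_new = prox(history[-1] - gamma * grad_f[i](x_stale), gamma)
            history.append(x_new)
        return history[-1]

    # Toy usage: least squares split into m quadratic components, no regularizer.
    m, d = 8, 5
    rng = np.random.default_rng(1)
    data = [(rng.standard_normal((3, d)), rng.standard_normal(3)) for _ in range(m)]
    grad_f = [lambda x, A=A, b=b: A.T @ (A @ x - b) for A, b in data]
    identity_prox = lambda v, gamma: v
    x_hat = delayed_incremental_prox_gradient(grad_f, identity_prox,
                                              np.zeros(d), gamma=0.01,
                                              tau_max=5, n_iters=5000)

With prox as the identity this reduces to a delayed incremental gradient method on the smooth problem, and the delay-insensitive result above is the statement that gamma can be fixed without reference to tau_max.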

We then shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed memory architecture. Here, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that, by suitably choosing the algorithm parameters, one can recover the best-known convergence rates established for delay-free implementations and expect a near-linear speedup in the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates for any bounded information delay.
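To make the aggregated-gradient idea concrete, below is a minimal serial sketch of an incremental aggregated gradient update on a simplified smooth, unconstrained analogue of the setting above; it is an assumption-laden sketch, not the thesis's exact algorithm. A table stores the most recently computed gradient of each component, one entry is refreshed per iteration, and the step follows the running aggregate; in an asynchronous implementation, refreshes would instead arrive from worker nodes at their own pace, which is precisely what leaves the other entries stale. Names and the cyclic refresh order are hypothetical.

    import numpy as np

    def incremental_aggregated_gradient(grad_f, x0, gamma, n_iters):
        """Step along the sum of stored (possibly stale) component gradients."""
        x = np.asarray(x0, dtype=float)
        table = [g(x) for g in grad_f]   # gradient memory, one slot per component
        g_sum = np.sum(table, axis=0)
        m = len(grad_f)
        for k in range(n_iters):
            i = k % m                    # refresh one component per iteration
            g_new = grad_f[i](x)
            g_sum += g_new - table[i]    # keep the running aggregate consistent
            table[i] = g_new
            x = x - (gamma / m) * g_sum  # descend along the averaged aggregate
        return x

Between refreshes the remaining m - 1 table entries are stale information, so the bounded-delay assumption corresponds to every entry being refreshed often enough; the global linear rate stated above is what licenses stepping along such an aggregate.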

Extensive simulations and actual implementations of the algorithms on different platforms, applied to representative real-world problems, validate our theoretical results.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. 100 p.
Series
TRITA-EE, ISSN 1653-5146; 2017:021
Keyword [en]
convex optimization, optimization, asynchronous algorithms, algorithms, parallel algorithms, large-scale, big data
National Category
Control Engineering
Research subject
Electrical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-203812
ISBN: 978-91-7729-328-6 (print)
OAI: oai:DiVA.org:kth-203812
DiVA: diva2:1082700
Presentation
2017-04-07, Q2, Osquldas väg 10, KTH Campus, Stockholm, 10:00 (English)
Funder
Swedish Research Council, 66255
Note
QC 20170317
Available from: 2017-03-17. Created: 2017-03-17. Last updated: 2017-03-17. Bibliographically approved.

Open Access in DiVA

kth-lic-aytekin-2017 (1650 kB), 112 downloads
File information
File name: FULLTEXT01.pdf. File size: 1650 kB. Checksum: SHA-512.
d02afb2841aa13be61c96265043e066875b3aadbc358f86beebe46c72b902f06debb7f9214fad8c06f7ebc2cbe7b3741b91b6b44bd75b3dca3c1ceb7bab1f1b7
Type: fulltext. Mimetype: application/pdf.
