Asynchronous Algorithms for Large-Scale Optimization: Analysis and Implementation
2017 (English). Licentiate thesis, monograph (Other academic)
This thesis proposes and analyzes several first-order methods for convex optimization, designed for parallel implementation on shared-memory and distributed-memory architectures. The theoretical focus is on designing algorithms that can run asynchronously, allowing computing nodes to execute their tasks using stale information without jeopardizing convergence to the optimal solution.
The first part of the thesis focuses on shared-memory architectures. We propose and analyze a family of algorithms for solving an unconstrained, smooth optimization problem whose objective is the sum of a large number of component functions. Specifically, we investigate the effect of the information delay inherent in asynchronous implementations on the convergence properties of the incremental prox-gradient descent method. In contrast to related proposals in the literature, we establish delay-insensitive convergence results: the proposed algorithms converge under any bounded information delay, and their constant step size can be selected independently of the delay bound.
Then, we shift focus to solving constrained, possibly non-smooth, optimization problems in a distributed-memory architecture. Here, we propose and analyze two important families of gradient descent algorithms: asynchronous mini-batching and incremental aggregated gradient descent. In particular, for asynchronous mini-batching, we show that by suitably choosing the algorithm parameters one can recover the best-known convergence rates established for delay-free implementations, and can expect near-linear speedup in the number of computing nodes. Similarly, for incremental aggregated gradient descent, we establish global linear convergence rates under any bounded information delay.
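The incremental aggregated gradient idea can be sketched as follows: keep a table with the most recently computed gradient of each component (these entries may be stale) and step along their sum, refreshing one entry per iteration. The quadratic components and the cyclic refresh order below are assumptions for the example, not the thesis's implementation.

```python
def incremental_aggregated_gradient(grads, x0, step, iters):
    """Sketch of incremental aggregated gradient (IAG) descent: maintain one
    (possibly stale) gradient per component and take steps along the average
    of the table, refreshing a single component gradient each iteration."""
    n = len(grads)
    x = x0
    table = [g(x0) for g in grads]    # gradient table, initially evaluated at x0
    agg = sum(table)                  # running sum of the table entries
    for k in range(iters):
        i = k % n                     # component refreshed this round
        new_g = grads[i](x)           # fresh gradient of component i
        agg += new_g - table[i]       # cheap update of the aggregate
        table[i] = new_g
        x = x - step * agg / n        # full-gradient-like step from stale info
    return x

# Toy problem: minimize sum_i 0.5*(x - b_i)^2 for b = [1, 2, 3]; minimizer is mean(b) = 2.
bs = [1.0, 2.0, 3.0]
grads = [(lambda b: (lambda x: x - b))(b) for b in bs]
x_star = incremental_aggregated_gradient(grads, 0.0, 0.5, 200)
```

Because each entry of the table is at most `n` refreshes old, the aggregate approximates the full gradient, which is what makes linear convergence rates with a constant step size plausible for strongly convex objectives.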
Extensive simulations and actual implementations of the algorithms on different platforms, applied to representative real-world problems, validate our theoretical results.
Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2017. 100 p.
Series: TRITA-EE, ISSN 1653-5146 ; 2017:021
Keywords: convex optimization, optimization, asynchronous algorithms, algorithms, parallel algorithms, large-scale, big data
Research subject: Electrical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-203812
ISBN: 978-91-7729-328-6 (print)
OAI: oai:DiVA.org:kth-203812
DiVA: diva2:1082700
Presentation: 2017-04-07, 10:00, Q2, Osquldas väg 10, KTH Campus, Stockholm (English)
Opponent: Patrinos, Panagiotis K., Assistant Professor
Supervisor: Johansson, Mikael, Professor
Funder: Swedish Research Council, 66255
QC 20170317, 2017-03-17. Bibliographically approved