Digitala Vetenskapliga Arkivet

Toward Efficient Federated Learning over Wireless Networks: Novel Frontiers in Resource Optimization
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Communication Systems, CoS. ORCID iD: 0000-0001-8826-2088
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

With the rise of the Internet of Things (IoT) and 5G networks, edge computing addresses critical limitations in cloud computing's quality of service. Machine learning (ML) has become essential for processing IoT-generated data at the edge, primarily through distributed optimization algorithms that support predictive tasks. However, state-of-the-art ML models demand substantial computational and communication resources, often exceeding the capabilities of wireless devices. Moreover, training these models typically requires centralized access to datasets, but transmitting such data to the cloud introduces significant communication overhead, posing a critical challenge for resource-constrained systems. Federated Learning (FL) is a promising iterative approach that reduces communication costs through local computation on devices, where only model parameters are shared with a central server. Accordingly, every communication iteration of FL incurs costs in computation, latency, bandwidth, and energy. Although FL enables distributed learning across multiple devices without exchanging raw data, its success is often hindered by wireless communication overhead, including traffic congestion, and by device resource constraints. To address these challenges, this thesis presents cost-effective methods for making FL training more efficient in resource-constrained wireless environments. Initially, we investigate challenges in distributed training over wireless networks, addressing the background traffic and latency that impede communication iterations. We introduce the cost-aware causal FL algorithm (FedCau), which balances training performance against communication and computation costs through a novel iteration-termination method that removes the need for information about future iterations. A multi-objective optimization problem is formulated, integrating the FL loss and iteration costs, with communication managed via the slotted-ALOHA, CSMA/CA, and OFDMA protocols.
The framework is extended to cover both convex and non-convex loss functions, and results are compared with established communication-efficient methods, including Lazily Aggregated Quantized Gradient (LAQ). Additionally, we develop A-LAQ (Adaptive LAQ), which conserves energy while maintaining high test accuracy by dynamically adjusting the bit allocation for local model updates during iterations. Next, we leverage cell-free massive multiple-input multiple-output (CFmMIMO) networks to address the high latency of large-scale FL deployments. This architecture serves many users simultaneously on the same time/frequency resources, mitigating the latency bottleneck through spatial multiplexing. Accordingly, we propose optimized uplink power allocation schemes that minimize the trade-off between energy consumption and latency, enabling more iterations under given energy and latency constraints and leading to substantial gains in FL test accuracy. We present three approaches, beginning with a method that jointly minimizes the users' uplink energy and the FL training latency. This approach optimizes the trade-off between each user's uplink latency and energy consumption, factoring in how each user's transmit power affects the energy and latency of the other users.

Furthermore, to address the straggler effect, we propose an adaptive mixed-resolution quantization scheme for local gradient updates, which reserves high resolution for the essential entries only and employs dynamic power control. Finally, we introduce EFCAQ, an energy-efficient FL framework for CFmMIMO networks that combines adaptive quantization with an adaptive number of local iterations per user to co-optimize the straggler effect and the overall user energy consumption while minimizing the FL loss function. Through extensive theoretical analysis and experimental validation, this thesis demonstrates that the proposed methods outperform state-of-the-art algorithms across various FL setups and datasets. These contributions pave the way for energy-efficient, low-latency FL systems, making them more practical for real-world wireless networks.
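The energy-latency trade-off that the power allocation schemes above optimize can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the thesis's actual formulation: a single user's uplink cost is modeled as a weighted sum of transmission energy and latency under a Shannon-rate model, and the transmit power is chosen by a simple grid search.

```python
import math

def uplink_cost(p, bits, bandwidth, gain, noise, theta=0.5):
    """Weighted energy-latency cost of one uplink transmission.

    Latency = bits / rate and energy = power * latency; theta trades the two
    off. All parameters and the rate model are illustrative assumptions.
    """
    rate = bandwidth * math.log2(1.0 + gain * p / noise)  # Shannon rate (bit/s)
    latency = bits / rate
    energy = p * latency
    return theta * energy + (1.0 - theta) * latency

def best_power(bits, bandwidth, gain, noise, p_grid, theta=0.5):
    """Pick the transmit power on a grid that minimizes the weighted cost."""
    return min(p_grid, key=lambda p: uplink_cost(p, bits, bandwidth, gain, noise, theta))
```

Raising the power shortens the transmission but increases the energy drawn, so the minimizing power typically lies strictly inside the feasible range rather than at either extreme.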

Abstract [sv]

Framväxten av sakernas Internet (IoT, Internet of Things) och 5G-nät begränsas av tjänstekvaliteten i molnet, men kantberäkningar kan adressera dessa problem. Maskininlärning (ML) kommer att bli avgörande för att bearbeta IoT-genererad data vid kanten av nätet, främst genom att använda distribuerade optimeringsalgoritmer för prediktion. Dagens ML-modeller kräver dock stora beräknings- och kommunikationsresurser som ofta överstiger kapaciteten hos enskilda trådlösa enheter. Dessutom kräver träningen av dessa modeller vanligtvis centraliserad åtkomst till stora datamängder, men överföringen av denna data till molnet har betydande kommunikationskostnader, vilket är en kritisk utmaning för att driva resursbegränsade system. Federerad inlärning (FL) är en lovande iterativ ML-metod som minskar kommunikationskostnaderna genom att genomföra lokala beräkningar på lokalt tillgänglig data på enheterna och endast dela modellparametrar med en central server. Varje iteration i FL har vissa kostnader när det gäller beräkningar, latens, bandbredd och energi. Även om FL möjliggör distribuerad inlärning över flera enheter utan att utbyta rådata, begränsas metoden i praktiken av den trådlösa kommunikationstekniken, t.ex. trafikstockningar i nätet och energibegränsningar i enheterna. För att adressera dessa problem presenterar denna avhandling kostnadseffektiva metoder för att göra FL-träning mer effektiv i resursbegränsade trådlösa miljöer.

Inledningsvis löser vi forskningsproblem relaterade till distribuerad inlärning över trådlösa nätverk med fokus på hur annan datatrafik och kommunikationslatensen begränsar FL-iterationerna. Vi introducerar den kostnadsmedvetna kausala FL-algoritmen FedCau som balanserar träningsprestanda mot kommunikations- och beräkningskostnader. En viktig del av lösningen är en ny termineringsmetod som tar bort det tidigare behovet av att ha information om framtida beräkningar vid termineringen. Ett flermålsoptimeringsproblem formuleras för att integrera FL-kostnader med kommunikation som genomförs med ALOHA-, CSMA/CA- eller OFDMA-protokollen. Ramverket omfattar både konvexa och icke-konvexa förlustfunktioner och resultaten jämförs med etablerade kommunikationseffektiva metoder, inklusive Lazily Aggregated Quantized Gradient (LAQ). Dessutom utvecklar vi A-LAQ (adaptiv LAQ) som sparar energi samtidigt som hög ML-noggrannhet bibehålls genom att dynamiskt justera bitallokeringen för de lokala modelluppdateringarna under FL-iterationerna.

Därefter analyserar vi hur cellfri massiv multiple-input multiple-output-teknik (CFmMIMO) kan användas för att hantera den höga kommunikationslatens som annars uppstår när storskaliga modeller tränas genom FL. Denna nya nätarkitektur består av många samarbetande basstationer, vilket möjliggör att många användare kan skicka modelluppdateringar samtidigt på samma frekvenser genom rumslig multiplexering, vilket drastiskt minskar latensen. Vi föreslår nya upplänkseffektregleringsscheman som optimerar avvägningen mellan energiförbrukning och latens. Denna lösning möjliggör fler FL-iterationer under givna energi- och latensbegränsningar och leder till betydande vinster i FL-testnoggrannheten. Vi presenterar tre tillvägagångssätt, varav det första är en metod som minimerar en matematisk avvägning mellan varje användares upplänkslatens och energiförbrukning. Metoden tar hänsyn till hur de individuella sändningseffekterna påverkar andra användares energi och latens för att gemensamt minska den totala energiförbrukningen och FL-träningsfördröjningen. Vårt andra bidrag är en metod för att hantera eftersläpningseffekter genom ett adaptivt kvantiseringsschema med blandad upplösning för de lokala gradientuppdateringarna. I detta schema används hög kvantiseringsupplösning endast för viktiga variabler och vi använder även dynamisk effektreglering. Slutligen introducerar vi EFCAQ som är en energieffektiv FL-metod för CFmMIMO-nätverk. EFCAQ kombinerar ett nytt adaptivt kvantiseringsschema med att samoptimera eftersläpningseffekten och användarens totala energiförbrukning så att FL-förlustfunktionen minimeras genom att använda ett adaptivt antal lokala iterationer hos varje användare.

Genom omfattande teoretisk analys och experimentell validering visar denna avhandling att de föreslagna metoderna överträffar tidigare kända algoritmer i olika FL-scenarier och för olika datauppsättningar. Våra bidrag banar väg för energieffektiva FL-system med låg latens, vilket gör dem mer praktiska för användning i verkliga trådlösa nätverk.

Place, publisher, year, edition, pages
Stockholm: Kungliga Tekniska högskolan, 2025, p. xv, 123
Series
TRITA-EECS-AVL ; 2025:13
Keywords [en]
Federated Learning, Optimization, Cell-free massive MIMO, Resource allocation, Energy, Latency
Keywords [sv]
Federerad inlärning, Optimering, Cell-fri massiv MIMO, Resursallokering, Energieffektivitet, Latens
National Category
Communication Systems
Research subject
Electrical Engineering
Identifiers
URN: urn:nbn:se:kth:diva-358334, ISBN: 978-91-8106-165-9 (print), OAI: oai:DiVA.org:kth-358334, DiVA id: diva2:1927535
Public defence
2025-02-10, https://kth-se.zoom.us/j/69502080036, Ka-sal C, Kistagången 16, Stockholm, 13:00 (English)
Opponent
Supervisors
Note

QC 20250115

Available from: 2025-01-15 Created: 2025-01-15 Last updated: 2025-02-18. Bibliographically approved
List of papers
1. Machine Learning over Networks: Co-design of Distributed Optimization and Communications
2020 (English). In: Proceedings of the 21st IEEE International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2020, Institute of Electrical and Electronics Engineers (IEEE), 2020. Conference paper, Published paper (Refereed)
Abstract [en]

This paper considers a general class of iterative algorithms performing a distributed training task over a network where the nodes have background traffic and communicate through a shared wireless channel. Focusing on the carrier-sense multiple access with collision avoidance (CSMA/CA) as the main communication protocol, we investigate the mini-batch size and convergence of the training algorithm as a function of the communication protocol and network settings. We show that, given a total latency budget to run the algorithm, the training performance becomes worse as either the background traffic or the dimension of the training problem increases. We then propose a lightweight algorithm to regulate the network congestion at every node, based on local queue size with no explicit signaling with other nodes, and demonstrate the performance improvement due to this algorithm. We conclude that a co-design of distributed optimization algorithms and communication protocols is essential for the success of machine learning over wireless networks and edge computing.
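The paper's local, signaling-free congestion regulation can be pictured with a small sketch. This is a hypothetical illustration of the idea only, not the paper's actual algorithm: each node looks at nothing but its own queue length and thins its mini-batch work (and defers transmission) as the queue fills.

```python
def regulate(queue_len, max_queue, base_batch):
    """Illustrative local congestion rule (assumption, not the paper's scheme):
    scale down the mini-batch as the local queue fills and defer transmission
    when the queue saturates, using only node-local information."""
    backlog = min(queue_len / max_queue, 1.0)           # normalized congestion level
    batch = max(1, int(base_batch * (1.0 - backlog)))   # thin work under congestion
    transmit = queue_len < max_queue                    # defer when saturated
    return batch, transmit
```

Because the rule reads only the node's own queue, it needs no explicit signaling between nodes, matching the constraint described in the abstract.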

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2020
Series
IEEE International Workshop on Signal Processing Advances in Wireless Communications, ISSN 2325-3789
Keywords
Distributed optimization, machine learning, efficient algorithm, latency, CSMA/CA
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-292378 (URN), 10.1109/SPAWC48557.2020.9154264 (DOI), 000620337500062 (), 2-s2.0-85090398486 (Scopus ID)
Conference
21st IEEE International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2020; Atlanta; United States; 26 May 2020 through 29 May 2020
Note

QC 20230307

Available from: 2021-04-14 Created: 2021-04-14 Last updated: 2025-01-15. Bibliographically approved
2. FedCau: A Proactive Stop Policy for Communication and Computation Efficient Federated Learning
2024 (English). In: IEEE Transactions on Wireless Communications, ISSN 1536-1276, E-ISSN 1558-2248, Vol. 23, no 9, p. 11076-11093. Article in journal (Refereed), Published
Abstract [en]

This paper investigates efficient distributed training of a Federated Learning (FL) model over a network of wireless devices. The communication iterations of the distributed training algorithm may be substantially deteriorated or even blocked by the devices' background traffic, packet losses, congestion, or latency. We abstract these communication-computation impacts as an 'iteration cost' and propose a cost-aware causal FL algorithm (FedCau) to tackle this problem. We propose an iteration-termination method that trades off the training performance against the networking costs. We apply our approach when workers use the slotted-ALOHA, carrier-sense multiple access with collision avoidance (CSMA/CA), and orthogonal frequency-division multiple access (OFDMA) protocols. We show that, given a total cost budget, the training performance degrades as either the background communication traffic or the dimension of the training problem increases. Our results demonstrate the importance of proactively designing optimal, cost-efficient stopping criteria to avoid spending unnecessary communication-computation costs on marginal FL training improvements. We validate our method by training and testing FL over the MNIST and CIFAR-10 datasets. Finally, we apply our approach to existing communication-efficient FL methods from the literature, achieving further efficiency. We conclude that cost-efficient stopping criteria are essential for the success of practical FL over wireless networks.
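A causal stopping rule of the kind FedCau motivates can be sketched as follows. This is a simplified illustration under the assumption that the rule compares the last observed loss improvement against the weighted cost of one more communication round; FedCau's actual criterion is derived from the paper's multi-objective formulation, and the names below are hypothetical.

```python
def should_stop(loss_history, cost_per_iter, budget_weight=1.0):
    """Causal stopping sketch (illustrative, not FedCau's exact criterion):
    stop when the most recent loss improvement no longer justifies the cost
    of another round, using past observations only (no future information)."""
    if len(loss_history) < 2:
        return False  # not enough history to judge the trend yet
    improvement = loss_history[-2] - loss_history[-1]
    return improvement < budget_weight * cost_per_iter
```

The key property matching the abstract is causality: the decision at round t uses only losses observed up to round t, never a forecast of future training behavior.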

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Costs, Training, Wireless networks, Protocols, Optimization, Machine learning algorithms, Resource management, Federated learning, communication protocols, cost-efficient algorithm, latency, unfolding federated learning
National Category
Telecommunications
Identifiers
urn:nbn:se:kth:diva-354333 (URN), 10.1109/TWC.2024.3378351 (DOI), 001312963400083 (), 2-s2.0-85189318899 (Scopus ID)
Note

QC 20241004

Available from: 2024-10-04 Created: 2024-10-04 Last updated: 2025-01-15. Bibliographically approved
3. A-LAQ: Adaptive Lazily Aggregated Quantized Gradient
2022 (English). In: 2022 IEEE GLOBECOM Workshops, GC Wkshps 2022: Proceedings, Institute of Electrical and Electronics Engineers (IEEE), 2022, p. 1828-1833. Conference paper, Published paper (Refereed)
Abstract [en]

Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients. In FL, to reduce the communication overhead between clients and the server, each client communicates its local FL parameters instead of its local data. However, when a wireless network connects the clients and the server, the clients' communication resource limitations may prevent the FL training from completing. Therefore, communication-efficient variants of FL have been widely investigated. Lazily Aggregated Quantized Gradient (LAQ) is one of the promising communication-efficient approaches to lower resource usage in FL. However, LAQ assigns a fixed number of bits for all iterations, which may be communication-inefficient when the number of iterations is medium to high or convergence is approaching. This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), a method that significantly extends LAQ by assigning an adaptive number of communication bits during the FL iterations. We train FL under an energy constraint and provide a convergence analysis for A-LAQ. The experimental results highlight that A-LAQ outperforms LAQ with up to a 50% reduction in spent communication energy and an 11% increase in test accuracy.
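The adaptive-bit idea behind A-LAQ can be illustrated with a minimal sketch. The linear bit schedule and the uniform quantizer below are assumptions chosen for exposition only; A-LAQ's actual allocation rule and quantizer are derived in the paper.

```python
def adaptive_bits(iteration, total_iters, b_max=8, b_min=2):
    """Illustrative schedule (an assumption, not A-LAQ's rule): spend more
    bits early in training and fewer as convergence approaches."""
    frac = iteration / max(total_iters - 1, 1)
    return round(b_max - (b_max - b_min) * frac)

def quantize(vec, bits):
    """Uniform quantization of a vector onto 2**bits levels over its range."""
    lo, hi = min(vec), max(vec)
    if hi == lo:
        return list(vec)  # constant vector: nothing to quantize
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return [lo + round((v - lo) / step) * step for v in vec]
```

Sending `adaptive_bits(t, T)` bits per entry at iteration t then spends the communication budget where it helps most, which is the effect the abstract's energy savings rely on.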

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2022
Keywords
adaptive transmission, communication bits, edge learning, Federated learning, LAQ
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-333440 (URN), 10.1109/GCWkshps56602.2022.10008580 (DOI), 2-s2.0-85146892229 (Scopus ID)
Conference
2022 IEEE GLOBECOM Workshops, GC Wkshps 2022, Virtual, Online, Brazil, Dec 4 2022 - Dec 8 2022
Note

Part of ISBN 9781665459754

QC 20230802

Available from: 2023-08-02 Created: 2023-08-02 Last updated: 2025-01-15. Bibliographically approved
4. Low-Latency and Energy-Efficient Federated Learning over Cell-Free Networks: A Trade-off Analysis
(English). Manuscript (preprint) (Other academic)
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-358332 (URN)
Note

Submitted to the IEEE Open Journal of the Communications Society

QC 20250115

Available from: 2025-01-15 Created: 2025-01-15 Last updated: 2025-01-15
5. Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks
2024 (English). Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Federated learning (FL) is a distributed learning framework where users train a global model by exchanging local model updates with a server instead of raw datasets, preserving data privacy and reducing communication overhead. However, the latency grows with the number of users and the model size, impeding successful FL over traditional wireless networks with orthogonal access. Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising solution for serving numerous users on the same time/frequency resource with similar rates. This architecture greatly reduces uplink latency through spatial multiplexing but does not take application characteristics into account. In this paper, we co-optimize the physical layer with the FL application to mitigate the straggler effect. We introduce a novel adaptive mixed-resolution quantization scheme for the local gradient vector updates, in which only the most essential entries are given high resolution. Thereafter, we propose a dynamic uplink power control scheme to manage the varying user rates and mitigate the straggler effect. The numerical results demonstrate that the proposed method achieves test accuracy comparable to classic FL while reducing communication overhead by at least 93% on the CIFAR-10, CIFAR-100, and Fashion-MNIST datasets. We compare our methods against AQUILA, Top-q, and LAQ, using the max-sum rate and Dinkelbach power control schemes. Our approach reduces the communication overhead by 75% and achieves 10% higher test accuracy than these benchmarks within a constrained total latency budget.
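The mixed-resolution idea can be sketched as follows. The symmetric uniform quantizer and the parameters `k`, `hi_bits`, and `lo_bits` are illustrative assumptions, not the paper's exact scheme: the k largest-magnitude gradient entries keep high resolution while the rest are coarsely quantized, shrinking the bits sent per update.

```python
def mixed_resolution(grad, k, hi_bits=8, lo_bits=2):
    """Sketch of mixed-resolution quantization (assumption for illustration):
    high-resolution quantization for the k largest-magnitude entries, coarse
    quantization for the rest; returns the quantized vector and total bits."""
    order = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)
    hi_set = set(order[:k])
    scale = max(abs(v) for v in grad) or 1.0

    def q(v, bits):
        levels = (1 << (bits - 1)) - 1          # symmetric mid-tread quantizer
        return round(v / scale * levels) / levels * scale

    out = [q(v, hi_bits if i in hi_set else lo_bits) for i, v in enumerate(grad)]
    total_bits = k * hi_bits + (len(grad) - k) * lo_bits
    return out, total_bits
```

With a small k, most entries cost only `lo_bits` each, which is where the communication-overhead reduction reported in the abstract comes from.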

National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-358331 (URN)
Conference
IEEE Global Communications Conference, 8–12 December 2024, Cape Town, South Africa
Note

QC 20250115

Available from: 2025-01-15 Created: 2025-01-15 Last updated: 2025-01-15. Bibliographically approved
6. Accelerating Energy-Efficient Federated Learning in Cell-Free Networks with Adaptive Quantization
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Federated Learning (FL) enables clients to share learning parameters instead of local data, reducing communication overhead. Traditional wireless networks face latency challenges with FL. In contrast, Cell-Free Massive MIMO (CFmMIMO) can serve multiple clients on shared resources, boosting spectral efficiency and reducing latency for large-scale FL. However, the clients' communication resource limitations can hinder the completion of FL training. To address this challenge, we propose an energy-efficient, low-latency FL framework featuring optimized uplink power allocation for seamless client-server collaboration. Our framework employs an adaptive quantization scheme that dynamically adjusts the bit allocation for local gradient updates to reduce communication costs. We formulate a joint optimization problem covering FL model updates, local iterations, and power allocation, solved using sequential quadratic programming (SQP) to balance energy and latency. Additionally, clients use the AdaDelta method for local FL model updates, enhancing local model convergence compared to standard SGD, and we provide a comprehensive analysis of FL convergence with AdaDelta local updates. Numerical results show that, within the same energy and latency budgets, our power allocation scheme outperforms the Dinkelbach and max-sum rate methods by increasing the test accuracy by up to 7% and 19%, respectively. Moreover, for all three power allocation methods, our proposed quantization scheme outperforms AQUILA and LAQ by increasing the test accuracy by up to 36% and 35%, respectively.
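The AdaDelta update used for the local steps is a standard method (Zeiler, 2012) that rescales each gradient by running averages of squared gradients and squared updates, so no global learning rate needs tuning. A scalar version with the common default hyperparameters looks like this; it is a generic sketch of AdaDelta, not the paper's full local-update procedure:

```python
import math

def adadelta_step(x, grad, acc_g, acc_dx, rho=0.95, eps=1e-6):
    """One AdaDelta update for a scalar parameter x given its gradient.

    acc_g and acc_dx carry the running averages E[g^2] and E[dx^2] between
    calls; rho and eps are the common defaults from Zeiler (2012).
    """
    acc_g = rho * acc_g + (1 - rho) * grad * grad           # update E[g^2]
    dx = -math.sqrt(acc_dx + eps) / math.sqrt(acc_g + eps) * grad
    acc_dx = rho * acc_dx + (1 - rho) * dx * dx             # update E[dx^2]
    return x + dx, acc_g, acc_dx
```

Iterating this step on a simple quadratic loss drives the parameter toward the minimum without any hand-set step size, which is the convergence behavior the abstract's analysis studies in the FL setting.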

National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-358333 (URN), 10.48550/arXiv.2412.20785 (DOI)
Note

QC 20250115

Available from: 2025-01-15 Created: 2025-01-15 Last updated: 2025-01-15. Bibliographically approved

Open Access in DiVA

fulltext (3818 kB), 292 downloads
File information
File name: FULLTEXT01.pdf, File size: 3818 kB, Checksum: SHA-512
35299488eb8a5d2aaaf016251efe9f676b6910490ca2e1518a2d99682185043b0b629bcc14e6b3a6397df621426f9915273e6b97658ef5e3c29108446bba1d33
Type: fulltext, Mimetype: application/pdf

Search in DiVA

By author/editor
Mahmoudi Benhangi, Afsaneh
By organisation
Communication Systems, CoS
Communication Systems

Search outside of DiVA

Google, Google Scholar
Total: 292 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 3027 hits