Digitala Vetenskapliga Arkivet

ADF-SL: An Adaptive and Fair Scheme for Smart Learning Task Distribution
Ministry of Higher Education and Scientific Research, Iraq.
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0003-3128-191X
2025 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 13, p. 122928-122942. Article in journal (Refereed). Published.
Abstract [en]

Split Learning (SL) is an emerging decentralized paradigm that enables numerous participants to train a deep neural network without disclosing sensitive information, such as patient data, in fields such as healthcare. In healthcare, SL enables distributed training across a variety of medical devices, hospitals, and organizations, improving model robustness while maintaining patient confidentiality. However, training models within SL is affected by data heterogeneity and sensitivity, and often requires more computational resources than an individual data provider can afford. Differences in data distributions between clients can therefore cause significant model divergence and decreased performance. To address this issue, we propose a framework that integrates fairness and adaptivity considerations, called ADF-SL. In particular, ADF-SL dynamically adjusts the total number of clients involved in model training and the number of iterations required to achieve convergence, without compromising participant privacy. To evaluate performance, we compare the effectiveness of ADF-SL with that of the naive (Vanilla) SL approach, SplitFed, and FairFed. Extensive experiments performed on time-series electrocardiogram (ECG) databases (MITDB, SVDB, and INCARTDB) indicate that ADF-SL significantly outperforms the three baseline algorithms. ADF-SL accelerates model training on clients by up to 22.7%, 10.4%, and 5.8% relative to Vanilla SL, SplitFed, and FairFed, respectively, while maintaining model convergence and accuracy. Furthermore, an ablation study confirmed the importance of ADF-SL's decay enrichment, which outperformed non-decay ADF-SL on each of the datasets by up to 15.8%, 43.9%, and 7.6%, respectively.
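The adaptive mechanism described above (shrinking the number of participating clients and local iterations over training rounds via a decay term) could be sketched roughly as below. The function name, the exponential decay form, and all parameters are illustrative assumptions for this record page, not the paper's actual formulation.

```python
import math

def adaptive_schedule(round_idx, total_clients, base_iters,
                      decay_rate=0.1, min_clients=2, min_iters=1):
    """Illustrative decayed schedule (assumed form, not ADF-SL's exact rule):
    early rounds use the full cohort and iteration budget; both shrink
    exponentially as training progresses, floored at sensible minimums."""
    factor = math.exp(-decay_rate * round_idx)
    n_clients = max(min_clients, round(total_clients * factor))
    n_iters = max(min_iters, round(base_iters * factor))
    return n_clients, n_iters

# Round 0 uses everything; a late round falls back to the floors.
print(adaptive_schedule(0, total_clients=10, base_iters=20))   # (10, 20)
print(adaptive_schedule(30, total_clients=10, base_iters=20))  # (2, 1)
```

A non-decay variant would simply return `(total_clients, base_iters)` every round; the ablation result quoted above suggests the decayed schedule is the better-performing choice.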

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025. Vol. 13, p. 122928-122942
Keywords [en]
Artificial intelligence, efficiency, fairness, layer distribution, split learning, Deep learning, Electrocardiograms, Learning systems, Medical computing, Decentralised, Layer distributions, Learning tasks, Model training, Neural-networks, Performance, Sensitive informations, Task distribution, Health care
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:bth-28486
DOI: 10.1109/ACCESS.2025.3586544
ISI: 001531849600040
Scopus ID: 2-s2.0-105010185357
OAI: oai:DiVA.org:bth-28486
DiVA, id: diva2:1988488
Part of project
HINTS - Human-Centered Intelligent Realities
Funder
Knowledge Foundation, 20220068
Available from: 2025-08-12 Created: 2025-08-12 Last updated: 2025-09-30 Bibliographically approved

Open Access in DiVA

fulltext(2553 kB)62 downloads
File information
File name: FULLTEXT01.pdf File size: 2553 kB Checksum: SHA-512
e5de9ff2d8dee026ba3891d1e20ece3cc97cf762e567d5545e4f614d96ffaccafd9a692783888a52370636fe36aea6f1dbf7a1bad1ae048f8d67d5d4e0a997a2
Type: fulltext Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Boeva, Veselka
By organisation
Department of Computer Science
In the same journal
IEEE Access
Computer Sciences

Search outside of DiVA

Google
Google Scholar
Total: 62 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 591 hits