Show simple item record

dc.contributor.authorElbir, Ahmet M.
dc.contributor.authorColeri, Sinem
dc.contributor.authorPapazafeiropoulos, Anastasios K.
dc.contributor.authorKourtessis, Pandelis
dc.contributor.authorChatzinotas, Symeon
dc.date.accessioned2023-09-20T11:45:01Z
dc.date.available2023-09-20T11:45:01Z
dc.date.issued2022-09-01
dc.identifier.citationElbir, A. M., Coleri, S., Papazafeiropoulos, A. K., Kourtessis, P. & Chatzinotas, S. 2022, 'A Hybrid Architecture for Federated and Centralized Learning', IEEE Transactions on Cognitive Communications and Networking, vol. 8, no. 3, pp. 1529-1542. https://doi.org/10.1109/TCCN.2022.3181032
dc.identifier.otherORCID: /0000-0003-1841-6461/work/142860206
dc.identifier.urihttp://hdl.handle.net/2299/26701
dc.description© 2022 IEEE. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1109/TCCN.2022.3181032
dc.description.abstractMany machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS), entailing a huge communication overhead. To overcome this, federated learning (FL) has been suggested as a promising tool, wherein the clients send only the model updates to the PS instead of the whole dataset. However, FL demands powerful computational resources from the clients, and in practice not all clients have sufficient computational resources to participate in training. To address this common scenario, we propose a more efficient approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which computes the model on their behalf. Then, the model parameters are aggregated at the PS. To improve the efficiency of dataset transmission, we propose two different techniques: i) increased computation-per-client and ii) sequential data transmission. Notably, the HFCL frameworks outperform FL with up to a 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all the clients collaborate on the learning process with their datasets.en
dc.format.extent14
dc.format.extent6060644
dc.language.isoeng
dc.relation.ispartofIEEE Transactions on Cognitive Communications and Networking
dc.subjectBandwidth
dc.subjectcentralized learning
dc.subjectCollaborative work
dc.subjectComputational modeling
dc.subjectComputer architecture
dc.subjectData models
dc.subjectedge efficiency
dc.subjectedge intelligence
dc.subjectfederated learning
dc.subjectInternet of Things
dc.subjectMachine learning
dc.subjectTraining
dc.subjectHardware and Architecture
dc.subjectComputer Networks and Communications
dc.subjectArtificial Intelligence
dc.titleA Hybrid Architecture for Federated and Centralized Learningen
dc.contributor.institutionDepartment of Engineering and Technology
dc.contributor.institutionSchool of Physics, Engineering & Computer Science
dc.contributor.institutionCommunications and Intelligent Systems
dc.contributor.institutionCentre for Engineering Research
dc.contributor.institutionCentre for Climate Change Research (C3R)
dc.contributor.institutionSPECS Deans Group
dc.contributor.institutionOptical Networks
dc.contributor.institutionCentre for Computer Science and Informatics Research
dc.contributor.institutionCentre for Future Societies Research
dc.description.statusPeer reviewed
dc.date.embargoedUntil2025-06-08
dc.identifier.urlhttp://www.scopus.com/inward/record.url?scp=85131765961&partnerID=8YFLogxK
rioxxterms.versionofrecord10.1109/TCCN.2022.3181032
rioxxterms.typeJournal Article/Review
herts.preservation.rarelyaccessedtrue
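
The following is a minimal, illustrative sketch of the HFCL scheme summarised in the abstract above, assuming a simple linear least-squares model and an even split between resource-rich and resource-limited clients; the single gradient step, learning rate, and plain averaging rule are placeholder choices, not the authors' exact algorithm.

import numpy as np

def local_update(weights, data, labels, lr=0.1):
    # One gradient step of a linear least-squares model on one client's dataset.
    grad = data.T @ (data @ weights - labels) / len(labels)
    return weights - lr * grad

rng = np.random.default_rng(0)
dim, n_clients = 5, 8
w_true = rng.normal(size=dim)                        # synthetic ground-truth model
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(20, dim))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=20)))

fl_clients = clients[: n_clients // 2]               # resource-rich clients: train locally (FL side)
cl_clients = clients[n_clients // 2 :]               # resource-limited clients: datasets uploaded to the PS (CL side)

w_global = np.zeros(dim)
for _ in range(200):                                 # communication rounds
    updates = [local_update(w_global, X, y) for X, y in fl_clients]   # computed at the clients
    updates += [local_update(w_global, X, y) for X, y in cl_clients]  # computed at the PS on uploaded data
    w_global = np.mean(updates, axis=0)              # PS aggregates all model parameters

print(np.linalg.norm(w_global - w_true))             # should approach zero

In the paper, the clients that upload their datasets would additionally use the increased computation-per-client or sequential data transmission techniques to ease the upload burden; neither is modelled in this sketch.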

