dc.contributor.author | Elbir, Ahmet M. | |
dc.contributor.author | Coleri, Sinem | |
dc.contributor.author | Papazafeiropoulos, Anastasios K. | |
dc.contributor.author | Kourtessis, Pandelis | |
dc.contributor.author | Chatzinotas, Symeon | |
dc.date.accessioned | 2023-09-20T11:45:01Z | |
dc.date.available | 2023-09-20T11:45:01Z | |
dc.date.issued | 2022-09-01 | |
dc.identifier.citation | Elbir, A. M., Coleri, S., Papazafeiropoulos, A. K., Kourtessis, P. & Chatzinotas, S. 2022, 'A Hybrid Architecture for Federated and Centralized Learning', IEEE Transactions on Cognitive Communications and Networking, vol. 8, no. 3, pp. 1529-1542. https://doi.org/10.1109/TCCN.2022.3181032 | |
dc.identifier.other | ORCID: /0000-0003-1841-6461/work/142860206 | |
dc.identifier.uri | http://hdl.handle.net/2299/26701 | |
dc.description | © 2022 IEEE. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1109/TCCN.2022.3181032 | |
dc.description.abstract | Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS), entailing a huge communication overhead. To overcome this, federated learning (FL) has been suggested as a promising tool, wherein the clients send only the model updates to the PS instead of the whole dataset. However, FL demands powerful computational resources from the clients. In practice, not all the clients have sufficient computational resources to participate in training. To address this common scenario, we propose a more efficient approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which computes the model on their behalf. The model parameters are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two different techniques: i) increased computation-per-client and ii) sequential data transmission. Notably, the HFCL frameworks outperform FL with up to 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all the clients collaborate on the learning process with their datasets. | en |
dc.format.extent | 14 | |
dc.format.extent | 6060644 | |
dc.language.iso | eng | |
dc.relation.ispartof | IEEE Transactions on Cognitive Communications and Networking | |
dc.subject | Bandwidth | |
dc.subject | centralized learning | |
dc.subject | Collaborative work | |
dc.subject | Computational modeling | |
dc.subject | Computer architecture | |
dc.subject | Data models | |
dc.subject | edge efficiency | |
dc.subject | edge intelligence | |
dc.subject | federated learning | |
dc.subject | Internet of Things | |
dc.subject | Machine learning | |
dc.subject | Training | |
dc.subject | Hardware and Architecture | |
dc.subject | Computer Networks and Communications | |
dc.subject | Artificial Intelligence | |
dc.title | A Hybrid Architecture for Federated and Centralized Learning | en |
dc.contributor.institution | Department of Engineering and Technology | |
dc.contributor.institution | School of Physics, Engineering & Computer Science | |
dc.contributor.institution | Communications and Intelligent Systems | |
dc.contributor.institution | Centre for Engineering Research | |
dc.contributor.institution | Centre for Climate Change Research (C3R) | |
dc.contributor.institution | SPECS Deans Group | |
dc.contributor.institution | Optical Networks | |
dc.contributor.institution | Centre for Computer Science and Informatics Research | |
dc.contributor.institution | Centre for Future Societies Research | |
dc.description.status | Peer reviewed | |
dc.date.embargoedUntil | 2025-06-08 | |
dc.identifier.url | http://www.scopus.com/inward/record.url?scp=85131765961&partnerID=8YFLogxK | |
rioxxterms.versionofrecord | 10.1109/TCCN.2022.3181032 | |
rioxxterms.type | Journal Article/Review | |
herts.preservation.rarelyaccessed | true | |