Funding: partially supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. U21A20516, 62076017, and 62141605), the Funding of Advanced Innovation Center for Future Blockchain and Privacy Computing (No. ZF226G2201), the Beihang University Basic Research Funding (No. YWF-22-L-531), the Funding (No. 22-TQ23-14-ZD-01-001), and the WeBank Scholars Program.
Abstract: Federated learning is a promising learning paradigm that allows collaborative training of models across multiple data owners without sharing their raw datasets. To enhance privacy in federated learning, multi-party computation can be leveraged for secure communication and computation during model training. This survey provides a comprehensive review of how to integrate mainstream multi-party computation techniques into diverse federated learning setups with privacy guarantees, as well as the corresponding optimization techniques for improving model accuracy and training efficiency. We also pinpoint future directions for deploying federated learning in a wider range of applications.
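As a concrete illustration of the kind of primitive this survey covers (not a method taken from the survey itself), the sketch below shows additive secret sharing, the building block behind most secure-aggregation schemes for federated learning. The helper `share_vector` and the modulus `PRIME` are hypothetical names chosen for this example, and updates are assumed to be quantized to non-negative integers.

```python
import numpy as np

PRIME = 2**31 - 1  # public modulus; all shares live in Z_PRIME

def share_vector(update, n_parties, rng):
    """Additively secret-share a quantized update into n_parties shares.

    Any subset of n_parties - 1 shares is uniformly random; only the
    sum of all shares (mod PRIME) reveals the update.
    """
    shares = [rng.integers(0, PRIME, size=update.shape, dtype=np.int64)
              for _ in range(n_parties - 1)]
    shares.append((update - sum(shares)) % PRIME)
    return shares

# Toy round with three clients: each shares its update, every party
# sums the shares it received, and the partial sums are combined.
rng = np.random.default_rng(0)
updates = [np.array([3, 1, 4]), np.array([1, 5, 9]), np.array([2, 6, 5])]
shares = [share_vector(u, n_parties=3, rng=rng) for u in updates]
partials = [sum(shares[i][j] for i in range(3)) % PRIME for j in range(3)]
aggregate = sum(partials) % PRIME
assert np.array_equal(aggregate, sum(updates) % PRIME)  # only the sum leaks
```

Because shares are combined only by modular addition, the same pattern works whether a central server or the parties themselves perform the aggregation.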
Funding: supported by the National Key R&D Program of China (2022YFB3102100), the Shenzhen Fundamental Research Program (JCYJ20220818102414030), the Major Key Project of PCL (PCL2022A03), the Shenzhen Science and Technology Program (ZDSYS20210623091809029), and the Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies (2022B1212010005).
Abstract: To solve the data island problem, federated learning (FL) provides a solution paradigm in which each client sends its model parameters, but not its data, to a server for model aggregation. Peer-to-peer (P2P) federated learning further improves the robustness of the system: there is no server, and each client communicates directly with the others. For secure aggregation, secure multi-party computation (SMPC) protocols have been used in a peer-to-peer manner. However, ideal SMPC protocols can fail when some clients drop out. In this paper, we propose a robust peer-to-peer learning (RP2PL) algorithm via SMPC that resists clients dropping out. We improve the segment-based SMPC protocol by adding a check and by designing the generation method for the random segments. In RP2PL, each client aggregates the models via the improved robust secure multi-party computation protocol once it finishes local training. Experimental results demonstrate that the RP2PL paradigm can mitigate client dropout with no significant degradation in performance.
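The abstract does not spell out the improved segment-based protocol, but its flavor can be sketched under stated assumptions: each client splits its quantized update into random additive segments, tags each segment with a digest of the whole ordered segment set (the "check"), and peers aggregate only those clients whose complete, consistent segment set survived the round. Everything below (`make_segments`, `aggregate_surviving`, the SHA-256 check) is a hypothetical minimal sketch, not RP2PL itself.

```python
import hashlib
import numpy as np

PRIME = 2**31 - 1  # public modulus shared by all peers

def make_segments(update, n_peers, rng):
    """Split a quantized update into n_peers random additive segments.

    Every segment carries a digest of the full ordered segment set, so
    a peer can tell a complete set apart from one hit by a dropout.
    """
    segments = [rng.integers(0, PRIME, size=update.shape, dtype=np.int64)
                for _ in range(n_peers - 1)]
    segments.append((update - sum(segments)) % PRIME)
    check = hashlib.sha256(b"".join(s.tobytes() for s in segments)).hexdigest()
    return [(seg, check) for seg in segments]

def aggregate_surviving(received):
    """Sum the updates of clients whose full segment set arrived.

    `received` maps client id -> ordered list of (segment, check) pairs
    gathered from the peer network; incomplete sets are excluded, which
    is how dropped-out clients are tolerated.
    """
    total = 0
    for client_id, pairs in received.items():
        segs = [seg for seg, _ in pairs]
        digest = hashlib.sha256(b"".join(s.tobytes() for s in segs)).hexdigest()
        if all(check == digest for _, check in pairs):  # complete and consistent
            total = (total + sum(segs)) % PRIME
    return total

# Toy round: three clients, one of which drops out mid-exchange.
rng = np.random.default_rng(1)
updates = {"c0": np.array([1, 0]), "c1": np.array([2, 2]), "c2": np.array([9, 9])}
received = {cid: make_segments(u, n_peers=3, rng=rng) for cid, u in updates.items()}
received["c2"] = received["c2"][:-1]  # c2's last segment never arrives
agg = aggregate_surviving(received)
assert np.array_equal(agg, (updates["c0"] + updates["c1"]) % PRIME)
```

In a real P2P deployment each segment would be sent to a different peer, so no single party ever holds a client's whole segment set; the sketch keeps the sets together only to make the dropout check easy to demonstrate.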