Better Privacy Guarantees for Decentralized Federated Learning
Fully decentralized algorithms, in which participants exchange messages in a peer-to-peer fashion along the edges of a network graph, are increasingly popular in federated learning due to their scalability and efficiency. Intuitively, decentralized algorithms should also provide better privacy guarantees, as nodes only observe the messages sent by their neighbors in the graph. But formalizing and quantifying this gain is challenging: existing results are limited to Local Differential Privacy (LDP) guarantees that overlook the advantages of decentralization. In this talk, I will introduce appropriate relaxations of differential privacy and show how they can be used to establish stronger privacy guarantees for decentralized SGD, matching the privacy-utility trade-off of centralized SGD in some settings. Interestingly, some of these algorithms amplify privacy guarantees as a function of the distance between nodes in the graph, which aligns well with the privacy expectations of users in some use cases.
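To make the setting concrete, here is a minimal sketch of decentralized SGD with local noise injection on a communication graph: each node takes a noisy gradient step on its private objective and then averages its parameters with its neighbors, so a node only ever observes its neighbors' messages. This is a generic gossip-style illustration, not the specific algorithms analyzed in the talk; the ring graph, quadratic local losses, step size, and noise scale `sigma` are all placeholder assumptions.

```python
# Sketch of gossip-style decentralized SGD with Gaussian noise (illustrative only;
# graph, losses, and noise scale are assumptions, not the talk's exact algorithm).
import numpy as np

def decentralized_noisy_sgd(graph, local_grads, x0, steps=200, lr=0.1, sigma=0.1, seed=0):
    """Each node takes a noisy local gradient step, then averages with neighbors."""
    rng = np.random.default_rng(seed)
    n = len(graph)
    xs = [x0.copy() for _ in range(n)]
    for _ in range(steps):
        # Local noisy gradient step: Gaussian noise masks each node's private
        # data in the messages its neighbors will observe (sigma is illustrative).
        for i in range(n):
            noise = rng.normal(0.0, sigma, size=xs[i].shape)
            xs[i] = xs[i] - lr * (local_grads[i](xs[i]) + noise)
        # Neighborhood averaging: node i only sees its neighbors' parameters,
        # never the messages exchanged elsewhere in the graph.
        xs = [sum(xs[j] for j in [i] + graph[i]) / (1 + len(graph[i])) for i in range(n)]
    return xs

# Example: a 4-node ring; node i minimizes ||x - c_i||^2 with private target c_i.
graph = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
targets = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
local_grads = [lambda x, c=c: 2 * (x - c) for c in targets]
final = decentralized_noisy_sgd(graph, local_grads, x0=np.zeros(1))
print([round(float(x[0]), 2) for x in final])  # nodes approach the average target 2.5
```

The sketch also hints at why privacy can improve with graph distance: a node's contribution reaches distant nodes only after being repeatedly averaged (and mixed with other nodes' noise) along the way, so faraway observers see a more diluted signal than immediate neighbors.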