Personalized Federated Learning
Learning client-specific models, or personalization, is an important goal in federated learning. In this talk, we connect meta-learning with federated optimization and show that the popular federated training algorithm FedAvg can be viewed as optimizing a personalization objective: the average performance of models fine-tuned from a shared initialization via local SGD. We formally study this algorithm within our Average Regret-Upper-Bound Analysis (ARUBA) framework, which yields average regret and excess transfer risk guarantees for learning an initialization for optimizing convex functions over non-i.i.d. data; crucially, our bounds improve with a natural notion of similarity across client data. We further leverage this framework to design new personalized federated learning algorithms, including a theoretically principled and empirically effective method for differentially private personalized federated learning.
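To make the fine-tuning objective concrete, here is one standard way to write it (a sketch; the notation $L_i$, $S_i$, and $\mathrm{SGD}_k$ is ours, not necessarily the talk's): letting $\mathrm{SGD}_k(\theta, S_i)$ denote the model obtained by running $k$ steps of SGD on client $i$'s data $S_i$ starting from initialization $\theta$, FedAvg can be read as approximately minimizing the average post-fine-tuning loss across the $n$ clients,
$$\min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} L_i\big(\mathrm{SGD}_k(\theta, S_i)\big),$$
so that the learned initialization is one from which each client's local fine-tuning performs well.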
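For intuition, a minimal Python sketch of the mechanism (a toy illustration under our own assumptions, not the talk's implementation): each client runs a few steps of local SGD from the shared initialization and the server averages the endpoints, and the same local-SGD loop doubles as the personalization step at deployment time. All names here (local_sgd, fedavg_round, sq_grad) are hypothetical.

```python
import numpy as np

def local_sgd(w_init, grad_fn, data, lr=0.05, steps=5):
    """A few steps of SGD from a shared initialization; at deployment
    this same loop is the client's personalization (fine-tuning) step."""
    w = w_init.copy()
    for x, y in data[:steps]:
        w = w - lr * grad_fn(w, x, y)
    return w

def fedavg_round(w_global, clients, grad_fn, lr=0.05, steps=5):
    """One FedAvg round: clients fine-tune locally, the server averages
    the resulting models, i.e. it updates the shared initialization."""
    local_models = [local_sgd(w_global, grad_fn, data, lr, steps)
                    for data in clients]
    return np.mean(local_models, axis=0)

# Toy non-i.i.d. setup: three clients with shifted linear-regression optima.
def sq_grad(w, x, y):  # gradient of the squared loss (w @ x - y) ** 2
    return 2.0 * (w @ x - y) * x

rng = np.random.default_rng(0)
clients = []
for shift in (-1.0, 0.0, 1.0):
    xs = rng.normal(size=(20, 3))
    w_star = np.ones(3) + shift          # client-specific ground truth
    clients.append([(x, x @ w_star) for x in xs])

w = np.zeros(3)                          # shared initialization
for _ in range(100):
    w = fedavg_round(w, clients, sq_grad)
print("learned initialization:", np.round(w, 2))
```

Averaging the endpoints of local SGD runs is a Reptile-style meta-update, which is one way to see the bridge between FedAvg and learning an initialization for fine-tuning.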