
Online Optimization Meets Federated Learning

Description

"Online Optimization Meets Federated Learning" Aadirupa Saha, Kumar Kshitij Patel

In this tutorial, we aim to cover the state-of-the-art theoretical results in (1) online and bandit convex optimization, (2) federated/distributed optimization, and (3) emerging results at their intersection.

The first part of the tutorial will focus on the online optimization setting (especially the adversarial model), the notion of regret, and different feedback models (first-order, zeroth-order, comparisons, etc.), and will analyze the performance guarantees of online gradient descent-based algorithms.

The second part will detail the distributed/federated stochastic optimization model, discussing data heterogeneity assumptions, local update algorithms, and min-max optimal algorithms. We will also underline the lack of results beyond the stochastic setting, i.e., in the presence of adaptive adversaries.

In the third and final part, we describe the emerging and very practical direction of distributed online optimization. Here we will introduce a distributed notion of regret, followed by some recent developments studying first- and zeroth-order feedback for this problem. We will conclude with many open questions, especially in distributed online optimization, and underline the various applications this framework captures.
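To make the first part concrete, here is a minimal sketch of projected online gradient descent with first-order feedback, the kind of algorithm whose regret guarantees the tutorial analyzes. The L2-ball domain, the step-size schedule eta_t = D / (G * sqrt(t)), and the function names are illustrative assumptions, not details from the tutorial itself.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto an L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(grad_fns, dim, radius=1.0, lip=1.0):
    """Play projected OGD against a sequence of adversarially chosen losses.

    grad_fns: list of callables; grad_fns[t](x) returns the (sub)gradient of
              the round-t loss at the played point x (first-order feedback).
    Returns the sequence of played points x_1, ..., x_T.
    """
    x = np.zeros(dim)
    plays = []
    for t, grad in enumerate(grad_fns, start=1):
        plays.append(x.copy())
        eta = radius / (lip * np.sqrt(t))  # standard O(1/sqrt(t)) step size
        x = project_l2_ball(x - eta * grad(x), radius)
    return plays
```

With this step size, OGD achieves regret of order D*G*sqrt(T) against any sequence of convex losses with gradients bounded by G on a domain of diameter D, which is the flavor of guarantee covered in the first part.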
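Similarly, for the second part, the following is a minimal sketch of a local update method in the Local SGD / FedAvg style for the distributed stochastic optimization model. The client gradient oracles, the number of local steps per communication round, and the plain averaging step are assumptions for illustration; data heterogeneity corresponds to the clients' oracles targeting different local objectives.

```python
import numpy as np

def local_sgd(client_grads, dim, rounds=100, local_steps=10, lr=0.1):
    """Run a Local SGD-style algorithm over M clients.

    client_grads: list of M callables; client_grads[m](x) returns a stochastic
                  gradient of client m's local objective at x.
    """
    x = np.zeros(dim)  # shared server model
    for _ in range(rounds):
        client_models = []
        for grad in client_grads:            # in practice, run in parallel
            x_m = x.copy()
            for _ in range(local_steps):     # K local steps, no communication
                x_m -= lr * grad(x_m)
            client_models.append(x_m)
        x = np.mean(client_models, axis=0)   # communicate and average
    return x
```

The tension the tutorial discusses is visible here: more local steps reduce communication, but under heterogeneous data the client iterates drift apart, which is what the min-max optimality results quantify.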
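The abstract does not spell out the distributed notion of regret used in the third part; one common formulation from the distributed online optimization literature (an assumption here) compares the average loss incurred across the M clients' iterates x_t^m to the best fixed comparator in hindsight:

```latex
\mathrm{Reg}_T \;=\; \sum_{t=1}^{T} \frac{1}{M} \sum_{m=1}^{M} f_t\!\left(x_t^m\right) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x)
```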
