For the directory, see ICML2020-Accepted List.pdf.
In that PDF, papers highlighted in YELLOW/ORANGE are included in this repository.
Papers highlighted in PINK are interesting papers that are not available online.
List of papers by topic (without replacement: each paper appears only under its most relevant topic):
- A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
- AdaScale SGD: A Scale-Invariant Algorithm for Distributed Training
- Improving the Sample and Communication Complexity for Decentralized Non-Convex Optimization: A Joint Gradient Estimation and Tracking Approach
- Is Local SGD Better than Minibatch SGD?
- On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent
- On the Noisy Gradient Descent that Generalizes as SGD
- The Complexity of Finding Stationary Points with Stochastic Gradient Descent
- Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
- Second-Order Provable Defenses against Adversarial Attacks
- Zeno++: Robust Fully Asynchronous SGD
- Almost Tune-Free Variance Reduction
- Acceleration for Compressed Gradient Descent in Distributed Optimization
- Convergence of a Stochastic Gradient Method with Momentum for Non-Smooth Non-Convex Optimization
- Momentum Improves Normalized SGD
- Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization
- Universal Average-Case Optimality of Polyak Momentum
- Communication-Efficient Federated Learning with Sketching
- Federated Learning with Only Positive Labels
- From Local SGD to Local Fixed-Point Methods for Federated Learning
- SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
- High-Dimensional Robust Mean Estimation via Gradient Descent
- Safe Screening Rules for L0-Regression
- Spectral Graph Matching and Regularized Quadratic Relaxations I: The Gaussian Model
- WONDER: Weighted One-shot Distributed Ridge Regression in High Dimensions