- An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies. 2020-04-01
- Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
- DBA: Distributed Backdoor Attacks against Federated Learning
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. ICML 2021.
- NeurIPS 2020 submission: Backdoor Attacks on Federated Meta-Learning (the trigger-backdoor idea is sketched after this list)
- Inverting Gradients - How easy is it to break Privacy in Federated Learning?
- CAFE: Catastrophic Data Leakage in Vertical Federated Learning
- Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
- A Framework for Evaluating Gradient Leakage Attacks in Federated Learning
- Gradient Inversion with Generative Image Prior. NeurIPS 2021.
- Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. NeurIPS 2021. (gradient inversion is sketched after this list)
- Analyzing Federated Learning through an Adversarial Lens
- (*) Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. 2019-11-26
- Data Poisoning Attacks on Federated Machine Learning. 2020-04-19
- NeurIPS 2020 submission: Free-rider Attacks on Model Aggregation in Federated Learning
- Free-riders in Federated Learning: Attacks and Defenses. 2019-11-28 (the free-rider trick is sketched after this list)
- Differentially Private Federated Learning: A Client Level Perspective. NIPS 2017 Workshop (the clip-and-noise recipe is sketched after this list)
- FedSel: Federated SGD under Local Differential Privacy with Top-k Dimension Selection
- LDP-Fed: Federated Learning with Local Differential Privacy
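
The sketches below illustrate the recurring techniques in this list; every function name, constant, and hyper-parameter is an assumption made for illustration, not any paper's exact method. First, the trigger-backdoor idea behind Attack of the Tails, DBA, and the meta-learning entry: a malicious client trains on trigger-stamped, relabeled data and scales its update so it survives averaging with honest clients. The data-poisoning entries follow the same training pattern with plain label flipping in place of a trigger.

```python
import copy
import torch
import torch.nn.functional as F

def backdoored_client_update(global_model, loader, trigger_mask,
                             trigger_value, target_label,
                             scale=10.0, lr=0.01, epochs=2):
    """Train on trigger-stamped data, then scale the reported delta so
    it survives averaging with honest clients ("model replacement").
    All arguments and constants here are illustrative assumptions."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.clone(), y.clone()
            n = x.size(0) // 2
            # Stamp the trigger pattern (a boolean HxW mask) onto half
            # the batch and relabel those examples to the target class.
            x[:n][..., trigger_mask] = trigger_value
            y[:n] = target_label
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    # Report a scaled (poisoned - global) parameter delta.
    return {name: scale * (p.detach() - q.detach())
            for (name, p), (_, q) in zip(model.named_parameters(),
                                         global_model.named_parameters())}
```

The scale factor roughly undoes FedAvg's averaging weight; CRFL-style certified defenses counter exactly this by clipping and smoothing the aggregate.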
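The gradient-leakage entries (Inverting Gradients, CAFE, the gradient-leakage evaluation frameworks) build on the same DLG-style core: treat the gradient a client reports as ground truth and optimize dummy data until its gradient matches. A minimal sketch, assuming the attacker sees the model and a small-batch gradient, and assuming PyTorch >= 1.10 for soft-label cross-entropy; real attacks add image priors (as in Gradient Inversion with Generative Image Prior) and stronger matching losses:

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, observed_grads, x_shape, num_classes,
                     steps=300, lr=0.1):
    """Recover an input by matching dummy gradients to observed ones.
    observed_grads: list of tensors in model.parameters() order."""
    dummy_x = torch.randn(x_shape, requires_grad=True)
    dummy_y = torch.randn(x_shape[0], num_classes, requires_grad=True)
    opt = torch.optim.Adam([dummy_x, dummy_y], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        # The gradient the dummy batch would produce...
        loss = F.cross_entropy(model(dummy_x), dummy_y.softmax(dim=-1))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # ...and its L2 distance to the gradient the client reported.
        match = sum(((g - og) ** 2).sum()
                    for g, og in zip(grads, observed_grads))
        match.backward()
        opt.step()
    return dummy_x.detach(), dummy_y.softmax(dim=-1).detach()
```

Optimizing a soft label jointly with the dummy input is the standard trick for recovering data and labels together; recovery quality drops quickly as batch size grows, which is the main axis the evaluation papers above probe.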
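The free-rider entries describe clients that want the final model without contributing data or compute. The simplest version echoes the global model back, adding noise so the reported "update" does not look identically zero. A minimal sketch; the disguised variants in the papers shape the noise to mimic plausible SGD updates instead of plain Gaussians:

```python
import torch

def free_rider_update(global_model, noise_std=1e-3):
    """Report a fake delta: no local training happened, so the honest
    delta would be zero; small noise disguises that (illustrative)."""
    return {name: noise_std * torch.randn_like(p)
            for name, p in global_model.named_parameters()}
```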
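On the defense side, the three differential-privacy entries share one mechanism: bound each client's influence by clipping the update norm, then randomize with Gaussian noise. In the client-level-perspective paper the server noises the aggregate, while FedSel and LDP-Fed perturb locally (FedSel additionally sends only the top-k dimensions). A minimal local-noising sketch with illustrative, uncalibrated constants:

```python
import torch

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1):
    """Clip the update's global L2 norm to clip_norm, then add Gaussian
    noise with std noise_multiplier * clip_norm per coordinate."""
    total_norm = torch.sqrt(sum((v ** 2).sum() for v in update.values()))
    scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
    return {name: v * scale
                  + noise_multiplier * clip_norm * torch.randn_like(v)
            for name, v in update.items()}
```

The (noise_multiplier, clip_norm) pair maps to an (epsilon, delta) guarantee via the Gaussian mechanism composed over training rounds; the values here are placeholders, not a calibrated privacy budget.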