From ded8437ab68d419cff6f5b0615a951177c060831 Mon Sep 17 00:00:00 2001
From: Enayat Ullah
Date: Thu, 3 Oct 2024 12:02:35 -0700
Subject: [PATCH] Website and Github update (#677)

Summary:
Two updates:

1. Github page: Added a line that the latest version supports fast gradient and ghost clipping.
2. Website: Removed the line about passing in custom alphas to the privacy accountant from the FAQs section of the website.

Differential Revision: D63790553
---
 README.md   | 5 +++++
 docs/faq.md | 3 ++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 5ef7651d..336564fa 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,11 @@
 
 [Opacus](https://opacus.ai) is a library that enables training PyTorch models with differential privacy. It supports training with minimal code changes required on the client, has little impact on training performance, and allows the client to online track the privacy budget expended at any given moment.
 
+## News
+**August, 2024**: The latest release supports Fast Gradient Clipping and Ghost Clipping (details in the [blogpost](https://pytorch.org/blog/clipping-in-opacus/)) to enable memory-efficient differentially private training of models. Feel free to try it out and share your [feedback](https://github.com/pytorch/opacus/issues).
+
+
+
 ## Target audience
 This code release is aimed at two target audiences:
 1. ML practitioners will find this to be a gentle introduction to training a model with differential privacy as it requires minimal code changes.

diff --git a/docs/faq.md b/docs/faq.md
index 1de387a8..ea12bebe 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -108,7 +108,8 @@ Opacus computes and stores *per-sample* gradients under the hood. What this mean
 
 Although we report expended privacy budget using the (epsilon, delta) language, internally, we track it using Rényi Differential Privacy (RDP) [[Mironov 2017](https://arxiv.org/abs/1702.07476), [Mironov et al. 2019](https://arxiv.org/abs/1908.10530)]. In short, (alpha, epsilon)-RDP bounds the [Rényi divergence](https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy#R%C3%A9nyi_divergence) of order alpha between the distribution of the mechanism’s outputs on any two datasets that differ in a single element. An (alpha, epsilon)-RDP statement is a relaxation of epsilon-DP but retains many of its important properties that make RDP particularly well-suited for privacy analysis of DP-SGD. The `alphas` parameter instructs the privacy engine what RDP orders to use for tracking privacy expenditure.
 
-When the privacy engine needs to bound the privacy loss of a training run using (epsilon, delta)-DP for a given delta, it searches for the optimal order from among `alphas`. There’s very little additional cost in expanding the list of orders. We suggest using a list `[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))`. You can pass your own alphas by passing `alphas=custom_alphas` when calling `privacy_engine.make_private_with_epsilon`.
+When the privacy engine needs to bound the privacy loss of a training run using (epsilon, delta)-DP for a given delta, it searches for the optimal order from among `alphas`. There’s very little additional cost in expanding the list of orders. We suggest using a list `[1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))`.
+
 A call to `privacy_engine.get_epsilon(delta=delta)` returns a pair: an epsilon such that the training run satisfies (epsilon, delta)-DP and an optimal order alpha.
 An easy diagnostic to determine whether the list of `alphas` ought to be expanded is whether the returned value alpha is one of the two boundary values of `alphas`.
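
As context for the README addition above, here is a minimal sketch of how the memory-efficient modes are enabled, following the linked blogpost: with `grad_sample_mode="ghost"`, `make_private` takes the loss criterion as an argument and returns a wrapped criterion alongside the model, optimizer, and data loader, while the training loop stays a standard PyTorch loop. The model, data, and hyperparameter values below are placeholders, not part of this patch.

```python
import torch
import torch.nn as nn
from opacus import PrivacyEngine

# Toy setup; any nn.Module / optimizer / DataLoader would do.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 16), torch.randint(2, (64,))
)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=8)
criterion = nn.CrossEntropyLoss()

# Ghost Clipping entry point: the criterion is passed in, and a wrapped
# criterion is returned along with the model, optimizer, and data loader.
privacy_engine = PrivacyEngine()
model, optimizer, criterion, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    criterion=criterion,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)

# The training loop is unchanged from non-private PyTorch.
for x, y in data_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```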
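
For the FAQ change, the retained text describes how the optimal RDP order is selected from `alphas` and gives a boundary-value diagnostic. Below is a minimal sketch of that diagnostic, assuming the `RDPAccountant` API (`step` and `get_privacy_spent`) from `opacus.accountants`; the step count, noise multiplier, and sample rate are illustrative values only.

```python
from opacus.accountants import RDPAccountant

# The list of RDP orders suggested in the FAQ.
alphas = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))

# Simulate accounting for 1000 DP-SGD steps (parameters are illustrative).
accountant = RDPAccountant()
for _ in range(1000):
    accountant.step(noise_multiplier=1.0, sample_rate=0.01)

# get_privacy_spent returns (epsilon, optimal alpha) for the given delta.
eps, best_alpha = accountant.get_privacy_spent(delta=1e-5, alphas=alphas)
print(f"(epsilon={eps:.2f}, delta=1e-05)-DP at order alpha={best_alpha}")

# Diagnostic from the FAQ: if the optimal order sits at either boundary
# of `alphas`, the list ought to be expanded.
if best_alpha in (min(alphas), max(alphas)):
    print("Optimal alpha is a boundary value; expand the list of alphas.")
```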