Pass kwargs from make_private to _prepare_optimizer (#648)
Summary:
## Types of changes
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Docs change / refactoring / dependency upgrade

## Motivation and Context / Related issue

Pass `kwargs` from `make_private` through to `_prepare_optimizer` so that extra keyword arguments can be forwarded to the optimizer.
Closes #559
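
For context, a minimal usage sketch (not part of this PR): with the pass-through in place, keyword arguments that `make_private` does not consume itself are forwarded to `_prepare_optimizer` and from there to the optimizer class selected by `clipping` / `grad_sample_mode`. The adaptive-clipping keyword names below are assumptions about that optimizer's signature, not names defined by this PR; check the optimizer class you target for the exact arguments.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

from opacus import PrivacyEngine

# Toy model, optimizer, and data loader.
model = nn.Linear(16, 2)
optimizer = optim.SGD(model.parameters(), lr=0.05)
dataset = TensorDataset(torch.randn(128, 16), torch.randint(0, 2, (128,)))
data_loader = DataLoader(dataset, batch_size=16)

privacy_engine = PrivacyEngine()

# Keyword arguments beyond the documented parameters of make_private() are now
# forwarded (via **kwargs) to _prepare_optimizer() and then to the DPOptimizer
# subclass chosen by `clipping`. The adaptive-clipping keywords below are
# assumed examples of such extra arguments.
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    clipping="adaptive",
    target_unclipped_quantile=0.6,
    clipbound_learning_rate=0.2,
    max_clipbound=10.0,
    min_clipbound=0.1,
    unclipped_num_std=1.0,
)
```

Before this change, such extra keywords were not forwarded to the optimizer, which is the failure reported in the linked issue.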

## How Has This Been Tested (if it applies)

Tested and confirmed that the [colab notebook](https://colab.research.google.com/drive/1VivVsyU31onR1EAePuQQUMRzGGRgAi94?usp=sharing) from the attached issue works.

## Checklist

- [x] The documentation is up-to-date with the changes I made.
- [x] I have read the **CONTRIBUTING** document and completed the CLA (see **CONTRIBUTING**).
- [x] All tests passed, and additional code has been covered with new tests.

Pull Request resolved: #648

Reviewed By: EnayatUllah

Differential Revision: D58396361

Pulled By: HuanyuZhang

fbshipit-source-id: 64f1b7dfcc3c091702204115ed213887ebd363e9
eigengravy authored and facebook-github-bot committed Jun 11, 2024
1 parent 202c58a commit 3619619
Showing 1 changed file with 7 additions and 2 deletions.
9 changes: 7 additions & 2 deletions opacus/privacy_engine.py
@@ -100,8 +100,8 @@ def __init__(self, *, accountant: str = "prv", secure_mode: bool = False):
 
     def _prepare_optimizer(
         self,
-        optimizer: optim.Optimizer,
         *,
+        optimizer: optim.Optimizer,
         noise_multiplier: float,
         max_grad_norm: Union[float, List[float]],
         expected_batch_size: int,
@@ -110,6 +110,7 @@ def _prepare_optimizer(
         clipping: str = "flat",
         noise_generator=None,
         grad_sample_mode="hooks",
+        **kwargs,
     ) -> DPOptimizer:
         if isinstance(optimizer, DPOptimizer):
             optimizer = optimizer.original_optimizer
@@ -134,6 +135,7 @@ def _prepare_optimizer(
             loss_reduction=loss_reduction,
             generator=generator,
             secure_mode=self.secure_mode,
+            **kwargs,
         )
 
     def _prepare_data_loader(
@@ -274,6 +276,7 @@ def make_private(
         clipping: str = "flat",
         noise_generator=None,
         grad_sample_mode: str = "hooks",
+        **kwargs,
     ) -> Tuple[GradSampleModule, DPOptimizer, DataLoader]:
         """
         Add privacy-related responsibilities to the main PyTorch training objects:
@@ -371,7 +374,7 @@ def make_private(
             expected_batch_size /= world_size
 
         optimizer = self._prepare_optimizer(
-            optimizer,
+            optimizer=optimizer,
             noise_multiplier=noise_multiplier,
             max_grad_norm=max_grad_norm,
             expected_batch_size=expected_batch_size,
@@ -380,6 +383,7 @@
             distributed=distributed,
             clipping=clipping,
             grad_sample_mode=grad_sample_mode,
+            **kwargs,
         )
 
         optimizer.attach_step_hook(
@@ -487,6 +491,7 @@ def make_private_with_epsilon(
             grad_sample_mode=grad_sample_mode,
             poisson_sampling=poisson_sampling,
             clipping=clipping,
+            **kwargs,
         )
 
     def get_epsilon(self, delta):
