Fixing the corner case when the optimizer has no trainable parameters (pytorch#619)

Summary:
Pull Request resolved: pytorch#619

We made the following changes:
1. We have fixed the corner case where the optimizer has no trainable parameters. This can happen when there is more than one optimizer and some of them are frozen during fine-tuning (see the sketch after this list).
2. We have changed the "closure" logic in the "step" function of "ddpoptimizer.py" to make it consistent with "optimizer.py".
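
For illustration, here is a minimal sketch of how this situation can arise; it is not part of the commit, and the module and optimizer names are made up:

import torch
import torch.nn as nn

# Two parameter groups, each with its own optimizer, as in multi-optimizer fine-tuning.
backbone = nn.Linear(16, 16)
head = nn.Linear(16, 2)

# Freeze the backbone: its optimizer then has no trainable parameters,
# so no per-sample gradients are collected for it (the corner case above).
for p in backbone.parameters():
    p.requires_grad = False

backbone_optimizer = torch.optim.SGD(backbone.parameters(), lr=0.1)
head_optimizer = torch.optim.SGD(head.parameters(), lr=0.1)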

Differential Revision: D53055273

fbshipit-source-id: 4e8e1e6184f1c9d380da862f585bdad2d6c2bf55
HuanyuZhang authored and facebook-github-bot committed Jan 25, 2024
1 parent d0290d7 commit f4dc430
Showing 2 changed files with 10 additions and 1 deletion.
6 changes: 5 additions & 1 deletion opacus/optimizers/ddpoptimizer.py
@@ -70,8 +70,12 @@ def reduce_gradients(self):
     def step(
         self, closure: Optional[Callable[[], float]] = None
     ) -> Optional[torch.Tensor]:
+        if closure is not None:
+            with torch.enable_grad():
+                closure()
+
         if self.pre_step():
             self.reduce_gradients()
-            return self.original_optimizer.step(closure)
+            return self.original_optimizer.step()
         else:
             return None
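
For context, here is a minimal, self-contained sketch of the standard closure pattern that the updated step() supports; it is not from this commit, and plain SGD stands in for the wrapped optimizer:

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs, targets = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    # Re-evaluates the model and returns the loss; the updated step()
    # runs this under torch.enable_grad() before deciding whether to step.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure)

Because the closure is now evaluated up front, the wrapped optimizer's step() no longer needs to receive it, which is why the call changed from step(closure) to step().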
5 changes: 5 additions & 0 deletions opacus/optimizers/optimizer.py
@@ -491,6 +491,11 @@ def pre_step(
             closure: A closure that reevaluates the model and
                 returns the loss. Optional for most optimizers.
         """
+        # The corner case when the optimizer has no trainable parameters.
+        # Essentially the DPOptimizer acts as a normal optimizer.
+        if self.grad_samples is None or len(self.grad_samples) == 0:
+            return True
+
         self.clip_and_accumulate()
         if self._check_skip_next_step():
             self._is_last_step_skipped = True
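
To illustrate the effect of the new guard, here is a hypothetical sketch in which the wrapped optimizer has no trainable parameters; it is not part of the commit and assumes the Opacus 1.x DPOptimizer constructor with keyword arguments noise_multiplier, max_grad_norm, and expected_batch_size:

import torch
import torch.nn as nn
from opacus.optimizers import DPOptimizer

frozen = nn.Linear(4, 4)
for p in frozen.parameters():
    p.requires_grad = False  # no trainable parameters at all

base_optimizer = torch.optim.SGD(frozen.parameters(), lr=0.1)
dp_optimizer = DPOptimizer(
    base_optimizer,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    expected_batch_size=8,
)

# grad_samples is empty here, so the new guard makes pre_step() return True
# and step() falls through to the underlying optimizer instead of failing.
dp_optimizer.step()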
