Errors depending on choice of epsilon-ball for alpha-CROWN and closed-loop dynamics model #42

Open
mfe7 opened this issue Mar 3, 2023 · 3 comments
Labels: Workaround (A workaround is available.)

Comments

mfe7 commented Mar 3, 2023

Describe the bug
For the same model, I can compute output bounds successfully for some choices of epsilon-ball, but I receive two different errors for other choices. These errors appear when I use alpha-CROWN as the method, but not when I use backward.

To Reproduce

  1. A minimal example is available in this colab.

  2. The PyTorch state_dict file is attached; unzip it and then upload it into the colab file structure.
    single_pendulum_small_controller.torch.zip

  3. Here's the error I receive when using the input range [[1., 1.2], [0., 0.2]]:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-4-682426423bb8> in <module>
     27   my_input = BoundedTensor(nominal_input, ptb)
     28   # lb, ub = model.compute_bounds(x=(my_input,), method="backward")
---> 29   lb, ub = model.compute_bounds(x=(my_input,), method="alpha-CROWN")
     30   print(f"{ub=}")
     31   print(f"{lb=}")

11 frames
/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1186                 method = 'backward'
   1187             if bound_lower:
-> 1188                 ret1 = self.get_optimized_bounds(
   1189                     x=x, C=C, method=method,
   1190                     intermediate_layer_bounds=intermediate_layer_bounds,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/optimized_bounds.py in get_optimized_bounds(self, x, aux, C, IBP, forward, method, bound_lower, bound_upper, reuse_ibp, return_A, average_A, final_node_name, intermediate_layer_bounds, reference_bounds, aux_reference_bounds, needed_A_dict, cutter, decision_thresh, epsilon_over_decision_thresh)
    576                 arg_arb = None
    577 
--> 578             ret = self.compute_bounds(
    579                 x, aux, C, method=method, IBP=IBP, forward=forward,
    580                 bound_lower=bound_lower, bound_upper=bound_upper,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1337         self.final_node_name = final.name
   1338 
-> 1339         self.check_prior_bounds(final)
   1340 
   1341         if method == 'backward':

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
    881             return
    882         for n in node.inputs:
--> 883             self.check_prior_bounds(n)
    884         for i in getattr(node, 'requires_input_bounds', []):
    885             self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
    881             return
    882         for n in node.inputs:
--> 883             self.check_prior_bounds(n)
    884         for i in getattr(node, 'requires_input_bounds', []):
    885             self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
    881             return
    882         for n in node.inputs:
--> 883             self.check_prior_bounds(n)
    884         for i in getattr(node, 'requires_input_bounds', []):
    885             self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
    881             return
    882         for n in node.inputs:
--> 883             self.check_prior_bounds(n)
    884         for i in getattr(node, 'requires_input_bounds', []):
    885             self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
    883             self.check_prior_bounds(n)
    884         for i in getattr(node, 'requires_input_bounds', []):
--> 885             self.compute_intermediate_bounds(
    886                 node.inputs[i], prior_checked=True)
    887         node.prior_checked = True

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_intermediate_bounds(self, node, prior_checked)
    981                         # Compute backward bounds only when there are unstable
    982                         # neurons, or when we don't know which neurons are unstable.
--> 983                         node.lower, node.upper = self.backward_general(
    984                             C=newC, node=node, unstable_idx=unstable_idx,
    985                             unstable_size=unstable_size)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/backward_bound.py in backward_general(self, C, node, bound_lower, bound_upper, average_A, need_A_only, unstable_idx, unstable_size, update_mask, verbose)
    145             if isinstance(l, BoundRelu):
    146                 # TODO: unify this interface.
--> 147                 A, lower_b, upper_b = l.bound_backward(
    148                     l.lA, l.uA, *l.inputs, start_node=node, unstable_idx=unstable_idx,
    149                     beta_for_intermediate_layers=self.intermediate_constr is not None)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in bound_backward(self, last_lA, last_uA, x, start_node, beta_for_intermediate_layers, unstable_idx)
    406         # Get element-wise CROWN linear relaxations.
    407         upper_d, upper_b, lower_d, lower_b, lb_lower_d, ub_lower_d, alpha_lookup_idx = \
--> 408             self._backward_relaxation(last_lA, last_uA, x, start_node, unstable_idx)
    409         # save for calculate babsr score
    410         self.d = upper_d

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in _backward_relaxation(self, last_lA, last_uA, x, start_node, unstable_idx)
    332             else:
    333                 # Spec dimension is dense. Alpha must not be created sparsely.
--> 334                 assert self.alpha_lookup_idx[start_node.name] is None
    335                 selected_alpha = self.alpha[start_node.name]
    336             # The first dimension is lower/upper intermediate bound.

AssertionError:

Here's the error I receive when I use the input range [[0., 0.2], [0., 0.2]]:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-5-87da0248fc5a> in <module>
     27   my_input = BoundedTensor(nominal_input, ptb)
     28   # lb, ub = model.compute_bounds(x=(my_input,), method="backward")
---> 29   lb, ub = model.compute_bounds(x=(my_input,), method="alpha-CROWN")
     30   print(f"{ub=}")
     31   print(f"{lb=}")

6 frames
/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1186                 method = 'backward'
   1187             if bound_lower:
-> 1188                 ret1 = self.get_optimized_bounds(
   1189                     x=x, C=C, method=method,
   1190                     intermediate_layer_bounds=intermediate_layer_bounds,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/optimized_bounds.py in get_optimized_bounds(self, x, aux, C, IBP, forward, method, bound_lower, bound_upper, reuse_ibp, return_A, average_A, final_node_name, intermediate_layer_bounds, reference_bounds, aux_reference_bounds, needed_A_dict, cutter, decision_thresh, epsilon_over_decision_thresh)
    576                 arg_arb = None
    577 
--> 578             ret = self.compute_bounds(
    579                 x, aux, C, method=method, IBP=IBP, forward=forward,
    580                 bound_lower=bound_lower, bound_upper=bound_upper,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1342             # This is for the final output bound.
   1343             # No need to pass in intermediate layer beta constraints.
-> 1344             ret = self.backward_general(
   1345                 C=C, node=final,
   1346                 bound_lower=bound_lower, bound_upper=bound_upper,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/backward_bound.py in backward_general(self, C, node, bound_lower, bound_upper, average_A, need_A_only, unstable_idx, unstable_size, update_mask, verbose)
    145             if isinstance(l, BoundRelu):
    146                 # TODO: unify this interface.
--> 147                 A, lower_b, upper_b = l.bound_backward(
    148                     l.lA, l.uA, *l.inputs, start_node=node, unstable_idx=unstable_idx,
    149                     beta_for_intermediate_layers=self.intermediate_constr is not None)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in bound_backward(self, last_lA, last_uA, x, start_node, beta_for_intermediate_layers, unstable_idx)
    406         # Get element-wise CROWN linear relaxations.
    407         upper_d, upper_b, lower_d, lower_b, lb_lower_d, ub_lower_d, alpha_lookup_idx = \
--> 408             self._backward_relaxation(last_lA, last_uA, x, start_node, unstable_idx)
    409         # save for calculate babsr score
    410         self.d = upper_d

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in _backward_relaxation(self, last_lA, last_uA, x, start_node, unstable_idx)
    357                 full_alpha_shape = sparse_alpha_shape[:-1] + self.shape
    358                 if lb_lower_d is not None:
--> 359                     lb_lower_d = reconstruct_full_alpha(lb_lower_d, full_alpha_shape, self.alpha_indices)
    360                 if ub_lower_d is not None:
    361                     ub_lower_d = reconstruct_full_alpha(ub_lower_d, full_alpha_shape, self.alpha_indices)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in reconstruct_full_alpha(sparse_alpha, full_alpha_shape, alpha_indices)
    347                     if len(alpha_indices) == 1:
    348                         # Relu after a dense layer.
--> 349                         full_alpha[:, :, alpha_indices[0]] = sparse_alpha
    350                     elif len(alpha_indices) == 3:
    351                         # Relu after a conv layer.

RuntimeError: shape mismatch: value tensor of shape [2, 1, 25] cannot be broadcast to indexing result of shape [2, 1, 24]
  4. To replicate this bug, open the colab (optionally, save a copy so you can edit it; the link above provides view-only access). Unzip and upload the provided .torch model file. Run the first cell to clone and install auto_LiRPA, then restart the runtime; auto_LiRPA will then be fully installed (there is no need to re-run the first cell). Run the subsequent cells to define the model class, load the state_dict, and run the test cases. A few test cases return a valid answer, and a few produce the errors listed above; a sketch of the bound-computation cell follows below.
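For reference, here is a minimal sketch of the bound-computation cell, reconstructed from the tracebacks above. The epsilon-ball construction via x_L/x_U is an assumption about how the colab builds ptb, and model is assumed to be the BoundedModule wrapping the loaded network.

import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Input range [[1., 1.2], [0., 0.2]]: dimension 0 lies in [1.0, 1.2] and
# dimension 1 in [0.0, 0.2], expressed as explicit lower/upper bounds
# (assumed construction; see the colab for the exact code).
x_L = torch.tensor([[1.0, 0.0]])
x_U = torch.tensor([[1.2, 0.2]])
nominal_input = (x_L + x_U) / 2.0

ptb = PerturbationLpNorm(x_L=x_L, x_U=x_U)
my_input = BoundedTensor(nominal_input, ptb)

# method="backward" succeeds; method="alpha-CROWN" raises the errors above.
# lb, ub = model.compute_bounds(x=(my_input,), method="backward")
lb, ub = model.compute_bounds(x=(my_input,), method="alpha-CROWN")
print(f"{ub=}")
print(f"{lb=}")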

System configuration:

  • OS: Ubuntu 20.04 (provided by colab)
  • Python version: 3.8.10
  • PyTorch version: 1.13.1
  • Hardware: basic free colab
  • Have you tried to reproduce the problem in a cleanly created conda/virtualenv environment using official installation instructions and the latest code on the main branch?: More or less, but using a fresh colab instead

Additional context
One thing that is possibly unique about this computation graph is that it includes repeated instances of the controller and dynamics to represent the closed-loop system's dynamics across multiple timesteps. I wonder whether naming conflicts across timesteps contribute to these errors.
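To illustrate, here is a hypothetical minimal sketch of such a closed-loop computation graph; the layer sizes, dynamics matrices, and horizon are illustrative placeholders, not the attached pendulum model. The key structural point is that the same controller and dynamics modules appear once per timestep in the unrolled graph.

import torch
import torch.nn as nn

class ClosedLoopRollout(nn.Module):
    # Unrolling the closed loop repeats the same controller and dynamics
    # layers at every timestep, which is the repetition described above.
    def __init__(self, num_steps=5):
        super().__init__()
        self.controller = nn.Sequential(
            nn.Linear(2, 25), nn.ReLU(), nn.Linear(25, 1))
        # Placeholder linear dynamics: x_{t+1} = x_t @ A^T + u_t @ B^T.
        self.register_buffer('A', torch.eye(2))
        self.register_buffer('B', torch.tensor([[0.0], [1.0]]))
        self.num_steps = num_steps

    def forward(self, x):
        for _ in range(self.num_steps):
            u = self.controller(x)           # same controller each step
            x = x @ self.A.T + u @ self.B.T  # same dynamics each step
        return x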

@shizhouxing (Member) commented:

Thanks for reporting the issue. Here is a workaround: set bound_opts when creating the BoundedModule:

from auto_LiRPA import BoundedModule

# Store alpha variables densely instead of sparsely, avoiding the
# sparse-alpha code paths that fail in both tracebacks above.
model = BoundedModule(model, nominal_input,
                      bound_opts={
                          'sparse_features_alpha': False,
                          'sparse_spec_alpha': False,
                      })
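For context: both tracebacks fail inside the sparse-alpha code paths (the alpha_lookup_idx assertion in the first, reconstruct_full_alpha in the second), so these options appear to sidestep the failing paths by storing the alpha variables densely over features and specification dimensions, presumably at the cost of some extra memory.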

mfe7 (Author) commented Mar 4, 2023

Great, just confirmed this works. Thanks for the quick response! I'll leave this open in case there is a root issue to resolve; otherwise, feel free to close. Btw, this repo is amazing!!!

@shizhouxing (Member) commented:

Thanks @mfe7! There are likely still underlying issues to fix, so it's good to leave this one open.
