Describe the bug
For the same model, I can successfully compute output bounds for some choices of epsilon-ball, but receive two different errors for other choices. The errors appear when I use `alpha-CROWN` as the method, but not when I use `backward`.
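For context, computing output bounds over an epsilon-ball means propagating the input interval through the network. Below is a minimal interval-bound-propagation sketch in plain NumPy on a hypothetical two-layer ReLU net (not the attached model; auto_LiRPA's `backward` and `alpha-CROWN` methods compute much tighter bounds via linear relaxations, this only illustrates what "output bounds" means):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def interval_affine(W, b, lb, ub):
    # Positive weights map lb -> lower bound; negative weights swap the bounds.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lb + Wn @ ub + b, Wp @ ub + Wn @ lb + b

# Input range [[1., 1.2], [0., 0.2]] (one of the failing cases below)
lb, ub = np.array([1.0, 0.0]), np.array([1.2, 0.2])
lb, ub = interval_affine(W1, b1, lb, ub)
lb, ub = np.maximum(lb, 0), np.maximum(ub, 0)  # ReLU is monotone
out_lb, out_ub = interval_affine(W2, b2, lb, ub)
assert np.all(out_lb <= out_ub)
```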
Here's the error I receive when using the input range `[[1., 1.2], [0., 0.2]]`:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-4-682426423bb8> in <module>
27 my_input = BoundedTensor(nominal_input, ptb)
28 # lb, ub = model.compute_bounds(x=(my_input,), method="backward")
---> 29 lb, ub = model.compute_bounds(x=(my_input,), method="alpha-CROWN")
30 print(f"{ub=}")
31 print(f"{lb=}")

11 frames

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
1186 method = 'backward'
1187 if bound_lower:
-> 1188 ret1 = self.get_optimized_bounds(
1189 x=x, C=C, method=method,
1190 intermediate_layer_bounds=intermediate_layer_bounds,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/optimized_bounds.py in get_optimized_bounds(self, x, aux, C, IBP, forward, method, bound_lower, bound_upper, reuse_ibp, return_A, average_A, final_node_name, intermediate_layer_bounds, reference_bounds, aux_reference_bounds, needed_A_dict, cutter, decision_thresh, epsilon_over_decision_thresh)
576 arg_arb = None
577
--> 578 ret = self.compute_bounds(
579 x, aux, C, method=method, IBP=IBP, forward=forward,
580 bound_lower=bound_lower, bound_upper=bound_upper,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
1337 self.final_node_name = final.name
1338
-> 1339 self.check_prior_bounds(final)
1340
1341 if method == 'backward':

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
881 return
882 for n in node.inputs:
--> 883 self.check_prior_bounds(n)
884 for i in getattr(node, 'requires_input_bounds', []):
885 self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
881 return
882 for n in node.inputs:
--> 883 self.check_prior_bounds(n)
884 for i in getattr(node, 'requires_input_bounds', []):
885 self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
881 return
882 for n in node.inputs:
--> 883 self.check_prior_bounds(n)
884 for i in getattr(node, 'requires_input_bounds', []):
885 self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
881 return
882 for n in node.inputs:
--> 883 self.check_prior_bounds(n)
884 for i in getattr(node, 'requires_input_bounds', []):
885 self.compute_intermediate_bounds(

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in check_prior_bounds(self, node)
883 self.check_prior_bounds(n)
884 for i in getattr(node, 'requires_input_bounds', []):
--> 885 self.compute_intermediate_bounds(
886 node.inputs[i], prior_checked=True)
887 node.prior_checked = True

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_intermediate_bounds(self, node, prior_checked)
981 # Compute backward bounds only when there are unstable
982 # neurons, or when we don't know which neurons are unstable.
--> 983 node.lower, node.upper = self.backward_general(
984 C=newC, node=node, unstable_idx=unstable_idx,
985 unstable_size=unstable_size)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/backward_bound.py in backward_general(self, C, node, bound_lower, bound_upper, average_A, need_A_only, unstable_idx, unstable_size, update_mask, verbose)
145 if isinstance(l, BoundRelu):
146 # TODO: unify this interface.
--> 147 A, lower_b, upper_b = l.bound_backward(
148 l.lA, l.uA, *l.inputs, start_node=node, unstable_idx=unstable_idx,
149 beta_for_intermediate_layers=self.intermediate_constr is not None)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in bound_backward(self, last_lA, last_uA, x, start_node, beta_for_intermediate_layers, unstable_idx)
406 # Get element-wise CROWN linear relaxations.
407 upper_d, upper_b, lower_d, lower_b, lb_lower_d, ub_lower_d, alpha_lookup_idx = \
--> 408 self._backward_relaxation(last_lA, last_uA, x, start_node, unstable_idx)
409 # save for calculate babsr score
410 self.d = upper_d

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in _backward_relaxation(self, last_lA, last_uA, x, start_node, unstable_idx)
332 else:
333 # Spec dimension is dense. Alpha must not be created sparsely.
--> 334 assert self.alpha_lookup_idx[start_node.name] is None
335 selected_alpha = self.alpha[start_node.name]
336 # The first dimension is lower/upper intermediate bound.

AssertionError:
```
Here's the error I receive when using the input range `[[0., 0.2], [0., 0.2]]`:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-87da0248fc5a> in <module>
27 my_input = BoundedTensor(nominal_input, ptb)
28 # lb, ub = model.compute_bounds(x=(my_input,), method="backward")
---> 29 lb, ub = model.compute_bounds(x=(my_input,), method="alpha-CROWN")
30 print(f"{ub=}")
31 print(f"{lb=}")

6 frames

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
1186 method = 'backward'
1187 if bound_lower:
-> 1188 ret1 = self.get_optimized_bounds(
1189 x=x, C=C, method=method,
1190 intermediate_layer_bounds=intermediate_layer_bounds,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/optimized_bounds.py in get_optimized_bounds(self, x, aux, C, IBP, forward, method, bound_lower, bound_upper, reuse_ibp, return_A, average_A, final_node_name, intermediate_layer_bounds, reference_bounds, aux_reference_bounds, needed_A_dict, cutter, decision_thresh, epsilon_over_decision_thresh)
576 arg_arb = None
577
--> 578 ret = self.compute_bounds(
579 x, aux, C, method=method, IBP=IBP, forward=forward,
580 bound_lower=bound_lower, bound_upper=bound_upper,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/bound_general.py in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
1342 # This is for the final output bound.
1343 # No need to pass in intermediate layer beta constraints.
-> 1344 ret = self.backward_general(
1345 C=C, node=final,
1346 bound_lower=bound_lower, bound_upper=bound_upper,

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/backward_bound.py in backward_general(self, C, node, bound_lower, bound_upper, average_A, need_A_only, unstable_idx, unstable_size, update_mask, verbose)
145 if isinstance(l, BoundRelu):
146 # TODO: unify this interface.
--> 147 A, lower_b, upper_b = l.bound_backward(
148 l.lA, l.uA, *l.inputs, start_node=node, unstable_idx=unstable_idx,
149 beta_for_intermediate_layers=self.intermediate_constr is not None)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in bound_backward(self, last_lA, last_uA, x, start_node, beta_for_intermediate_layers, unstable_idx)
406 # Get element-wise CROWN linear relaxations.
407 upper_d, upper_b, lower_d, lower_b, lb_lower_d, ub_lower_d, alpha_lookup_idx = \
--> 408 self._backward_relaxation(last_lA, last_uA, x, start_node, unstable_idx)
409 # save for calculate babsr score
410 self.d = upper_d

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in _backward_relaxation(self, last_lA, last_uA, x, start_node, unstable_idx)
357 full_alpha_shape = sparse_alpha_shape[:-1] + self.shape
358 if lb_lower_d is not None:
--> 359 lb_lower_d = reconstruct_full_alpha(lb_lower_d, full_alpha_shape, self.alpha_indices)
360 if ub_lower_d is not None:
361 ub_lower_d = reconstruct_full_alpha(ub_lower_d, full_alpha_shape, self.alpha_indices)

/usr/local/lib/python3.8/dist-packages/auto_LiRPA-0.3.1-py3.8.egg/auto_LiRPA/operators/activations.py in reconstruct_full_alpha(sparse_alpha, full_alpha_shape, alpha_indices)
347 if len(alpha_indices) == 1:
348 # Relu after a dense layer.
--> 349 full_alpha[:, :, alpha_indices[0]] = sparse_alpha
350 elif len(alpha_indices) == 3:
351 # Relu after a conv layer.

RuntimeError: shape mismatch: value tensor of shape [2, 1, 25] cannot be broadcast to indexing result of shape [2, 1, 24]
```
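The mechanism behind this shape mismatch can be illustrated with a toy reproduction of the failing indexed assignment (NumPy stand-in with made-up sizes; in the real trace, 25 stored sparse alpha values are written into 24 selected slots of the full alpha tensor):

```python
import numpy as np

full_alpha = np.zeros((2, 1, 30))   # stand-in for the full alpha tensor
alpha_indices = np.arange(24)       # 24 "unstable neuron" slots selected
sparse_alpha = np.ones((2, 1, 25))  # but 25 values were stored: off by one

try:
    full_alpha[:, :, alpha_indices] = sparse_alpha
except ValueError as e:
    # NumPy raises ValueError here; torch raises the analogous RuntimeError.
    print(e)
```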
To Reproduce
A minimum example is available in this Colab notebook (optionally, save a copy so you can edit it, since the link only grants viewing permission). The attached model file is single_pendulum_small_controller.torch.zip.
1. Unzip and upload the provided .torch model file.
2. Run the first cell to clone and install auto_LiRPA, then restart the runtime; auto_LiRPA will then be fully installed (no need to re-run the first cell).
3. Run the subsequent cells to define the model class, load the state_dict, and run the test cases.
Some test cases return a valid answer, and some produce the errors listed above.
System configuration:
- OS: Ubuntu 20.04 (provided by Colab)
- Python version: 3.8.10
- PyTorch version: 1.13.1
- Hardware: basic free Colab
- Have you tried to reproduce the problem in a cleanly created conda/virtualenv environment using official installation instructions and the latest code on the main branch? Pretty much, but using a fresh Colab.
Additional context
One thing that may be unusual about this computation graph is that it includes repeated instances of the controller and dynamics modules to represent the closed-loop system across multiple timesteps. I am wondering whether naming conflicts across timesteps contribute to these errors.
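For reference, the graph structure described above, with the same controller and dynamics weights invoked once per timestep, looks roughly like this (all names and dynamics here are hypothetical, purely to illustrate the repeated-subgraph pattern, not the attached model):

```python
import numpy as np

K = np.array([[-1.0, -0.5]])  # shared controller weights

def controller(x):
    # The same weights are reused every timestep, so the unrolled
    # computation graph contains repeated instances of this subgraph.
    return np.maximum(K @ x, 0.0)  # toy ReLU controller

def dynamics(x, u, dt=0.05):
    # Toy pendulum-like update (illustrative only).
    theta, omega = x
    return np.array([theta + dt * omega, omega + dt * (np.sin(theta) + u[0])])

x = np.array([1.0, 0.1])  # initial state
for _ in range(3):        # unrolling 3 timesteps repeats both subgraphs
    x = dynamics(x, controller(x))
```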
Great, just confirmed this works. Thanks for the quick response! I'll leave this open in case there is a root issue to resolve, otherwise feel free to close. Btw, this repo is amazing!!!