
assert not sparse_intermediate_bounds or use_sparse_conv is False #49

Open
yusiyoh opened this issue May 15, 2023 · 3 comments

yusiyoh commented May 15, 2023

Hello and thank you for this amazing tool!

I also tried alpha-beta-CROWN with the same setup and ran into the same error as in the title.

You can find the issue here: Verified-Intelligence/alpha-beta-CROWN#28

With auto_LiRPA, CROWN works as expected, but when I switch to alpha-CROWN, I get the following error:

AssertionError                            Traceback (most recent call last)
[<ipython-input-34-0e7e4f8b53a2>](https://localhost:8080/#) in <cell line: 3>()
      2 print('Bounding method: backward (CROWN, DeepPoly)')
      3 with torch.no_grad():  # If gradients of the bounds are not needed, we can use no_grad to save memory.
----> 4   lb, ub = bounded_model.compute_bounds(x=(bounded_image,), method='alpha-CROWN')
      5 
      6 # Auxillary function to print bounds.

3 frames
[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/bound_general.py](https://localhost:8080/#) in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1186                 method = 'backward'
   1187             if bound_lower:
-> 1188                 ret1 = self.get_optimized_bounds(
   1189                     x=x, C=C, method=method,
   1190                     intermediate_layer_bounds=intermediate_layer_bounds,

[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/optimized_bounds.py](https://localhost:8080/#) in get_optimized_bounds(self, x, aux, C, IBP, forward, method, bound_lower, bound_upper, reuse_ibp, return_A, average_A, final_node_name, intermediate_layer_bounds, reference_bounds, aux_reference_bounds, needed_A_dict, cutter, decision_thresh, epsilon_over_decision_thresh)
    455     if init_alpha:
    456         # TODO: this should set up aux_reference_bounds.
--> 457         self.init_slope(x, share_slopes=opts['use_shared_alpha'],
    458                    method=method, c=C, final_node_name=final_node_name)
    459 

[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/optimized_bounds.py](https://localhost:8080/#) in init_slope(self, x, share_slopes, method, c, bound_lower, bound_upper, final_node_name, intermediate_layer_bounds, activation_opt_params, skip_bound_compute)
   1047             start_nodes.append(('_forward', 1, None))
   1048         if method in ['backward', 'forward+backward']:
-> 1049             start_nodes += self.get_alpha_crown_start_nodes(
   1050                 node, c=c, share_slopes=share_slopes,
   1051                 final_node_name=final_node_name)

[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/backward_bound.py](https://localhost:8080/#) in get_alpha_crown_start_nodes(self, node, c, share_slopes, final_node_name)
    769                 # The spec dim is c only, and is shared among h, w.
    770                 output_shape = node.patch_size[nj.name][0]
--> 771             assert not sparse_intermediate_bounds or use_sparse_conv is False  # Double check our assumption holds. If this fails, then we created wrong shapes for alpha.
    772         else:
    773             # Output is linear layer, or patch converted to matrix.

My implementation is here: https://colab.research.google.com/drive/1b4PMeK0NKmeXV-mKCfeopbuoJYHa8W4S?usp=sharing
Weights and data:
model_data.zip

I do not understand the cause of the error. I tried setting 'sparse_features_alpha': False and 'sparse_spec_alpha': False, but got the same result.

Another question: how can I visualize the perturbed image? How does BoundedTensor change the original image?

Thanks in advance and best regards

@shizhouxing
Member

@yusiyoh Could you please also try setting sparse_intermediate_bounds to False for now?
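In case it helps, these flags all go into bound_opts when constructing the BoundedModule. A minimal sketch (the tiny network and input shape below are just stand-ins, not the model from your notebook):

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule

# Placeholder network and input, only to show where the flags go.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 43))
dummy_input = torch.zeros(1, 3, 32, 32)

bounded_model = BoundedModule(
    model, dummy_input,
    bound_opts={
        'sparse_intermediate_bounds': False,  # the flag suggested here
        'sparse_features_alpha': False,       # the flags you already tried
        'sparse_spec_alpha': False,
    })
```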

I am not able to run the notebook for now -- it says normalizer is not defined for me.

Another question: how can I visualize the perturbed image? How does BoundedTensor change the original image?

BoundedTensor annotates the original image with a perturbation. If it is an Linf perturbation, then there will be a lower bound and an upper bound for the perturbed image. I'm not sure what kind of visualization you are looking for, though.
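BoundedTensor keeps the original pixel values unchanged and just attaches the perturbation specification. For an Linf perturbation, the perturbation set is the box between image - eps and image + eps, so a sketch like this (all names are placeholders, not your notebook's variables) gives you the two endpoint images, which you can plot like any other image:

```python
import torch
from auto_LiRPA import BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

eps = 8 / 255                              # placeholder epsilon
image = torch.rand(1, 3, 32, 32)           # placeholder input image
ptb = PerturbationLpNorm(norm=float('inf'), eps=eps)
bounded_image = BoundedTensor(image, ptb)  # same values as `image`, plus the perturbation spec

# For Linf, the perturbation set is the box [image - eps, image + eps].
image_lb = (image - eps).clamp(0, 1)       # assuming pixel values live in [0, 1]
image_ub = (image + eps).clamp(0, 1)
```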


yusiyoh commented May 16, 2023

Sorry for the inconvenience. This version should be okay: https://colab.research.google.com/drive/1b4PMeK0NKmeXV-mKCfeopbuoJYHa8W4S?usp=sharing

Changing sparse_intermediate_bounds to False yields another error (which I had while using alpha-beta-CROWN too):

/usr/local/lib/python3.10/dist-packages/auto_LiRPA/bound_general.py:970: UserWarning: Creating an identity matrix with size 8192x8192 for node BoundMaxPool(name="/input"). This may indicate poor performance for bound computation. If you see this message on a small network please submit a bug report.
  sparse_C = self.get_sparse_C(
/usr/local/lib/python3.10/dist-packages/auto_LiRPA/bound_general.py:970: UserWarning: Creating an identity matrix with size 4096x4096 for node BoundMaxPool(name="/input.4"). This may indicate poor performance for bound computation. If you see this message on a small network please submit a bug report.
  sparse_C = self.get_sparse_C(
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
[<ipython-input-21-b546ba971dce>](https://localhost:8080/#) in <cell line: 3>()
      2 print('Bounding method: backward (CROWN, DeepPoly)')
      3 with torch.no_grad():  # If gradients of the bounds are not needed, we can use no_grad to save memory.
----> 4   lb, ub = bounded_model.compute_bounds(x=(bounded_image,), method='alpha-CROWN')
      5 
      6 # Auxillary function to print bounds.

3 frames
[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/bound_general.py](https://localhost:8080/#) in compute_bounds(self, x, aux, C, method, IBP, forward, bound_lower, bound_upper, reuse_ibp, reuse_alpha, return_A, needed_A_dict, final_node_name, average_A, intermediate_layer_bounds, reference_bounds, intermediate_constr, alpha_idx, aux_reference_bounds, need_A_only, cutter, decision_thresh, update_mask)
   1186                 method = 'backward'
   1187             if bound_lower:
-> 1188                 ret1 = self.get_optimized_bounds(
   1189                     x=x, C=C, method=method,
   1190                     intermediate_layer_bounds=intermediate_layer_bounds,

[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/optimized_bounds.py](https://localhost:8080/#) in get_optimized_bounds(self, x, aux, C, IBP, forward, method, bound_lower, bound_upper, reuse_ibp, return_A, average_A, final_node_name, intermediate_layer_bounds, reference_bounds, aux_reference_bounds, needed_A_dict, cutter, decision_thresh, epsilon_over_decision_thresh)
    455     if init_alpha:
    456         # TODO: this should set up aux_reference_bounds.
--> 457         self.init_slope(x, share_slopes=opts['use_shared_alpha'],
    458                    method=method, c=C, final_node_name=final_node_name)
    459 

[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/optimized_bounds.py](https://localhost:8080/#) in init_slope(self, x, share_slopes, method, c, bound_lower, bound_upper, final_node_name, intermediate_layer_bounds, activation_opt_params, skip_bound_compute)
   1053             node.restore_optimized_params(activation_opt_params[node.name])
   1054         else:
-> 1055             node.init_opt_parameters(start_nodes)
   1056         init_intermediate_bounds[node.inputs[0].name] = (
   1057             [node.inputs[0].lower.detach(), node.inputs[0].upper.detach()])

[/usr/local/lib/python3.10/dist-packages/auto_LiRPA/operators/pooling.py](https://localhost:8080/#) in init_opt_parameters(self, start_nodes)
     48                 warnings.warn("MaxPool's optimization is not supported for forward mode")
     49                 continue
---> 50             self.alpha[ns] = torch.empty(
     51                 [1, size_s, self.input_shape[0], self.input_shape[1],
     52                 self.output_shape[-2], self.output_shape[-1],

TypeError: empty(): argument 'size' must be tuple of ints, but found element of type torch.Size at pos 2

So can I select an arbitrary perturbed image between the lower and upper bound as a sample from the perturbation set? Or how can I visualize this perturbation set in general?

I am not sure if my question makes sense, but I am trying to draw a sample from the perturbation set and forward it through the model to get the corresponding prediction.
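Something like this sketch is what I have in mind (all names are placeholders, not my actual notebook code):

```python
import torch
import torch.nn as nn

# Placeholders just to keep the sketch self-contained.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43))
image = torch.rand(1, 3, 32, 32)
eps = 8 / 255

# Draw one point uniformly from the Linf box [image - eps, image + eps]
# and run it through the plain (unwrapped) model.
sample = (image + (2 * torch.rand_like(image) - 1) * eps).clamp(0, 1)
with torch.no_grad():
    prediction = model(sample).argmax(dim=-1)
```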


yusiyoh commented May 16, 2023

This might be helpful:
[screenshot: the start_nodes printed inside init_opt_parameters()]

These are the start_nodes in the init_opt_parameters() function, and all of them have non-integer sizes except the last one (the output layer with 43 classes).
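A minimal example of what I think is happening, outside the library, just to reproduce the TypeError:

```python
import torch

# If one of the size entries passed to torch.empty is a torch.Size
# instead of a plain int, it fails with the same message as above.
size_s = torch.Size([43])
try:
    torch.empty([1, size_s, 3, 3])
except TypeError as e:
    print(e)
    # empty(): argument 'size' must be tuple of ints,
    # but found element of type torch.Size at pos 2
```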
