
Results Interpretation #1

Open
EmanueleLM opened this issue Dec 6, 2019 · 5 comments

Comments

@EmanueleLM

EmanueleLM commented Dec 6, 2019

Hi,

When I run the CROWN verified error with, let's say, \epsilon = 0.3, what is the exact meaning of the terms in the output like Loss, CE loss, etc.? I'd like to estimate or infer a lower bound with just CROWN on some architectures + data; is that possible with this code?

P.S. The architecture has not been trained with CROWN-IBP; it is just naturally trained (or at most adversarially trained).

In other words, given an epsilon radius, I'd like to know whether that epsilon-ball is safe using just CROWN. Is that possible with your code, on a naturally trained architecture that I've built myself?

Thank you,
Best.

@huanzhang12
Owner

huanzhang12 commented Dec 20, 2019

Sorry for the late reply. Yes, you can use the code to evaluate your model, no matter how it was trained.

When you run the code example for computing the CROWN verified error, Loss and CE loss are not useful; they are only there to monitor the training process. The metrics that do matter are Err (clean error) and Rob Err (verified error). Read the numbers in parentheses: they are the mean over the epoch, rather than over a single batch.

It is easy to dump CROWN bounds for some architecture + data, on any network, not necessarily one trained with CROWN-IBP. There is some commented code in train.py which shows how to call the bound API to obtain CROWN bounds: https://github.com/huanzhang12/CROWN-IBP/blob/master/train.py#L165

That commented code prints out lower and upper bounds for all examples in a batch. You can check whether the lower bound is less than 0 to determine if an example is guaranteed to be safe or not, just like what I did for computing the verified error here.
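The decision rule above can be sketched as follows. This is a minimal, hypothetical example (the names `margin_lb` and `is_verified_safe` and the numbers are illustrative, not from the repo), assuming the bounds are CROWN lower bounds on the margins between the true-class logit and every other logit:

```python
import numpy as np

def is_verified_safe(margin_lb):
    """An example is certified robust only if every margin lower bound is
    non-negative, i.e. no other class can provably overtake the true class
    inside the epsilon-ball."""
    return np.all(margin_lb >= 0, axis=-1)

# Hypothetical CROWN lower bounds on margins for a batch of 2 examples.
margin_lb = np.array([[0.5, 0.2, 0.1],    # all margins provably >= 0 -> verified safe
                      [0.3, -0.1, 0.4]])  # one margin may go negative -> not verified

verified = is_verified_safe(margin_lb)
verified_error = 1.0 - verified.mean()    # fraction of examples not verified
```

Note that a negative lower bound does not prove an attack exists; it only means CROWN cannot certify that example, which is why this quantity is an upper bound on the true robust error.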

To make the code read your own model and data, follow the instructions on how to train your own model, except that in the last step you run eval.py instead of train.py. Don't forget to add the necessary command line arguments, such as "eval_params:method_params:bound_type=crown-full", to enable the full CROWN bounds (see the instructions here).
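For reference, an invocation might look like the following. This is a hypothetical sketch (the config file name is made up); only the override argument is taken from the instructions above:

```shell
# Hypothetical command; replace the config path with your own config file.
python eval.py --config config/my_model.json \
    "eval_params:method_params:bound_type=crown-full"
```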

Let me know if there is anything unclear or if you have any further questions.

@EmanueleLM
Author

No worries, and thank you for the detailed reply; I'll try it in the coming days.

@carinaczhang

> You can check if the lower bound is less than 0 to determine if an example is guaranteed to be safe or not, just like what I did for computing the verified error here.

I am just wondering why you only need to check lower bound < 0 to determine safety? I thought we needed to check whether the perturbation stays within the boundary, but I might just be confused about the definition of verification.

@carinaczhang

How could one print out model parameters for each layer?

@EmanueleLM
Author

I don't remember whether Keras models are supported, but if they are, it's enough to run:

[print(l.weights) for l in model.layers]
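If instead you are using PyTorch models (which is what this repo uses), a minimal sketch would be the following; the `nn.Sequential` model here is a made-up stand-in, not the repo's actual loader:

```python
import torch.nn as nn

# Hypothetical stand-in model; in practice, inspect the model that eval.py loads.
model = nn.Sequential(nn.Linear(784, 100), nn.ReLU(), nn.Linear(100, 10))

# named_parameters() yields (name, tensor) pairs for every layer's weights and biases.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```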
