some questions about dropout in VAT #4
Hello Cao_enjun,
Excuse me, I can't understand the problem. Can you please provide a more detailed explanation?
Is the same problem present in Adversarial Training (AT)?
Thank you,
@ENRY12
Hi, I've come back with another question about virtual adversarial training. When using this method, we first feed the unlabeled data into the forward net to get their predictions, which serve as soft labels; at the same time, we feed the perturbed data to get the logits, and then use the KL divergence between the two as the loss for computing r_vadv. But in your code, you apply dropout in the forward net. As far as I know, TensorFlow only reuses variables, not operations (e.g., dropout) in the graph, so the logits and the labels obtained from the procedure above are computed under different dropout masks; that is, the computed KL divergence may include noise from dropout. What do you think about this? Could it make r_vadv insufficiently accurate?
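To make the concern concrete, here is a minimal NumPy sketch (not taken from this repository) of a toy one-layer net in which the two forward passes draw independent dropout masks, mimicking two separately built dropout ops that share variables but not randomness; the helper names `forward` and `kl` are hypothetical. Even with zero perturbation r, the KL divergence between the "label" pass and the "clean" pass is nonzero purely because of dropout noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, drop_rate=0.5):
    # Toy one-layer net: dropout on the input, then a linear map and softmax.
    # Each call samples a fresh dropout mask, like two separately built
    # dropout ops that reuse variables but not randomness.
    mask = rng.random(x.shape) > drop_rate
    h = np.where(mask, x / (1.0 - drop_rate), 0.0)  # inverted dropout scaling
    logits = h @ w
    p = np.exp(logits - logits.max())
    return p / p.sum()

def kl(p, q, eps=1e-12):
    # KL(p || q) for two discrete distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

x = rng.normal(size=4)        # one unlabeled example
w = rng.normal(size=(4, 3))   # shared weights (the reused variables)

p_label = forward(x, w)       # pass 1: prediction used as the soft "label"
p_clean = forward(x, w)       # pass 2: same input, perturbation r = 0

# Nonzero KL even though r = 0: the gap comes from dropout alone.
print("KL(label || clean) at r = 0:", kl(p_label, p_clean))
```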
Hi Cao_enjun,
Thank you,
Hello, I have some questions about dropout in VAT.
If I use dropout in VAT, the output distribution changes even without any perturbation.
Thanks!
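As an aside on this point: one way to see (and remove) the dropout-only discrepancy is to sample the dropout mask once and reuse it in both forward passes. The sketch below is only an illustration under that assumption, not a claim about how this repository or TensorFlow's dropout op actually behaves; `forward_masked` is a hypothetical helper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_masked(x, w, mask, drop_rate=0.5):
    # Same toy net as in the earlier sketch, but the dropout mask is passed
    # in, so both passes can share one realization of the dropout noise.
    h = np.where(mask, x / (1.0 - drop_rate), 0.0)
    logits = h @ w
    p = np.exp(logits - logits.max())
    return p / p.sum()

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

x = rng.normal(size=4)
w = rng.normal(size=(4, 3))
mask = rng.random(x.shape) > 0.5   # sample the dropout mask once

p_label = forward_masked(x, w, mask)   # label pass
p_clean = forward_masked(x, w, mask)   # zero-perturbation pass, same mask

# With a shared mask the KL at r = 0 is exactly zero, so any remaining
# divergence under a perturbation r is attributable to r itself.
print("KL at r = 0 with a shared mask:", kl(p_label, p_clean))  # 0.0
```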