Is it possible for the smoothed classifier to completely abstain on test set? #8
Hi Kirk,
I believe the issue is that n=10 is too few samples for an alpha=0.001 confidence level (meaning there is a 0.001 probability that the returned answer is wrong). You either have to increase n (use more samples) or increase alpha (accept a higher probability of failure). In our paper, the smallest n we experimented with was n=100, which abstained 12% of the time (see Table 4 in our paper).
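Concretely, the abstention comes from the two-sided binomial test in the prediction procedure: the smoothed classifier returns the top class only if the test on the top two vote counts rejects at level alpha, and otherwise abstains. A minimal stdlib sketch (`binom_pvalue_two_sided` is a hypothetical helper written for illustration; the actual code uses SciPy's binomial test) shows why n=10 cannot pass at alpha=0.001 even when every sample votes for the same class:

```python
from math import comb

def binom_pvalue_two_sided(k, n):
    """Two-sided binomial test p-value for k successes out of n with p=0.5.
    Since p=0.5 makes the distribution symmetric, the two-sided p-value is
    2 * P(X >= max(k, n - k)), capped at 1."""
    hi = max(k, n - k)
    tail = sum(comb(n, i) for i in range(hi, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

alpha = 0.001
# Even a unanimous 10-out-of-10 vote fails the test at alpha=0.001:
print(binom_pvalue_two_sided(10, 10))  # 0.001953125 > alpha -> abstain
# A unanimous vote with n=100 passes easily:
print(binom_pvalue_two_sided(100, 100) <= alpha)  # True -> predict
```

Since the best possible outcome with n=10 already yields a p-value of about 0.002 > 0.001, the predictor must abstain on every input at those settings.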
Jeremy
…On Mon, Jul 20, 2020 at 6:01 AM kirk86 ***@***.***> wrote:
@jmcohen <https://github.com/jmcohen> Hi, thanks for releasing the code.
If you don't mind me asking, I'm trying to understand whether it's possible for
a smoothed classifier trained using randomised smoothing to completely
abstain on the test set of CIFAR-10 corrupted with PGD under the l-infinity norm.
I've trained a smoothed classifier using noise=0.56, and at test time I use
PGD with epsilon=0.1 and the l-infinity norm to evaluate the robustness of the
smoothed classifier.
e.g. running one epoch on the test set of CIFAR-10:
for batch in minibatches:
    # PGD with l-infinity norm & epsilon=0.1
    adversarial_samples = pgd_attack(batch)
    for x in adversarial_samples:
        # compute randomized smoothing labels
        predicted_label = smooth_classifier.predict(x, n=10, alpha=0.001, batch_size=128)
Am I missing something, or is it completely normal in this case for the
smoothed classifier to abstain from prediction on the whole CIFAR-10 test set?
Thanks!
@jmcohen Hi Jeremy, thanks for getting back to me, appreciate it!
I eventually figured it out through trial and error.