@@ -181,30 +181,31 @@ You can find all available model IDs in the table below (note that the full lead
| <sub>**1**</sub> | <sub><sup>**Gowal2020Uncovering_70_16_extra**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>91.10%</sub> | <sub>65.87%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Oct 2020</sub> |
| <sub>**2**</sub> | <sub><sup>**Gowal2020Uncovering_28_10_extra**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>89.48%</sub> | <sub>62.76%</sub> | <sub>WideResNet-28-10</sub> | <sub>arXiv, Oct 2020</sub> |
| <sub>**3**</sub> | <sub><sup>**Wu2020Adversarial_extra**</sup></sub> | <sub>*[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)*</sub> | <sub>88.25%</sub> | <sub>60.04%</sub> | <sub>WideResNet-28-10</sub> | <sub>NeurIPS 2020</sub> |
- | <sub>**4**</sub> | <sub><sup>**Carmon2019Unlabeled**</sup></sub> | <sub>*[Unlabeled Data Improves Adversarial Robustness](https://arxiv.org/abs/1905.13736)*</sub> | <sub>89.69%</sub> | <sub>59.53%</sub> | <sub>WideResNet-28-10</sub> | <sub>NeurIPS 2019</sub> |
- | <sub>**5**</sub> | <sub><sup>**Sehwag2021Proxy**</sup></sub> | <sub>*[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)*</sub> | <sub>85.85%</sub> | <sub>59.09%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Apr 2021</sub> |
- | <sub>**6**</sub> | <sub><sup>**Sehwag2020Hydra**</sup></sub> | <sub>*[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)*</sub> | <sub>88.98%</sub> | <sub>57.14%</sub> | <sub>WideResNet-28-10</sub> | <sub>NeurIPS 2020</sub> |
- | <sub>**7**</sub> | <sub><sup>**Gowal2020Uncovering_70_16**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>85.29%</sub> | <sub>57.14%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Oct 2020</sub> |
- | <sub>**8**</sub> | <sub><sup>**Gowal2020Uncovering_34_20**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>85.64%</sub> | <sub>56.82%</sub> | <sub>WideResNet-34-20</sub> | <sub>arXiv, Oct 2020</sub> |
- | <sub>**9**</sub> | <sub><sup>**Wang2020Improving**</sup></sub> | <sub>*[Improving Adversarial Robustness Requires Revisiting Misclassified Examples](https://openreview.net/forum?id=rklOg6EFwS)*</sub> | <sub>87.50%</sub> | <sub>56.29%</sub> | <sub>WideResNet-28-10</sub> | <sub>ICLR 2020</sub> |
- | <sub>**10**</sub> | <sub><sup>**Wu2020Adversarial**</sup></sub> | <sub>*[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)*</sub> | <sub>85.36%</sub> | <sub>56.17%</sub> | <sub>WideResNet-34-10</sub> | <sub>NeurIPS 2020</sub> |
- | <sub>**11**</sub> | <sub><sup>**Hendrycks2019Using**</sup></sub> | <sub>*[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)*</sub> | <sub>87.11%</sub> | <sub>54.92%</sub> | <sub>WideResNet-28-10</sub> | <sub>ICML 2019</sub> |
- | <sub>**12**</sub> | <sub><sup>**Sehwag2021Proxy_R18**</sup></sub> | <sub>*[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)*</sub> | <sub>84.38%</sub> | <sub>54.43%</sub> | <sub>ResNet-18</sub> | <sub>arXiv, Apr 2021</sub> |
- | <sub>**13**</sub> | <sub><sup>**Pang2020Boosting**</sup></sub> | <sub>*[Boosting Adversarial Training with Hypersphere Embedding](https://arxiv.org/abs/2002.08619)*</sub> | <sub>85.14%</sub> | <sub>53.74%</sub> | <sub>WideResNet-34-20</sub> | <sub>NeurIPS 2020</sub> |
- | <sub>**14**</sub> | <sub><sup>**Cui2020Learnable_34_20**</sup></sub> | <sub>*[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)*</sub> | <sub>88.70%</sub> | <sub>53.57%</sub> | <sub>WideResNet-34-20</sub> | <sub>arXiv, Nov 2020</sub> |
- | <sub>**15**</sub> | <sub><sup>**Zhang2020Attacks**</sup></sub> | <sub>*[Attacks Which Do Not Kill Training Make Adversarial Learning Stronger](https://arxiv.org/abs/2002.11242)*</sub> | <sub>84.52%</sub> | <sub>53.51%</sub> | <sub>WideResNet-34-10</sub> | <sub>ICML 2020</sub> |
- | <sub>**16**</sub> | <sub><sup>**Rice2020Overfitting**</sup></sub> | <sub>*[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)*</sub> | <sub>85.34%</sub> | <sub>53.42%</sub> | <sub>WideResNet-34-20</sub> | <sub>ICML 2020</sub> |
- | <sub>**17**</sub> | <sub><sup>**Huang2020Self**</sup></sub> | <sub>*[Self-Adaptive Training: beyond Empirical Risk Minimization](https://arxiv.org/abs/2002.10319)*</sub> | <sub>83.48%</sub> | <sub>53.34%</sub> | <sub>WideResNet-34-10</sub> | <sub>NeurIPS 2020</sub> |
- | <sub>**18**</sub> | <sub><sup>**Zhang2019Theoretically**</sup></sub> | <sub>*[Theoretically Principled Trade-off between Robustness and Accuracy](https://arxiv.org/abs/1901.08573)*</sub> | <sub>84.92%</sub> | <sub>53.08%</sub> | <sub>WideResNet-34-10</sub> | <sub>ICML 2019</sub> |
- | <sub>**19**</sub> | <sub><sup>**Cui2020Learnable_34_10**</sup></sub> | <sub>*[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)*</sub> | <sub>88.22%</sub> | <sub>52.86%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Nov 2020</sub> |
- | <sub>**20**</sub> | <sub><sup>**Chen2020Adversarial**</sup></sub> | <sub>*[Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning](https://arxiv.org/abs/2003.12862)*</sub> | <sub>86.04%</sub> | <sub>51.56%</sub> | <sub>ResNet-50 <br /> (3x ensemble)</sub> | <sub>CVPR 2020</sub> |
- | <sub>**21**</sub> | <sub><sup>**Chen2020Efficient**</sup></sub> | <sub>*[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)*</sub> | <sub>85.32%</sub> | <sub>51.12%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Oct 2020</sub> |
- | <sub>**22**</sub> | <sub><sup>**Sitawarin2020Improving**</sup></sub> | <sub>*[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)*</sub> | <sub>86.84%</sub> | <sub>50.72%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Mar 2020</sub> |
- | <sub>**23**</sub> | <sub><sup>**Engstrom2019Robustness**</sup></sub> | <sub>*[Robustness library](https://github.com/MadryLab/robustness)*</sub> | <sub>87.03%</sub> | <sub>49.25%</sub> | <sub>ResNet-50</sub> | <sub>GitHub,<br>Oct 2019</sub> |
- | <sub>**24**</sub> | <sub><sup>**Zhang2019You**</sup></sub> | <sub>*[You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle](https://arxiv.org/abs/1905.00877)*</sub> | <sub>87.20%</sub> | <sub>44.83%</sub> | <sub>WideResNet-34-10</sub> | <sub>NeurIPS 2019</sub> |
- | <sub>**25**</sub> | <sub><sup>**Wong2020Fast**</sup></sub> | <sub>*[Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)*</sub> | <sub>83.34%</sub> | <sub>43.21%</sub> | <sub>ResNet-18</sub> | <sub>ICLR 2020</sub> |
- | <sub>**26**</sub> | <sub><sup>**Ding2020MMA**</sup></sub> | <sub>*[MMA Training: Direct Input Space Margin Maximization through Adversarial Training](https://openreview.net/forum?id=HkeryxBtPB)*</sub> | <sub>84.36%</sub> | <sub>41.44%</sub> | <sub>WideResNet-28-4</sub> | <sub>ICLR 2020</sub> |
- | <sub>**27**</sub> | <sub><sup>**Standard**</sup></sub> | <sub>*[Standardly trained model](https://github.com/RobustBench/robustbench/)*</sub> | <sub>94.78%</sub> | <sub>0.00%</sub> | <sub>WideResNet-28-10</sub> | <sub>N/A</sub> |
+ | <sub>**4**</sub> | <sub><sup>**Zhang2020Geometry**</sup></sub> | <sub>*[Geometry-aware Instance-reweighted Adversarial Training](https://arxiv.org/abs/2010.01736)*</sub> | <sub>89.36%</sub> | <sub>59.64%</sub> | <sub>WideResNet-28-10</sub> | <sub>ICLR 2021</sub> |
+ | <sub>**5**</sub> | <sub><sup>**Carmon2019Unlabeled**</sup></sub> | <sub>*[Unlabeled Data Improves Adversarial Robustness](https://arxiv.org/abs/1905.13736)*</sub> | <sub>89.69%</sub> | <sub>59.53%</sub> | <sub>WideResNet-28-10</sub> | <sub>NeurIPS 2019</sub> |
+ | <sub>**6**</sub> | <sub><sup>**Sehwag2021Proxy**</sup></sub> | <sub>*[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)*</sub> | <sub>85.85%</sub> | <sub>59.09%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Apr 2021</sub> |
+ | <sub>**7**</sub> | <sub><sup>**Sehwag2020Hydra**</sup></sub> | <sub>*[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)*</sub> | <sub>88.98%</sub> | <sub>57.14%</sub> | <sub>WideResNet-28-10</sub> | <sub>NeurIPS 2020</sub> |
+ | <sub>**8**</sub> | <sub><sup>**Gowal2020Uncovering_70_16**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>85.29%</sub> | <sub>57.14%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Oct 2020</sub> |
+ | <sub>**9**</sub> | <sub><sup>**Gowal2020Uncovering_34_20**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>85.64%</sub> | <sub>56.82%</sub> | <sub>WideResNet-34-20</sub> | <sub>arXiv, Oct 2020</sub> |
+ | <sub>**10**</sub> | <sub><sup>**Wang2020Improving**</sup></sub> | <sub>*[Improving Adversarial Robustness Requires Revisiting Misclassified Examples](https://openreview.net/forum?id=rklOg6EFwS)*</sub> | <sub>87.50%</sub> | <sub>56.29%</sub> | <sub>WideResNet-28-10</sub> | <sub>ICLR 2020</sub> |
+ | <sub>**11**</sub> | <sub><sup>**Wu2020Adversarial**</sup></sub> | <sub>*[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)*</sub> | <sub>85.36%</sub> | <sub>56.17%</sub> | <sub>WideResNet-34-10</sub> | <sub>NeurIPS 2020</sub> |
+ | <sub>**12**</sub> | <sub><sup>**Hendrycks2019Using**</sup></sub> | <sub>*[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)*</sub> | <sub>87.11%</sub> | <sub>54.92%</sub> | <sub>WideResNet-28-10</sub> | <sub>ICML 2019</sub> |
+ | <sub>**13**</sub> | <sub><sup>**Sehwag2021Proxy_R18**</sup></sub> | <sub>*[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)*</sub> | <sub>84.38%</sub> | <sub>54.43%</sub> | <sub>ResNet-18</sub> | <sub>arXiv, Apr 2021</sub> |
+ | <sub>**14**</sub> | <sub><sup>**Pang2020Boosting**</sup></sub> | <sub>*[Boosting Adversarial Training with Hypersphere Embedding](https://arxiv.org/abs/2002.08619)*</sub> | <sub>85.14%</sub> | <sub>53.74%</sub> | <sub>WideResNet-34-20</sub> | <sub>NeurIPS 2020</sub> |
+ | <sub>**15**</sub> | <sub><sup>**Cui2020Learnable_34_20**</sup></sub> | <sub>*[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)*</sub> | <sub>88.70%</sub> | <sub>53.57%</sub> | <sub>WideResNet-34-20</sub> | <sub>arXiv, Nov 2020</sub> |
+ | <sub>**16**</sub> | <sub><sup>**Zhang2020Attacks**</sup></sub> | <sub>*[Attacks Which Do Not Kill Training Make Adversarial Learning Stronger](https://arxiv.org/abs/2002.11242)*</sub> | <sub>84.52%</sub> | <sub>53.51%</sub> | <sub>WideResNet-34-10</sub> | <sub>ICML 2020</sub> |
+ | <sub>**17**</sub> | <sub><sup>**Rice2020Overfitting**</sup></sub> | <sub>*[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)*</sub> | <sub>85.34%</sub> | <sub>53.42%</sub> | <sub>WideResNet-34-20</sub> | <sub>ICML 2020</sub> |
+ | <sub>**18**</sub> | <sub><sup>**Huang2020Self**</sup></sub> | <sub>*[Self-Adaptive Training: beyond Empirical Risk Minimization](https://arxiv.org/abs/2002.10319)*</sub> | <sub>83.48%</sub> | <sub>53.34%</sub> | <sub>WideResNet-34-10</sub> | <sub>NeurIPS 2020</sub> |
+ | <sub>**19**</sub> | <sub><sup>**Zhang2019Theoretically**</sup></sub> | <sub>*[Theoretically Principled Trade-off between Robustness and Accuracy](https://arxiv.org/abs/1901.08573)*</sub> | <sub>84.92%</sub> | <sub>53.08%</sub> | <sub>WideResNet-34-10</sub> | <sub>ICML 2019</sub> |
+ | <sub>**20**</sub> | <sub><sup>**Cui2020Learnable_34_10**</sup></sub> | <sub>*[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)*</sub> | <sub>88.22%</sub> | <sub>52.86%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Nov 2020</sub> |
+ | <sub>**21**</sub> | <sub><sup>**Chen2020Adversarial**</sup></sub> | <sub>*[Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning](https://arxiv.org/abs/2003.12862)*</sub> | <sub>86.04%</sub> | <sub>51.56%</sub> | <sub>ResNet-50 <br /> (3x ensemble)</sub> | <sub>CVPR 2020</sub> |
+ | <sub>**22**</sub> | <sub><sup>**Chen2020Efficient**</sup></sub> | <sub>*[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)*</sub> | <sub>85.32%</sub> | <sub>51.12%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Oct 2020</sub> |
+ | <sub>**23**</sub> | <sub><sup>**Sitawarin2020Improving**</sup></sub> | <sub>*[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)*</sub> | <sub>86.84%</sub> | <sub>50.72%</sub> | <sub>WideResNet-34-10</sub> | <sub>arXiv, Mar 2020</sub> |
+ | <sub>**24**</sub> | <sub><sup>**Engstrom2019Robustness**</sup></sub> | <sub>*[Robustness library](https://github.com/MadryLab/robustness)*</sub> | <sub>87.03%</sub> | <sub>49.25%</sub> | <sub>ResNet-50</sub> | <sub>GitHub,<br>Oct 2019</sub> |
+ | <sub>**25**</sub> | <sub><sup>**Zhang2019You**</sup></sub> | <sub>*[You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle](https://arxiv.org/abs/1905.00877)*</sub> | <sub>87.20%</sub> | <sub>44.83%</sub> | <sub>WideResNet-34-10</sub> | <sub>NeurIPS 2019</sub> |
+ | <sub>**26**</sub> | <sub><sup>**Wong2020Fast**</sup></sub> | <sub>*[Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)*</sub> | <sub>83.34%</sub> | <sub>43.21%</sub> | <sub>ResNet-18</sub> | <sub>ICLR 2020</sub> |
+ | <sub>**27**</sub> | <sub><sup>**Ding2020MMA**</sup></sub> | <sub>*[MMA Training: Direct Input Space Margin Maximization through Adversarial Training](https://openreview.net/forum?id=HkeryxBtPB)*</sub> | <sub>84.36%</sub> | <sub>41.44%</sub> | <sub>WideResNet-28-4</sub> | <sub>ICLR 2020</sub> |
+ | <sub>**28**</sub> | <sub><sup>**Standard**</sup></sub> | <sub>*[Standardly trained model](https://github.com/RobustBench/robustbench/)*</sub> | <sub>94.78%</sub> | <sub>0.00%</sub> | <sub>WideResNet-28-10</sub> | <sub>N/A</sub> |
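
Any model ID from the table above can be loaded through the robustbench model zoo. Below is a minimal sketch, assuming the package's `load_model` helper with the CIFAR-10 Linf threat model; the exact keyword names may differ between robustbench versions.

```python
# Minimal sketch: load a leaderboard entry by its model ID.
# Assumes the robustbench package is installed; older releases used a
# `norm` keyword where newer ones use `threat_model`.
import torch
from robustbench.utils import load_model

# "Carmon2019Unlabeled" is one of the model IDs listed in the table above.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10',
                   threat_model='Linf')
model.eval()

# Dummy forward pass on a CIFAR-10-shaped batch (1 x 3 x 32 x 32).
x = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # expected: torch.Size([1, 10])
```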
#### L2