# Information about the surveyed papers

| #P | Authors | Title | Venue | Year |
|----|---------|-------|-------|------|
| P1 | Belkin et al. | Overfitting or perfect fitting? Risk bounds for classification and regression rules that interpolate | NeurIPS | 2018 |
| P2 | Chatterjee & Mishchenko | Circuit-based intrinsic methods to detect overfitting | ICML | 2020 |
| P3 | Chatterji & Long | Foolish crowds support benign overfitting | JMLR | 2022 |
| P4 | Chen et al. | Robust overfitting may be mitigated by properly learned smoothening | ICLR | 2021 |
| P5 | d'Ascoli et al. | Triple descent and the two kinds of overfitting: where & why do they appear? | NeurIPS | 2020 |
| P6 | Feldman et al. | The advantages of multiple classes for reducing overfitting from test set reuse | ICML | 2019 |
| P7 | Feldman et al. | Open problem: how fast can a multiclass test set be overfit? | COLT | 2019 |
| P8 | Frei et al. | Benign overfitting without linearity: neural network classifiers trained by gradient descent for noisy linear data | COLT | 2022 |
| P9 | He et al. | Sparse double descent: where network pruning aggravates overfitting | ICML | 2022 |
| P10 | Huang et al. | Sparse progressive distillation: resolving overfitting under pretrain-and-finetune paradigm | ACL | 2022 |
| P11 | Ju et al. | Overfitting can be harmless for basis pursuit, but only to a degree | NeurIPS | 2020 |
| P12 | Ju et al. | On the generalization power of overfitted two-layer neural tangent kernel models | ICML | 2021 |
| P13 | Kim et al. | Understanding catastrophic overfitting in single-step adversarial training | AAAI | 2021 |
| P14 | Koehler et al. | Uniform convergence of interpolators: Gaussian width, norm bounds and benign overfitting | NeurIPS | 2021 |
| P15 | Liu et al. | Overfitting the data: compact neural video delivery via content-aware feature modulation | ICCV | 2021 |
| P16 | Mohammed & Cawley | Over-fitting in model selection with Gaussian process regression | ICML | 2017 |
| P17 | Rice et al. | Overfitting in adversarially robust deep learning | ICML | 2020 |
| P18 | Roelofs et al. | A meta-analysis of overfitting in machine learning | NeurIPS | 2019 |
| P19 | Rozendaal et al. | Overfitting for fun and profit: instance-adaptive data compression | ICLR | 2021 |
| P20 | Russo & Zou | How much does your data exploration overfit? Controlling bias via information usage | IEEE Trans. Inf. Theory | 2020 |
| P21 | Sanyal et al. | How benign is benign overfitting? | ICLR | 2021 |
| P22 | Shamir | The implicit bias of benign overfitting | COLT | 2022 |
| P23 | Singla et al. | Low curvature activations reduce overfitting in adversarial training | ICCV | 2021 |
| P24 | Song et al. | Observational overfitting in reinforcement learning | ICLR | 2020 |
| P25 | Steck | Autoencoders that don't overfit towards the identity | NeurIPS | 2020 |
| P26 | Sun et al. | meProp: sparsified back propagation for accelerated deep learning with reduced overfitting | ICML | 2017 |
| P27 | Telgarsky | Stochastic linear optimization never overfits with quadratically-bounded losses on general data | COLT | 2022 |
| P28 | Wang et al. | Benign overfitting in multiclass classification: all roads lead to interpolation | NeurIPS | 2021 |
| P29 | Webster et al. | Detecting overfitting of deep generative networks via latent recovery | CVPR | 2019 |
| P30 | Werpachowski et al. | Detecting overfitting via adversarial examples | NeurIPS | 2019 |
| P31 | Xu et al. | Overfitting avoidance in tensor train factorization and completion: prior analysis and inference | ICDM | 2021 |
| P32 | Zhang & Amini | Label consistency in overfitted generalized k-means | NeurIPS | 2021 |
| P33 | Zhang et al. | Why overfitting isn't always bad: retrofitting cross-lingual word embeddings to dictionaries | ACL | 2020 |