From bddead9bb9c3f9e9c048c5cd84460349c8041099 Mon Sep 17 00:00:00 2001
From: cboylston <71150546+cboylston@users.noreply.github.com>
Date: Wed, 22 Mar 2023 23:07:58 -0400
Subject: [PATCH] Update robustness.md

Added links to the papers for "Just Train Twice" and "Correct-N-Contrast."
---
 robustness.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/robustness.md b/robustness.md
index b8f245e..043130b 100644
--- a/robustness.md
+++ b/robustness.md
@@ -27,8 +27,8 @@ To obtain good performance on _all_ subgroups, in addition to the ground-truth l
 Subgroup information also does not need to be explicitly annotated or known. Several recent works aim to first infer subgroups before using a robust training method to obtain good performance on all subgroups. A frequent heuristic is to use the above observation that models trained with empirical risk minimization (ERM) and that minimize average error may still perform poorly on minority subgroups; one can then infer minority or majority subgroups depending on if the trained ERM model correctly predicts the datapoints.
 - [Learning from Failure (LfF)](https://arxiv.org/abs/2007.02561) trains two models in tandem. Each model trains on the same data batches, where for each batch, datapoints that the first model gets incorrect are upweighted in the loss objective for the second model.
-- [Just Train Twice (JTT)]() trains an initial ERM model for a few epochs, identifies the datapoints this model gets incorrect after training, and trains a new model with ERM on the same dataset but with the incorrect points upsampled.
-- [Correct-N-Contrast (CNC)]() also trains an initial ERM model, but uses supervised contrastive learning to train a new model to learn similar representations for datapoints with the same class but different trained ERM model predictions.
+- [Just Train Twice (JTT)](https://arxiv.org/abs/2107.09044) trains an initial ERM model for a few epochs, identifies the datapoints this model gets incorrect after training, and trains a new model with ERM on the same dataset but with the incorrect points upsampled.
+- [Correct-N-Contrast (CNC)](https://arxiv.org/abs/2203.01517) also trains an initial ERM model, but uses supervised contrastive learning to train a new model to learn similar representations for datapoints with the same class but different trained ERM model predictions.
 
 ## Certified Robustness against Adversarial Perturbations
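The two-stage JTT procedure described in the patched section can be sketched as follows. This is a minimal illustration only: the weighted logistic-regression trainer stands in for a generic ERM model, and the synthetic data, the `train_logreg` helper, and the `lambda_up` value are all assumptions for the sake of the example, not details from the patch or the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, weights, epochs=200, lr=0.1):
    """Weighted logistic regression via gradient descent (stand-in for ERM)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (weights * (p - y)) / weights.sum()
        w -= lr * grad
    return w

def predict(X, w):
    return (X @ w > 0).astype(int)

# Synthetic data: a noisy "core" feature plus a spurious feature that
# agrees with the label for the majority group but flips for the minority.
n_maj, n_min = 900, 100
y = np.concatenate([rng.integers(0, 2, n_maj), rng.integers(0, 2, n_min)])
core = y + rng.normal(0.0, 0.5, n_maj + n_min)
spurious = np.concatenate([y[:n_maj], 1 - y[n_maj:]])
X = np.column_stack([core, spurious, np.ones_like(core)])

# Stage 1: train a plain ERM model for a few epochs (uniform weights),
# then record which datapoints it still gets wrong.
w1 = train_logreg(X, y, np.ones(len(y)), epochs=50)
errors = predict(X, w1) != y

# Stage 2: retrain with ERM on the same data, but with the error set
# upsampled (here via a per-example weight lambda_up, a hyperparameter).
lambda_up = 20.0
weights = np.where(errors, lambda_up, 1.0)
w2 = train_logreg(X, y, weights)
```

Upweighting the stage-1 error set (which is dominated by minority-group points, since the spurious feature misleads the first model there) pushes the second model toward the core feature, which is the intended effect of JTT's upsampling step.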