State-of-the-art machine learning models can be vulnerable to very small, adversarially constructed input perturbations. Adversarial training is an effective approach for defending against such attacks. It is formulated as a min-max problem, seeking the best solution when the training data are corrupted by worst-case perturbations. For linear regression, adversarial training can be reformulated as a convex problem. Using this reformulation, we make two technical contributions. First, we cast the training problem as an instance of robust regression to reveal its connection to parameter-shrinking methods; in particular, $\ell_\infty$ adversarial training produces sparse solutions. Second, we study adversarial training in the overparameterized regime, i.e., when there are more parameters than data points. We prove that adversarial training with small disturbances yields a minimum-norm solution that interpolates the training data. Ridge regression and the lasso approximate such interpolating solutions only as their regularization parameter vanishes; in contrast, for adversarial training the transition into the interpolation regime is abrupt and occurs at a non-zero disturbance magnitude. We prove this result and illustrate it with a numerical example.
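As a rough illustration of the convex reformulation above: for $\ell_\infty$-bounded input perturbations of radius $\varepsilon$, the worst-case squared loss of a linear model reduces to $\big(|x_i^\top w - y_i| + \varepsilon \lVert w \rVert_1\big)^2$, which is convex in $w$. The sketch below solves this objective with the off-the-shelf solver cvxpy; the synthetic data, the radius eps, and all variable names are illustrative choices, not taken from the paper.

```python
# Minimal sketch of l_inf adversarial training for linear regression.
# Assumes the standard worst-case reduction: for ||delta||_inf <= eps,
#   max_delta ((x + delta)^T w - y)^2 = (|x^T w - y| + eps * ||w||_1)^2,
# which makes the training objective convex in w.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 10                                  # 40 samples, 10 features
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.0, 0.5]                  # sparse ground truth
y = X @ w_true + 0.1 * rng.standard_normal(n)

eps = 0.1                                      # adversarial radius (illustrative)
w = cp.Variable(d)
worst_case_residual = cp.abs(X @ w - y) + eps * cp.norm1(w)
cp.Problem(cp.Minimize(cp.sum(cp.square(worst_case_residual)))).solve()

print(np.round(w.value, 3))   # many coordinates driven (near) zero,
                              # mirroring the sparsity induced by the l1 term
```

Sweeping eps with the same solver is one way to observe the abrupt transition described above: in an overparameterized instance (d > n), the fitted model should keep interpolating the training data (X @ w.value ≈ y) for all radii below some non-zero threshold, and stop doing so past it.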


