"Out-of-Sample Error Estimation for Robust M-estimators with Convex Penalties" by Pierre C. Bellec.


Abstract: A generic out-of-sample error estimate is proposed for robust $M$-estimators regularized with a convex penalty in high-dimensional linear regression, where $(X,y)$ is observed and $p,n$ are of the same order. If $\psi$ is the derivative of the robust data-fitting loss $\rho$, the estimate depends on the observed data only through the quantities $\hat\psi = \psi(y-X\hat\beta)$, $X^\top \hat\psi$, and the derivatives $(\partial/\partial y) \hat\psi$ and $(\partial/\partial y) X\hat\beta$ for fixed $X$.
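To make the quantities concrete, here is a minimal sketch, assuming the Huber loss as the robust data-fitting loss $\rho$ (its derivative $\psi$ is the clipping function, which is 1-Lipschitz as the paper requires). The function names and the choice of threshold are illustrative, not from the paper.

```python
import numpy as np

def huber_psi(r, delta=1.0):
    """Derivative psi = rho' of the Huber loss: clips residuals to
    [-delta, delta], hence 1-Lipschitz as required by the paper."""
    return np.clip(r, -delta, delta)

def estimator_quantities(X, y, beta_hat, delta=1.0):
    """Compute psi_hat = psi(y - X beta_hat) and X^T psi_hat,
    two of the quantities through which the out-of-sample error
    estimate depends on the observed data."""
    psi_hat = huber_psi(y - X @ beta_hat, delta)
    return psi_hat, X.T @ psi_hat
```

The remaining ingredients, the derivatives $(\partial/\partial y)\hat\psi$ and $(\partial/\partial y)X\hat\beta$ for fixed $X$, depend on the chosen penalty and are not reproduced here.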

The estimate is valid either non-asymptotically when $p/n\le \gamma$, or asymptotically in the high-dimensional regime $p/n\to\gamma'\in(0,\infty)$. General differentiable loss functions $\rho$ are allowed provided that $\psi=\rho'$ is 1-Lipschitz. The validity of the out-of-sample error estimate holds in particular for the $\ell_1$-penalized Huber $M$-estimator, provided the number of corrupted observations and the sparsity of the true $\beta$ are bounded from above by $s_*n$ for some small enough constant $s_*\in(0,1)$ independent of $n,p$.

For the squared loss, and in the absence of corruption in the response, the results additionally yield $n^{-1/2}$-consistent estimates of the noise variance and of the generalization error. This generalizes, to arbitrary convex penalties, estimates previously known for the Lasso.
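For the Lasso special case mentioned above, the previously known noise-variance estimate divides the residual sum of squares by $n$ minus the Lasso's degrees of freedom (the number of nonzero coefficients). A minimal sketch of that classical estimate, with illustrative function names not taken from the paper:

```python
import numpy as np

def lasso_noise_variance(X, y, beta_hat):
    """Degrees-of-freedom adjusted noise variance estimate for the Lasso:
    sigma2_hat = ||y - X beta_hat||^2 / (n - df), where df is the number
    of nonzero coefficients of beta_hat. The paper generalizes such
    estimates to arbitrary convex penalties; this sketch covers only
    the Lasso case."""
    n = X.shape[0]
    residual = y - X @ beta_hat
    df = np.count_nonzero(beta_hat)
    return residual @ residual / (n - df)
```

With the zero estimator (`df = 0`) this reduces to the naive estimate $\|y\|^2/n$; nonzero coefficients shrink the denominator to compensate for the fitting done by the Lasso.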

Submission history

From: Pierre C. Bellec


[v1] Wed, 26 Aug 2020 21:50:41 UTC (61 KB)
[v2] Mon, 14 Sep 2020 23:23:24 UTC (64 KB)
[v3] Mon, 12 Oct 2020 14:48:21 UTC (160 KB)
[v4] Thu, 3 Jun 2021 20:17:13 UTC (191 KB)
[v5] Thu, 30 Mar 2023 17:41:03 UTC (183 KB)

