We study a new method for estimating the risk of an arbitrary estimator of the mean vector in the classical normal means problem. The key idea is to generate two auxiliary data vectors by adding carefully constructed normal noise vectors to the original data. We then train the estimator of interest on the first auxiliary vector and test it on the second. To stabilize the risk estimate, we average this procedure over multiple draws of the synthetic noise. A key aspect of this coupled bootstrap approach is that it delivers an unbiased estimate of risk under no assumptions on the estimator of the mean vector, albeit for a slightly "harder" version of the original problem, in which the noise variance is inflated. Under the assumptions required for Stein's unbiased risk estimator (SURE), we show that a limiting version of the coupled bootstrap estimator recovers SURE exactly. We also analyze a bias-variance decomposition of the risk estimator's error to elucidate the effects of the auxiliary noise variance and the number of bootstrap samples on the estimator's accuracy. Finally, we show that the coupled bootstrap risk estimator performs favorably in simulated experiments and denoising examples.
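To make the train/test construction concrete, here is a minimal Python sketch of one way the coupled bootstrap could be implemented. It assumes data y ~ N(θ, σ²Iₙ) with known σ, an auxiliary noise scale α > 0, a train vector y + √α·ω and a test vector y − ω/√α with ω ~ N(0, σ²Iₙ) (which makes the two vectors independent), and a target risk E‖g(y*) − θ‖² at the inflated variance (1+α)σ². The function name, the α convention, and the debiasing constants are our assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def coupled_bootstrap_risk(y, g, sigma, alpha=0.1, B=100, seed=None):
    """Sketch of a coupled-bootstrap risk estimate for an estimator g,
    targeting E||g(y*) - theta||^2 where y* ~ N(theta, (1+alpha) sigma^2 I),
    i.e. a slightly "harder" version of the problem with inflated variance.
    """
    rng = np.random.default_rng(seed)
    n = y.shape[0]
    draws = np.empty(B)
    for b in range(B):
        omega = rng.normal(scale=sigma, size=n)     # auxiliary normal noise
        y_train = y + np.sqrt(alpha) * omega        # first auxiliary vector (train)
        y_test = y - omega / np.sqrt(alpha)         # second auxiliary vector (test),
                                                    # independent of y_train
        # Test error, debiased by the extra noise injected into y_test:
        # E||omega||^2 / alpha = n sigma^2 / alpha.
        draws[b] = np.sum((y_test - g(y_train)) ** 2) - np.sum(omega ** 2) / alpha
    # Averaging over B noise draws stabilizes the estimate; subtracting
    # n sigma^2 removes the remaining bias from the test vector's own noise.
    return draws.mean() - n * sigma ** 2

# Usage example (hypothetical): risk of soft-thresholding on a sparse mean.
rng = np.random.default_rng(0)
theta = np.concatenate([np.full(10, 5.0), np.zeros(90)])
y = theta + rng.normal(size=100)  # sigma = 1
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)
print(coupled_bootstrap_risk(y, soft, sigma=1.0, alpha=0.1, B=200))
```

Under this construction each bootstrap draw is unbiased for the inflated-variance risk regardless of how g is defined, so no smoothness or differentiability of g is needed, in contrast to SURE.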


