We consider the problem of robustly testing the norm of a high-dimensional sparse signal vector under two different observation models. In the first model, we are given $n$ i.i.d. samples from the distribution $\mathcal{N}\left(\theta,I_d\right)$ (with unknown $\theta$), of which a small fraction has been arbitrarily corrupted. Under the promise that $\|\theta\|_0\le s$, we want to correctly distinguish whether $\|\theta\|_2=0$ or $\|\theta\|_2>\gamma$, for some input parameter $\gamma>0$. We show that any algorithm for this task requires $n=\Omega\left(s\log\frac{ed}{s}\right)$ samples, which is tight up to logarithmic factors. We also extend our results to other common notions of sparsity, namely $\|\theta\|_q\le s$ for any $0 < q < 2$. In the second observation model, the data is generated according to a sparse linear regression model, with the covariates drawn i.i.d. Gaussian and the regression coefficients (the signal) known to be $s$-sparse. Here too, we assume that an $\epsilon$ fraction of the data is arbitrarily corrupted. We show that any algorithm that reliably tests the norm of the regression coefficients requires at least $n=\Omega\left(\min(s\log d,{1}/{\gamma^4})\right)$ samples. Our results show that the complexity of testing in these two settings significantly increases under the robustness constraint, in line with recent observations in robust mean testing and robust covariance testing.
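To make the first observation model concrete, here is a minimal sketch of how such corrupted data could be generated: an $s$-sparse mean vector $\theta$ with $\|\theta\|_2=\gamma$, $n$ Gaussian samples around it, and an $\epsilon$ fraction of rows overwritten. The equal-magnitude support entries and the particular corruption (a large additive shift) are illustrative assumptions, not the paper's construction; a true adversary may corrupt the samples arbitrarily.

```python
import numpy as np

def sample_corrupted(n, d, s, gamma, eps, rng):
    """Draw n samples from N(theta, I_d) for an s-sparse theta with
    ||theta||_2 = gamma, then corrupt an eps fraction of the rows.

    Illustrative only: the support pattern and the corruption used here
    are arbitrary choices, not the adversary assumed in the lower bound.
    """
    theta = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    theta[support] = gamma / np.sqrt(s)  # equal magnitudes => ||theta||_2 = gamma
    X = rng.standard_normal((n, d)) + theta  # i.i.d. N(theta, I_d) rows
    n_bad = int(eps * n)
    X[:n_bad] += 10.0  # one possible (crude) corruption of eps*n samples
    return X, theta
```

Under the alternative hypothesis one would call this with $\gamma > 0$; the null corresponds to $\theta = 0$ (i.e., `gamma = 0`), and the tester sees only `X`.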
