Consider observations $(X,y)$ from a single-index model with an unknown link function, Gaussian covariates, and a regularized M-estimator $\hat\beta$ constructed from a convex loss function and a convex regularizer. In the proportional regime where both the sample size $n$ and the dimension $p$ grow with $p/n$ bounded, the empirical distributions of $\hat\beta$ and of the predicted values $X\hat\beta$ have been previously characterized in a number of models: the empirical distributions are known to converge to proximal operators of the loss and penalty in a related Gaussian sequence model, which depends on the data-generating process. This connection between $(\hat\beta,X\hat\beta)$ and the corresponding proximal operators typically requires solving fixed-point equations that involve unobservable quantities such as the prior distribution of the index or the link function.
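For reference, the proximal operator appearing in such characterizations is the standard one (the scale parameter $\tau>0$ below is generic notation, not taken from the original): for a convex function $f$,
\[
\operatorname{prox}_{\tau f}(x) \;=\; \operatorname*{arg\,min}_{u}\;\Bigl\{ f(u) + \tfrac{1}{2\tau}\,\|u - x\|^2 \Bigr\}.
\]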
In this paper, we develop a different theory to describe the empirical distributions of $\hat\beta$ and $X\hat\beta$: approximations of $(\hat\beta,X\hat\beta)$ in terms of proximal operators are given that involve only observable adjustments. These proposed observable adjustments are estimated from the data and do not require prior knowledge of, for instance, the index or the link function. The new adjustments yield confidence intervals for individual components of the index, as well as estimators of the correlation between $\hat\beta$ and the index. The interplay between loss, regularization, and model is thus captured in a data-driven manner, without solving the fixed-point equations studied in previous works. The results apply both to strongly convex regularizers and to unregularized M-estimation. Simulations are provided for the square and logistic losses in single-index models, including logistic regression and 1-bit compressed sensing with 20\% corrupted bits.
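A minimal sketch of the kind of simulation setting described above (1-bit compressed sensing with 20\% flipped bits, fit by ridge-regularized logistic regression). This is an illustrative assumption, not the paper's estimator or adjustment procedure; the step size, number of iterations, and penalty level \texttt{lam} are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 200

# Gaussian design and a unit-norm index
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
beta /= np.linalg.norm(beta)

# 1-bit observations with 20% of the bits flipped at random
y = np.sign(X @ beta)
flip = rng.random(n) < 0.20
y[flip] *= -1

# ridge-regularized logistic regression via plain gradient descent
# (an assumed solver, used here only to produce some M-estimator)
lam = 0.1
bhat = np.zeros(p)
for _ in range(500):
    margins = y * (X @ bhat)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0) + lam * bhat
    bhat -= 0.5 * grad

# cosine similarity between the estimate and the true index
corr = bhat @ beta / (np.linalg.norm(bhat) * np.linalg.norm(beta))
print(round(float(corr), 3))
```

Despite the 20\% corrupted bits, the fitted direction remains substantially correlated with the true index; estimating that correlation from data alone is one of the tasks the observable adjustments address.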