In this paper, we develop an approach to inference in linear regression models when the number of potential explanatory variables is larger than the sample size. The approach treats each regression coefficient in turn as the interest parameter and the remaining coefficients as nuisance parameters, and seeks a transformation of the explanatory variables that respects the interest parameter while inducing sparsity in the relevant blocks of the conceptual Fisher information matrix. The induced sparsity is exploited through marginal least-squares analysis of each variable in turn, as in factorial experiments, thereby avoiding penalization. One parameterization of the problem has proved particularly useful, both computationally and mathematically. In particular, it permits an analytical solution of the optimal-transformation problem, facilitating theoretical analysis and comparison with other work. In contrast to regularized regression such as the lasso and its extensions, neither adjustment for selection nor rescaling of the explanatory variables is required, so the physical interpretation of the regression coefficients is preserved. The recommended usage is within a broader set of inferential statements, so as to reflect model and parameter uncertainty. We briefly discuss considerations involved in extending this work to other regression models.
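
As a rough illustration of the marginal least-squares idea described above, the following sketch fits each explanatory variable one at a time and reports the variables with the largest marginal t-ratios. It is a minimal sketch in Python/NumPy with a simulated design and hypothetical variable names (n, p, X, y, beta); it is not the authors' implementation and omits the sparsity-inducing transformation of the Fisher information that justifies the marginal analysis.

```python
# Illustrative sketch only: marginal least-squares analysis of each variable
# in turn, as in a factorial experiment.  The setup below is assumed for
# illustration and is not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 500                      # more potential explanatory variables than observations
X = rng.standard_normal((n, p))      # explanatory variables, left unscaled
beta = np.zeros(p)
beta[:5] = 2.0                       # a few nonzero coefficients (sparse signal)
y = X @ beta + rng.standard_normal(n)

# Treat each coefficient in turn as the interest parameter and regress y on
# an intercept plus x_j alone, recording the estimate and its standard error.
estimates = np.empty(p)
std_errors = np.empty(p)
for j in range(p):
    xj = np.column_stack([np.ones(n), X[:, j]])
    coef, *_ = np.linalg.lstsq(xj, y, rcond=None)
    resid = y - xj @ coef
    sigma2 = resid @ resid / (n - 2)          # residual variance estimate
    cov = sigma2 * np.linalg.inv(xj.T @ xj)   # covariance of the two coefficients
    estimates[j] = coef[1]
    std_errors[j] = np.sqrt(cov[1, 1])

# Variables with the largest marginal t-ratios.
t_ratios = estimates / std_errors
print(np.argsort(-np.abs(t_ratios))[:10])
```

No penalization or rescaling of the columns of X is involved; each fit is a small, standard least-squares problem, which is what preserves the interpretation of the individual coefficients.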


