Suppose the unknown regression function of a single variable is known to have derivatives up to order $(\gamma+1)$, with the $(\gamma+1)$-th derivative bounded in absolute value by a common constant everywhere, or a.e. The classical minimax optimal rate of the Mean Integrated Squared Error (MISE), $\left(\frac{1}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$, suggests that the larger $\gamma$ is, the closer the rate approaches $\frac{1}{n}$. This paper shows that: (i) for $n\leq\left(\gamma+1\right)^{2\gamma+3}$, the minimax optimal MISE rate is roughly $\frac{\log n}{n}$ and the optimal smoothness degree to exploit is roughly $\left\lceil \frac{\log n}{2}\right\rceil -2$; (ii) for $n>\left(\gamma+1\right)^{2\gamma+3}$, the minimax optimal MISE rate is $\left(\frac{1}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$ and the optimal smoothness degree to exploit is $\gamma+1$.
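The two regimes can be compared numerically. A minimal sketch (the function names and the choice $\gamma=2$ are ours, for illustration only):

```python
import math

def classical_rate(n, gamma):
    """Classical minimax MISE rate (1/n)^((2*gamma+2)/(2*gamma+3))."""
    return (1.0 / n) ** ((2 * gamma + 2) / (2 * gamma + 3))

def small_sample_rate(n):
    """Rate log(n)/n claimed for the regime n <= (gamma+1)^(2*gamma+3)."""
    return math.log(n) / n

def regime_threshold(gamma):
    """Boundary n = (gamma+1)^(2*gamma+3) between the two regimes."""
    return (gamma + 1) ** (2 * gamma + 3)

# Example: for gamma = 2 the boundary is 3^7 = 2187 observations.
# At n = 1000 <= 2187 the rate log(n)/n exceeds the classical formula,
# i.e. the classical asymptotic rate is optimistic in this regime.
n, gamma = 1000, 2
print(regime_threshold(gamma))                          # 2187
print(small_sample_rate(n) > classical_rate(n, gamma))  # True
```

The comparison illustrates the point of result (i): below the threshold, the attainable MISE is governed by $\frac{\log n}{n}$ rather than the faster classical rate.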

At the core of the minimax optimality results is a set of metric entropy bounds developed in this paper for classes of smooth functions. Some of our bounds are original, and some improve and/or generalize those in the literature. The metric entropy bounds allow us to examine the minimax optimal MISE rates associated with some commonly seen smoothness classes and some non-standard smoothness classes, and they may also be of independent interest beyond nonparametric regression.
