Download the PDF of Hông Vân Lê's paper titled "Supervised learning with probabilistic morphisms and kernel mean embeddings".


Abstract: In this paper, we propose a concept of correct loss functions in a generative model of supervised learning for measurable spaces: the input space $\mathcal{X}$ and the label space $\mathcal{Y}$. A correct loss function in a generative model of supervised learning must correctly measure the discrepancy between elements of the hypothesis space $\mathcal{H}$ of possible predictors and the supervisor operator, which may not belong to $\mathcal{H}$. To define correct loss functions, we propose a characterization of a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ for a probability measure $\mu$ on $\mathcal{X}\times\mathcal{Y}$, relative to the projection $\Pi_{\mathcal{X}}: \mathcal{X}\times\mathcal{Y}\to \mathcal{X}$, as a solution of a linear operator equation. If $\mathcal{Y}$ is a separable topological space with the Borel $\sigma$-algebra $\mathcal{B}(\mathcal{Y})$, we propose another characterization of a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ as a minimizer of the mean squared error on the space of Markov kernels, called probabilistic morphisms, from $\mathcal{X}$ to $\mathcal{Y}$, using kernel mean embeddings. Using these results, and using inner measure to quantify the generalizability of a learning algorithm, we generalize a result due to Cucker and Smale concerning the learnability of regression models to the setting of the conditional probability estimation problem. We also present a variant of Vapnik's regularization method for solving stochastic ill-posed problems using inner measure, and give an example of its application.
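To make the two characterizations in the abstract concrete, here is a brief sketch in the abstract's notation; the kernel $k$, its RKHS $\mathcal{H}_k$, the feature map $\Phi_k$, and the mean embedding $M_k$ are illustrative assumptions, not taken verbatim from the paper. First, a Markov kernel $T$ from $\mathcal{X}$ to $\mathcal{Y}$ is a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ for $\mu$ relative to $\Pi_{\mathcal{X}}$ exactly when it satisfies the disintegration identity

\[
\mu(B \times A) \;=\; \int_B T(A \,|\, x)\, \mathrm{d}\big((\Pi_{\mathcal{X}})_{*}\mu\big)(x)
\quad \text{for all } A \in \mathcal{B}(\mathcal{Y}),\; B \in \mathcal{B}(\mathcal{X}),
\]

which is linear in $T$ and hence can be read as a linear operator equation. Second, given a bounded measurable kernel $k$ on $\mathcal{Y}$ with RKHS $\mathcal{H}_k$, feature map $\Phi_k(y) = k(y, \cdot)$, and kernel mean embedding $M_k(\nu) = \int_{\mathcal{Y}} \Phi_k \, \mathrm{d}\nu$, the mean-squared-error characterization takes the form

\[
\mu_{\mathcal{Y}|\mathcal{X}} \;\in\; \operatorname*{arg\,min}_{T}
\int_{\mathcal{X}\times\mathcal{Y}} \big\| \Phi_k(y) - M_k\big(T(\cdot \,|\, x)\big) \big\|_{\mathcal{H}_k}^{2} \, \mathrm{d}\mu(x, y),
\]

where $T$ ranges over Markov kernels (probabilistic morphisms) from $\mathcal{X}$ to $\mathcal{Y}$, under suitable assumptions on $k$ (e.g., that the mean embedding $M_k$ is injective).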

Submission history

From: Hông Vân Lê

[v1]

Wed, May 10, 2023 17:54:21 UTC (39 KB)

[v2]

Thu, May 25, 2023 17:24:18 UTC (44 KB)


