The Bayesian approach to solving inverse problems relies on the choice of a prior. This critical ingredient allows expert knowledge and physical constraints to be formulated in a probabilistic fashion and plays an important role in the success of the inference. Recently, Bayesian inverse problems have been solved using generative models as highly informative priors. A generative model is a popular machine learning tool for generating data whose properties closely resemble those of a given database. Typically, the distribution of the generated data is embedded in a low-dimensional manifold. For an inverse problem, the generative model is trained on a database that reflects the properties of the sought solution, such as the typical structure of tissue in the human brain in magnetic resonance (MR) imaging. The inference is then carried out in the low-dimensional manifold determined by the generative model, which strongly reduces the dimensionality of the inverse problem. However, this procedure produces a posterior that does not admit a Lebesgue density in the actual variables, and the accuracy achieved can strongly depend on the quality of the generative model. For linear Gaussian models, we explore an alternative Bayesian inference that is based on probabilistic generative models but is carried out in the original high-dimensional space. Using a Laplace approximation, we analytically derive the required prior probability density function induced by the generative model. Properties of the resulting inference are investigated. Specifically, we show that the derived Bayes estimates are consistent, in contrast to the approach in which the low-dimensional manifold of the generative model is employed. The MNIST data set is used to design numerical experiments that confirm our theoretical findings.
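As a minimal illustration of the induced prior referred to above (the notation is ours, introduced only for this sketch and not taken from the text): assume a deterministic generator $G\colon \mathbb{R}^k \to \mathbb{R}^n$ with latent variable $z \sim \mathcal{N}(0, I_k)$ and a Gaussian model mismatch of variance $\delta^2$. The prior induced on the data space is then the marginal

\[
  p(x) = \int_{\mathbb{R}^k} \mathcal{N}\!\left(x \mid G(z), \delta^2 I_n\right)\, \mathcal{N}\!\left(z \mid 0, I_k\right)\, \mathrm{d}z ,
\]

and a Laplace approximation of this integral around the mode

\[
  \hat{z}(x) = \operatorname*{arg\,min}_{z}\, f_x(z), \qquad f_x(z) = \frac{1}{2\delta^2}\,\lVert x - G(z)\rVert^2 + \frac{1}{2}\,\lVert z\rVert^2 ,
\]

yields a closed-form, high-dimensional prior density

\[
  p(x) \approx (2\pi\delta^2)^{-n/2}\, \exp\!\left\{-f_x(\hat{z}(x))\right\}\, \det\!\left(\nabla^2 f_x(\hat{z}(x))\right)^{-1/2} ,
\]

which admits a Lebesgue density in the original variables $x$ and can be combined with a linear Gaussian likelihood in the usual way.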