In this paper, we consider a data-driven chance-constrained stochastic optimization problem in a Bayesian framework. Bayesian posterior distributions provide a principled mechanism for incorporating data and prior knowledge into stochastic optimization problems. However, computing the Bayesian posterior is typically intractable, and a large body of literature has developed approximate Bayesian computation methods. Here we focus on the statistical consistency (in an appropriate sense) of optimal values computed using approximate posterior distributions in the context of chance-constrained optimization. To this end, we rigorously prove frequentist consistency results showing that the optimal value converges to the optimal value of a fixed, parameterized constrained optimization problem. We augment this result by establishing the stochastic convergence rate of the optimal value. We also prove the convex feasibility of the approximate Bayesian stochastic optimization problem. Finally, we demonstrate the utility of our approach on an optimal staffing problem for the M/M/c queueing model.
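As an illustration of the setting described above, the following sketch combines the two ingredients of the abstract: a (hypothetical) Gamma posterior over the unknown arrival rate of an M/M/c queue, and a chance constraint enforced empirically over posterior samples. All specifics here (the posterior parameters, the service-level tolerance `wait_tol`, the risk level `alpha`) are illustrative assumptions, not the paper's actual formulation; the Erlang-C formula itself is standard.

```python
import math
import numpy as np

def erlang_c(c, lam, mu):
    """Erlang-C: probability an arrival must wait in an M/M/c queue."""
    a = lam / mu                      # offered load
    if a >= c:                        # unstable regime: treat waiting as certain
        return 1.0
    rho = a / c
    idle_terms = sum(a**k / math.factorial(k) for k in range(c))
    wait_term = a**c / (math.factorial(c) * (1.0 - rho))
    return wait_term / (idle_terms + wait_term)

def min_staffing(lam_samples, mu, wait_tol=0.2, alpha=0.1, c_max=100):
    """Smallest c whose posterior probability of meeting the delay target
    P(wait) <= wait_tol is at least 1 - alpha (empirical chance constraint)."""
    for c in range(1, c_max + 1):
        feas_prob = np.mean([erlang_c(c, lam, mu) <= wait_tol
                             for lam in lam_samples])
        if feas_prob >= 1.0 - alpha:
            return c
    raise ValueError("no feasible staffing level up to c_max")

rng = np.random.default_rng(0)
# Hypothetical Gamma posterior for the arrival rate (posterior mean about 10)
lam_samples = rng.gamma(shape=50.0, scale=0.2, size=2000)
c_star = min_staffing(lam_samples, mu=1.0)
```

In practice the exact posterior would be replaced by samples from an approximate posterior (e.g. variational or ABC-based), which is precisely where the consistency question studied in the paper arises: does the staffing level and optimal value computed from the approximate posterior converge to that of the true-parameter problem as data accumulates?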