In this paper, we discuss adaptive inference based on variational Bayes. Although several studies have analyzed the contraction properties of variational posteriors, a general and computationally tractable variational Bayes method for adaptive inference has been lacking. To fill this gap, we propose a novel adaptive variational Bayes framework that can operate on a collection of models. The proposed framework first computes a variational posterior for each individual model separately and then combines them with certain weights to produce a variational posterior over the entire model. It turns out that this combined variational posterior is the closest member to the posterior over the entire model within the predefined family of approximating distributions. We show that adaptive variational Bayes attains optimal contraction rates adaptively under very general conditions. We also provide a methodology for maintaining the tractability and adaptive optimality of adaptive variational Bayes even in the presence of an enormous number of individual models, as arises for sparse models. We apply the general results to several examples, including deep learning and sparse factor models, and derive new adaptive inference results. In addition, we consider the use of quasi-likelihood in our framework. We formulate a theoretical condition on the quasi-likelihood to ensure adaptive contraction and discuss its specific applications to stochastic block models and nonparametric regression with sub-Gaussian errors.
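
The two-step construction described above (per-model variational fits, then a weighted combination) can be sketched as follows; the particular weight formula shown here is an illustrative assumption based on standard ELBO-based model weighting, not necessarily the exact construction developed in the paper. Suppose model $M_m$ carries prior weight $\alpha_m$, prior $\Pi_m$, and variational family $\mathcal{Q}_m$:

```latex
% Step 1: per-model variational posterior, maximizing the evidence lower bound.
\hat{Q}_m = \operatorname*{argmax}_{Q \in \mathcal{Q}_m} \mathrm{ELBO}_m(Q),
\qquad
\mathrm{ELBO}_m(Q) = \mathbb{E}_{Q}\bigl[\log p_m(X \mid \theta)\bigr]
  - \mathrm{KL}\bigl(Q \,\Vert\, \Pi_m\bigr).

% Step 2: combine into a mixture over models, with weights proportional to
% the prior model weight times the exponentiated ELBO (a surrogate for the
% marginal likelihood of each model):
\hat{Q} = \sum_{m} \hat{\gamma}_m \,\hat{Q}_m,
\qquad
\hat{\gamma}_m
  = \frac{\alpha_m \exp\{\mathrm{ELBO}_m(\hat{Q}_m)\}}
         {\sum_{m'} \alpha_{m'} \exp\{\mathrm{ELBO}_{m'}(\hat{Q}_{m'})\}}.
```

Since $\exp\{\mathrm{ELBO}_m(\hat{Q}_m)\}$ lower-bounds the marginal likelihood of $M_m$, weights of this form mimic posterior model probabilities, which is consistent with the abstract's claim that the combined distribution is the closest member to the full posterior within the predefined family of approximating distributions.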