This report describes, implements, and extends the work presented in "Tighter Variational Bounds are Not Necessarily Better" (Rainforth et al., 2018). That paper shows, both theoretically and empirically, that increasing the number of importance samples $K$ in an importance-weighted autoencoder (IWAE) (Burda et al., 2016) reduces the signal-to-noise ratio (SNR) of the gradient estimator for the inference network, which degrades learning of the inference network and thereby the overall training process. In other words, increasing $K$ decreases the standard deviation of the gradient, but it decreases the magnitude of the true gradient even faster, so the relative variance of the gradient updates increases: the SNR of the inference-network gradients scales as $O(1/\sqrt{K})$, while that of the generative-network gradients scales as $O(\sqrt{K})$. Extensive experiments were conducted to understand the role of $K$. They suggest that tighter variational bounds are beneficial for the generative network, whereas looser bounds are preferable for the inference network. Based on these insights, three methods are implemented and studied: the partially importance weighted autoencoder (PIWAE), the multiply importance weighted autoencoder (MIWAE), and the combination importance weighted autoencoder (CIWAE). Each of the three methods includes IWAE as a special case and uses the importance weights in a different way to deliver a higher SNR for the gradient estimator. The effectiveness of these algorithms is tested on the MNIST and Omniglot datasets. The results show that all three IWAE variants can match or exceed the performance of IWAE's generative network while learning approximate posterior distributions that are much closer to the true posterior than those of IWAE.
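
To make the differences between the objectives concrete, here is a minimal PyTorch-style sketch of the IWAE, MIWAE, and CIWAE bounds. It is an illustration under our own conventions, not the authors' reference implementation: it assumes the per-sample log importance weights $\log w_k = \log p_\theta(x, z_k) - \log q_\phi(z_k \mid x)$ have already been computed, and the function names, tensor shapes, and the way $K$ samples are split into $M$ groups are illustrative choices.

```python
import math
import torch

def iwae_bound(log_w):
    # log_w: (batch, K) tensor of log importance weights,
    # log_w[i, k] = log p_theta(x_i, z_ik) - log q_phi(z_ik | x_i).
    K = log_w.shape[1]
    # L_IWAE = log((1/K) * sum_k w_k), computed stably via logsumexp.
    return torch.logsumexp(log_w, dim=1) - math.log(K)

def miwae_bound(log_w, M):
    # MIWAE: average of M independent IWAE bounds, each built from K/M
    # of the samples. Smaller groups give a looser bound but a higher
    # gradient SNR for the inference network.
    batch, K = log_w.shape
    assert K % M == 0, "K must be divisible by M"
    groups = log_w.view(batch, M, K // M)                      # (batch, M, K/M)
    per_group = torch.logsumexp(groups, dim=2) - math.log(K // M)
    return per_group.mean(dim=1)

def ciwae_bound(log_w, beta):
    # CIWAE: convex combination beta * ELBO + (1 - beta) * IWAE.
    # beta = 1 recovers the standard VAE ELBO; beta = 0 recovers IWAE.
    elbo = log_w.mean(dim=1)  # the ELBO is the mean of the log weights
    return beta * elbo + (1.0 - beta) * iwae_bound(log_w)
```

PIWAE combines the first two: the generative parameters $\theta$ are trained with the IWAE bound over all $K$ samples, while the inference parameters $\phi$ are trained with the MIWAE bound, which in practice means maintaining two objectives (e.g., two backward passes or separate optimizers) per training step.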


