Generative Adversarial Networks (GANs) have achieved great success in data generation. However, their statistical properties are not fully understood. In this paper, we study the statistical behavior of the general $f$-divergence formulation of GANs, which includes the Kullback-Leibler divergence, a divergence closely related to the maximum likelihood principle. For a well-specified parametric generative model, we show that all $f$-divergence GANs with the same discriminator class are asymptotically equivalent under appropriate regularity conditions. Moreover, with well-chosen local classifiers, they are asymptotically equivalent to the maximum likelihood estimate. For misspecified generative models, GANs based on different $f$-divergences converge to different estimators and thus cannot be directly compared. However, for some commonly used $f$-divergences, the original $f$-GAN is shown to be suboptimal, in that smaller asymptotic variances can be achieved when discriminative training in the original $f$-GAN formulation is replaced by logistic regression. The resulting estimation method is called Adversarial Gradient Estimation (AGE). Empirical studies are provided to support the theory and to demonstrate the advantage of AGE over the original $f$-GAN under model misspecification.
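For reference, the general $f$-GAN objective referred to above can be written, in one standard variational form (here $p_{\mathrm{data}}$ denotes the data distribution, $p_\theta$ the parametric generative model, $\mathcal{T}$ the discriminator class, and $f^*$ the convex conjugate of $f$; this notation is introduced only for illustration), as
\[
\min_{\theta} \; \max_{T \in \mathcal{T}} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[T(x)\big] - \mathbb{E}_{x \sim p_{\theta}}\big[f^{*}\big(T(x)\big)\big],
\]
where the inner maximum lower-bounds the $f$-divergence $D_f(p_{\mathrm{data}} \,\|\, p_\theta)$ and is tight when $\mathcal{T}$ is sufficiently rich.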