\section{Conclusion}
In this paper, we have proposed a novel approach for training adversarial generative neural networks using an adaptive dropout rate. Our method addresses overfitting and improves the performance of deep neural networks across a range of applications. By making the dropout rate adaptive to the input data, our method outperforms existing dropout techniques in both accuracy and robustness.

We have conducted experiments on several datasets, including MNIST, CIFAR-10, and CelebA, and compared our method with state-of-the-art techniques. Our AGNN-ADR method consistently achieves better Inception Score (IS) and Fr\'echet Inception Distance (FID), converges faster, and reaches lower loss values during training. Qualitative results further show that our method generates samples with better visual quality and diversity than the baseline methods.

In summary, our research contributes to the ongoing effort to improve the performance and robustness of deep learning models, particularly adversarial generative neural networks. The proposed adaptive dropout rate offers a promising way to train more robust and accurate models. Future work may explore further refinements of the adaptive dropout rate, as well as its application to other network architectures and tasks. Additionally, combining our method with other regularization techniques and adversarial training methods may yield further gains in performance and robustness.
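To make the adaptive-dropout mechanism concrete, the listing below gives a minimal PyTorch sketch of an input-dependent dropout layer: a small gating network maps each input to per-unit dropout probabilities, so the regularization strength adapts to the data. The class name, the linear gating network, and the \texttt{max\_drop} bound are illustrative assumptions and do not reproduce the exact AGNN-ADR design.

\begin{verbatim}
import torch
import torch.nn as nn

class AdaptiveDropout(nn.Module):
    """Dropout whose per-unit rate is predicted from the input.

    Illustrative sketch only: a linear gating network stands in
    for whatever rate predictor the full method would use.
    """

    def __init__(self, num_features: int, max_drop: float = 0.5):
        super().__init__()
        self.gate = nn.Linear(num_features, num_features)
        self.max_drop = max_drop  # upper bound on the dropout rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # no stochastic masking at inference time
        # Per-unit dropout rate in [0, max_drop], conditioned on x.
        p = self.max_drop * torch.sigmoid(self.gate(x))
        # Sample a keep mask with keep probability 1 - p.
        mask = torch.bernoulli(1.0 - p)
        # Inverted-dropout scaling keeps the expected activation
        # unchanged, so no rescaling is needed at inference time.
        return x * mask / (1.0 - p).clamp(min=1e-6)
\end{verbatim}

In this sketch, a layer such as \texttt{AdaptiveDropout(128)} can be dropped into a generator or discriminator in place of a fixed-rate \texttt{nn.Dropout}; the inverted-dropout scaling is chosen so that training and inference activations match in expectation.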