{"paper_url": "https://huggingface.co/papers/1903.09940", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Robust Overfitting Does Matter: Test-Time Adversarial Purification With FGSM](https://huggingface.co/papers/2403.11448) (2024)\n* [Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume](https://huggingface.co/papers/2403.05100) (2024)\n* [Improving Adversarial Training using Vulnerability-Aware Perturbation Budget](https://huggingface.co/papers/2403.04070) (2024)\n* [Tighter Bounds on the Information Bottleneck with Application to Deep Learning](https://huggingface.co/papers/2402.07639) (2024)\n* [Robust optimization for adversarial learning with finite sample complexity guarantees](https://huggingface.co/papers/2403.15207) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"} |