\section{Introduction}
The rapid development of artificial intelligence and machine learning has led to significant advances in many domains, including reinforcement learning (RL) and multi-agent systems. A particularly notable application of RL is Atari games, where deep learning models learn control policies directly from high-dimensional sensory input \citep{mnih2013playing}. However, the centralized nature of traditional RL algorithms poses challenges for scalability and privacy, motivating the exploration of decentralized RL approaches \citep{liu2022federated}.

In this paper, we address the problem of playing Atari games with decentralized reinforcement learning, aiming for a scalable, privacy-preserving solution that maintains high performance. Our proposed solution builds on recent advances in decentralized RL, which have shown promising results in scenarios such as collision avoidance \citep{thumiger2022a}, cooperative multi-agent reinforcement learning \citep{su2022ma2ql}, and edge-computing-empowered Internet of Things (IoT) networks \citep{lei2022adaptive}. While these works provide valuable insights, our approach targets the challenges specific to Atari games, namely high-dimensional sensory input and complex decision-making processes. By leveraging the strengths of decentralized RL algorithms, we aim to surpass centralized approaches in scalability and privacy while maintaining competitive performance.

This paper makes three contributions to decentralized reinforcement learning. First, we present a new decentralized RL algorithm tailored to playing Atari games, addressing the challenges of high-dimensional sensory input and complex decision-making. Second, we provide a comprehensive analysis of the algorithm's performance, comparing it with state-of-the-art centralized and decentralized RL approaches on a diverse set of Atari games. Third, we examine the trade-offs between scalability, privacy, and performance in decentralized RL, highlighting the benefits and limitations of our approach.

To contextualize our work, we briefly review key related work in decentralized RL. The Safe Dec-PG algorithm of \citet{lu2021decentralized} is the first decentralized policy gradient method that accounts for coupled safety constraints in multi-agent reinforcement learning. The decentralized collision-avoidance approach of \citet{thumiger2022a} employs an architecture built around long short-term memory (LSTM) cells and a gradient-based reward function. While these works demonstrate the potential of decentralized RL, our approach targets the specific challenges of playing Atari games, offering a novel solution in this domain.

In summary, this paper presents a decentralized RL algorithm for playing Atari games that aims for high performance while preserving scalability and privacy. Building on recent advances in decentralized RL, we contribute to the growing body of research in this area and shed light on the trade-offs between scalability, privacy, and performance in decentralized reinforcement learning.
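
To make the decentralized setting concrete, the sketch below illustrates one communication pattern commonly used in decentralized learning: each agent performs local updates from its own experience and periodically averages parameters with its neighbours over a communication graph, so raw observations never leave the agent. This is an illustrative sketch only, not the algorithm proposed in this paper; the ring topology, the mixing weights, and the toy quadratic objective standing in for each agent's RL loss are assumptions introduced purely for exposition.

\begin{verbatim}
# Illustrative sketch only (not the algorithm proposed in this paper):
# decentralized learning via local updates plus gossip averaging.
import numpy as np

rng = np.random.default_rng(0)

NUM_AGENTS = 4
DIM = 8        # size of each agent's (toy) parameter vector
STEPS = 200
LR = 0.05

# Hypothetical per-agent objectives: agent i minimizes ||theta - target_i||^2,
# standing in for a local RL loss computed from private experience.
targets = rng.normal(size=(NUM_AGENTS, DIM))
params = rng.normal(size=(NUM_AGENTS, DIM))

# Doubly stochastic mixing matrix for a ring topology: each agent
# exchanges parameters only with its two neighbours, no central server.
W = np.zeros((NUM_AGENTS, NUM_AGENTS))
for i in range(NUM_AGENTS):
    W[i, i] = 0.5
    W[i, (i - 1) % NUM_AGENTS] = 0.25
    W[i, (i + 1) % NUM_AGENTS] = 0.25

for step in range(STEPS):
    # 1) Local update: each agent takes a gradient step on its own loss.
    grads = 2.0 * (params - targets)
    params = params - LR * grads
    # 2) Gossip step: average parameters with neighbours only.
    params = W @ params

# Agents approach the minimizer of the average loss (mean of the targets)
# without ever exchanging raw data, only model parameters.
print("distance to average-loss minimizer:",
      np.abs(params - targets.mean(axis=0)).max())
\end{verbatim}

In an Atari setting, the toy quadratic loss would be replaced by each agent's deep RL loss computed from its own gameplay, but the neighbour-only parameter exchange is what underlies the scalability and privacy benefits discussed above.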