# On The Near-Optimality Of Local Policies In Large Cooperative Multi-Agent Reinforcement Learning

Washim Uddin Mondal *wmondal@purdue.edu*
School of IE and CE, Purdue University

Vaneet Aggarwal *vaneet@purdue.edu*
School of IE and ECE, Purdue University

Satish V. Ukkusuri *sukkusur@purdue.edu*
Lyles School of Civil Engineering, Purdue University

Reviewed on OpenReview: *https://openreview.net/forum?id=t5HkgbxZp1*

## Abstract

We show that in a cooperative $N$-agent network, one can design locally executable policies for the agents such that the resulting discounted sum of average rewards (value) well approximates the optimal value computed over all (including non-local) policies. Specifically, we prove that, if $|\mathcal{X}|$ and $|\mathcal{U}|$ denote the sizes of the state and action spaces of individual agents, then for a sufficiently small discount factor, the approximation error is $\mathcal{O}(e)$ where $e \triangleq \frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]$. Moreover, in a special case where the reward and state transition functions are independent of the action distribution of the population, the error improves to $\mathcal{O}(e)$ where $e \triangleq \frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}$. Finally, we also devise an algorithm to explicitly construct a local policy. With the help of our approximation results, we further establish that the constructed local policy is within $\mathcal{O}(\max\{e, \epsilon\})$ distance of the optimal value, and the sample complexity to achieve such a local policy is $\mathcal{O}(\epsilon^{-3})$, for any $\epsilon > 0$.

## 1 Introduction

A multi-agent system (MAS) is a powerful abstraction that models many engineering and social science problems. For example, in a traffic control network, each controller at a signalised intersection can be depicted as an agent that decides the duration of red and green times at the adjacent lanes based on traffic flows (Chen et al., 2020). As flows at different neighboring intersections are inter-dependent, the decisions taken by one intersection have significant ramifications over the whole network. How one can devise a strategy that steers this tangled mesh in a desired direction is one of the important questions in the multi-agent learning literature. Multi-agent reinforcement learning (MARL) has emerged as one of the popular solutions to this question. *Cooperative* MARL, which is the main focus of this paper, constructs a *policy* (decision rule) for each agent that maximizes the aggregate cumulative *reward* of all the agents by judiciously executing exploratory and exploitative trials. Unfortunately, the size of the joint *state-space* of the network increases exponentially with the number of agents. This severely restricts the efficacy of MARL in the large-population regime.

There have been many attempts in the literature to circumvent this *curse of dimensionality*. For example, one of the approaches is to restrict the policies of each agent to be *local*. This essentially means that each agent ought to take decisions solely based on its locally observable state. In contrast, the execution of a *global* policy requires each agent to be aware of network-wide state information. Based on how these local policies are learnt, the existing MARL algorithms can be segregated into two major categories. In independent Q-learning (IQL) (Tan, 1993), the local policies of each agent are trained independently. On the other hand, in the Centralised Training with Decentralised Execution (CTDE) paradigm, the training of local policies is done in a centralised manner (Oliehoek et al., 2008; Kraemer and Banerjee, 2016).
Despite the empirical success of these methods in multiple applications (Han et al., 2021; Feriani and Hossain, 2021), no theoretical convergence guarantees have been obtained for either of them. Recently, mean-field control (MFC) (Ruthotto et al., 2020) has been gaining popularity as another solution paradigm for large cooperative multi-agent problems with theoretical optimality guarantees. The idea of MFC hinges on the assertion that in an infinite pool of homogeneous agents, the statistics of the behaviour of the whole population can be accurately inferred by observing only one representative agent. However, the optimal policy given by MFC is, in general, non-local. The agents, in addition to being aware of their local states, must also know the distribution of states in the whole population to execute these policies.

To summarize, on one hand, we have locally executable policies that are often empirically sound but have no theoretical guarantees. On the other hand, we have MFC-based policies with optimality guarantees, but those require global information to be executed. Local executability is desirable for many practical application scenarios where collecting global information is either impossible or costly. For example, in a network where the state of the environment changes rapidly (e.g., vehicle-to-vehicle (V2V) type communications (Chen et al., 2017)), the latency to collect network-wide information might be larger than the time it takes for the environment to change. Consequently, by the time the global information is gathered, the states are likely to have transitioned to new values, rendering the collected information obsolete. The natural question that arises in this context is whether it is possible to come up with locally executable policies with optimality guarantees. Answering this question may provide a theoretical basis for many of the empirical studies in the literature (Chu et al., 2019) that primarily use local policies as the rule book for strategic choice of actions. But, in general, how close are these policies to the optimal one? In this article, we provide an answer to this question.

## 1.1 Our Contribution

We consider a network of $N$ interacting agents with individual state and action spaces of size $|\mathcal{X}|$ and $|\mathcal{U}|$ respectively. We demonstrate that, given an initial state distribution, $\boldsymbol{\mu}_0$, of the whole population, it is possible to obtain a non-stationary *locally executable* policy-sequence $\tilde{\boldsymbol{\pi}} = \{\tilde{\pi}_t\}_{t\in\{0,1,\cdots\}}$ such that the average time-discounted sum of rewards (value) generated by $\tilde{\boldsymbol{\pi}}$ closely approximates the value generated by the optimal policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MARL}}$. In fact, we prove that the approximation error is $\mathcal{O}(e)$ where $e \triangleq \frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]$ (Theorem 1). We would like to clarify that the optimal policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MARL}}$, is, in general, not locally executable. In a special case where the reward and state transition functions are independent of the action distribution of the population, we show that the error can be improved to $\mathcal{O}(e)$ where $e \triangleq \frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}$ (Theorem 2). Our suggested local policy-sequence, $\tilde{\boldsymbol{\pi}}$, is built on top of the optimal mean-field policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MF}}$, that maximizes the infinite-agent value function. It is worth mentioning that $\boldsymbol{\pi}^*_{\mathrm{MF}}$, in general, is not local: agents require the state distribution of the $N$-agent network at each time step in order to execute it.
Our main contribution is to show that if each agent uses the infinite-agent state distributions (which can be locally and deterministically computed if the initial distribution, $\boldsymbol{\mu}_0$, is known) as a proxy for the $N$-agent distributions, then the resulting time-discounted sum of rewards is not too far off from the optimal value generated by $\boldsymbol{\pi}^*_{\mathrm{MARL}}$. Finally, we devise a Natural Policy Gradient (NPG) based procedure (Algorithm 1) that approximately computes the optimal mean-field policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MF}}$. Subsequently, in Algorithm 2, we exhibit how the desired local policy can be extracted from $\boldsymbol{\pi}^*_{\mathrm{MF}}$. Applying the result from (Liu et al., 2020), we prove that the local policy generated from Algorithm 2 yields a value that is at most $\mathcal{O}(\max\{\epsilon, e\})$ distance away from the optimal value, and it requires at most $\mathcal{O}(\epsilon^{-3})$ samples to arrive at the intended local policy for any $\epsilon > 0$.

## 1.2 Related Works

**Single Agent RL:** Tabular algorithms such as Q-learning (Watkins and Dayan, 1992) and SARSA (Rummery and Niranjan, 1994) were the first RL algorithms adopted in the single-agent learning literature. Due to their scalability issues, however, these algorithms could not be deployed for problems with large state spaces. Recently, neural network based Q-iteration (Mnih et al., 2015) and policy-iteration (Mnih et al., 2016) algorithms have gained popularity as a substitute for tabular procedures. However, they are still inadequate for large-scale multi-agent problems due to the exponential blow-up of the joint state space.

**Multi-Agent Local Policy:** As discussed before, one way to introduce scalability into multi-agent learning is to restrict the policies to be local. The easiest and perhaps the most popular way to train the local policies is via the Independent Q-learning (IQL) procedure, which has been widely adopted in many application scenarios. For example, many adaptive traffic signal control algorithms apply some form of IQL (Wei et al., 2019; Chu et al., 2019). Unlike IQL, centralised training with decentralised execution (CTDE) based algorithms train local policies in a centralised manner. One of the simplest CTDE based training procedures is obtained via the value decomposition network (VDN) (Sunehag et al., 2018), where the goal is to maximize the sum of local Q-functions of the agents. Later, QMIX (Rashid et al., 2018) was introduced, where the target is to maximize a weighted sum of local Q-functions. Other variants of CTDE based training procedures include WQMIX (Rashid et al., 2020), QTRAN (Son et al., 2019), etc. As clarified before, none of these procedures provide any theoretical guarantee. Very recently, there have been some efforts to theoretically characterize the performance of localised policies. However, these studies either assume the reward functions themselves to be local, thereby facilitating no agent interaction in the reward model (Qu et al., 2020), or allow the policies to take state information from neighbouring agents as an input, thereby expanding the definition of local policies (Lin et al., 2020; Koppel et al., 2021).

**Mean-Field Control:** An alternate way to bring scalability into multi-agent learning is to apply the concept of mean-field control (MFC). Empirically, MFC based algorithms have been applied in many practical scenarios ranging from congestion control (Wang et al., 2020) and ride-sharing (Al-Abbasi et al., 2019) to epidemic management (Watkins et al., 2016).
Theoretically, it has been recently shown that both in homogeneous (Gu et al., 2021) and heterogeneous (Mondal et al., 2022a) populations of agents, and in populations with non-uniform interactions (Mondal et al., 2022b), MFC closely approximates the optimal policy. Moreover, various model-free (Angiuli et al., 2022) and model-based (Pasztor et al., 2021) algorithms have been proposed to solve MFC problems. Alongside standard MARL problems, the concept of MFC based solutions has also been applied to other variants of reinforcement learning (RL) problems. For example, (Gast and Gaujal, 2011) considers a system comprising a single controller and $N$ bodies. The controller is solely responsible for taking actions, and the bodies are associated with states that change as functions of the chosen action. At each time instant, the controller receives a reward that is solely a function of the current states. The objective is to strategize the choice of actions as a function of states such that the cumulative reward of the controller is maximized. It is proven that the above problem can be approximated via a mean-field process.

## 2 Model For Cooperative MARL

We consider a collection of $N$ interacting agents. The state of the $i$-th agent, $i \in \{1, \cdots, N\}$, at time $t \in \{0, 1, \cdots, \infty\}$ is symbolized as $x_t^i \in \mathcal{X}$, where $\mathcal{X}$ denotes the collection of all possible states (state space). Each agent also chooses an action at each time instant from a pool of possible actions, $\mathcal{U}$ (action space). The action of the $i$-th agent at time $t$ is indicated as $u_t^i$. The joint state and action of all the agents at time $t$ are denoted as $\mathbf{x}_t^N \triangleq \{x_t^i\}_{i=1}^N$ and $\mathbf{u}_t^N \triangleq \{u_t^i\}_{i=1}^N$ respectively. The empirical distributions of the joint states, $\boldsymbol{\mu}_t^N$, and actions, $\boldsymbol{\nu}_t^N$, at time $t$ are defined as follows.

$$\boldsymbol{\mu}_{t}^{N}(x)=\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t}^{i}=x),\ \forall x\in\mathcal{X}\tag{1}$$

$$\boldsymbol{\nu}_{t}^{N}(u)=\frac{1}{N}\sum_{i=1}^{N}\delta(u_{t}^{i}=u),\ \forall u\in\mathcal{U}\tag{2}$$

where $\delta(\cdot)$ denotes the indicator function. At time $t$, the $i$-th agent receives a reward $r_i(\mathbf{x}_t^N, \mathbf{u}_t^N)$ and its state changes according to the following state-transition law: $x_{t+1}^i \sim P_i(\mathbf{x}_t^N, \mathbf{u}_t^N)$. Note that the reward and the transition function not only depend on the state and action of the associated agent but also on the states and actions of other agents in the population. This dependence makes the MARL problem difficult to handle. In order to simplify the problem, we assume the reward and transition functions to be of the following form for some $r : \mathcal{X} \times \mathcal{U} \times \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{U}) \rightarrow \mathbb{R}$ and $P : \mathcal{X} \times \mathcal{U} \times \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{U}) \rightarrow \mathcal{P}(\mathcal{X})$, where $\mathcal{P}(\cdot)$ is the collection of all probability measures defined over its argument set,

$$r_{i}(\mathbf{x}_{t}^{N},\mathbf{u}_{t}^{N})=r(x_{t}^{i},u_{t}^{i},\boldsymbol{\mu}_{t}^{N},\boldsymbol{\nu}_{t}^{N})\tag{3}$$

$$P_{i}(\mathbf{x}_{t}^{N},\mathbf{u}_{t}^{N})=P(x_{t}^{i},u_{t}^{i},\boldsymbol{\mu}_{t}^{N},\boldsymbol{\nu}_{t}^{N})\tag{4}$$

$\forall i \in \{1, \cdots, N\}$, $\forall \mathbf{x}_t^N \in \mathcal{X}^N$, and $\forall \mathbf{u}_t^N \in \mathcal{U}^N$. Two points are worth mentioning. First, (3) and (4) suggest that the reward and transition functions of each agent take the state and action of that agent, along with the empirical state and action distributions of the entire population, as their arguments. As a result, the agents can influence each other only via the distributions $\boldsymbol{\mu}_t^N, \boldsymbol{\nu}_t^N$ which are defined by (1), (2) respectively. Second, the functions $r, P$ are taken to be the same for every agent. This makes the population homogeneous. Such assumptions are common in the mean-field literature and hold when the agents are identical and exchangeable. A minimal simulation sketch of this model is given below.
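To make the interface of (1)-(4) concrete, the following Python sketch simulates one step of the $N$-agent dynamics. The particular `reward_fn` and `transition_fn` are illustrative placeholders (they are not taken from the paper); only the signatures $r(x, u, \mu, \nu)$ and $P(x, u, \mu, \nu)$ come from the model.

```python
import numpy as np

def empirical_distributions(states, actions, num_states, num_actions):
    # mu_t^N and nu_t^N of Eqs. (1)-(2): fraction of agents in each state / taking each action
    mu = np.bincount(states, minlength=num_states) / len(states)
    nu = np.bincount(actions, minlength=num_actions) / len(actions)
    return mu, nu

def n_agent_step(states, actions, reward_fn, transition_fn, num_states, num_actions, rng):
    # One step of the N-agent dynamics under the shared r and P of Eqs. (3)-(4).
    mu, nu = empirical_distributions(states, actions, num_states, num_actions)
    rewards = np.array([reward_fn(x, u, mu, nu) for x, u in zip(states, actions)])
    next_states = np.array([
        rng.choice(num_states, p=transition_fn(x, u, mu, nu))  # x_{t+1}^i ~ P(x_t^i, u_t^i, mu_t^N, nu_t^N)
        for x, u in zip(states, actions)
    ])
    return rewards, next_states

# Illustrative placeholder model (not the paper's example):
def reward_fn(x, u, mu, nu):
    return float(x) - 0.5 * float(u) - mu.mean()

def transition_fn(x, u, mu, nu):
    return np.ones(3) / 3.0  # uniform next-state distribution over |X| = 3 states

rng = np.random.default_rng(0)
states = rng.integers(0, 3, size=100)    # N = 100 agents, |X| = 3
actions = rng.integers(0, 2, size=100)   # |U| = 2
rewards, states = n_agent_step(states, actions, reward_fn, transition_fn, 3, 2, rng)
```

Because $r$ and $P$ see the rest of the population only through $\boldsymbol{\mu}_t^N$ and $\boldsymbol{\nu}_t^N$, the per-agent loop never touches a joint-state object of size $|\mathcal{X}|^N$.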
We define a policy to be a (probabilistic) rule that dictates how a certain agent must choose its action for a given joint state of the population. Mathematically, a policy $\pi$ is a mapping of the following form: $\pi : \mathcal{X}^N \rightarrow \mathcal{P}(\mathcal{U})$. Let $\pi_t^i$ denote the policy of the $i$-th agent at time $t$. Due to (3) and (4), we can, without loss of generality, write the following equation for some $\pi : \mathcal{X} \times \mathcal{P}(\mathcal{X}) \rightarrow \mathcal{P}(\mathcal{U})$.

$$\pi_{t}^{i}(\mathbf{x}_{t}^{N})=\pi(x_{t}^{i},\boldsymbol{\mu}_{t}^{N})\tag{5}$$

In other words, one can equivalently define a policy to be a rule that dictates how an agent should choose its action given its state and the empirical state distribution of the entire population. Observe that, due to agent homogeneity, the policy function, $\pi$, of each agent is the same. With this revised definition, let the policy of any agent at time $t$ be denoted as $\pi_t$, and the sequence of these policies be indicated as $\boldsymbol{\pi} \triangleq \{\pi_t\}_{t\in\{0,1,\cdots\}}$. For a given joint initial state, $\mathbf{x}_0^N$, the population-average value of the policy-sequence $\boldsymbol{\pi}$ is defined as follows.

$$v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi})\triangleq\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{i}(\mathbf{x}_{t}^{N},\mathbf{u}_{t}^{N})\right]=\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(x_{t}^{i},u_{t}^{i},\boldsymbol{\mu}_{t}^{N},\boldsymbol{\nu}_{t}^{N})\right]\tag{6}$$

where the expectation is computed over all state-action trajectories induced by the policy-sequence $\boldsymbol{\pi}$, and $\gamma \in [0, 1)$ is the discount factor. The target of MARL is to maximize $v_{\mathrm{MARL}}(\mathbf{x}_0^N, \cdot)$ over all policy-sequences $\boldsymbol{\pi}$. Let the optimal policy-sequence be $\boldsymbol{\pi}^*_{\mathrm{MARL}} \triangleq \{\pi^*_{t,\mathrm{MARL}}\}_{t\in\{0,1,\cdots\}}$. Note that, in order to execute the policy $\pi^*_{t,\mathrm{MARL}}$, in general, the agents must have knowledge of their own states at time $t$ as well as the empirical state distribution of the entire population at the same instant. As stated previously, the collection of population-wide state information is a costly process in many practical scenarios. In the subsequent sections of this article, our target, therefore, is to identify (a sequence of) local policies that the agents can execute solely with the knowledge of their own states. Additionally, the policy must be such that its value (expected time-discounted cumulative reward) is close to that generated by the optimal policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MARL}}$.

## 3 The Mean-Field Control (MFC) Framework

Mean-Field Control (MFC) considers a scenario where the agent population size is infinite. Due to the homogeneity of the agents, in such a framework, it is sufficient to track only one representative agent. We denote the state and action of the representative agent at time $t$ as $x_t \in \mathcal{X}$ and $u_t \in \mathcal{U}$, respectively, while the state and action distributions of the infinite population at the same instant are indicated as $\boldsymbol{\mu}_t^\infty \in \mathcal{P}(\mathcal{X})$ and $\boldsymbol{\nu}_t^\infty \in \mathcal{P}(\mathcal{U})$, respectively. For a given sequence of policies $\boldsymbol{\pi} \triangleq \{\pi_t\}_{t\in\{0,1,\cdots\}}$ and the state distribution $\boldsymbol{\mu}_t^\infty$ at time $t$, the action distribution $\boldsymbol{\nu}_t^\infty$ at the same instant can be computed as follows.

$$\boldsymbol{\nu}_{t}^{\infty}=\nu^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t})\triangleq\sum_{x\in\mathcal{X}}\pi_{t}(x,\boldsymbol{\mu}_{t}^{\infty})\boldsymbol{\mu}_{t}^{\infty}(x)\tag{7}$$

In a similar fashion, the state distribution at time $t + 1$ can be evaluated as shown below.
$$\boldsymbol{\mu}_{t+1}^{\infty}=P^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t})\triangleq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}P(x,u,\boldsymbol{\mu}_{t}^{\infty},\nu^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t}))\pi_{t}(x,\boldsymbol{\mu}_{t}^{\infty})(u)\boldsymbol{\mu}_{t}^{\infty}(x)\tag{8}$$

Finally, the average reward at time $t$ can be expressed as follows.

$$r^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t})\triangleq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}r(x,u,\boldsymbol{\mu}_{t}^{\infty},\nu^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t}))\pi_{t}(x,\boldsymbol{\mu}_{t}^{\infty})(u)\boldsymbol{\mu}_{t}^{\infty}(x)\tag{9}$$

For an initial state distribution, $\boldsymbol{\mu}_0$, the value of a policy-sequence $\boldsymbol{\pi} = \{\pi_t\}_{t\in\{0,1,\cdots\}}$ is computed as shown below.

$$v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi})=\sum_{t=0}^{\infty}\gamma^{t}r^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t})\tag{10}$$

The goal of Mean-Field Control is to maximize $v_{\mathrm{MF}}(\boldsymbol{\mu}_0, \cdot)$ over all policy-sequences $\boldsymbol{\pi}$. Let the optimal policy-sequence be denoted as $\boldsymbol{\pi}^*_{\mathrm{MF}}$. (Gu et al., 2021) recently established that if the agents execute $\boldsymbol{\pi}^*_{\mathrm{MF}}$ in the $N$-agent MARL system, then the $N$-agent value function generated by this policy-sequence well approximates the value function generated by $\boldsymbol{\pi}^*_{\mathrm{MARL}}$ when $N$ is large. However, to execute the sequence $\boldsymbol{\pi}^*_{\mathrm{MF}}$ in an $N$-agent system, each agent must be aware of the empirical state distributions, $\{\boldsymbol{\mu}_t^N\}_{t\in\{0,1,\cdots\}}$, along with its own states at each time instant. In other words, when the $i$-th agent in an $N$-agent system executes the policy-sequence $\boldsymbol{\pi}^*_{\mathrm{MF}} \triangleq \{\pi^*_{t,\mathrm{MF}}\}_{t\in\{0,1,\cdots\}}$, it chooses an action at time $t$ according to the following distribution: $u_t^i \sim \pi^*_{t,\mathrm{MF}}(x_t^i, \boldsymbol{\mu}_t^N)$. As stated in Section 2, the computation of the empirical state distribution of the whole population at each instant is a costly procedure. In the following subsection, we discuss how we can design a near-optimal policy that does not require $\{\boldsymbol{\mu}_t^N\}_{t\in\{1,2,\cdots\}}$ for its execution.

**Designing Local Policies:** Note from Eq. (8) that the evolution of the infinite-population state distribution, $\boldsymbol{\mu}_t^\infty$, is deterministic. Hence, if the initial distribution $\boldsymbol{\mu}_0$ is disseminated among the agents, then each agent can locally compute the distributions $\{\boldsymbol{\mu}_t^\infty\}$ for $t > 0$. Consider the following policy-sequence, denoted as $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}} \triangleq \{\tilde{\pi}^*_{t,\mathrm{MF}}\}_{t\in\{0,1,\cdots\}}$.

$$\tilde{\pi}^{*}_{t,\mathrm{MF}}(x,\boldsymbol{\mu})\triangleq\pi^{*}_{t,\mathrm{MF}}(x,\boldsymbol{\mu}^{\infty}_{t}),\ \forall x\in\mathcal{X},\forall\boldsymbol{\mu}\in\mathcal{P}(\mathcal{X}),\forall t\in\{0,1,\cdots\}\tag{11}$$

The sequence $\boldsymbol{\pi}^*_{\mathrm{MF}} \triangleq \{\pi^*_{t,\mathrm{MF}}\}_{t\in\{0,1,\cdots\}}$, as mentioned before, is the optimal policy-sequence that maximizes the mean-field value function, $v_{\mathrm{MF}}(\boldsymbol{\mu}_0, \cdot)$, for a given initial state distribution, $\boldsymbol{\mu}_0$. Note that, in order to execute the policy-sequence $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ in an $N$-agent system, each agent must be aware of its own state as well as the state distribution of an infinite-agent system at each time instant, both of which can be obtained locally provided it is aware of the initial distribution, $\boldsymbol{\mu}_0$. In other words, the policy-sequence $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ completely disregards the empirical distributions, $\{\boldsymbol{\mu}_t^N\}_{t\in\{1,2,\cdots\}}$ (it instead depends on $\{\boldsymbol{\mu}_t^\infty\}_{t\in\{1,2,\cdots\}}$), and is therefore locally executable for $t > 0$. A minimal sketch of this local mean-field propagation is given below.
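The following Python sketch illustrates the deterministic mean-field updates (7)-(9) and the local execution idea behind (11): the agent propagates $\boldsymbol{\mu}_t^\infty$ by itself and never observes $\boldsymbol{\mu}_t^N$. The model and the uniform policy below are placeholders (in particular, the policy is not the optimizer $\boldsymbol{\pi}^*_{\mathrm{MF}}$); the sketch only shows the mechanics.

```python
import numpy as np

# placeholder model with the interface of Section 2 (assumed, for illustration only)
transition_fn = lambda x, u, mu, nu: np.ones(len(mu)) / len(mu)
reward_fn = lambda x, u, mu, nu: float(x) - 0.5 * float(u)

def nu_mf(mu, pi):
    # Eq. (7): action distribution induced by policy pi at state distribution mu;
    # pi(x, mu) returns a probability vector over the action space U.
    return sum(mu[x] * pi(x, mu) for x in range(len(mu)))

def p_mf(mu, pi):
    # Eq. (8): deterministic update of the infinite-population state distribution.
    nu = nu_mf(mu, pi)
    mu_next = np.zeros_like(mu)
    for x in range(len(mu)):
        pol = pi(x, mu)
        for u in range(len(pol)):
            mu_next += mu[x] * pol[u] * transition_fn(x, u, mu, nu)
    return mu_next

def r_mf(mu, pi):
    # Eq. (9): population-average reward under pi at distribution mu.
    nu = nu_mf(mu, pi)
    return sum(mu[x] * pi(x, mu)[u] * reward_fn(x, u, mu, nu)
               for x in range(len(mu)) for u in range(len(pi(x, mu))))

def local_execution(x0, mu0, policies, rng):
    # Local execution in the spirit of (11): the agent feeds the deterministically
    # propagated mu_t^infinity (not the empirical mu_t^N) to the mean-field policy.
    x, mu_inf = x0, mu0.copy()
    for pi_t in policies:
        probs = pi_t(x, mu_inf)
        u = rng.choice(len(probs), p=probs)
        # in the real N-agent system, x_{t+1} ~ P(x, u, mu_t^N, nu_t^N);
        # the placeholder transition is reused here purely for illustration
        x = rng.choice(len(mu_inf), p=transition_fn(x, u, mu_inf, nu_mf(mu_inf, pi_t)))
        mu_inf = p_mf(mu_inf, pi_t)   # local, deterministic update of mu_t^infinity
    return x, mu_inf

# usage with a uniform placeholder policy over |U| = 2 actions and |X| = 3 states
uniform_pi = lambda x, mu: np.ones(2) / 2.0
mu0 = np.ones(3) / 3.0
avg_r = r_mf(mu0, uniform_pi)
x_T, mu_T = local_execution(0, mu0, [uniform_pi] * 10, np.random.default_rng(0))
```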
The above discussion well establishes $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ as a locally executable policy-sequence. However, it is not clear at this point how well $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ performs in comparison to the optimal policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MARL}}$, in an $N$-agent system. Must one embrace a significant performance deterioration to enforce locality? In the next section, we provide an answer to this crucial question.

## 4 Main Result

Before stating the main result, we shall state a few assumptions that are needed to establish it. Our first assumption is on the reward function, $r$, and the state-transition function, $P$, defined in (3) and (4), respectively.

**Assumption 1.** *The reward function, $r$, is bounded and Lipschitz continuous with respect to the mean-field arguments. Mathematically, there exist $M_R, L_R > 0$ such that the following holds:*

(a) $|r(x, u, \boldsymbol{\mu}, \boldsymbol{\nu})| \leq M_R$

(b) $|r(x, u, \boldsymbol{\mu}_1, \boldsymbol{\nu}_1) - r(x, u, \boldsymbol{\mu}_2, \boldsymbol{\nu}_2)| \leq L_R\left[|\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2|_1 + |\boldsymbol{\nu}_1 - \boldsymbol{\nu}_2|_1\right]$

*$\forall x \in \mathcal{X}$, $\forall u \in \mathcal{U}$, $\forall \boldsymbol{\mu}_1, \boldsymbol{\mu}_2 \in \mathcal{P}(\mathcal{X})$, and $\forall \boldsymbol{\nu}_1, \boldsymbol{\nu}_2 \in \mathcal{P}(\mathcal{U})$. The notation $|\cdot|_1$ denotes the $L_1$-norm.*

**Assumption 2.** *The state-transition function, $P$, is Lipschitz continuous with respect to the mean-field arguments. Mathematically, there exists $L_P > 0$ such that the following holds:*

(a) $|P(x, u, \boldsymbol{\mu}_1, \boldsymbol{\nu}_1) - P(x, u, \boldsymbol{\mu}_2, \boldsymbol{\nu}_2)|_1 \leq L_P\left[|\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2|_1 + |\boldsymbol{\nu}_1 - \boldsymbol{\nu}_2|_1\right]$

*$\forall x \in \mathcal{X}$, $\forall u \in \mathcal{U}$, $\forall \boldsymbol{\mu}_1, \boldsymbol{\mu}_2 \in \mathcal{P}(\mathcal{X})$, and $\forall \boldsymbol{\nu}_1, \boldsymbol{\nu}_2 \in \mathcal{P}(\mathcal{U})$.*

Assumptions 1 and 2 ensure the boundedness and the Lipschitz continuity of the reward and state-transition functions. The transition function is bounded by its very definition (it outputs a probability distribution), so boundedness is not explicitly mentioned in Assumption 2. These assumptions are commonly taken in the mean-field literature (Mondal et al., 2022a; Gu et al., 2021). Our next assumption is on the class of policy functions.

**Assumption 3.** *The class of admissible policy functions, $\Pi$, is such that every $\pi \in \Pi$ satisfies the following for some $L_Q > 0$:*

$$|\pi(x, \boldsymbol{\mu}_1) - \pi(x, \boldsymbol{\mu}_2)|_1 \leq L_Q|\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2|_1$$

*$\forall x \in \mathcal{X}$, $\forall \boldsymbol{\mu}_1, \boldsymbol{\mu}_2 \in \mathcal{P}(\mathcal{X})$.*

Assumption 3 states that every policy is presumed to be Lipschitz continuous w.r.t. its mean-field state-distribution argument. This assumption is also common in the literature (Carmona et al., 2018; Pasztor et al., 2021) and is typically satisfied by neural network based policies with bounded weights. Corresponding to the admissible policy class $\Pi$, we define $\Pi^\infty \triangleq \Pi \times \Pi \times \cdots$ as the set of all admissible policy-sequences. We now state our main result. The proof of Theorem 1 is relegated to Appendix A.

**Theorem 1.** *Let $\mathbf{x}_0^N$ be the initial states in an $N$-agent system and $\boldsymbol{\mu}_0$ be its associated empirical distribution. Assume $\boldsymbol{\pi}^*_{\mathrm{MF}} \triangleq \{\pi^*_{t,\mathrm{MF}}\}_{t\in\{0,1,\cdots\}}$ and $\boldsymbol{\pi}^*_{\mathrm{MARL}} \triangleq \{\pi^*_{t,\mathrm{MARL}}\}_{t\in\{0,1,\cdots\}}$ to be policy-sequences that maximize $v_{\mathrm{MF}}(\boldsymbol{\mu}_0, \cdot)$ and $v_{\mathrm{MARL}}(\mathbf{x}_0^N, \cdot)$ respectively within the class $\Pi^\infty$. Let $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ be the localized policy-sequence corresponding to $\boldsymbol{\pi}^*_{\mathrm{MF}}$ defined by (11). If Assumptions 1 and 2 hold, then the following is true whenever $\gamma S_P < 1$.*

$$\begin{aligned}|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})-v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\tilde{\boldsymbol{\pi}}_{\mathrm{MF}}^{*})|&\leq\left(\frac{2}{1-\gamma}\right)\left[\frac{M_{R}}{\sqrt{N}}+\frac{L_{R}}{\sqrt{N}}\sqrt{|\mathcal{U}|}\right]\\&\quad+\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\left(\frac{2S_{R}C_{P}}{S_{P}-1}\right)\left[\frac{1}{1-\gamma S_{P}}-\frac{1}{1-\gamma}\right]\end{aligned}\tag{12}$$

*The parameters are defined as follows: $C_P \triangleq 2 + L_P$, $S_R \triangleq (M_R + 2L_R) + L_Q(M_R + L_R)$, and $S_P \triangleq (1 + 2L_P) + L_Q(1 + L_P)$, where $M_R, L_R, L_P$, and $L_Q$ are defined in Assumptions 1-3.*

A short numerical illustration of the right-hand side of (12) is given below.
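For intuition about the scale of the bound, the following snippet evaluates the right-hand side of (12) for some assumed illustrative constants. The values of $M_R, L_R, L_P, L_Q, \gamma$ below are placeholders chosen only so that $\gamma S_P < 1$ holds; they are not derived from any particular application.

```python
import math

def theorem1_bound(N, X, U, gamma, M_R, L_R, L_P, L_Q):
    # Constants of Theorem 1
    C_P = 2 + L_P
    S_R = (M_R + 2 * L_R) + L_Q * (M_R + L_R)
    S_P = (1 + 2 * L_P) + L_Q * (1 + L_P)
    assert gamma * S_P < 1, "Theorem 1 requires gamma * S_P < 1"
    term1 = (2 / (1 - gamma)) * (M_R / math.sqrt(N) + L_R * math.sqrt(U) / math.sqrt(N))
    term2 = ((math.sqrt(X) + math.sqrt(U)) / math.sqrt(N)
             * (2 * S_R * C_P / (S_P - 1))
             * (1 / (1 - gamma * S_P) - 1 / (1 - gamma)))
    return term1 + term2

# illustrative placeholder constants: |X| = 10, |U| = 2, N = 10_000
print(theorem1_bound(N=10_000, X=10, U=2, gamma=0.2, M_R=1.0, L_R=0.5, L_P=0.5, L_Q=0.5))
```

With these placeholder constants the bound decays as $1/\sqrt{N}$, mirroring the discussion that follows.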
Theorem 1 accomplishes our goal of showing that, for large $N$, there exists a locally executable policy whose $N$-agent value function is at most $\mathcal{O}\left(\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\right)$ distance away from the optimal $N$-agent value function. Note that the optimality gap decreases with increasing $N$. We therefore conclude that, for a large population, locally executable policies are near-optimal. Interestingly, this result also shows that the optimality gap increases with $|\mathcal{X}|$ and $|\mathcal{U}|$, the sizes of the state and action spaces. In the next section, we prove that, if the reward and transition functions do not depend on the action distribution of the population, then it is possible to further improve the optimality gap.

## 5 Optimality Error Improvement In A Special Case

The key to improving the optimality gap of Theorem 1 hinges on the following assumption.

**Assumption 4.** *The reward function, $r$, and the transition function, $P$, are independent of the action distribution of the population. Mathematically,*

(a) $r(x, u, \boldsymbol{\mu}, \boldsymbol{\nu}) = r(x, u, \boldsymbol{\mu})$

(b) $P(x, u, \boldsymbol{\mu}, \boldsymbol{\nu}) = P(x, u, \boldsymbol{\mu})$

*$\forall x \in \mathcal{X}$, $\forall u \in \mathcal{U}$, $\forall \boldsymbol{\mu} \in \mathcal{P}(\mathcal{X})$, and $\forall \boldsymbol{\nu} \in \mathcal{P}(\mathcal{U})$.*

Assumption 4 considers a scenario where the reward function, $r$, and the transition function, $P$, are independent of the action distribution. In such a case, the agents can influence each other only through the state distribution. Clearly, this is a special case of the general model considered in Section 2. We would like to clarify that although $r$ and $P$ are assumed to be independent of the action distribution of the population, they still might depend on the actions taken by the individual agents. Assumption 4 is commonly made in the mean-field literature (Tiwari et al., 2019). Theorem 2 (stated below) formally characterizes the optimality gap achieved by the local policies under Assumption 4. The proof of Theorem 2 is relegated to Appendix B.

**Theorem 2.** *Let $\mathbf{x}_0^N$ be the initial states in an $N$-agent system and $\boldsymbol{\mu}_0$ be its associated empirical distribution. Assume $\boldsymbol{\pi}^*_{\mathrm{MF}} \triangleq \{\pi^*_{t,\mathrm{MF}}\}_{t\in\{0,1,\cdots\}}$ and $\boldsymbol{\pi}^*_{\mathrm{MARL}} \triangleq \{\pi^*_{t,\mathrm{MARL}}\}_{t\in\{0,1,\cdots\}}$ to be policy-sequences that maximize $v_{\mathrm{MF}}(\boldsymbol{\mu}_0, \cdot)$ and $v_{\mathrm{MARL}}(\mathbf{x}_0^N, \cdot)$ respectively within the class $\Pi^\infty$. Let $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ be the localized policy-sequence corresponding to $\boldsymbol{\pi}^*_{\mathrm{MF}}$ defined by (11). If Assumptions 1, 2, and 4 hold, then the following is true whenever $\gamma S_P < 1$.*

$$\begin{aligned}|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})-v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\tilde{\boldsymbol{\pi}}_{\mathrm{MF}}^{*})|&\leq\left(\frac{2}{1-\gamma}\right)\left[\frac{M_{R}}{\sqrt{N}}\right]\\&\quad+\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}\left(\frac{4S_{R}}{S_{P}-1}\right)\left[\frac{1}{1-\gamma S_{P}}-\frac{1}{1-\gamma}\right]\end{aligned}\tag{13}$$

*The parameters are the same as in Theorem 1.*

Theorem 2 suggests that, under Assumption 4, the optimality gap for local policies can be bounded as $\mathcal{O}\left(\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}\right)$. Although the dependence of the optimality gap on $N$ is the same as in Theorem 1, its dependence on the sizes of the state and action spaces has been reduced to $\mathcal{O}(\sqrt{|\mathcal{X}|})$ from the $\mathcal{O}(\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|})$ stated previously. This result is particularly useful for applications where Assumption 4 holds and the size of the action space is large. A natural question that might arise in this context is whether it is possible to remove the dependence of the optimality gap on the size of the state space, $|\mathcal{X}|$, by imposing the restriction that $r, P$ are independent of the state distribution. Despite our effort, we could not arrive at such a result.
This points towards an inherent asymmetry between the roles played by the state and action spaces in mean-field approximation.

## 6 Roadmap Of The Proof

In this section, we provide an outline of the proof of Theorem 1. The proof of Theorem 2 is similar. The goal in the proof of Theorem 1 is to establish the following three bounds.

$$G_{0}\triangleq|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})|=\mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)\tag{14}$$

$$G_{1}\triangleq|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MF}}^{*})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi}_{\mathrm{MF}}^{*})|=\mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)\tag{15}$$

$$G_{2}\triangleq|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\tilde{\boldsymbol{\pi}}_{\mathrm{MF}}^{*})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi}_{\mathrm{MF}}^{*})|=\mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)\tag{16}$$

where $v_{\mathrm{MARL}}(\cdot, \cdot)$ and $v_{\mathrm{MF}}(\cdot, \cdot)$ are the value functions defined by (6) and (10), respectively. Once these bounds are established, (12) can be proven easily. For example, observe that,

$$\begin{aligned}v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})-v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\tilde{\boldsymbol{\pi}}_{\mathrm{MF}}^{*})&=v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi}_{\mathrm{MF}}^{*})+v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi}_{\mathrm{MF}}^{*})-v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\tilde{\boldsymbol{\pi}}_{\mathrm{MF}}^{*})\\&\overset{(a)}{\leq}v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi}_{\mathrm{MARL}}^{*})+G_{2}\leq G_{0}+G_{2}=\mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)\end{aligned}\tag{17}$$

Inequality (a) uses the fact that $\boldsymbol{\pi}^*_{\mathrm{MF}}$ is a maximizer of $v_{\mathrm{MF}}(\boldsymbol{\mu}_0, \cdot)$. Following a similar argument, the term $v_{\mathrm{MARL}}(\mathbf{x}_0^N, \tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}) - v_{\mathrm{MARL}}(\mathbf{x}_0^N, \boldsymbol{\pi}^*_{\mathrm{MARL}})$ can be bounded by $G_1 + G_2 = \mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)$. Combining with (17), we can establish that $|v_{\mathrm{MARL}}(\mathbf{x}_0^N, \boldsymbol{\pi}^*_{\mathrm{MARL}}) - v_{\mathrm{MARL}}(\mathbf{x}_0^N, \tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}})| = \mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)$. To prove (14), (15), and (16), it is sufficient to show that the following holds

$$J_{0}\triangleq|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\bar{\boldsymbol{\pi}})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi})|=\mathcal{O}\left(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{\sqrt{N}}\right)\tag{18}$$

for suitable choices of policy-sequences $\bar{\boldsymbol{\pi}}$ and $\boldsymbol{\pi}$. This is achieved as follows.

- Note that the value functions are defined as the time-discounted sum of average rewards. To bound $J_0$ defined in (18), we therefore focus on the difference between the empirical $N$-agent average reward, $\frac{1}{N}\sum_i r(\bar{x}_t^i, \bar{u}_t^i, \bar{\boldsymbol{\mu}}_t^N, \bar{\boldsymbol{\nu}}_t^N)$, at time $t$ generated from the policy-sequence $\bar{\boldsymbol{\pi}}$, and the infinite-agent average reward, $r^{\mathrm{MF}}(\boldsymbol{\mu}_t^\infty, \pi_t)$, at the same instant generated by $\boldsymbol{\pi}$. The quantities $\bar{x}_t^i, \bar{u}_t^i, \bar{\boldsymbol{\mu}}_t^N, \bar{\boldsymbol{\nu}}_t^N$ respectively denote the state and action of the $i$-th agent, and the empirical state and action distributions of the $N$-agent system, at time $t$ corresponding to the policy-sequence $\bar{\boldsymbol{\pi}}$. Also, by $\bar{\boldsymbol{\mu}}_t^\infty, \boldsymbol{\mu}_t^\infty$ we denote the infinite-agent state distributions at time $t$ corresponding to $\bar{\boldsymbol{\pi}}, \boldsymbol{\pi}$ respectively.

- The difference between $\frac{1}{N}\sum_i r(\bar{x}_t^i, \bar{u}_t^i, \bar{\boldsymbol{\mu}}_t^N, \bar{\boldsymbol{\nu}}_t^N)$ and $r^{\mathrm{MF}}(\boldsymbol{\mu}_t^\infty, \pi_t)$ can be upper bounded by three terms.

- The first term is $|\frac{1}{N}\sum_i r(\bar{x}_t^i, \bar{u}_t^i, \bar{\boldsymbol{\mu}}_t^N, \bar{\boldsymbol{\nu}}_t^N) - r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_t^N, \bar{\pi}_t)|$, which is bounded as $\mathcal{O}\left(\frac{1}{\sqrt{N}}\sqrt{|\mathcal{U}|}\right)$ by Lemma 7 (stated in Appendix A.2).

- The second term is $|r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_t^N, \bar{\pi}_t) - r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_t^\infty, \bar{\pi}_t)|$, which can be bounded as $\mathcal{O}(|\bar{\boldsymbol{\mu}}_t^N - \bar{\boldsymbol{\mu}}_t^\infty|)$ by using the Lipschitz continuity of $r^{\mathrm{MF}}$ established in Lemma 4 in Appendix A.1. The term $|\bar{\boldsymbol{\mu}}_t^N - \bar{\boldsymbol{\mu}}_t^\infty|$ can be further bounded as $\mathcal{O}\left(\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\right)$ using Lemma 8 stated in Appendix A.2.

- Finally, the third term is $|r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_t^\infty, \bar{\pi}_t) - r^{\mathrm{MF}}(\boldsymbol{\mu}_t^\infty, \pi_t)|$. Clearly, if $\bar{\boldsymbol{\pi}} = \boldsymbol{\pi}$, then this term is zero. Moreover, if $\bar{\boldsymbol{\pi}}$ is a localization of $\boldsymbol{\pi}$, defined similarly as in (11), then the term is zero as well. This is due to the fact that the trajectories of state and action distributions generated by $\bar{\boldsymbol{\pi}}, \boldsymbol{\pi}$ are exactly the same in an infinite-agent system. This, however, may not be true in an $N$-agent system.
- For the above two cases, i.e., when $\bar{\boldsymbol{\pi}}, \boldsymbol{\pi}$ are the same or one is a localization of the other, we can bound $|\frac{1}{N}\sum_i r(\bar{x}_t^i, \bar{u}_t^i, \bar{\boldsymbol{\mu}}_t^N, \bar{\boldsymbol{\nu}}_t^N) - r^{\mathrm{MF}}(\boldsymbol{\mu}_t^\infty, \pi_t)|$ as $\mathcal{O}\left(\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\right)$ by taking a sum of the above three bounds. The bound obtained is, in general, $t$-dependent. By taking a time-discounted sum of these bounds, we can establish (18).

- Finally, (14), (15), and (16) are established by injecting the following pairs of policy-sequences into (18): $(\bar{\boldsymbol{\pi}}, \boldsymbol{\pi}) = (\boldsymbol{\pi}^*_{\mathrm{MARL}}, \boldsymbol{\pi}^*_{\mathrm{MARL}}), (\boldsymbol{\pi}^*_{\mathrm{MF}}, \boldsymbol{\pi}^*_{\mathrm{MF}}), (\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}, \boldsymbol{\pi}^*_{\mathrm{MF}})$. Note that $\bar{\boldsymbol{\pi}}, \boldsymbol{\pi}$ are equal for the first two pairs, whereas in the third case, $\bar{\boldsymbol{\pi}}$ is a localization of $\boldsymbol{\pi}$.

## 7 Algorithm To Obtain Near-Optimal Local Policy

In Section 3, we discussed how near-optimal local policies can be obtained if the optimal mean-field policy-sequence $\boldsymbol{\pi}^*_{\mathrm{MF}}$ is known. In this section, we first describe a natural policy gradient (NPG) based algorithm to approximately obtain $\boldsymbol{\pi}^*_{\mathrm{MF}}$. Later, we also provide an algorithm describing how the obtained policy-sequence is localised and executed in a decentralised manner.

Recall from Section 3 that, in an infinite-agent system, it is sufficient to track only one representative agent which, at instant $t$, takes an action $u_t$ on the basis of its observation of its own state, $x_t$, and the state distribution, $\boldsymbol{\mu}_t$, of the population. Hence, the computation of $\boldsymbol{\pi}^*_{\mathrm{MF}}$ can be depicted as a Markov decision problem with state space $\mathcal{X} \times \mathcal{P}(\mathcal{X})$ and action space $\mathcal{U}$. One can, therefore, presume $\boldsymbol{\pi}^*_{\mathrm{MF}}$ to be stationary, i.e., $\boldsymbol{\pi}^*_{\mathrm{MF}} = \{\pi^*_{\mathrm{MF}}, \pi^*_{\mathrm{MF}}, \cdots\}$ (Puterman, 2014). We would like to point out that the same conclusion may not hold for the localised policy-sequence $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$. Stationarity of the sequence $\boldsymbol{\pi}^*_{\mathrm{MF}}$ reduces our task to finding a single optimal policy, $\pi^*_{\mathrm{MF}}$. This facilitates a drastic reduction in the search space. With slight abuse of notation, in this section we shall use $\pi_\Phi$ to denote a policy parameterized by $\Phi$, as well as the stationary sequence generated by it. Let the collection of policies be denoted as $\Pi$. We shall assume that $\Pi$ is parameterized by $\Phi \in \mathbb{R}^d$. Consider an arbitrary policy $\pi_\Phi \in \Pi$. The Q-function associated with this policy is defined as follows, $\forall x \in \mathcal{X}$, $\forall \boldsymbol{\mu} \in \mathcal{P}(\mathcal{X})$, and $\forall u \in \mathcal{U}$.

$$Q_{\Phi}(x,\boldsymbol{\mu},u)\triangleq\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t}r(x_{t},u_{t},\boldsymbol{\mu}_{t},\boldsymbol{\nu}_{t})\,\Big{|}\,x_{0}=x,\boldsymbol{\mu}_{0}=\boldsymbol{\mu},u_{0}=u\right]\tag{19}$$

where $u_{t+1} \sim \pi_\Phi(x_{t+1}, \boldsymbol{\mu}_{t+1})$, $x_{t+1} \sim P(x_t, u_t, \boldsymbol{\mu}_t, \boldsymbol{\nu}_t)$, and $\boldsymbol{\mu}_t, \boldsymbol{\nu}_t$ are recursively obtained $\forall t > 0$ using (8) and (7) respectively from the initial distribution $\boldsymbol{\mu}_0 = \boldsymbol{\mu}$. The advantage function associated with $\pi_\Phi$ is defined as shown below.

$$A_{\Phi}(x,\boldsymbol{\mu},u)\triangleq Q_{\Phi}(x,\boldsymbol{\mu},u)-\mathbb{E}[Q_{\Phi}(x,\boldsymbol{\mu},\bar{u})]\tag{20}$$

The expectation is evaluated over $\bar{u} \sim \pi_\Phi(x, \boldsymbol{\mu})$. To obtain the optimal policy, $\pi^*_{\mathrm{MF}}$, we apply the NPG update (Agarwal et al., 2021; Liu et al., 2020) as shown below with learning parameter $\eta$. This generates a sequence of parameters $\{\Phi_j\}_{j=1}^J$ from an arbitrary initial choice, $\Phi_0$.

$$\Phi_{j+1}=\Phi_{j}+\eta\mathbf{w}_{j},\quad\mathbf{w}_{j}\triangleq\arg\min_{\mathbf{w}\in\mathbb{R}^{d}}\;L_{\zeta_{\mu_{0}}^{\Phi_{j}}}(\mathbf{w},\Phi_{j})\tag{21}$$

The term $\zeta_{\mu_0}^{\Phi_j}$ is the occupancy measure defined as

$$\zeta_{\mu_{0}}^{\Phi_{j}}(x,\boldsymbol{\mu},u)\triangleq(1-\gamma)\sum_{\tau=0}^{\infty}\gamma^{\tau}\mathbb{P}(x_{\tau}=x,\boldsymbol{\mu}_{\tau}=\boldsymbol{\mu},u_{\tau}=u\,|\,x_{0}=x,\boldsymbol{\mu}_{0}=\boldsymbol{\mu},u_{0}=u,\pi_{\Phi_{j}})$$

whereas the function $L_{\zeta_{\mu_0}^{\Phi_j}}$ is given as follows.
$$L_{\zeta_{\mu_{0}}^{\Phi_{j}}}(\mathbf{w},\Phi)\triangleq\mathbb{E}_{(x,\boldsymbol{\mu},u)\sim\zeta_{\mu_{0}}^{\Phi_{j}}}\left[\left(A_{\Phi}(x,\boldsymbol{\mu},u)-(1-\gamma)\mathbf{w}^{\mathrm{T}}\nabla_{\Phi}\log\pi_{\Phi}(x,\boldsymbol{\mu})(u)\right)^{2}\right]$$

Note that in the $j$-th iteration of (21), the gradient direction $\mathbf{w}_j$ is computed by solving another minimization problem. We employ a stochastic gradient descent (SGD) algorithm to solve this sub-problem. The update equation for the SGD is $\mathbf{w}_{j,l+1} = \mathbf{w}_{j,l} - \alpha\mathbf{h}_{j,l}$ (Liu et al., 2020), where $\alpha$ is the learning parameter and the gradient direction $\mathbf{h}_{j,l}$ is defined as follows.

$$\mathbf{h}_{j,l}\triangleq\left(\mathbf{w}_{j,l}^{\mathrm{T}}\nabla_{\Phi_{j}}\log\pi_{\Phi_{j}}(x,\boldsymbol{\mu})(u)-\frac{1}{1-\gamma}\hat{A}_{\Phi_{j}}(x,\boldsymbol{\mu},u)\right)\nabla_{\Phi_{j}}\log\pi_{\Phi_{j}}(x,\boldsymbol{\mu})(u)\tag{22}$$

where $(x, \boldsymbol{\mu}, u)$ is sampled from the occupancy measure $\zeta_{\mu_0}^{\Phi_j}$, and $\hat{A}_{\Phi_j}$ is an unbiased estimator of $A_{\Phi_j}$. We detail the process of obtaining the samples and the estimator in Algorithm 3 in Appendix M. We would like to clarify that Algorithm 3 of (Agarwal et al., 2021) is the foundation for our Algorithm 3. The NPG process is summarised in Algorithm 1.

**Algorithm 1** Natural Policy Gradient Algorithm to obtain the Optimal Policy

**Input:** $\eta, \alpha$: learning rates; $J, L$: number of execution steps; $\mathbf{w}_0, \Phi_0$: initial parameters; $\boldsymbol{\mu}_0$: initial state distribution
**Initialization:** $\Phi \leftarrow \Phi_0$
1: **for** $j \in \{0, 1, \cdots, J-1\}$ **do**
2: $\mathbf{w}_{j,0} \leftarrow \mathbf{w}_0$
3: **for** $l \in \{0, 1, \cdots, L-1\}$ **do**
4: Sample $(x, \boldsymbol{\mu}, u) \sim \zeta_{\mu_0}^{\Phi_j}$ and $\hat{A}_{\Phi_j}(x, \boldsymbol{\mu}, u)$ using Algorithm 3
5: Compute $\mathbf{h}_{j,l}$ using (22); $\mathbf{w}_{j,l+1} \leftarrow \mathbf{w}_{j,l} - \alpha\mathbf{h}_{j,l}$
6: **end for**
7: $\mathbf{w}_j \leftarrow \frac{1}{L}\sum_{l=1}^{L}\mathbf{w}_{j,l}$
8: $\Phi_{j+1} \leftarrow \Phi_j + \eta\mathbf{w}_j$
9: **end for**
**Output:** $\{\Phi_1, \cdots, \Phi_J\}$: policy parameters

In Algorithm 2, we describe how the policy obtained from Algorithm 1 can be localised and executed by the agents in a decentralised manner. This essentially follows the ideas discussed in Section 3.

**Algorithm 2** Decentralised Execution of the Policy generated from Algorithm 1

**Input:** $\Phi$: policy parameter from Algorithm 1; $T$: number of execution steps; $\boldsymbol{\mu}_0$: initial state distribution; $x_0$: initial state of the agent
1: **for** $t \in \{0, 1, \cdots, T-1\}$ **do**
2: Execute $u_t \sim \pi_\Phi(x_t, \boldsymbol{\mu}_t)$
3: Compute $\boldsymbol{\mu}_{t+1}$ via the mean-field dynamics (8)
4: Observe the next state $x_{t+1}$ (updated via the $N$-agent dynamics: $x_{t+1} \sim P(x_t, u_t, \boldsymbol{\mu}_t^N, \boldsymbol{\nu}_t^N)$)
5: Update: $\boldsymbol{\mu}_t \leftarrow \boldsymbol{\mu}_{t+1}$
6: Update: $x_t \leftarrow x_{t+1}$
7: **end for**

Note that, in order to execute line 3 in Algorithm 2, the agents must be aware of the transition function, $P$. However, knowledge of the reward function, $r$, is not required. A minimal sketch of the update structure of Algorithm 1 is given below.
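The following Python sketch mirrors the update structure of Algorithm 1, i.e., Eqs. (21)-(22), under the assumption that the sampling and advantage-estimation step of Algorithm 3 is available as a black-box `sample_fn`. It is an illustration of the update loop only, not the implementation used in the experiments.

```python
import numpy as np

def npg_training(sample_fn, eta, alpha, J, L, d, gamma=0.99):
    # Sketch of the update structure of Algorithm 1 (Eqs. (21)-(22)).
    # sample_fn(Phi) stands in for Algorithm 3: it returns (grad_log, A_hat), where
    # grad_log = grad_Phi log pi_Phi(x, mu)(u) at a sample (x, mu, u) drawn from the
    # occupancy measure, and A_hat is an unbiased advantage estimate.
    Phi = np.zeros(d)                    # initial parameter Phi_0
    params = []
    for j in range(J):                   # outer NPG iterations
        w = np.zeros(d)
        w_avg = np.zeros(d)
        for l in range(L):               # inner SGD loop solving argmin_w L_zeta(w, Phi_j)
            grad_log, A_hat = sample_fn(Phi)
            h = (w @ grad_log - A_hat / (1.0 - gamma)) * grad_log   # Eq. (22)
            w = w - alpha * h
            w_avg += w / L
        Phi = Phi + eta * w_avg          # Eq. (21): Phi_{j+1} = Phi_j + eta * w_j
        params.append(Phi.copy())
    return params

# usage with a dummy sampler (the real sampling/estimation is performed by Algorithm 3)
rng = np.random.default_rng(0)
dummy_sample = lambda Phi: (rng.normal(size=Phi.shape), float(rng.normal()))
params = npg_training(dummy_sample, eta=1e-3, alpha=1e-3, J=100, L=100, d=8)
```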
Theorem 1 showed that the localization of the optimal mean-field policy $\pi^*_{\mathrm{MF}}$ is near-optimal for large $N$. However, Algorithm 1 can provide only an approximation of $\pi^*_{\mathrm{MF}}$. One might naturally ask: is the decentralised version of the policy given by Algorithm 1 still near-optimal? We provide an answer to this question in Theorem 3. Lemma 1, which follows from Theorem 4.9 of (Liu et al., 2020), is an essential ingredient of this result. The proof of Lemma 1, however, hinges on the assumptions stated below. These are similar to Assumptions 2.1, 4.2, and 4.4 respectively in (Liu et al., 2020).

**Assumption 5.** *$\forall \Phi \in \mathbb{R}^d$, $\forall \boldsymbol{\mu}_0 \in \mathcal{P}(\mathcal{X})$, for some $\chi > 0$, $F_{\mu_0}(\Phi) - \chi I_d$ is positive semi-definite, where $F_{\mu_0}(\Phi)$ is given as follows.*

$$F_{\mu_{0}}(\Phi)\triangleq\mathbb{E}_{(x,\boldsymbol{\mu},u)\sim\zeta_{\mu_{0}}^{\Phi}}\left[\{\nabla_{\Phi}\pi_{\Phi}(x,\boldsymbol{\mu})(u)\}\times\{\nabla_{\Phi}\log\pi_{\Phi}(x,\boldsymbol{\mu})(u)\}^{\mathrm{T}}\right]$$

**Assumption 6.** *$\forall \Phi \in \mathbb{R}^d$, $\forall \boldsymbol{\mu} \in \mathcal{P}(\mathcal{X})$, $\forall x \in \mathcal{X}$, $\forall u \in \mathcal{U}$, $|\nabla_\Phi \log \pi_\Phi(x, \boldsymbol{\mu})(u)|_1 \leq G$ for some positive constant $G$.*

**Assumption 7.** *$\forall \Phi_1, \Phi_2 \in \mathbb{R}^d$, $\forall \boldsymbol{\mu} \in \mathcal{P}(\mathcal{X})$, $\forall x \in \mathcal{X}$, $\forall u \in \mathcal{U}$,*

$$|\nabla_{\Phi_{1}}\log\pi_{\Phi_{1}}(x,\boldsymbol{\mu})(u)-\nabla_{\Phi_{2}}\log\pi_{\Phi_{2}}(x,\boldsymbol{\mu})(u)|_{1}\leq M|\Phi_{1}-\Phi_{2}|_{1}$$

*for some positive constant $M$.*

**Assumption 8.** *$\forall \Phi \in \mathbb{R}^d$, $\forall \boldsymbol{\mu}_0 \in \mathcal{P}(\mathcal{X})$,*

$$L_{\zeta_{\mu_{0}}^{\Phi^{*}}}(\mathbf{w}_{\Phi}^{*},\Phi)\leq\epsilon_{\mathrm{bias}},\quad\mathbf{w}_{\Phi}^{*}\triangleq\arg\min_{\mathbf{w}\in\mathbb{R}^{d}}L_{\zeta_{\mu_{0}}^{\Phi}}(\mathbf{w},\Phi)$$

*where $\Phi^*$ is the parameter of the optimal policy.*

The parameter $\epsilon_{\mathrm{bias}}$ indicates the expressive power of the parameterized policy class, $\Pi$. For example, $\epsilon_{\mathrm{bias}} = 0$ for softmax policies, and it is small for rich neural network based policies.

**Lemma 1.** *Assume that $\{\Phi_j\}_{j=1}^J$ is the sequence of policy parameters generated from Algorithm 1. If Assumptions 5-8 hold, then the following inequality holds for some choice of $\eta, \alpha, J, L$, for an arbitrary initial parameter $\Phi_0$ and initial state distribution $\boldsymbol{\mu}_0 \in \mathcal{P}(\mathcal{X})$.*

$$\sup_{\Phi\in\mathbb{R}^{d}}v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\pi_{\Phi})-\frac{1}{J}\sum_{j=1}^{J}v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\pi_{\Phi_{j}})\leq\frac{\sqrt{\epsilon_{\mathrm{bias}}}}{1-\gamma}+\epsilon\tag{23}$$

*The sample complexity of Algorithm 1 to achieve (23) is $\mathcal{O}(\epsilon^{-3})$.*

We now state the following result.

**Theorem 3.** *Let $\mathbf{x}_0^N$ be the initial states in an $N$-agent system and $\boldsymbol{\mu}_0$ be its associated empirical distribution. Assume that $\{\Phi_j\}_{j=1}^J$ is the sequence of policy parameters obtained from Algorithm 1, and the set of policies, $\Pi$, obeys Assumption 3. If Assumptions 1, 2, 5-8 hold, then for arbitrary $\epsilon > 0$, the following relation holds for certain choices of $\eta, \alpha, J, L$, and arbitrary initial distribution $\boldsymbol{\mu}_0 \in \mathcal{P}(\mathcal{X})$ and initial parameter $\Phi_0$,*

$$\left|\sup_{\Phi\in\mathbb{R}^{d}}v_{\mathrm{MARL}}(\boldsymbol{\mu}_{0},\pi_{\Phi})-\frac{1}{J}\sum_{j=1}^{J}v_{\mathrm{MARL}}(\boldsymbol{\mu}_{0},\tilde{\pi}_{\Phi_{j}})\right|\leq\frac{\sqrt{\epsilon_{\mathrm{bias}}}}{1-\gamma}+C\max\{\epsilon,e\},\quad e\triangleq\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\tag{24}$$

*whenever $\gamma S_P < 1$, where $S_P$ is given in Theorem 1. The term $\tilde{\pi}_{\Phi_j}$ denotes the localization of the policy $\pi_{\Phi_j}$ defined similarly as in (11), and $C$ is a constant. The sample complexity of the process is $\mathcal{O}(\epsilon^{-3})$. Additionally, if Assumption 4 is satisfied, then $e$ in (24) can be reduced to $e = \sqrt{|\mathcal{X}|}/\sqrt{N}$.*

The proof of Theorem 3 is relegated to Appendix N. It states that, for any $\epsilon > 0$, Algorithm 1 can generate a policy such that if it is localised and executed in a decentralised manner in an $N$-agent system, then the value generated from this localised policy is at most $\mathcal{O}(\max\{\epsilon, e\})$ distance away from the optimal value function. The sample complexity to obtain such a policy is $\mathcal{O}(\epsilon^{-3})$.

## 8 Experiments

The setup considered for the numerical experiment is taken from (Subramanian and Mahajan, 2019) with slight modifications. We consider a network of $N$ collaborative firms that yield the same product but with varying quality. The product quality of the $i$-th firm, $i \in \{1, \cdots, N\}$, at time $t \in \{0, 1, \cdots\}$ is denoted as $x_t^i$, which can take values from the set $\mathcal{Q} \triangleq \{0, \cdots, Q-1\}$. At each instant, each firm has two choices to make: either it can remain unresponsive (which we denote as action 0), or it can invest some money to improve the quality of its product (indicated as action 1).
If the action taken by the $i$-th firm at time $t$ is denoted as $u_t^i$, then its state-transition law is described by the following equation,

$$x_{t+1}^{i}=\begin{cases}x_{t}^{i}&\text{if }u_{t}^{i}=0\\ x_{t}^{i}+\left\lfloor\chi\left(Q-1-x_{t}^{i}\right)\left(1-\frac{\bar{\mu}_{t}^{N}}{Q}\right)\right\rfloor&\text{otherwise}\end{cases}\tag{25}$$

where $\chi$ is a uniform random variable in $[0, 1]$, and $\bar{\mu}_t^N$ is the mean of the empirical state distribution, $\boldsymbol{\mu}_t^N$. The intuition behind this transition law can be stated as follows. If the firm remains unresponsive, i.e., $u_t^i = 0$, the product quality does not improve and the state remains the same. In contrast, if the firm invests some money, the quality improves probabilistically. However, if the average quality, $\bar{\mu}_t^N$, in the market is high, a significant improvement is difficult to achieve. The factor $\left(1 - \frac{\bar{\mu}_t^N}{Q}\right)$ describes the resistance to improvement due to higher average quality. The reward function of the $i$-th firm is defined as shown below.

$$r(x_{t}^{i},u_{t}^{i},\boldsymbol{\mu}_{t}^{N},\boldsymbol{\nu}_{t}^{N})=\alpha_{R}x_{t}^{i}-\beta_{R}\bar{\mu}_{t}^{N}-\lambda_{R}u_{t}^{i}\tag{26}$$

The first term, $\alpha_R x_t^i$, is due to the revenue earned by the firm; the second term, $\beta_R\bar{\mu}_t^N$, is attributed to the resistance to improvement imparted by high average quality; the third term, $\lambda_R u_t^i$, is due to the cost incurred for the investment. Following our previous convention, let $\boldsymbol{\pi}^*_{\mathrm{MF}}$ and $\tilde{\boldsymbol{\pi}}^*_{\mathrm{MF}}$ denote the optimal mean-field policy-sequence and its corresponding local policy-sequence respectively. Using these notations, we define the error for a given joint initial state, $\mathbf{x}_0^N$, as follows.

$$\mathrm{error}\triangleq\left|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\boldsymbol{\pi}_{\mathrm{MF}}^{*})-v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\tilde{\boldsymbol{\pi}}_{\mathrm{MF}}^{*})\right|\tag{27}$$

Fig. 1 plots the error as a function of $N$ and $Q$. Evidently, the error decreases with $N$ and increases with $Q$. We would like to point out that $\boldsymbol{\pi}^*_{\mathrm{MF}}$, in general, is not a maximizer of $v_{\mathrm{MARL}}(\mathbf{x}_0^N, \cdot)$. However, for the reasons stated in Section 1, it is difficult to evaluate the $N$-agent optimal policy-sequence, $\boldsymbol{\pi}^*_{\mathrm{MARL}}$, especially for large $N$. On the other hand, $\boldsymbol{\pi}^*_{\mathrm{MF}}$ can be easily computed via Algorithm 1 and can act as a good proxy for $\boldsymbol{\pi}^*_{\mathrm{MARL}}$. Fig. 1 therefore essentially describes how well the local policies approximate the optimal value function obtained over all policy-sequences in an $N$-agent system.

Figure 1: Error defined by (27) as a function of the population size, $N$ (Fig. 1a), and the size of the individual state space, $Q$ (Fig. 1b). The solid line and the half-width of the shaded region respectively denote the mean and standard deviation of the error obtained over 25 random seeds. The values of the system parameters are: $\alpha_R = 1$, $\beta_R = \lambda_R = 0.5$. We use $Q = 10$ for Fig. 1a whereas $N = 50$ for Fig. 1b. Moreover, the values of the hyperparameters used in Algorithm 1 are as follows: $\alpha = \eta = 10^{-3}$, $J = L = 10^2$. We use a feed-forward (FF) neural network (NN) with a single hidden layer of size 128 as the policy approximator. The code used for the numerical experiment can be accessed at: https://github.itap.purdue.edu/Clanlabs/NearOptimalLocalPolicy

Before concluding, we would like to point out that both the reward and state-transition functions considered in our experimental setup are independent of the action distribution. Therefore, the setting described in this section satisfies Assumption 4. A minimal simulation sketch of this setup is given below.
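The following Python sketch simulates the firm-quality environment defined by (25) and (26). The constants match those reported for Fig. 1, but the code is only an illustration of the dynamics; it is not the repository implementation and uses a uniformly random policy rather than the learned one.

```python
import numpy as np

def firm_step(states, actions, Q, rng, alpha_R=1.0, beta_R=0.5, lambda_R=0.5):
    # One step of the N-firm environment of Section 8 (Eqs. (25)-(26)).
    mu_bar = states.mean()                     # mean of the empirical state distribution
    chi = rng.uniform(0.0, 1.0, size=states.shape)
    improvement = np.floor(chi * (Q - 1 - states) * (1.0 - mu_bar / Q)).astype(int)
    next_states = np.where(actions == 0, states, states + improvement)   # Eq. (25)
    rewards = alpha_R * states - beta_R * mu_bar - lambda_R * actions    # Eq. (26)
    return rewards, next_states

# simulate N = 50 firms with Q = 10 quality levels under a uniformly random policy
rng = np.random.default_rng(0)
Q, N = 10, 50
states = rng.integers(0, Q, size=N)
for t in range(5):
    actions = rng.integers(0, 2, size=N)
    rewards, states = firm_step(states, actions, Q, rng)
```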
Moreover, due to the finiteness of individual state space Q, and action space {0, 1}, the reward function is bounded. Also, it is straightforward to verify the Lipschitz continuity of both reward and transition functions. Hence, Assumptions 1 and 2 are satisfied. Finally, we use neural network (NN) based policies (with bounded weights) in our experiment. Thus, Assumption 3 is also satisfied. ## 9 Conclusions In this article, we show that, in an N-agent system, one can always choose localised policies such that the resulting value function is close to the value function generated by (possibly non-local) optimal policy. We mathematically characterize the approximation error as a function of N. Furthermore, we devise an algorithm to explicitly obtain the said local policy. One interesting extension of our problem would be to consider the case where the interactions between the agents are non-uniform. Proving near-optimality of local policies appears to be difficult in such a scenario because, due to the non-uniformity, the notion of mean-field is hard to define. Although our results are pertinent to the standard MARL systems, similar results can also be established for other variants of reinforcement learning problems, e.g., the model described in (Gast and Gaujal, 2011). ## Acknowledgments W. U. M. and S. V. U. were partially funded by NSF Grant No. 1638311 CRISP Type 2/Collaborative Research: Critical Transitions in the Resilience and Recovery of Interdependent Social and Physical Networks. ## References Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan. On the theory of policy gradient methods: Optimality, approximation, and distribution shift. *Journal of Machine Learning Research*, 22 (98):1–76, 2021. Abubakr O Al-Abbasi, Arnob Ghosh, and Vaneet Aggarwal. Deeppool: Distributed model-free algorithm for ride-sharing using deep reinforcement learning. *IEEE Transactions on Intelligent Transportation Systems*, 20(12):4714–4727, 2019. Andrea Angiuli, Jean-Pierre Fouque, and Mathieu Laurière. Unified reinforcement q-learning for mean field game and control problems. *Mathematics of Control, Signals, and Systems*, pages 1–55, 2022. René Carmona, François Delarue, et al. *Probabilistic theory of mean field games with applications I-II*. Springer, 2018. Chacha Chen, Hua Wei, Nan Xu, Guanjie Zheng, Ming Yang, Yuanhao Xiong, Kai Xu, and Zhenhui Li. Toward a thousand lights: Decentralized deep reinforcement learning for large-scale traffic signal control. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 34, pages 3414–3421, 2020. Shanzhi Chen, Jinling Hu, Yan Shi, Ying Peng, Jiayi Fang, Rui Zhao, and Li Zhao. Vehicle-to-everything (v2x) services supported by lte-based systems and 5g. *IEEE Communications Standards Magazine*, 1(2): 70–76, 2017. Tianshu Chu, Jie Wang, Lara Codecà, and Zhaojian Li. Multi-agent deep reinforcement learning for largescale traffic signal control. *IEEE Transactions on Intelligent Transportation Systems*, 21(3):1086–1095, 2019. Amal Feriani and Ekram Hossain. Single and multi-agent deep reinforcement learning for AI-enabled wireless networks: A tutorial. *IEEE Communications Surveys & Tutorials*, 23(2):1226–1252, 2021. Nicolas Gast and Bruno Gaujal. A mean field approach for optimization in discrete time. Discrete Event Dynamic Systems, 21(1):63–101, 2011. Haotian Gu, Xin Guo, Xiaoli Wei, and Renyuan Xu. Mean-field controls with q-learning for cooperative marl: convergence and complexity analysis. 
*SIAM Journal on Mathematics of Data Science*, 3(4):1168–1196, 2021. Chenchen Han, Haipeng Yao, Tianle Mai, Ni Zhang, and Mohsen Guizani. Qmix aided routing in social-based delay-tolerant networks. *IEEE Transactions on Vehicular Technology*, 71(2):1952–1963, 2021. Alec Koppel, Amrit Singh Bedi, Bhargav Ganguly, and Vaneet Aggarwal. Convergence rates of average-reward multi-agent reinforcement learning via randomized linear programming. *arXiv preprint* arXiv:2110.12929, 2021. Landon Kraemer and Bikramjit Banerjee. Multi-agent reinforcement learning as a rehearsal for decentralized planning. *Neurocomputing*, 190:82–94, 2016. Yiheng Lin, Guannan Qu, Longbo Huang, and Adam Wierman. Multi-agent reinforcement learning in stochastic networked systems. *arXiv preprint arXiv:2006.06555*, 2020. Yanli Liu, Kaiqing Zhang, Tamer Basar, and Wotao Yin. An improved analysis of (variance-reduced) policy gradient and natural policy gradient methods. *Advances in Neural Information Processing Systems*, 33: 7624–7636, 2020. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. *nature*, 518(7540):529–533, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *International conference on machine learning*, pages 1928–1937. PMLR, 2016. Washim Uddin Mondal, Mridul Agarwal, Vaneet Aggarwal, and Satish V. Ukkusuri. On the approximation of cooperative heterogeneous multi-agent reinforcement learning (MARL) using mean field control (MFC). Journal of Machine Learning Research, 23(129):1–46, 2022a. Washim Uddin Mondal, Vaneet Aggarwal, and Satish Ukkusuri. Can mean field control (mfc) approximate cooperative multi agent reinforcement learning (marl) with non-uniform interaction? In The 38th Conference on Uncertainty in Artificial Intelligence, 2022b. Frans A Oliehoek, Matthijs TJ Spaan, and Nikos Vlassis. Optimal and approximate q-value functions for decentralized pomdps. *Journal of Artificial Intelligence Research*, 32:289–353, 2008. Barna Pasztor, Ilija Bogunovic, and Andreas Krause. Efficient model-based multi-agent mean-field reinforcement learning. *arXiv preprint arXiv:2107.04050*, 2021. Martin L Puterman. *Markov decision processes: discrete stochastic dynamic programming*. John Wiley & Sons, 2014. Guannan Qu, Yiheng Lin, Adam Wierman, and Na Li. Scalable multi-agent reinforcement learning for networked systems with average reward. *Advances in Neural Information Processing Systems*, 33:2074– 2086, 2020. Tabish Rashid, Mikayel Samvelyan, Christian Schroeder, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In International conference on machine learning, pages 4295–4304. PMLR, 2018. Tabish Rashid, Gregory Farquhar, Bei Peng, and Shimon Whiteson. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. Advances in neural information processing systems, 33:10199–10210, 2020. Gavin A Rummery and Mahesan Niranjan. *On-line Q-learning using connectionist systems*, volume 37. Citeseer, 1994. Lars Ruthotto, Stanley J Osher, Wuchen Li, Levon Nurbekyan, and Samy Wu Fung. 
A machine learning framework for solving high-dimensional mean field game and mean field control problems. *Proceedings of the National Academy of Sciences*, 117(17):9183–9193, 2020.

Kyunghwan Son, Daewoo Kim, Wan Ju Kang, David Earl Hostallero, and Yung Yi. Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning. In *International Conference on Machine Learning*, pages 5887–5896. PMLR, 2019.

Jayakumar Subramanian and Aditya Mahajan. Reinforcement learning in stationary mean-field games. In *Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems*, pages 251–259, 2019.

Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning based on team reward. In *Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems*, pages 2085–2087, 2018.

Ming Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In *Proceedings of the Tenth International Conference on Machine Learning*, pages 330–337, 1993.

Nilay Tiwari, Arnob Ghosh, and Vaneet Aggarwal. Reinforcement learning for mean field game. *arXiv preprint arXiv:1905.13357*, 2019.

Xiaoqiang Wang, Liangjun Ke, Zhimin Qiao, and Xinghua Chai. Large-scale traffic signal control using a novel multiagent reinforcement learning. *IEEE Transactions on Cybernetics*, 51(1):174–187, 2020.

Christopher JCH Watkins and Peter Dayan. Q-learning. *Machine Learning*, 8(3):279–292, 1992.

Nicholas J Watkins, Cameron Nowzari, Victor M Preciado, and George J Pappas. Optimal resource allocation for competitive spreading processes on bilayer networks. *IEEE Transactions on Control of Network Systems*, 5(1):298–307, 2016.

Hua Wei, Chacha Chen, Guanjie Zheng, Kan Wu, Vikash Gayah, Kai Xu, and Zhenhui Li. Presslight: Learning max pressure control to coordinate traffic signals in arterial network. In *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, pages 1290–1298, 2019.

## A Proof Of Theorem 1

In order to establish the theorem, the following lemmas are necessary. The proofs of the lemmas are relegated to Appendices C-I.

## A.1 Lipschitz Continuity Lemmas

In the following three lemmas (Lemmas 2-4), $\pi, \bar{\pi} \in \Pi$ are arbitrary admissible policies, and $\boldsymbol{\mu}, \bar{\boldsymbol{\mu}} \in \mathcal{P}(\mathcal{X})$ are arbitrary state distributions. Moreover, the following definition is frequently used.

$$|\pi(\cdot,\boldsymbol{\mu})-\bar{\pi}(\cdot,\bar{\boldsymbol{\mu}})|_{\infty}\triangleq\sup_{x\in\mathcal{X}}|\pi(x,\boldsymbol{\mu})-\bar{\pi}(x,\bar{\boldsymbol{\mu}})|_{1}\tag{28}$$

**Lemma 2.** *If $\nu^{\mathrm{MF}}(\cdot, \cdot)$ is defined by (7), then the following holds.*

$$|\nu^{\mathrm{MF}}(\boldsymbol{\mu},\pi)-\nu^{\mathrm{MF}}(\bar{\boldsymbol{\mu}},\bar{\pi})|_{1}\leq|\boldsymbol{\mu}-\bar{\boldsymbol{\mu}}|_{1}+|\pi(\cdot,\boldsymbol{\mu})-\bar{\pi}(\cdot,\bar{\boldsymbol{\mu}})|_{\infty}\tag{29}$$

**Lemma 3.** *If $P^{\mathrm{MF}}(\cdot, \cdot)$ is defined by (8), then the following holds.*

$$|P^{\mathrm{MF}}(\boldsymbol{\mu},\pi)-P^{\mathrm{MF}}(\bar{\boldsymbol{\mu}},\bar{\pi})|_{1}\leq\tilde{S}_{P}|\boldsymbol{\mu}-\bar{\boldsymbol{\mu}}|_{1}+\bar{S}_{P}|\pi(\cdot,\boldsymbol{\mu})-\bar{\pi}(\cdot,\bar{\boldsymbol{\mu}})|_{\infty}\tag{30}$$

*where $\tilde{S}_P \triangleq 1 + 2L_P$ and $\bar{S}_P \triangleq 1 + L_P$.*

**Lemma 4.**
*If $r^{\mathrm{MF}}(\cdot, \cdot)$ is defined by (9), then the following holds.*

$$|r^{\mathrm{MF}}(\boldsymbol{\mu},\pi)-r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}},\bar{\pi})|\leq\tilde{S}_{R}|\boldsymbol{\mu}-\bar{\boldsymbol{\mu}}|_{1}+\bar{S}_{R}|\pi(\cdot,\boldsymbol{\mu})-\bar{\pi}(\cdot,\bar{\boldsymbol{\mu}})|_{\infty}\tag{31}$$

*where $\tilde{S}_R \triangleq M_R + 2L_R$ and $\bar{S}_R \triangleq M_R + L_R$.*

Lemmas 2-4 essentially dictate that the state and action evolution functions ($P^{\mathrm{MF}}(\cdot, \cdot)$ and $\nu^{\mathrm{MF}}(\cdot, \cdot)$, respectively) and the average reward function, $r^{\mathrm{MF}}(\cdot, \cdot)$, are Lipschitz continuous w.r.t. the state-distribution and policy arguments. Lemma 2 is an essential ingredient in the proofs of Lemmas 3 and 4.

## A.2 Large Population Approximation Lemmas

In the following four lemmas (Lemmas 5-8), $\boldsymbol{\pi} \triangleq \{\pi_t\}_{t\in\{0,1,\cdots\}} \in \Pi^\infty$ is an arbitrary admissible policy-sequence, and $\{\boldsymbol{\mu}_t^N, \boldsymbol{\nu}_t^N\}_{t\in\{0,1,\cdots\}}$ are the $N$-agent empirical state and action distributions induced by it from the initial state distribution, $\boldsymbol{\mu}_0$. Similarly, $\{x_t^i, u_t^i\}_{t\in\{0,1,\cdots\}}$ are the states and actions of the $i$-th agent evolved from the initial distribution, $\boldsymbol{\mu}_0$, via the policy-sequence, $\boldsymbol{\pi}$. The joint states and actions at time $t$ are denoted by $\mathbf{x}_t^N, \mathbf{u}_t^N$ respectively.

**Lemma 5.** *The following inequality holds $\forall t \in \{0, 1, \cdots\}$.*

$$\mathbb{E}\big|\boldsymbol{\nu}_{t}^{N}-\nu^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{N},\pi_{t})\big|_{1}\leq\frac{1}{\sqrt{N}}\sqrt{|\mathcal{U}|}\tag{32}$$

**Lemma 6.** *The following inequality holds $\forall t \in \{0, 1, \cdots\}$.*

$$\mathbb{E}\left|\boldsymbol{\mu}_{t+1}^{N}-P^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{N},\pi_{t})\right|_{1}\leq\frac{C_{P}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\tag{33}$$

*where $C_P \triangleq 2 + L_P$.*

**Lemma 7.** *The following inequality holds $\forall t \in \{0, 1, \cdots\}$.*

$$\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\boldsymbol{\mu}_{t}^{N},\boldsymbol{\nu}_{t}^{N})-r^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{N},\pi_{t})\right|\leq\frac{M_{R}}{\sqrt{N}}+\frac{L_{R}}{\sqrt{N}}\sqrt{|\mathcal{U}|}$$

Finally, if $\boldsymbol{\mu}_t^\infty$ indicates the state distribution of an infinite-agent system at time $t$ induced by the policy-sequence $\boldsymbol{\pi}$ from the initial distribution $\boldsymbol{\mu}_0$, then the following result can be proven by invoking Lemmas 3 and 6.

**Lemma 8.** *The following inequality holds $\forall t \in \{0, 1, \cdots\}$.*

$$\mathbb{E}|\boldsymbol{\mu}_{t}^{N}-\boldsymbol{\mu}_{t}^{\infty}|\leq\frac{C_{P}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\left(\frac{S_{P}^{t}-1}{S_{P}-1}\right)$$

*where $S_P \triangleq \tilde{S}_P + L_Q\bar{S}_P$. The terms $\tilde{S}_P, \bar{S}_P$ are defined in Lemma 3 while $C_P$ is given in Lemma 6.*

## A.3 Proof Of The Theorem

Let $\boldsymbol{\pi} \triangleq \{\pi_t\}_{t\in\{0,1,\cdots\}}$ and $\bar{\boldsymbol{\pi}} \triangleq \{\bar{\pi}_t\}_{t\in\{0,1,\cdots\}}$ be two arbitrary policy-sequences in $\Pi^\infty$. Denote by $\{\boldsymbol{\mu}_t^N, \boldsymbol{\nu}_t^N\}$, $\{\boldsymbol{\mu}_t^\infty, \boldsymbol{\nu}_t^\infty\}$ the state and action distributions induced by the policy-sequence $\boldsymbol{\pi}$ at time $t$ in an $N$-agent system and an infinite-agent system, respectively. Also, the state and action of the $i$-th agent at time $t$ corresponding to the same policy-sequence are indicated as $x_t^i$ and $u_t^i$ respectively. The same quantities corresponding to the policy-sequence $\bar{\boldsymbol{\pi}}$ are denoted as $\{\bar{\boldsymbol{\mu}}_t^N, \bar{\boldsymbol{\nu}}_t^N, \bar{\boldsymbol{\mu}}_t^\infty, \bar{\boldsymbol{\nu}}_t^\infty, \bar{x}_t^i, \bar{u}_t^i\}$. Consider the following difference,

$$\begin{aligned}|v_{\mathrm{MARL}}(\mathbf{x}_{0}^{N},\bar{\boldsymbol{\pi}})-v_{\mathrm{MF}}(\boldsymbol{\mu}_{0},\boldsymbol{\pi})|&\overset{(a)}{\leq}\sum_{t=0}^{\infty}\gamma^{t}\left|\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left[r(\bar{x}_{t}^{i},\bar{u}_{t}^{i},\bar{\boldsymbol{\mu}}_{t}^{N},\bar{\boldsymbol{\nu}}_{t}^{N})\right]-r^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t})\right|\\&\leq\underbrace{\sum_{t=0}^{\infty}\gamma^{t}\,\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(\bar{x}_{t}^{i},\bar{u}_{t}^{i},\bar{\boldsymbol{\mu}}_{t}^{N},\bar{\boldsymbol{\nu}}_{t}^{N})-r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_{t}^{N},\bar{\pi}_{t})\right|}_{\triangleq J_{1}}+\underbrace{\sum_{t=0}^{\infty}\gamma^{t}\,\mathbb{E}\left|r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_{t}^{N},\bar{\pi}_{t})-r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_{t}^{\infty},\bar{\pi}_{t})\right|}_{\triangleq J_{2}}\\&\quad+\underbrace{\sum_{t=0}^{\infty}\gamma^{t}\left|r^{\mathrm{MF}}(\bar{\boldsymbol{\mu}}_{t}^{\infty},\bar{\pi}_{t})-r^{\mathrm{MF}}(\boldsymbol{\mu}_{t}^{\infty},\pi_{t})\right|}_{\triangleq J_{3}}\end{aligned}$$

Inequality (a) follows from the definition of the value functions $v_{\mathrm{MARL}}(\cdot, \cdot)$ and $v_{\mathrm{MF}}(\cdot, \cdot)$ given in (6) and (10) respectively. The first term, $J_1$, can be bounded using Lemma 7 as follows.
$$J_{1}\leq\left(\frac{1}{1-\gamma}\right)\left[\frac{M_{R}}{\sqrt{N}}+\frac{L_{R}}{\sqrt{N}}\sqrt{|\mathcal{U}|}\right]$$

The second term, $J_2$, can be bounded as follows.

$$\begin{aligned}
J_2 &\triangleq \sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left|r^{\mathrm{MF}}(\bar{\mu}_t^N,\bar{\pi}_t) - r^{\mathrm{MF}}(\bar{\mu}_t^{\infty},\bar{\pi}_t)\right|
\overset{(a)}{\leq} \sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left[\tilde{S}_R\left|\bar{\mu}_t^N - \bar{\mu}_t^{\infty}\right|_1 + \bar{S}_R\left|\bar{\pi}_t(\cdot,\bar{\mu}_t^N) - \bar{\pi}_t(\cdot,\bar{\mu}_t^{\infty})\right|_{\infty}\right] \\
&\overset{(b)}{\leq} S_R \sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left|\bar{\mu}_t^N - \bar{\mu}_t^{\infty}\right|_1
\overset{(c)}{\leq} \frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\left(\frac{S_R C_P}{S_P-1}\right)\left[\frac{1}{1-\gamma S_P} - \frac{1}{1-\gamma}\right]
\end{aligned}$$

where $S_R \triangleq \tilde{S}_R + L_Q\bar{S}_R$. Inequality (a) follows from Lemma 4, whereas (b) is a consequence of Assumption 3. Finally, (c) follows from Lemma 8. It remains to bound $J_3$. Note that, if $\bar{\pi} = \pi$, then $J_3 = 0$. Hence,

$$|v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MARL}})| \leq J_0, \tag{34}$$

$$|v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MF}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}})| \leq J_0 \tag{35}$$

where $J_0$ is given as follows,

$$J_{0}\triangleq\left(\frac{1}{1-\gamma}\right)\left[\frac{M_{R}}{\sqrt{N}}+\frac{L_{R}}{\sqrt{N}}\sqrt{|\mathcal{U}|}\right]+\frac{1}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\left(\frac{S_{R}C_{P}}{S_{P}-1}\right)\left[\frac{1}{1-\gamma S_{P}}-\frac{1}{1-\gamma}\right].$$

Moreover, if $\pi = \pi^*_{\mathrm{MF}}$ and $\bar{\pi} = \tilde{\pi}^*_{\mathrm{MF}}$ (or vice versa), then $J_3 = 0$ as well. This is precisely because the trajectories of state and action distributions generated by the policy-sequences $\pi^*_{\mathrm{MF}}$, $\tilde{\pi}^*_{\mathrm{MF}}$ are identical in an infinite-agent system. Hence, we have,

$$|v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}})| \leq J_0 \tag{36}$$

Consider the following inequalities,

$$\begin{aligned}
v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}})
&= v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}}) + v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}}) - v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}}) \\
&\overset{(a)}{\leq} v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MARL}}) + J_0 \overset{(b)}{\leq} 2J_0
\end{aligned}\tag{37}$$

Inequality (a) follows from (36), and the fact that $\pi^*_{\mathrm{MF}}$ maximizes $v_{\mathrm{MF}}(\mu_0,\cdot)$. Inequality (b) follows from (34). Moreover,

$$\begin{aligned}
v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}}) - v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}})
&= v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}}) + v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}}) - v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) \\
&\overset{(a)}{\leq} J_0 + v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}}) - v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MF}}) \overset{(b)}{\leq} 2J_0
\end{aligned}\tag{38}$$

Inequality (a) follows from (36), and the fact that $\pi^*_{\mathrm{MARL}}$ maximizes $v_{\mathrm{MARL}}(x_0^N,\cdot)$. Inequality (b) follows from (35). Combining (37) and (38), we conclude that,

$$|v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}})| \leq 2J_0$$

This concludes the theorem.
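To make the scaling of the above bound concrete, the short script below evaluates $J_0$ for a few population sizes $N$, combining the constants exactly as defined in this appendix ($\tilde{S}_P$, $\bar{S}_P$, $\tilde{S}_R$, $\bar{S}_R$, $S_P$, $S_R$, $C_P$). The numerical values assigned to $M_R$, $L_R$, $L_P$, $L_Q$, $|\mathcal{X}|$, $|\mathcal{U}|$, and γ are arbitrary placeholders, not quantities taken from the paper; they serve only to visualise the $\mathcal{O}\big(\frac{1}{\sqrt{N}}\big[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\big]\big)$ decay, and the expression is finite only when $\gamma S_P < 1$.

```python
import math

def J0(N, X, U, gamma, M_R, L_R, L_P, L_Q):
    """Evaluate the Theorem 1 error bound J_0 for illustrative constants."""
    S_P_tilde, S_P_bar = 1 + 2 * L_P, 1 + L_P        # Lemma 3
    S_R_tilde, S_R_bar = M_R + 2 * L_R, M_R + L_R    # Lemma 4
    S_P = S_P_tilde + L_Q * S_P_bar                  # Lemma 8
    S_R = S_R_tilde + L_Q * S_R_bar                  # Appendix A.3
    C_P = 2 + L_P                                    # Lemma 6
    assert gamma * S_P < 1, "J_0 is finite only for a sufficiently small discount factor"
    term1 = (M_R + L_R * math.sqrt(U)) / ((1 - gamma) * math.sqrt(N))
    term2 = ((math.sqrt(X) + math.sqrt(U)) / math.sqrt(N)) \
        * (S_R * C_P / (S_P - 1)) * (1 / (1 - gamma * S_P) - 1 / (1 - gamma))
    return term1 + term2

# placeholder constants: the bound roughly halves with every 4x increase in N
for N in (10, 100, 1000, 10000):
    print(N, J0(N, X=5, U=3, gamma=0.05, M_R=1.0, L_R=0.2, L_P=0.2, L_Q=0.5))
```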
## B Proof Of Theorem 2

The proof of Theorem 2 is similar to that of Theorem 1, albeit with subtle differences. The following lemmas are needed to establish the theorem.

## B.1 Auxiliary Lemmas

In the following (Lemma 9 − 11), $\pi \triangleq \{\pi_t\}_{t\in\{0,1,\cdots\}} \in \Pi^{\infty}$ denotes an arbitrary admissible policy-sequence. The terms $\{\mu_t^N, \nu_t^N\}_{t\in\{0,1,\cdots\}}$ denote the $N$-agent empirical state and action distributions induced by $\pi$ from the initial state distribution $\mu_0$, whereas $\{\mu_t^{\infty}, \nu_t^{\infty}\}_{t\in\{0,1,\cdots\}}$ indicate the state and action distributions induced by the same policy-sequence in an infinite-agent system. Similarly, $\{x_t^i, u_t^i\}_{t\in\{0,1,\cdots\}}$ denote the states and actions of the $i$-th agent evolved from the initial distribution $\mu_0$ via the policy-sequence $\pi$. The joint states and actions at time $t$ are denoted by $x_t^N$, $u_t^N$, respectively.

Lemma 9. *The following inequality holds $\forall t \in \{0,1,\cdots\}$.*

$$\mathbb{E}\left|\mu_{t+1}^{N}-P^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1}\leq\frac{2}{\sqrt{N}}\sqrt{|\mathcal{X}|}\tag{39}$$

Lemma 10. *The following inequality holds $\forall t \in \{0,1,\cdots\}$.*

$$\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})-r^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|\leq\frac{M_{R}}{\sqrt{N}}$$

Lemma 11. *The following inequality holds $\forall t \in \{0,1,\cdots\}$.*

$$\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}\leq\frac{2}{\sqrt{N}}\sqrt{|\mathcal{X}|}\left(\frac{S_{P}^{t}-1}{S_{P}-1}\right)$$

*where $S_P \triangleq \tilde{S}_P + L_Q\bar{S}_P$. The terms $\tilde{S}_P$, $\bar{S}_P$ are defined in Lemma 3.*

The proofs of the above lemmas are relegated to Appendices J−L.

## B.2 Proof Of The Theorem

We use the same notations as in Appendix A.3. Consider the following difference,

$$\begin{aligned}
|v_{\mathrm{MARL}}(x_0^N,\bar{\pi}) - v_{\mathrm{MF}}(\mu_0,\pi)|
&\overset{(a)}{\leq} \sum_{t=0}^{\infty}\gamma^t \left|\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left[r(\bar{x}_t^i,\bar{u}_t^i,\bar{\mu}_t^N)\right] - r^{\mathrm{MF}}(\mu_t^{\infty},\pi_t)\right| \\
&\leq \underbrace{\sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(\bar{x}_t^i,\bar{u}_t^i,\bar{\mu}_t^N) - r^{\mathrm{MF}}(\bar{\mu}_t^N,\bar{\pi}_t)\right|}_{\triangleq J_1}
+ \underbrace{\sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left|r^{\mathrm{MF}}(\bar{\mu}_t^N,\bar{\pi}_t) - r^{\mathrm{MF}}(\bar{\mu}_t^{\infty},\bar{\pi}_t)\right|}_{\triangleq J_2} \\
&\hspace{2em}+ \underbrace{\sum_{t=0}^{\infty}\gamma^t\left|r^{\mathrm{MF}}(\bar{\mu}_t^{\infty},\bar{\pi}_t) - r^{\mathrm{MF}}(\mu_t^{\infty},\pi_t)\right|}_{\triangleq J_3}
\end{aligned}$$

Inequality (a) follows from the definitions of the value functions $v_{\mathrm{MARL}}(\cdot,\cdot)$ and $v_{\mathrm{MF}}(\cdot,\cdot)$ given in (6) and (10), respectively. The first term, $J_1$, can be bounded using Lemma 10 as follows.

$$J_{1}\leq\left(\frac{1}{1-\gamma}\right)\left[\frac{M_{R}}{\sqrt{N}}\right]$$

The second term, $J_2$, can be bounded as follows.

$$\begin{aligned}
J_2 &\triangleq \sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left|r^{\mathrm{MF}}(\bar{\mu}_t^N,\bar{\pi}_t) - r^{\mathrm{MF}}(\bar{\mu}_t^{\infty},\bar{\pi}_t)\right|
\overset{(a)}{\leq} \sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left[\tilde{S}_R\left|\bar{\mu}_t^N - \bar{\mu}_t^{\infty}\right|_1 + \bar{S}_R\left|\bar{\pi}_t(\cdot,\bar{\mu}_t^N) - \bar{\pi}_t(\cdot,\bar{\mu}_t^{\infty})\right|_{\infty}\right] \\
&\overset{(b)}{\leq} S_R \sum_{t=0}^{\infty}\gamma^t\,\mathbb{E}\left|\bar{\mu}_t^N - \bar{\mu}_t^{\infty}\right|_1
\overset{(c)}{\leq} \frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}\left(\frac{2S_R}{S_P-1}\right)\left[\frac{1}{1-\gamma S_P} - \frac{1}{1-\gamma}\right]
\end{aligned}$$

where $S_R \triangleq \tilde{S}_R + L_Q\bar{S}_R$. Inequality (a) follows from Lemma 4, whereas (b) is a consequence of Assumption 3. Finally, (c) follows from Lemma 11. It remains to bound $J_3$. Note that, if $\bar{\pi} = \pi$, then $J_3 = 0$. Hence,

$$|v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MARL}})| \leq J_0, \tag{40}$$

$$|v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MF}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}})| \leq J_0 \tag{41}$$

where $J_0$ is given as follows,

$$J_{0}\triangleq\left(\frac{1}{1-\gamma}\right)\left[\frac{M_{R}}{\sqrt{N}}\right]+\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}\left(\frac{2S_{R}}{S_{P}-1}\right)\left[\frac{1}{1-\gamma S_{P}}-\frac{1}{1-\gamma}\right].$$

Moreover, if $\pi = \pi^*_{\mathrm{MF}}$ and $\bar{\pi} = \tilde{\pi}^*_{\mathrm{MF}}$ (or vice versa), then $J_3 = 0$ as well. This is precisely because the trajectories of state and action distributions generated by the policy-sequences $\pi^*_{\mathrm{MF}}$, $\tilde{\pi}^*_{\mathrm{MF}}$ are identical in an infinite-agent system.
Hence, we have,

$$|v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}}) - v_{\mathrm{MF}}(\mu_0,\pi^*_{\mathrm{MF}})| \leq J_0 \tag{42}$$

Following the same set of arguments as used in (37) and (38), we conclude that,

$$|v_{\mathrm{MARL}}(x_0^N,\pi^*_{\mathrm{MARL}}) - v_{\mathrm{MARL}}(x_0^N,\tilde{\pi}^*_{\mathrm{MF}})| \leq 2J_0$$

This concludes the theorem.

## C Proof Of Lemma 2

Note the chain of inequalities stated below.

$$\begin{aligned}
|\nu^{\mathrm{MF}}(\mu,\pi)-\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi})|_{1}
&\overset{(a)}{=}\Big|\sum_{x\in\mathcal{X}}\pi(x,\mu)\mu(x)-\sum_{x\in\mathcal{X}}\bar{\pi}(x,\bar{\mu})\bar{\mu}(x)\Big|_{1}
=\sum_{u\in\mathcal{U}}\Big|\sum_{x\in\mathcal{X}}\pi(x,\mu)(u)\mu(x)-\sum_{x\in\mathcal{X}}\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\Big| \\
&\leq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}\big|\pi(x,\mu)(u)\mu(x)-\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\big| \\
&\leq\sum_{x\in\mathcal{X}}|\mu(x)-\bar{\mu}(x)|\underbrace{\sum_{u\in\mathcal{U}}\pi(x,\mu)(u)}_{=1}+\sum_{x\in\mathcal{X}}\bar{\mu}(x)\sum_{u\in\mathcal{U}}\big|\pi(x,\mu)(u)-\bar{\pi}(x,\bar{\mu})(u)\big| \\
&\leq|\mu-\bar{\mu}|_{1}+\underbrace{\sum_{x\in\mathcal{X}}\bar{\mu}(x)}_{=1}\sup_{x\in\mathcal{X}}|\pi(x,\mu)-\bar{\pi}(x,\bar{\mu})|_{1}
\overset{(b)}{=}|\mu-\bar{\mu}|_{1}+|\pi(\cdot,\mu)-\bar{\pi}(\cdot,\bar{\mu})|_{\infty}
\end{aligned}$$

Equality (a) follows from the definition of $\nu^{\mathrm{MF}}(\cdot,\cdot)$ as given in (7). On the other hand, equality (b) is a consequence of the definition of $|\cdot|_{\infty}$ over the space of all admissible policies, Π. This concludes the result.

## D Proof Of Lemma 3

Observe that,

$$\begin{aligned}
&|P^{\mathrm{MF}}(\mu,\pi)-P^{\mathrm{MF}}(\bar{\mu},\bar{\pi})|_{1} \\
&\overset{(a)}{=}\Big|\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}P(x,u,\mu,\nu^{\mathrm{MF}}(\mu,\pi))\pi(x,\mu)(u)\mu(x)-P(x,u,\bar{\mu},\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi}))\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\Big|_{1}
\leq J_{1}+J_{2}
\end{aligned}$$

Equality (a) follows from the definition of $P^{\mathrm{MF}}(\cdot,\cdot)$ as depicted in (8). The term $J_1$ satisfies the following bound.

$$\begin{aligned}
J_{1}&\triangleq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}\big|P(x,u,\mu,\nu^{\mathrm{MF}}(\mu,\pi))-P(x,u,\bar{\mu},\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi}))\big|_{1}\times\pi(x,\mu)(u)\mu(x) \\
&\overset{(a)}{\leq}L_{P}\left[|\mu-\bar{\mu}|_{1}+|\nu^{\mathrm{MF}}(\mu,\pi)-\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi})|_{1}\right]\times\underbrace{\sum_{x\in\mathcal{X}}\mu(x)\sum_{u\in\mathcal{U}}\pi(x,\mu)(u)}_{=1}
\overset{(b)}{\leq}2L_{P}|\mu-\bar{\mu}|_{1}+L_{P}|\pi(\cdot,\mu)-\bar{\pi}(\cdot,\bar{\mu})|_{\infty}
\end{aligned}$$

Inequality (a) is a consequence of Assumption 2, whereas (b) follows from Lemma 2 and the fact that $\pi(x,\mu)$, μ are probability distributions. The second term, $J_2$, obeys the following bound.

$$\begin{aligned}
J_{2}&\triangleq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}\underbrace{\big|P(x,u,\bar{\mu},\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi}))\big|_{1}}_{=1}\times\big|\pi(x,\mu)(u)\mu(x)-\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\big| \\
&\leq\sum_{x\in\mathcal{X}}|\mu(x)-\bar{\mu}(x)|\underbrace{\sum_{u\in\mathcal{U}}\pi(x,\mu)(u)}_{=1}+\sum_{x\in\mathcal{X}}\bar{\mu}(x)\sum_{u\in\mathcal{U}}\big|\pi(x,\mu)(u)-\bar{\pi}(x,\bar{\mu})(u)\big| \\
&\overset{(a)}{\leq}|\mu-\bar{\mu}|_{1}+\underbrace{\sum_{x\in\mathcal{X}}\bar{\mu}(x)}_{=1}\sup_{x\in\mathcal{X}}|\pi(x,\mu)-\bar{\pi}(x,\bar{\mu})|_{1}
\overset{(b)}{=}|\mu-\bar{\mu}|_{1}+|\pi(\cdot,\mu)-\bar{\pi}(\cdot,\bar{\mu})|_{\infty}
\end{aligned}$$

Inequality (a) results from the fact that $\pi(x,\mu)$ is a probability distribution, while (b) utilizes the definition of $|\cdot|_{\infty}$. This concludes the result.
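The Lipschitz bounds established above are easy to verify numerically. The following self-contained check, with arbitrary sizes $|\mathcal{X}|$, $|\mathcal{U}|$ and randomly generated distributions, confirms the Lemma 2 inequality on sampled instances; as in the proof, $\pi(\cdot,\mu)$ and $\bar{\pi}(\cdot,\bar{\mu})$ are treated simply as two row-stochastic matrices, which is all the argument uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_simplex(rows, cols=None):
    """Sample a random probability vector (cols=None) or row-stochastic matrix."""
    a = rng.random((rows, cols)) if cols else rng.random(rows)
    return a / a.sum(axis=-1, keepdims=True)

X, U = 6, 4                                                   # arbitrary |X| and |U|
for _ in range(10_000):
    mu, mu_bar = random_simplex(X), random_simplex(X)
    pi, pi_bar = random_simplex(X, U), random_simplex(X, U)   # rows: pi(x, mu)(.)
    nu, nu_bar = pi.T @ mu, pi_bar.T @ mu_bar                 # nu^MF(mu, pi), cf. (7)
    lhs = np.abs(nu - nu_bar).sum()                           # |nu^MF - nu^MF|_1
    rhs = np.abs(mu - mu_bar).sum() + np.abs(pi - pi_bar).sum(axis=1).max()
    assert lhs <= rhs + 1e-12
print("Lemma 2 inequality held on all sampled instances.")
```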
## E Proof Of Lemma 4

Observe that,

$$\begin{aligned}
&|r^{\mathrm{MF}}(\mu,\pi)-r^{\mathrm{MF}}(\bar{\mu},\bar{\pi})| \\
&\overset{(a)}{=}\Big|\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}r(x,u,\mu,\nu^{\mathrm{MF}}(\mu,\pi))\pi(x,\mu)(u)\mu(x)-r(x,u,\bar{\mu},\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi}))\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\Big|
\leq J_{1}+J_{2}
\end{aligned}$$

Equality (a) follows from the definition of $r^{\mathrm{MF}}(\cdot,\cdot)$ as given in (9). The first term obeys the following bound.

$$\begin{aligned}
J_{1}&\triangleq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}\big|r(x,u,\mu,\nu^{\mathrm{MF}}(\mu,\pi))-r(x,u,\bar{\mu},\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi}))\big|\times\pi(x,\mu)(u)\mu(x) \\
&\overset{(a)}{\leq}L_{R}\left[|\mu-\bar{\mu}|_{1}+|\nu^{\mathrm{MF}}(\mu,\pi)-\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi})|_{1}\right]\times\underbrace{\sum_{x\in\mathcal{X}}\mu(x)\sum_{u\in\mathcal{U}}\pi(x,\mu)(u)}_{=1}
\overset{(b)}{\leq}2L_{R}|\mu-\bar{\mu}|_{1}+L_{R}|\pi(\cdot,\mu)-\bar{\pi}(\cdot,\bar{\mu})|_{\infty}
\end{aligned}$$

Inequality (a) is a consequence of Assumption 1(b), whereas inequality (b) follows from Lemma 2 and the fact that $\pi(x,\mu)$, μ are probability distributions. The second term, $J_2$, satisfies the following.

$$\begin{aligned}
J_{2}&\triangleq\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}\big|r(x,u,\bar{\mu},\nu^{\mathrm{MF}}(\bar{\mu},\bar{\pi}))\big|\times\big|\pi(x,\mu)(u)\mu(x)-\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\big|
\overset{(a)}{\leq}M_{R}\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}\big|\pi(x,\mu)(u)\mu(x)-\bar{\pi}(x,\bar{\mu})(u)\bar{\mu}(x)\big| \\
&\leq M_{R}\sum_{x\in\mathcal{X}}|\mu(x)-\bar{\mu}(x)|\underbrace{\sum_{u\in\mathcal{U}}\pi(x,\mu)(u)}_{=1}+M_{R}\sum_{x\in\mathcal{X}}\bar{\mu}(x)\sum_{u\in\mathcal{U}}\big|\pi(x,\mu)(u)-\bar{\pi}(x,\bar{\mu})(u)\big| \\
&\leq M_{R}|\mu-\bar{\mu}|_{1}+M_{R}\underbrace{\sum_{x\in\mathcal{X}}\bar{\mu}(x)}_{=1}\sup_{x\in\mathcal{X}}|\pi(x,\mu)-\bar{\pi}(x,\bar{\mu})|_{1}
\overset{(b)}{=}M_{R}|\mu-\bar{\mu}|_{1}+M_{R}|\pi(\cdot,\mu)-\bar{\pi}(\cdot,\bar{\mu})|_{\infty}
\end{aligned}$$

Inequality (a) results from Assumption 1(a). On the other hand, equality (b) is a consequence of the definition of $|\cdot|_{\infty}$ and the fact that μ̄ is a probability distribution. This concludes the result.

## F Proof Of Lemma 5

The following lemma is required to prove the result.

Lemma 12. *If $\forall m \in \{1,\cdots,M\}$, $\{X_{mn}\}_{n\in\{1,\cdots,N\}}$ are independent random variables that lie in $[0,1]$, and satisfy $\sum_{m\in\{1,\cdots,M\}}\mathbb{E}[X_{mn}]\leq 1$, $\forall n \in \{1,\cdots,N\}$, then the following holds,*

$$\sum_{m=1}^{M}\mathbb{E}\left|\sum_{n=1}^{N}\left(X_{mn}-\mathbb{E}[X_{mn}]\right)\right|\leq\sqrt{MN}\tag{43}$$

Lemma 12 is adapted from Lemma 13 of (Mondal et al., 2022a). Notice the following relations.

$$\begin{aligned}
\mathbb{E}\left|\nu_{t}^{N}-\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1}
&=\mathbb{E}\left[\mathbb{E}\left[\left|\nu_{t}^{N}-\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1}\Big|x_{t}^{N}\right]\right]
\overset{(a)}{=}\mathbb{E}\left[\mathbb{E}\left[\Big|\nu_{t}^{N}-\sum_{x\in\mathcal{X}}\pi_{t}(x,\mu_{t}^{N})\mu_{t}^{N}(x)\Big|_{1}\Big|x_{t}^{N}\right]\right] \\
&\overset{(b)}{=}\mathbb{E}\left[\sum_{u\in\mathcal{U}}\mathbb{E}\left[\Big|\frac{1}{N}\sum_{i=1}^{N}\delta(u_{t}^{i}=u)-\frac{1}{N}\sum_{x\in\mathcal{X}}\pi_{t}(x,\mu_{t}^{N})(u)\sum_{i=1}^{N}\delta(x_{t}^{i}=x)\Big|\,\Big|x_{t}^{N}\right]\right] \\
&=\mathbb{E}\left[\sum_{u\in\mathcal{U}}\mathbb{E}\left[\Big|\frac{1}{N}\sum_{i=1}^{N}\delta(u_{t}^{i}=u)-\frac{1}{N}\sum_{i=1}^{N}\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\Big|\,\Big|x_{t}^{N}\right]\right]
\overset{(c)}{\leq}\frac{1}{\sqrt{N}}\sqrt{|\mathcal{U}|}
\end{aligned}$$

Equality (a) can be established using the definition of $\nu^{\mathrm{MF}}(\cdot,\cdot)$ as depicted in (7). Similarly, relation (b) is a consequence of the definitions of $\mu_t^N$, $\nu_t^N$. Finally, (c) uses Lemma 12. Specifically, it utilises the facts that $\{u_t^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $x_t^N$, and the following hold

$$\mathbb{E}\left[\delta(u_{t}^{i}=u)\Big|x_{t}^{N}\right]=\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u),\qquad
\sum_{u\in\mathcal{U}}\mathbb{E}\left[\delta(u_{t}^{i}=u)\Big|x_{t}^{N}\right]=1$$

$\forall i \in \{1,\cdots,N\}$, $\forall u \in \mathcal{U}$. This concludes the lemma.
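The $1/\sqrt{N}$ rate of Lemma 5 can also be observed empirically. In the sketch below, the $N$ agent states are drawn i.i.d. and the decision rule is a fixed, randomly generated stochastic matrix (a simplification of $\pi_t(\cdot,\mu_t^N)$ made only for this check); the measured $\ell_1$ gap between the empirical action distribution and $\nu^{\mathrm{MF}}(\mu_t^N,\pi_t)$ stays below $\sqrt{|\mathcal{U}|}/\sqrt{N}$, as the lemma predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
X, U = 5, 4                                                   # arbitrary |X|, |U|
pi = rng.random((X, U))
pi /= pi.sum(axis=1, keepdims=True)                           # one fixed decision rule

def empirical_gap(N, trials=2000):
    """Monte Carlo estimate of E|nu_t^N - nu^MF(mu_t^N, pi_t)|_1."""
    gaps = np.empty(trials)
    for k in range(trials):
        states = rng.integers(0, X, size=N)                   # x_t^1, ..., x_t^N
        # inverse-CDF sampling of u_t^i ~ pi(x_t^i, .) for all agents at once
        actions = (rng.random((N, 1)) > np.cumsum(pi[states], axis=1)).sum(axis=1)
        nu_emp = np.bincount(actions, minlength=U) / N        # nu_t^N
        mu_emp = np.bincount(states, minlength=X) / N         # mu_t^N
        gaps[k] = np.abs(nu_emp - pi.T @ mu_emp).sum()
    return gaps.mean()

for N in (10, 100, 1000):
    print(N, empirical_gap(N), np.sqrt(U) / np.sqrt(N))       # gap vs. sqrt(|U|)/sqrt(N)
```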
## G Proof Of Lemma 6

Notice the following decomposition.

$$\begin{aligned}
&\mathbb{E}\left|\mu_{t+1}^{N}-P^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1} \\
&\overset{(a)}{=}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\sum_{x^{\prime}\in\mathcal{X}}\sum_{u\in\mathcal{U}}P(x^{\prime},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\,\pi_{t}(x^{\prime},\mu_{t}^{N})(u)\,\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t}^{i}=x^{\prime})\right| \\
&=\sum_{x\in\mathcal{X}}\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\frac{1}{N}\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}P(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|
\leq J_{1}+J_{2}+J_{3}
\end{aligned}$$

Equality (a) uses the definition of $P^{\mathrm{MF}}(\cdot,\cdot)$ as shown in (8). The term $J_1$ obeys the following bound.

$$\begin{aligned}
J_{1}&\triangleq\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})(x)\right| \\
&=\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left[\mathbb{E}\left[\left|\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})(x)\right|\,\Big|\,x_{t}^{N},u_{t}^{N}\right]\right]
\overset{(a)}{\leq}\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}
\end{aligned}$$

Inequality (a) is obtained by applying Lemma 12, the fact that $\{x_{t+1}^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $\{x_t^N, u_t^N\}$, and the following relations

$$\mathbb{E}\left[\delta(x_{t+1}^{i}=x)\Big|x_{t}^{N},u_{t}^{N}\right]=P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})(x),\qquad
\sum_{x\in\mathcal{X}}\mathbb{E}\left[\delta(x_{t+1}^{i}=x)\Big|x_{t}^{N},u_{t}^{N}\right]=1$$

$\forall i \in \{1,\cdots,N\}$, and $\forall x \in \mathcal{X}$. The second term satisfies the following bound.

$$\begin{aligned}
J_{2}&\triangleq\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})(x)-\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\right| \\
&\leq\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left|P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})-P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\right|_{1}
\overset{(a)}{\leq}L_{P}\,\mathbb{E}\left|\nu_{t}^{N}-\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1}
\overset{(b)}{\leq}\frac{L_{P}}{\sqrt{N}}\sqrt{|\mathcal{U}|}
\end{aligned}$$

Inequality (a) is a consequence of Assumption 2 while (b) follows from Lemma 5. Finally, the term $J_3$ can be upper bounded as follows.

$$\begin{aligned}
J_{3}&\triangleq\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)-\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}P(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|
\overset{(a)}{\leq}\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}
\end{aligned}$$

Inequality (a) is a result of Lemma 12. In particular, it uses the facts that $\{u_t^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $x_t^N$, and the following relations hold

$$\begin{aligned}
\mathbb{E}\left[P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\Big|x_{t}^{N}\right]&=\sum_{u\in\mathcal{U}}P(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u), \\
\sum_{x\in\mathcal{X}}\mathbb{E}\left[P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))(x)\Big|x_{t}^{N}\right]&=1
\end{aligned}$$

$\forall i \in \{1,\cdots,N\}$, and $\forall x \in \mathcal{X}$. This concludes the lemma.

## H Proof Of Lemma 7

Observe the following decomposition.

$$\begin{aligned}
&\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})-r^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right| \\
&\overset{(a)}{=}\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})-\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}r(x,u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\,\pi_{t}(x,\mu_{t}^{N})(u)\,\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t}^{i}=x)\right| \\
&=\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})-\frac{1}{N}\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}r(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|
\leq J_{1}+J_{2}
\end{aligned}$$

Equality (a) uses the definition of $r^{\mathrm{MF}}(\cdot,\cdot)$ as depicted in (9). The term $J_1$ obeys the following bound.

$$\begin{aligned}
J_{1}&\triangleq\frac{1}{N}\mathbb{E}\left|\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})-\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\right| \\
&\leq\frac{1}{N}\mathbb{E}\sum_{i=1}^{N}\left|r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu_{t}^{N})-r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\right|
\overset{(a)}{\leq}L_{R}\,\mathbb{E}\left|\nu_{t}^{N}-\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1}
\overset{(b)}{\leq}\frac{L_{R}}{\sqrt{N}}\sqrt{|\mathcal{U}|}
\end{aligned}$$

Inequality (a) results from Assumption 1, whereas (b) is a consequence of Lemma 5. The term $J_2$ satisfies the following.
$$\begin{aligned}
J_{2}&\triangleq\frac{1}{N}\mathbb{E}\left|\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))-\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}r(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right| \\
&=\frac{1}{N}\mathbb{E}\left[\mathbb{E}\left[\left|\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))-\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}r(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|\,\Big|\,x_{t}^{N}\right]\right] \\
&=\frac{M_{R}}{N}\mathbb{E}\left[\mathbb{E}\left[\left|\sum_{i=1}^{N}r_{0}(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))-\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}r_{0}(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|\,\Big|\,x_{t}^{N}\right]\right]
\overset{(a)}{\leq}\frac{M_{R}}{\sqrt{N}}
\end{aligned}$$

where $r_0(\cdot,\cdot,\cdot,\cdot) \triangleq r(\cdot,\cdot,\cdot,\cdot)/M_R$. Inequality (a) follows from Lemma 12. In particular, it utilises the fact that $\{u_t^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $x_t^N$, and the following relations hold.

$$|r_{0}(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))|\leq1,\qquad
\mathbb{E}\left[r_{0}(x_{t}^{i},u_{t}^{i},\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\Big|x_{t}^{N}\right]=\sum_{u\in\mathcal{U}}r_{0}(x_{t}^{i},u,\mu_{t}^{N},\nu^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t}))\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)$$

$\forall i \in \{1,\cdots,N\}$, $\forall u \in \mathcal{U}$. This concludes the lemma.

## I Proof Of Lemma 8

Observe that,

$$\begin{aligned}
\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}&\leq\mathbb{E}\left|\mu_{t}^{N}-P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})\right|_{1}+\mathbb{E}\left|P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})-\mu_{t}^{\infty}\right|_{1} \\
&\overset{(a)}{\leq}\frac{C_{P}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]+\mathbb{E}\left|P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})-P^{\mathrm{MF}}(\mu_{t-1}^{\infty},\pi_{t-1})\right|_{1}
\end{aligned}$$

Inequality (a) follows from Lemma 6, and relation (8). Using Lemma 3, we get

$$\begin{aligned}
\left|P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})-P^{\mathrm{MF}}(\mu_{t-1}^{\infty},\pi_{t-1})\right|_{1}
&\leq\tilde{S}_{P}\left|\mu_{t-1}^{N}-\mu_{t-1}^{\infty}\right|_{1}+\bar{S}_{P}\left|\pi_{t-1}(\cdot,\mu_{t-1}^{N})-\pi_{t-1}(\cdot,\mu_{t-1}^{\infty})\right|_{\infty}
\overset{(a)}{\leq}S_{P}\left|\mu_{t-1}^{N}-\mu_{t-1}^{\infty}\right|_{1}
\end{aligned}$$

where $S_P \triangleq \tilde{S}_P + L_Q\bar{S}_P$. Inequality (a) follows from Assumption 3. Combining, we get,

$$\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}\leq\frac{C_{P}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]+S_{P}\,\mathbb{E}\left|\mu_{t-1}^{N}-\mu_{t-1}^{\infty}\right|_{1}\tag{44}$$

Recursively applying the above inequality, we finally obtain,

$$\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}\leq\frac{C_{P}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]\left(\frac{S_{P}^{t}-1}{S_{P}-1}\right).$$

This concludes the lemma.
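For completeness, the short check below confirms that unrolling the recursion (44) from $\mathbb{E}|\mu_0^N - \mu_0^\infty|_1 = 0$ (both systems start from $\mu_0$) reproduces the geometric factor $(S_P^t-1)/(S_P-1)$ appearing in Lemma 8. The values of $S_P$ and of the per-step perturbation $b$, which stands in for $\frac{C_P}{\sqrt{N}}\big[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\big]$, are arbitrary.

```python
S_P = 1.7        # any S_P > 1 (arbitrary value)
b = 0.03         # stands for (C_P / sqrt(N)) * (sqrt(|X|) + sqrt(|U|))

err = 0.0        # E|mu_0^N - mu_0^infty|_1 = 0
for t in range(1, 11):
    err = b + S_P * err                              # one application of (44)
    closed_form = b * (S_P ** t - 1) / (S_P - 1)     # bound stated in Lemma 8
    assert abs(err - closed_form) < 1e-9
print("recursion (44) matches the closed form of Lemma 8 for t = 1, ..., 10")
```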
## J Proof Of Lemma 9

Note that,

$$\begin{aligned}
&\mathbb{E}\left|\mu_{t+1}^{N}-P^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right|_{1} \\
&\overset{(a)}{=}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\sum_{x^{\prime}\in\mathcal{X}}\sum_{u\in\mathcal{U}}P(x^{\prime},u,\mu_{t}^{N})(x)\,\pi_{t}(x^{\prime},\mu_{t}^{N})(u)\,\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t}^{i}=x^{\prime})\right| \\
&=\sum_{x\in\mathcal{X}}\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\frac{1}{N}\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}P(x_{t}^{i},u,\mu_{t}^{N})(x)\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|
\leq J_{1}+J_{2}
\end{aligned}$$

Equality (a) follows from the definition of $P^{\mathrm{MF}}(\cdot,\cdot)$ as depicted in (8). The first term, $J_1$, can be upper bounded as follows.

$$\begin{aligned}
J_{1}&\triangleq\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})(x)\right| \\
&=\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left[\mathbb{E}\left[\left|\sum_{i=1}^{N}\delta(x_{t+1}^{i}=x)-\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})(x)\right|\,\Big|\,x_{t}^{N},u_{t}^{N}\right]\right]
\overset{(a)}{\leq}\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}
\end{aligned}$$

Inequality (a) can be derived using Lemma 12, the facts that $\{x_{t+1}^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $\{x_t^N, u_t^N\}$, and,

$$\mathbb{E}\left[\delta(x_{t+1}^{i}=x)\Big|x_{t}^{N},u_{t}^{N}\right]=P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})(x),\qquad
\sum_{x\in\mathcal{X}}\mathbb{E}\left[\delta(x_{t+1}^{i}=x)\Big|x_{t}^{N},u_{t}^{N}\right]=1$$

$\forall i \in \{1,\cdots,N\}$, and $\forall x \in \mathcal{X}$. The second term can be bounded as follows.

$$\begin{aligned}
J_{2}&\triangleq\frac{1}{N}\sum_{x\in\mathcal{X}}\mathbb{E}\left|\sum_{i=1}^{N}P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})(x)-\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}P(x_{t}^{i},u,\mu_{t}^{N})(x)\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|
\overset{(a)}{\leq}\frac{1}{\sqrt{N}}\sqrt{|\mathcal{X}|}
\end{aligned}$$

Inequality (a) is a consequence of Lemma 12. Specifically, it uses the facts that $\{u_t^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $x_t^N$, and

$$\mathbb{E}\left[P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})(x)\Big|x_{t}^{N}\right]=\sum_{u\in\mathcal{U}}P(x_{t}^{i},u,\mu_{t}^{N})(x)\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u),\qquad
\sum_{x\in\mathcal{X}}\mathbb{E}\left[P(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})(x)\Big|x_{t}^{N}\right]=1$$

$\forall i \in \{1,\cdots,N\}$, and $\forall x \in \mathcal{X}$. This concludes the lemma.

## K Proof Of Lemma 10

Note that,

$$\begin{aligned}
&\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})-r^{\mathrm{MF}}(\mu_{t}^{N},\pi_{t})\right| \\
&\overset{(a)}{=}\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})-\sum_{x\in\mathcal{X}}\sum_{u\in\mathcal{U}}r(x,u,\mu_{t}^{N})\,\pi_{t}(x,\mu_{t}^{N})(u)\,\frac{1}{N}\sum_{i=1}^{N}\delta(x_{t}^{i}=x)\right| \\
&=\mathbb{E}\left|\frac{1}{N}\sum_{i=1}^{N}r(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})-\frac{1}{N}\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}r(x_{t}^{i},u,\mu_{t}^{N})\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right| \\
&=\frac{M_{R}}{N}\mathbb{E}\left[\mathbb{E}\left[\left|\sum_{i=1}^{N}r_{0}(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})-\sum_{i=1}^{N}\sum_{u\in\mathcal{U}}r_{0}(x_{t}^{i},u,\mu_{t}^{N})\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)\right|\,\Big|\,x_{t}^{N}\right]\right]
\overset{(b)}{\leq}\frac{M_{R}}{\sqrt{N}}
\end{aligned}$$

where $r_0(\cdot,\cdot,\cdot) \triangleq r(\cdot,\cdot,\cdot)/M_R$. Equality (a) uses the definition of $r^{\mathrm{MF}}(\cdot,\cdot)$ as depicted in (9). Inequality (b) follows from Lemma 12. Specifically, it uses the fact that $\{u_t^i\}_{i\in\{1,\cdots,N\}}$ are conditionally independent given $x_t^N$, and

$$|r_{0}(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})|\leq1,\qquad
\mathbb{E}\left[r_{0}(x_{t}^{i},u_{t}^{i},\mu_{t}^{N})\Big|x_{t}^{N}\right]=\sum_{u\in\mathcal{U}}r_{0}(x_{t}^{i},u,\mu_{t}^{N})\,\pi_{t}(x_{t}^{i},\mu_{t}^{N})(u)$$

$\forall i \in \{1,\cdots,N\}$, $\forall u \in \mathcal{U}$. This concludes the lemma.

## L Proof Of Lemma 11

Observe that,

$$\begin{aligned}
\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}&\leq\mathbb{E}\left|\mu_{t}^{N}-P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})\right|_{1}+\mathbb{E}\left|P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})-\mu_{t}^{\infty}\right|_{1} \\
&\overset{(a)}{\leq}\frac{2}{\sqrt{N}}\sqrt{|\mathcal{X}|}+\mathbb{E}\left|P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})-P^{\mathrm{MF}}(\mu_{t-1}^{\infty},\pi_{t-1})\right|_{1}
\end{aligned}$$

Inequality (a) follows from Lemma 9, and relation (8).
Using Lemma 3, we get

$$\begin{aligned}
\left|P^{\mathrm{MF}}(\mu_{t-1}^{N},\pi_{t-1})-P^{\mathrm{MF}}(\mu_{t-1}^{\infty},\pi_{t-1})\right|_{1}
&\leq\tilde{S}_{P}\left|\mu_{t-1}^{N}-\mu_{t-1}^{\infty}\right|_{1}+\bar{S}_{P}\left|\pi_{t-1}(\cdot,\mu_{t-1}^{N})-\pi_{t-1}(\cdot,\mu_{t-1}^{\infty})\right|_{\infty}
\overset{(a)}{\leq}S_{P}\left|\mu_{t-1}^{N}-\mu_{t-1}^{\infty}\right|_{1}
\end{aligned}$$

where $S_P \triangleq \tilde{S}_P + L_Q\bar{S}_P$. Inequality (a) follows from Assumption 3. Combining, we get,

$$\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}\leq\frac{2}{\sqrt{N}}\sqrt{|\mathcal{X}|}+S_{P}\,\mathbb{E}\left|\mu_{t-1}^{N}-\mu_{t-1}^{\infty}\right|_{1}\tag{45}$$

Recursively applying the above inequality, we finally obtain,

$$\mathbb{E}\left|\mu_{t}^{N}-\mu_{t}^{\infty}\right|_{1}\leq\frac{2}{\sqrt{N}}\sqrt{|\mathcal{X}|}\left(\frac{S_{P}^{t}-1}{S_{P}-1}\right).$$

This concludes the lemma.

## M Sampling Algorithm

**Algorithm 3** Sampling Algorithm

**Input:** $\mu_0$, $\pi_{\Phi_j}$, $P$, $r$
1: Sample $x_0 \sim \mu_0$.
2: Sample $u_0 \sim \pi_{\Phi_j}(x_0, \mu_0)$.
3: $\nu_0 \leftarrow \nu^{\mathrm{MF}}(\mu_0, \pi_{\Phi_j})$
4: $t \leftarrow 0$
5: FLAG ← FALSE
6: **while** FLAG is FALSE **do**
7: FLAG ← TRUE with probability $1-\gamma$.
8: Execute Update
9: **end while**
10: $T \leftarrow t$
11: Accept $(x_T, \mu_T, u_T)$ as a sample.
12: $\hat{V}_{\Phi_j} \leftarrow 0$, $\hat{Q}_{\Phi_j} \leftarrow 0$
13: FLAG ← FALSE
14: SumRewards ← 0
15: **while** FLAG is FALSE **do**
16: FLAG ← TRUE with probability $1-\gamma$.
17: Execute Update
18: SumRewards ← SumRewards $+\,r(x_t, u_t, \mu_t, \nu_t)$
19: **end while**
20: With probability $\frac{1}{2}$, $\hat{V}_{\Phi_j} \leftarrow$ SumRewards. Otherwise $\hat{Q}_{\Phi_j} \leftarrow$ SumRewards.
21: $\hat{A}_{\Phi_j}(x_T, \mu_T, u_T) \leftarrow 2(\hat{Q}_{\Phi_j} - \hat{V}_{\Phi_j})$.
**Output:** $(x_T, \mu_T, u_T)$ and $\hat{A}_{\Phi_j}(x_T, \mu_T, u_T)$

**Procedure** Update:
1: $x_{t+1} \sim P(x_t, u_t, \mu_t, \nu_t)$
2: $\mu_{t+1} \leftarrow P^{\mathrm{MF}}(\mu_t, \pi_{\Phi_j})$
3: $u_{t+1} \sim \pi_{\Phi_j}(x_{t+1}, \mu_{t+1})$
4: $\nu_{t+1} \leftarrow \nu^{\mathrm{MF}}(\mu_{t+1}, \pi_{\Phi_j})$
5: $t \leftarrow t + 1$
**EndProcedure**

A Python sketch of this sampling routine is given at the end of the document.

## N Proof Of Theorem 3

Note that,

$$\begin{aligned}
&\sup_{\Phi\in\mathbb{R}^{d}}v_{\mathrm{MARL}}(\mu_{0},\pi_{\Phi})-\frac{1}{J}\sum_{j=1}^{J}v_{\mathrm{MARL}}(\mu_{0},\tilde{\pi}_{\Phi_{j}}) \\
&\leq\sup_{\Phi\in\mathbb{R}^{d}}v_{\mathrm{MARL}}(\mu_{0},\pi_{\Phi})-\sup_{\Phi\in\mathbb{R}^{d}}v_{\mathrm{MF}}(\mu_{0},\pi_{\Phi})
+\underbrace{\sup_{\Phi\in\mathbb{R}^{d}}v_{\mathrm{MF}}(\mu_{0},\pi_{\Phi})-\frac{1}{J}\sum_{j=1}^{J}v_{\mathrm{MF}}(\mu_{0},\pi_{\Phi_{j}})}_{\triangleq J_{2}}
+\underbrace{\frac{1}{J}\sum_{j=1}^{J}\left[v_{\mathrm{MF}}(\mu_{0},\pi_{\Phi_{j}})-v_{\mathrm{MARL}}(\mu_{0},\tilde{\pi}_{\Phi_{j}})\right]}_{\triangleq J_{3}} \\
&\leq\underbrace{\sup_{\Phi\in\mathbb{R}^{d}}\left|v_{\mathrm{MARL}}(\mu_{0},\pi_{\Phi})-v_{\mathrm{MF}}(\mu_{0},\pi_{\Phi})\right|}_{\triangleq J_{1}}+J_{2}+J_{3}
\end{aligned}$$

Using the same argument as used in Appendix A.3, we can conclude that,

$$J_{1}\leq\frac{C_{1}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]=C_{1}e\leq C_{1}\max\{e,\epsilon\},\qquad
J_{3}\leq\frac{C_{3}}{\sqrt{N}}\left[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\right]=C_{3}e\leq C_{3}\max\{e,\epsilon\}$$

for some constants $C_1$, $C_3$. Moreover, Lemma 1 suggests that,

$$J_{2}\leq\frac{\sqrt{\epsilon_{\mathrm{bias}}}}{1-\gamma}+\epsilon\leq\frac{\sqrt{\epsilon_{\mathrm{bias}}}}{1-\gamma}+\max\{e,\epsilon\}$$

Taking $C = C_1 + C_3 + 1$, we conclude the result. Under Assumption 4, using the same argument as is used in Appendix B.2, we can improve the bounds on $J_1$, $J_3$ as follows.

$$J_{1}\leq\frac{C_{1}}{\sqrt{N}}\sqrt{|\mathcal{X}|},\qquad J_{3}\leq\frac{C_{3}}{\sqrt{N}}\sqrt{|\mathcal{X}|}$$

This establishes the improved result.
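As a complement to Algorithm 3 in Appendix M, the following is a minimal Python sketch of the sampling routine. The interfaces assumed here, namely a policy $\pi_{\Phi_j}$ given as a callable `pi(mu)` returning the $|\mathcal{X}|\times|\mathcal{U}|$ decision-rule matrix at a given μ, a transition kernel `P(x, u, mu, nu)` returning a next-state distribution, and a scalar reward `r(x, u, mu, nu)`, are illustrative modelling choices; the sketch mirrors the control flow of Algorithm 3 and is not the authors' implementation. The geometric stopping times (terminate with probability $1-\gamma$ at every step) are the standard device for sampling from the γ-discounted occupancy measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def nu_mf(mu, pi):
    """nu^MF(mu, pi) as in (7); pi(mu) returns the matrix with entries pi(x, mu)(u)."""
    return pi(mu).T @ mu

def p_mf(mu, pi, P):
    """P^MF(mu, pi) as in (8); P(x, u, mu, nu) returns a length-|X| distribution."""
    dec, out = pi(mu), np.zeros_like(mu)
    nu = dec.T @ mu
    for x in range(len(mu)):
        for u in range(dec.shape[1]):
            out = out + mu[x] * dec[x, u] * np.asarray(P(x, u, mu, nu))
    return out

def sample_advantage(mu0, pi, P, r, gamma):
    """Sketch of Algorithm 3: draws (x_T, mu_T, u_T) and an advantage estimate."""
    mu = np.asarray(mu0, dtype=float)
    nu = nu_mf(mu, pi)
    x = rng.choice(len(mu), p=mu)
    u = rng.choice(pi(mu).shape[1], p=pi(mu)[x])

    def update(x, u, mu, nu):
        x_next = rng.choice(len(mu), p=np.asarray(P(x, u, mu, nu)))
        mu_next = p_mf(mu, pi, P)
        u_next = rng.choice(pi(mu_next).shape[1], p=pi(mu_next)[x_next])
        return x_next, u_next, mu_next, nu_mf(mu_next, pi)

    # Lines 5-11 of Algorithm 3: roll the chain for a geometric number of steps.
    flag = False
    while not flag:
        flag = rng.random() < 1 - gamma
        x, u, mu, nu = update(x, u, mu, nu)
    sample = (x, mu.copy(), u)

    # Lines 12-21: accumulate rewards over a second geometric horizon and form
    # the advantage estimate A_hat = 2 * (Q_hat - V_hat).
    v_hat = q_hat = 0.0
    total, flag = 0.0, False
    while not flag:
        flag = rng.random() < 1 - gamma
        x, u, mu, nu = update(x, u, mu, nu)
        total += r(x, u, mu, nu)
    if rng.random() < 0.5:
        v_hat = total
    else:
        q_hat = total
    return sample, 2.0 * (q_hat - v_hat)
```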