RedTachyon committed on
Commit
63b565e
1 Parent(s): bfcf4ef

Upload folder using huggingface_hub

zkRCp4RmAF/10_image_0.png ADDED

Git LFS Details

  • SHA256: 3f0e11ddb8c18a6bbb8d55546b0f91a2fe2dd52de7a31cc9652b149c294478b3
  • Pointer size: 131 Bytes
  • Size of remote file: 111 kB
zkRCp4RmAF/13_image_0.png ADDED

Git LFS Details

  • SHA256: e8043361baf3f0a16dbdf89fd636905c45ccc53c558c3c2f6922136a5a133a8b
  • Pointer size: 130 Bytes
  • Size of remote file: 64.9 kB
zkRCp4RmAF/1_image_0.png ADDED

Git LFS Details

  • SHA256: 9532037eee729da64dad4323c5684315a308a593d0a2304ca0d2f75e7a046ff8
  • Pointer size: 129 Bytes
  • Size of remote file: 9.63 kB
zkRCp4RmAF/1_image_1.png ADDED

Git LFS Details

  • SHA256: 32508ff5399358995b9f3e0d7cce090529f639df0b9aac45710642964551b3d6
  • Pointer size: 130 Bytes
  • Size of remote file: 11.8 kB
zkRCp4RmAF/1_image_2.png ADDED

Git LFS Details

  • SHA256: 3c16db67c7f90c60176253a58681b8f045436f0cbd985950f2a70a00734c00f9
  • Pointer size: 129 Bytes
  • Size of remote file: 8.53 kB
zkRCp4RmAF/20_image_0.png ADDED

Git LFS Details

  • SHA256: 24fdd0ee15f25634b016d4dedcb0a75f02158a91408b128e1ec26ec2c5a9da37
  • Pointer size: 130 Bytes
  • Size of remote file: 19.8 kB
zkRCp4RmAF/20_image_1.png ADDED

Git LFS Details

  • SHA256: f917c4fa5e855fa1a391551e8c6d67b260fabbd40a8c54c69189c318e8e3b7b6
  • Pointer size: 130 Bytes
  • Size of remote file: 18.1 kB
zkRCp4RmAF/7_image_0.png ADDED

Git LFS Details

  • SHA256: 0530bad4b31ceaf147c38fd8dc7f6bfe1d7d3a6b3f195529a09f895b44ad2a76
  • Pointer size: 130 Bytes
  • Size of remote file: 38.8 kB
zkRCp4RmAF/zkRCp4RmAF.md ADDED
# Offline Reinforcement Learning With Mixture Of Deterministic Policies

Takayuki Osa *osa@mi.t.u-tokyo.ac.jp* The University of Tokyo, RIKEN

Akinobu Hayashi *akinobu_hayashi@jp.honda* Honda R&D Co., Ltd.

Pranav Deo *pranav_deo@jp.honda* Honda R&D Co., Ltd.

Naoki Morihira *naoki_morihira@jp.honda* Honda R&D Co., Ltd.

Takahide Yoshiike *takahide_yoshiike@jp.honda* Honda R&D Co., Ltd.

Reviewed on OpenReview: **https://openreview.net/forum?id=zkRCp4RmAF**
## Abstract

Offline reinforcement learning (RL) has recently attracted considerable attention as an approach for utilizing past experiences to learn a policy. Recent studies have reported the challenges of offline RL, such as estimating the values of actions that are outside the data distribution. To mitigate these issues, we propose an algorithm that leverages a mixture of deterministic policies. When the data distribution is multimodal, fitting a policy modeled with a unimodal distribution, such as a Gaussian distribution, may lead to interpolation between separate modes, thereby resulting in the value estimation of actions that are outside the data distribution. In our framework, the state-action space is divided by learning discrete latent variables, and sub-policies corresponding to each region are trained. The proposed algorithm was derived by considering the variational lower bound of the offline RL objective function. We show empirically that the use of the proposed mixture policy can reduce the accumulation of the critic loss in offline RL, which was reported in previous studies. Experimental results also indicate that using a mixture of deterministic policies in offline RL improves the performance on the D4RL benchmarking datasets.
## 1 Introduction

Reinforcement learning (RL) (Sutton & Barto, 2018) has achieved remarkable success in various applications. Many of its successes have been achieved in online learning settings where the RL agent interacts with the environment during the learning process. However, such interactions are often time-consuming and computationally expensive. The aim of reducing the number of interactions in RL has spurred active interest in offline RL (Levine et al., 2020), also known as batch RL (Lange et al., 2012). In offline RL, the goal is to learn the optimal policy from a prepared dataset generated through arbitrary and unknown processes.
Prior work on offline RL has focused on how to avoid estimating the Q-values of actions that are outside the data distribution (Fujimoto et al., 2019; Fujimoto & Gu, 2021). In this study, we propose addressing this issue from the perspective of the policy structure. Our hypothesis is that, if the data distribution in a given dataset is multimodal, the evaluation of out-of-distribution actions can be reduced by leveraging a policy conditioned on discrete latent variables, which can be interpreted as dividing the state-action space and learning sub-policies for each region. When the data distribution is multimodal, as shown in Figure 1(a), fitting a policy modeled with a unimodal distribution, such as a Gaussian distribution, may lead to interpolation between separate modes, thereby resulting in the value estimation of actions that are outside the data distribution (Figure 1(b)). To avoid this, we employ a mixture of deterministic policies (Figure 1(c)). We divide the state-action space by learning discrete latent variables and learn the sub-policies for each region.

![1_image_0.png](1_image_0.png)

(a) Samples in state-action space.

![1_image_1.png](1_image_1.png)

(b) Result of fitting a unimodal distribution.

![1_image_2.png](1_image_2.png)

(c) Proposed approach.

Figure 1: Illustration of the proposed approach. (a) In offline RL, the distribution of samples is often multimodal; (b) fitting a unimodal distribution to such samples can lead to generating actions outside the data distribution; (c) in the proposed approach, the discrete latent variable of the state-action space is learned first, and then a deterministic policy is learned for each region.

Ideally, this approach can help avoid interpolating between separate modes of the data distribution. The main contributions of this study are as follows: 1) it provides a practical algorithm for training a mixture of deterministic policies in offline RL, and 2) it investigates the effect of policy structure in offline RL.

Although a mixture of deterministic policies is expected to have advantages over a monolithic policy, training such a mixture is not trivial. We derived the proposed algorithm by considering the variational lower bound of the offline RL objective function. We refer to the proposed algorithm as deterministic mixture policy optimization (DMPO). Additionally, we propose a regularization technique for a mixture policy based on mutual information and empirically demonstrate that it improves the performance of the proposed algorithm. A previous study (Brandfonbrener et al., 2021) reported the accumulation of critic loss values during the training phase, which was attributed to generating out-of-distribution actions. In our experiments, we investigated the effect of the policy structure in offline RL through comparisons with methods that use a monolithic deterministic policy, a Gaussian policy, and a Gaussian mixture policy. We empirically show that the use of a mixture of deterministic policies can reduce the accumulation of the approximation error in offline RL. Although a mixture of Gaussian policies has been used in the online RL literature, we show that a Gaussian mixture policy does not significantly improve the performance of an offline RL algorithm. Through experiments with benchmark tasks in D4RL (Fu et al., 2020), we demonstrate that the proposed algorithms are competitive with prevalent offline RL methods.
## 2 Related Work

Recent studies have shown that regularization is a crucial component of offline RL (Fujimoto et al., 2019; Kumar et al., 2020; Levine et al., 2020; Kostrikov et al., 2021). For example, Kostrikov et al. (2021) proposed a regularization based on Fisher divergence, and Fujimoto & Gu (2021) showed that simply adding a behavior cloning term to the objective function of TD3 can achieve state-of-the-art performance on D4RL benchmark tasks. Other studies have investigated the structure of the critic, proposing the use of an ensemble of critics (An et al., 2021) or offering a one-step offline RL approach (Gulcehre et al., 2020; Brandfonbrener et al., 2021; Goo & Niekum, 2021). Previous studies (Fujimoto et al., 2019; Fujimoto & Gu, 2021) have indicated that the source of the value approximation error is the "extrapolation error" that occurs when the value of state-action pairs that are not contained in a given dataset is estimated.

We hypothesize that such an "extrapolation error" can be mitigated by dividing the state-action space, which can potentially be achieved by learning discrete latent variables. We investigate the effect of incorporating policy structure as an inductive bias in offline RL, which has not been thoroughly investigated.

Learning a discrete latent variable in the context of RL is closely related to a mixture policy, where a policy is represented as a combination of a finite number of sub-policies. In a mixture policy, one of the sub-policies is activated for a given state, and the module that determines which sub-policy is to be used is often called the gating policy (Daniel et al., 2016). Because of its two-layered structure, a mixture policy is also called a hierarchical policy (Daniel et al., 2016). Although we did not consider temporal abstraction in this study, we note that a well-known hierarchical RL framework with temporal abstraction is the option-critic (Bacon et al., 2017). Because we consider policies without temporal abstraction, we use the term "mixture policy," following the terminology in Wulfmeier et al. (2021). Previous studies have demonstrated the advantages of mixture policies for online RL (Osa et al., 2019; Zhang & Whiteson, 2019; Wulfmeier et al., 2020; 2021; Akrour et al., 2021). In these existing methods, sub-policies are often trained to cover separate modes of the Q-function, which is similar to our approach. Although existing methods have leveraged latent variables in offline RL (Zhou et al., 2020; Chen et al., 2021b; 2022), the latent variable is continuous in these methods.

For example, Chen et al. (2022) recently proposed an algorithm called latent-variable advantage-weighted policy optimization (LAPO), which leverages a continuous latent space for policy learning. LAPO incorporates an importance weight based on the advantage function and learns the continuous latent variable. Although LAPO can achieve state-of-the-art performance on well-known benchmark tasks, we empirically show in this study that LAPO suffers from a surge of the critic loss during training.
## 3 Problem Formulation

**Reinforcement Learning** Consider a reinforcement learning problem under a Markov decision process (MDP) defined by a tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma, d)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}(\mathbf{s}_{t+1}|\mathbf{s}_t, \mathbf{a}_t)$ is the transition probability density, $r(\mathbf{s}, \mathbf{a})$ is the reward function, $\gamma$ is the discount factor, and $d(\mathbf{s}_0)$ is the probability density of the initial state. A policy $\pi(\mathbf{a}|\mathbf{s}) : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ is defined as the conditional probability density over the actions given the states. The goal of RL is to identify a policy that maximizes the expected return $\mathbb{E}[R_0|\pi]$, where the return is the sum of the discounted rewards over time, given by $R_t = \sum_{k=t}^{T} \gamma^{k-t} r(\mathbf{s}_k, \mathbf{a}_k)$. The Q-function, $Q^{\pi}(\mathbf{s}, \mathbf{a})$, is defined as the expected return when starting from state $\mathbf{s}$ and taking action $\mathbf{a}$, then following policy $\pi$ under a given MDP (Sutton & Barto, 2018).
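As a small illustration of the return defined above (not part of the original paper), the following sketch computes the discounted return for every time step of a finite episode from a list of rewards:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_t = sum_{k=t}^{T} gamma^(k-t) * r_k for each time step t of one episode."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running  # backward recursion: R_t = r_t + gamma * R_{t+1}
        returns[t] = running
    return returns

# Example: a three-step episode with rewards [1, 0, 1] and gamma = 0.9
# discounted_returns([1.0, 0.0, 1.0], gamma=0.9) -> [1.81, 0.9, 1.0]
```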
In offline RL, it is assumed that the learning agent is provided with a fixed dataset, $\mathcal{D} = \{(\mathbf{s}_i, \mathbf{a}_i, r_i)\}_{i=1}^{N}$, comprising states, actions, and rewards collected by an unknown behavior policy. The goal of offline RL is to obtain a policy that maximizes the expected return using $\mathcal{D}$, without online interactions with the environment during the learning process.

**Objective function** We formulate the offline RL problem as follows: given dataset $\mathcal{D} = \{(\mathbf{s}_i, \mathbf{a}_i, r_i)\}_{i=1}^{N}$ obtained through the interactions between behavior policy β(a|s) and the environment, our goal is to obtain policy π that maximizes the expected return. In the process of training a policy in offline RL, the expected return is evaluated with respect to the states stored in the given dataset. Thus, the objective function is given by:

$$J(\pi)=\mathbb{E}_{\mathbf{s}\sim{\mathcal{D}},\mathbf{a}\sim\pi}\left[f^{\pi}(\mathbf{s},\mathbf{a})\right],\tag{1}$$

where $f^{\pi}$ is a function that quantifies the performance of policy π. There are several choices for $f^{\pi}$, as indicated in Schulman et al. (2016). TD3 employed the action-value function, $f^{\pi}(\mathbf{s},\mathbf{a}) = Q^{\pi}(\mathbf{s},\mathbf{a})$, and A2C employed the advantage function, $f^{\pi}(\mathbf{s},\mathbf{a}) = A^{\pi}(\mathbf{s},\mathbf{a})$ (Mnih et al., 2016). Other previous studies employed shaping with an exponential function, such as $f^{\pi}(\mathbf{s},\mathbf{a}) = \exp\left(Q^{\pi}(\mathbf{s},\mathbf{a})\right)$ (Peters & Schaal, 2007) or $f^{\pi}(\mathbf{s},\mathbf{a}) = \exp\left(A^{\pi}(\mathbf{s},\mathbf{a})\right)$ (Neumann & Peters, 2008; Wang et al., 2018). Without loss of generality, we assume that the objective function is given by Equation 1. We derive the proposed algorithm by considering the lower bound of the objective function of offline RL in Equation 1.
**Mixture policy** In this study, we consider a mixture of policies given by

$$\pi(\mathbf{a}|\mathbf{s})=\sum_{\mathbf{z}\in\mathcal{Z}}\pi_{\text{gate}}(\mathbf{z}|\mathbf{s})\,\pi_{\text{sub}}(\mathbf{a}|\mathbf{s},\mathbf{z}),\tag{2}$$

where z is a discrete latent variable, πgate(z|s) is the gating policy that determines the value of the latent variable, and πsub(a|s, z) is the sub-policy that determines the action for a given s and z. We assume that a sub-policy πsub(a|s, z) is deterministic; the sub-policy determines the action for a given s and z in a deterministic manner as $\mathbf{a} = \boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s}, \mathbf{z})$, where $\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s}, \mathbf{z})$ is parameterized by vector θ. Additionally, we assume that the gating policy πgate(z|s) determines the latent variable as:

$$\mathbf{z}=\arg\max_{\mathbf{z}^{\prime}}Q_{\mathbf{w}}(\mathbf{s},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s},\mathbf{z}^{\prime})),\tag{3}$$

where $Q_{\mathbf{w}}(\mathbf{s}, \mathbf{a})$ is the estimated Q-function parameterized by vector w. This gating policy is applicable to objective functions such as $f^{\pi}(\mathbf{s},\mathbf{a}) = \exp\left(Q^{\pi}(\mathbf{s},\mathbf{a})\right)$, $f^{\pi}(\mathbf{s},\mathbf{a}) = A^{\pi}(\mathbf{s},\mathbf{a})$, and $f^{\pi}(\mathbf{s},\mathbf{a}) = \exp\left(A^{\pi}(\mathbf{s},\mathbf{a})\right)$. Please refer to Appendix A for details.
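To make Equations 2 and 3 concrete, the following sketch (an illustrative assumption about the interfaces, not the authors' code) shows how the greedy mixture action can be computed at test time: each candidate latent code is mapped to an action by the deterministic sub-policy, and the gating rule picks the candidate whose action has the highest estimated Q-value. Here `mu_net(s, z)` and `q_net(s, a)` stand for arbitrary function approximators of µθ and Qw.

```python
import torch

def select_mixture_action(mu_net, q_net, state, num_latents):
    """Greedy action of the mixture policy: z* = argmax_z Q_w(s, mu_theta(s, z)) (Eq. 3)."""
    z_candidates = torch.eye(num_latents)                    # one-hot codes for every z in Z, shape (K, K)
    states = state.unsqueeze(0).expand(num_latents, -1)      # repeat the state for each candidate, (K, state_dim)
    actions = mu_net(states, z_candidates)                   # a = mu_theta(s, z) for every z, (K, action_dim)
    q_values = q_net(states, actions).squeeze(-1)            # Q_w(s, mu_theta(s, z)), (K,)
    best = torch.argmax(q_values)                            # gating policy of Eq. 3
    return actions[best]
```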
## 4 Training A Mixture Of Deterministic Policies By Maximizing The Variational Lower Bound

We consider a training procedure based on policy iteration (Sutton & Barto, 2018), in which the critic and policy are iteratively improved. In this section, we describe the policy update procedure of the proposed method.

## 4.1 Variational Lower Bound For Offline RL

To derive the update rule for policy parameter θ, we first consider the lower bound of the objective function log J(π) in Equation 1. We assume that $f^{\pi}(\mathbf{s},\mathbf{a})$ in Equation 1 is approximated with $\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a})$, which is parameterized with a vector w. In a manner similar to Dayan & Hinton (1997); Kober & Peters (2011), when $\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a}) > 0$ for any s and a, we can determine the lower bound of log J(π) using Jensen's inequality as follows:
$$\begin{aligned}\log J(\pi) &\approx \log\int d^{\beta}(\mathbf{s})\,\pi_{\boldsymbol{\theta}}(\mathbf{a}|\mathbf{s})\,\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a})\,d\mathbf{s}\,d\mathbf{a} &\quad (4)\\ &= \log\int d^{\beta}(\mathbf{s})\,\beta(\mathbf{a}|\mathbf{s})\,\frac{\pi_{\boldsymbol{\theta}}(\mathbf{a}|\mathbf{s})}{\beta(\mathbf{a}|\mathbf{s})}\,\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a})\,d\mathbf{s}\,d\mathbf{a} &\quad (5)\\ &\geq \int d^{\beta}(\mathbf{s})\,\beta(\mathbf{a}|\mathbf{s})\,\log\frac{\pi_{\boldsymbol{\theta}}(\mathbf{a}|\mathbf{s})}{\beta(\mathbf{a}|\mathbf{s})}\,\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a})\,d\mathbf{s}\,d\mathbf{a} &\quad (6)\\ &= \mathbb{E}_{(\mathbf{s},\mathbf{a})\sim\mathcal{D}}\left[\log\pi_{\boldsymbol{\theta}}(\mathbf{a}|\mathbf{s})\,\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a})\right]-\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim\mathcal{D}}\left[\log\beta(\mathbf{a}|\mathbf{s})\,\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s},\mathbf{a})\right], &\quad (7)\end{aligned}$$
where β(a|s) is the behavior policy used for collecting the given dataset, and $d^{\beta}(\mathbf{s})$ is the stationary distribution over the states induced by executing behavior policy β(a|s). The second term in Equation 7 is independent of policy parameter θ. Thus, we can maximize the lower bound of J(π) by maximizing $\sum_{i=1}^{N}\log\pi_{\boldsymbol{\theta}}(\mathbf{a}_i|\mathbf{s}_i)\,\hat{f}^{\pi}_{\mathbf{w}}(\mathbf{s}_i,\mathbf{a}_i)$. When we employ $f^{\pi}(\mathbf{s},\mathbf{a}) = \exp\left(A^{\pi}(\mathbf{s},\mathbf{a})\right)$ and the policy is Gaussian, the resulting algorithm is equivalent to AWAC (Nair et al., 2020). To employ a mixture policy with a discrete latent variable, we further analyze the objective function in Equation 7. As in Kingma & Welling (2014); Sohn et al. (2015), we obtain a variant of the variational lower bound of the conditional log-likelihood:

$$\log\pi_{\boldsymbol{\theta}}(\mathbf{a}_{i}|\mathbf{s}_{i})\geq-D_{\mathrm{KL}}\left(q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\,\|\,p(\mathbf{z}|\mathbf{s}_{i})\right)+\mathbb{E}_{\mathbf{z}\sim q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}\left[\log\pi_{\boldsymbol{\theta}}(\mathbf{a}_{i}|\mathbf{s}_{i},\mathbf{z})\right]=\ell_{\mathrm{cvae}}(\mathbf{s}_{i},\mathbf{a}_{i};\boldsymbol{\theta},\boldsymbol{\phi}),\tag{8}$$

where $q_{\boldsymbol{\phi}}(\mathbf{z}|\mathbf{s},\mathbf{a})$ is the approximate posterior distribution parameterized with vector ϕ, and $p(\mathbf{z}|\mathbf{s})$ is the true posterior distribution. The derivation of Equation 8 is provided in Appendix B.
Although it is often assumed in prior studies (Fujimoto et al., 2019) that z is statistically independent of s, that is, p(z|s) = p(z), in our framework p(z|s) should represent the behavior of the gating policy, πgate(z|s). In our framework, the gating policy πgate(z|s) determines the latent variable as $\mathbf{z} = \arg\max_{\mathbf{z}^{\prime}} Q_{\mathbf{w}}(\mathbf{s}, \boldsymbol{\mu}(\mathbf{s}, \mathbf{z}^{\prime}))$. However, the gating policy is not explicitly modeled in our framework because doing so would increase the computational complexity. To approximate the gating policy represented by the argmax function over the Q-function, we use the softmax distribution, which is often used to approximate the argmax function, given by

$$p(\mathbf{z}|\mathbf{s})={\frac{\exp\left(Q_{\mathbf{w}}(\mathbf{s},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s},\mathbf{z}))\right)}{\sum_{\mathbf{z}^{\prime}\in\mathcal{Z}}\exp\left(Q_{\mathbf{w}}(\mathbf{s},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s},\mathbf{z}^{\prime}))\right)}}.\tag{9}$$
Since we employ double-clipped Q-learning as in Fujimoto et al. (2018), we compute

$$Q_{\mathbf{w}}\big(\mathbf{s},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s},\mathbf{z})\big)=\min_{j=1,2}Q_{\mathbf{w}_{j}}\big(\mathbf{s},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s},\mathbf{z})\big)\tag{10}$$

in our implementation. The second term in Equation 8 is approximated as the mean squared error, similar to that in the standard implementation of the VAE. Based on Equation 7 and Equation 8, we obtain the objective function for training the mixture policy as follows:

$${\mathcal{L}}_{\mathrm{ML}}(\boldsymbol{\theta},\boldsymbol{\phi})=\sum_{i=1}^{N}f^{\pi}(\mathbf{s}_{i},\mathbf{a}_{i})\,\ell_{\mathrm{cvae}}(\mathbf{s}_{i},\mathbf{a}_{i};\boldsymbol{\theta},\boldsymbol{\phi}).\tag{11}$$
This objective can be regarded as the weighted maximum likelihood (Kober & Peters, 2011) of a mixture policy. Our objective function can be viewed as reconstructing the state-action pairs with adaptive weights, similar to Peters & Schaal (2007); Nair et al. (2020). Therefore, the policy samples actions within the support and does not evaluate out-of-distribution actions. The primary difference between the proposed and existing methods (Peters & Schaal, 2007; Nair et al., 2020) is that the use of a mixture of policies conditioned on discrete latent variables in our approach can be regarded as dividing the state-action space.

For example, in AWAC (Nair et al., 2020), a unimodal policy was used to reconstruct all of the "good" actions in the given dataset. However, in the context of offline RL, the given dataset may contain samples collected by diverse behaviors, and enforcing the policy to cover all modes in the dataset can degrade the resulting performance. In our approach, policy πθ(a|s, z) is encouraged to mimic the state-action pairs that are assigned to the same value of z without mimicking the actions that are assigned to different values of z.

**Approximation gap** When training a stochastic policy, the first term in Equation 7 can be directly maximized because it is trivial to compute the expected log-likelihood E[log π(a|s)]. However, when a policy is given by a mixture of deterministic policies, this is not trivial. For this reason, we use the variational lower bound in Equation 8. In addition, E[log π(a|s, z)] is replaced with the MSE, as in the VAE. As described in Cremer et al. (2018), the use of the objective in Equation 8 instead of log π(a|s) leads to the approximation gap E[log π(a|s)] − ℓcvae(s, a), as in the VAE. Although addressing the approximation gap using the techniques investigated in Cremer et al. (2018) may improve the performance of DMPO, such an investigation is left for future work.
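The weighted maximum-likelihood objective in Equation 11 can be realized with standard deep-learning tools. The sketch below is one possible PyTorch-based version, under the assumptions that `posterior_net(s, a)` outputs the logits of qϕ(z|s, a), `mu_net(s, z)` is the deterministic sub-policy, `q_net(s, a)` is a critic used for the prior of Equation 9, and `weights` holds non-negative values of f^π(s, a); it is an illustrative sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dmpo_policy_loss(mu_net, posterior_net, q_net, states, actions, weights, tau=0.67):
    """Loss form of the weighted maximum-likelihood objective in Eq. 11 (sign flipped for minimization)."""
    post_logits = posterior_net(states, actions)                       # logits of q_phi(z|s, a), shape (B, K)
    z = F.gumbel_softmax(post_logits, tau=tau, hard=True)              # differentiable (straight-through) one-hot sample
    recon = mu_net(states, z)                                          # reconstructed action mu_theta(s, z)
    recon_loss = ((recon - actions) ** 2).sum(dim=-1)                  # -E[log pi(a|s, z)] approximated as MSE

    # Prior p(z|s) of Eq. 9: softmax over Q_w(s, mu_theta(s, z)) for every candidate z.
    num_latents = post_logits.shape[-1]
    with torch.no_grad():
        z_all = torch.eye(num_latents, device=states.device)
        q_all = torch.stack(
            [q_net(states, mu_net(states, z_all[k].expand(states.shape[0], -1))).squeeze(-1)
             for k in range(num_latents)], dim=-1)                     # shape (B, K)
    prior_log = F.log_softmax(q_all, dim=-1)
    post_log = F.log_softmax(post_logits, dim=-1)
    kl = (post_log.exp() * (post_log - prior_log)).sum(dim=-1)         # KL(q_phi(z|s,a) || p(z|s))

    # Advantage-style weights f_pi(s, a) multiply the per-sample CVAE loss, as in Eq. 11.
    return (weights * (recon_loss + kl)).mean()
```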
## 4.2 Mutual-Information-Based Regularization

To improve the performance of the mixture of deterministic policies, we propose a regularization technique for a mixture policy based on the mutual information (MI) between z and the state-action pair (s, a), which we denote by I(z; s, a). As shown in Barber & Agakov (2003), the variational lower bound of I(z; s, a) is given as follows:

$$\begin{aligned}I(\mathbf{s},\mathbf{a};\mathbf{z}) &= H(\mathbf{z})-H(\mathbf{z}|\mathbf{s},\mathbf{a})\\ &= \mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{z})\sim p^{\pi}}\left[\log p(\mathbf{z}|\mathbf{s},\mathbf{a})\right]+H(\mathbf{z})\\ &= \mathbb{E}_{(\mathbf{s},\mathbf{a})\sim\beta(\mathbf{s},\mathbf{a})}\left[D_{\mathrm{KL}}\left(p(\mathbf{z}|\mathbf{s},\mathbf{a})\,\|\,g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s},\mathbf{a})\right)\right]+\mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{z})\sim p}\left[\log g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s},\mathbf{a})\right]+H(\mathbf{z})\\ &\geq \mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{z})\sim p}\left[\log g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s},\mathbf{a})\right]+H(\mathbf{z}),\end{aligned}\tag{12}$$

where $g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s},\mathbf{a})$ is an auxiliary distribution that approximates the posterior distribution $p(\mathbf{z}|\mathbf{s},\mathbf{a})$.
Thus, the final objective function is as follows:

$${\mathcal{L}}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\psi})={\mathcal{L}}_{\mathrm{ML}}(\boldsymbol{\theta},\boldsymbol{\phi})+\lambda\sum_{i=1}^{N}\mathbb{E}_{\mathbf{z}\sim p(\mathbf{z})}\left[\log g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s}_{i},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s}_{i},\mathbf{z}))\right].\tag{13}$$

MI-based regularization using the second term in Equation 13 encourages the diversity of the behaviors encoded in the sub-policy π(a|s, z). In Section 7, we empirically show that this regularization improves the performance of the proposed method.

To implement MI-based regularization, we introduce a network to represent gψ(z|s, a) in addition to the network that represents the posterior distribution qϕ(z|s, a). While maximizing the objective LML in Equation 11, both the actor µθ(s, z) and the posterior distribution qϕ(z|s, a) are updated, whereas the auxiliary distribution gψ(z|s, a) is frozen. While maximizing $\sum_{i=1}^{N}\mathbb{E}_{\mathbf{z}\sim p(\mathbf{z})}\left[\log g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s}_{i},\boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s}_{i},\mathbf{z}))\right]$, both the actor µθ(s, z) and the auxiliary distribution gψ(z|s, a) are updated, but the posterior distribution qϕ(z|s, a) is frozen. To maximize log gψ(z|si, µθ(si, z)), the latent variable is sampled from the prior distribution, that is, the uniform distribution in this case, and the maximization of log gψ(z|si, µθ(si, z)) is approximated by minimizing the cross-entropy loss between z and ẑ, where ẑ is the output of the network that represents gψ(z|s, a).
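A minimal sketch of how the second term of Equation 13 can be turned into a training signal is shown below; it assumes that `g_net(s, a)` returns the logits of gψ(z|s, a) and that the prior p(z) is uniform, as stated above. The names are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def mi_regularization_loss(mu_net, g_net, states, num_latents):
    """Cross-entropy surrogate for maximizing E_{z~p(z)}[log g_psi(z | s, mu_theta(s, z))] (second term of Eq. 13)."""
    batch_size = states.shape[0]
    z_idx = torch.randint(num_latents, (batch_size,), device=states.device)  # z sampled from the uniform prior p(z)
    z_onehot = F.one_hot(z_idx, num_latents).float()
    actions = mu_net(states, z_onehot)                                       # a = mu_theta(s, z)
    logits = g_net(states, actions)                                          # logits of g_psi(z | s, a)
    # Minimizing the cross entropy between z and the prediction z_hat maximizes log g_psi(z | s, a).
    return F.cross_entropy(logits, z_idx)
```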
## 5 Training The Critic For A Mixture Of Deterministic Policies

To derive the objective function for training the critic for a mixture of deterministic policies using the gating policy in Equation 3, we consider the following operator:

$${\mathcal{T}}_{z}Q_{z}=r(\mathbf{s},\mathbf{a})+\gamma\,\mathbb{E}_{\mathbf{s}^{\prime}}\left[\max_{\mathbf{z}^{\prime}}Q_{z}(\mathbf{s}^{\prime},\boldsymbol{\mu}(\mathbf{s}^{\prime},\mathbf{z}^{\prime}))\right].\tag{14}$$

We refer to operator $\mathcal{T}_z$ as the *latent-max-Q operator*. Following the method in Ghasemipour et al. (2021), we prove the following theorems.

**Theorem 5.1.** *In the tabular setting, $\mathcal{T}_z$ is a contraction operator in the $L_{\infty}$ norm. Hence, with repeated applications of $\mathcal{T}_z$, any initial Q-function converges to a unique fixed point.*

The proof of Theorem 5.1 is provided in Appendix C.

**Theorem 5.2.** *Let $Q_z$ denote the unique fixed point achieved in Theorem 5.1 and $\pi_z$ denote the policy that chooses the latent variable as $\mathbf{z} = \arg\max_{\mathbf{z}^{\prime}} Q(\mathbf{s}, \boldsymbol{\mu}(\mathbf{s}, \mathbf{z}^{\prime}))$ and outputs the action given by $\boldsymbol{\mu}(\mathbf{s}, \mathbf{z})$ in a deterministic manner. Then $Q_z$ is the Q-value function corresponding to $\pi_z$.*
*Proof.* (Theorem 5.2) Rearranging Equation 14 with $\mathbf{z}^{\prime}=\arg\max_{\mathbf{z}^{\prime}}Q_{z}(\mathbf{s}^{\prime},\boldsymbol{\mu}(\mathbf{s}^{\prime},\mathbf{z}^{\prime}))$, we obtain

$$\mathcal{T}_{z}Q_{z}=r(\mathbf{s},\mathbf{a})+\gamma\,\mathbb{E}_{\mathbf{s}^{\prime}}\mathbb{E}_{\mathbf{a}^{\prime}\sim\pi_{z}}\left[Q_{z}(\mathbf{s}^{\prime},\mathbf{a}^{\prime})\right].\tag{15}$$

Because $Q_z$ is the unique fixed point of $\mathcal{T}_z$, we have our result. $\square$

These theorems reveal that the latent-max-Q operator, $\mathcal{T}_z$, retains the contraction and fixed-point existence properties. Based on these results, we estimate the Q-function by applying the latent-max-Q operator. In our implementation, we employed double-clipped Q-learning (Fujimoto et al., 2018). Thus, given dataset D, the critic is trained by minimizing

$$\mathcal{L}(\mathbf{w}_{j})=\sum_{(\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}_{i}^{\prime},r_{i})\in\mathcal{D}}\left\|Q_{\mathbf{w}_{j}}(\mathbf{s}_{i},\mathbf{a}_{i})-y_{i}\right\|^{2}\tag{16}$$

for j = 1, 2, where target value $y_i$ is computed as

$$y_{i}=r_{i}+\gamma\max_{\mathbf{z}^{\prime}\in\mathcal{Z}}\,\min_{j=1,2}Q_{\mathbf{w}_{j}^{\prime}}(\mathbf{s}_{i}^{\prime},\boldsymbol{\mu}_{\boldsymbol{\theta}^{\prime}}(\mathbf{s}_{i}^{\prime},\mathbf{z}^{\prime})).\tag{17}$$
**Algorithm 1** Deterministic mixture policy optimization (DMPO)

- Initialize the actor µθ, the critics Qwj for j = 1, 2, and the posterior qϕ(z|s, a).
- For t = 1 to T:
  - Sample a minibatch $\{(\mathbf{s}_i, \mathbf{a}_i, \mathbf{s}^{\prime}_i, r_i)\}_{i=1}^{M}$ from D.
  - For each element $(\mathbf{s}_i, \mathbf{a}_i, \mathbf{s}^{\prime}_i, r_i)$, compute the target value as $y_i = r_i + \gamma \max_{\mathbf{z}^{\prime}\in\mathcal{Z}} \min_{j=1,2} Q_{\mathbf{w}^{\prime}_j}(\mathbf{s}^{\prime}_i, \boldsymbol{\mu}_{\boldsymbol{\theta}^{\prime}}(\mathbf{s}^{\prime}_i, \mathbf{z}^{\prime}))$.
  - Update the critics by minimizing $\sum_{i=1}^{M} \left(y_i - Q_{\mathbf{w}_j}(\mathbf{s}_i, \mathbf{a}_i)\right)^2$ for j = 1, 2.
  - If t mod $d_{\mathrm{interval}}$ = 0:
    - Update the actor and the posterior by maximizing Equation 11.
    - (Optionally) update the actor by maximizing $\sum_{i=1}^{M} \mathbb{E}_{\mathbf{z}\sim p(\mathbf{z})}\left[\log g_{\boldsymbol{\psi}}(\mathbf{z}|\mathbf{s}_i, \boldsymbol{\mu}_{\boldsymbol{\theta}}(\mathbf{s}_i, \mathbf{z}))\right]$.
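For concreteness, the critic-side step of Algorithm 1 (Equations 16 and 17) could look as follows in a PyTorch-style implementation; `mu_target`, `q1_target`, and `q2_target` denote target networks, and the `(1 - dones)` termination mask is a common implementation detail assumed here rather than stated in the paper.

```python
import torch

@torch.no_grad()
def latent_max_q_target(mu_target, q1_target, q2_target,
                        rewards, next_states, dones, num_latents, gamma=0.99):
    """Target value of Eq. 17: y = r + gamma * max_z' min_{j=1,2} Q'_j(s', mu'(s', z'))."""
    batch_size = next_states.shape[0]
    q_per_z = []
    for k in range(num_latents):
        z = torch.zeros(batch_size, num_latents, device=next_states.device)
        z[:, k] = 1.0                                             # one-hot latent code z'
        next_actions = mu_target(next_states, z)                  # mu_theta'(s', z')
        q_min = torch.min(q1_target(next_states, next_actions),
                          q2_target(next_states, next_actions))   # double-clipped Q (Eq. 10)
        q_per_z.append(q_min.squeeze(-1))
    q_best = torch.stack(q_per_z, dim=-1).max(dim=-1).values      # max over the discrete latent z'
    return rewards + gamma * (1.0 - dones) * q_best

# The critic loss of Eq. 16 is then, e.g., ((q1(s, a) - y) ** 2).mean() + ((q2(s, a) - y) ** 2).mean().
```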
## 6 Practical Implementation

The proposed DMPO algorithm is summarized as Algorithm 1. Similar to TD3 (Fujimoto et al., 2018), the actor is updated once after $d_{\mathrm{interval}}$ updates of the critics. In our implementation, we set $d_{\mathrm{interval}} = 2$.

The discrete latent variable is represented by a one-hot vector, and we used the Gumbel-Softmax trick to sample the discrete latent variable in a differentiable manner (Jang et al., 2017; Maddison et al., 2017). Herein, following Jang et al. (2017); Maddison et al. (2017), we assume that z is a categorical variable with class probabilities $\alpha_1, \alpha_2, \ldots, \alpha_k$, and categorical samples are encoded as k-dimensional one-hot vectors lying on the corners of the (k − 1)-dimensional simplex, $\Delta^{k-1}$. In the Gumbel-Softmax trick, sample vectors $\tilde{\mathbf{z}} \in \Delta^{k-1}$ are generated as follows:

$$\tilde{z}_{i}=\frac{\exp\left((\log\alpha_{i}+G_{i})/\lambda\right)}{\sum_{j=1}^{k}\exp\left((\log\alpha_{j}+G_{j})/\lambda\right)},\tag{18}$$

where $G_i$ is sampled from the Gumbel distribution as $G_i \sim \mathrm{Gumbel}(0, 1)$, and λ is the temperature. As the temperature λ approaches 0, the distribution of $\tilde{\mathbf{z}}$ smoothly approaches the categorical distribution p(z). As in prior work on the VAE with the Gumbel-Softmax trick (Dupont, 2018), we set λ = 0.67 in our implementation of DMPO. There are several promising ways of learning discrete latent variables (van den Oord et al., 2017; Razavi et al., 2019), and investigating the best way of learning the discrete latent variable is left for future work.
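A direct transcription of Equation 18 into code (a small NumPy sketch for illustration, not the authors' implementation) is:

```python
import numpy as np

def gumbel_softmax_sample(log_alpha, temperature=0.67, rng=None):
    """Eq. 18: relaxed one-hot sample z~ on the simplex from class log-probabilities log_alpha."""
    rng = np.random.default_rng() if rng is None else rng
    gumbel_noise = rng.gumbel(loc=0.0, scale=1.0, size=np.shape(log_alpha))  # G_i ~ Gumbel(0, 1)
    logits = (np.asarray(log_alpha) + gumbel_noise) / temperature
    logits = logits - logits.max()                                           # subtract max for numerical stability
    z = np.exp(logits)
    return z / z.sum()
```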
Additionally, we employed the state normalization method used in TD3+BC (Fujimoto & Gu, 2021). During preliminary experiments, we found that when $f^{\pi}(\mathbf{s},\mathbf{a}) = \exp\left(b A^{\pi}(\mathbf{s},\mathbf{a})\right)$ in Equation 11, the scaling factor b has non-trivial effects on performance, and the best value of b differs for each task. To avoid changing the scaling parameter for each task, we used the following normalization of the advantage function:

$$f^{\pi}(\mathbf{s},\mathbf{a})=\exp\left(\frac{\alpha\left(A^{\pi}(\mathbf{s},\mathbf{a})-\max_{(\hat{\mathbf{s}},\hat{\mathbf{a}})\in\mathcal{D}_{\text{batch}}}A^{\pi}(\hat{\mathbf{s}},\hat{\mathbf{a}})\right)}{\max_{(\hat{\mathbf{s}},\hat{\mathbf{a}})\in\mathcal{D}_{\text{batch}}}A^{\pi}(\hat{\mathbf{s}},\hat{\mathbf{a}})-\min_{(\hat{\mathbf{s}},\hat{\mathbf{a}})\in\mathcal{D}_{\text{batch}}}A^{\pi}(\hat{\mathbf{s}},\hat{\mathbf{a}})}\right),\tag{19}$$

where $\mathcal{D}_{\text{batch}}$ is a mini-batch sampled from the given dataset D and α is a constant. We set α = 10 for the mujoco tasks and α = 5.0 for the antmaze tasks in our experiments. For other hyperparameter details, please refer to Appendix F.
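The min-max normalization in Equation 19 can be computed per mini-batch as in the following sketch (NumPy, for illustration; the small epsilon guarding against a constant-advantage batch is an assumption, not from the paper):

```python
import numpy as np

def normalized_advantage_weights(advantages, alpha=10.0, eps=1e-8):
    """Eq. 19: min-max normalize the advantages of a mini-batch, scale by alpha, and exponentiate."""
    advantages = np.asarray(advantages, dtype=np.float64)
    a_max, a_min = advantages.max(), advantages.min()
    scaled = alpha * (advantages - a_max) / (a_max - a_min + eps)  # values lie in [-alpha, 0]
    return np.exp(scaled)                                          # weights f_pi(s, a) in (0, 1]
```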
## 7 Experiments

We investigated the effect of policy structure on the resulting performance and training errors of critics. In the first experiment, we performed a comparative assessment of TD3+BC (Fujimoto & Gu, 2021), AWAC (Nair et al., 2020), and DMPO on a toy problem where the distribution of samples in a given dataset is multimodal.

![7_image_0.png](7_image_0.png)

Figure 2: Performance on a simple task with multimodal data distribution.
Table 1: Algorithm setup in the experiment.

|                         | TD3+BC                     | AWAC                      | LP-AWAC                                         | DMPO                      |
|-------------------------|----------------------------|---------------------------|-------------------------------------------------|---------------------------|
| Critic training         | double-clipped Q-learning  | double-clipped Q-learning | double-clipped Q-learning                       | double-clipped Q-learning |
| Policy type             | monolithic & deterministic | monolithic & stochastic   | deterministic on continuous latent action space | mixture & deterministic   |
| Regularization          | BC term                    | none                      | none                                            | none                      |
| State                   | normalized                 | normalized                | normalized                                      | normalized                |
| Advantage normalization | -                          | yes                       | yes                                             | yes                       |
Further, we conducted a quantitative comparison between the proposed and baseline methods on D4RL benchmark tasks (Fu et al., 2020). In the following section, we refer to the proposed method based on the objective in Equation 11 as DMPO, and to the variant of the proposed method with MI-based regularization in Equation 13 as infoDMPO. In both the toy problem and the D4RL tasks, we used the author-provided implementation of TD3+BC, and our implementations of DMPO and AWAC are based on the author-provided implementation of TD3+BC. Our implementation is available at https://github.com/TakaOsa/DMPO.
## 7.1 Multimodal Data Distribution On Toy Task

To show the effect of multimodal data distribution in a given dataset, we evaluated the performance of TD3+BC, AWAC, and DMPO on a toy task, as shown in Figure 2. We also evaluated a variant of AWAC that employs the policy structure used in LAPO (Chen et al., 2022), which we refer to as LP-AWAC. In LP-AWAC, continuous latent representations of state-action pairs are learned using a conditional VAE with advantage weighting, and a deterministic policy that outputs actions in the learned latent space is trained using DDPG. We found that the authors' implementation of LAPO¹ includes techniques to improve performance, such as action normalization and clipping of the target value for the state-value function. While LP-AWAC employs the policy structure proposed by Chen et al. (2022), the implementation of LP-AWAC is largely modified from the authors' implementation of LAPO to minimize the differences among our implementations of AWAC, mixAWAC, LP-AWAC, and DMPO. LP-AWAC can be considered a baseline method that incorporates a continuous latent variable in its policy structure. The implementation details of LP-AWAC are described in Appendix F. The differences between the compared methods are summarized in Table 1. In our implementations of AWAC, LP-AWAC, and DMPO, we used state normalization and double-clipped Q-learning as in TD3+BC, as well as the normalization of the advantage function described in Section 6. The differences among AWAC, LP-AWAC, and DMPO therefore indicate the effect of the policy representation.

¹ https://github.com/pcchenxi/LAPO-offlienRL

In this toy task, the agent is represented as a point mass, the state is the position of the point mass in two-dimensional space, and the action is a small displacement of the point mass. There are two goals in this task, which are indicated by red circles in Figure 2. The blue circle denotes the starting position in Figure 2, and there are three obstacles, which are indicated by solid black circles. In this task, the reward is sparse: when the agent reaches one of the goals, it receives a reward of 1.0, and the episode ends. If the agent makes contact with one of the obstacles, the agent receives a reward of -1.0, and the episode ends. In the given dataset, trajectories for the two goals are provided, and there is no information on which goal the agent is heading to.
The scores are summarized in Table 2. Among the evaluated methods, only DMPO successfully solved this task.

The policy obtained by TD3+BC did not reach its goal in a stable manner, as shown in Figure 2(b). Similarly, as shown in Figure 2(c), the policy learned by AWAC often slows down around point (0, 0) and fails to reach the goal. This behavior implies that AWAC attempts to average over multiple modes of the distribution. In contrast, the policy learned by DMPO successfully reaches one of the goals. Because the main difference between AWAC and DMPO is the policy architecture, the result shows that the unimodal policy distribution fails to deal with the multimodal data distribution, whereas the mixture policy employed in DMPO successfully dealt with it. Similarly, the performance of LP-AWAC is significantly better than TD3+BC and AWAC, demonstrating the benefit of the policy structure based on the latent action space. On the other hand, the performance of DMPO was better than that of LP-AWAC, indicating the advantage of using the discrete latent variable in offline RL. The activation of the sub-policies is visualized in Figure 2(e). The color indicates the value of the discrete latent variable given by the gating policy, $\mathbf{z}^{*} = \arg\max_{\mathbf{z}} Q_{\mathbf{w}}(\mathbf{s}, \boldsymbol{\mu}(\mathbf{s}, \mathbf{z}))$.

Figure 2(d) shows that different sub-policies are activated for different regions, thereby indicating that DMPO appropriately divides the state-action space.

Table 2: Performance on the toy task.

| TD3+BC   | AWAC      | LP-AWAC | DMPO    |
|----------|-----------|---------|---------|
| -0.2±0.4 | 0.33±0.44 | 0.7±0.6 | 1.0±0.0 |
## 7.2 Effect Of Policy Structure

We investigated the effect of policy structure by comparing the proposed method with existing methods that incorporate the importance weight based on the advantage function. We used AWAC as a baseline method. To investigate the difference between a mixture of stochastic policies and a mixture of deterministic policies, we evaluated a variant of AWAC with Gaussian mixture policies, which we refer to as mixAWAC. For mixAWAC, the Gumbel-Softmax trick was used to sample the discrete latent variable. All baseline methods used double-clipped Q-learning for the critic in this experiment. The implementations of AWAC and DMPO were identical to those used in the previous experiment. In our evaluation, |Z| = 8 was used for DMPO and infoDMPO. Appendix D presents the effect of the dimensionality of the discrete latent variables. In this study, we evaluated the baseline methods with the mujoco-v2 and antmaze-v0 tasks.
## 7.2.1 Performance Score On D4RL

A comparison between AWAC, mixAWAC, LP-AWAC, and DMPO is presented in Table 3. These methods incorporate importance weights based on the advantage function with different policy structures. Therefore, the differences between these methods indicate the effect of policy structure. In our experiments, we did not observe significant differences in the performance of AWAC and mixAWAC. This result indicates that the use of Gaussian mixture policies does not lead to a performance improvement. However, the performance of DMPO matched or exceeded that of AWAC, except for the Hopper-expert and Hopper-medium-expert tasks. This result also confirms that the use of a mixture of deterministic policies is beneficial for these tasks, although the benefits would be task-dependent.
Table 3: Comparison with methods incorporating advantage-weighting using D4RL-v2 datasets. Average normalized scores over the last 10 test episodes and five seeds are shown. The boldface text indicates the best performance. HC = HalfCheetah, HP = Hopper, WK = Walker2d.

| Dataset | Task     | AWAC      | mixAWAC    | LP-AWAC   | DMPO      |
|---------|----------|-----------|------------|-----------|-----------|
| Expert  | HC       | 94.8±0.2  | 94.0±0.5   | 93.7±0.4  | 97.0±1.0  |
|         | HP       | 109.8±2.9 | 111.8±0.8  | 104.3±5.5 | 93.6±15.1 |
|         | WK       | 111.0±0.2 | 110.5±0.3  | 110.7±0.1 | 111.4±0.3 |
| Med.-E  | HC       | 92.7±0.8  | 92.1±0.6   | 92.5±0.4  | 91.1±3.4  |
|         | HP       | 98.6±10.7 | 102.0±17.5 | 90.5±21.6 | 78.4±19.0 |
|         | WK       | 109.2±0.3 | 109.1±0.3  | 109.1±0.4 | 109.9±0.4 |
| Med.-R  | HC       | 40.9±0.6  | 41.5±0.4   | 39.8±0.3  | 45.2±0.8  |
|         | HP       | 38.2±9.4  | 41.2±4.7   | 46.1±8.1  | 89.2±8.1  |
|         | WK       | 65.0±15.7 | 67.7±8.8   | 50.2±5.5  | 82.1±3.8  |
| Med.    | HC       | 44.3±0.2  | 45.1±0.3   | 44.0±0.4  | 47.5±0.4  |
|         | HP       | 57.5±3.0  | 57.2±3.9   | 52.8±3.8  | 71.2±6.5  |
|         | WK       | 81.0±2.5  | 78.7±4.8   | 77.4±2.7  | 79.4±4.7  |
| Rand.   | HC       | 3.2±1.3   | 2.2±0.0    | 4.1±2.3   | 15.8±1.6  |
|         | HP       | 7.3±0.9   | 8.2±0.2    | 8.4±0.6   | 12.0±10.0 |
|         | WK       | 3.1±1.0   | 4.9±1.1    | 4.0±1.2   | 2.5±2.6   |
| Antmaze | umaze    | 49.8±6.2  | 57.4±6.2   | 56.6±4.1  | 83.6±4.5  |
|         | umaze-d. | 53.8±13.0 | 46.8±6.9   | 66.6±5.5  | 43.2±7.8  |
|         | med.-p.  | 0.0±0.0   | 0.0±0.0    | 0.0±0.0   | 77.0±5.1  |
|         | med.-d.  | 0.0±0.0   | 0.0±0.0    | 0.0±0.0   | 56.8±27.2 |
|         | large-p. | 0.0±0.0   | 0.0±0.0    | 0.0±0.0   | 1.0±1.3   |
|         | large-d. | 0.0±0.0   | 0.0±0.0    | 0.0±0.0   | 4.8±9.6   |
The difference between mixAWAC and DMPO implies a difference between a Gaussian mixture policy and a mixture of deterministic policies. In a Gaussian mixture policy, one of the Gaussian components may cover a large portion of the action space and interpolate between separate modes of the actions. If this happens, out-of-distribution actions will be generated by the learned policy. In a mixture of deterministic policies, however, there is no such possibility that one of the components covers a large portion of the action space. In addition, DMPO outperformed LP-AWAC on the mujoco-v2 and antmaze-v0 tasks. As the difference between DMPO and LP-AWAC indicates the difference between the discrete and continuous latent representations in our framework, this result also indicates that the use of a discrete latent variable is beneficial for offline RL tasks. A comparison with additional baseline methods is provided in Appendix E.
## 7.2.2 Critic Loss Function

To investigate the effect of the policy structure on the critic loss function, we compared the value of the critic loss function among AWAC, mixAWAC, LP-AWAC, and DMPO. The normalized scores and the value of the critic loss function during training are depicted in Figure 3. The value of the critic loss given by Equation 16 is plotted every 5,000 updates. Previous studies have indicated that the critic loss value can accumulate over iterations (Brandfonbrener et al., 2021). Figure 3 shows the accumulation of the critic loss in AWAC on mujoco-v2 tasks. The difference between AWAC and mixAWAC indicates that using a Gaussian mixture policy often reduces the accumulation of the critic loss. The critic loss of mixAWAC is lower than that of AWAC in the halfcheetah-medium-replay-v2 and halfcheetah-medium-expert-v2 tasks. This result shows that the use of a multimodal policy can reduce the accumulation of the critic loss in offline RL.

In addition, the critic loss of DMPO is even lower than that of mixAWAC, which demonstrates that a mixture of deterministic policies can reduce the critic loss further than a Gaussian mixture policy. These results indicate that using a mixture of deterministic policies can reduce the generation of out-of-distribution actions, which is essential for offline RL.

Regarding LP-AWAC, the critic loss value increased rapidly at the beginning of the training. Although the critic loss value often decreases at the end of LP-AWAC training, it is still higher than that of DMPO. The surge in the critic loss value indicates the generation of out-of-distribution actions during training in LP-AWAC. Importantly, in DMPO, the value of the critic loss is clearly lower, and the performance of the policy is better than that of LP-AWAC. This result indicates that the use of a discrete latent variable can be more effective than a continuous latent variable on these tasks. In Brandfonbrener et al. (2021), it was shown that the accumulation of critic loss values can be reduced by introducing regularization. Our results indicate that the use of a mixture policy can also mitigate the accumulation of the critic loss in offline RL, which suggests the importance of incorporating inductive bias in the policy structure. However, it is worth noting that a reduction in the critic loss given by Equation 16 does not necessarily improve the policy performance. In halfcheetah-medium-expert-v2, although the critic loss was significantly lower in DMPO than in AWAC, there was no significant difference in performance between DMPO and AWAC. Recently, Fujimoto et al. (2022) indicated that a lower value of the critic loss given by Equation 16 does not necessarily mean better performance, and this observation aligns with our experiments. The metric to measure the accuracy of the value estimation is still an open problem in RL.

![10_image_0.png](10_image_0.png)

Figure 4: Histogram of the Bellman errors after 20k steps on the halfcheetah-medium-replay task.

As another qualitative result, Figure 4 shows the histograms of the Bellman error after training for 20 thousand steps. The Bellman error in mixAWAC is distributed more widely than that in AWAC, indicating that the use of the Gaussian mixture policy can increase the variance during the training of the critic. In contrast, the distribution of the Bellman error in DMPO is narrower than those of AWAC, mixAWAC, and LP-AWAC, indicating that the use of the mixture of deterministic policies may reduce the variance during critic training. This variance reduction could also be considered a reason why DMPO outperformed the baseline methods.
## 7.3 Comparison With Prevalent Baselines

We compared the performance of the proposed method with that of prevalent baselines. As baseline methods, we used TD3+BC, CQL (Kumar et al., 2020), IQL (Kostrikov et al., 2022), LAPO, and Diffusion QL (Wang et al., 2023). CQL incorporates a conservative critic update and entropy regularization. In the experiments reported in this section, we used the authors' implementation of LAPO. Diffusion QL was recently proposed by Wang et al. (2023) and employs a diffusion model as a policy. We used the authors' implementation of Diffusion QL, and the results of Diffusion QL are based on the offline model selection reported in Wang et al. (2023). IQL employs expectile regression for learning the critic to address the issue of generating out-of-distribution actions during training. Because the aim of our study is to investigate the policy structure, the approach of IQL, which addresses critic learning, is orthogonal to ours. IQL is the state-of-the-art method for the antmaze tasks on D4RL, which involve dealing with long horizons and require "stitching" together sub-trajectories in a given dataset (Fu et al., 2020). In the implementation of IQL, several techniques, such as scheduling of the learning rate, were used to improve the performance. To compete with IQL on the antmaze tasks, we also used the techniques described in Chen et al. (2022). Therefore, the implementations of DMPO and infoDMPO for the antmaze tasks are slightly different from those for the other tasks. In our preliminary experiments, we evaluated IQL using the techniques proposed in Chen et al. (2022) and observed that the original implementation of IQL showed better performance; therefore, we used the original implementation of IQL for comparison. In this experiment, we used the mujoco-v2, antmaze-v0, and adroit tasks on D4RL.
Table 4: Results on mujoco tasks using D4RL-v2 datasets and AntMaze tasks. Average normalized scores over the last 10 test episodes and five seeds are shown. HC = HalfCheetah, HP = Hopper, WK = Walker2d. "Diff. QL" represents Diffusion QL proposed in Wang et al. (2023).

| Dataset | Task | TD3+BC (re-run) | CQL (re-run) | IQL (re-run) | LAPO (re-run) | Diff. QL (re-run) | DMPO (ours) | infoDMPO (ours) |
|---------|------|-----------------|--------------|--------------|---------------|-------------------|-------------|-----------------|
| Expert  | HC   | 96.3±0.9        | 22.0±9.6     | 96.1±1.5     | 95.4±0.3      | 86.3±15.9         | 97.0±1.0    | 95.6±2.0        |
|         | HP   | 109.9±2.5       | 105.8±3.8    | 98.4±13.1    | 110.9±2.3     | 84.3±24.2         | 93.6±15.1   | 107.5±2.9       |
|         | WK   | 110.2±0.4       | 108.9±0.4    | 112.6±0.3    | 111.5±0.2     | 109.0±0.6         | 111.4±0.3   | 112.1±0.4       |
| Med.-E  | HC   | 89.4±7.2        | 38.4±8.4     | 90.7±4.3     | 94.3±1.1      | 83.8±15.3         | 91.1±3.4    | 91.4±2.5        |
|         | HP   | 95.5±9.4        | 88.4±15.9    | 73.9±32.6    | 110.5±1.2     | 88.1±25.7         | 78.4±19.0   | 94.5±14.9       |
|         | WK   | 110.2±0.3       | 109.2±1.9    | 111.4±1.1    | 111.0±0.2     | 110.1±0.6         | 109.9±0.4   | 110.1±0.7       |
| Med.-R  | HC   | 44.7±0.4        | 46.9±0.3     | 43.6±1.4     | 41.9±1.0      | 45.6±0.6          | 45.2±0.8    | 46.7±0.6        |
|         | HP   | 73.8±18.9       | 95.5±1.7     | 90.6±14.3    | 59.7±14.2     | 56.1±24.0         | 89.2±8.1    | 98.5±2.0        |
|         | WK   | 64.5±17.0       | 77.5±3.1     | 82.2±3.6     | 50.3±18.6     | 84.1±17.0         | 82.1±3.8    | 86.7±3.2        |
| Med.    | HC   | 48.2±0.3        | 48.2±0.4     | 48.2±0.2     | 45.7±0.3      | 46.7±0.7          | 47.5±0.4    | 48.6±0.4        |
|         | HP   | 61.0±4.2        | 77.4±4.0     | 61.2±3.5     | 56.2±5.1      | 57.1±11.4         | 71.2±6.5    | 86.4±7.6        |
|         | WK   | 84.7±1.3        | 81.5±2.5     | 82.9±6.0     | 80.5±1.8      | 62.1±20.6         | 79.4±4.7    | 85.0±0.8        |
| Rand.   | HC   | 11.5±0.6        | 24.1±1.5     | 12.6±4.6     | 27.1±1.0      | 17.5±0.2          | 15.8±1.6    | 16.3±1.2        |
|         | HP   | 8.7±0.3         | 2.2±1.9      | 7.4±0.3      | 15.2±8.6      | 7.8±0.5           | 12.0±10.0   | 20.4±9.8        |
|         | WK   | 1.4±1.9         | 4.3±7.9      | 5.5±1.6      | 2.2±1.5       | 6.2±3.4           | 2.5±2.6     | 2.3±2.0         |
| Total   |      | 1010.0          | 930.3        | 1017.3       | 1012.4        | 942.1             | 1026.5      | 1102.1          |
A comparison of TD3+BC, CQL, and IQL is presented in Tables 4, 5, and 6. The boldface text indicates the best performance. In mujoco-v2 tasks, the performance of DMPO is comparable or superior to that of the state-of-the-art methods. In addition, infoDMPO, which employs MI-based regularization, outperformed DMPO on various tasks, and infoDMPO showed the best performance on 10 of the 15 mujoco-v2 tasks. This result shows that encouraging the diversity of sub-policies using the proposed MI-based regularization is effective for DMPO.

The advantages of DMPO and infoDMPO over TD3+BC and CQL are apparent for the antmaze tasks. TD3+BC and CQL did not work satisfactorily on the antmaze tasks, indicating that the techniques used in these algorithms are not effective for such tasks. The performance of DMPO (ant ver.) and infoDMPO (ant ver.) on the antmaze tasks is comparable to that of IQL and Diffusion QL, which are the state-of-the-art methods for these tasks.

We observed similar results for the adroit tasks. DMPO and infoDMPO clearly outperformed TD3+BC and CQL, and the performance of DMPO and infoDMPO was comparable to that of IQL. Considering that infoDMPO outperformed IQL on the mujoco-v2 tasks, the overall performance of infoDMPO is better than that of IQL. This result reveals that the use of a mixture of deterministic policies can result in a significant performance improvement in offline RL.
Table 5: Results on AntMaze tasks. Average normalized scores over the last 10 test episodes and five seeds are shown.

| Dataset | Task     | TD3+BC (re-run) | CQL (re-run) | IQL (re-run) | LAPO (re-run) | Diff. QL (re-run) | DMPO (ant ver., ours) | infoDMPO (ant ver., ours) |
|---------|----------|-----------------|--------------|--------------|---------------|-------------------|-----------------------|---------------------------|
| Antmaze | umaze    | 92.8±2.7        | 73.0±4.9     | 87.4±4.5     | 97.2±2.7      | 80.4±35.3         | 92.8±2.1              | 89.4±5.1                  |
|         | umaze-d. | 45.0±22.2       | 43.8±4.4     | 64.6±5.6     | 57.4±11.7     | 8.0±21.9          | 32.6±25.6             | 34.8±18.0                 |
|         | med.-p.  | 0.0±0.0         | 9.0±6.4      | 74.6±3.1     | 73.8±4.8      | 60.5±48.8         | 63.0±13.0             | 62.6±6.8                  |
|         | med.-d.  | 0.0±0.0         | 3.8±4.2      | 73.8±7.1     | 81.0±3.6      | 12.4±30.0         | 75.0±8.5              | 82.8±4.4                  |
|         | large-p. | 0.0±0.0         | 0.0±0.0      | 39.0±7.2     | 27.6±13.3     | 44.4±48.5         | 42.2±23.0             | 47.4±14.5                 |
|         | large-d. | 0.0±0.0         | 0.0±0.4      | 48.0±9.0     | 26.2±17.5     | 48.6±48.8         | 56.6±4.5              | 38.0±4.8                  |
Table 6: Results on adroit tasks using the average normalized scores over the last 10 test episodes and five seeds.

| Dataset | Task     | TD3+BC (re-run) | CQL(ρ) (re-run) | IQL (re-run) | LAPO (re-run) | Diff. QL (re-run) | DMPO (ours) | infoDMPO (ours) |
|---------|----------|-----------------|-----------------|--------------|---------------|-------------------|-------------|-----------------|
| Human   | pen      | 0.8±8.0         | 98.3±81.8       | 88.8±21.2    | 78.9±14.1     | 42.1±57.5         | 86.1±8.8    | 94.8±16.5       |
|         | hammer   | 0.9±0.8         | -7.1±0.1        | 1.0±0.2      | 1.1±0.4       | 0.3±0.2           | 1.2±0.2     | 2.4±0.9         |
|         | door     | -0.3±0.0        | -3.3±7.8        | 2.4±2.1      | 3.2±1.6       | -0.4±0.0          | 1.3±1.5     | 4.2±3.1         |
|         | relocate | -0.3±0.0        | 0.3±2.4         | 0.0±0.0      | 0.0±0.0       | -0.0±0.0          | 0.0±0.1     | 0.1±0.0         |
| Cloned  | pen      | 0.5±7.0         | -1.7±1.5        | 39.2±15.4    | 25.6±12.2     | 19.0±42.2         | 36.0±17.7   | 46.4±16.7       |
|         | hammer   | 0.2±0.0         | -7.0±0.1        | 0.9±0.4      | 0.7±0.4       | 0.2±0.0           | 0.8±0.6     | 1.2±0.3         |
|         | door     | -0.3±0.0        | -9.4±0.0        | 0.6±1.2      | 0.7±0.9       | -0.3±0.0          | 0.0±0.0     | 0.8±0.8         |
|         | relocate | -0.3±0.0        | -2.1±0.0        | -0.2±0.0     | -0.2±0.0      | -0.2±0.1          | -0.2±0.0    | -0.2±0.0        |
We also provide results regarding the computational cost of infoDMPO in Table 7. We used a workstation with an RTX A6000 GPU and a Core i9-10980XE CPU for this evaluation. The results indicate that the computational cost of Diffusion QL is approximately three times higher than that of infoDMPO. Therefore, the computational cost is also an advantage of infoDMPO over Diffusion QL.
As a qualitative evaluation, we investigated the activation of sub-policies in DMPO. The activation of sub-policies in DMPO on the pen-human-v0 task is depicted in Figure 5. In Figure 5(a), the top row depicts the state at the 20th, 40th, 60th, and 80th time steps, and the graphs in the middle row show the action-values of each sub-policy at each state, $Q_{\mathbf{w}}(\mathbf{s}, \boldsymbol{\mu}(\mathbf{s}, \mathbf{z}))$. Figures 5(a) and (b) show the results for different episodes.

A previous study (Smith et al., 2018) reported that in the option-critic framework (Bacon et al., 2017), only a few options are activated and the remaining options do not learn meaningful behaviors. In contrast, the results in Figure 5 show that the value of each sub-policy, $Q_{\mathbf{w}}(\mathbf{s}, \boldsymbol{\mu}(\mathbf{s}, \mathbf{z}))$, changes over time, and various sub-policies are activated during execution. This result implies that meaningful sub-policies are learned in DMPO and that different behaviors are adaptively used to perform a complicated manipulation task.
## 8 Limitation Of The Proposed Method

In this work, we proposed a method based on a mixture of deterministic policies, which implicitly divides the state-action space by learning discrete latent variables. While the experimental results demonstrate that our method, DMPO, can avoid the accumulation of the critic loss, there are problems that cannot be addressed by our approach. For example, if the dataset covers only a subset of the actions, there is the potential to overestimate the values of actions that are not contained in the dataset. In such cases, dividing the state-action space is not sufficient to avoid generating OOD actions, and it will be necessary to use regularization that avoids the overestimation of Q-values, such as CQL (Kumar et al., 2020).

In addition, while DMPO demonstrated performance comparable to Diffusion QL on the mujoco tasks in D4RL, a diffusion-model-based policy is evidently more expressive than a mixture of deterministic policies. When the data distribution in a given dataset is highly complex, a diffusion-model-based policy should demonstrate its advantage over a mixture of deterministic policies.
Table 7: Wall clock time for training and inference.

|              | Time for training with 1 million steps | Action inference time |
|--------------|-----------------------------------------|-----------------------|
| infoDMPO     | 170 [min]                               | 1.2-1.3 [ms]          |
| Diffusion QL | 600 [min]                               | 4.0-4.3 [ms]          |
![13_image_0.png](13_image_0.png)

Figure 5: Visualization of sub-policy activation in the pen-human-v0 task. The top row depicts the state at the 20th, 40th, 60th, and 80th time steps; the graphs in the middle row depict the action-values of each sub-policy at each state.
## 9 Conclusion

We presented DMPO, an algorithm for training a mixture of deterministic policies in offline RL. This algorithm can be interpreted as an approach that divides the state-action space by learning a discrete latent variable and the corresponding sub-policies for each region. In this study, we empirically investigated the effect of policy structure in offline RL. The experimental results reveal that the use of a mixture of deterministic policies can mitigate the issue of critic error accumulation in offline RL. In addition, the results indicate that the use of a mixture of deterministic policies significantly improves the performance of an offline RL algorithm. We believe that our study contributes to advancing techniques to leverage policy structure in offline RL.
439
+ ## References
440
+
441
+ Riad Akrour, Davide Tateo, and Jan Peters. Continuous action reinforcement learning from a mixture of interpretable experts. *IEEE Transactions on Pattern Analysis and Machine Intelligence*, 2021. doi: 10.1109/TPAMI.2021.3103132.
442
+
443
+ Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified q-ensemble. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2021.
444
+
445
+ Pierre-Luc Bacon, Jean Harb, and Doina Precup. Option-critic architecture. In *Proceedings of the AAAI*
446
+ Conference on Artificial Intelligence (AAAI), 2017.
447
+
448
David Barber and Felix Agakov. The IM algorithm: A variational approach to information maximization.
449
+
450
+ In *Advances in Neural Information Processing Systems (NeurIPS)*, 2003.
451
+
452
+ David Brandfonbrener, William F. Whitney, Rajesh Ranganath, and Joan Bruna. Offline RL without offpolicy evaluation. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2021.
453
+
454
+ Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021a.
455
+
456
+ Xi Chen, Ali Ghadirzadeh, Tianhe Yu, Jianhao Wang, Yuan Gao, Wenzhe Li, Bin Liang, Chelsea Finn, and Chongjie Zhang. LAPO: Latent-variable advantage-weighted policy optimization for offline reinforcement learning. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2022.
457
+
458
+ Xiong-Hui Chen, Yang Yu, Qingyang Li, Fan-Ming Luo, Zhiwei Qin, Wenjie Shang, and Jieping Ye. Offline model-based adaptable policy learning. In *Advances in Neural Information Processing Systems (NeurIPS)*,
459
+ 2021b.
460
+
461
Chris J. Maddison, Daniel Tarlow, and Tom Minka. A* sampling. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2014.
462
+
463
+ Chris Cremer, Xuechen Li, and David Duvenaud. Inference suboptimality in variational autoencoders. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
464
+
465
+ Christian Daniel, Gerhard Neumann, Oliver Kroemer, and Jan Peters. Hierarchical relative entropy policy search. *Journal of Machine Learning Research*, 17(93):1–50, 2016.
466
+
467
+ P. Dayan and G. Hinton. Using expectation-maximization for reinforcement learning. *Neural Computation*,
468
+ 9:271–278, 1997.
469
+
470
+ Emilien Dupont. Learning disentangled joint continuous and discrete representations. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
471
+
472
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. *arXiv*, 2020.
473
+
474
+ Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems (NeurIPS), 2021.
475
+
476
+ Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In *Proceedings of the International Conference on Machine Learning (ICML)*, pp. 1587–1596, 2018.
477
+
478
+ Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration.
479
+
480
+ In *Proceedings of the International Conference on Machine Learning (ICML)*, pp. 2052–2062, 2019.
481
+
482
+ Scott Fujimoto, David Meger, Doina Precup, Ofir Nachum, and Shixiang Shane Gu. Why should I trust you, Bellman? the Bellman error is a poor replacement for value error. In *Proceedings of the International* Conference on Machine Learning (ICML), 2022.
483
+
484
+ Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. EMaQ: Expected-max Qlearning operator for simple yet effective offline and online rl. In *Proceedings of the International Conference* on Machine Learning (ICML), volume 139, pp. 3682–3691. PMLR, 18–24 Jul 2021.
485
+
486
+ Wonjoon Goo and Scott Niekum. You only evaluate once: a simple baseline algorithm for offline rl. In Proceedings of the Conference on Robot Learning (CoRL), 2021.
487
+
488
Caglar Gulcehre, Sergio Gomez, Jakub Sygnowski, Ziyu Wang, Tom Le Paine, Konrad Zolna, Razvan Pascanu, Yutian Chen, and Matt Hoffman. Addressing extrapolation error in deep offline reinforcement learning. In *Offline Reinforcement Learning Workshop at Neural Information Processing Systems (NeurIPS)*,
489
+ 2020.
490
+
491
+ Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. In *Proceedings* of the International Conference on Learning Representations (ICLR), 2017.
492
+
493
+ Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In *Proceedings of the International* Conference on Learning Representations (ICLR), 2014.
494
+
495
+ J. Kober and J. Peters. Policy search for motor primitives in robotics. *Machine Learning*, 84:171–203, 2011.
496
+
497
+ Ilya Kostrikov, Rob Fergus, Jonathan Tompson, and Ofir Nachum. Offline reinforcement learning with Fisher divergence critic regularization. In *Proceedings of the International Conference on Machine Learning* (ICML), 2021.
498
+
499
+ Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit q-learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.
500
+
501
+ Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2020.
502
+
503
+ Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement Learning, pp. 45–73, 2012.
504
+
505
+ Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. *arXiv*, 2020.
506
+
507
+ Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
508
+
509
+ Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In *Proceedings of the International Conference on Machine Learning (ICML)*, 2016.
510
+
511
+ Ashvin Nair, Abhishek Gupta, Murtaza Dalal, and Sergey Levine. AWAC: Accelerating online reinforcement learning with offline datasets. *arXiv*, arXiv:2006.09359, 2020.
512
+
513
+ Gerhard Neumann and Jan Peters. Fitted q-iteration by advantage weighted regression. In *Advances in* Neural Information Processing Systems (NeurIPS), 2008.
514
+
515
+ Takayuki Osa, Voot Tangkaratt, and Masashi Sugiyama. Hierarchical reinforcement learning via advantageweighted information maximization. In *Proceedings of the International Conference on Learning Representations (ICLR)*, 2019.
516
+
517
+ Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In *Proceedings of the International Conference on Machine Learning (ICML)*, 2007.
518
+
519
Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2.
520
+
521
+ In *Advances in Neural Information Processing Systems (NeurIPS)*, 2019.
522
+
523
+ John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In *Proceedings of the International Conference* on Learning Representations (ICLR), 2016.
524
+
525
Matthew Smith, Herke van Hoof, and Joelle Pineau. An inference-based policy gradient method for learning options. In *Proceedings of the International Conference on Machine Learning (ICML)*, 2018.
526
+
527
+ Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2015.
528
+
529
+ Richard S. Sutton and Andrew G. Barto. *Reinforcement Learning: An Introduction*. The MIT Press, Second edition, 2018.
530
+
531
+ E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ
532
+ International Conference on Intelligent Robots and Systems, pp. 5026–5033, 2012.
533
+
534
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2017.
535
+
536
+ Qing Wang, Jiechao Xiong, Lei Han, peng sun, Han Liu, and Tong Zhang. Exponentially weighted imitation learning for batched historical data. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2018.
537
+
538
+ Zhendong Wang, Jonathan J Hunt, and Mingyuan Zhou. Diffusion policies as an expressive policy class for offline reinforcement learning. In *Proceedings of the International Conference on Learning Representations* (ICLR), 2023.
539
+
540
+ Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Tim Hertweck, Thomas Lampe, Noah Siegel, Nicolas Heess, and Martin Riedmiller. Compositional transfer in hierarchical reinforcement learning. In *Proceedings of Robotics: Science and Systems (R:SS)*, 2020.
541
+
542
+ Markus Wulfmeier, Dushyant Rao, Roland Hafner, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Michael Neunert, Dhruva Tirumala, Noah Siegel, Nicolas Heess, and Martin Riedmiller. Data-efficient hindsight off-policy option learning. In Proceedings of the International Conference on Machine Learning
543
+ (ICML), 2021.
544
+
545
+ Shangtong Zhang and Shimon Whiteson. DAC: The double actor-critic architecture for learning options. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
546
+
547
+ Wenxuan Zhou, Sujay Bajracharya, and David Held. PLAS: Latent action space for offline reinforcement learning. In *Proceedings of the Conference on Robot Learning (CoRL)*, volume 155, pp. 1719–1735, 2020.
548
+
549
+ ## A Applicability Of The Gating Policy
550
+
551
+ In the proposed algorithm, we employ a gating policy that determines the value of the latent variable as follows:
552
+
553
$$z=\arg\operatorname*{max}_{z^{\prime}}Q_{w}(s,\mu_{\theta}(s,z^{\prime})),\tag{20}$$

where µθ(s, z′) represents the deterministic sub-policy, and Qw(s, a) is the approximated Q-function. While this gating policy appears specific to the case where Qπ(s, a) is maximized, it is applicable to other objective functions such as Aπ(s, a), exp(Qπ(s, a)), and exp(Aπ(s, a)). The advantage function is defined as Aπ(s, a) = Qπ(s, a) − Vπ(s). Because the state-value function Vπ(s) is independent of the action, we can obtain the following equation:
562
+
563
$$\arg\max_{\mathbf{a}}Q^{\pi}(\mathbf{s},\mathbf{a})=\arg\max_{\mathbf{a}}\left(Q^{\pi}(\mathbf{s},\mathbf{a})-V^{\pi}(\mathbf{s})\right)\tag{21}$$
$$=\arg\max_{\mathbf{a}}A^{\pi}(\mathbf{s},\mathbf{a}).\tag{22}$$

Thus, we can rewrite the gating policy as

$$\mathbf{z}=\arg\max_{\mathbf{z}^{\prime}}Q_{\mathbf{w}}(\mathbf{s},\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{s},\mathbf{z}^{\prime}))\tag{23}$$
$$=\arg\max_{\mathbf{z}^{\prime}}A_{\mathbf{w}}(\mathbf{s},\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{s},\mathbf{z}^{\prime})).\tag{24}$$
572
+
573
Similarly, the exponential function exp(·) is monotonically increasing. Thus, the maximizer of Qπ(s, a) is also the maximizer of exp(Qπ(s, a)). Consequently, we can also rewrite the gating policy as

$$\mathbf{z}=\arg\max_{\mathbf{z}^{\prime}}Q_{\mathbf{w}}(\mathbf{s},\mathbf{\mu_{\theta}}(\mathbf{s},\mathbf{z}^{\prime}))\tag{25}$$
$$=\arg\max_{\mathbf{z}^{\prime}}\exp\left(Q_{\mathbf{w}}(\mathbf{s},\mathbf{\mu_{\theta}}(\mathbf{s},\mathbf{z}^{\prime}))\right)\tag{26}$$
$$=\arg\max_{\mathbf{z}^{\prime}}A_{\mathbf{w}}(\mathbf{s},\mathbf{\mu_{\theta}}(\mathbf{s},\mathbf{z}^{\prime}))\tag{27}$$
$$=\arg\max_{\mathbf{z}^{\prime}}\exp\left(A_{\mathbf{w}}(\mathbf{s},\mathbf{\mu_{\theta}}(\mathbf{s},\mathbf{z}^{\prime}))\right).\tag{28}$$

Because we use this gating policy, the gating policy in our implementation is deterministic.
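A minimal sketch of this deterministic gating step is given below; it assumes a critic `q_net(s, a)` and a latent-conditioned actor `actor(s, z)` (illustrative names) and simply enumerates the one-hot latent codes.

```python
import torch

def select_latent_and_action(actor, q_net, state, num_latents):
    # Enumerate all one-hot latent codes z' and score mu_theta(s, z') with the critic.
    # `state` is assumed to be a tensor of shape (obs_dim,).
    z_all = torch.eye(num_latents)                 # (|Z|, |Z|) one-hot codes
    s_all = state.expand(num_latents, -1)          # repeat the state for each z'
    actions = actor(s_all, z_all)                  # mu_theta(s, z') for all z'
    q_values = q_net(s_all, actions).squeeze(-1)   # Q_w(s, mu_theta(s, z'))
    best = torch.argmax(q_values)                  # deterministic gating (Eq. 20)
    return z_all[best], actions[best]
```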
580
+
581
+ ## B Derivation Of The Variational Lower Bound
582
+
583
+ We employed the variational lower bound in Equation 8 to derive the objective function for the proposed method. Here, we provide a detailed derivation, which was omitted in the main manuscript. We denote the true distribution induced by the policy πθ(a|s) as p(·), and the distribution that approximates the true distribution is denoted as q(·). The KL divergence between q(x) and p(x) is defined as
584
+
585
$$D_{\mathrm{KL}}\big(q(\mathbf{x})||p(\mathbf{x})\big)=\int q(\mathbf{x})\log\frac{q(\mathbf{x})}{p(\mathbf{x})}d\mathbf{x}.\tag{29}$$

Based on the above notation, the log-likelihood log πθ(ai|si) can be written as follows:

$$\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i})=\int q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i})d\mathbf{z}\tag{30}$$
$$=\int q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\big(\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i},\mathbf{z})+\log p(\mathbf{z}|\mathbf{s}_{i})-\log p(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\big)d\mathbf{z}\tag{31}$$
$$=\int q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\log\frac{q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}{p(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}d\mathbf{z}-\int q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\log\frac{q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}{p(\mathbf{z}|\mathbf{s}_{i})}d\mathbf{z}+\int q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i},\mathbf{z})d\mathbf{z}\tag{32}$$
$$=D_{\mathrm{KL}}\big(q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})||p(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})\big)-D_{\mathrm{KL}}\big(q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})||p(\mathbf{z}|\mathbf{s}_{i})\big)+\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}\left[\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i},\mathbf{z})\right].\tag{33}$$

In the first line, we consider marginalization over z; as log πθ(a|s) is independent of the latent variable z, the equality in the first line holds. Because DKL(q(z|s, a)||p(z|s, a)) ≥ 0, we can obtain a variant of the variational lower bound of the conditional log-likelihood:

$$\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i})\geq-D_{\mathrm{KL}}\big(q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})||p(\mathbf{z}|\mathbf{s}_{i})\big)+\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}\left[\log\pi_{\theta}(\mathbf{a}_{i}|\mathbf{s}_{i},\mathbf{z})\right].\tag{34}$$
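For a discrete latent variable with a uniform prior p(z|s), the bound in Equation 34 can be evaluated in closed form by enumerating z. The sketch below assumes a categorical posterior parameterized by logits and an externally supplied log-likelihood term; how log πθ(a|s, z) is parameterized is left abstract here.

```python
import math
import torch
import torch.nn.functional as F

def discrete_lower_bound(posterior_logits, log_likelihood_per_z):
    """Evaluate the bound in Eq. 34 for a discrete latent z with a uniform prior p(z|s).

    posterior_logits:      (batch, |Z|) logits of q_phi(z|s, a)
    log_likelihood_per_z:  (batch, |Z|) values of log pi_theta(a|s, z) for every z
    """
    num_latents = posterior_logits.shape[-1]
    log_q = F.log_softmax(posterior_logits, dim=-1)
    q = log_q.exp()
    # KL(q_phi(z|s,a) || Uniform(|Z|)) = sum_z q * (log q + log |Z|)
    kl = (q * (log_q + math.log(num_latents))).sum(dim=-1)
    # E_{z ~ q}[log pi_theta(a|s, z)], computed exactly by enumerating z
    recon = (q * log_likelihood_per_z).sum(dim=-1)
    return (recon - kl).mean()
```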
597
+
598
Table 8: Effect of the dimensionality of the discrete latent variable. WK = Walker2d.

|                   | infoDMPO, \|Z\| = 4 | infoDMPO, \|Z\| = 8 | infoDMPO, \|Z\| = 16 | infoDMPO, \|Z\| = 32 |
|-------------------|---------------------|---------------------|----------------------|----------------------|
| pen-human-v0      | 75.7 ± 18.9         | 94.8 ± 16.5         | 75.0 ± 17.5          | 86.7 ± 12.4          |
| WK-expert-v2      | 99.7 ± 17.9         | 112.1 ± 0.4         | 108.8 ± 6.8          | 106.4 ± 10.2         |
| WK-med.-expert-v2 | 89.1 ± 25.7         | 110.1 ± 0.7         | 96.0 ± 17.0          | 109.9 ± 0.6          |
| WK-med.-replay-v2 | 81.6 ± 4.5          | 86.7 ± 3.2          | 85.4 ± 3.7           | 86.3 ± 3.1           |
| WK-med.-v2        | 81.8 ± 2.5          | 85.0 ± 0.8          | 69.9 ± 28.3          | 84.3 ± 1.0           |
607
+
608
+ ## C Proof Of Contraction Of The Latent-Max-Q Operator
609
+
610
+ We consider operator Tz, which is given by
611
+
612
$$\mathcal{T}_{z}Q(s,a)=\mathbb{E}_{s^{\prime}}\left[r(s,a)+\gamma\max_{z^{\prime}}Q(s^{\prime},\mu(s^{\prime},z^{\prime}))\right].\tag{35}$$

To prove the contraction of Tz, we use the infinity norm given by

$$\|Q_{1}-Q_{2}\|_{\infty}=\max_{s\in\mathcal{S},a\in\mathcal{A}}|Q_{1}(s,a)-Q_{2}(s,a)|,\tag{36}$$

where Q1 and Q2 are different estimates of the Q-function. We consider the infinity norm of the difference between the two estimates, Q1 and Q2, after applying operator Tz:

$$\|\mathcal{T}_{z}Q_{1}-\mathcal{T}_{z}Q_{2}\|_{\infty}\tag{37}$$
$$=\left\|\mathbb{E}_{s^{\prime}}\left[r(s,a)+\gamma\max_{z^{\prime}}Q_{1}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]-\mathbb{E}_{s^{\prime}}\left[r(s,a)+\gamma\max_{z^{\prime}}Q_{2}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]\right\|_{\infty}\tag{38}$$
$$=\left\|\gamma\,\mathbb{E}_{s^{\prime}}\left[\max_{z^{\prime}}Q_{1}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]-\gamma\,\mathbb{E}_{s^{\prime}}\left[\max_{z^{\prime}}Q_{2}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]\right\|_{\infty}\tag{39}$$
$$=\gamma\left\|\mathbb{E}_{s^{\prime}}\left[\max_{z^{\prime}}Q_{1}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]-\mathbb{E}_{s^{\prime}}\left[\max_{z^{\prime}}Q_{2}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]\right\|_{\infty}\tag{40}$$
$$=\gamma\left\|\mathbb{E}_{s^{\prime}}\left[\max_{z^{\prime}}Q_{1}(s^{\prime},\mu(s^{\prime},z^{\prime}))-\max_{z^{\prime}}Q_{2}(s^{\prime},\mu(s^{\prime},z^{\prime}))\right]\right\|_{\infty}\tag{41}$$
$$\leq\gamma\left\|\mathbb{E}_{s^{\prime}}\left[\|Q_{1}-Q_{2}\|_{\infty}\right]\right\|_{\infty}\tag{42}$$
$$\leq\gamma\,\|Q_{1}-Q_{2}\|_{\infty}.\tag{43}$$

The above relationship shows the contraction of operator Tz.
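As an illustrative sanity check (not part of the algorithm), the contraction of Tz can also be verified numerically on a random finite MDP with a fixed set of deterministic sub-policies; all quantities below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, Z, gamma = 5, 4, 3, 0.9

P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)   # transition probabilities
R = rng.random((S, A))                                          # rewards r(s, a)
mu = rng.integers(0, A, size=(S, Z))                            # fixed sub-policies mu(s, z)

def T(Q):
    # Latent-max-Q backup: (T Q)(s, a) = r(s, a) + gamma * E_s'[ max_z Q(s', mu(s', z)) ]
    v = np.max(Q[np.arange(S)[:, None], mu], axis=1)            # max_z Q(s, mu(s, z))
    return R + gamma * P @ v

Q1, Q2 = rng.random((S, A)), rng.random((S, A))
lhs = np.max(np.abs(T(Q1) - T(Q2)))
rhs = gamma * np.max(np.abs(Q1 - Q2))
print(lhs <= rhs + 1e-12)   # the contraction bound holds: prints True
```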
624
+
625
+ ## D Effect Of Dimensionality Of The Discrete Latent Variable
626
+
627
In our evaluation, we first examined the effect of the dimensionality of the discrete latent variable. The results are presented in Table 8. As shown, infoDMPO with |Z| = 8 demonstrated the best performance, while the performance with |Z| = 16 and |Z| = 32 is comparable. These results show that the policy performance is not very sensitive to the dimensionality of the latent variable. However, the performance with |Z| = 4 is relatively weak, indicating that the policy may not be sufficiently expressive when the dimensionality of the latent variable is too small. Because |Z| = 8 consistently provided satisfactory performance, |Z| = 8 was used in the subsequent evaluations.
628
+
629
+ ## E Comparison With Additional Baselines
630
+
631
We provide a comparison with additional baselines for the mujoco-v2 tasks in D4RL in Table 9. We present the results of MAPLE, which is a recent model-based offline algorithm that uses latent representations (Chen et al., 2021b). In addition, we provide the results of the decision transformer (Chen et al., 2021a) as a representative transformer-based method. Although these methods are well-known and state-of-the-art, we focused on model-free and non-transformer-based methods in the main manuscript. For each baseline method, we adopted the results reported in the original paper. DMPO and infoDMPO provide consistently better or comparable performance relative to these baseline methods, although our implementation of DMPO and infoDMPO does not employ techniques such as an ensemble of critics. This result indicates a significant effect of the policy structure in offline RL.

Table 9: Results on mujoco tasks using D4RL-v2 datasets. Average normalized scores over the last 10 test episodes and five seeds are shown. HC = HalfCheetah, HP = Hopper, WK = Walker2d. The gray text indicates performance lower than that of DMPO/infoDMPO. The bold text indicates the best performance.
633
+
634
| Dataset     | Task | MAPLE (paper) | Decision Transformer (paper) | DMPO (ours) | infoDMPO (ours) |
|-------------|------|---------------|------------------------------|-------------|-----------------|
| Med.-Expert | HC   | 63.5 ± 6.5    | 86.8 ± 1.3                   | 91.1 ± 3.4  | 91.4 ± 2.5      |
| Med.-Expert | HP   | 42.5 ± 4.1    | 107.6 ± 1.8                  | 78.4 ± 19.0 | 94.5 ± 14.9     |
| Med.-Expert | WK   | 73.8 ± 8.0    | 108.1 ± 0.2                  | 109.9 ± 0.4 | 110.1 ± 0.7     |
| Med.-Replay | HC   | 59.0 ± 0.6    | 36.6 ± 0.8                   | 45.2 ± 0.8  | 46.7 ± 0.6      |
| Med.-Replay | HP   | 87.5 ± 10.8   | 82.7 ± 7.0                   | 89.2 ± 8.1  | 98.5 ± 2.0      |
| Med.-Replay | WK   | 76.7 ± 3.8    | 66.6 ± 3.0                   | 82.1 ± 3.8  | 86.7 ± 3.2      |
| Med.        | HC   | 50.4 ± 1.9    | 42.6 ± 0.1                   | 47.5 ± 0.4  | 48.6 ± 0.4      |
| Med.        | HP   | 21.1 ± 1.2    | 67.6 ± 1.0                   | 71.2 ± 6.5  | 86.4 ± 7.6      |
| Med.        | WK   | 56.3 ± 10.6   | 74.0 ± 1.4                   | 79.4 ± 4.7  | 85.0 ± 0.8      |
651
+
652
+ ## F Hyperparameters And Implementation Details
653
+
654
Computational resource and license The experiments were run on Amazon Web Services and on workstations with NVIDIA RTX 3090 GPUs and Intel Core i9-10980XE CPUs at 3.0 GHz. We used the MuJoCo physics simulator (Todorov et al., 2012) under an institutional license, and later switched to the Apache-licensed release.
655
+
656
+ Software The software versions used in the experiments are listed below:
657
+ - Python 3.8
658
+ - Pytorch 1.10.0
659
- Gym 0.21.0
- MuJoCo 2.1.0
- mujoco-py 2.1.2.14

We used the author-provided implementations for TD3+BC2 and CQL3. DMPO, AWAC, mixAWAC, and IQL were implemented based on the author-provided implementation of TD3. For IQL, we used the hyperparameters provided in Kostrikov et al. (2022). To minimize the difference between DMPO and AWAC, we used a delayed update of the policy in both DMPO and AWAC. For simplicity, we did not use a regularization technique for the actor, such as the dropout layer used in Kostrikov et al. (2022), although the use of such techniques should further improve the performance. In our implementation of DMPO, the value of z is a part of the input to the actor network. Thus, different behaviors corresponding to different values of z are represented by the same actor network. The network architecture is illustrated in Figure 6.

Computation of the advantage function In DMPO, the policy is deterministic because both the gating policy π(z|s) and the sub-policies π(a|s, z) are deterministic. Thus, the state-value function is given by
662
+
663
$$V^{\pi}(\mathbf{s})=\operatorname*{max}_{\mathbf{z}}Q^{\pi}(\mathbf{s},\mu(\mathbf{s},\mathbf{z})).\tag{44}$$

Therefore, the advantage function is given by

$$A^{\pi}(s,a)=Q^{\pi}(s,a)-V^{\pi}(s)=Q^{\pi}(s,a)-\operatorname*{max}_{z}Q^{\pi}(s,\mu(s,z)).\tag{45}$$

In the policy update, we use the target actor in the second term in Equation 45. Thus, in our implementation, the advantage function is approximated as

$$A(\mathbf{s},\mathbf{a};\mathbf{w},\mathbf{\theta}^{\prime})=Q(\mathbf{s},\mathbf{a};\mathbf{w})-\operatorname*{max}_{\mathbf{z}}Q(\mathbf{s},\mathbf{\mu}_{\mathbf{\theta}^{\prime}}(\mathbf{s},\mathbf{z});\mathbf{w}).\tag{46}$$

2 https://github.com/sfujim/TD3_BC
3 https://github.com/young-geng/CQL
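A sketch of Equation 46 is given below; `q_net` denotes Q(·, ·; w), `target_actor` denotes µθ′, and the maximum over z is taken by enumerating the one-hot latent codes (the names are illustrative).

```python
import torch

@torch.no_grad()
def latent_max_value(q_net, target_actor, states, num_latents):
    # max_z Q(s, mu_theta'(s, z); w), evaluated by enumerating the one-hot codes
    values = []
    for k in range(num_latents):
        z = torch.zeros(states.shape[0], num_latents, device=states.device)
        z[:, k] = 1.0
        values.append(q_net(states, target_actor(states, z)))
    return torch.stack(values, dim=0).max(dim=0).values

def advantage(q_net, target_actor, states, actions, num_latents):
    # A(s, a; w, theta') = Q(s, a; w) - max_z Q(s, mu_theta'(s, z); w)   (Eq. 46)
    return q_net(states, actions) - latent_max_value(q_net, target_actor, states, num_latents)
```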
678
+
679
+ ![20_image_0.png](20_image_0.png)
680
+
681
+ (a) Computation for maximizing LML in Equation 11.
682
+
683
+ ![20_image_1.png](20_image_1.png)
684
+
685
(b) Computation for maximizing the sum over the dataset of Ez∼p(z)[log gψ(z|si, µθ(si, z))].
687
+
688
+ Figure 6: Connection between qϕ(z|s, a), µθ(s, z), and gψ(z|s, a) during training.
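The computation in Figure 6(b) can be sketched as follows; `posterior_net` plays the role of gψ and is assumed to output logits over the |Z| latent values (names and interfaces are illustrative).

```python
import torch
import torch.nn.functional as F

def infomax_term(actor, posterior_net, states, num_latents):
    """Sketch of the term maximized in Figure 6(b): for z ~ p(z) (uniform here),
    the classifier g_psi should recover z from (s, mu_theta(s, z))."""
    batch = states.shape[0]
    z_idx = torch.randint(0, num_latents, (batch,), device=states.device)  # z ~ p(z)
    z = F.one_hot(z_idx, num_latents).float()
    actions = actor(states, z)                  # mu_theta(s, z)
    logits = posterior_net(states, actions)     # logits of g_psi(z | s, mu_theta(s, z))
    # average log g_psi(z | s, mu_theta(s, z)) over the batch
    return -F.cross_entropy(logits, z_idx)
```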
689
+
690
+ Target smoothing in DMPO In DMPO, a policy is given by a mixture of deterministic sub-policies, where a sub-policy is selected in a deterministic manner, similar to that in Equation 3. Thus, the mixture policy in this framework is deterministic. As reported in Fujimoto & Gu (2021), the use of a deterministic policy may lead to overfitting of the critic to narrow peaks. Because our policy is deterministic, we also employed a technique called target policy smoothing used in TD3. Thus, the target value in Equation 17 is modified as follows:
691
+
692
$$y_{i}=r_{i}+\gamma\operatorname*{max}_{\mathbf{z}^{\prime}\in\mathcal{Z}}\operatorname*{min}_{j=1,2}Q_{\mathbf{w}_{j}^{\prime}}(\mathbf{s}^{\prime},\mathbf{\mu}_{\mathbf{\theta}^{\prime}}(\mathbf{s}^{\prime},\mathbf{z}^{\prime})+\epsilon_{\mathrm{clip}}),\tag{47}$$
694
+
695
+ where ϵclip is given by
696
+
697
$$\epsilon_{\mathrm{clip}}=\operatorname*{min}(\operatorname*{max}(\epsilon,-c),c)\quad{\mathrm{where}}\quad\epsilon\sim{\mathcal{N}}(0,\sigma),\tag{48}$$

and the constant c defines the range of the noise.
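A sketch of the smoothed target in Equations 47 and 48 is given below; the twin target critics, the target actor, and the default noise parameters are illustrative assumptions (the perturbed action is not re-clipped to the valid action range here, for brevity).

```python
import torch

@torch.no_grad()
def smoothed_target(q1_target, q2_target, target_actor, rewards, next_states,
                    num_latents, gamma=0.99, sigma=0.2, c=0.5):
    # Eq. 47-48: clipped Gaussian noise is added to each sub-policy's action,
    # the twin target critics are minimized, and the maximum over z' is taken.
    values = []
    for k in range(num_latents):
        z = torch.zeros(next_states.shape[0], num_latents, device=next_states.device)
        z[:, k] = 1.0
        a = target_actor(next_states, z)
        eps = torch.clamp(sigma * torch.randn_like(a), -c, c)   # epsilon_clip
        a = a + eps
        values.append(torch.min(q1_target(next_states, a), q2_target(next_states, a)))
    return rewards + gamma * torch.stack(values, dim=0).max(dim=0).values
```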
700
Techniques for Antmaze tasks In LAPO (Chen et al., 2022), several techniques are used to stabilize the training of the value functions. Suppose the state-value function is approximated with Vwv(s), parameterized by a vector wv, and that the Q-function is approximated with two models Qwj(s, a) for j = 1, 2. The state-value function is updated by minimizing
703
+
704
$$\mathcal{L}_{v}(\mathbf{w}_{v})=\sum_{(\mathbf{s}_{i},\mathbf{a}_{i})\in\mathcal{D}}\left\|\tilde{y}_{i}-V_{\mathbf{w}_{v}}(\mathbf{s}_{i})\right\|^{2},\tag{49}$$
706
+
707
where the target value y˜i is the clipped target value computed as
708
+
709
$$\tilde{y}_{i}=\operatorname*{max}\left(\operatorname*{min}\left(y_{i},v_{\operatorname*{max}}\right),v_{\operatorname*{min}}\right),\tag{50}$$
711
and yi is computed as
712
+
713
$$y_{i}=c\operatorname*{min}_{j=1,2}Q_{\mathbf{w}_{j}}(\mathbf{s}_{i},\mathbf{a}_{i})+(1-c)\operatorname*{max}_{j=1,2}Q_{\mathbf{w}_{j}}(\mathbf{s}_{i},\mathbf{a}_{i}),\tag{51}$$
718
+
719
where c is a constant; we used c = 0.7 as in Chen et al. (2022). The minimum and maximum target values vmin and vmax are computed as
720
+
721
$$v_{\min}=\frac{1}{1-\gamma}\min_{r_{i}\in\mathcal{D}}r_{i},\tag{52}$$
$$v_{\max}=\frac{1}{1-\gamma}\max_{r_{i}\in\mathcal{D}}r_{i}.\tag{53}$$
723
+ The Q-function is updated by minimizing the following objective function:
724
+
725
+ $$\mathcal{L}_{q}(\mathbf{w})=\sum_{(\mathbf{s}_{i},\mathbf{a}_{i},r_{i},\mathbf{s}_{i}^{\prime})\in\mathcal{D}}\left\|r_{i}+\gamma V_{\mathbf{w}_{v}}(\mathbf{s}_{i}^{\prime})-Q_{\mathbf{w}}(\mathbf{s}_{i},\mathbf{a}_{i})\right\|^{2}.\tag{54}$$
726
+
727
For the antmaze tasks, we also used these techniques in DMPO.
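A sketch of the clipped value target in Equations 50–53 is given below; `q1` and `q2` denote the two Q-function models and `r_min`/`r_max` the minimum and maximum rewards in the dataset (illustrative names).

```python
import torch

@torch.no_grad()
def clipped_value_target(q1, q2, states, actions, r_min, r_max, gamma=0.99, c=0.7):
    # Eq. 51: blend the min and max of the twin critics with weight c
    q_a, q_b = q1(states, actions), q2(states, actions)
    y = c * torch.min(q_a, q_b) + (1.0 - c) * torch.max(q_a, q_b)
    # Eq. 52-53: value bounds derived from the reward range of the dataset
    v_min, v_max = r_min / (1.0 - gamma), r_max / (1.0 - gamma)
    # Eq. 50: clip the blended target to [v_min, v_max]
    return torch.clamp(y, min=v_min, max=v_max)
```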
728
+
729
Implementation of mixAWAC The difference between mixAWAC and AWAC is the policy representation.

For mixAWAC, we used a Gaussian mixture policy. The discrete latent variable is sampled from a categorical distribution, and the corresponding Gaussian component policy is used to sample the action. As in DMPO, the latent variable is represented as a one-hot vector, and the neural network that represents the Gaussian components takes the state and the one-hot vector as its input. The key part of the implementation is how to sample from a categorical distribution in a differentiable manner. We used the Gumbel-max trick for this purpose (Maddison et al., 2014; Jang et al., 2017; Maddison et al., 2017). The Gumbel-max trick is often used to learn discrete latent variables in VAEs (Kingma & Welling, 2014). In our implementation, the activation function of the last layer of the gating policy is the softmax function, and the discrete latent variable is sampled with the Gumbel-max trick based on the output of the gating policy.
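One way to realize this differentiable sampling step is PyTorch's built-in Gumbel-softmax relaxation; the sketch below uses the straight-through variant so that the forward pass yields a one-hot latent code. The surrounding names are illustrative, not the exact interfaces of our implementation.

```python
import torch
import torch.nn.functional as F

def sample_latent(gating_logits, tau=1.0, hard=True):
    """Differentiable sampling of the discrete latent for the Gaussian mixture policy.
    `gating_logits` are the pre-softmax outputs of the gating network. With hard=True,
    the forward pass returns a one-hot vector while gradients flow through the relaxed
    (softmax) sample, i.e. the straight-through Gumbel-softmax estimator."""
    return F.gumbel_softmax(gating_logits, tau=tau, hard=hard)

# Example usage: one-hot latent codes for a batch of gating outputs
codes = sample_latent(torch.randn(4, 8))   # shape (4, 8), each row is one-hot
```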
732
+
733
Implementation of LP-AWAC As in our implementation of AWAC, mixAWAC, and DMPO, double-clipped Q-learning is employed in LP-AWAC. In addition to the Q-function, the state-value function Vw(s) is trained by minimizing the mean squared error:

$${\mathcal{L}}_{\mathrm{LP-AWAC}}(\mathbf{w})=\sum_{(\mathbf{s}_{i},\mathbf{a}_{i})\in{\mathcal{D}}}\left\|V_{\mathbf{w}}(\mathbf{s}_{i})-\operatorname*{min}_{j=1,2}Q_{\mathbf{w}_{j}}(\mathbf{s}_{i},\mathbf{a}_{i})\right\|_{2}^{2}.\tag{55}$$
739
+
740
+ In LP-AWAC, the conditional VAE is trained using advantage weighting. Denoting the approximated posterior and likelihood by qϕ(z|s, a) and pψ(a|s, z), respectively, the encoder and decoder are trained by maximizing the following objective function:
741
+
742
$$\mathcal{L}_{\mathrm{cvae}}(\mathbf{\phi},\mathbf{\psi})=\sum_{(\mathbf{s}_{i},\mathbf{a}_{i},r_{i},\mathbf{s}_{i}^{\prime})\in\mathcal{D}}W(\mathbf{s}_{i},\mathbf{a}_{i})\left(-D_{\mathrm{KL}}\big(q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})||p(\mathbf{z}|\mathbf{s}_{i})\big)+\mathbb{E}_{\mathbf{z}\sim q_{\mathbf{\phi}}(\mathbf{z}|\mathbf{s}_{i},\mathbf{a}_{i})}\left[\log p_{\mathbf{\psi}}(\mathbf{a}_{i}|\mathbf{s}_{i},\mathbf{z})\right]\right),\tag{56}$$
743
+
744
+ where W(si, ai) is the weight for advantage weighting. In our experiments, we used the normalized advantage weighting in Equation 19. Then, the deterministic latent actor µθ(s) is trained to output the latent variable z by maximizing the expected Q-value:
745
+
746
$${\mathcal{L}}_{\mathrm{latent-actor}}(\mathbf{\theta})=\sum_{(\mathbf{s}_{i},\mathbf{a}_{i})\in{\mathcal{D}}}Q_{\mathbf{w}_{1}}\left(\mathbf{s}_{i},g_{\psi}(\mathbf{s}_{i},\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{s}_{i}))\right),\tag{57}$$
747
+
748
where gψ(s, z) is the decoder. The objective function for learning the continuous latent variable in LP-AWAC is very similar to that of DMPO in Equation 11 for learning the discrete latent variable. When considering the deterministic latent actor µθ(s) in LP-AWAC as a gating policy that approximately solves arg maxz Q(s, z), LP-AWAC can be considered a variant of DMPO that uses a continuous latent variable. Thus, the difference between DMPO and LP-AWAC reflects the difference between discrete and continuous latent variables in our framework.
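A sketch of the latent-actor update in Equation 57 (written as a loss for gradient descent) is given below; `latent_actor`, `decoder`, and `q_net` correspond to µθ, gψ, and Qw1 and are illustrative names.

```python
import torch

def latent_actor_loss(q_net, decoder, latent_actor, states):
    # Eq. 57 with the sign flipped for gradient descent: the latent actor proposes
    # a latent code, the decoder maps it back to an action, and the critic scores it.
    z = latent_actor(states)               # deterministic latent mu_theta(s)
    actions = decoder(states, z)           # g_psi(s, z)
    return -q_net(states, actions).mean()  # maximize Q  <=>  minimize -Q
```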
751
+
752
| Category        | Hyperparameter                             | Value                                             |
|-----------------|--------------------------------------------|---------------------------------------------------|
| Hyperparameters | Optimizer                                  | Adam                                              |
| Hyperparameters | Critic learning rate                       | 3e-4 (mujoco-v2, adroit) / 2e-4 (Antmaze)         |
| Hyperparameters | Actor learning rate                        | 3e-4 (mujoco-v2, adroit) / 2e-4 (Antmaze)         |
| Hyperparameters | Posterior learning rate                    | 3e-4 (mujoco-v2, adroit) / 2e-4 (Antmaze)         |
| Hyperparameters | Mini-batch size                            | 256                                               |
| Hyperparameters | Discount factor                            | 0.99                                              |
| Hyperparameters | Target update rate                         | 5e-3                                              |
| Hyperparameters | Policy noise                               | 0.2                                               |
| Hyperparameters | Policy noise clipping                      | (-0.5, 0.5)                                       |
| Hyperparameters | Policy update frequency                    | 2                                                 |
| Architecture    | Critic hidden dim                          | 256                                               |
| Architecture    | Critic hidden layers                       | 2 (mujoco-v2, adroit) / 3 (Antmaze)               |
| Architecture    | Critic activation function                 | ReLU                                              |
| Architecture    | Actor hidden dim                           | 256                                               |
| Architecture    | Actor hidden layers                        | 2 (mujoco-v2, adroit) / 3 (Antmaze)               |
| Architecture    | Actor activation function                  | ReLU                                              |
| Architecture    | Posterior hidden dim                       | 256                                               |
| Architecture    | Posterior hidden layers                    | 2 (mujoco-v2, adroit) / 3 (Antmaze)               |
| Architecture    | Posterior activation function              | ReLU                                              |
| DMPO            | Score scaling α                            | 5.0 (human, Antmaze) / 10.0 (mujoco-v2)           |
| infoDMPO        | Learning rate of the posterior for infomax | 3e-6 (Adroit) / 5e-7 (mujoco-v2) / 5e-7 (Antmaze) |
| infoDMPO        | Score scaling α                            | 5.0 (Antmaze, HP-med.-expert) / 10.0 (others)     |
778
+
779
+ Table 10: Hyperparameters of DMPO & infoDMPO.
780
Number of updates In the pen-human-v0, hammer-human-v0, door-human-v0, and relocate-human-v0 tasks, the number of samples contained in the dataset is significantly smaller than that for the other datasets. While the datasets for the mujoco tasks contain approximately 1 million samples, the numbers of samples in the adroit-human tasks were as follows: pen-human-v0: 4,950 samples, hammer-human-v0: 11,264 samples, door-human-v0: 6,703 samples, and relocate-human-v0: 9,906 samples. Thus, in the pen-human-v0, hammer-human-v0, door-human-v0, and relocate-human-v0 tasks, we updated the policy 10,000 times, whereas for the other tasks, we updated the policy 1 million times. This number of policy updates was applied to all methods.

Hyperparameters Tables 10–14 provide the hyperparameters used in the experiments. Regarding λ in infoDMPO, the first and second terms in Equation (13) are maximized separately. Thus, we implicitly set the value of λ by setting different learning rates for the first and second terms in Equation (13). The learning rate for the first term in Equation (13) was fixed to 3e-4. We tested learning rates in {1e-7, 5e-7, 1e-6, 3e-6} for the second term in Equation (13) and report the best results in the paper.
782
+
783
| Category        | Hyperparameter             | Value |
|-----------------|----------------------------|-------|
| Hyperparameters | Optimizer                  | Adam  |
| Hyperparameters | Critic learning rate       | 3e-4  |
| Hyperparameters | Actor learning rate        | 3e-4  |
| Hyperparameters | Mini-batch size            | 1024  |
| Hyperparameters | Discount factor            | 0.99  |
| Hyperparameters | Target update rate         | 5e-3  |
| Hyperparameters | Policy update frequency    | 2     |
| Hyperparameters | Score scaling α            | 10.0  |
| Architecture    | Critic hidden dim          | 256   |
| Architecture    | Critic hidden layers       | 2     |
| Architecture    | Critic activation function | ReLU  |
| Architecture    | Actor hidden dim           | 256   |
| Architecture    | Actor hidden layers        | 2     |
| Architecture    | Actor activation function  | ReLU  |
799
+
800
+ Table 11: Hyperparameters of AWAC.
801
+
802
| Category        | Hyperparameter             | Value       |
|-----------------|----------------------------|-------------|
| Hyperparameters | Optimizer                  | Adam        |
| Hyperparameters | Critic learning rate       | 3e-4        |
| Hyperparameters | Actor learning rate        | 3e-4        |
| Hyperparameters | Mini-batch size            | 256         |
| Hyperparameters | Discount factor            | 0.99        |
| Hyperparameters | Target update rate         | 5e-3        |
| Hyperparameters | Policy noise               | 0.2         |
| Hyperparameters | Policy noise clipping      | (-0.5, 0.5) |
| Hyperparameters | Policy update frequency    | 2           |
| Hyperparameters | α                          | 2.5         |
| Architecture    | Critic hidden dim          | 256         |
| Architecture    | Critic hidden layers       | 2           |
| Architecture    | Critic activation function | ReLU        |
| Architecture    | Actor hidden dim           | 256         |
| Architecture    | Actor hidden layers        | 2           |
| Architecture    | Actor activation function  | ReLU        |
820
+
821
+ Table 12: Hyperparameters of TD3+BC. The default hyperparameters in the TD3+BC GitHub are used.
822
+
823
+ | Hyperparameter | Value |
824
+ |-----------------------------------|---------------|
825
+ | Optimizer | Adam |
826
+ | Critic learning rate | 3e-4 |
827
+ | Actor learning rate | 3e-5 |
828
+ | Mini-batch size | 256 |
829
+ | Discount factor | 0.99 |
830
+ | Target update rate | 5e-3 |
831
+ | Target entropy | -1·Action Dim |
832
+ | Entropy in Q target | False |
833
+ | Lagrange | False |
834
+ | α | 10 |
835
+ | Pre-training steps | 40e3 |
836
+ | Num sampled actions (during eval) | 10 |
837
+ | Num sampled actions (logsumexp) | 10 |
838
+ | Critic hidden dim | 256 |
839
+ | Critic hidden layers | 3 |
840
+ | Critic activation function | ReLU |
841
+ | Actor hidden dim | 256 |
842
+ | Actor hidden layers | 3 |
843
+ | Actor activation function | ReLU |
844
+
845
+ Table 13: Hyperparameters of CQL. The default hyperparameters in the CQL GitHub are used.
846
+
847
| Category            | Hyperparameter                 | Value                                       |
|---------------------|--------------------------------|---------------------------------------------|
| IQL hyperparameters | Optimizer                      | Adam                                        |
| IQL hyperparameters | Critic learning rate           | 3e-4                                        |
| IQL hyperparameters | Actor learning rate            | 3e-4                                        |
| IQL hyperparameters | Mini-batch size                | 256                                         |
| IQL hyperparameters | Discount factor                | 0.99                                        |
| IQL hyperparameters | Target update rate             | 5e-3                                        |
| IQL hyperparameters | Expectile                      | 0.7 (mujoco-v2) / 0.7 (adroit) / 0.9 (antmaze) |
| IQL hyperparameters | Advantage scale                | 3.0 (mujoco-v2) / 0.5 (adroit) / 10.0 (antmaze) |
| IQL hyperparameters | Actor learning rate scheduling | cosine                                      |
| Architecture        | Critic hidden dim              | 256                                         |
| Architecture        | Critic hidden layers           | 2                                           |
| Architecture        | Critic activation function     | ReLU                                        |
| Architecture        | Actor hidden dim               | 256                                         |
| Architecture        | Actor hidden layers            | 3                                           |
| Architecture        | Actor activation function      | ReLU                                        |
864
+
865
Table 14: Hyperparameters of IQL. The default hyperparameters of IQL reported in Kostrikov et al. (2022) are used.
868
+
869
zkRCp4RmAF/zkRCp4RmAF_meta.json ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 25,
6
+ "ocr_stats": {
7
+ "ocr_pages": 0,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 0,
10
+ "ocr_engine": "none"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 25,
14
+ "code": 0,
15
+ "table": 13,
16
+ "equations": {
17
+ "successful_ocr": 74,
18
+ "unsuccessful_ocr": 11,
19
+ "equations": 85
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }