\section{Introduction}
With the explosive growth of Internet of Things (IoT) devices, wireless communication networks (WCNs) increasingly face the challenge of allocating finite transmit power and bandwidth to maximize system utility~\cite{xu2021survey}. Accordingly, advanced radio resource management schemes need to be designed to serve numerous wireless access devices. Massive multiple-input multiple-output (MIMO) and multi-user transmission are two key enablers for supporting larger-scale connectivity in future WCNs~\cite{he2021survey}. Therefore, a number of works have investigated beamforming (BF) design~\cite{he2015energy}, power allocation (PA)~\cite{yu2020power}, and user scheduling (US)~\cite{ammar2021distributed}, among others.
Generally speaking, US and BF (US-BF) design are two fundamental problems in multi-user WCNs, \textred{which are implemented at the media access control layer~\cite{dimic2005on} and the physical layer~\cite{zhang2009networked}, respectively. Unfortunately, these two issues are tightly coupled, which makes the joint problem difficult to solve.} Therefore, they are generally investigated separately in the existing literature, such as BF design with a given user set~\cite{shi2011iteratively} or US optimization combined with PA (US-PA)~\cite{dong2019energy}. \textred{For example, the authors of~\cite{yu2007transmitter} and~\cite{huh2012network} only consider the BF problem, where the uplink-downlink duality theory is adopted to tackle the non-convex transceiver design problem. The authors of~\cite{huang2020hybrid} and~\cite{huang2021multihop} also solve the BF problem for RIS-empowered Terahertz communications with deep reinforcement learning methods. To further improve the performance of WCNs, cross-layer design is becoming increasingly popular~\cite{fu2014a}. The authors of~\cite{yoo2006on} investigate the US-BF problem by sequentially performing the semi-orthogonal user selection (SUS) algorithm for US optimization and the zero-forcing BF (ZFBF) algorithm for BF design. The authors of~\cite{chen2017low} propose a low-complexity US-BF scheme for 5G MIMO non-orthogonal multiple-access systems, but the non-convex problem is tackled by separating it into two subproblems, namely a BF scheme and a greedy min-power US scheme, instead of solving them jointly. The authors of~\cite{zhang2017sumrate} also discuss cross-layer optimization with statistical channel information for the massive MIMO scenario, by tackling US and BF individually.}
Meanwhile, existing research on coordinated multi-user communication is mainly based on the conventional Shannon theory~\cite{shannon1948mathematical}, which assumes that an extremely low decoding error probability can be achieved with sufficiently long blocklength transmission. However, in ultra-reliable low latency communication (URLLC) scenarios, such as factory automation and remote surgery, this long-blocklength condition may not be satisfied~\cite{nasir2020resource}. To take the impact of finite blocklength transmission into account, the achievable rate has been expressed as a complicated function of the received signal-to-noise ratio (SNR), the blocklength, and the decoding error probability, and it is smaller than the Shannon rate~\cite{polyanskiy2010channel}. Consequently, the optimization problem in scenarios with finite blocklength transmission is more challenging~\cite{he2020beamforming}. To solve the problem of interest, the algorithms designed in the aforementioned references are mainly based on convex optimization theory~\cite{bertsekas2003convex}. However, such model-driven optimization algorithms usually suffer from high computational complexity, which may restrict their practical applicability in WCNs.
Recently, deep neural networks (DNNs) have emerged as an effective tool to solve such challenging radio resource management problems in WCNs~\cite{she2021tutorial}. Different from model-driven optimization algorithms running independently for each instance, DNNs are trained with abundant data to learn the mapping between WCN environments and radio resource optimization policies. Hence, the main computational cost of DNNs is shifted to the offline training stage, and only simple mathematical operations are needed in the online optimization stage. The work in~\cite{li2021multicell} shows that DNNs can achieve competitive performance with lower computational complexity than existing model-driven optimization algorithms. A similar conclusion has been demonstrated in~\cite{xia2020deep}, where DNNs are used for BF design of multi-user multiple-input single-output (MISO) downlink systems, but the size of the considered problem is rather small. \textred{The authors of~\cite{kaushik2021} regard resource allocation problems in the field of wireless communications as generalized assignment problems (GAP), and propose a novel deep unsupervised learning approach to solve GAP in a time-efficient manner. The authors of~\cite{liang2020towards} focus on solving the PA problem by ensembling several deep neural networks. This is also an unsupervised approach and achieves competitive results compared with conventional methods. However, the core network is specifically designed for the power control problem and cannot be extended to US.} In addition, these DNN-based architectures~\cite{liang2020towards,kaushik2021,li2021multicell,xia2020deep} are mainly inherited from image processing tasks and are not tailored to radio resource management problems; in particular, they fail to exploit the prior topology knowledge in WCNs. The numerical results in~\cite{chen2021gnn} illustrate that the performance of DNNs degrades dramatically with increasing WCN size.
To achieve better scalability of learning-based radio resource management, a potential approach is to incorporate the network topology into the learning of neural networks, namely graph neural networks (GNNs)~\cite{he2021overview}. For instance, the authors of~\cite{cui2019spatial} combined DNNs with the geographic locations of transceivers, and thereby proposed a spatial convolution model for wireless link scheduling problems with hundreds of nodes. The authors of~\cite{eisen2020optimal} proposed a random edge graph neural network (REGNN) for PA optimization on graphs formed by the interference links within WCNs. The work in~\cite{shen2019graph} demonstrates that GNNs are insensitive to the permutation of data, such as channel state information (CSI). Further, this work was extended in~\cite{shen2020graph} to solve both PA and BF problems via message passing graph neural networks (MPGNNs), which have the ability to generalize to large-scale problems while enjoying high computational efficiency. However, the designs proposed in~\cite{shen2019graph,shen2020graph} only investigated continuous optimization problems with simple constraints. Discrete optimization problems with complicated constraints remain an open issue and need to be further considered. Fortunately, the application of primal-dual learning in~\cite{he2021gblinks} provides an effective way to solve complicated constrained radio resource management problems.
\textred{Based on the above considerations, this work studies the joint US-BF optimization problem in the multi-user MISO downlink system. Unlike conventional methods, the US-BF design is achieved simultaneously by solving a single optimization problem, instead of separate ones. Moreover, to improve the computational efficiency and utilize historical network data, we propose a GNN-based Joint US-BF (J-USBF) learning algorithm. The main contributions and advantages of this work are summarized as follows:}
\begin{itemize}
\item \textred{A joint US-BF optimization problem for multi-user MISO downlink systems is formulated with the goal of maximizing the number of scheduled users subject to user rate and base station (BS) power constraints. To solve this discrete-continuous variable optimization problem, an SCA-based US-BF (SCA-USBF) algorithm is first designed to pave the way for the J-USBF algorithm.}
\item \textred{A J-USBF learning algorithm is developed by combining the joint user scheduling and power allocation network (JEEPON) model with the BF analytical solution. In particular, we first formulate the investigated problem as a graph optimization problem through wireless graph representation, then design a GNN-based JEEPON model to learn the US-PA strategy on graphs, and utilize the BF analytical solution to achieve joint US-BF design. Meanwhile, a primal-dual learning framework is developed to train JEEPON in an unsupervised manner.}
\item Finally, numerical experiments are conducted to validate the effectiveness of the proposed algorithms. Compared with the SCA-USBF algorithm, the J-USBF learning algorithm achieves close performance with higher computational efficiency, and enjoys generalizability to dynamic WCN scenarios.
\end{itemize}
The remainder of this paper is organized as follows. Section~\rmnum{2} introduces a challenging radio resource management problem in the multi-user MISO downlink system. Section~\rmnum{3} proposes the SCA-USBF algorithm for solving the investigated problem. Section~\rmnum{4} designs the JEEPON model and provides a primal-dual learning framework to train it in an unsupervised manner. Numerical results are presented in Section~\rmnum{5}. Finally, conclusions are drawn in Section~\rmnum{6}.
\textbf{\textcolor{black}{$\mathbf{\mathit{Notations}}$}}: Throughout this paper, lowercase and uppercase letters (such as $a$ and $A$) represent scalars, while the bold counterparts $\mathbf{a}$ and $\mathbf{A}$ represent vectors and matrices, respectively. $\left|\cdot\right|$ indicates the absolute value of a complex scalar or the cardinality of a set. $\left\|\cdot\right\|_{0}$, $\left\|\cdot\right\|_{1}$, and $\left\|\cdot\right\|_{2}$ denote the $\ell_{0}$-norm, $\ell_{1}$-norm, and $\ell_{2}$-norm, respectively. The superscripts $(\cdot)^{T}$, $(\cdot)^{H}$, and $(\cdot)^{-1}$ denote the transpose, conjugate transpose, and inverse of a matrix, respectively. $\mathbb{R}$, $\mathbb{R}^{+}$, and $\mathbb{C}$ are the sets of real, nonnegative real, and complex numbers, respectively. Finally, $\mathbb{R}^{M\times1}$ and $\mathbb{C}^{M\times1}$ represent $M$-dimensional real and complex column vectors, respectively.
\section{System Model and Problem Formulation}
In this work, we consider a multi-user MISO downlink system taking reliability and delivery latency into account, where a BS with $N$ antennas serves $K$ single-antenna users\footnote{\textred{Due to the complexity of the discussed problem, the single-cell scenario is considered in this paper. Research on the more complex multi-cell scenario, where inter-cell interference should be considered, is left for future work.}}. For simplicity, let $\mathcal{K}=\{1,2,\cdots,K\}$ and $\mathcal{S}=\{1,2,\cdots,K^{\ast}\}\subseteq\mathcal{K}$ be the sets of candidate users and scheduled users, respectively, where $K^{\ast}\leq{K}$. The channel between user $k$ and the BS is denoted as $\mathbf{h}_{k}\in\mathbb{C}^{N\times1}$. Let $p_{k}\geq{0}$ and $\mathbf{w}_{k}\in\mathbb{C}^{N\times1}$ represent the transmit power and unit-norm BF vector used by the BS for user $k$, respectively. Thus, the received signal at user $k$ is given by
\begin{equation}\label{Eq.(01)}
y_{k}=\sum\limits_{l\in\mathcal{S}}\sqrt{p_{l}}\mathbf{h}_{k}^{H}\mathbf{w}_{l}s_{l}+n_{k},
\end{equation}
where $s_{l}$ is the normalized data symbol intended for the $l$th user, and $n_{k}\sim\mathcal{CN}(0,\sigma_{k}^{2})$ denotes the additive white Gaussian noise at user $k$ with zero mean and variance $\sigma_{k}^{2}$. For notational convenience, we define $\overline{\mathbf{h}}_{k}=\frac{\mathbf{h}_{k}}{\sigma_{k}}$ and the downlink signal-to-interference-plus-noise ratio (SINR) of user $k$ as
\begin{equation}\label{Eq.(02)}
\overrightarrow{\gamma}_{k}=\frac{p_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}
{\sum\limits_{l\neq k,l\in\mathcal{S}}p_{l}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{l}\right|^{2}+1}.
\end{equation}
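For a concrete illustration, the SINR in~\eqref{Eq.(02)} can be evaluated numerically. The following is a minimal sketch (illustrative only, not part of the proposed algorithms), where the rows of \texttt{H} are the noise-normalized channels $\overline{\mathbf{h}}_{k}$ and the noise power is normalized to one:

```python
import numpy as np

def downlink_sinr(H, W, p, sched):
    """Downlink SINR of Eq. (2) for each scheduled user.

    H[k]  : noise-normalized channel h_bar_k, shape (K, N), complex
    W[l]  : unit-norm beamforming vector w_l, shape (K, N), complex
    p[l]  : downlink transmit power for user l
    sched : indices of the scheduled user set S
    """
    G = np.abs(H.conj() @ W.T) ** 2            # G[k, l] = |h_bar_k^H w_l|^2
    sinr = {}
    for k in sched:
        interference = sum(p[l] * G[k, l] for l in sched if l != k)
        sinr[k] = p[k] * G[k, k] / (interference + 1.0)   # noise power = 1
    return sinr
```

With orthogonal channels and matched beams, the interference term vanishes and each user's SINR reduces to its own received power.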
To satisfy extreme delay requirements, the finite blocklength transmission regime is adopted in this paper. The results in~\cite{polyanskiy2010channel} show that the achievable rate is a function not only of the received SNR (or SINR), but also of the decoding error probability $\epsilon$ and the finite transmission blocklength $n$. Accordingly, the achievable rate of user $k$ with finite blocklength transmission is given by\footnote{The proposed algorithms are also suitable for solving similar optimization problems where the user rate is based on the Shannon capacity formula.}
\begin{equation}\label{Eq.(03)}
R(\overrightarrow{\gamma}_{k})=C(\overrightarrow{\gamma}_{k})-\vartheta\sqrt{V(\overrightarrow{\gamma}_{k})},
\end{equation}
where $C(\overrightarrow{\gamma}_{k})=\ln(1+\overrightarrow{\gamma}_{k})$ denotes the Shannon capacity, $\vartheta=\frac{Q^{-1}(\epsilon)}{\sqrt{n}}$, $Q^{-1}(\cdot)$ is the inverse of the Gaussian Q-function $Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}\mathrm{exp}(-\frac{t^{2}}{2})dt$, and $V(\overrightarrow{\gamma}_{k})$ denotes the channel dispersion, which is defined as
\begin{equation}\label{Eq.(04)}
V(\overrightarrow{\gamma}_{k})=1-\frac{1}{(1+\overrightarrow{\gamma}_{k})^{2}}.
\end{equation}
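For intuition, the finite-blocklength rate in~\eqref{Eq.(03)} and~\eqref{Eq.(04)} can be sketched in a few lines. This is an illustrative implementation of the normal approximation (rates in nats per channel use), not part of the proposed algorithms:

```python
from math import log, sqrt
from statistics import NormalDist

def achievable_rate(sinr, eps=1e-5, n=256):
    """Normal-approximation rate R(g) = C(g) - (Q^{-1}(eps)/sqrt(n))*sqrt(V(g)), Eq. (3)."""
    C = log(1.0 + sinr)                         # Shannon capacity ln(1+g)
    V = 1.0 - 1.0 / (1.0 + sinr) ** 2           # channel dispersion, Eq. (4)
    q_inv = NormalDist().inv_cdf(1.0 - eps)     # Gaussian Q^{-1}(eps)
    return C - (q_inv / sqrt(n)) * sqrt(V)
```

As the blocklength $n$ grows, the penalty term vanishes and the rate approaches the Shannon capacity, consistent with the discussion above.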
The target of this work is to maximize the number of users belonging to the scheduled user set $\mathcal{S}\subseteq\mathcal{K}$ subject to the constraints of per-user minimum rate requirement and BS maximum power budget. Specifically, one needs to carefully select the scheduled user set $\mathcal{S}$, and design BF vectors with reasonable transmit power\footnote{For ultra-dense or large-scale connectivity URLLC scenarios, it may be a better choice to schedule as many users as possible while satisfying reliability and latency requirements. Accordingly, we aim to maximize the cardinality of the scheduled user set in this work.}. To this end, the joint US-BF optimization problem is formulated as follows\footnote{\textred{In our experiments, we obtain perfect CSI via link-level simulation. However, it is indeed hard to estimate CSI in real communication systems~\cite{du2021robust}. Although there are pilot-based and blind channel estimation methods, perfect CSI cannot be obtained due to estimation errors, which may lead to performance deterioration. Statistical CSI, including RSRP (Reference Signal Receiving Power), RSRQ (Reference Signal Receiving Quality), RSSI (Received Signal Strength Indicator), etc., might be helpful under this condition. We would like to further investigate this aspect of the joint US-BF problem in future work.}}
\begin{subequations}\label{Eq.(05)}
\begin{align}
&\max_{\{p_{k},\mathbf{w}_{k}\}}\left|\mathcal{S}\right|,\label{Eq.(05a)}\\
\mathrm{s.t.}~&r_{k}\leq R(\overrightarrow{\gamma}_{k}),~\left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{S},\label{Eq.(05b)}\\
&\sum\limits_{k\in\mathcal{S}}p_{k}\leq{P},\textred{~p_{k}\geq{0},\forall{k}\in\mathcal{S},}\label{Eq.(05c)}
\end{align}
\end{subequations}
where $\left|\mathcal{S}\right|$ is the cardinality of set $\mathcal{S}$, $r_{k}$ is the per-user minimum rate requirement, and $P$ denotes the power budget of the BS. Problem~\eqref{Eq.(05)} is a mixed-integer continuous-variable programming problem that involves a discrete objective function and two continuous-variable constraints on power and unit-norm BF vectors. It is difficult to obtain the globally optimal solution of problem~\eqref{Eq.(05)}, or even a near-optimal one. Although the greedy-heuristic-search-based US-BF (G-USBF) algorithm in Appendix A could be considered as a possible effective solution, it incurs extremely high computational complexity, especially for large-scale WCNs. In the sequel, the SCA-based US-BF optimization algorithm and the GNN-based learning algorithm are successively proposed to solve problem~\eqref{Eq.(05)}.
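To make the combinatorial difficulty concrete, the following is a hypothetical sketch of a greedy scheduling heuristic in the spirit of the greedy baseline mentioned above (the actual algorithm is given in Appendix A). Here \texttt{rate\_fn} is an assumed oracle, not defined in this paper, that returns the achievable rate of user \texttt{u} when the set \texttt{trial} is scheduled, e.g., after BF and power allocation for that set:

```python
import numpy as np

def greedy_schedule(H, r_min, rate_fn):
    """Hypothetical greedy user-scheduling sketch: try to add users in order
    of channel strength, keeping a candidate only if every scheduled user
    still meets its minimum rate r_min. rate_fn(trial, u) is an assumed
    oracle returning user u's achievable rate when `trial` is scheduled.
    """
    sched = []
    for k in np.argsort(-np.linalg.norm(H, axis=1)):   # strongest users first
        trial = sched + [int(k)]
        if all(rate_fn(trial, u) >= r_min for u in trial):
            sched = trial
    return sched
```

Each candidate set requires a full BF and power-allocation evaluation inside \texttt{rate\_fn}, which is what makes such greedy search expensive for large-scale WCNs.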
\section{Design of the SCA-USBF Algorithm}
In this section, we focus on designing an effective optimization algorithm for problem~\eqref{Eq.(05)} from the perspective of successive convex approximation (SCA) optimization theory. Since problem~\eqref{Eq.(05)} is non-convex, the first step is to transform it into a tractable form via some basic mathematical transformations. \textred{One idea is to apply the uplink-downlink duality theory~\cite{schubert2004solution} to equivalently transform the downlink problem~\eqref{Eq.(05)} into the virtual uplink dual problem~\eqref{Eq.(06)}~\footnote{\textred{Similar to formula~\eqref{Eq.(01)}, the virtual uplink input-output relationship can be expressed as $\mathbf{y}=\sum\limits_{k\in{\mathcal{S}}}\sqrt{q_{k}}\overline{\mathbf{h}}_{k}s_{k}+\mathbf{n}$, where $\mathbf{y}\in\mathbb{C}^{N\times1}$ is the virtual uplink received signal at the BS, $s_k$ is the virtual uplink normalized data symbol intended for the $k$th user, and $\mathbf{n}\in\mathbb{C}^{N\times1}$ is the additive white Gaussian noise with distribution $\mathcal{CN}(0,\mathbf{I})$. For the virtual uplink system, $\mathbf{w}_{k}$ is used as the receive vector for the $k$th user. Thus, the virtual uplink received SINR of the $k$th user can be calculated via the received signal $\mathbf{w}_{k}^{H}\mathbf{y}$.}}, i.e.,}
\begin{subequations}\label{Eq.(06)}
\begin{align}
&\max_{\left\{q_{k},\mathbf{w}_{k}\right\}}\left|\mathcal{S}\right|,\label{Eq.(06a)}\\
\mathrm{s.t.}~&r_{k}\leq{R}(\overleftarrow{\gamma}_{k}), \left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{S},\label{Eq.(06b)}\\
&\sum\limits_{k\in\mathcal{S}}q_{k}\leq{P},\textred{~q_{k}\geq{0},\forall{k}\in\mathcal{S},}\label{Eq.(06c)}
\end{align}
\end{subequations}
where $q_{k}$ is the virtual uplink transmit power of user $k$, and $\overleftarrow{\gamma}_{k}$ represents the corresponding virtual uplink received SINR, i.e.,
\begin{equation}\label{Eq.(07)}
\overleftarrow{\gamma}_{k}=\frac{q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}{\sum\limits_{l\neq{k},l\in\mathcal{S}}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}+1}.
\end{equation}
\textred{Note that the definition (\ref{Eq.(07)}) calculates SINRs only for the scheduled user set $\mathcal{S}$, implying that the SINRs of the unscheduled users are all zero in theory. For convenience, we further introduce a new SINR definition ${\overleftarrow{\gamma}}_{k}^{(\mathcal{K})}$, which is directly calculated based on the candidate user set $\mathcal{K}$, i.e.,}
{\color{red}\begin{equation}\label{Eq.(08)}
\overleftarrow{\gamma}_{k}^{(\mathcal{K})}=\frac{q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}{\sum\limits_{l\neq{k},l\in\mathcal{K}}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}+1}.
\end{equation}}
\textred{To clearly indicate whether a user is scheduled or not, we introduce $\kappa_{k}$ as a binary indicator of the user state, with $\kappa_{k}=1$ if user $k$ is scheduled and $\kappa_{k}=0$ otherwise, $k\in\mathcal{K}$. Therefore, $\kappa_{k}=1$ also means that the minimum rate constraint is met for the $k$th user, i.e., $r_{k}\le{R}(\overleftarrow{\gamma}_{k})$ and $q_{k}\geq{0}$ hold. However, $\kappa_{k}=0$ does not mean that $R\left(\overleftarrow{\gamma}_{k}\right) = 0$ and $q_{k}=0$ always hold. For instance, consider a candidate user set $\mathcal{K}$ and a scheduled user set $\mathcal{S}\subset\mathcal{K}$. The transmit power of the BS is not always precisely exhausted by the scheduled user set $\mathcal{S}$. For a user $k'$ from the remaining set $\mathcal{K}\setminus\mathcal{S}$, if the residual power cannot meet its minimum transmission power requirement, then we have $\kappa_{k'}=0$, $0<R({\overleftarrow{\gamma}_{k'}})<r_{k'}$, and $\left\|\mathbf{w}_{k'}\right\|_{2} = 1$. In such a circumstance, $\kappa_{k'}r_{k'}\le{R}(\overleftarrow{\gamma}_{k'})$ holds, but $\kappa_{k'}=0$, i.e., $k'\notin\mathcal{S}$. Meanwhile, for user $k\in\mathcal{S}$, $\overleftarrow{\gamma}_{k}>\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}>0$ and $\kappa_{k}=1$ hold. For user $k\notin\mathcal{S}$ with $k\in\mathcal{K}$ and $k\ne{k'}$, $\overleftarrow{\gamma}_{k}=\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}=0$ and $\kappa_{k}=0$ hold. Letting $\bm{\kappa}=[\kappa_{1},\kappa_{2},\cdots,\kappa_{k},\cdots,\kappa_{K}]^{T}$, problem~\eqref{Eq.(06)} is approximately written as}
{\color{red}\begin{subequations}\label{Eq.(09)}
\begin{align}
&\max_{\left\{\kappa_{k},q_{k},\mathbf{w}_{k}\right\}} \left\|\bm{\kappa}\right\|_{0},\label{Eq.(09a)}\\
\mathrm{s.t.}~&\kappa_{k}\in\{0,1\},\forall{k}\in\mathcal{K},\label{Eq.(09b)}\\
&\kappa_{k}r_{k}\leq{R}(\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}), \left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{K},\label{Eq.(09c)}\\
&\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},~q_{k}\geq{0},\forall k\in\mathcal{K}.\label{Eq.(09d)}
\end{align}
\end{subequations}}
\textred{As discussed above, for user $k\in\mathcal{S}$, $\overleftarrow{\gamma}_{k}>\overleftarrow{\gamma}_{k}^{\mathcal{(K)}}>0$ holds, and for user $k\notin\mathcal{S}$, $\kappa_{k}=0$ holds. Therefore, (\ref{Eq.(09c)}) is a stricter constraint than (\ref{Eq.(06b)}), and the optimal value of problem (\ref{Eq.(06)}) is an upper bound on that of problem (\ref{Eq.(09)}).}
The goal of problem~\eqref{Eq.(09)} is to maximize the number of scheduled users under the given constraints. Further, constraints~\eqref{Eq.(09b)} and~\eqref{Eq.(09c)} can be equivalently transformed into a continuous constraint and an SINR-form constraint~\cite{he2020beamforming}, respectively. Let $\widetilde{\gamma}_{k}>0$ be the minimum SINR associated with achieving the minimum achievable rate $r_k$ for the $k$th user. \textred{Thus, problem~\eqref{Eq.(09)} can be equivalently transformed into}
\begin{subequations}\label{Eq.(10)}
\begin{align}
&\max_{\{\kappa_{k},q_{k},\mathbf{w}_{k}\}} \left\|\bm{\kappa}\right\|_{0},\label{Eq.(10a)}\\
\mathrm{s.t.}~&0\leq\kappa_{k}\leq{1},\forall{k}\in\mathcal{K},\label{Eq.(10b)}\\
&\sum\limits_{k\in\mathcal{K}}\left(\kappa_{k}-\kappa_{k}^{2}\right)\leq{0},\label{Eq.(10c)}\\
&\kappa_{k}\widetilde{\gamma}_{k}\leq\overleftarrow{\gamma}_{k}^{(\mathcal{K})}, \left\|\mathbf{w}_{k}\right\|_{2}=1,\forall{k}\in\mathcal{K},\label{Eq.(10d)}\\
&\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},~q_{k}\geq{0},\forall{k}\in\mathcal{K}.\label{Eq.(10e)}
\end{align}
\end{subequations}
Constraints~\eqref{Eq.(10b)} and~\eqref{Eq.(10c)} ensure that $\kappa_{k}$ equals either one or zero, i.e., $\kappa_{k}\in\{0,1\}$, $\forall k\in\mathcal{K}$. According to~\cite[Proposition 2]{che2014joint}, strong Lagrangian duality holds for problem~\eqref{Eq.(10)}. Applying similar mathematical tricks to handle constraint~\eqref{Eq.(10c)}, \textred{problem~\eqref{Eq.(10)} is reformulated as follows}
\begin{subequations}\label{Eq.(11)}
\begin{align}
&\min_{\{\kappa_{k},q_{k},\mathbf{w}_{k}\}}-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+g\left(\bm{\kappa}\right)-h\left(\bm{\kappa}\right),\label{Eq.(11a)}\\
\mathrm{s.t.}~&~\eqref{Eq.(10b)},~\eqref{Eq.(10d)},~\eqref{Eq.(10e)},\label{Eq.(11b)}
\end{align}
\end{subequations}
where $\lambda$ is a proper nonnegative constant, and $g\left(\bm{\kappa}\right)$ and $h\left(\bm{\kappa}\right)$ are defined respectively as
\begin{subequations}\label{Eq.(12)}
\begin{align}
g\left(\bm{\kappa}\right)&\triangleq\lambda\sum\limits_{k\in\mathcal{K}}\kappa_{k}+\lambda\left(\sum\limits_{k\in\mathcal{K}}\kappa_{k}\right)^{2},\\
h\left(\bm{\kappa}\right)&\triangleq\lambda\sum\limits_{k\in\mathcal{K}}\kappa_{k}^{2}+\lambda\left(\sum\limits_{k\in\mathcal{K}}\kappa_{k}\right)^{2}.
\end{align}
\end{subequations}
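Note that the quadratic terms in~\eqref{Eq.(12)} cancel, so that
\begin{equation*}
g\left(\bm{\kappa}\right)-h\left(\bm{\kappa}\right)=\lambda\sum\limits_{k\in\mathcal{K}}\left(\kappa_{k}-\kappa_{k}^{2}\right),
\end{equation*}
which is exactly $\lambda$ times the left-hand side of constraint~\eqref{Eq.(10c)} and is nonnegative for $\kappa_{k}\in[0,1]$; hence the objective of problem~\eqref{Eq.(11)} penalizes violations of the relaxed binary constraint.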
Note that the optimal receive BF vector $\mathbf{w}_k^{(\ast)}$ for maximizing the uplink SINR $\overleftarrow{\gamma}_{k}^{(\mathcal{K})}$ of the $k$th user is the minimum mean square error (MMSE) filter with fixed $\{q_{k}\}$, i.e.,
\begin{equation}\label{Eq.(13)}
\mathbf{w}_{k}^{(\ast)}=\frac{\left(\mathbf{I}_{N}+\sum\limits_{l\in\mathcal{K}}q_{l}\overline{\mathbf{h}}_{l}\overline{\mathbf{h}}_{l}^{H}\right)^{-1}\overline{\mathbf{h}}_{k}}
{\left\|\left(\mathbf{I}_{N}+\sum\limits_{l\in\mathcal{K}}q_{l}\overline{\mathbf{h}}_{l}\overline{\mathbf{h}}_{l}^{H}\right)^{-1}\overline{\mathbf{h}}_{k}\right\|_{2}},
\end{equation}
where $\mathbf{I}_{N}$ denotes the $N$-by-$N$ identity matrix. For fixed $\{\mathbf{w}_{k}\}$, \textred{problem~\eqref{Eq.(11)} is rewritten as}
\begin{subequations}\label{Eq.(14)}
\begin{align}
&\min_{\{\kappa_{k},q_{k}\}}-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+g\left(\bm{\kappa}\right)-h\left(\bm{\kappa}\right),\label{Eq.(14a)}\\
\mathrm{s.t.}~&\widetilde{\gamma}_{k}\kappa_{k}-q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}+\varphi_{k}(\bm{\kappa},\mathbf{q})-\phi_{k}(\bm{\kappa},\mathbf{q})\leq{0},\forall{k}\in\mathcal{K},\label{Eq.(14b)}\\
&~\eqref{Eq.(10b)},~\eqref{Eq.(10e)},\label{Eq.(14c)}
\end{align}
\end{subequations}
where $\varphi_{k}\left(\bm{\kappa},\mathbf{q}\right)$ and $\phi_{k}\left(\bm{\kappa},\mathbf{q}\right)$ are defined as
\begin{subequations}\label{Eq.(15)}
\begin{align}
\varphi_{k}(\bm{\kappa},\mathbf{q})&\triangleq\frac{1}{2}\left(\widetilde{\gamma}_{k}\kappa_{k}+\sum\limits_{l\in\mathcal{K},l\neq k}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}\right)^{2},\\
\phi_{k}(\bm{\kappa},\mathbf{q})&\triangleq\frac{1}{2}\widetilde{\gamma}_{k}^{2}\kappa_{k}^{2}+\frac{1}{2}\left(\sum\limits_{l\in\mathcal{K},l\neq k}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2}\right)^{2}.
\end{align}
\end{subequations}
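The MMSE receive filter in~\eqref{Eq.(13)} admits a direct numerical sketch. The following illustrative implementation (not part of the formal algorithm description) computes the unit-norm filters for all users from the virtual uplink powers:

```python
import numpy as np

def mmse_filters(H, q):
    """Unit-norm MMSE receive filters of Eq. (13).

    H[k] : noise-normalized channel h_bar_k, shape (K, N), complex
    q    : virtual uplink powers, shape (K,)
    """
    K, N = H.shape
    # R = I_N + sum_l q_l * h_bar_l h_bar_l^H  (the same matrix serves all users)
    R = np.eye(N, dtype=complex) + (H.T * q) @ H.conj()
    U = np.linalg.solve(R, H.T)                    # column k: R^{-1} h_bar_k
    return (U / np.linalg.norm(U, axis=0)).T       # rows: unit-norm w_k
```

For fixed $\{q_{k}\}$, each returned row maximizes the virtual uplink SINR of its user, which is why the filters can be updated in closed form between power updates.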
Problem~\eqref{Eq.(14)} belongs to the class of difference-of-convex programming problems, since the objective function~\eqref{Eq.(14a)} and constraint~\eqref{Eq.(14b)} are differences of two convex functions. In the sequel, we resort to the classic SCA-based methods~\cite{nguyen2015achieving}. Using the convexity of the functions $h\left(\bm{\kappa}\right)$ and $\phi_{k}\left(\bm{\kappa},\mathbf{q}\right)$, we have
\begin{equation}\label{Eq.(16)}
\begin{split}
&h(\bm{\kappa})\geq\psi\left(\bm{\kappa}\right)\triangleq h(\bm{\kappa}^{(\tau)})+\sum\limits_{k\in\mathcal{K}}h'(\kappa_{k}^{(\tau)})(\kappa_{k}-\kappa_{k}^{(\tau)}),\\
&\phi_{k}(\bm{\kappa},\mathbf{q})\geq\varrho_{k}(\bm{\kappa},\mathbf{q})\triangleq\phi_{k}(\bm{\kappa}^{(\tau)},\mathbf{q}^{(\tau)})\\
&+\widetilde{\gamma}_{k}^{2}\kappa_{k}^{(\tau)}(\kappa_{k}-\kappa_{k}^{(\tau)})+\sum\limits_{l\in\mathcal{K},l\neq{k}}\rho_{k,l}(\mathbf{q}^{(\tau)})(q_{l}-q_{l}^{(\tau)}),
\end{split}
\end{equation}
where $h'\left(\kappa_{k}\right)\triangleq2\lambda\left(\kappa_{k}+\sum\limits_{l\in\mathcal{K}}\kappa_{l}\right)$, $\rho_{k,m}\left(\mathbf{q}\right)\triangleq\left|\overline{\mathbf{h}}_{m}^{H}\mathbf{w}_{k}\right|^{2}\sum\limits_{n\in\mathcal{K},n\neq k}q_{n}\left|\overline{\mathbf{h}}_{n}^{H}\mathbf{w}_{k}\right|^{2}$, and the superscript $\tau$ denotes the $\tau$th iteration of the SCA-USBF algorithm presented shortly. From the aforementioned discussions, \textred{the convex approximation problem solved at the $(\tau+1)$th iteration of the proposed algorithm is given by}
\begin{subequations}\label{Eq.(17)}
\begin{align}
&\min_{\{\kappa_{k},q_{k}\}}-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+g(\bm{\kappa})-\psi(\bm{\kappa}),\label{Eq.(17a)}\\
\mathrm{s.t.}~&\widetilde{\gamma}_{k}\kappa_{k}-q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}+\varphi_{k}(\bm{\kappa},\mathbf{q})-\varrho_{k}(\bm{\kappa},\mathbf{q})\leq{0},\forall{k}\in\mathcal{K},\label{Eq.(17b)}\\
&~\eqref{Eq.(10b)},~\eqref{Eq.(10e)}.\label{Eq.(17c)}
\end{align}
\end{subequations}
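The surrogate objective in~\eqref{Eq.(17)} relies on the first-order lower bound $h(\bm{\kappa})\geq\psi(\bm{\kappa})$ from~\eqref{Eq.(16)}, which follows from the convexity of $h$ and is easy to verify numerically. The following is a small illustrative self-check (with an arbitrary value of $\lambda$):

```python
import numpy as np

lam = 1.0  # penalty weight lambda (illustrative value)

def h(kap):
    # h(k) = lam * sum_k k_k^2 + lam * (sum_k k_k)^2, a convex function
    return lam * np.sum(kap ** 2) + lam * np.sum(kap) ** 2

def psi(kap, kap_t):
    # First-order Taylor lower bound of h around kappa^(tau), Eq. (16),
    # using the gradient h'(k_k) = 2*lam*(k_k + sum_l k_l)
    grad = 2 * lam * (kap_t + np.sum(kap_t))
    return h(kap_t) + grad @ (kap - kap_t)
```

Convexity guarantees $h(\bm{\kappa})\geq\psi(\bm{\kappa})$ everywhere, with equality at $\bm{\kappa}=\bm{\kappa}^{(\tau)}$, which is the property the SCA iterations exploit.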
Based on the above mathematical transformations, the SCA-USBF is summarized in \textbf{Algorithm}~\ref{Alg.(1)}. In the description of \textbf{Algorithm}~\ref{Alg.(1)}, $\delta$ denotes the maximum permissible error, while $\upsilon^{(\tau)}$ and $\zeta^{(t)}$ denote the objective values of problem~\eqref{Eq.(11)} at the $\tau$th outer iteration and of problem~\eqref{Eq.(17)} at the $t$th inner iteration, respectively. \textred{Note that SCA-USBF is also suitable for problem scenarios based on the Shannon capacity formula: we just need to replace $R(\overleftarrow{\gamma}_{k})$ with $C(\overleftarrow{\gamma}_{k})$ in problems (\ref{Eq.(05)}), (\ref{Eq.(06)}), and (\ref{Eq.(09)}), and replace the minimum SINR $\widetilde{\gamma}_{k}$ in constraint~(\ref{Eq.(10d)}) for achieving the minimum achievable rate $r_k$ with $\widetilde{\gamma}'_{k} = 2^{r_k}-1$. }The convergence of SCA-USBF is guaranteed by the monotonic boundedness theory. \textred{To speed up the convergence of SCA-USBF, we can first filter out the users that meet constraints~\eqref{Eq.(05b)} and~\eqref{Eq.(05c)} under single-user communication with maximum ratio transmission (MRT) and full power transmission; thus, at least one user can be scheduled in such a circumstance.}
\begin{algorithm}[!ht]
\caption{The SCAUSBF Algorithm for Problem~\eqref{Eq.(10)}}\label{Alg.(1)}
\begin{algorithmic}[1]
\STATE Let $t=0$, $\tau=0$, $\lambda=10^{2}$ and $\delta=10^{-5}$. Initialize the BF vectors $\{\mathbf{w}_{k}^{(0)}\}$ and downlink power vectors $\{p_{k}^{(0)}\}$, such that constraints~\eqref{Eq.(05b)} and~\eqref{Eq.(05c)} are satisfied. \label{Alg.(11)}
\STATE Initialize $\zeta^{(0)}$ and $\upsilon^{(0)}$, calculate the downlink SINR $\overrightarrow{\gamma}_{k}$ via $\{p_{k}^{(0)},\mathbf{w}_{k}^{(0)}\}$ and Eq.~\eqref{Eq.(02)}, and obtain the uplink power vector $\mathbf{q}=[q_{1},\cdots,q_{K^{\ast}}]^{T}$ with
\begin{equation}\label{Eq.(18)}
\mathbf{q}=\bm{\Psi}^{-1}\mathbf{I}_{K^{\ast}},
\end{equation}
where $\mathbf{I}_{K^{\ast}}$ is the all-one vector with $K^{\ast}$ dimensions, and matrix $\bm{\Psi}$ is given by
\begin{equation}\label{Eq.(19)}
[\bm{\Psi}]_{k,l}=\left\{
\begin{aligned}
\frac{\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}{\overrightarrow{\gamma}_{k}},~k=l,\\
-\left|\overline{\mathbf{h}}_{l}^{H}\mathbf{w}_{k}\right|^{2},~k\neq{l}.
\end{aligned}
\right.
\end{equation}
\STATE Let $t\leftarrow{t+1}$. Solve problem~\eqref{Eq.(17)} to obtain $\{\kappa_{k}^{(t)},q_{k}^{(t)}\}$ and $\zeta^{(t)}$.\label{Alg.(13)}
\STATE If $\frac{\left|\zeta^{(t)}-\zeta^{(t-1)}\right|}{\zeta^{(t-1)}}\leq\delta$, go to Step~\ref{Alg.(15)}. Otherwise, go to Step~\ref{Alg.(13)}.\label{Alg.(14)}
\STATE Let $\tau\leftarrow\tau+1$, update $\{\mathbf{w}_{k}^{(\tau)}\}$ with $\{q_{k}^{(t)}\}$ and Eq.~\eqref{Eq.(13)}, and obtain the objective value $\upsilon^{(\tau)}$. If $\frac{\left|\upsilon^{(\tau)}-\upsilon^{(\tau-1)}\right|}{\upsilon^{(\tau-1)}}\leq\delta$, stop the iteration and go to Step~\ref{Alg.(16)}. Otherwise, go to Step~\ref{Alg.(13)}.\label{Alg.(15)}
\STATE Calculate the uplink SINR $\overleftarrow{\gamma}_{k}$ via $\{q_{k}^{(t)},\mathbf{w}_{k}^{(\tau)}\}$ and Eq.~\eqref{Eq.(07)}, and obtain the downlink power vector $\mathbf{p}=[p_{1},\cdots,p_{K^{\ast}}]^{T}$ with\label{Alg.(16)}
\begin{equation}\label{Eq.(20)}
\mathbf{p}=\mathbf{D}^{-1}\mathbf{I}_{K^{\ast}},
\end{equation}
where matrix $\mathbf{D}$ is given by
\begin{equation}\label{Eq.(21)}
[\mathbf{D}]_{k,l}=\left\{
\begin{aligned}
\frac{\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{k}\right|^{2}}{\overleftarrow{\gamma}_{k}},~k=l,\\
-\left|\overline{\mathbf{h}}_{k}^{H}\mathbf{w}_{l}\right|^{2},~k\neq{l}.
\end{aligned}
\right.
\end{equation}
\STATE Calculate the objective function value, then output the US, PA and BF vectors $\{\kappa_{k},p_{k},\mathbf{w}_{k}\}$.
\end{algorithmic}
\end{algorithm}
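The power-recovery steps of Algorithm 1 amount to solving a small linear system. A minimal sketch of the uplink case, Eqs. (18)-(19), assuming the rows of `Hbar` hold the noise-normalized channels and the rows of `W` the unit-norm beamformers:

```python
import numpy as np

def uplink_powers(Hbar, W, gamma):
    """Solve q = Psi^{-1} 1 (Eqs. (18)-(19)) for the virtual uplink powers.

    Hbar  : (K, N) noise-normalized channels hbar_k (rows).
    W     : (K, N) unit-norm beamformers w_k (rows).
    gamma : (K,) target SINRs.
    Row k of Psi q = 1 enforces q_k |hbar_k^H w_k|^2 / gamma_k
    minus user k's received interference equals the (normalized) noise.
    """
    K = Hbar.shape[0]
    G = np.abs(Hbar.conj() @ W.T) ** 2        # G[l, k] = |hbar_l^H w_k|^2
    Psi = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            Psi[k, l] = G[k, k] / gamma[k] if k == l else -G[l, k]
    return np.linalg.solve(Psi, np.ones(K))
```

By construction, solving $\bm{\Psi}\mathbf{q}=\mathbf{1}$ makes each user's virtual-uplink SINR exactly equal its target; Eqs. (20)-(21) recover the downlink powers in the same manner with the roles of the channel and beamformer indices swapped.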
\section{Design of The JUSBF Algorithm}
\textred{In this section, we inherit the transformation of problem~\eqref{Eq.(05)} into problem~\eqref{Eq.(10)}, in which the BF vector admits an analytical solution. We focus on the JUSBF learning algorithm, which outputs the joint USBF strategy. Specifically, we first introduce the graph representation method for single-cell WCNs, then design the JEEPON model to learn the USPA strategy, and finally combine it with the analytical BF solution to obtain the JUSBF learning algorithm, which is summarized as \textbf{Algorithm}~\ref{Alg.(03)}.} In the sequel, we focus on the design of JEEPON and the corresponding training framework.
\begin{algorithm}[!ht]
\caption{The JUSBF Learning Algorithm}\label{Alg.(03)}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{D}=\{\mathbf{h}_{k}\}$: Testing sample with $K$ users; \\
~\quad $\bm{\Theta}$: The trainable parameters of JEEPON. \\
\ENSURE The optimization strategy $\{\kappa_{k}^{(\ast)},q_{k}^{(\ast)},\mathbf{w}_{k}^{(\ast)}\}$ of sample $\mathcal{D}$.
\STATE Construct graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ for sample $\mathcal{D}$ via the graph representation module.
\STATE Input graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ to JEEPON and obtain the USPA strategy $\{\kappa_{k}^{(\ast)},q_{k}^{(\ast)}\}$.
\STATE Calculate the BF vectors $\{\mathbf{w}_{k}^{(\ast)}\}$ via Eq.~\eqref{Eq.(13)}, and output the strategy $\{\kappa_{k}^{(\ast)},q_{k}^{(\ast)},\mathbf{w}_{k}^{(\ast)}\}$.
\STATE Calculate the uplink SINR $\overleftarrow{\gamma}_{k}$ via $\{q_{k}^{(\ast)},\mathbf{w}_{k}^{(\ast)}\}$ and Eq.~\eqref{Eq.(07)}, and obtain the downlink power vector $\{p_{k}^{(\ast)}\}$ via Eq.~\eqref{Eq.(21)}.
\end{algorithmic}
\end{algorithm}
\subsection{Problem Transformation and Loss Function Definition}
Different from the proposed SCAUSBF, which alternately updates the BF vectors, in the sequel the BF vectors are regarded as intermediate variables determined by the virtual uplink power vector. Substituting~\eqref{Eq.(13)} into~\eqref{Eq.(08)}, the uplink received SINR of user $k$ is rewritten as
\begin{equation}\label{Eq.(22)}
\widehat{\gamma}_{k}=\frac{q_{k}\left|\overline{\mathbf{h}}_{k}^{H}\bm{\Lambda}^{-1}\overline{\mathbf{h}}_{k}\right|^{2}}{\sum\limits_{l\neq{k},l\in\mathcal{K}}q_{l}\left|\overline{\mathbf{h}}_{l}^{H}\bm{\Lambda}^{-1}\overline{\mathbf{h}}_{k}\right|^{2}+\left\|\bm{\Lambda}^{-1}\overline{\mathbf{h}}_{k}\right\|_{2}^{2}},
\end{equation}
where $\bm{\Lambda}=\mathbf{I}_{N}+\sum\limits_{k\in\mathcal{K}}q_{k}\overline{\mathbf{h}}_{k}\overline{\mathbf{h}}_{k}^{H}$. Thus, problem~\eqref{Eq.(10)} is formulated as follows
\begin{subequations}\label{Eq.(23)}
\begin{align}
&\max_{\{\kappa_{k},q_{k}\}} \sum\limits_{k\in\mathcal{K}}\kappa_{k},\label{Eq.(23a)}\\
\mathrm{s.t.}~&{0}\leq\kappa_{k}\leq{1},\forall{k}\in\mathcal{K},\label{Eq.(23b)}\\
&\sum\limits_{k\in\mathcal{K}}(\kappa_{k}-\kappa_{k}^{2})\leq{0},\label{Eq.(23c)}\\
&\kappa_{k}\widetilde{\gamma}_{k}\leq\widehat{\gamma}_{k},\forall{k}\in\mathcal{K},\label{Eq.(23d)}\\
&\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},~q_{k}\geq{0},\forall{k}\in\mathcal{K}.\label{Eq.(23e)}
\end{align}
\end{subequations}
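The uplink SINR of Eq. (22) can be evaluated directly from the channels and virtual uplink powers. A sketch assuming noise-normalized channel rows (a direct inverse is used here for clarity; Lemma 1 below removes it):

```python
import numpy as np

def uplink_sinr(Hbar, q):
    """Evaluate Eq. (22): uplink SINRs with the beamformers folded in via Lambda^{-1}.

    Hbar : (K, N) noise-normalized channels hbar_k (rows).
    q    : (K,) virtual uplink powers.
    """
    K, N = Hbar.shape
    # Lambda = I_N + sum_k q_k hbar_k hbar_k^H
    Lam = np.eye(N, dtype=complex) + (Hbar.T * q) @ Hbar.conj()
    Linv = np.linalg.inv(Lam)
    gam = np.zeros(K)
    for k in range(K):
        v = Linv @ Hbar[k]                           # Lambda^{-1} hbar_k
        sig = q[k] * np.abs(Hbar[k].conj() @ v) ** 2
        interf = sum(q[l] * np.abs(Hbar[l].conj() @ v) ** 2
                     for l in range(K) if l != k)
        gam[k] = sig / (interf + np.linalg.norm(v) ** 2)
    return gam
```

For a single user with an orthogonal channel, the expression collapses to the matched-filter SNR, which provides a quick sanity check.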
\textred{To facilitate the design of JEEPON, the violation-based Lagrangian relaxation method~\cite{fioretto2020predicting} is adopted to incorporate part of the constraints into the objective function, formulating problem~\eqref{Eq.(23)} as an unconstrained optimization problem. Observe that constraints~\eqref{Eq.(23b)} and~\eqref{Eq.(23e)} only involve individual optimization variables and can be satisfied by subsequent projection-based methods. For constraints~\eqref{Eq.(23c)} and~\eqref{Eq.(23d)}, we introduce the non-negative Lagrangian multipliers $\{\mu,\nu\in\mathbb{R}^{+}\}$ to capture how much the constraints are violated.} Thus, the partial Lagrangian relaxation function of problem~\eqref{Eq.(23)} is given by
\begin{equation}\label{Eq.(24)}
\begin{aligned}
\mathcal{L}(\bm{\kappa},\mathbf{q},\mu,\nu)=-\sum\limits_{k\in\mathcal{K}}\kappa_{k}+\mu\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}(\kappa_{k}-\kappa_{k}^{2})+\nu\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}(\kappa_{k}\widetilde{\gamma}_{k}-\widehat{\gamma}_{k}),
\end{aligned}
\end{equation}
where $\chi_{c}^{\geq}(x)=\max\{x,0\}$ is the violation degree function. Further, the Lagrangian dual problem of~\eqref{Eq.(23)} is formulated as
\begin{equation}\label{Eq.(25)}
\begin{aligned}
\max_{\{\mu,\nu\}}\,\min_{\{\kappa_{k},q_{k}\}}\mathcal{L}(\bm{\kappa},\mathbf{q},\mu,\nu).
\end{aligned}
\end{equation}
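A minimal sketch of the resulting training loss, assuming the achieved SINRs $\widehat{\gamma}_{k}$ are precomputed; the negative scheduling term reflects that problem (23) maximizes $\sum_{k}\kappa_{k}$ while the loss is minimized:

```python
import numpy as np

def lagrangian_loss(kappa, gamma_hat, gamma_min, mu, nu):
    """Violation-based partial Lagrangian of Eq. (24), normalized as Loss = L / K.

    kappa     : (K,) soft scheduling variables in [0, 1].
    gamma_hat : (K,) achieved uplink SINRs (Eq. (22)).
    gamma_min : (K,) minimum SINR requirements (gamma_tilde).
    mu, nu    : non-negative Lagrangian multipliers.
    """
    viol = lambda x: np.maximum(x, 0.0)                   # chi^{>=}: violation degree
    binary_pen = viol(kappa - kappa ** 2).sum()           # pushes kappa toward {0, 1}
    sinr_pen = viol(kappa * gamma_min - gamma_hat).sum()  # QoS constraint (23d)
    L = -kappa.sum() + mu * binary_pen + nu * sinr_pen
    return L / kappa.size
```

When all constraints hold, both penalty terms vanish and minimizing the loss is equivalent to maximizing the number of scheduled users.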
\textred{To update the trainable parameters of JEEPON, a primal-dual learning framework (PDLF) is proposed to train the model in an unsupervised manner, with the loss function defined as $\mathrm{Loss}=\mathcal{L}/K$. In the sequel, we focus on describing the architectures of JEEPON and the PDLF.}
\subsection{Graph Representation and Model Design}
\textred{WCNs can be naturally modeled as undirected/directed graphs depending on the topology structure, and as homogeneous/heterogeneous graphs depending on the types of communication links and user equipments (UEs)~\cite{he2021overview}. For notational convenience, a graph with node set $\mathcal{V}$ and edge set $\mathcal{E}$ is defined as $\mathcal{G}(\mathcal{V},\mathcal{E})$, where the feature vectors of node $v\in\mathcal{V}$ and edge $e_{u,v}\in\mathcal{E}$ (between node $u$ and node $v$) are represented as $\mathbf{x}_{v}$ and $\mathbf{e}_{u,v}$, respectively. In the graph representation of single-cell cellular networks, we can consider the UEs as nodes and the interfering links between different UEs as edges, as shown in Fig.~\ref{GraphStructure}. In general, the node and edge features of graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ mainly include the CSI and other environmental information, such as user weights and Gaussian noise. In order to reduce the dimensionality of the node and edge feature vectors, we consider using the orthogonality (modulus) of the CSI to represent channel gains and interference. Therefore, the features of node $v$ and edge $e_{u,v}$ are defined as $\mathbf{x}_{v}=|\overline{\mathbf{h}}_{v}^{H}\mathbf{h}_{v}|$ and $\mathbf{e}_{u,v}=|\overline{\mathbf{h}}_{u}^{H}\mathbf{h}_{v}|$, respectively.}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.5\columnwidth,keepaspectratio]{FigGraph.eps}
\caption{A wireless channel graph with four UEs.}
\label{GraphStructure}
\end{figure}
Following the completion of the WCN graph representation, we focus on the design of JEEPON to output the USPA strategy, where the optimization variables are carried in the representation vectors of the nodes. Specifically, JEEPON applies a message-passing-based graph convolutional network (GCN)~\cite{gilmer2017neural} to iteratively update the representation vector of node $v\in\mathcal{V}$ by aggregating features from its neighboring nodes and edges. Each graph convolution consists of two steps: first generating and collecting messages from the first-order neighborhood nodes and edges of node $v$, and then updating the representation vector of node $v$ with the aggregated messages. After $\ell$ graph convolutions, the representation vector of node $v$ captures the messages within its $\ell$-hop neighborhood. To be specific, the update rule of the $\ell$th GCN layer at node $v$ is formulated as
\begin{equation}\label{Eq.(26)}
\begin{aligned}
\mathbf{m}_{u,v}^{(\ell)}&=\mathbf{M}_{\theta}^{(\ell)}\left(\bm{\beta}_{u}^{(\ell-1)},\mathbf{x}_{u},\mathbf{e}_{u,v}\right),u\in\mathcal{N}_{v},\\
\mathbf{g}_{v}^{(\ell)}&=\mathbf{G}\left(F_{\mathrm{max}}\left(\{\mathbf{m}_{u,v}^{(\ell)}\}\right),F_{\mathrm{mean}}\left(\{\mathbf{m}_{u,v}^{(\ell)}\}\right)\right),v\in\mathcal{V},\\
\bm{\beta}_{v}^{(\ell)}&=\mathbf{U}_{\theta}^{(\ell)}\left(\bm{\beta}_{v}^{(\ell-1)},\mathbf{x}_{v},\mathbf{g}_{v}^{(\ell)}\right),v\in\mathcal{V},
\end{aligned}
\end{equation}
where $\mathcal{N}_{v}$ is the first-order neighborhood set of node $v$, $\bm{\beta}_{v}^{(\ell)}\triangleq[\kappa_{v},q_{v}]\in\mathbb{R}^{2}$ represents the pairwise optimization vector of node $v$ at the $\ell$th GCN layer, and $\bm{\beta}_{v}^{(0)}$ is initialized as an all-zero vector. Therefore, when the $\ell$th graph convolution operation is completed, the representation vector of node $v$ can be formulated as $[\bm{\beta}_{v}^{(\ell)},\mathbf{x}_{v}]$. Fig.~\ref{MessagePassingProcess} illustrates the state update process of node $v$ at the $\ell$th GCN layer. Here, $\mathbf{M}_{\theta}^{(\ell)}(\cdot)$ is a message construction function defined on each edge to generate the edge message $\mathbf{m}_{u,v}^{(\ell)}\in\mathbb{R}^{m}$ by combining the incoming node and edge features, where $m$ is the message dimension. $\mathbf{G}(\cdot)$ is a message aggregation function that uses the concatenation of the max function $F_{\mathrm{max}}(\cdot)$ and the mean function $F_{\mathrm{mean}}(\cdot)$ to gather the relevant edge messages $\{\mathbf{m}_{u,v}^{(\ell)}\mid u\in\mathcal{N}_{v}\}$ and output the aggregated message $\mathbf{g}_{v}^{(\ell)}\in\mathbb{R}^{2m}$. $\mathbf{U}_{\theta}^{(\ell)}(\cdot)$ is a state update function defined on each node, which updates the node representation through the aggregated message $\mathbf{g}_{v}^{(\ell)}$, the node feature $\mathbf{x}_{v}$ and the optimization vector $\bm{\beta}_{v}^{(\ell-1)}$. In JEEPON, the functions $\mathbf{M}_{\theta}^{(\ell)}(\cdot)$ and $\mathbf{U}_{\theta}^{(\ell)}(\cdot)$ are parameterized by different neural network modules.
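The update rule of Eq. (26) can be sketched as a single toy layer; here $\mathbf{M}_{\theta}$ and $\mathbf{U}_{\theta}$ are reduced to one linear map plus $\tanh$ each, and all dimensions are illustrative assumptions rather than the trained model's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class GCNLayer:
    """One message-passing layer in the spirit of Eq. (26) (toy dense maps)."""
    def __init__(self, beta_dim=2, x_dim=1, e_dim=1, m_dim=4):
        # M_theta: (beta_u, x_u, e_uv) -> message of dimension m
        self.Wm = rng.standard_normal((m_dim, beta_dim + x_dim + e_dim)) * 0.1
        # U_theta: (beta_v, x_v, g_v) -> new beta_v, with g_v of dimension 2m
        self.Wu = rng.standard_normal((beta_dim, beta_dim + x_dim + 2 * m_dim)) * 0.1

    def forward(self, beta, x, E, neighbors):
        """beta: (V, 2) node optimization vectors; x: (V, 1) node features;
        E: (V, V) scalar edge features; neighbors[v]: list of u in N_v."""
        new_beta = np.zeros_like(beta)
        for v in range(len(beta)):
            # message construction on each incoming edge (u, v)
            msgs = np.stack([np.tanh(self.Wm @ np.concatenate(
                                 [beta[u], x[u], [E[u][v]]]))
                             for u in neighbors[v]])
            # aggregation: concatenation of element-wise max and mean
            g = np.concatenate([msgs.max(axis=0), msgs.mean(axis=0)])
            # state update from old beta, node feature, aggregated message
            new_beta[v] = np.tanh(self.Wu @ np.concatenate([beta[v], x[v], g]))
        return new_beta
```

Stacking such layers lets each node's $[\kappa_{v},q_{v}]$ depend on its multi-hop interference neighborhood, which is the property JEEPON exploits.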
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\columnwidth,keepaspectratio]{FigMessagePassing.eps}
\caption{The state update process of node $v$ at the $\ell$th GCN layer.}
\label{MessagePassingProcess}
\end{figure}
Through the combination of several GCN layers, JEEPON can gather multi-hop node and edge features. An illustration of JEEPON is given in Fig.~\ref{JEEPONModel}; it consists of $N_{\mathrm{L}}$ GCN layers and one projection activation (PAC) layer. Each GCN layer includes an input layer, an output layer, and two different MLPs composed of linear (LN) layers, batch normalization (BN) layers and activation (AC) layers. In the final part of JEEPON, we utilize the PAC layer to project $\{\kappa_{k}^{(N_{\mathrm{L}})},q_{k}^{(N_{\mathrm{L}})}\}$ onto the feasible region $\bm{\Omega}$, i.e.,
\begin{equation}\label{Eq.(27)}
\begin{aligned}
\bm{\Omega}\triangleq\{\bm{\kappa},\mathbf{q}:{0}\leq\kappa_{k}\leq{1},\sum\limits_{k\in\mathcal{K}}q_{k}\leq{P},q_{k}\geq{0},\forall{k}\in\mathcal{K}\}.
\end{aligned}
\end{equation}
To this end, the projection functions of the PAC layer are defined as
\begin{equation}\label{Eq.(28)}
\begin{aligned}
\kappa_{k}^{(\ast)}&=F_{\mathrm{relu}}(\kappa_{k},1),~q_{k}^{'}=F_{\mathrm{relu}}(q_{k},P),\forall{k}\in\mathcal{K}, \\
q_{k}^{(\ast)}&=\frac{P}{\mathrm{max}\{P,\sum\limits_{k\in\mathcal{K}}q_{k}^{'}\}}q_{k}^{'},\forall{k}\in\mathcal{K},
\end{aligned}
\end{equation}
where $F_{\mathrm{relu}}(\mathbf{z},\mathbf{b})=\min\{\max\{\mathbf{z},\mathbf{0}\},\mathbf{b}\}$, and $\mathbf{b}$ is the upper bound of the input. Furthermore, the matrix inversion in the uplink SINR equation~\eqref{Eq.(22)} incurs a high computational overhead. To speed up the computation, the following \textit{Lemma~\ref{lemma01}} is applied to replace the direct matrix inversion with $K$ matrix iterations. Specifically, it reduces the computational complexity from $\mathcal{O}(KN^{2}+N^{3})$ to $\mathcal{O}(KN^{2})$, where $\mathcal{O}(\cdot)$ is the big-$\mathcal{O}$ notation describing the computational complexity.
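The PAC-layer projections of Eq. (28) amount to clipping followed by a power rescaling; a sketch (vector shapes and names are assumptions):

```python
import numpy as np

def pac_project(kappa, q, P):
    """Projection-activation (PAC) layer of Eq. (28): clip kappa to [0, 1],
    clip q to [0, P], then rescale q so the total power budget P is met."""
    clamp = lambda z, b: np.minimum(np.maximum(z, 0.0), b)  # F_relu(z, b)
    kappa_p = clamp(kappa, 1.0)
    q_p = clamp(q, P)
    q_p = q_p * (P / max(P, q_p.sum()))  # enforce sum_k q_k <= P
    return kappa_p, q_p
```

The rescaling factor equals one whenever the budget is already satisfied, so feasible outputs pass through unchanged.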
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\columnwidth,keepaspectratio]{FigJEEPON.eps}
\caption{The architecture of JEEPON.}
\label{JEEPONModel}
\end{figure}
\begin{lemma}\label{lemma01}
According to the Sherman-Morrison formula~\cite{sherman1950adjustment}, for an invertible square matrix $\mathbf{A}\in\mathbb{C}^{N\times{N}}$, if there exist two column vectors $\mathbf{u},\mathbf{v}\in\mathbb{C}^{N\times1}$ such that $1+\mathbf{v}^{H}\mathbf{A}^{-1}\mathbf{u}\neq0$, then the inverse of $\mathbf{A}+\mathbf{u}\mathbf{v}^{H}$ is given by
\begin{equation}\label{Eq.(29)}
(\mathbf{A}+\mathbf{u}\mathbf{v}^{H})^{-1}=\mathbf{A}^{-1}-\frac{\mathbf{A}^{-1}\mathbf{u}\mathbf{v}^{H}\mathbf{A}^{-1}}{1+\mathbf{v}^{H}\mathbf{A}^{-1}\mathbf{u}}.
\end{equation}
Based on this formula, let $\mathbf{T}_{n}$, $n\in\{0,1,\cdots,K\}$, with $\mathbf{T}_{K}=\bm{\Lambda}^{-1}$; then $\mathbf{T}_{n}$ can be computed in an iterative matrix product form, which is formulated as follows
\begin{equation}\label{Eq.(30)}
\mathbf{T}_{n}=
\left\{
\begin{aligned}
\mathbf{I}_{N}&, n=0,\\
\mathbf{T}_{n-1}-\frac{\mathbf{T}_{n-1}q_{n}\overline{\mathbf{h}}_{n}\overline{\mathbf{h}}_{n}^{H}\mathbf{T}_{n-1}}{1+q_{n}\overline{\mathbf{h}}_{n}^{H}\mathbf{T}_{n-1}\overline{\mathbf{h}}_{n}}&, n>0.
\end{aligned}
\right.
\end{equation}
\end{lemma}
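Lemma 1 can be checked numerically: building $\bm{\Lambda}^{-1}$ through $K$ rank-one updates matches the direct inverse. A sketch assuming channel rows `Hbar` and uplink powers `q`:

```python
import numpy as np

def lambda_inverse_iterative(Hbar, q):
    """Build Lambda^{-1} via K Sherman-Morrison rank-one updates (Eq. (30)),
    avoiding a direct N x N matrix inversion."""
    K, N = Hbar.shape
    T = np.eye(N, dtype=complex)                 # T_0 = I_N
    for n in range(K):
        h = Hbar[n][:, None]                     # column vector hbar_n
        Th = T @ h                               # T_{n-1} hbar_n
        # T_n = T_{n-1} - q_n T_{n-1} h h^H T_{n-1} / (1 + q_n h^H T_{n-1} h)
        T = T - (q[n] * Th @ (h.conj().T @ T)) / (1 + q[n] * (h.conj().T @ Th).real)
    return T
```

Each update costs $\mathcal{O}(N^{2})$, giving the $\mathcal{O}(KN^{2})$ total complexity stated above.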
\subsection{PrimalDual Learning Framework}
With regard to the aforementioned aspects, the PDLF is developed for training the JEEPON model to solve the Lagrangian dual problem~\eqref{Eq.(25)}; it is composed of two parts, the primal update part and the dual update part, as shown in Fig.~\ref{PDLF}. \textred{The primal update part takes the user's historical channel data sample $\mathcal{D}\triangleq\{\mathbf{h}_{k}\}$ as input, and outputs the related USPA strategy $\bm{\Phi}(\mathcal{D},\bm{\Theta})\triangleq\{\kappa_{k},q_{k}\}$, where $\bm{\Theta}$ denotes the trainable parameters of JEEPON. Specifically, it includes a graph representation module for WCN topology construction, a JEEPON model for USPA optimization, and a loss function module for updating $\bm{\Theta}$. In the dual update part, the Lagrangian multipliers $\{\mu,\nu\}$ are updated by the subgradient method. PDLF runs the two parts alternately: the former minimizes the function $\mathcal{L}$ with fixed $\{\mu,\nu\}$ by updating $\bm{\Theta}$ to obtain a suitable $\{\kappa_{k},q_{k},\mathbf{w}_{k}\}$, and the latter maximizes the function $\mathcal{L}$ with fixed $\{\kappa_{k},q_{k},\mathbf{w}_{k}\}$ by updating $\{\mu,\nu\}$. Therefore, the update rule of the Lagrangian multipliers $\{\mu,\nu\}$ at the $\tau$th epoch is}
\begin{equation}\label{Eq.(31)}
\begin{aligned}
\mu^{(\tau+1)}&=\mu^{(\tau)}+\varepsilon_{\mu}\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}\left(\kappa_{k}^{(\tau)}-(\kappa_{k}^{(\tau)})^{2}\right), \\
\nu^{(\tau+1)}&=\nu^{(\tau)}+\varepsilon_{\nu}\sum\limits_{k\in\mathcal{K}}\chi_{c}^{\geq}\left(\kappa_{k}^{(\tau)}\widetilde{\gamma}_{k}-\widehat{\gamma}_{k}\right),
\end{aligned}
\end{equation}
where $\varepsilon_{\mu}$ and $\varepsilon_{\nu}$ are the update step sizes of $\mu$ and $\nu$, respectively. In addition, the Lagrangian multipliers are updated once per epoch based on the violation degree over the training dataset.
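The multiplier update of Eq. (31) is a plain subgradient step on the accumulated violation degrees; a one-sample sketch (argument names are assumptions):

```python
import numpy as np

def dual_update(mu, nu, kappa, gamma_hat, gamma_min, eps_mu, eps_nu):
    """Subgradient ascent on the multipliers (Eq. (31)): each subgradient is
    the summed violation degree of the corresponding relaxed constraint."""
    viol = lambda x: np.maximum(x, 0.0)                       # chi^{>=}
    mu_next = mu + eps_mu * viol(kappa - kappa ** 2).sum()    # binary constraint
    nu_next = nu + eps_nu * viol(kappa * gamma_min - gamma_hat).sum()  # QoS
    return mu_next, nu_next
```

Because the subgradients are non-negative, the multipliers grow only while their constraints remain violated, progressively raising the penalty weight in the loss.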
For the inner minimization of problem~\eqref{Eq.(25)}, JEEPON transforms it into a statistical learning problem, which aims to obtain appropriate optimization vectors $\{\kappa_{k},q_{k}\}$ by updating the trainable parameters of JEEPON.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\columnwidth,keepaspectratio]{FigPDLF.eps}
\captionsetup{labelfont={footnotesize,color=red},font={footnotesize,color=red}}
\caption{The architecture of the PDLF.}
\label{PDLF}
\end{figure}
\begin{algorithm}[!ht]
\caption{PDLF for Training JEEPON.}\label{Alg.(02)}
\begin{algorithmic}[1]
\REQUIRE $N_{\mathrm{e}}$: Number of epochs; \\
~\quad $\bm{\Theta}$: The trainable parameters of JEEPON; \\
~\quad $\varepsilon_{\mu},\varepsilon_{\nu}$: Step size of Lagrangian multipliers; \\
~\quad $\mathcal{D}\triangleq\{\mathcal{D}_{i}\}_{i=1}^{N_{\mathrm{ta}}}$: Training dataset with $N_{\mathrm{ta}}$ samples. \\
\ENSURE The trained JEEPON model.
\STATE Initialize the trainable parameters $\bm{\Theta}$ and the Lagrangian multipliers $\{\mu^{(0)},\nu^{(0)}\}$.
\FOR {epoch $\tau\leftarrow1,2,\cdots,N_{\mathrm{e}}$}
\STATE Initialize dual gradient variables $\{\nabla_{\mu}^{(0)},\nabla_{\nu}^{(0)}\}$.
\FOR {each sample $\mathcal{D}_{i}:i\leftarrow1,2,\cdots,N_{\mathrm{ta}}$}
\STATE Construct the graph $\mathcal{G}_{i}(\mathcal{V},\mathcal{E})$ for sample $\mathcal{D}_{i}$.
\STATE Obtain the USPA strategy $\{\kappa_{k}^{(i)},q_{k}^{(i)}\}$ via JEEPON, and then update $\bm{\Theta}$ via the loss function module.
\STATE Update dual gradient variables:
\STATE\quad
$\nabla_{\mu}^{(i)}\leftarrow\nabla_{\mu}^{(i-1)}+\sum\limits_{k\in\mathcal{D}_{i}}\chi_{c}^{\geq}(\kappa_{k}^{(i)}-(\kappa_{k}^{(i)})^{2})$,
$\nabla_{\nu}^{(i)}\leftarrow\nabla_{\nu}^{(i-1)}+\sum\limits_{k\in\mathcal{D}_{i}}\chi_{c}^{\geq}(\kappa_{k}^{(i)}\widetilde{\gamma}_{k}-\widehat{\gamma}_{k})$.
\ENDFOR
\STATE Obtain the Lagrangian multipliers $\{\mu^{(\tau)},\nu^{(\tau)}\}$ via Eq.~\eqref{Eq.(31)}.
\ENDFOR
\end{algorithmic}
\end{algorithm}
PDLF is designed for training JEEPON. Unlike the penalty-based supervised training method in~\cite{xia2020deep}, the proposed PDLF alternately updates $\bm{\Theta}$ and $\{\mu,\nu\}$ in an unsupervised manner, as summarized in \textbf{Algorithm}~\ref{Alg.(02)}. Specifically, we generate a training dataset $\mathcal{D}\triangleq\{\mathcal{D}_{i}\}_{i=1}^{N_{\mathrm{ta}}}$ with $N_{\mathrm{ta}}$ samples of the same size. The training stage of PDLF lasts for $N_{\mathrm{e}}$ epochs in total. \textred{In the primal update part, PDLF first constructs the graph representation for sample $\mathcal{D}_{i}$ (Step 5) and takes it as the input of JEEPON. Then, JEEPON outputs the USPA strategy $\bm{\Phi}(\mathcal{D}_{i},\bm{\Theta})\triangleq\{\kappa_{k}^{(i)},q_{k}^{(i)}\}$ of sample $\mathcal{D}_{i}$ (Step 6), and the loss function module is adopted to update $\bm{\Theta}$ (Step 7). The subgradient values of $\{\mu,\nu\}$ are also stored to avoid repeated traversal of the training dataset (Steps 8--10).} Therefore, in the dual update part, $\{\mu,\nu\}$ are updated by the recorded dual gradient variables $\{\nabla_{\mu}^{(N_{\mathrm{ta}})},\nabla_{\nu}^{(N_{\mathrm{ta}})}\}$ and Eq.~\eqref{Eq.(31)} (Step 13).
\section{Numerical Results}
In this section, numerical results are presented for the joint USBF optimization problem in the multiuser MISO downlink system. We first introduce the experimental environment and system parameters. Next, the convergence of SCAUSBF and JUSBF is evaluated. Then, the performance of GUSBF, SCAUSBF, and JUSBF is discussed in different system scenarios, as well as the generalizability of JUSBF. In addition, the performance of JUSBF and the convolutional neural network based USBF (CNNUSBF) algorithm (see Appendix B) is also compared. Finally, the computational complexity of the algorithms is presented and discussed, which clearly validates the speed advantage of JUSBF.
\subsection{Experimental Setup}
In the experiment\footnote{\textred{Offline training for JUSBF is necessary and important, and requires a large amount of data. However, real data is quite difficult to obtain, although some researchers are committed to solving this problem \cite{huang2021truedata,coronado20195GEmPOWER,munoz2016the}, so we apply simulated data instead.}}, the $K$ single-antenna users are randomly distributed in the range of $(d_{l},d_{r})$ from the BS, $d_{l},d_{r}\in(d_{\mathrm{min}},d_{\mathrm{max}})$, where $d_{\mathrm{min}}=50\mathrm{m}$ is the reference distance and $d_{\mathrm{max}}=200\mathrm{m}$ denotes the cell radius, as shown in Fig.~\ref{SystemModel}. The channel of user $k$ is modeled as $\mathbf{h}_{k}=\sqrt{\rho_{k}}\widetilde{\mathbf{h}}_{k}\in\mathbb{C}^{N\times1}$, where $\widetilde{\mathbf{h}}_{k}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})$ is the small-scale fading, $\varrho=3$ is the path-loss exponent, and $\rho_{k}=1/(1+(d_{k}/d_{\mathrm{min}})^{\varrho})$ denotes the long-term path loss between user $k$ and the BS, with $d_{k}$ representing the distance in meters (m). For simplicity, we assume that all users have the same additive noise variance, i.e., $\sigma_{k}^{2}=\sigma^{2},\forall{k}\in\mathcal{K}$; thus, the signal-to-noise ratio (SNR) is defined as $\mathrm{SNR}=10\log_{10}(P/\sigma^{2})$ in dB.
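Under the stated channel model, one training sample can be drawn as follows (the function name and default arguments are illustrative):

```python
import numpy as np

def generate_channels(K, N, d_min=50.0, d_max=200.0, rho_exp=3.0, rng=None):
    """Draw one sample following the stated model: h_k = sqrt(rho_k) * htilde_k,
    with Rayleigh small-scale fading htilde_k ~ CN(0, I_N) and long-term
    path loss rho_k = 1 / (1 + (d_k / d_min)^varrho)."""
    if rng is None:
        rng = np.random.default_rng()
    d = rng.uniform(d_min, d_max, size=K)              # user distances in meters
    rho = 1.0 / (1.0 + (d / d_min) ** rho_exp)         # long-term path loss
    htilde = (rng.standard_normal((K, N))
              + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
    return np.sqrt(rho)[:, None] * htilde              # (K, N) channel matrix
```

The $1/\sqrt{2}$ scaling makes each complex entry of $\widetilde{\mathbf{h}}_{k}$ unit-variance, matching $\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})$.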
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth,keepaspectratio]{FigSystemModel.eps}
\caption{User distribution of the multiuser MISO downlink system.}
\label{SystemModel}
\end{figure}
In the neural network module, JUSBF is implemented with $N_{\mathrm{L}}=2$ GCN layers via \emph{PyTorch}, and the functions $\mathrm{M}_{\theta}(\cdot)$ and $\mathbf{U}_{\theta}(\cdot)$ in each GCN layer are parameterized by MLPs with sizes $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$, respectively. The training and testing stages of JUSBF are implemented sequentially. The learning rates of JUSBF and of the Lagrangian multipliers are set to $\eta=5\times10^{-5}$ and $\varepsilon_{\mu},\varepsilon_{\nu}=10^{-5}$, respectively. For each configuration, we prepare $N_{\mathrm{ta}}=5000$ training samples and $N_{\mathrm{te}}=500$ testing samples, where the validation split is set to $0.2$ and the training samples are randomly shuffled at each epoch. The entire training stage lasts for $N_{\mathrm{e}}=200$ epochs. According to the conclusion in~\cite[Corollary 1]{he2020beamforming}, the per-user minimum achievable SINR $\widetilde{\gamma}$ is determined by the system parameters $\{D,n,\epsilon\}$. Note that unless mentioned otherwise, the experiments adopt the default system parameters in Table~\ref{Tab01}.
\begin{table}[!ht]
\centering
\renewcommand{\arraystretch}{1.1}
\caption{Default system parameters.}\label{Tab01}
\begin{tabular}{cc}
\hline
Parameters & Values \\\hline
Range of SNR & $10~\mathrm{dB}$ \\\hline
Blocklength $n$ & $128$ \\\hline
Decoding error probability $\epsilon$ & $10^{-6}$ \\\hline
Transmission data bits $D$ & $256~\mathrm{bits}$ \\\hline
BS antenna number $N$ & $32$ \\\hline
Number of candidate users $K$ & $30$ \\\hline
Maximum permissible error $\delta$ & $10^{-5}$ \\\hline
Sizes of MLPs in $\mathrm{M}_{\theta}(\cdot)$ & $\mathcal{H}_{1}=\{4,256,128,64,32,16,m\}$ \\\hline
Sizes of MLPs in $\mathbf{U}_{\theta}(\cdot)$ & $\mathcal{H}_{2}=\{3+2m,256,128,64,32,16,3\}$\\\hline
\end{tabular}
\end{table}
\subsection{Convergence Analysis of SCAUSBF and JUSBF}
\textred{The convergence of SCAUSBF and JUSBF is evaluated in this subsection, where part of the system parameters are set to $K=50$ and $(d_{l},d_{r})=(60\mathrm{m},100\mathrm{m})$. Fig.~\ref{FigTarget1} illustrates the objective value curves of SCAUSBF for different random channel realizations, indicating that SCAUSBF reaches the convergence state through the iterations. Fig.~\ref{FigTarget2} illustrates the objective value curve of JUSBF during the training stage, where the objective value of the training samples varies within a range (light blue line), and the average objective value curve (blue line) converges as the number of iterations increases to $3.5\times10^{5}$. During the testing stage, the constraint violation ratios of JUSBF for different testing samples are shown in Table~\ref{Tab02}. It is observed that the percentage of illegal results is $2.268\%$, which is much lower than that of the results satisfying the constraints.} Note that the value of $\kappa_{k},\forall{k}\in\mathcal{K}$, is set to $1$ if $0<\kappa_{k}<1$ is obtained, and all the scheduled users are filtered again with the per-user minimum SINR requirement.
\begin{figure}[ht]
\centering
{\color{red}
\begin{minipage}{0.45\linewidth}
\centering
\subfigure[SCAUSBF]{
\label{FigTarget1}
\includegraphics[width=1.0\linewidth]{FigTarget1.eps}}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\centering
\subfigure[JUSBF]{
\label{FigTarget2}
\includegraphics[width=1.0\linewidth]{FigTarget2.eps}}
\end{minipage}
}
\captionsetup{labelfont={footnotesize,color=red},font={footnotesize,color=red}}
\caption{The objective value curves of SCAUSBF and JUSBF.}
\label{EXP01_FIG}
\end{figure}
\begin{table}[!ht]
\centering
\renewcommand{\arraystretch}{1.1}
\captionsetup{labelfont={color=red},font={color=red}}
\caption{Different constraint situations of JUSBF.}\label{Tab02}
{\color{red}
\begin{tabular}{cc}
\hline
Different constraint situations & Percentage of total samples \\\hline
$\kappa_{k}=0,q_{k}\geq{0},\forall{k}\in\mathcal{K}$ & $75.436\%$ \\\hline
$0<\kappa_{k}<1,q_{k}\geq{0},\forall{k}\in\mathcal{K}$ & $2.264\%$ \\\hline
$\kappa_{k}=1,\widetilde{\gamma}_{k}>\widehat{\gamma}_{k},\forall{k}\in\mathcal{K}$ & $0.004\%$ \\\hline
$\kappa_{k}=1,\widetilde{\gamma}_{k}\leq\widehat{\gamma}_{k},\forall{k}\in\mathcal{K}$ & $22.296\%$ \\\hline
\end{tabular}
}
\end{table}
\subsection{Performance and Generalizability Evaluation}
In this subsection, the performance of JUSBF, SCAUSBF and GUSBF with different system parameters is evaluated and compared. For an intuitive comparison, the obtained results of SCAUSBF and JUSBF are normalized by the results of GUSBF, defined as $R_{1}=\frac{N_{\mathrm{S}}}{N_{\mathrm{G}}}\times100\%$ and $R_{2}=\frac{N_{\mathrm{J}}}{N_{\mathrm{G}}}\times100\%$, where $N_{\mathrm{S}}$, $N_{\mathrm{J}}$ and $N_{\mathrm{G}}$ are the average numbers of scheduled users obtained through SCAUSBF, JUSBF and GUSBF, respectively. In addition, we also define the result percentage of CNNUSBF relative to JUSBF as $R_{3}=\frac{N_{\mathrm{C}}}{N_{\mathrm{J}}}\times100\%$, where $N_{\mathrm{C}}$ is the average number of scheduled users obtained through CNNUSBF.
\subsubsection{Performance with Various $K$ and $(d_{l},d_{r})$}
This experiment investigates the influences of $K$ and $(d_{l},d_{r})$ and compares the performance of JUSBF with GUSBF and SCAUSBF, as well as with CNNUSBF in large-scale user scenarios. Table~\ref{Tab03} shows that when $K$ is small, the performance of JUSBF is closer to that of GUSBF, because sufficient system resources are conducive to model optimization. JUSBF remains stable when $K$ changes from 20 to 50, with at most a $2.56\%$ performance degradation. Besides, the performance gain of JUSBF improves as the distance interval changes from $20\mathrm{m}$ to $40\mathrm{m}$, since a smaller distance interval leads to a lack of diversity among users, which makes the learning of JUSBF more difficult. In Fig.~\ref{FigDistance}, we show the average performance of the three algorithms with different $(d_{l},d_{r})$. It suggests that JUSBF achieves a more stable performance that is closer to GUSBF as $(d_{l},d_{r})$ increases. This is because the number of scheduled users decreases as $(d_{l},d_{r})$ increases, and the obtained results are more homogeneous, which is beneficial to the learning of JUSBF.
\begin{table*}[!ht]
\centering
\fontsize{8}{8}\selectfont
\renewcommand{\arraystretch}{1.5}
\newcolumntype{C}[1]{>{\centering}p{#1}}
\caption{Performance normalized by GUSBF with various $K$.}\label{Tab03}
\begin{tabular}{C{3em}cccccccc}
\hline
\multirow{3}{*}{$K$} & \multicolumn{8}{c}{$R_{1}$ and $R_{2}$ with varying $(d_{l},d_{r})$} \\\cline{2-9}
& \multicolumn{2}{c}{$(50\mathrm{m},70\mathrm{m})$} & \multicolumn{2}{c}{$(60\mathrm{m},80\mathrm{m})$} & \multicolumn{2}{c}{$(50\mathrm{m},90\mathrm{m})$} & \multicolumn{2}{c}{$(60\mathrm{m},100\mathrm{m})$} \\\cline{2-9}
& $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ \\\hline
10 & $100\%$ & \blue{$95.68\%$} & $99.98\%$ & \blue{$94.70\%$} & $100\%$ & \blue{$93.94\%$} & $99.89\%$ & \blue{$93.41\%$} \\\cline{1-9}
20 & $99.67\%$ & \blue{$90.04\%$} & $99.64\%$ & \blue{$91.35\%$} & $99.52\%$ & \blue{$92.02\%$} & $99.22\%$ & \blue{$92.32\%$} \\\cline{1-9}
30 & $99.94\%$ & \blue{$89.68\%$} & $99.63\%$ & \blue{$90.33\%$} & $98.80\%$ & \blue{$90.57\%$} & $98.77\%$ & \blue{$91.25\%$} \\\cline{1-9}
40 & $99.86\%$ & \blue{$88.91\%$} & $99.54\%$ & \blue{$89.86\%$} & $98.52\%$ & \blue{$90.27\%$} & $98.24\%$ & \blue{$91.08\%$} \\\cline{1-9}
50 & $99.84\%$ & \blue{$88.15\%$} & $98.73\%$ & \blue{$88.79\%$} & $97.48\%$ & \blue{$89.46\%$} & $97.10\%$ & \blue{$90.15\%$} \\\hline
\end{tabular}
\end{table*}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\columnwidth,keepaspectratio]{FigDistance.eps}
\caption{Average number of scheduled users with various $(d_{l},d_{r})$.}
\label{FigDistance}
\end{figure}
\textred{Considering large-scale user scenarios, we focus on the performance comparison of JUSBF and CNNUSBF, while GUSBF and SCAUSBF are omitted due to their high computational overhead. Table~\ref{Tab04} shows that the performance gap between CNNUSBF and JUSBF widens as $K$ increases; in particular, when $K=200$ and $(d_{l},d_{r})=(60\mathrm{m},100\mathrm{m})$, the performance of the former only reaches $87.36\%$ of that of the latter. This indicates that incorporating WCN topology information into model learning is helpful for performance improvement and stability maintenance.}
\begin{table}[!ht]
\centering
\fontsize{8}{8}\selectfont
\renewcommand{\arraystretch}{1.5}
\newcolumntype{C}[1]{>{\centering}p{#1}}
\captionsetup{labelfont={color=red},font={color=red}}
\caption{Performance normalized by JUSBF with various $K$.}\label{Tab04}
{\color{red}\begin{tabular}{C{3em}cccc}
\hline
\multirow{2}{*}{$K$} & \multicolumn{4}{c}{$R_{3}$ with varying $(d_{l},d_{r})$} \\\cline{2-5}
& $(50\mathrm{m},70\mathrm{m})$ & $(60\mathrm{m},80\mathrm{m})$ & $(50\mathrm{m},90\mathrm{m})$ & $(60\mathrm{m},100\mathrm{m})$ \\\cline{1-5}
50 & $93.06\%$ & $96.26\%$ & $92.42\%$ & $93.95\%$ \\\cline{1-5}
100 & $92.89\%$ & $90.83\%$ & $91.42\%$ & $90.80\%$ \\\cline{1-5}
150 & $89.14\%$ & $90.53\%$ & $90.59\%$ & $89.46\%$ \\\cline{1-5}
200 & $88.75\%$ & $88.38\%$ & $87.51\%$ & $87.36\%$ \\\hline
\end{tabular}}
\end{table}
\subsubsection{Performance with Various SNR Settings}
This experiment compares the performance of JUSBF, SCAUSBF and GUSBF under different SNR settings, and the obtained results are summarized in Table~\ref{Tab05}. It is observed that JUSBF achieves competitive performance (above $90.77\%$) with $\mathrm{SNR}=5$~dB, while SCAUSBF maintains over $95.73\%$ near-optimal performance compared with GUSBF. Although the performance gap of JUSBF enlarges as $K$ increases, the degradation trend is rather slow. For the configuration $\mathrm{SNR}=15$~dB and $(d_{l},d_{r})=(50\mathrm{m},100\mathrm{m})$, JUSBF incurs only a $1.78\%$ relative performance gap with respect to GUSBF when $K$ changes from 20 to 50. Even when $(d_{l},d_{r})=(100\mathrm{m},150\mathrm{m})$, JUSBF still maintains stable performance. \textred{Moreover, Fig.~\ref{FigSNR} illustrates that the gap between SCAUSBF and JUSBF increases as the SNR changes from 0~dB to 20~dB. As the SNR increases, the channel condition improves and more users can meet the QoS requirement. Therefore, the solution space of problem~\eqref{Eq.(05)} enlarges, and SCAUSBF shows its advantage under this condition because it obtains optimal/suboptimal results. On the other hand, it is difficult for JUSBF to capture the optimal value as the solution space grows. However, the gap between SCAUSBF and JUSBF decreases when the SNR changes from 20~dB to 30~dB, since the much better channel condition is sufficient for serving all the users.}
\begin{table*}[!ht]
\centering
\fontsize{8}{8}\selectfont
\renewcommand{\arraystretch}{1.5}
\newcolumntype{C}[1]{>{\centering}p{#1}}
\caption{Performance normalized by GUSBF with various SNR settings.}\label{Tab05}
\begin{tabular}{C{4em}C{3em}cccccccc}
\hline
\multirow{3}{*}{$\mathrm{SNR}(\mathrm{dB})$} & \multirow{3}{*}{$K$} & \multicolumn{8}{c}{$R_{1}$ and $R_{2}$ with varying $(d_{l},d_{r})$} \\\cline{3-10}
&& \multicolumn{2}{c}{$(60\mathrm{m},90\mathrm{m})$} & \multicolumn{2}{c}{$(90\mathrm{m},120\mathrm{m})$} & \multicolumn{2}{c}{$(50\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c}{$(100\mathrm{m},150\mathrm{m})$} \\\cline{3-10}
&& $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ \\\hline
\multirow{3}{*}{5} & 10 & $98.96\%$ & \blue{$92.13\%$} & $98.43\%$ & \blue{$94.50\%$} & $99.37\%$ & \blue{$92.56\%$} & $97.74\%$ & \blue{$93.17\%$} \\\cline{2-10}
& 20 & $98.22\%$ & \blue{$91.08\%$} & $97.82\%$ & \blue{$92.43\%$} & $98.82\%$ & \blue{$91.15\%$} & $96.59\%$ & \blue{$92.03\%$} \\\cline{2-10}
& 30 & $97.17\%$ & \blue{$90.77\%$} & $96.58\%$ & \blue{$91.04\%$} & $97.75\%$ & \blue{$90.83\%$} & $95.73\%$ & \blue{$91.66\%$} \\\hline
\multirow{3}{*}{15} & 20 & $99.99\%$ & \blue{$90.15\%$} & $99.60\%$ & \blue{$90.69\%$} & $100\%$ & \blue{$90.48\%$} & $99.17\%$ & \blue{$91.15\%$} \\\cline{2-10}
& 30 & $99.87\%$ & \blue{$89.20\%$} & $99.61\%$ & \blue{$89.78\%$} & $99.97\%$ & \blue{$89.52\%$} & $98.46\%$ & \blue{$90.60\%$} \\\cline{2-10}
& 50 & $99.64\%$ & \blue{$88.34\%$} & $99.55\%$ & \blue{$89.06\%$} & $99.80\%$ & \blue{$88.70\%$} & $97.36\%$ & \blue{$89.94\%$} \\\hline
\end{tabular}
\end{table*}
\begin{figure*}[ht]
\centering
\subfigure[$(d_{l},d_{r})=(60\mathrm{m},90\mathrm{m})$]{
\label{FigSNR1}
\includegraphics[width=0.45\linewidth]{FigSNR1.eps}}
\subfigure[$(d_{l},d_{r})=(100\mathrm{m},150\mathrm{m})$]{
\label{FigSNR2}
\includegraphics[width=0.45\linewidth]{FigSNR2.eps}}
\caption{Performance of the algorithms with various SNR settings.}
\label{FigSNR}
\end{figure*}
\subsubsection{Performance with Various SINR Requirements}
The ultimate scheduling results of the investigated problem are significantly affected by the per-user minimum SINR requirement, where the value $\widetilde{\gamma}=F_{\mathrm{\gamma}}(D,n,\epsilon)$ is obtained with different system parameters $D$, $n$, and $\epsilon$; the results are summarized in Table~\ref{Tab06}. From the table, it is observed that the average performance of JUSBF remains above $88.97\%$ of GUSBF under different SINR requirements and user distribution distances. However, it should be pointed out that as the SINR requirement is reduced, the performance improvement of JUSBF is smaller than that of GUSBF, especially when $(d_{l},d_{r})=(60\mathrm{m},80\mathrm{m})$. Therefore, JUSBF shows a slight performance degradation relative to GUSBF when the SINR requirement is reduced, while the performance improvement of SCAUSBF increases at the same time.
\begin{table*}[!ht]
\centering
\fontsize{8}{8}\selectfont
\renewcommand{\arraystretch}{1.5}
\newcolumntype{C}[1]{>{\centering}p{#1}}
\caption{Performance normalized by GUSBF with various SINR requirements.}\label{Tab06}
\begin{tabular}{C{7em}C{3em}cccccccc}
\hline
\multirow{3}{*}{$F_{\mathrm{\gamma}}(D,n,\epsilon)$} & \multirow{3}{*}{$\widetilde{\gamma}$} & \multicolumn{8}{c}{$R_{1}$ and $R_{2}$ with varying $(d_{l},d_{r})$} \\\cline{3-10}
&& \multicolumn{2}{c}{$(60\mathrm{m},80\mathrm{m})$} & \multicolumn{2}{c}{$(80\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c}{$(60\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c}{$(80\mathrm{m},120\mathrm{m})$} \\\cline{3-10}
&& $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ & $R_{1}$ & $R_{2}$ \\\hline
$(256,256,10^{6})$ & $1.633$ & $99.92\%$ & \blue{$88.97\%$} & $99.96\%$ & \blue{$90.36\%$} & $99.97\%$ & \blue{$89.82\%$} & $99.91\%$ & \blue{$91.03\%$} \\\hline
$(256,128,10^{6})$ & $5.054$ & $99.63\%$ & \blue{$90.33\%$} & $98.30\%$ & \blue{$91.87\%$} & $98.77\%$ & \blue{$91.25\%$} & $98.36\%$ & \blue{$92.78\%$} \\\hline
$(256,96,10^{6})$ & $9.291$ & $96.41\%$ & \blue{$90.79\%$} & $94.22\%$ & \blue{$92.94\%$} & $95.84\%$ & \blue{$91.84\%$} & $95.38\%$ & \blue{$93.62\%$} \\\hline
$(256,64,10^{6})$ & $27.97$ & $95.58\%$ & \blue{$91.05\%$} & $93.95\%$ & \blue{$93.05\%$} & $94.55\%$ & \blue{$92.76\%$} & $94.19\%$ & \blue{$94.08\%$} \\\hline
\end{tabular}
\end{table*}
\subsubsection{Generalizability with Various User Distributions}
Generalizability is another critical evaluation property for JUSBF; it investigates whether the trained network model can perform well in unknown WCN scenarios. To test the generalizability, JUSBF is trained on a scenario whose system parameters differ from those of the test scenarios. Specifically, JUSBF is trained with $(d_{l},d_{r})=(100\mathrm{m},120\mathrm{m})$, and the trained model is then applied to test scenarios with different $(d_{l},d_{r})$, without any further training\footnote{\textred{For scenarios with different numbers of users $K$, numbers of antennas $N$ and SINR requirements $\widetilde{\gamma}$, the generalizability of JUSBF is poor and needs to be further optimized.}}. Table~\ref{Tab07} shows the comparison results of GUSBF and JUSBF, where $R_{4}=\frac{N_{\mathrm{J},(100,120)}}{N_{\mathrm{G}}}\times100\%$ represents the average performance of JUSBF normalized by GUSBF and $N_{\mathrm{J},(100,120)}$ is the average number of scheduled users using JUSBF. From the table, it is observed that JUSBF performs well over neighboring user distribution distances when the test distance interval is 40~m. Moreover, when $(d_{l},d_{r})=(80\mathrm{m},100\mathrm{m})$, which has no intersection with $(100\mathrm{m},120\mathrm{m})$, the performance of JUSBF is still acceptable at $K=10$. Based on the above analysis, our proposed JUSBF generalizes well to scenarios with neighboring user distribution distances.
\begin{table}[!ht]
\centering
\fontsize{8}{8}\selectfont
\renewcommand{\arraystretch}{1.5}
\newcolumntype{C}[1]{>{\centering}p{#1}}
\caption{Generalizability with various user distributions.}\label{Tab07}
\begin{tabular}{C{3em}cccccccc}
\hline
\multirow{3}{*}{$K$} & \multicolumn{8}{c}{$N_{\mathrm{G}}$ and $R_{4}$ with varying $(d_{l},d_{r})$} \\\cline{2-9}
& \multicolumn{2}{c}{$(100\mathrm{m},120\mathrm{m})$} &
\multicolumn{2}{c}{$(80\mathrm{m},100\mathrm{m})$} & \multicolumn{2}{c}{$(80\mathrm{m},120\mathrm{m})$} &
\multicolumn{2}{c}{$(100\mathrm{m},140\mathrm{m})$} \\\cline{2-9}
& $N_{\mathrm{G}}$ & $R_{4}$ & $N_{\mathrm{G}}$ & $R_{4}$ & $N_{\mathrm{G}}$ & $R_{4}$ & $N_{\mathrm{G}}$ & $R_{4}$ \\\hline
10 & $5.068$ & \blue{$94.97\%$} & $7.504$ & \blue{$86.06\%$} & $6.36$ & \blue{$91.86\%$} & $4.394$ & \blue{$92.13\%$} \\\hline
20 & $5.624$ & \blue{$93.92\%$} & $8.548$ & \blue{$84.63\%$} & $7.496$ & \blue{$89.58\%$} & $5.068$ & \blue{$91.46\%$} \\\hline
30 & $5.924$ & \blue{$92.69\%$} & $9.054$ & \blue{$83.34\%$} & $8.16$ & \blue{$88.31\%$} & $5.372$ & \blue{$88.16\%$} \\\hline
40 & $6.038$ & \blue{$91.24\%$} & $9.304$ & \blue{$82.75\%$} & $8.508$ & \blue{$87.92\%$} & $5.608$ & \blue{$87.73\%$} \\\hline
50 & $6.15$ & \blue{$90.80\%$} & $9.538$ & \blue{$83.04\%$} & $8.818$ & \blue{$85.87\%$} & $5.782$ & \blue{$86.09\%$} \\\hline
\end{tabular}
\end{table}
\subsection{Computational Complexity Analysis}
\textred{In this subsection, the computational complexity of GUSBF, SCAUSBF and JUSBF is analyzed and compared. Considering the differences in implementation platforms and algorithm design languages, we count the floating-point computation of the proposed algorithms. Firstly, GUSBF includes the US optimization and BF design, whose floating-point computation is about $\sum\limits_{\hat{k}=2}^{K}4(K-\hat{k}+1)(I_{1}(\hat{k}^{3}N+5\hat{k}^{2}N)+\hat{k}^{2})$, where $\hat{k}$ and $I_{1}$ represent the number of scheduled users and iterations, respectively. Secondly, SCAUSBF includes the inner and outer optimizations, whose floating-point computation is about $4I_{3}(I_{2}(7K^{2}N+4KN+14K^{2})+K(N^{3}+2N^{2}+2N))$, where $I_{2}$ and $I_{3}$ represent the numbers of iterations of the two parts. For JUSBF, since the JEEPON model is trained offline, we mainly consider the computation of the testing stage, including the graph representation module, the GCN module and the SINR module. For simplicity, we assume that the GCN module is composed of MLPs with the dimensions $\mathcal{H}\triangleq\{h_{i}\}$. Therefore, the floating-point computation of JUSBF is about $2(2K^{2}N+2KN^{2}+K\sum\limits_{\ell=1}^{L}\sum\limits_{i=1}^{|\mathcal{H}|}(2+h_{\ell,i-1})h_{\ell,i})$. For an intuitive comparison, Fig.~\ref{Fig_Complexity} illustrates the floating-point computational magnitude of each algorithm for different numbers of users and iterations. The computational magnitude of JUSBF is lower than that of GUSBF and SCAUSBF, which indicates its advantage in computational efficiency.}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\columnwidth,keepaspectratio]{FigComplexity.eps}
\captionsetup{labelfont={footnotesize,color=red},font={footnotesize,color=red}}
\color{red}\caption{The floating-point computational magnitude of the algorithms.}
\label{Fig_Complexity}
\end{figure}
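As a rough illustration, the three floating-point counts above can be evaluated directly. The sketch below is a minimal Python rendering; the iteration numbers $I_1$, $I_2$, $I_3$, the layer count $L$, and the MLP widths are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch: evaluates the flop-count expressions from the text for
# illustrative parameter choices (I1, I2, I3 and the MLP widths are assumptions).

def flops_gusbf(K, N, I1):
    # sum over candidate scheduled-set sizes k_hat = 2..K
    return sum(4 * (K - k + 1) * (I1 * (k**3 * N + 5 * k**2 * N) + k**2)
               for k in range(2, K + 1))

def flops_scausbf(K, N, I2, I3):
    # inner (I2) and outer (I3) optimization loops
    return 4 * I3 * (I2 * (7 * K**2 * N + 4 * K * N + 14 * K**2)
                     + K * (N**3 + 2 * N**2 + 2 * N))

def flops_jusbf(K, N, widths):
    # widths: per-layer MLP dimensions h_0, h_1, ..., shared across the GCN layers
    mlp = sum((2 + widths[i - 1]) * widths[i] for i in range(1, len(widths)))
    L = 2  # assumed number of GCN layers
    return 2 * (2 * K**2 * N + 2 * K * N**2 + K * L * mlp)

K, N = 30, 16
print(flops_gusbf(K, N, I1=10) > flops_jusbf(K, N, [64, 32, 16]))  # True
```

Even for this small configuration, the greedy baseline dominates the learned model by orders of magnitude, matching the trend reported in Fig.~\ref{Fig_Complexity}.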
\section{Conclusions}
In this paper, the joint USBF optimization problem is studied for the multi-user MISO downlink system. Specifically, with the help of uplink-downlink duality theory and mathematical transformations, we formulate the original problem as a convex optimization problem, and propose the GUSBF, SCAUSBF and JUSBF algorithms. Numerical results show that JUSBF achieves close performance with higher computational efficiency. Additionally, the proposed JUSBF also enjoys good generalizability in dynamic WCN scenarios. \textred{As for future directions, addressing the heavy CSI acquisition burden and signaling overhead caused by the instantaneous perfect CSI assumed in this work is interesting and meaningful. Deep-learning-based resource allocation algorithms need to be redesigned, and statistical CSI may be helpful to achieve this goal.}
\begin{appendices}
\section{Design of The GUSBF Algorithm}
\textred{In this appendix, the GUSBF algorithm is proposed to solve problem~\eqref{Eq.(05)}, which is inspired by the work in~\cite{zhang2011adaptive} and the near-far effect of WCNs. The feasibility problem in reference~\cite[problem (35)]{he2020beamforming} forms the basis of GUSBF, and is formulated as follows
\begin{subequations}\label{Eq.(A01)}
\begin{align}
\min\limits_{\{\mathbf{w}_{k}\}}\sum\limits_{k\in\mathcal{S}}\|\mathbf{w}_{k}\|^{2},\\
\mathrm{s.t.}~{r_k}\leq{R}(\overrightarrow{\gamma}_{k}),
\end{align}
\end{subequations}
where $\mathbf{w}_{k}\in\mathbb{C}^{N\times{1}}$ is the BF vector of user $k$, and its downlink power is denoted as $p_{k}=\|\mathbf{w}_{k}\|^{2}$. Solving problem~\eqref{Eq.(A01)} can be used to determine whether the scheduled user set $\mathcal{S}$ is feasible, i.e., whether the user rate constraint and the BS power budget are satisfied. GUSBF is designed with two stages, namely, the conventional greedy search stage and the user set optimization stage, as summarized in \textbf{Algorithm}~\ref{Alg.(A01)}. Here, GUSBF expands the scheduled user set $\mathcal{S}$ from the candidate user set $\mathcal{K}$ in the first stage, and then optimizes $\mathcal{S}$ in the second stage to schedule more users. Since the GUSBF algorithm achieves performance close to that of the exhaustive search algorithm with lower computational complexity, it is used as the baseline.}
\begin{algorithm}[!ht]
{\color{red}
\caption{The GUSBF Algorithm for Problem~\eqref{Eq.(05)}}\label{Alg.(A01)}
\begin{algorithmic}[1]
\STATE Input candidate user set $\mathcal{K}$ and user CSI $\{\mathbf{h}_{k}\}$, and initialize scheduled user set $\mathcal{S}=\varnothing$.
\STATE Sort the user channels of $\mathcal{K}$ from good to bad via the MRT method, and add the top-ranked user to $\mathcal{S}$.
\STATE In the greedy search stage, move one user from $\mathcal{K}$ to $\mathcal{S}$ in sequence without repetition, and obtain $|\mathcal{K}\backslash\mathcal{S}|$ temporary user sets.
\STATE For each temporary user set, solve problem~\eqref{Eq.(A01)} to obtain $\{p_{k},\mathbf{w}_{k}\}$, and preserve the user set $\mathcal{S}_{1}^{(\ast)}$ with the smallest required power.
\STATE Let $\mathcal{K}\leftarrow\mathcal{K}\backslash\mathcal{S}_{1}^{(\ast)},\mathcal{S}\leftarrow\mathcal{S}_{1}^{(\ast)}$ if $\mathcal{K}\neq\varnothing$ and $\sum\limits_{k\in\mathcal{S}}p_{k}\leq{P}$ is obtained, then go to step 3. Otherwise, go to step 6.
\STATE In the user set optimization stage, move one user with the largest power consumption from $\mathcal{S}$ to $\mathcal{K}$, and obtain the user set $\mathcal{S}_{2}$.
\STATE Let $\mathcal{S}\leftarrow\mathcal{S}_{2}$ and run the greedy search again to obtain a new user set $\mathcal{S}_{2}^{(\ast)}$. If $\mathcal{S}_{1}^{(\ast)}=\mathcal{S}_{2}^{(\ast)}$, stop iteration then output $\mathcal{S}_{2}^{(\ast)}$ and $\{p_{k},\mathbf{w}_{k}\}$. Otherwise, let $\mathcal{S}\leftarrow\mathcal{S}_{2}^{(\ast)}$ and go to step 6.
\end{algorithmic}}
\end{algorithm}
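The greedy search stage (steps 3--5) can be sketched as follows. Here `solve_power` is a hypothetical stand-in for solving feasibility problem~\eqref{Eq.(A01)}; in this self-contained sketch it is replaced by a toy cost function so the code runs without a convex solver.

```python
# Hedged sketch of the greedy-search stage of Algorithm A1.
# solve_power(S) stands in for solving the feasibility problem (A1);
# here it is a toy placeholder that charges each user one unit of power.

def greedy_search(candidates, scheduled, solve_power, P):
    """Expand `scheduled` from `candidates` while the power budget P holds."""
    candidates = set(candidates)
    scheduled = set(scheduled)
    while candidates:
        # try moving each remaining user into the scheduled set (step 3)
        trials = {u: solve_power(scheduled | {u}) for u in candidates}
        best = min(trials, key=trials.get)   # smallest required power (step 4)
        if trials[best] > P:                 # budget exceeded: stop (step 5)
            break
        scheduled.add(best)
        candidates.remove(best)
    return scheduled

# toy example: every user costs 1 unit of power, budget of 3 units
result = greedy_search({1, 2, 3, 4}, {0}, lambda S: len(S), P=3)
print(sorted(result))  # a 3-user schedule within the budget
```

The user set optimization stage (steps 6--7) would then repeatedly evict the most power-hungry user and rerun this routine until the schedule stabilizes.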
\section{Design of The CNNUSBF Algorithm}
\textred{In this appendix, the CNNUSBF algorithm is proposed to solve problem~\eqref{Eq.(23)}, which is inspired by the work in~\cite{li2021survey}. In particular, CNNUSBF takes the WCN graph representation as input and outputs the USPA optimization strategy and BF vectors. Specifically, the update rule of CNNUSBF for node $v$ in graph $\mathcal{G}(\mathcal{V},\mathcal{E})$ is formulated as}
{\color{red}\begin{equation}\label{Eq.(32)}
\begin{aligned}
\mathrm{Input:}&~\mathbf{D}_{v}^{(0)}=[\mathbf{x}_{v},F_{\mathrm{max}}(\{\mathbf{e}_{u,v}\}),F_{\mathrm{mean}}(\{\mathbf{e}_{u,v}\})],u\in\mathcal{N}_{v},\\
\mathrm{CNN\raisebox{0mm}{-}stage:}&~\mathbf{D}_{v}^{(i)}=F_{\mathrm{std}}(\mathrm{Conv1d}(\mathbf{D}_{v}^{(i-1)})),i=1,2,\cdots,N_{\mathrm{1}},\\
\mathrm{DNN\raisebox{0mm}{-}stage:}&~\mathbf{D}_{v}^{(i)}=F_{\mathrm{std}}(\mathrm{LNN}(\mathbf{D}_{v}^{(i-1)})),i=N_{\mathrm{1}}+1,\cdots,N_{\mathrm{1}}+N_{\mathrm{2}},\\
\mathrm{Output:}&~\mathbf{D}_{v}^{(N_{\mathrm{2}})}=[\kappa_{v}^{(\ast)},q_{v}^{(\ast)}], \mathrm{and~BF~vector}~\mathbf{w}_{v}^{(\ast)},v\in\mathcal{V},
\end{aligned}
\end{equation}}
\textred{where $N_{\mathrm{1}}$ and $N_{\mathrm{2}}$ denote the numbers of CNN and DNN layers, respectively. $\mathbf{D}_{v}^{(0)}$ contains the features of node $v$ and its neighborhood edges, $\mathbf{D}_{v}^{(N_{\mathrm{2}})}$ is the USPA strategy of node $v$, and $F_{\mathrm{std}}(\mathbf{z})=F_{\mathrm{AC}}(F_{\mathrm{BN}}(\mathbf{z}))$ is the standardization function, implemented by BN and AC layers, which standardizes the network input to accelerate the training process and reduce the generalization error. The neural network module of CNNUSBF is constructed from a CNN and a DNN, which are implemented and trained with \emph{Pytorch} and the PDLF, respectively. The algorithm steps of CNNUSBF follow those of JUSBF. Note that unless mentioned otherwise, the neural network structure of CNNUSBF is given in Table~\ref{Tab08}, and the model is trained separately for different WCN scenarios.}
\begin{table}[!ht]
\centering
\renewcommand{\arraystretch}{1.1}
\captionsetup{labelfont={color=red},font={color=red}}
\caption{The neural network structure of CNNUSBF.}\label{Tab08}
{\color{red}\begin{tabular}{ll}
\hline
Layer & Parameters \\\hline
Layer 1 (Input) & Input of size $3K$, batch of size $K$, $N_{\mathrm{e}}$ epochs \\\hline
Layer 2 (Conv1d, BN and AC) & Input=$3$, output=$256$; Input=$256$; LReLU \\\hline
Layer 3 (Conv1d, BN and AC) & Input=$256$, output=$128$; Input=$128$; LReLU \\\hline
Layer 4 (Conv1d, BN and AC) & Input=$128$, output=$64$; Input=$64$; LReLU \\\hline
Layer 5 (LNN, BN and AC) & Input=$64$, output=$32$; Input=$32$; LReLU \\\hline
Layer 6 (LNN, BN and AC) & Input=$32$, output=$16$; Input=$16$; LReLU \\\hline
Layer 7 (LNN, BN and AC) & Input=$16$, output=$2$; Input=$2$; LReLU \\\hline
Layer 8 (Output and PAC) & Output of size $2K+KN$, Adam optimizer \\\hline
\end{tabular}}
\end{table}
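A minimal forward pass consistent with the layer sizes in Table~\ref{Tab08} can be sketched as follows. It assumes kernel-size-1 Conv1d layers (which reduce to per-user linear maps) and LeakyReLU activations, and omits batch normalization for brevity; the random He-style initialization is purely illustrative.

```python
import numpy as np

# Hedged sketch of the per-node forward pass implied by Table 8, assuming
# kernel-size-1 Conv1d layers (so each layer is a per-user linear map) and
# LeakyReLU activations; batch norm is omitted for brevity.

def lrelu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def forward(features, dims=(3, 256, 128, 64, 32, 16, 2), rng=None):
    """features: (K, 3) node/edge features for K users -> (K, 2) USPA outputs."""
    rng = rng or np.random.default_rng(0)
    x = features
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        W = rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in)  # He-style init
        x = lrelu(x @ W)
    return x

out = forward(np.ones((10, 3)))
print(out.shape)  # (10, 2)
```

Each user's 3-dimensional feature vector is mapped through the 3-256-128-64-32-16-2 stack, ending in the two USPA outputs per node.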
\end{appendices}
\begin{small}
\section{Introduction}
Vector Quantised Variational AutoEncoder (VQ-VAE)~\cite{van2017neural} is a popular method developed to compress images into discrete representations for generation. Typically, after an image is compressed into a discrete representation by a convolutional network, an autoregressive model is used to model and sample in the discrete latent space, including the PixelCNN family~\cite{oord2016conditional,van2016pixel,chen2018pixelsnail}, the transformer family~\cite{ramesh2021zero,chen2020generative}, etc.
However, in addition to the disadvantage of a huge number of model parameters, these autoregressive models can only make predictions based on the observed pixels (the upper-left part of the target pixel) due to the inductive bias caused by the strict adherence to the progressive scan order~\cite{khan2021transformers,bengio2015scheduled}. If the conditional information is located at the end of the autoregressive sequence, it is difficult for the model to obtain the relevant information.
\begin{figure}
\centering
\includegraphics[scale=0.55]{pafidr1234.png}
\caption{FID vs.\ operations and parameters. The size of the blobs is proportional to the number of network parameters, the X-axis indicates FLOPs on a log scale and the Y-axis is the FID score.}
\label{fig1}
\end{figure}
A recent alternative generative model is the denoising diffusion model, which can effectively mitigate the lack of global information~\cite{sohl2015deep,ho2020denoising}, achieving comparable or state-of-the-art performance in text~\cite{hoogeboom2021argmax,austin2021structured}, image~\cite{dhariwal2021diffusion} and speech generation~\cite{kong2020diffwave} tasks. Diffusion models are parameterized Markov chains trained to translate simple distributions to more sophisticated target data distributions in a finite set of steps. Typically the Markov chain begins with an isotropic Gaussian distribution in continuous state space, with the transitions of the chain reversing a diffusion process that gradually adds Gaussian noise to source images. In the reverse process, as the current step is based on the global information of the previous step in the chain, the diffusion model is endowed with the ability to capture global information.
However, the diffusion model has a non-negligible disadvantage: the time and computational effort involved in generating images are enormous. The main reason is that the reverse process typically contains thousands of steps. Although we do not need to iterate through all of these steps during training, all of them are still required when generating a sample, which is much slower than GANs and even autoregressive models.
Some recent works~\cite{song2020denoising,nichol2021improved} have attempted to address these issues by decreasing the number of sampling steps, but the computation cost is still high, as each step of the reverse process generates a full-resolution image.
In this work, we propose the \textbf{V}ector \textbf{Q}uantized \textbf{D}iscrete \textbf{D}iffusion \textbf{M}odel (VQ-DDM), a versatile framework for image generation consisting of a discrete variational autoencoder and a discrete diffusion model. VQ-DDM consists of two stages: (1) learning an abundant and efficient discrete representation of images, and (2) fitting the prior distribution of such latent visual codes via a discrete diffusion model.
VQ-DDM substantially reduces the computational resources and time required to generate high-resolution images by using a discrete scheme. The common problems of autoregressive models, namely the lack of global context and the overly large number of parameters, are then addressed by fitting the latent prior with the discrete diffusion model. Finally, since a biased codebook limits generation quality, while the model size also depends on the number of categories, we propose a rebuild and fine-tune (ReFiT) strategy to construct a codebook with higher utilization, which also reduces the number of parameters in our model.
In summary, our key contributions include the following:
\begin{itemize}
\item VQ-DDM fits the prior over discrete latent codes with a discrete diffusion model. The use of a diffusion model allows the generative model to consider global information instead of focusing only on the partially seen context, avoiding sequential bias.
\item We propose a ReFiT approach to improve the utilisation of latent representations in the visual codebook, which can increase the code usage of VQGAN from $31.85\%$ to $97.07\%$, while the FID between reconstructed images and the original training images is reduced from $10.18$ to $5.64$ on CelebA-HQ $256\times256$.
\item VQ-DDM is highly efficient in terms of both the number of parameters and the generation speed. As shown in Figure~\ref{fig1}, using only 120M parameters, it outperforms VQ-VAE-2 with around 10B parameters and is comparable with VQGAN with 1B parameters in image generation tasks in terms of image quality. It is also 10 $\sim$ 100 times faster than other diffusion models for image generation~\cite{song2020denoising,ho2020denoising}.
\end{itemize}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.15]{pipeline.png}
\caption{The proposed VQ-DDM pipeline contains 2 stages: (1) Compress the image into discrete variables via a discrete VAE. (2) Fit a prior distribution over the discrete coding with a diffusion model.
Black squares in the diffusion diagram illustrate states when the underlying distributions are uninformative, but which become progressively more specific during the reverse process.
The bar chart at the bottom of the image represents the probability of a particular discrete variable being sampled.
}
\label{pipeline}
\end{figure*}
\section{Preliminaries}
\subsection{Diffusion Models in continuous state space}
Given data $\mathbf{x}_0$ from a data distribution $q(\mathbf{x}_0)$, the diffusion model consists of two processes: the \textit{diffusion process} and the \textit{reverse process}~\cite{sohl2015deep,ho2020denoising}.
The \textit{diffusion process} progressively destroys the data $\mathbf{x}_0$ into $\mathbf{x}_T$ over $T$ steps, via a fixed Markov chain that gradually introduces Gaussian noise to the data according to a variance schedule $\beta_{1:T} \in (0,1]^T$ as follows:
\begin{equation}
q(\mathbf{x}_{1:T}|\mathbf{x}_0) = \prod_{t=1}^T q(\mathbf{x}_t|\mathbf{x}_{t-1}) ,
\end{equation}
\begin{equation}
q(\mathbf{x}_t | \mathbf{x}_{t-1}) = \mathcal{N}(\mathbf{x}_t;\sqrt{1-\beta_t}\mathbf{x}_{t-1},\beta_t \mathbf{I}) .
\end{equation}
With an adequate number of steps $T$ and a suitable variance schedule $\beta$,
$p(\mathbf{x}_T)$ becomes an isotropic Gaussian distribution.
The \textit{reverse process} is defined as a Markov chain parameterized by $\theta$, which is used to restore the data from the noise:
\begin{equation}
p_{\theta}(\mathbf{x}_{0:T}) = p(\mathbf{x}_T) \prod_{t=1}^T p_{\theta} (\mathbf{x}_{t-1}|\mathbf{x}_t),
\end{equation}
\begin{equation}
p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) = \mathcal{N} (\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_t,t),\Sigma_{\theta}(\mathbf{x}_t,t)).
\end{equation}
The objective of training is to find the best $\theta$ to fit the data distribution $q(\mathbf{x}_0)$ by optimizing the variational lower bound (VLB)~\cite{kingma2013auto}
\begin{equation}
\begin{split}
\mathbb{E}_{q(\mathbf{x}_0)}& [\log p_{\theta}(\mathbf{x}_0)]\\ = &\mathbb{E}_{q(\mathbf{x}_0)}\log\mathbb{E}_{q(\mathbf{x}_{1:T}|\mathbf{x}_0)} \left[ \frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|\mathbf{x}_0)} \right] \\ \geq &\mathbb{E}_{q(\mathbf{x}_{0:T})} \left[ \log \frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|\mathbf{x}_0)} \right] =: L_\mathrm{vlb}.
\end{split}
\label{vlb}
\end{equation}
Ho \etal \cite{ho2020denoising} revealed that the variational lower bound in Eq.~\ref{vlb} can be calculated with closed form expressions instead of Monte Carlo estimates as the \textit{diffusion process} posteriors and marginals are Gaussian, which allows sampling $\mathbf{x}_t$ at an arbitrary step $t$ with $\alpha_t = 1-\beta_t$, $\bar{\alpha}_t=\prod_{s=0}^t \alpha_s$ and $\tilde{\beta}_t=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t$:
\begin{equation}
q(\mathbf{x}_t|\mathbf{x}_0) = \mathcal{N}(\mathbf{x}_t | \sqrt{\bar{\alpha}_t} \mathbf{x}_0, (1-\bar{\alpha}_t)\mathbf{I} ),
\end{equation}
\begin{equation}
\begin{split}
L_\mathrm{vlb} = \mathbb{E}_{q(\mathbf{x}_0)} &[ D_{\mathrm{KL}}(q(\mathbf{x}_T|\mathbf{x}_0) \| p(\mathbf{x}_T)) - \log p_{\theta} ( \mathbf{x}_0 | \mathbf{x}_1 ) \\
&+ \sum_{t=2}^T D_{\mathrm{KL}}(q(\mathbf{x}_{t-1}|\mathbf{x}_t,\mathbf{x}_0) \| p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_t)) ].
\end{split}
\label{kl}
\end{equation}
Thus the reverse process can be parameterized by neural networks $\epsilon_{\theta}$ and $\upsilon_{\theta}$, which can be defined as:
\begin{equation}
\mu_{\theta}(\mathbf{x}_t,t) = \frac{1}{\sqrt{\alpha_t}} \left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}} \epsilon_{\theta} (\mathbf{x}_t,t) \right),
\end{equation}
\begin{equation}
\begin{split}
\Sigma_{\theta}(\mathbf{x}_t,t) = \exp(\upsilon_{\theta}&(\mathbf{x}_t,t)\log\beta_t \\
&+ (1-\upsilon_{\theta}(\mathbf{x}_t,t))\log\tilde{\beta}_t).
\end{split}
\end{equation}
Using a modified variant of the VLB loss as a simple loss function will offer better results in the case of fixed $\Sigma_{\theta}$~\cite{ho2020denoising}:
\begin{equation}
L_{\mathrm{simple}} = \mathbb{E}_{t,\mathbf{x}_0,\epsilon} \left[ \|\epsilon - \epsilon_{\theta}(\mathbf{x}_t,t)\|^2 \right],
\end{equation}
which is a reweighted version resembling denoising score matching over multiple noise scales indexed by $t$~\cite{song2019generative}.
Nichol \etal \cite{nichol2021improved} used an additional $L_{\mathrm{vlb}}$ to the simple loss for guiding a learned $\Sigma_{\theta}(\mathbf{x}_t,t)$, while keeping the $\mu_{\theta}(\mathbf{x}_t,t)$ still the dominant component of the total loss:
\begin{equation}
L_{\mathrm{hybrid}} = L_{\mathrm{simple}} + \lambda L_{\mathrm{vlb}}.
\end{equation}
\subsection{Discrete Representation of Images}
van den Oord \etal \cite{van2017neural} presented a discrete variational autoencoder with a categorical distribution as the latent prior, which is able to map the images into a sequence of discrete latent variables by an encoder and reconstruct the image according to those variables with a decoder.
Formally, given a codebook $\mathbb{Z}\in\mathbb{R}^{K\times d}$, where $K$ represents the capacity of latent variables in the codebook and $d$ is the dimension of each latent variable, after the encoder $E$ compresses the high-dimensional input data $\textbf{x}\in \mathbb{R}^{c\times H\times W}$ into latent vectors $\textbf{h}\in \mathbb{R}^{h\times w\times d}$, $\textbf{z}$ is the quantised version of $\textbf{h}$, obtained by substituting each vector $h_{i,j}\in\textbf{h}$ with its nearest neighbor $z_k \in \mathbb{Z}$. The decoder $D$ is trained to reconstruct the data from the quantised encoding $\textbf{z}$:
\begin{equation}
\textbf{z} = \mathrm{Quantize}(\textbf{h}) := \mathrm{arg\ min}_k \|h_{i,j}-z_k\| ,
\end{equation}
\begin{equation}
\hat{\textbf{x}} = D(\textbf{z}) = D(\mathrm{Quantize}(E(\textbf{x}))).
\end{equation}
As $\mathrm{Quantize}(\cdot)$ contains the non-differentiable $\mathrm{arg\ min}$ operation, the straight-through gradient estimator is used to backpropagate the reconstruction error from the decoder to the encoder. The whole model can be trained in an end-to-end manner by minimizing the following function:
\begin{equation}
L = \|\textbf{x}-\hat{\textbf{x}}\|^2 + \| sg[E(\textbf{x})] - \textbf{z} \| + \beta \| sg[\textbf{z}] - E(\textbf{x}) \| ,
\label{vqeq}
\end{equation}
where $sg[\cdot]$ denotes the stop-gradient operation, and the three terms are, broadly, the reconstruction loss, the codebook loss and the commitment loss, respectively.
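The $\mathrm{Quantize}(\cdot)$ step can be sketched directly from the definitions above: every encoder vector is replaced by its nearest codebook entry under the L2 distance. The tiny two-entry codebook below is purely illustrative.

```python
import numpy as np

# Hedged sketch of the Quantize(.) step: each encoder vector h_ij is replaced
# by its nearest codebook entry z_k under the L2 distance.

def quantize(h, codebook):
    """h: (h, w, d) encoder output; codebook: (K, d). Returns indices and codes."""
    flat = h.reshape(-1, h.shape[-1])                         # (h*w, d)
    d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)                                   # nearest code per vector
    return idx.reshape(h.shape[:-1]), codebook[idx].reshape(h.shape)

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
h = np.array([[[0.1, -0.1], [0.9, 1.2]]])                     # (1, 2, 2) toy encoding
idx, zq = quantize(h, codebook)
print(idx.tolist())  # [[0, 1]]
```

In training, the gradient of the reconstruction loss is copied straight through `zq` back to `h`, which is exactly what the straight-through estimator in the text does.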
VQGAN~\cite{esser2021taming} extends VQ-VAE~\cite{van2017neural} in multiple ways. It substitutes the L1 or L2 loss of the original VQ-VAE with a perceptual loss~\cite{zhang2018unreasonable}, and adds an additional discriminator to distinguish between real and generated patches~\cite{CycleGAN2017}.
The codebook update of the discrete variational autoencoder is intrinsically a dictionary learning process. Its objective uses an L2 loss to narrow the gap between the codes $\mathbb{Z}_t \in \mathbb{R}^{K_t\times d}$ and the encoder output $\textbf{h}\in\mathbb{R}^{ h\times w \times d}$ \cite{van2017neural}. In other words, codebook training is like $k$-means clustering, where the cluster centers are the discrete latent codes. However, since the volume of the codebook space is dimensionless and $\textbf{h}$ is updated at each iteration, the discrete codes $\mathbb{Z}$ typically do not follow the encoder training quickly enough. Only a few codes get updated during training, while most remain unused after initialization.
\section{Methods}
Our goal is to leverage the powerful generative capability of the diffusion model to perform high fidelity image generation tasks with a low number of parameters.
Our proposed method, VQ-DDM, is capable of generating high fidelity images with a relatively small number of parameters and FLOPs, as summarized in Figure~\ref{pipeline}.
Our solution starts by compressing the image into discrete variables via the discrete VAE, and then constructs a powerful model to fit the joint distribution over the discrete codes with a diffusion model. During diffusion training, the darker coloured parts in Figure~\ref{pipeline} represent noise introduced by uniform resampling. By the final timestep, the latent codes have been completely corrupted into noise.
In the sampling phase, the latent codes are first drawn from a uniform categorical distribution and then resampled by performing the reverse process for $T$ steps to obtain the target latent codes. Eventually, the target latent codes are fed into the decoder to generate the image.
\subsection{Discrete Diffusion Model} \label{ddm}
Assume the discretization uses $K$ categories, i.e.\
$z_t \in \{1,\dots,K\}$, with the one-hot vector representation $\textbf{z}_t \in \{0,1\}^K$. The corresponding probability distribution is expressed in logits by $\textbf{z}_t^{\mathrm{logits}}$. We formulate the discrete diffusion process as
\begin{equation}
q(\textbf{z}_t \mid \textbf{z}_{t-1}) = \mathrm{Cat} (\textbf{z}_t ; \textbf{z}_{t-1}^{\mathrm{logits}} \mathbf{Q}_t ),
\end{equation}
where $\mathrm{Cat}(\textbf{x} \mid \textbf{p})$ is the categorical distribution parameterized by $\textbf{p}$, and $\mathbf{Q}_t$ is the transition matrix of the process. In our method, $\mathbf{Q}_t = (1-\beta_t)\textbf{I} + \beta_t / K$, which means $\textbf{z}_t$ keeps the state from the last timestep with probability $1-\beta_t$ and is resampled from a uniform categorical distribution with probability $\beta_t$. Formally, this can be written as
\begin{equation}
q(\textbf{z}_t \mid \textbf{z}_{t-1}) = \mathrm{Cat} (\textbf{z}_t ; (1-\beta_t)\textbf{z}_{t-1}^{\mathrm{logits}} + \beta_t / K).
\label{ddp}
\end{equation}
It is straightforward to obtain $\textbf{z}_t$ from $\textbf{z}_0$ under the schedule $\beta_t$, with $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t=\prod_{s=0}^t \alpha_s$:
\begin{equation}
q(\textbf{z}_t \mid \textbf{z}_{0}) = \mathrm{Cat}(\textbf{z}_t ; \bar{\alpha}_t \textbf{z}_0 + (1-\bar{\alpha}_t)/K)
\label{ddp0}
\end{equation}
\begin{equation}
\text{or} \quad q(\textbf{z}_t \mid \textbf{z}_{0}) = \mathrm{Cat}(\textbf{z}_t ; \textbf{z}_0 \bar{\mathbf{Q}}_t) ; \ \bar{\mathbf{Q}}_t = \prod_{s=0}^t \mathbf{Q}_s.
\end{equation}
We use the same cosine noise schedule as \cite{nichol2021improved,hoogeboom2021argmax}, because our discrete model also operates on latent codes with a small $16\times16$ resolution. In terms of $\bar{\alpha}_t$, it can be expressed as
\begin{equation}
\bar{\alpha}_t = \frac{f(t)}{f(0)}, \quad f(t) = \cos\left(\frac{t/T+s}{1+s} \cdot \frac{\pi}{2}\right)^2 .
\label{noises}
\end{equation}
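As a minimal numerical sketch, the cosine schedule and the closed-form forward sample $q(\textbf{z}_t \mid \textbf{z}_0)$ can be implemented as follows: with probability $\bar{\alpha}_t$ a position keeps $z_0$, otherwise it is resampled uniformly over the $K$ categories. The offset $s=0.008$ follows Nichol \& Dhariwal and is an assumption here, not taken from this paper.

```python
import numpy as np

# Cosine noise schedule f(t) and alpha_bar_t, plus forward sampling of
# z_t ~ Cat(alpha_bar_t * z_0 + (1 - alpha_bar_t)/K). Toy sizes; the
# offset s = 0.008 is assumed from Nichol & Dhariwal.

T, K, s = 4000, 1024, 0.008

def f(t):
    return np.cos((t / T + s) / (1 + s) * np.pi / 2) ** 2

def alpha_bar(t):
    return f(t) / f(0)

rng = np.random.default_rng(0)

def q_sample(z0, t):
    """Keep each position of z0 w.p. alpha_bar(t); else resample uniformly."""
    keep = rng.random(z0.shape) < alpha_bar(t)
    noise = rng.integers(0, K, size=z0.shape)
    return np.where(keep, z0, noise)

z0 = rng.integers(0, K, size=(16, 16))
z_early = q_sample(z0, t=10)   # nearly clean
z_late = q_sample(z0, t=T)     # nearly uniform noise
```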
By applying Bayes' rule, we can compute the posterior $q(\textbf{z}_{t-1} \mid \textbf{z}_{t},\textbf{z}_{0})$ as:
\begin{equation}
\begin{split}
q(\textbf{z}_{t-1} \mid \textbf{z}_{t},\textbf{z}_{0})& = \mathrm{Cat} \left(\textbf{z}_{t-1} ; \frac{\textbf{z}_t^{\mathrm{logits}} \mathbf{Q}_t^{\top} \odot \textbf{z}_0 \bar{\mathbf{Q}}_{t-1} }{\textbf{z}_0 \bar{\mathbf{Q}}_{t} {\textbf{z}_t^{\mathrm{logits}}}^{\top}} \right) \\
& = \mathrm{Cat} (\textbf{z}_{t-1} ; \ \boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) / \sum_{k=1}^K \theta_k (z_{t,k},z_{0,k}) ), \\
\end{split}
\label{qpost}
\end{equation}
\begin{equation}
\begin{split}
\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) = [\alpha_t \textbf{z}_t^{\mathrm{logits}} + & (1-\alpha_t)/ K] \\ &\odot [\bar{\alpha}_{t-1} \textbf{z}_0 + (1-\bar{\alpha}_{t-1}) / K].
\end{split}
\end{equation}
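For a single latent position, the posterior parameter $\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)$ and its normalization can be computed directly; the sketch below uses toy schedule values, not trained quantities.

```python
import numpy as np

# Numerical sketch of theta(z_t, z_0) for one latent position, followed
# by normalization into a valid categorical posterior. The schedule
# values alpha_t and alpha_bar_{t-1} are toy assumptions.

K = 8
alpha_t = 0.98           # 1 - beta_t at the current step
alpha_bar_prev = 0.90    # cumulative product of alpha up to t-1

def one_hot(idx, K):
    v = np.zeros(K)
    v[idx] = 1.0
    return v

z_t = one_hot(3, K)      # observed state at step t (one-hot)
z_0 = one_hot(5, K)      # clean state (one-hot)

# theta = [alpha_t z_t + (1-alpha_t)/K] * [alpha_bar_{t-1} z_0 + (1-alpha_bar_{t-1})/K]
theta = (alpha_t * z_t + (1 - alpha_t) / K) * \
        (alpha_bar_prev * z_0 + (1 - alpha_bar_prev) / K)
posterior = theta / theta.sum()   # normalized theta: a categorical distribution
```

With these values the posterior concentrates on the observed state $z_t=3$, since $\alpha_t$ is close to $1$.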
It is worth noting that $\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0) / \sum_{k=1}^K \theta_k (z_{t,k},z_{0,k})$ is simply the normalized version of $\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)$; we denote it by $\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)]$ below.
Hoogeboom \etal \cite{hoogeboom2021argmax} predict $\hat{\textbf{z}}_0$ from $\textbf{z}_t$ with a neural network $\mu(\textbf{z}_t,t)$, instead of directly predicting $p_{\theta}(\textbf{z}_{t-1} \mid \textbf{z}_{t})$. The reverse process can thus be parameterized by the probability vector from $q(\textbf{z}_{t-1} \mid \textbf{z}_{t},\hat{\textbf{z}}_{0})$. In general, the reverse process $p_{\theta}(\textbf{z}_{t-1} \mid \textbf{z}_{t})$ can be expressed as
\begin{equation}
\begin{split}
p_{\theta}(\textbf{z}_0 \mid \textbf{z}_1) & = \mathrm{Cat} (\textbf{z}_0 \mid \hat{\textbf{z}}_0), \\
p_{\theta}(\textbf{z}_{t-1} \mid \textbf{z}_{t})& = \mathrm{Cat} (\textbf{z}_{t-1} \mid \ \mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\hat{\textbf{z}}_0)]) .
\end{split}
\end{equation}
Inspired by~\cite{jang2016categorical,maddison2016concrete}, we use a neural network $\mu(\mathbf{Z}_t,t)$ to learn and predict a noise $n_t$, and obtain the logits of $\hat{\mathbf{z}}_0$ from
\begin{equation}
\hat{\mathbf{z}}_0 = \mu(\mathbf{Z}_t,t) + \mathbf{Z}_t.
\label{pnois}
\end{equation}
It is worth noting that the neural network $\mu(\cdot)$ operates on $\mathbf{Z}_t \in \mathbb{N}^{h\times w}$, in which all the discrete representations $\mathbf{z}_t$ of the image are combined. The final noise prior $\mathbf{Z}_T$ is uninformative, so it is possible to sample each axis separately during inference. The reverse process, however, is jointly informed and evolves towards a highly coupled $\mathbf{Z}_0$. We do not define a specific joint prior for $\mathbf{z}_t$, but instead encode the joint relationship into the learned reverse process, as is done implicitly in continuous-domain diffusion. Since $\mathbf{z}_{t-1}$ is conditioned on the whole previous representation $\mathbf{z}_t$, the reverse process can sample the entire discrete code map directly while capturing global information.
The loss function used is the VLB from Eq.~\ref{kl}, where the summed KL
divergence for $T>2$ is given by
\begin{equation}
\begin{split}
\mathrm{KL}( q(\textbf{z}_{t-1} \mid \textbf{z}_{t},\textbf{z}_{0}) \,\|\, p_{\theta}(\textbf{z}_{t-1} \mid \textbf{z}_{t})) &= \\
\sum_k \mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)]_k &\times
\log \frac{\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)]_k }{\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\hat{\textbf{z}}_0)]_k }.
\end{split}
\end{equation}
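The per-position KL term is just the KL divergence between two categorical vectors, the true posterior $\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\textbf{z}_0)]$ and the predicted one $\mathrm{N}[\boldsymbol{\theta}(\textbf{z}_t,\hat{\textbf{z}}_0)]$. A small sketch with toy distributions (not model outputs):

```python
import numpy as np

# KL(q || p) for two categorical probability vectors, as used in the
# per-position VLB term. The vectors below are toy stand-ins for the
# normalized true and predicted posteriors.

def kl_categorical(q, p, eps=1e-12):
    """KL divergence between two categorical probability vectors."""
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    return float(np.sum(q * (np.log(q + eps) - np.log(p + eps))))

q_true = np.array([0.7, 0.2, 0.1])    # stand-in for N[theta(z_t, z_0)]
p_pred = np.array([0.6, 0.3, 0.1])    # stand-in for N[theta(z_t, z_hat_0)]

loss = kl_categorical(q_true, p_pred)
```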
\subsection{Rebuild and Fine-tune Strategy}
Our discrete diffusion model is based on the latent representation of the discrete VAE codebook $\mathbb{Z}$. However, codebooks with rich content are normally large, with some even reaching $K=16384$. This makes them highly unwieldy for our discrete diffusion model, since the transition matrices of discrete diffusion models grow quadratically with the number of classes $K$, \eg $O(K^2T)$~\cite{austin2021structured}.
To reduce the number of categories used by our diffusion model, we propose a Rebuild and Fine-tune (ReFiT) strategy that decreases the size $K$ of the codebook $\mathbb{Z}$ and boosts reconstruction performance, starting from a well-trained discrete VAE trained with the straight-through method.
From Eq.~\ref{vqeq}, we can see that the second and third terms are related to the codebook, but only the second term is involved in its update. The term $\|sg[E(\textbf{x})] - \textbf{z}\|$ reveals that only a few selected codes, as many as the features from $E(\textbf{x})$, are engaged in the update per iteration. Most codes are never updated or used after initialization, so the codebook update can lapse into a local optimum.
We introduce the rebuild and fine-tune strategy to avoid wasting codebook capacity. With the trained encoder, we reconstruct the codebook so that every code in the codebook has the opportunity to be selected, which greatly increases codebook usage. Suppose we wish to obtain a discrete VAE with codebook $\mathbb{Z}_t$ from a trained discrete VAE with encoder $E_s$ and decoder $D_s$. We first encode each image $\textbf{x}\in \mathbb{R}^{c\times H\times W}$ to latent features $\textbf{h}$; loosely speaking, each image yields $h\times w$ features of dimension $d$. Next we sample $P$ features uniformly from the entire set of features found in the training images, where the sampling number $P$ is far larger than the desired codebook capacity $K_t$. This ensures that the rebuilt codebook is composed of valid latent codes. Since codebook training is essentially a search for cluster centres, we directly apply $k$-means with AFK-MC$^2$~\cite{bachem2016fast} to the sampled $P$ features and use the centres to rebuild the codebook $\mathbb{Z}_t$. We then replace the original codebook with the rebuilt $\mathbb{Z}_t$ and fine-tune it on top of the well-trained discrete VAE.
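The rebuild step can be sketched as follows. Plain Lloyd's $k$-means stands in for the AFK-MC$^2$ seeding used in the paper, and random vectors replace the real encoder features; both substitutions are assumptions for illustration only.

```python
import numpy as np

# Sketch of the codebook-rebuild step: sample P encoder features,
# cluster them, and use the K_t centres as the new codebook.
# Lloyd's k-means replaces AFK-MC^2 here; features are synthetic.

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means; returns the k cluster centres."""
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest centre
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned features
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

P, d, K_t = 2000, 8, 32              # toy sizes: P >> K_t, as in the text
features = rng.normal(size=(P, d))   # stand-in for sampled encoder features
codebook = kmeans(features, K_t)     # rebuilt codebook Z_t, shape (K_t, d)
```

Every rebuilt code is a centre of actual (here, synthetic) encoder features, which is what guarantees that all entries are reachable after fine-tuning.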
\section{Experiments and Analysis}
\subsection{Datasets and Implementation Details} \label{desc}
We show the effectiveness of the proposed VQ-DDM on the \textit{CelebA-HQ}~\cite{karras2017progressive} and \textit{LSUN-Church}~\cite{yu2015lsun} datasets, and verify the proposed Rebuild and Fine-tune strategy on the \textit{CelebA-HQ} and \textit{ImageNet} datasets. Dataset details are given in the Appendix.
The discrete VAE follows the same training strategy as VQGAN~\cite{esser2021taming}. All training images are processed to $256\times256$, and the compression ratio is set to $16$, which means the latent map $\textbf{z} \in \mathbb{R}^{1\times16\times16}$.
When conducting Rebuild and Fine-tune, the sampling number $P$ is set to $20k$ for \textit{LSUN} and \textit{CelebA}. For the more content-rich \textit{ImageNet}, we tried a larger value of $P=50k$. In practice, we sample $P$ images uniformly with replacement from the whole training set and obtain the corresponding latent features. For each feature map, we then sample uniformly over the $16\times16$ feature map to get the desired features. In the fine-tuning phase, we freeze the encoder and set the learning rate of the decoder to $1\times10^{-6}$ and that of the discriminator to $2\times10^{-6}$, with 8 instances per batch.
For the diffusion model, the network estimating $n_t$ has the same structure as~\cite{ho2020denoising}: a U-Net~\cite{ronneberger2015u} with self-attention~\cite{vaswani2017attention}. Detailed hyperparameter settings are provided in the Appendix. We set the timestep $T=4000$ in our experiments, and the noise schedule is the same as~\cite{nichol2021improved}.
\subsection{Codebook Quality} \label{cbq}
A large codebook dramatically increases the cost of the DDM.
To reduce this cost to an acceptable scale, we proposed the Rebuild and Fine-tune (ReFiT) strategy to compress the size of the codebook while maintaining quality.
To demonstrate the effectiveness of the proposed strategy, we compare the codebook usage and the FID of reconstructed images of our method against VQGAN~\cite{esser2021taming}, VQ-VAE-2~\cite{razavi2019generating} and DALL-E~\cite{ramesh2021zero}.
In this experiment, we compressed the images from $3\times256\times256$ to $1\times16\times16$ with two different codebook capacities $K=\{512,1024\}$.
We also propose an indicator for the usage rate of the codebook: the number of distinct codes that appear in the test or training set, divided by the codebook capacity.
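This usage indicator is straightforward to compute from the encoded discrete indices; the data below are synthetic stand-ins for a real encoded dataset.

```python
import numpy as np

# Codebook-usage indicator: distinct codes appearing in the encoded
# set, divided by codebook capacity. Synthetic indices for illustration.

def codebook_usage(indices, capacity):
    """Fraction of codebook entries actually used by the dataset."""
    return len(np.unique(indices)) / capacity

rng = np.random.default_rng(0)
K = 512
# synthetic encoded dataset of 16x16 latent maps; only codes 0..99 occur
indices = rng.integers(0, 100, size=(1000, 16, 16))
usage = codebook_usage(indices, K)
```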
The quantitative comparison results are shown in Table~\ref{codebook_com}, while the reconstructed images are shown in Figs.~\ref{Reconinr} \& \ref{Reconceleba}.
Reducing the codebook capacity from 1024 to 512 only costs $\sim 0.1$ FID on CelebA and $\sim 1$ FID on ImageNet.
As seen in Figure~\ref{Reconceleba}, the reconstructed images (c, d) after the ReFiT strategy are richer in colour and more realistic in expression than the reconstructions from VQGAN (b).
The codebook usage of our method improves significantly compared to other methods, nearly $3\times$ higher than the second best.
Our method also achieves equivalent reconstruction quality at the same compression rate with a $32\times$ smaller codebook capacity $K$.
For VQGAN with capacity $16384$, although it has only $976$ effective codes, fewer than the $1024$ of our ReFiT method with $P=20k$, it achieves a lower FID between reconstructed and validation images. One possible reason is that this value of $P$ is not large enough to cover some infrequent combinations of features during the rebuild phase. As shown in Table~\ref{codebook_com}, increasing the sampling number $P$ from $20k$ to $100k$ indeed yields higher performance.
\begin{table}
\centering
\resizebox{0.46\textwidth}{!}
{%
\begin{threeparttable}
\begin{tabular}{ccccccc}
\toprule
Model &Latent Size & Capacity & \multicolumn{2}{c}{Usage of $\mathbb{Z}$} & \multicolumn{2}{c}{FID $\downarrow$} \\
& & & CelebA & ImageNet & CelebA & ImageNet \\\midrule
VQ-VAE-2 & Cascade & 512 & $\sim$65\% & &  & $\sim$10 \\
DALL-E & 32$\times$32 & 8192 &  &  &  & 32.01 \\
VQGAN & 16$\times$16 & 16384 &  & 5.96\% &  & 4.98 \\
VQGAN & 16$\times$16 & 1024 & 31.85\% & 33.67\% & 10.18 & 7.94 \\
\textit{\textbf{ours}} ($P=100k$)& 16$\times$16 & 1024 &  & 100\% &  & 4.98 \\
\textit{\textbf{ours}} ($P=20k$)& 16$\times$16 & 1024 & 97.07\% & 100\% & 5.59 & 5.99 \\
\textit{\textbf{ours}} ($P=20k$) & 16$\times$16 & 512 & 93.06\% & 100\% & 5.64 & 6.95 \\ \bottomrule
\end{tabular}%
\begin{tablenotes}
\item[1] All methods are trained straight-through, except DALL-E
with Gumbel-Softmax~\cite{ramesh2021zero}.
\item[2] CelebA-HQ at $256\times256$. Reported FID is between 30$k$ reconstructed images and the training data.
\item[3] Reported FID is between 50$k$ reconstructed images and the validation data.
\end{tablenotes}
\end{threeparttable}
}
\caption{FID between reconstructed and original images on CelebA-HQ and ImageNet}
\label{codebook_com}
\end{table}
\begin{figure*}
\centering
\includegraphics[scale=0.20]{inrs.png}
\caption{Reconstructed images ($384\times384$) from ImageNet by VQGAN and ReFiT}
\label{Reconinr}
\end{figure*}
\begin{figure}[t!]
\centering
\resizebox{0.50\textwidth}{!}{
\begin{subfigure}{0.125\textwidth}
\centering
\includegraphics[scale=0.20]{ori.png}
\caption{Source}
\end{subfigure}
\begin{subfigure}{0.125\textwidth}
\centering
\includegraphics[scale=0.20]{raw.png}
\caption{VQGAN}
\end{subfigure}
\begin{subfigure}{0.125\textwidth}
\centering
\includegraphics[scale=0.20]{re_1024.png}
\caption{ReFiT K=1024}
\end{subfigure}
\begin{subfigure}{0.125\textwidth}
\centering
\includegraphics[scale=0.20]{re.png}
\caption{ReFiT K=512}
\end{subfigure}
}
\caption{Reconstructed images of CelebA-HQ ($256\times256$) from VQGAN and ReFiT.}
\label{Reconceleba}
\end{figure}
\subsection{Generation Quality} \label{genq}
We evaluate the performance of VQ-DDM for unconditional image generation on \textit{CelebA-HQ} $256\times256$. Specifically, we evaluate our approach in terms of FID and compare it with various likelihood-based methods, including GLOW~\cite{kingma2018glow}, NVAE~\cite{vahdat2020nvae}, VAEBM~\cite{xiao2020vaebm}, DC-VAE~\cite{parmar2021dual} and VQGAN~\cite{esser2021taming}, as well as a likelihood-free method, PGGAN~\cite{karras2017progressive}. We also conduct an experiment on \textit{LSUN-Church}.
In the \textit{CelebA-HQ} experiments, the discrete diffusion model was trained with $K=512$ and $K=1024$ codebooks respectively. We also report the FID from $T=2$ to $T=4000$ with the corresponding time consumption in Figure~\ref{cost}. Regarding generation speed, it took about 1000 hours to generate $50k$ $256\times256$ images using DDPM with 1000 steps on an NVIDIA 2080Ti GPU, 100 hours for DDIM with 100 steps~\cite{song2020denoising}, and around 10 hours for our VQ-DDM with 1000 steps.
\begin{figure}
\centering
\includegraphics[scale=.5]{cost.png}
\caption{Steps and corresponding FID during sampling. The text annotations are hours to sample 50k latent feature maps on one NVIDIA 2080Ti GPU.}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.4]{time.png}
\caption{Hours to sample 50k latent codes with VQ-DDM and to generate 50k images with VQ-DDM and DDPM.}
\label{cost}
\end{figure}
\begin{figure*}[t!]
\centering
\subcaptionbox{Samples $(256\times256)$ from a VQ-DDM model trained on CelebA-HQ. FID=$13.2$ \label{celebs}}{
\includegraphics[scale=0.27]{nc1.png}
}
\subcaptionbox{Samples $(256\times256)$ from a VQ-DDM model trained on LSUN-Church. FID=$16.9$ \label{lsuns}}{
\includegraphics[scale=0.27]{lsun.png}
}
\caption{Samples from VQ-DDM models.}
\end{figure*}
Table~\ref{celeba} shows the main results of VQ-DDM along with other established models. Although VQ-DDM is also a likelihood-based method, its training relies on the negative log-likelihood (NLL) of discrete hidden variables, so we do not compare NLL between our method and the others. The training NLL is around $1.258$ and the test NLL is $1.286$, while the FID is $13.2$. Fig.~\ref{celebs} shows generated samples from VQ-DDM trained on \textit{CelebA-HQ}.
For \textit{LSUN-Church}, the codebook capacity $K$ is set to $1024$, with all other parameters kept the same. The training NLL is $1.803$ and the test NLL is $1.756$, while the FID between the generated images and the training set is $16.9$. Some samples are shown in Fig.~\ref{lsuns}.
After applying ReFiT, the generation quality of the model is significantly improved, which implies that a decent codebook has a significant impact on the subsequent generative phase. Within a certain range, a larger codebook capacity leads to better performance; however, an excessive number of codebook entries will cause model collapse~\cite{hoogeboom2021argmax}.
\subsection{Image Inpainting} \label{gbq}
Autoregressive models have recently demonstrated superior performance in the image inpainting tasks~\cite{chen2020generative, esser2021taming}.
However, one limitation of this approach is that if important context lies at the end of the autoregressive sequence, the models cannot complete the images correctly. As mentioned in Sec.~\ref{ddm}, the diffusion model samples the full latent code map directly, with each sampling step conditioned on the \emph{full} discrete map of the previous step. Hence it can significantly improve inpainting, as it does not depend on context ordering.
We perform the mask diffusion and reverse process in the discrete latent space. After encoding the masked image $\textbf{x}_0 \sim q(\textbf{x}_0)$ to discrete representations $\textbf{z}_{0} \sim q(\textbf{z}_{0})$, we diffuse $\textbf{z}_{0}$ for $t$ steps to $\tilde{\textbf{z}}_{t} \sim q(\textbf{z}_{t} \mid \textbf{z}_{0})$. The last step with mask $\tilde{\textbf{z}}_{T}^m$ can then be written as $\tilde{\textbf{z}}_{T}^m = (1-m) \times \mathbb{C} + m \times \tilde{\textbf{z}}_{T}$, where $\mathbb{C}\sim \mathrm{Cat}(K,1/K)$ is a sample from a uniform categorical distribution and $m \in \{0,1\}^{h\times w}$ is the mask: $m=0$ means the context there is masked, and $m=1$ means the information there is given. In the reverse process, $\textbf{z}_{T-1}$ is sampled from $p_{\theta}(\mathbf{z}_{T-1} \mid \tilde{\textbf{z}}_{T}^m)$ at $t=T$; otherwise, $\textbf{z}_{t-1} \sim p_{\theta}(\mathbf{z}_{t-1} \mid \textbf{z}_{t}^m)$, and the masked $\textbf{z}_{t-1}^m = (1-m) \times \textbf{z}_{t-1} + m \times \tilde{\textbf{z}}_{t-1}$.
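One masked reverse step can be sketched as follows: positions with $m=1$ keep the diffused known context, while $m=0$ positions take the model's sample. `model_sample` is a hypothetical stand-in for drawing from $p_{\theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_t)$, and the "known" codes are synthetic.

```python
import numpy as np

# One masked reverse step for inpainting: combine the model's sample
# (m = 0, unknown region) with the diffused known context (m = 1).
# `model_sample` is a placeholder, not the trained reverse model.

K, H, W = 1024, 16, 16
rng = np.random.default_rng(0)

def model_sample(z_t, t):
    """Placeholder for sampling z_{t-1} from the learned reverse model."""
    return rng.integers(0, K, size=z_t.shape)

m = np.zeros((H, W), dtype=int)
m[8:, 8:] = 1                          # bottom-right quarter is given context
z_known_diffused = rng.integers(0, K, size=(H, W))  # stand-in for diffused known codes

z_t = rng.integers(0, K, size=(H, W))
z_prev = model_sample(z_t, t=100)
# keep the diffused known context where m = 1, the model's sample elsewhere
z_prev_masked = (1 - m) * z_prev + m * z_known_diffused
```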
We compare our approach with one that exploits a transformer with a sliding attention window as an autoregressive generative model~\cite{esser2021taming}. The completions are shown in Fig.~\ref{global_if}. In the first row, the upper 62.5\% (160 out of 256 positions in latent space) of the input image is masked and the lower 37.5\% (96 out of 256) is retained; in the second row, only the quarter of the image in the lower right corner is retained as input. We also tried masks at arbitrary positions: in the third row, we masked the perimeter, leaving only the central quarter. Since the reverse diffusion process captures global relationships, the image completions of our model are much better: our method produces consistent completions from arbitrary contexts, whereas the inpainted parts from the transformer lack consistency. It is also worth noting that our model requires no additional training to solve the image inpainting task.
\begin{table}[t!]
\centering
\resizebox{0.46\textwidth}{!}{
\begin{threeparttable}
\begin{tabular}{llll}
\toprule
Method & FID $\downarrow$ & Params & FLOPs \\ \midrule
\multicolumn{2}{l}{\textbf{\textit{Likelihoodbased}}} \\ \midrule
GLOW~\cite{kingma2018glow} & 60.9 & 220 M & 540 G \\
NVAE~\cite{vahdat2020nvae} & 40.3 & 1.26 G & 185 G \\
\textbf{\textit{ours}} ($K=1024$ w/o ReFiT) & 22.6 & 117 M & 1.06 G \\
VAEBM~\cite{xiao2020vaebm} & 20.4 & 127 M & 8.22 G \\
\textbf{\textit{ours}} ($K=512$ w/ ReFiT) & 18.8 & 117 M & \textbf{1.04 G } \\
DCVAE~\cite{parmar2021dual} & 15.8 &  &  \\
\textbf{\textit{ours}} ($K=1024$ w/ ReFiT) & 13.2 & 117 M & 1.06 G \\
DDIM(T=100)~\cite{song2020denoising} &10.9 &114 M &124 G \\
VQGAN + Transformer~\cite{esser2021taming} & 10.2 & 802 M & 102 G\tnote{a} \\ \midrule
\multicolumn{2}{l}{\textbf{\textit{Likelihoodfree}}} \\\midrule
PGGAN~\cite{karras2017progressive} & 8.0 & 46.1 M & 14.1 G \\ \bottomrule
\end{tabular}
\begin{tablenotes}
\item[a] VQGAN is an autoregressive model, and the number in the table is the computation needed to generate the full size latent feature map. The FLOPs needed to generate one discrete index out of 256 is 0.399 G.
\end{tablenotes}
\end{threeparttable}}
\caption{FID on the CelebA-HQ $256\times256$ dataset. All FLOPs in the table consider only the generation stage or inference phase for one $256\times256$ image.}
\label{celeba}
\end{table}
\begin{figure*}[t!]
\centering{
\includegraphics[scale=0.145]{gb.png}
}
\caption{Completions with arbitrary masks.}
\label{global_if}
\end{figure*}
\section{Related Work}
\subsection{Vector Quantised Variational Autoencoders}
VQ-VAE~\cite{van2017neural} led a trend of discrete representation of images. The common practice is to model the discrete representations with an autoregressive model, e.g.\ PixelCNN~\cite{van2016pixel,chen2018pixelsnail} or transformers~\cite{esser2021taming,ramesh2021zero}.
Some works have attempted to fit the prior distribution of discrete latent variables with lightweight non-autoregressive approaches, such as an EM approach~\cite{roy2018theory} and a Markov chain with a self-organizing map~\cite{fortuin2018som}, but they struggle to fit large-scale data.
Ho \etal \cite{ho2020denoising} have also shown that diffusion models can be regarded as autoregressive along the time dimension, while remaining non-autoregressive along the pixel dimension.
A concurrent work~\cite{esser2021imagebart} follows a similar pipeline that applies a diffusion model to discrete latent variables, but achieves denoising through parallel modeling of multiple short Markov chains.
\subsection{Diffusion Models}
Sohl-Dickstein \etal \cite{sohl2015deep} presented a simple discrete diffusion model that diffuses the target distribution into an independent binomial distribution. Recently, Hoogeboom \etal \cite{hoogeboom2021argmax} extended the discrete model from binomial to multinomial, and Austin \etal \cite{austin2021structured} proposed a generalized discrete diffusion structure that provides several choices for the diffusion transition process.
In the continuous state space, some recent diffusion models have surpassed the state of the art in image generation. With guidance from classifiers, Dhariwal \etal \cite{dhariwal2021diffusion} enabled a diffusion model called ADM to generate images beyond BigGAN, previously one of the most powerful generative models. In CDM~\cite{ho2021cascaded}, the authors applied a cascade pipeline to the diffusion model to generate images with ultra-high fidelity and reach the state of the art in conditional ImageNet generation.
In addition, several recent works have attempted to use diffusion models to model the latent variables of a VAE~\cite{kingma2021variational,wehenkel2021diffusion}, revealing connections among several of the diffusion models mentioned above.
\section{Conclusion}
In this paper, we introduced VQ-DDM, a high-fidelity image generation model with a two-stage pipeline. In the first stage, we train a discrete VAE with a well-utilized, content-rich codebook. With the help of such an efficient codebook, a discrete diffusion model with relatively few parameters can generate high-quality images in the second stage. Benefiting from the discrete diffusion model, the sampling process captures global information, and image inpainting is no longer affected by the location of the given context and mask. Meanwhile, in comparison with other diffusion models, our approach further narrows the generation-speed gap with respect to GANs.
We believe VQ-DDM can also be applied to audio, video and multimodal generation.
\subsection*{Limitations}
A complete diffusion requires a large number of steps, which results in a fluctuating training process and limits image generation quality. Hence, our model may underperform on large-scale and complex datasets.
{\small
\bibliographystyle{ieee_fullname}
